WorldWideScience

Sample records for optimal sequential selection

  1. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-parameter martingales.
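
    For a concrete feel of the optimal-stopping theory the book covers, the classical secretary problem is the canonical optimal sequential selection example: skip roughly n/e candidates, then accept the first one better than all seen so far. A minimal Python simulation of this rule (illustrative only, not from the book):

        import random

        def secretary_trial(n, cutoff):
            """One trial: skip the first `cutoff` candidates, then accept the first
            candidate better than all of them. Returns True if the best is chosen."""
            ranks = list(range(n))          # rank 0 is the best candidate
            random.shuffle(ranks)
            best_seen = min(ranks[:cutoff]) if cutoff else n
            for r in ranks[cutoff:]:
                if r < best_seen:
                    return r == 0           # accepted the first record-breaker
            return ranks[-1] == 0           # forced to accept the last candidate

        def success_rate(n, cutoff, trials=20000):
            return sum(secretary_trial(n, cutoff) for _ in range(trials)) / trials

        # The optimal cutoff is about n/e, giving success probability near 1/e = 0.368.
        print(success_rate(100, 37))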

  2. Simultaneous optimization of sequential IMRT plans

    International Nuclear Information System (INIS)

    Popple, Richard A.; Prellop, Perri B.; Spencer, Sharon A.; Santos, Jennifer F. de los; Duan, Jun; Fiveash, John B.; Brezovich, Ivan A.

    2005-01-01

    Radiotherapy often comprises two phases, in which irradiation of a volume at risk for microscopic disease is followed by a sequential dose escalation to a smaller volume either at a higher risk for microscopic disease or containing only gross disease. This technique is difficult to implement with intensity modulated radiotherapy, as the tolerance doses of critical structures must be respected over the sum of the two plans. Techniques that include an integrated boost have been proposed to address this problem. However, clinical experience with such techniques is limited, and many clinicians are uncomfortable prescribing nonconventional fractionation schemes. To solve this problem, we developed an optimization technique that simultaneously generates sequential initial and boost IMRT plans. We have developed an optimization tool that uses a commercial treatment planning system (TPS) and a high-level programming language for technical computing. The tool uses the TPS to calculate the dose deposition coefficients (DDCs) for optimization. The DDCs were imported into external software and the treatment ports duplicated to create the boost plan. The initial, boost, and tolerance doses were specified and used to construct cost functions. The initial and boost plans were optimized simultaneously using a gradient search technique. Following optimization, the fluence maps were exported to the TPS for dose calculation. Seven patients treated using sequential techniques were selected from our clinical database. The initial and boost plans used to treat these patients were developed independently of each other by dividing the tolerance doses proportionally between the initial and boost plans and then iteratively optimizing the plans until a summation that met the treatment goals was obtained. We used the simultaneous optimization technique to generate plans that met the original planning goals. The coverage of the initial and boost target volumes in the simultaneously optimized…

  3. A Bayesian Optimal Design for Sequential Accelerated Degradation Testing

    Directory of Open Access Journals (Sweden)

    Xiaoyang Li

    2017-07-01

    When optimizing an accelerated degradation testing (ADT) plan, the initial values of unknown model parameters must be pre-specified. However, it is usually difficult to obtain the exact values, since many uncertainties are embedded in these parameters. Bayesian ADT optimal design has been presented to address this problem by using prior distributions to capture these uncertainties. Nevertheless, when the difference between a prior distribution and the actual situation is large, the existing Bayesian optimal design might cause over-testing or under-testing issues: an ADT implemented according to the optimal plan may consume too many testing resources, or too few accelerated degradation data may be obtained during the test. To overcome these obstacles, a Bayesian sequential step-down-stress ADT design is proposed in this article. During the sequential ADT, the test under the highest stress level is conducted first, based on the initial prior information, to quickly generate degradation data. Then, the data collected under higher stress levels are employed to construct the prior distributions for the test design under lower stress levels by using Bayesian inference. In the process of optimization, the inverse Gaussian (IG) process is assumed to describe the degradation paths, and Bayesian D-optimality is selected as the optimization objective. A case study on an electrical connector's ADT plan is provided to illustrate the application of the proposed Bayesian sequential ADT design method. Compared with the results from a typical static Bayesian ADT plan, the proposed design could guarantee more stable and precise estimations of different reliability measures.

  4. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  5. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Li, Weixuan [Pacific Northwest National Laboratory, Richland, Washington, USA]; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside, California, USA]

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling the EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies provide more accurate parameter estimation and state prediction than conventional sampling strategies, and that optimal designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated: overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can equally be applied to other hydrological problems.
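
    For readers unfamiliar with the EnKF analysis step this design method builds on, the following is a minimal stochastic-EnKF sketch for parameter estimation; the toy forward model and all parameter values are assumptions for illustration, not taken from the paper:

        import numpy as np

        def enkf_update(ensemble, obs, obs_op, obs_err_std, rng):
            """One stochastic-EnKF analysis step.
            ensemble: (n_ens, n_param) prior samples; obs_op maps parameters to observations."""
            n_ens = ensemble.shape[0]
            Hx = np.array([obs_op(m) for m in ensemble])            # predicted observations
            perturbed = obs + obs_err_std * rng.standard_normal(Hx.shape)
            A = ensemble - ensemble.mean(axis=0)                    # parameter anomalies
            HA = Hx - Hx.mean(axis=0)                               # observation anomalies
            cov_xy = A.T @ HA / (n_ens - 1)
            cov_yy = HA.T @ HA / (n_ens - 1) + obs_err_std**2 * np.eye(Hx.shape[1])
            K = cov_xy @ np.linalg.inv(cov_yy)                      # Kalman gain
            return ensemble + (perturbed - Hx) @ K.T

        rng = np.random.default_rng(0)
        prior = rng.normal(1.0, 0.5, size=(200, 2))                 # ensemble of 2 parameters
        obs_op = lambda m: np.array([m[0] + m[1], m[0] - m[1]])     # toy forward model
        y = obs_op(np.array([1.2, 0.4])) + 0.1 * rng.standard_normal(2)
        posterior = enkf_update(prior, y, obs_op, 0.1, rng)
        print(posterior.mean(axis=0))                               # moves toward [1.2, 0.4]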

  6. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

    In machine learning, multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that trade off more than one predefined objective simultaneously. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, based on a sequential probability ratio test with an indifference zone. SPRINT-Race addresses the MOMS problem with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal model or mistakenly returning any clearly dominated model is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined on an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.
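
    The building block behind SPRINT-Race is Wald's sequential probability ratio test (SPRT); the algorithm itself wraps a ternary-decision, dual SPRT with an indifference zone and a Holm correction. For orientation, a plain single SPRT for Bernoulli outcomes (a sketch, not the authors' code):

        import math, random

        def sprt_bernoulli(samples, p0, p1, alpha=0.05, beta=0.05):
            """Wald's SPRT for H0: p = p0 vs H1: p = p1 (p1 > p0).
            Returns 'H0', 'H1', or 'undecided' if the sample stream ends first."""
            lower = math.log(beta / (1 - alpha))        # accept-H0 boundary
            upper = math.log((1 - beta) / alpha)        # accept-H1 boundary
            llr = 0.0
            for x in samples:
                llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
                if llr <= lower:
                    return 'H0'
                if llr >= upper:
                    return 'H1'
            return 'undecided'

        random.seed(1)
        stream = (random.random() < 0.7 for _ in range(10000))
        print(sprt_bernoulli(stream, p0=0.5, p1=0.7))   # usually decides after few samples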

  7. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the…

  8. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python…

  9. OPTIMIZATION OF AGGREGATION AND SEQUENTIAL-PARALLEL EXECUTION MODES OF INTERSECTING OPERATION SETS

    Directory of Open Access Journals (Sweden)

    G. M. Levin

    2016-01-01

    A mathematical model and a method for the problem of optimization of aggregation and of sequential-parallel execution modes of intersecting operation sets are proposed. The method is based on a two-level decomposition scheme: at the top level the variant of aggregation for groups of operations is selected, and at the lower level the execution modes of the operations are optimized for a fixed variant of aggregation.

  10. Optimal Energy Management of Multi-Microgrids with Sequentially Coordinated Operations

    Directory of Open Access Journals (Sweden)

    Nah-Oak Song

    2015-08-01

    We propose an optimal electric energy management scheme for a cooperative multi-microgrid community with sequentially coordinated operations. The sequentially coordinated operations are suggested to distribute the computational burden and yet make optimal 24-hour energy management of the multi-microgrids possible. The sequential operations are mathematically modeled to find the optimal operating conditions and illustrated with a physical interpretation of how optimal energy management is achieved in the cooperative multi-microgrid community. This global electric energy optimization of the cooperative community is realized by ancillary internal trading between the microgrids, which reduces the extra cost of unnecessary external trading by adjusting the electric energy production of the combined heat and power (CHP) generators and the amounts of both internal and external electric energy trading of the cooperative community. A simulation study is also conducted to validate the proposed mathematical energy management models.

  11. Sequential Change-Point Detection via Online Convex Optimization

    Directory of Open Access Journals (Sweden)

    Yang Cao

    2018-02-01

    Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal: the upper bound on the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically, up to a log-log factor, as the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound of the online mirror descent algorithm. Numerical and real-data examples validate our theory.
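
    The flavor of such likelihood-ratio schemes can be seen in a simplified windowed detector for a mean shift in Gaussian data. Note that the paper replaces the in-window maximum-likelihood estimate with a non-anticipating estimate maintained by online mirror descent; this sketch uses the plain within-window estimate instead:

        import numpy as np

        def detect_change(x, window=50, threshold=10.0):
            """Windowed likelihood-ratio detector for a mean shift in unit-variance
            Gaussian data with pre-change mean 0. Returns the alarm time or None."""
            for t in range(2, len(x) + 1):
                best = 0.0
                for k in range(max(0, t - window), t - 1):   # candidate change points
                    seg = x[k:t]
                    mu_hat = seg.mean()                      # estimated post-change mean
                    # log-likelihood ratio of "mean mu_hat after k" vs "no change"
                    best = max(best, mu_hat * seg.sum() - len(seg) * mu_hat**2 / 2.0)
                if best >= threshold:
                    return t
            return None

        rng = np.random.default_rng(0)
        data = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 100)])
        print(detect_change(data))   # alarms shortly after the change at t = 200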

  12. Sequential optimization of matrix chain multiplication relative to different cost functions

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2011-01-01

    In this paper, we present a methodology to optimize matrix chain multiplication sequentially relative to different cost functions, such as the total number of scalar multiplications, communication overhead in a multiprocessor environment, etc. For n matrices, our optimization procedure requires O(n³) arithmetic operations per cost function. This work is done in the framework of a dynamic programming extension that allows sequential optimization relative to different criteria. © 2011 Springer-Verlag Berlin Heidelberg.
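
    The single-criterion core of this setting is the textbook O(n³) dynamic program for minimizing scalar multiplications; the paper's extension re-optimizes over the set of optimal solutions for one cost function after another. A Python sketch of the basic recursion (illustrative):

        def matrix_chain_order(dims):
            """Minimum number of scalar multiplications to evaluate a matrix chain.
            Matrix i has dimensions dims[i] x dims[i+1]; len(dims) = n + 1."""
            n = len(dims) - 1
            cost = [[0] * n for _ in range(n)]
            split = [[0] * n for _ in range(n)]
            for length in range(2, n + 1):              # chain length
                for i in range(n - length + 1):
                    j = i + length - 1
                    cost[i][j] = float('inf')
                    for k in range(i, j):               # split position
                        q = cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                        if q < cost[i][j]:
                            cost[i][j], split[i][j] = q, k
            return cost[0][n - 1], split

        # Chain of four matrices: 10x30, 30x5, 5x60, 60x8 -> 4300 multiplications
        print(matrix_chain_order([10, 30, 5, 60, 8])[0])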

  13. Optimal Sequential Resource Sharing and Exchange in Multi-Agent Systems

    OpenAIRE

    Xiao, Yuanzhang

    2014-01-01

    Central to the design of many engineering systems and social networks is to solve the underlying resource sharing and exchange problems, in which multiple decentralized agents make sequential decisions over time to optimize some long-term performance metrics. It is challenging for the decentralized agents to make optimal sequential decisions because of the complicated coupling among the agents and across time. In this dissertation, we mainly focus on three important classes of multi-agent seq...

  14. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging, in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. fMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
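
    The normatively optimal policy in such a finite-horizon task comes from backward induction over future states. A toy Python analogue of a five-trial avoid-starvation problem (the payoff structure and threshold are assumptions for illustration, not the actual task parameters):

        from functools import lru_cache
        from statistics import mean

        GAINS = {'safe': (1,), 'risky': (0, 3)}   # equally likely outcomes per option (assumed)
        HORIZON, THRESHOLD = 5, 6                 # five trials; survive if final energy >= 6

        @lru_cache(maxsize=None)
        def survival_prob(t, energy):
            """Optimal probability of ending at or above THRESHOLD from state (t, energy)."""
            if t == HORIZON:
                return 1.0 if energy >= THRESHOLD else 0.0
            return max(mean(survival_prob(t + 1, energy + g) for g in outcomes)
                       for outcomes in GAINS.values())

        # The optimal policy is the argmax at each state; heuristics approximate this value.
        print(survival_prob(0, 0))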

  15. Selectivity assessment of an arsenic sequential extraction procedure for evaluating mobility in mine wastes

    International Nuclear Information System (INIS)

    Drahota, Petr; Grösslová, Zuzana; Kindlová, Helena

    2014-01-01

    Highlights: • Extraction efficiency and selectivity of phosphate and oxalate were tested. • Pure As-bearing mineral phases and mine wastes were used. • The reagents were found to be specific and selective for most major forms of As. • An optimized sequential extraction scheme for mine wastes has been developed. • It has been tested on model mineral mixtures and natural mine waste materials. - Abstract: An optimized sequential extraction (SE) scheme for mine waste materials has been developed and tested for As partitioning over a range of pure As-bearing mineral phases, their model mixtures, and natural mine waste materials. The optimized SE procedure employs five extraction steps: (1) nitrogen-purged deionized water, 10 h; (2) 0.01 M NH4H2PO4, 16 h; (3) 0.2 M NH4-oxalate in the dark, pH 3, 2 h; (4) 0.2 M NH4-oxalate, pH 3, 80 °C, 4 h; (5) KClO3/HCl/HNO3 digestion. Selectivity and specificity tests on natural mine wastes and major pure As-bearing mineral phases showed that these As fractions are primarily associated with: (1) readily soluble As; (2) adsorbed As; (3) amorphous and poorly crystalline arsenates, oxides and hydroxosulfates of Fe; (4) well-crystalline arsenates, oxides and hydroxosulfates of Fe; and (5) sulfides and arsenides. The specificity and selectivity of the extractants, and the reproducibility of the optimized SE procedure, were further verified on artificial model mineral mixtures and different natural mine waste materials. Partitioning data for extraction steps 3, 4, and 5 showed good agreement with those calculated for the model mineral mixtures (<15% difference), as well as with those expected in the natural mine waste materials. The sum of the As recovered in the different extractant pools was not significantly different (89–112%) from the result of acid digestion. This suggests that the optimized SE scheme can reliably be employed for As partitioning in mine waste materials.

  16. On the equivalence of optimality criterion and sequential approximate optimization methods in the classical layout problem

    NARCIS (Netherlands)

    Groenwold, A.A.; Etman, L.F.P.

    2008-01-01

    We study the classical topology optimization problem, in which minimum compliance is sought, subject to linear constraints. Using a dual statement, we propose two separable and strictly convex subproblems for use in sequential approximate optimization (SAO) algorithms. Respectively, the subproblems…

  17. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part or in material properties, or due to a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case assumptions lead to vastly overconservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions in the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment, which are decoupled within each cycle. This leads to quick improvement of the design from one cycle to the next and increased computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations.

  18. Multi-Stage Recognition of Speech Emotion Using Sequential Forward Feature Selection

    Directory of Open Access Journals (Sweden)

    Liogienė Tatjana

    2016-07-01

    Intensive research on speech emotion recognition has introduced a huge collection of speech emotion features. Large feature sets complicate the speech emotion recognition task. Among various feature selection and transformation techniques for one-stage classification, multiple classifier systems have been proposed. The main idea of multiple classifiers is to arrange the emotion classification process in stages. Besides parallel and serial cases, the hierarchical arrangement of multi-stage classification is most widely used for speech emotion recognition. In this paper, we present a sequential-forward-feature-selection-based multi-stage classification scheme. The Sequential Forward Selection (SFS) and Sequential Floating Forward Selection (SFFS) techniques were employed for every stage of the multi-stage classification scheme. Experimental testing of the proposed scheme was performed using the German and Lithuanian emotional speech datasets. Sequential-feature-selection-based multi-stage classification outperformed the single-stage scheme by 12–42% for different emotion sets. The multi-stage scheme also showed higher robustness to growth of the emotion set: the decrease in recognition rate as the emotion set grew was 10–20% lower than in the single-stage case. Differences between SFS and SFFS for feature selection were negligible.
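
    A bare-bones SFS loop of the kind employed at each classification stage can be written with scikit-learn's cross-validated scoring; the dataset and classifier below are stand-ins, not the emotional speech corpora or classifiers used in the paper:

        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def sequential_forward_selection(X, y, estimator, n_features):
            """Greedy SFS: at each stage add the feature that most improves
            cross-validated accuracy (no floating/backtracking step, unlike SFFS)."""
            selected, remaining = [], list(range(X.shape[1]))
            while len(selected) < n_features:
                scores = [(cross_val_score(estimator, X[:, selected + [f]], y, cv=5).mean(), f)
                          for f in remaining]
                best_score, best_f = max(scores)
                selected.append(best_f)
                remaining.remove(best_f)
                print(f"added feature {best_f}, CV accuracy {best_score:.3f}")
            return selected

        X, y = load_breast_cancer(return_X_y=True)
        clf = LogisticRegression(max_iter=5000)
        print(sequential_forward_selection(X, y, clf, n_features=5))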

  19. Development of New Lipid-Based Paclitaxel Nanoparticles Using Sequential Simplex Optimization

    Science.gov (United States)

    Dong, Xiaowei; Mattingly, Cynthia A.; Tseng, Michael; Cho, Moo; Adams, Val R.; Mumper, Russell J.

    2008-01-01

    The objective of these studies was to develop Cremophor-free lipid-based paclitaxel (PX) nanoparticle formulations prepared from warm microemulsion precursors. To identify and optimize new nanoparticles, experimental design was performed combining a Taguchi array and sequential simplex optimization. The combination of Taguchi array and sequential simplex optimization efficiently directed the design of the paclitaxel nanoparticles. Two optimized paclitaxel nanoparticles (NPs) were obtained: G78 NPs composed of glyceryl tridodecanoate (GT) and polyoxyethylene 20-stearyl ether (Brij 78), and BTM NPs composed of Miglyol 812, Brij 78 and D-alpha-tocopheryl polyethylene glycol 1000 succinate (TPGS). Both nanoparticles successfully entrapped paclitaxel at a final concentration of 150 μg/ml (over 6% drug loading) with particle sizes less than 200 nm and over 85% entrapment efficiency. These novel paclitaxel nanoparticles were physically stable at 4 °C over three months and in PBS at 37 °C over 102 hours. Release of paclitaxel was slow and sustained, without an initial burst release. Cytotoxicity studies in MDA-MB-231 cancer cells showed that both nanoparticles have anticancer activities similar to Taxol®. Interestingly, PX BTM nanocapsules could be lyophilized without cryoprotectants. The lyophilized powder, comprising only PX BTM NPs, could be rapidly rehydrated in water with complete retention of the original physicochemical properties, in-vitro release properties, and cytotoxicity profile. Sequential simplex optimization has thus been utilized to identify promising new lipid-based paclitaxel nanoparticles having useful attributes. PMID:19111929

  20. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Directory of Open Access Journals (Sweden)

    Shigang Zhang

    2015-10-01

    Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal tests step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is therefore an important step in the construction of a diagnosis system, and it has been studied extensively in the literature. However, previous algorithms either are designed for single-mode systems or do not consider test placement cost, so they cannot solve the sequential diagnostic strategy generation problem with test placement cost for multimode systems. This problem is studied in this paper. A formulation is presented, and two algorithms are proposed, one realized by system transformation and the other newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms, and a real-world system is also presented. The results show that both algorithms can solve the diagnostic strategy generation problem, each with different characteristics.

  2. Fast regularizing sequential subspace optimization in Banach spaces

    International Nuclear Information System (INIS)

    Schöpfer, F; Schuster, T

    2009-01-01

    We are concerned with fast computation of regularized solutions of linear operator equations in Banach spaces when only noisy data are available. To this end, we modify recently developed sequential subspace optimization methods in such a way that the Bregman projections onto hyperplanes employed therein are replaced by Bregman projections onto stripes whose width is on the order of the noise level.

  3. An accurate approximate solution of optimal sequential age replacement policy for a finite-time horizon

    International Nuclear Information System (INIS)

    Jiang, R.

    2009-01-01

    It is difficult to find the optimal solution of the sequential age replacement policy for a finite-time horizon. This paper presents an accurate approximation for finding a near-optimal solution of the sequential replacement policy. The proposed approximation is computationally simple and suitable for any failure distribution. Its accuracy is illustrated by two examples. Based on the approximate solution, an approximate estimate for the total cost is derived.
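
    The quantity being approximated builds on the classical age-replacement trade-off; in the infinite-horizon analogue, the optimal replacement age T minimizes the long-run cost rate c(T) = (c_p R(T) + c_f (1 - R(T))) / ∫₀ᵀ R(t) dt, where R is the survival function. A numerical Python sketch with assumed Weibull lifetimes and illustrative costs:

        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import minimize_scalar

        # Illustrative parameters (assumptions, not from the paper):
        beta, eta = 2.5, 1000.0     # Weibull shape/scale (hours)
        c_p, c_f = 1.0, 10.0        # preventive vs. failure replacement cost

        R = lambda t: np.exp(-(t / eta) ** beta)   # survival function

        def cost_rate(T):
            """Long-run cost per unit time of replacing at age T or at failure."""
            expected_cycle_length, _ = quad(R, 0.0, T)
            return (c_p * R(T) + c_f * (1.0 - R(T))) / expected_cycle_length

        res = minimize_scalar(cost_rate, bounds=(1.0, 5000.0), method='bounded')
        print(f"optimal replacement age ~ {res.x:.0f} h, cost rate {res.fun:.5f}")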

  4. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Abubeker, Jewahir Ali

    2011-05-14

    This paper is devoted to the consideration of an algorithm for sequential optimization of paths in directed graphs relative to different cost functions. The considered algorithm is based on an extension of dynamic programming which makes it possible to represent the initial set of paths, and the set of optimal paths after each application of the optimization procedure, in the form of a directed acyclic graph.

  5. A Sequential Convex Semidefinite Programming Algorithm for Multiple-Load Free Material Optimization

    Czech Academy of Sciences Publication Activity Database

    Stingl, M.; Kočvara, Michal; Leugering, G.

    2009-01-01

    Vol. 20, No. 1 (2009), pp. 130–155. ISSN 1052-6234. R&D Projects: GA AV ČR IAA1075402. Grant - others: commission EU(XE) EU-FP6-30717. Institutional research plan: CEZ:AV0Z10750506. Keywords: structural optimization * material optimization * semidefinite programming * sequential convex programming. Subject RIV: BA - General Mathematics. Impact factor: 1.429, year: 2009

  7. Building a Lego wall: Sequential action selection.

    Science.gov (United States)

    Arnold, Amy; Wing, Alan M; Rotshtein, Pia

    2017-05-01

    The present study draws together two distinct lines of enquiry into the selection and control of sequential action: motor sequence production and action selection in everyday tasks. Participants were asked to build 2 different Lego walls. The walls were designed to have hierarchical structures with shared and dissociated colors and spatial components. Participants built 1 wall at a time, under low and high load cognitive states. Selection times for correctly completed trials were measured using 3-dimensional motion tracking. The paradigm enabled precise measurement of the timing of actions, while using real objects to create an end product. The experiment demonstrated that action selection was slowed at decision boundary points, relative to boundaries where no between-wall decision was required. Decision points also affected selection time prior to the actual selection window. Dual-task conditions increased selection errors. Errors mostly occurred at boundaries between chunks and especially when these required decisions. The data support hierarchical control of sequenced behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme.

    Science.gov (United States)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes for medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme for mammographic masses. An image dataset of 1600 regions of interest (ROIs), 800 positive (depicting malignant masses) and 800 negative (depicting CAD-generated false-positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories, including shape, texture, contrast, isodensity, spiculation, local topological features, and features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. To select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) phased searching with NEAT in a time-scaled framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performance of the four approaches was assessed using tenfold cross-validation. Among the four methods, SFFS had the highest efficacy: it required only 3%–5% of the computational time of the GA approach and yielded the highest performance level, with an area under the receiver operating characteristic curve (AUC) of 0.864 ± 0.034. The results also demonstrated that, except with GA, including the new texture features computed from the dilated mass segments improved the AUC results of the ANNs optimized…

  9. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function; in this way, increasingly accurate metamodels are obtained. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206
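
    In outline, such a scheme alternates fitting an RBF metamodel with sampling at the metamodel's predicted optimum (the paper also adds minimum points of a density function for space-filling, omitted here). A toy SciPy sketch with a stand-in objective:

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.optimize import differential_evolution

        def expensive(x):                 # stand-in for a costly simulation
            return (x - 0.3) ** 2 + 0.1 * np.sin(15 * x)

        X = np.linspace(0.0, 1.0, 5)[:, None]        # initial design
        y = expensive(X).ravel()
        for it in range(10):
            model = RBFInterpolator(X, y)            # fit RBF metamodel to all samples
            res = differential_evolution(lambda z: model(np.atleast_2d(z))[0],
                                         bounds=[(0.0, 1.0)], seed=it)
            x_new = float(res.x[0])
            if np.min(np.abs(X.ravel() - x_new)) < 1e-6:
                break                                # minimizer already sampled: stop
            X = np.vstack([X, [[x_new]]])            # add the metamodel's minimizer
            y = np.append(y, expensive(x_new))
        print(f"best sample: x = {X[np.argmin(y), 0]:.4f}, f(x) = {y.min():.6f}")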

  10. Constrained treatment planning using sequential beam selection

    International Nuclear Information System (INIS)

    Woudstra, E.; Storchi, P.R.M.

    2000-01-01

    In this paper an algorithm is described for automated treatment plan generation. The algorithm aims at delivery of the prescribed dose to the target volume without violation of constraints for target, organs at risk and the surrounding normal tissue. Pre-calculated dose distributions for all candidate orientations are used as input. Treatment beams are selected in a sequential way. A score function designed for beam selection is used for the simultaneous selection of beam orientations and weights. In order to determine the optimum choice for the orientation and the corresponding weight of each new beam, the score function is first redefined to account for the dose distribution of the previously selected beams. Addition of more beams to the plan is stopped when the target dose is reached or when no additional dose can be delivered without violating a constraint. In the latter case the score function is modified by importance factor changes to enforce better sparing of the organ with the limiting constraint and the algorithm is run again. (author)

  11. A sequential fuzzy diagnosis method for rotating machinery using ant colony optimization and possibility theory

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Hao; Ping, Xueliang; Cao, Yi; Lie, Ke [Jiangnan University, Wuxi (China); Chen, Peng [Mie University, Mie (Japan); Wang, Huaqing [Beijing University, Beijing (China)

    2014-04-15

    This study proposes a novel intelligent fault diagnosis method for rotating machinery using ant colony optimization (ACO) and possibility theory. Non-dimensional symptom parameters (NSPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A sensitive evaluation method for selecting good symptom parameters using principal component analysis (PCA) is proposed for detecting and distinguishing faults in rotating machinery. Using the ACO clustering algorithm, the synthesizing symptom parameters (SSPs) for condition diagnosis are obtained. A fuzzy diagnosis method using sequential inference and possibility theory is also proposed, by which the conditions of the machinery can be identified sequentially. Lastly, the proposed method is compared with a conventional neural network (NN) method. Practical examples of diagnosis for V-belt driving equipment used in a centrifugal fan are provided to verify the effectiveness of the proposed method. The results verify that the faults that often occur in V-belt driving equipment, such as a pulley defect state, a belt defect state and a belt looseness state, are effectively identified by the proposed method, while they are difficult to detect using the conventional NN method.

  12. Selective Sequential Zero-Base Budgeting Procedures Based on Total Factor Productivity Indicators

    OpenAIRE

    A. Ishikawa; E. F. Sudit

    1981-01-01

    The authors' purpose in this paper is to develop productivity-based sequential budgeting procedures designed to expedite identification of major problem areas in budgetary performance, as well as to reduce the costs associated with comprehensive zero-base analyses. The concept of total factor productivity is reviewed and its relations to ordinary and zero-based budgeting are discussed in detail. An outline for a selective sequential analysis based on monitoring of three key indicators of (a) i…

  13. Optimization, formulation, and characterization of multiflavonoids-loaded flavanosome by bulk or sequential technique.

    Science.gov (United States)

    Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida

    2016-01-01

    This study involves adaptation of a bulk or sequential technique to load multiple flavonoids in a single phytosome, which can be termed a "flavonosome". Three widely established and therapeutically valuable flavonoids, quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract, and were commercially obtained and incorporated in a single flavonosome (QKA-phosphatidylcholine) through four different methods of synthesis - bulk (M1) and serialized (M2) co-sonication, and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method based on screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug-carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity evaluation against a human hepatoma cell line (HepaRG). Furthermore, the entrapment and loading efficiency of flavonoids in the optimal flavonosome were determined. Among the four synthesis methods, the sequential loading technique was identified as the best method for the synthesis of QKA-phosphatidylcholine flavonosomes, which revealed an average diameter of 375.93±33.61 nm with a zeta potential of -39.07±3.55 mV; the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depict the release kinetics of the flavonoids from the carrier. The QKA-loaded flavonosome showed no indication of toxicity toward the human hepatoma cell line in the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay: even at the higher concentration of 200 µg/mL, the flavonosomes retained >85% cell viability. These results suggest that the sequential loading technique may be a promising…

  14. Parallel algorithms for islanded microgrid with photovoltaic and energy storage systems planning optimization problem: Material selection and quantity demand optimization

    Science.gov (United States)

    Cao, Yang; Liu, Chun; Huang, Yuehui; Wang, Tieqiang; Sun, Chenjun; Yuan, Yue; Zhang, Xinsong; Wu, Shuyun

    2017-02-01

    With the development of rooftop photovoltaic (PV) power generation technology and the increasingly urgent need to improve supply reliability in remote areas, islanded microgrids with photovoltaic and energy storage systems (IMPEs) are developing rapidly. The high costs of photovoltaic panel material and energy storage battery material have become the primary factors hindering the development of IMPEs. The advantages and disadvantages of different types of photovoltaic panel materials and energy storage battery materials are analyzed in this paper, and guidance is provided on material selection for IMPE planners. A time-sequential simulation method is applied to optimize the material demands of the IMPE. The model is solved by parallel algorithms provided by the commercial solver CPLEX. Finally, to verify the model, an actual IMPE is selected as a case system. Simulation results for the case system indicate that the optimization model and corresponding algorithm are feasible. This method thus provides guidance on material selection and quantity demand for IMPEs in remote areas.

  16. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    Science.gov (United States)

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some, but inconclusive, evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker-positive and marker-negative subgroups and the prevalence of marker-positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker-negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size compared with the group sequential design when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

  17. On the effect of response transformations in sequential parameter optimization.

    Science.gov (United States)

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that, in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicate that the rank and the Box-Cox transformations are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document the higher discriminatory power obtained by the rank transformation.

  18. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    Science.gov (United States)

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later choice phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and the SCM uses latencies from the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and the experimental results were strongly correlated with this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.

  19. Sequential optimization of a polygeneration plant

    International Nuclear Information System (INIS)

    Rubio-Maya, Carlos; Uche, Javier; Martinez, Amaya

    2011-01-01

    Highlights: → A two-step optimization procedure for a polygeneration unit was tested. → The first step was synthesis and design, using a superstructure definition. → The second step optimized operation with hourly data and energy storage systems. → Remarkable benefits for the analyzed case study (a Spanish hotel) were found. - Abstract: This paper presents a two-step optimization procedure for a polygeneration unit. The unit simultaneously provides power, heat, cooling and fresh water to a Spanish tourist resort (450 rooms). The first step consists of the synthesis and design of the polygeneration scheme: a 'superstructure' was constructed to allow selection of the appropriate type and size of the plant components, from both economic and environmental considerations. In this first step, only monthly averaged requirements are considered. The second step includes hourly data and analysis as well as energy storage systems. Detailed modelling of the pre-selected devices is then required, again subject to economic and environmental constraints. As a result, a better performance is obtained compared to the first step. The two-step procedure explained here thus permits the complete design and operation of a decentralized plant simultaneously producing energy (power, heat and cooling) and desalted water (that is, trigeneration + desalination). Remarkable benefits are found for the analyzed case study: a net present value of almost 300,000 €, a primary energy saving ratio of about 18%, and more than 850 tons per year of avoided CO2 emissions.

  20. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models on the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden index. Moving differences between sequentially fitted parameters are then extracted, and their respective probability density estimates are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.

  1. Sequential optimization of approximate inhibitory rules relative to the length, coverage and number of misclassifications

    KAUST Repository

    Alsolami, Fawaz

    2013-01-01

    This paper is devoted to the study of algorithms for sequential optimization of approximate inhibitory rules relative to the length, coverage and number of misclassifications. These algorithms are based on extensions of the dynamic programming approach. The results of experiments on decision tables from the UCI Machine Learning Repository are discussed. © 2013 Springer-Verlag.

  3. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
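
    A minimal usage sketch of the library's core fmin interface, running a TPE search over a mixed search space (toy objective; real spaces are described in the paper and the project documentation):

        from hyperopt import fmin, tpe, hp, Trials

        # Search space mixing a continuous and a categorical hyperparameter
        space = {
            'x': hp.uniform('x', -5.0, 5.0),
            'kernel': hp.choice('kernel', ['linear', 'rbf']),
        }

        def objective(params):
            # Toy loss: minimized near x = 2 with the 'rbf' choice
            penalty = 0.0 if params['kernel'] == 'rbf' else 1.0
            return (params['x'] - 2.0) ** 2 + penalty

        trials = Trials()
        best = fmin(fn=objective, space=space, algo=tpe.suggest,
                    max_evals=100, trials=trials)
        print(best)   # note: hp.choice entries come back as indices, e.g. {'kernel': 1, 'x': 2.0...}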

  4. Optimal base-stock policy for the inventory system with periodic review, backorders and sequential lead times

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Thorstenson, Anders

    2008-01-01

    We extend well-known formulae for the optimal base stock of the inventory system with continuous review and constant lead time to the case with periodic review and stochastic, sequential lead times. Our extension uses the notion of the 'extended lead time'. The derived performance measures…
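
    The textbook starting point being extended is the critical-fractile characterization of the optimal base stock: the smallest S whose demand-distribution quantile reaches b/(b + h), for backorder cost b and holding cost h per unit. A sketch assuming Poisson demand over the review period plus lead time (parameters illustrative, not from the paper):

        from scipy.stats import poisson

        def optimal_base_stock(mean_demand, h, b):
            """Smallest S with P(D <= S) >= b/(b+h), for Poisson demand D over
            a review period plus (constant) lead time."""
            critical_ratio = b / (b + h)
            return int(poisson.ppf(critical_ratio, mean_demand))

        # Demand averaging 20 units over lead time + review period,
        # holding cost 1 and backorder cost 9 per unit per period:
        print(optimal_base_stock(20, h=1.0, b=9.0))   # critical ratio 0.9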

  5. Time-Sequential Working Wavelength-Selective Filter for Flat Autostereoscopic Displays

    Directory of Open Access Journals (Sweden)

    René de la Barré

    2017-02-01

    A time-sequential, spatially multiplexed autostereoscopic 3D display design consisting of a fast switchable RGB color filter array and a fast color display is presented. The newly introduced 3D display design is usable as a multi-user display as well as a single-user system. The wavelength-selective filter barrier emits light from a larger aperture than common autostereoscopic barrier displays with similar barrier pitch and ascent. Measurements on a demonstrator with commercial display components, simulations and computational evaluations have been carried out to describe the proposed wavelength-selective display design in static states and to show the weak spots of display filters in commercial displays. Optical modelling of wavelength-selective barriers has been used, for instance, to calculate the light ray distribution properties of the arrangement. In the time-sequential implementation, it is important to prevent quick eye or eyelid movements from producing visible color artifacts. Therefore, color filter cells switching faster than conventional LC display cells must distribute directed light from different primaries at the same time to create a 3D presentation. For this purpose, electrically tunable liquid crystal Fabry–Pérot color filters are presented. They switch the colors red, green and blue on and off in the millisecond regime. Their active areas consist of a sub-micrometer-thick nematic layer sandwiched between dielectric mirrors and indium tin oxide (ITO) electrodes. These cells switch narrowband red, green or blue light. A barrier filter array for a high-resolution, glasses-free 3D display has to be equipped with several thousand switchable filter elements having different color apertures.

  6. Sequential Optimization of Global Sequence Alignments Relative to Different Cost Functions

    KAUST Repository

    Odat, Enas M.

    2011-05-01

    The purpose of this dissertation is to present a methodology that models the global sequence alignment problem as a directed acyclic graph, which helps to extract all possible optimal alignments. Moreover, a mechanism to sequentially optimize the sequence alignment problem relative to different cost functions is suggested. Sequence alignment is centrally important in computational biology, where it is used to find evolutionary relationships between biological sequences. Many algorithms have been developed to solve this problem; the most famous are Needleman-Wunsch and Smith-Waterman, which are based on dynamic programming. In dynamic programming, the problem is divided into a set of overlapping subproblems, the solution of each subproblem is found, and the solutions to these subproblems are finally combined into a final solution. In this thesis it has been proved that for two sequences of length m and n over a fixed alphabet, the suggested optimization procedure requires O(mn) arithmetic operations per cost function on a single-processor machine. The algorithm has been implemented in the C#.Net programming language and a number of experiments have been done to verify the proved statements. The results of these experiments show that the number of optimal alignments is reduced after each step of optimization. Furthermore, it has been verified that as the sequence length increases linearly, the number of optimal alignments increases exponentially, at a rate that also depends on the cost function used. Finally, the number of executed operations increases polynomially as the sequence length increases linearly.
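    The dynamic-programming scheme referred to above can be illustrated with a compact Needleman-Wunsch implementation that fills an (m+1) x (n+1) score table in O(mn) operations; the scoring values below are arbitrary placeholders, not the thesis's cost functions.

    ```python
    # Needleman-Wunsch global alignment via dynamic programming.
    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
        m, n = len(a), len(b)
        # dp[i][j] = best score aligning a[:i] with b[:j]
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            dp[i][0] = i * gap
        for j in range(1, n + 1):
            dp[0][j] = j * gap
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                dp[i][j] = max(dp[i - 1][j - 1] + s,   # substitute/match
                               dp[i - 1][j] + gap,     # gap in b
                               dp[i][j - 1] + gap)     # gap in a
        return dp[m][n]

    print(needleman_wunsch("GATTACA", "GCATGCU"))
    ```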

  7. Note: Optimal base-stock policy for the inventory system with periodic review, backorders and sequential lead times

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Thorstenson, Anders

    We show that well-known textbook formulae for determining the optimal base stock of the inventory system with continuous review and constant lead time can easily be extended to the case with periodic review and stochastic, sequential lead times. The provided performance measures and conditions...

  8. Effects of simultaneous and optimized sequential cardiac resynchronization therapy on myocardial oxidative metabolism and efficiency.

    Science.gov (United States)

    Christenson, Stuart D; Chareonthaitawee, Panithaya; Burnes, John E; Hill, Michael R S; Kemp, Brad J; Khandheria, Bijoy K; Hayes, David L; Gibbons, Raymond J

    2008-02-01

    Cardiac resynchronization therapy (CRT) can improve left ventricular (LV) hemodynamics and function. Recent data suggest the energy cost of such improvement is favorable. The effects of sequential CRT on myocardial oxidative metabolism (MVO(2)) and efficiency have not been previously assessed. Eight patients with NYHA class III heart failure were studied 196 +/- 180 days after CRT implant. Dynamic [(11)C]acetate positron emission tomography (PET) and echocardiography were performed after 1 hour of: 1) AAI pacing, 2) simultaneous CRT, and 3) sequential CRT. MVO(2) was calculated using the monoexponential clearance rate of [(11)C]acetate (k(mono)). Myocardial efficiency was expressed in terms of the work metabolic index (WMI). P values represent overall significance from repeated measures analysis. Global LV and right ventricular (RV) MVO(2) were not significantly different between pacing modes, but the septal/lateral MVO(2) ratio differed significantly with the change in pacing mode (AAI pacing = 0.696 +/- 0.094 min(-1), simultaneous CRT = 0.975 +/- 0.143 min(-1), and sequential CRT = 0.938 +/- 0.189 min(-1); overall P = 0.001). Stroke volume index (SVI) (AAI pacing = 26.7 +/- 10.4 mL/m(2), simultaneous CRT = 30.6 +/- 11.2 mL/m(2), sequential CRT = 33.5 +/- 12.2 mL/m(2)) and WMI (simultaneous CRT = 4.29 +/- 1.72 mmHg*mL/m(2)*10(6), sequential CRT = 4.79 +/- 1.92 mmHg*mL/m(2)*10(6); overall P = 0.002) also differed between pacing modes. Compared with simultaneous CRT, additional changes in septal/lateral MVO(2), SVI, and WMI with sequential CRT were not statistically significant on post hoc analysis. In this small selected population, CRT increases LV SVI without increasing MVO(2), resulting in improved myocardial efficiency. Additional improvements in LV work, oxidative metabolism, and efficiency from simultaneous to sequential CRT were not significant.

  9. Applying the minimax principle to sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    2002-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master, a nonmaster, or to continue sampling and administering another random item. The framework of minimax sequential decision theory (minimum

  10. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Mahayni, Malek A.

    2011-07-01

    Finding optimal paths in directed graphs is a wide area of research that has received much attention in theoretical computer science due to its importance in many applications (e.g., computer networks and road maps). Many algorithms have been developed to solve the optimal-path problem for different kinds of graphs. An algorithm that solves the problem of path optimization in directed graphs relative to different cost functions is described in [1]. It follows an approach extended from dynamic programming, solving the problem sequentially, and works on directed graphs with positive weights and no loop edges. The aim of this thesis is to implement and evaluate that algorithm to find the optimal paths in directed graphs relative to two different cost functions. A possible interpretation of a directed graph is a network of roads, so the weights for the first function represent the lengths of roads, whereas the weights for the second function represent a constraint on the width or weight of a vehicle. The optimization aim for these two functions is to minimize the cost relative to the first function and maximize the constraint value associated with the second. This thesis also includes finding and proving the relation between the two cost functions: when given a value of one function, we can find the best possible value for the other. This relation is proven theoretically and also implemented and experimented with using Matlab® [2].
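    As an illustration of sequential optimization relative to two cost functions on a directed graph, the sketch below first maximizes the bottleneck width from s to t and then minimizes path length using only width-feasible edges; the graph encoding and the two-stage order are assumptions made for this example, not the algorithm of [1].

    ```python
    # Two-stage path optimization: widest bottleneck first, then shortest
    # length among edges that satisfy the achieved bottleneck width.
    import heapq

    def widest_bottleneck(adj, s, t):
        # adj[u] = list of (v, length, width)
        best = {s: float("inf")}
        heap = [(-best[s], s)]
        while heap:
            w, u = heapq.heappop(heap)
            w = -w
            if u == t:
                return w
            if w < best.get(u, -1):
                continue  # stale heap entry
            for v, _, width in adj[u]:
                cand = min(w, width)
                if cand > best.get(v, -1):
                    best[v] = cand
                    heapq.heappush(heap, (-cand, v))
        return -1

    def shortest_with_min_width(adj, s, t, min_width):
        dist = {s: 0}
        heap = [(0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == t:
                return d
            if d > dist.get(u, float("inf")):
                continue
            for v, length, width in adj[u]:
                if width >= min_width and d + length < dist.get(v, float("inf")):
                    dist[v] = d + length
                    heapq.heappush(heap, (d + length, v))
        return float("inf")

    adj = {0: [(1, 4, 10), (2, 1, 3)], 1: [(3, 1, 10)], 2: [(3, 1, 3)], 3: []}
    w = widest_bottleneck(adj, 0, 3)
    print(w, shortest_with_min_width(adj, 0, 3, w))  # 10 5
    ```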

  11. Optimization strategies based on sequential quadratic programming applied for a fermentation process for butanol production.

    Science.gov (United States)

    Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens

    2009-11-01

    In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units, as follows: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, despite the very similar solutions obtained with both strategies, the problems encountered with the deterministic-model strategy, such as lack of convergence and high computational time, make the statistical-model strategy, which proved robust and fast, more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
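    The nonlinear-programming-plus-SQP workflow can be sketched with SciPy's SLSQP solver; the toy objective and conversion constraint below are illustrative stand-ins for the fermentation model, not the authors' equations.

    ```python
    # Hedged sketch: solving a nonlinear program with a sequential
    # quadratic programming solver (SciPy's SLSQP). Objective and
    # constraint are toy stand-ins for productivity and conversion.
    import numpy as np
    from scipy.optimize import minimize

    def neg_productivity(x):
        # x = (dilution_rate, recycle_fraction): illustrative variables only
        return -(x[0] * (1.0 - x[0]) + 0.5 * x[1])

    constraints = [
        # Require a minimum substrate conversion, modeled as g(x) >= 0.
        {"type": "ineq", "fun": lambda x: 0.9 - x[0] * x[1]},
    ]
    bounds = [(0.0, 1.0), (0.0, 1.0)]

    res = minimize(neg_productivity, x0=np.array([0.2, 0.2]),
                   method="SLSQP", bounds=bounds, constraints=constraints)
    print(res.x, -res.fun)  # approaches x = (0.5, 1.0) for this toy model
    ```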

  12. A minimax procedure in the context of sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    1999-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administering another random test item. The framework of minimax sequential decision theory

  13. Selective evolutionary generation systems: Theory and applications

    Science.gov (United States)

    Menezes, Amor A.

    This dissertation is devoted to the problem of behavior design, which is a generalization of the standard global optimization problem: instead of generating the optimizer, the generalization produces, on the space of candidate optimizers, a probability density function referred to as the behavior. The generalization depends on a parameter, the level of selectivity, such that as this parameter tends to infinity, the behavior becomes a delta function at the location of the global optimizer. The motivation for this generalization is that traditional off-line global optimization is non-resilient and non-opportunistic. That is, traditional global optimization is unresponsive to perturbations of the objective function. On-line optimization methods that are more resilient and opportunistic than their off-line counterparts typically consist of the computationally expensive sequential repetition of off-line techniques. A novel approach to inexpensive resilience and opportunism is to utilize the theory of Selective Evolutionary Generation Systems (SECS), which sequentially and probabilistically selects a candidate optimizer based on the ratio of the fitness values of two candidates and the level of selectivity. Using time-homogeneous, irreducible, ergodic Markov chains to model a sequence of local, and hence inexpensive, dynamic transitions, this dissertation proves that such transitions result in behavior that is called rational; such behavior is desirable because it can lead to both efficient search for an optimizer as well as resilient and opportunistic behavior. The dissertation also identifies system-theoretic properties of the proposed scheme, including equilibria, their stability and their optimality. Moreover, this dissertation demonstrates that the canonical genetic algorithm with fitness proportional selection and the (1+1) evolutionary strategy are particular cases of the scheme. Applications in three areas illustrate the versatility of the SECS theory: flight
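    One plausible instantiation of the selection rule described above (a probabilistic choice driven by the fitness ratio of two candidates and a selectivity level s) is sketched below; the acceptance formula and toy fitness are assumptions for illustration, not the dissertation's exact scheme.

    ```python
    # Sketch of a selective evolutionary generation step: choose between
    # the incumbent and a locally perturbed candidate with a probability
    # driven by their fitness ratio raised to the selectivity level s.
    # As s grows, the choice concentrates on the better candidate.
    import random

    def secs_step(x, fitness, perturb, s):
        y = perturb(x)
        fx, fy = fitness(x), fitness(y)
        p_y = fy**s / (fx**s + fy**s)  # probability of selecting y
        return y if random.random() < p_y else x

    # Maximize a simple unimodal fitness over the integers 0..100.
    fitness = lambda x: 1.0 / (1.0 + (x - 42) ** 2)
    perturb = lambda x: min(100, max(0, x + random.choice([-1, 1])))

    x = 0
    for _ in range(5000):
        x = secs_step(x, fitness, perturb, s=20.0)
    print(x)  # concentrates near 42 for large selectivity
    ```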

  14. Sequential lineup presentation: Patterns and policy

    OpenAIRE

    Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I

    2009-01-01

    Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...

  15. Feature Selection via Chaotic Antlion Optimization.

    Directory of Open Access Journals (Sweden)

    Hossam M Zawbaa

    Full Text Available Selecting a subset of relevant properties from a large set of features that describe a dataset is a challenging machine learning task. In biology, for instance, the advances in the available technologies enable the generation of a very large number of biomarkers that describe the data. Choosing the more informative markers along with performing a high-accuracy classification over the data can be a daunting task, particularly if the data are high dimensional. An often adopted approach is to formulate the feature selection problem as a biobjective optimization problem, with the aim of maximizing the performance of the data analysis model (the quality of the training fit) while minimizing the number of features used. We propose an optimization approach for the feature selection problem that considers a "chaotic" version of the antlion optimizer method, a nature-inspired algorithm that mimics the hunting mechanism of antlions in nature. The balance between exploration of the search space and exploitation of the best solutions is a challenge in multi-objective optimization. The exploration/exploitation rate is controlled by the parameter I, which limits the random walk range of the ants/prey. This variable is adjusted iteratively in a quasi-linear manner to decrease the exploration rate as the optimization progresses. The quasi-linear schedule of the variable I may lead to premature convergence in some cases and trapping in local minima in other cases. The chaotic system proposed here attempts to improve the tradeoff between exploration and exploitation. The methodology is evaluated using different chaotic maps on a number of feature selection datasets. To ensure generality, we used ten biological datasets, but we also used other types of data from various sources. The results are compared with the particle swarm optimizer and with genetic algorithm variants for feature selection using a set of quality metrics.
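    The chaotic modification can be pictured as replacing the quasi-linear schedule of I with a chaotically modulated one; the logistic map below is one commonly used chaotic map, and its coupling to I is an assumption for illustration.

    ```python
    # Illustrative chaotic schedule: perturb the shrinking ratio I with a
    # logistic-map sequence instead of a purely quasi-linear adjustment.
    def logistic_map(x0=0.7, r=4.0):
        x = x0
        while True:
            x = r * x * (1.0 - x)
            yield x

    chaos = logistic_map()
    T = 1000
    I_schedule = []
    for t in range(1, T + 1):
        base_I = 1.0 + 10.0 * (t / T)                    # quasi-linear part
        I_schedule.append(base_I * (0.5 + next(chaos)))  # chaotic modulation
    # I_schedule[t] would bound the random walk of ants/prey at iteration t.
    print(I_schedule[:3])
    ```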

  16. Optimization of Cu-Zn Massive Sulphide Flotation by Selective Reagents

    Science.gov (United States)

    Soltani, F.; Koleini, S. M. J.; Abdollahy, M.

    2014-10-01

    Selective flotation of base metal sulphide minerals can be achieved by using selective reagents. Sequential flotation of chalcopyrite-sphalerite from Taknar (Iran) massive sulphide ore with 3.5 % Zn and 1.26 % Cu was studied. A D-optimal design of response surface methodology was used. Four mixed collector types (Aero238 + SIPX, Aero3477 + SIPX, TC1000 + SIPX and X231 + SIPX), two depressant systems (CuCN-ZnSO4 and dextrin-ZnSO4), pH and ZnSO4 dosage were considered as operational factors in the first stage of flotation. Different conditions of pH, CuSO4 dosage and SIPX dosage were studied for sphalerite flotation from first-stage tailings. Aero238 + SIPX induced better selectivity for chalcopyrite against pyrite and sphalerite. Dextrin-ZnSO4 was as effective as CuCN-ZnSO4 in sphalerite-pyrite depression. Under optimum conditions, Cu recovery, Zn recovery and pyrite content in the Cu concentrate were 88.99, 33.49 and 1.34 %, respectively, using Aero238 + SIPX as mixed collector and CuCN-ZnSO4 as depressant system, at a ZnSO4 dosage of 200 g/t and pH 10.54. When CuCN was used in the first stage, CuSO4 consumption increased and Zn recovery decreased during the second stage. Maximum Zn recovery was 72.19 %, obtained using 343.66 g/t of CuSO4 and 22.22 g/t of SIPX at pH 9.99 in the second stage.

  17. Acquisition of Inductive Biconditional Reasoning Skills: Training of Simultaneous and Sequential Processing.

    Science.gov (United States)

    Lee, Seong-Soo

    1982-01-01

    Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…

  18. Speciation of heavy metals in garden soils. Evidences from selective and sequential chemical leaching

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Zhongqi; Lee, Leda; Dayan, Sara; Grinshtein, Michael [Brooklyn College of The City Univ. of New York, Brooklyn, NY (United States). Environmental Sciences Analytical Center; Shaw, Richard [USDA-NRCS NYC Soil Survey, Staten Island, NY (United States)

    2011-06-15

    Purpose: Gardening (especially food growing) in urban areas is becoming popular, but urban soils are often heavily contaminated for historical reasons. There is a lack of sufficient information on the bioavailability of soil heavy metals to plants and humans in urban environments. This study examines the relative leachability of Cr, Ni, As, Cd, Zn, and Pb for soils with varying characteristics. The speciation and mobility of these metals can be qualitatively inferred from the leaching experiments. The goal is to use the data to shed some light on their bioavailability to plants and humans, as well as on the basis for soil remediation. Materials and methods: Selective and sequential chemical leaching methods were both used to evaluate the speciation of Cr, Ni, As, Cd, Zn, and Pb in soil samples collected from New York City residential and community gardens. The sequential leaching experiment followed a standard BCR four-step procedure, while selective leaching involved seven different chemical extractants. Results and discussion: The results from the selective and sequential leaching methods are consistent. In general, very little of the heavy metals was found in the easily soluble or exchangeable fractions. Larger fractions of Cd and Zn can be leached out than of the other metals. Lead appears predominantly in the organic or carbonate fractions, of which approximately 30-60% is in the easily soluble organic fraction. Most As cannot be leached out by any of the extractants used, but this could have been complicated by the ineffective dissolution of oxides by ammonium hydroxylamine. Ni and Cr were mostly in the residual fractions but were partly released in the oxidizable fractions. Therefore, the leachability of the metals follows the order Cd/Zn > Pb > Ni/Cr. Conclusions: Despite the controversy and inaccuracy surrounding chemical leaching methods for the speciation of metals, chemical leaching data provide important, general, and easy-to-access information on the mobility of heavy metals

  19. TARGETED SEQUENTIAL DESIGN FOR TARGETED LEARNING INFERENCE OF THE OPTIMAL TREATMENT RULE AND ITS MEAN REWARD.

    Science.gov (United States)

    Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J

    2017-01-01

    This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards, under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.

  20. Seed selection by dark-eyed juncos (Junco hyemalis): optimal foraging with nutrient constraints?

    Science.gov (United States)

    Thompson, D B; Tomback, D F; Cunningham, M A; Baker, M C

    1987-11-01

    Observations of the foraging behavior of six captive dark-eyed juncos (Junco hyemalis) are used to test the assumptions and predictions of optimal diet choice models (Pyke et al. 1977) that include nutrients (Pulliam 1975). The birds sequentially encountered single seeds of niger thistle (Guizotia abyssinica) and of canary grass (Phalaris canariensis) on an artificial substrate in the laboratory. Niger thistle seeds were preferred by all birds although their profitability in terms of energy intake (J/s) was less than the profitability of canary grass seeds. Of four nutritional components used to calculate profitabilities (mg/s) lipid content was the only characteristic that could explain the junco's seed preference. As predicted by optimal diet theory the probability of consuming niger thistle seeds was independent of seed abundance. However, the consumption of 71-84% rather than 100% of the seeds encountered is not consistent with the prediction of all-or-nothing selection. Canary grass seeds were consumed at a constant rate (no./s) independent of the number of seeds encountered. This consumption pattern invalidates a model that assumes strict maximization. However, it is consistent with the assumption that canary grass seeds contain a nutrient which is required in minimum amounts to meet physiological demands (Pulliam 1975). These experiments emphasize the importance of incorporating nutrients into optimal foraging models and of combining seed preference studies with studies of the metabolic requirements of consumers.

  1. Sequential Test Selection by Quantifying of the Reduction in Diagnostic Uncertainty for the Diagnosis of Proximal Caries

    Directory of Open Access Journals (Sweden)

    Umut Arslan

    2013-06-01

    Full Text Available Background: In order to determine the presence or absence of a certain disease, multiple diagnostic tests may be necessary, and the performance of these tests can be evaluated sequentially. Aims: The aim of the study is to determine the contribution of the test at each step in reducing diagnostic uncertainty when multiple tests are used sequentially for diagnosis. Study Design: Diagnostic accuracy study. Methods: Radiographs of seventy-three patients of the Department of Dento-Maxillofacial Radiology of Hacettepe University Faculty of Dentistry were assessed. Panoramic (PAN), full mouth intraoral (FM), and bitewing (BW) radiographs were used for the diagnosis of proximal caries in the maxillary and mandibular molar regions. The diagnostic performance of radiography was evaluated sequentially using the reduction in diagnostic uncertainty. Results: FM provided maximum diagnostic information for ruling-in potential in the maxillary and mandibular molar regions in the first step. FM provided more diagnostic information than BW radiographs for ruling in within the mandibular region in the second step. In the mandibular region, BW radiographs provided more diagnostic information than FM for ruling out in the first step. Conclusion: The method presented in this study offers clinicians a basis for deciding on the sequential selection of diagnostic tests for the correct diagnosis of the presence or absence of a certain disease.
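    The step-wise reduction in diagnostic uncertainty can be expressed with likelihood ratios updating pre-test odds; the sketch below uses illustrative numbers, not the study's data.

    ```python
    # Sequential test evaluation via likelihood ratios: each test updates
    # the pre-test odds, and the change in post-test probability measures
    # the information that step contributed. All values are illustrative.
    def post_test_probability(prior_p, lr):
        odds = prior_p / (1.0 - prior_p)
        post_odds = odds * lr
        return post_odds / (1.0 + post_odds)

    p = 0.30  # assumed pre-test probability of proximal caries
    for name, lr_pos in [("PAN", 3.0), ("FM", 6.0), ("BW", 4.0)]:
        new_p = post_test_probability(p, lr_pos)
        print(f"{name}: {p:.2f} -> {new_p:.2f} (gain {new_p - p:+.2f})")
        p = new_p
    ```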

  2. Optimal periodic inspection of a deterioration process with sequential condition states

    International Nuclear Information System (INIS)

    Kallen, M.J.; Noortwijk, J.M. van

    2006-01-01

    The condition of components subject to visual inspections is often evaluated on a discrete scale. If at each inspection a decision is made to do nothing or to perform preventive or corrective maintenance, the proposed decision model allows us to determine the optimal time between periodic inspections, such that the expected average costs per unit of time are minimized. The model which describes the uncertain condition over time is based on a Markov process with sequential phases. The key quantities involved in the model are the probabilities of having to perform either preventive or corrective maintenance before or after an inspection. The cost functions for two scenarios are presented: a scenario in which failure is immediately detected without the need to perform an inspection, and a scenario in which failure is only detected by inspection of the object. Analytical results for a special case and algorithmic results for a broad class of Markov processes are derived. The model is illustrated using an application to the periodic inspection of road bridges.

  3. Optimization Strategies for Bruch's Membrane Opening Minimum Rim Area Calculation: Sequential versus Simultaneous Minimization.

    Science.gov (United States)

    Enders, Philip; Adler, Werner; Schaub, Friederike; Hermann, Manuel M; Diestelhorst, Michael; Dietlein, Thomas; Cursiefen, Claus; Heindl, Ludwig M

    2017-10-24

    To compare a simultaneously optimized continuous minimum rim surface parameter between Bruch's membrane opening (BMO) and the internal limiting membrane to the standard sequential minimization used for calculating the BMO minimum rim area in spectral domain optical coherence tomography (SD-OCT). In this case-control, cross-sectional study, 704 eyes of 445 participants underwent SD-OCT of the optic nerve head (ONH), visual field testing, and clinical examination. Globally and clock-hour sector-wise optimized BMO-based minimum rim area was calculated independently. Outcome parameters included BMO-globally optimized minimum rim area (BMO-gMRA) and sector-wise optimized BMO-minimum rim area (BMO-MRA). BMO area was 1.89 ± 0.05 mm². Mean global BMO-MRA was 0.97 ± 0.34 mm²; mean global BMO-gMRA was 1.01 ± 0.36 mm². Both parameters correlated with r = 0.995 (P < 0.001); the mean difference was 0.04 mm² (P < 0.001). In all sectors, the parameters differed by 3.0-4.2%. In receiver operating characteristics, the calculated area under the curve (AUC) to differentiate glaucoma was 0.873 for BMO-MRA, compared to 0.866 for BMO-gMRA (P = 0.004). Among ONH sectors, the temporal inferior location showed the highest AUC. Optimization strategies to calculate BMO-based minimum rim area led to significantly different results. Imposing an additional adjacency constraint within calculation of BMO-MRA does not improve diagnostic power. Global and temporal inferior BMO-MRA performed best in differentiating glaucoma patients.

  4. Optimal selection of TLD chips

    International Nuclear Information System (INIS)

    Phung, P.; Nicoll, J.J.; Edmonds, P.; Paris, M.; Thompson, C.

    1996-01-01

    Large sets of TLD chips are often used to measure beam dose characteristics in radiotherapy. A sorting method is presented to allow optimal selection of chips from a chosen set. This method considers the variation

  5. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
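    The multivariate idea, sampling uncertain parameters, re-solving the sequential decision problem, and reporting how often the base-case policy remains optimal, can be sketched on a toy two-state MDP; all model numbers below are illustrative assumptions, not the article's case study.

    ```python
    # Probabilistic sensitivity analysis sketch for a toy MDP: sample an
    # uncertain progression probability, re-solve by value iteration, and
    # estimate the probability that the base-case policy stays optimal.
    import numpy as np

    rng = np.random.default_rng(0)

    def optimal_action(p_progress, gamma=0.95):
        # Two-state toy MDP: state 0 (well) and absorbing state 1 (sick,
        # value 0). Action 0 = wait (reward 1.0, risks progression),
        # action 1 = treat (reward 0.7, prevents progression).
        v0 = 0.0
        for _ in range(500):  # value iteration on state 0
            q_wait = 1.0 + gamma * (1 - p_progress) * v0
            q_treat = 0.7 + gamma * v0
            v0 = max(q_wait, q_treat)
        return 0 if q_wait >= q_treat else 1

    base = optimal_action(0.10)
    draws = rng.beta(2, 18, size=1000)  # parameter uncertainty around 0.10
    agreement = np.mean([optimal_action(p) == base for p in draws])
    print(f"P(base-case policy remains optimal) ≈ {agreement:.2f}")
    ```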

  6. Combinatorial Optimization in Project Selection Using Genetic Algorithm

    Science.gov (United States)

    Dewi, Sari; Sawaluddin

    2018-01-01

    This paper discusses the problem of project selection with two objective functions, maximizing profit and minimizing cost, under limitations of resource availability and time, so that resources must be allocated to each project. These resources are human resources, machine resources, and raw material resources. This is treated as a consideration not to exceed the budget that has been determined, so that the problem can be formulated mathematically as a multi-objective function with constraints that must be fulfilled. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for the selection of the right projects. A multi-objective genetic algorithm method is then described as one multi-objective combinatorial optimization approach to simplify the project selection process in a large scope.
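    A minimal genetic-algorithm sketch for 0/1 project selection under a budget is shown below; the profit/cost data, penalty scheme, and operator choices are illustrative assumptions, not the paper's formulation.

    ```python
    # Genetic-algorithm sketch for 0/1 project selection: maximize profit
    # under a budget, with infeasible selections penalized to fitness 0.
    import random

    profit = [12, 9, 7, 15, 6]
    cost   = [ 5, 4, 3,  7, 2]
    budget = 12

    def fitness(bits):
        c = sum(b * x for b, x in zip(bits, cost))
        p = sum(b * x for b, x in zip(bits, profit))
        return p if c <= budget else 0  # penalize budget violations

    def crossover(a, b):
        k = random.randrange(1, len(a))  # single-point crossover
        return a[:k] + b[k:]

    def mutate(bits, rate=0.1):
        return [1 - b if random.random() < rate else b for b in bits]

    pop = [[random.randint(0, 1) for _ in profit] for _ in range(30)]
    for _ in range(100):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(20)]
    best = max(pop, key=fitness)
    print(best, fitness(best))
    ```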

  7. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    Science.gov (United States)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  8. Optimal Bandwidth Selection for Kernel Density Functionals Estimation

    Directory of Open Access Journals (Sweden)

    Su Chen

    2015-01-01

    Full Text Available The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel-based regression. Various bandwidth selection methods for KDE and local least square regression have been developed in the past decade. It is known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with an appropriate choice of γ(x), and furthermore that equality-of-scale and location tests can be transformed to comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, optimal bandwidth selection for the KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal scale bandwidth selection (namely, "rule of thumb") and direct plug-in bandwidth selection. Simulation studies show that our proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
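    The "rule of thumb" idea can be illustrated with Silverman's normal-scale bandwidth for plain KDE (the analogous normal-scale rule for density functionals follows the same pattern); this is a generic sketch, not the paper's KDFE-specific selector.

    ```python
    # Silverman's normal-scale ("rule of thumb") bandwidth for KDE:
    # h = 0.9 * min(std, IQR/1.34) * n**(-1/5).
    import numpy as np

    def silverman_bandwidth(x):
        x = np.asarray(x)
        n = x.size
        iqr = np.percentile(x, 75) - np.percentile(x, 25)
        sigma = min(x.std(ddof=1), iqr / 1.34)
        return 0.9 * sigma * n ** (-1 / 5)

    rng = np.random.default_rng(1)
    sample = rng.normal(size=500)
    print(silverman_bandwidth(sample))  # ≈ 0.9 * 500**(-0.2) ≈ 0.26
    ```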

  9. Sequential algorithm analysis to facilitate selective biliary access for difficult biliary cannulation in ERCP: a prospective clinical study.

    Science.gov (United States)

    Lee, Tae Hoon; Hwang, Soon Oh; Choi, Hyun Jong; Jung, Yunho; Cha, Sang Woo; Chung, Il-Kwun; Moon, Jong Ho; Cho, Young Deok; Park, Sang-Heum; Kim, Sun-Joo

    2014-02-17

    Numerous clinical trials aiming to improve the success rate of biliary access in difficult biliary cannulation (DBC) during ERCP have been reported. However, standard guidelines and sequential protocol analyses of the different methods remain limited. We planned to investigate a sequential protocol to facilitate selective biliary access for DBC during ERCP. This prospective clinical study enrolled 711 patients with naïve papillae at a tertiary referral center. If wire-guided cannulation was deemed to have failed according to the DBC criteria, then, following the cannulation algorithm, early precut fistulotomy (EPF; cannulation time > 5 min, papillary contacts > 5 times, or hook-nose-shaped papilla), double-guidewire cannulation (DGC; unintentional pancreatic duct cannulation ≥ 3 times), and precut after placement of a pancreatic stent (PPS; if DGC was difficult or failed) were performed sequentially. The main outcome measurements were technical success, procedure outcomes, and complications. Initially, a total of 140 (19.7%) patients with DBC underwent EPF (n = 71) and DGC (n = 69). Then, in the DGC group, 36 patients switched to PPS due to the difficulty criteria. The successful biliary cannulation rate was 97.1% (136/140): 94.4% (67/71) with EPF, 47.8% (33/69) with DGC, and 100% (36/36) with PPS. Mean cannulation times were 314.8 (65.2) seconds in DGC and 706.0 (469.4) seconds in PPS. The sequential use of EPF, DGC, and PPS may be safe and feasible for DBC. The use of EPF for selected DBC criteria, DGC for unintentional pancreatic duct cannulations, and PPS for failed or difficult DGC may facilitate successful biliary cannulation.

  10. Managing the Public Sector Research and Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization

    Science.gov (United States)

    2016-09-01

    By Jason A. Schwartz. A case study describing how public sector organizations can implement a research and development (R&D) portfolio optimization strategy to maximize the cost...

  11. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    Science.gov (United States)

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and it successfully translates these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
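    The equal-allocation baseline that OCBA improves upon can be sketched as a PSO loop in which every particle's noisy fitness is averaged over the same sample budget; the OCBA allocation rule itself is omitted here, and all parameters below are illustrative.

    ```python
    # Baseline stochastic PSO sketch: each particle's noisy fitness is
    # averaged over an equal sample budget when updating the personal and
    # global bests. OCBA's contribution is to allocate these samples
    # unevenly toward the least certain comparisons (not shown).
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_f(x):  # true optimum at x = 0, with additive noise
        return np.sum(x**2) + rng.normal(scale=0.5)

    def estimate(x, n_samples=10):  # equal allocation per particle
        return np.mean([noisy_f(x) for _ in range(n_samples)])

    dim, n_particles = 2, 15
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([estimate(p) for p in x])
    g = pbest[np.argmin(pbest_val)]

    for _ in range(50):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        vals = np.array([estimate(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)]
    print(g)  # should approach the origin
    ```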

  12. Biocontrol of Phytophthora Blight and Anthracnose in Pepper by Sequentially Selected Antagonistic Rhizobacteria against Phytophthora capsici.

    Science.gov (United States)

    Sang, Mee Kyung; Shrestha, Anupama; Kim, Du-Yeon; Park, Kyungseok; Pak, Chun Ho; Kim, Ki Deok

    2013-06-01

    We previously developed a sequential screening procedure to select antagonistic bacterial strains against Phytophthora capsici in pepper plants. In this study, we used a modified screening procedure to select effective biocontrol strains against P. capsici; we evaluated the effect of the selected strains on Phytophthora blight and anthracnose occurrence and on fruit yield in pepper plants under field and plastic house conditions from 2007 to 2009. We selected four potential biocontrol strains (Pseudomonas otitidis YJR27, P. putida YJR92, Tsukamurella tyrosinosolvens YJR102, and Novosphingobium capsulatum YJR107) among 239 bacterial strains. In the 3-year field tests, all the selected strains significantly reduced Phytophthora blight and anthracnose incidence in at least one of the test years, but their biocontrol activities were variable. In addition, strains YJR27, YJR92, and YJR102, in certain harvests, increased pepper fruit numbers in field tests and red fruit weights in plastic house tests. Taken together, these results indicate that the screening procedure is rapid and reliable for the selection of potential biocontrol strains against P. capsici in pepper plants. In addition, these selected strains exhibited biocontrol activities against anthracnose, and some of the strains showed plant growth-promotion activities on pepper fruit.

  13. Optimal control of bond selectivity in unimolecular reactions

    International Nuclear Information System (INIS)

    Shi Shenghua; Rabitz, H.

    1991-01-01

    The optimal control theory approach to designing optimal fields for bond-selective unimolecular reactions is presented. A set of equations for determining the optimal fields, which lead to the achievement of the objective of bond-selective dissociation, is developed. The numerical procedure given for solving these equations requires the repeated calculation of the time propagator for a system with a time-dependent Hamiltonian. The splitting approximation combined with the fast Fourier transform algorithm is used for computing the short-time propagator. As an illustrative example, a model linear triatomic molecule is treated. The model system consists of two Morse oscillators coupled via kinetic coupling. The magnitudes of the dipoles of the two Morse oscillators are the same and the fundamental frequencies are almost the same, but the dissociation energies are different. The rather demanding objective under these conditions is to break the stronger bond while leaving the weaker one intact. Encouragingly, the present computational method efficiently yields an optimal field that achieves the objective of bond-selective dissociation. (orig.)
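    The splitting-plus-FFT short-time propagator mentioned here is commonly realized as the split-operator scheme; a one-dimensional sketch in atomic units follows, with an illustrative grid and a harmonic stand-in potential rather than the paper's coupled Morse system.

    ```python
    # Split-operator short-time propagator (1-D, atomic units): one step is
    # psi -> exp(-iV dt/2) IFFT[ exp(-iT dt) FFT[ exp(-iV dt/2) psi ] ].
    import numpy as np

    N, L, dt, m = 512, 20.0, 0.01, 1.0
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
    V = 0.5 * x**2                               # harmonic stand-in potential
    expV = np.exp(-0.5j * V * dt)                # half-step in position space
    expT = np.exp(-0.5j * k**2 / m * dt)         # kinetic step in k-space

    def step(psi):
        psi = expV * psi
        psi = np.fft.ifft(expT * np.fft.fft(psi))
        return expV * psi

    psi = np.exp(-(x - 1.0) ** 2)                # initial Gaussian packet
    psi = psi / np.linalg.norm(psi)
    for _ in range(100):
        psi = step(psi)
    print(np.linalg.norm(psi))                   # unitary: stays ≈ 1
    ```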

  14. Sequential and selective localized optical heating in water via on-chip dielectric nanopatterning.

    Science.gov (United States)

    Morsy, Ahmed M; Biswas, Roshni; Povinelli, Michelle L

    2017-07-24

    We study the use of nanopatterned silicon membranes to obtain optically-induced heating in water. We show that by varying the detuning between an absorptive optical resonance of the patterned membrane and an illumination laser, both the magnitude and the response time of the temperature rise can be controlled. This allows for either sequential or selective heating of different patterned areas on chip. We obtain a steady-state temperature of approximately 100 °C for an 805.5 nm CW laser power density of 66 µW/µm² and observe microbubble formation. The ability to spatially and temporally control temperature on the microscale should enable the study of heat-induced effects in a variety of chemical and biological lab-on-chip applications.

  15. Feature selection for portfolio optimization

    DEFF Research Database (Denmark)

    Bjerring, Thomas Trier; Ross, Omri; Weissensteiner, Alex

    2016-01-01

    Most portfolio selection rules based on the sample mean and covariance matrix perform poorly out-of-sample. Moreover, there is a growing body of evidence that such optimization rules are not able to beat simple rules of thumb, such as 1/N. Parameter uncertainty has been identified as one major....... While most of the diversification benefits are preserved, the parameter estimation problem is alleviated. We conduct out-of-sample back-tests to show that in most cases different well-established portfolio selection rules applied on the reduced asset universe are able to improve alpha relative...

  16. WE-AB-209-10: Optimizing the Delivery of Sequential Fluence Maps for Efficient VMAT Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Craft, D [Massachusetts General Hospital, Cambridge, MA (United States); Balvert, M [Tilburg University, Tilburg (Netherlands)

    2016-06-15

    Purpose: To develop an optimization model and solution approach for computing MLC leaf trajectories and dose rates for high-quality matching of a set of optimized fluence maps to be delivered sequentially around a patient in a VMAT treatment. Methods: We formulate the fluence map matching problem as a nonlinear optimization problem where time is discretized but dose rates and leaf positions are continuous variables. For a given allotted time, which is allocated across the fluence maps based on the complexity of each fluence map, the optimization problem searches for the best leaf trajectories and dose rates such that the original fluence maps are closely recreated. Constraints include maximum leaf speed, maximum dose rate, and leaf collision avoidance, as well as the constraint that the ending leaf positions for one map are the starting leaf positions for the next map. The resulting model is non-convex but smooth, and therefore we solve it by local searches from a variety of starting positions. We improve solution time by a custom decomposition approach which allows us to decouple the rows of the fluence maps and solve each leaf pair individually. This decomposition also makes the problem easily parallelized. Results: We demonstrate the method on a prostate case and a head-and-neck case and show that one can recreate fluence maps to a high degree of fidelity in modest total delivery time (minutes). Conclusion: We present a VMAT sequencing method that reproduces optimal fluence maps by searching over a vast number of possible leaf trajectories. By varying the total allotted time, this approach is the first of its kind to allow users to produce VMAT solutions that span the range from wide-field coarse VMAT deliveries to narrow-field high-MU sliding-window-like approaches.

  17. Feature Import Vector Machine: A General Classifier with Flexible Feature Selection.

    Science.gov (United States)

    Ghosh, Samiran; Wang, Yazhen

    2015-02-01

    The support vector machine (SVM) and other reproducing kernel Hilbert space (RKHS) based classifier systems have drawn much attention recently due to their robustness and generalization capability. The general theme is to construct classifiers based on the training data in a high dimensional space by using all available dimensions. The SVM achieves huge data compression by selecting only the few observations which lie close to the boundary of the classifier function. However, when the number of observations is not very large (small n) but the number of dimensions/features is large (large p), it is not necessarily the case that all available features are of equal importance in the classification context. Selecting a useful fraction of the available features may result in huge data compression. In this paper we propose an algorithmic approach by means of which such an optimal set of features can be selected. In short, we reverse the traditional sequential observation selection strategy of the SVM into a strategy of sequential feature selection. To achieve this we have modified the solution proposed by Zhu and Hastie (2005) in the context of the import vector machine (IVM), to select an optimal sub-dimensional model to build the final classifier with sufficient accuracy.
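    The reversal from observation selection to feature selection can be illustrated with plain greedy forward selection under cross-validation; the sketch below is generic scaffolding on a standard dataset, not the authors' import-vector machinery.

    ```python
    # Greedy sequential forward feature selection: add, one at a time, the
    # feature that most improves cross-validated accuracy; stop when no
    # remaining feature helps.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0

    while remaining:
        scores = {
            j: cross_val_score(LogisticRegression(max_iter=5000),
                               X[:, selected + [j]], y, cv=5).mean()
            for j in remaining
        }
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:  # no feature improves the score
            break
        best_score = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)

    print(selected, round(best_score, 3))
    ```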

  18. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    International Nuclear Information System (INIS)

    Zhou, Z; Folkert, M; Wang, J

    2016-01-01

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning models using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  19. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z; Folkert, M; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning models using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  20. An intuitionistic fuzzy optimization approach to vendor selection problem

    Directory of Open Access Journals (Sweden)

    Prabjot Kaur

    2016-09-01

    Full Text Available Selecting the right vendor is an important business decision made by any organization. The decision involves multiple criteria, and if the objectives vary in preference and scope, the nature of the decision becomes multiobjective. In this paper, a vendor selection problem has been formulated as an intuitionistic fuzzy multiobjective optimization in which an appropriate number of vendors is to be selected and orders allocated to them. The multiobjective problem includes three objectives: minimizing the net price, maximizing the quality, and maximizing the on-time deliveries, subject to suppliers' constraints. The objective functions and the demand are treated as intuitionistic fuzzy sets. An intuitionistic fuzzy set is able to handle uncertainty with additional degrees of freedom. The intuitionistic fuzzy optimization (IFO) problem is converted into a crisp linear form and solved using the optimization software Tora. The advantage of IFO is that it gives better results than fuzzy/crisp optimization. The proposed approach is explained by a numerical example.

  1. STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING: A SURVEY

    Directory of Open Access Journals (Sweden)

    Damián Fernández

    2014-12-01

    Full Text Available We review the motivation for, the current state-of-the-art in convergence results, and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to more general variational problems.

  2. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    Science.gov (United States)

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge.

  3. Optimization methods for activities selection problems

    Science.gov (United States)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Co-curricular activities must be joined by every student in Malaysia, and these activities bring many benefits to the students. By joining these activities, the students can learn time management and develop many useful skills. This project focuses on the selection of co-curricular activities in a secondary school using two optimization methods, the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight for each activity based on the 3 chosen criteria, which are soft skills, interesting activities, and performance. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. Then, the ZOGP model was analyzed using LINGO software version 15.0. There are two priorities to be considered. The first priority, which is to minimize the budget for the activities, is achieved, since the total budget can be reduced by RM233.00. Therefore, the total budget to implement the selected activities is RM11,195.00. The second priority, which is to select the co-curricular activities, is also achieved. The results showed that 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activity selection problems.
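    The AHP weighting step can be sketched as extracting the normalized principal eigenvector of a pairwise comparison matrix; the matrix entries below are illustrative, not the study's questionnaire data.

    ```python
    # AHP weight computation: weights are the normalized principal
    # eigenvector of a pairwise comparison matrix on Saaty's 1-9 scale.
    import numpy as np

    # Pairwise comparisons of {soft skills, interesting activities,
    # performance} (illustrative values).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    w = w / w.sum()
    print(w)  # the first criterion (soft skills) receives the largest weight
    ```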

  4. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, for example, in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  5. Risk-aware multi-armed bandit problem with application to portfolio selection.

    Science.gov (United States)

    Huo, Xiaoguang; Fu, Feng

    2017-11-01

    Sequential portfolio selection has attracted increasing interest in the machine learning and quantitative finance communities in recent years. As a mathematical framework for reinforcement learning policies, the stochastic multi-armed bandit problem addresses the primary difficulty in sequential decision-making under uncertainty, namely the exploration versus exploitation dilemma, and therefore provides a natural connection to portfolio selection. In this paper, we incorporate risk awareness into the classic multi-armed bandit setting and introduce an algorithm to construct portfolios. By filtering assets based on the topological structure of the financial market and combining the optimal multi-armed bandit policy with the minimization of a coherent risk measure, we achieve a balance between risk and return.
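    The exploration-versus-exploitation mechanism underlying this approach can be illustrated with the classic UCB1 policy; the rewards below are Bernoulli stand-ins for asset returns, without the paper's risk-measure or topology-filtering extensions.

    ```python
    # UCB1 sketch: pull the arm with the highest mean estimate plus an
    # exploration bonus that shrinks as the arm is sampled more often.
    import math, random

    means = [0.3, 0.5, 0.6]                 # unknown arm means (illustrative)
    counts, sums = [0] * len(means), [0.0] * len(means)

    for t in range(1, 5001):
        if 0 in counts:
            arm = counts.index(0)           # play each arm once first
        else:
            ucb = [sums[i] / counts[i]
                   + math.sqrt(2 * math.log(t) / counts[i])
                   for i in range(len(means))]
            arm = max(range(len(means)), key=lambda i: ucb[i])
        reward = 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward

    print(counts)  # pulls concentrate on the best arm (index 2)
    ```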

  6. An iterative approach for the optimization of pavement maintenance management at the network level.

    Science.gov (United States)

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from the optimum. Scenarios in which each approach is suitable are identified. Finally, an iterative approach combining the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  7. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    Directory of Open Access Journals (Sweden)

    Cristina Torres-Machí

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from the optimum. Scenarios in which each approach is suitable are identified. Finally, an iterative approach combining the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  8. SeGRAm - A practical and versatile tool for spacecraft trajectory optimization

    Science.gov (United States)

    Rishikof, Brian H.; Mccormick, Bernell R.; Pritchard, Robert E.; Sponaugle, Steven J.

    1991-01-01

    An implementation of the Sequential Gradient/Restoration Algorithm, SeGRAm, is presented along with selected examples. This spacecraft trajectory optimization and simulation program uses variational calculus to solve problems of spacecraft flying under the influence of one or more gravitational bodies. It produces a series of feasible solutions to problems involving a wide range of vehicles, environments and optimization functions, until an optimal solution is found. The examples included highlight the various capabilities of the program and emphasize in particular its versatility over a wide spectrum of applications from ascent to interplanetary trajectories.

  9. Sequential and parallel image restoration: neural network implementations.

    Science.gov (United States)

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm based on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.

  10. Topology optimization of induction heating model using sequential linear programming based on move limit with adaptive relaxation

    Science.gov (United States)

    Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori

    2017-12-01

    It is very important to design electrical machinery with high efficiency from the point of view of saving energy. Therefore, topology optimization (TO) is occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO allows a design with a much higher degree of structural freedom, it has the potential to derive novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding model, which has many local minima, is first employed as a benchmark for evaluating the performance of several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.

  11. Exploring selection and recruitment processes for newly qualified nurses: a sequential-explanatory mixed-method study.

    Science.gov (United States)

    Newton, Paul; Chandler, Val; Morris-Thomson, Trish; Sayer, Jane; Burke, Linda

    2015-01-01

    To map current selection and recruitment processes for newly qualified nurses and to explore the advantages and limitations of these processes. The need to improve current selection and recruitment practices for newly qualified nurses is highlighted in health policy internationally. A cross-sectional, sequential-explanatory mixed-method design with 4 components was used: (1) a literature review of the selection and recruitment of newly qualified nurses; (2) a literature review of a public sector profession's selection and recruitment processes; (3) a survey mapping existing selection and recruitment processes for newly qualified nurses; and (4) a qualitative study of recruiters' selection and recruitment processes. Literature searches on the selection and recruitment of newly qualified candidates in teaching and nursing (2005-2013) were conducted. Cross-sectional, mixed-method data were collected using a survey instrument from thirty-one (n = 31) individuals in health providers in London who had responsibility for the selection and recruitment of newly qualified nurses. Of the providers who took part, six (n = 6) were purposively selected for qualitative interviews. Issues of supply and demand in the workforce, rather than selection and recruitment tools, predominated in the literature reviews. Examples of tools to measure values, attitudes and skills were found in the nursing literature. The mapping exercise found that providers used many selection and recruitment tools, and some providers combined tools to streamline the process and assure the quality of candidates. Most providers had processes which addressed the issue of quality in the selection and recruitment of newly qualified nurses. The 'assessment centre model', which providers were adopting, allowed for multiple levels of assessment and streamlined recruitment. There is a need to validate the efficacy of the selection tools. © 2014 John Wiley & Sons Ltd.

  12. Optimal tariff design under consumer self-selection

    Energy Technology Data Exchange (ETDEWEB)

    Raesaenen, M.; Ruusunen, J.; Haemaelaeinen, R.

    1995-12-31

    This report considers the design of electricity tariffs that guide an individual consumer to select the tariff designed for his consumption pattern. In the model, the utility maximizes the weighted sum of individual consumers' benefits of electricity consumption subject to the utility's revenue requirement constraints. The consumers' free choice of tariffs is ensured with so-called self-selection constraints. The relationship between the consumers' optimal choice of tariffs and the weights in the aggregated consumers' benefit function is analyzed. If such weights exist, they will guarantee both the consumers' optimal choice of tariffs and efficient consumption patterns. The welfare effects are also analyzed using demand parameters estimated from a Finnish dynamic pricing experiment. The results indicate that it is possible to design an efficient tariff menu in which the welfare losses caused by the self-selection constraints are small compared with the costs created when some consumers choose tariffs other than those assigned to them. (author)

  13. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    In this paper, the behavioral construct of suitability is used to develop a multicriteria decision-making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations, with a view to obtaining a suitability performance score for each asset. A fuzzy multiple-criteria decision-making method is used to obtain the financial quality score of each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India to demonstrate the effectiveness of the proposed methodology.

  14. Sequential Optimization Methods for Augmentation of Marine Enzymes Production in Solid-State Fermentation: l-Glutaminase Production a Case Study.

    Science.gov (United States)

    Sathish, T; Uppuluri, K B; Veera Bramha Chari, P; Kezia, D

    There is an increasing worldwide market for l-glutaminase due to its relevant industrial applications. Salt-tolerant l-glutaminases play a vital role in enhancing the flavor of different types of foods, such as soy sauce and tofu. This chapter presents the economically viable production of l-glutaminase in solid-state fermentation (SSF) by Aspergillus flavus MTCC 9972 as a case study. The enzyme production was improved following a three-step optimization process. Initially, a mixture design (MD) (augmented simplex lattice design) was employed to optimize the solid substrate mixture; a mixture of 59:41 wheat bran and Bengal gram husk gave higher amounts of l-glutaminase. Glucose and l-glutamine were screened as the best additional carbon and nitrogen sources for l-glutaminase production with the help of a Plackett-Burman design (PBD). l-Glutamine also acts as a nitrogen source as well as an inducer for the secretion of l-glutaminase from A. flavus MTCC 9972. In the final step of optimization, various environmental and nutritive parameters, such as pH, temperature, moisture content, inoculum concentration, and glucose and l-glutamine levels, were optimized through the use of hybrid feed-forward neural networks (FFNNs) and a genetic algorithm (GA). Through the sequential optimization methods MD-PBD-FFNN-GA, l-glutaminase production in SSF could be improved by 2.7-fold (453-1690 U/g). © 2016 Elsevier Inc. All rights reserved.

  15. Optimal Sales Schemes for Network Goods

    DEFF Research Database (Denmark)

    Parakhonyak, Alexei; Vikander, Nick

    consumers simultaneously, serve them all sequentially, or employ any intermediate scheme. We show that the optimal sales scheme is purely sequential, where each consumer observes all previous sales before choosing whether to buy himself. A sequential scheme maximizes the amount of information available...

  16. Sequential determination of important ecotoxic radionuclides in nuclear waste samples

    International Nuclear Information System (INIS)

    Bilohuscin, J.

    2016-01-01

    In the dissertation thesis we focused on the development and optimization of a sequential determination method for the radionuclides 93Zr, 94Nb, 99Tc and 126Sn, employing the extraction chromatography sorbents TEVA® Resin and Anion Exchange Resin, supplied by Eichrom Industries. Prior to validating the sequential separation of these radionuclides from radioactive waste samples, a unique sequential procedure for separating 90Sr, 239Pu and 241Am from urine matrices was tested, using molecular recognition sorbents of the AnaLig® series and the extraction chromatography sorbent DGA® Resin. In these experiments, four different sorbents were used in sequence for the separation, including the PreFilter Resin sorbent, which removes interfering organic materials present in raw urine. After positive results were obtained with this sequential procedure, experiments followed on 126Sn separation using the TEVA® Resin and Anion Exchange Resin sorbents. Radiochemical recoveries obtained from samples of radioactive evaporator concentrates and sludge showed high separation efficiency, while 126Sn values were below the minimum detectable activities (MDA). The activity of 126Sn was determined after ingrowth of the daughter nuclide 126mSb on an HPGe gamma detector, with minimal contamination by gamma-interfering radionuclides and decontamination factors (Df) higher than 1400 for 60Co and 47000 for 137Cs. Based on these experiments and the results of the separation procedures, a comprehensive method for the sequential separation of 93Zr, 94Nb, 99Tc and 126Sn was proposed, which included optimization steps similar to those used in previous parts of the dissertation work. Application of the sequential separation method with the TEVA® Resin and Anion Exchange Resin sorbents to real radioactive waste samples provided satisfactory results and an economical, time-saving, efficient method. (author)

  17. Optimal Sensor Selection for Health Monitoring Systems

    Science.gov (United States)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.

  18. Training set optimization under population structure in genomic selection.

    Science.gov (United States)

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures as much phenotypic variation as possible in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
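
    The following sketch illustrates the general idea of TRS optimization in a deliberately simplified form: given a genomic relationship matrix, it greedily selects a training set with high average relationship to the test set. This is a stand-in heuristic, not the CDmean or PEVmean criteria evaluated in the study, and the marker data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((120, 500))    # synthetic marker matrix
A = M @ M.T / M.shape[1]               # genomic relationship matrix (proxy)

test = np.arange(100, 120)             # individuals 100-119 form the test set
candidates = list(range(100))          # remaining individuals are candidates
trs, size = [], 30

# Greedy: repeatedly add the candidate with the highest mean
# relationship to the test set (a simple stand-in for CDmean).
for _ in range(size):
    best = max(candidates, key=lambda i: A[i, test].mean())
    trs.append(best)
    candidates.remove(best)

print("selected TRS:", sorted(trs)[:10], "...")
```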

  19. The pursuit of balance in sequential randomized trials

    Directory of Open Access Journals (Sweden)

    Raymond P. Guiteras

    2016-06-01

    In many randomized trials, subjects enter the sample sequentially. Because the covariates for all units are not known in advance, standard methods of stratification do not apply. We describe and assess the method of DA-optimal sequential allocation (Atkinson, 1982) for balancing stratification covariates across treatment arms. We provide simulation evidence that the method can provide substantial improvements in precision over commonly employed alternatives. We also describe our experience implementing the method in a field trial of a clean water and handwashing intervention in Dhaka, Bangladesh, the first time the method has been used. We provide advice and software for future researchers.
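
    A heavily simplified flavor of sequential covariate balancing is sketched below: each arriving subject is assigned to whichever treatment arm minimizes the resulting imbalance in covariate means. Atkinson's actual DA-optimal rule draws assignments with a biased coin derived from the regression design matrix; that rule is not reproduced here, and the covariates are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

assigned = []          # treatment indicator per subject (0 or 1)
covs = []              # covariate vector observed per subject

def imbalance(assign, covs):
    """Sum of absolute differences in covariate means between arms."""
    a = np.asarray(assign, dtype=bool)
    X = np.asarray(covs)
    if a.all() or (~a).all():
        return np.inf                  # force both arms to be used
    return np.abs(X[a].mean(axis=0) - X[~a].mean(axis=0)).sum()

for _ in range(100):                   # subjects arrive sequentially
    x = rng.standard_normal(3)         # covariates observed on arrival
    covs.append(x)
    # Tentatively try each arm and keep the one with lower imbalance.
    scores = [imbalance(assigned + [arm], covs) for arm in (0, 1)]
    assigned.append(int(np.argmin(scores)))

print("arm sizes:", np.bincount(assigned))
print("final imbalance:", round(imbalance(assigned, covs), 3))
```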

  20. Discrimination between sequential and simultaneous virtual channels with electrical hearing.

    Science.gov (United States)

    Landsberger, David; Galvin, John J

    2011-09-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation modes. For sequential VCs, the interpulse interval (IPI) varied between 0.0 and 1.8 ms. All stimuli were presented at comfortably loud, loudness-balanced levels at a 250 pulse per second per electrode (ppse) stimulation rate. On average, CI subjects were able to reliably discriminate between sequential and simultaneous VCs. While there was no significant effect of IPI or stimulation mode on VC discrimination, some subjects exhibited better VC discrimination with BP + 1 stimulation. Subjects' discrimination between sequential and simultaneous VCs was correlated with electrode discrimination, suggesting that spatial selectivity may influence perception of sequential VCs. To maintain equal loudness, sequential VC amplitudes were nearly double those of simultaneous VCs, presumably resulting in a broader spread of excitation. These results suggest that perceptual differences between simultaneous and sequential VCs might be explained by differences in the spread of excitation. © 2011 Acoustical Society of America

  1. Expected Improvement in Efficient Global Optimization Through Bootstrapped Kriging - Replaced by CentER DP 2011-015

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; van Beers, W.C.M.; van Nieuwenhuyse, I.

    2010-01-01

    This paper uses a sequentialized experimental design to select simulation input combinations for global optimization, based on Kriging (also called Gaussian process or spatial correlation modeling); this Kriging is used to analyze the input/output data of the simulation model (computer code). This

  2. Tank Waste Remediation System optimized processing strategy

    International Nuclear Information System (INIS)

    Slaathaug, E.J.; Boldt, A.L.; Boomer, K.D.; Galbraith, J.D.; Leach, C.E.; Waldo, T.L.

    1996-03-01

    This report provides an alternative strategy evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy performs the major elements of the TWRS Program, but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility

  3. Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.

    Science.gov (United States)

    Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric

    2017-02-01

    Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by an adrenal-to-peripheral cortisol ratio ≥2, and lateralization as an adrenal aldosterone-to-cortisol ratio ≥2 times that of the contralateral side. Sequential AVS was simulated using right sampling at -5 min (t = -5) and left sampling at 0 min (t = 0). There was no significant difference in mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides, respectively) or in mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between the 2 simultaneous AVS was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Kappa for lateralization between the 2 simultaneous AVS was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Concordance between simultaneous and sequential AVS was not different from that between 2 repeated simultaneous AVS in the same patient. Therefore, better diagnostic performance is not a good argument for selecting one AVS method over the other. © 2017 European Society of Endocrinology.
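
    Concordance in this study is quantified with Cohen's kappa. The minimal sketch below computes kappa for two dichotomous readings of the same patients (e.g., lateralization yes/no from the two AVS methods); the example data are invented.

```python
import numpy as np

def cohens_kappa(x, y):
    """Cohen's kappa for two binary ratings of the same patients."""
    x, y = np.asarray(x), np.asarray(y)
    po = np.mean(x == y)                       # observed agreement
    p_yes = x.mean() * y.mean()                # chance agreement on "yes"
    p_no = (1 - x.mean()) * (1 - y.mean())     # chance agreement on "no"
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

# Hypothetical lateralization calls from sequential vs simultaneous AVS.
seq = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
sim = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1]
print("kappa:", round(cohens_kappa(seq, sim), 2))
```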

  4. Optimal infrastructure selection to boost regional sustainable economy

    OpenAIRE

    Martín Utrillas, Manuel Guzmán; Juan-Garcia, F.; Cantó Perelló, Julián; Curiel Esparza, Jorge

    2015-01-01

    The role of infrastructures in boosting the economic growth of regions is widely recognized. In many cases, however, an infrastructure is selected for subjective reasons. Selection of the optimal infrastructure for the sustainable economic development of a region should be based on objective, well-founded reasons, not only economic but also environmental and social. In this paper, such a selection is developed through a hybrid method based on Delphi, the analytic hierarchy process (AHP), and VIKOR (from Se...

  5. Optimal Contracting under Adverse Selection

    DEFF Research Database (Denmark)

    Lenells, Jonatan; Stea, Diego; Foss, Nicolai Juul

    2015-01-01

    We study a model of adverse selection, hard and soft information, and mentalizing ability--the human capacity to represent others' intentions, knowledge, and beliefs. By allowing for a continuous range of different information types, as well as for different means of acquiring information, we dev...... of that information. This strategy affects the properties of the optimal contract, which grows closer to the first best. This research provides insights into the implications of mentalizing for agency theory....

  6. Properties of simultaneous and sequential two-nucleon transfer

    International Nuclear Information System (INIS)

    Pinkston, W.T.; Satchler, G.R.

    1982-01-01

    Approximate forms of the first- and second-order distorted-wave Born amplitudes are used to study the overall structure, particularly the selection rules, of the amplitudes for simultaneous and sequential transfer of two nucleons. The role of the spin-state assumed for the intermediate deuterons in sequential (t, p) reactions is stressed. The similarity of one-step and two-step amplitudes for (α, d) reactions is exhibited, and the consequent absence of any obvious J-dependence in their interference is noted. (orig.)

  7. Sequentially optimized reconstruction strategy: A meta-strategy for perimetry testing.

    Directory of Open Access Journals (Sweden)

    Şerife Seda Kucur

    Perimetry testing is an automated method to measure visual function and is heavily used for diagnosing ophthalmic and neurological conditions. Its working principle is to sequentially query a subject about perceived light using different brightness levels at different visual field locations. At a given location, this query-patient-feedback process is expected to converge at a perceived sensitivity, such that a shown stimulus intensity is observed and reported 50% of the time. Given this inherently time-intensive and noisy process, fast testing strategies are necessary in order to measure existing regions more effectively and reliably. In this work, we present a novel meta-strategy which relies on the correlative nature of visual field locations in order to strongly reduce the necessary number of locations that need to be examined. To do this, we sequentially determine locations that most effectively reduce visual field estimation errors in an initial training phase. We then exploit these locations at examination time and show that our approach can easily be combined with existing perceived sensitivity estimation schemes to speed up the examinations. Compared to state-of-the-art strategies, our approach shows marked performance gains with a better accuracy-speed trade-off regime for both mixed and sub-populations.

  8. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on a dynamic programming approach, we design algorithms for the sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  9. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    Based on a dynamic programming approach, we design algorithms for the sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  10. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    Science.gov (United States)

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography-based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of genes. Second, to make biogeography-based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance exploration and exploitation. Discrete biogeography-based optimization, called DBBO, is then obtained by integrating the discrete migration and mutation models. Finally, the DBBO method is used for feature selection, with three classifiers evaluated under 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, it is tested on four breast cancer dataset benchmarks. In comparison with the genetic algorithm, particle swarm optimization, the differential evolution algorithm and hybrid biogeography-based optimization, experimental results demonstrate that the proposed method is better than or at least comparable to previous methods from the literature when considering the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Compensatory Analysis and Optimization for MADM for Heterogeneous Wireless Network Selection

    Directory of Open Access Journals (Sweden)

    Jian Zhou

    2016-01-01

    In next-generation heterogeneous wireless networks, a mobile terminal with multiple interfaces may have network access from different service providers using various technologies. In spite of this heterogeneity, seamless intersystem mobility is a mandatory requirement. One of the major challenges for seamless mobility is the creation of a network selection scheme, which lets users select the optimal network with the best overall performance among different types of networks. However, the optimal network may not be the most reasonable one due to the compensation inherent in MADM (Multiple Attribute Decision Making); such a network is called a pseudo-optimal network. This paper conducts a performance evaluation of a number of widely used MADM-based methods for network selection that aim to keep mobile users always best connected anywhere and anytime, where both subjective and objective weights are considered. The performance analysis shows that the selection scheme based on MEW (weighted multiplicative method) and combination weights can better avoid accessing a pseudo-optimal network, balancing network load and reducing the ping-pong effect in comparison with three other MADM solutions.

  12. Optimal set of selected uranium enrichments that minimizes blending consequences

    International Nuclear Information System (INIS)

    Nachlas, J.A.; Kurstedt, H.A. Jr.; Lobber, J.S. Jr.

    1977-01-01

    Identities, quantities, and costs associated with producing a set of selected enrichments and blending them to provide fuel for existing reactors are investigated using an optimization model constructed with appropriate constraints. Selected enrichments are required for either nuclear reactor fuel standardization or potential uranium enrichment alternatives such as the gas centrifuge. Using a mixed-integer linear program, the model minimizes present worth costs for a 39-product-enrichment reference case. For four ingredients, the marginal blending cost is only 0.18% of the total direct production cost. Natural uranium is not an optimal blending ingredient. Optimal values reappear in most sets of ingredient enrichments
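
    A toy version of the blending subproblem can be posed as a linear program: given a fixed set of ingredient enrichments, choose the blend masses that meet a product's enrichment target at minimum cost. The enrichments and prices below are invented, and the study's full mixed-integer model (which also selects which enrichments to produce) is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

enrich = np.array([0.711, 2.6, 3.2, 4.0])       # wt% U-235 (hypothetical)
cost = np.array([50.0, 900.0, 1150.0, 1500.0])  # $/kg (hypothetical)
target, mass = 3.0, 100.0                        # product: 100 kg at 3.0 wt%

# Decision variables: kg of each ingredient in the blend.
# Equality constraints: total mass and total U-235 mass balances.
A_eq = np.vstack([np.ones_like(enrich), enrich])
b_eq = np.array([mass, target * mass])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(cost))
print("kg of each ingredient:", res.x.round(2))
print("blend cost: $", round(res.fun, 2))
```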

  13. Sequentially solution-processed, nanostructured polymer photovoltaics using selective solvents

    KAUST Repository

    Kim, Do Hwan; Mei, Jianguo; Ayzner, Alexander L.; Schmidt, Kristin; Giri, Gaurav; Appleton, Anthony L.; Toney, Michael F.; Bao, Zhenan

    2014-01-01

    We demonstrate high-performance sequentially solution-processed organic photovoltaics (OPVs) with a power conversion efficiency (PCE) of 5% for blend films using a donor polymer based on the isoindigo-bithiophene repeat unit (PII2T-C10C8) and a fullerene derivative [6,6]-phenyl-C[71]-butyric acid methyl ester (PC71BM). This has been accomplished by systematically controlling the swelling and intermixing processes of the layer with various processing solvents during deposition of the fullerene. We find that among the solvents used for fullerene deposition that primarily swell but do not re-dissolve the polymer underlayer, there were significant microstructural differences between chlorobenzene and o-dichlorobenzene solvents (CB and ODCB, respectively). Specifically, we show that the polymer crystallite orientation distribution in films where ODCB was used to cast the fullerene is broad. This indicates that out-of-plane charge transport through a tortuous transport network is relatively efficient due to a large density of inter-grain connections. In contrast, using CB results in primarily edge-on oriented polymer crystallites, which leads to diminished out-of-plane charge transport. We correlate these microstructural differences with photocurrent measurements, which clearly show that casting the fullerene out of ODCB leads to significantly enhanced power conversion efficiencies. Thus, we believe that tuning the processing solvents used to cast the electron acceptor in sequentially-processed devices is a viable way to controllably tune the blend film microstructure. © 2014 The Royal Society of Chemistry.

  14. Optimized Power Allocation and Relay Location Selection in Cooperative Relay Networks

    Directory of Open Access Journals (Sweden)

    Jianrong Bao

    2017-01-01

    An incremental selection hybrid decode-amplify-forward (ISHDAF) scheme for two-hop single-relay systems and a relay selection strategy based on the hybrid decode-amplify-and-forward (HDAF) scheme for multirelay systems are proposed, along with an optimized power allocation, for the Internet of Things (IoT). With total power as the constraint and outage probability as the objective function, the proposed scheme achieves power efficiency better than that of equal power allocation. By the ISHDAF scheme and the HDAF relay selection strategy, an optimized power allocation for both the source and relay nodes is obtained, as well as an effective reduction of the outage probability. In addition, the optimal relay location for maximizing the gain of the proposed algorithm is investigated and designed. Simulation results show that, in both single-relay and multirelay selection systems, the proposed scheme obtains outage probability gains. Comparing the optimized power allocation scheme with equal power allocation, gains of nearly 0.1695 are obtained in the ISHDAF single-relay network at a total power of 2 dB, and of about 0.083 in the HDAF relay selection system with 2 relays at a total power of 2 dB.

  15. Optimality Theory and Lexical Interpretation and Selection

    NARCIS (Netherlands)

    Hogeweg, L.; Legendre, G.; Putnam, M.T.; de Swart, H.; Zaroukian, E.

    2016-01-01

    This chapter argues for an optimization approach to the selection and interpretation of words. Several advantages of such an approach to lexical semantics are discussed. First of all, it will be argued that competition, entailing that words and interpretations are always judged in relation to other

  16. Optimizing Event Selection with the Random Grid Search

    Energy Technology Data Exchange (ETDEWEB)

    Bhat, Pushpalatha C. [Fermilab; Prosper, Harrison B. [Florida State U.; Sekmen, Sezen [Kyungpook Natl. U.; Stewart, Chip [Broad Inst., Cambridge

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  17. Optimal Allocation of Power-Electronic Interfaced Wind Turbines Using a Genetic Algorithm - Monte Carlo Hybrid Optimization Method

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Siano, Pierluigi; Chen, Zhe

    2010-01-01

    determined by the wind resource and geographic conditions, the location of wind turbines in a power system network may significantly affect the distribution of power flow, power losses, etc. Furthermore, modern WTs with power-electronic interface have the capability of controlling reactive power output...... limit requirements. The method combines the Genetic Algorithm (GA), gradient-based constrained nonlinear optimization algorithm and sequential Monte Carlo simulation (MCS). The GA searches for the optimal locations and capacities of WTs. The gradient-based optimization finds the optimal power factor...... setting of WTs. The sequential MCS takes into account the stochastic behaviour of wind power generation and load. The proposed hybrid optimization method is demonstrated on an 11 kV 69-bus distribution system....

  18. A sequential extraction procedure to determine Ra and U isotopes by alpha-particle spectrometry in selective leachates

    International Nuclear Information System (INIS)

    Aguado, J.L.; Bolivar, J.P.; San-Miguel, E.G.; Garcia-Tenorio, R.

    2003-01-01

    A radiochemical sequential extraction procedure has been developed in our laboratory to determine 226Ra and 234,238U by alpha spectrometry in environmental samples. This method has been validated for both radionuclides by comparing, in selected samples, the values obtained through its application with the results obtained by applying alternative procedures. The recoveries obtained, counting periods applied, and background levels found in the alpha spectra give detection limits suitable for determining Ra and U in the operational forms defined in contaminated riverbed sediments. Results obtained in these speciation studies show that 226Ra and 234,238U contamination tends to be associated with precipitated forms of the sediments. (author)

  19. Zips : mining compressing sequential patterns in streams

    NARCIS (Netherlands)

    Hoang, T.L.; Calders, T.G.K.; Yang, J.; Mörchen, F.; Fradkin, D.; Chau, D.H.; Vreeken, J.; Leeuwen, van M.; Faloutsos, C.

    2013-01-01

    We propose a streaming algorithm, based on the minimal description length (MDL) principle, for extracting non-redundant sequential patterns. For static databases, the MDL-based approach that selects patterns based on their capacity to compress data rather than their frequency, was shown to be

  20. The New Multipoint Relays Selection in OLSR using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Razali Ngah

    2012-06-01

    The standard Optimized Link State Routing (OLSR) protocol introduces an interesting concept, the multipoint relays (MPRs), to mitigate message overhead during the flooding process. We propose a new algorithm for MPR selection to enhance the performance of OLSR using particle swarm optimization with sigmoid increasing inertia weight (PSO-SIIW). The sigmoid increasing inertia weight significantly improves particle swarm optimization (PSO) in terms of simplicity and quick convergence towards the optimum solution. A new PSO-SIIW fitness function, incorporating the packet delay of each node and the degree of willingness, is introduced to support MPR selection in OLSR. We examine the throughput, packet loss, and end-to-end delay of the proposed method using network simulator 2 (ns2). Overall results indicate that OLSR-PSOSIIW shows good performance compared with the standard OLSR and OLSR-PSO, particularly for throughput and end-to-end delay. Generally, the proposed OLSR-PSOSIIW shows the advantage of using PSO for optimizing routing paths in the MPR selection algorithm.

  1. Adaptive feature selection using v-shaped binary particle swarm optimization.

    Science.gov (United States)

    Teng, Xuyang; Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers.
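
    The core V-shaped binary PSO mechanics can be sketched compactly: velocities are updated as in continuous PSO, and a V-shaped transfer function converts each velocity into a probability of flipping the corresponding bit. The fitness below is a toy relevance-minus-size merit on synthetic data, not the correlation information entropy used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def v_transfer(v):
    """V-shaped transfer function: maps velocity to a flip probability."""
    return np.abs(v / np.sqrt(1.0 + v * v))

def fitness(mask, relevance):
    """Toy merit: reward relevant features, penalize subset size.
    (The paper's correlation information entropy is not reproduced.)"""
    if mask.sum() == 0:
        return -np.inf
    return relevance[mask.astype(bool)].sum() - 0.05 * mask.sum()

D, P, T = 30, 20, 100                     # features, particles, iterations
relevance = rng.random(D)                 # synthetic per-feature relevance
X = rng.integers(0, 2, (P, D))            # particle positions (bit masks)
V = rng.standard_normal((P, D)) * 0.1
pbest = X.copy()
pfit = np.array([fitness(x, relevance) for x in X])
gbest = pbest[np.argmax(pfit)].copy()

for t in range(T):
    r1, r2 = rng.random((P, D)), rng.random((P, D))
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    # V-shaped rule: flip each bit with probability v_transfer(V).
    flip = rng.random((P, D)) < v_transfer(V)
    X = np.where(flip, 1 - X, X)
    for i in range(P):
        f = fitness(X[i], relevance)
        if f > pfit[i]:
            pfit[i], pbest[i] = f, X[i].copy()
    gbest = pbest[np.argmax(pfit)].copy()

print("selected features:", np.flatnonzero(gbest))
```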

  2. Sequential error concealment for video/images by weighted template matching

    DEFF Research Database (Denmark)

    Koloda, Jan; Østergaard, Jan; Jensen, Søren Holdt

    2012-01-01

    In this paper we propose a novel spatial error concealment algorithm for video and images based on convex optimization. Block-based coding schemes in a packet loss environment are considered. Missing macroblocks are sequentially reconstructed by filling them with a weighted set of templates...

  3. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with a highly irregular distribution of points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample irregular data sets in a near-optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.

  4. Immediately sequential bilateral cataract surgery: advantages and disadvantages.

    Science.gov (United States)

    Singh, Ranjodh; Dohlman, Thomas H; Sun, Grace

    2017-01-01

    The number of cataract surgeries performed globally will continue to rise to meet the needs of an aging population. This increased demand will require healthcare systems and providers to find new surgical efficiencies while maintaining excellent surgical outcomes. Immediately sequential bilateral cataract surgery (ISBCS) has been proposed as a solution and is increasingly being performed worldwide. The purpose of this review is to discuss the advantages and disadvantages of ISBCS. When appropriate patient selection occurs and guidelines are followed, ISBCS is comparable with delayed sequential bilateral cataract surgery in long-term patient satisfaction, visual acuity and complication rates. In addition, the risk of bilateral postoperative endophthalmitis and concerns of poorer refractive outcomes have not been supported by the literature. ISBCS is cost-effective for the patient, healthcare payors and society, but current reimbursement models in many countries create significant financial barriers for facilities and surgeons. As demand for cataract surgery rises worldwide, ISBCS will become increasingly important as an alternative to delayed sequential bilateral cataract surgery. Advantages include potentially decreased wait times for surgery, patient convenience and cost savings for healthcare payors. Although they are comparable in visual acuity and complication rates, hurdles that prevent wide adoption include liability concerns as ISBCS is not an established standard of care, economic constraints for facilities and surgeons and inability to fine-tune intraocular lens selection in the second eye. Given these considerations, an open discussion regarding the advantages and disadvantages of ISBCS is important for appropriate patient selection.

  5. Mathematical programming model for heat exchanger design through optimization of partial objectives

    International Nuclear Information System (INIS)

    Onishi, Viviani C.; Ravagnani, Mauro A.S.S.; Caballero, José A.

    2013-01-01

    Highlights:
    • Rigorous design of shell-and-tube heat exchangers according to TEMA standards.
    • Division of the problem into sets of equations that are easier to solve.
    • Heuristic objective functions selected based on the physical behavior of the problem.
    • Sequential optimization approach to avoid solutions getting stuck in local minima.
    • The results obtained with this model improve the values reported in the literature.
    Abstract: Mathematical programming can be used for the optimal design of shell-and-tube heat exchangers (STHEs). This paper proposes a mixed-integer non-linear programming (MINLP) model for the design of STHEs, rigorously following the standards of the Tubular Exchanger Manufacturers Association (TEMA). The Bell-Delaware method is used for the shell-side calculations. This approach produces a large, non-convex model that cannot be solved to global optimality with current state-of-the-art solvers. Nevertheless, a sequential optimization of partial objective targets is proposed, dividing the problem into sets of related equations that are easier to solve. For each of these sub-problems, a heuristic objective function is selected based on the physical behavior of the problem. The global optimal solution of the original problem cannot be ensured even when each of the sub-problems is solved to global optimality, but at least a very good solution is always guaranteed. Three cases extracted from the literature were studied. The results showed that in all cases the values obtained using the proposed MINLP model with multiple objective functions improved the values presented in the literature.

  6. A fast and accurate online sequential learning algorithm for feedforward networks.

    Science.gov (United States)

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm builds on the ideas of the ELM of Huang et al., developed for batch learning, which has been shown to be extremely fast, with generalization performance better than that of other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
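
    A minimal sketch of the OS-ELM recursion for regression follows: random sigmoid hidden nodes are fixed, output weights are initialized on a first chunk, and subsequent chunks update them with a recursive least-squares step. The network size, chunk sizes, and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def hidden(X, W, b):
    """Sigmoid additive hidden layer: H = g(XW + b)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Synthetic 1-D regression task.
X = rng.uniform(-3, 3, (600, 1))
y = np.sin(X) + 0.05 * rng.standard_normal(X.shape)

L = 25                                   # number of hidden nodes
W = rng.standard_normal((1, L))          # random input weights (fixed)
b = rng.standard_normal(L)               # random biases (fixed)

# Initialization phase on the first chunk.
H0 = hidden(X[:100], W, b)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(L))
beta = P @ H0.T @ y[:100]

# Sequential phase: update output weights chunk by chunk (RLS form).
for k in range(100, 600, 50):
    Hk, Tk = hidden(X[k:k + 50], W, b), y[k:k + 50]
    P -= P @ Hk.T @ np.linalg.inv(np.eye(len(Hk)) + Hk @ P @ Hk.T) @ Hk @ P
    beta += P @ Hk.T @ (Tk - Hk @ beta)

H_all = hidden(X, W, b)
print("train RMSE:", float(np.sqrt(np.mean((H_all @ beta - y) ** 2))))
```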

  7. Optimization of the analysis by means of liquid chromatography of metabolites of the Uncaria Tomentosa plant (cat's claw) using the sequential simplex method

    International Nuclear Information System (INIS)

    Romero Blanco, Eric

    2005-01-01

    A new method was developed for the analysis by liquid chromatography of the metabolites present in extracts of root bark of Uncaria tomentosa (cat's claw), applying the sequential simplex technique to determine the values of the chromatographic variables, i.e. flow, temperature, and mobile-phase composition, that optimize the elution time and the resolution of the chromatographic separation. The chromatographic analysis was performed in isocratic mode using a C12 (-urea) column 15 cm in length and 4.6 mm in diameter and a UV detector. The values of the chromatographic variables that optimized the separation turned out to be: a flow of 1.80 mL/min, a temperature of 27.5 °C, and a mobile-phase composition of 22:78 (methanol:buffer). (Author) [es
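
    The sequential simplex procedure iteratively reflects the worst vertex of a simplex of experimental settings through the centroid of the better ones; a modern stand-in is the Nelder-Mead simplex, sketched below on a mock response surface over the three variables (flow, temperature, % methanol). The surface is invented to peak near the reported optimum and is not the experimental chromatographic response.

```python
import numpy as np
from scipy.optimize import minimize

def neg_response(v):
    """Mock chromatographic desirability for (flow mL/min, temp deg C,
    % methanol); a smooth invented surface standing in for real
    resolution/elution-time measurements."""
    flow, temp, meoh = v
    return -np.exp(-((flow - 1.8) ** 2 + ((temp - 27.5) / 10) ** 2
                     + ((meoh - 22.0) / 15) ** 2))

x0 = np.array([1.0, 35.0, 40.0])          # initial operating point
res = minimize(neg_response, x0, method="Nelder-Mead")
print("optimal (flow, temp, %MeOH):", res.x.round(2))
```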

  8. Crashworthiness design optimization using multipoint sequential linear programming

    NARCIS (Netherlands)

    Etman, L.F.P.; Adriaens, J.M.T.A.; Slagmaat, van M.T.P.; Schoofs, A.J.G.

    1996-01-01

    A design optimization tool has been developed for the crash victim simulation software MADYMO. The crashworthiness optimization problem is characterized by noisy behaviour of the objective function and constraints. Additionally, objective function and constraint values follow from a computationally

  9. The Use of Evolution in a Central Action Selection Model

    Directory of Open Access Journals (Sweden)

    F. Montes-Gonzalez

    2007-01-01

    The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods into the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the sensory information from the robot to perform a selection that is nearly optimized by the use of genetic algorithms. In this paper we show how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts of a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder places a burden on the calculations carried out by the genetic algorithm.

  10. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    Directory of Open Access Journals (Sweden)

    Rupert Faltermeier

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method at an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  11. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    Science.gov (United States)

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method at an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  12. Optimized Policies for Improving Fairness of Location-based Relay Selection

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Olsen, Rasmus Løvenstein; Madsen, Tatiana Kozlova

    2013-01-01

    For WLAN systems in which relaying is used to improve throughput performance for nodes located at the cell edge, node mobility and information collection delays can have a significant impact on the performance of a relay selection scheme. In this paper we extend our existing Markov Chain modeling...... framework for relay selection to allow for efficient calculation of relay policies given either mean throughput or kth throughput percentile as the optimization criterion. In a scenario with a static access point, a static relay, and a mobile destination node, the kth throughput percentile optimization...

  13. Selection on Optimal Haploid Value Increases Genetic Gain and Preserves More Genetic Diversity Relative to Genomic Selection.

    Science.gov (United States)

    Daetwyler, Hans D; Hayden, Matthew J; Spangenberg, German C; Hayes, Ben J

    2015-08-01

    Doubled haploids are routinely created and phenotypically selected in plant breeding programs to accelerate the breeding cycle. Genomic selection, which makes use of both phenotypes and genotypes, has been shown to further improve genetic gain through prediction of performance before or without phenotypic characterization of novel germplasm. Additional opportunities exist to combine genomic prediction methods with the creation of doubled haploids. Here we propose an extension to genomic selection, optimal haploid value (OHV) selection, which predicts the best doubled haploid that can be produced from a segregating plant. This method focuses selection on the haplotype and optimizes the breeding program toward its end goal of generating an elite fixed line. We rigorously tested OHV selection breeding programs, using computer simulation, and show that it results in up to 0.6 standard deviations more genetic gain than genomic selection. At the same time, OHV selection preserved a substantially greater amount of genetic diversity in the population than genomic selection, which is important to achieve long-term genetic gain in breeding populations. Copyright © 2015 by the Genetics Society of America.
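
    The haplotype-level selection criterion is easy to state in code. Below is a minimal sketch, assuming phased marker data and known additive marker effects; the number of segments is a tuning choice in the method, and all values here are illustrative.

    import numpy as np

    def ohv(haplo1, haplo2, effects, n_segments):
        """Optimal haploid value of one plant: for each segment keep the
        better of its two haplotypes; a doubled haploid is homozygous,
        hence the factor of 2."""
        segs1 = np.array_split(haplo1 * effects, n_segments)
        segs2 = np.array_split(haplo2 * effects, n_segments)
        return 2 * sum(max(a.sum(), b.sum()) for a, b in zip(segs1, segs2))

    rng = np.random.default_rng(0)
    h1, h2 = rng.integers(0, 2, 200), rng.integers(0, 2, 200)
    beta = rng.normal(0.0, 0.1, 200)                 # marker effect estimates
    print(ohv(h1, h2, beta, n_segments=10))          # rank plants by this value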

  14. Robust real-time pattern matching using bayesian sequential hypothesis testing.

    Science.gov (United States)

    Pele, Ofir; Werman, Michael

    2008-08-01

    This paper describes a method for robust real time pattern matching. We first introduce a family of image distance measures, the "Image Hamming Distance Family". Members of this family are robust to occlusion, small geometrical transforms, light changes and non-rigid deformations. We then present a novel Bayesian framework for sequential hypothesis testing on finite populations. Based on this framework, we design an optimal rejection/acceptance sampling algorithm. This algorithm quickly determines whether two images are similar with respect to a member of the Image Hamming Distance Family. We also present a fast framework that designs a near-optimal sampling algorithm. Extensive experimental results show that the sequential sampling algorithm performance is excellent. Implemented on a Pentium 4 3 GHz processor, detection of a pattern with 2197 pixels, in 640 x 480 pixel frames, where in each frame the pattern rotated and was highly occluded, proceeds at only 0.022 seconds per frame.
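
    The essence of the sampling algorithm is that a clear non-match reveals itself after few pixel samples. The sketch below uses a simple fixed-margin early-rejection rule in place of the paper's Bayesian-optimal thresholds; all constants are illustrative.

    import numpy as np

    def sequential_match(window, pattern, t_frac=0.2, max_samples=300, margin=3.0):
        """Test whether the fractional Hamming distance is below t_frac,
        sampling pixels in random order and rejecting early when the
        running mismatch count is already implausibly high."""
        idx = np.random.permutation(pattern.size)[:max_samples]
        mismatches = 0
        for k, j in enumerate(idx, start=1):
            mismatches += window.flat[j] != pattern.flat[j]
            if mismatches > t_frac * k + margin * np.sqrt(k):
                return False                     # reject early: hopeless
        return mismatches / len(idx) < t_frac    # survived all samples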

  15. Quantum dot laser optimization: selectively doped layers

    Science.gov (United States)

    Korenev, Vladimir V.; Konoplev, Sergey S.; Savelyev, Artem V.; Shernyakov, Yurii M.; Maximov, Mikhail V.; Zhukov, Alexey E.

    2016-08-01

    Edge emitting quantum dot (QD) lasers are discussed. It has been recently proposed to use modulation p-doping of the layers adjacent to the QD layers in order to control the QDs' charge state. Experimentally, this has proven useful for enhancing ground-state lasing and suppressing the onset of excited-state lasing at high injection. These results have also been confirmed with numerical calculations involving the solution of drift-diffusion equations. However, a deep understanding of the physical reasons for such behavior, and laser optimization, require analytical approaches to the problem. In this paper, under a set of assumptions, we provide an analytical model that explains the major effects of selective p-doping. Capture rates of electrons and holes can be calculated by solving Poisson equations for electrons and holes around the charged QD layer. The charge itself is governed by the capture rates and the selective doping concentration. We analyzed this self-consistent set of equations and showed that it can be used to optimize QD laser performance and to explain the underlying physics.

  16. Quantum dot laser optimization: selectively doped layers

    International Nuclear Information System (INIS)

    Korenev, Vladimir V; Konoplev, Sergey S; Savelyev, Artem V; Shernyakov, Yurii M; Maximov, Mikhail V; Zhukov, Alexey E

    2016-01-01

    Edge emitting quantum dot (QD) lasers are discussed. It has been recently proposed to use modulation p-doping of the layers adjacent to the QD layers in order to control the QDs' charge state. Experimentally, this has proven useful for enhancing ground-state lasing and suppressing the onset of excited-state lasing at high injection. These results have also been confirmed with numerical calculations involving the solution of drift-diffusion equations. However, a deep understanding of the physical reasons for such behavior, and laser optimization, require analytical approaches to the problem. In this paper, under a set of assumptions, we provide an analytical model that explains the major effects of selective p-doping. Capture rates of electrons and holes can be calculated by solving Poisson equations for electrons and holes around the charged QD layer. The charge itself is governed by the capture rates and the selective doping concentration. We analyzed this self-consistent set of equations and showed that it can be used to optimize QD laser performance and to explain the underlying physics. (paper)

  17. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults.

    Science.gov (United States)

    Tait, Jamie L; Duckham, Rachel L; Milte, Catherine M; Main, Luana C; Daly, Robin M

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people.

  18. A parallel optimization method for product configuration and supplier selection based on interval

    Science.gov (United States)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.

  19. Genetic Spot Optimization for Peak Power Estimation in Large VLSI Circuits

    Directory of Open Access Journals (Sweden)

    Michael S. Hsiao

    2002-01-01

    Full Text Available Estimating peak power involves optimization of the circuit's switching function. The switching of a given gate depends not only on the output capacitance of the node, but also heavily on the gate delays in the circuit, since multiple switching events can result from uneven delay paths in the circuit. Genetic spot expansion and optimization are proposed in this paper to estimate tight peak power bounds for large sequential circuits. The optimization spot shifts and expands dynamically based on the maximum power potential (MPP) of the nodes under optimization. Four genetic spot optimization heuristics are studied for sequential circuits. Experimental results showed that, on average, 70.7% tighter peak power bounds were achieved for large sequential benchmark circuits in short execution times.

  20. An anomaly detection and isolation scheme with instance-based learning and sequential analysis

    International Nuclear Information System (INIS)

    Yoo, T. S.; Garcia, H. E.

    2006-01-01

    This paper presents an online anomaly detection and isolation (FDI) technique using an instance-based learning method combined with a sequential change detection and isolation algorithm. The proposed method uses kernel density estimation techniques to build statistical models of the given empirical data (null hypothesis). The null hypothesis is associated with a set of alternative hypotheses modeling the abnormalities of the systems. The decision procedure involves a sequential change detection and isolation algorithm. Notably, the proposed method enjoys asymptotic optimality, as the applied change detection and isolation algorithm is optimal in minimizing the worst mean detection/isolation delay for a given mean time before a false alarm or a false isolation. The applicability of this methodology is illustrated with a redundant sensor data set and its performance. (authors)
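
    The two ingredients named in the abstract combine naturally: a kernel density estimate of normal behaviour and a cumulative log-likelihood-ratio statistic against a fault model. The sketch below pairs one null and one alternative model with a CUSUM-style test; the paper's multi-hypothesis isolation logic and optimality analysis are not reproduced, and all data here are synthetic.

    import numpy as np
    from scipy.stats import gaussian_kde

    normal = np.random.normal(0.0, 1.0, 2000)    # empirical "healthy" data
    faulty = np.random.normal(2.0, 1.0, 2000)    # data for one fault mode
    f0, f1 = gaussian_kde(normal), gaussian_kde(faulty)

    def cusum_alarm(stream, threshold=10.0):
        """Alarm index once the cumulative evidence for the fault model
        crosses `threshold`; None if no change is detected."""
        s = 0.0
        for k, x in enumerate(stream):
            s = max(0.0, s + np.log(f1(x)[0] + 1e-12) - np.log(f0(x)[0] + 1e-12))
            if s > threshold:
                return k
        return None

    stream = np.concatenate([np.random.normal(0, 1, 100),
                             np.random.normal(2, 1, 50)])
    print("alarm at sample", cusum_alarm(stream))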

  1. Optimizing the allocation of resources for genomic selection in one breeding cycle.

    Science.gov (United States)

    Riedelsheimer, Christian; Melchinger, Albrecht E

    2013-11-01

    We developed a universally applicable planning tool for optimizing the allocation of resources for one cycle of genomic selection in a biparental population. The framework combines selection theory with constrained numerical optimization and considers genotype × environment interactions. Genomic selection (GS) is increasingly implemented in plant breeding programs to increase selection gain, but little is known about how to optimally allocate resources under a given budget. We investigated this problem with model calculations by combining quantitative genetic selection theory with constrained numerical optimization. We assumed one selection cycle where both the training and prediction sets comprised doubled haploid (DH) lines from the same biparental population. Grain yield for testcrosses of maize DH lines was used as a model trait, but all parameters can be adjusted in a freely available software implementation. An extension of the expected selection accuracy given by Daetwyler et al. (2008) was developed to correctly balance the number of environments for phenotyping the training set against its population size in the presence of genotype × environment interactions. Under a small budget, genotyping costs mainly determine whether GS is superior to phenotypic selection. With increasing budget, flexibility in resource allocation increases greatly, but selection gain levels off quickly, requiring the number of populations to be balanced against the budget spent on each population. The use of an index combining phenotypic and GS-predicted values in the training set was especially beneficial under limited resources and large genotype × environment interactions. Once a sufficiently high selection accuracy is achieved in the prediction set, further selection gain can be achieved most efficiently by massively expanding its size. Thus, with increasing budget, reducing the costs for producing a DH line becomes increasingly crucial for successfully exploiting the
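
    A stripped-down version of the allocation search can be written directly from the cited accuracy formula of Daetwyler et al. (2008), r = sqrt(N*h2 / (N*h2 + Me)). The sketch below grid-searches training-set size against the number of phenotyping environments under a budget; the costs, heritability and number of effective segments Me are invented, and the paper's genotype-by-environment extension and index selection are omitted.

    import numpy as np

    budget, c_geno, c_plot = 50_000.0, 30.0, 15.0    # assumed unit costs
    h2, Me = 0.4, 1000.0                             # assumed h2 and segments

    best = None
    for n_train in range(50, 2000, 10):              # candidate training sizes
        for n_env in range(1, 11):                   # phenotyping environments
            if n_train * (c_geno + n_env * c_plot) > budget:
                break
            # crude heritability of an n_env-environment line mean
            h2_line = n_env * h2 / (n_env * h2 + (1 - h2))
            r = np.sqrt(n_train * h2_line / (n_train * h2_line + Me))
            if best is None or r > best[0]:
                best = (r, n_train, n_env)

    print("accuracy %.3f with N=%d lines in %d environments" % best)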

  2. AHP-Based Optimal Selection of Garment Sizes for Online Shopping

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Garment online shopping has been accepted by more and more consumers in recent years. In online shopping, a buyer chooses a garment size judged only by his or her own experience, without trying the garment on, so the selected size may not be the best fit for the buyer, given the variety of body figures. Thus, we propose a method for the optimal selection of garment sizes for online shopping based on the Analytic Hierarchy Process (AHP). The hierarchical structure model for optimal selection of garment sizes is constructed, and the best-fitting garment for a buyer is found by calculating the matching degrees between the individual's measurements and the corresponding key-part values of ready-to-wear clothing sizes. In order to demonstrate its feasibility, we provide an example of selecting the best-fitting sizes of men's bottoms. The result shows that the proposed method is useful in online clothing sales applications.
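
    The AHP step reduces to extracting criterion weights from a pairwise comparison matrix and combining them into a matching degree. A minimal sketch follows; the criteria, comparison values and the closeness measure are all illustrative, not taken from the paper.

    import numpy as np

    # Pairwise importance of, say, waist vs. hip vs. inseam (assumed values).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    vals, vecs = np.linalg.eig(A)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    w = w / w.sum()                                  # criterion weights

    body = np.array([78.0, 98.0, 80.0])              # buyer's measurements
    size = np.array([80.0, 100.0, 79.0])             # one candidate size
    match = float(np.dot(w, 1.0 - np.abs(body - size) / body))
    print(w, match)                          # choose the size maximizing match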

  3. Sequential kidney scintiscanning before and after vascular reconstruction

    International Nuclear Information System (INIS)

    Siems, H.H.; Allenberg, J.R.; Hupp, T.; Clorius, J.H.

    1985-01-01

    In this follow-up study, sequential scintigraphy was performed on 20 selected patients up to 3.4 years after operation; the results are compared with the pre-operative examinations and with the surgical effect on the increased blood pressure. (orig./MG) [de

  4. Selecting Optimal Subset of Security Controls

    OpenAIRE

    Yevseyeva, I.; Basto-Fernandes, V.; Emmerich, Michael T. M.; van Moorsel, A.

    2015-01-01

    Open Access journal Choosing an optimal investment in information security is an issue most companies face these days. Which security controls to buy to protect the IT system of a company in the best way? Selecting a subset of security controls among many available ones can be seen as a resource allocation problem that should take into account conflicting objectives and constraints of the problem. In particular, the security of the system should be improved without hindering productivity, ...

  5. Sequential blind identification of underdetermined mixtures using a novel deflation scheme.

    Science.gov (United States)

    Zhang, Mingjian; Yu, Simin; Wei, Gang

    2013-09-01

    In this brief, we consider the problem of blind identification in underdetermined instantaneous mixture cases, where there are more sources than sensors. A new blind identification algorithm, which estimates the mixing matrix in a sequential fashion, is proposed. By using the rank-1 detecting device, blind identification is reformulated as a constrained optimization problem. The identification of one column of the mixing matrix hence reduces to an optimization task for which an efficient iterative algorithm is proposed. The identification of the other columns of the mixing matrix is then carried out by a generalized eigenvalue decomposition-based deflation method. The key merit of the proposed deflation method is that it does not suffer from error accumulation. The proposed sequential blind identification algorithm provides more flexibility and better robustness than its simultaneous counterpart. Comparative simulation results demonstrate the superior performance of the proposed algorithm over the simultaneous blind identification algorithm.

  6. Optimizing Technology-Oriented Constructional Parameters of Complex Dynamic Systems

    International Nuclear Information System (INIS)

    Novak, S.M.

    1998-01-01

    Creating optimal vibro systems requires the sequential solution of a few problems: selecting the basic pattern of dynamic actions, synthesizing the dynamic active systems, and optimizing the technological, technical, economic, and design parameters. This approach is illustrated by the example of a high-efficiency vibro system synthesized for forming building structure components. When only a single source is used to excite oscillations, resonance oscillations are imparted to the product being formed in the horizontal and vertical planes. In order to obtain versatile and dynamically optimized parameters, a factor is introduced into the differential equations of motion, accounting for the relationship between the parameters that determine the frequency characteristics of the system and the parameter variation range. This yields simple mathematical models of the system under investigation, convenient for optimization as well as for engineering design and calculations

  7. Estimation After a Group Sequential Trial.

    Science.gov (United States)

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (2012) and Milanzi et al. (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why
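
    The behaviour described here is easy to reproduce by simulation. The toy below uses an invented two-stage rule (stop at n1 if the interim mean exceeds c, else continue to n2) and shows that the marginal bias of the sample average is small, consistent with the asymptotic unbiasedness stated in the abstract.

    import numpy as np

    rng = np.random.default_rng(1)
    mu, n1, n2, c, reps = 0.0, 50, 100, 0.1, 100_000
    estimates = np.empty(reps)
    for r in range(reps):
        x = rng.normal(mu, 1.0, n2)
        interim = x[:n1].mean()
        estimates[r] = interim if interim > c else x.mean()

    print("marginal bias of the sample average: %.4f" % (estimates.mean() - mu))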

  8. Diet selection of African elephant over time shows changing optimization currency

    NARCIS (Netherlands)

    Pretorius, Y.; Stigter, J.D.; Boer, de W.F.; Wieren, van S.E.; Jong, de C.B.; Knegt, de H.J.; Grant, R.C.; Heitkonig, I.M.A.; Knox, N.; Kohi, E.; Mwakiwa, E.; Peel, M.J.S.; Skidmore, A.K.; Slotow, R.; Waal, van der C.; Langevelde, van F.; Prins, H.H.T.

    2012-01-01

    Multiple factors determine diet selection of herbivores. However, in many diet studies selection of single nutrients is studied or optimization models are developed using only one currency. In this paper, we use linear programming to explain diet selection by African elephant based on plant
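
    Although the abstract is truncated, the linear-programming formulation it refers to has a standard shape: choose intake amounts that satisfy nutrient constraints while optimizing one currency. The sketch below minimizes foraging time subject to protein and energy minima; every coefficient is invented for illustration.

    from scipy.optimize import linprog

    # columns: grass, browse, bark
    time_cost = [1.0, 1.5, 2.5]        # h per kg eaten (currency to minimize)
    protein = [60.0, 90.0, 30.0]       # g protein per kg
    energy = [8.0, 7.0, 5.0]           # MJ per kg

    res = linprog(
        c=time_cost,
        A_ub=[[-p for p in protein],   # "at least" constraints in <= form
              [-e for e in energy]],
        b_ub=[-5000.0, -600.0],        # daily minima: 5000 g protein, 600 MJ
        bounds=[(0, 60), (0, 40), (0, 20)],   # availability limits (kg)
    )
    print(res.x, res.fun)              # optimal diet mix and foraging time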

  9. Optimal Parameter Selection of Power System Stabilizer using Genetic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hyeng Hwan; Chung, Dong Il; Chung, Mun Kyu [Dong-A University (Korea); Wang, Yong Peel [Canterbury University (New Zealand)

    1999-06-01

    In this paper, we suggest a method for selecting the optimal parameters of a power system stabilizer (PSS) that is robust to low-frequency oscillation in power systems, using a real-variable elitism genetic algorithm (RVEGA). The optimal parameters were selected for power system stabilizers with one lead compensator and with two lead compensators. Also, the frequency response characteristics of the PSS, the system eigenvalue criterion, and the dynamic characteristics were considered under normal load and heavy load, which proved the usefulness of the RVEGA compared with Yu's compensator design theory. (author). 20 refs., 15 figs., 8 tabs.

  10. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    Directory of Open Access Journals (Sweden)

    Sheng Liu

    2013-01-01

    Full Text Available This paper proposes a segmentation-based global optimization method for depth estimation. Firstly, to obtain an accurate matching cost, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching-cost optimization strategies aimed at handling both borders and occlusion regions. Secondly, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Thirdly, a selective segmentation term is used to enforce plane-trend constraints selectively on the corresponding segments, to further improve the accuracy of the depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is highly competitive with other state-of-the-art matching approaches.

  11. Applying the sequential neural-network approximation and orthogonal array algorithm to optimize the axial-flow cooling system for rapid thermal processes

    International Nuclear Information System (INIS)

    Hung, Shih-Yu; Shen, Ming-Ho; Chang, Ying-Pin

    2009-01-01

    The sequential neural-network approximation and orthogonal array (SNAOA) approach was used in this study to shorten the cooling time for the rapid cooling process such that the normalized maximum resolved stress in the silicon wafer was always below one. An orthogonal array was first constructed to obtain the initial solution set, which was treated as the initial training sample. Next, a back-propagation sequential neural network was trained to simulate the feasible domain and obtain the optimal parameter setting. The size of the training sample was greatly reduced due to the use of the orthogonal array. In addition, a restart strategy was incorporated into the SNAOA so that the search process has a better opportunity to reach a near-global optimum. In this work, we considered three different cooling control schemes during the rapid thermal process: (1) a downward axial gas flow cooling scheme; (2) an upward axial gas flow cooling scheme; (3) a dual axial gas flow cooling scheme. Based on the maximum shear stress failure criterion, other control factors such as flow rate, inlet diameter, outlet width, chamber height and chamber diameter were also examined with respect to cooling time. The results showed that the cooling time could be significantly reduced using the SNAOA approach
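
    The loop structure of such surrogate-assisted optimization is compact. The sketch below stands in a random initial design for the orthogonal array, a scikit-learn network for the back-propagation surrogate, and a quadratic toy function for the expensive cooling simulation; the restart strategy is omitted.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_sim(x):                        # placeholder for the RTP model
        return ((x - 0.3) ** 2).sum(axis=-1)

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, (18, 5))               # stand-in for an L18 array
    y = expensive_sim(X)

    for _ in range(10):
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000).fit(X, y)
        cand = rng.uniform(0, 1, (2000, 5))      # cheap surrogate evaluations
        best = cand[np.argmin(net.predict(cand))]
        X = np.vstack([X, best])                 # run the real simulation once
        y = np.append(y, expensive_sim(best))

    print("best design:", X[np.argmin(y)], "objective:", y.min())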

  12. Event-shape analysis: Sequential versus simultaneous multifragment emission

    International Nuclear Information System (INIS)

    Cebra, D.A.; Howden, S.; Karn, J.; Nadasen, A.; Ogilvie, C.A.; Vander Molen, A.; Westfall, G.D.; Wilson, W.K.; Winfield, J.S.; Norbeck, E.

    1990-01-01

    The Michigan State University 4π array has been used to select central-impact-parameter events from the reaction ⁴⁰Ar + ⁵¹V at incident energies from 35 to 85 MeV/nucleon. The event shape in momentum space is an observable which is shown to be sensitive to the dynamics of the fragmentation process. A comparison of the experimental event-shape distribution to sequential- and simultaneous-decay predictions suggests that a transition in the breakup process may have occurred. At 35 MeV/nucleon, a sequential-decay simulation reproduces the data. For the higher energies, the experimental distributions fall between the two contrasting predictions

  13. Sequential selective same-day suture removal in the management of post-keratoplasty astigmatism.

    Science.gov (United States)

    Fares, U; Mokashi, A A; Elalfy, M S; Dua, H S

    2013-09-01

    In a previous study, we proposed that corneal topography performed 30-40 min after the initial suture removal can identify the next set of sutures requiring removal, for the treatment of post-keratoplasty astigmatism. The aim of this study was to evaluate the effect of removing subsequent sets of sutures at the same sitting. 10/0 nylon interrupted sutures were placed, to secure the graft-host junction, at the time of keratoplasty. Topography was performed using Pentacam (Oculus) before suture removal. The sutures to be removed in the steep semi-meridians were identified and removed at the slit-lamp biomicroscope. Topography was repeated 30-40 min post suture removal, the new steep semi-meridians determined, and the next set of sutures to be removed were identified and removed accordingly. Topography was repeated 4-6 weeks later and the magnitude of topographic astigmatism was recorded. A paired-samples t-test was used to evaluate the impact of selective suture removal on reducing the magnitude of topographic and refractive astigmatism. Twenty eyes of 20 patients underwent sequential selective same-day suture removal (SSSS) after corneal transplantation. This study showed that the topographic astigmatism decreased by about 46.7% (3.68 D) and the refractive astigmatism decreased by about 37.7% (2.61 D) following SSSS. Vector calculations also show a significant reduction of both topographic and refractive astigmatism (P<0.001). SSSS may help patients to achieve satisfactory vision more quickly and reduce the number of follow-up visits required post keratoplasty.

  14. Simulation and Optimization of Control of Selected Phases of Gyroplane Flight

    Directory of Open Access Journals (Sweden)

    Wienczyslaw Stalewski

    2018-02-01

    Full Text Available Optimization methods are increasingly used to solve problems in aeronautical engineering. Typically, optimization methods are utilized in the design of an aircraft airframe or its structure. The presented study is focused on improving aircraft flight control procedures through numerical optimization. The optimization problems concern selected phases of flight of a light gyroplane—a rotorcraft using an unpowered rotor in autorotation to develop lift and an engine-powered propeller to provide thrust. An original methodology for the computational simulation of rotorcraft flight was developed and implemented. In this approach the aircraft motion equations are solved step by step, simultaneously with the solution of the Unsteady Reynolds-Averaged Navier–Stokes equations, which is conducted to assess the aerodynamic forces acting on the aircraft. As the numerical optimization method, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm was adapted. The developed methodology was applied to optimize the flight control procedures in selected stages of gyroplane flight in direct proximity to the ground, where proper control of the aircraft is critical to ensure flight safety and performance. The results of the computational optimizations confirmed the qualitative correctness of the developed methodology. The research results can be helpful in the design of easy-to-control gyroplanes and also in the training of pilots for this type of rotorcraft.

  15. Cost-effectiveness of simultaneous versus sequential surgery in head and neck reconstruction.

    Science.gov (United States)

    Wong, Kevin K; Enepekides, Danny J; Higgins, Kevin M

    2011-02-01

    To determine whether simultaneous head and neck reconstruction (ablation and reconstruction overlapping, performed by two teams) is cost effective compared to sequentially performed surgery (ablation followed by reconstruction). Case-control study. Tertiary care hospital. Oncology patients undergoing free flap reconstruction of the head and neck. A matched-pair comparison study was performed with a retrospective chart review examining the total time of surgery for sequential and simultaneous surgery. Nine patients were selected for both the sequential and simultaneous groups. Sequential head and neck reconstruction patients were pair-matched with patients who had undergone similar oncologic ablative or reconstructive procedures performed in a simultaneous fashion. A detailed cost analysis using the microcosting method was then undertaken, looking at the direct costs of the surgeons, anesthesiologist, operating room, and nursing. On average, simultaneous surgery required 3 hours 15 minutes less operating time, leading to a cost savings of approximately $1200/case when compared to sequential surgery. This represents approximately a 15% reduction in the cost of the entire operation. Simultaneous head and neck reconstruction is more cost effective when compared to sequential surgery.

  16. Sequential Power-Dependence Theory

    NARCIS (Netherlands)

    Buskens, Vincent; Rijt, Arnout van de

    2008-01-01

    Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

  17. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism, based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets reveal that the proposed algorithm outperforms the others.
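
    A heavily simplified rendering of the idea: each agent's "opinion" is a binary feature mask, and agents drift toward better-scoring interaction partners with occasional mutation. The fitness function, interaction rule and all constants below are stand-ins, not the paper's social-dynamics model.

    import numpy as np

    def fitness(mask, X, y):
        if not mask.any():
            return 0.0
        sel = X[:, mask.astype(bool)]
        # toy score: correlation of the selected-feature mean with the
        # target, penalized by subset size
        return abs(np.corrcoef(sel.mean(axis=1), y)[0, 1]) - 0.01 * mask.sum()

    def opinion_search(X, y, n_agents=20, iters=100, p_mut=0.05, seed=0):
        rng = np.random.default_rng(seed)
        pop = rng.integers(0, 2, (n_agents, X.shape[1]))
        for _ in range(iters):
            scores = np.array([fitness(m, X, y) for m in pop])
            for i in range(n_agents):
                j = rng.integers(n_agents)        # random interaction partner
                if scores[j] > scores[i]:         # adopt part of a better opinion
                    take = rng.random(X.shape[1]) < 0.5
                    pop[i, take] = pop[j, take]
                flip = rng.random(X.shape[1]) < p_mut
                pop[i, flip] ^= 1                 # mutation against local minima
        scores = np.array([fitness(m, X, y) for m in pop])
        return pop[np.argmax(scores)]

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 15))
    y = X[:, 2] + X[:, 7]
    print(np.flatnonzero(opinion_search(X, y)))   # indices of kept features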

  18. Enhancing product robustness in reliability-based design optimization

    International Nuclear Information System (INIS)

    Zhuang, Xiaotian; Pan, Rong; Du, Xiaoping

    2015-01-01

    Different types of uncertainties need to be addressed in a product design optimization process. In this paper, the uncertainties in both product design variables and environmental noise variables are considered. Reliability-based design optimization (RBDO) is integrated with robust product design (RPD) to concurrently reduce the production cost and the long-term operation cost, including quality loss, in the process of product design. This problem leads to a multi-objective optimization with probabilistic constraints. In addition, the model uncertainties associated with a surrogate model that is derived from numerical computation methods, such as finite element analysis, are addressed. A hierarchical experimental design approach, augmented by a sequential sampling strategy, is proposed to construct the response surface of the product performance function for finding optimal design solutions. The proposed method is demonstrated through an engineering example. - Highlights: • A unifying framework for integrating RBDO and RPD is proposed. • Implicit product performance function is considered. • The design problem is solved by sequential optimization and reliability assessment. • A sequential sampling technique is developed for improving design optimization. • The comparison with traditional RBDO is provided

  19. Plutonium association with selected solid phases in soils of Rocky Flats, Colorado, using sequential extraction technique

    International Nuclear Information System (INIS)

    Litaor, M.I.; Ibrahim, S.A.

    1996-01-01

    Plutonium contamination in the soil environs of Rocky Flats, CO, has been a potential health risk to the public since the late 1960s. Although the measurement of the total activity of Pu-239+240 in the soil is important information in appraising this risk, total activity does not provide the information required to characterize the geochemical behavior that affects the transport of Pu from the soil and vadose zone to groundwater. A sequential extraction experiment was conducted to assess the geochemical association of Pu with selected mineralogical and chemical phases of the soil. In the surface horizons, Pu-239+240 was primarily associated with organic C (45-65%), sesquioxides (20-40%), and the residual fraction (10-15%). A small portion of Pu-239+240 was associated with the soluble (0.09-0.22%), exchangeable (0.04-0.08%), and carbonate (0.57-7.0%) phases. These results suggest that under the observed pH and oxic conditions, relatively little Pu-239+240 is available for geochemically induced transport processes. Uncommon hydrogeochemical conditions were observed during the spring of 1995, which may have facilitated a partial dissolution of sesquioxides followed by desorption of Pu, resulting in increased Pu mobility. Systematic errors in the sequential extraction experiment due to post-extraction readsorption were evaluated using a Np-237 tracer as a surrogate for Pu-239. The results suggested that post-extraction readsorption rates were insignificant during the first 30 min after extraction for most chemical and mineralogical phases under study. 50 refs., 2 figs., 5 tabs

  20. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    OpenAIRE

    Feipeng Guo; Qibei Lu

    2013-01-01

    As the correct selection of partners in the supply chains of agricultural enterprises becomes more and more important, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to optimize agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of agricultural supply chains. Secondly, a heuristic met...

  1. Opportunistic relaying in multipath and slow fading channel: Relay selection and optimal relay selection period

    KAUST Repository

    Sungjoon Park,

    2011-11-01

    In this paper we present opportunistic relay communication strategies for decode-and-forward relaying. The channel that we consider includes pathloss, shadowing, and fast fading effects. We find a simple outage probability formula for opportunistic relaying in this channel, and validate the results by comparing it with the exact outage probability. Also, we suggest a new relay selection algorithm that incorporates shadowing. We consider a protocol of broadcasting the channel gain of the previously selected relay. This saves resources in slow fading channels by reducing collisions in relay selection. We further investigate the optimal relay selection period to maximize throughput while avoiding selection overhead. © 2011 IEEE.

  2. Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO

    Directory of Open Access Journals (Sweden)

    Lixin Yan

    2016-07-01

    Full Text Available The ability to identify hazardous traffic events is already considered one of the most effective solutions for reducing the occurrence of crashes. Only certain particular hazardous traffic events have been studied in previous studies, which were mainly based on dedicated video stream data and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events; (2) a model is developed to identify hazardous traffic events using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited to carry out a natural driving experiment in Wuhan, China, and multi-sensor information data were collected for different types of traffic events. The results indicated that a vehicle's speed, the standard deviation of speed, the standard deviation of skin conductance, the standard deviation of brake pressure, turn signal, the acceleration of steering, the standard deviation of acceleration, and the acceleration in Z (G) have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the accuracy of prediction was higher than 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm was ranked best in terms of prediction accuracy. The conclusions can provide reference evidence for the development of dangerous-situation warning products and the design of intelligent vehicles.
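
    Since scikit-learn's SVC is trained with an SMO-type solver, the classification stage can be prototyped in a few lines; the Markov blanket step is replaced here by a hand-picked list of column indices, and the data are synthetic.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))        # e.g. speed, brake pressure, ...
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 500) > 0).astype(int)

    mb_features = [0, 3]                 # indices the MB step would retain
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    print(cross_val_score(clf, X[:, mb_features], y, cv=5).mean())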

  3. Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling.

    Science.gov (United States)

    Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia

    2012-01-01

    Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.

  4. Selection and optimization of extracellular lipase production using ...

    African Journals Online (AJOL)

    The aim of this study was to isolate and select lipase-producing microorganisms originated from different substrates, as well as to optimize the production of microbial lipase by submerged fermentation under different nutrient conditions. Of the 40 microorganisms isolated, 39 showed a halo around the colonies and 4 were ...

  5. Sequential Extraction Versus Comprehensive Characterization of Heavy Metal Species in Brownfield Soils

    Energy Technology Data Exchange (ETDEWEB)

    Dahlin, Cheryl L.; Williamson, Connie A.; Collins, W. Keith; Dahlin, David C.

    2002-06-01

    The applicability of sequential extraction as a means to determine species of heavy metals was examined by a study on soil samples from two Superfund sites: the National Lead Company site in Pedricktown, NJ, and the Roebling Steel, Inc., site in Florence, NJ. Data from a standard sequential extraction procedure were compared to those from a comprehensive study that combined optical and scanning-electron microscopy, X-ray diffraction, and chemical analyses. The study shows that larger particles of contaminants, encapsulated contaminants, and/or man-made materials such as slags, coke, metals, and plastics are subject to encasement, non-selectivity, and redistribution in the sequential extraction process. The results indicate that standard sequential extraction procedures that were developed for characterizing species of contaminants in river sediments may be unsuitable for stand-alone determinative evaluations of contaminant species in industrial-site materials. However, if employed as part of a comprehensive, site-specific characterization study, sequential extraction could be a very useful tool.

  6. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the correct solution method for your optimization problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Powell method, penalty functions, the augmented Lagrange multiplier method, sequential quadratic programming, the method of feasible direc

  7. Optimal selection of hydraulic turbines for small hydro electric power

    African Journals Online (AJOL)


    Keywords: optimal selection, SHP turbine, flow duration curve, energy efficiency, annual capacity factor.

  8. The influence of the selection of macronutrients coupled with dietary energy density on the performance of broiler chickens.

    Directory of Open Access Journals (Sweden)

    Sonia Y Liu

    Full Text Available A total of 360 male Ross 308 broiler chickens were used in a feeding study to assess the influence of macronutrients and energy density on feed intakes from 10 to 31 days post-hatch. The study comprised ten dietary treatments from five dietary combinations and two feeding approaches: sequential and choice feeding. The study included eight experimental diets and each dietary combination was made from three experimental diets. Choice fed birds selected between three diets in separate feed trays at the same time, whereas the three diets were offered to sequentially fed birds on an alternating basis during the experimental period. There were no differences in starch and protein intakes between choice and sequentially fed birds (P > 0.05) when broiler chickens selected between diets with different starch, protein and lipid concentrations. When broiler chickens selected between diets with different starch and protein but similar lipid concentrations, both sequentially and choice fed birds selected similar ratios of starch and protein intake (P > 0.05). However, when broiler chickens selected from diets with different protein and lipid but similar starch concentrations, choice fed birds had higher lipid intake (129 versus 118 g/bird, P = 0.027) and selected diets with lower protein concentrations (258 versus 281 g/kg, P = 0.042) than birds offered sequential diet options. Choice fed birds had greater intakes of the high energy diet (1471 g/bird, P < 0.0001) than of the low energy (197 g/bird) or medium energy diets (663 g/bird) whilst broiler chickens were offered diets with different energy densities but high crude protein (300 g/kg) or digestible lysine (17.5 g/kg) concentrations. Choice fed birds had lower FCR (1.217 versus 1.327 g/g, P < 0.0001) and higher carcass yield (88.1 versus 87.3%, P = 0.012) than sequentially fed birds. This suggests that the dietary balance between protein and energy is essential for optimal feed conversion efficiency. The intake path

  9. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    Science.gov (United States)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products, such as evapotranspiration (ET) and gross primary productivity (GPP), are now produced by the integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward a better understanding of the terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with large numbers of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independence from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, the Support Vector Regression (SVR) method; Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases. Our analysis

  10. Computation of Stackelberg Equilibria of Finite Sequential Games

    DEFF Research Database (Denmark)

    Bosanski, Branislav; Branzei, Simina; Hansen, Kristoffer Arnsfelt

    2015-01-01

    The Stackelberg equilibrium is a solution concept that describes optimal strategies to commit to: Player 1 (the leader) first commits to a strategy that is publicly announced, then Player 2 (the follower) plays a best response to the leader's choice. We study Stackelberg equilibria in finite...... sequential (i.e., extensive-form) games and provide new exact algorithms, approximate algorithms, and hardness results for finding equilibria for several classes of such two-player games....

  11. Sequential Triangle Strip Generator based on Hopfield Networks

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Lněnička, Radim

    2009-01-01

    Roč. 21, č. 2 (2009), s. 583-617 ISSN 0899-7667 R&D Projects: GA MŠk(CZ) 1M0545; GA AV ČR 1ET100300517; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10300504; CEZ:AV0Z10750506 Keywords : sequential triangle strip * combinatorial optimization * Hopfield network * minimum energy * simulated annealing Subject RIV: IN - Informatics, Computer Science Impact factor: 2.175, year: 2009

  12. [Optimization and Prognosis of Cell Radiosensitivity Enhancement in vitro and in vivo after Sequential Thermoradiactive Action].

    Science.gov (United States)

    Belkina, S V; Petin, V G

    2016-01-01

    A previously developed mathematical model of the simultaneous action of two inactivating agents has been adapted and tested to describe the results of sequential action. The possibility of applying the mathematical model to the interpretation and prognosis of the increase in radiosensitivity of tumor cells, as well as mammalian cells, after the sequential action of two high temperatures, or of hyperthermia and ionizing radiation, is analyzed. The model predicts the value of the thermal enhancement ratio depending on the duration of thermal exposure, its greatest value, and the condition under which it is achieved.

  13. A comparison of an algorithm for automated sequential beam orientation selection (Cycle) with simulated annealing

    International Nuclear Information System (INIS)

    Woudstra, Evert; Heijmen, Ben J M; Storchi, Pascal R M

    2008-01-01

    Some time ago we developed and published a new deterministic algorithm (called Cycle) for the automatic selection of beam orientations in radiotherapy. This algorithm is a plan-generation process aiming at the prescribed PTV dose within hard dose and dose-volume constraints. The algorithm allows a large number of input orientations to be used and selects only the most efficient orientations that survive the selection process. Efficiency is determined by a score function and is more or less equal to the extent of uninhibited access to the PTV for a specific beam during the selection process. In this paper we compare the capabilities of fast simulated annealing (FSA) and Cycle for cases where local optima are supposed to be present. Five pancreas and five oesophagus cases previously treated in our institute were selected for this comparison. Plans were generated for FSA and Cycle using the same hard dose and dose-volume constraints, and the largest achievable PTV doses obtained from these algorithms were compared. The largest achieved PTV dose values were generally very similar for the two algorithms. In some cases FSA resulted in a slightly higher PTV dose than Cycle, at the cost of switching on substantially more beam orientations than Cycle. In other cases, when Cycle generated the solution with the highest PTV dose using only a limited number of non-zero-weight beams, FSA seemed to have some difficulty in switching off the unfavourable directions. Cycle was faster than FSA, especially for large-dimensional feasible spaces. In conclusion, for the cases studied in this paper, we have found that despite the inherent drawback of sequential search as used by Cycle (where Cycle could probably get trapped in a local optimum), Cycle is nevertheless able to find comparable or sometimes slightly better treatment plans in comparison with FSA (which in theory finds the global optimum), especially in large-dimensional beam weight spaces
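
    The sequential-search skeleton that Cycle instantiates can be shown generically: repeatedly add the candidate orientation that most improves a plan score, and stop when nothing improves. The score function below (spreading gantry angles) is a deliberately naive placeholder for Cycle's dose-based efficiency score, which is not reproduced here.

    import numpy as np

    def select_beams(score_fn, candidates, max_beams=9):
        chosen = []
        while len(chosen) < max_beams:
            cands = [c for c in candidates if c not in chosen]
            scores = [score_fn(chosen + [c]) for c in cands]
            if not cands or max(scores) <= score_fn(chosen):
                break                        # no orientation improves the plan
            chosen.append(cands[int(np.argmax(scores))])
        return chosen

    def spread(beams):                       # toy score: evenly spaced angles
        if len(beams) < 2:
            return float(len(beams))
        b = np.sort(np.asarray(beams, dtype=float))
        gaps = np.diff(np.concatenate([b, [b[0] + 360.0]]))
        return len(beams) - np.var(gaps) / 1000.0

    print(select_beams(spread, candidates=list(range(0, 360, 20))))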

  14. Sequentially Integrated Optimization of the Conditions to Obtain a High-Protein and Low-Antinutritional Factors Protein Isolate from Edible Jatropha curcas Seed Cake.

    Science.gov (United States)

    León-López, Liliana; Dávila-Ortiz, Gloria; Jiménez-Martínez, Cristian; Hernández-Sánchez, Humberto

    2013-01-01

    Jatropha curcas seed cake is a protein-rich byproduct of oil extraction which could be used to produce protein isolates. The purpose of this study was the optimization of the protein isolation process from the seed cake of an edible provenance of J. curcas by an alkaline extraction followed by isoelectric precipitation, via a sequentially integrated optimization approach. The influence of four different factors (solubilization pH, extraction temperature, NaCl addition, and precipitation pH) on the protein and antinutritional compound content of the isolate was evaluated. The estimated optimal conditions were an extraction temperature of 20°C, a precipitation pH of 4, and an amount of NaCl in the extraction solution of 0.6 M, for a predicted protein content of 93.3%. Under these conditions, it was possible to experimentally obtain a protein isolate with 93.21% protein, 316.5 mg 100 g⁻¹ of total phenolics, 2891.84 mg 100 g⁻¹ of phytates and 168 mg 100 g⁻¹ of saponins. The protein content of this isolate was higher than the content reported by other authors.

  15. A sequential/parallel track selector

    CERN Document Server

    Bertolino, F; Bressani, Tullio; Chiavassa, E; Costa, S; Dellacasa, G; Gallio, M; Musso, A

    1980-01-01

    A medium-speed (approximately 1 μs) hardware pre-analyzer for the selection of events detected in four planes of drift chambers in the magnetic field of the Omicron Spectrometer at the CERN SC is described. Specific geometrical criteria determine patterns of hits in the four planes of vertical wires that have to be recognized and that are stored as patterns of '1's in random access memories. Pairs of good hits are found sequentially, then the RAMs are used as look-up tables. (6 refs).

  16. Age-Related Differences in Goals: Testing Predictions from Selection, Optimization, and Compensation Theory and Socioemotional Selectivity Theory

    Science.gov (United States)

    Penningroth, Suzanna L.; Scott, Walter D.

    2012-01-01

    Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two…

  17. A compensatory approach to optimal selection with mastery scores

    NARCIS (Netherlands)

    van der Linden, Willem J.; Vos, Hendrik J.

    1994-01-01

    This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious

  18. Comparison of Optimal Portfolios Selected by Multicriterial Model Using Absolute and Relative Criteria Values

    Directory of Open Access Journals (Sweden)

    Branka Marasović

    2009-03-01

    Full Text Available In this paper we select an optimal portfolio on the Croatian capital market by using multicriterial programming. In accordance with modern portfolio theory, maximisation of returns at minimal risk should be the investment goal of any successful investor. However, contrary to the expectations of modern portfolio theory, tests carried out on a number of financial markets reveal the existence of other indicators important in portfolio selection. Considering the importance of variables other than return and risk, selection of the optimal portfolio becomes a multicriterial problem which should be solved by using the appropriate techniques. In order to select an optimal portfolio, absolute values of criteria, like return, risk, price-to-earnings ratio (P/E), price-to-book ratio (P/B) and price-to-sales ratio (P/S), are included in our multicriterial model. However, a problem might occur because the mean values of some criteria are significantly different for different sectors, and financial managers emphasize that comparison of the same criteria across different sectors could lead to wrong conclusions. In the second part of the paper, relative values of the previously stated criteria (in relation to the mean value for the sector) are included in the model for selecting the optimal portfolio. Furthermore, the paper shows that if relative values of criteria are included in the multicriterial model for selecting the optimal portfolio, the return in the subsequent period is considerably higher than if absolute values of the same criteria were used.
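
    The paper's key move, re-expressing each criterion relative to its sector mean before scoring, is a one-liner with pandas. The tickers, sectors, values and the naive additive aggregation below are all illustrative.

    import pandas as pd

    df = pd.DataFrame({
        "sector": ["bank", "bank", "telecom", "telecom"],
        "ret":  [0.12, 0.08, 0.10, 0.06],    # higher is better
        "risk": [0.20, 0.15, 0.25, 0.18],    # lower is better
        "pe":   [14.0, 9.0, 22.0, 17.0],     # lower is better
    }, index=["B1", "B2", "T1", "T2"])

    rel = df.groupby("sector").transform(lambda s: s / s.mean())
    score = rel["ret"] - rel["risk"] - rel["pe"]
    print(score.sort_values(ascending=False))    # ranking fed to the optimizer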

  19. Optimal trajectories of aircraft and spacecraft

    Science.gov (United States)

    Miele, A.

    1990-01-01

    Work done on algorithms for the numerical solution of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on the calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. For aircraft trajectories, in particular, the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear is described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant-pitch trajectories and maximum-angle-of-attack trajectories. For spacecraft trajectories, in particular, the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer is examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful

  20. Non-euclidean simplex optimization

    International Nuclear Information System (INIS)

    Silver, G.L.

    1977-01-01

    Geometric optimization techniques useful for studying chemical equilibrium traditionally rely upon principles of euclidean geometry, but such algorithms may also be based upon principles of a non-euclidean geometry. The sequential simplex method is adapted to the hyperbolic plane, and application of optimization to problems such as the potentiometric titration of plutonium is suggested.
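
    The record does not spell out the hyperbolic-plane construction, but the Euclidean sequential simplex it adapts is standard; a minimal illustration on a toy response surface (the function and starting point are invented):

```python
import numpy as np
from scipy.optimize import minimize

# Toy quadratic response standing in for a chemical-equilibrium objective.
def response(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

# Nelder-Mead is the classic (Euclidean) sequential simplex method.
result = minimize(response, x0=np.array([3.0, 2.0]), method="Nelder-Mead")
print(result.x, result.fun)  # converges near (1, -0.5)
```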

  1. Tank waste remediation system optimized processing strategy with an altered treatment scheme

    International Nuclear Information System (INIS)

    Slaathaug, E.J.

    1996-03-01

    This report provides an alternative strategy evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy with an altered treatment scheme performs the major elements of the TWRS Program, but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility.

  2. An Improved Test Selection Optimization Model Based on Fault Ambiguity Group Isolation and Chaotic Discrete PSO

    Directory of Open Access Journals (Sweden)

    Xiaofeng Lv

    2018-01-01

    Full Text Available Sensor data-based test selection optimization is the basis of test design; it ensures that the system is tested under constraints on conventional indexes such as the fault detection rate (FDR) and the fault isolation rate (FIR). From the perspective of equipment maintenance support, ambiguity in fault isolation has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed that considers the ambiguity degree of fault isolation. In the new model, the fault-test dependency matrix is adopted to model the correlation between system faults and the test group. The objective function of the proposed model is to minimize the test cost subject to FDR and FIR constraints. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the improved test selection optimization model. The new test selection optimization model is more consistent with real, complicated engineering systems. The experimental results verify the effectiveness of the proposed method.
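
    A minimal sketch of the kind of search involved, using a plain (non-chaotic) binary PSO, a made-up fault-test dependency matrix and test costs, and only the FDR constraint (the FIR handling of the paper is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dependency matrix D: D[f, t] = 1 if test t can detect fault f.
# Sizes, costs and the 0.9 FDR target are illustrative only.
n_faults, n_tests = 8, 12
D = (rng.random((n_faults, n_tests)) < 0.3).astype(int)
cost = rng.uniform(1.0, 5.0, n_tests)
FDR_TARGET = 0.9

def fitness(x):
    """Total cost, heavily penalized when the detection constraint fails."""
    detected = (D @ x > 0).mean()          # fraction of detectable faults
    penalty = 1e3 * max(0.0, FDR_TARGET - detected)
    return cost @ x + penalty

# Plain binary PSO; the paper's chaotic variant would draw r1, r2 from a
# chaotic map (e.g., the logistic map) instead of a uniform generator.
n_particles, n_iter = 20, 200
x = rng.integers(0, 2, (n_particles, n_tests))
v = np.zeros((n_particles, n_tests))
pbest = x.copy()
pbest_fit = np.array([fitness(p) for p in x])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    # Sigmoid of the velocity gives the probability of selecting each test.
    x = (rng.random(v.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
    fit = np.array([fitness(p) for p in x])
    better = fit < pbest_fit
    pbest[better], pbest_fit[better] = x[better], fit[better]
    gbest = pbest[pbest_fit.argmin()].copy()

print("selected tests:", np.flatnonzero(gbest), "cost:", round(cost @ gbest, 2))
```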

  3. Metal fractionation of atmospheric aerosols via sequential chemical extraction: a review

    Energy Technology Data Exchange (ETDEWEB)

    Smichowski, Patricia; Gomez, Dario [Unidad de Actividad Quimica, Comision Nacional de Energia Atomica, San Martin (Argentina); Polla, Griselda [Unidad de Actividad Fisica, Comision Nacional de Energia Atomica, San Martin (Argentina)

    2005-01-01

    This review surveys schemes used for the sequential chemical fractionation of metals and metalloids present in airborne particulate matter. It focuses mainly on sequential chemical fractionation schemes published over the last 15 years. These schemes have been classified into five main categories: (1) based on Tessier's procedure, (2) based on Chester's procedure, (3) based on Zatka's procedure, (4) based on the BCR procedure, and (5) other procedures. The operational characteristics, as well as the state of the art in metal fractionation of airborne particulate matter, fly ashes and workroom aerosols in terms of applications, optimizations and innovations, are also described. Many references to other works in this area are provided. (orig.)

  4. Markov decision processes: a tool for sequential decision making under uncertainty.

    Science.gov (United States)

    Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S

    2010-01-01

    We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
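
    For readers unfamiliar with how such models are solved, a minimal value-iteration sketch on a toy three-state MDP; the transition and reward numbers below are invented, not the liver-transplantation model of the paper:

```python
import numpy as np

# Tiny illustrative MDP: states 0..2, actions 0..1.
# P[a][s, s'] are transition probabilities, R[s, a] are expected rewards.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # action 1
])
R = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 5.0]])
gamma = 0.95

V = np.zeros(3)
for _ in range(1000):                       # value iteration to a fixed point
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.max(axis=1)
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)                   # one decision per state
print("optimal policy:", policy, "values:", V.round(3))
```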

  5. Further Developments on Optimum Structural Design Using MSC/Nastran and Sequential Quadratic Programming

    DEFF Research Database (Denmark)

    Holzleitner, Ludwig

    1996-01-01

    , here the shape of two-dimensional parts with different thickness areas will be optimized. As in the previous paper, a methodology for structural optimization using the commercial finite element package MSC/NASTRAN for structural analysis is described. Three different methods for design sensitivity......This work is closely connected to the paper: K.G. MAHMOUD, H.W. ENGL and HOLZLEITNER: "OPTIMUM STRUCTURAL DESIGN USING MSC/NASTRAN AND SEQUENTIAL QUADRATIC PROGRAMMING", Computers & Structures, Vol. 52, No. 3, pp. 437-447, (1994). In contrast to that paper, where thickness optimization is described...

  6. Mathematical programming methods for large-scale topology optimization problems

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana

    for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods have been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs......, and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have almost not been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second...... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...

  7. Decision-making in research tasks with sequential testing.

    Directory of Open Access Journals (Sweden)

    Thomas Pfeiffer

    Full Text Available BACKGROUND: In a recent controversial essay, published by JPA Ioannidis in PLoS Medicine, it has been argued that in some research fields, most of the published findings are false. Based on theoretical reasoning it can be shown that small effect sizes, error-prone tests, low priors of the tested hypotheses and biases in the evaluation and publication of research findings increase the fraction of false positives. These findings raise concerns about the reliability of research. However, they are based on a very simple scenario of scientific research, where single tests are used to evaluate independent hypotheses. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we present computer simulations and experimental approaches for analyzing more realistic scenarios. In these scenarios, research tasks are solved sequentially, i.e. subsequent tests can be chosen depending on previous results. We investigate simple sequential testing and scenarios where only a selected subset of results can be published and used for future rounds of test choice. Results from computer simulations indicate that for the tasks analyzed in this study, the fraction of false among the positive findings declines over several rounds of testing if the most informative tests are performed. Our experiments show that human subjects frequently perform the most informative tests, leading to a decline of false positives as expected from the simulations. CONCLUSIONS/SIGNIFICANCE: For the research tasks studied here, findings tend to become more reliable over time. We also find that performance in those experimental settings where not all performed tests could be published was surprisingly inefficient. Our results may help optimize existing procedures used in the practice of scientific research and provide guidance for the development of novel forms of scholarly communication.
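
    A toy simulation of the declining false-positive fraction the abstract describes, with invented prior, power, and false-positive rate, where only hypotheses that keep testing positive advance to the next round:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative numbers, not the paper's: 10% of hypotheses are true,
# each test has power 0.8 and false-positive rate 0.05.
n, prior, power, alpha = 100_000, 0.10, 0.80, 0.05
true = rng.random(n) < prior
alive = np.ones(n, dtype=bool)          # hypotheses still "positive" so far

for round_ in range(1, 5):
    p_pos = np.where(true, power, alpha)
    alive &= rng.random(n) < p_pos      # keep only hypotheses testing positive
    positives = alive.sum()
    false_frac = (alive & ~true).sum() / positives
    print(f"round {round_}: {positives} positives, "
          f"false fraction {false_frac:.3f}")
```

    Each round filters false hypotheses much more aggressively than true ones, so the false fraction among surviving positives falls rapidly.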

  8. On the diversity enhancement and power balancing of per-subcarrier antenna selection in OFDM systems

    KAUST Repository

    Park, Kihong; Ko, Youngchai; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we consider multi-carrier systems with multiple transmit antennas under a power balancing constraint. Applying transmit antenna selection and discrete rate adaptive modulation using M-ary quadrature amplitude modulation (QAM) according to the channel variation per subcarrier, we develop an optimal antenna selection scheme in terms of maximum spectral efficiency, where all possible groupings for sending the same information-bearing signals in a group of subcarriers are searched, and the groups of subcarriers that provide the frequency diversity gain are formed. In addition, we propose a suboptimal method to reduce the computational complexity of the optimal method. The suboptimal scheme considers only the subcarriers under outage, and these are combined sequentially until the required SNR is met. Numerical results show that the proposed suboptimal method with diversity combining outperforms the optimal antenna selection without diversity combining introduced in [1], especially in the low SNR region, and offers spectral efficiency close to that of the optimal method with diversity combining, while maintaining lower complexity. ©2010 IEEE.

  9. On the diversity enhancement and power balancing of per-subcarrier antenna selection in OFDM systems

    KAUST Repository

    Park, Kihong

    2010-09-01

    In this paper, we consider multi-carrier systems with multiple transmit antennas under a power balancing constraint. Applying transmit antenna selection and discrete rate adaptive modulation using M-ary quadrature amplitude modulation (QAM) according to the channel variation per subcarrier, we develop an optimal antenna selection scheme in terms of maximum spectral efficiency, where all possible groupings for sending the same information-bearing signals in a group of subcarriers are searched, and the groups of subcarriers that provide the frequency diversity gain are formed. In addition, we propose a suboptimal method to reduce the computational complexity of the optimal method. The suboptimal scheme considers only the subcarriers under outage, and these are combined sequentially until the required SNR is met. Numerical results show that the proposed suboptimal method with diversity combining outperforms the optimal antenna selection without diversity combining introduced in [1], especially in the low SNR region, and offers spectral efficiency close to that of the optimal method with diversity combining, while maintaining lower complexity. ©2010 IEEE.
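
    A toy sketch of the two ingredients, per-subcarrier antenna selection and sequential combining of outage subcarriers; the Rayleigh channel, the SNR threshold, and the additive-SNR combining model are simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: 4 transmit antennas, 16 subcarriers, Rayleigh-faded gains.
n_tx, n_sc, snr_min = 4, 16, 1.0
h = (rng.normal(size=(n_tx, n_sc)) +
     1j * rng.normal(size=(n_tx, n_sc))) / np.sqrt(2)
gain = np.abs(h) ** 2

best_ant = gain.argmax(axis=0)            # per-subcarrier antenna selection
snr = gain.max(axis=0)
print("per-subcarrier antenna choice:", best_ant)

# Suboptimal grouping from the abstract: subcarriers in outage are combined
# sequentially (SNRs accumulated, as a crude diversity-combining stand-in)
# until the required SNR is met.
outage = sorted(np.flatnonzero(snr < snr_min), key=lambda k: -snr[k])
group, acc = [], 0.0
for k in outage:
    group.append(k)
    acc += snr[k]
    if acc >= snr_min:
        print("combined subcarriers", group, "reach SNR", round(acc, 2))
        group, acc = [], 0.0
```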

  10. Strategic Path Planning by Sequential Parametric Bayesian Decisions

    Directory of Open Access Journals (Sweden)

    Baro Hyun

    2013-11-01

    Full Text Available The objective of this research is to generate a path for a mobile agent that carries sensors used for classification, where the path is to optimize strategic objectives that account for misclassification and the consequences of misclassification, and where the weights assigned to these consequences are chosen by a strategist. We propose a model that accounts for the interaction between the agent kinematics (i.e., the ability to move), informatics (i.e., the ability to process data into information), classification (i.e., the ability to classify objects based on the information), and strategy (i.e., the mission objective). Within this model, we pose and solve a sequential decision problem that accounts for strategist preferences, and the solution to the problem yields a sequence of kinematic decisions for a moving agent. The solution of the sequential decision problem yields the following flying tactics: "approach only objects whose suspected identity matters to the strategy". These tactics are numerically illustrated in several scenarios.

  11. Optimisation of beryllium-7 gamma analysis following BCR sequential extraction

    International Nuclear Information System (INIS)

    Taylor, A.; Blake, W.H.; Keith-Roach, M.J.

    2012-01-01

    Graphical abstract: showing the decrease in analytical uncertainty using the optimal (combined preconcentrated sample extract) method; nv (no value) where extract activities were … Highlights: ► An understanding of 7Be geochemical behaviour is required to support tracer studies. ► Sequential extraction with natural 7Be returns high analytical uncertainties. ► Preconcentrating extracts from a large sample mass improved analytical uncertainty. ► This optimised method can be readily employed in studies using low activity samples. - Abstract: The application of cosmogenic 7Be as a sediment tracer at the catchment-scale requires an understanding of its geochemical associations in soil to underpin the assumption of irreversible adsorption. Sequential extractions offer a readily accessible means of determining the associations of 7Be with operationally defined soil phases. However, the subdivision of the low activity concentrations of fallout 7Be in soils into geochemical fractions can introduce high gamma counting uncertainties. Extending analysis time significantly is not always an option for batches of samples, owing to the on-going decay of 7Be (t1/2 = 53.3 days). Here, three different methods of preparing and quantifying 7Be extracted using the optimised BCR three-step scheme have been evaluated and compared with a focus on reducing analytical uncertainties. The optimal method involved carrying out the BCR extraction in triplicate, sub-sampling each set of triplicates for stable Be analysis before combining each set and coprecipitating the 7Be with metal oxyhydroxides to produce a thin source for gamma analysis. This method was applied to BCR extractions of natural 7Be in four agricultural soils. The approach gave good counting statistics from a 24 h analysis period (∼10% (2σ) where extract activity >40% of total activity) and generated statistically useful sequential extraction profiles. Total recoveries of 7Be fell between 84 and 112%. The stable Be data demonstrated that the

  12. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    Science.gov (United States)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to settle the trajectory optimization problem with parametric uncertainties in entry dynamics for Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, the modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the approximation of trajectory solution efficiently. The MPP method, which is used for assessing the reliability of constraints satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle including SO, reliability assessment and constraints update is repeated in the RBSO until the reliability requirements of constraints satisfaction are satisfied. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.
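
    The PCE surrogate and MPP search in the abstract exist to avoid the brute-force reliability check sketched below; the limit-state function and standard-normal uncertainties here are invented placeholders for the entry-dynamics constraints:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy limit state: g(x) > 0 means the trajectory constraint is satisfied.
# Both the function and the N(0,1) uncertain parameters are illustrative.
def g(x):
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

x = rng.normal(size=(1_000_000, 2))
reliability = (g(x) > 0).mean()
print(f"constraint satisfied with probability {reliability:.4f}")
```

    The RBSO loop of the paper repeats optimize, assess, and update until such probabilities meet their targets; PCE replaces g with a cheap polynomial surrogate and the MPP search replaces the million samples with a targeted search for the most probable failure point.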

  13. The subtyping of primary aldosteronism by adrenal vein sampling: sequential blood sampling causes factitious lateralization.

    Science.gov (United States)

    Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Diego, Miotto; Seccia, Teresa M; Rossi, Gian Paolo

    2018-02-01

    The pulsatile secretion of adrenocortical hormones and a stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity and also the assessment of lateralization when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of the lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously bilaterally when starting AVS (t-15) and 15 min after (t0), with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L) created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with the sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time points. However, overall the lateralization index simultaneously determined at t0 provided a more accurate identification of APA than the simulated sequential lateralization index (R ⇒ L) (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of the lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15 and between the simultaneous t0 and sequential techniques was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-sides gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.

  14. Design optimization and analysis of selected thermal devices using self-adaptive Jaya algorithm

    International Nuclear Information System (INIS)

    Rao, R.V.; More, K.C.

    2017-01-01

    Highlights: • Self-adaptive Jaya algorithm is proposed for optimal design of thermal devices. • Optimization of heat pipe, cooling tower, heat sink and thermo-acoustic prime mover is presented. • Results of the proposed algorithm are better than those of other optimization techniques. • The proposed algorithm may be conveniently used for the optimization of other devices. - Abstract: The present study explores the use of an improved Jaya algorithm, called the self-adaptive Jaya algorithm, for the optimal design of selected thermal devices, viz., heat pipe, cooling tower, honeycomb heat sink and thermo-acoustic prime mover. Four different optimization case studies of the selected thermal devices are presented. Researchers had attempted the same design problems in the past using the niched pareto genetic algorithm (NPGA), response surface method (RSM), leap-frog optimization program with constraints (LFOPC) algorithm, teaching-learning based optimization (TLBO) algorithm, grenade explosion method (GEM) and multi-objective genetic algorithm (MOGA). The results achieved by using the self-adaptive Jaya algorithm are compared with those achieved by using the NPGA, RSM, LFOPC, TLBO, GEM and MOGA algorithms. The self-adaptive Jaya algorithm proves superior to the other optimization methods in terms of results, computational effort and function evaluations.
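
    The Jaya update rule is simple enough to show in full. Below is plain Jaya on a stand-in objective; the self-adaptive variant in the paper additionally adapts the population size between iterations, and all numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sphere function as a stand-in for the thermal-design objectives.
def f(X):
    return (X ** 2).sum(axis=1)

pop, dim, iters = 20, 5, 300
X = rng.uniform(-10, 10, (pop, dim))
for _ in range(iters):
    fit = f(X)
    best, worst = X[fit.argmin()], X[fit.argmax()]
    r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
    # Jaya move: toward the best solution and away from the worst one.
    Xnew = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
    improved = f(Xnew) < fit
    X[improved] = Xnew[improved]          # greedy acceptance

print("best objective:", f(X).min())
```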

  15. Optimal Portfolio Selection Under Concave Price Impact

    International Nuclear Information System (INIS)

    Ma Jin; Song Qingshuo; Xu Jing; Zhang Jianfeng

    2013-01-01

    In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a “piecewise constant” form, reflecting a more practical perspective.

  16. Optimal Portfolio Selection Under Concave Price Impact

    Energy Technology Data Exchange (ETDEWEB)

    Ma Jin, E-mail: jinma@usc.edu [University of Southern California, Department of Mathematics (United States); Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk [City University of Hong Kong, Department of Mathematics (Hong Kong); Xu Jing, E-mail: xujing8023@yahoo.com.cn [Chongqing University, School of Economics and Business Administration (China); Zhang Jianfeng, E-mail: jianfenz@usc.edu [University of Southern California, Department of Mathematics (United States)

    2013-06-15

    In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.

  17. Selective condensation drives partitioning and sequential secretion of cyst wall proteins in differentiating Giardia lamblia.

    Directory of Open Access Journals (Sweden)

    Christian Konrad

    2010-04-01

    Full Text Available Controlled secretion of a protective extracellular matrix is required for transmission of the infective stage of a large number of protozoan and metazoan parasites. Differentiating trophozoites of the highly minimized protozoan parasite Giardia lamblia secrete the proteinaceous portion of the cyst wall material (CWM consisting of three paralogous cyst wall proteins (CWP1-3 via organelles termed encystation-specific vesicles (ESVs. Phylogenetic and molecular data indicate that Diplomonads have lost a classical Golgi during reductive evolution. However, neogenesis of ESVs in encysting Giardia trophozoites transiently provides basic Golgi functions by accumulating presorted CWM exported from the ER for maturation. Based on this "minimal Golgi" hypothesis we predicted maturation of ESVs to a trans Golgi-like stage, which would manifest as a sorting event before regulated secretion of the CWM. Here we show that proteolytic processing of pro-CWP2 in maturing ESVs coincides with partitioning of CWM into two fractions, which are sorted and secreted sequentially with different kinetics. This novel sorting function leads to rapid assembly of a structurally defined outer cyst wall, followed by slow secretion of the remaining components. Using live cell microscopy we find direct evidence for condensed core formation in maturing ESVs. Core formation suggests that a mechanism controlled by phase transitions of the CWM from fluid to condensed and back likely drives CWM partitioning and makes sorting and sequential secretion possible. Blocking of CWP2 processing by a protease inhibitor leads to mis-sorting of a CWP2 reporter. Nevertheless, partitioning and sequential secretion of two portions of the CWM are unaffected in these cells. Although these cysts have a normal appearance they are not water resistant and therefore not infective. Our findings suggest that sequential assembly is a basic architectural principle of protective wall formation and requires

  18. Q-Learning Multi-Objective Sequential Optimal Sensor Parameter Weights

    Directory of Open Access Journals (Sweden)

    Raquel Cohen

    2016-04-01

    Full Text Available The goal of our solution is to deliver trustworthy decision-making analysis tools which evaluate situations and the potential impacts of such decisions through acquired information, and which add efficiency to continuing mission operations and analyst work. We discuss the use of cooperation in modeling and simulation and show quantitative results for design choices in resource allocation. The key contribution of our paper is to combine remote sensing decision making with Nash Equilibrium for sensor parameter weighting optimization. By calculating all Nash Equilibrium possibilities per period, optimization of sensor allocation is achieved for overall higher system efficiency. Our tool provides insight into the most important or optimal weights for sensor parameters and can be used to efficiently tune those weights.
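
    As a small illustration of the per-period equilibrium computation, pure-strategy Nash equilibria of a bimatrix game can be enumerated directly; the payoff matrices below are invented:

```python
import numpy as np

# Enumerating pure-strategy Nash equilibria of a small bimatrix game,
# the building block behind per-period sensor-weight coordination.
A = np.array([[3, 1], [0, 2]])   # row player's payoffs
B = np.array([[2, 1], [0, 3]])   # column player's payoffs

# (i, j) is a pure Nash equilibrium when neither player can gain by
# deviating unilaterally.
equilibria = [
    (i, j)
    for i in range(A.shape[0])
    for j in range(A.shape[1])
    if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max()
]
print("pure Nash equilibria:", equilibria)   # [(0, 0), (1, 1)]
```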

  19. Condition Monitoring of Sensors in a NPP Using Optimized PCA

    Directory of Open Access Journals (Sweden)

    Wei Li

    2018-01-01

    Full Text Available An optimized principal component analysis (PCA) framework is proposed to implement condition monitoring for sensors in a nuclear power plant (NPP) in this paper. Compared with the common PCA method in previous research, the PCA method in this paper is optimized at different modeling procedures, including the data preprocessing stage, the modeling parameter selection stage, and the fault detection and isolation stage. The model's performance is greatly improved through these optimizations. Finally, sensor measurements from a real NPP are used to train the optimized PCA model in order to guarantee the credibility and reliability of the simulation results. Meanwhile, artificial faults are sequentially imposed on sensor measurements to estimate the fault detection and isolation ability of the proposed PCA model. Simulation results show that the optimized PCA model is capable of detecting and isolating faulty sensors whether they exhibit major or minor failures. Meanwhile, the quantitative evaluation results also indicate that better performance can be obtained with the optimized PCA method compared with the common PCA method.
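
    A compact sketch of PCA-based sensor fault detection using the squared prediction error (SPE) statistic; the synthetic data, number of retained components, and percentile threshold are all illustrative choices rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data standing in for NPP sensor measurements; in practice the
# threshold would often use a chi-square/F approximation, not a percentile.
train = rng.normal(size=(2000, 10))
train[:, 3] += 0.8 * train[:, 0]                 # correlated channels

mu, sd = train.mean(0), train.std(0)
Z = (train - mu) / sd
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
k = 4                                            # retained principal components
P = Vt[:k].T                                     # loading matrix

def spe(x):
    """Squared prediction error: energy outside the PCA subspace."""
    z = (x - mu) / sd
    residual = z - z @ P @ P.T
    return (residual ** 2).sum(axis=-1)

threshold = np.percentile(spe(train), 99)

test = rng.normal(size=(5, 10))
test[2, 7] += 6.0                                # inject a sensor fault
flags = spe(test) > threshold
print("faulty samples:", np.flatnonzero(flags))
```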

  20. A metaheuristic optimization framework for informative gene selection

    Directory of Open Access Journals (Sweden)

    Kaberi Das

    Full Text Available This paper presents a metaheuristic framework using Harmony Search (HS) with a Genetic Algorithm (GA) for gene selection. The internal architecture of the proposed model broadly works in two phases: in the first phase, the model hybridizes HS with GA to compute and evaluate the fitness of randomly selected binary-string solutions, and HS then ranks the solutions in descending order of fitness. In the second phase, offspring are generated using the crossover and mutation operations of GA, and finally those offspring whose fitness, evaluated by an SVM classifier, exceeds that of their parents are selected for the next generation. The accuracy of the final gene subsets obtained from this model has been evaluated using SVM classifiers. The merit of this approach is analyzed by experimental results on five benchmark datasets, and the results showed an impressive accuracy over existing feature selection approaches. The occurrence of gene subsets selected from this model has also been computed, and the most often selected gene subsets with a probability of [0.1–0.9] have been chosen as optimal sets of informative genes. Finally, the performance of those selected informative gene subsets has been measured and established through probabilistic measures. Keywords: Gene Selection, Metaheuristic, Harmony Search Algorithm, Genetic Algorithm, SVM

  1. A Sequential Kriging reliability analysis method with characteristics of adaptive sampling regions and parallelizability

    International Nuclear Information System (INIS)

    Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng

    2016-01-01

    The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions which are expensive to evaluate. This type of method includes EGRA, the efficient global reliability analysis method, and AK-MCS, the active learning reliability method combining a Kriging model with Monte Carlo simulation. The purpose of this paper is to improve SKRA through adaptive sampling regions and parallelizability. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that the accuracy of these regions has negligible effect on the results. The size of the sampling regions is adapted according to the failure probability calculated in the last iteration. Two parallel strategies, aimed at selecting multiple sample points at a time, are introduced and compared. The improvement is verified through several challenging examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • The adaptive sampling regions strategy reduces the number of needed samples. • The two parallel strategies reduce the number of needed iterations. • The accuracy of the optimal value impacts the number of samples significantly.
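
    An AK-MCS-flavoured sketch of the active-learning loop such methods share, using scikit-learn's Gaussian process as the Kriging surrogate and the standard U learning function; the limit state, sample sizes, and fixed iteration count are illustrative simplifications (the paper's adaptive-region strategy would additionally shrink the candidate pool):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)

# Toy limit state: g(x) < 0 denotes failure. Invented for illustration.
def g(x):
    return 4.0 - x[:, 0] ** 2 - x[:, 1]

candidates = rng.normal(size=(5000, 2))          # Monte Carlo population
X = rng.normal(size=(12, 2))                     # initial design
y = g(X)

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)    # U learning function
    pick = U.argmin()                            # most ambiguous candidate
    X = np.vstack([X, candidates[pick]])
    y = np.append(y, g(candidates[pick:pick + 1]))

pf = (gp.predict(candidates) < 0).mean()
print(f"estimated failure probability: {pf:.4f}")
```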

  2. Optimization of growth medium and fermentation conditions for ...

    African Journals Online (AJOL)

    A sequential optimization approach based on statistical experimental designs was employed to optimize the growth medium and fermentation conditions, in order to improve the antibiotic activity of Xenorhabdus nematophila TB. Tryptone soy broth (TSB) was chosen as the original medium for optimization. Glucose and ...

  3. More than 10 years survival with sequential therapy in a patient with advanced renal cell carcinoma: a case report

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, J.L.; Wang, F.L.; Yi, X.M.; Qin, W.J.; Wu, G.J. [Department of Urology, Xijing Hospital, Fourth Military Medical University, Xi'an, Shaanxi (China); Huan, Y. [Department of Radiology, Xijing Hospital, Fourth Military Medical University, Xi'an, Shaanxi (China); Yang, L.J.; Zhang, G.; Yu, L.; Zhang, Y.T.; Qin, R.L.; Tian, C.J. [Department of Urology, Xijing Hospital, Fourth Military Medical University, Xi'an, Shaanxi (China)]

    2014-10-31

    Although radical nephrectomy alone is widely accepted as the standard of care in localized treatment for renal cell carcinoma (RCC), it is not sufficient for the treatment of metastatic RCC (mRCC), which invariably leads to an unfavorable outcome despite the use of multiple therapies. Currently, sequential targeted agents are recommended for the management of mRCC, but the optimal drug sequence is still debated. This case was a 57-year-old man with clear-cell mRCC who received multiple therapies following his first operation in 2003 and has survived for over 10 years with a satisfactory quality of life. The treatments given included several surgeries, immunotherapy, and sequentially administered sorafenib, sunitinib, and everolimus regimens. In the course of mRCC treatment, well-planned surgeries, effective sequential targeted therapies and close follow-up are all of great importance for optimal management and a satisfactory outcome.

  4. Pipe degradation investigations for optimization of flow-accelerated corrosion inspection location selection

    International Nuclear Information System (INIS)

    Chandra, S.; Habicht, P.; Chexal, B.; Mahini, R.; McBrine, W.; Esselman, T.; Horowitz, J.

    1995-01-01

    A large amount of piping in a typical nuclear power plant is susceptible to Flow-Accelerated Corrosion (FAC) wall thinning to varying degrees. A typical FAC monitoring program includes the wall thickness measurement of a select number of components in order to judge the structural integrity of entire systems. In order to appropriately allocate resources and maintain an adequate FAC program, it is necessary to optimize the selection of components for inspection by focusing on those components which provide the best indication of system susceptibility to FAC. A better understanding of system FAC predictability and the types of FAC damage encountered can provide some of the insight needed to better focus and optimize the inspection plan for an upcoming refueling outage. Laboratory examination of FAC-damaged components removed from service at Northeast Utilities' (NU) nuclear power plants provides a better understanding of the damage mechanisms involved and contributing causes. Selected results of this ongoing study are presented with specific conclusions which will help NU to better focus inspections and thus optimize the ongoing FAC inspection program.

  5. Selection of magnetorheological brake types via optimal design considering maximum torque and constrained volume

    International Nuclear Information System (INIS)

    Nguyen, Q H; Choi, S B

    2012-01-01

    This research focuses on optimal design of different types of magnetorheological brakes (MRBs), from which an optimal selection of MRB types is identified. In the optimization, common types of MRB such as disc-type, drum-type, hybrid-types, and T-shaped type are considered. The optimization problem is to find the optimal value of significant geometric dimensions of the MRB that can produce a maximum braking torque. The MRB is constrained in a cylindrical volume of a specific radius and length. After a brief description of the configuration of MRB types, the braking torques of the MRBs are derived based on the Herschel–Bulkley model of the MR fluid. The optimal design of MRBs constrained in a specific cylindrical volume is then analysed. The objective of the optimization is to maximize the braking torque while the torque ratio (the ratio of maximum braking torque and the zero-field friction torque) is constrained to be greater than a certain value. A finite element analysis integrated with an optimization tool is employed to obtain optimal solutions of the MRBs. Optimal solutions of MRBs constrained in different volumes are obtained based on the proposed optimization procedure. From the results, discussions on the optimal selection of MRB types depending on constrained volumes are given. (paper)

  6. On the diversity enhancement and power balancing of per-subcarrier transmit antenna selection in OFDM systems

    KAUST Repository

    Park, Kihong

    2011-01-01

    In this paper, we consider multicarrier systems with multiple transmit antennas under a power-balancing constraint. Applying transmit antenna selection and discrete rate-adaptive modulation using M-ary quadrature-amplitude modulation (QAM) according to the channel variation per subcarrier, we develop an optimal transmit antenna selection scheme in terms of the maximum spectral efficiency, where all the possible groupings for sending the same information-bearing signals in a group of subcarriers are searched, and the groups of subcarriers for providing the frequency diversity gain are formed. In addition, we propose a suboptimal method for reducing the computational complexity of the optimal method. The suboptimal scheme considers only the subcarriers under outage, and these subcarriers are sequentially combined until the required signal-to-noise ratio (SNR) is met. Numerical results show that the proposed suboptimal method with diversity combining outperforms the optimal antenna selection without diversity combining, as introduced in the work of Sandell and Coon, particularly for low-SNR regions, and offers the spectral efficiency close to the optimal method with diversity combining while maintaining lower complexity. © 2011 IEEE.

  7. The stock selection problem: Is the stock selection approach more important than the optimization method? Evidence from the Danish stock market

    OpenAIRE

    Grobys, Klaus

    2011-01-01

    Passive investment strategies basically aim to replicate an underlying benchmark. Thereby, management usually selects a subset of stocks to be employed in the optimization procedure. Apart from the optimization procedure, the stock selection approach determines the stock portfolio's out-of-sample performance. The empirical study here takes into account the Danish stock market from 2000-2010 and gives evidence that stock portfolios including small companies' stocks being estimated via coin...

  8. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  9. Collaborative Filtering Based on Sequential Extraction of User-Item Clusters

    Science.gov (United States)

    Honda, Katsuhiro; Notsu, Akira; Ichihashi, Hidetomo

    Collaborative filtering is a computational realization of “word-of-mouth” in network communities, in which the items preferred by “neighbors” are recommended. This paper proposes a new item-selection model for extracting user-item clusters from rectangular relation matrices, in which mutual relations between users and items are denoted in an alternative process of “liking or not”. A technique for sequential co-cluster extraction from rectangular relational data is given by combining the structural-balancing-based user-item clustering method with a sequential fuzzy cluster extraction approach. Then, the technique is applied to the collaborative filtering problem, in which some items may be shared by several user clusters.

  10. A Feedback Optimal Control Algorithm with Optimal Measurement Time Points

    Directory of Open Access Journals (Sweden)

    Felix Jost

    2017-02-01

    Full Text Available Nonlinear model predictive control has been established as a powerful methodology for providing feedback for dynamic processes over the last decades. In practice it is usually combined with parameter and state estimation techniques, which make it possible to cope with uncertainty on many levels. To reduce the uncertainty, it has also been suggested to include optimal experimental design in the sequential process of estimation and control calculation. Most of the focus so far has been on dual control approaches, i.e., on using the controls to simultaneously excite the system dynamics (learning) as well as minimize a given objective (performing). We propose a new algorithm which sequentially solves robust optimal control, optimal experimental design, and state and parameter estimation problems. Thus, we decouple the control and the experimental design problems. This has the advantages that we can analyze the impact of measurement timing (sampling) independently, and it is practically relevant for applications with either an ethical limitation on system excitation (e.g., chemotherapy treatment) or the need for fast feedback. The algorithm shows promising results, with a 36% reduction of parameter uncertainties for the Lotka-Volterra fishing benchmark example.

  11. Simultaneous Versus Sequential Complementarity in the Adoption of Technological and Organizational Innovations

    DEFF Research Database (Denmark)

    Battisti, Giuliana; Rabbiosi, Larissa; Colombo, Massimo G.

    2015-01-01

    It is generally suggested that technological and organizational innovations, being complementary, need to be adopted simultaneously. Nevertheless, sequential rather than simultaneous adoption of these two types of innovation may be optimal. In this paper, we analyze the pattern of mutual causation of technological and organizational innovations and contribute to the understanding of their interdependencies.

  12. Trip Travel Time Forecasting Based on Selective Forgetting Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Zhiming Gui

    2014-01-01

    Full Text Available Travel time estimation on road networks is a valuable traffic metric. In this paper, we propose a machine learning based method for trip travel time estimation in road networks. The method uses historical trip information extracted from taxi trace data as the training data. An optimized online sequential extreme learning machine, the selective forgetting extreme learning machine, is adopted to make the prediction. Its selective forgetting ability enables the prediction algorithm to adapt well to changes in trip conditions. Experimental results using real-life taxi trace data show that the forecasting model provides an effective and practical way of forecasting travel time.
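
    A minimal batch extreme learning machine on synthetic data, to show the mechanism the paper's online, selective-forgetting variant builds on (that variant additionally updates the readout weights recursively and discounts old samples); the features and targets below are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy trip features and travel times.
X = rng.uniform(0, 1, (500, 3))
y = 10 * X[:, 0] + 5 * np.sin(6 * X[:, 1]) + rng.normal(0, 0.3, 500)

n_hidden = 50
W = rng.normal(size=(3, n_hidden))                 # random, never trained
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                             # random feature map
beta, *_ = np.linalg.lstsq(H, y, rcond=None)       # closed-form readout

pred = np.tanh(X @ W + b) @ beta
print("training RMSE:", np.sqrt(((pred - y) ** 2).mean()))
```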

  13. [Professor GAO Yuchun's experience on "sequential acupuncture leads to smooth movement of qi"].

    Science.gov (United States)

    Wang, Yanjun; Xing, Xiao; Cui, Linhua

    2016-01-01

    Professor GAO Yuchun is considered the key successor of GAO's academic school of acupuncture and moxibustion in the Yanzhao region. Professor GAO's clinical experience of "sequential acupuncture" is introduced in detail in this article. In Professor GAO's opinion, an appropriate acupuncture sequence is the key to satisfactory clinical effects during treatment. Based on different acupoints, sequential acupuncture can achieve the aim of qi following needles and needles leading qi; based on different symptoms, sequential acupuncture can regulate qi movement; based on different body positions, sequential acupuncture can harmonize qi-blood and reinforce deficiency and reduce excess. In all, according to the differences in disease condition and constitution, based on accurate acupoint selection and appropriate manipulation, it is essential to capture the nature of the disease and set the order of acupuncture, which can achieve the aim of regulating qi movement and reinforcing deficiency and reducing excess.

  14. Hybrid collaborative optimization based on selection strategy of initial point and adaptive relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Aimin; Yin, Xu; Yuan, Minghai [Hohai University, Changzhou (China)

    2015-09-15

    There are two problems in collaborative optimization (CO): (1) the local optima arising from the selection of an inappropriate initial point; (2) the low efficiency and accuracy rooted in inappropriate relaxation factors. To solve these problems, we first develop the Latin hypercube design (LHD) to determine an initial point of optimization, and then use non-linear programming by quadratic Lagrangian (NLPQL) to search for the global solution. The effectiveness of the initial point selection strategy is verified on three benchmark functions of various dimensions and complexities. Then we propose the Adaptive relaxation collaborative optimization (ARCO) algorithm to solve the inconsistency between the system level and the discipline level; in this method, the relaxation factors are determined according to the three separate stages of CO respectively. The performance of the ARCO algorithm is compared with the standard collaborative algorithm and the constant relaxation collaborative algorithm on a typical numerical example, which indicates that the ARCO algorithm is more efficient and accurate. Finally, we propose a Hybrid collaborative optimization (HCO) approach, which integrates the selection strategy of the initial point with the ARCO algorithm. The results show that HCO can achieve the global optimal solution without the initial value, and it also has advantages in convergence, accuracy and robustness. Therefore, the proposed HCO approach can solve CO problems with applications in the spindle and the speed reducer.

  15. Hybrid collaborative optimization based on selection strategy of initial point and adaptive relaxation

    International Nuclear Information System (INIS)

    Ji, Aimin; Yin, Xu; Yuan, Minghai

    2015-01-01

    There are two problems in collaborative optimization (CO): (1) the local optima arising from the selection of an inappropriate initial point; (2) the low efficiency and accuracy rooted in inappropriate relaxation factors. To solve these problems, we first develop the Latin hypercube design (LHD) to determine an initial point of optimization, and then use non-linear programming by quadratic Lagrangian (NLPQL) to search for the global solution. The effectiveness of the initial point selection strategy is verified on three benchmark functions of various dimensions and complexities. Then we propose the Adaptive relaxation collaborative optimization (ARCO) algorithm to solve the inconsistency between the system level and the discipline level; in this method, the relaxation factors are determined according to the three separate stages of CO respectively. The performance of the ARCO algorithm is compared with the standard collaborative algorithm and the constant relaxation collaborative algorithm on a typical numerical example, which indicates that the ARCO algorithm is more efficient and accurate. Finally, we propose a Hybrid collaborative optimization (HCO) approach, which integrates the selection strategy of the initial point with the ARCO algorithm. The results show that HCO can achieve the global optimal solution without the initial value, and it also has advantages in convergence, accuracy and robustness. Therefore, the proposed HCO approach can solve CO problems with applications in the spindle and the speed reducer.

  16. Selective waste collection optimization in Romania and its impact to urban climate

    Science.gov (United States)

    Mihai, Šercǎianu; Iacoboaea, Cristina; Petrescu, Florian; Aldea, Mihaela; Luca, Oana; Gaman, Florian; Parlow, Eberhard

    2016-08-01

    According to European Directives, transposed into national legislation, the Member States should have organized separate collection systems at least for paper, metal, plastic, and glass by 2015. In Romania, as of 2011 only 12% of collected municipal waste was recovered, the rest being stored in landfills, although storage is considered the last option in the waste hierarchy. At the same time, only 4% of municipal waste was collected selectively. Surveys have shown that Romanian people do not have selective collection bins close to their residences. The article aims to analyze the current situation in Romania in the field of waste collection and management and to propose a layout of selective collection containers, using geographic information systems tools, for a case study in Romania. Route optimization is applied based on remote sensing technologies and network analyst protocols. By optimizing the selective collection system, greenhouse gas, particle, and dust emissions can be reduced.

  17. Strategies for Optimal Design of Structural Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1992-01-01

    Reliability-based design of structural systems is considered. Especially systems where the reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...

  18. Sequential Functionalization of Alkynes and Alkenes Catalyzed by Gold(I) and Palladium(II) N-Heterocyclic Carbene Complexes

    KAUST Repository

    Gó mez-Herrera, Alberto; Nahra, Fady; Brill, Marcel; Nolan, Steven P.; Cazin, Catherine S. J.

    2016-01-01

    The iodination of terminal alkynes for the synthesis of 1-iodoalkynes using N-iodosuccinimide in the presence of a AuI-NHC (NHC=N-heterocyclic carbene) catalyst is reported. A series of aromatic alkynes was transformed successfully into the corresponding 1-iodoalkynes in good to excellent yields under mild reaction conditions. The further use of these compounds as organic building blocks and the advantageous choice of metal-NHC complexes as catalysts for alkyne functionalization were further demonstrated by performing selective AuI-catalyzed hydrofluorination to yield (Z)-2-fluoro-1-iodoalkenes, followed by a Suzuki–Miyaura cross-coupling with aryl boronic acids catalyzed by a PdII-NHC complex to access trisubstituted (Z)-fluoroalkenes. All methodologies can be performed sequentially with only minor variations in the optimized individual reaction conditions, maintaining high efficiency and selectivity in all cases, which therefore, provides straightforward access to valuable fluorinated alkenes from commercially available terminal alkynes.

  19. Sequential Functionalization of Alkynes and Alkenes Catalyzed by Gold(I) and Palladium(II) N-Heterocyclic Carbene Complexes

    KAUST Repository

    Gómez-Herrera, Alberto

    2016-08-22

    The iodination of terminal alkynes for the synthesis of 1-iodoalkynes using N-iodosuccinimide in the presence of a AuI-NHC (NHC=N-heterocyclic carbene) catalyst is reported. A series of aromatic alkynes was transformed successfully into the corresponding 1-iodoalkynes in good to excellent yields under mild reaction conditions. The further use of these compounds as organic building blocks and the advantageous choice of metal-NHC complexes as catalysts for alkyne functionalization were further demonstrated by performing selective AuI-catalyzed hydrofluorination to yield (Z)-2-fluoro-1-iodoalkenes, followed by a Suzuki–Miyaura cross-coupling with aryl boronic acids catalyzed by a PdII-NHC complex to access trisubstituted (Z)-fluoroalkenes. All methodologies can be performed sequentially with only minor variations in the optimized individual reaction conditions, maintaining high efficiency and selectivity in all cases, which therefore, provides straightforward access to valuable fluorinated alkenes from commercially available terminal alkynes.

  20. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
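
    The single-agent building block here is Wald's sequential probability ratio test; a minimal sketch deciding between two Gaussian hypotheses, with invented error targets and distributions:

```python
import numpy as np

rng = np.random.default_rng(8)

# SPRT for H0: N(0,1) vs H1: N(1,1) from sequential observations.
alpha, beta = 0.01, 0.01
upper = np.log((1 - beta) / alpha)     # accept H1 above this
lower = np.log(beta / (1 - alpha))     # accept H0 below this

llr, n = 0.0, 0
while lower < llr < upper:
    x = rng.normal(loc=1.0)            # data truly drawn from H1 here
    llr += x - 0.5                     # log LR of N(1,1) vs N(0,1) is x - 1/2
    n += 1

print("accept H1" if llr >= upper else "accept H0", "after", n, "samples")
```

    The multi-agent framework in the abstract layers coordination costs (measurement, delay, disagreement) on top of this stopping problem.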

  1. Sequential capillary electrophoresis analysis using optically gated sample injection and UV/vis detection.

    Science.gov (United States)

    Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li

    2015-10-01

    We present sequential CE analysis of amino acids and an L-asparaginase-catalyzed enzyme reaction, by combining on-line derivatization, optically gated (OG) injection and commercially available UV-Vis detection. Various experimental conditions for sequential OG-UV/vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (for asparagine) and 2.0 μM (for aspartic acid) were obtained. With the application of OG-UV/vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring the substrate consumption and the product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and were found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrated the feasibility and reliability of integrating OG injection with UV/vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    Science.gov (United States)

    Mahalakshmi; Murugesan, R.

    2018-04-01

    This paper addresses the minimization of the total cost of greenhouse gas (GHG) emission in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of GHG emission in the AS/RS. A two-stage algorithm, the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, which shows that the proposed algorithm yields better results.

  3. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    Science.gov (United States)

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
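
    A static sketch of why unequal randomization rates can help: for comparing two arm means, allocating patients in proportion to the arm standard deviations (Neyman allocation) minimizes the variance of the mean difference for a fixed total. The paper's Bayesian design would re-estimate such rates from posteriors as data accrue; the numbers below are invented:

```python
# Variance-minimizing allocation for comparing two arm means:
# n1/n2 = sigma1/sigma2 minimizes Var(xbar1 - xbar2) for fixed n.
sigma1, sigma2, n_total = 2.0, 1.0, 300

rate = sigma1 / (sigma1 + sigma2)          # randomization rate to arm 1
n1 = round(n_total * rate)
n2 = n_total - n1
var = sigma1 ** 2 / n1 + sigma2 ** 2 / n2
print(f"allocate {n1}/{n2}; Var(diff) = {var:.4f}")

# Equal allocation for comparison:
var_eq = sigma1 ** 2 / (n_total / 2) + sigma2 ** 2 / (n_total / 2)
print(f"equal split Var(diff) = {var_eq:.4f}")
```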

  4. Sequential charged particle reaction

    International Nuclear Information System (INIS)

    Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

    2004-01-01

    The effective cross sections for producing sequential reaction products in F82H, pure vanadium and LiF with respect to 14.9-MeV neutrons were obtained and compared with estimated ones. Since sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental ones; there were large discrepancies between the estimated and experimental values. Additionally, we showed the contribution of sequential reactions to the induced activity and dose rate in the boundary region with water. The present study clarifies that sequential reactions are of great importance for evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)

  5. Optimal portfolio selection for general provisioning and terminal wealth problems

    NARCIS (Netherlands)

    van Weert, K.; Dhaene, J.; Goovaerts, M.

    2010-01-01

    In Dhaene et al. (2005), multiperiod portfolio selection problems are discussed, using an analytical approach to find optimal constant mix investment strategies in a provisioning or a savings context. In this paper we extend some of these results, investigating some specific, real-life situations.

  6. Optimal portfolio selection for general provisioning and terminal wealth problems

    NARCIS (Netherlands)

    van Weert, K.; Dhaene, J.; Goovaerts, M.

    2009-01-01

    In Dhaene et al. (2005), multiperiod portfolio selection problems are discussed, using an analytical approach to find optimal constant mix investment strategies in a provisioning or savings context. In this paper we extend some of these results, investigating some specific, real-life situations. The

  7. Selection of an optimal neural network architecture for computer-aided detection of microcalcifications - Comparison of automated optimization techniques

    International Nuclear Information System (INIS)

    Gurcan, Metin N.; Sahiner, Berkman; Chan Heangping; Hadjiiski, Lubomir; Petrick, Nicholas

    2001-01-01

    Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area A_z under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal number of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzman schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost
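
    The CNN training itself is outside the scope of the record; below is a minimal sketch of simulated annealing over a discrete architecture grid of the kind described, with an illustrative placeholder standing in for 1 - A_z of a trained network and a simplified geometric cooling schedule rather than the paper's Boltzman schedule.

```python
import math, random

# Illustrative stand-in for the 432-architecture grid in the abstract:
# (node groups, kernel size) for each of the two hidden layers.
SPACE = [(g1, k1, g2, k2)
         for g1 in (2, 4, 6) for k1 in (3, 5, 7, 9)
         for g2 in (2, 4, 6) for k2 in (3, 5, 7)]

def cost(arch):
    # Placeholder for 1 - A_z of the trained CNN; a real run would train
    # the network for each architecture and evaluate the ROC area.
    g1, k1, g2, k2 = arch
    return 0.05 * abs(g1 - 4) + 0.03 * abs(k1 - 7) + 0.04 * abs(g2 - 2) + 0.02 * abs(k2 - 5)

def neighbor(arch):
    # Perturb one coordinate by borrowing it from a random grid point.
    i = random.randrange(4)
    donor = random.choice(SPACE)
    return arch[:i] + (donor[i],) + arch[i + 1:]

def anneal(t0=1.0, cooling=0.95, steps=500):
    x = random.choice(SPACE)
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        # Accept improvements always, worse moves with Boltzmann probability.
        if cost(y) <= cost(x) or random.random() < math.exp((cost(x) - cost(y)) / t):
            x = y
        t *= cooling   # geometric schedule (simpler than the paper's)
    return x

print(anneal())   # toy optimum is (4, 7, 2, 5)
```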

  8. Joint Optimization of Receiver Placement and Illuminator Selection for a Multiband Passive Radar Network.

    Science.gov (United States)

    Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin

    2017-06-14

    The performance of a passive radar network can be greatly improved by an optimal network structure. Generally, radar network structure optimization consists of two aspects, namely placing receivers in suitable locations and selecting appropriate illuminators. The present study investigates the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm, the key step of which is solving the partition set covering problem (PSCP); the PSCP is handled by a hybrid algorithm coupling convex optimization with a greedy dropping algorithm. Finally, the performance of the proposed algorithm is validated via numerical simulations.
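
    The paper's exact detection model is not given in the record; the sketch below illustrates only the outer bisection on the RCS threshold with a greedy covering heuristic as the inner feasibility check, using a hypothetical range-squared stand-in for the bistatic radar equation.

```python
import math

# Hypothetical 2-D geometry: candidate receiver sites and a surveillance grid.
SITES = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]
GRID = [(x, y) for x in range(0, 11, 2) for y in range(0, 11, 2)]

def required_rcs(site, pt):
    # Placeholder detectability model: required RCS grows as range^2
    # (the paper would use the bistatic radar equation instead).
    return math.dist(site, pt) ** 2

def feasible(threshold, max_receivers):
    """Inner covering problem (PSCP): can max_receivers sites cover every
    grid point at this RCS threshold? Checked with a greedy heuristic."""
    covers = [{i for i, p in enumerate(GRID) if required_rcs(s, p) <= threshold}
              for s in SITES]
    uncovered, used = set(range(len(GRID))), 0
    while uncovered and used < max_receivers:
        best = max(covers, key=lambda c: len(c & uncovered))
        if not best & uncovered:
            return False
        uncovered -= best
        used += 1
    return not uncovered

def bisect_rcs(max_receivers=3, iters=40):
    """Outer bisection on the minimax required RCS (the PPCP objective)."""
    lo, hi = 0.0, max(required_rcs(s, p) for s in SITES for p in GRID)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if feasible(mid, max_receivers):
            hi = mid   # threshold achievable: try a more demanding one
        else:
            lo = mid
    return hi

print(f"minimax required RCS with 3 receivers: {bisect_rcs():.2f}")
```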

  9. Optimal relay selection and power allocation for cognitive two-way relaying networks

    KAUST Repository

    Pandarakkottilil, Ubaidulla

    2012-06-01

    In this paper, we present an optimal scheme for power allocation and relay selection in a cognitive radio network where a pair of cognitive (or secondary) transceiver nodes communicate with each other assisted by a set of cognitive two-way relays. The secondary nodes share the spectrum with a licensed primary user (PU), and each node is assumed to be equipped with a single transmit/receive antenna. The interference to the PU resulting from the transmission from the cognitive nodes is kept below a specified limit. We propose joint relay selection and optimal power allocation among the secondary user (SU) nodes achieving maximum throughput under transmit power and PU interference constraints. A closed-form solution for optimal allocation of transmit power among the SU transceivers and the SU relay is presented. Furthermore, numerical simulations and comparisons are presented to illustrate the performance of the proposed scheme. © 2012 IEEE.

  10. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

    Science.gov (United States)

    Dobolyi, David G; Dodson, Chad S

    2013-12-01

    Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  11. Band Subset Selection for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Chunyan Yu

    2018-01-01

    Full Text Available This paper develops a new approach to band subset selection (BSS) for hyperspectral image classification (HSIC) which selects multiple bands simultaneously as a band subset, referred to as simultaneous multiple band selection (SMMBS), rather than one band at a time sequentially, referred to as sequential multiple band selection (SQMBS), as most traditional band selection methods do. In doing so, a criterion is particularly developed for BSS that can be used for HSIC. It is a linearly constrained minimum variance (LCMV) criterion derived from adaptive beamforming in array signal processing which can be used to model misclassification errors as the minimum variance. To avoid an exhaustive search over all possible band subsets, two numerical algorithms, referred to as sequential (SQ) and successive (SC) algorithms, are also developed for LCMV-based SMMBS, called SQ LCMV-BSS and SC LCMV-BSS. Experimental results demonstrate that LCMV-based BSS has advantages over SQMBS.
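
    The LCMV formulation is not spelled out in the record; the numpy sketch below scores every fixed-size band subset at once (the SMMBS idea) by the minimum variance f'(C'R^-1 C)^-1 f attained by the LCMV solution w = R^-1 C (C'R^-1 C)^-1 f. The data, signature matrix and subset size are random stand-ins.

```python
import numpy as np
from itertools import combinations

def lcmv_min_variance(R, C, f):
    # Minimum of w' R w subject to C' w = f; attained at
    # w = R^-1 C (C' R^-1 C)^-1 f with value f' (C' R^-1 C)^-1 f,
    # used here (per the abstract) to model misclassification error.
    RinvC = np.linalg.solve(R, C)
    return float(f @ np.linalg.solve(C.T @ RinvC, f))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # 200 pixels x 8 bands (toy cube)
S = rng.normal(size=(8, 2))          # band x class signature matrix
f = np.array([1.0, 0.0])             # pass class 1, null class 2
R = np.cov(X, rowvar=False) + 1e-6 * np.eye(8)

# Simultaneous selection: evaluate every 3-band subset as a whole rather
# than growing the subset one band at a time.
best = min(combinations(range(8), 3),
           key=lambda b: lcmv_min_variance(R[np.ix_(b, b)], S[list(b), :], f))
print("selected band subset:", best)
```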

  12. Optimal Licensing Contracts with Adverse Selection and Informational Rents

    Directory of Open Access Journals (Sweden)

    Daniela MARINESCU

    2011-06-01

    Full Text Available In the paper we analyse a model for determining the optimal licensing contract in both situations of symmetric and asymmetric information between the license’s owner and the potential buyer. Next we present another way of solving the corresponding adverse selection model, using the informational rents as variables. This approach is different from that of Macho-Stadler and Perez-Castrillo.

  13. Optimal Subinterval Selection Approach for Power System Transient Stability Simulation

    Directory of Open Access Journals (Sweden)

    Soobae Kim

    2015-10-01

    Full Text Available Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. The performance of the proposed method is demonstrated with the GSO 37-bus system.

  14. Thermodynamic performance analysis of sequential Carnot cycles using heat sources with finite heat capacity

    International Nuclear Information System (INIS)

    Park, Hansaem; Kim, Min Soo

    2014-01-01

    The maximum efficiency of a heat engine can be estimated using a Carnot cycle. However, although the Carnot cycle serves very well as an efficiency reference, its application is limited to the case of infinite heat reservoirs, which is not realistic. Moreover, considering that a key current issue is producing maximum work from low-temperature, finite heat sources, i.e. renewable energy sources, more advanced theoretical cycles that can provide a new standard, and research about them, are necessary. Therefore, in this paper, a sequential Carnot cycle, where multiple Carnot cycles are connected in parallel, is studied. The cycle adopts a finite heat source, which has a certain initial temperature and heat capacity, and an infinite heat sink, which is assumed to be ambient air. Heat transfer processes in the cycle occur with the temperature difference between a heat reservoir and the cycle. To resolve the heat transfer rate in those processes, the product of an overall heat transfer coefficient and a heat transfer area is introduced. Using these conditions, the performance of a sequential Carnot cycle is calculated analytically. Furthermore, to enhance the work output of the cycle, an optimization study is also conducted numerically. - Highlights: • Modified sequential Carnot cycles are proposed for evaluating low grade heat sources. • Performance of sequential Carnot cycles is calculated analytically. • Optimization study for the cycle is conducted with numerical solver. • Maximum work from a heat source under a certain condition is obtained by equations
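
    The paper's UA-limited heat transfer model is not reproduced in the record; the sketch below isolates the staging effect under ideal in-stage heat exchange. Each cycle's hot side is pinned at its stage's final source temperature so heat always flows downhill; a single cycle then extracts nothing, while many staged cycles approach the reversible-work limit C[(Ts - T0) - T0 ln(Ts/T0)] of a finite-capacity source.

```python
import numpy as np

def sequential_carnot_work(T_source, T0, C, n_cycles):
    """Work from n Carnot cycles run in sequence while a finite source of
    heat capacity C cools from T_source to ambient T0 in equal steps."""
    temps = np.linspace(T_source, T0, n_cycles + 1)
    work = 0.0
    for Th, Tc in zip(temps[:-1], temps[1:]):
        q = C * (Th - Tc)            # heat drawn from the source this stage
        work += q * (1.0 - T0 / Tc)  # Carnot efficiency with hot side at Tc
    return work

T_source, T0, C = 400.0, 300.0, 1.0  # K, K, kJ/K (illustrative values)
for n in (1, 2, 10, 100):
    print(n, round(sequential_carnot_work(T_source, T0, C, n), 2))
# Reversible (infinitely staged) limit: C[(Ts - T0) - T0*ln(Ts/T0)]
print(round(C * ((T_source - T0) - T0 * np.log(T_source / T0)), 2))
```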

  15. A Sequential Statistical Approach towards an Optimized Production of a Broad Spectrum Bacteriocin Substance from a Soil Bacterium Bacillus sp. YAS 1 Strain

    Directory of Open Access Journals (Sweden)

    Amira M. Embaby

    2014-01-01

    Full Text Available Bacteriocins, ribosomally synthesized antimicrobial peptides, display potential applications in agriculture, medicine, and industry. The present study highlights integral statistical optimization and partial characterization of a bacteriocin substance from a soil bacterium taxonomically affiliated as Bacillus sp. YAS 1 after biochemical and molecular identification. A sequential statistical approach (Plackett-Burman and Box-Behnken) was employed to optimize bacteriocin (BAC YAS 1) production. Using optimal levels of three key determinants (yeast extract (0.48% (w/v)), incubation time (62 h), and agitation speed (207 rpm)) in a peptone yeast beef based production medium resulted in a 1.6-fold enhancement in BAC YAS 1 level (470 arbitrary units (AU)/mL) against Erwinia amylovora. BAC YAS 1 showed activity over a wide range of pH (1–13) and temperature (45–80°C). A wide spectrum of antimicrobial activity of BAC YAS 1 against the human pathogens (Clostridium perfringens, Staphylococcus epidermidis, Campylobacter jejuni, Enterobacter aerogenes, Enterococcus sp., Proteus sp., Klebsiella sp., and Salmonella typhimurium), the plant pathogen (E. amylovora), and the food spoiler (Listeria innocua) was demonstrated. Moreover, BAC YAS 1 showed no antimicrobial activity towards lactic acid bacteria (Lactobacillus bulgaricus, L. casei, L. lactis, and L. reuteri). The promising characteristics of BAC YAS 1 prompt its commercialization for efficient utilization in several industries.

  16. Iteration particle swarm optimization for contract capacities selection of time-of-use rates industrial customers

    International Nuclear Information System (INIS)

    Lee, Tsung-Ying; Chen, Chun-Lung

    2007-01-01

    This paper presents a new algorithm, named iteration particle swarm optimization (IPSO), for solving the optimal contract capacities of a time-of-use (TOU) rates industrial customer. A new index, called the iteration best, is incorporated into particle swarm optimization (PSO) to improve solution quality and computation efficiency. Expanding line construction cost and contract recovery cost are considered, as well as demand contract capacity cost and the penalty bill, in the selection of the optimal contract capacities. The resulting optimal contract capacity effectively minimizes the electricity charge of TOU rates users, and a significant reduction in electricity costs is observed. The effects of expanding line construction cost and contract recovery cost on the selection of optimal contract capacities can also be estimated. The feasibility of the new algorithm is demonstrated by a numerical example, and the IPSO solution quality and computation efficiency are compared to those of other algorithms
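
    The record does not give IPSO's update rule; the sketch below adds an 'iteration best' attractor to the standard PSO velocity update as one plausible reading, with a convex toy objective standing in for the electricity-charge calculation. All coefficients are hypothetical.

```python
import random

def ipso_minimize(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, c3=1.0):
    """PSO with an extra 'iteration best' attractor (the best position found
    in the current iteration); the paper's exact rule is not reproduced."""
    x = [random.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest = x[:]
    gbest = min(x, key=f)
    for _ in range(iters):
        ibest = min(x, key=f)                      # iteration best
        for i in range(n):
            v[i] = (w * v[i]
                    + c1 * random.random() * (pbest[i] - x[i])
                    + c2 * random.random() * (gbest - x[i])
                    + c3 * random.random() * (ibest - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)   # keep inside the bounds
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
        gbest = min(gbest, min(pbest, key=f), key=f)
    return gbest

# Toy stand-in for the electricity-charge objective: a convex bowl whose
# minimum plays the role of the optimal contract capacity (in kW).
print(ipso_minimize(lambda c: (c - 480.0) ** 2 + 1e4, 100.0, 1000.0))
```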

  17. Double tracer autoradiographic method for sequential evaluation of regional cerebral perfusion

    International Nuclear Information System (INIS)

    Matsuda, H.; Tsuji, S.; Oba, H.; Kinuya, K.; Terada, H.; Sumiya, H.; Shiba, K.; Mori, H.; Hisada, K.; Maeda, T.

    1989-01-01

    A new double tracer autoradiographic method for the sequential evaluation of altered regional cerebral perfusion in the same animal is presented. This method is based on the sequential injection of two tracers, 99mTc-hexamethylpropyleneamine oxime and N-isopropyl-(125I)p-iodoamphetamine. This method is validated in the assessment of brovincamine effects on regional cerebral perfusion in an experimental model of chronic brain ischemia in the rat. The drug enhanced perfusion recovery in low-flow areas, selectively in surrounding areas of infarction. The results suggest that this technique is of potential use in the study of neuropharmacological effects applied during the experiment

  18. Optimal processing pathway selection for microalgae-based biorefinery under uncertainty

    DEFF Research Database (Denmark)

    Rizwan, Muhammad; Zaman, Muhammad; Lee, Jay H.

    2015-01-01

    We propose a systematic framework for the selection of optimal processing pathways for a microalgae-based biorefinery under techno-economic uncertainty. The proposed framework promotes robust decision making by taking into account the uncertainties that arise due to inconsistencies among...... and shortage in the available technical information. A stochastic mixed integer nonlinear programming (sMINLP) problem is formulated for determining the optimal biorefinery configurations based on a superstructure model where parameter uncertainties are modeled and included as sampled scenarios. The solution...... the accounting of uncertainty are compared with respect to different objectives. (C) 2015 Elsevier Ltd. All rights reserved....

  19. Optimisation of beryllium-7 gamma analysis following BCR sequential extraction

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, A. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Blake, W.H., E-mail: wblake@plymouth.ac.uk [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Keith-Roach, M.J. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Kemakta Konsult, Stockholm (Sweden)

    2012-03-30

    Graphical abstract: Showing the decrease in analytical uncertainty using the optimal (combined preconcentrated sample extract) method; nv (no value) where extract activities were ... Highlights: • Sequential extraction with natural 7Be returns high analytical uncertainties. • Preconcentrating extracts from a large sample mass improved analytical uncertainty. • This optimised method can be readily employed in studies using low activity samples. - Abstract: The application of cosmogenic 7Be as a sediment tracer at the catchment-scale requires an understanding of its geochemical associations in soil to underpin the assumption of irreversible adsorption. Sequential extractions offer a readily accessible means of determining the associations of 7Be with operationally defined soil phases. However, the subdivision of the low activity concentrations of fallout 7Be in soils into geochemical fractions can introduce high gamma counting uncertainties. Extending analysis time significantly is not always an option for batches of samples, owing to the on-going decay of 7Be (t1/2 = 53.3 days). Here, three different methods of preparing and quantifying 7Be extracted using the optimised BCR three-step scheme have been evaluated and compared with a focus on reducing analytical uncertainties. The optimal method involved carrying out the BCR extraction in triplicate, sub-sampling each set of triplicates for stable Be analysis before combining each set and coprecipitating the 7Be with metal oxyhydroxides to produce a thin source for gamma analysis. This method was applied to BCR extractions of natural 7Be in four agricultural soils. The approach gave good counting statistics from a 24 h analysis period (~10% (2

  20. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification.

    Science.gov (United States)

    Zhang, Yong; Gong, Dun-Wei; Cheng, Jian

    2017-01-01

    Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are some situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.

  1. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  2. Recovery of Cobalt as Cobalt Oxalate from Cobalt Tailings Using Moderately Thermophilic Bioleaching Technology and Selective Sequential Extraction

    Directory of Open Access Journals (Sweden)

    Guobao Chen

    2016-07-01

    Full Text Available Cobalt is a very important metal which is widely applied in various critical areas; however, it is difficult to recover cobalt from minerals since there is a lack of independent cobalt deposits in nature. This work provides a complete process to recover cobalt from cobalt tailings using moderately thermophilic bioleaching technology and selective sequential extraction. It is found that 96.51% Co and 26.32% Cu were extracted after bioleaching for four days at 10% pulp density. The mean compositions of the leach solutions contain 0.98 g·L−1 of Co, 6.52 g·L−1 of Cu, and 24.57 g·L−1 of Fe(III). The copper ions were then recovered by a solvent extraction process and the ferric ions were selectively removed by applying a goethite deironization process. The technological conditions of the above purification procedures are discussed in detail. Over 98.6% of the copper and 99.9% of the ferric ions were eliminated from the leaching liquor. Cobalt was finally produced as cobalt oxalate, and its overall recovery during the whole process was greater than 95%. The present bioleaching process provides a useful reference for treating low-grade cobalt ores.

  3. Remarks on sequential designs in risk assessment

    International Nuclear Information System (INIS)

    Seidenfeld, T.

    1982-01-01

    The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. Different kinds of 'statistical inference' are distinguished, and the design problem pursued is the clash between Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are probed in particular. Finally, caveats regarding sequential designs are considered, especially in relation to utilitarianism

  4. Speciation fingerprints of binary mixtures by the optimized sequential two-phase separation

    International Nuclear Information System (INIS)

    Macasek, F.

    1995-01-01

    Separation methods suitable for chemical speciation of radionuclides and metals, and the advantages of the sequential (double) distribution technique, are discussed. The equilibria are relatively easy to control, and the method minimizes adjustment of the matrix composition and therefore also the disturbance of the original (native) state of the elements. The technique may consist of repeated solvent extraction of the sample, or replicate equilibration with a sorbent. The common condition of applicability is a linear separation isotherm of the species, which is mostly a reasonable assumption at trace concentrations. The equations used for simultaneous fitting are written in general form. 1 tab., 1 fig., 2 refs

  5. Extending Data Worth Analyses to Select Multiple Observations Targeting Multiple Forecasts

    DEFF Research Database (Denmark)

    Vilhelmsen, Troels Norvin; Ferre, Ty Paul

    2017-01-01

    In the present study, we extend previous data worth analyses to include: simultaneous selection of multiple new measurements and consideration of multiple forecasts of interest. We show how the suggested approach can be used to optimize data collection. This can be used in a manner that suggests specific...... measurement sets or that produces probability maps indicating areas likely to be informative for specific forecasts. Moreover, we provide examples documenting that sequential measurement selection approaches often lead to suboptimal designs and that estimates of data covariance should be included when

  6. High-efficiency design optimization of a centrifugal pump

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Man Woong; Ma, Sang Bum; Shim, Hyeon Seok; Kim, Kwang Yong [Dept. of Mechanical Engineering, Inha University, Incheon (Korea, Republic of)

    2016-09-15

    Design optimization of a centrifugal pump with backward-curved blades and a specific speed of 150 has been performed to improve the hydraulic performance of the pump, using surrogate modeling and three-dimensional steady Reynolds-averaged Navier-Stokes analysis. The shear stress transport model was used for the analysis of turbulence. Four geometric variables defining the blade hub inlet angle, hub contours, blade outlet angle, and blade angle profile of the impeller were selected as design variables, and the total efficiency of the pump at the design flow rate was set as the objective function for the optimization. Thirty-six design points were chosen using Latin hypercube sampling, and three different surrogate models were constructed using the objective function values calculated at these design points. The optimal point was searched on the constructed surrogate models using sequential quadratic programming. The optimum designs of the centrifugal pump predicted by the surrogate models show considerable increases in efficiency compared to a reference design. The performance of the best optimum design was validated against experimental data for total efficiency and head.
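
    The CFD evaluation cannot be reproduced here; the following sketch of the surrogate-based loop substitutes a cheap analytic placeholder for the RANS efficiency computation, uses an RBF surrogate as one of several possible model choices, and searches it with SciPy's SLSQP implementation of sequential quadratic programming.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def efficiency(x):
    # Placeholder for the RANS/CFD evaluation of pump total efficiency
    # as a function of 4 normalized geometric design variables.
    return -np.sum((x - 0.6) ** 2)

# 36 Latin-hypercube design points in 4 design variables, as in the study.
sampler = qmc.LatinHypercube(d=4, seed=0)
X = sampler.random(36)
y = np.array([efficiency(x) for x in X])

surrogate = RBFInterpolator(X, y)   # one of several possible surrogates

# Search the surrogate with sequential quadratic programming (SLSQP).
res = minimize(lambda x: -surrogate(x[None, :])[0],
               x0=np.full(4, 0.5),
               method="SLSQP",
               bounds=[(0.0, 1.0)] * 4)
print("predicted optimum design:", res.x)
```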

  7. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

    Full Text Available The prediction accuracy of short-term load forecasting (STLF) depends on the choice of prediction model and the feature selection result. In this paper, a novel random forest (RF)-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.
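
    A minimal sketch of the idea using scikit-learn's permutation_importance; the synthetic regression data stand in for the paper's 243 load and calendar features, and the backward search here simply drops one feature per round rather than reproducing the paper's improved variant.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in for historical load + calendar features.
X, y = make_regression(n_samples=400, n_features=20, n_informative=6,
                       noise=5.0, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

features = list(range(X.shape[1]))
best_feats, best_err = features[:], np.inf
while len(features) > 1:
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(Xtr[:, features], ytr)
    err = np.mean((rf.predict(Xte[:, features]) - yte) ** 2)
    if err < best_err:
        best_err, best_feats = err, features[:]
    # Sequential backward step: drop the feature with the lowest
    # permutation-importance value.
    pi = permutation_importance(rf, Xte[:, features], yte,
                                n_repeats=5, random_state=0)
    features.pop(int(np.argmin(pi.importances_mean)))
print(f"{len(best_feats)} features kept, test MSE {best_err:.1f}")
```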

  8. Optimal foraging in marine ecosystem models: selectivity, profitability and switching

    DEFF Research Database (Denmark)

    Visser, Andre W.; Fiksen, Ø.

    2013-01-01

    ecological mechanics and evolutionary logic as a solution to diet selection in ecosystem models. When a predator can consume a range of prey items it has to choose which foraging mode to use, which prey to ignore and which ones to pursue, and animals are known to be particularly skilled in adapting...... to the preference functions commonly used in models today. Indeed, depending on prey class resolution, optimal foraging can yield feeding rates that are considerably different from the ‘switching functions’ often applied in marine ecosystem models. Dietary inclusion is dictated by two optimality choices: 1...... by letting predators maximize energy intake or more properly, some measure of fitness where predation risk and cost are also included. An optimal foraging or fitness maximizing approach will give marine ecosystem models a sound principle to determine trophic interactions...

  9. Selection of the optimal Box-Cox transformation parameter for modelling and forecasting age-specific fertility

    OpenAIRE

    Shang, Han Lin

    2015-01-01

    The Box-Cox transformation can sometimes yield noticeable improvements in model simplicity, variance homogeneity and precision of estimation, such as in modelling and forecasting age-specific fertility. Despite its importance, there have been few studies focusing on the optimal selection of Box-Cox transformation parameters in demographic forecasting. A simple method is proposed for selecting the optimal Box-Cox transformation parameter, along with an algorithm based on an in-sample forecast ...
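
    The record truncates before the method's details; the sketch below contrasts the classical maximum-likelihood choice of lambda with a forecast-accuracy-based grid search in the spirit of the abstract, using a deliberately naive stand-in forecasting model (the training mean on the transformed scale).

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(2)
y = rng.lognormal(mean=1.0, sigma=0.4, size=120)   # positive rate series

# Classical choice: the lambda maximizing the profile log-likelihood.
_, lam_mle = stats.boxcox(y)

def forecast_mse(lam, y, holdout=24):
    # Stand-in forecasting model: predict every holdout point with the
    # training mean computed on the transformed scale, back-transform,
    # and score on the original scale.
    z = stats.boxcox(y[:-holdout], lmbda=lam)
    pred = inv_boxcox(z.mean(), lam)
    return np.mean((y[-holdout:] - pred) ** 2)

grid = np.linspace(-1.0, 2.0, 61)
lam_fc = min(grid, key=lambda l: forecast_mse(l, y))
print(f"MLE lambda {lam_mle:.2f} vs forecast-based lambda {lam_fc:.2f}")
```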

  10. Sequential lineup laps and eyewitness accuracy.

    Science.gov (United States)

    Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A

    2011-08-01

    Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.

  11. Optimal and Suboptimal Finger Selection Algorithms for MMSE Rake Receivers in Impulse Radio Ultra-Wideband Systems

    Directory of Open Access Journals (Sweden)

    Chiang Mung

    2006-01-01

    Full Text Available The problem of choosing the optimal multipath components to be employed at a minimum mean square error (MMSE) selective Rake receiver is considered for an impulse radio ultra-wideband system. First, the optimal finger selection problem is formulated as an integer programming problem with a nonconvex objective function. Then, the objective function is approximated by a convex function and the integer programming problem is solved by means of constraint relaxation techniques. The proposed algorithms are suboptimal due to the approximate objective function and the constraint relaxation steps. However, they perform better than the conventional finger selection algorithm, which is suboptimal since it ignores the correlation between multipath components, and they can get quite close to the optimal scheme that cannot be implemented in practice due to its complexity. In addition to the convex relaxation techniques, a genetic-algorithm (GA)-based approach is proposed, which does not need any approximations or integer relaxations. This iterative algorithm is based on the direct evaluation of the objective function, and can achieve near-optimal performance with a reasonable number of iterations. Simulation results are presented to compare the performance of the proposed finger selection algorithms with that of the conventional and the optimal schemes.

  12. Robustness of the Sequential Lineup Advantage

    Science.gov (United States)

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  13. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    Science.gov (United States)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increasing trend of automation in the modern manufacturing industry, human intervention in routine, repetitive and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce human intervention in the selection of the optimal cutting tool and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of appropriate cutting tools and parameters in metal cutting is carried out by an experienced technician or cutting tool expert, based on his knowledge base or an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks/tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence based techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence based optimal tool selection strategy was developed and implemented using MathWorks Matlab Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of appropriate cutting tools and the optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.

  14. Pareto Optimal Solutions for Network Defense Strategy Selection Simulator in Multi-Objective Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Yang Sun

    2018-01-01

    Full Text Available Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL) leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows his preferred reward distribution, the advantages of Pareto optimization can be retained by using a scalarization algorithm prior to the implementation of the MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game and using Pareto optimization and MORL to determine optimal solutions and compare those solutions to different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS) system for assisting network administrators on decision-making, specifically, on defense strategy selection, and the experiment results show that the Satisficing Trade-Off Method (STOM) scalarization approach performs better than linear scalarization or the GUESS method. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.

  15. Three-Dimensional Dynamic Topology Optimization with Frequency Constraints Using Composite Exponential Function and ICM Method

    Directory of Open Access Journals (Sweden)

    Hongling Ye

    2015-01-01

    Full Text Available The dynamic topology optimization of three-dimensional continuum structures subject to frequency constraints is investigated using Independent Continuous Mapping (ICM) design variable fields. The composite exponential function (CEF) is selected as the filter function which recognizes the design variables and implements the changing process of design variables from “discrete” to “continuous” and back to “discrete.” Explicit formulations of the frequency constraints are given based on the filter functions and first-order Taylor series expansion, and an improved optimal model is formulated using CEF and the explicit frequency constraints. A dual sequential quadratic programming (DSQP) algorithm is used to solve the optimal model. The program is developed on the platform of MSC Patran & Nastran. Finally, numerical examples are given to demonstrate the validity and applicability of the proposed method.

  16. A working-set framework for sequential convex approximation methods

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    2008-01-01

    We present an active-set algorithmic framework intended as an extension to existing implementations of sequential convex approximation methods for solving nonlinear inequality constrained programs. The framework is independent of the choice of approximations and the stabilization technique used...... to guarantee global convergence of the method. The algorithm works directly on the nonlinear constraints in the convex sub-problems and solves a sequence of relaxations of the current sub-problem. The algorithm terminates with the optimal solution to the sub-problem after solving a finite number of relaxations....

  17. Optimal design and selection of magneto-rheological brake types based on braking torque and mass

    International Nuclear Information System (INIS)

    Nguyen, Q H; Lang, V T; Choi, S B

    2015-01-01

    In developing magnetorheological brakes (MRBs), it is well known that the braking torque and the mass of the MRBs are important factors that should be considered in the product’s design. This research focuses on the optimal design of different types of MRBs, from which we identify an optimal selection of MRB types, considering braking torque and mass. In the optimization, common types of MRBs such as disc-type, drum-type, hybrid-type, and T-shape types are considered. The optimization problem is to find an optimal MRB structure that can produce the required braking torque while minimizing its mass. After a brief description of the configuration of the MRBs, the MRBs’ braking torque is derived based on the Herschel-Bulkley rheological model of the magnetorheological fluid. Then, the optimal designs of the MRBs are analyzed. The optimization objective is to minimize the mass of the brake while the braking torque is constrained to be greater than a required value. In addition, the power consumption of the MRBs is also considered as a reference parameter in the optimization. A finite element analysis integrated with an optimization tool is used to obtain optimal solutions for the MRBs. Optimal solutions of MRBs with different required braking torque values are obtained based on the proposed optimization procedure. From the results, we discuss the optimal selection of MRB types, considering braking torque and mass. (technical note)

  18. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  19. Cancer Feature Selection and Classification Using a Binary Quantum-Behaved Particle Swarm Optimization and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Maolong Xi

    2016-01-01

    Full Text Available This paper focuses on the feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) for cancer feature gene selection, coupling support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described, which is a discretized version of original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross validation (LOOCV). Finally, the BQPSO coupling SVM (BQPSO/SVM), binary PSO coupling SVM (BPSO/SVM), and genetic algorithm coupling SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms.

  20. Cancer Feature Selection and Classification Using a Binary Quantum-Behaved Particle Swarm Optimization and Support Vector Machine

    Science.gov (United States)

    Sun, Jun; Liu, Li; Fan, Fangyun; Wu, Xiaojun

    2016-01-01

    This paper focuses on the feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) for cancer feature gene selection, coupling support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described, which is a discretized version of original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross validation (LOOCV). Finally, the BQPSO coupling SVM (BQPSO/SVM), binary PSO coupling SVM (BPSO/SVM), and genetic algorithm coupling SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms. PMID:27642363
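
    The BQPSO update equations are not reproduced in the two records above; the sketch below shows only the fitness evaluation they share, the LOOCV accuracy of an SVM restricted to a candidate binary gene mask, on synthetic stand-in data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Stand-in for a microarray data set: few samples, many genes.
X, y = make_classification(n_samples=60, n_features=500, n_informative=10,
                           random_state=0)

def loocv_accuracy(gene_mask):
    """Fitness used inside BQPSO (per the abstracts): LOOCV accuracy of an
    SVM restricted to the genes switched on in the binary mask."""
    acc = cross_val_score(SVC(kernel="linear"), X[:, gene_mask], y,
                          cv=LeaveOneOut())
    return acc.mean()

rng = np.random.default_rng(0)
mask = rng.random(500) < 0.05        # one candidate particle: ~25 genes on
print(f"{mask.sum()} genes, LOOCV accuracy {loocv_accuracy(mask):.3f}")
```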

  1. Models of sequential decision making in consumer lending

    OpenAIRE

    Kanshukan Rajaratnam; Peter A. Beling; George A. Overstreet

    2016-01-01

    Abstract In this paper, we introduce models of sequential decision making in consumer lending. From the definition of adverse selection in static lending models, we show that homogeneous borrowers take up offers at different instances of time when faced with a sequence of loan offers. We postulate that bounded rationality and diverse decision heuristics used by consumers drive the decisions they make about credit offers. Under that postulate, we show how observation of early decisions in a seq...

  2. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    Reliability-based design of structural systems is considered. Especially systems where the reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...

  3. Natural selection and optimality

    International Nuclear Information System (INIS)

    Torres, J.L.

    1989-01-01

    It is assumed that Darwin's principle translates into optimal regimes of operation along metabolic pathways in an ecological system. Fitness is then defined in terms of the distance of a given individual's thermodynamic parameters from their optimal values. The method is illustrated by testing maximum power as a criterion of merit satisfied in ATP synthesis. (author). 26 refs, 2 figs

  4. Log-Optimal Portfolio Selection Using the Blackwell Approachability Theorem

    OpenAIRE

    V'yugin, Vladimir

    2014-01-01

    We present a method for constructing the log-optimal portfolio using the well-calibrated forecasts of market values. Dawid's notion of calibration and the Blackwell approachability theorem are used for computing well-calibrated forecasts. We select a portfolio using this "artificial" probability distribution of market values. Our portfolio performs asymptotically at least as well as any stationary portfolio that redistributes the investment at each round using a continuous function of side in...

  5. Sequential Therapy in Metastatic Renal Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    Bradford R Hirsch

    2016-04-01

    Full Text Available The treatment of metastatic renal cell carcinoma (mRCC) has changed dramatically in the past decade. As the number of available agents, and related volume of research, has grown, it is increasingly complex to know how to optimally treat patients. The authors are practicing medical oncologists at the US Oncology Network, the largest community-based network of oncology providers in the country, and represent the leadership of the Network's Genitourinary Research Committee. We outline our thought process in approaching sequential therapy of mRCC and the use of real-world data to inform our approach. We also highlight the evolving literature that will impact practicing oncologists in the near future.

  6. Sequential Ensembles Tolerant to Synthetic Aperture Radar (SAR Soil Moisture Retrieval Errors

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2016-04-01

    Full Text Available Due to complicated and undefined systematic errors in satellite observation, data assimilation integrating model states with satellite observations is more complicated than data assimilation based on field measurements at a local scale. In the case of Synthetic Aperture Radar (SAR) soil moisture, the systematic errors arising from uncertainties in roughness conditions are significant and unavoidable, but current satellite bias correction methods do not resolve these problems very well. Thus, apart from the bias correction process of satellite observation, it is important to assess the inherent capability of satellite data assimilation under such sub-optimal but more realistic observational error conditions. To this end, the time-evolving sequential ensembles of the Ensemble Kalman Filter (EnKF) are compared with the stationary ensemble of the Ensemble Optimal Interpolation (EnOI) scheme, which does not evolve the ensembles over time. As the sensitivity analysis demonstrated that the surface roughness affects the SAR retrievals more than the measurement errors do, it is within the scope of this study to monitor how data assimilation alters the effects of roughness on SAR soil moisture retrievals. In the results, the two data assimilation schemes both provided intermediate values between the SAR overestimation and the model underestimation. However, under the same SAR observational error conditions, the sequential ensembles approached a calibrated model, showing the lowest Root Mean Square Error (RMSE), while the stationary ensemble converged towards the SAR observations, exhibiting the highest RMSE. Compared to stationary ensembles, sequential ensembles have a better tolerance to SAR retrieval errors. This inherent nature of the EnKF suggests an operational merit as a satellite data assimilation system, given the limitations of currently available bias correction methods.
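
    For a scalar soil-moisture state, the contrast between the two schemes reduces to where the gain's background variance comes from; a minimal sketch with purely illustrative bias and error levels:

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(ensemble, obs, obs_err):
    """Stochastic EnKF analysis for a scalar state: the Kalman gain uses
    the time-evolving ensemble spread; observations are perturbed."""
    var_f = np.var(ensemble, ddof=1)
    gain = var_f / (var_f + obs_err ** 2)
    perturbed = obs + obs_err * rng.standard_normal(ensemble.size)
    return ensemble + gain * (perturbed - ensemble)

def enoi_update(state, static_var, obs, obs_err):
    """EnOI analysis: same gain formula, but the background variance comes
    from a pre-computed stationary ensemble and never evolves."""
    gain = static_var / (static_var + obs_err ** 2)
    return state + gain * (obs - state)

ens = rng.normal(0.25, 0.03, size=50)   # model soil moisture ensemble (m3/m3)
sar_obs, sar_err = 0.35, 0.06           # biased, noisy SAR retrieval
print(enkf_update(ens, sar_obs, sar_err).mean())
print(enoi_update(0.25, 0.03 ** 2, sar_obs, sar_err))
```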

  7. Steering Evolution with Sequential Therapy to Prevent the Emergence of Bacterial Antibiotic Resistance.

    Directory of Open Access Journals (Sweden)

    Daniel Nichol

    2015-09-01

    Full Text Available The increasing rate of antibiotic resistance and slowing discovery of novel antibiotic treatments presents a growing threat to public health. Here, we consider a simple model of evolution in asexually reproducing populations which considers adaptation as a biased random walk on a fitness landscape. This model associates the global properties of the fitness landscape with the algebraic properties of a Markov chain transition matrix and allows us to derive general results on the non-commutativity and irreversibility of natural selection as well as antibiotic cycling strategies. Using this formalism, we analyze 15 empirical fitness landscapes of E. coli under selection by different β-lactam antibiotics and demonstrate that the emergence of resistance to a given antibiotic can be either hindered or promoted by different sequences of drug application. Specifically, we demonstrate that the majority, approximately 70%, of sequential drug treatments with 2-4 drugs promote resistance to the final antibiotic. Further, we derive optimal drug application sequences with which we can probabilistically 'steer' the population through genotype space to avoid the emergence of resistance. This suggests a new strategy in the war against antibiotic-resistant organisms: drug sequencing to shepherd evolution through genotype space to states from which resistance cannot emerge and by which to maximize the chance of successful therapy.
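
    The record describes adaptation as a biased random walk whose drug-specific transition matrices need not commute; the toy matrices below are hypothetical, but they show how the order of two treatments changes the probability of ending at a resistant genotype.

```python
import numpy as np

# Toy 3-genotype landscapes for two hypothetical drugs; entry [i, j] is
# the probability that selection moves the population from genotype i to
# genotype j under that drug (rows sum to 1).
P_A = np.array([[0.2, 0.8, 0.0],
                [0.0, 0.5, 0.5],
                [0.0, 0.0, 1.0]])
P_B = np.array([[0.6, 0.0, 0.4],
                [0.3, 0.7, 0.0],
                [0.0, 0.1, 0.9]])

start = np.array([1.0, 0.0, 0.0])   # population starts at genotype 0
resistant = 2                        # genotype 2 resists the final drug

# Sequential therapy as matrix products: order matters because the
# transition matrices do not commute (the paper's central observation).
p_ab = start @ P_A @ P_B
p_ba = start @ P_B @ P_A
print(f"P(resistant) A->B: {p_ab[resistant]:.2f}, B->A: {p_ba[resistant]:.2f}")
```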

  8. Design and Optimization of Tube Type Interior Permanent Magnets Generator for Free Piston Applications

    Directory of Open Access Journals (Sweden)

    Serdal ARSLAN

    2017-05-01

    Full Text Available In this study, the design and optimization of a generator to be used in free piston applications was carried out. To supply the required initial force, an interior permanent magnet (IPM) cavity tube type linear generator was selected. Basic dimensioning of the generator was performed using analytical equations. Dimensioning, analysis and optimization of the generator were then carried out using Ansys Maxwell. Also, the effects of the basic design variables (pole step ratio, cavity step ratio, inner diameter to outer diameter ratio, primary final length, air gap) on the cogging force were examined using parametric analyses. Among these variables, the cavity step ratio, inner diameter to outer diameter ratio and primary final length were optimally determined by an optimization algorithm and by sequential nonlinear programming. The two methods were compared in terms of the cogging force calculation problem. A preliminary application of the linear generator for free piston operation was performed.

  9. Characterization of a sequential pipeline approach to automatic tissue segmentation from brain MR Images

    International Nuclear Information System (INIS)

    Hou, Zujun; Huang, Su

    2008-01-01

    Quantitative analysis of gray matter and white matter in brain magnetic resonance imaging (MRI) is valuable for neuroradiology and clinical practice. Submission of large collections of MRI scans to pipeline processing is increasingly important. We characterized this process and suggest several improvements. To investigate tissue segmentation from brain MR images through a sequential approach, a pipeline that consecutively executes denoising, skull/scalp removal, intensity inhomogeneity correction and intensity-based classification was developed. The denoising phase employs a 3D extension of the BayesShrink method. The inhomogeneity is corrected by an improvement of Dawant et al.'s method with automatic generation of reference points; the N3 method has also been evaluated. Subsequently, the brain tissue is segmented into cerebrospinal fluid, gray matter and white matter by a generalized Otsu thresholding technique. Intensive comparisons with other sequential or iterative methods have been carried out using simulated and real images. The sequential approach, with judicious algorithm selection in each stage, is not only advantageous in speed, but can also attain segmentation at least as accurate as iterative methods under a variety of noise and inhomogeneity levels. In summary, a sequential approach that consecutively executes wavelet-shrinkage denoising, scalp/skull removal, inhomogeneity correction and intensity-based classification was developed to automatically segment brain tissue into CSF, GM and WM from brain MR images. This approach is advantageous in several common applications, compared with other pipeline methods. (orig.)
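
    The final classification stage is a generalized (multi-class) Otsu threshold; a minimal sketch using scikit-image's implementation, with a synthetic three-class intensity distribution standing in for a real T1-weighted image.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

# Synthetic brain-like intensity distribution: three tissue classes
# (CSF, gray matter, white matter) stand in for a real scan.
rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(40, 8, 4000),    # CSF
                      rng.normal(110, 10, 6000),  # GM
                      rng.normal(170, 9, 5000)])  # WM

# Generalized (multi-class) Otsu thresholding: two thresholds split the
# intensities into three classes, as in the pipeline's last stage.
t1, t2 = threshold_multiotsu(img, classes=3)
labels = np.digitize(img, bins=[t1, t2])          # 0=CSF, 1=GM, 2=WM
print(f"thresholds: {t1:.1f}, {t2:.1f}; class counts: {np.bincount(labels)}")
```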

  10. Gene selection and classification for cancer microarray data based on machine learning and similarity measures

    Directory of Open Access Journals (Sweden)

    Liu Qingzhong

    2011-12-01

    Full Text Available Background: Microarray data have a high dimension of variables and a small sample size. In microarray data analyses, two important issues are how to choose genes, which provide reliable and good prediction for disease status, and how to determine the final gene set that is best for classification. Associations among genetic markers mean one can exploit information redundancy to potentially reduce classification cost in terms of time and money. Results: To deal with redundant information and improve classification, we propose a gene selection method, Recursive Feature Addition, which combines supervised learning and statistical similarity measures. To determine the final optimal gene set for prediction and classification, we propose an algorithm, Lagging Prediction Peephole Optimization. By using six benchmark microarray gene expression data sets, we compared Recursive Feature Addition with recently developed gene selection methods: Support Vector Machine Recursive Feature Elimination, Leave-One-Out Calculation Sequential Forward Selection and several others. Conclusions: On average, with the use of popular learning machines including Nearest Mean Scaled Classifier, Support Vector Machine, Naive Bayes Classifier and Random Forest, Recursive Feature Addition outperformed the other methods. Our studies also showed that Lagging Prediction Peephole Optimization is superior to the random strategy; Recursive Feature Addition with Lagging Prediction Peephole Optimization obtained better testing accuracies than the gene selection method varSelRF.

  11. Sequential Optimization of Global Sequence Alignments Relative to Different Cost Functions

    KAUST Repository

    Odat, Enas M.

    2011-01-01

    The algorithm has been simulated using the C#.NET programming language, and a number of experiments have been done to verify the proved statements. The results of these experiments show that the number of optimal alignments is reduced after each step of optimization. Furthermore, it has been verified that as the sequence length increases linearly, the number of optimal alignments increases exponentially, depending also on the cost function that is used. Finally, the number of executed operations increases polynomially as the sequence length increases linearly.

  12. Exploring the sequential lineup advantage using WITNESS.

    Science.gov (United States)

    Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A

    2010-12-01

    Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.

  13. Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Damiani, Rick; Wendt, Fabian; Musial, Walter; Finucane, Z.; Hulliger, L.; Chilka, S.; Dolan, D.; Cushing, J.; O'Connell, D.; Falk, S.

    2017-06-19

    The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  14. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊂ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = cl(D), the closure of D. We assume that such minimizers exist, and denote one of them by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) − G_{k-1}(x^{k-1}) for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, then x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results.
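
    A minimal numerical sketch of one classical member of the SUMMA class, the logarithmic barrier method: the constrained problem min f(x) over D is solved through a sequence of unconstrained minimizations of G_k(x) = f(x) + g_k(x) with a shrinking barrier weight. The objective, constraint and weights below are illustrative assumptions, not from the paper.

        # Barrier-type sequential unconstrained minimization sketch.
        import numpy as np
        from scipy.optimize import minimize

        f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2     # objective
        c = lambda x: 1.0 - x[0] - x[1]                         # interior: c(x) > 0

        x = np.array([0.0, 0.0])                                # start inside D
        for k in range(1, 8):
            mu = 10.0 ** (-k)                                   # barrier weight shrinks
            G = lambda z, mu=mu: f(z) - mu * np.log(max(c(z), 1e-12))
            x = minimize(G, x, method="Nelder-Mead").x          # unconstrained step
        print(x, f(x))   # tends to the constrained minimizer (0.5, 0.5)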

  15. A particle swarm optimization algorithm for beam angle selection in intensity-modulated radiotherapy planning

    International Nuclear Information System (INIS)

    Li Yongjie; Yao Dezhong; Yao, Jonathan; Chen Wufan

    2005-01-01

    Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Though many efforts have been made, it is still not very satisfactory in clinical IMRT practice because of the extensive computation required by the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of the beam angle optimization problem. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are implemented iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function. A population of these individuals is evolved by cooperation and competition among the individuals themselves through generations. The optimization results for a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, this work suggests that the PSO algorithm could act as a promising new solution to the beam angle optimization problem, and potentially other optimization problems in IMRT, though further studies are needed
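
    A bare-bones PSO loop, with a toy fitness standing in for BASPSO's dose objective (in the paper, each particle encodes a beam-angle configuration scored via a CG-optimized intensity map); all parameter values here are conventional defaults, not those of the paper.

        # Minimal particle swarm optimization on a toy angle-selection task.
        import numpy as np

        def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
                lo=0.0, hi=360.0, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.apply_along_axis(fitness, 1, x)
            g = pbest[pbest_f.argmin()].copy()               # global best
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.apply_along_axis(fitness, 1, x)
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        # Toy fitness: best "angles" sit near 0/120/240 degrees
        toy = lambda a: np.sum(1 - np.cos(np.deg2rad(3 * a)))
        print(pso(toy, dim=3))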

  16. Self-regulated learning of important information under sequential and simultaneous encoding conditions.

    Science.gov (United States)

    Middlebrooks, Catherine D; Castel, Alan D

    2018-05-01

    Learners make a number of decisions when attempting to study efficiently: they must choose which information to study, for how long to study it, and whether to restudy it later. The current experiments examine whether documented impairments to self-regulated learning when studying information sequentially, as opposed to simultaneously, extend to the learning of and memory for valuable information. In Experiment 1, participants studied lists of words ranging in value from 1-10 points sequentially or simultaneously at a preset presentation rate; in Experiment 2, study was self-paced and participants could choose to restudy. Although participants prioritized high-value over low-value information, irrespective of presentation, those who studied the items simultaneously demonstrated superior value-based prioritization with respect to recall, study selections, and self-pacing. The results of the present experiments support the theory that devising, maintaining, and executing efficient study agendas is inherently different under sequential formatting than simultaneous. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Turn-on fluorescent sensor for Zinc and Cadmium ions based on quinolone and its sequential response to phosphate

    International Nuclear Information System (INIS)

    Liu, Xiaoyan; Wang, Peng; Fu, Jiaxin; Yao, Kun; Xue, Kun; Xu, Kuoxi

    2017-01-01

    Sequential fluorescence sensing of Zn²⁺/Cd²⁺ ions and phosphate anions by new quinoline-based sensors (L1 and L2) is presented. The sensors exhibit highly selective fluorescence “turn-on” sensing of Zn²⁺/Cd²⁺ ions in CH₃OH/H₂O (1/1, v/v, Tris, 10 mM, pH 7.4) solution with a 1:1 binding stoichiometry. The complexes display high selectivity to H₂PO₄⁻ and HPO₄²⁻ anions through a fluorescence “turn-off” response. The sequential recognition of Zn²⁺/Cd²⁺ ions and phosphate anions via fluorescence changes gives sensors L1 and L2 potential utility for the detection of Zn²⁺/Cd²⁺ ions and phosphate anions in aqueous media. - Graphical abstract: Sequential fluorescence sensing of Zn²⁺/Cd²⁺ ions and phosphate anions by new quinoline-based sensors (L1 and L2), showing a selective, sensitive “turn-on” response to the metal ions and a “turn-off” response to H₂PO₄⁻ and HPO₄²⁻ anions.

  18. Hypotension Risk Prediction via Sequential Contrast Patterns of ICU Blood Pressure.

    Science.gov (United States)

    Ghosh, Shameek; Feng, Mengling; Nguyen, Hung; Li, Jinyan

    2016-09-01

    Acute hypotension is a significant risk factor for in-hospital mortality at intensive care units. Prolonged hypotension can cause tissue hypoperfusion, leading to cellular dysfunction and severe injuries to multiple organs. Prompt medical interventions are thus extremely important for dealing with acute hypotensive episodes (AHE). Population level prognostic scoring systems for risk stratification of patients are suboptimal in such scenarios. However, the design of an efficient risk prediction system can significantly help in the identification of critical care patients, who are at risk of developing an AHE within a future time span. Toward this objective, a pattern mining algorithm is employed to extract informative sequential contrast patterns from hemodynamic data, for the prediction of hypotensive episodes. The hypotensive and normotensive patient groups are extracted from the MIMIC-II critical care research database, following an appropriate clinical inclusion criteria. The proposed method consists of a data preprocessing step to convert the blood pressure time series into symbolic sequences, using a symbolic aggregate approximation algorithm. Then, distinguishing subsequences are identified using the sequential contrast mining algorithm. These subsequences are used to predict the occurrence of an AHE in a future time window separated by a user-defined gap interval. Results indicate that the method performs well in terms of the prediction performance as well as in the generation of sequential patterns of clinical significance. Hence, the novelty of sequential patterns is in their usefulness as potential physiological biomarkers for building optimal patient risk stratification systems and for further clinical investigation of interesting patterns in critical care patients.
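
    The symbolization step can be sketched compactly; the following assumes a standard SAX recipe (z-normalization, piecewise aggregate approximation, Gaussian breakpoints) with an invented blood-pressure segment, and is not the authors' exact preprocessing.

        # Symbolic aggregate approximation (SAX) of a blood-pressure series;
        # contrast subsequences would then be mined from such symbol strings.
        import numpy as np
        from scipy.stats import norm

        def sax(series, n_segments=8, alphabet="abcd"):
            x = (series - series.mean()) / series.std()       # z-normalize
            paa = x.reshape(n_segments, -1).mean(axis=1)      # piecewise means
            # equiprobable breakpoints under N(0, 1)
            cuts = norm.ppf(np.linspace(0, 1, len(alphabet) + 1)[1:-1])
            return "".join(alphabet[np.searchsorted(cuts, v)] for v in paa)

        bp = np.array([78, 80, 76, 74, 70, 66, 64, 60, 58, 55, 54, 52,
                       50, 49, 48, 47], dtype=float)          # mean ABP samples
        print(sax(bp))   # falling pressure maps to a descending symbol string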

  19. A continuous-time neural model for sequential action.

    Science.gov (United States)

    Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard

    2014-11-05

    Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  20. Sequential memory: Binding dynamics

    Science.gov (United States)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study a robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection staying in a small neighborhood of it. We show also that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
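
    A small simulation, under assumed parameter values rather than those of the paper, showing how asymmetric inhibition in generalized Lotka-Volterra equations produces sequential switching along a chain of saddles:

        # Sequential switching in generalized Lotka-Volterra dynamics.
        import numpy as np
        from scipy.integrate import solve_ivp

        N = 4
        sigma = np.ones(N)                 # growth rates
        rho = np.full((N, N), 1.5)         # strong mutual inhibition
        np.fill_diagonal(rho, 1.0)
        for i in range(N):                 # weaken inhibition of the "next" item
            rho[(i + 1) % N, i] = 0.5      # so activity passes 0 -> 1 -> 2 -> 3

        glv = lambda t, x: x * (sigma - rho @ x)
        sol = solve_ivp(glv, (0, 120), [1.0, 1e-3, 1e-4, 1e-5],
                        dense_output=True, rtol=1e-8)
        t = np.linspace(0, 120, 8)
        print(np.argmax(sol.sol(t), axis=0))   # dominant item over time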

  1. Optimization of lipid profile and hardness of low-fat mortadella following a sequential strategy of experimental design.

    Science.gov (United States)

    Saldaña, Erick; Siche, Raúl; da Silva Pinto, Jair Sebastião; de Almeida, Marcio Aurélio; Selani, Miriam Mabel; Rios-Mera, Juan; Contreras-Castillo, Carmen J

    2018-02-01

    This study aims to simultaneously optimize the lipid profile and instrumental hardness of low-fat mortadella. For lipid mixture optimization, the overlapping of surface boundaries was used to select the quantities of canola, olive, and fish oils, in order to maximize PUFAs, specifically the long-chain n-3 fatty acids (eicosapentaenoic acid, EPA, and docosahexaenoic acid, DHA), using the minimum content of fish oil. Increased quantities of canola oil were associated with higher PUFA/SFA ratios. The presence of fish oil, even in small amounts, was effective in improving the nutritional quality of the mixture, showing lower n-6/n-3 ratios and significant levels of EPA and DHA. Thus, the optimal lipid mixture comprised 20, 30 and 50% fish, olive and canola oils, respectively, which presents PUFA/SFA (2.28) and n-6/n-3 (2.30) ratios within the recommendations of a healthy diet. Once the lipid mixture was optimized, components of the pre-emulsion used as fat replacer in the mortadella, namely the lipid mixture (LM), sodium alginate (SA), and milk protein concentrate (PC), were studied to optimize hardness and springiness to target ranges of 13-16 N and 0.86-0.87, respectively. Results showed that springiness was not significantly affected by these variables. However, as the concentration of the three components increased, hardness decreased. Through the desirability function, the optimal proportions were 30% LM, 0.5% SA, and 0.5% PC. This study showed that the pre-emulsion decreases the hardness of mortadella. In addition, response surface methodology was efficient in modelling the lipid mixture and hardness, resulting in a product with improved texture and lipid quality.

  2. A dynamic regrouping based sequential dynamic programming algorithm for unit commitment of combined heat and power systems

    DEFF Research Database (Denmark)

    Rong, Aiying; Hakonen, Henri; Lahdelma, Risto

    2009-01-01

    efficiency of the plants. We introduce in this paper the DRDP-RSC algorithm, a dynamic regrouping based dynamic programming (DP) algorithm built on linear relaxation of the ON/OFF states of the units and sequential commitment of units in small groups. Relaxed states of the plants are used to reduce the dimension of the UC problem, and dynamic regrouping is used to improve the solution quality. Numerical results based on real-life data sets show that this algorithm is efficient, and optimal or near-optimal solutions with very small optimality gap are obtained.

  3. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

  4. Sequential Probability Ratio Tests: Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
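
    A minimal Wald-style SPRT for two hypothesized values of a normal mean with known variance; the conservative and robust variants studied in the paper adjust these classical boundaries, which are assumptions here.

        # Wald SPRT for H0: mean = mu0 vs H1: mean = mu1 (known sigma).
        import numpy as np

        def sprt(xs, mu0, mu1, sigma=1.0, alpha=0.05, beta=0.05):
            a = np.log(beta / (1 - alpha))        # lower (accept-H0) boundary
            b = np.log((1 - beta) / alpha)        # upper (accept-H1) boundary
            llr = 0.0
            for n, x in enumerate(xs, start=1):
                # log-likelihood ratio increment for N(mu1) vs N(mu0)
                llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
                if llr <= a:
                    return "accept H0", n
                if llr >= b:
                    return "accept H1", n
            return "no decision", len(xs)

        rng = np.random.default_rng(1)
        print(sprt(rng.normal(0.5, 1.0, 500), mu0=0.0, mu1=0.5))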

  5. Globally convergent optimization algorithm using conservative convex separable diagonal quadratic approximations

    NARCIS (Netherlands)

    Groenwold, A.A.; Wood, D.W.; Etman, L.F.P.; Tosserams, S.

    2009-01-01

    We implement and test a globally convergent sequential approximate optimization algorithm based on (convexified) diagonal quadratic approximations. The algorithm resides in the class of globally convergent optimization methods based on conservative convex separable approximations developed by

  6. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the combinations of features escalate exponentially as the number of features increases. Unfortunately in data mining, as well as in other engineering applications and in bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since exhaustively trying every possible combination of features by brute force takes seemingly forever, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing the Swarm Search over some high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.

  7. Optimized bioregenerative space diet selection with crew choice

    Science.gov (United States)

    Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean

    2003-01-01

    Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous desserts, high-salt foods, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.
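
    The underlying selection model can be sketched as a diet linear program; the nutrient values and ESM costs below are invented placeholders, not the study's food data, and the crew-choice portion is omitted.

        # Toy diet LP: meet nutrient minimums at minimum equivalent system
        # mass (ESM). All numbers are illustrative placeholders.
        import numpy as np
        from scipy.optimize import linprog

        esm = np.array([2.0, 3.5, 1.2, 4.0])          # kg ESM per serving
        # rows: protein (g), energy (kcal); columns: four candidate foods
        nutrient = np.array([[ 8.0,  4.0,  2.0, 10.0],
                             [200., 350., 150., 300.]])
        need = np.array([60.0, 2500.0])               # daily minimums

        # linprog uses "<=", so negate to express nutrient >= need
        res = linprog(c=esm, A_ub=-nutrient, b_ub=-need, bounds=(0, 10))
        print(res.x, res.fun)                          # servings, total ESM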

  8. Sequential Product of Quantum Effects: An Overview

    Science.gov (United States)

    Gudder, Stan

    2010-12-01

    This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.
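
    For concreteness, the sequential product of two effects A and B (operators with 0 ≤ A, B ≤ I) is commonly defined as A ∘ B = A^(1/2) B A^(1/2); a small numerical illustration with arbitrarily chosen effect matrices shows that the result is again an effect but depends on the order of the factors.

        # Sequential product of quantum effects: A ∘ B = sqrt(A) B sqrt(A).
        import numpy as np
        from scipy.linalg import sqrtm

        def seq_product(A, B):
            rA = np.real(sqrtm(A))     # real for these positive matrices
            return rA @ B @ rA

        A = np.array([[0.7, 0.2], [0.2, 0.4]])   # eigenvalues 0.8, 0.3
        B = np.array([[0.5, 0.0], [0.0, 0.9]])
        print(np.round(seq_product(A, B), 3))
        print(np.round(seq_product(B, A), 3))    # differs: order matters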

  9. Optimal Feature Space Selection in Detecting Epileptic Seizure based on Recurrent Quantification Analysis and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Saleh LAshkari

    2016-06-01

    Full Text Available Selecting optimal features based on the nature of the phenomenon and on high discriminant ability is very important in data classification problems. Since Recurrence Quantification Analysis (RQA) requires no assumptions about stationarity or about the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, with optimal features selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of a hundred samples in each of the two categories were obtained with five distance norms: Euclidean, Maximum, Minimum, Normalized and Fixed Norm. In order to choose the optimal threshold for each norm, ten values of the threshold ε were generated, and the best feature space was then selected by the genetic algorithm in combination with the Bayesian classifier. The results show that the proposed method is capable of discriminating ictal EEG from normal EEG; for the Minimum norm and 0.1 < ε < 1, accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance-norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.
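
    The first step of RQA, building a recurrence plot from a delay-embedded signal, can be sketched as follows; the embedding parameters, threshold and test signal are invented stand-ins, and RQA features such as Trans would then be computed from the resulting binary matrix.

        # Recurrence plot from a delay-embedded 1D signal.
        import numpy as np

        def recurrence_plot(x, dim=3, delay=2, eps=0.5, norm=np.inf):
            # delay embedding: rows [x(t), x(t+delay), ..., x(t+(dim-1)delay)]
            n = len(x) - (dim - 1) * delay
            emb = np.column_stack([x[i * delay: i * delay + n]
                                   for i in range(dim)])
            d = np.linalg.norm(emb[:, None, :] - emb[None, :, :],
                               ord=norm, axis=2)
            return (d < eps).astype(int)

        t = np.linspace(0, 8 * np.pi, 400)
        eeg_like = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
        R = recurrence_plot(eeg_like, eps=0.5)    # maximum-norm threshold
        print(R.shape, R.mean())                  # recurrence rate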

  10. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    Science.gov (United States)

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance on classification and prediction and is widely used for disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection by SVM recursive feature elimination (SVM-RFE) with multiclass SVM classification to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.
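
    A hedged sketch of the pipeline using scikit-learn: SVM-RFE ranks and selects features, then a small search over (C, γ) tunes the classifier. A plain grid and the iris data stand in for the paper's Taguchi orthogonal array and the Dermatology/Zoo datasets.

        # SVM-RFE feature selection followed by a (C, gamma) search.
        from sklearn.datasets import load_iris
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = load_iris(return_X_y=True)             # stand-in multiclass data
        rank = RFE(SVC(kernel="linear"), n_features_to_select=2).fit(X, y)
        Xs = X[:, rank.support_]                      # RFE-selected features

        # grid search stands in for the Taguchi array of the paper
        best = max(((cross_val_score(SVC(C=C, gamma=g), Xs, y, cv=5).mean(), C, g)
                    for C in (0.1, 1, 10, 100) for g in (0.01, 0.1, 1)))
        print("selected:", rank.support_, "best (acc, C, gamma):", best)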

  11. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    Science.gov (United States)

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.

  12. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    Directory of Open Access Journals (Sweden)

    Huanqing Cui

    2017-03-01

    Full Text Available Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.

  13. Optimal relay selection and power allocation for cognitive two-way relaying networks

    KAUST Repository

    Pandarakkottilil, Ubaidulla; Aï ssa, Sonia

    2012-01-01

    In this paper, we present an optimal scheme for power allocation and relay selection in a cognitive radio network where a pair of cognitive (or secondary) transceiver nodes communicate with each other assisted by a set of cognitive two-way relays

  14. Optimal individual supervised hyperspectral band selection distinguishing savannah trees at leaf level

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-08-01

    Full Text Available computer-intensive search technique to find the bands optimizing the value of TSAM as a function of the bands, by continually updating this function at successive steps. Band selection by means of minimizing the total accumulated correlation...

  15. A two-stage approach for multi-objective decision making with applications to system reliability optimization

    International Nuclear Information System (INIS)

    Li Zhaojun; Liao Haitao; Coit, David W.

    2009-01-01

    This paper proposes a two-stage approach for solving multi-objective system reliability optimization problems. In this approach, a Pareto optimal solution set is initially identified at the first stage by applying a multiple objective evolutionary algorithm (MOEA). Quite often there are a large number of Pareto optimal solutions, and it is difficult, if not impossible, to effectively choose the representative solutions for the overall problem. To overcome this challenge, an integrated multiple objective selection optimization (MOSO) method is utilized at the second stage. Specifically, a self-organizing map (SOM), with the capability of preserving the topology of the data, is applied first to classify those Pareto optimal solutions into several clusters with similar properties. Then, within each cluster, the data envelopment analysis (DEA) is performed, by comparing the relative efficiency of those solutions, to determine the final representative solutions for the overall problem. Through this sequential solution identification and pruning process, the final recommended solutions to the multi-objective system reliability optimization problem can be easily determined in a more systematic and meaningful way.

  16. Introducing sequential managed aquifer recharge technology (SMART) - From laboratory to full-scale application.

    Science.gov (United States)

    Regnery, Julia; Wing, Alexandre D; Kautz, Jessica; Drewes, Jörg E

    2016-07-01

    Previous lab-scale studies demonstrated that stimulating the indigenous soil microbial community of groundwater recharge systems by manipulating the availability of biodegradable organic carbon (BDOC) and establishing sequential redox conditions in the subsurface resulted in enhanced removal of compounds with redox-dependent removal behavior such as trace organic chemicals. The aim of this study is to advance this concept from laboratory to full-scale application by introducing sequential managed aquifer recharge technology (SMART). To validate the concept of SMART, a full-scale managed aquifer recharge (MAR) facility in Colorado was studied for three years that featured the proposed sequential configuration: A short riverbank filtration passage followed by subsequent re-aeration and artificial recharge and recovery. Our findings demonstrate that sequential subsurface treatment zones characterized by carbon-rich (>3 mg/L BDOC) to carbon-depleted (≤1 mg/L BDOC) and predominant oxic redox conditions can be established at full-scale MAR facilities adopting the SMART concept. The sequential configuration resulted in substantially improved trace organic chemical removal (i.e. higher biodegradation rate coefficients) for moderately biodegradable compounds compared to conventional MAR systems with extended travel times in an anoxic aquifer. Furthermore, sorption batch experiments with clay materials dispersed in the subsurface implied that sorptive processes might also play a role in the attenuation and retardation of chlorinated flame retardants during MAR. Hence, understanding key factors controlling trace organic chemical removal performance during SMART allows for systems to be engineered for optimal efficiency, resulting in improved removal of constituents at shorter subsurface travel times and a potentially reduced physical footprint of MAR installations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Results of simultaneous and sequential pediatric liver and kidney transplantation.

    Science.gov (United States)

    Rogers, J; Bueno, J; Shapiro, R; Scantlebury, V; Mazariegos, G; Fung, J; Reyes, J

    2001-11-27

    Six (86%) of seven sequentially transplanted kidneys developed acute cellular rejection, compared with only two (25%) of eight simultaneously transplanted kidneys (P<0.04). Simultaneously transplanted kidneys were less likely to develop rejection than sequentially transplanted kidneys in this series. This did not have any bearing on patient or graft survival rates. Mortality correlated directly with the severity of United Network for Organ Sharing status at the time of kidney transplantation. Candidates for simultaneous or sequential LTx/KTx should be prioritized based on medical stability to optimize distribution of scarce renal allografts.

  18. Optimal Channel Selection Based on Online Decision and Offline Learning in Multichannel Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mu Qiao

    2017-01-01

    Full Text Available We propose a channel selection strategy with a hybrid architecture, which combines the centralized method and the distributed method to alleviate the overhead of the access point and at the same time provide more flexibility in network deployment. With this architecture, we make use of game theory and reinforcement learning to fulfil optimal channel selection under different communication scenarios. In particular, when the network can satisfy the requirements of energy and computational costs, the online decision algorithm based on a noncooperative game can help each individual sensor node immediately select the optimal channel. Alternatively, when the network cannot satisfy these requirements, the offline learning algorithm based on reinforcement learning can help each individual sensor node learn from its experience and iteratively adjust its behavior toward the expected target. Extensive simulation results validate the effectiveness of our proposal and also prove that higher system throughput can be achieved by our channel selection strategy than by conventional off-policy channel selection approaches.
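
    The offline-learning branch can be sketched as a simple epsilon-greedy value-learning loop over channels; the reward model and parameters below are invented for illustration, not the paper's algorithm.

        # Epsilon-greedy reinforcement learning over channels.
        import numpy as np

        rng = np.random.default_rng(0)
        true_rate = np.array([0.3, 0.8, 0.5, 0.6])   # unknown channel quality
        Q = np.zeros(4)                               # value estimate per channel
        counts = np.zeros(4)

        for t in range(2000):
            ch = rng.integers(4) if rng.random() < 0.1 else int(Q.argmax())
            reward = rng.random() < true_rate[ch]     # successful transmission?
            counts[ch] += 1
            Q[ch] += (reward - Q[ch]) / counts[ch]    # incremental mean update

        print("learned Q:", np.round(Q, 2), "-> channel", int(Q.argmax()))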

  19. Evaluating the Stability of Feature Selectors that Optimize Feature Subset Cardinality

    Czech Academy of Sciences Publication Activity Database

    Somol, Petr; Novovičová, Jana

    2008-01-01

    Roč. 2008, č. 5342 (2008), s. 956-966 ISSN 0302-9743. [Joint IAPR International Workshops SSPR 2008 and SPR 2008. Orlando , 04.12.2008-06.12.2008] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA ČR GA102/07/1594 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : Feature selection * stability * relative weighted consistency measure * sequential search * floating search Subject RIV: IN - Informatics, Computer Science http://library.utia.cas.cz/separaty/2008/RO/somol-evaluating the stability of feature selectors that optimize feature subset cardinality.pdf

  20. Optimal Advertising with Stochastic Demand

    OpenAIRE

    George E. Monahan

    1983-01-01

    A stochastic, sequential model is developed to determine optimal advertising expenditures as a function of product maturity and past advertising. Random demand for the product depends upon an aggregate measure of current and past advertising called "goodwill," and the position of the product in its life cycle measured by sales-to-date. Conditions on the parameters of the model are established that insure that it is optimal to advertise less as the product matures. Additional characteristics o...

  1. Sequential metabolic phases as a means to optimize cellular output in a constant environment.

    Science.gov (United States)

    Palinkas, Aljoscha; Bulik, Sascha; Bockmayr, Alexander; Holzhütter, Hermann-Georg

    2015-01-01

    Temporal changes of gene expression are a well-known regulatory feature of all cells, which is commonly perceived as a strategy to adapt the proteome to varying external conditions. However, temporal (rhythmic and non-rhythmic) changes of gene expression are also observed under virtually constant external conditions. Here we hypothesize that such changes are a means to render the synthesis of the metabolic output more efficient than under conditions of constant gene activities. In order to substantiate this hypothesis, we used a flux-balance model of the cellular metabolism. The total time span spent on the production of a given set of target metabolites was split into a series of shorter time intervals (metabolic phases) during which only selected groups of metabolic genes are active. The related flux distributions were calculated under the constraint that genes can be either active or inactive whereby the amount of protein related to an active gene is only controlled by the number of active genes: the lower the number of active genes the more protein can be allocated to the enzymes carrying non-zero fluxes. This concept of a predominantly protein-limited efficiency of gene expression clearly differs from other concepts resting on the assumption of an optimal gene regulation capable of allocating to all enzymes and transporters just that fraction of protein necessary to prevent rate limitation. Applying this concept to a simplified metabolic network of the central carbon metabolism with glucose or lactate as alternative substrates, we demonstrate that switching between optimally chosen stationary flux modes comprising different sets of active genes allows producing a demanded amount of target metabolites in a significantly shorter time than by a single optimal flux mode at fixed gene activities. Our model-based findings suggest that temporal expression of metabolic genes can be advantageous even under conditions of constant external substrate supply.

  2. Optimized Irregular Low-Density Parity-Check Codes for Multicarrier Modulations over Frequency-Selective Channels

    Directory of Open Access Journals (Sweden)

    Valérian Mannoni

    2004-09-01

    Full Text Available This paper deals with optimized channel coding for OFDM transmissions (COFDM over frequency-selective channels using irregular low-density parity-check (LDPC codes. Firstly, we introduce a new characterization of the LDPC code irregularity called “irregularity profile.” Then, using this parameterization, we derive a new criterion based on the minimization of the transmission bit error probability to design an irregular LDPC code suited to the frequency selectivity of the channel. The optimization of this criterion is done using the Gaussian approximation technique. Simulations illustrate the good performance of our approach for different transmission channels.

  3. Quantum Inequalities and Sequential Measurements

    International Nuclear Information System (INIS)

    Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

    2011-01-01

    In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

  4. The Effect of Exit Strategy on Optimal Portfolio Selection with Birandom Returns

    OpenAIRE

    Cao, Guohua; Shan, Dan

    2013-01-01

    The aims of this paper are to use a birandom variable to denote the stock return selected by some recurring technical patterns and to study the effect of exit strategy on optimal portfolio selection with birandom returns. Firstly, we propose a new method to estimate the stock return and use birandom distribution to denote the final stock return which can reflect the features of technical patterns and investors' heterogeneity simultaneously; secondly, we build a birandom safety-first model and...

  5. Evaluating Stability and Comparing Output of Feature Selectors that Optimize Feature Subset Cardinality

    Czech Academy of Sciences Publication Activity Database

    Somol, Petr; Novovičová, Jana

    2010-01-01

    Roč. 32, č. 11 (2010), s. 1921-1939 ISSN 0162-8828 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593; GA ČR GA102/07/1594 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : feature selection * feature stability * stability measures * similarity measures * sequential search * individual ranking * feature subset-size optimization * high dimensionality * small sample size Subject RIV: BD - Theory of Information Impact factor: 5.027, year: 2010 http://library.utia.cas.cz/separaty/2010/RO/somol-0348726.pdf

  6. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

    Full Text Available The fuzzy 2-partition entropy approach has been widely used to select the threshold value for image segmentation. This approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image. The optimal threshold is selected by searching for an optimal combination of parameters of the membership functions such that the entropy of the fuzzy 2-partition is maximized. In this paper, a new fuzzy 2-partition entropy thresholding approach based on Big Bang–Big Crunch Optimization (BBBCO) is proposed, called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO is used to search for an optimal combination of parameters of the membership functions that maximizes the entropy of the fuzzy 2-partition. BBBCO is inspired by a theory of the evolution of the universe, namely the Big Bang and Big Crunch theory. The proposed algorithm is tested on a number of standard test images. For comparison, three other approaches, based on the Genetic Algorithm (GA), Biogeography-Based Optimization (BBO) and recursion, are also implemented. From the experimental results, it is observed that the proposed algorithm is more effective than the GA-based, BBO-based and recursion-based approaches.

  7. Portfolio optimization for seed selection in diverse weather scenarios.

    Science.gov (United States)

    Marko, Oskar; Brdar, Sanja; Panić, Marko; Šašić, Isidora; Despotović, Danica; Knežević, Milivoje; Crnojević, Vladimir

    2017-01-01

    The aim of this work was to develop a method for selection of optimal soybean varieties for the American Midwest using data analytics. We extracted the knowledge about 174 varieties from the dataset, which contained information about weather, soil, yield and regional statistical parameters. Next, we predicted the yield of each variety in each of 6,490 observed subregions of the Midwest. Furthermore, yield was predicted for all the possible weather scenarios approximated by 15 historical weather instances contained in the dataset. Using predicted yields and covariance between varieties through different weather scenarios, we performed portfolio optimisation. In this way, for each subregion, we obtained a selection of varieties that proved superior to others in terms of the amount and stability of yield. According to the rules of Syngenta Crop Challenge, for which this research was conducted, we aggregated the results across all subregions and selected up to five soybean varieties that should be distributed across the network of seed retailers. The work presented in this paper was the winning solution for Syngenta Crop Challenge 2017.
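
    The portfolio step can be sketched in Markowitz form: given a matrix of predicted yields (weather scenarios by varieties), the weights trade expected yield against variance across scenarios. The yield matrix and risk-aversion value below are synthetic assumptions, not the study's data.

        # Mean-variance "portfolio" of varieties across weather scenarios.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        yields = rng.normal(50, 5, (15, 6)) + np.linspace(0, 4, 6)  # scenarios x varieties
        mu, cov = yields.mean(axis=0), np.cov(yields, rowvar=False)
        risk_aversion = 0.5

        def neg_utility(w):
            return -(mu @ w - risk_aversion * w @ cov @ w)

        n = len(mu)
        res = minimize(neg_utility, np.full(n, 1 / n),
                       bounds=[(0, 1)] * n,
                       constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
        print(np.round(res.x, 3))    # acreage shares per variety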

  8. Portfolio optimization for seed selection in diverse weather scenarios.

    Directory of Open Access Journals (Sweden)

    Oskar Marko

    Full Text Available The aim of this work was to develop a method for selection of optimal soybean varieties for the American Midwest using data analytics. We extracted the knowledge about 174 varieties from the dataset, which contained information about weather, soil, yield and regional statistical parameters. Next, we predicted the yield of each variety in each of 6,490 observed subregions of the Midwest. Furthermore, yield was predicted for all the possible weather scenarios approximated by 15 historical weather instances contained in the dataset. Using predicted yields and covariance between varieties through different weather scenarios, we performed portfolio optimisation. In this way, for each subregion, we obtained a selection of varieties that proved superior to others in terms of the amount and stability of yield. According to the rules of Syngenta Crop Challenge, for which this research was conducted, we aggregated the results across all subregions and selected up to five soybean varieties that should be distributed across the network of seed retailers. The work presented in this paper was the winning solution for Syngenta Crop Challenge 2017.

  9. Results of improvement of simultaneous and sequential x-ray fluorescence equipment for quantitative routine analysis

    International Nuclear Information System (INIS)

    Zsamboky, Jozsef

    1985-01-01

    Two main types of x-ray fluorescence analyzers, measuring the intensities at given wavelengths sequentially and simultaneously, respectively, are described. The main parts of an up-to-date x-ray fluorescence analyzer are surveyed in detail. The advantages and disadvantages of both methods are discussed. Some results on calibration and optimization are given. (D.Gy.)

  10. Reducing residual stresses and deformations in selective laser melting through multi-level multi-scale optimization of cellular scanning strategy

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2016-01-01

    A multilevel optimization strategy is adopted using a customized genetic algorithm developed for optimizing the cellular scanning strategy for selective laser melting, with the objective of reducing residual stresses and deformations. The resulting thermo-mechanically optimized cellular scanning strategies... A calibrated, fast, multiscale thermal model coupled with a 3D finite element mechanical model is used to simulate residual stress formation and deformations during selective laser melting. The resulting reduction in thermal model computation time allows evolutionary algorithm-based optimization of the process...

  11. Impact of controlling the sum of error probability in the sequential probability ratio test

    Directory of Open Access Journals (Sweden)

    Bijoy Kumarr Pradhan

    2013-05-01

    Full Text Available A generalized modified method is proposed to control the sum of error probabilities in the sequential probability ratio test, so as to minimize the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, with the restriction that the sum of the error probabilities is a pre-assigned constant. The optimal sample size is found and compared with the optimal sample size obtained from the fixed-sample-size procedure. The results are applied to the cases where the random variate follows a normal law as well as a Bernoullian law.

  12. Transfer printing of 3D hierarchical gold structures using a sequentially imprinted polymer stamp

    International Nuclear Information System (INIS)

    Zhang Fengxiang; Low, Hong Yee

    2008-01-01

    Complex three-dimensional (3D) hierarchical structures on polymeric materials are fabricated through a process referred to as sequential imprinting. In this work, the sequentially imprinted polystyrene film is used as a soft stamp to replicate hierarchical structures onto gold (Au) films, and the Au structures are then transferred to a substrate by transfer printing at an elevated temperature and pressure. Continuous and isolated 3D structures can be selectively fabricated with the assistance of thermo-mechanical deformation of the polymer stamp. Hierarchical Au structures are achieved without the need for a corresponding three-dimensionally patterned mold

  13. A Thrust Allocation Method for Efficient Dynamic Positioning of a Semisubmersible Drilling Rig Based on the Hybrid Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Luman Zhao

    2015-01-01

    Full Text Available A thrust allocation method was proposed based on a hybrid optimization algorithm to efficiently and dynamically position a semisubmersible drilling rig. That is, the thrust allocation was optimized to produce the generalized forces and moment required while at the same time minimizing the total power consumption under the premise that forbidden zones should be taken into account. An optimization problem was mathematically formulated to provide the optimal thrust allocation by introducing the corresponding design variables, objective function, and constraints. A hybrid optimization algorithm consisting of a genetic algorithm and a sequential quadratic programming (SQP algorithm was selected and used to solve this problem. The proposed method was evaluated by applying it to a thrust allocation problem for a semisubmersible drilling rig. The results indicate that the proposed method can be used as part of a cost-effective strategy for thrust allocation of the rig.
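
    A hedged sketch of the hybrid idea, with random multistart standing in for the genetic algorithm and SciPy's SLSQP as the SQP stage; the thruster layout, power model (P proportional to |T|^1.5) and force/moment demand are illustrative assumptions, not the paper's rig data.

        # Hybrid thrust allocation: global restarts + local SQP refinement.
        import numpy as np
        from scipy.optimize import minimize

        pos = np.array([[30., 10.], [30., -10.],
                        [-30., 10.], [-30., -10.]])   # thruster x, y positions
        demand = np.array([800.0, 300.0, 5000.0])     # required Fx, Fy, Mz

        def forces(u):                                # u = [Tx1, Ty1, ..., Tx4, Ty4]
            T = u.reshape(4, 2)
            fx, fy = T.sum(axis=0)
            mz = np.sum(pos[:, 0] * T[:, 1] - pos[:, 1] * T[:, 0])
            return np.array([fx, fy, mz])

        power = lambda u: np.sum(np.linalg.norm(u.reshape(4, 2), axis=1) ** 1.5)

        best = None
        for s in range(20):                           # restarts stand in for the GA
            u0 = np.random.default_rng(s).uniform(-500.0, 500.0, 8)
            r = minimize(power, u0, method="SLSQP",
                         constraints={"type": "eq",
                                      "fun": lambda u: forces(u) - demand})
            if r.success and (best is None or r.fun < best.fun):
                best = r
        print(np.round(best.x.reshape(4, 2), 1), round(best.fun, 1))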

  14. Applications of sub-optimality in dynamic programming to location and construction of nuclear fuel processing plant

    International Nuclear Information System (INIS)

    Thiriet, L.; Deledicq, A.

    1968-09-01

    First, the rationale for applying Dynamic Programming to optimization and Operational Research problems in the chemical industries is recalled, as well as the conditions under which a dynamic program is illustrated by a sequential graph. A new algorithm for the determination of sub-optimal policies in a sequential graph is then developed. Finally, the application of the sub-optimality concept is shown when taking into account the indirect effects related to possible strategies, in the case of stochastic choices, and in problems of the siting of plants; application examples are given. (authors) [fr

  15. Information/disturbance trade-off in single and sequential measurements on a qudit signal

    Energy Technology Data Exchange (ETDEWEB)

    Genoni, Marco G; Paris, Matteo G A [Dipartimento di Fisica, Universita degli studi di Milano (Italy)

    2007-05-15

    We address the trade-off between information gain and state disturbance in measurements performed on qudit systems and devise a class of optimal measurement schemes that saturate the ultimate bound imposed by quantum mechanics on estimation and transmission fidelities. The schemes are minimal, i.e. they involve a single additional probe qudit, and optimal, i.e. they provide the maximum amount of information compatible with a given level of disturbance. The performance of optimal single-user schemes in extracting information by sequential measurements in an N-user transmission line is also investigated, and the optimality is analyzed by explicit evaluation of fidelities. We find that the estimation fidelity does not depend on the number of users, either for single-measurement or for collective inference, whereas the transmission fidelity decreases with N. The resulting trade-off is no longer optimal and degrades with increasing N. We find that optimality can be restored by an effective preparation of the probe states, and we present explicit calculations for the 2-user case.

  16. Sequential reduction–oxidation for photocatalytic degradation of tetrabromobisphenol A: Kinetics and intermediates

    International Nuclear Information System (INIS)

    Guo, Yaoguang; Lou, Xiaoyi; Xiao, Dongxue; Xu, Lei; Wang, Zhaohui; Liu, Jianshe

    2012-01-01

    Highlights: ► Sequential photocatalytic reduction–oxidation degradation of TBBPA was examined for the first time. ► Different atmospheres were found to have a significant effect on the debromination reaction. ► A possible sequential photocatalytic reduction–oxidation pathway is proposed. - Abstract: C–Br bond cleavage is considered a key step in reducing the toxicity and increasing the degradation rate of most brominated organic pollutants. Here a sequential reduction/oxidation strategy (i.e. debromination followed by photocatalytic oxidation) for the photocatalytic degradation of tetrabromobisphenol A (TBBPA), one of the most frequently used brominated flame retardants, is proposed on the basis of kinetic analysis and intermediate identification. The results demonstrate that the rates of debromination, and indeed of TBBPA photodegradation overall, depend strongly on the atmosphere, the initial TBBPA concentration, the pH of the reaction solution, hydrogen donors and electron acceptors. The kinetic data and the byproducts identified by GC–MS measurement indicate that reductive debromination by photo-induced electrons dominates under N₂-saturated conditions, while oxidation by photoexcited holes or hydroxyl radicals plays the leading role when the solution is saturated with air. This suggests that the reaction might be further optimized for the pretreatment of TBBPA-contaminated wastewater by a two-stage reductive debromination/subsequent oxidative decomposition process in the UV-TiO₂ system, by changing the reaction atmosphere.

  17. Optimization of multi-environment trials for genomic selection based on crop models.

    Science.gov (United States)

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion for optimizing multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling through crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim and evaluated on simulated and real data, using wheat phenology as the example. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracy. In terms of the quality of the parameter estimates, METs defined with OptiMET were on average more efficient than random METs composed of twice as many environments. OptiMET is thus a valuable tool for determining optimal experimental conditions to best exploit METs and the phenotyping tools that are currently being developed.

  18. Multitarget Tracking with Spatial Nonmaximum Suppressed Sensor Selection

    Directory of Open Access Journals (Sweden)

    Liang Ma

    2015-01-01

    Full Text Available Multitarget tracking is one of the most important applications of sensor networks, yet it is an extremely challenging problem: multisensor multitarget tracking is itself nontrivial, and the difficulty is further compounded by sensor management. Recently, the random finite set based Bayesian framework has opened the door to multitarget tracking with sensor management, modelled as a partially observed Markov decision process (POMDP). However, sensor management posed as a POMDP is in essence a combinatorial optimization problem, which is NP-hard and computationally prohibitive. In this paper, we propose a novel sensor selection method for multitarget tracking. We first present the sequential multi-Bernoulli filter as a centralized multisensor fusion scheme for multitarget tracking. In order to perform sensor selection, we define the hypothesis information gain (HIG) of a sensor to measure the quantity of information it contributes when selected alone. We then propose a spatial nonmaximum suppression approach to select sensors with respect to their locations and HIGs, and provide two distinct implementations based on greedy spatial nonmaximum suppression, as sketched below. Simulation results verify the effectiveness of the proposed sensor selection approach for multitarget tracking.
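
    As a rough illustration of the greedy selection step described above (not the authors' code), the following Python sketch repeatedly picks the sensor with the highest gain and suppresses all unselected sensors within a suppression radius; the gain values, coordinates, radius, and budget are invented for the example.

```python
import math

def spatial_nms_select(sensors, radius, budget):
    """Greedy spatial non-maximum suppression over candidate sensors.

    sensors: list of (gain, (x, y)) pairs, gain standing in for the
    hypothesis information gain (HIG) of each sensor.
    """
    remaining = sorted(sensors, key=lambda s: s[0], reverse=True)
    selected = []
    while remaining and len(selected) < budget:
        best = remaining.pop(0)          # highest-gain sensor still alive
        selected.append(best)
        bx, by = best[1]
        # Suppress neighbours closer than `radius` to the chosen sensor.
        remaining = [s for s in remaining
                     if math.hypot(s[1][0] - bx, s[1][1] - by) >= radius]
    return selected

sensors = [(0.9, (0.0, 0.0)), (0.8, (1.0, 0.5)), (0.7, (5.0, 5.0)),
           (0.6, (5.5, 5.2)), (0.5, (9.0, 1.0))]
print(spatial_nms_select(sensors, radius=2.0, budget=3))
```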

  19. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    Science.gov (United States)

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

    Model selection for the support vector machine (SVM), i.e. the selection of kernel and margin parameter values, is usually time-consuming, and it greatly affects both the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. First, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM kernel parameter and margin parameter C (CSSVM), to improve the training efficiency of the SVM model. An experiment classifying AVIRIS imagery of the Indian Pines site, USA, was then performed to test the novel CSSVM against a traditional SVM classifier tuned by general grid-search cross-validation (GSSVM). Evaluation indexes including SVM model training time, overall classification accuracy (OA) and the Kappa index were analyzed quantitatively for both CSSVM and GSSVM. The OA of CSSVM on the test samples and on the whole image is 85.1% and 81.58%, respectively, differing from that of GSSVM by less than 0.08%; the Kappa indexes reach 0.8213 and 0.7728, differing from those of GSSVM by less than 0.001; and the model training time of CSSVM is between 1/10 and 1/6 of that of GSSVM. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.
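
    The CSSVM implementation itself is not part of this record; the sketch below shows the general clone-and-mutate search over (C, gamma) scored by cross-validation, using scikit-learn on a stand-in dataset. Population size, clone counts, mutation scale, and iteration budget are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)  # stand-in for hyperspectral samples

def fitness(log_c, log_g):
    """Cross-validated accuracy of an SVM with the given hyperparameters."""
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_g)
    return cross_val_score(clf, X, y, cv=3).mean()

# Initial antibody population: random points in log10(C), log10(gamma) space.
pop = rng.uniform(low=[-2, -4], high=[4, 1], size=(8, 2))
for _ in range(10):
    scored = sorted(pop, key=lambda p: fitness(*p), reverse=True)
    elites = scored[:3]
    clones = []
    for rank, p in enumerate(elites):
        # Better antibodies get more clones, mutated with smaller steps.
        for _ in range(3 - rank):
            clones.append(p + rng.normal(scale=0.3 * (rank + 1), size=2))
    pop = np.array(elites + clones)

best = max(pop, key=lambda p: fitness(*p))
print("best C=%.3g, gamma=%.3g" % (10.0 ** best[0], 10.0 ** best[1]))
```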

  20. Space-planning and structural solutions of low-rise buildings: Optimal selection methods

    Science.gov (United States)

    Gusakova, Natalya; Minaev, Nikolay; Filushina, Kristina; Dobrynina, Olga; Gusakov, Alexander

    2017-11-01

    The present study is devoted to elaborating a methodology for the appropriate selection of space-planning and structural solutions for low-rise buildings. The objective of the study is to work out a system of criteria influencing the selection of the space-planning and structural solutions most suitable for low-rise buildings and structures. Applying the defined criteria in practice aims to enhance the efficiency of capital investments, save energy and resources, and create comfortable conditions for the population, considering the climatic zoning of the construction site. The project's developments can be applied when implementing investment and construction projects for low-rise housing in different kinds of territories based on local building materials. A system of criteria influencing the optimal selection of space-planning and structural solutions for low-rise buildings has been developed. A methodological basis has also been elaborated for assessing the optimal selection of space-planning and structural solutions satisfying the requirements of energy efficiency, comfort, safety, and economic efficiency. The elaborated methodology makes it possible to intensify the development of low-rise construction in different types of territories, taking into account the climatic zoning of the construction site. The stimulation of low-rise construction should be based on a system of scientifically justified approaches, which enhances the energy efficiency, comfort, safety, and economic effectiveness of low-rise buildings.

  1. NMPC for Oil Reservoir Production Optimization

    DEFF Research Database (Denmark)

    Völcker, Carsten; Jørgensen, John Bagterp; Thomsen, Per Grove

    2011-01-01

    this problem numerically using a single shooting sequential quadratic programming (SQP) based optimization method. Explicit singly diagonally implicit Runge-Kutta (ESDIRK) methods are used for integration of the stiff system of differential equations describing the two-phase flow, and the adjoint method...

  2. A reliable computational workflow for the selection of optimal screening libraries.

    Science.gov (United States)

    Gilad, Yocheved; Nadassy, Katalin; Senderowitz, Hanoch

    2015-01-01

    The experimental screening of compound collections is a common starting point in many drug discovery projects. The success of such screening campaigns critically depends on the quality of the screened library. Many libraries are currently available from different vendors, yet selecting the optimal screening library for a specific project remains challenging. We have devised a novel workflow for the rational selection of project-specific screening libraries. The workflow accepts as input a set of virtual candidate libraries and applies the following steps to each library: (1) data curation; (2) assessment of the ADME/T profile; (3) assessment of the number of promiscuous binders/frequent HTS hitters; (4) assessment of internal diversity; (5) assessment of similarity to known active compound(s) (optional); (6) assessment of similarity to in-house or otherwise accessible compound collections (optional). For ADME/T profiling, Lipinski's and Veber's rule-based filters were implemented (see the sketch below) and a new blood-brain barrier permeation model was developed and validated (85% and 74% success rates for the training and test sets, respectively). Diversity and similarity descriptors that demonstrated the best performance in selecting either diverse or focused sets of compounds from three databases (DrugBank, CMC and ChEMBL) were identified and used for the diversity and similarity assessments. The workflow was used to analyze nine common screening libraries available from six vendors, and the results of this analysis are reported for each library, providing an assessment of its quality. Furthermore, a consensus approach was developed to combine the results of these analyses into a single score for selecting the optimal library under different scenarios. We have devised and tested a new workflow for the rational selection of screening libraries under different scenarios. The current workflow was implemented using the Pipeline Pilot software yet due to the usage of generic
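
    Step (2) of the workflow relies on classic rule-based ADME filters. A minimal sketch of such filters, assuming descriptor values have already been computed for each molecule (the thresholds are the published Lipinski and Veber cut-offs; the data layout and compound values are invented), could look like:

```python
# Each molecule is a dict of precomputed descriptors (layout is illustrative).
molecules = [
    {"name": "cmpd-1", "mw": 342.4, "logp": 2.1, "hbd": 2, "hba": 5,
     "rot_bonds": 6, "tpsa": 78.0},
    {"name": "cmpd-2", "mw": 611.7, "logp": 5.8, "hbd": 6, "hba": 12,
     "rot_bonds": 14, "tpsa": 162.0},
]

def passes_lipinski(m):
    """Lipinski's rule of five: MW<=500, logP<=5, H-bond donors<=5, acceptors<=10."""
    return m["mw"] <= 500 and m["logp"] <= 5 and m["hbd"] <= 5 and m["hba"] <= 10

def passes_veber(m):
    """Veber's rules: rotatable bonds<=10 and polar surface area<=140 A^2."""
    return m["rot_bonds"] <= 10 and m["tpsa"] <= 140

screened = [m["name"] for m in molecules if passes_lipinski(m) and passes_veber(m)]
print(screened)  # -> ['cmpd-1']
```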

  3. Modeling and optimization of potable water network

    Energy Technology Data Exchange (ETDEWEB)

    Djebedjian, B.; Rayan, M.A. [Mansoura Univ., El-Mansoura (Egypt); Herrick, A. [Suez Canal Authority, Ismailia (Egypt)

    2000-07-01

    Software was developed to optimize the design of water distribution systems and pipe networks. It was based on a mathematical model treating looped networks while satisfying all imposed constraints, such as pipe diameter and nodal pressure. The optimum network configuration and cost are determined considering parameters such as pipe diameter, flow rate, corresponding pressure and hydraulic losses. It must be understood that the minimum cost is relative to the objective function selected; the choice of the proper objective function often depends on the operating policies of a particular company. The optimization problem was solved with a non-linear programming technique: the model was derived using the sequential unconstrained minimization technique (SUMT) of Fiacco and McCormick, which decreased the number of iterations required. The initially assumed pipe diameters were successively adjusted to correspond to existing commercial pipe diameters. The technique was then applied to a two-loop, gravity-fed network without pumps or valves, comprising eight pipes of 1000 m length each. The first evaluation of the method proved satisfactory, although, as with other methods, it failed to find the global optimum. In the future, research efforts will be directed to the optimization of networks with pumps and reservoirs. 24 refs., 3 tabs., 1 fig.
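
    SUMT, as used above, converts a constrained design problem into a sequence of unconstrained minimizations whose barrier weight is progressively reduced. The toy objective, constraint, and schedule below are invented stand-ins for the network model, shown only to illustrate the mechanism.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the network design: minimize f subject to g(x) >= 0.
f = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2
g = lambda x: 4.0 - x[0] - x[1]            # feasible half-plane x0 + x1 <= 4

def sumt(x0, r=1.0, shrink=0.1, outer_iters=6):
    """Fiacco-McCormick SUMT: unconstrained solves with a shrinking log barrier."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        def phi(u):
            gu = g(u)
            # Interior-point barrier: effectively infinite outside the feasible set.
            return f(u) - r * np.log(gu) if gu > 0 else 1e12
        x = minimize(phi, x, method="Nelder-Mead").x
        r *= shrink                        # weaken the barrier each outer pass
    return x

print(np.round(sumt([0.5, 0.5]), 3))       # approaches the constrained optimum (2.5, 1.5)
```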

  4. Optimization strategies for complex engineering applications

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, M.S.

    1998-02-01

    LDRD research activities have focused on increasing the robustness and efficiency of optimization studies for computationally complex engineering problems. Engineering applications can be characterized by extreme computational expense, lack of gradient information, discrete parameters, non-converging simulations, and nonsmooth, multimodal, and discontinuous response variations. Guided by these challenges, the LDRD research activities have developed application-specific techniques, fundamental optimization algorithms, multilevel hybrid and sequential approximate optimization strategies, parallel processing approaches, and automatic differentiation and adjoint augmentation methods. This report surveys these activities and summarizes the key findings and recommendations.

  5. Sequential Combination of Electro-Fenton and Electrochemical Chlorination Processes for the Treatment of Anaerobically-Digested Food Wastewater.

    Science.gov (United States)

    Shin, Yong-Uk; Yoo, Ha-Young; Kim, Seonghun; Chung, Kyung-Mi; Park, Yong-Gyun; Hwang, Kwang-Hyun; Hong, Seok Won; Park, Hyunwoong; Cho, Kangwoo; Lee, Jaesang

    2017-09-19

    A two-stage sequential electro-Fenton (E-Fenton) oxidation followed by electrochemical chlorination (EC) was demonstrated to concomitantly treat high concentrations of organic carbon and ammonium nitrogen (NH₄⁺-N) in real anaerobically digested food wastewater (ADFW). The anodic Fenton process caused the rapid mineralization of phenol as a model substrate through the production of hydroxyl radical as the main oxidant. The electrochemical oxidation of NH₄⁺ by a dimensionally stable anode (DSA) resulted in temporal concentration profiles of combined and free chlorine species that were analogous to those during the conventional breakpoint chlorination of NH₄⁺. Together with the minimal production of nitrate, this confirmed that the conversion of NH₄⁺ to nitrogen gas was electrochemically achievable. The monitoring of treatment performance with varying key parameters (e.g., current density, H₂O₂ feeding rate, pH, NaCl loading, and DSA type) led to the optimization of the two component systems. The comparative evaluation of the two sequentially combined systems (i.e., the E-Fenton-EC system versus the EC-E-Fenton system) using the mixture of phenol and NH₄⁺ under the predetermined optimal conditions suggested the superiority of the E-Fenton-EC system in terms of treatment efficiency and energy consumption. Finally, the sequential E-Fenton-EC process effectively mineralized organic carbon and decomposed NH₄⁺-N in the real ADFW without an external supply of NaCl.

  6. In-situ sequential laser transfer and laser reduction of graphene oxide films

    Science.gov (United States)

    Papazoglou, S.; Petridis, C.; Kymakis, E.; Kennou, S.; Raptis, Y. S.; Chatzandroulis, S.; Zergioti, I.

    2018-04-01

    Achieving high quality transfer of graphene onto selected substrates is a priority in device fabrication, especially where drop-on-demand applications are involved. In this work, we report an in-situ, fast, simple, one-step process that results in the reduction, transfer, and fabrication of reduced graphene oxide-based humidity sensors using picosecond laser pulses. By tuning the laser illumination parameters, we implemented the sequential printing and reduction of graphene oxide flakes. The overall process lasted only a few seconds, compared with the few hours required by the approach our group has previously published. DC current measurements, X-ray photoelectron spectroscopy, X-ray diffraction, and Raman spectroscopy were employed to assess the efficiency of our approach. To demonstrate the applicability and potential of the technique, laser-printed reduced graphene oxide humidity sensors with a limit of detection of 1700 ppm are presented. The results demonstrated in this work provide a selective, rapid, and low-cost approach for the sequential transfer and photochemical reduction of graphene oxide micro-patterns onto various substrates for flexible electronics and sensor applications.

  7. The relationship between PMI (manA) gene expression and optimal selection pressure in Indica rice transformation.

    Science.gov (United States)

    Gui, Huaping; Li, Xia; Liu, Yubo; Han, Kai; Li, Xianggan

    2014-07-01

    An efficient mannose selection system was established for the transformation of the Indica cultivar IR58025B. Different selection pressures were required to achieve optimum transformation frequency for different PMI selectable marker cassettes. This study was conducted to establish an efficient transformation system for Indica rice, cultivar IR58025B. Four combinations of two promoters, rice Actin 1 and maize Ubiquitin 1, and two manA genes, the native gene from E. coli (PMI-01) and a synthetic maize codon-optimized gene (PMI-09), were compared under various concentrations of mannose. Different selection pressures were required for different gene cassettes to achieve the corresponding optimum transformation frequency (TF). TFs as high as 54% and 53% were obtained when 5 g/L mannose was used for selection of the prActin-PMI-01 cassette and 7.5 g/L mannose for selection of prActin-PMI-09, respectively. TFs of 67% and 56% were obtained when 7.5 and 15 g/L mannose were used for selection of prUbi-PMI-01 and prUbi-PMI-09, respectively. We conclude that higher TFs can be achieved for different gene cassettes when an optimum selection pressure is applied. By investigating the PMI expression level in transgenic calli and leaves, we found a significant positive correlation between the protein expression level and the optimal selection pressure: higher optimal selection pressure is required for constructs conferring higher expression of PMI protein. The single-copy rate of transgenic events for the prActin-PMI-01 cassette is lower than that for the other three cassettes; we speculate that some low-copy events with low protein expression levels might not have been able to survive mannose selection.

  8. Chiral stationary phase optimized selectivity liquid chromatography: A strategy for the separation of chiral isomers.

    Science.gov (United States)

    Hegade, Ravindra Suryakant; De Beer, Maarten; Lynen, Frederic

    2017-09-15

    Chiral stationary-phase optimized selectivity liquid chromatography (SOSLC) is proposed as a tool to optimally separate mixtures of enantiomers on a set of commercially available coupled chiral columns. The approach allows the separation profile on any possible combination of the chiral stationary phases to be predicted from a limited number of preliminary analyses, followed by automated selection of the optimal column combination. Both the isocratic and the gradient SOSLC approach were implemented to predict the retention times of a mixture of 4 chiral pairs on all possible combinations of 5 commercial chiral columns. Predictions in isocratic and gradient mode were performed with a commercially available algorithm and with an in-house developed Microsoft Visual Basic algorithm, respectively. Optimal predictions in isocratic mode required the coupling of 4 columns, with relative deviations between predicted and experimental retention times ranging between 2 and 7%. Gradient predictions led to the coupling of 3 chiral columns allowing baseline separation of all solutes, with differences between predictions and experiments ranging between 0 and 12%. The methodology is a novel tool for optimizing the separation of mixtures of optical isomers. Copyright © 2017 Elsevier B.V. All rights reserved.
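
    The core of such prediction is estimating retention on a column combination from retention measured on the individual columns. A hedged sketch of the commonly assumed isocratic additivity (each segment contributes its dead time times one plus its retention factor; this is a simplification of the SOSLC model, and all numbers are invented) is:

```python
# Per-column retention factors k for one solute, measured in preliminary runs,
# and per-column dead times t0 (values are invented for illustration).
k = {"col_A": 1.8, "col_B": 0.9, "col_C": 3.2}
t0 = {"col_A": 0.50, "col_B": 0.50, "col_C": 0.50}  # minutes

def predicted_retention(combination):
    """Isocratic additivity: t_R = sum over segments of t0_i * (1 + k_i)."""
    return sum(t0[c] * (1.0 + k[c]) for c in combination)

for combo in [("col_A",), ("col_A", "col_B"), ("col_A", "col_B", "col_C")]:
    print(combo, "t_R = %.2f min" % predicted_retention(combo))
```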

  9. A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2016-02-01

    Full Text Available Currently, with the rapid growth of data scale in network traffic classification, how to select traffic features efficiently is becoming a major challenge. Although a number of traditional feature selection methods using the Hadoop-MapReduce framework have been proposed, their execution time remains unsatisfactory owing to the numerous iterative computations involved. To address this issue, this paper proposes an efficient feature selection method for network traffic based on a newer parallel computing framework, Spark. In our approach, the complete feature set is first preprocessed based on the Fisher score, and a sequential forward search strategy is employed to build candidate subsets; the optimal feature subset is then selected through the iterative computations of the Spark framework (both ingredients are sketched below). The implementation demonstrates that, while preserving classification accuracy, our method reduces the time cost of modeling and classification and significantly improves the execution efficiency of feature selection.
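
    The record names two ingredients, Fisher scoring and sequential forward search. A plain-NumPy sketch of both (outside Spark, on invented synthetic data, with a simple additive subset score standing in for the paper's classifier-based evaluation) is:

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher score per feature: between-class scatter over within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def forward_search(X, y, score_fn, k):
    """Greedy sequential forward search using a caller-supplied subset score."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best = max(remaining, key=lambda j: score_fn(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # only features 0 and 2 matter

subset_score = lambda Xs, ys: fisher_scores(Xs, ys).sum()
print("Fisher scores:", np.round(fisher_scores(X, y), 3))
print("selected:", forward_search(X, y, subset_score, k=2))
```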

  10. Sequential Generalized Transforms on Function Space

    Directory of Open Access Journals (Sweden)

    Jae Gil Choi

    2013-01-01

    Full Text Available We define two sequential transforms on a function space C_{a,b}[0,T] induced by a generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on C_{a,b}[0,T]. We also establish that each of these transforms acts as an inverse transform of the other. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on C_{a,b}[0,T].

  11. Effect of cryoablation sequential chemotherapy on patients with advanced non-small cell lung cancer

    Directory of Open Access Journals (Sweden)

    Shu-Hui Yao

    2016-03-01

    Full Text Available Objective: To evaluate the effect of cryoablation followed by sequential chemotherapy on patients with advanced non-small cell lung cancer. Methods: A total of 39 patients with advanced non-small cell lung cancer who received cryoablation with sequential chemotherapy and 39 patients who received chemotherapy alone were enrolled in the sequential group and the control group, respectively. Disease progression and survival in the two groups were followed up, and the serum contents of tumor markers and angiogenesis molecules as well as the contents of T-lymphocyte subsets in peripheral blood were measured. Results: Progression-free survival and median overall survival (mOS) in the sequential group were longer than in the control group, and the cumulative number of cases of tumor progression at various points in time was significantly lower than in the control group (P<0.05). One month after treatment, the serum contents of the tumor markers CEA, CYFRA21-1 and NSE, the serum contents of the angiogenesis molecules PCDGF, VEGF and HDGF, and the content of CD3+CD4-CD8+CD28- T cells in peripheral blood of the sequential group were significantly lower than those of the control group (P<0.05), while the contents of CD3+CD4+CD8- T cells and CD3+CD4-CD8+CD28+ T cells in peripheral blood were higher (P<0.05). Conclusions: Cryoablation with sequential chemotherapy can improve the prognosis of patients with advanced non-small cell lung cancer, delay disease progression, prolong survival time, inhibit angiogenesis and improve immune function.

  12. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    Science.gov (United States)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model
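
    Evidence theory quantifies uncertainty with mass functions combined by Dempster's rule. The short sketch below implements the rule for a small frame of discernment; the hypotheses and masses are invented purely for illustration and do not come from the study.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: m(A) is proportional to the sum of m1(B)*m2(C)
    over all B, C with B intersect C = A, renormalized by the conflict mass."""
    combined, conflict = {}, 0.0
    for (b, p), (c, q) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + p * q
        else:
            conflict += p * q            # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical sources assigning mass over fidelity levels {low, high}.
m1 = {frozenset({"low"}): 0.6, frozenset({"low", "high"}): 0.4}
m2 = {frozenset({"high"}): 0.3, frozenset({"low", "high"}): 0.7}
print(dempster_combine(m1, m2))
```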

  13. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme, decoding with good performance is possible as low as Eb/N0 = 0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported

  14. A concurrent optimization model for supplier selection with fuzzy quality loss

    International Nuclear Information System (INIS)

    Rosyidi, C.; Murtisari, R.; Jauhari, W.

    2017-01-01

    The purpose of this research is to develop a concurrent supplier selection model that minimizes purchasing cost and fuzzy quality loss, considering process capability and the assembled product specification. Design/methodology/approach: This research integrates fuzzy quality loss into the model to concurrently solve the decision making in the detailed design stage and the manufacturing stage. Findings: The resulting model can be used to concurrently select the optimal supplier and determine the tolerances of the components; it balances purchasing cost against fuzzy quality loss. Originality/value: An assembled product consists of many components which must be purchased from suppliers. Fuzzy quality loss is integrated into the supplier selection model to allow for vagueness in the final assembly by grouping assemblies into several grades according to the resulting assembly tolerance.

  15. A concurrent optimization model for supplier selection with fuzzy quality loss

    Energy Technology Data Exchange (ETDEWEB)

    Rosyidi, C.; Murtisari, R.; Jauhari, W.

    2017-07-01

    The purpose of this research is to develop a concurrent supplier selection model that minimizes purchasing cost and fuzzy quality loss, considering process capability and the assembled product specification. Design/methodology/approach: This research integrates fuzzy quality loss into the model to concurrently solve the decision making in the detailed design stage and the manufacturing stage. Findings: The resulting model can be used to concurrently select the optimal supplier and determine the tolerances of the components; it balances purchasing cost against fuzzy quality loss. Originality/value: An assembled product consists of many components which must be purchased from suppliers. Fuzzy quality loss is integrated into the supplier selection model to allow for vagueness in the final assembly by grouping assemblies into several grades according to the resulting assembly tolerance.

  16. Sequential probability ratio controllers for safeguards radiation monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.; Coop, K.L.; Nixon, K.V.

    1984-01-01

    Sequential hypothesis tests applied to nuclear safeguards accounting methods make the methods more sensitive to detecting diversion, and they also improve transient signal detection in safeguards radiation monitors. This paper describes three microprocessor control units with sequential probability-ratio tests for detecting transient increases in radiation intensity. The control units are designed for three specific applications: low-intensity monitoring with Poisson probability ratios, higher-intensity gamma-ray monitoring where fixed counting intervals are shortened by sequential testing, and monitoring of moving traffic where the sequential technique responds to variable-duration signals. The fixed-interval controller shortens a customary 50-s monitoring time to an average of 18 s, making the monitoring delay less bothersome. The controller for monitoring moving vehicles benefits from the sequential technique by maintaining more than half its sensitivity when the normal passage speed doubles.
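
    The Poisson probability-ratio monitoring mentioned above follows Wald's sequential probability ratio test. A hedged sketch (count rates, error probabilities, and simulated data all invented for illustration) is:

```python
import numpy as np

def poisson_sprt(counts, lam0, lam1, alpha=0.001, beta=0.1):
    """Wald SPRT on Poisson counts: H0 rate lam0 (background) vs H1 rate lam1."""
    upper = np.log((1.0 - beta) / alpha)    # cross above: decide H1 (alarm)
    lower = np.log(beta / (1.0 - alpha))    # cross below: decide H0 (background)
    llr = 0.0
    for n, k in enumerate(counts, 1):
        # Per-interval Poisson log-likelihood ratio.
        llr += k * np.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "alarm", n
        if llr <= lower:
            return "background", n
    return "undecided", len(counts)

rng = np.random.default_rng(7)
print(poisson_sprt(rng.poisson(10.0, 50), lam0=10.0, lam1=15.0))  # background only
print(poisson_sprt(rng.poisson(15.0, 50), lam0=10.0, lam1=15.0))  # source present
```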

  17. Turn-on fluorescent sensor for Zinc and Cadmium ions based on quinolone and its sequential response to phosphate

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiaoyan; Wang, Peng; Fu, Jiaxin; Yao, Kun; Xue, Kun [Engineering Laboratory for Flame Retardant and Functional Materials of Hennan Province, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); Xu, Kuoxi, E-mail: xukx@henu.edu.cn [Engineering Laboratory for Flame Retardant and Functional Materials of Hennan Province, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China); Institute of Environmental and Analytical Sciences, College of Chemistry and Chemical Engineering, Henan University, Kaifeng, Henan 475004 (China)

    2017-06-15

    Sequential fluorescence sensing of Zn²⁺/Cd²⁺ ions and phosphate anions by new quinoline-based sensors (L1 and L2) is presented. The sensors exhibit highly selective fluorescence “turn-on” responses to Zn²⁺/Cd²⁺ ions in CH₃OH/H₂O (1/1, v/v, Tris, 10 mM, pH 7.4) solution with a 1:1 binding stoichiometry. The complexes display high selectivity to H₂PO₄⁻ and HPO₄²⁻ anions through a fluorescence “turn-off” response. The sequential recognition of Zn²⁺/Cd²⁺ ions and phosphate anions via fluorescence changes gives sensors L1 and L2 potential utility for the detection of Zn²⁺/Cd²⁺ ions and phosphate anions in aqueous media. - Graphical abstract: Sequential fluorescence sensing of Zn²⁺/Cd²⁺ ions and phosphate anions by the quinoline-based sensors L1 and L2.

  18. Large-grain polycrystalline silicon film by sequential lateral solidification on a plastic substrate

    International Nuclear Information System (INIS)

    Kim, Yong-Hae; Chung, Choong-Heui; Yun, Sun Jin; Moon, Jaehyun; Park, Dong-Jin; Kim, Dae-Won; Lim, Jung Wook; Song, Yoon-Ho; Lee, Jin Ho

    2005-01-01

    A large-grain polycrystalline silicon film was obtained on a plastic substrate by sequential lateral solidification. With various combinations of sputtering powers and Ar working gas pressures, the conditions for producing dense amorphous silicon (a-Si) and SiO₂ films were optimized. The successful crystallization of the a-Si film is attributed to the production of a dense a-Si film that has low argon content and can endure high-intensity laser irradiation.

  19. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications.

    Science.gov (United States)

    Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy that simultaneously performs parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the distance expression by which the fruit fly finds the food source. The proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support its superiority. Moreover, the algorithm is successfully applied to an SVM to perform both parameter tuning and feature selection for solving real-world classification problems. This method, called chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly for the medical diagnosis problem and the credit card problem.

  20. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    International Nuclear Information System (INIS)

    O’Connor, D; Nguyen, D; Voronenko, Y; Yin, W; Sheng, K

    2016-01-01

    Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state of the art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state of the art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA
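
    The record describes, but does not reproduce, the group-sparsity formulation. The sketch below applies FISTA with a block soft-threshold proximal step to a toy least-squares stand-in, with candidate "beams" as coefficient groups; the nonnegativity and dose constraints of the actual planning problem are omitted, and all data are synthetic.

```python
import numpy as np

def group_prox(x, groups, t):
    """Proximal operator of t * sum_g ||x_g||_2 (block soft-thresholding)."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * x[g]
    return out

def fista_group_lasso(A, b, groups, lam, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); tk = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ z - b)
        x_new = group_prox(z - grad / L, groups, lam / L)
        tk_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * tk ** 2))
        z = x_new + ((tk - 1.0) / tk_new) * (x_new - x)   # momentum step
        x, tk = x_new, tk_new
    return x

# Toy stand-in: 6 candidate "beams" of 3 beamlets each; only two beams active.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 18))
x_true = np.zeros(18); x_true[0:3] = 1.5; x_true[9:12] = -2.0
b = A @ x_true
groups = [list(range(i, i + 3)) for i in range(0, 18, 3)]
x = fista_group_lasso(A, b, groups, lam=2.0)
print([i for i, g in enumerate(groups) if np.linalg.norm(x[g]) > 1e-3])
```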

  1. Global blending optimization of laminated composites with discrete material candidate selection and thickness variation

    DEFF Research Database (Denmark)

    Sørensen, Søren N.; Stolpe, Mathias

    2015-01-01

    The considered problem is, however, convex in the original mixed binary nested form. Convexity is the foremost important property of optimization problems, and the proposed method can guarantee the global or near-global optimal solution, unlike most topology optimization methods. The material selection is limited to discrete candidates. The capabilities of the method and the effect of active versus inactive manufacturing constraints are demonstrated on several numerical examples of limited size, involving at most 320 binary variables. Most examples are solved to guaranteed global optimality and may constitute benchmark examples for popular topology optimization methods and heuristics based on solving sequences of non-convex problems. The results demonstrate, among other things, that the difficulty of the posed problem is highly dependent upon the composition of the constitutive properties of the material candidates.

  2. Biased lineups: sequential presentation reduces the problem.

    Science.gov (United States)

    Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

    1991-12-01

    Biased lineups have been shown to significantly increase false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

  3. Footprints of Optimal Protein Assembly Strategies in the Operonic Structure of Prokaryotes

    Directory of Open Access Journals (Sweden)

    Jan Ewald

    2015-04-01

    Full Text Available In this work, we investigate the optimality principles behind synthesis strategies for protein complexes using a dynamic optimization approach. We show that the cellular capacity for protein synthesis has a strong influence on optimal synthesis strategies, ranging from simultaneous to sequential synthesis of the subunits of a protein complex. Sequential synthesis is preferred if protein synthesis is strongly limited, whereas simultaneous synthesis is optimal in situations with a high protein synthesis capacity. We confirm the predictions of our optimization approach through an analysis of the operonic organization of protein complexes in several hundred prokaryotes, showing that cellular protein synthesis capacity is a driving force in the dissolution of operons comprising the subunits of a protein complex. We thereby provide a tested hypothesis explaining why the subunits of many prokaryotic protein complexes are distributed across several operons despite the presumably less precise co-regulation.

  4. Contrast based band selection for optimized weathered oil detection in hyperspectral images

    Science.gov (United States)

    Levaux, Florian; Bostater, Charles R., Jr.; Neyt, Xavier

    2012-09-01

    Hyperspectral imagery offers unique benefits for the detection of land and water features due to the information contained in reflectance signatures such as the bi-directional reflectance distribution function (BRDF). The reflectance signature directly shows the relative absorption and backscattering features of targets, which can be very useful in shoreline monitoring or surveillance applications, for example to detect weathered oil. In real-time detection applications, rapid processing of hyperspectral data is essential, and optimal band selection is therefore important for retaining only the bands that carry the essential absorption and backscatter information. In the present paper, band selection is based upon the optimization of target detection using contrast algorithms. The common definition of contrast (using only one band) is generalized to consider all possible combinations of wavelength-dependent contrasts within a hyperspectral image (a simplified sketch is given below). The inflection (defined here as an approximation of the second derivative) is also used to enhance the variations in the reflectance and contrast spectra and to assist in optimal band selection. The results of the selection, in terms of target detection (false alarms and missed detections), are also compared with a previous feature detection method, namely the matched filter. In this paper, imagery is acquired using a pushbroom hyperspectral sensor mounted at the bow of a small vessel and mechanically rotated using an optical rotation stage. This opto-mechanical scanning system produces hyperspectral images with pixel sizes on the order of mm to cm scales, depending upon the distance between the sensor and the shoreline being monitored. The motion of the platform during the acquisition induces distortions in the collected HSI imagery. It is therefore
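
    As a simplified illustration of two-band contrast maximization (one possible generalization, not the authors' exact criterion; the cube and target mask below are synthetic), a band pair can be scored by the separation between target and background contrasts:

```python
import numpy as np

def best_band_pair(cube, target_mask):
    """Select the band pair maximizing normalized contrast separation.

    cube: (rows, cols, bands) reflectance array; target_mask: boolean (rows, cols).
    """
    bands = cube.shape[2]
    tgt = cube[target_mask].mean(axis=0)        # mean target spectrum
    bkg = cube[~target_mask].mean(axis=0)       # mean background spectrum
    best, best_score = None, -np.inf
    for i in range(bands):
        for j in range(i + 1, bands):
            # Two-band normalized contrast for each class, then their separation.
            c_t = (tgt[i] - tgt[j]) / (tgt[i] + tgt[j] + 1e-12)
            c_b = (bkg[i] - bkg[j]) / (bkg[i] + bkg[j] + 1e-12)
            score = abs(c_t - c_b)
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score

rng = np.random.default_rng(0)
cube = rng.uniform(0.2, 0.4, size=(32, 32, 20))
mask = np.zeros((32, 32), dtype=bool); mask[10:15, 10:15] = True
cube[mask, 7] += 0.3                      # synthetic absorption/backscatter feature
print(best_band_pair(cube, mask))         # -> a pair involving band 7
```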

  5. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization

    International Nuclear Information System (INIS)

    Kim, Minsun; Stewart, Robert D.; Phillips, Mark H.

    2015-01-01

    Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient under the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (Td), and the size and location of the tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (Dmean ≤ 45 Gy), lungs (Dmean ≤ 20 Gy), cord (Dmax ≤ 45 Gy), esophagus (Dmax ≤ 63 Gy), and unspecified tissues (D05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, and maximum tumor BED and D95, as well as the equivalent uniform dose (EUD), for optimized plans against conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of Td (3–100 days), tumor lag-time (Tk = 0–10 days), and tumor size on the optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of the maximum, minimum, and mean tumor BED and D95 were up to 19%, 21%, 20%, and 19% larger than those of the conventional prescription, depending on the Td and Tk used. Tumor EUD was up to 17% larger than for the conventional prescription. For fast proliferating tumors with Td less than 10 days, there was no
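
    The BED trade-off that drives the optimal N can be reproduced with the standard linear-quadratic model plus a log-cell-kill repopulation term. All parameter values below are illustrative assumptions, and unlike the study the physical prescription is held fixed rather than rescaled against the OAR constraints.

```python
import math

abr_tumor = 10.0     # tumor alpha/beta ratio [Gy] (illustrative)
alpha = 0.3          # tumor radiosensitivity [1/Gy] (illustrative)
Td, Tk = 5.0, 7.0    # effective doubling time and repopulation lag [days]
total_dose = 60.0    # fixed physical prescription [Gy]

def tumor_bed(n_fractions, dose_per_fx):
    """Linear-quadratic BED with a repopulation correction after the lag Tk."""
    T = 1.4 * n_fractions          # rough overall time: 5 fractions per week
    D = n_fractions * dose_per_fx
    bed = D * (1.0 + dose_per_fx / abr_tumor)
    if T > Tk:                     # subtract log-cell-kill once proliferation starts
        bed -= math.log(2.0) * (T - Tk) / (alpha * Td)
    return bed

for n in (5, 10, 20, 30, 40):
    d = total_dose / n
    print(f"N={n:2d}, d={d:4.1f} Gy/fx, tumor BED = {tumor_bed(n, d):6.1f} Gy")
```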

  6. Lineup composition, suspect position, and the sequential lineup advantage.

    Science.gov (United States)

    Carlson, Curt A; Gronlund, Scott D; Clark, Steven E

    2008-06-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved

  7. Spectral Quantitative Analysis Model with Combining Wavelength Selection and Topology Structure Optimization

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2016-01-01

    Full Text Available Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of wavelength selection and neural network topology is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid binary chromosome of NAEP has three parts: the first represents the topology structure of the neural network, the second represents the selection of wavelengths in the spectral data, and the third represents the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. To demonstrate the effectiveness of the method, partial least squares with the full spectrum, partial least squares combined with a genetic algorithm, the uninformative variable elimination method, a backpropagation neural network with the full spectrum, a backpropagation neural network combined with a genetic algorithm, and the proposed method are all used to build component prediction models. Experimental results verify that the proposed method predicts more accurately and robustly, making it a practical spectral analysis tool.

  8. Sequential inference as a mode of cognition and its correlates in fronto-parietal and hippocampal brain regions.

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2017-05-01

    Full Text Available Normative models of human cognition often appeal to Bayesian filtering, which provides optimal online estimates of unknown or hidden states of the world based on previous observations. In many cases, however, it is necessary to optimise beliefs about sequences of states rather than just the current state. Importantly, Bayesian filtering and sequential inference strategies make different predictions about beliefs and subsequent choices, rendering them behaviourally dissociable. Taking data from a probabilistic reversal task, we show that subjects' choices provide strong evidence that they represent short sequences of states. Between-subject measures of this implicit sequential inference strategy had a neurobiological underpinning, correlating with grey matter density in prefrontal and parietal cortex as well as the hippocampus. Our findings provide, to our knowledge, the first evidence for sequential inference in human cognition, and by exploiting between-subject variation in this measure we provide pointers to its neuronal substrates.
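
    In a two-state reversal task, Bayesian filtering reduces to a binary hidden Markov filter. The sketch below (transition and outcome probabilities invented) computes the filtered belief that such studies contrast with inference over short state sequences.

```python
import numpy as np

def bayes_filter(observations, p_stay=0.9, p_correct=0.8):
    """Online posterior over a hidden binary state with occasional reversals.

    observations: sequence of 0/1 outcomes; p_stay: per-trial probability that
    the hidden state does not reverse; p_correct: P(outcome matches state).
    """
    belief = np.array([0.5, 0.5])                  # P(state = 0), P(state = 1)
    trans = np.array([[p_stay, 1 - p_stay],
                      [1 - p_stay, p_stay]])
    history = []
    for o in observations:
        belief = trans.T @ belief                  # predict: allow a reversal
        like = np.array([p_correct if o == s else 1 - p_correct for s in (0, 1)])
        belief = like * belief
        belief /= belief.sum()                     # update on the new outcome
        history.append(belief[1])
    return history

obs = [1, 1, 1, 0, 1, 0, 0, 0]                     # reversal midway (invented)
print([round(p, 2) for p in bayes_filter(obs)])
```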

  9. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Mahayni, Malek A.

    2011-01-01

    developed to solve the optimal paths problem with different kinds of graphs. An algorithm that solves the problem of paths’ optimization in directed graphs relative to different cost functions is described in [1]. It follows an approach extended from

  10. Chemiluminescence analyzer of NOx as a high-throughput screening tool in selective catalytic reduction of NO

    International Nuclear Information System (INIS)

    Oh, Kwang Seok; Woo, Seong Ihl

    2011-01-01

    A chemiluminescence-based analyzer of NOx gas species has been applied for the high-throughput screening of a library of catalytic materials. The applicability of the commercial NOx analyzer as a rapid screening tool was evaluated using the selective catalytic reduction of NO gas. A library of 60 binary alloys composed of Pt and Co, Zr, La, Ce, Fe or W on an Al₂O₃ substrate was tested for NOx removal efficiency using a home-built 64-channel parallel and sequential tubular reactor. The NOx concentrations measured by the NOx analyzer agreed well with the results obtained using micro gas chromatography for a reference catalyst consisting of 1 wt% Pt on γ-Al₂O₃. Most alloys showed high efficiency at 275 °C, which is typical of Pt-based catalysts for the selective catalytic reduction of NO. The screening with the NOx analyzer allowed the selection of Pt–Ce(X) (X = 1–3) and Pt–Fe(2) as the optimal catalysts for NOx removal: 73% NOx conversion was achieved with the Pt–Fe(2) alloy, much better than the results for the reference catalyst and the other library alloys. This study demonstrates a sequential high-throughput method for the practical evaluation of catalysts for the selective reduction of NO.

  11. Comparison of Sequential and Variational Data Assimilation

    Science.gov (United States)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool for improving model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential for using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This attention has been supported by the application of sequential techniques such as the ensemble Kalman filter, which require no additional features within the modeling process, i.e., they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function that describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used; we believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to the lack of comparison between the two approaches. We contribute to filling this gap and present results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise into precipitation and temperature to produce better initial estimates for an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages and disadvantages in hydrological applications.
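
    A stochastic ensemble Kalman filter analysis step of the kind referred to above can be sketched as follows; the two-dimensional toy state and the observation operator are invented and do not correspond to the HBV setup of the study.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble: (n_members, n_states) forecast states; obs: observed vector;
    H: observation operator matrix mapping state to observation space.
    """
    n, _ = ensemble.shape
    Hx = ensemble @ H.T
    X = ensemble - ensemble.mean(axis=0)
    Y = Hx - Hx.mean(axis=0)
    Pxy = X.T @ Y / (n - 1)                       # state-observation covariance
    Pyy = Y.T @ Y / (n - 1) + np.eye(len(obs)) * obs_err_std ** 2
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    perturbed = obs + rng.normal(0.0, obs_err_std, size=(n, len(obs)))
    return ensemble + (perturbed - Hx) @ K.T

rng = np.random.default_rng(42)
ens = rng.normal(loc=[5.0, 1.0], scale=1.0, size=(100, 2))  # toy model states
H = np.array([[1.0, 0.0]])                                   # observe first state only
updated = enkf_update(ens, obs=np.array([6.0]), H=H, obs_err_std=0.5, rng=rng)
print(updated.mean(axis=0))                                  # mean pulled toward obs
```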

  12. METHOD FOR OPTIMAL RESOLUTION OF MULTI-AIRCRAFT CONFLICTS IN THREE-DIMENSIONAL SPACE

    Directory of Open Access Journals (Sweden)

    Denys Vasyliev

    2017-03-01

    Full Text Available Purpose: The risk of critical proximity between several aircraft, and hence the appearance of multi-aircraft conflicts, increases under current conditions of high air traffic density and dynamics. A pressing problem is therefore the development of methods for optimal multi-aircraft conflict resolution that synthesize conflict-free trajectories in three-dimensional space. Methods: A method for the optimal resolution of multi-aircraft conflicts using heading, speed and altitude change maneuvers has been developed, with flight regularity, flight economy and the complexity of maneuvering as the optimality criteria. The method provides the sequential synthesis of the Pareto-optimal set of combinations of conflict-free flight trajectories using multi-objective dynamic programming, followed by selection of the optimal combination using a convolution of the optimality criteria. Within the described method, the following are defined: the procedure for determining combinations of conflict-free aircraft states that define the combinations of Pareto-optimal trajectories, and the limitations on the discretization of the conflict resolution process needed to ensure the absence of unobservable separation violations. Results: The analysis of the proposed method is performed using computer simulation, the results of which show that the synthesized combination of conflict-free trajectories ensures multi-aircraft conflict avoidance and complies with the defined optimality criteria. Discussion: The proposed method can be used in the development of new automated air traffic control systems, airborne collision avoidance systems, intelligent air traffic control simulators, and for research activities.

  13. Evaluating Varied Label Designs for Use with Medical Devices: Optimized Labels Outperform Existing Labels in the Correct Selection of Devices and Time to Select.

    Directory of Open Access Journals (Sweden)

    Laura Bix

    Full Text Available Effective standardization of medical device labels requires objective study of varied designs, yet insufficient empirical evidence exists regarding how practitioners utilize and view labeling. Objective: To measure the effect of graphic elements (boxing information, grouping information, symbol use and color-coding) in optimizing a label for comparison with those typical of commercial medical devices. Participants viewed 54 trials on a computer screen; each trial comprised two labels that were identical with regard to graphics but differed in one aspect of information (e.g., one contained latex, the other did not). Participants were instructed to select the label satisfying a given criterion (e.g., containing latex) as quickly as possible. Dependent variables were binary (correct selection) and continuous (time to correct selection). Eighty-nine healthcare professionals were recruited at Association of Surgical Technologists (AST) conferences and through a targeted e-mail to AST members. Symbol presence, color coding and the grouping of critical pieces of information all significantly improved selection rates and sped time to correct selection (α = 0.05). Conversely, when critical information was graphically boxed, the probability of correct selection and time to selection were impaired (α = 0.05). Subsequently, responses from trials containing the optimal treatments (color coding, critical information grouped with symbols) were compared with two labels created based on a review of those commercially available. Optimal labels yielded a significant positive benefit regarding the probability of correct choice (P<0.0001; LSM [UCL, LCL]: 97.3% [98.4%, 95.5%]) compared with the two labels based on commercial designs (92.0% [94.7%, 87.9%] and 89.8% [93.0%, 85.3%]), as well as regarding time to selection. Our study provides data regarding design factors, namely color coding, symbol use and the grouping of critical information, that can be used to significantly enhance the performance of medical device labels.

  14. A note on “An alternative multiple attribute decision making methodology for solving optimal facility layout design selection problems”

    OpenAIRE

    R. Venkata Rao

    2012-01-01

    A paper published by Maniya and Bhatt (2011) (An alternative multiple attribute decision making methodology for solving optimal facility layout design selection problems, Computers & Industrial Engineering, 61, 542-549) proposed an alternative multiple attribute decision making method, named the “Preference Selection Index (PSI) method”, for the selection of an optimal facility layout design. The authors claimed that the method was logical and more appropriate, and that the method gives directly the o...

  15. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques and to ensure the accuracy of the optimization. However, earlier studies have drawbacks: the optimization loop involves three phases as well as empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of constrained problems without any empirical parameters. The criterion is able to select points located in the feasible region with high model uncertainty, as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is dominant, the infill sampling criterion or the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because, unlike super-EGO, the sample points are not clustered within extremely small regions. The performance of the proposed method, including the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
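
    For readers unfamiliar with this family of methods, the basic sequential surrogate loop (fit the surrogate, choose an infill point, evaluate, refit) can be sketched with a Gaussian-process surrogate and the classical expected-improvement criterion. This is generic unconstrained EGO-style infill on a toy problem, not the paper's united criterion, which additionally mixes in a constraint-boundary term.

```python
# Sketch of an EGO-family sequential surrogate optimization loop:
# fit a GP, sample where expected improvement is largest, repeat.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                       # toy 1-D objective (assumed)
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))     # initial design
y = objective(X).ravel()

grid = np.linspace(-3, 3, 400).reshape(-1, 1)
for _ in range(15):
    gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, y)
    mu, std = gp.predict(grid, return_std=True)
    std = np.maximum(std, 1e-12)
    imp = y.min() - mu                  # improvement over best observation
    z = imp / std
    ei = imp * norm.cdf(z) + std * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next)[0])

print("best x, f(x):", float(X[np.argmin(y)][0]), float(y.min()))
```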

  16. Tradable permit allocations and sequential choice

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)

    2011-01-15

    This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)

  17. Optimal Selection of AC Cables for Large Scale Offshore Wind Farms

    DEFF Research Database (Denmark)

    Hou, Peng; Hu, Weihao; Chen, Zhe

    2014-01-01

    The investment of large scale offshore wind farms is high in which the electrical system has a significant contribution to the total cost. As one of the key components, the cost of the connection cables affects the initial investment a lot. The development of cable manufacturing provides a vast...... and systematical way for the optimal selection of cables in large scale offshore wind farms....

  18. Properties of the DREAM scheme and its optimization for application to proteins

    International Nuclear Information System (INIS)

    Westfeld, Thomas; Verel, René; Ernst, Matthias; Böckmann, Anja; Meier, Beat H.

    2012-01-01

    The DREAM scheme is an efficient adiabatic homonuclear polarization-transfer method suitable for multi-dimensional experiments in biomolecular solid-state NMR. The bandwidth and dynamics of the polarization transfer in the DREAM experiment depend on a number of experimental and spin-system parameters. In order to obtain optimal results, the dependence of the cross-peak intensity on these parameters needs to be understood and carefully controlled. We introduce a simplified model to semi-quantitatively describe the polarization-transfer patterns for the relevant spin systems. Numerical simulations for all natural amino acids (except tryptophan) show the dependence of the cross-peak intensities on the radio-frequency-carrier position. This dependence can be used as a guide for selecting the desired conditions in protein spectroscopy. Practical guidelines are given on how to set up a DREAM experiment for optimized Cα/Cβ transfer, which is important in sequential assignment experiments.

  19. Classical and sequential limit analysis revisited

    Science.gov (United States)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  20. Well Field Management Using Multi-Objective Optimization

    DEFF Research Database (Denmark)

    Hansen, Annette Kirstine; Hendricks Franssen, H. J.; Bauer-Gottwein, Peter

    2013-01-01

    with infiltration basins, injection wells and abstraction wells. The two management objectives are to minimize the amount of water needed for infiltration and to minimize the risk of getting contaminated water into the drinking water wells. The management is subject to a daily demand fulfilment constraint. Two...... different optimization methods are tested. Constant scheduling where decision variables are held constant during the time of optimization, and sequential scheduling where the optimization is performed stepwise for daily time steps. The latter is developed to work in a real-time situation. Case study...

  1. Sequential decisions: a computational comparison of observational and reinforcement accounts.

    Directory of Open Access Journals (Sweden)

    Nazanin Mohammadi Sepahvand

    Full Text Available Right brain damaged patients show impairments in sequential decision making tasks with which healthy people have no difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment. The better the model is, the better their decisions are. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies, rather than qualitatively different learning mechanisms.

  2. Comparison of direct machine parameter optimization versus fluence optimization with sequential sequencing in IMRT of hypopharyngeal carcinoma

    International Nuclear Information System (INIS)

    Dobler, Barbara; Pohl, Fabian; Bogner, Ludwig; Koelbl, Oliver

    2007-01-01

    To evaluate the effects of direct machine parameter optimization in the treatment planning of intensity-modulated radiation therapy (IMRT) for hypopharyngeal cancer, as compared to subsequent leaf sequencing, in Oncentra Masterplan v1.5. For 10 hypopharyngeal cancer patients, IMRT plans were generated in Oncentra Masterplan v1.5 (Nucletron BV, Veenendaal, the Netherlands) for a Siemens Primus linear accelerator. For optimization, the dose volume objectives (DVO) for the planning target volume (PTV) were set to 53 Gy minimum dose and 59 Gy maximum dose, in order to reach an average PTV dose of 56 Gy. For the parotids a median dose of 22 Gy was allowed, and for the spinal cord a maximum dose of 35 Gy. The maximum DVO for the external contour of the patient was set to 59 Gy. The treatment plans were optimized with the direct machine parameter optimization ('Direct Step & Shoot', DSS, Raysearch Laboratories, Sweden) newly implemented in Masterplan v1.5 and with the fluence modulation technique ('Intensity Modulation', IM) already available in previous versions of Masterplan. The two techniques were compared with regard to compliance with the DVO, plan quality, and the number of monitor units (MU) required per fraction dose. The plans optimized with the DSS technique met the DVO for the PTV significantly better than the plans optimized with IM (p = 0.007 for the min DVO and p < 0.0005 for the max DVO). No significant difference could be observed in compliance with the DVO for the organs at risk (OAR) (p > 0.05). Plan quality, target coverage and dose homogeneity inside the PTV were superior for the plans optimized with DSS, for similar dose to the spinal cord and lower dose to the normal tissue. The mean dose to the parotids was lower for the plans optimized with IM. Treatment plan efficiency was higher for the DSS plans with (901 ± 160) MU compared to (1151 ± 157) MU for IM (p-value < 0.05). Renormalization of the IM plans to the mean of the

  3. Mobility of radionuclides based on sequential extraction of soils

    International Nuclear Information System (INIS)

    Salbu, B.; Oughton, D.H.; Lien, H.N.; Oestby, G.; Strand, P.

    1992-01-01

    Since 1989, core samples of soil and vegetation from semi-natural pastures have been collected at selected sites in Norway during the growing season. The activity concentrations in soil and vegetation, as well as the transfer coefficients, vary significantly between regions, within regions and even within sampling plot areas. In order to differentiate between mobile and inert fractions of radioactive and stable isotopes of Cs and Sr in soils, samples were extracted sequentially using agents with increasing dissolution power. The reproducibility of the sequential extraction technique is good and the data obtained seem most informative. As the distribution patterns for radioactive and stable isotopes of Cs and Sr are similar, a high degree of isotopic exchange is indicated. Based on the easily leachable fractions, mobility factors are calculated. In general the mobility of 90Sr is higher than that of 137Cs. Mobility factors are not significantly influenced by seasonal variations, but a decrease in the mobile fraction in soil with time is indicated. Mobility factors should be considered useful for modelling purposes. (au)

  4. Reliability Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem to design structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability...... is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the system reliability satisfies a given requirement....... For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability based optimization problem sequentially using quasi-analytical derivatives. Finally...

  5. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications

    Science.gov (United States)

    Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the distance expression used by the fruit flies to find the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources in the osphresis phase, allowing the procedure to search for the optimal solution both across the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support its superiority. Moreover, the algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection for real-world classification problems. This method, called the chaotic improved fruit fly optimization algorithm SVM (CIFOA-SVM), has been shown to be more robust and effective than other well-known methods, particularly for the medical diagnosis problem and the credit card problem. PMID:28369096
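
    A minimal sketch of the underlying fruit fly optimization loop with logistic-map chaotic initialization is given below; the toy fitness function is assumed, and the paper's mutation strategy and SVM wrapper are not reproduced.

```python
# Minimal fruit-fly optimization sketch with chaotic (logistic-map)
# swarm initialization, illustrating the algorithm family the record
# builds on. CIFOA adds a mutation strategy not shown here.
import numpy as np

def sphere(x):                          # toy fitness: lower is better
    return float(np.sum(x ** 2))

dim, n_flies, iters = 5, 30, 200
rng = np.random.default_rng(1)

# Logistic map generates a chaotic sequence in (0, 1) to seed the swarm.
c, chaos = 0.7, []
for _ in range(n_flies * dim):
    c = 4.0 * c * (1.0 - c)
    chaos.append(c)
swarm_loc = (np.array(chaos).reshape(n_flies, dim) - 0.5) * 10  # in [-5, 5]

best_x = swarm_loc[0].copy()
best_f = sphere(best_x)
for _ in range(iters):
    # Osphresis phase: random search steps around the swarm location.
    trial = swarm_loc + rng.normal(scale=0.5, size=swarm_loc.shape)
    fitness = np.array([sphere(t) for t in trial])
    i = np.argmin(fitness)
    if fitness[i] < best_f:             # vision phase: fly swarm to best fly
        best_f, best_x = fitness[i], trial[i].copy()
        swarm_loc = np.tile(best_x, (n_flies, 1))

print("best fitness:", best_f)
```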

  6. A Generalized Measure for the Optimal Portfolio Selection Problem and its Explicit Solution

    Directory of Open Access Journals (Sweden)

    Zinoviy Landsman

    2018-03-01

    Full Text Available In this paper, we offer a novel class of utility functions applied to optimal portfolio selection. This class incorporates as special cases important measures such as the mean-variance, Sharpe ratio, mean-standard deviation and others. We provide an explicit solution to the problem of optimal portfolio selection based on this class. Furthermore, we show that each measure in this class generally yields an efficient frontier that coincides with or belongs to the classical mean-variance efficient frontier. In addition, a condition is provided for the existence of a one-to-one correspondence between the parameter of this class of utility functions and the trade-off parameter λ in the mean-variance utility function. This correspondence essentially provides insight into the choice of this parameter. We illustrate our results with a portfolio of stocks from the National Association of Securities Dealers Automated Quotation (NASDAQ).
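
    The mean-variance special case admits the kind of explicit solution the abstract alludes to: maximize w'μ − (λ/2)·w'Σw subject to the budget constraint, solved in closed form via the Lagrangian. The return and covariance data below are assumed for illustration.

```python
# Sketch: closed-form mean-variance portfolio, the classical special case
# of the record's utility class. Maximize w'mu - (lam/2) w'Sigma w with
# sum(w) = 1; the multiplier eta enforces the budget constraint.
import numpy as np

mu = np.array([0.08, 0.12, 0.10])           # expected returns (assumed)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])      # covariance matrix (assumed)
lam = 3.0                                   # risk-aversion trade-off
ones = np.ones(len(mu))

Sinv = np.linalg.inv(Sigma)
eta = (ones @ Sinv @ mu - lam) / (ones @ Sinv @ ones)  # budget multiplier
w = Sinv @ (mu - eta * ones) / lam

print("weights:", w, "sum:", w.sum())
print("mean:", w @ mu, "variance:", w @ Sigma @ w)
```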

  7. Simultaneous versus sequential penetrating keratoplasty and cataract surgery.

    Science.gov (United States)

    Hayashi, Ken; Hayashi, Hideyuki

    2006-10-01

    To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 or more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of that targeted in 15 eyes (39%) in the simultaneous surgery group and within 2 D in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). The regular and irregular astigmatism was not significantly different between the groups at 3 or more months after surgery. There was also no significant difference between the groups in the percentage of endothelial cell loss. Although corneal astigmatism and endothelial cell loss were not different, refractive error from target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome after sequential surgery than after simultaneous surgery.

  8. Adaptive x-ray threat detection using sequential hypotheses testing with fan-beam experimental data (Conference Presentation)

    Science.gov (United States)

    Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.

    2017-05-01

    We employ an adaptive measurement system, based on the sequential hypotheses testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray testbed system. This testbed employs a 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquire multiple-view projection data for 200 bags. We consider an adaptive measurement design where the X-ray projection measurements are acquired in a sequential manner and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline, which corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.
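
    The sequential decision logic can be illustrated with Wald's classical sequential probability ratio test, a simple member of the SHT framework the record builds on. The threat/benign observation probabilities below are invented for the sketch, and the testbed's adaptive source/view selection is not modeled.

```python
# Sketch of a Wald SPRT: accumulate log-likelihood ratios observation by
# observation and stop as soon as either error boundary is crossed.
import math
import random

def sprt(samples, p_threat=0.7, p_benign=0.3, alpha=0.01, beta=0.01):
    """Binary observations; decide 'threat' vs 'benign' as soon as possible."""
    upper = math.log((1 - beta) / alpha)     # accept-threat boundary
    lower = math.log(beta / (1 - alpha))     # accept-benign boundary
    llr = 0.0
    for n, x in enumerate(samples, 1):
        p1 = p_threat if x else 1 - p_threat
        p0 = p_benign if x else 1 - p_benign
        llr += math.log(p1 / p0)
        if llr >= upper:
            return "threat", n
        if llr <= lower:
            return "benign", n
    return "undecided", n

random.seed(0)
stream = (random.random() < 0.7 for _ in range(100))  # simulated threat bag
print(sprt(stream))                                   # decision, samples used
```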

  9. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Minsun, E-mail: mk688@uw.edu; Stewart, Robert D. [Department of Radiation Oncology, University of Washington, Seattle, Washington 98195-6043 (United States); Phillips, Mark H. [Departments of Radiation Oncology and Neurological Surgery, University of Washington, Seattle, Washington 98195-6043 (United States)

    2015-11-15

    Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (T_d), and the size and location of the tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (D_mean ≤ 45 Gy), lungs (D_mean ≤ 20 Gy), cord (D_max ≤ 45 Gy), esophagus (D_max ≤ 63 Gy), and unspecified tissues (D_05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D_95 of tumor BED, as well as the equivalent uniform dose (EUD), for optimized plans against conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of T_d (3–100 days), tumor lag-time (T_k = 0–10 days), and tumor size on the optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D_95 were up to 19%, 21%, 20%, and 19% larger than those from the conventional prescription, depending on the T_d and T_k used. Tumor EUD was up to 17% larger than the conventional prescription. For fast proliferating
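
    The schedule search can be illustrated with a one-OAR toy version: for each candidate fraction number N, choose the dose per fraction that exactly spends an assumed OAR BED tolerance, then score the tumor BED with the linear-quadratic model plus a proliferation penalty. All parameter values below are illustrative stand-ins, not the paper's.

```python
# Sketch: pick the fraction number N that maximizes tumor BED under a
# single OAR BED constraint, using the linear-quadratic model with a
# tumor-proliferation term. One fraction per day is assumed.
import math

ab_tumor, ab_oar = 10.0, 3.0     # alpha/beta ratios (Gy)
alpha, Td, Tk = 0.3, 5.0, 7.0    # radiosensitivity, doubling time, lag (days)
sparing = 0.6                    # OAR dose per Gy of tumor dose (assumed)
oar_bed_limit = 90.0             # OAR tolerance BED (assumed)

def tumor_bed(N):
    # Dose per fraction d that spends the whole OAR tolerance:
    # N*(s*d)*(1 + s*d/ab_oar) = limit  ->  quadratic in d.
    a = N * sparing ** 2 / ab_oar
    b = N * sparing
    d = (-b + math.sqrt(b * b + 4 * a * oar_bed_limit)) / (2 * a)
    T = N                        # treatment time in days, roughly
    prolif = math.log(2) * max(0.0, T - Tk) / (alpha * Td)
    return N * d * (1 + d / ab_tumor) - prolif

best_N = max(range(1, 41), key=tumor_bed)
print("optimal N:", best_N, "tumor BED:", round(tumor_bed(best_N), 1))
```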

  10. A screening method for the optimal selection of plate heat exchanger configurations

    Directory of Open Access Journals (Sweden)

    Pinto J.M.

    2002-01-01

    Full Text Available An optimization method for determining the best configuration(s) of gasketed plate heat exchangers is presented. The objective is to select the configuration(s) with the minimum heat transfer area that still satisfies constraints on the number of channels, the pressure drop of both fluids, the channel flow velocities and the exchanger thermal effectiveness. The configuration of the exchanger is defined by six parameters: the number of channels, the number of passes on each side, the fluid locations, the feed positions and the type of flow in the channels. The resulting configuration optimization problem is formulated as the minimization of the exchanger heat transfer area, and a screening procedure is proposed for its solution. In this procedure, subsets of constraints are successively applied to eliminate infeasible and nonoptimal solutions. Examples show that the optimization method is able to successfully determine a set of optimal configurations with a minimum number of exchanger evaluations. Approximately 5% of the pressure drop and channel velocity calculations and 1% of the thermal simulations are required for the solution.

  11. Framework for Multidisciplinary Analysis, Design, and Optimization with High-Fidelity Analysis Tools

    Science.gov (United States)

    Orr, Stanley A.; Narducci, Robert P.

    2009-01-01

    A plan is presented for the development of a high fidelity multidisciplinary optimization process for rotorcraft. The plan formulates individual disciplinary design problems, identifies practical high-fidelity tools and processes that can be incorporated in an automated optimization environment, and establishes statements of the multidisciplinary design problem including objectives, constraints, design variables, and cross-disciplinary dependencies. Five key disciplinary areas are selected in the development plan. These are rotor aerodynamics, rotor structures and dynamics, fuselage aerodynamics, fuselage structures, and propulsion / drive system. Flying qualities and noise are included as ancillary areas. Consistency across engineering disciplines is maintained with a central geometry engine that supports all multidisciplinary analysis. The multidisciplinary optimization process targets the preliminary design cycle where gross elements of the helicopter have been defined. These might include number of rotors and rotor configuration (tandem, coaxial, etc.). It is at this stage that sufficient configuration information is defined to perform high-fidelity analysis. At the same time there is enough design freedom to influence a design. The rotorcraft multidisciplinary optimization tool is built and substantiated throughout its development cycle in a staged approach by incorporating disciplines sequentially.

  12. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from

  13. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasting estimates is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. Candidate input variables for various leading periods are selected, and random forests (RF) are employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of the input variable selection based on RF. The results of the case study show that by removing uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the required training time, and substantially improves the model's accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
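
    A compact sketch of the selection idea, ranking candidate lag features of a synthetic series by random-forest importance and keeping the top subset, is shown below; the paper evaluates the selected features with a kernel extreme learning machine rather than with the forest itself.

```python
# Sketch: RF-based input selection for a one-step-ahead forecast.
# Candidate inputs are the previous `lags` values of a toy series.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, lags = 2000, 12
speed = np.cumsum(rng.normal(size=n + lags))          # toy wind-speed series

X = np.column_stack([speed[i:n + i] for i in range(lags)])
y = speed[lags:lags + n]                              # next value to predict

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]     # most important first
keep = order[:4]                                      # assumed subset size
print("selected lags:", sorted(int(lags - k) for k in keep))  # lag 1 = newest
```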

  14. Optimality and stability of symmetric evolutionary games with applications in genetic selection.

    Science.gov (United States)

    Huang, Yuanyuan; Hao, Yiping; Wang, Min; Zhou, Wen; Wu, Zhijun

    2015-06-01

    Symmetric evolutionary games, i.e., evolutionary games with symmetric fitness matrices, have important applications in population genetics, where they can be used to model, for example, the selection and evolution of the genotypes of a given population. In this paper, we review the theory for obtaining optimal and stable strategies for symmetric evolutionary games, and provide some new proofs and computational methods. In particular, we review the relationship between the symmetric evolutionary game and the generalized knapsack problem, and discuss the first- and second-order necessary and sufficient conditions that can be derived from this relationship for testing the optimality and stability of the strategies. Some of the conditions are given in forms different from those in previous work and can be verified more efficiently. We also derive more efficient computational methods for the evaluation of the conditions than conventional approaches. We demonstrate how these conditions can be applied to justifying the strategies and their stabilities for a special class of genetic selection games, including some in the study of genetic disorders.
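
    As a computational companion to the analytical conditions, replicator dynamics on an assumed symmetric fitness matrix gives a cheap way to locate candidate equilibrium strategies; it is not a substitute for the record's first- and second-order tests.

```python
# Sketch: discrete-time replicator dynamics on a symmetric game,
# iterated from the uniform mixed strategy to a candidate equilibrium.
import numpy as np

A = np.array([[1.0, 0.5, 0.2],       # symmetric fitness matrix (illustrative)
              [0.5, 1.2, 0.4],
              [0.2, 0.4, 0.9]])

x = np.full(3, 1 / 3)                # start at the uniform mixed strategy
for _ in range(5000):
    f = A @ x                        # fitness of each pure strategy
    x = x * f / (x @ f)              # replicator update
    x /= x.sum()                     # guard against numerical drift

print("candidate equilibrium:", np.round(x, 4))
print("mean fitness:", round(float(x @ A @ x), 4))
```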

  15. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    Science.gov (United States)

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of iris recognition systems. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted sub-region matching fusion. Particle swarm optimization is utilized to efficiently determine the different subregions' weights, and the weighted subregion matching scores are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  16. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-01-01

    Full Text Available In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of iris recognition systems. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted sub-region matching fusion. Particle swarm optimization is utilized to efficiently determine the different subregions' weights, and the weighted subregion matching scores are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.

  17. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    Science.gov (United States)

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of iris recognition systems. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted sub-region matching fusion. Particle swarm optimization is utilized to efficiently determine the different subregions' weights, and the weighted subregion matching scores are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.

  18. The impact of uncertainty on optimal emission policies

    Science.gov (United States)

    Botta, Nicola; Jansson, Patrik; Ionescu, Cezar

    2018-05-01

    We apply a computational framework for specifying and solving sequential decision problems to study the impact of three kinds of uncertainties on optimal emission policies in a stylized sequential emission problem. We find that uncertainties about the implementability of decisions on emission reductions (or increases) have a greater impact on optimal policies than uncertainties about the availability of effective emission reduction technologies and uncertainties about the implications of trespassing critical cumulated emission thresholds. The results show that uncertainties about the implementability of decisions on emission reductions (or increases) call for more precautionary policies. In other words, delaying emission reductions to the point in time when effective technologies will become available is suboptimal when these uncertainties are accounted for rigorously. By contrast, uncertainties about the implications of exceeding critical cumulated emission thresholds tend to make early emission reductions less rewarding.

  19. A new moving strategy for the sequential Monte Carlo approach in optimizing the hydrological model parameters

    Science.gov (United States)

    Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli

    2018-04-01

    Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating posterior parameter distributions with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of such samplers depend strongly on the efficiency of the move step. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handling the unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the SMC framework. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model (considering parameter uncertainty only, and then parameter and input uncertainty simultaneously), show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high dimensional problems. The study also indicates that future work should account for model structural uncertainty by using multiple hydrological models within the SMC framework.
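
    The ingredients of such a sampler can be made concrete with a generic tempered SMC on a bimodal one-dimensional target: reweight along a tempering ladder, resample, and rejuvenate with Metropolis moves. PEM-SMC replaces the plain random-walk move below with genetic and differential-evolution moves, which this sketch does not reproduce.

```python
# Sketch of a generic tempered SMC sampler with Metropolis move steps.
import numpy as np

def log_like(x):                        # two well-separated normal modes
    return np.logaddexp(-0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2)

def log_prior(x):                       # broad N(0, 5^2) starting density
    return -0.5 * (x / 5.0) ** 2

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(scale=5.0, size=n)       # particles drawn from the prior
logw = np.zeros(n)

betas = np.linspace(0.0, 1.0, 21)       # tempering ladder, prior -> posterior
for b_prev, b in zip(betas[:-1], betas[1:]):
    logw += (b - b_prev) * log_like(x)              # incremental reweight
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = x[rng.choice(n, size=n, p=w)]               # multinomial resample
    logw = np.zeros(n)
    for _ in range(5):                              # Metropolis move step
        prop = x + rng.normal(scale=1.0, size=n)
        delta = (b * (log_like(prop) - log_like(x))
                 + log_prior(prop) - log_prior(x))
        x = np.where(np.log(rng.uniform(size=n)) < delta, prop, x)

print("fraction of particles in the right mode:", np.mean(x > 0))
```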

  20. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.

    Science.gov (United States)

    Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald

    2018-01-01

    Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics to variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without a statistical background to identify suitable decision trees confidently and efficiently.

  1. Non-euclidean simplex optimization. [Application to potentiometric titration of Pu

    Energy Technology Data Exchange (ETDEWEB)

    Silver, G.L.

    1977-08-15

    Geometric optimization techniques useful for studying chemical equilibrium traditionally rely upon principles of euclidean geometry, but such algorithms may also be based upon principles of a non-euclidean geometry. The sequential simplex method is adapted to the hyperbolic plane, and application of optimization to problems such as the potentiometric titration of plutonium is suggested.

  2. Optimized universal color palette design for error diffusion

    Science.gov (United States)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
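
    The halftoning side of the pipeline can be sketched with plain Floyd-Steinberg error diffusion against a fixed palette; the SSQ palette design and the opponent-color error metric of the paper are replaced here by a uniform RGB palette and Euclidean distance.

```python
# Sketch: error diffusion with a fixed palette. Each pixel is quantized
# to its nearest palette color; the residual error is spread to
# unprocessed neighbors with the Floyd-Steinberg weights.
import numpy as np

def error_diffuse(img, palette):
    """img: float array (H, W, 3) in [0, 255]; palette: (K, 3)."""
    out = np.zeros(img.shape[:2], dtype=int)
    work = img.astype(float).copy()
    h, w, _ = work.shape
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            k = np.argmin(((palette - old) ** 2).sum(axis=1))  # nearest color
            out[y, x] = k
            err = old - palette[k]
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                work[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                work[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                work[y + 1, x + 1] += err * 1 / 16
    return out                          # palette indices per pixel

# 6x6x6 uniform palette as a stand-in for an SSQ-designed palette.
levels = np.linspace(0, 255, 6)
palette = np.array([[r, g, b] for r in levels for g in levels for b in levels])
img = np.random.default_rng(0).uniform(0, 255, size=(32, 32, 3))
print(error_diffuse(img, palette).shape)
```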

  3. Comonotonic approximations for a generalized provisioning problem with application to optimal portfolio selection

    NARCIS (Netherlands)

    van Weert, K.; Dhaene, J.; Goovaerts, M.

    2011-01-01

    In this paper we discuss multiperiod portfolio selection problems related to a specific provisioning problem. Our results are an extension of Dhaene et al. (2005) [14], where optimal constant mix investment strategies are obtained in a provisioning and savings context, using an analytical approach

  4. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  5. Sequentially pulsed traveling wave accelerator

    Science.gov (United States)

    Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

    2009-08-18

    A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.

  6. Short-Range Temporal Interactions in Sleep; Hippocampal Spike Avalanches Support a Large Milieu of Sequential Activity Including Replay.

    Directory of Open Access Journals (Sweden)

    J Matthew Mahoney

    Full Text Available Hippocampal neural systems consolidate multiple complex behaviors into memory. However, the temporal structure of neural firing supporting complex memory consolidation is unknown. Replay of hippocampal place cells during sleep supports the view that a simple repetitive behavior modifies sleep firing dynamics, but does not explain how multiple episodes could be integrated into associative networks for recollection during future cognition. Here we decode sequential firing structure within spike avalanches of all pyramidal cells recorded in sleeping rats after running in a circular track. We find that short sequences that combine into multiple long sequences capture the majority of the sequential structure during sleep, including replay of hippocampal place cells. The ensemble, however, is not optimized for maximally producing the behavior-enriched episode. Thus behavioral programming of sequential correlations occurs at the level of short-range interactions, not whole behavioral sequences and these short sequences are assembled into a large and complex milieu that could support complex memory consolidation.

  7. Parameter sampling capabilities of sequential and simultaneous data assimilation: I. Analytical comparison

    International Nuclear Information System (INIS)

    Fossum, Kristian; Mannseth, Trond

    2014-01-01

    We assess the parameter sampling capabilities of some Bayesian, ensemble-based, joint state-parameter (JS) estimation methods. The forward model is assumed to be non-chaotic and have nonlinear components, and the emphasis is on results obtained for the parameters in the state-parameter vector. A variety of approximate sampling methods exist, and a number of numerical comparisons between such methods have been performed. Often, more than one of the defining characteristics varies from one method to another, so it can be difficult to point out which characteristic of the more successful method in such a comparison was decisive. In this study, we single out one defining characteristic for comparison: whether data are assimilated sequentially or simultaneously. The current paper is concerned with analytical investigations into this issue. We carefully select one sequential and one simultaneous JS method for the comparison. We also design a corresponding pair of pure parameter estimation methods, and we show how the JS methods and the parameter estimation methods are pairwise related. It is shown that the sequential and the simultaneous parameter estimation methods are equivalent for one particular combination of observations with different degrees of nonlinearity. Strong indications are presented for why one may expect the sequential parameter estimation method to outperform the simultaneous parameter estimation method for all other combinations of observations. Finally, the conditions for when similar relations can be expected to hold between the corresponding JS methods are discussed. A companion paper, part II (Fossum and Mannseth 2014 Inverse Problems 30 114003), is concerned with statistical analysis of results from a range of numerical experiments involving sequential and simultaneous JS estimation, where the design of the numerical investigation is motivated by our findings in the current paper. (paper)

  8. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. To address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been applied to predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.

  9. Sequential bidding in day-ahead auctions for spot energy and power systems reserve

    International Nuclear Information System (INIS)

    Swider, Derk J.

    2005-01-01

    In this paper a novel approach for sequential bidding on day-ahead auction markets for spot energy and power systems reserve is presented. For the spot market a relatively simple method is considered, as a competitive market is assumed. For the reserve market, one bidder is assumed to behave strategically, and the behavior of the competitors is summarized in a probability distribution of the market price. This results in a method for sequential bidding in which the bidding prices and capacities on the spot and reserve markets are calculated by maximizing a stochastic non-linear objective function of expected profit. An exemplary application shows that the trading sequence leads to bidding capacities and prices that increase with the reverse rank number of the markets. Hence, the consideration of a defined trading sequence greatly influences the mathematical representation of the optimal bidding behavior under price uncertainty in day-ahead auctions for spot energy and power systems reserve. (Author)

  10. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial.

    Science.gov (United States)

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-08-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several key design aspects that have been considered in the literature on wireless sensor networks (WSNs). It targets researchers who are new to the mathematical optimization tool and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful in design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques and experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes.

  11. Selective pressures on C4 photosynthesis evolution in grasses through the lens of optimality

    OpenAIRE

    Akcay, Erol; Zhou, Haoran; Helliker, Brent

    2016-01-01

    CO2, temperature, water availability and light intensity were potential selective pressures to propel the initial evolution and global expansion of C4 photosynthesis in grasses. To tease apart the primary selective pressures along the evolutionary trajectory, we coupled photosynthesis and hydraulics models and optimized photosynthesis over stomatal resistance and leaf/fine-root allocation. We also examined the importance of nitrogen reallocation from the dark to the light reactions. Our resul...

  12. Optimal Training for Time-Selective Wireless Fading Channels Using Cutoff Rate

    Directory of Open Access Journals (Sweden)

    Tong Lang

    2006-01-01

    Full Text Available We consider the optimal allocation of resources—power and bandwidth—between training and data transmissions for single-user time-selective Rayleigh flat-fading channels under the cutoff rate criterion. The transmitter exploits statistical channel state information (CSI) in the form of the channel Doppler spectrum to embed pilot symbols into the transmission stream. At the receiver, instantaneous, though imperfect, CSI is acquired through minimum mean-square estimation of the channel based on some set of pilot observations. We compute the ergodic cutoff rate for this scenario. Assuming estimator-based interleaving and M-PSK inputs, we study two special cases in depth. First, we derive the optimal resource allocation for the Gauss-Markov correlation model. Next, we validate and refine these insights by studying resource allocation for the Jakes model.

  13. Articulated Human Motion Tracking Using Sequential Immune Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yi Li

    2013-01-01

    Full Text Available We formulate human motion tracking as a high-dimensional constrained optimization problem. A novel generative method is proposed for human motion tracking in the framework of evolutionary computation. The main contribution is that we introduce the immune genetic algorithm (IGA) for pose optimization in the latent space of human motion. Firstly, we perform human motion analysis in the learnt latent space of human motion. As the latent space is low dimensional and contains the prior knowledge of human motion, it makes pose analysis more efficient and accurate. Then, in the search strategy, we apply IGA for pose optimization. Compared with the genetic algorithm and other evolutionary methods, its main advantage is the ability to use the prior knowledge of human motion. We design an IGA-based method to estimate human pose from static images for the initialization of motion tracking. And we propose a sequential IGA (S-IGA) algorithm for motion tracking by incorporating temporal continuity information into the traditional IGA. Experimental results on different videos of different motion types show that our IGA-based pose estimation method can be used for the initialization of motion tracking. The S-IGA-based motion tracking method can achieve accurate and stable tracking of 3D human motion.

  14. Energy-efficient relay selection and optimal power allocation for performance-constrained dual-hop variable-gain AF relaying

    KAUST Repository

    Zafar, Ammar

    2013-12-01

    This paper investigates the energy-efficiency enhancement of a variable-gain dual-hop amplify-and-forward (AF) relay network utilizing selective relaying. The objective is to minimize the total consumed power while keeping the end-to-end signal-to-noise-ratio (SNR) above a certain peak value and satisfying the peak power constraints at the source and relay nodes. To achieve this objective, an optimal relay selection and power allocation strategy is derived by solving the power minimization problem. Numerical results show that the derived optimal strategy enhances the energy-efficiency as compared to a benchmark scheme in which both the source and the selected relay transmit at peak power. © 2013 IEEE.

  15. Optimal Selection Method of Process Patents for Technology Transfer Using Fuzzy Linguistic Computing

    Directory of Open Access Journals (Sweden)

    Gangfeng Wang

    2014-01-01

    Full Text Available Under the open innovation paradigm, technology transfer of process patents is one of the most important mechanisms for manufacturing companies to implement process innovation and enhance the competitive edge. To achieve promising technology transfers, we need to evaluate the feasibility of process patents and optimally select the most appropriate patent according to the actual manufacturing situation. Hence, this paper proposes an optimal selection method of process patents using multiple criteria decision-making and 2-tuple fuzzy linguistic computing to avoid information loss during the processes of evaluation integration. An evaluation index system for technology transfer feasibility of process patents is designed initially. Then, fuzzy linguistic computing approach is applied to aggregate the evaluations of criteria weights for each criterion and corresponding subcriteria. Furthermore, performance ratings for subcriteria and fuzzy aggregated ratings of criteria are calculated. Thus, we obtain the overall technology transfer feasibility of patent alternatives. Finally, a case study of aeroengine turbine manufacturing is presented to demonstrate the applicability of the proposed method.

  16. Feature extraction and sensor selection for NPP initiating event identification

    International Nuclear Information System (INIS)

    Lin, Ting-Han; Wu, Shun-Chi; Chen, Kuang-You; Chou, Hwai-Pwu

    2017-01-01

    Highlights: • A two-stage feature extraction scheme for NPP initiating event identification. • With stBP, interrelations among the sensors can be retained for identification. • With dSFS, sensors that are crucial for identification can be efficiently selected. • Efficacy of the scheme is illustrated with data from the Maanshan NPP simulator. - Abstract: Initiating event identification is essential in managing nuclear power plant (NPP) severe accidents. In this paper, a novel two-stage feature extraction scheme that incorporates the proposed sensor type-wise block projection (stBP) and deflatable sequential forward selection (dSFS) is used to elicit the discriminant information in the data obtained from various NPP sensors to facilitate event identification. With the stBP, the primal features can be extracted without eliminating the interrelations among the sensors of the same type. The extracted features are then subjected to a further dimensionality reduction by selecting the sensors that are most relevant to the events under consideration. This selection is not easy, and a combinatorial optimization technique is normally required. With the dSFS, an optimal sensor set can be found with less computational load. Moreover, its sensor deflation stage allows sensors in the preselected set to be iteratively refined to avoid being trapped in a local optimum. Results from detailed experiments containing data for 12 event categories and a total of 112 events generated with Taiwan's Maanshan NPP simulator are presented to illustrate the efficacy of the proposed scheme.
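
    The dSFS idea, greedy forward selection followed by a deflation pass that retries swapping selected sensors for unselected ones, can be sketched as follows; the classifier, scoring function and synthetic data are placeholders for the identification accuracy used in the paper.

```python
# Sketch of deflatable sequential forward selection: greedily add the
# sensor that most improves a cross-validated score, then try swapping
# each selected sensor for an unselected one to escape local optima.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                    # 12 candidate "sensors"
y = (X[:, 2] - X[:, 7] + 0.5 * X[:, 9]
     + rng.normal(scale=0.5, size=300)) > 0       # events driven by 3 sensors

def score(cols):
    if not cols:
        return 0.0
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, sorted(cols)], y, cv=5).mean()

selected = set()
for _ in range(4):                                # forward selection stage
    best = max(set(range(12)) - selected, key=lambda j: score(selected | {j}))
    selected |= {best}
    for s in list(selected):                      # deflation stage: try swaps
        if s not in selected:
            continue                              # already swapped out
        for j in set(range(12)) - selected:
            cand = (selected - {s}) | {j}
            if score(cand) > score(selected):
                selected = cand
                break

print("selected sensors:", sorted(selected), "score:", round(score(selected), 3))
```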

  17. An Efficient System Based On Closed Sequential Patterns for Web Recommendations

    OpenAIRE

    Utpala Niranjan; R.B.V. Subramanyam; V-Khana

    2010-01-01

    Sequential pattern mining, since its introduction, has received considerable attention among researchers, with broad applications. Sequential pattern algorithms generally face problems when mining long sequential patterns or when using a very low support threshold. One possible solution to such problems is mining closed sequential patterns, which are a condensed representation of sequential patterns. Recently, several researchers have utilized the sequential pattern discovery for d...

  18. A dynamical model of hierarchical selection and coordination in speech planning.

    Directory of Open Access Journals (Sweden)

    Sam Tilsen

    Full Text Available Studies of the control of complex sequential movements have dissociated two aspects of movement planning: control over the sequential selection of movement plans, and control over the precise timing of movement execution. This distinction is particularly relevant in the production of speech: utterances contain sequentially ordered words and syllables, but articulatory movements are often executed in a non-sequential, overlapping manner with precisely coordinated relative timing. This study presents a hybrid dynamical model in which competitive activation controls selection of movement plans and coupled oscillatory systems govern coordination. The model departs from previous approaches by ascribing an important role to competitive selection of articulatory plans within a syllable. Numerical simulations show that the model reproduces a variety of speech production phenomena, such as effects of preparation and utterance composition on reaction time, and asymmetries in patterns of articulatory timing associated with onsets and codas. The model furthermore provides a unified understanding of a diverse group of phonetic and phonological phenomena which have not previously been related.

  19. Optimization of Multiple Related Negotiation through Multi-Negotiation Network

    Science.gov (United States)

    Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi

    In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not optimally execute MRN in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use a MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility, considering all related negotiations, are dynamically calculated based on a MNN. Secondly, by employing a MNID, an agent's possible decision on each related negotiation is reflected by the value of expected utility. Lastly, by comparing the expected utilities of all possible policies for conducting MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful end scenario, and avoid unnecessary losses in an unsuccessful end scenario.
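
    The policy-comparison step can be illustrated with a small sketch. The negotiation names, success probabilities, and utilities below are invented for the example (they are not from the paper); the code simply enumerates every per-negotiation choice combination and scores it by joint success rate times joint utility.

```python
from itertools import product

# Illustrative only: three hypothetical related negotiations, each conducted
# with one of two strategies; values are (success probability, utility).
options = {
    "supplier":  {"cautious": (0.9, 10.0), "aggressive": (0.6, 18.0)},
    "logistics": {"cautious": (0.8,  6.0), "aggressive": (0.5, 12.0)},
    "retailer":  {"cautious": (0.7,  8.0), "aggressive": (0.4, 15.0)},
}

def expected_utility(policy):
    """Joint success rate times joint utility: the concurrent (MNN-style)
    view that all related negotiations must succeed together."""
    p_joint, u_joint = 1.0, 0.0
    for negotiation, choice in policy.items():
        p, u = options[negotiation][choice]
        p_joint *= p
        u_joint += u
    return p_joint * u_joint

# Enumerate every policy (one choice per negotiation) and keep the best.
policies = [dict(zip(options, combo))
            for combo in product(*(options[n] for n in options))]
best = max(policies, key=expected_utility)
print(best, round(expected_utility(best), 2))
```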

  20. Sequential use of simulation and optimization in analysis and planning

    Science.gov (United States)

    Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones

    2000-01-01

    Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...

  1. On the non-stationarity of financial time series: impact on optimal portfolio selection

    International Nuclear Information System (INIS)

    Livan, Giacomo; Inoue, Jun-ichi; Scalas, Enrico

    2012-01-01

    We investigate the possible drawbacks of employing the standard Pearson estimator to measure correlation coefficients between financial stocks in the presence of non-stationary behavior, and we provide empirical evidence against the well-established common knowledge that using longer price time series provides better, more accurate, correlation estimates. Then, we investigate the possible consequences of instabilities in empirical correlation coefficient measurements on optimal portfolio selection. We rely on previously published works which provide a framework for taking into account possible risk underestimations due to the non-optimality of the portfolio weights being used, in order to distinguish such non-optimality effects from risk underestimations genuinely due to non-stationarities. We interpret these results in terms of instabilities in some spectral properties of portfolio correlation matrices. (paper)
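
    The core instability is easy to reproduce numerically. The sketch below uses synthetic return series (not the paper's data) whose true correlation switches halfway through the sample: the single long-window Pearson estimate averages the two regimes away, while short rolling windows expose the change.

```python
import numpy as np

rng = np.random.default_rng(1)
n, window = 1000, 100
z, e1, e2 = rng.normal(size=(3, n))
rho = np.where(np.arange(n) < n // 2, 0.8, -0.2)   # regime switch at midpoint
x = z + 0.3 * e1
y = rho * z + np.sqrt(1.0 - rho ** 2) * e2

full = np.corrcoef(x, y)[0, 1]
rolling = [np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
           for i in range(0, n - window + 1, window)]
print(f"full-sample Pearson estimate: {full:.2f}")
print("rolling estimates:", np.round(rolling, 2))
```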

  2. Optimizing selective cutting strategies for maximum carbon stocks and yield of Moso bamboo forest using BIOME-BGC model.

    Science.gov (United States)

    Mao, Fangjie; Zhou, Guomo; Li, Pingheng; Du, Huaqiang; Xu, Xiaojun; Shi, Yongjun; Mo, Lufeng; Zhou, Yufeng; Tu, Guoqing

    2017-04-15

    The selective cutting method currently used in Moso bamboo forests has resulted in a reduction of stand productivity and carbon sequestration capacity. Given the time and labor expense involved in addressing this problem manually, simulation using an ecosystem model is the most suitable approach. The BIOME-BGC model was adapted to managed Moso bamboo forests by incorporating the age structure, specific ecological processes and management measures of these forests. A field selective cutting experiment was conducted in nine plots with three cutting intensities (high, moderate and low) during 2010-2013, and the biomass of these plots was measured for model validation. Four selective cutting scenarios were then simulated with the improved BIOME-BGC model to optimize the selective cutting timing, interval, retained ages and intensity. The improved model matched the observed aboveground carbon density and yield of the different plots, with relative errors ranging from 9.83% to 15.74%. The results for the different selective cutting scenarios suggested that the optimal measure is to cut 30% of culms of age 6, 80% of culms of age 7, and all culms above age 8, in winter, every other year. The vegetation carbon density and harvested carbon density of this selective cutting method can increase by 74.63% and 21.5%, respectively, compared with the current selective cutting measure. The optimized selective cutting measure developed in this study can significantly promote carbon density, yield, and carbon sink capacity in Moso bamboo forests. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Systolic array processing of the sequential decoding algorithm

    Science.gov (United States)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
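
    The control flow of the stack algorithm is compact enough to sketch in software. In the sketch below, Python's binary heap plays the role that the systolic priority queue plays in hardware; the tree-extension function, path representation, and metric are assumed inputs supplied by the caller (e.g., a Fano metric for a convolutional code), not part of the paper.

```python
import heapq

def stack_decode(extend, root, is_goal, max_steps=100_000):
    """Stack (sequential) decoding sketch.

    Partial paths are kept ordered by path metric and the best one is always
    extended next; the ordering a systolic priority queue realizes in
    hardware is emulated here by a binary heap. `extend(path)` yields
    (child_metric, child_path) pairs (larger metric = better); paths should
    be tuples so that ties compare cleanly.
    """
    heap = [(0.0, root)]                     # metrics stored negated (max-heap)
    for _ in range(max_steps):
        neg_m, path = heapq.heappop(heap)    # best partial path so far
        if is_goal(path):
            return path, -neg_m              # decoded path and its metric
        for m, child in extend(path):
            heapq.heappush(heap, (-m, child))
    raise RuntimeError("computation limit reached (a known hazard of sequential decoding)")
```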

  4. Optimal load suitability based RAT selection for HSDPA and IEEE 802.11e

    DEFF Research Database (Denmark)

    Prasad, Ramjee; Cabral, O.; Felez, F.J.

    2009-01-01

    ... are at a premium. This paper investigates a cooperation-between-networks Radio Access Technology (RAT) selection algorithm that uses suitability to optimize the choice between WiFi and High Speed Downlink Packet Access (HSDPA). It has been shown that this approach has the potential to provide gain...... by allocating a user terminal to the most preferred network based on traffic type and network load. Optimal load threshold values that maximize the total QoS throughput for the given interworking scenario are 0.6 and 0.53 for HSDPA and WiFi, respectively. This corresponds to a CRRM gain on throughput of 80...

  5. A new optimization algorithm with application to nonlinear MPC

    Directory of Open Access Journals (Sweden)

    Frode Martinsen

    2005-01-01

    Full Text Available This paper investigates the application of an SQP optimization algorithm to nonlinear model predictive control. It considers feasible vs. infeasible path methods, sequential vs. simultaneous methods and reduced vs. full space methods. A new optimization algorithm coined rFOPT, which remains feasible with respect to inequality constraints, is introduced. The suitable choices between these various strategies are assessed informally through a small CSTR case study. The case study also considers the effect various discretization methods have on the optimization problem.

  6. Optimal Electrode Selection for Electrical Resistance Tomography in Carbon Fiber Reinforced Polymer Composites

    Science.gov (United States)

    Escalona Galvis, Luis Waldo; Diaz-Montiel, Paulina; Venkataraman, Satchi

    2017-01-01

    Electrical Resistance Tomography (ERT) offers a non-destructive evaluation (NDE) technique that takes advantage of the inherent electrical properties in carbon fiber reinforced polymer (CFRP) composites for internal damage characterization. This paper investigates a method of optimum selection of sensing configurations for delamination detection in thick cross-ply laminates using ERT. Reduction in the number of sensing locations and measurements is necessary to minimize hardware and computational effort. The present work explores the use of an effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used for selecting the minimum set of resistance measurements among all possible combinations resulting from selecting sensing electrode pairs. Singular Value Decomposition (SVD) is applied to obtain a spectral representation of the resistance measurements in the laminate for subsequent EI based reduction to take place. The electrical potential field in a CFRP laminate is calculated using finite element analysis (FEA) applied on models for two different laminate layouts considering a set of specified delamination sizes and locations with two different sensing arrangements. The effectiveness of the EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using the full set and the reduced set of resistance measurements. This investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERT based damage detection. PMID:28772485
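
    The EI reduction step admits a short sketch. The code below implements the standard effective-independence deletion loop on a generic sensitivity matrix; in the paper that matrix would come from the SVD-based spectral representation of the simulated resistance measurements, whereas here synthetic numbers stand in.

```python
import numpy as np

def effective_independence(S, n_keep):
    """Effective independence (EI) reduction sketch: iteratively discard the
    candidate measurement that contributes least to the Fisher information.
    S is an (n_candidates x n_basis) sensitivity matrix.
    """
    idx = np.arange(S.shape[0])
    while len(idx) > n_keep:
        A = S[idx]
        # EI_i is the i-th diagonal entry of A (A^T A)^+ A^T: the fractional
        # contribution of candidate i to the information matrix.
        ei = np.diag(A @ np.linalg.pinv(A.T @ A) @ A.T)
        idx = np.delete(idx, np.argmin(ei))
    return idx

S = np.random.default_rng(2).normal(size=(30, 4))   # toy sensitivities
print(effective_independence(S, 8))                 # indices of retained pairs
```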

  7. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    Science.gov (United States)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach the investment goal, one has to select a combination of securities from different portfolios containing large numbers of securities. The past records of each security alone do not guarantee its future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectations and experience must be combined with the past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE are used for illustration.
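
    The reduction to a linear program is the computational heart of the model. Below is a minimal sketch of the plain semi-absolute-deviation core, without the fuzzy-random return layer or the λ vector, and with synthetic returns standing in for BSE data; the return floor and scenario counts are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
T, n = 250, 5
R = rng.normal(0.001, 0.02, size=(T, n))    # scenario returns
mu = R.mean(axis=0)
r_min = 0.0008                               # required expected return (assumed)

# Decision vector x = [w_1..w_n, d_1..d_T]; minimize (1/T) * sum d_t,
# where d_t >= (mu - R_t) . w and d_t >= 0 (downside semi-deviations).
c = np.concatenate([np.zeros(n), np.ones(T) / T])
A_ub = np.hstack([mu - R, -np.eye(T)])       # (mu - R_t) . w - d_t <= 0
b_ub = np.zeros(T)
A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])  # -mu.w <= -r_min
b_ub = np.append(b_ub, -r_min)
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]     # sum w = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * n + [(0, None)] * T)
print("weights:", np.round(res.x[:n], 3), " semi-abs. deviation:", res.fun)
```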

  8. A fast inverse treatment planning strategy facilitating optimized catheter selection in image-guided high-dose-rate interstitial gynecologic brachytherapy.

    Science.gov (United States)

    Guthier, Christian V; Damato, Antonio L; Hesser, Juergen W; Viswanathan, Akila N; Cormack, Robert A

    2017-12-01

    Interstitial high-dose rate (HDR) brachytherapy is an important therapeutic strategy for the treatment of locally advanced gynecologic (GYN) cancers. The outcome of this therapy is determined by the quality of dose distribution achieved. This paper focuses on a novel yet simple heuristic for catheter selection for GYN HDR brachytherapy and their comparison against state of the art optimization strategies. The proposed technique is intended to act as a decision-supporting tool to select a favorable needle configuration. The presented heuristic for catheter optimization is based on a shrinkage-type algorithm (SACO). It is compared against state of the art planning in a retrospective study of 20 patients who previously received image-guided interstitial HDR brachytherapy using a Syed Neblett template. From those plans, template orientation and position are estimated via a rigid registration of the template with the actual catheter trajectories. All potential straight trajectories intersecting the contoured clinical target volume (CTV) are considered for catheter optimization. Retrospectively generated plans and clinical plans are compared with respect to dosimetric performance and optimization time. All plans were generated with one single run of the optimizer lasting 0.6-97.4 s. Compared to manual optimization, SACO yields a statistically significant (P ≤ 0.05) improved target coverage while at the same time fulfilling all dosimetric constraints for organs at risk (OARs). Comparing inverse planning strategies, dosimetric evaluation for SACO and "hybrid inverse planning and optimization" (HIPO), as gold standard, shows no statistically significant difference (P > 0.05). However, SACO provides the potential to reduce the number of used catheters without compromising plan quality. The proposed heuristic for needle selection provides fast catheter selection with optimization times suited for intraoperative treatment planning. Compared to manual optimization, the

  9. Strain sensors optimal placement for vibration-based structural health monitoring. The effect of damage on the initially optimal configuration

    Science.gov (United States)

    Loutas, T. H.; Bourikas, A.

    2017-12-01

    We revisit the optimal sensor placement problem for engineering structures with an emphasis on in-plane dynamic strain measurements, on modal identification, and on vibration-based damage detection for structural health monitoring purposes. The approach utilized is based on maximizing a norm of the Fisher Information Matrix (FIM) built with numerically obtained mode shapes of the structure, while prohibiting the sensorization of neighboring degrees of freedom as well as those carrying similar information, in order to obtain satisfactory coverage. A new convergence criterion on the FIM norm is proposed in order to deal with the issue of choosing an appropriate sensor redundancy threshold, a concept recently introduced but whose choice had not been further investigated. The sensor configurations obtained via a forward sequential placement algorithm are sub-optimal in terms of FIM norm values, but because the selected sensors are not allowed to be placed in neighboring degrees of freedom, they provide better coverage of the structure and a subsequently better identification of the experimental mode shapes. The issue of how service-induced damage affects the initially optimal sensor configuration is also investigated and reported. The numerical model of a composite sandwich panel serves as a representative aerospace structure upon which our investigations are based.
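
    A forward sequential placement loop with a neighbor-exclusion rule can be sketched directly. The log-determinant is used here as the FIM norm, and the mode-shape matrix, neighbor sets, and regularization are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def forward_fim_placement(Phi, n_sensors, neighbors):
    """Greedily add the DOF that maximizes log det of the Fisher information
    Q = Phi_s^T Phi_s built from the selected mode-shape rows, while banning
    neighbors of already-selected DOFs. `neighbors[i]` is the set of DOFs
    considered adjacent to DOF i (coverage constraint).
    """
    selected, banned = [], set()
    for _ in range(n_sensors):
        best, best_val = None, -np.inf
        for i in range(Phi.shape[0]):
            if i in selected or i in banned:
                continue
            A = Phi[selected + [i]]
            # Small ridge keeps the log-det defined before full rank is reached.
            val = np.linalg.slogdet(A.T @ A + 1e-12 * np.eye(Phi.shape[1]))[1]
            if val > best_val:
                best, best_val = i, val
        if best is None:
            break                      # no admissible candidates left
        selected.append(best)
        banned |= neighbors[best]
    return selected

Phi = np.random.default_rng(3).normal(size=(50, 6))   # toy mode shapes
nbrs = {i: {max(i - 1, 0), min(i + 1, 49)} for i in range(50)}
print(forward_fim_placement(Phi, 6, nbrs))
```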

  10. Optimization of practical trusses with constraints on eigenfrequencies, displacements, stresses, and buckling

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard; Nielsen, A.

    2004-01-01

    In this paper we consider the optimization of general 3D truss structures. The design variables are the cross-sections of the truss bars together with the joint coordinates, and are considered to be continuous variables. Using these design variables we simultaneously carry out size optimization...... are imposed in correlation with industrial standards, to make the optimized designs valuable from a practical point of view. The optimization problem is solved using SLP (Sequential Linear Programming)....

  11. Analysis of multicriteria models application for selection of an optimal artificial lift method in oil production

    Directory of Open Access Journals (Sweden)

    Crnogorac Miroslav P.

    2016-01-01

    Full Text Available Different types of deep pumps (piston, centrifugal, screw, hydraulic and water jet pumps) and gas lift (continuous, intermittent and plunger) are applied worldwide for the exploitation of oil reservoirs by artificial lift. The maximum oil production achieved by these exploitation methods differs significantly. In order to select the optimal exploitation method for an oil well, multicriteria analysis models are used. This paper presents an analysis of the application of the multicriteria models known as VIKOR, TOPSIS, ELECTRE, AHP and PROMETHEE to the selection of the optimal exploitation method for a typical oil well in the Serbian exploration area. The ranking of the applicability of deep piston pumps, hydraulic pumps, screw pumps, the gas lift method and electric submersible centrifugal pumps indicated that in all of the above multicriteria models except PROMETHEE, the optimal methods of exploitation are deep piston pumps and gas lift.

  12. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    Science.gov (United States)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
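
    The dualization step can be illustrated on a toy finite MDP. The sketch below prices the failure probability into the objective as lam * P(failure) and tunes the multiplier by bisection, with standard finite-horizon DP solving each inner problem; the three-state instance, costs, and risk bound are invented for the example and the code assumes a feasible policy exists.

```python
import numpy as np

def chance_constrained_dp(P, cost, fail, horizon, s0, risk_bound):
    """P: (n_actions, n_states, n_states) transitions; cost: (n_actions,
    n_states) stage costs; fail: boolean vector of absorbing failure states.
    Returns (expected cost, failure probability) of the dualized policy."""
    n_s = P.shape[1]

    def solve(lam):
        V = lam * fail.astype(float)          # terminal: pay lam on failure
        R = fail.astype(float)                # failure probability-to-go
        for _ in range(horizon):
            Q = cost + P @ V                  # (n_actions, n_states)
            a = np.argmin(Q, axis=0)          # greedy w.r.t. penalized cost
            V = Q[a, np.arange(n_s)]
            R = (P @ R)[a, np.arange(n_s)]
            V[fail], R[fail] = lam, 1.0       # failure states are absorbing
        return V[s0] - lam * R[s0], R[s0]     # strip the lam term back out

    lo, hi = 0.0, 1e3
    for _ in range(60):                       # bisection on the multiplier
        lam = 0.5 * (lo + hi)
        _, risk = solve(lam)
        lo, hi = (lo, lam) if risk <= risk_bound else (lam, hi)
    return solve(hi)                          # hi is guaranteed feasible

# Toy instance: state 2 is failure; action 1 is cheaper but riskier.
P = np.array([[[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0]],
              [[0.7, 0.2, 0.1], [0.6, 0.2, 0.2], [0.0, 0.0, 1.0]]])
cost = np.array([[2.0, 2.0, 0.0], [1.0, 1.0, 0.0]])
fail = np.array([False, False, True])
print(chance_constrained_dp(P, cost, fail, horizon=10, s0=0, risk_bound=0.05))
```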

  13. Constructing Multiply Substituted Arenes Using Sequential Pd(II)-Catalyzed C–H Olefination**

    Science.gov (United States)

    Engle, Keary M.; Wang, Dong-Hui; Yu, Jin-Quan

    2011-01-01

    Complementary catalytic systems have been developed in which the reactivity/selectivity balance in Pd(II)-catalyzed ortho-C–H olefination can be modulated through ligand control. This allows for sequential C–H functionalization for the rapid preparation of 1,2,3-trisubstituted arenes. Additionally, a rare example of iterative C–H activation, in which a newly installed functional group directs subsequent C–H activation has been demonstrated. PMID:20632344

  14. A hybrid agent-based computational economics and optimization approach for supplier selection problem

    Directory of Open Access Journals (Sweden)

    Zahra Pourabdollahi

    2017-12-01

    Full Text Available Supplier evaluation and selection is among the most important logistics decisions and has been addressed extensively in supply chain management. The same decision is also important in freight transportation, since it identifies trade relationships between business establishments and determines commodity flows between production and consumption points. The commodity flows are then used as input to freight transportation models to determine cargo movements and their characteristics, including mode choice and shipment size. Various approaches have been proposed to explore this latter problem in previous studies. Traditionally, potential suppliers are evaluated and selected using only price/cost as the influential criterion, together with state-of-practice methods. This paper introduces a hybrid agent-based computational economics and optimization approach for supplier selection. The proposed model combines an agent-based multi-criteria supplier evaluation approach with a multi-objective optimization model to capture both the behavioral and economic aspects of the supplier selection process. The model uses a system of ordered response models to determine the importance weights of the different criteria in supplier evaluation from a buyer's point of view. The estimated weights are then used to calculate a utility for each potential supplier in the market and rank them. The calculated utilities are then entered into a mathematical programming model in which the best suppliers are selected by maximizing the total accrued utility for all buyers and minimizing total shipping costs, while balancing the capacity of potential suppliers to ensure market clearing mechanisms. The proposed model was implemented under an operational agent-based supply chain and freight transportation framework for the Chicago Metropolitan Area.

  15. A New Methodology to Select the Preferred Solutions from the Pareto-optimal Set: Application to Polymer Extrusion

    International Nuclear Information System (INIS)

    Ferreira, Jose C.; Gaspar-Cunha, Antonio; Fonseca, Carlos M.

    2007-01-01

    Most real-world optimization problems involve multiple, usually conflicting, optimization criteria. Generating Pareto optimal solutions plays an important role in multi-objective optimization, and the problem is considered solved when the Pareto optimal set, i.e., the set of non-dominated solutions, is found. Multi-Objective Evolutionary Algorithms based on the principle of Pareto optimality are designed to produce the complete set of non-dominated solutions. However, this is not always enough, since the aim is not only to know the Pareto set but also to obtain one solution from it. Thus, a methodology able to select a single solution from the set of non-dominated solutions (or a region of the Pareto frontier), taking into account the preferences of a Decision Maker (DM), is necessary. A different method, based on a weighted stress function, is proposed. It is able to integrate the user's preferences in order to find the best region of the Pareto frontier according to these preferences. This method was tested on some benchmark test problems, with two and three criteria, and on a polymer extrusion problem. The methodology is able to efficiently select the best Pareto-frontier region for the specified relative importance of the criteria.
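
    The paper's weighted stress function itself is not reproduced here; as a stand-in, the sketch below picks one solution from a non-dominated set with the common weighted Chebyshev scalarization, which serves the same purpose of steering the choice toward the DM's preferred Pareto region. All numbers are illustrative.

```python
import numpy as np

def pick_from_pareto(F, weights):
    """Select one solution from non-dominated objective vectors F
    (n_solutions x n_criteria, all minimized), given relative importance
    weights: minimize the weighted Chebyshev distance to the ideal point
    after normalizing each criterion to [0, 1].
    """
    w = np.asarray(weights, float)
    w = w / w.sum()
    ideal, nadir = F.min(axis=0), F.max(axis=0)
    norm = (F - ideal) / np.where(nadir > ideal, nadir - ideal, 1.0)
    return np.argmin((w * norm).max(axis=1))

F = np.array([[1.0, 9.0], [3.0, 4.0], [5.0, 2.5], [9.0, 1.0]])
print(pick_from_pareto(F, [0.7, 0.3]))   # -> 1, the [3.0, 4.0] compromise
```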

  16. Optimal Linear Responses for Markov Chains and Stochastically Perturbed Dynamical Systems

    Science.gov (United States)

    Antown, Fadi; Dragičević, Davor; Froyland, Gary

    2018-03-01

    The linear response of a dynamical system refers to changes to properties of the system when small external perturbations are applied. We consider the little-studied question of selecting an optimal perturbation so as to (i) maximise the linear response of the equilibrium distribution of the system, (ii) maximise the linear response of the expectation of a specified observable, and (iii) maximise the linear response of the rate of convergence of the system to the equilibrium distribution. We also consider the inhomogeneous, sequential, or time-dependent situation where the governing dynamics is not stationary and one wishes to select a sequence of small perturbations so as to maximise the overall linear response at some terminal time. We develop the theory for finite-state Markov chains, provide explicit solutions for some illustrative examples, and numerically apply our theory to stochastically perturbed dynamical systems, where the Markov chain is replaced by a matrix representation of an approximate annealed transfer operator for the random dynamical system.
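
    For finite-state chains, the first-order response of the stationary distribution has a closed form via the fundamental matrix, which the sketch below evaluates and checks by finite differences. The 3-state chain and the perturbation are toy inputs (rows of E sum to zero so P + eps*E stays stochastic); the row-vector convention pi P = pi is used.

```python
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
E = np.array([[-0.05, 0.05, 0.00],
              [ 0.00, 0.00, 0.00],
              [ 0.00, 0.05, -0.05]])

# Stationary distribution: left Perron eigenvector of P.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Fundamental matrix Z = (I - P + 1 pi)^(-1); linear response d(pi) = pi E Z.
Z = np.linalg.inv(np.eye(3) - P + np.outer(np.ones(3), pi))
dpi = pi @ E @ Z
print("pi       :", np.round(pi, 4))
print("d(pi)/de :", np.round(dpi, 4))

# Finite-difference check of the analytic response.
eps = 1e-6
vals2, vecs2 = np.linalg.eig((P + eps * E).T)
pi2 = np.real(vecs2[:, np.argmax(np.real(vals2))])
pi2 = pi2 / pi2.sum()
print("FD check :", np.round((pi2 - pi) / eps, 4))
```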

  17. Sequential versus simultaneous market delineation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

    2005-01-01

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential. First the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension...... and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: Relevant market, econometric delineation

  18. The effect of lineup member similarity on recognition accuracy in simultaneous and sequential lineups.

    Science.gov (United States)

    Flowe, Heather D; Ebbesen, Ebbe B

    2007-02-01

    Two experiments investigated whether remembering is affected by the similarity of the study face relative to the alternatives in a lineup. In simultaneous and sequential lineups, choice rates and false alarms were larger in low compared to high similarity lineups, indicating criterion placement was affected by lineup similarity structure (Experiment 1). In Experiment 2, foil choices and similarity ranking data for target present lineups were compared to responses made when the target was removed from the lineup (only the 5 foils were presented). The results indicated that although foils were selected more often in target-removed lineups in the simultaneous compared to the sequential condition, responses shifted from the target to one of the foils at equal rates across lineup procedures.

  19. Glycerol production by Oenococcus oeni during sequential and simultaneous cultures with wine yeast strains.

    Science.gov (United States)

    Ale, Cesar E; Farías, Marta E; Strasser de Saad, Ana M; Pasteris, Sergio E

    2014-07-01

    Growth and fermentation patterns of Saccharomyces cerevisiae, Kloeckera apiculata, and Oenococcus oeni strains cultured in grape juice medium were studied. In pure, sequential and simultaneous cultures, the strains reached the stationary growth phase between 2 and 3 days. Pure and mixed K. apiculata and S. cerevisiae cultures used mainly glucose, producing ethanol, organic acids, and 4.0 and 0.1 mM glycerol, respectively. In sequential cultures, O. oeni achieved about 1 log unit at 3 days, using mainly fructose and L-malic acid. The highest sugar consumption was detected in K. apiculata supernatants, lactic acid being the major end-product. 8.0 mM glycerol was found in 6-day culture supernatants. In simultaneous cultures, total sugars and L-malic acid were used at 3 days and 98% of the ethanol and glycerol were detected. This study represents the first report of the population dynamics and metabolic behavior of yeasts and O. oeni in sequential and simultaneous cultures and contributes to the selection of indigenous strains to design starter cultures for winemaking, also considering the inclusion of K. apiculata. The sequential inoculation of yeasts and O. oeni would enhance glycerol production, which confers desirable organoleptic characteristics to wines, while organic acid levels would not affect their sensory profile. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number

  1. Production of alkyl esters from macaw palm oil by a sequential hydrolysis/esterification process using heterogeneous biocatalysts: optimization by response surface methodology.

    Science.gov (United States)

    Bressani, Ana Paula P; Garcia, Karen C A; Hirata, Daniela B; Mendes, Adriano A

    2015-02-01

    The present study deals with the enzymatic synthesis of alkyl esters with emollient properties by a sequential hydrolysis/esterification process (hydroesterification) using unrefined macaw palm oil from pulp seeds (MPPO) as feedstock. Crude enzymatic extract from dormant castor bean seeds was used as biocatalyst in the production of free fatty acids (FFA) by hydrolysis of MPPO. Esterification of purified FFA with several alcohols in heptane medium was catalyzed by immobilized Thermomyces lanuginosus lipase (TLL) on poly-hydroxybutyrate (PHB) particles. Under optimal experimental conditions (mass ratio oil:buffer of 35% m/m, reaction temperature of 35 °C, biocatalyst concentration of 6% m/m, and stirring speed of 1,000 rpm), complete hydrolysis of MPPO was reached after 110 min of reaction. Maximum ester conversion percentage of 92.4 ± 0.4% was reached using hexanol as acyl acceptor at 750 mM of each reactant after 15 min of reaction. The biocatalyst retained full activity after eight successive cycles of esterification reaction. These results show that the proposed process is a promising strategy for the synthesis of alkyl esters of industrial interest from macaw palm oil, an attractive option for the Brazilian oleochemical industry.

  2. Online Sequential Projection Vector Machine with Adaptive Data Mean Update.

    Science.gov (United States)

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm especially suited to high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM); it derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the algorithm easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.
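
    The OSPVM's projection and mean updates are paper-specific, but the recursive least-squares machinery it shares with its OSELM baseline is standard. Below is a minimal, illustrative OSELM (one of the baselines above), assuming a tanh random hidden layer and chunk-wise RLS updates of the output weights.

```python
import numpy as np

class OnlineSequentialELM:
    """Random-feature network whose output weights are updated chunk by
    chunk with recursive least squares (the classic OSELM update)."""

    def __init__(self, n_in, n_hidden, rng=np.random.default_rng(0)):
        self.W = rng.normal(size=(n_in, n_hidden))   # fixed random weights
        self.b = rng.normal(size=n_hidden)
        self.beta = None

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)          # random hidden layer

    def fit_initial(self, X, y):
        H = self._h(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y

    def partial_fit(self, X, y):                     # RLS chunk update
        H = self._h(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta

# Chunk-by-chunk usage on a toy regression stream:
rng = np.random.default_rng(3)
X = rng.normal(size=(400, 5)); y = np.sin(X[:, 0]) + 0.1 * X[:, 1]
net = OnlineSequentialELM(5, 50)
net.fit_initial(X[:100], y[:100])
for k in range(100, 400, 50):
    net.partial_fit(X[k:k + 50], y[k:k + 50])
print(np.mean((net.predict(X) - y) ** 2))            # in-sample MSE
```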

  3. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    Directory of Open Access Journals (Sweden)

    Lin Chen

    2016-01-01

    Full Text Available We propose a simple online learning algorithm especially suited to high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the algorithm easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.

  4. An ant colony optimization based feature selection for web page classification.

    Science.gov (United States)

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused the inclusion of huge amounts of information on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text contents, that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, so as to improve the runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both the accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi-square feature selection methods.

  5. Evaluation of the mobility and pollution index of selected essential/toxic metals in paddy soil by sequential extraction method.

    Science.gov (United States)

    Hasan, Maria; Kausar, Dilshad; Akhter, Gulraiz; Shah, Munir H

    2018-01-01

    The comparative distribution and mobility of selected essential and toxic metals in paddy soil from district Sargodha, Pakistan were evaluated by the modified Community Bureau of Reference (mBCR) sequential extraction procedure. Most of the soil samples showed a slightly alkaline nature, while the soil texture was predominantly silty loam. The metal contents were quantified in the exchangeable, reducible, oxidisable and residual fractions of the soil by flame atomic absorption spectrophotometry, and the metal data were subjected to statistical analyses in order to evaluate the mutual relationships among the metals in each fraction. Among the metals, Ca, Sr and Mn were found to be more mobile in the soil. A number of significant correlations between different metal pairs were noted in various fractions. The contamination factor, geoaccumulation index and enrichment factor revealed extremely severe enrichment/contamination for Cd; moderate to significant enrichment/contamination for Ni, Zn, Co and Pb; while Cr, Sr, Cu and Mn revealed minimal to moderate contamination and accumulation in the soil. Multivariate cluster analysis showed significant anthropogenic intrusions of the metals in various fractions. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. The influence of the selection of macronutrients coupled with dietary energy density on the performance of broiler chickens.

    Science.gov (United States)

    Liu, Sonia Y; Chrystal, Peter V; Cowieson, Aaron J; Truong, Ha H; Moss, Amy F; Selle, Peter H

    2017-01-01

    A total of 360 male Ross 308 broiler chickens were used in a feeding study to assess the influence of macronutrients and energy density on feed intakes from 10 to 31 days post-hatch. The study comprised ten dietary treatments from five dietary combinations and two feeding approaches: sequential and choice feeding. The study included eight experimental diets, and each dietary combination was made from three experimental diets. Choice fed birds selected between three diets offered in separate feed trays at the same time, whereas the three diets were offered to sequentially fed birds on an alternating basis during the experimental period. There were no differences in starch and protein intakes between choice and sequentially fed birds (P > 0.05) when broiler chickens selected between diets with different starch, protein and lipid concentrations. When broiler chickens selected between diets with different starch and protein but similar lipid concentrations, both sequentially and choice fed birds selected similar ratios of starch and protein intake (P > 0.05). However, when broiler chickens selected from diets with different protein and lipid but similar starch concentrations, choice fed birds had higher lipid intake (129 versus 118 g/bird, P = 0.027) and selected diets with lower protein concentrations (258 versus 281 g/kg, P = 0.042) than birds offered sequential diet options. Choice fed birds had greater intakes of the high energy diet (1471 g/bird, P < ...). The intake paths of macronutrients from 10 to 31 days in the choice and sequential feeding groups were plotted and compared with the null path that would result if broiler chickens selected equal amounts of the three diets in the combination. Regardless of feeding regimen, the intake paths of starch and protein are very close to the null path; however, the lipid and protein intake paths in choice fed birds are farther from the null path than those of sequentially fed birds.

  7. Analysis Methodology for Optimal Selection of Ground Station Site in Space Missions

    Science.gov (United States)

    Nieves-Chinchilla, J.; Farjas, M.; Martínez, R.

    2013-12-01

    Optimization of ground station sites is especially important in complex missions that include several small satellites (clusters or constellations), such as the QB50 project, where one ground station would be able to track several space vehicles, even simultaneously. In this regard, the design of the communication system has to carefully take into account the ground station site and the relevant signal phenomena, depending on the frequency band. To propose the optimal location of the ground station, these aspects become even more relevant to establishing a trusted communication link, due to ground segment sites in urban areas and/or the selection of low orbits for the space segment. In addition, updated cartography with high resolution data of the location and its surroundings helps to develop recommendations in the design of the location for space vehicle tracking and hence to improve effectiveness. The objectives of this analysis methodology are: completion of the cartographic information, modelling of the obstacles that hinder communication between the ground and space segments, and representation in the generated 3D scene of the degree of signal/noise impairment caused by the phenomena that interfere with communication. The integration of new geographic data capture technologies, such as 3D laser scanning, enables increased optimization of the antenna elevation mask, at its AOS and LOS azimuths along the visible horizon, maximizing visibility time with the space vehicles. Furthermore, from the captured three-dimensional cloud of points, specific information is selected and, using 3D modeling techniques, the 3D scene of the antenna location site and its surroundings is generated. The resulting 3D model reveals nearby obstacles related to the cartographic conditions, such as mountain formations and buildings, and any additional obstacles that interfere with the operational quality of the antenna (other antennas and electronic devices that emit or receive in the same bandwidth

  8. Macroscopic Dynamic Modeling of Sequential Batch Cultures of Hybridoma Cells: An Experimental Validation

    Directory of Open Access Journals (Sweden)

    Laurent Dewasme

    2017-02-01

    Full Text Available Hybridoma cells are commonly grown for the production of monoclonal antibodies (MAb). For monitoring and control purposes of the bioreactors, dynamic models of the cultures are required. However, these models are difficult to infer from the usually limited amount of available experimental data and do not focus on target protein production optimization. This paper explores an experimental case study where hybridoma cells are grown in a sequential batch reactor. The simplest macroscopic reaction scheme translating the data is first derived using a maximum likelihood principal component analysis. Subsequently, nonlinear least-squares estimation is used to determine the kinetic laws. The resulting dynamic model reproduces the experimental data quite satisfactorily, as evidenced in direct and cross-validation tests. Furthermore, model predictions can also be used to predict the optimal medium renewal time and composition.
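
    The second identification step (nonlinear least squares on the kinetic laws) follows a standard pattern that can be sketched with scipy. A Monod-type uptake law and synthetic "measurements" stand in for the paper's identified kinetics and real sequential-batch data; all parameter values are invented for the illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def simulate(t, mu_max, Ks, yield_x):
    """Integrate a toy batch model: biomass x grows on substrate s with a
    Monod-type specific growth rate; returns stacked [biomass; substrate]."""
    def rhs(_, z):
        x, s = z
        mu = mu_max * s / (Ks + s)
        return [mu * x, -mu * x / yield_x]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.1, 5.0], t_eval=t)
    return sol.y.ravel()

t = np.linspace(0, 48, 13)
true = (0.08, 0.5, 0.6)                       # "unknown" kinetic parameters
rng = np.random.default_rng(6)
data = simulate(t, *true) + 0.02 * rng.normal(size=2 * len(t))

popt, _ = curve_fit(lambda tt, *p: simulate(tt, *p), t, data,
                    p0=(0.05, 1.0, 0.5), bounds=(1e-3, [1.0, 5.0, 1.0]))
print(np.round(popt, 3))                      # estimated (mu_max, Ks, yield)
```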

  9. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    Science.gov (United States)

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
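
    The wrapper idea is straightforward to sketch: a small PSO searches log10(C) and log10(gamma) for an RBF SVM, scored by cross-validated accuracy. The paper's version additionally encodes a feature-selection mask in each particle and uses the TDR-generated fault dataset; synthetic data and the swarm hyperparameters below are assumptions for the illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           random_state=0)
rng = np.random.default_rng(0)

def fitness(p):
    C, gamma = 10.0 ** p
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_particles, n_iter = 10, 15
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # log10 search box
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()]

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()]

print("best log10(C), log10(gamma):", gbest, " CV accuracy:", pbest_f.max())
```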

  10. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Cho

    2017-01-01

    Full Text Available Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.

  11. Multi-objective optimization of cellular scanning strategy in selective laser melting

    DEFF Research Database (Denmark)

    Ahrari, Ali; Deb, Kalyanmoy; Mohanty, Sankhya

    2017-01-01

    The scanning strategy for selective laser melting - an additive manufacturing process - determines the temperature fields during the manufacturing process, which in turn affects residual stresses and distortions, two of the main sources of process-induced defects. The goal of this study is to dev......, the problem is a combination of combinatorial and choice optimization, which makes the problem difficult to solve. On a process simulation domain consisting of 32 cells, our multi-objective evolutionary method is able to find a set of trade-off solutions for the defined conflicting objectives, which cannot

  12. AN APPLICATION OF FUZZY PROMETHEE METHOD FOR SELECTING OPTIMAL CAR PROBLEM

    Directory of Open Access Journals (Sweden)

    SERKAN BALLI

    2013-06-01

    Full Text Available Most economic, industrial, financial or political decision problems are multi-criteria. In these multi-criteria problems, the optimal selection of alternatives is a hard and complex process. Recently, several methods have been developed to solve such problems. Promethee is one of the most efficient and easiest methods, and it solves problems involving quantitative criteria. However, in daily life there are criteria which are expressed linguistically and cannot be modelled numerically. Hence, the Promethee method is incomplete for such imprecise, linguistic criteria. To remedy this deficiency, a fuzzy set approximation can be used. The Promethee method, extended with fuzzy inputs, is applied to the selection among seven different cars of the same class, using the criteria price, fuel, performance and security. The obtained results are appropriate and consistent.

  13. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic, research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements......

  14. Sequential logic analysis and synthesis

    CERN Document Server

    Cavanagh, Joseph

    2007-01-01

    Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean algebra

  15. Research on Optimized Torque-Distribution Control Method for Front/Rear Axle Electric Wheel Loader

    Directory of Open Access Journals (Sweden)

    Zhiyu Yang

    2017-01-01

    Full Text Available The optimized torque-distribution control method (OTCM) is a critical technology for the front/rear axle electric wheel loader (FREWL) to improve operating performance and energy efficiency. In this paper, a longitudinal dynamics model of the FREWL is created. Based on the model, the objective functions are that the weighted sum of the variance and mean of the tire workload is minimal and the total motor efficiency is maximal. Four nonlinear constrained optimization algorithms (the quasi-Newton Lagrangian multiplier method, sequential quadratic programming, adaptive genetic algorithms, and particle swarm optimization with random weighting and natural selection), which have fast convergence rates and short computation times, are used to solve the objective functions. The simulation results show that, compared to the no-control FREWL, the controlled FREWL utilizes the adhesion ability better and slips less. It is evident that the controlled FREWL achieves better operating performance and higher energy efficiency. The energy efficiency of the FREWL in the equipment-transfer condition is increased by 13-29%. In addition, this paper discusses the applicability of the OTCM and analyzes the reasons for the differing simulation results of the four algorithms.

  16. Optimal portfolio selection between different kinds of Renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Zakerinia, MohammadSaleh; Piltan, Mehdi; Ghaderi, Farid

    2010-09-15

    In this paper, the selection of the optimal energy supply system for an industrial unit is taken into consideration. The study takes environmental, economic and social parameters into account in the modeling, along with technical factors. Several alternatives, including renewable energy sources, micro-CHP systems and a conventional system, have been compared by means of an integrated model of linear programming and three multi-criteria approaches (AHP, TOPSIS and ELECTRE III). New parameters, such as the availability of sources and fuel price volatility, are considered alongside traditional factors in different scenarios. Results show that, with environmental preferences, renewable sources and micro-CHP are good alternatives to conventional systems.

  17. A Selection Approach for Optimized Problem-Solving Process by Grey Relational Utility Model and Multicriteria Decision Analysis

    Directory of Open Access Journals (Sweden)

    Chih-Kun Ke

    2012-01-01

    Full Text Available In business enterprises, especially in the manufacturing industry, various problem situations may occur during the production process. A situation denotes an evaluation point used to determine the status of a production process. A problem may occur if there is a discrepancy between the actual situation and the desired one. Thus, a problem-solving process is often initiated to achieve the desired situation. In this process, determining which action should be taken to resolve the situation becomes an important issue. Therefore, this work uses a selection approach for an optimized problem-solving process to assist workers in taking a reasonable action. A grey relational utility model and a multicriteria decision analysis are used to determine the optimal selection order of candidate actions. The selection order is presented to the worker as an adaptive recommended solution. The worker chooses a reasonable problem-solving action based on the selection order. This work uses a high-tech company's knowledge base log as the analysis data. Experimental results demonstrate that the proposed selection approach is effective.

  18. SOCP relaxation bounds for the optimal subset selection problem applied to robust linear regression

    OpenAIRE

    Flores, Salvador

    2015-01-01

    This paper deals with the problem of finding the globally optimal subset of h elements from a larger set of n elements in d space dimensions so as to minimize a quadratic criterion, with a special emphasis on applications to computing the Least Trimmed Squares Estimator (LTSE) for robust regression. The computation of the LTSE is a challenging subset selection problem involving a nonlinear program with continuous and binary variables, linked in a highly nonlinear fashion. The selection of a ...

  19. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru...

  20. Multi-arm group sequential designs with a simultaneous stopping rule.

    Science.gov (United States)

    Urach, S; Posch, M

    2016-12-30

    Multi-arm group sequential clinical trials are efficient designs to compare multiple treatments to a control. They allow one to test for treatment effects already in interim analyses and can have a lower average sample number than fixed sample designs. Their operating characteristics depend on the stopping rule: We consider simultaneous stopping, where the whole trial is stopped as soon as for any of the arms the null hypothesis of no treatment effect can be rejected, and separate stopping, where only recruitment to arms for which a significant treatment effect could be demonstrated is stopped, but the other arms are continued. For both stopping rules, the family-wise error rate can be controlled by the closed testing procedure applied to group sequential tests of intersection and elementary hypotheses. The group sequential boundaries for the separate stopping rule also control the family-wise error rate if the simultaneous stopping rule is applied. However, we show that for the simultaneous stopping rule, one can apply improved, less conservative stopping boundaries for local tests of elementary hypotheses. We derive corresponding improved Pocock and O'Brien type boundaries as well as optimized boundaries to maximize the power or average sample number and investigate the operating characteristics and small sample properties of the resulting designs. To control the power to reject at least one null hypothesis, the simultaneous stopping rule requires a lower average sample number than the separate stopping rule. This comes at the cost of a lower power to reject all null hypotheses. Some of this loss in power can be regained by applying the improved stopping boundaries for the simultaneous stopping rule. The procedures are illustrated with clinical trials in systemic sclerosis and narcolepsy. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  1. Variationally optimal selection of slow coordinates and reaction coordinates in macromolecular systems

    Science.gov (United States)

    Noe, Frank

    To efficiently simulate, and to generate understanding from simulations of, complex macromolecular systems, the concept of slow collective coordinates or reaction coordinates is of fundamental importance. Here we introduce variational approaches to approximate the slow coordinates and the reaction coordinates between selected end-states, given MD simulations of the macromolecular system and a (possibly large) basis set of candidate coordinates. We then discuss how to select physically intuitive order parameters that are good surrogates of this variationally optimal result. These results can be used to construct Markov state models or other models of the stationary and kinetic properties, and to parametrize low-dimensional / coarse-grained models of the dynamics. Deutsche Forschungsgemeinschaft, European Research Council.
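
    The simplest instance of the variational approach is the time-lagged covariance generalized eigenproblem (the TICA-style estimator). The sketch below assumes the input features themselves serve as the candidate basis set, and uses a toy time series in place of MD data.

```python
import numpy as np
from scipy.linalg import eigh

def slow_coordinates(X, lag, n_coords=2):
    """Variational estimate of slow collective coordinates: solve
    C_tau v = lambda C_0 v for mean-free basis functions. Eigenvalues near 1
    correspond to the slowest coordinates.
    """
    X = X - X.mean(axis=0)
    C0 = X[:-lag].T @ X[:-lag] / (len(X) - lag)
    Ct = X[:-lag].T @ X[lag:] / (len(X) - lag)
    Ct = 0.5 * (Ct + Ct.T)                    # symmetrize (reversibility)
    vals, vecs = eigh(Ct, C0 + 1e-10 * np.eye(C0.shape[1]))
    order = np.argsort(vals)[::-1]
    return vals[order][:n_coords], vecs[:, order][:, :n_coords]

# Toy usage: one slowly mixing process hidden among fast noise dimensions.
rng = np.random.default_rng(5)
s = np.zeros(5000)
for t in range(1, 5000):
    s[t] = 0.99 * s[t - 1] + 0.1 * rng.normal()
X = np.column_stack([s + 0.1 * rng.normal(size=5000),
                     rng.normal(size=(5000, 3))])
vals, vecs = slow_coordinates(X, lag=10)
print(np.round(vals, 3))   # leading eigenvalue close to 0.99**10 ~ 0.90
```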

  2. Applying Sequential Particle Swarm Optimization Algorithm to Improve Power Generation Quality

    Directory of Open Access Journals (Sweden)

    Abdulhafid Sallama

    2014-10-01

    Full Text Available The swarm optimization approach is a heuristic search method whose mechanics are inspired by the swarming or collaborative behaviour of biological populations. It is used to solve constrained, unconstrained, continuous and discrete problems. Swarm intelligence systems are widely used and very effective in solving standard and large-scale optimization problems, provided that the problem does not require multiple solutions. In this paper, the particle swarm optimisation technique is used to optimise a fuzzy logic controller (FLC) for stabilising a power generation and distribution network that consists of four generators. The system is subject to different types of faults (single and multi-phase). Simulation studies show that the optimised FLC performs well in stabilising the network after it recovers from a fault. The controller is compared to multi-band and standard controllers.

  3. Making the Optimal Decision in Selecting Protective Clothing

    International Nuclear Information System (INIS)

    Price, J. Mark

    2008-01-01

    Protective clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for decommissioning or for maintenance and repair work on radioactive systems, a number of interrelated factors must be considered. This article discusses these factors, as well as surveys of plants regarding their level of usage of single use protective clothing (SUPC), and should help individuals make decisions about protective clothing as it applies to their application. Individuals considering using SUPC should not jump to conclusions. The survey conducted clearly indicates that plants have different drivers. An evaluation should be performed to understand the facility's true drivers for selecting clothing. It is recommended that an interdisciplinary team be formed, including representatives from budgets and cost, safety, radwaste, health physics, and key user groups, to perform the analysis. The right questions need to be asked and answered by the company providing the clothing to formulate a proper perspective and conclusion. The conclusions and recommendations need to be shared with senior management so that the drivers, expected results, and associated costs are understood and endorsed. In the end, the individual making the recommendation should ask himself/herself: 'Is my decision emotional, or logical and economical?' 'Have I reached the optimal decision for my plant?'

  4. Joint optimization of collimator and reconstruction parameters in SPECT imaging for lesion quantification

    International Nuclear Information System (INIS)

    McQuaid, Sarah J; Southekal, Sudeepti; Kijewski, Marie Foley; Moore, Stephen C

    2011-01-01

    Obtaining the best possible task performance using reconstructed SPECT images requires optimization of both the collimator and reconstruction parameters. The goal of this study is to determine how to perform this optimization, namely whether the collimator parameters can be optimized solely from projection data, or whether reconstruction parameters should also be considered. In order to answer this question, and to determine the optimal collimation, a digital phantom representing a human torso with 16 mm diameter hot lesions (activity ratio 8:1) was generated and used to simulate clinical SPECT studies with parallel-hole collimation. Two approaches to optimizing the SPECT system were then compared in a lesion quantification task: sequential optimization, where collimation was optimized on projection data using the Cramer–Rao bound, and joint optimization, which simultaneously optimized collimator and reconstruction parameters. For every condition, quantification performance in reconstructed images was evaluated using the root-mean-squared-error of 400 estimates of lesion activity. Compared to the joint-optimization approach, the sequential-optimization approach favoured a poorer resolution collimator, which, under some conditions, resulted in sub-optimal estimation performance. This implies that inclusion of the reconstruction parameters in the optimization procedure is important in obtaining the best possible task performance; in this study, this was achieved with a collimator resolution similar to that of a general-purpose (LEGP) collimator. This collimator was found to outperform the more commonly used high-resolution (LEHR) collimator, in agreement with other task-based studies, using both quantification and detection tasks.

  5. iCycle: Integrated, multicriterial beam angle, and profile optimization for generation of coplanar and noncoplanar IMRT plans

    International Nuclear Information System (INIS)

    Breedveld, Sebastiaan; Storchi, Pascal R. M.; Voet, Peter W. J.; Heijmen, Ben J. M.

    2012-01-01

    Purpose: To introduce iCycle, a novel algorithm for integrated, multicriterial optimization of beam angles and intensity modulated radiotherapy (IMRT) profiles. Methods: A multicriterial plan optimization with iCycle is based on a prescription called a wish-list, containing hard constraints and objectives with ascribed priorities. Priorities are ordinal parameters used for relative importance ranking of the objectives. The higher an objective's priority, the higher the probability that the corresponding objective will be met. Beam directions are selected from an input set of candidate directions. Input sets can be restricted, e.g., to allow only generation of coplanar plans, or to avoid collisions between patient/couch and the gantry in a noncoplanar setup. Obtaining clinically feasible calculation times was an important design criterion for the development of iCycle. This could be realized by sequentially adding beams to the treatment plan in an iterative procedure. Each iteration loop starts with selection of the optimal direction to be added. Then, a Pareto-optimal IMRT plan is generated for the (fixed) beam setup that includes all so far selected directions, using a previously published algorithm for multicriterial optimization of fluence profiles for a fixed beam arrangement [Breedveld et al., Phys. Med. Biol. 54, 7199-7209 (2009)]. To select the next direction, each candidate direction not yet selected is temporarily added to the plan and an optimization problem, derived from the Lagrangian obtained from the just-performed optimization for establishing the Pareto-optimal plan, is solved. For each patient, Pareto-optimal plans with one, two, three, etc. beams are generated until the addition of beams no longer results in significant plan quality improvement. Plan generation with iCycle is fully automated. Results: Performance and characteristics of iCycle are demonstrated by generating plans for a maxillary sinus case, a cervical cancer patient, and a

  6. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    DR OKE

    differential evolution (DE) algorithm with adaptive crossover operator … x are assigned by using a sequential scheme which accounts for mean and … the representative scenarios from probabilistic model-based Monte Carlo … Comparison of average convergence of MVMO-S with other metaheuristic optimization methods.

  7. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection.

    Science.gov (United States)

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-07-30

    In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experiment cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO achieved higher overall accuracy (84.17% and 83.59%) and Kappa coefficients (0.6771 and 0.6314) than the other algorithms. Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm would affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm.
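
    A minimal sketch of a Ratio-of-Mean-to-Variance fitness for a candidate feature subset is given below; the exact aggregation used in the paper is not specified here, so averaging the per-feature ratio over the selected columns is an assumption, and the GPSO machinery itself is omitted.

    ```python
    # RMV fitness for feature selection: `features` is an (n_objects, n_features)
    # array, `mask` a boolean vector marking the selected features.
    import numpy as np

    def rmv_fitness(features, mask):
        selected = features[:, mask]
        mean = selected.mean(axis=0)
        var = selected.var(axis=0) + 1e-12    # guard against zero variance
        return (mean / var).mean()            # higher = more stable features (assumed aggregation)
    ```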

  8. Detecting changes in real-time data: a user's guide to optimal detection.

    Science.gov (United States)

    Johnson, P; Moriarty, J; Peskir, G

    2017-08-13

    The real-time detection of changes in a noisily observed signal is an important problem in applied science and engineering. The study of parametric optimal detection theory began in the 1930s, motivated by applications in production and defence. Today this theory, which aims to minimize a given measure of detection delay under accuracy constraints, finds applications in domains including radar, sonar, seismic activity, global positioning, psychological testing, quality control, communications and power systems engineering. This paper reviews developments in optimal detection theory and sequential analysis, including sequential hypothesis testing and change-point detection, in both Bayesian and classical (non-Bayesian) settings. For clarity of exposition, we work in discrete time and provide a brief discussion of the continuous time setting, including recent developments using stochastic calculus. Different measures of detection delay are presented, together with the corresponding optimal solutions. We emphasize the important role of the signal-to-noise ratio and discuss both the underlying assumptions and some typical applications for each formulation. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
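
    One classical solution from this literature is the one-sided CUSUM detector for a known mean shift in discrete time; the sketch below illustrates the delay-versus-false-alarm trade-off discussed above, with the threshold `h` playing the role of the accuracy constraint.

    ```python
    # One-sided CUSUM change-point detector for a Gaussian mean shift mu0 -> mu1.
    import numpy as np

    def cusum(x, mu0, mu1, sigma, h):
        """Return the first index at which a mean shift is declared, else None."""
        llr = (mu1 - mu0) / sigma**2 * (np.asarray(x) - (mu0 + mu1) / 2)  # log-likelihood ratios
        s = 0.0
        for k, inc in enumerate(llr):
            s = max(0.0, s + inc)      # reflected random walk of the cumulative LLR
            if s > h:                  # larger h: fewer false alarms, longer delay
                return k
        return None
    ```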

  9. Optimal allocation of resources in systems

    International Nuclear Information System (INIS)

    Derman, C.; Lieberman, G.J.; Ross, S.M.

    1975-01-01

    In the design of a new system, or the maintenance of an old system, allocation of resources is of prime consideration. In allocating resources it is often beneficial to develop a solution that yields an optimal value of the system measure of desirability. In the context of the problems considered in this paper, the resources to be allocated are components already produced (assembly problems) and money (allocation in the construction or repair of systems). The measure of desirability for system assembly will usually be maximizing the expected number of systems that perform satisfactorily, and the measure in the allocation context will be maximizing the system reliability. Results are presented for these two types of general problems in both a sequential (when appropriate) and a non-sequential context

  10. Optimal selection of Orbital Replacement Unit on-orbit spares - A Space Station system availability model

    Science.gov (United States)

    Schwaab, Douglas G.

    1991-01-01

    A mathematical programming model is presented to optimize the selection of Orbital Replacement Unit on-orbit spares for the Space Station. The model maximizes system availability under the constraints of logistics resupply cargo weight and volume allocations.
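
    Structurally this is a knapsack-type selection under two resource constraints; the brute-force sketch below illustrates the idea for a small candidate list (the actual model would use mathematical programming, and all inputs here are hypothetical placeholders).

    ```python
    # Pick the spares subset maximizing an availability gain subject to
    # cargo weight and volume limits. Feasible only for small item lists.
    from itertools import combinations

    def select_spares(items, max_weight, max_volume):
        """items: list of (name, avail_gain, weight, volume). Returns best subset."""
        best, best_gain = (), 0.0
        for r in range(1, len(items) + 1):
            for subset in combinations(items, r):
                w = sum(i[2] for i in subset)
                v = sum(i[3] for i in subset)
                if w <= max_weight and v <= max_volume:
                    gain = sum(i[1] for i in subset)
                    if gain > best_gain:
                        best, best_gain = subset, gain
        return best, best_gain
    ```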

  11. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    Science.gov (United States)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  12. Transmit Antenna Selection for Power Adaptive Underlay Cognitive Radio with Instantaneous Interference Constraint

    KAUST Repository

    Hanif, Muhammad

    2017-03-31

    The high hardware cost associated with multiple antennas at the secondary transmitter of an underlay cognitive radio (CR) can be reduced by antenna selection. This paper analyzes different power adaptive transmit antenna selection (TAS) schemes for an underlay CR, which ensure that the instantaneous interference caused by the secondary transmitter to the primary receiver is below a predetermined level. We consider optimal continuous power adaptive TAS and present a low-complexity antenna and power level selection scheme for discrete power adaptation, named the sequential antenna and power level selection (SAPS) scheme. Exact statistical characterizations of the signal-to-interference plus noise ratio at the secondary receiver are derived for the considered schemes. Based on the newly derived statistics, we prove that the considered schemes achieve the highest diversity order, equaling the number of antennas at the secondary transmitter. Further, we also derive a closed-form expression of the ergodic capacity for the underlay CR with the SAPS scheme. Finally, we show that the proposed scheme outperforms existing schemes in terms of ergodic capacity.

  13. Sequential administration of pemetrexed with icotinib/erlotinib in lung adenocarcinoma cell lines in vitro.

    Science.gov (United States)

    Feng, Xiuli; Zhang, Yan; Li, Tao; Li, Yu

    2017-12-26

    The combination of chemotherapy and epidermal growth factor receptor-tyrosine kinase inhibitors (EGFR-TKIs) has been shown to be a potent strategy for the treatment of tumors. However, survival time was not extended for patients with lung adenocarcinoma (AdC) compared with first-line chemotherapy. In the present study, we attempted to assess the optimal schedule of the combined administration of pemetrexed and icotinib/erlotinib in AdC cell lines. Human lung AdC cell lines with wild-type EGFR (A549), the EGFR T790M mutation (H1975) and an activating EGFR mutation (HCC827) were used in vitro to assess the differential efficacy of various sequential regimens on cell viability, cell apoptosis and cell cycle distribution. The results suggested that the antiproliferative effect of the sequence of pemetrexed followed by icotinib/erlotinib was greater than that of icotinib/erlotinib followed by pemetrexed. Additionally, a reduction of the G1 phase and an increase of the S phase, promoting cell apoptosis, were observed for the sequence of pemetrexed followed by icotinib/erlotinib. Thus, the sequential administration of pemetrexed followed by icotinib/erlotinib exerted a synergistic effect on HCC827 and H1975 cell lines compared with the reverse sequence. The sequential treatment of pemetrexed followed by icotinib/erlotinib has demonstrated promising results. This treatment strategy warrants further confirmation in patients with advanced lung AdC.

  14. Sequential growth factor application in bone marrow stromal cell ligament engineering.

    Science.gov (United States)

    Moreau, Jodie E; Chen, Jingsong; Horan, Rebecca L; Kaplan, David L; Altman, Gregory H

    2005-01-01

    In vitro bone marrow stromal cell (BMSC) growth may be enhanced through culture medium supplementation, mimicking the biochemical environment in which cells optimally proliferate and differentiate. We hypothesize that the sequential administration of growth factors to first proliferate and then differentiate BMSCs cultured on silk fiber matrices will support the enhanced development of ligament tissue in vitro. Confluent second passage (P2) BMSCs obtained from purified bone marrow aspirates were seeded on RGD-modified silk matrices. Seeded matrices were divided into three groups for 5 days of static culture, with medium supplementation of basic fibroblast growth factor (B; 1 ng/mL), epidermal growth factor (E; 1 ng/mL), or growth factor-free control (C). After day 5, medium supplementation was changed to transforming growth factor-beta1 (T; 5 ng/mL) or C for an additional 9 days of culture. Real-time RT-PCR, SEM, MTT, histology, and ELISA for collagen type I of all sample groups were performed. Results indicated that BT supported the greatest cell ingrowth after 14 days of culture in addition to the greatest cumulative collagen type I expression measured by ELISA. Sequential growth factor application promoted significant increases in collagen type I transcript expression from day 5 of culture to day 14, for five of six groups tested. All T-supplemented samples surpassed their respective control samples in both cell ingrowth and collagen deposition. All samples supported spindle-shaped, fibroblast cell morphology, aligning with the direction of silk fibers. These findings indicate significant in vitro ligament development after only 14 days of culture when using a sequential growth factor approach.

  15. Generalized infimum and sequential product of quantum effects

    International Nuclear Information System (INIS)

    Li Yuan; Sun Xiuhong; Chen Zhengli

    2007-01-01

    The quantum effects for a physical system can be described by the set E(H) of positive operators on a complex Hilbert space H that are bounded above by the identity operator I. For A, B ∈ E(H), the operation of sequential product A∘B = A^(1/2)BA^(1/2) was proposed as a model for sequential quantum measurements. A nice investigation of the properties of the sequential product has been carried out [Gudder, S. and Nagy, G., 'Sequential quantum measurements', J. Math. Phys. 42, 5212 (2001)]. In this note, we extend some results of this reference. In particular, a gap in the proof of Theorem 3.2 in this reference is overcome. In addition, some properties of the generalized infimum A ⊓ B are studied.
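
    The sequential product formula can be checked numerically; a small sketch with illustrative 2×2 effects follows, which also shows the well-known non-commutativity of the operation.

    ```python
    # Numerical illustration of the sequential product A∘B = A^(1/2) B A^(1/2)
    # for two effects (positive operators bounded above by the identity).
    import numpy as np
    from scipy.linalg import sqrtm

    A = np.array([[0.5, 0.2], [0.2, 0.4]])   # an effect: 0 <= A <= I (illustrative)
    B = np.array([[0.7, 0.0], [0.0, 0.3]])
    rootA = sqrtm(A)
    seq_AB = rootA @ B @ rootA                # A∘B, again an effect
    rootB = sqrtm(B)
    seq_BA = rootB @ A @ rootB                # B∘A
    print(np.allclose(seq_AB, seq_BA))        # False: the sequential product is non-commutative
    ```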

  16. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  17. Optimization of fermentation medium for nisin production from ...

    African Journals Online (AJOL)

    Sequentially, Box-Behnken design experiments were implemented for further optimization. RSM combined with ANNGA was used for the analysis of the data. Specifically, an RSM model was used for determining the individual effects and mutual interaction effects of the tested variables on nisin titer (NT), and an ANN model was used for NT ...

  18. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    A. Tran-Duy (An); A. Boonen (Annelies); M.A.F.J. van de Laar (Mart); A. Franke (Andre); J.L. Severens (Hans)

    2011-01-01

    Objective: To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods: Discrete event simulation paradigm was selected for model

  19. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    Tran-Duy, A.; Boonen, A.; Laar, M.A.F.J.; Franke, A.C.; Severens, J.L.

    2011-01-01

    Objective To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods Discrete event simulation paradigm was selected for model development. Drug

  20. Accumulation of evidence during sequential decision making: the importance of top-down factors.

    Science.gov (United States)

    de Lange, Floris P; Jensen, Ole; Dehaene, Stanislas

    2010-01-13

    In the last decade, great progress has been made in characterizing the accumulation of neural information during simple unitary perceptual decisions. However, much less is known about how sequentially presented evidence is integrated over time for successful decision making. The aim of this study was to investigate the mechanisms of sequential decision making in humans. In a magnetoencephalography (MEG) study, we presented healthy volunteers with sequences of centrally presented arrows. Sequence length varied between one and five arrows, and the accumulated directions of the arrows informed the subject about which hand to use for a button press at the end of the sequence (e.g., LRLRR should result in a right-hand press). Mathematical modeling suggested that nonlinear accumulation was the rational strategy for performing this task in the presence of no or little noise, whereas quasilinear accumulation was optimal in the presence of substantial noise. MEG recordings showed a correlate of evidence integration over parietal and central cortex that was inversely related to the amount of accumulated evidence (i.e., when more evidence was accumulated, neural activity for new stimuli was attenuated). This modulation of activity likely reflects a top-down influence on sensory processing, effectively constraining the influence of sensory information on the decision variable over time. The results indicate that, when making decisions on the basis of sequential information, the human nervous system integrates evidence in a nonlinear manner, using the amount of previously accumulated information to constrain the accumulation of additional evidence.

  1. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    Directory of Open Access Journals (Sweden)

    Huan Chen

    Full Text Available This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.

  2. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    Science.gov (United States)

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
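
    To illustrate the flavor of such an ILP, the hedged sketch below models only the polling-switch part (each flow must be observed by at least one polled switch on its route, at minimal polling cost) using the PuLP modeler; the flow-routing variables of the full joint model, and all of the data values, are omitted or assumed.

    ```python
    # Set-cover-style ILP: choose which switches to poll so that every flow's
    # route contains at least one polled switch. Data below is illustrative.
    import pulp

    switches = ["s1", "s2", "s3"]
    cost = {"s1": 3.0, "s2": 2.0, "s3": 4.0}                          # assumed polling costs
    routes = {"f1": ["s1", "s2"], "f2": ["s2", "s3"], "f3": ["s1"]}   # switches on each flow path

    prob = pulp.LpProblem("polling_switch_selection", pulp.LpMinimize)
    y = pulp.LpVariable.dicts("poll", switches, cat="Binary")
    prob += pulp.lpSum(cost[s] * y[s] for s in switches)              # total polling cost
    for f, path in routes.items():
        prob += pulp.lpSum(y[s] for s in path) >= 1                   # flow f must be observed
    prob.solve()
    print({s: int(y[s].value()) for s in switches})
    ```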

  3. Comparing the Selected Transfer Functions and Local Optimization Methods for Neural Network Flood Runoff Forecast

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2014-01-01

    Full Text Available The presented paper aims to analyze the influence of the selection of transfer function and training algorithm on neural network flood runoff forecasts. Nine of the most significant flood events, caused by extreme rainfall, were selected from 10 years of measurements on a small headwater catchment in the Czech Republic, and flood runoff forecasting was investigated using an extensive set of multilayer perceptrons with one hidden layer of neurons. The analyzed artificial neural network models with 11 different activation functions in the hidden layer were trained using 7 local optimization algorithms. The results show that the Levenberg-Marquardt algorithm was superior to the remaining tested local optimization methods. Among the 11 nonlinear transfer functions used in hidden layer neurons, the RootSig function was superior to the rest of the analyzed activation functions.

  4. An Optimization Model for Strategy Decision Support to Select the Kind of CPO Ship

    Science.gov (United States)

    Suaibah Nst, Siti; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    The selection of marine transport for the distribution of crude palm oil (CPO) is one strategy that can be considered for reducing transport costs. The cost of transporting CPO from one area to a CPO factory located at the port of destination may affect the level of CPO prices and the number of demands. In order to maintain the availability of CPO, a strategy is required to minimize the cost of transport. In this study, the strategy is to select the kind of chartered ship: a barge or a chemical tanker. This study aims to determine an optimization model for strategy decision support in selecting the kind of CPO ship by minimizing transport costs. The ship selection involves random factors, so a two-stage stochastic programming model was used to select the kind of ship. The model can help decision makers select either a barge or a chemical tanker to distribute CPO.

  5. Self-Regulatory Strategies in Daily Life: Selection, Optimization, and Compensation and Everyday Memory Problems

    Science.gov (United States)

    Robinson, Stephanie A.; Rickenbach, Elizabeth H.; Lachman, Margie E.

    2016-01-01

    The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to…

  6. Optimization of Selected RFID System Parameters

    Directory of Open Access Journals (Sweden)

    Peter Vestenicky

    2004-01-01

    Full Text Available This paper describes a procedure for maximizing the read range of an RFID transponder. This is done by optimizing the magnetic field intensity at the transponder location and the coupling factor between the antenna and transponder coils. The results of this paper can be used for RFID with an inductive loop, i.e., systems working in the near electromagnetic field.

  7. Sequential injection analysis for automation of the Winkler methodology, with real-time SIMPLEX optimization and shipboard application

    Energy Technology Data Exchange (ETDEWEB)

    Horstkotte, Burkhard; Tovar Sanchez, Antonio; Duarte, Carlos M. [Department of Global Change Research, IMEDEA (CSIC-UIB) Institut Mediterrani d' Estudis Avancats, Miquel Marques 21, 07190 Esporles (Spain); Cerda, Victor, E-mail: Victor.Cerda@uib.es [University of the Balearic Islands, Department of Chemistry Carreterra de Valldemossa km 7.5, 07011 Palma de Mallorca (Spain)

    2010-01-25

    A multipurpose analyzer system based on sequential injection analysis (SIA) for the determination of dissolved oxygen (DO) in seawater is presented. Three operation modes were established and successfully applied onboard during a research cruise in the Southern Ocean: first, in-line execution of the entire Winkler method including precipitation of manganese(II) hydroxide, fixation of DO, precipitate dissolution by confluent acidification, and spectrophotometric quantification of the generated iodine/tri-iodide (I₂/I₃⁻); second, spectrophotometric quantification of I₂/I₃⁻ in samples prepared according to the classical Winkler protocol; and third, accurate batch-wise titration of I₂/I₃⁻ with thiosulfate using one syringe pump of the analyzer as an automatic burette. In the first mode, the zone stacking principle was applied to achieve high dispersion of the reagent solutions in the sample zone. Spectrophotometric detection was done at 466 nm, the isosbestic wavelength of I₂/I₃⁻. Highly reduced consumption of reagents and sample compared to the classical Winkler protocol, linear response up to 16 mg L⁻¹ DO, and an injection frequency of 30 per hour were achieved. It is noteworthy that for the offline protocol, sample metering and quantification with a potentiometric titrator generally lasts over 5 min, without counting sample fixation, incubation, and glassware cleaning. The modified SIMPLEX methodology was used for the simultaneous optimization of four volumetric and two chemical variables. Vertex calculation and consequent application, including in-line preparation of one reagent, were carried out in real time using the software AutoAnalysis. The analytical system featured high signal stability, robustness, and a repeatability of 3% RSD (first mode) and 0.8% (second mode) during shipboard application.

  8. Sequential injection analysis for automation of the Winkler methodology, with real-time SIMPLEX optimization and shipboard application

    International Nuclear Information System (INIS)

    Horstkotte, Burkhard; Tovar Sanchez, Antonio; Duarte, Carlos M.; Cerda, Victor

    2010-01-01

    A multipurpose analyzer system based on sequential injection analysis (SIA) for the determination of dissolved oxygen (DO) in seawater is presented. Three operation modes were established and successfully applied onboard during a research cruise in the Southern Ocean: first, in-line execution of the entire Winkler method including precipitation of manganese(II) hydroxide, fixation of DO, precipitate dissolution by confluent acidification, and spectrophotometric quantification of the generated iodine/tri-iodide (I₂/I₃⁻); second, spectrophotometric quantification of I₂/I₃⁻ in samples prepared according to the classical Winkler protocol; and third, accurate batch-wise titration of I₂/I₃⁻ with thiosulfate using one syringe pump of the analyzer as an automatic burette. In the first mode, the zone stacking principle was applied to achieve high dispersion of the reagent solutions in the sample zone. Spectrophotometric detection was done at 466 nm, the isosbestic wavelength of I₂/I₃⁻. Highly reduced consumption of reagents and sample compared to the classical Winkler protocol, linear response up to 16 mg L⁻¹ DO, and an injection frequency of 30 per hour were achieved. It is noteworthy that for the offline protocol, sample metering and quantification with a potentiometric titrator generally lasts over 5 min, without counting sample fixation, incubation, and glassware cleaning. The modified SIMPLEX methodology was used for the simultaneous optimization of four volumetric and two chemical variables. Vertex calculation and consequent application, including in-line preparation of one reagent, were carried out in real time using the software AutoAnalysis. The analytical system featured high signal stability, robustness, and a repeatability of 3% RSD (first mode) and 0.8% (second mode) during shipboard application.
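
    The simplex-type search over the six method variables can be mimicked with a Nelder-Mead optimizer; in the sketch below a stand-in response function replaces the real-time analyzer feedback, so this is only a structural sketch with hypothetical numbers, not the paper's modified SIMPLEX routine.

    ```python
    # Nelder-Mead (simplex) search over four volumetric and two chemical
    # variables; negative_sensitivity is a placeholder for measured response.
    import numpy as np
    from scipy.optimize import minimize

    def negative_sensitivity(v):
        """Stand-in for the analytical response at variable vector v; lower is better."""
        optimum = np.array([50, 80, 30, 60, 0.1, 0.5])       # hypothetical optimum
        return float(np.sum(((v - optimum) / optimum) ** 2))

    x0 = np.array([40, 70, 25, 50, 0.08, 0.4])               # starting vertex
    res = minimize(negative_sensitivity, x0, method="Nelder-Mead")
    print(res.x)   # variable set giving the best simulated response
    ```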

  9. Distributed Algorithms for Time Optimal Reachability Analysis

    DEFF Research Database (Denmark)

    Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand

    2016-01-01

    Time optimal reachability analysis is a novel model-based technique for solving scheduling and planning problems. After modeling them as reachability problems using timed automata, a real-time model checker can compute the fastest trace to the goal states, which constitutes a time optimal schedule. We propose distributed computing to accelerate time optimal reachability analysis. We develop five distributed state exploration algorithms and implement them in UPPAAL, enabling it to exploit the compute resources of a dedicated model-checking cluster. We experimentally evaluate the implemented algorithms with four models in terms of their ability to compute near- or proven-optimal solutions, their scalability, time and memory consumption, and communication overhead. Our results show that distributed algorithms work much faster than sequential algorithms and have good speedup in general.

  10. Sequential analysis in neonatal research-systematic review.

    Science.gov (United States)

    Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne

    2018-05-01

    As more new drugs are discovered, traditional designs reach their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review on the US National Library of Medicine and Excerpta Medica databases of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and the median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 patients (31%) (IQR -4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 patients (57%) (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in neonatology. They might potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs reach their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns were infrequently used and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 patients (31%) (IQR -4.75 to 136.5, p = 0.0674).

  11. Structural Design Optimization On Thermally Induced Vibration

    International Nuclear Information System (INIS)

    Gu, Yuanxian; Chen, Biaosong; Zhang, Hongwu; Zhao, Guozhong

    2002-01-01

    The numerical method of design optimization for structural thermally induced vibration is originally studied in this paper and implemented in the application software JIFEX. The direct and adjoint methods of sensitivity analysis for thermally induced vibration coupled with both linear and nonlinear transient heat conduction are first proposed. Based on the finite element method, the structural linear dynamics is treated simultaneously with coupled linear and nonlinear transient heat conduction. In the thermal analysis model, the nonlinear heat conduction considered results from radiation and temperature-dependent materials. The sensitivity analysis of transient linear and nonlinear heat conduction is performed with the precise time integration method. Then, the sensitivity analysis of structural transient dynamics is performed by the Newmark method. Both the direct method and the adjoint method are employed to derive the sensitivity equations of thermal vibration, with two adjoint vectors for the structure and the heat conduction, respectively. The coupling effect of heat conduction on thermal vibration in the sensitivity analysis is particularly investigated. With the coupling sensitivity analysis, the optimization model is constructed and solved by the sequential linear programming or sequential quadratic programming algorithm. The methods proposed have been implemented in the application software JIFEX for structural design optimization, and numerical examples are given to illustrate the methods and the usage of structural design optimization on thermally induced vibration

  12. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements. The measurement differences were assumed to be normally distributed, and sequential one-sided hypothesis tests on the population standard deviation of the differences against a hypothesised value of 1.5 were performed, employing an alpha spending function. The fixed-sample analysis (N = 45) was compared with group-sequential analysis strategies comprising one (at N = 23), two (at N = 15, 30), or three interim analyses (at N = 11, 23, 34), respectively, which were defined post hoc. RESULTS: When performing interim analyses with one third and two thirds of patients, sufficient agreement could be concluded after the first interim analysis
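
    As a sketch of the alpha-spending idea, the snippet below evaluates an O'Brien-Fleming-type spending function at the interim information fractions of the three-look strategy above; deriving the actual stopping boundaries would additionally require the joint distribution of the sequential test statistics, which is omitted here.

    ```python
    # Cumulative and incremental alpha spent at each look, using an
    # O'Brien-Fleming-type one-sided spending function.
    import numpy as np
    from scipy.stats import norm

    def obrien_fleming_spend(t, alpha=0.05):
        """Cumulative alpha spent at information fraction t (0 < t <= 1)."""
        return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))

    fractions = np.array([11, 23, 34, 45]) / 45.0      # three interims + final look
    cumulative = obrien_fleming_spend(fractions)
    incremental = np.diff(np.concatenate(([0.0], cumulative)))
    print(cumulative, incremental)                     # spends almost nothing early
    ```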

  13. Sequential decision making in computational sustainability via adaptive submodularity

    Science.gov (United States)

    Krause, Andreas; Golovin, Daniel; Converse, Sarah J.

    2015-01-01

    Many problems in computational sustainability require making a sequence of decisions in complex, uncertain environments. Such problems are generally notoriously difficult. In this article, we review the recently discovered notion of adaptive submodularity, an intuitive diminishing returns condition that generalizes the classical notion of submodular set functions to sequential decision problems. Problems exhibiting the adaptive submodularity property can be efficiently and provably near-optimally solved using simple myopic policies. We illustrate this concept in several case studies of interest in computational sustainability: First, we demonstrate how it can be used to efficiently plan for resolving uncertainty in adaptive management scenarios. Secondly, we show how it applies to dynamic conservation planning for protecting endangered species, a case study carried out in collaboration with the US Geological Survey and the US Fish and Wildlife Service.
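
    The myopic policies mentioned above share a simple shape: greedily pick the action with the highest expected marginal gain, observe the outcome, and repeat. The sketch below is a generic version under a cardinality budget; `expected_gain` and `observe` are assumed callbacks, not functions from the cited work.

    ```python
    # Greedy myopic policy for adaptive (submodular) selection problems:
    # near-optimal when the objective is adaptive submodular.
    def greedy_policy(candidates, expected_gain, budget, observe):
        """Pick the action with the highest expected marginal gain given the
        evidence gathered so far, then observe its outcome before continuing."""
        state, chosen = {}, []
        for _ in range(budget):
            best = max((a for a in candidates if a not in chosen),
                       key=lambda a: expected_gain(a, state))
            chosen.append(best)
            state[best] = observe(best)   # new evidence conditions later choices
        return chosen, state
    ```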

  14. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    International Nuclear Information System (INIS)

    Sutrisno; Widowati; Solikhin

    2016-01-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from that supplier so that the inventory level is located as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period, the proposed model generated the optimal supplier, and the inventory level tracked the reference point well. (paper)
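
    A minimal backward-recursion sketch of this kind of problem follows: at each stage a supplier and order quantity are chosen to keep the inventory near a reference level at minimal expected cost. All numbers are illustrative, and the stage cost (a tracking-error penalty plus purchasing cost) is an assumption, not the paper's exact objective.

    ```python
    # Stochastic DP over a finite horizon: state = inventory level,
    # decision = (supplier, order quantity), randomness = demand.
    import numpy as np

    T, REF, MAX = 4, 5, 10                    # horizon, reference level, capacity
    suppliers = [2.0, 2.4]                    # hypothetical unit prices per supplier
    orders = range(6)                         # admissible order quantities
    demand = {1: 0.3, 2: 0.5, 3: 0.2}         # demand probability mass function

    V = np.zeros(MAX + 1)                     # terminal cost-to-go
    for t in range(T):                        # backward recursion
        V_new = np.full(MAX + 1, np.inf)
        for x in range(MAX + 1):              # current inventory level
            for price in suppliers:           # supplier choice
                for q in orders:              # order quantity
                    cost = price * q
                    for d, p in demand.items():
                        nx = min(max(x + q - d, 0), MAX)
                        cost += p * (abs(nx - REF) + V[nx])   # tracking error + future cost
                    V_new[x] = min(V_new[x], cost)
        V = V_new
    print(V[3])   # minimal expected cost starting from inventory level 3
    ```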

  15. Total Lesion Glycolysis and Sequential (90)Y-Selective Internal Radiation Therapy in Breast Cancer Liver Metastases: Preliminary Results.

    Science.gov (United States)

    Bagni, Oreste; Filippi, Luca; Pelle, Giuseppe; Cianni, Roberto; Schillaci, Orazio

    2015-12-01

    To assess the prognostic role of total lesion glycolysis (TLG) in patients with breast cancer liver metastases (BCLM) after sequential lobar (90)Y-radioembolization ((90)Y-RE). Seventeen patients with bilobar BCLM underwent FDG PET/CT and TLG calculation before (90)Y-RE. The hepatic lobe with the highest TLG was treated in the first session. PET was performed 6 weeks postprocedure and the decrease in TLG (ΔTLG) in the treated lobe was calculated before the second (90)Y administration. Subjects were divided into two groups (group 1: ΔTLG >50%; group 2: ΔTLG <50%). Ten patients had a ΔTLG value >50% and seven had a ΔTLG value <50%. Patients with ΔTLG >50% and ΔTLG <50% had a mean OS of 16.4 ± 0.6 and 10.3 ± 0.4 months, respectively (p < 0.001). Cox regression analysis demonstrated hepatic tumor load (p = 0.048) and ΔTLG (p = 0.005) as the only significant predictors of survival. ΔTLG after the first (90)Y administration agrees with final outcome in BCLM patients after separate sequential lobar (90)Y-RE.

  16. Comparison of ablation centration after bilateral sequential versus simultaneous LASIK.

    Science.gov (United States)

    Lin, Jane-Ming; Tsai, Yi-Yu

    2005-01-01

    To compare ablation centration after bilateral sequential and simultaneous myopic LASIK. A retrospective randomized case series was performed of 670 eyes of 335 consecutive patients who had undergone either bilateral sequential (group 1) or simultaneous (group 2) myopic LASIK between July 2000 and July 2001 at the China Medical University Hospital, Taichung, Taiwan. The ablation centrations of the first and second eyes in the two groups were compared 3 months postoperatively. Of 670 eyes, 274 eyes (137 patients) comprised the sequential group and 396 eyes (198 patients) comprised the simultaneous group. Three months postoperatively, 220 eyes of 110 patients (80%) in the sequential group and 236 eyes of 118 patients (60%) in the simultaneous group provided topographic data for centration analysis. For the first eyes, mean decentration was 0.39 +/- 0.26 mm in the sequential group and 0.41 +/- 0.19 mm in the simultaneous group (P = .30). For the second eyes, mean decentration was 0.28 +/- 0.23 mm in the sequential group and 0.30 +/- 0.21 mm in the simultaneous group (P = .36). Decentration in the second eyes significantly improved in both groups (group 1, P = .02; group 2, P < .05). Mean decentration was … in the sequential group and 0.32 +/- 0.18 mm in the simultaneous group (P = .33). The difference of ablation center angles between the first and second eyes was 43.2 … degrees in the sequential group and 45.1 +/- 50.8 degrees in the simultaneous group (P = .42). Simultaneous bilateral LASIK is comparable to sequential surgery in ablation centration.

  17. DiceKriging, DiceOptim: Two R Packages for the Analysis of Computer Experiments by Kriging-Based Metamodeling and Optimization

    Directory of Open Access Journals (Sweden)

    Olivier Roustant

    2012-10-01

    Full Text Available We present two recently released R packages, DiceKriging and DiceOptim, for the approximation and the optimization of expensive-to-evaluate deterministic functions. Following a self-contained mini tutorial on Kriging-based approximation and optimization, the functionalities of both packages are detailed and demonstrated in two distinct sections. In particular, the versatility of DiceKriging with respect to trend and noise specifications, covariance parameter estimation, as well as conditional and unconditional simulations are illustrated on the basis of several reproducible numerical experiments. We then put to the fore the implementation of sequential and parallel optimization strategies relying on the expected improvement criterion on the occasion of DiceOptim’s presentation. An appendix is dedicated to complementary mathematical and computational details.
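
    The expected improvement (EI) criterion mentioned above has a closed form under a Gaussian posterior; the packages implement it in R, but for reference it is written out below in Python (minimization convention), with `mu` and `sigma` the Kriging posterior mean and standard deviation at a candidate point and `f_best` the best observed value.

    ```python
    # Closed-form expected improvement for Kriging-based optimization.
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        sigma = np.maximum(sigma, 1e-12)      # avoid division by zero at sampled points
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    ```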

  18. Moral Hazard, Adverse Selection and the Optimal Consumption-Leisure Choice under Equilibrium Price Dispersion

    Directory of Open Access Journals (Sweden)

    Sergey Malakhov

    2017-09-01

    Full Text Available The analysis of the optimal consumption-leisure choice under equilibrium price dispersion reveals the methodological difference between the problems of moral hazard and adverse selection. While the phenomenon of moral hazard represents the individual behavioral reaction to the marginal rate of substitution of leisure for consumption proposed by the insurance policy, adverse selection can take place on any imperfect market under equilibrium price dispersion, and it looks like a market phenomenon of natural selection between consumers with different incomes and different propensities to search. The analysis of health insurance, where the propensity to search takes the form of the propensity to seek healthcare, demonstrates that moral hazard takes place when the insurance policy proposes a suboptimal consumption-leisure choice, and that the increase in consumption of medical services with the reduction of leisure time represents not an unlimited demand for “free goods” but a simple process of consumption-leisure optimization. The path of consumerism with consumer-directed plans can partly solve the problem of moral hazard because, in order to eliminate moral hazard, this trend should lead to the re-sale of medical services under health vouchers, as takes place in the life settlement market.

  19. Value of information in sequential decision making: Component inspection, permanent monitoring and system-level scheduling

    International Nuclear Information System (INIS)

    Memarzadeh, Milad; Pozzi, Matteo

    2016-01-01

    We illustrate how to assess the Value of Information (VoI) in sequential decision making problems modeled by Partially Observable Markov Decision Processes (POMDPs). POMDPs provide a general framework for modeling the management of infrastructure components, including operation and maintenance, when only partial or noisy observations are available; VoI is a key concept for selecting explorative actions, with application to component inspection and monitoring. Furthermore, component-level VoI can serve as an effective heuristic for assigning priorities to system-level inspection scheduling. We introduce two alternative models for the availability of information, and derive the VoI in each of those settings: the Stochastic Allocation (SA) model assumes that observations are collected with a given probability, while the Fee-based Allocation (FA) model assumes that they are available at a given cost. After presenting these models at component level, we investigate how they perform for system-level inspection scheduling. - Highlights: • On the Value of Information in POMDPs, for optimal exploration of systems. • A method for assessing the Value of Information of permanent monitoring. • A method for allocating inspections in systems made up of parallel POMDPs.
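
    The core VoI computation can be shown on a toy component with two hidden states; the sketch below (illustrative numbers, perfect inspection assumed for simplicity) takes VoI as the expected cost of the best uninformed action minus the expected cost when the action can react to the inspection outcome.

    ```python
    # Myopic Value of Information for a single component (intact/damaged).
    p_damaged = 0.3
    cost = {("repair", "intact"): 50, ("repair", "damaged"): 50,
            ("ignore", "intact"): 0,  ("ignore", "damaged"): 200}

    def expected_cost(action, p):
        return (1 - p) * cost[(action, "intact")] + p * cost[(action, "damaged")]

    prior_cost = min(expected_cost(a, p_damaged) for a in ("repair", "ignore"))
    # With a perfect observation, the best action is chosen per revealed state.
    posterior_cost = ((1 - p_damaged) * min(cost[(a, "intact")] for a in ("repair", "ignore"))
                      + p_damaged * min(cost[(a, "damaged")] for a in ("repair", "ignore")))
    print(prior_cost - posterior_cost)   # VoI = 50 - 15 = 35: inspecting is worth up to 35
    ```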

  20. A Survey of Multi-Objective Sequential Decision-Making

    NARCIS (Netherlands)

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives.

  1. Optimization of ISSR-PCR reaction system and selection of primers in Bryum argenteum

    Directory of Open Access Journals (Sweden)

    Ma Xiaoying

    2017-02-01

    Full Text Available In order to determine the optimum ISSR-PCR reaction system for the moss Bryum argenteum, the concentrations of template DNA, primers, dNTPs, Mg2+ and Taq DNA polymerase were optimized at four levels by an orthogonal PCR experimental method. Appropriate primers were screened from 100 primers by temperature gradient PCR, and the optimal annealing temperatures of the screened primers were determined. The results showed that the optimized 20 μL ISSR-PCR reaction system was as follows: template DNA 20 ng/20 μL, primers 0.45 μmol/L, Mg2+ 2.65 mmol/L, Taq DNA polymerase 0.4 U/20 μL, dNTPs 0.45 mmol/L. Using this system, 50 primers with clear bands, good repeatability and high polymorphism were selected from the 100 primers. The establishment of this system, the screened primers and the annealing temperatures provide a theoretical basis for further research on the genetic diversity of bryophytes using ISSR molecular markers.

  2. Sequential lineups: shift in criterion or decision strategy?

    Science.gov (United States)

    Gronlund, Scott D

    2004-04-01

    R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.

  3. Sequential reduction of external networks for the security- and short circuit monitor in power system control centers

    Energy Technology Data Exchange (ETDEWEB)

    Dietze, P [Siemens A.G., Erlangen (Germany, F.R.). Abt. ESTE

    1978-01-01

    For the evaluation of the effects of switching operations, or the simulation of line, transformer, and generator outages, the influence of interconnected neighbor networks is modelled by network equivalents in the process computer. The basic passive conductivity model is produced by sequential reduction and adapted to fit the active network behavior. The reduction routine uses the admittance matrix, sparse techniques and optimal ordering, and is suitable for process computer applications.
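
    Sequential reduction of an external network can be expressed compactly as a Kron (Schur-complement) reduction of the nodal admittance matrix; the sketch below, a minimal illustration rather than the paper's exact routine, eliminates the external buses while preserving their influence on the retained boundary buses.

    ```python
    # Kron reduction: eliminate external buses from the admittance matrix Y.
    import numpy as np

    def kron_reduce(Y, keep):
        """Reduce Y to the bus set marked True in the boolean mask `keep`."""
        k = np.ix_(keep, keep)          # retained block
        e = np.ix_(~keep, ~keep)        # eliminated (external) block
        ke, ek = np.ix_(keep, ~keep), np.ix_(~keep, keep)
        # Schur complement: Y_kk - Y_ke * Y_ee^{-1} * Y_ek
        return Y[k] - Y[ke] @ np.linalg.solve(Y[e], Y[ek])
    ```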

  4. Simultaneous beam sampling and aperture shape optimization for SPORT.

    Science.gov (United States)

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case.

  5. Simultaneous beam sampling and aperture shape optimization for SPORT

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  6. Simultaneous beam sampling and aperture shape optimization for SPORT

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu

    2015-01-01

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and
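
    The control flow of the three-stage search described in these records can be sketched as a loop; all callbacks below are assumptions standing in for the authors' actual routines, so this is a structural outline rather than the paper's algorithm.

    ```python
    # Skeleton of the SPORT-style search: column generation proposes stations,
    # a subgradient step refines them locally, and a pattern search perturbs
    # parameters the gradient step cannot reach.
    def sport_optimize(candidates, price, refine, pattern_search, tol=1e-3):
        """price(station, plan) scores a candidate station; refine and
        pattern_search each return an improved (plan, quality) pair."""
        plan, quality = [], float("-inf")
        pool = list(candidates)
        while pool:
            station = max(pool, key=lambda s: price(s, plan))   # column generation
            pool.remove(station)
            new_plan, new_quality = refine(plan + [station])    # subgradient refinement
            new_plan, new_quality = pattern_search(new_plan)    # escape local plateaus
            if new_quality - quality < tol:                     # improvement saturated
                break
            plan, quality = new_plan, new_quality
        return plan, quality
    ```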

  7. Atlas ranking and selection for automatic segmentation of the esophagus from CT scans

    Science.gov (United States)

    Yang, Jinzhong; Haas, Benjamin; Fang, Raymond; Beadle, Beth M.; Garden, Adam S.; Liao, Zhongxing; Zhang, Lifei; Balter, Peter; Court, Laurence

    2017-12-01

    In radiation treatment planning, the esophagus is an important organ-at-risk that should be spared in patients with head and neck cancer or thoracic cancer who undergo intensity-modulated radiation therapy. However, automatic segmentation of the esophagus from CT scans is extremely challenging because of the structure's inconsistent intensity, low contrast against the surrounding tissues, complex and variable shape and location, and random air bubbles. The goal of this study is to develop an online atlas selection approach that chooses a subset of optimal atlases for multi-atlas segmentation to delineate the esophagus automatically. We performed atlas selection in two phases. In the first phase, we used the correlation coefficient of the image content in a cubic region between each atlas and the new image to evaluate their similarity and to rank the atlases in an atlas pool. A subset of atlases based on this ranking was selected, and deformable image registration was performed to generate deformed contours and deformed images in the new image space. In the second phase of atlas selection, we used Kullback-Leibler divergence to measure the similarity of local-intensity histograms between the new image and each of the deformed images, and the measurements were used to rank the previously selected atlases. Deformed contours were overlapped sequentially, from the most to the least similar, and the overlap ratio was examined. We further identified a subset of optimal atlases by analyzing the variation of the overlap ratio versus the number of atlases. The deformed contours from these optimal atlases were fused together using a modified simultaneous truth and performance level estimation algorithm to produce the final segmentation. The approach was validated with promising results using both internal data sets (21 head and neck cancer patients and 15 thoracic cancer patients) and external data sets (30 thoracic patients).
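
    The two ranking criteria can be sketched directly; the snippet below assumes pre-extracted regions of interest as arrays and omits the deformable registration step, and the histogram range and bin count are illustrative choices.

    ```python
    # Phase one: rank atlases by image correlation in a region of interest.
    # Phase two: rank registered (deformed) atlases by KL divergence of
    # local intensity histograms against the new image.
    import numpy as np
    from scipy.stats import entropy

    def rank_by_correlation(target_roi, atlas_rois):
        """Higher correlation with the new image -> better rank."""
        t = target_roi.ravel()
        scores = [np.corrcoef(t, a.ravel())[0, 1] for a in atlas_rois]
        return np.argsort(scores)[::-1]

    def rank_by_kl(target_roi, deformed_rois, bins=64, rng=(0, 2000)):
        """Lower KL divergence between intensity histograms -> better rank."""
        p, _ = np.histogram(target_roi, bins=bins, range=rng, density=True)
        scores = []
        for d in deformed_rois:
            q, _ = np.histogram(d, bins=bins, range=rng, density=True)
            scores.append(entropy(p + 1e-12, q + 1e-12))   # KL(p || q)
        return np.argsort(scores)
    ```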

  8. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before a model is trained, a machine learning algorithm must be selected, and the values of one or more model parameters, termed hyper-parameters, must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed, but existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method and show that, compared to a state-of-the-art automatic selection method, it can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying the high-quality solutions required by many machine learning-based clinical data analysis tasks.
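
    The progressive-sampling idea can be sketched as follows; note that the paper couples it with Bayesian optimization over both algorithms and hyper-parameters, whereas this hedged stand-in uses plain random candidates pruned on growing samples. The dataset and parameter ranges are arbitrary.

        # Stand-in for the progressive-sampling idea: random hyper-parameter
        # candidates (the paper instead proposes them via Bayesian optimization)
        # are pruned on progressively larger training samples.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=4000, random_state=0)
        rng = np.random.default_rng(0)
        candidates = [{"C": float(10 ** rng.uniform(-3, 3))} for _ in range(16)]

        n = 250
        while len(candidates) > 1 and n <= len(X):
            # Score every surviving configuration on the current sample size ...
            scores = [cross_val_score(LogisticRegression(C=c["C"], max_iter=1000),
                                      X[:n], y[:n], cv=3).mean() for c in candidates]
            # ... keep the better half, so cheap small samples prune bad configs early.
            order = np.argsort(scores)[::-1]
            candidates = [candidates[i] for i in order[: max(1, len(candidates) // 2)]]
            n *= 2

        print("selected hyper-parameters:", candidates[0])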

  9. Monitoring sequential electron transfer with EPR

    International Nuclear Information System (INIS)

    Thurnauer, M.C.; Feezel, L.L.; Snyder, S.W.; Tang, J.; Norris, J.R.; Morris, A.L.; Rustandi, R.R.

    1989-01-01

    A completely general model that treats the electron spin polarization (ESP) found in a system in which radical pairs with different magnetic interactions are formed sequentially has been described. This treatment has been applied specifically to the ESP found in the bacterial reaction center. Test cases show clearly how parameters such as structure, lifetime, and magnetic interactions within the successive radical pairs affect the ESP, and demonstrate that previous treatments of this problem have been incomplete. The photosynthetic bacterial reaction center protein is an ideal system for testing the general model of ESP. The radical pair that exhibits ESP, P870+ Q- (P870+ is the oxidized primary electron donor, a bacteriochlorophyll special pair, and Q- is the reduced primary quinone acceptor), is formed via sequential electron transport through the intermediary radical pair P870+ I- (I- is the reduced intermediary electron acceptor, a bacteriopheophytin). In addition, it is possible to experimentally vary most of the important parameters, such as the lifetime of the intermediary radical pair and the magnetic interactions in each pair. It has been shown how selective isotopic substitution (1H or 2H) on P870, I, and Q affects the ESP of the EPR spectrum of P870+ Q-, observed at two different microwave frequencies, in Fe2+-depleted bacterial reaction centers of Rhodobacter sphaeroides R26. Thus, the relative magnitudes of the magnetic properties (nuclear hyperfine and g-factor differences) that influence ESP development were varied. The results support the general model of ESP in that they suggest that the P870+ Q- radical pair interactions are the dominant source of ESP production in 2H bacterial reaction centers.

  10. Channel selection for simultaneous and proportional myoelectric prosthesis control of multiple degrees-of-freedom

    Science.gov (United States)

    Hwang, Han-Jeong; Hahne, Janne Mathias; Müller, Klaus-Robert

    2014-10-01

    Objective. Recent studies have shown the possibility of simultaneous and proportional control of electrically powered upper-limb prostheses, but there has been little investigation of optimal channel selection. The objective of this study is to find a robust channel selection method and the channel subsets most suitable for simultaneous and proportional myoelectric prosthesis control of multiple degrees-of-freedom (DoFs). Approach. Ten able-bodied subjects and one person with congenital upper-limb deficiency took part in this study and performed wrist movements with various combinations of two DoFs (flexion/extension and radial/ulnar deviation). During the experiment, high-density electromyographic (EMG) signals and the actual wrist angles were recorded with an 8 × 24 electrode array and a motion tracking system, respectively. The wrist angles were estimated from EMG features with ridge regression using the subsets of channels chosen by three different channel selection methods: (1) least absolute shrinkage and selection operator (LASSO), (2) sequential feature selection (SFS), and (3) uniform selection (UNI). Main results. SFS generally showed higher estimation accuracy than LASSO and UNI, but LASSO always outperformed SFS in robustness to perturbations such as noise addition, channel shift, and training-data reduction. It was also confirmed that about 95% of the original performance obtained using all channels can be retained with only 12 bipolar channels individually selected by LASSO and SFS. Significance. From these results, it can be concluded that LASSO is a promising channel selection method for accurate simultaneous and proportional prosthesis control. We expect that our results will provide a useful guideline for selecting optimal channel subsets when developing clinical myoelectric prosthesis control systems based on continuous movements with multiple DoFs.
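
    A minimal sketch of the winning pipeline described above (LASSO for channel selection, ridge regression for angle estimation), with synthetic signals standing in for the 8 × 24 EMG array; the channel counts, regularization strengths, and planted weights are hypothetical.

        # Sketch of LASSO channel selection followed by ridge regression of a
        # wrist angle; synthetic signals replace the EMG array, and the planted
        # channel weights and regularization strengths are hypothetical.
        import numpy as np
        from sklearn.linear_model import Lasso, Ridge

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 48))                  # samples x candidate channels
        true_w = np.zeros(48)
        true_w[[3, 7, 21, 30]] = [1.0, -0.5, 0.8, 0.3]  # only four channels matter
        y = X @ true_w + 0.05 * rng.normal(size=600)    # one DoF's wrist angle

        # LASSO drives most channel weights to exactly zero; the nonzero
        # coefficients define the selected channel subset.
        selected = np.flatnonzero(Lasso(alpha=0.05).fit(X, y).coef_)
        print("selected channels:", selected)

        # Ridge regression on the reduced subset then estimates the wrist angle.
        ridge = Ridge(alpha=1.0).fit(X[:, selected], y)
        print("training R^2:", round(ridge.score(X[:, selected], y), 3))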

  11. Device-independent two-party cryptography secure against sequential attacks

    International Nuclear Information System (INIS)

    Kaniewski, Jędrzej; Wehner, Stephanie

    2016-01-01

    The goal of two-party cryptography is to enable two parties, Alice and Bob, to solve common tasks without the need for mutual trust. Examples of such tasks are private access to a database and secure identification. Quantum communication enables security for all of these problems in the noisy-storage model by sending more signals than the adversary can store in a certain time frame. Here, we initiate the study of device-independent (DI) protocols for two-party cryptography in the noisy-storage model. Specifically, we present a relatively easy-to-implement protocol for a cryptographic building block known as weak string erasure and prove its security even if the devices used in the protocol are prepared by the dishonest party. DI two-party cryptography is made challenging by the fact that Alice and Bob do not trust each other, which requires new techniques to establish security. We fully analyse the case of memoryless devices (for which sequential attacks are optimal) and the case of sequential attacks for arbitrary devices. The key ingredient of the proof, which might be of independent interest, is an explicit (and tight) relation between the violation of the Clauser–Horne–Shimony–Holt inequality observed by Alice and Bob and the uncertainty generated by Alice against Bob, who is forced to measure his system before finding out Alice’s setting (guessing with postmeasurement information). In particular, we show that security is possible for arbitrarily small violation.

  13. Optimizing delivery of a behavioral pain intervention in cancer patients using a sequential multiple assignment randomized trial (SMART).

    Science.gov (United States)

    Kelleher, Sarah A; Dorfman, Caroline S; Plumb Vilardaga, Jen C; Majestic, Catherine; Winger, Joseph; Gandhi, Vicky; Nunez, Christine; Van Denburg, Alyssa; Shelby, Rebecca A; Reed, Shelby D; Murphy, Susan; Davidian, Marie; Laber, Eric B; Kimmick, Gretchen G; Westbrook, Kelly W; Abernethy, Amy P; Somers, Tamara J

    2017-06-01

    Pain is common in cancer patients and results in lower quality of life, depression, poor physical functioning, financial difficulty, and decreased survival time. Behavioral pain interventions are effective and nonpharmacologic. Traditional randomized controlled trials (RCTs) test interventions of fixed time and dose, which poorly represent successive treatment decisions in clinical practice. We utilize a novel approach to conduct an RCT, the sequential multiple assignment randomized trial (SMART) design, to provide comparative evidence of: 1) response to differing initial doses of a pain coping skills training (PCST) intervention and 2) intervention dose sequences adjusted based on patient response. We also examine: 3) participant characteristics moderating intervention responses and 4) cost-effectiveness and practicality. Breast cancer patients (N=327) having pain (ratings ≥ 5) are recruited and randomly assigned to: 1) PCST-Full or 2) PCST-Brief. PCST-Full consists of 5 PCST sessions. PCST-Brief consists of one 60-min PCST session. Five weeks post-randomization, participants re-rate their pain and are re-randomized, based on intervention response, to receive additional PCST sessions, maintenance calls, or no further intervention. Participants complete measures of pain intensity, interference, and catastrophizing. Novel RCT designs may provide information that can be used to optimize behavioral pain interventions to be adaptive, better meet patients' needs, reduce barriers, and match clinical practice. This is one of the first trials to use a novel design to evaluate symptom management in cancer patients and in chronic illness; if successful, it could serve as a model for future work with a wide range of chronic illnesses.
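
    The two-stage randomization logic can be sketched as below; the arm names and the week-5 pain re-rating follow the abstract, while the response rule and relief model are invented purely for illustration.

        # Sketch of the SMART assignment logic; arm names and the week-5 pain
        # re-rating follow the abstract, the response rule and relief model are
        # invented for illustration only.
        import random

        random.seed(0)

        def smart_path(baseline_pain):
            stage1 = random.choice(["PCST-Full", "PCST-Brief"])
            relief = random.uniform(0, 4) if stage1 == "PCST-Full" else random.uniform(0, 2)
            week5_pain = max(baseline_pain - relief, 0)
            responded = week5_pain < 0.7 * baseline_pain   # invented response rule
            if responded:
                stage2 = random.choice(["maintenance calls", "no further intervention"])
            else:
                stage2 = "additional PCST sessions"
            return stage1, round(week5_pain, 1), stage2

        for _ in range(3):
            print(smart_path(baseline_pain=7))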

  14. How to Read the Tractatus Sequentially

    Directory of Open Access Journals (Sweden)

    Tim Kraft

    2016-11-01

    One of the unconventional features of Wittgenstein’s Tractatus Logico-Philosophicus is its use of an elaborate and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e.g. how 4.02 fits into the series of remarks surrounding it) and on the global level (e.g. the relation between ontology and picture theory, solipsism and the eye analogy, and resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein’s own explanation of the numbering system, anaphoric references within the Tractatus, and the exegetical issues mentioned above do not favour the tree reading but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: the role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.
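
    As a side illustration of the structural claim at issue (not drawn from the article), the following snippet shows how Tractatus-style decimal numbers induce the tree that the tree reading appeals to: each remark's parent is obtained by dropping its last decimal digit, while the sequential reading keeps only the flat order. The sample numbers are a small invented subset.

        # Invented sample of Tractatus-style numbers; the parent of a remark is
        # obtained by dropping the last digit after the dot ("4.02" -> "4.0").
        remarks = ["1", "1.1", "1.11", "1.2", "2", "2.01", "4", "4.0", "4.02"]

        def parent(number):
            if "." not in number:
                return None                 # top-level remark
            head, tail = number.split(".", 1)
            return head if len(tail) == 1 else head + "." + tail[:-1]

        for r in remarks:
            print(f"{r}: child of {parent(r)}")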

  15. Cancer microarray data feature selection using multi-objective binary particle swarm optimization algorithm

    Science.gov (United States)

    Annavarapu, Chandra Sekhara Rao; Dara, Suresh; Banka, Haider

    2016-01-01

    Cancer investigations in microarray data play a major role in cancer analysis and treatment. Cancer microarray data consist of complex gene expression patterns of cancer. In this article, a Multi-Objective Binary Particle Swarm Optimization (MOBPSO) algorithm is proposed for analyzing cancer gene expression data. Because of the data’s high dimensionality, a fast heuristic-based pre-processing technique is employed to remove some of the crude domain features from the initial feature set. Since these pre-processed and reduced features are still high dimensional, the proposed MOBPSO algorithm is used to find further feature subsets. The objective functions are modeled by optimizing two conflicting objectives, i.e., the cardinality of the feature subsets and the discriminative capability of the selected subsets. As these two objectives conflict, they are well suited to multi-objective modeling. The experiments are carried out on benchmark gene expression datasets available in the literature, i.e., Colon, Lymphoma, and Leukaemia. The performance of the selected feature subsets is assessed by their classification accuracy, validated using 10-fold cross-validation. A detailed comparative study is also made to show the competitiveness of the proposed algorithm. PMID:27822174
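
    The two conflicting objectives, and the Pareto-dominance test a multi-objective optimizer uses to compare feature subsets, can be sketched as follows; this is not the authors' MOBPSO implementation, and the dataset, particle initialization, and classifier are arbitrary stand-ins.

        # Sketch of the two conflicting objectives and a Pareto-dominance check;
        # not the authors' MOBPSO code.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        X, y = make_classification(n_samples=200, n_features=30, random_state=0)
        rng = np.random.default_rng(0)

        def objectives(mask):
            if not mask.any():
                return (mask.size, 1.0)     # empty subset: worst case
            err = 1 - cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
            return (int(mask.sum()), float(err))   # minimize both

        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and a != b

        particles = [rng.random(30) < 0.3 for _ in range(12)]   # binary positions
        scores = [objectives(m) for m in particles]
        front = [s for s in scores if not any(dominates(t, s) for t in scores)]
        print("Pareto front (n_features, error):", sorted(front))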

  16. Aggregators’ Optimal Bidding Strategy in Sequential Day-Ahead and Intraday Electricity Spot Markets

    Directory of Open Access Journals (Sweden)

    Xiaolin Ayón

    2017-04-01

    This paper proposes a probabilistic optimization method that produces optimal bidding curves to be submitted by an aggregator to the day-ahead electricity market and the intraday market, considering the flexible demand of its customers (based on time-dependent resources such as batteries and shiftable demand) and taking into account the possible imbalance costs as well as the uncertainty of forecasts (market prices, demand, and renewable energy source (RES) generation). The optimization strategy aims to minimize the total cost of the traded energy over a whole day, taking into account the intertemporal constraints. The proposed formulation leads to the solution of different linear optimization problems, following the natural temporal sequence of the electricity spot markets. Intertemporal constraints regarding time-dependent resources are fulfilled through a scheduling process performed after the day-ahead market clearing. Each of the different problems is of moderate dimension and requires short computation times. The benefits of the proposed strategy are assessed by comparing the payments made by an aggregator over a sample period of one year under different deterministic and probabilistic strategies. Results show that the probabilistic strategy yields better benefits for aggregators participating in power markets.
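
    One member of the sequence of linear problems might look like the following hedged sketch: procure a fixed daily energy at hourly day-ahead prices, subject to per-hour limits that stand in for the time-dependent resource constraints. The prices, volumes, and bounds are invented; the paper's actual model is richer.

        # One linear problem from the sequence (assumed formulation, not the
        # paper's full model): procure a fixed daily energy at hourly day-ahead
        # prices, with per-hour caps standing in for resource constraints.
        import numpy as np
        from scipy.optimize import linprog

        prices = np.array([30, 28, 25, 24, 26, 35, 45, 50, 48, 40, 38, 36,
                           35, 34, 33, 36, 42, 55, 60, 52, 45, 40, 35, 32])  # EUR/MWh
        daily_energy = 50.0      # MWh to procure over the day
        hourly_cap = 4.0         # MWh/h (e.g., battery / shiftable-load bound)

        # minimize prices . x   s.t.   sum(x) = daily_energy,  0 <= x_h <= cap
        res = linprog(c=prices, A_eq=np.ones((1, 24)), b_eq=[daily_energy],
                      bounds=[(0, hourly_cap)] * 24, method="highs")
        print("cost:", round(res.fun, 1), "EUR")
        print("hourly purchases:", np.round(res.x, 1))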

  17. Pretreatment of wastewater: Optimal coagulant selection using Partial Order Scaling Analysis (POSA)

    International Nuclear Information System (INIS)

    Tzfati, Eran; Sein, Maya; Rubinov, Angelika; Raveh, Adi; Bick, Amos

    2011-01-01

    The jar test is a well-known tool for selecting chemicals for physical-chemical wastewater treatment. Jar-test results show the treatment efficiency in terms of suspended-matter and organic-matter removal. In spite of having all these results, however, coagulant selection is not an easy task, because a coagulant that removes suspended solids efficiently may at the same time increase the conductivity. This makes the final selection of coagulants heavily dependent on the relative importance assigned to each measured parameter. In this paper, the use of Partial Order Scaling Analysis (POSA) and multi-criteria decision analysis is proposed to support the selection of the coagulant and its concentration in a sequencing batch reactor (SBR). Starting from the parameters fixed by the jar-test results, these techniques weight the parameters according to the judgments of wastewater experts and establish priorities among the coagulants. An evaluation of two commonly used coagulation/flocculation aids (alum and ferric chloride) was conducted; based on the jar tests and the POSA model, ferric chloride (100 ppm) was the best choice. The results show that POSA and multi-criteria techniques are useful tools for selecting the optimal chemicals for physical-chemical treatment.
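
    A hedged illustration of the decision step (not the POSA software itself): candidate coagulants are first compared by partial-order dominance on the jar-test criteria and then aggregated with expert weights. All numbers and weights below are invented.

        # Invented jar-test numbers for three candidate coagulants; each is
        # checked for partial-order dominance and given an expert-weighted score.
        candidates = {   # (TSS removal %, COD removal %, negated conductivity rise %)
            "Alum 100 ppm":  (82, 55, -12),
            "FeCl3 100 ppm": (88, 60, -8),
            "FeCl3 150 ppm": (89, 58, -15),
        }
        weights = (0.5, 0.3, 0.2)            # hypothetical expert judgments

        def dominates(a, b):                 # >= on every criterion, > on one
            return all(x >= y for x, y in zip(a, b)) and a != b

        for name, vals in candidates.items():
            beaten_by = [o for o, v in candidates.items() if dominates(v, vals)]
            score = sum(w * v for w, v in zip(weights, vals))
            status = f"dominated by {beaten_by}" if beaten_by else "non-dominated"
            print(f"{name}: weighted score {score:.1f}, {status}")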

  18. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast

    KAUST Repository

    She, James

    2009-11-01

    Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective and scalable approach to increase the system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to a high computational complexity due to the nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize a scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network.
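
    The dynamic-programming idea can be sketched on a toy power-allocation problem, with "conditioned parsing" reduced to skipping allocations that exceed the remaining budget; the layer utilities and budget are invented, and this is not the paper's formulation.

        # Toy dynamic program: split a power budget across the resolution layers
        # of a hierarchically modulated broadcast; utilities and budget invented.
        # utility[layer][p] = users gained if the layer gets p power units
        utility = [
            [0, 5, 8, 10, 11],   # base layer
            [0, 3, 5, 6, 7],     # enhancement layer 1
            [0, 2, 3, 4, 5],     # enhancement layer 2
        ]
        BUDGET = 6

        # dp[b] = best utility over the layers processed so far using b units
        dp = [0] * (BUDGET + 1)
        for layer in utility:
            # Only allocations that fit the remaining budget are parsed at all.
            dp = [max(layer[p] + dp[b - p] for p in range(min(len(layer), b + 1)))
                  for b in range(BUDGET + 1)]

        print("max users served with", BUDGET, "power units:", dp[BUDGET])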

  20. Multichannel, sequential or combined X-ray spectrometry

    International Nuclear Information System (INIS)

    Florestan, J.

    1979-01-01

    The strengths and weaknesses of X-ray spectrometers are evaluated for the sequential and multichannel categories. The multichannel X-ray spectrometer has the advantage of time coherency, and its results can be more reproducible; on the other hand, some spatial incoherency limits its use in low-percentage and trace applications, especially when backgrounds are highly variable. In this last case, the sequential X-ray spectrometer again proves very useful [fr]