WorldWideScience

Sample records for monitoring optimization approaches

  1. Optimizing Groundwater Monitoring Networks Using Integrated Statistical and Geostatistical Approaches

    Directory of Open Access Journals (Sweden)

    Jay Krishna Thakur

    2015-08-01

    The aim of this work is to investigate new approaches, based on statistical and geostatistical methods, for the spatio-temporal optimization of groundwater monitoring networks. The formulated and integrated methods were tested with the groundwater quality data set of Bitterfeld/Wolfen, Germany. Spatially, the monitoring network was optimized using geostatistical methods; temporal optimization was carried out using Sen's method (1968). For geostatistical network optimization, a spatio-temporal algorithm was used to identify redundant wells in 2- and 2.5-D Quaternary and Tertiary aquifers. The influences of interpolation block width, dimension, contaminant association, groundwater flow direction and aquifer homogeneity on the statistical and geostatistical optimization methods were analysed. The integrated approach shows 37% and 28% redundancy in the monitoring networks of the Quaternary and Tertiary aquifers, respectively, and the geostatistical method also recommends 41 and 22 new monitoring wells in those aquifers. In the temporal optimization, an overall optimized sampling interval was recommended in terms of the lower quartile (238 days), median (317 days) and upper quartile (401 days) for the Bitterfeld/Wolfen research area. The demonstrated methods for improving groundwater monitoring networks can be used in real network optimization, with due consideration given to the influencing factors.
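
    The temporal side of the approach relies on Sen's (1968) slope estimator, the median of all pairwise slopes in a time series. A minimal sketch in Python; the sampling times and concentration series are illustrative, not from the Bitterfeld/Wolfen data set:

```python
import itertools
import statistics

def sens_slope(times, values):
    """Sen's (1968) estimator: the median of all pairwise slopes.
    A robust trend estimate used in temporal optimization of sampling."""
    slopes = [(values[j] - values[i]) / (times[j] - times[i])
              for i, j in itertools.combinations(range(len(times)), 2)
              if times[j] != times[i]]
    return statistics.median(slopes)

# Hypothetical contaminant concentrations sampled over 400 days
t = [0, 100, 200, 300, 400]
c = [10.0, 10.8, 12.1, 12.9, 14.2]
print(sens_slope(t, c))  # ~0.0105 concentration units per day
```

    A significant Sen's slope indicates a trend that a longer sampling interval would still resolve, which is the basis for relaxing the sampling frequency.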

  2. Optimization of floodplain monitoring sensors through an entropy approach

    Science.gov (United States)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    To support decision making in flood risk management and long-term floodplain planning, a key issue is the availability of data to build appropriate and reliable models; the data required for model building, calibration and validation are often insufficient or unavailable. A unique opportunity is offered nowadays by globally available data sets that can be freely downloaded from the internet. However, the question remains of what the real potential of these global remote sensing data, characterized by differing accuracies, is for global inundation monitoring, and how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of low-cost wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure water depth, supplying the input data for flood mapping. Beyond their accuracy and reliability, their placement is a major issue, since the network should provide as much information and as little redundancy as possible. To update the layout, the initial set of six sensors was expanded to create a redundant network over the area, and an entropy approach was then used to select the most informative and least redundant sensors. First, a simple raster-based inundation model (LISFLOOD-FP) was used to generate a synthetic GridStix data set of water stages; the Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor was compared by evaluating its marginal entropy, and sensors with low marginal entropy were excluded because of their low capability to provide information. Then the number of sensors was optimized by considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and
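
    The marginal-entropy screening step can be sketched as follows; the bin width and water-stage series are hypothetical stand-ins for the synthetic GridStix records:

```python
import math
from collections import Counter

def marginal_entropy(readings, bin_width=0.1):
    """Shannon entropy (bits) of a sensor's discretized record: a sensor
    whose readings rarely change carries little information."""
    bins = Counter(round(x / bin_width) for x in readings)
    n = len(readings)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# Hypothetical water stages from two sensors
flat = [1.00, 1.01, 1.00, 1.01, 1.00, 1.01]  # nearly constant stage
vary = [0.2, 0.9, 1.5, 0.4, 1.1, 1.8]        # informative stage record
print(marginal_entropy(flat) < marginal_entropy(vary))  # True
```

    Sensors like `flat` would be dropped before the joint-entropy optimization stage.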

  3. A decision analysis approach for optimal groundwater monitoring system design under uncertainty

    Directory of Open Access Journals (Sweden)

    N. B. Yenigül

    2006-01-01

    Groundwater contamination is the degradation of the natural quality of groundwater as a result of human activity, and landfills are among the most common human activities threatening groundwater quality. The objective of monitoring systems is to detect contaminant plumes before they reach the regulatory compliance boundary, in order to prevent severe risks to both society and groundwater quality, and to enable cost-effective countermeasures in case of a failure. The detection monitoring problem typically has a multi-objective nature. A multi-objective decision model (called MONIDAM), which links a classic decision analysis approach with a stochastic simulation model, is applied to determine the optimal groundwater monitoring system given uncertainties in the hydrogeological conditions and contaminant source characteristics. A Monte Carlo approach is used to incorporate these uncertainties, with hydraulic conductivity and leak location as the random inputs of the simulation model. The design objectives considered in the model are: (1) maximizing the detection probability, (2) minimizing the contaminated area and (3) minimizing the total cost of the monitoring system. The results show that monitoring systems located close to the source are optimal, except for cases with very high unit installation and sampling costs and/or a very low unit remediation cost.
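
    The Monte Carlo treatment of uncertainty can be illustrated with a toy one-dimensional version; the leak-location and plume-width distributions below are invented for illustration and merely stand in for the hydraulic-conductivity and source uncertainties of the real model:

```python
import random

def detection_probability(wells, n_sim=5000, seed=1):
    """Monte Carlo estimate of the probability that at least one
    monitoring well intersects the plume before the compliance boundary."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        leak = rng.uniform(0.0, 100.0)    # leak location along the source
        width = rng.uniform(5.0, 15.0)    # plume half-width at the well line
        if any(abs(w - leak) <= width for w in wells):
            hits += 1
    return hits / n_sim

sparse = detection_probability([50.0])
dense = detection_probability([12.5, 37.5, 62.5, 87.5])
print(sparse < dense)  # True: the denser network detects more plumes
```

    A full design study would trade this detection probability off against contaminated area and monitoring cost, as MONIDAM does.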

  4. Toward Optimized Bioclogging and Biocementation Through Combining Advanced Geophysical Monitoring and Reactive Transport Modeling Approaches

    Science.gov (United States)

    Hubbard, C. G.; Hubbard, S. S.; Wu, Y.; Surasani, V.; Ajo Franklin, J. B.; Commer, M.; Dou, S.; Kwon, T.; Li, L.; Fouke, B. W.; Coates, J. D.

    2012-12-01

    Bioclogging and biocementation offer exciting opportunities for solving diverse problems ranging from soil stabilization to microbially enhanced hydrocarbon recovery. The effectiveness of bioclogging and biocementation strategies is governed by processes and properties ranging from microbial metabolism at the submicron scale, to changes in pore geometry at the pore scale, to geological heterogeneities at the field scale. Optimization of these strategies requires advances in mechanistic reactive transport modeling and geophysical monitoring methodologies. Our research focuses on (i) performing laboratory experiments to refine understanding of reaction networks and to quantify changes in hydrological properties (e.g. permeability), the evolution of biominerals, and geophysical responses (focusing on seismic and electrical techniques); (ii) developing and using a reactive transport simulator capable of predicting the induced metabolic processes, in order to numerically explore how to optimize the desired effect; and (iii) using loosely coupled reactive transport and geophysical simulators to explore the detectability and resolvability of induced bioclogging and biocementation processes at the field scale using time-lapse geophysical methods. Here we present examples of our research on three microbially mediated methods for enhancing hydrocarbon recovery through selective clogging of reservoir thief zones: (a) biopolymer clogging through dextran production; (b) biomineral clogging through iron oxide precipitation; and (c) biomineral clogging through carbonate precipitation. We compare the utility of these approaches for enhancing hydrocarbon recovery and describe the utility of geophysical methods for remotely monitoring the associated field treatments.

  5. A heuristic optimization approach for Air Quality Monitoring Network design with the simultaneous consideration of multiple pollutants.

    Science.gov (United States)

    Elkamel, A; Fatehifar, E; Taheri, M; Al-Rashidi, M S; Lohi, A

    2008-08-01

    An interactive optimization methodology for allocating the number and configuration of an Air Quality Monitoring Network (AQMN) over a large area, to identify the impact of multiple pollutants, is described. A mathematical model based on the multiple cell approach (MCA) was used to create monthly spatial distributions of the concentrations of pollutants emitted from different emission sources. These spatio-temporal patterns were then input to a heuristic optimization algorithm to identify the optimal configuration of the monitoring network. The objective of the optimization is to provide maximum information about the multiple pollutants (i.e., CO, NOx and SO2) emitted from each source within a given area. The model was applied to a network of existing refinery stacks, and the results indicate that three stations can provide total coverage of more than 70%. In addition, the effect of the spatial cutoff correlation coefficient (R_C) on total area coverage was analysed: as R_C is increased from 0.75 to 0.95, the number of monitoring stations required for total coverage increases. A network based on a high R_C may not cover the entire region, but the covered region will be well represented; a network based on a low R_C offers more coverage, but the covered region may not be satisfactorily represented.
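
    The coverage idea behind the cutoff R_C can be sketched with a toy greedy heuristic: a candidate station "covers" a grid cell when their concentration time series correlate above R_C. The station series and the greedy rule below are illustrative; the paper's algorithm works on MCA-simulated monthly concentration fields:

```python
import statistics

def pearson(a, b):
    """Pearson correlation between two concentration time series."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def greedy_network(series, n_stations, r_cut):
    """Pick stations that maximize the number of grid cells whose series
    correlate with some chosen station above r_cut (toy heuristic)."""
    chosen, covered = [], set()
    for _ in range(n_stations):
        best = max((s for s in series if s not in chosen),
                   key=lambda s: sum(1 for g in series if g not in covered
                                     and pearson(series[s], series[g]) >= r_cut))
        chosen.append(best)
        covered |= {g for g in series
                    if pearson(series[best], series[g]) >= r_cut}
    return chosen, len(covered) / len(series)

# Hypothetical concentration series for five grid cells
series = {'A': [1, 2, 3, 4], 'B': [2, 4, 6, 8], 'C': [4, 3, 2, 1],
          'D': [8, 6, 4, 2], 'E': [1, 3, 2, 4]}
print(greedy_network(series, 2, r_cut=0.95))  # (['A', 'C'], 0.8)
```

    Lowering `r_cut` to 0.75 would let cell E count as covered too, reproducing the coverage-versus-representativeness trade-off described above.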

  6. Strategies to optimize monitoring schemes of recreational waters from Salta, Argentina: a multivariate approach

    Science.gov (United States)

    Gutiérrez-Cacciabue, Dolores; Teich, Ingrid; Poma, Hugo Ramiro; Cruz, Mercedes Cecilia; Balzarini, Mónica; Rajal, Verónica Beatriz

    2014-01-01

    Several recreational surface waters in Salta, Argentina, were selected to assess their quality. Seventy percent of the measurements exceeded at least one of the limits established by international legislation, rendering the waters unsuitable for their intended use. Multivariate techniques were applied to interpret these complex data. Because of the variability observed in its data, the Arenales River was divided into two reaches, upstream and downstream, representing low- and high-pollution sites respectively; Cluster Analysis supported this differentiation. The Arenales River downstream and the Campo Alegre Reservoir were the most dissimilar environments, while the Vaqueros and La Caldera Rivers were the most similar. Canonical Correlation Analysis allowed exploration of correlations between physicochemical and microbiological variables, except in the two reaches of the Arenales River, and Principal Component Analysis revealed relationships among the nine measured variables in all aquatic environments. Variable loadings showed that the Arenales River downstream was impacted by industrial and domestic activities, the Arenales River upstream was affected by agricultural activities, the Campo Alegre Reservoir was disturbed by anthropogenic and ecological effects, and the La Caldera and Vaqueros Rivers were influenced by recreational activities. Discriminant Analysis identified the subgroups of variables responsible for seasonal and spatial variations: Enterococcus, dissolved oxygen, conductivity, E. coli, pH, and fecal coliforms are sufficient to describe the quality of the aquatic environments spatially. Regarding seasonal variations, dissolved oxygen, conductivity, fecal coliforms, and pH can be used to describe water quality during the dry season, and dissolved oxygen, conductivity, total coliforms, E. coli, and Enterococcus during the wet season. The use of multivariate techniques thus allowed monitoring tasks to be optimized and the costs involved to be minimized. PMID:25190636
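
    The cluster-analysis finding (Vaqueros and La Caldera most similar, the Arenales downstream most distinct) can be reproduced in miniature by standardizing the variables and comparing Euclidean distances between sites; the site profiles and variable values below are invented for illustration:

```python
import math
import statistics

# Hypothetical site profiles: dissolved oxygen (mg/L), conductivity
# (uS/cm), fecal coliforms (CFU/100 mL); values are illustrative only.
sites = {
    'Arenales upstream':   [7.9, 260.0, 150.0],
    'Arenales downstream': [4.2, 900.0, 9000.0],
    'Vaqueros River':      [8.0, 240.0, 130.0],
    'La Caldera River':    [8.1, 250.0, 120.0],
}

def standardize(table):
    """Z-score each variable so distances are not dominated by units."""
    cols = list(zip(*table.values()))
    means = [statistics.fmean(c) for c in cols]
    sds = [statistics.stdev(c) for c in cols]
    return {k: [(v - m) / s for v, m, s in zip(row, means, sds)]
            for k, row in table.items()}

z = standardize(sites)
d_similar = math.dist(z['Vaqueros River'], z['La Caldera River'])
d_dissimilar = math.dist(z['Arenales upstream'], z['Arenales downstream'])
print(d_similar < d_dissimilar)  # True
```

    Hierarchical clustering on such a distance matrix is what groups the sites into the pollution classes reported above.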

  7. Optimal sensors placement for monitoring a steam condenser of the distillation column using bond graph approach

    Directory of Open Access Journals (Sweden)

    Samia LATRECHE

    2014-11-01

    This paper deals with the monitoring of a process engineering system: a steam condenser monitored using the bond graph tool. The model comprises nine capacitive and resistive elements and requires a minimum number of sensors. The method is based on Analytical Redundancy Relations, which are generated from the condenser model and represent the residuals. After substitution, a placement of six sensors is obtained that guarantees the monitoring of the nine components. A fault is created by abruptly nulling the fluid flow value provided by the source. The block diagram is elaborated in the SYMBOLS software and the evolution of the residuals is supervised.

  8. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    Energy Technology Data Exchange (ETDEWEB)

    Chen, DI-WEN

    2001-11-21

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations resulting from plume-ground deposition of constituents. Unfortunately, measurements are often taken days post-incident and rely on hazardous manned air-vehicle sampling. Before this happens, computational plume migration models are the only source of information on the plume characteristics: constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry lightweight detectors and weather instrumentation to measure conditions during and after a plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives of an emergency situation in restricted time frames. A critical step before an optimal and cost-effective field sampling and monitoring program can proceed is the collection of data that provide statistically significant information, gathered in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and estimation of source distribution functions. This report describes the application of stochastic (geostatistical) simulations to delineate the plume and guide subsequent sampling and monitoring designs. A case study is presented of building digital plume images based on existing "hard" experimental data and "soft" preliminary transport-modeling results for the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian

  9. Design Optimization of Structural Health Monitoring Systems

    Energy Technology Data Exchange (ETDEWEB)

    Flynn, Eric B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-06

    Sensor networks drive decisions. Approach: design networks to minimize the expected total cost (in a statistical sense, i.e., the Bayes risk) associated with making wrong decisions and with installing, maintaining and running the sensor network itself, and search for optimal solutions using a Monte-Carlo-sampling-adapted genetic algorithm. Applications include structural health monitoring and surveillance.
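
    The Bayes-risk objective can be sketched as follows; all probabilities and costs below are invented placeholders, not values from the LANL work, and the sensors are assumed identical and independent:

```python
def bayes_risk(n_sensors, p_damage=0.01, p_detect_one=0.6,
               p_fa_one=0.001, c_miss=1e6, c_false_alarm=1e3,
               c_sensor=1e3):
    """Expected total cost of a sensor network: cost of missed damage,
    plus cost of false alarms, plus cost of the network itself."""
    p_miss = (1 - p_detect_one) ** n_sensors   # every sensor misses
    p_fa = 1 - (1 - p_fa_one) ** n_sensors     # at least one false alarm
    return (p_damage * p_miss * c_miss
            + (1 - p_damage) * p_fa * c_false_alarm
            + n_sensors * c_sensor)

best = min(range(1, 10), key=bayes_risk)
print(best)  # 2 sensors minimize this particular cost trade-off
```

    The genetic-algorithm search in the paper plays the role of the `min` here, over sensor placements and types rather than a simple count.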

  10. Stepped MS(All) Relied Transition (SMART): An approach to rapidly determine optimal multiple reaction monitoring mass spectrometry parameters for small molecules.

    Science.gov (United States)

    Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping

    2016-02-11

    Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms of the ion pairs of interest among the serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison and were found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applicability of the SMART approach across different instrumental platforms. The approach was further validated by applying it to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components, independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures.
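
    The core of the CE-selection step, choosing for each precursor/product pair the collision energy whose scan gives the largest extracted intensity, can be sketched as follows; the m/z values and intensities are fabricated for illustration:

```python
def optimal_ce(scans):
    """scans: {collision_energy: {(precursor_mz, product_mz): intensity}}.
    Returns the CE with the maximum extracted intensity per transition."""
    transitions = {t for ions in scans.values() for t in ions}
    return {t: max(scans, key=lambda ce: scans[ce].get(t, 0.0))
            for t in transitions}

# Hypothetical stepped-CE scans for two transitions
scans = {10: {(285.1, 154.0): 1e4, (300.2, 120.1): 2e3},
         25: {(285.1, 154.0): 6e4, (300.2, 120.1): 9e3},
         40: {(285.1, 154.0): 2e4, (300.2, 120.1): 4e4}}

best = optimal_ce(scans)
print(best[(285.1, 154.0)], best[(300.2, 120.1)])  # 25 40
```

    In practice the CE steps are finer and the intensities come from the extracted ion chromatograms of the serial sMS(All) events, but the selection rule is the same.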

  11. Non-Invasive Fetal Monitoring: A Maternal Surface ECG Electrode Placement-Based Novel Approach for Optimization of Adaptive Filter Control Parameters Using the LMS and RLS Algorithms.

    Science.gov (United States)

    Martinek, Radek; Kahankova, Radana; Nazeran, Homer; Konecny, Jaromir; Jezewski, Janusz; Janku, Petr; Bilik, Petr; Zidek, Jan; Nedoma, Jan; Fajkus, Marcel

    2017-05-19

    This paper is focused on the design, implementation and verification of a novel method for the optimization of the control parameters (such as step size μ and filter order N) of LMS and RLS adaptive filters used for noninvasive fetal monitoring. The optimization algorithm is driven by considering the ECG electrode positions on the maternal body surface in improving the performance of these adaptive filters. The main criterion for optimal parameter selection was the Signal-to-Noise Ratio (SNR). We conducted experiments using signals supplied by the latest version of our LabVIEW-Based Multi-Channel Non-Invasive Abdominal Maternal-Fetal Electrocardiogram Signal Generator, which provides the flexibility and capability of modeling the principal distribution of maternal/fetal ECGs in the human body. Our novel algorithm enabled us to find the optimal settings of the adaptive filters based on maternal surface ECG electrode placements. The experimental results further confirmed the theoretical assumption that the optimal settings of these adaptive filters are dependent on the ECG electrode positions on the maternal body, and therefore, we were able to achieve far better results than without the use of optimization. These improvements in turn could lead to a more accurate detection of fetal hypoxia. Consequently, our approach could offer the potential to be used in clinical practice to establish recommendations for standard electrode placement and find the optimal adaptive filter settings for extracting high quality fetal ECG signals for further processing. Ultimately, diagnostic-grade fetal ECG signals would ensure the reliable detection of fetal hypoxia.
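
    A minimal LMS noise canceller, the filter whose step size μ and order N the paper tunes, can be sketched as follows; the signals and parameter values are illustrative, not those of the generator-based experiments:

```python
import math

def lms_filter(reference, primary, mu=0.05, order=4):
    """LMS adaptive noise cancellation: `reference` is correlated with the
    interference (e.g. the maternal ECG), `primary` is the contaminated
    signal; the error output is the component not predictable from the
    reference (e.g. the fetal ECG)."""
    w = [0.0] * order
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(order)]
        y = sum(wk * xk for wk, xk in zip(w, x))  # predicted interference
        e = primary[n] - y                        # error = cleaned signal
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, x)]
        out.append(e)
    return out

# Toy check: a sinusoidal interferer is cancelled almost completely
interferer = [math.sin(0.1 * n) for n in range(3000)]
residual = lms_filter(interferer, interferer)
```

    With a small uncorrelated component added to `primary`, the residual converges to that component; the paper's contribution is choosing μ and N as a function of electrode placement to maximize the SNR of exactly this residual.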

  12. Topology optimization approaches

    DEFF Research Database (Denmark)

    Sigmund, Ole; Maute, Kurt

    2013-01-01

    Topology optimization has undergone a tremendous development since its introduction in the seminal paper by Bendsøe and Kikuchi in 1988. By now, the concept is developing in many different directions, including “density”, “level set”, “topological derivative”, “phase field”, “evolutionary...

  13. Optimal Design of Air Quality Monitoring Network and its Application in an Oil Refinery Plant: An Approach to Keep Health Status of Workers.

    Science.gov (United States)

    ZoroufchiBenis, Khaled; Fatehifar, Esmaeil; Ahmadi, Javad; Rouhi, Alireza

    2015-01-01

    Industrial air pollution is a growing challenge to human health, especially in developing countries, where there is no systematic monitoring of air pollution. Given the importance of the availability of valid information on population exposure to air pollutants, it is important to design an optimal Air Quality Monitoring Network (AQMN) for assessing population exposure to air pollution and predicting the magnitude of the health risks to the population. A multi-pollutant method (implemented as a MATLAB program) was explored for configuring an AQMN to detect the highest levels of pollution around an oil refinery plant. The method ranks potential monitoring sites (grids) according to their ability to represent the ambient concentration. Clusters of contiguous grids that exceed a threshold value were used to calculate the station dosage, and the best AQMN configuration was selected based on the ratio of a station's dosage to the total dosage in the network. Six monitoring stations were needed to detect the pollutant concentrations around the study area and estimate the level and distribution of exposure in the population, with a total network efficiency of about 99%. An analysis of the design procedure showed that wind regimes have the greatest effect on the location of monitoring stations. The optimal AQMN enables authorities to implement an effective air quality management program for protecting human health.

  14. Optimal Design of Air Quality Monitoring Network and its Application in an Oil Refinery Plant: An Approach to Keep Health Status of Workers

    Directory of Open Access Journals (Sweden)

    Khaled ZoroufchiBenis

    2015-12-01

    Background: Industrial air pollution is a growing challenge to human health, especially in developing countries, where there is no systematic monitoring of air pollution. Given the importance of the availability of valid information on population exposure to air pollutants, it is important to design an optimal Air Quality Monitoring Network (AQMN) for assessing population exposure to air pollution and predicting the magnitude of the health risks to the population. Methods: A multi-pollutant method (implemented as a MATLAB program) was explored for configuring an AQMN to detect the highest levels of pollution around an oil refinery plant. The method ranks potential monitoring sites (grids) according to their ability to represent the ambient concentration. Clusters of contiguous grids that exceed a threshold value were used to calculate the station dosage, and the best AQMN configuration was selected based on the ratio of a station's dosage to the total dosage in the network. Results: Six monitoring stations were needed to detect the pollutant concentrations around the study area and estimate the level and distribution of exposure in the population, with a total network efficiency of about 99%. An analysis of the design procedure showed that wind regimes have the greatest effect on the location of monitoring stations. Conclusion: The optimal AQMN enables authorities to implement an effective air quality management program for protecting human health.

  15. WiMAX network performance monitoring & optimization

    DEFF Research Database (Denmark)

    Zhang, Qi; Dam, H

    2008-01-01

    In this paper we present our WiMAX (worldwide interoperability for microwave access) network performance monitoring and optimization solution. As a new and small WiMAX network operator, there are many demanding issues that we have to deal with, such as limited available frequency resource, tight frequency reuse, capacity planning, proper network dimensioning, multi-class data services and so on. Furthermore, as a small operator we also want to reduce the demand for sophisticated technicians and man labour hours. To meet these critical demands, we design a generic integrated network performance … this integrated network performance monitoring and optimization system in our WiMAX networks. This integrated monitoring and optimization system has such good flexibility and scalability that individual function components can be used by other operators with special needs and more advanced function components can …

  17. Optimal Work Effort and Monitoring Cost

    Directory of Open Access Journals (Sweden)

    Tamara Todorova

    2012-12-01

    Using a simple job-market equilibrium model, we study the relationship between work effort and monitoring by firms. Other determinants of work effort investigated include the educational level of the worker, the minimum or start-up salary, and the economic conjuncture. As common logic dictates, optimal work effort increases with the amount of monitoring done by the employer. Quite contrary to common logic, though, we find that at the optimum employers observe and control good workers much more stringently and meticulously than poor workers. This is because, under profit maximization, most of the employer's profit and surplus result from good workers, and he risks losing a large amount of profit by not observing them. Managers monitor more strictly the more productive workers, fast learners, and those starting at a higher autonomous level of monitoring, as those contribute more substantially to the firm's profit.

  18. OPTIMIZATION METHODS FOR HYDROECOLOGICAL MONITORING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Inna Pivovarova

    2016-09-01

    The paper describes current approaches to the rational distribution of monitoring stations. A short review of the organization of hydro-geological observation systems in different countries is presented. On the basis of real data, we propose a solution to the problem of how to calculate the average area per hydrological station, which is the main indicator of the efficiency and performance of the monitoring system in general. We conclude that a comprehensive approach to organizing the monitoring system is important, because only hydrometric and hydrochemical activities coordinated in time make it possible to analyse the underlying causes of the observed dynamics of pollutant content in water bodies over the long term.

  19. Optimization of Sensor Monitoring Strategies for Emissions

    Science.gov (United States)

    Klise, K. A.; Laird, C. D.; Downey, N.; Baker Hebert, L.; Blewitt, D.; Smith, G. R.

    2016-12-01

    Continuous or regularly scheduled monitoring has the potential to quickly identify changes in air quality. However, even with low-cost sensors, only a limited number of sensors can be placed to monitor airborne pollutants. The physical placement of these sensors and the sensor technology used can have a large impact on the performance of a monitoring strategy. Furthermore, sensors can be placed for different objectives, including maximum coverage, minimum time to detection or exposure, or quantification of emissions. Different objectives may require different monitoring strategies, which need to be evaluated by stakeholders before sensors are placed in the field. In this presentation, we outline methods to enhance ambient detection programs through optimal design of the monitoring strategy. These methods integrate atmospheric transport models with sensor characteristics, including fixed and mobile sensors and sensor cost and failure rate. The methods use site-specific pre-computed scenarios that capture differences in meteorology, terrain, concentration averaging times, gas concentration, and emission characteristics. The pre-computed scenarios become input to a mixed-integer stochastic programming problem that solves for the sensor locations and types that maximize the effectiveness of the detection program. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
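
    A heavily simplified, greedy stand-in for the scenario-based placement problem (the real formulation is a mixed-integer stochastic program; the scenario sets and candidate location names below are invented):

```python
def place_sensors(scenarios, candidates, budget):
    """Each pre-computed plume scenario lists which candidate locations
    would detect it; greedily choose up to `budget` locations that
    maximize the number of scenarios detected."""
    chosen, detected = [], set()
    for _ in range(budget):
        gain, best = 0, None
        for c in candidates:
            g = sum(1 for i, s in enumerate(scenarios)
                    if i not in detected and c in s)
            if g > gain:
                gain, best = g, c
        if best is None:
            break  # no candidate detects any remaining scenario
        chosen.append(best)
        detected |= {i for i, s in enumerate(scenarios) if best in s}
    return chosen, len(detected)

# Five hypothetical plume scenarios over candidate locations A-D
scenarios = [{'A', 'B'}, {'B'}, {'C'}, {'A', 'C'}, {'D'}]
print(place_sensors(scenarios, 'ABCD', 3))  # (['A', 'B', 'C'], 4)
```

    Weighting scenarios by probability and adding per-location sensor costs turns this greedy sketch into the stochastic-programming objective described above.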

  20. Optimization of Routine Monitoring of Workers Exposed to Plutonium Aerosols.

    Science.gov (United States)

    Davesne, Estelle; Quesne, Benoit; De Vita, Antoine; Chojnacki, Eric; Blanchardon, Eric; Franck, Didier

    2016-10-01

    In the event of an incidental confinement failure, mixed oxide (MOX) fuel preparation may expose workers to plutonium aerosols. Because of its potential toxicity, occupational exposure to plutonium compounds should be kept as low as reasonably achievable. To ensure the absence of significant intakes of radionuclides, workers at risk of internal contamination are monitored by periodic bioassays planned in a routine monitoring programme, and internal dose may be estimated from the bioassay results. However, accurate dose calculation relies on known exposure conditions, which are rarely available when exposure is revealed by routine monitoring alone; internal dose calculation is therefore subject to uncertainty from the unknown exposure conditions and from the variability of the activity measurements. The present study calculates the minimum detectable dose (MDD) of a routine monitoring programme by considering all plausible exposure conditions and the measurement uncertainty. The MDD evaluates the quality of the monitoring and can be used for its optimization. Here, MDDs were calculated for the monitoring of workers preparing MOX fuel. Uncertain parameters were modelled by probability distributions defined according to information provided by experts in routine monitoring, workplace radiological protection and bioassay analysis. The results show that the current monitoring is well adapted to the potential exposure. A sensitivity study of the MDD highlights its strong dependence on the modelling of exposure conditions; integrating all expert knowledge is therefore crucial to obtaining reliable MDD estimates, which stresses the value of a holistic approach to worker monitoring.
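
    The idea behind the MDD calculation, propagating a bioassay detection limit through uncertain exposure conditions, can be caricatured as follows; every number, distribution and unit below is an invented placeholder, not a value from the study:

```python
import random

def minimum_detectable_dose(n=20000, seed=2):
    """Monte Carlo caricature of an MDD calculation: the committed dose
    whose predicted bioassay activity equals the detection limit, with
    the excretion factor sampled over plausible exposure conditions
    (particle size, absorption type, time since intake)."""
    rng = random.Random(seed)
    detection_limit = 0.2  # mBq per bioassay sample (assumed)
    doses = []
    for _ in range(n):
        # activity excreted per unit committed dose, mBq/mSv (assumed
        # lognormal spread over exposure conditions)
        excretion_per_msv = rng.lognormvariate(0.0, 0.7)
        doses.append(detection_limit / excretion_per_msv)
    doses.sort()
    return doses[int(0.95 * n)]  # 95th percentile over conditions

mdd = minimum_detectable_dose()
print(round(mdd, 3))  # committed dose (mSv) covering 95% of conditions
```

    The study's actual distributions come from expert elicitation and biokinetic models rather than the single lognormal factor used here.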

  1. Optimization of neutron monitor data correction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Paschalis, P. [Nuclear and Particle Physics Section, Physics Department, National and Kapodistrian University of Athens, Zografos 15783, Athens (Greece); Mavromichalaki, H., E-mail: emavromi@phys.uoa.gr [Nuclear and Particle Physics Section, Physics Department, National and Kapodistrian University of Athens, Zografos 15783, Athens (Greece)

    2013-06-21

    Nowadays, several neutron monitor stations worldwide broadcast their cosmic ray data in real time, so that the scientific community can use these measurements immediately. In parallel, the development of the Neutron Monitor Database (NMDB; http://www.nmdb.eu), which collects all the high-resolution real-time measurements, allows the implementation of various applications and services using these data instantly. The need for high-quality real-time data is therefore imperative. Data quality is handled by different correction algorithms that filter the real-time measurements for undesired instrumental variations. In this work, an optimization of the Median Editor, currently the algorithm mainly applied to neutron monitor data, and of the recently proposed ANN algorithm based on neural networks is presented. This optimization leads to the implementation of the Median Editor Plus and ANN Plus algorithms. A direct comparison of these algorithms with the newly introduced Edge Editor is performed and the results are presented.
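
    The flavor of a median-editor correction can be sketched as follows; this simplified version compares each counter channel of a station to the median over all channels (the real Median Editor and its Plus variant are considerably more elaborate):

```python
import statistics

def median_editor(channel_counts, tolerance=0.1):
    """Simplified median-editor filter: a channel whose count rate
    deviates from the median of all channels by more than `tolerance`
    (fractional) is treated as an instrumental glitch and replaced by
    the median."""
    med = statistics.median(channel_counts)
    return [c if abs(c - med) <= tolerance * med else med
            for c in channel_counts]

# Hypothetical per-channel count rates; channel 4 has dropped out
print(median_editor([100, 102, 98, 45, 101]))  # [100, 102, 98, 100, 101]
```

    Genuine cosmic-ray variations affect all channels together and therefore survive this filter, while single-channel instrumental dropouts are removed.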

  2. Optimizing the spatial pattern of networks for monitoring radioactive releases

    NARCIS (Netherlands)

    Melles, S.J.; Heuvelink, G.B.M.; Twenhofel, C.J.W.; Dijk, van A.; Hiemstra, P.H.; Baume, O.P.; Stohlker, U.

    2011-01-01

    This study presents a method to optimize the sampling design of environmental monitoring networks in a multi-objective setting. We optimize the permanent network of radiation monitoring stations in the Netherlands and parts of Germany as an example. The optimization method proposed combines

  4. Statistical sampling approaches for soil monitoring

    NARCIS (Netherlands)

    Brus, D.J.

    2014-01-01

    This paper describes three statistical sampling approaches for regional soil monitoring, a design-based, a model-based and a hybrid approach. In the model-based approach a space-time model is exploited to predict global statistical parameters of interest such as the space-time mean. In the hybrid

  6. Transformative monitoring approaches for reprocessing.

    Energy Technology Data Exchange (ETDEWEB)

    Cipiti, Benjamin B.

    2011-09-01

    The future of reprocessing in the United States is strongly driven by plant economics. With increasing safeguards, security, and safety requirements, future plant monitoring systems must demonstrate more efficient operations while improving on the current state of the art. The goal of this work was to design and examine the incorporation of advanced plant monitoring technologies into safeguards systems with attention to the burden on the operator. The technologies examined include micro-fluidic sampling for more rapid analytical measurements and spectroscopy-based techniques for on-line process monitoring. The Separations and Safeguards Performance Model was used to design the layout and test the effect of adding these technologies to reprocessing. The results show that both technologies fill key gaps in existing materials accountability, providing timely detection of diversion events that may go undetected in existing plants. The plant architecture and results under diversion scenarios are described. As a tangent to this work, both the AMUSE and SEPHIS solvent extraction codes were examined for integration in the model to improve the realism of diversion scenarios. The AMUSE integration was the most successful and provided useful results. The SEPHIS integration is still a work in progress and may provide an alternative option.

  7. Monitoring and Reporting HACs - A Federalist Approach

    Data.gov (United States)

    U.S. Department of Health & Human Services — Findings from a study entitled, Monitoring and Reporting Hospital-Acquired Conditions - A Federalist Approach, published in Volume 4, Issue 4 of Medicare and...

  8. Beyond biological monitoring: an integrated approach

    NARCIS (Netherlands)

    Verdonschot, P.F.M.

    2006-01-01

    Concluding discussion ‘Beyond Biological Monitoring: An Integrated Approach’ of the section New tools and strategies for river ecology evaluation – presents decision making on what constitutes a significant environmental change; predictive modelling approaches; evaluating fluvial functioning (FFI),

  9. Smooth Optimization Approach for Sparse Covariance Selection

    OpenAIRE

    Lu, Zhaosong

    2009-01-01

    In this paper we first study a smooth optimization approach for solving a class of nonsmooth strictly concave maximization problems whose objective functions admit smooth convex minimization reformulations. In particular, we apply Nesterov's smooth optimization technique [Y.E. Nesterov, Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543--547; Y. E. Nesterov, Math. Programming, 103 (2005), pp. 127--152] to their dual counterparts that are smooth convex problems. It is shown that the resulting approach...

  10. Optimal Joint Liability Lending with Costly Peer Monitoring

    NARCIS (Netherlands)

    Carli, F.; Uras, R.B.

    2014-01-01

    This paper characterizes an optimal group loan contract with costly peer monitoring. Using a fairly standard moral hazard framework, we show that the optimal group lending contract could exhibit a joint-liability scheme. However, optimality of joint-liability requires the involvement of a group lead

  11. Autonomous tools for Grid management, monitoring and optimization

    CERN Document Server

    Wislicki, Wojciech

    2007-01-01

    We outline the design and development directions of autonomous tools for computing Grid management, monitoring and optimization. Management is proposed to be based on the notion of utility. Grid optimization is considered to be application-oriented. A generic Grid simulator is proposed as an optimization tool for Grid structure and functionality.

  12. System Optimization Using a Parallel Stochastic Approach

    Directory of Open Access Journals (Sweden)

    ZAPLATILEK, K.

    2013-05-01

    This paper describes an original stochastic algorithm based on a parallel approach, suitable especially for optimizing real technical systems. Several independent pseudorandom generators are used; they generate independent variable vectors along all axes of the optimized system. Local optimal values are used to define a final pseudorandom generator with a narrower interval around the global optimum. Theoretical foundations are introduced and a few practical experiments are presented. The described method is also suitable for quality classification of pseudorandom generators using a selected RGB color scheme. The main advantages of this approach are discussed. The algorithm was developed in the MATLAB environment.
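The parallel stochastic scheme described above can be sketched as follows: several independent generators sample the search interval, and their local optima define a narrower interval for the next round. The test function and all constants are invented for illustration:

```python
import random

def parallel_stochastic_opt(f, lo, hi, n_gens=4, samples=200, rounds=6, seed=7):
    """Sketch of the parallel stochastic idea: independent random generators
    each propose a local best; the spread of those local bests defines a
    narrower interval around the presumed global optimum for the next round."""
    rngs = [random.Random(seed + k) for k in range(n_gens)]
    for _ in range(rounds):
        # each generator proposes its own local best over the current interval
        local = [min((rng.uniform(lo, hi) for _ in range(samples)), key=f)
                 for rng in rngs]
        # narrow the search interval around the local optima
        lo, hi = min(local), max(local)
        if hi - lo < 1e-6:
            break
    return min(local, key=f)

# Find the minimum of a simple 1-D test function on [-5, 5]:
x = parallel_stochastic_opt(lambda x: (x - 2.0) ** 2 + 1.0, -5.0, 5.0)
print(round(x, 2))  # close to 2.0
```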

  13. Huber Optimization of Circuits: A Robust Approach

    DEFF Research Database (Denmark)

    Bandler, J. W.; Biernacki, R.; Chen, S.

    1993-01-01

    The authors introduce an approach to robust circuit optimization using Huber functions, both two-sided and one-sided. They compare Huber optimization with l/sub 1/, l/sub 2/, and minimax methods in the presence of faults, large and small measurement errors, bad starting points, and statistical...... uncertainties. They demonstrate FET statistical modeling, multiplexer optimization, analog fault location, and data fitting. They extend the Huber concept by introducing a one-sided Huber function for large-scale optimization. For large-scale problems, the designer often attempts, by intuition, a preliminary...
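As a hedged illustration of the Huber idea, the sketch below defines the two-sided and one-sided functions and fits a constant to data containing a gross error; the data points and the threshold k are invented for the example:

```python
def huber(r, k=1.0):
    """Two-sided Huber function: quadratic for small residuals,
    linear for large ones, so gross errors are not over-weighted."""
    a = abs(r)
    return 0.5 * r * r if a <= k else k * (a - 0.5 * k)

def huber_one_sided(r, k=1.0):
    """One-sided variant: only positive residuals (violations) are penalized."""
    return huber(r, k) if r > 0 else 0.0

# Fit a constant model c to data with one gross error, by scanning candidates.
data = [1.0, 1.1, 0.9, 1.0, 10.0]  # last point is an outlier
def total_loss(c, loss):
    return sum(loss(x - c) for x in data)

candidates = [i / 100 for i in range(0, 1100)]
c_l2 = min(candidates, key=lambda c: total_loss(c, lambda r: r * r))
c_huber = min(candidates, key=lambda c: total_loss(c, huber))
print(c_l2, c_huber)  # the l2 fit is pulled toward the outlier; the Huber fit stays near 1
```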

  15. Dynamical System Approaches to Combinatorial Optimization

    DEFF Research Database (Denmark)

    Starke, Jens

    2013-01-01

    Several dynamical system approaches to combinatorial optimization problems are described and compared. These include dynamical systems derived from penalty methods; the approach of Hopfield and Tank; self-organizing maps, that is, Kohonen networks; coupled selection equations; and hybrid methods....... Many of them are investigated analytically, and the costs of the solutions are compared numerically with those of solutions obtained by simulated annealing and the costs of a global optimal solution. Using dynamical systems, a solution to the combinatorial optimization problem emerges in the limit...... of large times as an asymptotically stable point of the dynamics. The obtained solutions are often not globally optimal but good approximations of it. Dynamical system and neural network approaches are appropriate methods for distributed and parallel processing. Because of the parallelization...

  16. Monitoring protocols: Options, approaches, implementation, benefits

    Science.gov (United States)

    Karl, Jason W.; Herrick, Jeffrey E.; Pyke, David A.

    2017-01-01

    Monitoring and adaptive management are fundamental concepts to rangeland management across land management agencies and embodied as best management practices for private landowners. Historically, rangeland monitoring was limited to determining impacts or maximizing the potential of specific land uses—typically grazing. Over the past several decades, though, the uses of and disturbances to rangelands have increased dramatically against a backdrop of global climate change that adds uncertainty to predictions of future rangeland conditions. Thus, today’s monitoring needs are more complex (or multidimensional) and yet still must be reconciled with the realities of costs to collect requisite data. However, conceptual advances in rangeland ecology and management and changes in natural resource policies and societal values over the past 25 years have facilitated new approaches to monitoring that can support rangeland management’s diverse information needs. Additionally, advances in sensor technologies and remote-sensing techniques have broadened the suite of rangeland attributes that can be monitored and the temporal and spatial scales at which they can be monitored. We review some of the conceptual and technological advancements and provide examples of how they have influenced rangeland monitoring. We then discuss implications of these developments for rangeland management and highlight what we see as challenges and opportunities for implementing effective rangeland monitoring. We conclude with a vision for how monitoring can contribute to rangeland information needs in the future.

  17. Optimization of Molecular Approaches to Genogroup Neisseria meningitidis Carriage Isolates and Implications for Monitoring the Impact of New Serogroup B Vaccines.

    Directory of Open Access Journals (Sweden)

    Eduardo Rojas

    The reservoir for Neisseria meningitidis (Nm) is the human oropharynx. Implementation of Nm serogroup C (NmC) glycoconjugate vaccines directly reduced NmC carriage. Prophylactic vaccines are now available to prevent disease caused by the five major disease-causing Nm serogroups (ABCWY). Nm serogroup B (NmB) vaccines are composed of antigens that are conserved across Nm serogroups and therefore have the potential to impact all Nm carriage. To assess the effect of these vaccines on carriage, standardized approaches to identify and group Nm are required. Real-time PCR (rt-PCR) capsule grouping assays, internally controlled to confirm Nm species, were developed for eight serogroups associated with carriage (A, B, C, E, W, X, Y and Z). The grouping scheme was validated using diverse bacterial species associated with carriage and then used to evaluate a collection of diverse Nm carriage isolates (n=234). A scheme that also included porA and ctrA probes was able to speciate the isolates, while ctrA also provided insights into the integrity of the polysaccharide loci. Isolates were typed for the Nm vaccine antigen factor H binding protein (fHbp) and were found to represent the known diversity of this antigen. The porA rt-PCR yielded positive results with all 234 Nm carriage isolates. Genogrouping assays classified 76.5% (179/234) of these isolates to a group, categorized 53 as nongenogroupable (NGG) and two as mixed results. Thirty-seven NGG isolates evidenced a disrupted capsular polysaccharide operon as judged by a ctrA-negative result. Only 28.6% (67/234) of the isolates were serogrouped by slide agglutination (SASG), highlighting the reduced capability of carriage strains to express capsular polysaccharide. These rt-PCR assays provide a comprehensive means to identify and genogroup N. meningitidis in carriage studies used to guide vaccination strategies and to assess the impact of novel fHbp-containing vaccines on meningococcal carriage.

  18. A Global Optimization Approach to Quantum Mechanics

    OpenAIRE

    Huang, Xiaofei

    2006-01-01

    This paper presents a global optimization approach to quantum mechanics, which describes the most fundamental dynamics of the universe. It suggests that the wave-like behavior of (sub)atomic particles could be the critical characteristic of a global optimization method deployed by nature so that (sub)atomic systems can find their ground states corresponding to the global minimum of some energy function associated with the system. The classic time-independent Schrodinger equation is shown to b...

  19. How to study optimal timing of PET/CT for monitoring of cancer treatment

    DEFF Research Database (Denmark)

    Vach, Werner; Høilund-Carlsen, Poul Flemming; Fischer, Barbara Malene Bjerregaard

    2011-01-01

    Purpose: The use of PET/CT for monitoring treatment response in cancer patients after chemo- or radiotherapy is a very promising approach to optimize cancer treatment. However, the timing of the PET/CT-based evaluation of reduction in viable tumor tissue is a crucial question. We investigated how...... to plan and analyze studies to optimize this timing. Methods: General considerations about studying the optimal timing are given and four fundamental steps are illustrated using data from a published study. Results: The optimal timing should be examined by optimizing the schedule with respect...... to predicting the overall individual time course we can observe in the case of dense measurements. The optimal timing needs not to and should not be studied by optimizing the association with the prognosis of the patient. Conclusions: The optimal timing should be examined in specific ‘schedule optimizing...

  20. Enhanced Multi-Objective Optimization of Groundwater Monitoring Networks

    DEFF Research Database (Denmark)

    Bode, Felix; Binning, Philip John; Nowak, Wolfgang

    Drinking-water well catchments include many sources of potential contamination, such as gas stations or agriculture. Finding optimal positions of monitoring wells for such purposes is challenging because various parameters (and their uncertainties) influence the reliability and optim

  1. A New Approach to Optimal Cell Synthesis

    DEFF Research Database (Denmark)

    Madsen, Jan

    1989-01-01

    A set of algorithms is presented for optimal layout generation of CMOS complex gates. The algorithms are able to handle global physical constraints, such as pin placement, and to capture timing aspects. Results show that this novel approach provides better solutions in area and speed compared t...

  2. Parameters Optimization of Synergetic Recognition Approach

    Institute of Scientific and Technical Information of China (English)

    GAOJun; DONGHuoming; SHAOJing; ZHAOJing

    2005-01-01

    Synergetic pattern recognition is a novel and effective pattern recognition method and has advantages in image recognition. Research has shown that the attention parameter λ and the parameters B and C directly influence the recognition results, but there is no general theory for controlling these parameters during the recognition process. We analyze these parameters abstractly in this paper and propose a novel parameter optimization method based on the simulated annealing (SA) algorithm. The SA algorithm has good optimization performance and is used to search for the globally optimal solution for these parameters. Theoretical analysis and experimental results both show that the proposed parameter optimization method is effective and can fully improve the performance of the synergetic recognition approach, and that its realization is simple and fast.

  3. Signal processing for solar array monitoring, fault detection, and optimization

    CERN Document Server

    Braun, Henry; Spanias, Andreas

    2012-01-01

    Although the solar energy industry has experienced rapid growth recently, high-level management of photovoltaic (PV) arrays has remained an open problem. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in PV arrays in order to improve their management. In this book, we examine the potential role of sensing and monitoring technology in a PV context, focusing on the areas of fault detection, topology optimization, and performance evaluation/data visualization. First, several types of commonly occurring PV array faults are considered and detection algorithms are described. Next, the potential for dynamic optimization of an array's topology is discussed, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presen...

  4. Use Alkalinity Monitoring to Optimize Bioreactor Performance.

    Science.gov (United States)

    Jones, Christopher S; Kult, Keegan J

    2016-05-01

    In recent years, the agricultural community has reduced flow of nitrogen from farmed landscapes to stream networks through the use of woodchip denitrification bioreactors. Although deployment of this practice is becoming more common to treat high-nitrate water from agricultural drainage pipes, information about bioreactor management strategies is sparse. This study focuses on the use of water monitoring, and especially the use of alkalinity monitoring, in five Iowa woodchip bioreactors to provide insights into and to help manage bioreactor chemistry in ways that will produce desirable outcomes. Results reported here for the five bioreactors show average annual nitrate load reductions between 50 and 80%, which is acceptable according to established practice standards. Alkalinity data, however, imply that nitrous oxide formation may have regularly occurred in at least three of the bioreactors that are considered to be closed systems. Nitrous oxide measurements of influent and effluent water provide evidence that alkalinity may be an important indicator of bioreactor performance. Bioreactor chemistry can be managed by manipulation of water throughput in ways that produce adequate nitrate removal while preventing undesirable side effects. We conclude that (i) water should be retained for longer periods of time in bioreactors where nitrous oxide formation is indicated, (ii) measuring only nitrate and sulfate concentrations is insufficient for proper bioreactor operation, and (iii) alkalinity monitoring should be implemented into protocols for bioreactor management.

  5. Expected Utility Optimization - Calculus of Variations Approach

    CERN Document Server

    Tran, Khoa

    2007-01-01

    In this paper, I'll derive the Hamilton-Jacobi (HJ) equation for Merton's problem in Utility Optimization Theory using a Calculus of Variations (CoV) Approach. For stochastic control problems, Dynamic Programming (DP) has been used as a standard method. To the best of my knowledge, no one has used CoV for this problem. In addition, while the DP approach cannot guarantee that the optimum satisfies the HJ equation, the CoV approach does. Be aware that this is the first draft of this paper and many flaws might be introduced.

  6. A synthetic approach to multiobjective optimization

    CERN Document Server

    Lovison, Alberto

    2010-01-01

    We propose a strategy for approximating Pareto optimal sets based on the global analysis framework proposed by Smale (Dynamical systems, Academic Press, New York (1973) 531-544). We speak of a "synthetic" approach because the optimal set is natively approximated by means of a compound geometrical object, i.e., a simplicial complex, rather than by an unstructured scatter of individual optima. The method distinguishes the hierarchy between the singular set, the Pareto critical set and the stable Pareto critical set. Furthermore, a quadratic convergence result in a set-wise sense is proven and tested on numerical examples.

  7. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach produces a lower risk for each level of return earned compared to the mean-variance approach.
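The difference between the two criteria can be shown with a toy computation in which one extreme observation drags the mean upward while the median is unaffected; the return series and risk-aversion weight are invented for illustration:

```python
import statistics

def mean_variance_score(returns, risk_aversion=1.0):
    """Classic mean-variance objective: reward mean return, penalize variance."""
    return statistics.mean(returns) - risk_aversion * statistics.pvariance(returns)

def median_variance_score(returns, risk_aversion=1.0):
    """Median-variance variant: the median replaces the mean as the location
    measure, which is robust when returns are not normally distributed."""
    return statistics.median(returns) - risk_aversion * statistics.pvariance(returns)

# One extreme return inflates the mean but not the median, so the
# two criteria can rank the same pair of portfolios differently:
skewed = [0.01, 0.02, 0.01, 0.015, 0.30]
steady = [0.02, 0.021, 0.019, 0.02, 0.02]
print(mean_variance_score(skewed) > mean_variance_score(steady))    # True
print(median_variance_score(steady) > median_variance_score(skewed))  # True
```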

  8. FREQUENCY OPTIMIZATION FOR SECURITY MONITORING OF COMPUTER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Bogatyrev V.A.

    2015-03-01

    The subject of the proposed research is monitoring facilities for the protection of computer systems exposed to destructive attacks of accidental and malicious nature. An interval optimization model of test monitoring for detecting hazardous states of security breach caused by destructive attacks is proposed. The objective is to maximize the profit from servicing requests under uncertainty and varying intensity of the destructive attacks, including penalties when requests are serviced in dangerous conditions. The vector task of maximizing system availability while minimizing the probabilities of downtime and dangerous conditions is reduced to a scalar optimization problem based on the criterion of maximizing profit from information services (servicing of requests), which integrates these individual criteria. Optimization variants are considered that define averaged periods of monitoring activity and adapt these periods to changes in the intensity of the destructive attacks. The efficiency of adapting the monitoring frequency to changes in the activity of the destructive attacks is shown. The proposed solutions can be applied to optimize test monitoring intervals for detecting hazardous states of security breach, making it possible to increase system effectiveness and, specifically, to maximize the expected profit from information services.
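The trade-off behind interval optimization can be sketched with a hypothetical profit-rate model: each test costs a fixed amount, while time spent in an undetected dangerous state incurs a penalty proportional to the attack intensity. Both the model form and all parameter values are illustrative assumptions, not the paper's model:

```python
import math

def profit_rate(T, revenue=10.0, check_cost=1.0, penalty=5.0, attack_rate=0.2):
    """Hypothetical profit per unit time for a test interval T: fixed cost
    per test, and an expected penalty for undetected dangerous states
    (attacks arrive at `attack_rate` and wait T/2 on average until the
    next test). All values are invented for illustration."""
    return revenue - check_cost / T - penalty * attack_rate * T / 2.0

def best_interval(lo=0.05, hi=10.0, step=0.05, **kw):
    grid, t = [], lo
    while t <= hi + 1e-9:
        grid.append(round(t, 2))
        t += step
    return max(grid, key=lambda T: profit_rate(T, **kw))

# The numerical optimum agrees with the closed form sqrt(2c / (p * lambda)),
# obtained by setting the derivative c/T^2 - p*lambda/2 to zero:
T_num = best_interval()
T_closed = math.sqrt(2 * 1.0 / (5.0 * 0.2))
print(T_num, round(T_closed, 2))
```

As the assumed attack rate rises, the optimal interval shrinks, which is the adaptation effect the abstract describes.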

  9. Multiple Optimal Solutions and Sag Occurrence Index Based Placement of Voltage Sag Monitors

    Directory of Open Access Journals (Sweden)

    M.A. Ali

    2014-05-01

    This study presents optimal placement of voltage sag monitors based on a new Sag Occurrence Index (SOI), which ensures observability even in case of monitor failure or line outages. Multiple solutions for optimal placement of voltage sag monitors for sag detection have been obtained by a genetic algorithm approach such that observability of the whole system is guaranteed. The proposed SOI quantifies the severity of voltage sags at all buses in the system. To obtain the best monitor arrangement, the sum of SOI for each optimal combination is determined. The IEEE 24-bus Reliability Test System (RTS) and the IEEE 57-bus system were used to demonstrate the effectiveness of the proposed method. Details of the implementation and simulation results are also presented.
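The two-stage logic (find all minimum-size placements guaranteeing observability, then rank them by summed SOI) can be sketched on a toy network. The observability matrix and SOI values are invented, and exhaustive search stands in for the paper's genetic algorithm:

```python
from itertools import combinations

# Toy observability data: OBS[m] is the set of buses whose sags a monitor
# at bus m can observe (in practice derived from fault analysis).
OBS = {
    1: {1, 2, 3},
    2: {2, 3, 4},
    3: {3, 4, 5},
    4: {1, 4, 5},
    5: {2, 5},
}
SOI = {1: 0.8, 2: 0.5, 3: 0.9, 4: 0.4, 5: 0.7}  # hypothetical sag severity per bus

def optimal_placements(obs, buses):
    """Enumerate all minimum-size monitor sets that observe every bus
    (exhaustive search; fine for a toy system)."""
    for size in range(1, len(obs) + 1):
        hits = [set(c) for c in combinations(obs, size)
                if set().union(*(obs[m] for m in c)) >= buses]
        if hits:
            return hits
    return []

buses = set(range(1, 6))
solutions = optimal_placements(OBS, buses)
# Among equally sized optima, prefer the set covering the most severe buses:
best = max(solutions, key=lambda s: sum(SOI[m] for m in s))
print(sorted(best))  # → [1, 3]
```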

  10. Novel Optimization Approach to Mixing Process Intensification

    Institute of Scientific and Technical Information of China (English)

    Guo Kai; Liu Botan; Li Qi; Liu Chunjiang

    2015-01-01

    An approach was presented to intensify the mixing process. Firstly, a novel concept, the dissipation of mass transfer ability (DMA) associated with convective mass transfer, was defined via an analogy to heat-work conversion. Accordingly, the focus of mass transfer enhancement can be shifted to seeking the extremum of the DMA of the system. To this end, an optimization principle was proposed. A mathematical model was then developed to formulate the optimization as a variational problem. Subsequently, the intensification of the mixing process for a gas mixture in a micro-tube was provided to demonstrate the proposed principle. In the demonstration example, an optimized velocity field was obtained in which the mixing ability was improved, i.e., the mixing process should be intensified by adjusting the velocity field in the related equipment. Therefore, a specific procedure was provided to produce a mixer with geometric irregularities associated with an ideal velocity field.

  11. Robust Portfolio Optimization using CAPM Approach

    Directory of Open Access Journals (Sweden)

    mohsen gharakhani

    2013-08-01

    In this paper, a new robust model of the multi-period portfolio problem has been developed. One of the key concerns in any asset allocation problem is how to cope with uncertainty about future returns. There are several approaches in the literature for this purpose, including stochastic programming and robust optimization. Applying these techniques to the multi-period portfolio problem may increase the problem size to the point that the resulting model is intractable. In this paper, a novel approach has been proposed to formulate the multi-period portfolio problem as an uncertain linear program, assuming that asset returns follow the single-index factor model. Robust optimization has also been used to solve the problem. To evaluate the performance of the proposed model, a numerical example has been presented using simulated data.

  12. Optimal estuarine sediment monitoring network design with simulated annealing.

    Science.gov (United States)

    Nunes, L M; Caeiro, S; Cunha, M C; Ribeiro, L

    2006-02-01

    An objective function based on geostatistical variance reduction, constrained to the reproduction of the probability distribution functions of selected physical and chemical sediment variables, is applied to the selection of the best set of compliance monitoring stations in the Sado river estuary in Portugal. These stations were to be selected from a large set of sampling stations from a prior field campaign. Simulated annealing was chosen to solve the optimisation function model. Both the combinatorial problem structure and the resulting candidate sediment monitoring networks are discussed, and the optimal dimension and spatial distribution are proposed. An optimal network of sixty stations was obtained from an original 153-station sampling campaign.
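Subset selection by simulated annealing can be sketched generically: swap one selected station at a time and accept worse candidates with a temperature-dependent probability. The spread-based objective below is only a stand-in for the paper's geostatistical variance-reduction objective, and all constants are invented:

```python
import math
import random

def anneal_network(stations, n_keep, score, iters=5000, seed=0):
    """Generic simulated-annealing subset selection: keep `n_keep` of the
    candidate stations while minimizing `score` (e.g. a kriging-variance
    proxy; here the objective is abstract)."""
    rng = random.Random(seed)
    current = set(rng.sample(stations, n_keep))
    cur_val = score(current)
    best, best_val = set(current), cur_val
    for i in range(iters):
        temp = 1.0 * (1 - i / iters) + 1e-9  # linear cooling schedule
        # neighbour move: swap one selected station for an unselected one
        out = rng.choice(sorted(current))
        inn = rng.choice(sorted(set(stations) - current))
        cand = (current - {out}) | {inn}
        cand_val = score(cand)
        if cand_val < cur_val or rng.random() < math.exp((cur_val - cand_val) / temp):
            current, cur_val = cand, cand_val
            if cur_val < best_val:
                best, best_val = set(current), cur_val
    return best, best_val

# Toy objective: stations on a line; well-spread stations stand in for
# low interpolation variance, so maximize the smallest gap.
stations = list(range(20))
def spread_penalty(sel):
    pts = sorted(sel)
    return -min(b - a for a, b in zip(pts, pts[1:]))

chosen, val = anneal_network(stations, 5, spread_penalty)
print(sorted(chosen), val)
```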

  15. A Bayesian approach to optimizing cryopreservation protocols

    Directory of Open Access Journals (Sweden)

    Sammy Sambu

    2015-06-01

    Cryopreservation is beset with the challenge of protocol alignment across a wide range of cell types and process variables. Taking a cross-sectional assessment of previously published cryopreservation data (sample means and standard errors) as preliminary meta-data, a decision tree learning analysis (DTLA) was performed to develop an understanding of target survival using optimized pruning methods based on different approaches. Briefly, a clear direction on the decision process for selecting methods was developed, with the key choices being cooling rate and plunge temperature on the one hand, and biomaterial choice, use of composites (sugars and proteins) as additional constituents, loading procedure and cell location in 3D scaffolding on the other. Secondly, using machine learning and generalized approaches via the Naïve Bayes Classification (NBC) method, these metadata were used to develop posterior probabilities for combinatorial approaches implicitly recorded in the metadata. These latter results showed that newer protocol choices developed using probability elicitation techniques can unearth improved protocols consistent with multiple unidimensionally-optimized physical protocols. In conclusion, this article proposes the use of DTLA models, followed by NBC, for the improvement of modern cryopreservation techniques through an integrative approach.
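A minimal categorical Naïve Bayes classifier with Laplace smoothing illustrates the NBC step of turning protocol-choice metadata into posterior probabilities; the protocol features, labels and data below are invented for illustration:

```python
from collections import Counter, defaultdict

def train_nb(records):
    """Minimal categorical Naive Bayes training. `records` are
    (features_dict, label) pairs, e.g. cryopreservation protocol
    choices labelled by survival outcome (hypothetical data)."""
    labels = Counter(lab for _, lab in records)
    counts = defaultdict(Counter)  # (label, feature) -> value counts
    for feats, lab in records:
        for f, v in feats.items():
            counts[(lab, f)][v] += 1
    return labels, counts

def posterior(labels, counts, feats):
    """Normalized posterior over labels for one feature combination."""
    total = sum(labels.values())
    scores = {}
    for lab, n in labels.items():
        p = n / total
        for f, v in feats.items():
            c = counts[(lab, f)]
            p *= (c[v] + 1) / (sum(c.values()) + len(set(c)) + 1)  # Laplace smoothing
        scores[lab] = p
    z = sum(scores.values())
    return {lab: s / z for lab, s in scores.items()}

data = [
    ({"rate": "slow", "cpa": "DMSO"}, "high"),
    ({"rate": "slow", "cpa": "glycerol"}, "high"),
    ({"rate": "fast", "cpa": "DMSO"}, "low"),
    ({"rate": "fast", "cpa": "glycerol"}, "low"),
    ({"rate": "slow", "cpa": "DMSO"}, "high"),
]
labels, counts = train_nb(data)
post = posterior(labels, counts, {"rate": "slow", "cpa": "DMSO"})
print(max(post, key=post.get))  # → high
```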

  16. Optimization approaches for planning external beam radiotherapy

    Science.gov (United States)

    Gozbasi, Halil Ozan

    Cancer begins when cells grow out of control as a result of damage to their DNA. These abnormal cells can invade healthy tissue and form tumors in various parts of the body. Chemotherapy, immunotherapy, surgery and radiotherapy are the most common treatment methods for cancer. According to the American Cancer Society, about half of cancer patients receive a form of radiation therapy at some stage. External beam radiotherapy is delivered from outside the body and aimed at cancer cells to damage their DNA, making them unable to divide and reproduce. The beams travel through the body and may damage nearby healthy tissue unless carefully planned. Therefore, the goal of treatment plan optimization is to find the best system parameters to deliver sufficient dose to target structures while avoiding damage to healthy tissue. This thesis investigates optimization approaches for two external beam radiation therapy techniques: Intensity-Modulated Radiation Therapy (IMRT) and Volumetric-Modulated Arc Therapy (VMAT). We develop automated treatment planning technology for IMRT that produces several high-quality treatment plans satisfying provided clinical requirements in a single invocation and without human guidance. A novel bi-criteria scoring-based beam-selection algorithm is part of the planning system and produces better plans than those produced using a well-known scoring-based algorithm. Our algorithm is very efficient and finds the beam configuration at least ten times faster than an exact integer programming approach. Solution times range from 2 minutes to 15 minutes, which is clinically acceptable. With certain cancers, especially lung cancer, a patient's anatomy changes during treatment. These anatomical changes need to be considered in treatment planning. Fortunately, recent advances in imaging technology can provide multiple images of the treatment region taken at different points of the breathing cycle, and deformable image registration algorithms can

  17. Interpolation and optimal monitoring in space and time

    NARCIS (Netherlands)

    Boer, E.P.J.

    2002-01-01

    This thesis shows how statistics can be used both for analysing data and for determining the (optimal) design for collecting data in environmental research. An important question is often where to place monitoring stations to meet the objective of measuring as well as possible. In thi

  18. Optimizing IT Infrastructure by Virtualization Approach

    Science.gov (United States)

    Budiman, Thomas; Suroso, Jarot S.

    2017-04-01

    The goal of this paper is to find the best potential configuration that can be applied to a physical server without compromising service performance for the clients. Data were compiled by direct observation in the data center and then analyzed using a hermeneutic approach to interpret the textual data gathered. The result is the best configuration for a physical server hosting several virtual machines logically separated by function. It can be concluded that a single physical server can indeed be optimized using virtualization so that it delivers the peak performance of the machine itself, with the impact felt throughout the organization.

  19. Jordan algebraic approach to symmetric optimization

    NARCIS (Netherlands)

    Vieira, M.V.C.

    2007-01-01

    In this thesis we present a generalization of interior-point methods for linear optimization based on kernel functions to symmetric optimization. It covers the three standard cases of conic optimization: linear optimization, second-order cone optimization and semi-definite optimization. We give an

  20. Near-optimal sensor placement for health monitoring of civil structures

    Science.gov (United States)

    van der Linden, Gwendolyn W.; Emami-Naeini, Abbas; Kosut, Robert L.; Sederat, Hassan; Lynch, Jerome P.

    2010-04-01

    In this paper we focus on the optimal placement of sensors for state estimation-based continuous health monitoring of structures using three approaches. The first aims to minimize the static estimation error of the structure deflections, using the linear stiffness matrix derived from a finite element model. The second approach aims to maximize the observability of the derived linear state space model. The third approach aims to minimize the dynamic estimation error of the deflections using a Linear Quadratic Estimator. Both nonlinear mixed-integer and relaxed convex optimization formulations are presented. A simple search-based optimization implementation for each of the three approaches is demonstrated on a model of the long-span New Carquinez Bridge in California.
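
    The second approach above (maximizing observability) can be sketched as a greedy sensor-selection loop over the log-determinant of the observability Gramian. The 6-state system below is a random stable stand-in for a finite element structural model, not the New Carquinez Bridge model, and each candidate sensor is assumed to measure one state directly:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(1)

# Hypothetical 6-state discrete-time structural model, scaled to be stable;
# candidate sensor s measures state s (row s of the identity).
n = 6
A = rng.random((n, n))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))  # force spectral radius 0.9

def logdet_gramian(sensors):
    """log-det of the observability Gramian for the chosen sensor rows."""
    C = np.eye(n)[sorted(sensors)]
    # Gramian W solves A^T W A - W + C^T C = 0
    W = solve_discrete_lyapunov(A.T, C.T @ C)
    sign, logdet = np.linalg.slogdet(W + 1e-12 * np.eye(n))
    return logdet

# Greedy placement of 3 sensors maximizing observability.
chosen = set()
for _ in range(3):
    best = max(set(range(n)) - chosen,
               key=lambda s: logdet_gramian(chosen | {s}))
    chosen.add(best)
print(sorted(chosen))
```

    The log-determinant objective is one common observability surrogate; the paper's relaxed convex formulation optimizes a related criterion without the combinatorial search.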

  1. The optimization approach to regional environmental security

    Directory of Open Access Journals (Sweden)

    N.V. Kameneva

    2014-03-01

    Full Text Available The aim of the article. The aim of this paper is to work out a conceptual approach to the problem of environmental safety securing, the protection of population against unfavourable environmental impact and ecological risks and the maximization of economic effect from business activity including its ecological part. The following purposes were set and achieved: definition of the notion of optimal level of environmental safety; working out of a more precise classification of the elements of economic effect from ecological activity; outlining of possibilities to use economic tools in managing environmental safety at the regional level. The results of the analysis. Economically optimal level of environmental safety is one, which meets basic requirements concerning protection of population against negative environmental impact and threats of such impact, and provides the maximum economic effect from ecological activity. The gradation of environmental safety levels is based on the assessment of levels of ecological risk. The final economic result of ecological activity may be positive or negative depending on the amounts of expenditures and effect. The purpose of optimization will therefore be either maximization of earnings or minimization of loss. In general, the expenditures related to ecological activity grow when the level of environmental safety gets higher. For most of populated territories, the achievement of the maximum theoretically possible level of environmental safety is not only impractical in present but also not desirable in principle. Elimination or reducing to insignificant values of all ecological risks would actually require transforming a given territory to a natural reserve with consequent stopping all business activity, which would lead to very high economic losses. Conclusions and directions of further researches. Environmental safety of a territory may have different levels, which are characterized, in particular, by the

  2. Optimized, budget-constrained monitoring well placement using DREAM

    Energy Technology Data Exchange (ETDEWEB)

    Yonkofski, Catherine MR; Davidson, Casie L.; Rodriguez, Luke R.; Porter, Ellen A.; Bender, Sadie R.; Brown, Christopher F.

    2017-07-01

    Defining the ideal suite of monitoring technologies to be deployed at a carbon capture and storage (CCS) site presents a challenge to project developers, financiers, insurers, regulators and other stakeholders. The monitoring, verification, and accounting (MVA) toolkit offers a suite of technologies to monitor an extensive range of parameters across a wide span of spatial and temporal resolutions, each with its own degree of sensitivity to changes in the parameter being monitored. Understanding how best to optimize MVA budgets to minimize the time to leak detection could help to address issues around project risks, and in turn help support broad CCS deployment. This paper presents a case study demonstrating an application of the Designs for Risk Evaluation and Management (DREAM) tool using an ensemble of CO2 leakage scenarios taken from a previous study on leakage impacts to groundwater. Impacts were assessed and monitored as a function of pH, total dissolved solids (TDS), and trace metal concentrations of arsenic (As), cadmium (Cd), chromium (Cr), and lead (Pb). Using output from the previous study, DREAM was used to optimize monitoring system designs based on variable sampling locations and parameters. The algorithm requires the user to define a finite budget to limit the number of monitoring wells and technologies deployed, and then iterates well placement and sensor type and location until it converges on the configuration with the lowest time to first detection of the leak, averaged across all scenarios. To facilitate an understanding of the optimal number of sampling wells, DREAM was used to assess the marginal utility of additional sampling locations. Based on assumptions about monitoring costs and replacement costs of degraded water, the incremental cost of each additional sampling well can be compared against its marginal value in terms of avoided aquifer degradation. Applying this method, DREAM identified the most cost-effective ensemble with 14
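
    A minimal sketch of DREAM's core objective, assuming a precomputed matrix of per-scenario detection times (all values here are synthetic, not from the study), is a budget-limited greedy search that adds the well giving the largest drop in mean time to first detection; the marginal utility of each added well falls out directly:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical detection-time matrix: detection_time[s, w] is the time at
# which a sensor at candidate well w would first detect leak scenario s
# (e.g. via a pH or TDS anomaly in a simulated plume).
n_scenarios, n_wells = 50, 30
detection_time = rng.uniform(10, 500, size=(n_scenarios, n_wells))

def mean_ttd(wells):
    """Mean (over scenarios) time to first detection by any chosen well."""
    return detection_time[:, sorted(wells)].min(axis=1).mean()

# Greedy placement under a budget of 5 wells, a stand-in for DREAM's
# iterative optimization loop.
budget, placed, marginal = 5, set(), []
for _ in range(budget):
    best = min(set(range(n_wells)) - placed,
               key=lambda w: mean_ttd(placed | {w}))
    before = mean_ttd(placed) if placed else float("inf")
    placed.add(best)
    marginal.append(before - mean_ttd(placed))
print(sorted(placed), [round(m, 1) for m in marginal[1:]])
```

    The typically shrinking entries of `marginal` are the "marginal utility of additional sampling locations" that the abstract weighs against incremental monitoring cost.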

  3. Monitoring alcoholic fermentation: an untargeted approach.

    Science.gov (United States)

    Ferreira, António César Silva; Monforte, Ana Rita; Teixeira, Carla Silva; Martins, Rosa; Fairbairn, Samantha; Bauer, Florian F

    2014-07-16

    This work describes the utility and efficiency of a metabolic profiling pipeline that relies on an unsupervised, untargeted approach applied to HS-SPME/GC-MS data. This noninvasive, high-throughput methodology enables "real time" monitoring of the metabolic changes inherent to the biochemical dynamics of a perturbed complex biological system and the extraction of molecular candidates that are later validated in their biochemical context. To evaluate the efficiency of the pipeline, five different fermentations, carried out on a synthetic medium with the nitrogen source as the perturbation, were performed at 5 and 500 mL. The smaller-volume fermentations were monitored online by HS-SPME/GC-MS, yielding metabolic profiles and the time expression of molecular candidates. Nontargeted analysis was applied to the MS data in two ways: (i) one dimension (1D), where the total ion chromatogram per sample was used; (ii) two dimensions (2D), where the entire time vs m/z matrix per sample was used. Results indicate that the 2D procedure captured the relevant information more efficiently than the 1D. It was also seen that, although there were differences in fermentation performance between the two scales, the metabolic pathways responsible for producing the metabolites that impact the quality of the volatile fraction were unaffected, so the proposed pipeline is suitable for the study of different fermentation systems that can undergo subsequent sensory validation at larger scale.

  4. Monitoring active volcanoes: The geochemical approach

    Directory of Open Access Journals (Sweden)

    Takeshi Ohba

    2011-06-01

    The geochemical surveillance of an active volcano aims to recognize possible signals that are related to changes in volcanic activity. Indeed, as magma rises inside the volcanic "plumbing system" and/or the system is refilled with new batches of magma, the volatiles dissolved in the magma are progressively released as a function of their relative solubilities. When approaching the surface, the fluids discharged during magma degassing can interact with shallow aquifers and/or be released along the main volcano-tectonic structures. Under these conditions, the main degassing sites represent strategic locations to be monitored.

    The main purpose of this special volume is to collect papers that cover a wide range of topics in volcanic fluid geochemistry, which include geochemical characterization and geochemical monitoring of active volcanoes using different techniques and at different sites. Moreover, part of this volume has been dedicated to the new geochemistry tools.

  5. Optimal positioning of sensors for the monitoring of water dams; Optimale Positionierung von Messeinrichtungen an Staumauern zur Bauwerksueberwachung

    Energy Technology Data Exchange (ETDEWEB)

    Lahmer, Tom [Bauhaus-Univ. Weimar (Germany). DFG-Graduiertenkolleg 1462; Koenke, Carsten [Bauhaus-Univ. Weimar (Germany). Inst. fuer Strukturmechanik; Bettzieche, Volker [Ruhrverband, Essen (Germany)

    2010-07-01

    This article discusses cases of damage to water dams and describes well-proven methods for monitoring these structures. Additionally, the effects of damage are investigated with the help of a multifield finite element simulation. Using an inverse approach, the damage is identified from the combined consideration of mechanical and hydraulic measurements. An optimal positioning of the sensors monitoring the dams is proposed. (orig.)

  6. A model based wireless monitoring approach for traffic noise

    NARCIS (Netherlands)

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2011-01-01

    In order to have a good understanding of the environmental acoustic effects of traffic it is important to perform long term monitoring within large areas. With traditional monitoring approaches this is quite unfeasible and the costs are relatively high. Within TNO a new wireless monitoring approach

  7. Optimized Radar Remote Sensing for Levee Health Monitoring

    Science.gov (United States)

    Jones, Cathleen E.

    2013-01-01

    Radar remote sensing offers great potential for high-resolution monitoring of ground surface changes over large areas at one time, both to detect movement on and near levees and to locate seepage through levees. Our NASA-funded projects to monitor levees in the Sacramento Delta and along the Mississippi River have developed and demonstrated methods to use radar remote sensing to measure quantities relevant to levee health and of great value to emergency response. The DHS-funded project will enable us to define how to optimally monitor levees in this new way and set the stage for a transition to satellite SAR (synthetic aperture radar) imaging for better temporal and spatial coverage at lower cost to the end users.

  8. Entropy-Based Approach to Remove Redundant Monitoring Wells from Regional-Scale Groundwater Network

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An entropy-based approach is applied to identify redundant wells in the network. In this research, the groundwater-monitoring network is considered as a communication system with the capability to transfer information, and monitoring wells are taken as information receivers. The concepts of entropy and mutual information are then applied to measure the information content of each monitoring well and the information relationship between pairs of wells. The efficiency of information transfer among monitoring wells is the basis for judging redundancy in the network, and the capacity of the monitoring wells to provide information on groundwater is the criterion for identifying redundant wells. This approach is demonstrated using data from a regional-scale groundwater network in the Hebei plain, China. The results show that the entropy-based method is recommendable for optimizing groundwater networks, especially those in media with high heterogeneity and anisotropy.
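
    The entropy/mutual-information screening can be sketched as follows; the well series, bin counts and the 0.5 redundancy threshold are illustrative assumptions, not values from the Hebei plain study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical monthly head series at 6 wells; wells 3-5 are noisy copies of
# wells 0-2, so they carry largely redundant information.
base = rng.normal(size=(3, 120))
levels = np.vstack([base, base + 0.1 * rng.normal(size=(3, 120))])

def entropy(x, bins=8):
    """Shannon entropy (bits) of a binned series."""
    p = np.histogram(x, bins=bins)[0] / len(x)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mutual_info(x, y, bins=8):
    """Mutual information (bits) between two binned series."""
    pxy = np.histogram2d(x, y, bins=bins)[0] / len(x)
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return (pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum()

# Keep a well only if no already-kept well transmits most of its
# information: MI(y, x) / H(x) near 1 marks well x as redundant given y.
kept = []
for i in range(len(levels)):
    if all(mutual_info(levels[j], levels[i]) / entropy(levels[i]) <= 0.5
           for j in kept):
        kept.append(i)
print(kept)
```

    Here the normalized mutual information plays the role of the "efficiency of information transfer" between well pairs; wells 3-5 fail the test against their originals and are dropped.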

  9. Optimizing bulk milk dioxin monitoring based on costs and effectiveness.

    Science.gov (United States)

    Lascano-Alcoser, V H; Velthuis, A G J; van der Fels-Klerx, H J; Hoogenboom, L A P; Oude Lansink, A G J M

    2013-07-01

    Dioxins are environmental pollutants, potentially present in milk products, which have negative consequences for human health and for the firms and farms involved in the dairy chain. Dioxin monitoring in feed and food has been implemented to detect their presence and estimate their levels in food chains. However, the costs and effectiveness of such programs have not been evaluated. In this study, the costs and effectiveness of bulk milk dioxin monitoring in milk trucks were estimated to optimize the sampling and pooling monitoring strategies aimed at detecting at least 1 contaminated dairy farm out of 20,000 at a target dioxin concentration level. Incidents of different proportions, in terms of the number of contaminated farms, and concentrations were simulated. A combined testing strategy, consisting of screening and confirmatory methods, was assumed as well as testing of pooled samples. Two optimization models were built using linear programming. The first model aimed to minimize monitoring costs subject to a minimum required effectiveness of finding an incident, whereas the second model aimed to maximize the effectiveness for a given monitoring budget. Our results show that a high level of effectiveness is possible, but at high costs. Given specific assumptions, monitoring with 95% effectiveness to detect an incident of 1 contaminated farm at a dioxin concentration of 2 pg of toxic equivalents/g of fat [European Commission's (EC) action level] costs €2.6 million per month. At the same level of effectiveness, a 73% cost reduction is possible when aiming to detect an incident where 2 farms are contaminated at a dioxin concentration of 3 pg of toxic equivalents/g of fat (EC maximum level). With a fixed budget of €40,000 per month, the probability of detecting an incident with a single contaminated farm at a dioxin concentration equal to the EC action level is 4.4%. This probability almost doubled (8.0%) when aiming to detect the same incident but with a dioxin
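
    A toy version of the first optimization model (minimize monitoring cost subject to a minimum effectiveness) becomes a linear program once the miss probability is log-transformed; the three pooling strategies, their costs and per-test detection probabilities below are invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Three hypothetical sampling/pooling strategies with assumed per-test cost
# (EUR) and per-test probability p of detecting the contaminated farm.
cost = np.array([120.0, 60.0, 35.0])
p = np.array([0.020, 0.012, 0.006])

# Independent tests miss with probability prod (1 - p_i)^{n_i}; requiring
# >= 95% effectiveness is linear in the test counts after taking logs:
#   sum n_i * (-ln(1 - p_i)) >= -ln(0.05)
w = -np.log1p(-p)
res = linprog(c=cost, A_ub=[-w], b_ub=[np.log(0.05)],
              bounds=[(0, None)] * 3)
print(np.round(res.x, 1), round(res.fun))  # tests per strategy, total cost
```

    The dual model in the abstract (maximize effectiveness under a fixed budget) swaps the objective and the constraint; in this linearized toy both reduce to picking the strategy with the best detection-per-euro ratio.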

  10. A multi-sensor approach to monitor slope displacement

    Science.gov (United States)

    Bouali, E. H. Y.; Oommen, T.; Escobar-Wolf, R. P.

    2015-12-01

    The use of remote sensing toward slope monitoring and landslide detection has been widespread. Common techniques include interferometric synthetic aperture radar (InSAR), light detection and ranging (LiDAR) and optical photogrammetric methods. Each technique can measure ground motion when data over the same region are acquired through multiple acquisitions, with typical data outputs displayed in spatial form (e.g., displacement/velocity maps or two- and three-dimensional change detection models) or in temporal form (e.g., displacement time series). The authors apply a multi-sensor approach - combining satellite-based InSAR, terrestrial LiDAR, and aerial optical photogrammetry - in order to optimize these remote sensing techniques based on their advantages and limitations. This application is conducted over a railroad corridor in southeastern Nevada. InSAR results include the calculation of displacement rates across many slopes over a long period of time. Two slopes, identified as potentially hazardous, are further analyzed in greater detail using LiDAR and optical photogrammetry. Slope displacements are measured using a point-cloud change detection analysis; the potential for stacking acquisitions to create displacement time-series is also explored. Overall, the goal is to illustrate the benefits of using a multi-sensor, remote sensing approach towards the monitoring of slope instability.

  11. Scheduling structural health monitoring activities for optimizing life-cycle costs and reliability of wind turbines

    Science.gov (United States)

    Hanish Nithin, Anu; Omenzetter, Piotr

    2017-04-01

    Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most existing studies have used structural reliability and Bayesian pre-posterior analysis for optimization. This paper proposes an extension of previous approaches in a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs that combines elements of structural reliability/risk analysis (SRA) and Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using the Bayesian analysis. The output of this framework determines the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between life-cycle costs and the risk of structural failure. Numerical illustrations with a generic deterioration model for one monitoring exercise in the life cycle of a system are demonstrated. Two case scenarios, namely initially building either an expensive, robust structure or a cheaper but more quickly deteriorating one, and adopting an expensive monitoring system, are presented to aid the decision-making process.
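
    The pre-posterior trade-off for a single monitoring exercise can be illustrated with an expected-cost comparison; all probabilities and costs below are hypothetical, and the real framework updates them through the decision tree rather than fixing them up front:

```python
# Single monitoring exercise, pre-posterior flavor: compare expected costs
# with and without monitoring. All probabilities and costs are hypothetical.
p_damage = 0.2                         # prior probability of damage
c_fail, c_repair, c_monitor = 1e6, 5e4, 1e4
sensitivity, specificity = 0.9, 0.95   # assumed monitoring performance

# Without monitoring: damage goes unrepaired and leads to failure.
cost_no_mon = p_damage * c_fail

# With monitoring: repair on any positive indication, failure on a miss.
p_pos = sensitivity * p_damage + (1 - specificity) * (1 - p_damage)
cost_mon = c_monitor + p_pos * c_repair + (1 - sensitivity) * p_damage * c_fail
print(round(cost_no_mon), round(cost_mon))  # → 200000 41000
```

    In the paper's framework a GA searches over monitoring schedules with such expected costs as the fitness; here monitoring clearly pays because the repair cost is small against the failure cost.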

  12. Applied research of correspondence analysis method in waste tailings reservoir heavy metal pollution monitoring points optimization

    Institute of Scientific and Technical Information of China (English)

    WANG Cong-lu; WU Chao; LI Zi-jun; XUE Sheng-guo

    2010-01-01

    In order to optimize monitoring points and monitoring factors, the relationships between pollutants and soil samples were established by correspondence analysis. The results show that plotting monitoring points and monitoring factors on the same factor axes clearly expresses the intrinsic links between pollutants and monitoring points and their distribution characteristics. Determining the main monitoring points and the main monitoring indicators can reduce and optimize the number of monitoring points while ensuring that the monitoring data remain typical and representative. Using the correlations among pollutants can reduce the number of monitoring indicators and improve the effectiveness of data collection.

  13. Site location optimization of regional air quality monitoring network in China: methodology and case study.

    Science.gov (United States)

    Zheng, Junyu; Feng, Xiaoqiong; Liu, Panwei; Zhong, Liuju; Lai, Senchao

    2011-11-01

    Regional air quality monitoring networks (RAQMN) are urgently needed in China due to increasing regional air pollution in city clusters, arising from rapid economic development in recent decades. This paper proposes a methodological framework for site location optimization in designing a RAQMN adapting to air quality management practice in China. The framework utilizes synthetic assessment concentrations developed from simulated data from a regional air quality model in order to simplify the optimal process and to reduce costs. On the basis of analyzing various constraints such as cost and budget, terrain conditions, administrative district, population density and spatial coverage, the framework takes the maximum approximate degree as an optimization objective to achieve site location optimization of a RAQMN. An expert judgment approach was incorporated into the framework to help adjust initial optimization results in order to make the network more practical and representative. A case study was used to demonstrate the application of the framework, indicating that it is feasible to conduct site optimization for a RAQMN design in China. The effects of different combinations of primary and secondary pollutants on site location optimization were investigated. It is suggested that the network design considering both primary and secondary pollutants could better represent regional pollution characteristics and more extensively reflect temporal and spatial variations of regional air quality. The work shown in this study can be used as a reference to guide site location optimization of a RAQMN design in China or other regions of the world.

  14. Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Rajeeva; Kumar, Aditya; Dai, Dan; Seenumani, Gayathri; Down, John; Lopez, Rodrigo

    2012-12-31

    This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration (the type, location and number of sensors used in a network) for online condition monitoring. In particular, the focus of this work is to develop software tools for optimal sensor placement (OSP) and to use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing-system design for online condition monitoring in a broad range of applications. The overall approach consists of (i) defining the condition monitoring requirements in terms of OSP and mapping them into mathematical terms for the OSP algorithm, (ii) analyzing the trade-offs of alternative OSP algorithms, down-selecting the most relevant ones and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithm to design the optimal sensor network required for condition monitoring of IGCC gasifier refractory and RSC fouling. Two key requirements of OSP for condition monitoring are the desired precision for the monitoring variables (e.g. refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem in which the key requirements of precision and reliability are imposed as constraints, and the optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as relevant to OSP for condition monitoring: one based on an LMI formulation and the other a standard INLP formulation. Various algorithms to solve

  15. Concept for integrated environmental monitoring. Scientific approach

    Energy Technology Data Exchange (ETDEWEB)

    Haber, W. [comp.; Schoenthaler, K.; Kerner, H.F.; Koeppel, J.; Spandau, L.

    1998-09-01

    Despite considerable expenditures for environmental protection and intensified efforts in environmental research and monitoring, environmental damage increasingly occurs, sometimes with global effects, largely due to the lack of early diagnosis. In the past few years various institutions have therefore demanded improvements in environmental monitoring. The Council of Experts on Environmental Issues (`Rat von Sachverstaendigen fuer Umweltfragen`, SRU), in particular, in its `Environmental Report` of 1987 and in its Special Report on `General Ecological Environmental Monitoring` (1990), presented far-reaching demands for a nationwide ecological early-warning system, which should integrate the various local, regional, national and even global monitoring levels, and which should encompass environmental monitoring of entire ecosystems at representative locations. This is aimed at creating the prerequisites for: detection of long-term gradual environmental change; confirmation or refutation of initial assumptions regarding the causes of these environmental changes; permitting decisions on preventive actions to stabilize or improve environmental conditions; and making it possible to assess the success of environmental protection policies. This report includes an abbreviated version and documentation of the conference on the `Concept for Integrated Environmental Monitoring` and the final report `Specification of the Concept for Integrated Environmental Monitoring from the Perspective of Nature Conservation`. (orig.)

  16. A Novel Solitude Conserving Location Monitoring Approach for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Pravallika. K

    2014-09-01

    Observing individual locations with a capable but untrusted server imposes privacy threats on the monitored individuals. In this paper we propose a novel solitude-conserving location monitoring approach for wireless sensor networks. We design two in-network approaches for anonymizing locations, namely quality-aware and resource-aware approaches, which aim to enable the system to provide high-quality location monitoring services for end users while conserving personal location privacy. Both approaches are based on k-anonymity solitude (i.e., an object is indistinguishable among k objects), enabling highly trusted sensor nodes to provide the collective location data of monitored objects to our system. Each collective location takes the form of an observed area X along with the number of monitored objects residing in X. The resource-aware approach aims to minimize the computational and communication cost, while the quality-aware approach aims to increase the reliability of the collective location data by reducing the observed areas. We use a spatial histogram methodology to estimate the distribution of observed objects based on the gathered collective location data. We evaluated the two approaches through simulated experiments. The simulation results show that these approaches provide high-quality location monitoring services for end users and preserve the location privacy of the monitored objects.
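
    A minimal sketch of the resource-aware flavor of this aggregation, assuming per-cell object counts along a corridor (the counts and k are invented for illustration):

```python
# Hypothetical per-cell counts of monitored objects in 8 sensor cells.
counts = [3, 0, 4, 1, 2, 2, 5, 1]
k = 5  # k-anonymity: each reported area must contain >= k objects

# Resource-aware aggregation: one cheap left-to-right scan, merging
# consecutive cells into a reported area as soon as it holds >= k objects.
# Trailing cells that never reach k are withheld (privacy over coverage).
areas, start, total = [], 0, 0
for i, c in enumerate(counts):
    total += c
    if total >= k:
        areas.append((start, i, total))  # (first cell, last cell, count)
        start, total = i + 1, 0
print(areas)  # → [(0, 2, 7), (3, 5, 5), (6, 6, 5)]
```

    Every reported (area, count) pair satisfies k-anonymity by construction; a quality-aware variant would instead search for the smallest areas that still contain k objects.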

  17. Greenhouse climate management : an optimal control approach

    NARCIS (Netherlands)

    Henten, van E.J.

    1994-01-01

    In this thesis a methodology is developed for the construction and analysis of an optimal greenhouse climate control system.

    In chapter 1, the results of a literature survey are presented and the research objectives are defined. In the literature, optimal greenhouse climate

  19. Pressure Vessel Optimization a Fuzzy Approach

    Directory of Open Access Journals (Sweden)

    Mr. Uday V. Aswalekar

    2015-05-01

    Optimization has become a significant area of development, both in research and for practicing design engineers. In this work, the sequential linear programming (SLP) method is used to optimize the air receiver tank of a reciprocating air compressor. The capacity of the tank is treated as an optimization constraint, and the conventional dimensions of the tank are used as a reference for defining the variable ranges. Inequality constraints, such as the different design stresses for different parts of the tank, are determined and suitable values selected. An algorithm is prepared and conventional SLP is performed in MATLAB with a C++ interface to obtain the optimized tank dimensions. The conventional SLP is then modified by introducing fuzzy heuristics and the corresponding algorithm is prepared: fuzzy-based sequential linear programming (FSLP) is implemented in MATLAB using the fuzzy and optimization toolboxes, and the corresponding dimensions are obtained. Comparing FSLP with SLP shows that FSLP is easier to execute.
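
    A sketch of the sequential linear programming loop itself, on a deliberately simple stand-in problem rather than the pressure-vessel model: each iteration linearizes the nonlinear constraint about the current point and solves the resulting LP under shrinking move limits:

```python
import numpy as np
from scipy.optimize import linprog

# SLP sketch on a stand-in problem (not the tank model): minimize material
# x1 + x2 subject to the nonlinear capacity constraint x1 * x2 >= 4, with
# 1 <= xi <= 10. Known optimum: x1 = x2 = 2, objective 4.
x = np.array([4.0, 4.0])
for it in range(40):
    m = max(0.01, 0.5 * 0.85 ** it)  # shrinking move limits (trust region)
    # Linearize g(x) = x1*x2 about the current point:
    #   g_lin = x[1]*x1 + x[0]*x2 - x[0]*x[1] >= 4
    A_ub = [[-x[1], -x[0]]]
    b_ub = [-(4.0 + x[0] * x[1])]
    bounds = [(max(1.0, xi - m), min(10.0, xi + m)) for xi in x]
    x = linprog(c=[1.0, 1.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds).x

print(np.round(x, 2), round(float(x[0] * x[1]), 2))
```

    The move limits are what keep the linearization honest; the fuzzy variant in the paper replaces such crisp limits and constraint boundaries with membership functions.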

  20. Optimization of remediation strategies using vadose zone monitoring systems

    Science.gov (United States)

    Dahan, Ofer

    2016-04-01

    In-situ bio-remediation of the vadose zone depends mainly on the ability to change the subsurface hydrological, physical and chemical conditions so that specific, indigenous, pollutant-degrading bacteria can develop. As such, remediation efficiency depends strongly on the ability to establish optimal hydraulic and chemical conditions in deep sections of the vadose zone. These conditions are usually determined in laboratory experiments, where parameters such as the chemical composition of the soil water solution, redox potential and water content of the sediment are fully controlled. In full-scale field setups, the desired degradation conditions in the deep vadose zone are usually implemented by infiltrating water enriched with chemical additives at the land surface, on the assumption that deep percolation into the vadose zone will create chemical conditions that promote biodegradation of specific compounds. However, applying water with specific chemical properties near the land surface does not necessarily produce the desired chemical and hydraulic conditions in deep sections of the vadose zone. A recently developed vadose-zone monitoring system (VMS) allows continuous monitoring of the hydrological and chemical properties of deep sections of the unsaturated zone. The VMS includes flexible time-domain reflectometry (FTDR) probes, which allow continuous monitoring of temporal variations in vadose-zone water content, and vadose-zone sampling ports (VSPs), which are designed to allow frequent sampling of sediment pore water and gas at multiple depths. Implementing the VMS at sites undergoing active remediation provides real-time information on the actual chemical and hydrological conditions in the vadose zone as the remediation process progresses. To date, the system has been successfully implemented in several studies on water flow and contaminant transport in

  1. Examining the Bernstein global optimization approach to optimal power flow problem

    Science.gov (United States)

    Patil, Bhagyesh V.; Sampath, L. P. M. I.; Krishnan, Ashok; Ling, K. V.; Gooi, H. B.

    2016-10-01

    This work addresses a nonconvex optimal power flow (OPF) problem. We introduce a 'new approach' in the context of the OPF problem based on Bernstein polynomials. The applicability of the approach is studied on a real-world 3-bus power system. The numerical results obtained with this new approach for the 3-bus system reveal a satisfactory improvement in terms of optimality. The results are found to be competitive with the generic global optimization solvers BARON and COUENNE.
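
    The record above gives no implementation details, but the core property behind Bernstein-based global optimization is easy to illustrate: the Bernstein coefficients of a polynomial enclose its range over a box, which yields the bounds used to discard regions that cannot contain the global optimum. A minimal univariate sketch (not the authors' solver):

```python
from math import comb

def bernstein_enclosure(coeffs):
    """Enclose the range of p(x) = sum(coeffs[k] * x**k) on [0, 1].

    The Bernstein coefficients b_i of a degree-n polynomial bound its
    range on the unit interval: min(b) <= p(x) <= max(b) for x in [0, 1].
    """
    n = len(coeffs) - 1
    b = [sum(comb(i, k) / comb(n, k) * coeffs[k] for k in range(i + 1))
         for i in range(n + 1)]
    return min(b), max(b)

# p(x) = x^2 - x has true range [-0.25, 0] on [0, 1]; the Bernstein
# enclosure [-0.5, 0] is guaranteed to contain it
lo, hi = bernstein_enclosure([0.0, -1.0, 1.0])
```

In a branch-and-bound OPF solver, boxes whose Bernstein lower bound exceeds the incumbent objective value can be pruned outright.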

  2. Velocity model optimization for surface microseismic monitoring via amplitude stacking

    Science.gov (United States)

    Jiang, Haiyu; Wang, Zhongren; Zeng, Xiaoxian; Lü, Hao; Zhou, Xiaohua; Chen, Zubin

    2016-12-01

    A usable velocity model in microseismic projects plays a crucial role in achieving statistically reliable microseismic event locations. Existing methods for velocity model optimization rely mainly on picking arrival times at individual receivers. However, for microseismic monitoring with surface stations, seismograms of perforation shots have such low signal-to-noise ratios (S/N) that they do not yield sufficiently reliable picks. In this study, we develop a framework for constructing a 1-D flat-layered a priori velocity model using a non-linear optimization technique based on amplitude stacking. The energy focusing of the perforation shot is improved using very fast simulated annealing (VFSA), and the accuracy of the shot relocations is used to evaluate whether the resultant velocity model can be used for microseismic event location. Our method also includes a conventional migration-based location technique that utilizes successive grid subdivisions to improve computational efficiency and source location accuracy. Because unreasonable a priori velocity model information and interference due to additive noise are the major contributors to inaccuracies in perforation shot locations, we use velocity model optimization as a compensation scheme. Using synthetic tests, we show that accurate locations of perforation shots can be recovered to within 2 m, even with pre-stack S/N ratios as low as 0.1 at individual receivers. By applying the technique to a coal-bed gas reservoir in Western China, we demonstrate that the perforation shot location can be recovered to within the tolerance of the well tip location.
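
    The abstract does not include the authors' implementation; the following is a generic VFSA sketch applied to a toy three-layer velocity misfit, using Ingber's temperature-dependent perturbation and cooling schedule. The misfit function and bounds are illustrative placeholders, not the paper's amplitude-stacking objective:

```python
import math, random

def vfsa_minimize(misfit, x0, bounds, iters=2000, t0=1.0, c=1.0, seed=1):
    """Very fast simulated annealing (VFSA) sketch: perturb each layer
    velocity with a temperature-dependent step and accept candidates
    via the Metropolis rule under a fast-decaying cooling schedule."""
    rng = random.Random(seed)
    d = len(x0)
    x, fx = list(x0), misfit(x0)
    best, fbest = list(x), fx
    for k in range(1, iters + 1):
        t = t0 * math.exp(-c * k ** (1.0 / d))   # VFSA cooling schedule
        y = []
        for i, (lo, hi) in enumerate(bounds):
            u = rng.random()
            # temperature-dependent perturbation, magnitude in [0, 1]
            step = math.copysign(t * ((1 + 1 / t) ** abs(2 * u - 1) - 1),
                                 u - 0.5)
            y.append(min(hi, max(lo, x[i] + step * (hi - lo))))
        fy = misfit(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fy < fbest:
                best, fbest = list(y), fy
    return best, fbest

# toy misfit: squared distance of a 3-layer model to "true" velocities (km/s)
true_v = [1.8, 2.4, 3.1]
misfit = lambda v: sum((a - b) ** 2 for a, b in zip(v, true_v))
model, err = vfsa_minimize(misfit, [2.5, 2.5, 2.5], [(1.0, 4.0)] * 3)
```

In the paper's setting the misfit would instead be the (negated) stacked amplitude at the known perforation-shot location.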

  3. Optimal Reinsurance: A Risk Sharing Approach

    Directory of Open Access Journals (Sweden)

    Alejandro Balbas

    2013-08-01

    Full Text Available This paper proposes risk sharing strategies, which allow insurers to cooperate and diversify non-systemic risk. We deal with both deviation measures and coherent risk measures and provide general mathematical methods to optimize them all. Numerical examples are given in order to illustrate how efficiently the non-systemic risk can be diversified and how effective the presented mathematical tools may be. It is also illustrated how the existence of huge disasters may lead to wrong solutions to our optimal risk sharing problem, in the sense that the involved risk measure could ignore the existence of a non-null probability of "global ruin" after the design of the optimal risk sharing strategy. To overcome this caveat, one can use more conservative risk measures. The stability in the large of the optimal sharing plan guarantees that "the global ruin caveat" may also be addressed and solved with the presented methods.

  4. Group Counseling Optimization: A Novel Approach

    Science.gov (United States)

    Eita, M. A.; Fahmy, M. M.

    A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.
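
    The seven benchmark functions named above are standard test problems for population-based optimizers; four of them can be written down directly (the definitions follow the common forms in the benchmarking literature, not necessarily the exact variants used by the authors):

```python
import math

# Standard benchmark functions for testing population-based optimizers;
# each has a known global minimum of 0 at the stated optimum.
def sphere(x):                      # min 0 at x = (0, ..., 0)
    return sum(v * v for v in x)

def rosenbrock(x):                  # min 0 at x = (1, ..., 1)
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):                   # min 0 at x = (0, ..., 0), multimodal
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):                      # min 0 at x = (0, ..., 0), multimodal
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

Sphere and Rosenbrock probe convergence on unimodal landscapes, while Rastrigin and Ackley test an optimizer's ability to escape many local minima.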

  5. On a Variational Approach to Optimization of Hybrid Mechanical Systems

    Directory of Open Access Journals (Sweden)

    Vadim Azhmyakov

    2010-01-01

    Full Text Available This paper deals with multiobjective optimization techniques for a class of hybrid optimal control problems in mechanical systems. We deal with general nonlinear hybrid control systems described by boundary-value problems associated with hybrid-type Euler-Lagrange or Hamilton equations. The variational structure of the corresponding solutions makes it possible to reduce the original “mechanical” problem to an auxiliary multiobjective programming reformulation. This approach motivates possible applications of theoretical and computational results from multiobjective optimization related to the original dynamical optimization problem. We consider first order optimality conditions for optimal control problems governed by hybrid mechanical systems and also discuss some conceptual algorithms.

  6. A linear programming approach for optimal contrast-tone mapping.

    Science.gov (United States)

    Wu, Xiaolin

    2011-05-01

    This paper proposes a novel algorithmic approach to image enhancement via optimal contrast-tone mapping. In a fundamental departure from the current practice of histogram equalization for contrast enhancement, the proposed approach maximizes expected contrast gain subject to an upper limit on tone distortion and, optionally, to other constraints that suppress artifacts. The underlying contrast-tone optimization problem can be solved efficiently by linear programming. This new constrained optimization approach to image enhancement is general, and the user can add and fine-tune the constraints to achieve desired visual effects. Experimental results demonstrate clearly superior performance of the new approach over histogram equalization and its variants.
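
    As a hedged illustration of the contrast-versus-tone trade-off, consider a heavily simplified instance: choose a tone step s_j for each gray level j to maximize expected contrast gain Σ p_j s_j subject to a budget Σ s_j ≤ L and box bounds. This toy LP reduces to a fractional knapsack and can be solved greedily; the paper's full formulation adds further distortion constraints and needs a general LP solver:

```python
def contrast_tone_lp(p, total_range, s_min=0.5, s_max=4.0):
    """Toy contrast-tone mapping: pick a tone step s_j for each gray
    level to maximize sum(p_j * s_j) subject to sum(s_j) <= total_range
    and s_min <= s_j <= s_max.

    With only a budget plus box constraints, this LP is a fractional
    knapsack: give every level s_min, then spend the leftover budget
    on the most probable levels first.
    """
    n = len(p)
    s = [s_min] * n
    budget = total_range - n * s_min
    for j in sorted(range(n), key=lambda j: -p[j]):  # most frequent first
        extra = min(s_max - s_min, budget)
        s[j] += extra
        budget -= extra
        if budget <= 0:
            break
    return s

# 4 gray levels; level 1 dominates the histogram, so it receives the
# largest tone step, boosting contrast where most pixels live
steps = contrast_tone_lp([0.1, 0.6, 0.2, 0.1], total_range=8.0)
```

The upper bound s_max is what caps tone distortion here: no single gray level can consume an unbounded share of the output range.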

  7. Optimal Reverse Carpooling Over Wireless Networks - A Distributed Optimization Approach

    CERN Document Server

    ParandehGheibi, Ali; Effros, Michelle; Medard, Muriel

    2010-01-01

    We focus on a particular form of network coding, reverse carpooling, in a wireless network where the potentially coded transmitted messages are to be decoded immediately upon reception. The network is fixed and known, and system performance is measured in terms of the number of wireless broadcasts required to meet multiple unicast demands. Motivated by the structure of the coding scheme, we formulate the problem as a linear program by introducing a flow variable for each triple of connected nodes, which gives a formulation polynomial in the number of nodes. Using dual decomposition and a projected subgradient method, we present a decentralized algorithm that obtains optimal routing schemes in the presence of coding opportunities. We show that the primal sub-problem can be expressed as a shortest path problem on an edge-graph, and the proposed algorithm requires each node to exchange information only with its neighbors.
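
    Per the abstract, the primal sub-problem of the decentralized algorithm is a shortest path computation on an edge-graph. Below is a standard Dijkstra routine of the kind such a sub-problem would call; the graph is a made-up example, with edge costs standing in for dual-price-adjusted broadcast counts:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path sub-problem solved at each dual-decomposition
    iteration: adj maps node -> list of (neighbor, cost) pairs."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# toy edge-graph: nodes would represent directed hops, and costs the
# broadcast counts adjusted by dual prices for coding opportunities
adj = {"s": [("a", 1.0), ("b", 4.0)],
       "a": [("b", 1.0), ("t", 5.0)],
       "b": [("t", 1.0)]}
d = dijkstra(adj, "s")
```

In the full algorithm, each subgradient step would re-price the edges and re-solve this routine for every unicast demand.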

  8. Using models for the optimization of hydrologic monitoring

    Science.gov (United States)

    Fienen, Michael N.; Hunt, Randall J.; Doherty, John E.; Reeves, Howard W.

    2011-01-01

    Hydrologists are often asked what kind of monitoring network can most effectively support science-based water-resources management decisions. Currently (2011), hydrologic monitoring locations are often selected by addressing observation gaps in the existing network or non-science issues such as site access. A model might then be calibrated to available data and applied to a prediction of interest (regardless of how well suited that model is for the prediction). However, modeling tools are available that can inform which locations and types of data provide the most 'bang for the buck' for a specified prediction. Put another way, the hydrologist can determine which observation data most reduce the model uncertainty around a specified prediction. An advantage of such an approach is the maximization of limited monitoring resources because it focuses on the difference in prediction uncertainty with or without additional collection of field data. Data worth can be calculated either through the addition of new data or through the subtraction of existing information by reducing monitoring efforts (Beven, 1993). The latter is generally not widely requested, as there is explicit recognition that the worth calculated is fundamentally dependent on the prediction specified. If a water manager needs a new prediction, the benefits of reducing the scope of a monitoring effort based on an old prediction may be erased by the loss of information important for the new prediction. This fact sheet focuses on the worth or value of new data collection by quantifying the reduction in prediction uncertainty achieved by adding a monitoring observation. This calculation of worth can be performed for multiple potential locations (and types) of observations, which can then be ranked for their effectiveness in reducing uncertainty around the specified prediction. This is implemented using a Bayesian approach with the PREDUNC utility in the parameter estimation software suite PEST (Doherty, 2010). The
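
    The data-worth idea can be sketched with a small linear-Bayes example (this is not the PREDUNC code itself): compute the prediction uncertainty from the posterior parameter covariance with and without a candidate observation; the difference is the observation's worth. All matrices below are illustrative:

```python
import numpy as np

def prediction_variance(X, y, prior_var, noise_var):
    """Linear-Bayes prediction uncertainty, sketched: posterior
    parameter covariance from observation sensitivities X with
    Gaussian noise, propagated onto a prediction with sensitivity y."""
    Cp_inv = np.diag(1.0 / prior_var)            # prior precision
    R_inv = np.eye(len(X)) / noise_var           # obs precision
    post_cov = np.linalg.inv(X.T @ R_inv @ X + Cp_inv)
    return float(y @ post_cov @ y)

# two parameters; rows are observation sensitivities (hypothetical)
y = np.array([1.0, 0.5])            # prediction sensitivity vector
prior = np.array([1.0, 1.0])
base = np.array([[1.0, 0.0]])       # existing network informs p0 only
cand = np.array([[1.0, 0.0],
                 [0.0, 1.0]])       # candidate well also informs p1

v_base = prediction_variance(base, y, prior, noise_var=0.1)
v_new = prediction_variance(cand, y, prior, noise_var=0.1)
worth = v_base - v_new              # reduction in prediction variance
```

Ranking candidate observations by this reduction is exactly the ranking step the fact sheet describes.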

  9. Sensor Networks Hierarchical Optimization Model for Security Monitoring in High-Speed Railway Transport Hub

    Directory of Open Access Journals (Sweden)

    Zhengyu Xie

    2015-01-01

    Full Text Available We consider the sensor networks hierarchical optimization problem in a high-speed railway transport hub (HRTH). The sensor networks are optimized in three hierarchies: key-area sensor optimization, passenger-line sensor optimization, and whole-area sensor optimization. A case study of a specific HRTH in China showed that the hierarchical optimization method is effective for optimizing sensor networks for security monitoring in an HRTH.

  10. Non-Seismic Geophysical Approaches to Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Hoversten, G.M.; Gasperikova, Erika

    2004-09-01

    This chapter considers the application of a number of different geophysical techniques for monitoring geologic sequestration of CO2. The relative merits of the seismic, gravity, electromagnetic (EM) and streaming potential (SP) geophysical techniques as monitoring tools are examined. An example of tilt measurements illustrates another potential monitoring technique, although it has not been studied to the extent of the other techniques in this chapter. This work does not represent an exhaustive study, but rather demonstrates the capabilities of a number of geophysical techniques on two synthetic modeling scenarios. The first scenario represents combined CO2 enhanced oil recovery (EOR) and sequestration in a producing oil field, the Schrader Bluff field on the North Slope of Alaska, USA. The second scenario is of a pilot DOE CO2 sequestration experiment scheduled for summer 2004 in the Frio Brine Formation in South Texas, USA. Numerical flow simulations of the CO2 injection process for each case were converted to geophysical models using petrophysical models developed from well log data. These coupled flow-simulation geophysical models allow comparison of the performance of monitoring techniques over time on realistic 3D models by generating simulated responses at different times during the CO2 injection process. These time-lapse measurements are used to produce time-lapse changes in geophysical measurements that can be related to the movement of CO2 within the injection interval.

  11. Monitoring endurance athletes : A multidisciplinary approach

    NARCIS (Netherlands)

    Otter, Tina Ardi

    2016-01-01

    Endurance athletes seek the optimal balance between training stress and recovery so that they can perform at their best and avoid injuries. The PhD thesis of Ruby Otter at the School of Sport Studies (Hanze University of Applied Sciences) and the Center of Human Movement Sciences (UMCG, University of Gron

  13. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    Science.gov (United States)

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2017-02-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

  14. Towards an integrated approach to marine benthic monitoring.

    Science.gov (United States)

    Barrio Froján, Christopher R S; Cooper, Keith M; Bolam, Stefan G

    2016-03-15

    In the UK, most marine benthic monitoring is carried out in a piecemeal fashion, funded by different sectors of industry that utilise the marine environment under licence. Monitoring requirements are imposed by licence conditions, which can vary considerably between licences. The UK Government also conducts marine environmental surveys in support of its legislative commitments. The present investigation reviews these different monitoring approaches to highlight whether synergies between them could be developed into an integrated approach to marine benthic monitoring. An integrated approach would have ecological benefits, as greater consistency in sampling and analytical protocols would reduce uncertainty in the predictions of impact, and facilitate the assessment of Good Environmental Status under the Marine Strategy Framework Directive. The same approach would also be of financial benefit, as spatio-temporal duplication in sampling would be reduced, and the value of acquired data would be maximised, resulting in a more efficient and cost-effective approach.

  15. New approaches to the design optimization of hydrofoils

    Science.gov (United States)

    Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas

    2015-11-01

    Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the efficiency of the optimization, which measures a finite-time averaged approximation of the infinite-time averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of the finite-time averaged approximation of the infinite-time averaged statistic of interest, the total computational time of the optimization algorithm is significantly reduced. Results from the two different approaches are compared.

  16. Optimizing algal cultivation & productivity : an innovative, multidiscipline, and multiscale approach.

    Energy Technology Data Exchange (ETDEWEB)

    Murton, Jaclyn K.; Hanson, David T. (University of New Mexico, Albuquerque, NM); Turner, Tom (University of New Mexico, Albuquerque, NM); Powell, Amy Jo; James, Scott Carlton (Sandia National Laboratories, Livermore, CA); Timlin, Jerilyn Ann; Scholle, Steven (University of New Mexico, Albuquerque, NM); August, Andrew (Sandia National Laboratories, Livermore, CA); Dwyer, Brian P.; Ruffing, Anne; Jones, Howland D. T.; Ricken, James Bryce; Reichardt, Thomas A. (Sandia National Laboratories, Livermore, CA)

    2010-04-01

    Progress in algal biofuels has been limited by significant knowledge gaps in algal biology, particularly as they relate to scale-up. To address this we are investigating how culture composition dynamics (light as well as biotic and abiotic stressors) describe key biochemical indicators of algal health: growth rate, photosynthetic electron transport, and lipid production. Our approach combines traditional algal physiology with genomics, bioanalytical spectroscopy, chemical imaging, remote sensing, and computational modeling to provide an improved fundamental understanding of algal cell biology across multiple culture scales. This work spans investigations from the single-cell level to ensemble measurements of algal cultures at the laboratory benchtop and at large greenhouse scale (175 gal). We will discuss the advantages of this novel, multidisciplinary strategy and emphasize the importance of developing an integrated toolkit to provide sensitive, selective methods for detecting early fluctuations in algal health, productivity, and population diversity. Progress in several areas will be summarized, including identification of spectroscopic signatures for algal culture composition, stress level, and lipid production, enabled by non-invasive spectroscopic monitoring of the photosynthetic and photoprotective pigments at the single-cell and bulk-culture scales. Early experiments compare and contrast the well-studied green alga Chlamydomonas with two potential production strains of microalgae, Nannochloropsis and Dunaliella, under optimal and stressed conditions. This integrated approach has the potential for broad impact on algal biofuels and bioenergy, and several of these opportunities will be discussed.

  17. Relational Approach to XPath Query Optimization

    NARCIS (Netherlands)

    Verhage, R.

    2005-01-01

    This thesis contributes to the Pathfinder project which aims at creating an XQuery compiler on top of a relational database system. Currently, it is being implemented on top of MonetDB, a main memory database system. For optimization and portability purposes, Pathfinder first compiles an XQuery expr

  18. Optimization of nonlinear controller with an enhanced biogeography approach

    Directory of Open Access Journals (Sweden)

    Mohammed Salem

    2014-07-01

    Full Text Available This paper is dedicated to the optimization of nonlinear controllers based on an enhanced Biogeography-Based Optimization (BBO) approach. The BBO is combined with a predator-prey model in which several predators are used, and a modified migration operator is introduced to increase diversification along the optimization process, so as to avoid local optima and reach the optimal solution quickly. The proposed approach is used to tune the gains of PID controllers for nonlinear systems. Simulations are carried out on a mass-spring-damper system and an inverted pendulum, and give remarkable results when compared to the genetic algorithm and standard BBO.
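
    For illustration, a plain BBO loop without the paper's predator-prey extension is sketched below, applied to a made-up quadratic surrogate for a PID tuning cost. Migration copies solution features from good habitats to worse ones in proportion to their immigration rates:

```python
import random

def bbo_minimize(cost, bounds, pop=20, gens=60, p_mut=0.05, seed=3):
    """Plain BBO sketch: habitats ranked by cost; worse habitats
    immigrate features from better ones (roulette wheel on emigration
    rates), with uniform mutation and single-elite preservation."""
    rng = random.Random(seed)
    d = len(bounds)
    hab = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        hab.sort(key=cost)
        mu = [(pop - i) / (pop + 1) for i in range(pop)]   # emigration
        lam = [1 - m for m in mu]                          # immigration
        new = [hab[0][:]]                                  # elitism
        for i in range(1, pop):
            child = hab[i][:]
            for j in range(d):
                if rng.random() < lam[i]:                  # immigrate SIV
                    r, acc, src = rng.random() * sum(mu), 0.0, 0
                    for k, m in enumerate(mu):             # roulette wheel
                        acc += m
                        if acc >= r:
                            src = k
                            break
                    child[j] = hab[src][j]
                if rng.random() < p_mut:                   # mutation
                    lo, hi = bounds[j]
                    child[j] = rng.uniform(lo, hi)
            new.append(child)
        hab = new
    return min(hab, key=cost)

# hypothetical surrogate for a PID cost (true optimum at kp=2, ki=1, kd=0.5);
# a real tuning run would simulate the closed-loop plant instead
cost = lambda g: (g[0] - 2) ** 2 + (g[1] - 1) ** 2 + (g[2] - 0.5) ** 2
best = bbo_minimize(cost, [(0, 5), (0, 5), (0, 2)])
```

The paper's enhancement would add predator individuals that repel the population from crowded regions, on top of this basic migration loop.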

  19. Searchlight Correlation Detectors: Optimal Seismic Monitoring Using Regional and Global Networks

    Science.gov (United States)

    Gibbons, Steven J.; Kværna, Tormod; Näsholm, Sven Peter

    2015-04-01

    The sensitivity of correlation detectors increases greatly when the outputs from multiple seismic traces are considered. For single-array monitoring, a zero-offset stack of individual correlation traces will provide significant noise suppression and enhanced sensitivity for a source region surrounding the hypocenter of the master event. The extent of this region is limited only by the decrease in waveform similarity with increasing hypocenter separation. When a regional or global network of arrays and/or 3-component stations is employed, the zero-offset approach is only optimal when the master and detected events are co-located exactly. In many monitoring situations, including nuclear test sites and geothermal fields, events may be separated by up to many hundreds of meters while still retaining sufficient waveform similarity for correlation detection on single channels. However, the traveltime differences resulting from the hypocenter separation may result in significant beam loss on the zero-offset stack and a deployment of many beams for different hypothetical source locations in geographical space is required. The beam deployment necessary for optimal performance of the correlation detectors is determined by an empirical network response function which is most easily evaluated using the auto-correlation functions of the waveform templates from the master event. The correlation detector beam deployments for providing optimal network sensitivity for the North Korea nuclear test site are demonstrated for both regional and teleseismic monitoring configurations.
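
    The zero-offset stacking idea can be sketched as follows: compute a normalized correlation trace per channel against the master-event template, then average across the network, so that weak single-channel similarity becomes a clear network detection. The data below are synthetic; this is an illustration, not the authors' detector:

```python
import numpy as np

def network_correlation(templates, streams):
    """Stack single-channel normalized (Pearson) correlation traces
    across a network: the zero-offset stack suppresses noise that is
    incoherent between channels."""
    stacks = []
    for tpl, data in zip(templates, streams):
        m = len(tpl)
        tpl = (tpl - tpl.mean()) / (tpl.std() * m)
        cc = np.empty(len(data) - m + 1)
        for i in range(len(cc)):
            win = data[i:i + m]
            cc[i] = np.sum(tpl * (win - win.mean())) / (win.std() + 1e-12)
        stacks.append(cc)
    return np.mean(stacks, axis=0)   # zero-offset network stack

rng = np.random.default_rng(0)
master = rng.standard_normal(50)     # master-event waveform
streams, templates = [], []
for _ in range(6):                   # six channels; repeat buried at i=100
    noise = rng.standard_normal(300)
    noise[100:150] += master         # signal-to-noise ratio of about 1
    streams.append(noise)
    templates.append(master.copy())
stack = network_correlation(templates, streams)
peak = int(np.argmax(stack))         # detection at the embedded offset
```

For non-co-located events, the abstract's point is that this zero-offset average must be replaced by a deployment of delay-and-stack correlation beams over hypothetical source positions.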

  20. Optimization of Remediation Conditions using Vadose Zone Monitoring Technology

    Science.gov (United States)

    Dahan, O.; Mandelbaum, R.; Ronen, Z.

    2010-12-01

    Success of in-situ bioremediation of the vadose zone depends mainly on the ability to change and control the hydrological, physical and chemical conditions of the subsurface. These manipulations enable the development of specific, indigenous, pollutant-degrading bacteria, or set the environmental conditions for seeded bacteria. As such, remediation efficiency depends on the ability to implement optimal hydraulic and chemical conditions in deep sections of the vadose zone. Enhanced bioremediation of the vadose zone is achieved under field conditions through infiltration of water enriched with chemical additives. Yet water percolation and solute transport under unsaturated conditions is a complex process, and application of water with specific chemical conditions near the land surface does not necessarily promote the desired chemical and hydraulic conditions in deeper sections of the vadose zone. A newly developed vadose-zone monitoring system (VMS) allows continuous monitoring of the hydrological and chemical properties of the percolating water along deep sections of the vadose zone. Implementation of the VMS at sites that undergo active remediation provides real-time information on the chemical and hydrological conditions in the vadose zone as the remediation process progresses. Manipulating subsurface conditions for optimal biodegradation of hydrocarbons is demonstrated through enhanced bioremediation of the vadose zone at a site in Tel Aviv that had been contaminated with gasoline products. The vadose zone at the site is composed of a 6 m clay layer overlying a sandy formation extending to the water table at a depth of 20 m below land surface. The upper 5 m of contaminated soil were removed for ex-situ treatment, and the remaining 15 m vadose zone is treated in-situ through enhanced bioremediation. An underground drip irrigation system was installed below the surface at the bottom of the excavation. Oxygen- and nutrient-releasing powder (EHCO, Adventus) was spread below the

  1. Design Buildings Optimally: A Lifecycle Assessment Approach

    KAUST Repository

    Hosny, Ossama

    2013-01-01

    This paper structures a generic framework to support optimum design of multi-building projects in a desert environment. The framework targets an environmentally friendly design with minimum lifecycle cost, using Genetic Algorithms (GAs). The GAs function through a set of success measures which evaluate the design, formulate a proper objective, and reflect possible tangible/intangible constraints. The framework optimizes the design and categorizes it under a certain environmental category at minimum Life Cycle Cost (LCC). It consists of three main modules: (1) a custom Building Information Model (BIM) for desert buildings with a compatibility checker as a central interactive database; (2) a system evaluator module to evaluate the proposed success measures for the design; and (3) a GA optimization module to ensure optimum design. The framework functions at three levels: the building-component, integrated-building, and multi-building levels. At the component level, the design team selects components in a designed sequence to ensure compatibility among the various components, while at the building level, the team can locate and orient each individual building relative to the others. Finally, at the multi-building (compound) level, the whole design can be evaluated using success measures of natural light, site capacity, shading impact on natural lighting, thermal change, visual access and energy saving. Through genetic algorithms, the framework optimizes the design by determining proper types of building components and relative building locations and orientations that ensure categorizing the design under a specific category, or meeting certain preferences, at minimum lifecycle cost.

  2. A CONSTRAINED OPTIMIZATION APPROACH FOR LCP

    Institute of Scientific and Technical Information of China (English)

    Ju-liang Zhang; Jian Chen; Xin-jian Zhuo

    2004-01-01

    In this paper, the LCP is converted to an equivalent nonsmooth nonlinear equation system H(x, y) = 0 using the well-known NCP function, the Fischer-Burmeister function. Note that some equations in H(x, y) = 0 are nonsmooth and nonlinear, hence difficult to solve, while the others are linear, hence easy to solve. We therefore further convert the nonlinear equation system H(x, y) = 0 to an optimization problem with linear equality constraints. We then study the conditions under which the KKT points of the optimization problem are solutions of the original LCP, and propose a method to solve the optimization problem. In this algorithm, the search direction is obtained by solving a strictly convex program at each iterate. However, our algorithm is essentially different from the traditional SQP method. The global convergence of the method is proved under mild conditions. In addition, we prove that the algorithm converges superlinearly under the conditions that M is a P0 matrix and the limit point is a strict complementarity solution of the LCP. Preliminary numerical experiments with this method are reported.
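
    The Fischer-Burmeister reformulation itself is compact enough to show directly: fb(a, b) = sqrt(a² + b²) − a − b vanishes exactly when a ≥ 0, b ≥ 0 and ab = 0, so a solution of the LCP zeroes the residual system component-wise. The 2 × 2 problem below is an illustrative example, not one from the paper:

```python
import math

def fb(a, b):
    """Fischer-Burmeister NCP function: fb(a, b) = 0 if and only if
    a >= 0, b >= 0 and a * b = 0 (the complementarity conditions)."""
    return math.hypot(a, b) - a - b

def lcp_residual(M, q, x):
    """Residual of the LCP (find x >= 0 with y = Mx + q >= 0 and
    x'y = 0) rewritten as the equation system fb(x_i, y_i) = 0."""
    y = [sum(M[i][j] * x[j] for j in range(len(x))) + q[i]
         for i in range(len(x))]
    return [fb(xi, yi) for xi, yi in zip(x, y)]

M = [[2.0, 1.0], [1.0, 2.0]]         # positive definite, so P0 holds
q = [-5.0, -6.0]
x_star = [4.0 / 3.0, 7.0 / 3.0]      # here Mx + q = 0 with x > 0
res = lcp_residual(M, q, x_star)     # ~ [0, 0]: x_star solves the LCP
```

The paper's contribution is how to drive this residual to zero; the snippet only demonstrates that the residual characterizes solutions.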

  3. Molecular Approaches for Optimizing Vitamin D Supplementation.

    Science.gov (United States)

    Carlberg, Carsten

    2016-01-01

    Vitamin D can be synthesized endogenously within UV-B-exposed human skin. However, avoidance of sufficient sun exposure through predominantly indoor activities, textile coverage, dark skin at higher latitudes, and seasonal variations makes the intake of vitamin D-fortified food or direct vitamin D supplementation necessary. Via its most biologically active metabolite, 1α,25-dihydroxyvitamin D, and the transcription factor vitamin D receptor, vitamin D has a direct effect on the epigenome and transcriptome of many human tissues and cell types. Differing interpretations of results from observational studies with vitamin D have led to some dispute in the field over the desired optimal vitamin D level and the recommended daily supplementation. This chapter provides background on the epigenome- and transcriptome-wide functions of vitamin D and outlines how this insight may be used to determine the optimal vitamin D status of human individuals. These reflections lead to the concept of a personal vitamin D index, which may be a better guideline for optimized vitamin D supplementation than population-based recommendations. © 2016 Elsevier Inc. All rights reserved.

  4. A Supervised Approach to Delineate Built-Up Areas for Monitoring and Analysis of Settlements

    Directory of Open Access Journals (Sweden)

    Oliver Harig

    2016-08-01

    Full Text Available Monitoring urban growth and measuring urban sprawl is essential for improving urban planning and development. In this paper, we introduce a supervised approach for the delineation of urban areas using commonly available topographic data and commercial GIS software. The method combines supervised parameter optimization with a buffer-based quality measure. The approach was developed, tested and evaluated in terms of its possible use for monitoring built-up areas at a very fine-grained level in spatial science. Results show that built-up area boundaries can be delineated automatically with higher quality than the settlement boundaries currently in use. The approach has been applied to 166 settlement bodies in Germany. The study shows a very efficient way of extracting settlement boundaries from topographic data and maps, and contributes to the quantification and monitoring of urban sprawl. Moreover, the findings from this study can potentially guide policy makers and urban planners in other countries.

  5. Designing optimal greenhouse gas monitoring networks for Australia

    Science.gov (United States)

    Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.

    2016-01-01

    Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.

  6. Designing optimal greenhouse gas monitoring networks for Australia

    Directory of Open Access Journals (Sweden)

    T. Ziehn

    2015-08-01

    Full Text Available Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimize the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to 5 new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.
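
    A toy version of the network-design calculation can be sketched with a linear Bayesian update: each candidate station contributes an observation row, and stations are added greedily to minimize the summed posterior flux variance. This mimics the structure of the approach, not the authors' LPDM-based system; all matrices below are made up:

```python
import numpy as np

def greedy_network(H_base, H_cand, prior_var, noise_var, n_add):
    """Greedy network-design sketch: starting from a base network,
    repeatedly add the candidate station whose observation row most
    reduces the summed posterior flux variance (linear Bayes)."""
    def total_var(rows):
        H = np.vstack(rows)
        post = np.linalg.inv(H.T @ H / noise_var + np.diag(1.0 / prior_var))
        return float(np.trace(post))                # summed flux variance
    chosen, rows = [], [H_base]
    remaining = list(range(len(H_cand)))
    for _ in range(n_add):
        best = min(remaining,
                   key=lambda i: total_var(rows + [H_cand[i:i + 1]]))
        chosen.append(best)
        rows.append(H_cand[best:best + 1])
        remaining.remove(best)
    return chosen, total_var(rows)

# 3 flux regions; base station sees region 0, candidates see the rest
prior = np.array([1.0, 1.0, 1.0])
base = np.array([[1.0, 0.0, 0.0]])
cands = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.3, 0.3]])
picks, final_var = greedy_network(base, cands, prior, 0.1, n_add=2)
```

The greedy picks are the two candidates that each observe an otherwise unconstrained region, echoing the paper's finding that uncertainty reduction is driven by coverage of the dominant flux regions.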

  7. Toward Intelligent Hemodynamic Monitoring: A Functional Approach

    Directory of Open Access Journals (Sweden)

    Pierre Squara

    2012-01-01

    Full Text Available Technology is now available to allow a complete haemodynamic analysis; however, this is only used in a small proportion of patients and seems to occur when the medical staff have the time and inclination. As a result, significant delays occur between an event, its diagnosis and, therefore, any treatment required. We can speculate that we should be able to collect enough real-time information to make a complete, real-time haemodynamic diagnosis in all critically ill patients. This article advocates for “intelligent haemodynamic monitoring”. Following the steps of a functional analysis, we answered six basic questions. (1) What is the best theoretical model for describing haemodynamic disorders? (2) What are the necessary input/output data for describing this model? (3) What are the specific quality criteria and tolerances for collecting each input variable? (4) Based on these criteria, what are the validated available technologies for monitoring each input variable continuously, in real time, and if possible non-invasively? (5) How can we integrate all the reliably monitored input variables into the same system for continuously describing the global haemodynamic model? (6) Is it possible to implement this global model into intelligent programs that are able to differentiate clinically relevant changes from artificial changes and to display intelligent messages and/or diagnoses?

  8. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    Science.gov (United States)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures, including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate

  9. Multiobjective Optimization Methodology A Jumping Gene Approach

    CERN Document Server

    Tang, KS

    2012-01-01

    Complex design problems are often governed by a number of performance merits. These markers gauge how good the design is going to be, but can conflict with the performance requirements that must be met. The challenge is reconciling these two requirements. This book introduces a newly developed jumping gene algorithm, designed to address multiobjective optimization problems and to supply adequate solutions quickly. The text presents various multi-objective optimization techniques and provides the technical know-how for obtaining trade-off solutions between solution spread and convergence.

  10. Stochastic Optimization Approaches for Solving Sudoku

    CERN Document Server

    Perez, Meir

    2008-01-01

    In this paper the Sudoku problem is solved using four stochastic search techniques: Cultural Genetic Algorithm (CGA), Repulsive Particle Swarm Optimization (RPSO), Quantum Simulated Annealing (QSA), and a hybrid method combining Genetic Algorithm with Simulated Annealing (HGASA). The results show that CGA, QSA and HGASA are able to solve the Sudoku puzzle, with CGA finding a solution in 28 seconds, QSA in 65 seconds, and HGASA in 1.447 seconds. HGASA is fastest mainly because it combines the parallel searching of GA with the flexibility of SA. RPSO was found to be unable to solve the puzzle.
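    As a rough, hedged illustration of how simulated annealing attacks such a constraint-satisfaction puzzle, the sketch below anneals a 4x4 Latin square (a simplified stand-in for Sudoku); the cost function, temperature schedule, and all parameter values are illustrative choices, not taken from the paper.

```python
import math
import random

def column_conflicts(grid):
    """Cost function: number of duplicated symbols per column."""
    n = len(grid)
    return sum(n - len({grid[r][c] for r in range(n)}) for c in range(n))

def anneal_latin_square(n=4, steps=20000, t0=2.0, alpha=0.9995, seed=1):
    """Simulated annealing toward an n-by-n Latin square."""
    rng = random.Random(seed)
    # Rows start as random permutations, so row constraints always
    # hold; annealing only has to repair the columns.
    grid = [rng.sample(range(n), n) for _ in range(n)]
    cost, t = column_conflicts(grid), t0
    for _ in range(steps):
        if cost == 0:
            break
        r = rng.randrange(n)
        i, j = rng.sample(range(n), 2)
        grid[r][i], grid[r][j] = grid[r][j], grid[r][i]
        new_cost = column_conflicts(grid)
        # Accept improvements always, uphill moves with Boltzmann probability.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            grid[r][i], grid[r][j] = grid[r][j], grid[r][i]  # revert the swap
        t *= alpha
    return grid, cost

grid, cost = anneal_latin_square()
```

    The hybrid methods in the record differ mainly in how candidate moves are generated and how the temperature interacts with population-based search.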

  11. Optimization of Orthopaedic Drilling: A Taguchi Approach

    Directory of Open Access Journals (Sweden)

    Rupesh Kumar Pandey

    2012-06-01

    Full Text Available Bone drilling is a common procedure to prepare an implant site during orthopaedic surgery. An increase in temperature during such a procedure can result in thermal osteonecrosis, which may delay healing or reduce the stability of the fixation. Therefore, it is important to minimize the thermal invasion of bone during drilling. The Taguchi method has been applied to investigate the optimal combination of drill diameter, feed rate and spindle speed in dry drilling of Polymethylmethacrylate (PMMA) for minimizing the temperature produced.
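    The record does not show the Taguchi computation itself, but its core quantity, the smaller-the-better signal-to-noise (S/N) ratio used to rank parameter settings by low and consistent temperature, can be sketched as follows; the temperature readings and settings are invented for illustration.

```python
import math

def sn_smaller_the_better(responses):
    """Taguchi smaller-the-better signal-to-noise ratio (dB):
    SN = -10 * log10(mean(y^2)). A higher SN means a lower and
    more consistent response (here: drilling temperature)."""
    mean_sq = sum(y * y for y in responses) / len(responses)
    return -10.0 * math.log10(mean_sq)

# Hypothetical temperature readings (deg C) for two drilling settings;
# the values are illustrative, not from the cited study.
setting_a = [42.0, 44.5, 43.2]   # e.g. small diameter, low feed rate
setting_b = [55.1, 58.7, 56.9]   # e.g. large diameter, high feed rate

sn_a = sn_smaller_the_better(setting_a)
sn_b = sn_smaller_the_better(setting_b)
best = "A" if sn_a > sn_b else "B"   # the higher SN ratio is preferred
```

    In a full Taguchi study the same ratio is computed per row of an orthogonal array, and the best level of each factor is picked from the mean S/N per level.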

  12. Scientific Opportunities for Monitoring at Environmental Remediation Sites (SOMERS): Integrated Systems-Based Approaches to Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bunn, Amoret L.; Wellman, Dawn M.; Deeb, Rula A.; Hawley, Elizabeth L.; Truex, Michael J.; Peterson, Mark; Freshley, Mark D.; Pierce, Eric M.; McCord, John; Young, Michael H.; Gilmore, Tyler J.; Miller, Rick; Miracle, Ann L.; Kaback, Dawn; Eddy-Dilek, Carol; Rossabi, Joe; Lee, Michelle H.; Bush, Richard P.; Beam , Paul; Chamberlain, G. M.; Marble, Justin; Whitehurst, Latrincy; Gerdes, Kurt D.; Collazo, Yvette

    2012-05-15

    Through an inter-disciplinary effort, DOE is addressing a need to advance monitoring approaches from sole reliance on cost- and labor-intensive point-source monitoring to integrated systems-based approaches such as flux-based approaches and the use of early indicator parameters. Key objectives include identifying current scientific, technical and implementation opportunities and challenges, prioritizing science and technology strategies to meet current needs within the DOE complex for the most challenging environments, and developing an integrated and risk-informed monitoring framework.

  13. Differences between fully Bayesian and pragmatic methods to assess predictive uncertainty and optimal monitoring designs

    Science.gov (United States)

    Wöhling, Thomas; Geiges, Andreas; Gosses, Moritz; Nowak, Wolfgang

    2015-04-01

    Data acquisition for monitoring the state in different compartments of complex, coupled environmental systems is often time consuming and expensive. Therefore, experimental monitoring strategies are ideally designed such that most can be learned about the system at minimal costs. Bayesian methods for uncertainty quantification and optimal design (OD) of monitoring strategies are well suited to handle the non-linearity exhibited by most coupled environmental systems. However, their high computational demand restricts their applicability to models with comparatively low run-times. Therefore, pragmatic approaches have been used predominantly in the past where data worth and OD analyses have been restricted to linear or linearised problems and methods. Bayesian (nonlinear) and pragmatic (linear) OD approaches are founded on different assumptions and typically follow different steps in the modelling chain of 1) model calibration, 2) uncertainty quantification, and 3) optimal design analysis. The goal of this study is to follow through these steps for a Bayesian and a pragmatic approach and to discuss the impact of different assumptions (prior uncertainty), calibration strategies, and OD analysis methods on the proposed monitoring designs and their reliability to reduce predictive uncertainty. The OD framework PreDIA (Leube et al. 2012) is used for the nonlinear assessment with a conditional model ensemble obtained with Markov-chain Monte Carlo simulation representing the initial predictive uncertainty. PreDIA can consider any kind of uncertainties and non-linear (statistical) dependencies in data, models, parameters and system drivers during the OD process. In the pragmatic OD approach, the parameter calibration was performed with a non-linear global search and the initial predictive uncertainty was estimated using the PREDUNC utility (Moore and Doherty 2005) of PEST. PREDUNC was also used for the linear OD analysis. We applied PreDIA and PREDUNC for uncertainty

  14. Stochastic learning and optimization a sensitivity-based approach

    CERN Document Server

    Cao, Xi-Ren

    2007-01-01

    Performance optimization is vital in the design and operation of modern engineering systems. This book provides a unified framework based on a sensitivity point of view. It introduces new approaches and proposes new research topics.

  15. Optimization approaches to volumetric modulated arc therapy planning

    Energy Technology Data Exchange (ETDEWEB)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bortfeld, Thomas; Craft, David [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Alber, Markus [Department of Medical Physics and Department of Radiation Oncology, Aarhus University Hospital, Aarhus C DK-8000 (Denmark); Bangert, Mark [Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Heidelberg D-69120 (Germany); Bokrantz, Rasmus [RaySearch Laboratories, Stockholm SE-111 34 (Sweden); Chen, Danny [Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Li, Ruijiang; Xing, Lei [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Men, Chunhua [Department of Research, Elekta, Maryland Heights, Missouri 63043 (United States); Nill, Simeon [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom); Papp, Dávid [Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Romeijn, Edwin [H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Salari, Ehsan [Department of Industrial and Manufacturing Engineering, Wichita State University, Wichita, Kansas 67260 (United States)

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  16. Optimal Configuration of a Redundant Robotic Arm: Compliance Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Applications of robots in tasks where the robot's end-effector bears loads, such as manipulating or assembling an object, picking-and-placing loads, grinding or drilling, demand precision. One aspect that improves precision is the limitation, if not elimination, of manipulator compliance. This paper presents a manipulator compliance optimization approach for determining an optimal manipulator configuration for a given position in the robot's task space. A numerical solution for minimal compliance, a nonlinear constrained optimization problem, is presented for an arbitrary position and illustrated by an example, using a model developed on ADAMS software and using MATLAB optimization tools. Also, this paper investigates the optimal value function for robot tasks in which the tool-point is subjected to applied force as it generates an important trajectory such as in grinding processes. The optimal value function is needed for optimal configuration control.

  17. MONITORING STAFFING LOCAL GOVERNMENT: GENDER APPROACH

    OpenAIRE

    2010-01-01

    Ukraine has adopted a number of international instruments, including the UN Millennium Declaration (2000) [7], which provides for the further development of a gender-democratic society, based on a consensus-driven effort to achieve equal opportunities for women and men. Ignoring the gender approach inhibits the development of all countries: developed countries, developing countries and transition countries alike, including Ukraine, which is taking too-slow steps towards the declared goal.

  18. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    Science.gov (United States)

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.
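    The inverse-optimal protocol design itself is beyond a short sketch, but the consensus dynamics it builds on can be illustrated with the standard protocol on a directed graph, where each agent moves toward its in-neighbours; the four-agent topology and gain below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical directed graph on four agents: adjacency[i][j] = 1 when
# agent i receives information from agent j (an illustrative strongly
# connected topology, as consensus requires).
adjacency = [
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
]

def consensus_step(x, adj, eps=0.3):
    """One step of the standard protocol x_i += eps * sum_j a_ij (x_j - x_i).
    The gain eps must satisfy eps * max_in_degree < 1 for stability."""
    return [
        xi + eps * sum(a * (xj - xi) for a, xj in zip(row, x))
        for xi, row in zip(x, adj)
    ]

x = [4.0, -1.0, 2.5, 0.0]
for _ in range(200):
    x = consensus_step(x, adjacency)
# All agents approach a common value inside the initial convex hull.
```

    The paper's contribution is choosing the protocol weights so that this convergence is also optimal with respect to a quadratic performance index.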

  19. Optimal design of hydrometric monitoring networks with dynamic components based on Information Theory

    Science.gov (United States)

    Alfonso, Leonardo; Chacon, Juan; Solomatine, Dimitri

    2016-04-01

    The EC-FP7 WeSenseIt project proposes the development of a Citizen Observatory of Water, aiming at enhancing environmental monitoring and forecasting with the help of citizens equipped with low-cost sensors and personal devices such as smartphones and smart umbrellas. In this regard, Citizen Observatories may complement the limited data availability in terms of spatial and temporal density, which is of interest, among other areas, for improving hydraulic and hydrological models. At this point, the following question arises: how can citizens, who are part of a citizen observatory, be optimally guided so that the data they collect and send is useful to improve modelling and water management? This research proposes a new methodology to identify the optimal location and timing of potential observations coming from moving sensors of hydrological variables. The methodology is based on Information Theory, which has been widely used in hydrometric monitoring design [1-4]. In particular, it uses the concept of Joint Entropy as a measure of the amount of information contained in a set of random variables, which, in our case, correspond to the time series of hydrological variables captured at given locations in a catchment. The methodology presented is a step forward in the state of the art because it solves the multiobjective optimisation problem of simultaneously finding the minimum number of informative, non-redundant sensors needed at a given time, so that the best configuration of monitoring sites is found at every particular moment in time. To this end, the existing algorithms have been improved to make them efficient. The method is applied to cases in The Netherlands, UK and Italy and proves to have great potential to complement existing in-situ monitoring networks. [1] Alfonso, L., A. Lobbrecht, and R. Price (2010a), Information theory-based approach for location of monitoring water level gauges in polders, Water Resour. Res., 46(3), W03528. [2] Alfonso, L., A
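    As a hedged sketch of the Joint Entropy idea, the code below estimates the joint entropy of discretised time series and greedily selects non-redundant monitoring sites; the site records are invented, and the simple greedy scheme stands in for, rather than reproduces, the authors' multiobjective algorithm.

```python
import math
from collections import Counter

def joint_entropy(series):
    """Shannon joint entropy (bits) of one or more discretised time
    series, estimated from the empirical joint frequencies."""
    joint = Counter(zip(*series))
    n = sum(joint.values())
    return -sum((c / n) * math.log2(c / n) for c in joint.values())

# Hypothetical quantised water-level records at four candidate sites.
sites = {
    "s1": [0, 0, 1, 1, 2, 2, 0, 1],
    "s2": [0, 0, 1, 1, 2, 2, 0, 1],   # redundant copy of s1
    "s3": [2, 1, 0, 2, 1, 0, 2, 1],
    "s4": [0, 1, 0, 1, 0, 1, 0, 1],
}

def greedy_select(sites, k):
    """Pick k sites by greedily maximising joint entropy, so a site
    carrying only redundant information is never added."""
    chosen = []
    while len(chosen) < k:
        best = max(
            (s for s in sites if s not in chosen),
            key=lambda s: joint_entropy([sites[x] for x in chosen + [s]]),
        )
        chosen.append(best)
    return chosen

picked = greedy_select(sites, 2)   # skips the redundant copy "s2"
```

    The greedy criterion rejects "s2" because adding an identical series leaves the joint entropy unchanged.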

  20. Optimal Approach to SAR Image Despeckling

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Speckle filtering of synthetic aperture radar (SAR) images while preserving the spatial signal variability (texture and fine structures) still remains a challenge. Many algorithms have been proposed for SAR imagery despeckling; however, the simulated annealing (SA) method is currently among the best choices. A critical problem in the study of SA is to provide appropriate cooling schedules that ensure fast convergence to near-optimal solutions. This paper gives a new necessary and sufficient condition on the cooling schedule so that the algorithm state converges in probability to the set of globally minimum cost states. Moreover, it constructs an appropriate objective function for SAR image despeckling. An experimental result on an actual SAR image is presented.
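    The trade-off that motivates such conditions can be sketched numerically: the classical sufficient condition for convergence in probability (often attributed to Geman and Geman) requires a logarithmically slow schedule, while the geometric schedules used in practice cool far faster without that guarantee. The constants below are illustrative only.

```python
import math

def log_cooling(k, c=1.0, k0=2):
    """Logarithmic schedule T_k = c / ln(k + k0): slow, but satisfies
    the classical sufficient condition for convergence in probability
    to the set of global minima."""
    return c / math.log(k + k0)

def geometric_cooling(k, t0=1.0, alpha=0.99):
    """Geometric schedule T_k = t0 * alpha^k: fast and common in
    practice, but with no such convergence guarantee."""
    return t0 * alpha ** k

temps_log = [log_cooling(k) for k in range(1, 1001)]
temps_geo = [geometric_cooling(k) for k in range(1, 1001)]
# After 1000 steps the geometric schedule is orders of magnitude colder.
```

    A necessary-and-sufficient condition, as in the record, characterises exactly which schedules between these extremes still converge.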

  1. An optimization approach for the satisfiability problem

    Directory of Open Access Journals (Sweden)

    S. Noureddine

    2015-01-01

    Full Text Available We describe a new approach for solving the satisfiability problem by geometric programming. We focus on the theoretical background and give details of the algorithmic procedure. The algorithm is provably efficient as geometric programming is in essence a polynomial problem. The correctness of the algorithm is discussed. The version of the satisfiability problem we study is exact satisfiability with only positive variables, which is known to be NP-complete.
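    For concreteness, the decision problem the record studies, exact satisfiability with only positive literals (exactly one true variable per clause), can be checked by brute force on small instances; this sketch illustrates the problem itself, not the geometric-programming algorithm the paper describes.

```python
from itertools import product

def exact_sat(clauses, n_vars):
    """Brute-force exact satisfiability with positive literals only:
    a satisfying assignment makes exactly one variable true in each
    clause. Exponential in n_vars; for illustration only."""
    for assignment in product([False, True], repeat=n_vars):
        if all(sum(assignment[v] for v in clause) == 1 for clause in clauses):
            return assignment
    return None

# Variables 0..3; each clause lists variable indices.
clauses = [(0, 1), (1, 2, 3), (0, 3)]
model = exact_sat(clauses, 4)
```

    Here the only satisfying assignment sets variables 0 and 2 true: each clause then contains exactly one true variable.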

  2. Chronopsychological Approach for Optimizing Human Performance.

    Science.gov (United States)

    1980-03-01

    χρόνος, meaning time, and psychology), was first introduced by Folkard (1977), but earlier Halberg (1973) proposed "educative chronobiology" to represent...or phase delay was equally rapid. They felt that the rapid adjustment derives from the individuals' genetic makeup. They noticed, however, that the... chronobiological approach. Agard Lecture Series on "Sleep, Wakefulness and Circadian Rhythm", AGARD-LS-105, 1979.

  3. EPSILON-CONTINUATION APPROACH FOR TRUSS TOPOLOGY OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    GUO Xu; CHENG Gengdong

    2004-01-01

    In the present paper, a so-called epsilon-continuation approach is proposed for finding singular optima in truss topology optimization problems. This approach is an improved version of the epsilon-relaxed approach developed by the authors previously. In the proposed approach, we start the optimization process from a relatively large value of the relaxation parameter and obtain a solution by applying the epsilon-relaxed approach. Then we decrease the value of the relaxation parameter by a small amount and choose the optimal solution found in the previous optimization process as the initial design for the next optimization. This continuation process is repeated until a small termination value of the relaxation parameter is reached. Convergence analysis of the proposed approach is also presented. Numerical examples show that this approach can alleviate the dependence of the final solution on the initial choice of the design variables and enhance the probability of finding the singular optimum from rather arbitrary initial designs.

  4. Optimality approaches to describe characteristic fluvial patterns on landscapes.

    Science.gov (United States)

    Paik, Kyungrock; Kumar, Praveen

    2010-05-12

    Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen as a result of some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification with the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict the microscopic behaviour from a macroscopic characterization, this paper provides a review of similar approaches with the goal of providing a comparison and contrast between them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system level functions. Using observed optimality trends may help simplify problem formulation at appropriate levels of scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at a broader level of environmental systems' viewpoint, i.e. incorporating the dynamic nature of environmental variables and complex feedback mechanisms between fluvial and non-fluvial processes.

  5. A Metaheuristic Approach for IT Projects Portfolio Optimization

    CERN Document Server

    Pushkar, Shashank; Mishra, Akhileshwar

    2010-01-01

    Optimal selection of interdependent IT projects for implementation over multiple periods has been challenging in the framework of real option valuation. This paper presents a mathematical optimization model for a multi-stage portfolio of IT projects. The model optimizes the value of the portfolio within given budgetary and sequencing constraints for each period, where the sequencing constraints arise from time-wise interdependencies among projects. A metaheuristic approach is well suited to this kind of problem, and a genetic algorithm model is proposed for the solution. This optimization model and solution approach can help IT managers make optimal funding decisions when prioritizing projects over multiple sequential periods. The model also gives managers the flexibility to generate alternative portfolios by changing the maximum and minimum number of projects to be implemented in each period.
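    A minimal, hedged sketch of the genetic-algorithm idea for budget-constrained project selection follows; it treats a single period only (ignoring the paper's multi-period sequencing constraints), and the project values, costs, and GA parameters are invented for illustration.

```python
import random

# Hypothetical IT projects as (value, cost) pairs; BUDGET caps total cost.
PROJECTS = [(10, 4), (7, 3), (5, 2), (12, 6), (3, 1), (8, 5)]
BUDGET = 10

def fitness(bits):
    """Portfolio value; infeasible (over-budget) portfolios score 0."""
    value = sum(v for (v, c), b in zip(PROJECTS, bits) if b)
    cost = sum(c for (v, c), b in zip(PROJECTS, bits) if b)
    return value if cost <= BUDGET else 0

def ga(pop_size=30, generations=60, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in PROJECTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                        # elitism: keep the best two
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)      # select among the fittest
            cut = rng.randrange(1, len(PROJECTS))
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if rng.random() < 0.1:                # bit-flip mutation
                i = rng.randrange(len(PROJECTS))
                child[i] ^= 1
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = ga()
best_value = fitness(best)
```

    The multi-period version in the paper would evaluate a chromosome per period and penalise violations of the sequencing constraints in the same fitness function.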

  6. Monitoring and Optimization of ATLAS Tier 2 Center GoeGrid

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219638; Yahyapour, Ramin

    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. In order to ease the provisioning of digital services to the growing number of consumers, more and more distributed computing systems and platforms are actively developed and employed. The building blocks of distributed computing infrastructures are single computing centres, such as the Worldwide LHC Computing Grid Tier 2 centre GoeGrid. The main motivation of this thesis was the optimization of GoeGrid performance through efficient monitoring. The goal has been achieved by analysing the GoeGrid monitoring information. The data analysis approach was based on an adaptive-network-based fuzzy inference system (ANFIS) and machine learning algorithms such as the linear Support Vector Machine (SVM). The main object of the research was the digital service, since the availability, reliability and serviceability of the computing platform can be measured according to the const...

  7. Assessing and optimizing the performance of infrasound networks to monitor volcanic eruptions

    Science.gov (United States)

    Tailpied, Dorianne; Le Pichon, Alexis; Marchetti, Emanuele; Assink, Jelle; Vergniolle, Sylvie

    2017-01-01

    We propose a numerical modeling technique based on a frequency-dependent attenuation relation to assess, quantify and optimize the performance of any arbitrary infrasound network to monitor explosive sources such as volcanic eruptions. Simulations are further enhanced by including realistic sources and propagation effects. We apply our approach to both hemispheres by considering the Euro-Mediterranean and the Eastern Australian regions. In these regions, we use quasi-permanent infrasound signals from Mt. Etna recorded in Tunisia and from Mt. Yasur recorded in New Caledonia. These well-instrumented volcanoes offer a unique opportunity to validate our attenuation model. In particular, accurate comparisons between near- and far-field recordings demonstrate the potential of the proposed methodology to remotely monitor volcanoes. A good agreement is found between modeled and observed results, especially when incorporating representative 10 m s-1 wind perturbations in the atmospheric specifications according to previous campaign measurements. To optimize the network layout and ensure the best monitoring of the volcanoes, we proceed through a grid search to find optimum locations for an additional array. We show that adding one array at an appropriate location in both regions under study could significantly improve detections for half of the year. The application of the proposed methodology can provide, in near real-time, a realistic confidence level of volcanic eruption detections, useful for mitigating the risk of aircraft encountering volcanic ash.

  8. Comparison of Ensemble and Adjoint Approaches to Variational Optimization of Observational Arrays

    Science.gov (United States)

    Nechaev, D.; Panteleev, G.; Yaremchuk, M.

    2015-12-01

    Comprehensive monitoring of the circulation in the Chukchi Sea and Bering Strait is one of the key prerequisites for a successful long-term forecast of the Arctic Ocean state. Since the number of continuously maintained observational platforms is restricted by logistical and political constraints, the configuration of such an observing system should be guided by an objective strategy that optimizes the observing system coverage, design, and the expenses of monitoring. The presented study addresses the optimization of a system consisting of a limited number of observational platforms with respect to reducing the uncertainties in monitoring the volume/freshwater/heat transports through a set of key sections in the Chukchi Sea and Bering Strait. Variational algorithms for the optimization of observational arrays are verified in a test bed of 4Dvar-optimized summer-fall circulations in the Pacific sector of the Arctic Ocean. The results of an optimization approach based on a low-dimensional ensemble of model solutions are compared against a more conventional algorithm involving the application of the tangent linear and adjoint models. Special attention is paid to the computational efficiency and portability of the optimization procedure.

  9. A practical multiscale approach for optimization of structural damping

    DEFF Research Database (Denmark)

    Andreassen, Erik; Jensen, Jakob Søndergaard

    2016-01-01

    A simple and practical multiscale approach suitable for topology optimization of structural damping in a component ready for additive manufacturing is presented. The approach consists of two steps: First, the homogenized loss factor of a two-phase material is maximized. This is done in order

  10. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.

    Science.gov (United States)

    Yan, Zheng; Wang, Jun; Li, Guocheng

    2014-07-01

    This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has the property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by utilizing a group of recurrent neural networks in the framework of particle swarm optimization, emulating the paradigm of brainstorming. Each recurrent neural network carries out precise constrained local search according to its own neurodynamic equations. By iteratively improving the solution quality of each recurrent neural network using the information of the locally best known solution and the globally best known solution, the group can obtain the global optimal solution to a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint-handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated using multimodal benchmark functions.
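    The one-layer projection network mentioned above has, in discrete time, the flavour of projected gradient descent onto the bound constraints; the toy quadratic below is an assumption for illustration, and its iterates converge to the Karush-Kuhn-Tucker point on the boundary of the box.

```python
def project(x, lo, hi):
    """Project a point onto the box [lo, hi] componentwise."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=500):
    """Projected gradient descent: a discrete-time analogue of a
    projection neural network's local search under bound constraints."""
    x = x0[:]
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# Minimise f(x) = (x0 - 3)^2 + (x1 + 1)^2 over the box [0, 2] x [0, 2].
# The unconstrained minimum (3, -1) lies outside, so the KKT point is (2, 0).
grad = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
x_star = projected_gradient(grad, [1.0, 1.0], [0.0, 0.0], [2.0, 2.0])
```

    The collective scheme in the record runs many such local searchers and shares best-known solutions between them, in the manner of particle swarm optimization.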

  11. An Efficient PageRank Approach for Urban Traffic Optimization

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2012-01-01

    to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed using an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.
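    A plain power-iteration PageRank, as in Page et al. (1999), can be sketched on a small hypothetical road graph; the node names and links are illustrative, and a real deployment would derive the graph and scores from the road network and observed traffic.

```python
# Hypothetical road graph: each intersection "links" to the downstream
# intersections reachable from it.
graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(graph, damping=0.85, iterations=100):
    """Power-iteration PageRank. Every node here has out-links, so no
    dangling-node correction is needed and the ranks sum to 1."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in graph.items():
            share = damping * rank[v] / len(outs)
            for w in outs:
                new[w] += share
        rank = new
    return rank

ranks = pagerank(graph)   # "C" receives the most in-links and ranks highest
```

    In the traffic setting of the record, such road scores feed a cost function that ranks the phases of each traffic light.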

  12. A "Hybrid" Approach for Synthesizing Optimal Controllers of Hybrid Systems

    DEFF Research Database (Denmark)

    Zhao, Hengjun; Zhan, Naijun; Kapur, Deepak

    2012-01-01

    We propose an approach to reduce the optimal controller synthesis problem of hybrid systems to quantifier elimination; furthermore, we also show how to combine quantifier elimination with numerical computation in order to make it more scalable while keeping errors arising from discretization manageable and within bounds. A major advantage of our approach is not only that it avoids errors due to numerical computation, but also that it gives a better optimal controller. In order to illustrate our approach, we use the real industrial example of an oil pump provided by the German company HYDAC...

  13. Optimized Enhanced Bioremediation Through 4D Geophysical Monitoring and Autonomous Data Collection, Processing and Analysis

    Science.gov (United States)

    2014-09-01

    (ER-200717) Optimized Enhanced Bioremediation Through 4D Geophysical Monitoring and Autonomous Data Collection, Processing and Analysis. [The record retains only report-form metadata and table-of-contents fragments, including the section "The Geophysical Signatures of Bioremediation".]

  14. Condition based maintenance optimization for wind power generation systems under continuous monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Zhigang; Wu, Bairong; Ding, Fangfang [Concordia Institute for Information Systems Engineering, Concordia University, 1515 Ste-Catherine Street, West EV-7.637, Montreal (Canada); Jin, Tongdan [Ingram School of Engineering, Texas State University (United States)

    2011-05-15

    By utilizing condition monitoring information collected from wind turbine components, condition based maintenance (CBM) strategy can be used to reduce the operation and maintenance costs of wind power generation systems. The existing CBM methods for wind power generation systems deal with wind turbine components separately, that is, maintenance decisions are made on individual components, rather than the whole system. However, a wind farm generally consists of multiple wind turbines, and each wind turbine has multiple components including main bearing, gearbox, generator, etc. There are economic dependencies among wind turbines and their components. That is, once a maintenance team is sent to the wind farm, it may be more economical to take the opportunity to maintain multiple turbines, and when a turbine is stopped for maintenance, it may be more cost-effective to simultaneously replace multiple components which show relatively high risks. In this paper, we develop an optimal CBM solution to the above-mentioned issues. The proposed maintenance policy is defined by two failure probability threshold values at the wind turbine level. Based on the condition monitoring and prognostics information, the failure probability values at the component and the turbine levels can be calculated, and the optimal CBM decisions can be made accordingly. A simulation method is developed to evaluate the cost of the CBM policy. A numerical example is provided to illustrate the proposed CBM approach. A comparative study based on commonly used constant-interval maintenance policy demonstrates the advantage of the proposed CBM approach in reducing the maintenance cost. (author)
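
    The failure-probability-threshold policy and the simulation-based cost evaluation described above can be sketched for a single component; the degradation model, costs, and threshold below are hypothetical stand-ins, not the paper's wind-turbine model.

```python
# Toy Monte Carlo comparison of a failure-probability-threshold CBM policy
# versus run-to-failure for one component (all parameters are assumed).
import random

def simulate(threshold, days=200000, prev_cost=1.0, fail_cost=5.0, seed=0):
    rng = random.Random(seed)
    cost, age = 0.0, 0
    for _ in range(days):
        age += 1
        p_fail = min(1.0, 1e-4 * age)    # assumed linearly ageing hazard
        if p_fail >= threshold:          # CBM: cheap preventive replacement
            cost += prev_cost
            age = 0
        elif rng.random() < p_fail:      # expensive corrective replacement
            cost += fail_cost
            age = 0
    return cost / days                   # average cost per day

cbm = simulate(threshold=0.005)
run_to_failure = simulate(threshold=1.1)   # threshold never reached
print(cbm, run_to_failure)
```

    With these assumed costs the threshold policy replaces the component before the hazard grows large, so its average daily cost comes out below run-to-failure, mirroring the paper's comparison against a non-condition-based policy.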

  15. Multiobjective Network Optimization for Soil Monitoring of the Loess Hilly Region in China

    Directory of Open Access Journals (Sweden)

    Dianfeng Liu

    2014-01-01

    Full Text Available The soil monitoring network plays an important role in detecting the spatial distribution of soil attributes and facilitates sustainable land-use decision making. Designing a regional monitoring network effectively requires trading off reduced costs, higher speed, and greater scope against some loss of accuracy. In this paper, we present a stochastic optimization design method for regional soil carbon and water content monitoring networks with a minimum sample size, based on a modified particle swarm optimization algorithm equipped with a multiobjective optimization technique. Our effort is to reconcile the conflicts between various objectives, that is, kriging variance, survey budget, spatial accessibility, spatial interval, and the number of monitoring sites. We applied the method to optimize the soil monitoring networks in a semiarid loess hilly area located in northwest China. The results reveal that the proposed method is both effective and robust and outperforms the standard binary particle swarm optimization and spatial simulated annealing algorithm.
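
    One of the baselines named here, spatial simulated annealing, can be sketched for site selection; the candidate coordinates, the max-min-distance spread objective, and the cooling schedule below are illustrative assumptions, not the study's configuration.

```python
# Simulated-annealing sketch: choose k monitoring sites from candidate points
# so that the selected sites are well spread (maximize min inter-site distance).
import math, random

def min_dist(sel, pts):
    return min(math.dist(pts[a], pts[b])
               for i, a in enumerate(sel) for b in sel[i + 1:])

def anneal(pts, k, steps=2000, seed=1):
    rng = random.Random(seed)
    sel = rng.sample(range(len(pts)), k)
    best, best_val = sel[:], min_dist(sel, pts)
    temp = 1.0
    for _ in range(steps):
        cand = sel[:]                       # swap one site for an unused one
        cand[rng.randrange(k)] = rng.choice(
            [i for i in range(len(pts)) if i not in cand])
        val, cur = min_dist(cand, pts), min_dist(sel, pts)
        # accept improvements always, worsenings with Boltzmann probability
        if val > cur or rng.random() < math.exp((val - cur) / max(temp, 1e-9)):
            sel = cand
        if val > best_val:
            best, best_val = cand[:], val
        temp *= 0.995                       # geometric cooling
    return best, best_val

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(40)]
sites, spread = anneal(pts, k=5)
```

    A kriging-variance objective would slot in where `min_dist` is evaluated; the acceptance loop itself is unchanged.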

  16. [Study on the optimization of monitoring indicators of drinking water quality during health supervision].

    Science.gov (United States)

    Ye, Bixiong; E, Xueli; Zhang, Lan

    2015-01-01

    The aim was to optimize the non-regular drinking water quality indices (except Giardia and Cryptosporidium) of urban drinking water. Several methods were applied, including the rate at which water quality exceeded the standard, the risk of exceeding the standard, the frequency of detecting concentrations below the detection limit, a comprehensive water quality index evaluation, and the attribute reduction algorithm of rough set theory; redundant water quality indicators were eliminated, and the control factors that play a leading role in drinking water safety were identified. Optimization results showed that, of 62 non-regular water quality monitoring indicators of urban drinking water, 42 indicators could be reduced through comprehensive evaluation combined with attribute reduction of rough sets. Optimizing the water quality monitoring indicators and reducing the monitoring indicators and monitoring frequency can ensure the safety of drinking water quality while lowering monitoring costs and reducing the monitoring burden on the sanitation supervision departments.

  17. A team-based approach to reducing cardiac monitor alarms.

    Science.gov (United States)

    Dandoy, Christopher E; Davies, Stella M; Flesch, Laura; Hayward, Melissa; Koons, Connie; Coleman, Kristen; Jacobs, Jodi; McKenna, Lori Ann; Olomajeye, Alero; Olson, Chad; Powers, Jessica; Shoemaker, Kimberly; Jodele, Sonata; Alessandrini, Evaline; Weiss, Brian

    2014-12-01

    Excessive cardiac monitor alarms lead to desensitization and alarm fatigue. We created and implemented a standardized cardiac monitor care process (CMCP) on a 24-bed pediatric bone marrow transplant unit. The aim of this project was to decrease monitor alarms through the use of team-based standardized care and processes. Using small tests of change, we developed and implemented a standardized CMCP that included: (1) a process for initial ordering of monitor parameters based on age-appropriate standards; (2) pain-free daily replacement of electrodes; (3) daily individualized assessment of cardiac monitor parameters; and (4) a reliable method for appropriate discontinuation of monitor. The Model for Improvement was used to design, test, and implement changes. The changes that were implemented after testing and adaptation were: family/patient engagement in the CMCP; creation of a monitor care log to address parameters, lead changes, and discontinuation; development of a pain-free process for electrode removal; and customized monitor delay and customized threshold parameters. From January to November 2013, percent compliance with each of the 4 components of the CMCP increased. Overall compliance with the CMCP increased from a median of 38% to 95%. During this time, the median number of alarms per patient-day decreased from 180 to 40. Implementation of the standardized CMCP resulted in a significant decrease in cardiac monitor alarms per patient day. We recommend a team-based approach to monitor care, including individualized assessment of monitor parameters, daily lead change, and proper discontinuation of the monitors. Copyright © 2014 by the American Academy of Pediatrics.

  18. A novel approach for optimal chiller loading using particle swarm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ardakani, A. Jahanbani; Ardakani, F. Fattahi; Hosseinian, S.H. [Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Hafez Avenue, Tehran 15875-4413 (Iran, Islamic Republic of)

    2008-07-01

    This study employs two new methods to solve the optimal chiller loading (OCL) problem: continuous genetic algorithm (GA) and particle swarm optimization (PSO). Because of the continuous nature of the variables in the OCL problem, continuous GA and PSO easily overcome deficiencies of other conventional optimization methods. The partial load ratio (PLR) of the chiller is chosen as the variable to be optimized, and the consumption power of the chiller is considered as the fitness function. Both methods find the optimal solution while exactly satisfying the equality constraint. Major advantages of the proposed approaches over other conventional methods include fast convergence, escape from local optima, simple implementation, and independence of the solution from the problem. The abilities of the proposed methods are examined with reference to an example system, and results are compared with a binary genetic algorithm method. The proposed approaches can be perfectly applied to air-conditioning systems. (author)
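
    A continuous-PSO treatment of OCL can be sketched as below. The chiller capacities, quadratic power curves, load, and penalty weight are invented for illustration; the paper's repair of the equality constraint may differ (here the last PLR is solved from the load, with a penalty on any residual shortfall).

```python
# Continuous PSO sketch for optimal chiller loading: minimize total power of
# three chillers whose PLR-weighted capacities must meet the cooling load.
import random

CAP = [800.0, 800.0, 500.0]                            # capacities in kW (assumed)
COEF = [(100, 40, 160), (80, 30, 220), (60, 20, 140)]  # power = a + b*PLR + c*PLR^2
LOAD = 1400.0

def repair(x):
    """Clip the two free PLRs and solve the third from the load constraint."""
    x = [min(1.0, max(0.0, v)) for v in x]
    x3 = (LOAD - CAP[0] * x[0] - CAP[1] * x[1]) / CAP[2]
    return [x[0], x[1], min(1.0, max(0.0, x3))]

def fitness(x):
    plr = repair(x)
    power = sum(a + b * v + c * v * v for (a, b, c), v in zip(COEF, plr))
    supplied = sum(c * v for c, v in zip(CAP, plr))
    return power + 10.0 * abs(LOAD - supplied)   # penalty keeps the swarm feasible

def pso(n=30, iters=300, seed=2):
    rng = random.Random(seed)
    pos = [[rng.random(), rng.random()] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest, pval = [p[:] for p in pos], [fitness(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):   # inertia + cognitive + social velocity update
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pval[i]:
                pbest[i], pval[i] = pos[i][:], f
                if f < gval:
                    gbest, gval = pos[i][:], f
    return repair(gbest), gval

plr, best = pso()
```

    At the optimum the marginal power per kW of cooling is equalized across chillers, which is what the swarm converges toward.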

  19. Departures from optimality when pursuing multiple approach or avoidance goals.

    Science.gov (United States)

    Ballard, Timothy; Yeo, Gillian; Neal, Andrew; Farrell, Simon

    2016-07-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit.

  20. A Riccati approach for constrained linear quadratic optimal control

    Science.gov (United States)

    Sideris, Athanasios; Rodriguez, Luis A.

    2011-02-01

    An active-set method is proposed for solving linear quadratic optimal control problems subject to general linear inequality path constraints including mixed state-control and state-only constraints. A Riccati-based approach is developed for efficiently solving the equality constrained optimal control subproblems generated during the procedure. The solution of each subproblem requires computations that scale linearly with the horizon length. The algorithm is illustrated with numerical examples.
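
    The Riccati recursion used to solve each (unconstrained) equality-constrained subproblem can be sketched in the scalar discrete-time case; the dynamics and weights below are made-up numbers, and the active-set machinery for inequality constraints is not reproduced.

```python
# Backward Riccati recursion for a scalar discrete-time LQ problem:
# minimize sum(Q*x^2 + R*u^2) + Qf*x_N^2 subject to x' = A*x + B*u.

def lqr_gains(A, B, Q, R, Qf, N):
    P = Qf
    gains = []
    for _ in range(N):                       # backward pass, O(N) as in the paper
        K = (B * P * A) / (R + B * P * B)    # feedback gain, u = -K x
        P = Q + A * P * A - A * P * B * K    # Riccati update
        gains.append(K)
    return list(reversed(gains)), P          # gains[0] = K_0, P = P_0

def rollout(x0, gains, A, B, Q, R, Qf):
    x, cost = x0, 0.0
    for K in gains:
        u = -K * x
        cost += Q * x * x + R * u * u
        x = A * x + B * u
    return cost + Qf * x * x

gains, P0 = lqr_gains(A=1.1, B=0.5, Q=1.0, R=1.0, Qf=1.0, N=20)
opt_cost = rollout(2.0, gains, 1.1, 0.5, 1.0, 1.0, 1.0)
print(opt_cost, P0 * 2.0 ** 2)   # the two agree: J*(x0) = P0 * x0^2
```

    The linear-in-horizon cost comes from the single backward sweep followed by a single forward rollout.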

  1. A Neurodynamic Optimization Approach to Bilevel Quadratic Programming.

    Science.gov (United States)

    Qin, Sitian; Le, Xinyi; Wang, Jun

    2016-08-19

    This paper presents a neurodynamic optimization approach to bilevel quadratic programming (BQP). Based on the Karush-Kuhn-Tucker (KKT) theorem, the BQP problem is reduced to a one-level mathematical program subject to complementarity constraints (MPCC). It is proved that the global solution of the MPCC is the minimal one of the optimal solutions to multiple convex optimization subproblems. A recurrent neural network is developed for solving these convex optimization subproblems. From any initial state, the state of the proposed neural network is convergent to an equilibrium point of the neural network, which is just the optimal solution of the convex optimization subproblem. Compared with existing recurrent neural networks for BQP, the proposed neural network is guaranteed for delivering the exact optimal solutions to any convex BQP problems. Moreover, it is proved that the proposed neural network for bilevel linear programming is convergent to an equilibrium point in finite time. Finally, three numerical examples are elaborated to substantiate the efficacy of the proposed approach.

  2. Universal approach to optimal photon storage in atomic media.

    Science.gov (United States)

    Gorshkov, Alexey V; André, Axel; Fleischhauer, Michael; Sørensen, Anders S; Lukin, Mikhail D

    2007-03-23

    We present a universal physical picture for describing storage and retrieval of photon wave packets in a Lambda-type atomic medium. This physical picture encompasses a variety of different approaches to pulse storage ranging from adiabatic reduction of the photon group velocity and pulse-propagation control via off-resonant Raman fields to photon-echo-based techniques. Furthermore, we derive an optimal control strategy for storage and retrieval of a photon wave packet of any given shape. All these approaches, when optimized, yield identical maximum efficiencies, which only depend on the optical depth of the medium.

  3. Computational Approach for Multi Performances Optimization of EDM

    Directory of Open Access Journals (Sweden)

    Yusoff Yusliza

    2016-01-01

    Full Text Available This paper proposes a new computational approach to obtaining optimal parameters for multi-performance EDM. Regression and an artificial neural network (ANN) are used as the modeling techniques, while a multi-objective genetic algorithm (multiGA) is used as the optimization technique. Orthogonal array L256 is implemented in the procedure of network function and network architecture selection. Experimental studies are carried out to verify the machining performances suggested by this approach. The highest MRR value obtained from OrthoANN – MPR – MultiGA is 205.619 mg/min and the lowest Ra value is 0.0223 μm.

  4. Monitoring simultaneous photocatalytic-ozonation of mixture of pharmaceuticals in the presence of immobilized TiO2 nanoparticles using MCR-ALS: Identification of intermediates and multi-response optimization approach.

    Science.gov (United States)

    Fathinia, Mehrangiz; Khataee, Alireza; Naseri, Abdolhosein; Aber, Soheil

    2015-02-05

    The present study has focused on the degradation of a mixture of three pharmaceuticals, i.e. methyldopa (MDP), nalidixic acid (NAD) and famotidine (FAM) which were quantified simultaneously during photocatalytic-ozonation process. The experiments were conducted in a semi-batch reactor where TiO2 nanoparticles (crystallites mean size 8nm) were immobilized on ceramic plates irradiated by UV-A light in the proximity of oxygen and/or ozone. The surface morphology and roughness of the bare and TiO2-coated ceramic plates were analyzed using scanning electron microscopy (SEM) and atomic force microscopy (AFM). An analytical methodology was successfully developed based on both recording ultraviolet-visible (UV-Vis) spectra during the degradation process and a data analysis using multivariate curve resolution with alternating least squares (MCR-ALS). This methodology enabled the researchers to obtain the concentration and spectral profiles of the chemical compounds which were involved in the process. A central composite design was used to study the effect of several factors on multiple responses namely MDP removal (Y1), NAD removal (Y2) and FAM removal (Y3) in the simultaneous photocatalytic-ozonation of these pharmaceuticals. A multi-response optimization procedure based on global desirability of the factors was used to simultaneously maximize Y1, Y2 and Y3. The results of the global desirability revealed that 8mg/L MAD, 8mg/L NAD, 8mg/L FAM, 6L/h ozone flow rate and a 30min-reaction time were the best conditions under which the optimized values of various responses were Y1=95.03%, Y2=84.93% and Y3=99.15%. Also, the intermediate products of pharmaceuticals generated in the photocatalytic-ozonation process were identified by gas chromatography coupled to mass spectrometry.

  5. Monitoring simultaneous photocatalytic-ozonation of mixture of pharmaceuticals in the presence of immobilized TiO2 nanoparticles using MCR-ALS: Identification of intermediates and multi-response optimization approach

    Science.gov (United States)

    Fathinia, Mehrangiz; Khataee, Alireza; Naseri, Abdolhosein; Aber, Soheil

    2015-02-01

    The present study has focused on the degradation of a mixture of three pharmaceuticals, i.e. methyldopa (MDP), nalidixic acid (NAD) and famotidine (FAM) which were quantified simultaneously during photocatalytic-ozonation process. The experiments were conducted in a semi-batch reactor where TiO2 nanoparticles (crystallites mean size 8 nm) were immobilized on ceramic plates irradiated by UV-A light in the proximity of oxygen and/or ozone. The surface morphology and roughness of the bare and TiO2-coated ceramic plates were analyzed using scanning electron microscopy (SEM) and atomic force microscopy (AFM). An analytical methodology was successfully developed based on both recording ultraviolet-visible (UV-Vis) spectra during the degradation process and a data analysis using multivariate curve resolution with alternating least squares (MCR-ALS). This methodology enabled the researchers to obtain the concentration and spectral profiles of the chemical compounds which were involved in the process. A central composite design was used to study the effect of several factors on multiple responses namely MDP removal (Y1), NAD removal (Y2) and FAM removal (Y3) in the simultaneous photocatalytic-ozonation of these pharmaceuticals. A multi-response optimization procedure based on global desirability of the factors was used to simultaneously maximize Y1, Y2 and Y3. The results of the global desirability revealed that 8 mg/L MAD, 8 mg/L NAD, 8 mg/L FAM, 6 L/h ozone flow rate and a 30 min-reaction time were the best conditions under which the optimized values of various responses were Y1 = 95.03%, Y2 = 84.93% and Y3 = 99.15%. Also, the intermediate products of pharmaceuticals generated in the photocatalytic-ozonation process were identified by gas chromatography coupled to mass spectrometry.

  6. Fuzzy Inspired Hybrid Genetic Approach to Optimize Travelling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Bindu

    2012-06-01

    Full Text Available The Travelling Salesman Problem (TSP) is a well-known NP-complete problem; such problems are exponential in nature and take considerable time to solve. In the present work we optimize this common NP-complete problem. We define a genetic approach combined with a fuzzy approach, and implement a modified DPX crossover to improve the genetic algorithm. The work is implemented in the MATLAB environment, and the obtained results show that the defined approach improves on the results of the existing genetic algorithm.
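
    A minimal GA for TSP can be sketched as follows; this uses plain binary-tournament selection, elitism, and inversion mutation on random cities, and does not reproduce the paper's fuzzy component or modified DPX crossover.

```python
# Minimal genetic-algorithm sketch for TSP on random 2-D cities.
import math, random

def tour_len(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def ga_tsp(pts, pop_size=60, gens=300, seed=3):
    rng = random.Random(seed)
    n = len(pts)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda t: tour_len(t, pts))
    for _ in range(gens):
        nxt = [best[:]]                          # elitism: keep the best tour
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)            # binary tournament selection
            child = min(a, b, key=lambda t: tour_len(t, pts))[:]
            i, j = sorted(rng.sample(range(n), 2))
            child[i:j] = reversed(child[i:j])    # inversion (2-opt style) mutation
            nxt.append(child)
        pop = nxt
        best = min(pop, key=lambda t: tour_len(t, pts))
    return best

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(15)]
best = ga_tsp(pts)
```

    After a few hundred generations the evolved tour is far shorter than an arbitrary visiting order of the same cities.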

  7. Nonlinear Cointegration Approach for Condition Monitoring of Wind Turbines

    Directory of Open Access Journals (Sweden)

    Konrad Zolna

    2015-01-01

    Full Text Available Monitoring of trends and removal of undesired trends from operational/process parameters in wind turbines is important for their condition monitoring. This paper presents the homoscedastic nonlinear cointegration for the solution to this problem. The cointegration approach used leads to stable variances in cointegration residuals. The adapted Breusch-Pagan test procedure is developed to test for the presence of heteroscedasticity in cointegration residuals obtained from the nonlinear cointegration analysis. Examples using three different time series data sets—that is, one with a nonlinear quadratic deterministic trend, another with a nonlinear exponential deterministic trend, and experimental data from a wind turbine drivetrain—are used to illustrate the method and demonstrate possible practical applications. The results show that the proposed approach can be used for effective removal of nonlinear trends from various types of data, allowing for possible condition monitoring applications.
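
    At its core, a Breusch-Pagan-style check regresses squared residuals on the regressor and compares the LM statistic n·R² to a chi-square critical value. A generic single-regressor sketch on synthetic data (not the turbine dataset, and not the paper's adapted procedure):

```python
# Breusch-Pagan-style LM test for heteroscedasticity in residuals.
import random

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b                 # intercept, slope

def breusch_pagan_lm(x, resid):
    """LM = n * R^2 from regressing squared residuals on x (1 df)."""
    n = len(x)
    y = [e * e for e in resid]
    a, b = ols(x, y)
    my = sum(y) / n
    ss_tot = sum((yi - my) ** 2 for yi in y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return n * (1.0 - ss_res / ss_tot)

rng = random.Random(4)
x = [i / 100 for i in range(200)]
homo = [rng.gauss(0, 1.0) for _ in x]             # constant variance
hetero = [rng.gauss(0, 0.2 + xi) for xi in x]     # variance grows with x
stat_homo = breusch_pagan_lm(x, homo)
stat_hetero = breusch_pagan_lm(x, hetero)
# Compare each statistic to the 5% chi-square(1) critical value, 3.84.
print(stat_homo, stat_hetero)
```

    Residuals whose variance grows with the regressor yield a statistic well above 3.84, while homoscedastic residuals typically stay below it.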

  8. A global optimization approach to multi-polarity sentiment analysis.

    Science.gov (United States)

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. 

  9. A NEW APPROACH TO PIP CROP MONITORING USING REMOTE SENSING

    Science.gov (United States)

    Current plantings of 25+ million acres of transgenic corn in the United States require a new approach to monitor this important crop for the development of pest resistance. Remote sensing by aerial or satellite images may provide a method of identifying transgenic pesticidal cro...

  10. Remotely sensed monitoring of small reservoir dynamics: a Bayesian approach

    NARCIS (Netherlands)

    Eilander, D.M.; Annor, F.O.; Iannini, L.; Van de Giesen, N.C.

    2014-01-01

    Multipurpose small reservoirs are important for livelihoods in rural semi-arid regions. To manage and plan these reservoirs and to assess their hydrological impact at a river basin scale, it is important to monitor their water storage dynamics. This paper introduces a Bayesian approach for monitoring ...

  11. Monitoring contaminant strategies: tools, techniques, methodologies and model approaches

    Science.gov (United States)

    A century-long history of experiments on solute transport in soils has resulted in a wide range of experimental setups and procedures, as well as methods for interpreting observations which has led to considerable ambiguity regarding monitoring approaches. This presentation will focus on results an...

  12. Optimizing master event templates for CTBT monitoring with dimensionality reduction techniques: real waveforms vs. synthetics.

    Science.gov (United States)

    Rozhkov, Mikhail; Bobrov, Dmitry; Kitov, Ivan

    2014-05-01

    The Master Event technique is a powerful tool for Expert Technical Analysis within the CTBT framework as well as for real-time monitoring with the waveform cross-correlation (CC) (match filter) approach. The primary goal of CTBT monitoring is detection and location of nuclear explosions. Therefore, cross-correlation monitoring should be focused on finding such events. The use of physically adequate waveform templates may significantly increase the number of valid, both natural and manmade, events in the Reviewed Event Bulletin (REB) of the International Data Centre. Inadequate templates for master events may increase the number of CTBT-irrelevant events in the REB and reduce the sensitivity of the CC technique to valid events. In order to cover the entire earth, including vast aseismic territories, with CC-based nuclear test monitoring, we conducted thorough research and defined the most appropriate real and synthetic master events representing underground explosion sources. A procedure was developed for optimizing the master event template simulation and narrowing the classes of CC templates used in the detection and location process, based on principal and independent component analysis (PCA and ICA). Actual waveforms and metadata from the DTRA Verification Database were used to validate our approach. The detection and location results based on real and synthetic master events were compared. The prototype of the CC-based Global Grid monitoring system developed at the IDC during the last year was populated with different hybrid waveform templates (synthetics, synthetics components, and real components) and its performance was assessed with the world seismicity data flow, including the DPRK-2013 event. The specific features revealed in this study for the P-waves from the DPRK underground nuclear explosions (UNEs) can reduce the global detection threshold of seismic monitoring under the CTBT by 0.5 units of magnitude. This corresponds to the reduction in the test yield by a...
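
    The match-filter step underlying the Master Event technique is normalized cross-correlation of a template against a continuous trace. A synthetic sketch (the waveform shape, noise level, and event position are invented, not IDC data):

```python
# Normalized waveform cross-correlation (match filter): slide a master
# template over a noisy trace and flag the lag with the highest correlation.
import math, random

def norm_xcorr(trace, tmpl):
    m = len(tmpl)
    mt = sum(tmpl) / m
    t0 = [v - mt for v in tmpl]
    nt = math.sqrt(sum(v * v for v in t0))
    out = []
    for lag in range(len(trace) - m + 1):
        w = trace[lag:lag + m]
        mw = sum(w) / m
        w0 = [v - mw for v in w]
        nw = math.sqrt(sum(v * v for v in w0))
        out.append(sum(a * b for a, b in zip(t0, w0)) / (nt * nw) if nw else 0.0)
    return out

rng = random.Random(5)
tmpl = [math.sin(0.3 * i) * math.exp(-0.05 * i) for i in range(40)]  # master waveform
trace = [0.3 * rng.gauss(0, 1) for _ in range(300)]                  # background noise
for i, v in enumerate(tmpl):
    trace[120 + i] += v               # bury the master event at sample 120
cc = norm_xcorr(trace, tmpl)
detect = max(range(len(cc)), key=cc.__getitem__)
```

    Because the correlation coefficient is normalized, a fixed detection threshold can be applied across stations; the buried event stands out at its true offset even below the visual noise floor.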

  13. Y-12 Groundwater Protection Program Monitoring Optimization Plan for Groundwater Monitoring Wells at the U.S. Department of Energy Y-12 National Security Complex, Oak Ridge, Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2017-04-01

    This document is the monitoring optimization plan for groundwater monitoring wells associated with the U.S. Department of Energy (DOE) Y-12 National Security Complex (Y-12) in Oak Ridge, Tennessee. The plan describes the technical approach that is implemented under the Y-12 Groundwater Protection Program (GWPP) to focus available resources on the monitoring wells at Y-12 that provide the most useful hydrologic and groundwater quality monitoring data. The technical approach is based on the GWPP status designation for each well. Under this approach, wells granted “active” status are used by the GWPP for hydrologic monitoring and/or groundwater quality sampling, whereas wells granted “inactive” status are not used for either purpose. The status designation also defines the frequency at which the GWPP will inspect applicable wells, the scope of these well inspections, and extent of any maintenance actions initiated by the GWPP. Details regarding the ancillary activities associated with implementation of this plan (e.g., well inspection) are deferred to the referenced GWPP plans.

  14. Y-12 Groundwater Protection Program Monitoring Optimization Plan For Groundwater Monitoring Wells At The U.S. Department Of Energy Y-12 National Security Complex, Oak Ridge, Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2013-09-01

    This document is the monitoring optimization plan for groundwater monitoring wells associated with the U.S. Department of Energy (DOE) Y-12 National Security Complex (Y-12) in Oak Ridge, Tennessee. The plan describes the technical approach that is implemented under the Y-12 Groundwater Protection Program (GWPP) to focus available resources on the monitoring wells at Y-12 that provide the most useful hydrologic and groundwater quality monitoring data. The technical approach is based on the GWPP status designation for each well. Under this approach, wells granted "active" status are used by the GWPP for hydrologic monitoring and/or groundwater quality sampling, whereas wells granted "inactive" status are not used for either purpose. The status designation also defines the frequency at which the GWPP will inspect applicable wells, the scope of these well inspections, and extent of any maintenance actions initiated by the GWPP. Details regarding the ancillary activities associated with implementation of this plan (e.g., well inspection) are deferred to the referenced GWPP plans. This plan applies to groundwater wells associated with Y-12 and related waste management areas and facilities located within three hydrogeologic regimes.

  15. A design approach for integrating thermoelectric devices using topology optimization

    DEFF Research Database (Denmark)

    Soprani, Stefano; Haertel, Jan Hendrik Klaas; Lazarov, Boyan Stefanov;

    2016-01-01

    ... to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems. ... The design method incorporates temperature dependent properties of the thermoelectric device and other materials. The 3D topology optimization model developed in this work was used to design a thermoelectric system, complete with insulation and heat sink, that was produced and tested. Good agreement between ... experimental results and model forecasts was obtained and the system was able to maintain the load at more than 33 K below the oil well temperature. Results of this study support topology optimization as a powerful design tool for thermal design of thermoelectric systems.

  16. OPTIMIZING LOCALIZATION ROUTE USING PARTICLE SWARM-A GENETIC APPROACH

    Directory of Open Access Journals (Sweden)

    L. Lakshmanan

    2014-01-01

    Full Text Available One of the key problems in wireless sensor networks is finding optimal algorithms for sending packets from a source node to a destination node. Several algorithms exist in the literature, some of which play a vital role while others do not. Since WSNs focus on low power consumption during packet transmission and reception, we merge a particle-swarm-based algorithm with a genetic approach. Initially we order the nodes based on their energy criterion and then focus on the node path; this can be done using a proactive route algorithm for finding the optimal path between source-destination (S-D) nodes. Fast processing and pre-traversal can be done using a selective flooding approach, and the results are then refined by the genetic stage. We have improved our results with high accuracy and optimality in rendering routes.

  17. An optimal control approach to probabilistic Boolean networks

    Science.gov (United States)

    Liu, Qiuli

    2012-12-01

    External control of some genes in a genetic regulatory network is useful for avoiding undesirable states associated with some diseases. For this purpose, a number of stochastic optimal control approaches have been proposed. Probabilistic Boolean networks (PBNs) as powerful tools for modeling gene regulatory systems have attracted considerable attention in systems biology. In this paper, we deal with a problem of optimal intervention in a PBN with the help of the theory of discrete time Markov decision process. Specifically, we first formulate a control model for a PBN as a first passage model for discrete time Markov decision processes and then find, using a value iteration algorithm, optimal effective treatments with the minimal expected first passage time over the space of all possible treatments. In order to demonstrate the feasibility of our approach, an example is also displayed.
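
    The value-iteration scheme for a first-passage intervention problem can be sketched on a tiny Markov decision process; the three-state transition matrices and per-step costs below are invented for illustration, not a real gene network.

```python
# Value-iteration sketch for optimal intervention in a PBN-style MDP:
# in each undesirable state, choose the control (e.g. toggle a gene) that
# minimizes the expected cost to first reach the desirable state 0.
P = {  # P[action][state] = transition distribution over states 0..2
    0: [[1.0, 0.0, 0.0], [0.1, 0.6, 0.3], [0.0, 0.5, 0.5]],   # no intervention
    1: [[1.0, 0.0, 0.0], [0.4, 0.5, 0.1], [0.2, 0.3, 0.5]],   # intervene (costly)
}
COST = {0: 1.0, 1: 1.6}   # per-step cost; intervention adds a treatment cost

def value_iteration(eps=1e-10):
    v = [0.0, 0.0, 0.0]               # state 0 is absorbing, so v[0] stays 0
    while True:
        nv = [0.0] + [
            min(COST[a] + sum(p * v[j] for j, p in enumerate(P[a][s])) for a in P)
            for s in (1, 2)
        ]
        if max(abs(a - b) for a, b in zip(v, nv)) < eps:
            policy = [None] + [
                min(P, key=lambda a: COST[a]
                    + sum(p * nv[j] for j, p in enumerate(P[a][s])))
                for s in (1, 2)
            ]
            return nv, policy
        v = nv

values, policy = value_iteration()
print(values, policy)
```

    Here the extra treatment cost is outweighed by the faster passage to state 0, so the converged policy intervenes in both undesirable states; the expected first-passage costs come out as 48/11 and 64/11.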

  18. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  19. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    DR OKE

    This paper introduces an approach based on the Swarm Variant of the ... comprehensive learning particle swarm optimization (CLPSO), genetic ... DOI: http://dx.doi.org/10.4314/ijest.v7i3.12S

  20. The optimality of potential rescaling approaches in land data assimilation

    Science.gov (United States)

    It is well-known that systematic differences exist between modeled and observed realizations of hydrological variables like soil moisture. Prior to data assimilation, these differences must be removed in order to obtain an optimal analysis. A number of rescaling approaches have been proposed for rem...

  1. Discuss Optimal Approaches to Learning Strategy Instruction for EFL Learners

    Institute of Scientific and Technical Information of China (English)

    邢菊如

    2009-01-01

    Numerous research studies reveal that learning strategies play an important role in language learning. This paper explores whether, as English teachers, we can improve students' language proficiency by giving them optimal learning strategy instruction, and which approaches are most effective and efficient.

  2. Stochastic Approaches to Interactive Multi-Criteria Optimization Problems

    OpenAIRE

    1986-01-01

    A stochastic approach to the development of interactive algorithms for multicriteria optimization is discussed in this paper. These algorithms are based on the idea of a random search and the use of a decision-maker who can compare any two decisions. The questions of both theoretical analysis (proof of convergence, investigation of stability) and practical implementation of these algorithms are discussed.

  3. Optimization of a Groundwater Monitoring Network for a Sustainable Development of the Maheshwaram Catchment, India

    Directory of Open Access Journals (Sweden)

    Shakeel Ahmed

    2011-02-01

    Full Text Available Groundwater is one of the most valuable resources for drinking water and irrigation in the Maheshwaram Catchment, Central India, where most of the local population depends on it for agricultural activities. Increasing demand for irrigation and growing concern about potential water contamination make the implementation of a systematic groundwater-quality monitoring program in the region imperative. Nonetheless, limited funding and resources emphasize the need for a representative but cost-effective sampling strategy. In this context, field observations were combined with a geostatistical analysis to define an optimized monitoring network able to provide sufficient, non-redundant information on key hydrochemical parameters. A factor analysis was used to evaluate the interrelationships among variables and permitted the reduction of the original dataset to a new configuration of monitoring points that still captures the spatial variability of groundwater quality in the basin. The approach helps maximize the value of data collection and contributes to better management of resource allocation under budget constraints.
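    The redundancy analysis behind such a network reduction can be illustrated with a toy sketch. A plain correlation threshold stands in here for the paper's factor analysis, and the well series are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: a water-quality indicator sampled on 24 dates at
# 6 wells; wells 0/1 and 2/3 are deliberately near-duplicates.
base = rng.normal(size=(24, 3))
wells = np.column_stack([
    base[:, 0], base[:, 0] + 0.05 * rng.normal(size=24),
    base[:, 1], base[:, 1] + 0.05 * rng.normal(size=24),
    base[:, 2], rng.normal(size=24),
])

corr = np.corrcoef(wells, rowvar=False)
keep = []
for j in range(wells.shape[1]):
    # drop a well whose series is near-collinear with one already kept
    if all(abs(corr[j, k]) < 0.95 for k in keep):
        keep.append(j)
print(keep)  # wells 1 and 3 dropped as redundant
```

A real design would also weigh spatial coverage and kriging variance, but the principle is the same: retain the subset of wells that still carries the information content of the full network.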

  4. Monitoring and optimization of energy consumption of base transceiver stations

    CERN Document Server

    Spagnuolo, Antonio; Vetromile, Carmela; Formosi, Roberto; Lubritto, Carmine

    2015-01-01

    The growth and development of the mobile phone network has led to an increased demand for energy by the telecommunications sector, with a noticeable impact on the environment. Monitoring of energy consumption is a great tool for understanding how to better manage this consumption and find the best strategy to adopt in order to maximize the reduction of unnecessary electricity usage. This paper reports on a monitoring campaign performed on six Base Transceiver Stations (BSs) located in central Italy, with different technologies, typologies and technical characteristics. The study focuses on monitoring energy consumption and environmental parameters (temperature, noise, and global radiation), linking energy consumption with the telephone traffic load and with the air conditioning used to cool the transmission equipment. Moreover, using the experimental data collected, it is shown with a Monte Carlo simulation based on power-saving features how the monitored BSs could save energy.

  5. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning.

    Science.gov (United States)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-01

    In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. 
Results also indicated that pareto shows promise in optimizing the number
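    The nondominated filtering at the core of a multiobjective GA such as pareto can be sketched as follows; the objective values are illustrative, assuming all objectives are minimized:

```python
import numpy as np

def nondominated(points):
    """Return the Pareto-nondominated rows of an (n, m) objective array,
    assuming every objective is to be minimized."""
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i, p in enumerate(pts):
        if not keep[i]:
            continue
        # q dominates p if q is <= p in every objective and < in at least one
        dominated = np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
        if dominated.any():
            keep[i] = False
    return pts[keep]

# e.g. two objectives: PTV dose non-uniformity vs mean OAR dose (made up)
front = nondominated([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(front)  # [3., 4.] is dominated by [2., 3.] and drops out
```

The surviving points are exactly the trade-off surface the abstract describes: no remaining plan can be improved in one objective without worsening another.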

  6. Global Optimization Approach to Non-convex Problems

    Institute of Scientific and Technical Information of China (English)

    LU Zi-fang; ZHENG Hui-li

    2004-01-01

    A new approach to find the global optimal solution of special non-convex problems is proposed in this paper. The non-convex objective problem is first decomposed into two convex sub-problems. Then a generalized gradient is introduced to determine a search direction and an evolution equation is built to obtain a global minimum point. With this approach, the search process can escape certain local minima and locate a global minimum point. Two numerical examples are given to demonstrate that the approach is effective.

  7. Computer monitoring and optimization of the steam boiler performance

    OpenAIRE

    Sobota Tomasz

    2017-01-01

    The paper presents a method for determination of thermo-flow parameters for steam boilers. This method allows to perform the calculations of the boiler furnace chamber and heat flow rates absorbed by superheater stages. These parameters are important for monitoring the performance of the power unit. Knowledge of these parameters allows determining the degree of the furnace chamber slagging. The calculation can be performed in online mode and use to monitoring of steam boiler. The presented me...

  8. Stochastic optimization in insurance a dynamic programming approach

    CERN Document Server

    Azcue, Pablo

    2014-01-01

    The main purpose of the book is to show how a viscosity approach can be used to tackle control problems in insurance. The problems covered are the maximization of survival probability as well as the maximization of dividends in the classical collective risk model. The authors consider the possibility of controlling the risk process by reinsurance as well as by investments. They show that optimal value functions are characterized as either the unique or the smallest viscosity solution of the associated Hamilton-Jacobi-Bellman equation; they also study the structure of the optimal strategies and show how to find them. The viscosity approach was widely used in control problems related to mathematical finance but until quite recently it was not used to solve control problems related to actuarial mathematical science. This book is designed to familiarize the reader on how to use this approach. The intended audience is graduate students as well as researchers in this area.

  9. SOI built-in heat spreader with temperature and pressure integrated sensors for cooling optimization and in situ monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bercu, Bogdan, E-mail: bogdan_bercu@yahoo.com [Institute of Microelectronics, Electromagnetism and Photonics (IMEP-LAHC) 3, parvis Louis Neel - BP 257, 38016 Grenoble Cedex 01 (France); Montes, Laurent; Morfouli, Panagiota [Institute of Microelectronics, Electromagnetism and Photonics (IMEP-LAHC) 3, parvis Louis Neel - BP 257, 38016 Grenoble Cedex 01 (France)

    2011-03-15

    This contribution presents an original solution for sensor integration into a heat spreader which is directly micromachined into the silicon substrate of the device to be cooled. Having both a high thermal conductivity coefficient and a high level of miniaturization, the vapor chamber heat spreader provides high robustness due to the absence of any moving pumping parts. Simulation results as well as experimental results obtained with a prototype of the heat spreader with integrated temperature and pressure microsensors are presented. The results concerning device cooling optimization using the integrated sensors highlight the interest of this approach for accurate in situ monitoring and cooling optimization of silicon-integrated heat spreaders.

  10. An efficient approach for reliability-based topology optimization

    Science.gov (United States)

    Kanakasabai, Pugazhendhi; Dhingra, Anoop K.

    2016-01-01

    This article presents an efficient approach for reliability-based topology optimization (RBTO) in which the computational effort involved in solving the RBTO problem is equivalent to that of solving a deterministic topology optimization (DTO) problem. The methodology presented is built upon the bidirectional evolutionary structural optimization (BESO) method used for solving the deterministic optimization problem. The proposed method is suitable for linear elastic problems with independent and normally distributed loads, subjected to deflection and reliability constraints. The linear relationship between the deflection and stiffness matrices along with the principle of superposition are exploited to handle reliability constraints to develop an efficient algorithm for solving RBTO problems. Four example problems with various random variables and single or multiple applied loads are presented to demonstrate the applicability of the proposed approach in solving RBTO problems. The major contribution of this article comes from the improved efficiency of the proposed algorithm when measured in terms of the computational effort involved in the finite element analysis runs required to compute the optimum solution. For the examples presented with a single applied load, it is shown that the CPU time required in computing the optimum solution for the RBTO problem is 15-30% less than the time required to solve the DTO problems. The improved computational efficiency allows for incorporation of reliability considerations in topology optimization without an increase in the computational time needed to solve the DTO problem.
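    The superposition argument can be illustrated on a toy linear system; the stiffness matrix, load statistics and deflection limit below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-DOF linear elastic model: K u = f (numbers illustrative).
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
f1 = np.array([1.0, 0.0])    # unit load case 1
f2 = np.array([0.0, 1.0])    # unit load case 2
u1 = np.linalg.solve(K, f1)  # one FE solve per load case, done once
u2 = np.linalg.solve(K, f2)

# Independent, normally distributed load magnitudes: F = a*f1 + b*f2.
# By linearity the deflection is a*u1 + b*u2, so the reliability
# constraint is evaluated with no additional FE solves -- the source of
# the efficiency the article describes.
a = rng.normal(10.0, 1.0, 100_000)
b = rng.normal(5.0, 0.5, 100_000)
tip = np.abs(a * u1[1] + b * u2[1])   # deflection at the monitored DOF
p_fail = float(np.mean(tip > 3.0))    # P(deflection exceeds an assumed limit)
print(round(p_fail, 3))
```

The Monte Carlo loop here costs only vector arithmetic; in the article's setting the same observation lets reliability constraints ride on the deterministic topology optimization at essentially no extra finite element cost.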

  11. A Bayesian experimental design approach to structural health monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Flynn, Eric [UCSD; Todd, Michael [UCSD

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.

  12. From numbers to ecosystems and biodiversity: A mechanistic approach to monitoring

    Directory of Open Access Journals (Sweden)

    Sam Ferreira

    2011-05-01

    Full Text Available Diverse political, cultural and biological needs epitomise the contrasting demands impacting on the mandate of the South African National Parks (SANParks) to maintain biological diversity. Systems-based approaches and strategic adaptive management (learn by doing) enable SANParks to accommodate these demands. However, such a management strategy creates new information needs, which require an appropriate analytical approach. We use conceptual links between objectives, indicators, mechanisms and modulators to identify key concerns in the context of and related to management objectives. Although our suggested monitoring designs are based mostly on defined or predicted underlying mechanisms of a concern, SANParks requires inventory monitoring to evaluate its key mandate. We therefore propose a predictive inventory approach based on species assemblages related to habitat preferences. Inventories alone may not always adequately serve unpacking of mechanisms: in some cases population size needs to be estimated to meet the information needs of management strategies, but actual population sizes may indirectly affect how the species impact on other values. In addition, ecosystem objectives require multivariate assessments of key communities, which can be used in trend analysis. SANParks therefore needs to know how to detect and define trends efficiently, which, in turn, requires precision of measures of variables. Conservation implications: Current research needs with regard to monitoring should focus on defining designs to yield optimal precision whilst taking methodology, survey trade-offs and analytical approaches into account. Use of these directives and research will guide monitoring during evaluation of SANParks objectives at various scales.

  13. Effects of optimism on creativity under approach and avoidance motivation

    Directory of Open Access Journals (Sweden)

    Tamar eIcekson

    2014-02-01

    Full Text Available Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement - barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences.

  14. Effects of optimism on creativity under approach and avoidance motivation.

    Science.gov (United States)

    Icekson, Tamar; Roskes, Marieke; Moran, Simone

    2014-01-01

    Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement-barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences.

  15. Global optimal design of ground water monitoring network using embedded kriging.

    Science.gov (United States)

    Dhar, Anirban; Datta, Bithin

    2009-01-01

    We present a methodology for global optimal design of ground water quality monitoring networks using a linear mixed-integer formulation. The proposed methodology incorporates ordinary kriging (OK) within the decision model formulation for spatial estimation of contaminant concentration values. Different monitoring network design models incorporating concentration estimation error, variance estimation error, mass estimation error, error in locating plume centroid, and spatial coverage of the designed network are developed. A big-M technique is used for reformulating the monitoring network design model to a linear decision model while incorporating different objectives and OK equations. Global optimality of the solutions obtained for the monitoring network design can be ensured due to the linear mixed-integer programming formulations proposed. Performances of the proposed models are evaluated for both field and hypothetical illustrative systems. Evaluation results indicate that the proposed methodology performs satisfactorily. These performance evaluation results demonstrate the potential applicability of the proposed methodology for optimal ground water contaminant monitoring network design.
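    A minimal sketch of the ordinary kriging (OK) system that such a design model embeds; the exponential variogram and coordinates below are illustrative assumptions, not the paper's field data:

```python
import numpy as np

def ok_weights(xy_obs, xy_new, variogram=lambda h: 1.0 - np.exp(-h / 50.0)):
    """Solve the ordinary kriging system for one estimation point.
    `variogram` is an assumed exponential model (unit sill, ~50 m range)."""
    n = len(xy_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                 # Lagrange row/column enforcing unbiasedness
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy_obs - xy_new, axis=-1))
    w = np.linalg.solve(A, b)
    weights, lagrange = w[:n], w[n]
    variance = b[:n] @ weights + lagrange   # kriging (estimation) variance
    return weights, variance

obs = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # well locations, m
w, var = ok_weights(obs, np.array([20.0, 20.0]))
print(w.round(3), round(var, 3))  # weights sum to 1 by construction
```

In the paper these OK equations are embedded directly in the mixed-integer formulation (linearized via the big-M technique), so the estimation variance at candidate locations becomes part of the decision model rather than a post-processing step.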

  16. Optimal control of underactuated mechanical systems: A geometric approach

    Science.gov (United States)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  17. Optimal Control of Underactuated Mechanical Systems: A Geometric Approach

    CERN Document Server

    Colombo, L; Zuccalli, M

    2009-01-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  18. Wireless Sensing, Monitoring and Optimization for Campus-Wide Steam Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL; Allgood, Glenn O [ORNL; Kuruganti, Phani Teja [ORNL; Sukumar, Sreenivas R [ORNL; Woodworth, Ken [ORNL; Lake, Joe E [ORNL

    2011-11-01

    The US Congress has passed legislation dictating that all government agencies establish a plan and process for improving energy efficiency at their sites. In response to this legislation, Oak Ridge National Laboratory (ORNL) has recently conducted a pilot study to explore the deployment of a wireless sensor system for real-time, measurement-based energy efficiency optimization. With particular focus on the 12-mile-long steam distribution network on our campus, we propose an integrated system-level approach to optimize energy delivery within the steam distribution system. Our approach leverages an integrated wireless sensor and real-time monitoring capability. We assess steam trap health and estimate steam flow in the distribution system in real time by mounting acoustic sensors on the steam pipes/traps/valves and passing these measurements through state estimators for system health. Our assessments are based on a spectral energy signature scheme that interprets acoustic vibration sensor data to estimate steam flow rates and assess steam trap status. Experimental results show that the energy signature scheme has the potential to identify different steam trap states and sufficient sensitivity to estimate flow rate. Moreover, results indicate a nearly quadratic relationship over the test region between the overall energy signature factor and the flow rate in the pipe. We present the steam flow, steam trap status, sensor readings, and assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. The goal is to achieve significant energy savings in steam lines by monitoring and acting on leaking steam pipes/traps/valves. We believe our demonstration serves as an instantiation of a platform whose implementation can be extended to include newer modalities to manage water flow, sewage and energy consumption.
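    The spectral energy-signature idea can be sketched on synthetic data; the band limits, sampling rate and acoustic model below are hypothetical, chosen only to reproduce the roughly quadratic energy-flow relation described above:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 10_000                    # assumed sampling rate, Hz

def band_energy(sig, lo=1_000, hi=3_000):
    """Spectral energy signature: summed power in an acoustic band that
    (hypothetically) responds to steam flow through a trap."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=1 / fs)
    return spec[(freqs >= lo) & (freqs <= hi)].sum()

# Synthetic calibration: acoustic amplitude grows linearly with flow, so
# band energy grows roughly quadratically with flow (energy ~ amplitude^2).
flows = np.array([1.0, 2.0, 3.0, 4.0, 5.0])            # kg/s (illustrative)
t = np.arange(2048) / fs
energies = np.array([band_energy(q * np.sin(2 * np.pi * 2_000 * t)
                                 + 0.1 * rng.normal(size=t.size))
                     for q in flows])

coeffs = np.polyfit(flows, energies, deg=2)            # quadratic fit
print(round(float(energies[-1] / energies[0]), 1))     # highest/lowest flow
```

Inverting such a calibrated quadratic then yields a flow estimate from each new acoustic measurement, which is the role the energy signature factor plays in the paper's monitoring pipeline.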

  19. Dynamic Query Optimization Approach for Semantic Database Grid

    Institute of Scientific and Technical Information of China (English)

    Xiao-Qing Zheng; Hua-Jun Chen; Zhao-Hui Wu; Yu-Xin Mao

    2006-01-01

    Fundamentally, a semantic grid database is about bringing globally distributed databases together in order to coordinate resource sharing and problem solving in which information is given well-defined meaning; DartGrid Ⅱ is the implemented database grid system whose goal is to provide a semantic solution for integrating database resources on the Web. Although many algorithms have been proposed for optimizing query processing in order to minimize costs and/or response time associated with obtaining the answer to a query in a distributed database system, the database grid query optimization problem is fundamentally different from traditional distributed query optimization. These differences are shown to be consequences of the autonomy and heterogeneity of database nodes in a database grid. Therefore, more challenges arise for query optimization in a database grid than in a traditional distributed database. Following this observation, the design of a query optimizer in DartGrid Ⅱ is presented, and a heuristic, dynamic and parallel query optimization approach to processing queries in a database grid is proposed. A set of semantic tools supporting relational database integration and semantic-based information browsing has also been implemented to realize the above vision.

  20. Robust and optimal control a two-port framework approach

    CERN Document Server

    Tsai, Mi-Ching

    2014-01-01

    A Two-port Framework for Robust and Optimal Control introduces an alternative approach to robust and optimal controller synthesis procedures for linear, time-invariant systems, based on the two-port system widespread in electrical engineering. The novel use of the two-port system in this context allows straightforward engineering-oriented solution-finding procedures to be developed, requiring no mathematics beyond linear algebra. A chain-scattering description provides a unified framework for constructing the stabilizing controller set and for synthesizing H2 optimal and H∞ sub-optimal controllers. Simple yet illustrative examples explain each step. A Two-port Framework for Robust and Optimal Control features: a hands-on, tutorial-style presentation giving the reader the opportunity to repeat the designs presented and easily modify them for their own programs; an abundance of examples illustrating the most important steps in robust and optimal design; and …

  1. A Hybrid Optimization Approach for SRM FINOCYL Grain Design

    Institute of Scientific and Technical Information of China (English)

    Khurram Nisar; Liang Guozhu; Qasim Zeeshan

    2008-01-01

    This article presents a method to design and optimize the 3D FINOCYL grain (FCG) configuration for solid rocket motors (SRMs). The design process of the FCG configuration involves mathematical modeling of the geometry and parametric evaluation of various independent geometric variables that define the complex configuration. Virtually infinite combinations of these variables will satisfy the requirements of propellant mass, thrust, and burning time, in addition to satisfying basic needs for volumetric loading fraction and web fraction. To ensure that the best possible design is acquired, a sound design and optimization approach is essential. To meet this need, a method is introduced to achieve the finest possible performance. A series of computations is carried out to formulate the grain geometry in terms of various combinations of key shapes including ellipsoid, cone, cylinder, sphere, torus, and inclined plane. A hybrid optimization (HO) technique is established by associating a genetic algorithm (GA) for global solution convergence with sequential quadratic programming (SQP) for further local convergence of the solution, thus achieving the final optimal design. A comparison of the optimal design results derived from SQP, GA, and HO algorithms is presented. Using the HO technique, the propellant mass is optimized to the minimum value while providing the required level of thrust, staying within the constrained burning time, nozzle and propellant parameters, and a fixed length and outer diameter of grain.
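    The global-then-local structure of such a hybrid optimizer can be sketched with stand-ins: a crude evolutionary loop replaces the GA, finite-difference gradient descent replaces SQP, and a standard multimodal test function replaces the grain-design model:

```python
import numpy as np

def f(x):
    # Multimodal stand-in for the grain-design objective (Rastrigin).
    return 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

rng = np.random.default_rng(3)

# Stage 1, global: a crude evolutionary loop (elitism + Gaussian mutation)
# standing in for the GA.
pop = rng.uniform(-5.12, 5.12, size=(60, 2))
for _ in range(40):
    elite = pop[np.argsort([f(p) for p in pop])[:10]]
    pop = np.vstack([elite,
                     elite[rng.integers(0, 10, 50)]
                     + rng.normal(0.0, 0.3, size=(50, 2))])
x0 = min(pop, key=f)

# Stage 2, local: finite-difference gradient descent standing in for SQP,
# polishing the best candidate found globally.
x = x0.copy()
for _ in range(300):
    g = np.array([(f(x + h) - f(x - h)) / 2e-6 for h in np.eye(2) * 1e-6])
    x = x - 0.001 * g
print(np.round(x, 3), round(f(x), 4))
```

The local stage never worsens the globally found candidate, but on a multimodal objective it may still settle in a nearby basin; that is exactly why the global stage matters, and why the article pairs GA exploration with SQP refinement.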

  2. A hybrid optimization approach in non-isothermal glass molding

    Science.gov (United States)

    Vu, Anh-Tuan; Kreilkamp, Holger; Krishnamoorthi, Bharathwaj Janaki; Dambon, Olaf; Klocke, Fritz

    2016-10-01

    Intensively growing demands for complex yet low-cost precision glass optics from today's photonic market motivate the development of an efficient and economically viable manufacturing technology for complex-shaped optics. Against the state-of-the-art replication-based methods, Non-isothermal Glass Molding turns out to be a promising innovative technology for cost-efficient manufacturing because of increased mold lifetime, lower energy consumption and high throughput from a fast process chain. However, the selection of parameters for the molding process usually requires a huge effort to satisfy the precision requirements of the molded optics and to avoid negative effects on the expensive tool molds. Therefore, to reduce experimental work at the beginning, a coupled CFD/FEM numerical model was developed to study the molding process. This research focuses on the development of a hybrid optimization approach in Non-isothermal glass molding. To this end, an optimal configuration with two optimization stages for multiple quality characteristics of the glass optics is addressed. The hybrid Back-Propagation Neural Network (BPNN)-Genetic Algorithm (GA) is first carried out to determine the optimal process parameters and ensure the stability of the process. The second stage continues with the optimization of the glass preform using those optimal parameters to guarantee the accuracy of the molded optics. Experiments are performed to evaluate the effectiveness and feasibility of the model for process development in Non-isothermal glass molding.

  3. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    Science.gov (United States)

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has made storage a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, base stations, and also the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works that have been carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve the clustering problem using the fuzzy-c-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches.
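    A plain particle swarm optimization over candidate storage positions can be sketched as follows; the network layout, data rates and cost model are illustrative, and the fuzzy-c-means clustering stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)
producers = rng.uniform(0, 100, size=(8, 2))    # sensor (producer) positions
consumers = rng.uniform(0, 100, size=(3, 2))
rate_p = rng.uniform(1, 5, size=8)              # data rates (assumed units)
rate_c = rng.uniform(1, 5, size=3)

def cost(pos):
    # Energy proxy: rate-weighted distances producers->storage->consumers.
    return (rate_p @ np.linalg.norm(producers - pos, axis=1)
            + rate_c @ np.linalg.norm(consumers - pos, axis=1))

# Standard PSO loop (inertia 0.7, cognitive/social weights 1.5) over
# candidate 2D storage positions.
x = rng.uniform(0, 100, size=(30, 2))
v = np.zeros_like(x)
pbest = x.copy()
pval = np.array([cost(p) for p in x])
gbest = pbest[pval.argmin()]
for _ in range(200):
    r1, r2 = rng.random((2, 30, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    val = np.array([cost(p) for p in x])
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
    gbest = pbest[pval.argmin()]
print(np.round(gbest, 2), round(float(pval.min()), 2))
```

The swarm converges toward the rate-weighted median of the traffic, which is the position minimizing the transmission-energy proxy; in the paper this PSO search is hybridized with clustering so that each cluster gets its own storage node.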

  4. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ranganathan Mohanasundaram

    2015-01-01

    Full Text Available The current high-profile debate with regard to data storage and its growth has made storage a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, base stations, and also the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works that have been carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve the clustering problem using the fuzzy-c-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches.

  5. Does intense monitoring matter? A quantile regression approach

    Directory of Open Access Journals (Sweden)

    Fekri Ali Shawtari

    2017-06-01

    Full Text Available Corporate governance has become a centre of attention in corporate management at both micro and macro levels due to the adverse consequences and repercussions of insufficient accountability. In this study, we use the Malaysian stock market as a sample to explore the impact of intense monitoring on the relationship between intellectual capital performance and market valuation. The objectives of the paper are threefold: (i) to investigate whether intense monitoring affects the intellectual capital performance of listed companies; (ii) to explore the impact of intense monitoring on firm value; (iii) to examine the extent to which directors serving on more than two board committees affect the linkage between intellectual capital performance and firm value. We employ two approaches, namely, Ordinary Least Squares (OLS) and the quantile regression approach. The purpose of the latter is to estimate and generate inference about conditional quantile functions. This method is useful when the conditional distribution does not have a standard shape, such as an asymmetric, fat-tailed, or truncated distribution. In terms of variables, intellectual capital is measured using the value added intellectual coefficient (VAIC), while market valuation is proxied by the firm's market capitalization. The findings of the quantile regression show that some of the results do not coincide with the results of OLS. We found that the intensity of monitoring does not influence the intellectual capital of all firms. It is also evident that the intensity of monitoring does not influence market valuation. However, to some extent, it moderates the relationship between intellectual capital performance and market valuation. This paper contributes to the existing literature as it presents new empirical evidence on the moderating effects of the intensity of monitoring by board committees on the relationship between performance and intellectual capital.
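    The contrast between OLS and quantile estimates can be sketched with a tiny linear quantile regression fitted by subgradient descent on the pinball loss; the firm data are synthetic, with deliberately asymmetric noise so the tails behave differently:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic firms: market value rises with intellectual capital (VAIC-like
# score), with right-skewed noise that grows with the score -- the kind of
# non-standard conditional distribution where OLS and quantile fits diverge.
vaic = rng.uniform(1, 10, 400)
value = 2.0 + 0.5 * vaic + rng.exponential(1.0 + 0.3 * vaic)

def quantile_fit(x, y, tau, steps=20_000, lr=0.01):
    """Linear quantile regression via subgradient descent on pinball loss."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        r = y - (a + b * x)
        g = np.where(r > 0, -tau, 1 - tau)  # subgradient w.r.t. prediction
        a -= lr * g.mean()
        b -= lr * (g * x).mean()
    return a, b

_, slope_lo = quantile_fit(vaic, value, 0.25)
_, slope_hi = quantile_fit(vaic, value, 0.75)
print(round(slope_lo, 2), round(slope_hi, 2))  # upper-tail slope is steeper
```

A single OLS slope would average these two regimes away, which is the methodological point the abstract makes for using conditional quantiles.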

  6. Optical Performance Monitoring and Signal Optimization in Optical Networks

    DEFF Research Database (Denmark)

    Petersen, Martin Nordal

    2006-01-01

The thesis studies performance monitoring for the next generation optical networks. The focus is on all-optical networks with bit-rates of 10 Gb/s or above. Next generation all-optical networks offer large challenges as the optical transmitted distance increases and the occurrence of electrical-optical-electrical regeneration points decreases. This thesis evaluates the impact of signal degrading effects that are becoming of increasing concern in all-optical high-speed networks due to all-optical switching and higher bit-rates. Especially group-velocity-dispersion (GVD) and a number of nonlinear effects will require enhanced attention to avoid signal degradations. The requirements for optical performance monitoring features are discussed, and the thesis evaluates the advantages and necessity of increasing the level of performance monitoring parameters in the physical layer. In particular, methods for optical...

  7. Development of a Multi-Event Trajectory Optimization Tool for Noise-Optimized Approach Route Design

    NARCIS (Netherlands)

    Braakenburg, M.L.; Hartjes, S.; Visser, H.G.; Hebly, S.J.

    2011-01-01

    This paper presents preliminary results from an ongoing research effort towards the development of a multi-event trajectory optimization methodology that allows to synthesize RNAV approach routes that minimize a cumulative measure of noise, taking into account the total noise effect aggregated for a

  8. A nonlinear cointegration approach with applications to structural health monitoring

    Science.gov (United States)

    Shi, H.; Worden, K.; Cross, E. J.

    2016-09-01

    One major obstacle to the implementation of structural health monitoring (SHM) is the effect of operational and environmental variabilities, which may corrupt the signal of structural degradation. Recently, an approach inspired from the community of econometrics, called cointegration, has been employed to eliminate the adverse influence from operational and environmental changes and still maintain sensitivity to structural damage. However, the linear nature of cointegration may limit its application when confronting nonlinear relations between system responses. This paper proposes a nonlinear cointegration method based on Gaussian process regression (GPR); the method is constructed under the Engle-Granger framework, and tests for unit root processes are conducted both before and after the GPR is applied. The proposed approach is examined with real engineering data from the monitoring of the Z24 Bridge.
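
    As a rough sketch of the idea above — not the authors' exact procedure, and not the Z24 data — the snippet below fits a plain-numpy Gaussian process regression between two series sharing a stochastic trend, then applies a simplified Dickey-Fuller check (no lag augmentation, no deterministic terms) to the residuals. A strongly negative statistic suggests the nonlinear combination is stationary, i.e. the series cointegrate:

    ```python
    import numpy as np

    def gp_fit_predict(x_train, y_train, x_test, length=1.0, noise=1e-2):
        """GP regression posterior mean with an RBF kernel (plain numpy)."""
        def rbf(a, b):
            return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
        K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
        return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

    def dickey_fuller_stat(e):
        """Simplified Dickey-Fuller t-statistic: regress delta-e_t on e_{t-1}."""
        de, lag = np.diff(e), e[:-1]
        rho = (lag @ de) / (lag @ lag)
        resid = de - rho * lag
        se = np.sqrt((resid @ resid) / (len(de) - 1) / (lag @ lag))
        return rho / se

    rng = np.random.default_rng(1)
    z = np.cumsum(rng.standard_normal(400) * 0.1)            # shared stochastic trend
    x = z + 0.02 * rng.standard_normal(400)
    y = np.sin(2 * z) + z + 0.02 * rng.standard_normal(400)  # nonlinear function of the trend

    resid = y - gp_fit_predict(x, y, x)
    print(dickey_fuller_stat(resid))   # strongly negative: residuals look stationary
    print(dickey_fuller_stat(y))       # much closer to zero: y itself behaves like a unit root
    ```

    The full Engle-Granger procedure would also test the raw series for unit roots before fitting; that step is omitted here for brevity.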

  9. OPTIMIZATION APPROACH FOR HYBRID ELECTRIC VEHICLE POWERTRAIN DESIGN

    Institute of Scientific and Technical Information of China (English)

    Zhu Zhengli; Zhang Jianwu; Yin Chengliang

    2005-01-01

According to bench test results of fuel economy and engine emissions for the real powertrain system of the EQ7200HEV car, a 3-D performance-map-oriented quasi-linear model is developed for the configuration of powertrain components such as the internal combustion engine, traction electric motor, transmission, main retarder and energy storage unit. A genetic-algorithm-based optimization procedure is proposed and applied to the parametric optimization of the key components, taking into account the requirements of several driving cycles. Comparison of numerical results obtained by the genetic algorithm with those of traditional optimization methods shows that the present approach is effective and efficient for emission reduction and fuel economy in the design of the hybrid electric car powertrain.
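
    The core of such a procedure is a real-coded genetic algorithm searching a parameter space against a cost built from fuel use and emissions. The sketch below is a minimal, self-contained stand-in: the objective is an invented quadratic surrogate (the names `ratio` and `split` are hypothetical placeholders for powertrain parameters), not the EQ7200HEV performance map:

    ```python
    import random

    def genetic_minimize(obj, bounds, pop_size=40, generations=60, seed=42):
        """Real-coded GA: elitism, blend crossover, Gaussian mutation."""
        rng = random.Random(seed)
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        for _ in range(generations):
            elite = sorted(pop, key=obj)[: pop_size // 5]    # keep best 20% unchanged
            children = list(elite)
            while len(children) < pop_size:
                p1, p2 = rng.sample(elite, 2)
                child = [(a + b) / 2 + rng.gauss(0, 0.1 * (hi - lo))
                         for a, b, (lo, hi) in zip(p1, p2, bounds)]
                children.append([min(max(c, lo), hi)         # clip to bounds
                                 for c, (lo, hi) in zip(child, bounds)])
            pop = children
        return min(pop, key=obj)

    # Invented surrogate objective: weighted fuel use + emissions as a function of
    # two powertrain parameters (e.g. a gear ratio and a power split) -- not real data.
    def cost(p):
        ratio, split = p
        return (ratio - 3.2) ** 2 + 2 * (split - 0.4) ** 2 + 5.0

    best = genetic_minimize(cost, bounds=[(1.0, 6.0), (0.0, 1.0)])
    print(best)   # near [3.2, 0.4]
    ```

    In the paper's setting the cost evaluation would run the quasi-linear powertrain model over the driving cycles instead of this closed-form surrogate.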

  10. Structural Weight Optimization of Aircraft Wing Component Using FEM Approach.

    Directory of Open Access Journals (Sweden)

    Arockia Ruban M,

    2015-06-01

One of the main challenges for the civil aviation industry is the reduction of its environmental impact through better fuel efficiency, achieved by means of structural optimization. Over the past years, improvements in performance and fuel efficiency have been achieved by simplifying the design of structural components and using composite materials to reduce the overall weight of the structure. This paper deals with the weight optimization of a transport aircraft with a low-wing configuration. Linear static and normal mode analyses were carried out using MSC Nastran and MSC Patran under different pressure conditions, and the results were verified with the help of a classical approach. The stress and displacement results were obtained and verified, leading to conclusions about the optimization of the wing structure.

  11. APPROACH ON INTELLIGENT OPTIMIZATION DESIGN BASED ON COMPOUND KNOWLEDGE

    Institute of Scientific and Technical Information of China (English)

    Yao Jianchu; Zhou Ji; Yu Jun

    2003-01-01

A concept of an intelligent optimal design approach is proposed, organized around a compound knowledge model. The compound knowledge consists of modularized quantitative knowledge, inclusive experience knowledge and case-based sample knowledge. By using this compound knowledge model, the abundant quantitative information of mathematical programming and the symbolic knowledge of artificial intelligence can be united in a single model. The intelligent optimal design model based on such compound knowledge, and the decomposition principles automatically generated from it, are also presented. In practice, the approach has been applied to production planning, process scheduling and optimization of the production process of a refining and chemical works, yielding considerable profit. Notably, the methods and principles are applicable not only to the continuous process industry but also to discrete manufacturing.

  12. TSP based Evolutionary optimization approach for the Vehicle Routing Problem

    Science.gov (United States)

    Kouki, Zoulel; Chaar, Besma Fayech; Ksouri, Mekki

    2009-03-01

Vehicle Routing and Flexible Job Shop Scheduling Problems (VRP and FJSSP) are two common hard combinatorial optimization problems that show many similarities at the conceptual level [2, 4]. For both problems it has been shown that exact solution techniques fail to provide good-quality solutions in a reasonable amount of time when dealing with large-scale instances [1, 5, 14]. To overcome this weakness, we opt for metaheuristics and focus on evolutionary algorithms, which have been successfully used in scheduling problems [1, 5, 9]. In this paper we investigate the common properties of the VRP and the FJSSP in order to provide a new controlled evolutionary approach for CVRP optimization, inspired by the FJSSP evolutionary optimization algorithms introduced in [10].

  13. OPTIMAL WELL LOCATOR (OWL): A SCREENING TOOL FOR EVALUATING LOCATIONS OF MONITORING WELLS

    Science.gov (United States)

The Optimal Well Locator (OWL) program was designed and developed by USEPA as a screening tool to evaluate and optimize the placement of wells in long-term monitoring networks at small sites. The first objective of the OWL program is to allow the user to visualize the change ...

  14. Optimizing bulk milk dioxin monitoring based on costs and effectiveness

    NARCIS (Netherlands)

    Lascano Alcoser, V.; Velthuis, A.G.J.; Fels-Klerx, van der H.J.; Hoogenboom, L.A.P.; Oude Lansink, A.G.J.M.

    2013-01-01

    Dioxins are environmental pollutants, potentially present in milk products, which have negative consequences for human health and for the firms and farms involved in the dairy chain. Dioxin monitoring in feed and food has been implemented to detect their presence and estimate their levels in food

  17. Riverbed clogging associated with a California riverbank filtration system: An assessment of mechanisms and monitoring approaches

    Science.gov (United States)

    Ulrich, Craig; Hubbard, Susan S.; Florsheim, Joan; Rosenberry, Donald O.; Borglin, Sharon; Trotta, Marcus; Seymour, Donald

    2015-01-01

    An experimental field study was performed to investigate riverbed clogging processes and associated monitoring approaches near a dam-controlled riverbank filtration facility in Northern California. Motivated by previous studies at the site that indicated riverbed clogging plays an important role in the performance of the riverbank filtration system, we investigated the spatiotemporal variability and nature of the clogging. In particular, we investigated whether the clogging was due to abiotic or biotic mechanisms. A secondary aspect of the study was the testing of different methods to monitor riverbed clogging and related processes, such as seepage. Monitoring was conducted using both point-based approaches and spatially extensive geophysical approaches, including: grain-size analysis, temperature sensing, electrical resistivity tomography, seepage meters, microbial analysis, and cryocoring, along two transects. The point monitoring measurements suggested a substantial increase in riverbed biomass (2 orders of magnitude) after the dam was raised compared to the small increase (∼2%) in fine-grained sediment. These changes were concomitant with decreased seepage. The decreased seepage eventually led to the development of an unsaturated zone beneath the riverbed, which further decreased infiltration capacity. Comparison of our time-lapse grain-size and biomass datasets suggested that biotic processes played a greater role in clogging than did abiotic processes. Cryocoring and autonomous temperature loggers were most useful for locally monitoring clogging agents, while electrical resistivity data were useful for interpreting the spatial extent of a pumping-induced unsaturated zone that developed beneath the riverbed after riverbed clogging was initiated. The improved understanding of spatiotemporally variable riverbed clogging and monitoring approaches is expected to be useful for optimizing the riverbank filtration system operations.

  18. A global optimization approach to multi-polarity sentiment analysis.

    Directory of Open Access Journals (Sweden)

    Xinmiao Li

Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis, with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a globally optimal combination of feature dimensions and SVM parameters. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement and the two-polarity analysis the smallest. We conclude that PSOGO-Senti achieves higher improvement for more complicated sentiment analysis tasks. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid
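
    The optimization engine described above is a standard particle swarm. As a hedged sketch of that core loop — with the SVM cross-validation objective replaced by an invented smooth surrogate over (log10 C, log10 gamma), since training an SVM per candidate is out of scope here — the particle swarm update can be written as:

    ```python
    import random

    def pso_minimize(obj, bounds, n_particles=20, iters=80,
                     w=0.7, c1=1.5, c2=1.5, seed=7):
        """Particle swarm optimization: inertia + cognitive + social velocity update."""
        rng = random.Random(seed)
        dim = len(bounds)
        pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                      # personal bests
        pbest_val = [obj(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    lo, hi = bounds[d]
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                val = obj(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Invented stand-in for SVM cross-validation error over (log10 C, log10 gamma);
    # the real PSOGO-Senti objective would train and score an SVM per candidate.
    def cv_error(p):
        logC, logG = p
        return (logC - 1.0) ** 2 + (logG + 2.0) ** 2 + 0.08

    best, err = pso_minimize(cv_error, bounds=[(-3, 3), (-5, 1)])
    print(best, err)   # near [1.0, -2.0]
    ```

    In the full approach each particle would also carry a feature-count dimension, with IG ranking selecting that many features before the SVM is scored.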

  19. An optimization approach to kinetic model reduction for combustion chemistry

    CERN Document Server

    Lebiedz, Dirk

    2013-01-01

    Model reduction methods are relevant when the computation time of a full convection-diffusion-reaction simulation based on detailed chemical reaction mechanisms is too large. In this article, we review a model reduction approach based on optimization of trajectories and show its applicability to realistic combustion models. As most model reduction methods, it identifies points on a slow invariant manifold based on time scale separation in the dynamics of the reaction system. The numerical approximation of points on the manifold is achieved by solving a semi-infinite optimization problem, where the dynamics enter the problem as constraints. The proof of existence of a solution for an arbitrarily chosen dimension of the reduced model (slow manifold) is extended to the case of realistic combustion models including thermochemistry by considering the properties of proper maps. The model reduction approach is finally applied to three models based on realistic reaction mechanisms: 1. ozone decomposition as a small t...

  20. Blood platelet production: a novel approach for practical optimization.

    Science.gov (United States)

    van Dijk, Nico; Haijema, René; van der Wal, Jan; Sibinga, Cees Smit

    2009-03-01

    The challenge of production and inventory management for blood platelets (PLTs) is the requirement to meet highly uncertain demands. Shortages are to be minimized, if not to be avoided at all. Overproduction, in turn, leads to high levels of outdating as PLTs have a limited "shelf life." Outdating is to be minimized for ethical and cost reasons. Operations research (OR) methodology was applied to the PLT inventory management problem. The problem can be formulated in a general mathematical form. To solve this problem, a five-step procedure was used. This procedure is based on a combination of two techniques, a mathematical technique called stochastic dynamic programming (SDP) and computer simulation. The approach identified an optimal production policy, leading to the computation of a simple and nearly optimal PLT production "order-up-to" rule. This rule prescribes a fixed order-up-to level for each day of the week. The approach was applied to a test study with actual data for a regional Dutch blood bank. The main finding in the test study was that outdating could be reduced from 15-20 percent to less than 0.1 percent with virtually no shortages. Blood group preferences and extending the shelf life of more than 5 days appeared to be of marginal effect. In this article the worlds of blood management and the mathematical discipline of OR are brought together for the optimization of blood PLT production. This leads to simple nearly optimal blood PLT production policies that are suitable for practical implementation.
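
    The "order-up-to" rule above is easy to reproduce in a toy simulation. The sketch below is illustrative only — a fixed order-up-to level, binomial daily demand and FIFO issuance, not the paper's SDP-derived weekday-dependent rule or Dutch blood bank data — but it shows the trade-off the policy navigates: a lean level causes shortages, an overfull level causes outdating within the 5-day shelf life:

    ```python
    import random

    def simulate_order_up_to(level, days=7 * 1000, shelf_life=5, mean_demand=10, seed=3):
        """FIFO perishable inventory under a fixed order-up-to level.
        Returns (outdate_rate, shortage_rate) as fractions of units produced."""
        rng = random.Random(seed)
        stock = [0] * shelf_life           # stock[a] = units with a remaining days of life
        produced = outdated = short = 0
        for _ in range(days):
            order = max(level - sum(stock), 0)   # raise inventory to the order-up-to level
            stock[shelf_life - 1] += order       # fresh units enter with full shelf life
            produced += order
            # daily demand ~ Binomial(100, mean_demand/100), approximately Poisson
            demand = sum(rng.random() < mean_demand / 100.0 for _ in range(100))
            for age in range(shelf_life):        # issue oldest units first (FIFO)
                used = min(stock[age], demand)
                stock[age] -= used
                demand -= used
            short += demand                      # unmet demand = shortage
            outdated += stock[0]                 # units expiring today
            stock = stock[1:] + [0]              # age the remaining stock one day
        return outdated / produced, short / produced

    for lvl in (12, 30, 48):
        print(lvl, simulate_order_up_to(lvl))
    ```

    Sweeping the level (or one level per weekday, as in the paper) and reading off the two rates is the simulation half of the five-step SDP-plus-simulation procedure.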

  1. Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato

    Directory of Open Access Journals (Sweden)

    Maarten L. A. T. M. Hertog

    2017-04-01

In this study, the aim is to develop a population-model-based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. This approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. "Savior") was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of physiological maturation, and was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model-based approach to the optimization of harvest date and harvest frequency with regard to the economic value of the crop is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages, meeting the stringent retail demands for a homogeneous high-quality product. The total farm profit would still depend on the impact a change in harvesting strategy might have on related expenditures. This model-based harvest optimization approach can easily be transferred to other fruit and vegetable crops, improving the homogeneity of postharvest product streams.

  2. An Efficient PageRank Approach for Urban Traffic Optimization

    OpenAIRE

    2012-01-01

Cities are not static environments; they change constantly. When we talk about traffic in the city, the evolution of traffic lights is a journey from mindless automation to increasingly intelligent, fluid traffic management. In the approach presented in this paper, a reinforcement-learning mechanism based on a cost function is introduced to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999)...

  3. MULTISCALE OPTIMIZATION OF FLOW DISTRIBUTION BY CONSTRUCTAL APPROACH

    Institute of Scientific and Technical Information of China (English)

    Lingai Luo; Daniel Tondeur

    2005-01-01

The constructal approach is a recent concept allowing the generation and optimization of multi-scale structures, in particular branching structures connecting a microscopic world to a macroscopic one, from an engineer's point of view. Branching morphologies are found in many types of natural phenomena, and may be associated with some kind of optimization expressing the evolutionary adaptation of natural systems to their environment. In a sense, the constructal approach tries to imitate this morphogenesis while short-cutting the trial-and-error of nature. The basic ideas underlying the constructal concept and methodology are illustrated here by the examples of fluid distribution to a multi-channel reactor, and of the design of a porous material and system for gas adsorption and storage. In usual constructal theory, a tree branching is postulated for the channels, flow-paths or conductors, usually a dichotomic tree (every branch is divided into two "daughters"). The objective function of the optimization is built from the resistances to mass or heat transport, expressed here as "characteristic transport times", and the geometric result is expressed as a shape factor of a domain. The optimized shape expresses the compromise between the mass or heat transport characteristics at adjacent scales. Under suitable assumptions, simple analytical scaling laws are found, which relate the geometric and transport properties of different scales. Some challenging geometric problems may arise when applying the constructal approach to practical situations where strong geometric constraints exist. The search for analytical solutions imposes simplifying assumptions which may be at fault, calling for less constraining approaches, for example making only weak assumptions on the branching structure. Some of these challenges are brought forward along this text.

  4. Computational approaches for microalgal biofuel optimization: a review.

    Science.gov (United States)

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  5. Computational Approaches for Microalgal Biofuel Optimization: A Review

    Directory of Open Access Journals (Sweden)

    Joseph Koussa

    2014-01-01

The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  6. Optimizing streamflow monitoring networks using joint permutation entropy

    Science.gov (United States)

    Stosic, Tatijana; Stosic, Borko; Singh, Vijay P.

    2017-09-01

    Using joint permutation entropy we address the issue of minimizing the cost of monitoring, while minimizing redundancy of the information content, of daily streamflow data recorded during the period 1989-2016 at twelve gauging stations on Brazos River, Texas, USA. While the conventional entropy measures take into account only the probability of occurrence of a given set of events, permutation entropy also takes into account local ordering of the sequential values, thus enriching the analysis. We find that the best cost efficiency is achieved by performing weekly measurements, in comparison with which daily measurements exhibit information redundancy, and monthly measurements imply information loss. We also find that the cumulative information redundancy of the twelve considered stations is over 10% for the observed period, and that the number of monitoring stations can be reduced by half bringing the cumulative redundancy level to less than 1%.
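
    The building block of the analysis above is the Bandt-Pompe permutation entropy of a single series; the joint version extends the same ordinal-pattern counting to pairs of stations (omitted here for brevity). A minimal stdlib-only sketch, not the authors' implementation:

    ```python
    import random
    from collections import Counter
    from math import log2, factorial

    def permutation_entropy(series, order=3, delay=1):
        """Normalized Bandt-Pompe permutation entropy of a 1-D series.
        Counts ordinal patterns (rank orderings) of sliding windows."""
        patterns = Counter()
        for i in range(len(series) - (order - 1) * delay):
            window = [series[i + j * delay] for j in range(order)]
            # the ordinal pattern is the argsort of the window (ties broken by index)
            patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
        n = sum(patterns.values())
        h = -sum((c / n) * log2(c / n) for c in patterns.values())
        return h / log2(factorial(order))   # 0 = fully ordered, 1 = fully random

    rng = random.Random(0)
    noise = [rng.random() for _ in range(5000)]   # white noise: all patterns equally likely
    trend = list(range(5000))                     # monotone series: a single pattern
    pe_noise = permutation_entropy(noise)         # close to 1
    pe_trend = permutation_entropy(trend)         # 0 (one ordinal pattern only)
    print(pe_noise, pe_trend)
    ```

    Because only the local ordering of values matters, the measure is insensitive to monotone transformations of the streamflow record, which is part of its appeal for gauging-network comparisons.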

  7. Locating monitoring wells in groundwater systems using embedded optimization and simulation models.

    Science.gov (United States)

    Bashi-Azghadi, Seyyed Nasser; Kerachian, Reza

    2010-04-15

    In this paper, a new methodology is proposed for optimally locating monitoring wells in groundwater systems in order to identify an unknown pollution source using monitoring data. The methodology is comprised of two different single and multi-objective optimization models, a Monte Carlo analysis, MODFLOW, MT3D groundwater quantity and quality simulation models and a Probabilistic Support Vector Machine (PSVM). The single-objective optimization model, which uses the results of the Monte Carlo analysis and maximizes the reliability of contamination detection, provides the initial location of monitoring wells. The objective functions of the multi-objective optimization model are minimizing the monitoring cost, i.e. the number of monitoring wells, maximizing the reliability of contamination detection and maximizing the probability of detecting an unknown pollution source. The PSVMs are calibrated and verified using the results of the single-objective optimization model and the Monte Carlo analysis. Then, the PSVMs are linked with the multi-objective optimization model, which maximizes both the reliability of contamination detection and probability of detecting an unknown pollution source. To evaluate the efficiency and applicability of the proposed methodology, it is applied to Tehran Refinery in Iran.

  8. Transfer of European Approach to Groundwater Monitoring in China

    Science.gov (United States)

    Zhou, Y.

    2007-12-01

Major groundwater development in North China has been a key factor in the huge economic growth and the achievement of self-sufficiency in food production. Groundwater accounts for more than 70 percent of urban water supply and provides an important source of irrigation water during dry periods. This has, however, caused continuous groundwater level decline and many associated problems: hundreds of thousands of dry wells, dry river beds, land subsidence, seawater intrusion and groundwater quality deterioration. Groundwater levels in the shallow unconfined aquifers have fallen 10 m to 50 m, at an average rate of 1 m/year. In the deep confined aquifers, groundwater levels have commonly fallen 30 m to 90 m, at an average rate of 3 to 5 m/year. Furthermore, elevated nitrate concentrations have been found in shallow groundwater on a large scale, and pesticides have been detected in vulnerable aquifers. Urgent actions are necessary for aquifer recovery and mitigating groundwater pollution. Groundwater quantity and quality monitoring plays a very important role in formulating cost-effective groundwater protection strategies. In 2000 the European Union adopted the Water Framework Directive (2000/60/EC) to protect all waters in Europe. The objective is to achieve good water and ecological status by 2015 across all member states. The Directive requires monitoring of surface water and groundwater in all river basins. A guidance document for monitoring was developed and published in 2003. Groundwater monitoring programs are distinguished into groundwater level monitoring and groundwater quality monitoring; the latter is further divided into surveillance monitoring and operational monitoring. The monitoring guidance specifies key principles for the design and operation of monitoring networks. A Sino-Dutch cooperation project was developed to transfer the European approach to groundwater monitoring to China. The project aims at building a China Groundwater Information Centre. Case studies

  9. Commonalities and complementarities among approaches to conservation monitoring and evaluation

    DEFF Research Database (Denmark)

    Mascia, Michael B.; Pailler, Sharon; Thieme, Michele L.

    2014-01-01

This paper compares four approaches to conservation M&E, characterizing each approach in eight domains: the focal question driving each approach, when in the project cycle each approach is employed, scale of data collection, the methods of data collection and analysis, the implementers of data collection and analysis, the users of M&E outputs, and the decisions informed by these outputs. Ambient monitoring measures status and change in ambient social and ecological conditions, independent of any conservation intervention. Management assessment measures management inputs, activities, and outputs, as the basis for investments to build management capacity for conservation projects. Performance measurement assesses project or program progress toward desired levels of specific activities, outputs, and outcomes. Impact evaluation is the systematic process of measuring the intended and unintended causal effects of conservation interventions, with emphasis upon long...

  10. An improved approach for process monitoring in laser material processing

    Science.gov (United States)

    König, Hans-Georg; Pütsch, Oliver; Stollenwerk, Jochen; Loosen, Peter

    2016-04-01

Process monitoring is used in many laser material processes due to the demand for reliable and stable processes. Among different methods, on-axis process monitoring offers multiple advantages. To observe a laser material process, it is unavoidable to choose an observation wavelength different from the one used for material processing; otherwise the light of the processing laser would outshine the image of the process. By choosing a different wavelength, lateral chromatic aberration occurs in optical systems with scanning units and f-theta lenses that are not chromatically corrected. These aberrations lead to a truncated image of the process on the camera or the pyrometer, respectively, and are the reason for adulterated measurements and unsatisfactory images of the process. A new approach for solving the problem of field-dependent lateral chromatic aberration in process monitoring is presented. The scanner-based optical system is reproduced in a simulation environment to predict the occurring lateral chromatic aberrations, and a second deflecting system is integrated into the setup. Using the simulation, a predictive control is designed that drives the additional deflecting system to introduce reverse lateral deviations, compensating the lateral effect of chromatic aberration. This paper illustrates the concept and implementation of the predictive control used to eliminate lateral chromatic aberrations in process monitoring, the simulation on which the system is based, the optical system, as well as the control concept.

  11. Self-optimizing approach for automated laser resonator alignment

    Science.gov (United States)

    Brecher, C.; Schmitt, R.; Loosen, P.; Guerrero, V.; Pyschny, N.; Pavim, A.; Gatej, A.

    2012-02-01

    Nowadays, the assembly of laser systems is dominated by manual operations, involving elaborate alignment by means of adjustable mountings. From a competition perspective, the most challenging problem in laser source manufacturing is price pressure, a result of cost competition exerted mainly from Asia. From an economical point of view, an automated assembly of laser systems defines a better approach to produce more reliable units at lower cost. However, the step from today's manual solutions towards an automated assembly requires parallel developments regarding product design, automation equipment and assembly processes. This paper introduces briefly the idea of self-optimizing technical systems as a new approach towards highly flexible automation. Technically, the work focuses on the precision assembly of laser resonators, which is one of the final and most crucial assembly steps in terms of beam quality and laser power. The paper presents a new design approach for miniaturized laser systems and new automation concepts for a robot-based precision assembly, as well as passive and active alignment methods, which are based on a self-optimizing approach. Very promising results have already been achieved, considerably reducing the duration and complexity of the laser resonator assembly. These results as well as future development perspectives are discussed.

  12. Developing GP monitoring systems guided by a soft systems approach.

    Science.gov (United States)

    Hindle, T

    1995-11-01

    This paper describes a selected aspect of a research project concerned with 'contracts and competition' in the recently reformed National Health Service. The particular feature highlighted in this paper is the central role played by the general practitioners in the health service as principal sources of the demands made on provider units (particularly hospitals) and, hence, critical determinants of volumes and costs in contracting. A practical outcome of the research has been the development of GP monitoring systems to be used by provider units particularly in the context of marketing-led referral expectations. The approach used to highlight areas of potential GP contract management and monitoring improvements has been a development of soft systems methodology.

  13. Indoor Wireless Localization: Hybrid and Unconstrained Nonlinear Optimization Approach

    Directory of Open Access Journals (Sweden)

    R. Jayabharathy

    2013-07-01

    Full Text Available In this study, a hybrid TOA/RSSI wireless localization scheme is proposed for accurate positioning in indoor UWB systems. The major problem in indoor localization is the effect of Non-Line-of-Sight (NLOS) propagation. To mitigate NLOS effects, an unconstrained nonlinear optimization approach is utilized to process Time-of-Arrival (TOA) and Received Signal Strength (RSS) measurements in the location system. TOA range measurements and a path loss model are used to discriminate between LOS and NLOS conditions. Weighting factors assigned by hypothesis testing, which describe the credibility of each TOA range measurement, are used in solving the objective function of the proposed approach. The performance of the proposed technique is evaluated by MATLAB simulation. The results show that the proposed technique performs well and achieves improved positioning under severe NLOS conditions.
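As a sketch of the weighted nonlinear least-squares idea behind such hybrid TOA/RSS localization, the snippet below minimizes weighted range residuals with plain gradient descent; the anchor layout, weights, and solver choice are illustrative assumptions, not the paper's implementation.

```python
import math

def localize(anchors, ranges, weights, x0=(1.0, 1.0), lr=0.05, iters=3000):
    """Estimate a 2-D position by minimizing the weighted residual
    sum_i w_i * (||x - a_i|| - r_i)**2 with plain gradient descent."""
    x, y = x0
    for _ in range(iters):
        gx = gy = 0.0
        for (ax, ay), r, w in zip(anchors, ranges, weights):
            d = math.hypot(x - ax, y - ay) or 1e-12  # avoid division by zero
            c = 2.0 * w * (d - r) / d
            gx += c * (x - ax)
            gy += c * (y - ay)
        x -= lr * gx
        y -= lr * gy
    return x, y

# Four hypothetical anchors at the corners of a 10 m square.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.hypot(true_pos[0] - ax, true_pos[1] - ay) for ax, ay in anchors]
# Hypothesis testing would down-weight ranges suspected of NLOS bias;
# here all links are LOS, so the weights are equal.
weights = [1.0] * 4
est = localize(anchors, ranges, weights)
```

In an NLOS scenario, a biased range would receive a small weight so that its inflated residual barely influences the estimate.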

  14. Portfolio optimization in enhanced index tracking with goal programming approach

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. It aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that maximizes the mean return and minimizes the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach in enhanced index tracking, and to compare it with the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems involving two different goals in enhanced index tracking: a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio obtained with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, through a higher mean return and lower risk without purchasing all the stocks in the market index.
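The two-goal trade-off can be sketched as follows; the return data are hypothetical and a brute-force weight grid stands in for a real goal-programming (LP) solver, so this is a sketch of the idea, not the paper's model.

```python
from itertools import product

# Hypothetical period returns for three stocks and the index (illustrative only).
stock_returns = [
    [0.02, -0.01, 0.03, 0.01, 0.00],
    [0.01,  0.02, -0.02, 0.03, 0.01],
    [0.00,  0.01,  0.01, -0.01, 0.02],
]
index_returns = [0.01, 0.01, 0.01, 0.01, 0.01]

goal_return = sum(index_returns) / len(index_returns)  # goal 1: match/beat the index mean
goal_risk = 0.01                                       # goal 2: tracking-risk target

def evaluate(w):
    port = [sum(wi * r[t] for wi, r in zip(w, stock_returns))
            for t in range(len(index_returns))]
    mean_ret = sum(port) / len(port)
    # tracking risk: mean absolute deviation from the index
    risk = sum(abs(p - i) for p, i in zip(port, index_returns)) / len(port)
    # goal programming penalizes only the unwanted deviations:
    # shortfall below the return goal and excess above the risk goal
    return max(0.0, goal_return - mean_ret) + max(0.0, risk - goal_risk)

step = 0.05
grid = [i * step for i in range(int(1 / step) + 1)]
best_w, best_score = None, float("inf")
for w1, w2 in product(grid, grid):
    w3 = 1.0 - w1 - w2
    if w3 < -1e-9:
        continue
    score = evaluate((w1, w2, w3))
    if score < best_score:
        best_w, best_score = (w1, w2, w3), score
```

A real goal-programming formulation would introduce explicit deviation variables and solve a linear program; the scoring of unwanted deviations is the same.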

  15. Genetic braid optimization: A heuristic approach to compute quasiparticle braids

    Science.gov (United States)

    McDonald, Ross B.; Katzgraber, Helmut G.

    2013-02-01

    In topologically protected quantum computation, quantum gates can be carried out by adiabatically braiding two-dimensional quasiparticles, reminiscent of entangled world lines. Bonesteel et al. [Phys. Rev. Lett. 95, 140503 (2005)], as well as Leijnse and Flensberg [Phys. Rev. B 86, 104511 (2012)], recently provided schemes for computing quantum gates from quasiparticle braids. Mathematically, the problem of executing a gate becomes that of finding a product of the generators (matrices) in a given set that best approximates the gate, up to an error. To date, efficient methods to compute these gates strive to optimize for accuracy only. We explore the possibility of using a generic approach applicable to a variety of braiding problems based on evolutionary (genetic) algorithms. The method efficiently finds optimal braids while allowing the user to optimize for the relative utilities of accuracy and/or length. Furthermore, when optimizing for error only, the method can quickly produce efficient braids.
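The evolutionary search can be illustrated on a toy braid problem: 2x2 rotation matrices stand in for the actual quasiparticle braid generators, and the fitness combines matrix error with a length penalty, mirroring the accuracy/length utilities described above. All parameters here are illustrative assumptions.

```python
import math, random

random.seed(1)

def rot(a):  # 2x2 rotation matrix, a toy stand-in for a braid generator
    return [[math.cos(a), -math.sin(a)], [math.sin(a), math.cos(a)]]

GENS = [rot(math.pi / 8), rot(-math.pi / 8)]   # generator set {sigma, sigma^-1}
TARGET = rot(math.pi / 2)                       # "gate" to approximate

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def error(seq):
    """Frobenius distance between the braid's product matrix and the target."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for g in seq:
        M = matmul(M, GENS[g])
    return math.sqrt(sum((M[i][j] - TARGET[i][j]) ** 2
                         for i in range(2) for j in range(2)))

LAM = 0.01  # relative utility of braid length vs. accuracy

def fitness(seq):
    return error(seq) + LAM * len(seq)

def mutate(seq):
    seq = list(seq)
    op = random.random()
    if op < 0.4 and seq:                      # flip one generator
        i = random.randrange(len(seq))
        seq[i] = 1 - seq[i]
    elif op < 0.7 and len(seq) < 12:          # insert a generator
        seq.insert(random.randrange(len(seq) + 1), random.randrange(2))
    elif seq:                                 # delete a generator
        del seq[random.randrange(len(seq))]
    return seq

# Elitist genetic algorithm: keep the best braids, breed mutated copies.
pop = [[random.randrange(2) for _ in range(random.randint(1, 8))]
       for _ in range(60)]
for _ in range(100):
    pop.sort(key=fitness)
    pop = pop[:15] + [mutate(random.choice(pop[:15])) for _ in range(45)]
best = min(pop, key=fitness)
```

Raising LAM favors shorter braids at the expense of accuracy, which is the user-tunable trade-off the abstract describes.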

  16. Optimizing communication satellites payload configuration with exact approaches

    Science.gov (United States)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.

  17. Optimal trading strategies—a time series approach

    Science.gov (United States)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
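The core linear-algebra step can be sketched directly: minimizing the strategy's variance phi^T C phi subject to the normalization 1^T phi = 1 gives, by a Lagrange-multiplier argument, phi proportional to C^{-1} 1. The auto-covariance matrix below is illustrative, not estimated from data.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Illustrative auto-covariance matrix of price increments (symmetric, pos. definite).
C = [[1.0, 0.4, 0.1],
     [0.4, 1.0, 0.4],
     [0.1, 0.4, 1.0]]
raw = solve(C, [1.0, 1.0, 1.0])    # proportional to C^{-1} 1
total = sum(raw)
phi = [w / total for w in raw]     # normalized trading schedule
```

In practice C would be estimated from data and cleaned first, which is exactly the small-sample issue the paper investigates.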

  18. Ant colony optimization approach to estimate energy demand of Turkey

    Energy Technology Data Exchange (ETDEWEB)

    Duran Toksari, M. [Erciyes University, Kayseri (Turkey). Engineering Faculty, Industrial Engineering Department]

    2007-08-15

    This paper attempts to shed light on the determinants of energy demand in Turkey. An energy demand model is first proposed using the ant colony optimization (ACO) approach. ACO is a multi-agent method in which the behavior of each ant, inspired by the foraging behavior of real ants, contributes to solving an optimization problem. The ACO energy demand estimation (ACOEDE) model is developed using population, gross domestic product (GDP), import and export. All equations proposed here are linear and quadratic. The quadratic ACOEDE provided a better-fit solution due to fluctuations of the economic indicators. The ACOEDE model projects the energy demand of Turkey until 2025 according to three scenarios. The relative estimation errors of the ACOEDE model are the lowest when compared with the Ministry of Energy and Natural Resources (MENR) projection. (author)

  19. FLUKA simulations for the optimization of the Beam Loss Monitors

    CERN Document Server

    Brugger, M; Ferrari, A; Magistris, M; Santana-Leitner, M; Vlachoudis, V; CERN. Geneva. AB Department

    2006-01-01

    The collimation system in the beam cleaning insertion IR7 of the Large Hadron Collider (LHC) is expected to clean the primary halo and the secondary radiation of a beam with unprecedented energy and intensity. Accidental beam losses can therefore entail severe consequences for the hardware of the machine. Thus, protection mechanisms, e.g. beam abort, must be instantaneously triggered by a set of Beam Loss Monitors (BLMs). The readings in the BLMs couple the losses from various collimators, thus rendering the identification of any faulty unit rather complex. In the present study the detailed geometry of IR7 is upgraded with the insertion of the BLMs, and the Monte Carlo FLUKA transport code is used to estimate the individual contribution of every collimator to the showers detected in each BLM.

  20. Novel anomaly detection approach for telecommunication network proactive performance monitoring

    Institute of Scientific and Technical Information of China (English)

    Yanhua YU; Jun WANG; Xiaosu ZHAN; Junde SONG

    2009-01-01

    The mode of telecommunication network management is changing from "network oriented" to "subscriber oriented". Aimed at enhancing subscribers' experience, proactive performance monitoring (PPM) enables fast fault correction by detecting anomalies that indicate performance degradation. In this paper, a novel anomaly detection approach is proposed, taking advantage of time series prediction and the associated confidence interval based on a multiplicative autoregressive integrated moving average (ARIMA) model. Under the assumption that the training residual is a white noise process following a normal distribution, the confidence interval of the prediction can be computed at any given confidence level 1-α by constructing random variables that satisfy a t distribution. Experimental results verify the method's effectiveness.
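The prediction-interval idea can be sketched with a plain AR(1) predictor standing in for the full multiplicative ARIMA model, and a normal-quantile threshold standing in for the paper's t-distribution interval; the synthetic KPI series and all parameters are illustrative assumptions.

```python
import random, statistics

random.seed(7)

# Synthetic KPI series: AR(1) dynamics with an injected anomaly at index 90.
series = [0.0]
for _ in range(99):
    series.append(0.8 * series[-1] + random.gauss(0.0, 0.1))
series[90] += 3.0  # performance degradation event

train, monitor_start = series[:80], 80

# Fit the AR(1) coefficient by least squares:
# phi = sum(x_t * x_{t-1}) / sum(x_{t-1}^2)
num = sum(train[t] * train[t - 1] for t in range(1, len(train)))
den = sum(train[t - 1] ** 2 for t in range(1, len(train)))
phi = num / den

# Training residuals; assume they are white noise ~ N(0, sigma^2).
resid = [train[t] - phi * train[t - 1] for t in range(1, len(train))]
sigma = statistics.pstdev(resid)

# Flag monitored points falling outside the ~95% prediction interval.
z = 1.96
anomalies = [t for t in range(monitor_start, len(series))
             if abs(series[t] - phi * series[t - 1]) > z * sigma]
```

With a 95% interval, roughly 1 in 20 normal points is flagged by chance; the paper's t-distribution construction tightens this for small training samples.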

  1. Measuring and monitoring IT using a balanced scorecard approach.

    Science.gov (United States)

    Gash, Deborah J; Hatton, Todd

    2007-01-01

    Ensuring that the information technology department is aligned with the overall health system strategy and is performing at a consistently high level is a priority at Saint Luke's Health System in Kansas City, Mo. The information technology department of Saint Luke's Health System has been using the balanced scorecard approach described in this article to measure and monitor its performance for four years. This article will review the structure of the IT department's scorecard; the categories and measures used; how benchmarks are determined; how linkage to the organizational scorecard is made; how results are reported; how changes are made to the scorecard; and tips for using a scorecard in other IT departments.

  2. An Optimal Rubrics-Based Approach to Real Estate Appraisal

    Directory of Open Access Journals (Sweden)

    Zhangcheng Chen

    2017-05-01

    Full Text Available Traditional real estate appraisal methods obtain estimates of real estate value by using mathematical modeling to analyze existing sample data. However, the information in sample data sometimes cannot fully reflect real-time quotes. For example, in a thin real estate market, correlated sample data for the estimated object are lacking, which limits the estimates of these traditional methods. In this paper, an optimal rubrics-based approach to real estate appraisal is proposed, which brings in crowdsourcing. The valuation estimate can serve as a market indication for potential real estate buyers or sellers. It is based not only on the information in the existing sample data (just like the traditional methods), but also on extra real-time market information from online crowdsourcing feedback, which brings the estimated result close to that of the market. The proposed method constructs a rubrics model from sample data. Based on this, the cosine similarity function is used to calculate the similarity between rubrics and select the optimal rubrics. The selected optimal rubrics and the estimated point are posted on a crowdsourcing platform. After comparing the information of the estimated point with the optimal rubrics on the crowdsourcing platform, users who are familiar with the estimated object complete the appraisal with their knowledge of the real estate market. The experimental results show that the average accuracy of the proposed approach is over 70%, and the maximum accuracy is 90%. This supports the claim that the proposed method can easily provide a valuable market reference for potential real estate buyers or sellers, and it is an attempt to use human-computer interaction in the real estate appraisal field.
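The rubric-selection step rests on plain cosine similarity. A minimal sketch follows; the feature vectors (floor area, age, distance to center, rooms) and rubric names are hypothetical, not from the paper.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical rubric feature vectors and the point to be appraised.
rubrics = {
    "rubric_a": [90.0, 5.0, 2.0, 3.0],
    "rubric_b": [60.0, 30.0, 8.0, 2.0],
    "rubric_c": [95.0, 6.0, 2.5, 3.0],
}
estimated_point = [92.0, 5.0, 2.1, 3.0]

# Rank rubrics by similarity to the estimated point and keep the top two
# as the "optimal rubrics" to post on the crowdsourcing platform.
ranked = sorted(rubrics,
                key=lambda k: cosine_similarity(rubrics[k], estimated_point),
                reverse=True)
optimal = ranked[:2]
```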

  3. Direct and Evolutionary Approaches for Optimal Receiver Function Inversion

    Science.gov (United States)

    Dugda, Mulugeta Tuji

    Receiver functions are time series obtained by deconvolving vertical-component seismograms from radial-component seismograms. Receiver functions represent the impulse response of the earth structure beneath a seismic station. Generally, receiver functions consist of a number of seismic phases related to discontinuities in the crust and upper mantle. The relative arrival times of these phases are correlated with the locations of discontinuities as well as the media of seismic wave propagation. The Moho (Mohorovicic discontinuity) is the major interface that separates the crust and the mantle. In this research, automatic techniques were developed to determine the depth of the Moho from the earth's surface (the crustal thickness H) and the ratio of crustal seismic P-wave velocity (Vp) to S-wave velocity (Vs) (kappa = Vp/Vs). In this dissertation, an optimization problem of inverting receiver functions has been formulated to determine the crustal parameters and the three associated weights using evolutionary and direct optimization techniques. The first technique developed makes use of the evolutionary Genetic Algorithm (GA) optimization technique. The second technique combines the direct Generalized Pattern Search (GPS) and the evolutionary Fitness Proportionate Niching (FPN) techniques, exploiting the strengths of each. In a previous study, a Monte Carlo technique was utilized for determining variable weights in the H-kappa stacking of receiver functions. Compared to that variable-weights approach, the GA and GPS-FPN techniques save considerable time and are suitable for automatic and simultaneous determination of crustal parameters and appropriate weights. The GA implementation provides optimal or near-optimal weights for stacking receiver functions as well as optimal H and kappa values simultaneously. Generally, the objective function of the H-kappa stacking problem
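The H-kappa stacking objective referred to above can be sketched as a grid search over a weighted stack of receiver-function amplitudes at the predicted Ps, PpPs, and PpSs+PsPs arrival times (a common formulation of the method). The synthetic receiver function, velocities, ray parameter, and weights below are illustrative assumptions, not the dissertation's data.

```python
import math

VP = 6.5        # assumed crustal P velocity, km/s
P = 0.06        # ray parameter, s/km
DT = 0.05       # sample interval, s
H_TRUE, K_TRUE = 35.0, 1.75

def phase_times(H, kappa):
    """Arrival times of Ps, PpPs, PpSs+PsPs for thickness H, kappa = Vp/Vs."""
    eta_p = math.sqrt(1.0 / VP ** 2 - P ** 2)
    eta_s = math.sqrt(kappa ** 2 / VP ** 2 - P ** 2)
    return H * (eta_s - eta_p), H * (eta_s + eta_p), 2.0 * H * eta_s

def pulse(t, t0, width=0.3):
    return math.exp(-((t - t0) / width) ** 2)

# Synthetic receiver function: positive Ps and PpPs, negative PpSs+PsPs.
t1, t2, t3 = phase_times(H_TRUE, K_TRUE)
rf = [pulse(i * DT, t1) + 0.5 * pulse(i * DT, t2) - 0.5 * pulse(i * DT, t3)
      for i in range(int(25.0 / DT))]

def stack(H, kappa, w=(0.5, 0.3, 0.2)):
    a, b, c = (rf[min(int(round(t / DT)), len(rf) - 1)]
               for t in phase_times(H, kappa))
    return w[0] * a + w[1] * b - w[2] * c   # sign flip on the third phase

best = max(((H, k) for H in [25.0 + 0.25 * i for i in range(81)]
                   for k in [1.60 + 0.01 * j for j in range(31)]),
           key=lambda hk: stack(*hk))
```

The GA and GPS-FPN techniques in the dissertation search this same objective surface (and the weights w) instead of exhaustively gridding it.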

  4. Perspective: Codesign for materials science: An optimal learning approach

    Science.gov (United States)

    Lookman, Turab; Alexander, Francis J.; Bishop, Alan R.

    2016-05-01

    A key element of materials discovery and design is to learn from available data and prior knowledge to guide the next experiments or calculations in order to focus in on materials with targeted properties. We suggest that the tight coupling and feedback between experiments, theory and informatics demands a codesign approach, very reminiscent of computational codesign involving software and hardware in computer science. This requires dealing with a constrained optimization problem in which uncertainties are used to adaptively explore and exploit the predictions of a surrogate model to search the vast high dimensional space where the desired material may be found.

  5. Multidisciplinary Design Optimization Under Uncertainty: An Information Model Approach (PREPRINT)

    Science.gov (United States)

    2011-03-01


  6. A Hybrid Approach to the Optimization of Multiechelon Systems

    Directory of Open Access Journals (Sweden)

    Paweł Sitek

    2015-01-01

    Full Text Available In freight transportation there are two main distribution strategies: direct shipping and multiechelon distribution. In direct shipping, vehicles starting from a depot bring their freight directly to the destination, while in multiechelon systems freight is delivered from the depot to the customers through intermediate points. Multiechelon systems are particularly useful for logistic issues in a competitive environment. The paper presents a concept and application of a hybrid approach to modeling and optimization of the Multi-Echelon Capacitated Vehicle Routing Problem. Two paradigms, mathematical programming (MP) and constraint logic programming (CLP), are integrated in one environment. MP and CLP treat constraints in different ways and implement different solution methods, and the two are combined to exploit the strengths of both. The proposed approach is particularly important for discrete decision models with an objective function and many discrete decision variables added up in multiple constraints. An implementation of the hybrid approach in the ECLiPSe system using the Eplex library is presented. The Two-Echelon Capacitated Vehicle Routing Problem (2E-CVRP) and its variants are shown as an illustrative example of the hybrid approach. The presented hybrid approach is compared with classical mathematical programming on the same benchmark data sets.

  7. Performance monitoring and optimization of industrial processes [abstract]

    Directory of Open Access Journals (Sweden)

    Sainlez, M.

    2010-01-01

    Full Text Available Data mining refers to extracting useful knowledge from large amounts of data. It is a result of the natural evolution of information technology and the development of recent algorithms. Starting from large databases, the main objective is to find interesting latent patterns. In the end, the quality of a model is assessed by its performance in predicting new observations. Bagging and boosting are general strategies for improving classifier and predictor accuracy. They are examples of ensemble methods, which use a combination of models. The bagging algorithm creates an ensemble of models (by bootstrap sampling) for a learning scheme where each model gives an equally weighted prediction. In particular, random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. Internal estimates are also used to measure variable importance. Within the framework of a Kraft pulp mill, we analyze recovery boiler pollutants and steam production. This kind of boiler acts both as a high-pressure steam boiler and as a chemical reactor with reductive and oxidative zones. The steam is used in other mill processes and to run a steam turbine in order to produce electrical energy. Significant opportunities already exist to optimize this production and reduce atmospheric pollutants. Random forest modeling is a promising way to achieve that.
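The bagging idea described above can be sketched with bootstrap-averaged decision stumps (one-split regression trees); the noisy step-function data and ensemble size are illustrative, not the mill data.

```python
import random

random.seed(3)

# Illustrative 1-D regression data: a noisy step function.
xs = [i / 100.0 for i in range(100)]
ys = [(1.0 if x > 0.5 else 0.0) + random.gauss(0.0, 0.1) for x in xs]

def fit_stump(xs, ys):
    """One-split regression tree: pick the threshold minimizing squared error."""
    best = None
    for thr in xs:
        left = [y for x, y in zip(xs, ys) if x <= thr]
        right = [y for x, y in zip(xs, ys) if x > thr]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda x: ml if x <= thr else mr

# Bagging: fit each stump on a bootstrap sample, average the predictions.
ensemble = []
for _ in range(25):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    ensemble.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))

def predict(x):
    return sum(m(x) for m in ensemble) / len(ensemble)
```

A random forest extends this by also sampling the candidate split variables at each node, which decorrelates the trees further.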

  8. Application of TRIZ approach to machine vibration condition monitoring problems

    Science.gov (United States)

    Cempel, Czesław

    2013-12-01

    Up to now machine condition monitoring has not been seriously approached by TRIZ1TRIZ= Russian acronym for Inventive Problem Solving System, created by G. Altshuller ca 50 years ago. users, and the knowledge of TRIZ methodology has not been applied there intensively. However, there are some introductory papers of present author posted on Diagnostic Congress in Cracow (Cempel, in press [11]), and Diagnostyka Journal as well. But it seems to be further need to make such approach from different sides in order to see, if some new knowledge and technology will emerge. In doing this we need at first to define the ideal final result (IFR) of our innovation problem. As a next we need a set of parameters to describe the problems of system condition monitoring (CM) in terms of TRIZ language and set of inventive principles possible to apply, on the way to IFR. This means we should present the machine CM problem by means of contradiction and contradiction matrix. When specifying the problem parameters and inventive principles, one should use analogy and metaphorical thinking, which by definition is not exact but fuzzy, and leads sometimes to unexpected results and outcomes. The paper undertakes this important problem again and brings some new insight into system and machine CM problems. This may mean for example the minimal dimensionality of TRIZ engineering parameter set for the description of machine CM problems, and the set of most useful inventive principles applied to given engineering parameter and contradictions of TRIZ.

  9. Logarithmic Fuzzy Preference Programming Approach for Evaluating University Ranking Optimization

    Directory of Open Access Journals (Sweden)

    Tenia Wahyuningrum

    2017-05-01

    Full Text Available Assessing the quality of a university's website through Webometrics has become one of the measures of a World Class University. To achieve good grades and compete with other universities in the world, strategies must be pursued that reflect both the cost perspective (expenses) and the availability and readiness of the human resources (HR) owned by the institution. Webometrics ranking optimization tailored to institutional capacity is absolutely necessary in order to achieve the expected goals effectively and economically. This paper therefore discusses the application of the Analytic Hierarchy Process combined with Logarithmic Fuzzy Preference Programming, which addresses shortcomings of the FPP method, to university web-ranking optimization. From the weighting of sub-criteria based on the cost and human-resource perspectives, the highest-ranked recommendations were monitoring the site's standing on ahrefs (C332) and majesticseo (C331), as well as increasing the number of links from other websites (C321).

  10. Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space

    Science.gov (United States)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2014-10-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the needs of decision makers, scientific investigators and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision engine modeling runs that targeted optimizing the intended NPOESS architecture become a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures through the comparison and evaluation of the alternatives considered and the exhaustive range of trade space explored. A representative optimization of a global ECV (essential climate variables) climate monitoring architecture is explored and described in some detail, with thoughts on appropriate rule-based valuations. The optimization tool(s) suggest and support global collaboration pathways and will hopefully elicit responses from the audience and climate science stakeholders.

  11. Taxes, subsidies and unemployment - a unified optimization approach

    Directory of Open Access Journals (Sweden)

    Erik Bajalinov

    2010-12-01

    Full Text Available Like a linear programming (LP) problem, a linear-fractional programming (LFP) problem can be usefully applied in a wide range of real-world applications. In the last few decades many research papers and monographs were published throughout the world in which authors (mainly mathematicians) investigated different theoretical and algorithmic aspects of LFP problems in various forms. In this paper we consider these two approaches to optimization (based on linear and linear-fractional objective functions) on the same feasible set, compare the results they lead to, and give an interpretation in terms of taxes, subsidies and manpower requirements. We show that in certain cases both approaches are closely connected with one another and may be fruitfully utilized simultaneously.

  12. Forging tool shape optimization using pseudo inverse approach and adaptive incremental approach

    Science.gov (United States)

    Halouani, A.; Meng, F. J.; Li, Y. M.; Labergère, C.; Abbès, B.; Lafon, P.; Guo, Y. Q.

    2013-05-01

    This paper presents a simplified finite element method called "Pseudo Inverse Approach" (PIA) for tool shape design and optimization in multi-step cold forging processes. The approach is based on the knowledge of the final part shape. Some intermediate configurations are introduced and corrected by using a free surface method to consider the deformation paths without contact treatment. A robust direct algorithm of plasticity is implemented by using the equivalent stress notion and tensile curve. Numerical tests have shown that the PIA is very fast compared to the incremental approach. The PIA is used in an optimization procedure to automatically design the shapes of the preform tools. Our objective is to find the optimal preforms which minimize the equivalent plastic strain and punch force. The preform shapes are defined by B-Spline curves. A simulated annealing algorithm is adopted for the optimization procedure. The forging results obtained by the PIA are compared to those obtained by the incremental approach to show the efficiency and accuracy of the PIA.

  13. Selecting optimal monitoring site locations for peak ambient particulate material concentrations using the MM5-CAMx4 numerical modelling system.

    Science.gov (United States)

    Sturman, Andrew; Titov, Mikhail; Zawar-Reza, Peyman

    2011-01-15

    Installation of temporary or long-term monitoring sites is expensive, so it is important to rationally identify potential locations that will achieve the requirements of regional air quality management strategies. A simple, but effective, numerical approach to selecting ambient particulate matter (PM) monitoring site locations has therefore been developed using the MM5-CAMx4 air pollution dispersion modelling system. A new method, 'site efficiency,' was developed to assess the ability of any monitoring site to provide peak ambient air pollution concentrations that are representative of the urban area. 'Site efficiency' varies from 0 to 100%, with the latter representing the most representative site location for monitoring peak PM concentrations. Four heavy pollution episodes in Christchurch (New Zealand) during winter 2005, representing 4 different aerosol dispersion patterns, were used to develop and test this site assessment technique. Evaluation of the efficiency of monitoring sites was undertaken for night and morning aerosol peaks for 4 different PM spatial patterns. The results demonstrate that the existing long-term monitoring site at Coles Place is quite well located, with a site efficiency value of 57.8%. A temporary ambient PM monitoring site (operating during winter 2006) showed a lower ability to capture night and morning peak aerosol concentrations. Evaluation of multiple site locations used during an extensive field campaign in Christchurch (New Zealand) in 2000 indicated that the maximum efficiency achieved by any site in the city would be 60-65%, while the efficiency of a virtual background site is calculated to be about 7%. This method of assessing the appropriateness of any potential monitoring site can be used to optimize monitoring site locations for any air pollution measurement programme.

  14. PERCEPTIVE APPROACH FOR ROUTE OPTIMIZATION IN MOBILE IP

    Directory of Open Access Journals (Sweden)

    Vinay Kumar Nigam

    2010-12-01

    Full Text Available The recent advances in wireless communication technology and the unprecedented growth of the Internet have paved the way for wireless networking and IP mobility. Mobile Internet Protocol [1,2] has been designed within the IETF to support the mobility of users who wish to connect to the Internet and maintain communications as they move from place to place. Mobile IPv6 allows a mobile node to talk directly to its peers while retaining the ability to move around and change its currently used IP address. This mode of operation is called Route Optimization [7,10]. In this approach, the correspondent node learns a binding between the mobile node's permanent home address and its current temporary care-of address. This introduces several security vulnerabilities to Mobile IP, among which the most important is the authentication and authorization of binding updates. This paper describes route optimization under mobility. We propose a new efficient technique for route optimization in Mobile IP that maintains smooth communication while the mobile node moves from one network domain to another without losing the connection. Our technique also improves the path for intra-network communication [9].

  15. Specific energy optimization in sawing of rocks using Taguchi approach

    Institute of Scientific and Technical Information of China (English)

    Izzet Karakurt

    2014-01-01

    This work aims at selecting optimal operating variables to obtain the minimum specific energy (SE) in sawing of rocks. A particular granite was sampled and sawn by a fully automated circular diamond sawblade. The peripheral speed, the traverse speed, the cut depth and the flow rate of cooling fluid were selected as the operating variables. The Taguchi approach was adopted as a statistical design-of-experiments technique for the optimization studies. The results were evaluated based on the analysis of variance and the signal-to-noise ratio (S/N ratio). Statistically significant operating variables and their percentage contributions to the process were also determined. Additionally, a statistical model was developed to demonstrate the relationship between SE and the operating variables using regression analysis, and the model was then verified. It was found that the optimal combination of operating variables for minimum SE is a peripheral speed of 25 m/s, a traverse speed of 70 cm/min, a cut depth of 2 cm and a cooling-fluid flow rate of 100 mL/s. The cut depth and traverse speed were statistically determined to be the significant operating variables affecting the SE. Furthermore, the regression model results reveal that the predictive model is highly applicable for practical use.
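Since SE is to be minimized, such Taguchi analyses use the smaller-the-better signal-to-noise ratio, S/N = -10 log10(mean of y^2); the repeated specific-energy readings below are hypothetical, for illustration only.

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better S/N ratio: -10 * log10(mean of y^2)."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical repeated SE readings for two trial settings of the
# operating variables (peripheral speed, traverse speed, cut depth, flow rate).
trial_a = [1.8, 1.9, 2.0]   # lower specific energy
trial_b = [2.6, 2.4, 2.7]

sn_a = sn_smaller_is_better(trial_a)
sn_b = sn_smaller_is_better(trial_b)
# The setting with the higher S/N ratio is the better (lower-energy) one.
```

Averaging these S/N ratios per factor level across an orthogonal array is what identifies the optimal level of each operating variable.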

  16. Replication in Overlay Networks: A Multi-objective Optimization Approach

    Science.gov (United States)

    Al-Haj Hassan, Osama; Ramaswamy, Lakshmish; Miller, John; Rasheed, Khaled; Canfield, E. Rodney

    Recently, overlay network-based collaborative applications such as instant messaging, content sharing, and Internet telephony are becoming increasingly popular. Many of these applications rely upon data-replication to achieve better performance, scalability, and reliability. However, replication entails various costs such as storage for holding replicas and communication overheads for ensuring replica consistency. While simple rule-of-thumb strategies are popular for managing the cost-benefit tradeoffs of replication, they cannot ensure optimal resource utilization. This paper explores a multi-objective optimization approach for replica management, which is unique in the sense that we view the various factors influencing replication decisions such as access latency, storage costs, and data availability as objectives, and not as constraints. This enables us to search for solutions that yield close to optimal values for these parameters. We propose two novel algorithms, namely multi-objective Evolutionary (MOE) algorithm and multi-objective Randomized Greedy (MORG) algorithm for deciding the number of replicas as well as their placement within the overlay. While MOE yields higher quality solutions, MORG is better in terms of computational efficiency. The paper reports a series of experiments that demonstrate the effectiveness of the proposed algorithms.

  17. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha

    2013-02-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T), which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". The algorithm stops partitioning a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and then, among these rules, describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
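The uncertainty measure R(T), the number of unordered row pairs with different decisions, can be computed from the decision counts alone; a minimal sketch with a toy decision column (not taken from the paper's tables):

```python
from collections import Counter

def uncertainty(decisions):
    """R(T): number of unordered pairs of rows with different decisions.
    Equals (n^2 - sum of squared class counts) / 2."""
    counts = Counter(decisions)
    n = len(decisions)
    return (n * n - sum(c * c for c in counts.values())) // 2

# Toy decision column for a 5-row table T
print(uncertainty([1, 1, 2, 2, 3]))  # 8
```

In the algorithm above, a subtable stops being partitioned once its uncertainty is at most β, which is what keeps the graph Δβ(T) finite.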

  18. An optimization approach for fitting canonical tensor decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M. (Sandia National Laboratories, Albuquerque, NM); Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
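For a third-order tensor, the gradient of the CP fitting objective with respect to one factor matrix has a closed form involving a Khatri-Rao product, which is the kind of expression gradient-based methods exploit. A small numpy sketch, with illustrative dimensions and random data:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x R) and C (K x R), shape (J*K, R)."""
    J, R = B.shape
    K, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def cp_gradient_A(X, A, B, C):
    """Gradient of 0.5 * ||X - [[A, B, C]]||_F^2 with respect to factor A:
    -X_(1) (B ⊙ C) + A ((B^T B) * (C^T C)), using the mode-1 unfolding of X."""
    I, J, K = X.shape
    X1 = X.reshape(I, J * K)      # mode-1 unfolding; column index (j, k) -> j*K + k
    KR = khatri_rao(B, C)         # rows follow the same (j, k) ordering
    return -X1 @ KR + A @ ((B.T @ B) * (C.T @ C))

# Sanity check: at the exact rank-2 factors, the gradient vanishes
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((5, 2))
X = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.allclose(cp_gradient_A(X, A, B, C), 0))  # True
```

The check prints True because the data are generated exactly from rank-2 factors, so the residual, and hence the gradient, vanishes at the true solution.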

  19. Modeling the crop transpiration using an optimality-based approach

    Institute of Scientific and Technical Information of China (English)

    Stanislaus J. Schymanski; Murugesu Sivapalan

    2008-01-01

    Evapotranspiration constitutes more than 80% of the long-term water balance in Northern China. In this area, crop transpiration due to large areas of agriculture and irrigation is responsible for the majority of evapotranspiration. A model for crop transpiration is therefore essential for estimating the agricultural water consumption and understanding its feedback to the environment. However, most existing hydrological models usually calculate transpiration by relying on parameter calibration against local observations, and do not take into account crop feedback to the ambient environment. This study presents an optimality-based ecohydrology model that couples an ecological hypothesis, the photosynthetic process, stomatal movement, water balance, root water uptake and crop senescence, with the aim of predicting crop characteristics, CO2 assimilation and water balance based only on given meteorological data. Field experiments were conducted in the Weishan Irrigation District of Northern China to evaluate the performance of the model. Agreement between simulation and measurement was achieved for CO2 assimilation, evapotranspiration and soil moisture content. The vegetation optimality was proven valid for crops and the model was applicable to both C3 and C4 plants. Due to the simple scheme of the optimality-based approach as well as its capability for modeling dynamic interactions between crops and the water cycle without prior vegetation information, this methodology is potentially useful to couple with a distributed hydrological model for application at the watershed scale.

  20. Why do colder mothers produce larger eggs? An optimality approach.

    Science.gov (United States)

    Bownds, Celeste; Wilson, Robbie; Marshall, Dustin J

    2010-11-15

    One of the more common patterns of offspring size variation is that mothers tend to produce larger offspring at lower temperatures. Whether such variation is adaptive remains unclear. Determining whether optimal offspring size differs between thermal environments provides a direct way of assessing the adaptive significance of temperature-driven variation in egg size. Here, we examined the relationship between offspring size and performance at three temperatures for several important fitness components in the zebra fish, Danio rerio. The effects of egg size on performance were highly variable among life-history stages (i.e. pre- and post-hatching) and dependent on the thermal environment; offspring size positively affected performance at some temperatures but negatively affected performance at others. When we used these data to generate a simple optimality model, the model predicted that mothers should produce the largest size offspring at the lowest temperature, offspring of intermediate size at the highest temperature and the smallest offspring at the intermediate temperature. An experimental test of these predictions showed that the rank order of observed offspring sizes produced by mothers matched our predictions. Our results suggest that mothers adaptively manipulate the size of their offspring in response to thermally driven changes in offspring performance and highlight the utility of optimality approaches for understanding offspring size variation.

  1. Silanization of glass chips—A factorial approach for optimization

    Science.gov (United States)

    Vistas, Cláudia R.; Águas, Ana C. P.; Ferreira, Guilherme N. M.

    2013-12-01

    Silanization of glass chips with 3-mercaptopropyltrimethoxysilane (MPTS) was investigated and optimized to generate a high-quality layer with well-oriented thiol groups. A full factorial design was used to evaluate the influence of silane concentration and reaction time. The stabilization of the silane monolayer by thermal curing was also investigated, and a disulfide reduction step was included to fully regenerate the thiol-modified surface function. Fluorescence analysis and water contact angle measurements were used to quantitatively assess the chemical modifications, wettability and quality of modified chip surfaces throughout the silanization, curing and reduction steps. The factorial design enables a systematic approach for the optimization of glass chips silanization process. The optimal conditions for the silanization were incubation of the chips in a 2.5% MPTS solution for 2 h, followed by a curing process at 110 °C for 2 h and a reduction step with 10 mM dithiothreitol for 30 min at 37 °C. For these conditions the surface density of functional thiol groups was 4.9 × 1013 molecules/cm2, which is similar to the expected maximum coverage obtained from the theoretical estimations based on projected molecular area (∼5 × 1013 molecules/cm2).

  2. Optimization of minoxidil microemulsions using fractional factorial design approach.

    Science.gov (United States)

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply a fractional factorial design and multi-response optimization using the desirability function approach to developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables, Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3), and limonene concentration (X4), on MX solubility (Y1), permeation flux (Y2), lag time (Y3), and deposition (Y4) of MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1, X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1, X2. Three regression equations were obtained and used to calculate predicted values of responses Y1, Y2 and Y4. The predicted values matched experimental values reasonably well, with high determination coefficients. Using the optimal desirability function, the optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was obtained at low levels of X1, X2 and X4 and a high level of X3.

  3. Synthesize, optimize, analyze, repeat (SOAR): Application of neural network tools to ECG patient monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Watrous, R.; Towell, G.; Glassman, M.S. [Siemens Corporate Research, Princeton, NJ (United States)

    1995-12-31

    Results are reported from the application of tools for synthesizing, optimizing and analyzing neural networks to an ECG patient monitoring task. A neural network was synthesized from a rule-based classifier and optimized over a set of normal and abnormal heartbeats. The classification error rate on a separate, larger test set was reduced by a factor of 2. When the network was analyzed and reduced in size by 40%, the same level of performance was maintained.

  4. A Fuzzy Approach of the Optimal Analysis Based of Failure States in Manufacturing Systems

    Directory of Open Access Journals (Sweden)

    E. Minca

    2012-12-01

    Full Text Available This article proposes an algorithm for prognosis in the optimal analysis of manufacturing systems. The uncertain knowledge involved in such a task calls for specific reasoning and an adaptive model based on fuzzy logic analysis. The proposed method provides the interface between the results produced by the fuzzy supervision model and the algorithm that identifies the real state of the monitored system. The supervisory system sends failure signals described in a fuzzy form. These signals serve as input values to the optimal failure analysis system, which identifies the current degradation states through a recurrent identification cycle. The proposed algorithm also has a predictive component capable of determining the possible evolution of the system state towards a critical failure state.

  5. Optimization of in-vivo monitoring program for radiation emergency response

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Wi Ho; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of)

    2016-12-15

    In case of radiation emergencies, internal exposure monitoring of members of the public will be required to confirm the internal contamination of each individual. The in-vivo monitoring technique using a portable gamma spectrometer can be easily applied for internal exposure monitoring in the vicinity of the on-site area. In this study, minimum detectable doses (MDDs) for {sup 134}Cs, {sup 137}Cs, and {sup 131}I were calculated by adjusting minimum detectable activities (MDAs) from 50 to 1,000 Bq to find the optimal in-vivo counting condition. DCAL software was used to derive the retention fractions of Cs and I isotopes in the whole body and thyroid, respectively. The minimum detectable level was determined by setting a committed effective dose of 0.1 mSv for emergency response. We found that MDDs at each MDA increased with the elapsed time. MDAs of 1,000 Bq for {sup 134}Cs and {sup 137}Cs, and 100 Bq for {sup 131}I, were suggested as optimal for providing an in-vivo monitoring service in case of radiation emergencies. An in-vivo monitoring program for emergency response should be designed to achieve the optimal MDAs suggested in the present work. We expect that the resulting reduction of counting time compared with a routine monitoring program can achieve a high-throughput system in case of radiation emergencies.
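The link between an MDA and an MDD can be sketched as follows: the intake needed to leave exactly the MDA in the body at measurement time is the MDA divided by the retention fraction m(t), and multiplying that intake by the committed effective dose coefficient e(50) gives the MDD. The retention fraction and dose coefficient below are illustrative placeholders, not values quoted from DCAL or ICRP.

```python
def minimum_detectable_dose(mda_bq, retention_fraction, dose_coeff_sv_per_bq):
    """MDD in mSv: intake = MDA / m(t); MDD = intake * e(50).
    All three inputs here are hypothetical illustration values."""
    intake_bq = mda_bq / retention_fraction
    return intake_bq * dose_coeff_sv_per_bq * 1e3  # Sv -> mSv

# e.g. a 1,000 Bq MDA with an assumed 80% whole-body retention and assumed e(50)
print(minimum_detectable_dose(1000, 0.8, 1.3e-8) < 0.1)  # True: below the 0.1 mSv level
```

The comparison against 0.1 mSv mirrors the decision level used in the study: an MDA is acceptable as long as the corresponding MDD stays below the chosen committed effective dose criterion.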

  6. Optimal Subinterval Selection Approach for Power System Transient Stability Simulation

    Directory of Open Access Journals (Sweden)

    Soobae Kim

    2015-10-01

    Full Text Available Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined, because an analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. The performance of the proposed method is demonstrated with the GSO 37-bus system.

  7. Surface Water Quality Monitoring Site Optimization for Poyang Lake, the Largest Freshwater Lake in China

    Directory of Open Access Journals (Sweden)

    Hua Wang

    2014-11-01

    Full Text Available In this paper, we propose a coupled method to optimize the surface water quality monitoring sites for a huge freshwater lake based on field investigations, mathematical analysis, and numerical simulation tests. Poyang Lake, the largest freshwater lake in China, was selected as the research area. Based on the field-investigated water quality data of the 5 years from 2008 to 2012, the water quality inter-annual variation coefficients at all the present sites and the water quality correlation coefficients between adjacent sites were calculated and analyzed to present an optimization scheme. A 2-D unsteady water quality model was established to obtain the corresponding water quality data at the optimized monitoring sites, which were needed for the rationality test of the optimized monitoring network. We found that: (1) the water quality of Piaoshan (No. 10) fluctuated most distinguishably, and the inter-annual variation coefficients of NH3-N and TP reached 99.77% and 73.92%, respectively. The four studied indexes were all closely related at Piaoshan (No. 10) and Tangyin (No. 11), and the correlation coefficients of COD and NH3-N reached 0.91 and 0.94, respectively. (2) It was suggested that the present site No. 10 be removed to avoid repeatability, and that the three sites of Changling, Huzhong, and Nanjiang be added to improve the representativeness of the monitoring sites. (3) According to the rationality analysis, the 21 optimized water quality monitoring sites could scientifically replace the primary network, and the new monitoring network could better reflect the water quality of the whole lake.
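The two screening statistics used here, the inter-annual variation coefficient of a site and the correlation coefficient between adjacent sites, are both one-liners. The 5-year series below are invented for illustration and are not Poyang Lake data:

```python
import math
import statistics

def variation_coefficient(series):
    """Inter-annual coefficient of variation in percent (sample std / mean)."""
    return 100.0 * statistics.stdev(series) / statistics.mean(series)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical 2008-2012 NH3-N series (mg/L) at two adjacent sites
site_a = [0.21, 0.65, 0.18, 0.90, 0.33]
site_b = [0.25, 0.60, 0.20, 0.85, 0.30]
print(variation_coefficient(site_a) > 50)   # True: strongly fluctuating site
print(pearson(site_a, site_b) > 0.9)        # True: highly correlated neighbours
```

A site whose series fluctuates strongly is informative and worth keeping, while a site highly correlated with its neighbour is a candidate for removal as redundant, which is exactly the logic applied to site No. 10 above.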

  8. Clinicoanatomic study of optimal arthroscopic approaches to the elbow

    Directory of Open Access Journals (Sweden)

    I. A. Kuznetsov

    2015-01-01

    Full Text Available The purpose: development and topographic substantiation of optimal arthroscopic approaches to the elbow, taking into account the location of the neurovascular structures in different functional positions. Material and methods: Anatomical relationships of elbow nerves and bony structures were studied by dissection of non-fixed anatomical material (6 elbow joints). To investigate the variant anatomy of the brachial artery, MRI was performed in 23 patients. In 10 patients the authors used ultrasound to study the topographic relationships of elbow nerve structures at different functional positions of the upper extremity. Variability of the brachial artery deviation, depending on the angle of elbow flexion, was studied in six angiograms of non-fixed anatomical material. Statistical analysis was performed using Instant + and Past 306 software. Results: It was found that elbow flexion from 180° to 90° moves the brachial artery away from the bones, with a maximum distance from the humerus of 5 cm above the joint space. The distance increases from 23.5±3.1 mm to 23.9±3.1 mm. In 90° elbow flexion the radial and median nerves are at the maximum distance from bony structures: 16.01±0.43 and 20.48±0.28 mm, respectively. Conclusion: These findings justify the conclusion that the lateral arthroscopic approaches to the elbow are the safest. It is possible to perform two lateral arthroscopic approaches, optical and instrumental, without conflict with major neurovascular structures. The optimal position for the surgery is 90° elbow flexion.

  9. On the practical convergence of coda-based correlations: a window optimization approach

    Science.gov (United States)

    Chaput, J.; Clerc, V.; Campillo, M.; Roux, P.; Knox, H.

    2016-02-01

    We present a novel optimization approach to improve the convergence of interstation coda correlation functions towards the medium's empirical Green's function. For two stations recording a series of impulsive events in a multiply scattering medium, we explore the impact of coda window selection through a Markov chain Monte Carlo scheme, with the aim of generating a gather of correlation functions that is the most coherent and symmetric over events, thus recovering intuitive elements of the interstation Green's function without any nonlinear post-processing techniques. This approach is tested here on a 2-D acoustic finite difference model, where a much improved correlation function is obtained, as well as on a database of small impulsive icequakes recorded on Erebus Volcano, Antarctica, where similarly robust results are shown. The average coda solutions, as deduced from the posterior probability distributions of the optimization, are further representative of the scattering strength of the medium, with stronger scattering resulting in a slightly delayed overall coda sampling. The recovery of singly scattered arrivals in the coda of correlation functions is also shown to be possible through this approach, and surface wave reflections from outer craters on Erebus volcano were mapped in this fashion. We also note that, due to the improvement of correlation functions over subsequent events, this approach can further be used to improve the resolution of passive temporal monitoring.

  10. Adaptive Methods within a Sequential Bayesian Approach for Structural Health Monitoring

    Science.gov (United States)

    Huff, Daniel W.

    Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion and observations are obtained using the predicted, optimal observation modes based on these characteristics. Calculation of the predicted mean square error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed. With this approach, the real time

  11. New approach to monitor transboundary particulate pollution over northeast Asia

    Directory of Open Access Journals (Sweden)

    M. E. Park

    2013-06-01

    Full Text Available A new approach to more accurately monitor and evaluate transboundary particulate matter (PM pollution is introduced based on aerosol optical products from Korea's geostationary ocean color imager (GOCI. The area studied is northeast Asia including eastern parts of China, the Korean peninsula and Japan, where GOCI has been monitoring since June 2010. The hourly multi-spectral aerosol optical data that were retrieved from GOCI sensor onboard geostationary satellite COMS (Communication, Ocean, and Meteorology Satellite through Yonsei aerosol retrieval algorithm were first presented and used in this study. The GOCI-retrieved aerosol optical data are integrated with estimated aerosol distributions from US EPA Models-3/CMAQ v4.5.1 model simulations via data assimilation technique, thereby making the aerosol data spatially continuous and available even for cloud contamination cells. The assimilated aerosol optical data are utilized to provide quantitative estimates of transboundary PM pollution from China to the Korean peninsula and Japan. For the period of 1 April to 31 May 2011 this analysis yields estimates that AOD as a proxy for surface-level PM2.5 or PM10 during long-range transport events increased by 117–265% compared to background average AOD at the four AERONET sites in Korea, and average AOD increases of 121% were found when averaged over the entire Korean peninsula. The paper demonstrates that the use of multi-spectral AOD retrievals from geostationary satellites can improve estimates of transboundary PM pollution. Such data will become more widely available later this decade when new sensors such as GEMS (Geostationary Environment Monitoring Spectrometer and GOCI-2 are scheduled to be launched.

  12. New Approach to Monitor Transboundary Particulate Pollution over Northeast Asia

    Science.gov (United States)

    Park, M. E.; Song, C. H.; Park, R. S.; Lee, Jaehwa; Kim, J.; Lee, S.; Woo, J. H.; Carmichael, G. R.; Eck, Thomas F.; Holben, Brent N.; Lee, S. S.; Song, C. K.; Hong, Y. D.

    2014-01-01

    A new approach to more accurately monitor and evaluate transboundary particulate matter (PM) pollution is introduced based on aerosol optical products from Korea's Geostationary Ocean Color Imager (GOCI). The area studied is Northeast Asia (including eastern parts of China, the Korean peninsula and Japan), where GOCI has been monitoring since June 2010. The hourly multi-spectral aerosol optical data that were retrieved from GOCI sensor onboard geostationary satellite COMS (Communication, Ocean, and Meteorology Satellite) through the Yonsei aerosol retrieval algorithm were first presented and used in this study. The GOCI-retrieved aerosol optical data are integrated with estimated aerosol distributions from US EPA Models-3/CMAQ (Community Multi-scale Air Quality) v4.5.1 model simulations via data assimilation technique, thereby making the aerosol data spatially continuous and available even for cloud contamination cells. The assimilated aerosol optical data are utilized to provide quantitative estimates of transboundary PM pollution from China to the Korean peninsula and Japan. For the period of 1 April to 31 May, 2011 this analysis yields estimates that AOD as a proxy for PM2.5 or PM10 during long-range transport events increased by 117-265% compared to background average AOD (aerosol optical depth) at the four AERONET sites in Korea, and average AOD increases of 121% were found when averaged over the entire Korean peninsula. This paper demonstrates that the use of multi-spectral AOD retrievals from geostationary satellites can improve estimates of transboundary PM pollution. Such data will become more widely available later this decade when new sensors such as the GEMS (Geostationary Environment Monitoring Spectrometer) and GOCI-2 are scheduled to be launched.

  13. A Google Trends-based approach for monitoring NSSI

    Directory of Open Access Journals (Sweden)

    Bragazzi NL

    2013-12-01

    Full Text Available Nicola Luigi Bragazzi DINOGMI, Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health, Section of Psychiatry, University of Genoa, Genoa, Italy Abstract: Non-suicidal self-injury (NSSI is an intentional, direct, and socially unacceptable behavior resulting in the destruction of one's own body tissues with no intention of dying or committing suicide, even though it is associated with a higher risk of attempted, planned, or just considered suicide. In this preliminary report, we introduce the concept of “NSSI 2.0”; that is to say, the study of the Internet usage by subjects with NSSI, and we introduce a Google Trends-based approach for monitoring NSSI, called NSSI infodemiology and infoveillance. Despite some limitations, Google Trends has already proven to be reliable for infectious diseases monitoring, and here we extend its application and potentiality in the field of suicidology. Ad hoc web portals and surveys could be designed in light of the reported results for helping people with NSSI. Keywords: infodemiology, infoveillance, Internet, non-suicidal self-injury

  14. Remotely Sensed Monitoring of Small Reservoir Dynamics: A Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Dirk Eilander

    2014-01-01

    Full Text Available Multipurpose small reservoirs are important for livelihoods in rural semi-arid regions. To manage and plan these reservoirs and to assess their hydrological impact at a river basin scale, it is important to monitor their water storage dynamics. This paper introduces a Bayesian approach for monitoring small reservoirs with radar satellite images. The newly developed growing Bayesian classifier has a high degree of automation, can readily be extended with auxiliary information and reduces the confusion error to the land-water boundary pixels. A case study has been performed in the Upper East Region of Ghana, based on Radarsat-2 data from November 2012 until April 2013. Results show that the growing Bayesian classifier can deal with the spatial and temporal variability in synthetic aperture radar (SAR backscatter intensities from small reservoirs. Due to its ability to incorporate auxiliary information, the algorithm is able to delineate open water from SAR imagery with a low land-water contrast in the case of wind-induced Bragg scattering or limited vegetation on the land surrounding a small reservoir.
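The Bayes decision rule at the core of such a classifier reduces, for a single backscatter feature with Gaussian class likelihoods, to comparing log posteriors. The sketch below is static and uses invented class means, spreads and dB values; the "growing" prior/likelihood updates and the boundary-pixel handling of the paper are omitted:

```python
import math

class GaussianWaterClassifier:
    """Two-class Bayes decision on a single SAR backscatter feature (dB).
    Class parameters here are invented for illustration."""

    def __init__(self, water_mean, water_std, land_mean, land_std, p_water=0.5):
        self.params = {"water": (water_mean, water_std, p_water),
                       "land": (land_mean, land_std, 1.0 - p_water)}

    @staticmethod
    def _log_posterior(x, mean, std, prior):
        # Gaussian log-likelihood (constant dropped) plus log prior
        return -0.5 * ((x - mean) / std) ** 2 - math.log(std) + math.log(prior)

    def classify(self, backscatter_db):
        return max(self.params,
                   key=lambda c: self._log_posterior(backscatter_db, *self.params[c]))

# Smooth open water scatters little energy back; rougher land scatters more
clf = GaussianWaterClassifier(water_mean=-18, water_std=2.0,
                              land_mean=-8, land_std=3.0)
print(clf.classify(-17), clf.classify(-6))  # water land
```

The paper's classifier additionally updates these class models from image to image and incorporates auxiliary information; only the underlying decision rule is shown here.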

  15. Deployment of Wireless Sensor Networks for Oilfield Monitoring by Multiobjective Discrete Binary Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Zhen-Lun Yang

    2016-01-01

    Full Text Available The deployment problem of wireless sensor networks for real-time oilfield monitoring is studied. As a characteristic of oilfield monitoring systems, all sensor nodes have to be installed on designated spots. For energy efficiency, relay nodes and sink nodes are deployed as a delivery subsystem. The major concern in the construction of the monitoring system is the optimum placement of the data delivery subsystem, ensuring full connectivity of the sensor nodes while keeping the construction cost as low as possible and the construction and maintenance complexity minimal. Due to the complicated landform of oilfields, it is in general rather difficult to satisfy these requirements simultaneously. The deployment problem is formulated as a constrained multiobjective optimization problem and solved through a novel scheme based on multiobjective discrete binary particle swarm optimization, producing optimal trade-off solutions ranging from minimum financial cost to minimum construction and maintenance complexity. Simulation results validated that, compared to three existing state-of-the-art algorithms, namely NSGA-II, JGGA, and SPEA2, the proposed scheme is superior in locating the Pareto-optimal front and maintaining the diversity of the solutions, thus providing superior candidate solutions for the design of real-time monitoring systems in oilfields.
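The discrete binary PSO kernel underlying such a scheme flips bits with a probability given by a sigmoid of the velocity. The single-objective sketch below (toy cost function, invented parameters) shows only that update rule, not the multiobjective archive handling of the paper:

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal single-objective binary PSO (Kennedy-Eberhart sigmoid rule)."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                # Sigmoid maps the velocity to the probability of the bit being 1
                X[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-V[i][d])) else 0
            f = fitness(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

# Toy deployment surrogate: each bit activates a candidate relay spot; fewer
# active relays is cheaper, but at least 3 are needed for connectivity
cost = lambda x: sum(x) if sum(x) >= 3 else 100
best, best_f = binary_pso(cost, n_bits=10)
print(best_f)
```

A multiobjective version would replace the single gbest with a Pareto archive and pick leaders from it, which is the extension the paper develops.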

  16. US EPA OPTIMAL WELL LOCATOR (OWL): A SCREENING TOOL FOR EVALUATING LOCATIONS OF MONITORING WELLS

    Science.gov (United States)

    The Optimal Well Locator (OWL) uses linear regression to fit a plane to the elevation of the water table in monitoring wells in each round of sampling. The slope of the plane fit to the water table is used to predict the direction and gradient of ground water flow. Along with ...
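The plane-fitting step described here is ordinary least squares in two spatial coordinates; a sketch with invented well coordinates and heads (this is not the EPA tool's code):

```python
import math
import numpy as np

def water_table_gradient(xs, ys, zs):
    """Least-squares plane z = a*x + b*y + c through measured water levels.
    Returns the hydraulic gradient magnitude and the compass bearing of flow
    (flow follows steepest descent, i.e. the direction of -(a, b))."""
    G = np.column_stack([xs, ys, np.ones(len(xs))])
    (a, b, _), *_ = np.linalg.lstsq(G, np.asarray(zs, dtype=float), rcond=None)
    magnitude = math.hypot(a, b)
    bearing = math.degrees(math.atan2(-a, -b)) % 360.0  # atan2(east, north)
    return magnitude, bearing

# Hypothetical wells (x east, y north, metres) with heads dropping eastward
x = [0.0, 100.0, 0.0, 100.0]
y = [0.0, 0.0, 100.0, 100.0]
z = [10.0, 9.0, 10.0, 9.0]
grad, bearing = water_table_gradient(x, y, z)
print(round(grad, 3), round(bearing, 1))  # 0.01 90.0
```

With heads falling 1 m over 100 m to the east, the fitted gradient is 0.01 and the predicted flow bearing is due east (90°), which is the kind of output used to judge whether wells sit down-gradient of a source.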

  17. Supplemental Assessment of the Y-12 Groundwater Protection Program Using Monitoring and Remediation Optimization System Software

    Energy Technology Data Exchange (ETDEWEB)

    Elvado Environmental LLC; GSI Environmental LLC

    2009-01-01

    A supplemental quantitative assessment of the Groundwater Protection Program (GWPP) at the Y-12 National Security Complex (Y-12) in Oak Ridge, TN was performed using the Monitoring and Remediation Optimization System (MAROS) software. This application was previously used as part of a similar quantitative assessment of the GWPP completed in December 2005, hereafter referenced as the 'baseline' MAROS assessment (BWXT Y-12 L.L.C. [BWXT] 2005). The MAROS software contains modules that apply statistical analysis techniques to an existing GWPP analytical database in conjunction with hydrogeologic factors, regulatory framework, and the location of potential receptors, to recommend an improved groundwater monitoring network and optimum sampling frequency for individual monitoring locations. The goal of this supplemental MAROS assessment of the Y-12 GWPP is to review and update monitoring network optimization recommendations resulting from the 2005 baseline report using data collected through December 2007. The supplemental MAROS assessment is based on the findings of the baseline MAROS assessment and includes only the groundwater sampling locations (wells and natural springs) currently granted 'Active' status in accordance with the Y-12 GWPP Monitoring Optimization Plan (MOP). The results of the baseline MAROS assessment provided technical rationale regarding the 'Active' status designations defined in the MOP (BWXT 2006). One objective of the current report is to provide a quantitative review of data collected from Active but infrequently sampled wells to confirm concentrations at these locations. This supplemental MAROS assessment does not include the extensive qualitative evaluations similar to those presented in the baseline report.

  18. Optimal precursors triggering the Kuroshio Extension state transition obtained by the Conditional Nonlinear Optimal Perturbation approach

    Science.gov (United States)

    Zhang, Xing; Mu, Mu; Wang, Qiang; Pierini, Stefano

    2017-06-01

    In this study, the initial perturbations that most easily trigger the Kuroshio Extension (KE) transition between a basic weak-jet state and a strong, fairly stable meandering state are investigated using a reduced-gravity shallow water ocean model and the CNOP (Conditional Nonlinear Optimal Perturbation) approach. Such an initial perturbation is called an optimal precursor (OPR). The spatial structures and evolutionary processes of the OPRs are analyzed in detail. The results show that most of the OPRs take the form of negative sea surface height (SSH) anomalies located mainly in a narrow band south of the KE jet, in basic agreement with altimetric observations. These negative SSH anomalies reduce the meridional SSH gradient within the KE, thus weakening the jet. The KE jet then becomes more convoluted, with high-frequency, large-amplitude variability corresponding to a high eddy kinetic energy level; this gradually strengthens the KE jet through an inverse energy cascade. Eventually, the KE reaches a high-energy state characterized by two well-defined and fairly stable anticyclonic meanders. Moreover, sensitivity experiments indicate that the spatial structures of the OPRs are not sensitive to the model parameters or to the optimization times used in the analysis.

  19. Ant colony optimization and neural networks applied to nuclear power plant monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Gean Ribeiro dos; Andrade, Delvonei Alves de; Pereira, Iraci Martinez, E-mail: gean@usp.br, E-mail: delvonei@ipen.br, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    A recurring challenge in production processes is the development of monitoring and diagnosis systems. Such systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used to create monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers; the resulting networks are generally fully connected, with no improvement to their topology. This work uses an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm uses Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN is applied to monitoring the IEA-R1 research reactor at IPEN. (author)

  20. Approaches of Russian oil companies to optimal capital structure

    Science.gov (United States)

    Ishuk, T.; Ulyanova, O.; Savchitz, V.

    2015-11-01

    Oil companies play a vital role in the Russian economy. Demand for hydrocarbon products will keep increasing over the coming decades, along with population growth and social needs. Changing the raw-material orientation of the Russian economy and transitioning to an innovation-driven development path do not preclude the future development of the oil industry. Moreover, society expects this sector to put the Russian economy on the road of innovative development through neo-industrialization. Achieving this requires both government action and effective capital management within the companies. To reach an optimal capital structure, it is necessary to minimize the cost of capital, reduce specific risks within existing limits, and maximize profitability. The capital structure analysis of Russian and foreign oil companies reveals different approaches and rationales, as well as different conditions and, consequently, different equity-to-debt ratios and capital costs, which calls for an effective capital management strategy.

  1. Optimization of decision rules based on dynamic programming approach

    KAUST Repository

    Zielosko, Beata

    2014-01-14

    This chapter is devoted to the study of an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure equal to the number of rows in a given decision table minus the number of rows labeled with the most common decision for this table, divided by the total number of rows in the table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by pairs "attribute = value". The algorithm stops partitioning a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2014 Springer International Publishing Switzerland.
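    The uncertainty measure described above can be sketched in a few lines. This is a minimal illustration with a toy decision table; the data and variable names are invented for this example:

```python
from collections import Counter

def uncertainty(rows):
    """Fraction of rows NOT labeled with the table's most common decision:
    (number of rows - count of most common decision) / number of rows."""
    if not rows:
        return 0.0
    counts = Counter(decision for *_, decision in rows)
    most_common = counts.most_common(1)[0][1]
    return (len(rows) - most_common) / len(rows)

# Toy decision table: each row is (attribute values..., decision)
table = [(1, 0, 'a'), (1, 1, 'a'), (0, 1, 'b'), (0, 0, 'a')]
gamma = 0.3
print(uncertainty(table))           # 0.25
print(uncertainty(table) <= gamma)  # True -> stop partitioning this subtable
```

    Here 3 of 4 rows share the most common decision 'a', so the uncertainty is (4 - 3)/4 = 0.25, which is below the chosen threshold γ.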

  2. Performance optimization of Jatropha biodiesel engine model using Taguchi approach

    Energy Technology Data Exchange (ETDEWEB)

    Ganapathy, T.; Murugesan, K.; Gakkhar, R.P. [Mechanical and Industrial Engineering Department, Indian Institute of Technology Roorkee, Roorkee 247 667 (India)

    2009-11-15

    This paper proposes a methodology for thermodynamic model analysis of a Jatropha biodiesel engine in combination with Taguchi's optimization approach to determine the optimum engine design and operating parameters. A thermodynamic model based on a two-zone Wiebe heat release function has been employed to simulate the Jatropha biodiesel engine performance. Among the important engine design and operating parameters, 10 critical parameters were selected, assuming interactions between pairs of parameters. Using linear graph theory and the Taguchi method, an L16 orthogonal array was used to lay out the engine test trials. To maximize the performance of the Jatropha biodiesel engine, the signal-to-noise ratio (SNR) for the higher-the-better (HTB) quality characteristic was used. The methodology correctly identified the compression ratio, the Wiebe heat release constants, and the combustion zone duration as the parameters that most affect engine performance. (author)
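    The higher-the-better SNR used in Taguchi analysis has a standard closed form; a quick sketch (the efficiency values below are invented for illustration):

```python
import math

def snr_higher_the_better(values):
    """Taguchi SNR for a higher-the-better characteristic:
    SNR = -10 * log10( (1/n) * sum(1/y_i^2) ), in dB."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in values) / n)

# Hypothetical brake thermal efficiencies (%) from two replicated trials
trial_a = [28.5, 29.1]
trial_b = [31.2, 30.8]
print(snr_higher_the_better(trial_a))  # ~29.2 dB
print(snr_higher_the_better(trial_b))  # ~29.8 dB -> higher SNR, preferred
```

    The trial with the larger SNR is preferred, since larger response values and lower variability both push the SNR up.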

  3. Frost Formation: Optimizing solutions under a finite volume approach

    Science.gov (United States)

    Bartrons, E.; Perez-Segarra, C. D.; Oliet, C.

    2016-09-01

    A three-dimensional transient formulation of the frost formation process is developed by means of a finite volume approach. Emphasis is put on the frost surface boundary condition as well as the wide range of empirical correlations related to the thermophysical and transport properties of frost. A study of the numerical solution is made, establishing the parameters that ensure grid independence. Attention is given to the algorithm, the discretised equations and the code optimization through dynamic relaxation techniques. A critical analysis of four cases is carried out by comparing solutions of several empirical models against experiments. As a result, the performance of these models is discussed and the most suitable ones are proposed.

  4. Multipurpose Water Reservoir Management: An Evolutionary Multiobjective Optimization Approach

    Directory of Open Access Journals (Sweden)

    Luís A. Scola

    2014-01-01

    Full Text Available The reservoirs that feed large hydropower plants should be managed so as to provide other uses for the water resources as well. Those uses include, for instance, flood control and avoidance, irrigation, and river navigability. This work presents an evolutionary multiobjective optimization approach for the study of multiple water usages in multiple interlinked reservoirs, including both power generation objectives and objectives not related to energy generation. The classical evolutionary algorithm NSGA-II is employed as the basic multiobjective optimization machinery, modified to cope with specific problem features. The case studies, which include the analysis of a problem involving a river navigability objective, are tailored to illustrate the usefulness of the data generated by the proposed methodology for decision-making in the operation planning of multiple reservoirs with multiple usages. It is shown that the generated data can even be used to determine the cost of any new water usage, in terms of the opportunity cost measured on the revenues from electric energy sales.
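    At the core of NSGA-II is Pareto dominance; a minimal sketch of the dominance test and of extracting the non-dominated front (the objective values are invented, and both objectives are taken as minimized):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical (energy shortfall, flood risk) objective pairs
points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = [p for p in points if not any(dominates(q, p) for q in points)]
print(front)  # [(1, 5), (2, 3), (4, 1)] -> the non-dominated (Pareto) front
```

    NSGA-II repeatedly applies this sorting (plus a crowding-distance tiebreak) to rank a population of candidate reservoir operation plans.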

  5. Design optimization for cost and quality: The robust design approach

    Science.gov (United States)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.

  6. A Multiscale Approach to Optimal In Situ Bioremediation Design

    Science.gov (United States)

    Minsker, B. S.; Liu, Y.

    2001-12-01

    The use of optimization methods for in situ bioremediation design is quite challenging because the dynamics of bioremediation require that fine spatial and temporal scales be used in simulation, which substantially increases computational effort for optimization. In this paper, we present a multiscale approach that can be used to solve substantially larger-scale problems than previously possible. The multiscale method starts from a coarse mesh and proceeds to a finer mesh when it converges. While it is on a finer mesh, it switches back to a coarser mesh to calculate derivatives. The derivatives are then interpolated back to the finer mesh to approximate the derivatives on the finer mesh. To demonstrate the method, a four-level case study with 6,500 state variables is solved in less than 9 days, compared with nearly one year that would have been required using the original single-scale model. These findings illustrate that the multiscale method will allow solution of substantially larger-scale problems than previously possible, particularly since the method also enables easy parallelization of the model in the future.

  7. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha

    2012-10-04

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T), the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm stops partitioning a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules, and such a set of rules can be optimized with respect to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.
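    A toy sketch of the uncertainty measure J(T) and the subtable partitioning it drives (the table and attribute indices are invented for illustration):

```python
from collections import Counter

def J(rows):
    """J(T): number of rows minus the count of the most common decision."""
    if not rows:
        return 0
    counts = Counter(decision for *_, decision in rows)
    return len(rows) - counts.most_common(1)[0][1]

def subtable(rows, attr, value):
    """Subtable of T given by the equation 'attribute = value'."""
    return [row for row in rows if row[attr] == value]

table = [(1, 0, 'a'), (1, 1, 'a'), (0, 1, 'b'), (0, 0, 'a')]
gamma = 0
print(J(table))              # 1 -> uncertainty above gamma, keep partitioning
sub = subtable(table, 0, 1)  # rows satisfying "attribute 0 = 1"
print(J(sub))                # 0 -> uncertainty <= gamma, this node is a leaf
```

    Each node of the graph Δγ(T) corresponds to such a subtable; partitioning stops exactly when J of the subtable drops to γ or below.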

  8. Using tailored methodical approaches to achieve optimal science outcomes

    Science.gov (United States)

    Wingate, Lory M.

    2016-08-01

    The science community is actively engaged in research, development, and construction of instrumentation projects that they anticipate will lead to new science discoveries. There appears to be a very strong link between the quality of the activities used to complete these projects and having a fully functioning science instrument that will facilitate these investigations.[2] Using internationally recognized standards from the disciplines of project management (PM) and systems engineering (SE) has been demonstrated to produce positive net effects and optimal project outcomes. Conversely, unstructured, poorly managed projects lead to unpredictable, suboptimal outcomes, ultimately affecting the quality of the science that can be done with the new instruments. This paper presents the application of these two methodical approaches, implemented as a tailorable suite of processes. Project management is accepted worldwide as an effective methodology for controlling project cost, schedule, and scope. Systems engineering is an accepted method for ensuring that the outcomes of a project match the intent of the stakeholders, or, if they diverge, that the changes are understood, captured, and controlled. An appropriate application, or tailoring, of these disciplines can be the foundation upon which the success of science-supporting projects is optimized.

  9. An improved ant colony optimization approach for optimization of process planning.

    Science.gov (United States)

    Wang, JinFeng; Fan, XiaoLiang; Ding, Haimin

    2014-01-01

    Computer-aided process planning (CAPP) is an important interface between computer-aided design (CAD) and computer-aided manufacturing (CAM) in computer-integrated manufacturing environments (CIMs). In this paper, the process planning problem is described in terms of a weighted graph, and an ant colony optimization (ACO) approach is improved to deal with it effectively. The weighted graph consists of nodes, directed arcs, and undirected arcs, which denote operations, precedence constraints among operations, and the possible paths among operations, respectively. The ant colony traverses the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPCs). A pheromone updating strategy comprising a Global Update Rule and a Local Update Rule is incorporated into the standard ACO, and a simple method of limiting the number of repetitions of the same process plan is designed to avoid local convergence. A case study examines the influence of various ACO parameters on system performance, and extensive comparative experiments validate the feasibility and efficiency of the proposed approach.
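    A compact sketch of the idea: ants build operation sequences that respect precedence constraints, choosing each move by pheromone level and transition cost, with evaporation plus reinforcement of the best plan as a global update. The instance, costs, and parameter values are all invented, and the paper's actual update rules are more elaborate:

```python
import random

# Hypothetical instance: transition costs between 4 machining operations,
# plus one precedence constraint (operation 0 must precede operation 2).
N = 4
COST = [[0, 3, 8, 4],
        [3, 0, 2, 7],
        [8, 2, 0, 5],
        [4, 7, 5, 0]]
PRECEDENCE = [(0, 2)]  # (a, b): a must appear before b in the plan

def feasible(op, visited):
    return all(a in visited for a, b in PRECEDENCE if b == op)

def plan_cost(plan):
    return sum(COST[a][b] for a, b in zip(plan, plan[1:]))

def aco(n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.1, seed=1):
    rng = random.Random(seed)
    tau = [[1.0] * N for _ in range(N)]  # pheromone on directed arcs
    best_plan, best_cost = None, float('inf')
    for _ in range(n_iters):
        for _ in range(n_ants):
            plan = []
            while len(plan) < N:
                cand = [j for j in range(N)
                        if j not in plan and feasible(j, plan)]
                if not plan:
                    nxt = rng.choice(cand)
                else:
                    i = plan[-1]
                    weights = [tau[i][j] ** alpha * (1.0 / COST[i][j]) ** beta
                               for j in cand]
                    nxt = rng.choices(cand, weights=weights)[0]
                plan.append(nxt)
            cost = plan_cost(plan)
            if cost < best_cost:
                best_plan, best_cost = plan, cost
        # global update: evaporate everywhere, reinforce the best-so-far plan
        tau = [[(1.0 - rho) * t for t in row] for row in tau]
        for a, b in zip(best_plan, best_plan[1:]):
            tau[a][b] += 1.0 / best_cost
    return best_plan, best_cost

plan, cost = aco()
print(plan, cost)
```

    On this tiny instance the search reliably recovers the cheapest feasible sequence while keeping operation 0 ahead of operation 2.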

  10. A participatory approach to design monitoring indicators of production diseases in organic dairy farms.

    Science.gov (United States)

    Duval, J E; Fourichon, C; Madouasse, A; Sjöström, K; Emanuelson, U; Bareille, N

    2016-06-01

    Production diseases have an important negative effect on the health and welfare of dairy cows. Although organic animal production systems aim for high animal health levels, compliance with European organic farming regulations does not guarantee that this is achieved. Herd health and production management (HHPM) programs aim at optimizing herd health by preventing disease and production problems, but as yet they have not been consistently implemented by farmers. We hypothesize that one reason is the mismatch between what scientists propose as indicators for herd health monitoring and what farmers would like to use. Herd health monitoring is a key element in HHPM programs, as it permits a regular assessment of the functioning of the different components of the production process. Planned observations or measurements of these components are indispensable for this monitoring. In this study, a participatory approach was used to create an environment in which farmers could adapt the indicators proposed by scientists for monitoring the five main production diseases on dairy cattle farms. The adaptations of the indicators were characterized and the farmers' explanations for the changes were described. The study was conducted in France and Sweden, which differ in terms of their national organic regulations and existing advisory services. In both countries, twenty certified organic dairy farmers and their animal health management advisors participated in the study. All of the farmers adapted the initial monitoring plan proposed by scientists to the specific production and animal health situation on their farm. This resulted in forty unique, farm-specific combinations of indicators for herd health monitoring. All but three farmers intended to monitor five health topics simultaneously using the constructed indicators. The qualitative analysis of the explanations given by farmers for their choices enabled an understanding of farmers' reasons for selecting and adapting

  11. Sequential Optimal Monitoring Network Design using Iterative Kriging for Identification of Unknown Groundwater Pollution Sources Location

    Science.gov (United States)

    Prakash, O.; Datta, B.

    2011-12-01

    Identification of unknown groundwater pollution source characteristics, in terms of location, magnitude, and activity duration, is important for designing an effective pollution remediation strategy. Precise source characterization is also very important for ascertaining liability and recovering the cost of remediation from parties responsible for the pollution. Due to the uncertainties in accurately predicting the aquifer response to source flux injection, the sparsity of concentration observation data generally encountered in the field, and the non-uniqueness of the aquifer response to the imposed hydraulic and chemical stresses, groundwater pollution source characterization remains a challenging task. A scientifically designed pollutant concentration monitoring network is therefore imperative for accurate source characterization, since the efficiency of source location identification is largely determined by the locations of the monitoring wells where pollutant concentration is observed. The proposed method combines spatial interpolation of concentration measurements with Simulated Annealing as the optimization algorithm to find the optimum locations for monitoring wells. Initially, the observed concentration data at a few sparsely and arbitrarily distributed wells are used to interpolate concentrations over the aquifer study area. The concentration information is passed to the optimization algorithm (decision model) as a concentration gradient, which in turn finds the optimum locations for implementing the next sequence of monitoring wells. Concentration measurements from these designed monitoring wells and the already implemented network are then used iteratively as feedback for identifying potential groundwater pollution source locations. The potential applicability of the developed methodology is demonstrated for an illustrative study area.

  12. Optimization approach within an interventional radiology department; Demarche d'optimisation au sein d'un service de radiologie interventionnelle

    Energy Technology Data Exchange (ETDEWEB)

    Mozziconacci, J.G.; Brot, A.M. [Centre Hospitalier de Bourges, PCR, 18 (France); Jarrige, V. [Centre Hospitalier de Bourges, PSRPM, 18 (France)

    2009-07-01

    The authors present an approach aimed at optimizing working conditions and radioprotection for the different actors in interventional radiology. This approach comprises monitoring of personnel dosimetry, a workstation analysis with risk assessment, and consideration of patient dosimetry. For each of these aspects, the authors discuss procedures and available devices (dosimeters and other detection or dose measurement equipment)

  13. A stochastic method for optimal location of groundwater monitoring sites at aquifer scale

    Science.gov (United States)

    Barca, E.; Passarella, G.

    2009-04-01

    With the growth of public environmental awareness and improvements in national and EU legislation regarding the environment, monitoring has assumed great importance within all managerial activities related to territories. In particular, a number of public environmental agencies have recently invested great resources in planning and operating improvements to existing monitoring networks within their regions. In this framework, and in the light of the Water Framework Directive, optimal monitoring of the qualitative and quantitative state of groundwater becomes a priority, particularly when severe economic constraints are imposed and the territory to be monitored is quite wide. There are many reasons for optimally extending a monitoring network. A modest coverage of the monitored area often makes it impossible to provide the manager with sufficient knowledge for decision-making processes. In general, monitoring networks are characterized by a scarce number of existing wells, irregularly spread over the considered area. This is a typical optimization problem, and it may be solved by seeking, among existing but unused wells, all and only those able to make the monitoring network coverage the most uniform possible. Using existing wells as new monitoring sites allows one to drastically reduce the needed budget. In this paper, a four-step method based on simulated annealing has been implemented with the aim of identifying scarcely monitored zones within the groundwater system boundaries. The steps are the following: I. Define the aquifer boundaries, the number and location of the existing monitoring sites, and the number and location of candidate new monitoring sites. Any constraints on the network size and on wells' locations and characteristics also need to be identified at this step; II.
Carry out stochastic simulations producing a large number of possible realizations of the improved monitoring network and choose the transient
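    A minimal sketch of selecting new monitoring sites by simulated annealing. The coverage criterion, candidate grid, and cooling schedule below are invented stand-ins for the paper's four-step procedure:

```python
import math
import random

def coverage(sites):
    """Uniformity proxy: the minimum pairwise distance between sites.
    Larger values mean the wells are spread more evenly."""
    return min(math.dist(a, b)
               for i, a in enumerate(sites) for b in sites[i + 1:])

def anneal(candidates, existing, k, steps=2000, t0=1.0, cooling=0.995, seed=0):
    """Choose k candidate wells (added to the existing ones) that maximize
    the coverage score, via simulated annealing."""
    rng = random.Random(seed)
    chosen = rng.sample(candidates, k)
    score = coverage(existing + chosen)
    best, best_score, t = chosen[:], score, t0
    for _ in range(steps):
        trial = chosen[:]
        # neighbor move: swap one selected well for an unused candidate
        trial[rng.randrange(k)] = rng.choice(
            [c for c in candidates if c not in trial])
        s = coverage(existing + trial)
        # accept improvements always; accept worse moves with a probability
        # that shrinks as the temperature decreases
        if s >= score or rng.random() < math.exp((s - score) / t):
            chosen, score = trial, s
            if s > best_score:
                best, best_score = trial[:], s
        t *= cooling
    return best, best_score

existing = [(0.0, 0.0), (10.0, 10.0)]  # wells already in the network
candidates = [(float(x), float(y))
              for x in range(11) for y in range(11)]
candidates = [c for c in candidates if c not in existing]
wells, score = anneal(candidates, existing, k=3)
print(wells, round(score, 2))
```

    Maximizing the minimum pairwise distance is only one possible uniformity criterion; kriging variance, as used elsewhere in this record set, is a common alternative.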

  14. Optimal placement of dampers and actuators based on stochastic approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A general method is developed for the optimal application of dampers and actuators by installing them at optimal locations on seismic-resistant structures. The study includes development of a statistical criterion, formulation of a general optimization problem, and establishment of a solution procedure. Numerical time-history analysis of the seismic response of controlled structures is used to verify the proposed method and to demonstrate the effectiveness of seismic response control with optimal device locations. This study shows that the proposed method is simple and general, and that optimally placed dampers and actuators are very efficient for seismic response reduction.

  15. A New Approach to Monitoring Coastal Marshes for Persistent Flooding

    Science.gov (United States)

    Kalcic, M. T.; Underwood, Lauren W.; Fletcher, Rose

    2012-01-01

    compute the NDWI indices and also the Normalized Difference Soil Index (NDSI). Coastwide Reference Monitoring System (CRMS) water levels from various hydrologic monitoring stations and aerial photography were used to optimize thresholds for MODIS-derived time series of NDWI and to validate the resulting flood maps. In most of the profiles produced for post-hurricane assessment, the increase in the NDWI index (from storm surge) is accompanied by a decrease in the vegetation index (NDVI) and then a period of declining water. The NDSI index represents non-green or dead vegetation and increases after the hurricane's destruction of the marsh vegetation. The behavior of these indices over time indicates which areas remain flooded, which areas recover to their former levels of vegetative vigor, and which areas are stressed or in transition. Tracking these indices over time shows the recovery rate of vegetation and its relationship to inundation persistence. The results from this study demonstrated that identification of persistent marsh flooding, using the tools developed in this study, achieved an accuracy of approximately 70-80 percent when compared to the actual days flooded at the CRMS stations.
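    The indices involved are simple band ratios. A sketch using common definitions: the reflectance values are invented, the NDWI form shown is McFeeters' green/NIR version, and the study's exact band choices and thresholds may differ:

```python
def normalized_difference(a, b):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    return (a - b) / (a + b)

# Hypothetical surface reflectances for one marsh pixel
green, red, nir = 0.08, 0.06, 0.30

ndvi = normalized_difference(nir, red)    # vegetation vigor
ndwi = normalized_difference(green, nir)  # surface water signal
print(round(ndvi, 2))   # 0.67 -> vigorous vegetation
print(round(ndwi, 2))   # -0.58 -> well below a water threshold near 0
# After a storm surge, NDWI rises and NDVI drops; pixels whose NDWI stays
# above the calibrated threshold are flagged as persistently flooded.
```

    The thresholds themselves are what the CRMS water-level records are used to calibrate.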

  16. Optimal Control Approaches to the Aggregate Production Planning Problem

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2015-12-01

    Full Text Available In the area of production planning and control, the aggregate production planning (APP) problem represents a great challenge for decision makers in production-inventory systems. The tradeoff between inventory and capacity is known as the APP problem. To address it, static and dynamic models have been proposed, which in general have several shortcomings. It is the premise of this paper that the main drawback of these proposals is that they do not take into account the dynamic nature of the APP. For this reason, we propose the use of an Optimal Control (OC) formulation via the energy-based and Hamiltonian-present-value approaches. The main contribution of this paper is a mathematical model which integrates a second-order dynamical system coupled with a first-order system, incorporating production rate, inventory level, and capacity, together with the associated workforce cost, in the same formulation. A novel result is that the Hamiltonian-present-value OC formulation reduces the inventory level compared with the pure energy-based approach for APP. A set of simulations is provided which verifies the theoretical contribution of this work.

  17. Optimizing Concurrent M3-Transactions: A Fuzzy Constraint Satisfaction Approach

    Directory of Open Access Journals (Sweden)

    Peng LI

    2004-10-01

    Full Text Available Due to their high connectivity and great convenience, many E-commerce application systems have a high transaction volume. Consequently, the system state changes rapidly, and customers are likely to issue transactions based on out-of-date state information. Thus, the potential for transaction abortion increases greatly. To address this problem, we proposed an M3-transaction model. An M3-transaction is a generalized transaction in which users can state their preferences in a request by specifying multiple criteria and optional data resources simultaneously within one transaction. In this paper, we introduce transaction grouping and group evaluation techniques, in which a group of M3-transactions arriving at the system within a short duration is evaluated together. The system makes optimal decisions in allocating data to transactions to achieve better customer satisfaction and a lower transaction failure rate. We apply the fuzzy constraint satisfaction approach for decision-making. We also conduct experimental studies to evaluate the performance of our approach. The results show that the M3-transaction with group evaluation is more resilient to failure and yields much better performance than the traditional transaction model.

  18. Time-Saving Approach for Optimal Mining of Association Rules

    Directory of Open Access Journals (Sweden)

    Mouhir Mohammed

    2016-10-01

    Full Text Available Data mining is the process of analyzing data so as to extract useful information to be exploited by users. Association rule mining is a data mining technique used to detect correlations and to reveal relationships among individual data items in huge databases. These rules usually take the form "if X then Y", where X and Y are independent sets of attributes. Association rules have become a popular technique in several vital fields of activity, such as insurance, medicine, banks, and supermarkets. Association rules are generated in huge numbers by algorithms known as association rule mining algorithms. Because generating huge quantities of association rules can be time- and effort-consuming, an efficient and scalable approach is needed to mine only the relevant and significant association rules. This paper proposes an innovative approach which mines the optimal rules from a large set of association rules in a distributed processing way, improving efficiency and decreasing running time.
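    The support/confidence filtering that such mining algorithms apply can be sketched as follows (the transactions and thresholds are invented for illustration):

```python
transactions = [
    {'bread', 'milk'},
    {'bread', 'butter'},
    {'bread', 'milk', 'butter'},
    {'milk'},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """Conditional frequency of rhs given lhs: supp(lhs | rhs) / supp(lhs)."""
    return support(lhs | rhs) / support(lhs)

# Rule "if bread then milk": keep it only if it clears both thresholds
min_support, min_confidence = 0.4, 0.6
s = support({'bread', 'milk'})       # 0.5
c = confidence({'bread'}, {'milk'})  # 2/3
print(s >= min_support and c >= min_confidence)  # True -> rule is retained
```

    Optimal-rule mining approaches like the one proposed here go further, pruning the retained set down to the rules that are most significant for the user.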

  20. Landslide monitoring by Terrestrial SAR Interferometry: critical analysis of different data processing approaches

    Science.gov (United States)

    Brunetti, Alessandro; Crosetto, Michele; Mazzanti, Paolo; Monserrat, Oriol

    2015-04-01

    precluded. The second case study is a road embankment affected by an instability problem leading to continuous and almost linear displacement rates of up to 3-4 mm/day. In this case too, the collected TInSAR dataset was subsampled in order to simulate discontinuous monitoring at a sampling rate of 1 image/day over a period of about 1 month. The displacements of 7 corner reflectors (characterized by a very high peak-to-background amplitude) were investigated by a non-interferometric approach based on amplitude variation tracking at a sub-pixel scale. The amplitude-based results were compared with those from the interferometric approach applied to the same subsampled data. Although the accuracy of the amplitude-based method depends on the dimensions of the SAR pixel (which in turn depend on the sensor-target distance) and is much lower than that of the phase-based method, an overall good agreement between the two approaches was achieved. Specifically, the errors for 6 of the 7 monitoring points were within the expected site-specific accuracy of the amplitude-based method (about 1.5 cm). The experience gained from this critical analysis of different TInSAR data processing approaches allows a quantitative estimate of the expected accuracy and reliability of the technique when designing future monitoring projects, representing a useful tool for the optimization of funds and the identification of the best monitoring solution.

  1. A framework for quantifying and optimizing the value of seismic monitoring of infrastructure

    Science.gov (United States)

    Omenzetter, Piotr

    2017-04-01

    This paper outlines a framework for quantifying and optimizing the value of information from structural health monitoring (SHM) technology deployed on large infrastructure, which may sustain damage in a series of earthquakes (the main shock and the aftershocks). The evolution of the damage state of the infrastructure, without or with SHM, is presented as a time-dependent, stochastic, discrete-state, observable and controllable nonlinear dynamical system. Pre-posterior Bayesian analysis and a decision tree are used for quantifying and optimizing the value of SHM information. An optimality problem is then formulated: how to decide on the adoption of SHM, and how to optimally manage the usage and operations of the possibly damaged infrastructure and its repair schedule using the information from SHM. The objective function to minimize is the expected total cost or risk.

  2. Monitoring Atmospheric CO2 From Space: Challenge & Approach

    Science.gov (United States)

    Lin, Bing; Harrison, F. Wallace; Nehrir, Amin; Browell, Edward; Dobler, Jeremy; Campbell, Joel; Meadows, Byron; Obland, Michael; Kooi, Susan; Fan, Tai-Fang; Ismail, Syed

    2015-01-01

    Atmospheric CO2 is the key radiative forcing for the Earth's climate and may have contributed a major part of the Earth's warming during the past 150 years. Advanced knowledge of CO2 distributions and changes can lead to considerable model improvements in predictions of the Earth's future climate. Large uncertainties in these predictions have been found for decades, owing to limited CO2 observations. To obtain precise measurements of atmospheric CO2, certain challenges have to be overcome. For example, global annual means of CO2 are rather stable but have a very small increasing trend that is significant for multi-decadal long-term climate. At short time scales (a second to a few hours), regional and subcontinental gradients in the CO2 concentration are very small, only on the order of a few parts per million (ppm) compared to the mean atmospheric CO2 concentration of about 400 ppm, which requires atmospheric CO2 space monitoring systems with extremely high accuracy and precision (about 0.5 ppm, or 0.125%) at spatiotemporal scales of around 75 km and 10 s. It also requires decadal-scale system stability. Furthermore, rapid changes in high-latitude environments such as melting ice, snow and frozen soil; persistent thin cirrus clouds in the Amazon and other tropical areas; and harsh weather conditions over the Southern Ocean all increase the difficulty of satellite atmospheric CO2 observations. Space lidar approaches using the Integrated Path Differential Absorption (IPDA) technique are considered capable of obtaining precise CO2 measurements and have thus been proposed by various studies, including the 2007 Decadal Survey (DS) of the U.S. National Research Council. This study considers using Intensity-Modulated Continuous-Wave (IM-CW) lidar to monitor global atmospheric CO2 distribution and variability from space. Development and demonstration of space lidar for atmospheric CO2 measurements have been carried out through a joint effort of NASA Langley Research Center and

  3. Stennis Space Center's approach to liquid rocket engine health monitoring using exhaust plume diagnostics

    Science.gov (United States)

    Gardner, D. G.; Tejwani, G. D.; Bircher, F. E.; Loboda, J. A.; Van Dyke, D. B.; Chenevert, D. J.

    1991-01-01

    Details are presented of the approach used in a comprehensive program to utilize exhaust plume diagnostics for rocket engine health-and-condition monitoring and assessing SSME component wear and degradation. This approach incorporates both spectral and video monitoring of the exhaust plume. Video monitoring provides qualitative data for certain types of component wear while spectral monitoring allows both quantitative and qualitative information. Consideration is given to spectral identification of SSME materials and baseline plume emissions.

  4. Optimization-Based Approaches to Control of Probabilistic Boolean Networks

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2017-02-01

    Full Text Available Control of gene regulatory networks is one of the fundamental topics in systems biology. In the last decade, control theory for Boolean networks (BNs), which are well known as a model of gene regulatory networks, has been widely studied. In this review paper, our previously proposed methods for optimal control of probabilistic Boolean networks (PBNs) are introduced. First, the outline of PBNs is explained. Next, an optimal control method using polynomial optimization is explained, in which the finite-time optimal control problem is reduced to a polynomial optimization problem. Furthermore, another finite-time optimal control problem, which can be reduced to an integer programming problem, is also explained.
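A minimal sketch of the finite-time optimal control problem described above, on a hypothetical one-gene PBN: the expected cost of each fixed control sequence is computed by propagating the state distribution, and the best sequence is found by brute force, which stands in for the paper's polynomial- and integer-programming reductions. All transition probabilities and cost values are invented.

```python
from itertools import product

# Toy one-gene probabilistic Boolean network with a binary control input.
# P1[(x, u)] is the probability the gene is ON at the next step; the numbers
# are invented for illustration.
P1 = {(0, 0): 0.2, (0, 1): 0.7, (1, 0): 0.6, (1, 1): 0.9}

def expected_cost(x0, controls, stage_cost, final_cost):
    """Expected total cost of a fixed control sequence, obtained by
    propagating the probability distribution over the Boolean state."""
    dist = {x0: 1.0}
    total = 0.0
    for u in controls:
        total += sum(p * stage_cost(x, u) for x, p in dist.items())
        nxt = {0: 0.0, 1: 0.0}
        for x, p in dist.items():
            nxt[1] += p * P1[(x, u)]
            nxt[0] += p * (1.0 - P1[(x, u)])
        dist = nxt
    total += sum(p * final_cost(x) for x, p in dist.items())
    return total

# Finite-time problem: drive the gene toward ON while penalizing control use.
stage = lambda x, u: 0.1 * u
final = lambda x: 0.0 if x == 1 else 1.0
best = min(product([0, 1], repeat=3),
           key=lambda seq: expected_cost(0, seq, stage, final))
print(best)
```

Enumeration is exponential in the horizon, which is exactly why the paper's reductions to polynomial optimization and integer programming matter for realistic networks.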

  5. Monitoring Teams by Overhearing: A Multi-Agent Plan-Recognition Approach

    CERN Document Server

    Kaminka, G A; Tambe, M; 10.1613/jair.970

    2011-01-01

    Recent years have seen an increasing need for on-line monitoring of teams of cooperating agents, e.g., for visualization or performance tracking. However, in monitoring deployed teams, we often cannot rely on the agents to always communicate their state to the monitoring system. This paper presents a non-intrusive approach to monitoring by 'overhearing', where the monitored team's state is inferred (via plan recognition) from team members' routine communications, exchanged as part of their coordinated task execution and observed (overheard) by the monitoring system. Key challenges in this approach include the demanding run-time requirements of monitoring, the scarceness of observations (increasing monitoring uncertainty), and the need to scale up monitoring to address potentially large teams. To address these, we present a set of complementary novel techniques, exploiting knowledge of the social structures and procedures in the monitored team: (i) an efficient probabilistic plan-recognition algorithm, well...

  6. A Multiobjective Approach for the Heuristic Optimization of Compactness and Homogeneity in the Optimal Zoning

    Directory of Open Access Journals (Sweden)

    B. Bernábe-Loranca

    2012-06-01

    Full Text Available This paper presents a multiobjective methodology for optimal zoning design (OZ), based on the grouping of geographic data with characteristics of territorial aggregation. The two objectives considered are the minimization of the geometric compactness of the geographical location of the data and the homogeneity of any of the descriptive variables. Since this problem is NP-hard [1], our proposal provides an approximate solution taking into account properties of partitioning algorithms and design restrictions for territorial space. Approximate solutions are generated through the set of optimum values (Maxima) and the corresponding minimals (dual Minima) [2] of the bi-objective function, using Variable Neighborhood Search (VNS) [3] and the Pareto order defined over this set of values. The results obtained by our proposed approach constitute good solutions and are generated in a reasonably low computational time.

  7. Topology Optimization using a Topology Description Function Approach

    NARCIS (Netherlands)

    de Ruiter, M.J.

    2005-01-01

    During the last two decades, computational structural optimization methods have emerged, as computational power increased tremendously. Designers now have topological optimization routines at their disposal. These routines are able to generate the entire geometry of structures, provided only with in

  8. Assay optimization: a statistical design of experiments approach.

    Science.gov (United States)

    Altekar, Maneesha; Homon, Carol A; Kashem, Mohammed A; Mason, Steven W; Nelson, Richard M; Patnaude, Lori A; Yingling, Jeffrey; Taylor, Paul B

    2007-03-01

    With the transition from manual to robotic HTS in the last several years, assay optimization has become a significant bottleneck. Recent advances in robotic liquid handling have made it feasible to reduce assay optimization timelines with the application of statistically designed experiments. When implemented, they can efficiently optimize assays by rapidly identifying significant factors, complex interactions, and nonlinear responses. This article focuses on the use of statistically designed experiments in assay optimization.
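A minimal sketch of the statistically-designed-experiment idea: a 2^3 full-factorial screen whose main effects identify the significant factors in a handful of runs. The factor names and the response model are hypothetical, purely to show the mechanics of the analysis.

```python
from itertools import product

# A 2^3 full-factorial screen for three hypothetical assay factors, coded at
# -1/+1 levels. The response is generated from an assumed model purely to
# illustrate the analysis; in practice each run is a robot-dispensed plate.
def signal(enzyme, time, dmso):
    return 100 + 15 * enzyme + 6 * time - 4 * dmso + 5 * enzyme * time

runs = [((e, t, d), signal(e, t, d)) for e, t, d in product([-1, 1], repeat=3)]

def main_effect(runs, i):
    """Average response at the +1 level of factor i minus average at -1."""
    hi = [y for x, y in runs if x[i] == 1]
    lo = [y for x, y in runs if x[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(runs, i) for i in range(3)]
print(effects)   # [30.0, 12.0, -8.0]: twice each assumed main-effect coefficient
```

Note how the enzyme-time interaction term averages out of every main effect; resolving interactions and nonlinear responses is what the richer designs mentioned in the abstract are for.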

  9. Echelon approach to areas of concern in synoptic regional monitoring

    Science.gov (United States)

    Myers, Wayne; Patil, Ganapati P.; Joly, Kyle

    1997-01-01

    Echelons provide an objective approach to prospecting for areas of potential concern in synoptic regional monitoring of a surface variable. Echelons can be regarded informally as stacked hill forms. The strategy is to identify regions of the surface which are elevated relative to surroundings (Relative ELEVATIONS or RELEVATIONS). These are areas which would continue to expand as islands with receding (virtual) floodwaters. Levels where islands would merge are critical elevations which delimit echelons in the vertical dimension. Families of echelons consist of surface sectors constituting separate islands for deeper waters that merge as water level declines. Pits which would hold water are disregarded in such a progression, but a complementary analysis of pits is obtained using the surface as a virtual mould to cast a counter-surface (bathymetric analysis). An echelon tree is a family tree of echelons with peaks as terminals and the lowest level as root. An echelon tree thus provides a dendrogram representation of surface topology which enables graph theoretic analysis and comparison of surface structures. Echelon top view maps show echelon cover sectors on the base plane. An echelon table summarizes characteristics of echelons as instances or cases of hill form surface structure. Determination of echelons requires only ordinal strength for the surface variable, and is thus appropriate for environmental indices as well as measurements. Since echelons are inherent in a surface rather than perceptual, they provide a basis for computer-intelligent understanding of surfaces. Echelons are given for broad-scale mammalian species richness in Pennsylvania.
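The receding-floodwater construction described above can be sketched with a union-find pass over cells sorted by elevation: islands appearing at peaks, and the levels at which they merge, give the echelon-tree structure. The 1-D profile below is illustrative only (the paper works on 2-D surfaces, where neighbors are grid cells rather than i±1).

```python
# A minimal 1-D sketch of the echelon idea: process cells from highest to
# lowest elevation ("receding floodwaters"); islands appear at peaks and
# merge as the level drops. Each merge level is where two echelon families
# join. The elevation profile is invented for illustration.
elev = [1, 4, 2, 6, 3, 5, 1]

def echelon_merges(elev):
    order = sorted(range(len(elev)), key=lambda i: -elev[i])
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    peaks, merges = [], []
    for i in order:
        parent[i] = i
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        if not roots:
            peaks.append(i)                 # a new island: a peak echelon
        elif len(roots) == 2:
            merges.append(elev[i])          # saddle: two families join here
        for r in roots:
            parent[r] = i                   # flood the neighbours into i
    return peaks, merges

peaks, merges = echelon_merges(elev)
print(peaks, merges)   # peaks at indices 3, 5, 1; merges at levels 3 and 2
```

Recording which roots merge at each saddle (instead of just the level) would yield the full echelon tree, with peaks as terminals and the lowest level as root, as the abstract describes.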

  10. The future of monitoring in clinical research - a holistic approach: linking risk-based monitoring with quality management principles.

    Science.gov (United States)

    Ansmann, Eva B; Hecht, Arthur; Henn, Doris K; Leptien, Sabine; Stelzer, Hans Günther

    2013-01-01

    For several years now, risk-based monitoring has been the new "magic bullet" for improvement in clinical research. Many authors in clinical research, ranging from industry and academia to authorities, are keen on demonstrating better monitoring efficiency by reducing monitoring visits, monitoring time on site, monitoring costs and so on, always arguing from risk-based monitoring principles. Mostly forgotten is the fact that the use of risk-based monitoring is only adequate if all mandatory prerequisites at the site, for the monitor and for the sponsor are fulfilled. Based on the relevant chapter in ICH GCP (International Conference on Harmonisation of technical requirements for registration of pharmaceuticals for human use - Good Clinical Practice), this publication takes a holistic approach by identifying and describing the requirements for future monitoring and for the use of risk-based monitoring. As the authors are operational managers as well as QA (Quality Assurance) experts, both aspects are represented, to arrive at efficient and high-quality ways of future monitoring according to ICH GCP.

  11. Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space

    Science.gov (United States)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2015-09-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long-term solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the collective needs of policy makers, scientific communities and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture becomes a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures through the comparison and evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECVs (essential climate variables) is explored and described in detail, with dialogue on appropriate rule-based valuations. These optimization tools suggest global collaboration advantages and elicit responses from the audience and the climate science community. This paper focuses on recent research exploring the joint requirement implications of the high-profile NPOESS architecture and extends the research and tools to optimization for a climate-centric case study, reflecting work from the SPIE RS conferences of 2013 and 2014, abridged for simplification [30, 32]. First, the heavily scrutinized NPOESS architecture inspired the recent research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Now, years later, there is a complete reversal: should agencies consider disaggregation as the answer? We will discuss what some academic research suggests.
Second, using the GCOS requirements of earth climate observations via ECVs (essential climate variables), many collected from space-based sensors; and accepting their

  12. A systematic approach: optimization of healthcare operations with knowledge management.

    Science.gov (United States)

    Wickramasinghe, Nilmini; Bali, Rajeev K; Gibbons, M Chris; Choi, J H James; Schaffer, Jonathan L

    2009-01-01

    Effective decision making is vital in all healthcare activities. While this decision making is typically complex and unstructured, it requires the decision maker to gather multispectral data and information in order to make an effective choice when faced with numerous options. Unstructured decision making in dynamic and complex environments is challenging, and in almost every situation the decision maker is faced with information inferiority. The need for germane knowledge, pertinent information and relevant data is critical, and hence the value of harnessing knowledge and embracing the tools, techniques, technologies and tactics of knowledge management is essential to ensuring efficiency and efficacy in the decision-making process. The systematic approach and application of knowledge management (KM) principles and tools can provide the necessary foundation for improving decision-making processes in healthcare. A combination of Boyd's OODA loop (Observe, Orient, Decide, Act) and the Intelligence Continuum provides an integrated, systematic and dynamic model for ensuring that the healthcare decision maker is always provided with the appropriate and necessary knowledge elements, helping to ensure that the outcomes of healthcare decision-making processes are optimized for maximal patient benefit. The example of orthopaedic operating room processes illustrates the application of the integrated model to support effective decision making in the clinical environment.

  13. New Approaches to HSCT Multidisciplinary Design and Optimization

    Science.gov (United States)

    Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh

    1999-01-01

    New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem: HSCT aeroelastic wing design. Tackling this problem required the integration of resources and collaboration from three Georgia Tech laboratories (ASDL, SDL and PPRL), along with close coordination and participation from industry. Its success can also be attributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was running in parallel and provided additional resources for working the very complex, multidisciplinary problem, along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger, industrial-sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.

  14. A multiscale optimization approach to detect exudates in the macula.

    Science.gov (United States)

    Agurto, Carla; Murray, Victor; Yu, Honggang; Wigdahl, Jeffrey; Pattichis, Marios; Nemeth, Sheila; Barriga, E Simon; Soliz, Peter

    2014-07-01

    Pathologies that occur on or near the fovea, such as clinically significant macular edema (CSME), represent high risk for vision loss. The presence of exudates, lipid residues of serous leakage from damaged capillaries, has been associated with CSME, in particular if they are located one optic disc-diameter away from the fovea. In this paper, we present an automatic system to detect exudates in the macula. Our approach uses optimal thresholding of instantaneous amplitude (IA) components that are extracted from multiple frequency scales to generate candidate exudate regions. For each candidate region, we extract color, shape, and texture features that are used for classification. Classification is performed using partial least squares (PLS). We tested the performance of the system on two different databases of 652 and 400 images. The system achieved an area under the receiver operator characteristic curve (AUC) of 0.96 for the combination of both databases and an AUC of 0.97 for each of them when they were evaluated independently.

  15. Optimization of piezoelectric energy harvester for wireless smart sensors in railway health monitoring

    Science.gov (United States)

    Li, Jingcheng; Jang, Shinae; Tang, Jiong

    2013-04-01

    Wireless sensor networks are one of the prospective methods for railway monitoring due to their long-term operation and low maintenance requirements. How to supply power to the wireless sensor nodes has drawn much attention recently. In railway monitoring, converting the ambient vibration energy of the track, induced by passing trains, into electric energy is a potential way of powering the wireless sensor nodes. Most vibration-based energy harvesters are currently designed to operate at resonance. However, as railway vibration covers a wide frequency band, how to design an energy harvester that works over that band is critical. In this paper, the energy consumption of the wireless smart sensor platform Imote2 at different working states was investigated. Based on this energy consumption, the design of a bimorph cantilever piezoelectric energy harvester was optimized to generate maximum average power over a wide frequency band. Power and current outputs increased significantly after the optimal design. Finally, the rechargeable battery life for supplying the Imote2 for railway monitoring is predicted using the optimized piezoelectric energy harvesting system.
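The band-average design objective can be sketched with the classic lumped-parameter (Williams-Yates type) model of a base-excited harvester; this is not the paper's actual bimorph model, and all parameter values are invented. Instead of tuning to a single resonance, the natural frequency is chosen to maximize power averaged over the whole excitation band.

```python
import math

# Band-average power tuning sketch using the classic lumped-parameter model
# of a base-excited vibration harvester. All values are illustrative.
m, Y = 0.01, 1e-3            # proof mass [kg], base displacement amplitude [m]
zeta_e, zeta_m = 0.02, 0.02  # electrical and mechanical damping ratios

def power(omega, omega_n):
    """Average electrical power at excitation frequency omega [rad/s]."""
    r = omega / omega_n
    zeta_t = zeta_e + zeta_m
    return (m * zeta_e * Y**2 * r**3 * omega**3) / ((1 - r**2)**2 + (2 * zeta_t * r)**2)

# Railway-like wide-band excitation: 10..30 Hz, equally weighted.
band = [2 * math.pi * f for f in range(10, 31)]

def band_avg(fn_hz):
    wn = 2 * math.pi * fn_hz
    return sum(power(w, wn) for w in band) / len(band)

# Pick the natural frequency [Hz] that maximizes band-average power.
best_fn = max(range(5, 61), key=band_avg)
print(best_fn)
```

With lightly damped harvesters the optimum lands inside the excitation band, trading the height of the resonance peak against coverage of the rest of the band; the paper's optimization additionally matches the harvester output to the Imote2's measured energy budget.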

  16. Optimizing urine drug testing for monitoring medication compliance in pain management.

    Science.gov (United States)

    Melanson, Stacy E F; Ptolemy, Adam S; Wasan, Ajay D

    2013-12-01

    It can be challenging to successfully monitor medication compliance in pain management, and clinicians and laboratorians need to collaborate to optimize patient care and maximize operational efficiency. The test menu, assay cutoffs and testing algorithms utilized in urine drug testing panels should be periodically reviewed and tailored to the patient population to effectively assess compliance and avoid unnecessary testing and cost to the patient. Pain management and pathology collaborated on an important quality improvement initiative to optimize urine drug testing for monitoring medication compliance in pain management. We retrospectively reviewed 18 months of data from our pain management center, gathering data on test volumes, positivity rates and the frequency of false-positive results. We also reviewed the clinical utility of our testing algorithms, assay cutoffs and adulterant panel, and the cost of each component was calculated. The positivity rates for ethanol and 3,4-methylenedioxymethamphetamine were low, so this testing was removed from our panel. We also lowered the screening cutoff for cocaine to meet the clinical needs of the pain management center. In addition, we changed our testing algorithms for 6-acetylmorphine, benzodiazepines and methadone. For example, due to the high rate of false-negative results with our immunoassay-based benzodiazepine screen, we removed the screening portion of the algorithm and now perform benzodiazepine confirmation up front in all specimens by liquid chromatography-tandem mass spectrometry. Conducting an interdisciplinary quality improvement project allowed us to optimize our testing panel for monitoring medication compliance in pain management and to reduce cost.

  17. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy via static optimization. Constraints can be handled similarly, using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
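The parametric idea can be sketched as follows: represent an execution strategy by a single coefficient (here the decay rate of an exponential selling schedule, an invented parametrization) and choose that coefficient by static optimization of a Monte Carlo estimate of expected cost plus a CVaR penalty. The market model and every number below are toy assumptions, not the paper's.

```python
import math, random, statistics

S, T = 1000.0, 10             # shares to sell, number of trading periods
sigma, eta = 0.5, 0.002       # per-period volatility, temporary-impact coefficient
lam, alpha = 1.0, 0.95        # CVaR weight and confidence level

def schedule(kappa):
    """Static, exponentially decaying schedule parametrized by one number."""
    w = [math.exp(-kappa * t) for t in range(T)]
    s = sum(w)
    return [S * x / s for x in w]

def sim_cost(trades, rng):
    """Implementation shortfall on one path (toy model: arithmetic random
    walk plus linear temporary impact)."""
    price, cost = 100.0, 0.0
    for n in trades:
        price += rng.gauss(0.0, sigma)
        cost += n * (100.0 - (price - eta * n))
    return cost

def objective(kappa, n_paths=2000):
    rng = random.Random(7)    # common random numbers across kappa values
    costs = sorted(sim_cost(schedule(kappa), rng) for _ in range(n_paths))
    tail = costs[int(alpha * n_paths):]
    return statistics.fmean(costs) + lam * statistics.fmean(tail)

# Selling faster cuts price risk (CVaR) but raises impact cost, so the
# optimum typically lies strictly between the grid endpoints.
best_kappa = min((k / 10 for k in range(11)), key=objective)
print(best_kappa)
```

Reusing one seeded random stream per candidate (common random numbers) is a standard variance-reduction choice that makes the grid search comparisons far less noisy.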

  18. Artificial Neural Networks Applications: from Aircraft Design Optimization to Orbiting Spacecraft On-board Environment Monitoring

    Science.gov (United States)

    Jules, Kenol; Lin, Paul P.

    2002-01-01

    This paper reviews some of the recent applications of artificial neural networks taken from various works performed by the authors over the last four years at the NASA Glenn Research Center. It focuses mainly on two areas. The first is the application of artificial neural networks in the design and optimization of aircraft/engine propulsion systems to shorten the overall design cycle; out of that specific application, a generic design tool was developed that can be used for most design optimization processes. The second is the application of artificial neural networks in monitoring the microgravity quality on board the International Space Station, using on-board accelerometers for data acquisition. These two different applications are reviewed to show the broad applicability of artificial intelligence across disciplines. The intent of this paper is not to give in-depth details of the two applications, but to show the need to combine different artificial intelligence techniques or algorithms in order to design an optimized or versatile system.

  19. An Approach In Optimization Of Ad-Hoc Routing Algorithms

    Directory of Open Access Journals (Sweden)

    Sarvesh Kumar Sharma

    2012-06-01

    Full Text Available In this paper, different optimizations of ad-hoc routing algorithms are surveyed, and a new method using a training-based optimization algorithm for reducing the complexity of routing algorithms is suggested. A binary matrix is assigned to each node in the network and is updated after each data transfer using the protocols. The use of an optimization algorithm within the routing algorithm can reduce the complexity of routing to the least amount possible.

  20. A Regional Approach to Market Monitoring in the West

    Energy Technology Data Exchange (ETDEWEB)

    Barmack, Matthew; Kahn, Edward; Tierney, Susan; Goldman, Charles

    2006-10-01

    Market monitoring involves the systematic analysis of prices and behavior in wholesale power markets to determine when and whether potentially anti-competitive behavior is occurring. Regional Transmission Organizations (RTOs) typically have a market monitoring function. Because the West does not have active RTOs outside of California, it does not have the market monitoring that RTOs have. In addition, because the West outside of California does not have RTOs that perform centralized unit commitment and dispatch, the rich data that are typically available to market monitors in RTO markets are not available in the West outside of California. This paper examines the feasibility of market monitoring in the West outside of California given readily available data. We develop simple econometric models of wholesale power prices in the West that might be used for market monitoring. In addition, we examine whether production cost simulations that have been developed for long-run planning might be useful for market monitoring. We find that simple econometric models go a long ways towards explaining wholesale power prices in the West and might be used to identify potentially anomalous prices. In contrast, we find that the simulated prices from a specific set of production cost simulations exhibit characteristics that are sufficiently different from observed prices that we question their usefulness for explaining price formation in the West, and hence their usefulness as a market monitoring tool.
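A toy version of the econometric monitoring idea: regress wholesale prices on fundamentals (here gas price and load, all data entirely synthetic) and flag months whose residuals the fundamentals model cannot explain as candidates for closer scrutiny.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly data for a hypothetical western hub: wholesale price
# driven by gas price and load. All numbers are invented for illustration.
n = 60
gas = rng.uniform(4, 9, n)                            # $/MMBtu
load = rng.uniform(20, 35, n)                         # GW
price = 8 * gas + 1.5 * load + rng.normal(0, 3, n)    # $/MWh
price[30] += 25                                       # inject an anomalous month

# Fit price = b0 + b1*gas + b2*load by OLS; large standardized residuals mark
# months the fundamentals model cannot explain.
X = np.column_stack([np.ones(n), gas, load])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
resid = price - X @ beta
z = resid / resid.std(ddof=X.shape[1])
flagged = np.flatnonzero(np.abs(z) > 3)
print(flagged)
```

A real monitoring model would of course add transmission constraints, seasonality and hydro conditions; the point is only that even a spare fundamentals regression can surface anomalous prices.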

  1. An approach to monitor influenza vaccination uptake across Europe.

    NARCIS (Netherlands)

    Kroneman, M.; Paget, J.; Meuwissen, L.; Joseph, C.; Kennedy, H.

    2008-01-01

    Currently, the monitoring of influenza vaccination uptake is mainly a national issue. As influenza infection easily crosses international borders, it is in the interest of all countries to have a high vaccine uptake in people who may be vulnerable when influenza spreads. A Europe-wide monitoring

  2. An optimized Leave One Out approach to efficiently identify outliers

    Science.gov (United States)

    Biagi, L.; Caldera, S.; Perego, D.

    2012-04-01

    Least squares (LS) are a well-established and very popular statistical toolbox in geomatics. In particular, LS are routinely applied to adjust geodetic networks, in the cases both of classical surveys and of modern GNSS permanent networks, at both the local and the global spatial scale. The linearized functional model between the observables and a vector of unknown parameters is given. A vector of N observations and its a priori covariance are available. Typically, the observations vector can be decomposed into n subvectors, internally correlated but reciprocally uncorrelated. This happens, for example, when double differences are built from undifferenced observations and are processed to estimate the network coordinates of a GNSS session. Note that when all the observations are independent, n = N: this is, for example, the case of the adjustment of a levelling network. LS provide the estimates of the parameters, the observables, the residuals and the a posteriori variance. The testing of the initial hypotheses, the rejection of outliers and the estimation of accuracies and reliabilities can be performed at different levels of significance and power. However, LS are not robust: the a posteriori estimate of the variance can be biased by a single unmodelled outlier in the observations. In some cases, the unmodelled bias is spread over all the residuals and its identification is difficult. A possible solution to this problem is given by the so-called Leave One Out (LOO) approach. A particular subvector can be excluded from the adjustment, whose results are then used to check the residuals of the excluded subvector. Clearly, this check is more robust, because a bias in the subvector does not affect the adjustment results. The process can be iterated over all the subvectors. LOO is robust but can be very slow, since n adjustments are performed. An optimized LOO algorithm has been studied. The usual LS adjustment on all the observations is performed to obtain a 'batch' result. The
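For the independent-observation case (n = N), the optimized LOO idea rests on a standard least-squares identity: a single batch adjustment yields every leave-one-out residual through the hat matrix, with no need for n re-adjustments. A numpy sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy adjustment with independent observations (the n = N case, as in a
# levelling network): y = X b + e, with one gross error injected at index 7.
n = 20
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([5.0, 2.0]) + rng.normal(0, 0.1, n)
y[7] += 1.0                                  # simulated unmodelled outlier

# One batch adjustment gives the hat matrix; the identity
# e_loo_i = e_i / (1 - h_ii) then yields every leave-one-out residual.
H = X @ np.linalg.solve(X.T @ X, X.T)
h = np.diag(H)
e = y - H @ y

# Externally studentized residuals: each residual is scaled by the variance
# estimated *without* that observation, so the outlier cannot inflate the
# very noise estimate used to test it.
p = X.shape[1]
s2 = (e @ e) / (n - p)
s2_i = ((n - p) * s2 - e**2 / (1 - h)) / (n - p - 1)
t = e / np.sqrt(s2_i * (1 - h))

suspect = int(np.argmax(np.abs(t)))
print(suspect)
```

The abstract's correlated-subvector case generalizes this: whole blocks are left out at once, with a block version of the same update, which is precisely what makes the optimized algorithm fast.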

  3. Utility Theory for Evaluation of Optimal Process Condition of SAW: A Multi-Response Optimization Approach

    Science.gov (United States)

    Datta, Saurav; Biswas, Ajay; Bhaumik, Swapan; Majumdar, Gautam

    2011-01-01

    A multi-objective optimization problem has been solved in order to estimate an optimal process environment, consisting of the optimal parametric combination, to achieve the desired quality indicators (related to bead geometry) of a submerged arc weld of mild steel. The quality indicators selected in the study were bead height, penetration depth, bead width and percentage dilution. The Taguchi method followed by the utility concept has been adopted to evaluate the optimal process condition achieving the multiple objective requirements of the desired weld quality.
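A sketch of the utility concept for multi-response optimization: each response is mapped onto a common 0-9 preference scale, and a weighted composite utility ranks the parametric combinations. The trial data and weights below are invented, not the paper's, and all three responses are scored larger-the-better for simplicity (a smaller-the-better response would simply invert the scale).

```python
import math

# Each response is mapped onto a 0-9 preference scale between its worst
# (utility 0) and best (utility 9) observed values via u = B * log(x / x_min),
# a common form of the utility transform.
def utility(x, x_min, x_max):
    b = 9.0 / math.log(x_max / x_min)
    return b * math.log(x / x_min)

# trial -> (penetration depth [mm], bead-height score, dilution score);
# the trial labels mimic a Taguchi orthogonal-array layout.
trials = {
    "A1B1": (3.1, 7.2, 38.0),
    "A1B2": (3.8, 6.8, 42.0),
    "A2B1": (4.4, 6.1, 45.0),
    "A2B2": (4.0, 6.5, 40.0),
}
weights = (0.5, 0.3, 0.2)                  # relative importance of responses
ranges = [(min(v[i] for v in trials.values()),
           max(v[i] for v in trials.values())) for i in range(3)]

def overall(resp):
    """Weighted composite utility of one parametric combination."""
    return sum(w * utility(x, lo, hi)
               for w, x, (lo, hi) in zip(weights, resp, ranges))

best = max(trials, key=lambda k: overall(trials[k]))
print(best)
```

Collapsing several responses into one utility is what lets the standard single-response Taguchi machinery (S/N ratios, mean-effect plots) be reused unchanged for the multi-response problem.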

  4. Optimal Feature Extraction Using Greedy Approach for Random Image Components and Subspace Approach in Face Recognition

    Institute of Scientific and Technical Information of China (English)

    Mathu Soothana S.Kumar Retna Swami; Muneeswaran Karuppiah

    2013-01-01

    An innovative and uniform framework based on a combination of Gabor wavelets with principal component analysis (PCA) and multiple discriminant analysis (MDA) is presented in this paper. In this framework, features are extracted from the optimal random image components using a greedy approach. These feature vectors are then projected to subspaces for dimensionality reduction, which is used for solving linear problems. The design of Gabor filters, PCA and MDA are crucial processes used for facial feature extraction. The FERET, ORL and YALE face databases are used to generate the results. Experiments show that optimal random image component selection (ORICS) plus MDA outperforms ORICS and subspace projection approaches such as ORICS plus PCA. Our method achieves 96.25%, 99.44% and 100% recognition accuracy on the FERET, ORL and YALE databases for 30% training, respectively. This is considerably improved performance compared with other standard methodologies described in the literature.

  5. A New Approach for Parameter Optimization in Land Surface Model

    Institute of Scientific and Technical Information of China (English)

    LI Hongqi; GUO Weidong; SUN Guodong; ZHANG Yaocun; FU Congbin

    2011-01-01

    In this study, a new parameter optimization method was used to investigate the expansion of conditional nonlinear optimal perturbation (CNOP) in a land surface model (LSM), using long-term enhanced field observations at Tongyu station in Jilin Province, China, combined with a sophisticated LSM (the Common Land Model, CoLM). Tongyu station is a reference site of the international Coordinated Energy and Water Cycle Observations Project (CEOP) that has studied semiarid regions that have undergone desertification, salination and degradation since the late 1960s. In this study, three key land-surface parameters, namely soil color, the proportion of sand or clay in soil, and leaf-area index, were chosen as the parameters to be optimized. Our study comprised three experiments: the first performed a single-parameter optimization, while the second and third experiments performed triple- and six-parameter optimizations, respectively. Notable improvements in simulating sensible heat flux (SH), latent heat flux (LH), and soil temperature (TS) and moisture (MS) at shallow layers were achieved using the optimized parameters. The multiple-parameter optimization experiments performed better than the single-parameter experiment. All results demonstrate that the CNOP method can be used to optimize expanded parameters in an LSM. Moreover, clear mathematical meaning, a simple design structure and rapid computability give this method great potential for further application to parameter optimization in LSMs.

  6. A Polynomial Optimization Approach to Constant Rebalanced Portfolio Selection

    NARCIS (Netherlands)

    Takano, Y.; Sotirov, R.

    2010-01-01

    We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve the POPs of high degree, we develop a cutting-plane algorithm based on semide

  7. Optimal angle reduction - a behavioral approach to linear system approximation

    NARCIS (Netherlands)

    Roorda, Berend; Fuhrmann, P.A.

    2001-01-01

    We investigate the problem of optimal state reduction under minimization of the angle between system behaviors. The angle is defined in a worst-case sense, as the largest angle that can occur between a system trajectory and its optimal approximation in the reduced-order model. This problem is analyz

  8. A Convex Optimization Approach to pMRI Reconstruction

    CERN Document Server

    Zhang, Cishen

    2013-01-01

    In parallel magnetic resonance imaging (pMRI) reconstruction without estimation of coil sensitivity functions, one group of algorithms reconstructs the sensitivity-encoded images of the coils first, followed by magnitude-only image reconstruction (e.g., GRAPPA), while another group of algorithms jointly computes the image and sensitivity functions by regularized optimization, which is a non-convex problem with only locally optimal solutions. For the magnitude-only image reconstruction, this paper derives a reconstruction formulation that is linear in the magnitude image, together with an associated convex hull in the solution space of the formulated equation containing the magnitude of the image. As a result, the magnitude-only image reconstruction for pMRI is formulated as a two-step convex optimization problem, which has a globally optimal solution. An algorithm based on split-Bregman and nuclear norm regularized optimizations is proposed to implement the two-step convex optimization and its applications to phantom and in-vi...

  9. A New Approach for Optimal Sizing of Standalone Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Tamer Khatib

    2012-01-01

    This paper presents a new method for determining the optimal sizing of a standalone photovoltaic (PV) system, in terms of the optimal sizing of the PV array and battery storage. The energy flow of a standalone PV system is first analysed, and the MATLAB fitting tool is used to fit the resultant sizing curves in order to derive general formulas for the optimal sizing of the PV array and battery. In deriving these formulas, the data considered are based on five sites in Malaysia: Kuala Lumpur, Johor Bharu, Ipoh, Kuching, and Alor Setar. Based on the results of a designed example for a PV system installed in Kuala Lumpur, the proposed method gives satisfactory optimal sizing results.
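    The curve-fitting step described above can be sketched in Python. This is a hedged illustration only: the sizing numbers below are hypothetical placeholders rather than the paper's Malaysian site data, and numpy's `polyfit` stands in for the MATLAB fitting tool.

```python
import numpy as np

# Hypothetical sizing-curve points: for each candidate PV array size, the
# smallest battery capacity that meets the load (illustrative values only).
pv_size = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0])       # kWp per (kWh/day) of load
batt_size = np.array([3.2, 2.4, 1.9, 1.6, 1.45, 1.35])   # days of autonomy

# Fit a quadratic "sizing formula" to the curve, mirroring the paper's use
# of a fitting tool to turn sizing curves into a general formula.
formula = np.poly1d(np.polyfit(pv_size, batt_size, 2))
print(formula(1.5))  # battery size suggested for a mid-range array
```

    Once such a formula is in hand, sizing a new system reduces to evaluating the polynomial instead of re-running the full energy-flow simulation.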

  10. Optimism

    Science.gov (United States)

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  11. [Evaluation of image quality using the normalized-rank approach for primary class liquid-crystal display (LCD) monitors with different colors and resolution].

    Science.gov (United States)

    Kuroki, Hidefumi; Katayama, Reiji; Sakaguchi, Taro; Maeda, Takashi; Morishita, Junji; Hayabuchi, Naofumi

    2010-11-20

    The purposes of this study were to evaluate the image quality of five types of liquid-crystal display (LCD) monitors by utilizing the normalized-rank approach and to investigate the effect of LCD monitor specifications, such as display colors, luminance, and resolution, on the evaluators' ranking. The LCD monitors used in this study were 2, 3, and 5 mega-pixel monochrome LCD monitors, and 2 and 3 mega-pixel color LCD monitors (Eizo Nanao Corporation). All LCD monitors were calibrated to the grayscale standard display function (GSDF) with different maximum luminance (recommended luminance) settings. Also, four kinds of radiographs were used for the observer study based on the normalized-rank approach: three adult chest radiographs, three pediatric chest radiographs, three ankle joint radiographs, and four double-contrasted upper gastrointestinal radiographs. Ten radiological technologists participated in the observer study. Monochrome LCD monitors exhibited superior ranking, with statistically significant differences (p < 0.05), compared with color LCD monitors in all kinds of radiographs. The major difference between monochrome and color monitors was luminance. Therefore, it is considered that the luminance of LCD monitors affects observers' evaluations based on image quality. Moreover, in the case of radiographs that include high-frequency image components, the monitor resolution also affects the evaluation. In clinical practice, it is necessary to optimize the luminance and choose appropriate LCD monitors for diagnostic images.

  12. Optimized Protocol for Quantitative Multiple Reaction Monitoring-Based Proteomic Analysis of Formalin-Fixed, Paraffin-Embedded Tissues.

    Science.gov (United States)

    Kennedy, Jacob J; Whiteaker, Jeffrey R; Schoenherr, Regine M; Yan, Ping; Allison, Kimberly; Shipley, Melissa; Lerch, Melissa; Hoofnagle, Andrew N; Baird, Geoffrey Stuart; Paulovich, Amanda G

    2016-08-01

    Despite a clinical, economic, and regulatory imperative to develop companion diagnostics, precious few new biomarkers have been successfully translated into clinical use, due in part to inadequate protein assay technologies to support large-scale testing of hundreds of candidate biomarkers in formalin-fixed paraffin-embedded (FFPE) tissues. Although the feasibility of using targeted, multiple reaction monitoring mass spectrometry (MRM-MS) for quantitative analyses of FFPE tissues has been demonstrated, protocols have not been systematically optimized for robust quantification across a large number of analytes, nor has the performance of peptide immuno-MRM been evaluated. To address this gap, we used a test battery approach coupled to MRM-MS with the addition of stable isotope-labeled standard peptides (targeting 512 analytes) to quantitatively evaluate the performance of three extraction protocols in combination with three trypsin digestion protocols (i.e., nine processes). A process based on RapiGest buffer extraction and urea-based digestion was identified to enable similar quantitation results from FFPE and frozen tissues. Using the optimized protocols for MRM-based analysis of FFPE tissues, median precision was 11.4% (across 249 analytes). There was excellent correlation between measurements made on matched FFPE and frozen tissues, both for direct MRM analysis (R(2) = 0.94) and immuno-MRM (R(2) = 0.89). The optimized process enables highly reproducible, multiplex, standardizable, quantitative MRM in archival tissue specimens.

  13. Optimization of a Coastal Environmental Monitoring Network Based on the Kriging Method: A Case Study of Quanzhou Bay, China

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2016-01-01

    Environmental monitoring is fundamental in assessing environmental quality and in fulfilling protection and management measures under permit conditions. However, coastal environmental monitoring work faces many problems and challenges, including the fact that monitoring information cannot be linked up with evaluation, monitoring data cannot well reflect the current coastal environmental condition, and monitoring activities are limited by cost constraints. For these reasons, protection and management measures cannot be developed and implemented well by the policy makers who intend to solve this issue. In this paper, Quanzhou Bay in southeastern China was selected as a case study, and the Kriging method and a geographic information system were employed to evaluate and optimize the existing monitoring network in a semienclosed bay. This study used coastal environmental monitoring data from 15 sites (including COD, DIN, and PO4-P) to adequately analyze the water quality from 2009 to 2012 by applying the Trophic State Index. The monitoring network in Quanzhou Bay was evaluated and optimized, with the number of sites increased from 15 to 24 and the monitoring precision improved by 32.9%. The results demonstrated that the proposed advanced monitoring network optimization was appropriate for environmental monitoring in Quanzhou Bay. It might provide technical support for coastal management and pollutant reduction in similar areas.

  14. Revisiting support optimization at the Driskos tunnel using a quantitative risk approach

    Directory of Open Access Journals (Sweden)

    J. Connor Langford

    2016-04-01

    With the scale and cost of geotechnical engineering projects increasing rapidly over the past few decades, there is a clear need for the careful consideration of calculated risks in design. While risk is typically dealt with subjectively through the use of conservative design parameters, with the advent of reliability-based methods this no longer needs to be the case. Instead, a quantitative risk approach can be considered that incorporates uncertainty in ground conditions directly into the design process to determine the variable ground response and support loads. This allows for the optimization of support on the basis of both worker safety and economic risk. This paper presents the application of such an approach to review the design of the initial lining system along a section of the Driskos twin tunnels, part of the Egnatia Odos highway in northern Greece. Along this section of tunnel, weak rock masses were encountered as well as high in situ stress conditions, which led to excessive deformations and failure of the as-built temporary support. Monitoring data were used to validate the rock mass parameters selected in this area, and a risk approach was used to determine, in hindsight, the most appropriate support category with respect to the cost of installation and the expected cost of failure. Different construction sequences were also considered in the context of both convenience and risk cost.

  15. An adaptive multi-agent-based approach to smart grids control and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Marco [Florida Institute of Technology, Melbourne, FL (United States); Perez, Carlos; Granados, Adrian [Institute for Human and Machine Cognition, Ocala, FL (United States)

    2012-03-15

    In this paper, we describe a reinforcement learning-based approach to power management in smart grids. The scenarios we consider are smart grid settings where renewable power sources (e.g. Photovoltaic panels) have unpredictable variations in power output due, for example, to weather or cloud transient effects. Our approach builds on a multi-agent system (MAS)-based infrastructure for the monitoring and coordination of smart grid environments with renewable power sources and configurable energy storage devices (battery banks). Software agents are responsible for tracking and reporting power flow variations at different points in the grid, and to optimally coordinate the engagement of battery banks (i.e. charge/idle/discharge modes) to maintain energy requirements to end-users. Agents are able to share information and coordinate control actions through a parallel communications infrastructure, and are also capable of learning, from experience, how to improve their response strategies for different operational conditions. In this paper we describe our approach and address some of the challenges associated with the communications infrastructure for distributed coordination. We also present some preliminary results of our first simulations using the GridLAB-D simulation environment, created by the US Department of Energy (DoE) at Pacific Northwest National Laboratory (PNNL). (orig.)

  16. A Practical Approach to Governance and Optimization of Structured Data Elements.

    Science.gov (United States)

    Collins, Sarah A; Gesner, Emily; Morgan, Steven; Mar, Perry; Maviglia, Saverio; Colburn, Doreen; Tierney, Diana; Rocha, Roberto

    2015-01-01

    Definition and configuration of clinical content in an enterprise-wide electronic health record (EHR) implementation is highly complex. Sharing of data definitions across applications within an EHR implementation project may be constrained by practical limitations, including time, tools, and expertise. However, maintaining rigor in an approach to data governance is important for sustainability and consistency. With this understanding, we have defined a practical approach for governance of structured data elements to optimize data definitions given limited resources. This approach includes a 10-step process: 1) identification of clinical topics, 2) creation of draft reference models for clinical topics, 3) scoring of downstream data needs for clinical topics, 4) prioritization of clinical topics, 5) validation of reference models for clinical topics, 6) calculation of gap analyses of the EHR compared against the reference model, 7) communication of validated reference models across project members, 8) requested revisions to the EHR based on gap analysis, 9) evaluation of usage of reference models across the project, and 10) monitoring for new evidence requiring revisions to the reference model.

  17. A Fuzzy Simulation-Based Optimization Approach for Groundwater Remediation Design at Contaminated Aquifers

    Directory of Open Access Journals (Sweden)

    A. L. Yang

    2012-01-01

    A fuzzy simulation-based optimization approach (FSOA) is developed for identifying the optimal design of a benzene-contaminated groundwater remediation system under uncertainty. FSOA integrates remediation processes (i.e., biodegradation and pump-and-treat), fuzzy simulation, and a fuzzy-mean-value-based optimization technique into a general management framework. This approach offers the advantages of (1) considering an integrated remediation alternative, (2) handling simulation and optimization problems under uncertainty, and (3) providing a direct linkage between remediation strategies and remediation performance through proxy models. The results demonstrate that optimal remediation alternatives can be obtained to mitigate benzene concentration to satisfy environmental standards with a minimum system cost.

  18. Ozone Measurements Monitoring Using Data-Based Approach

    KAUST Repository

    Harrou, Fouzi

    2016-02-01

    The complexity of ozone (O3) formation mechanisms in the troposphere makes fast and accurate modeling of ozone very challenging. In the absence of a process model, principal component analysis (PCA) has been extensively used as a data-based monitoring technique for highly correlated process variables; however, conventional PCA-based detection indices often fail to detect small or moderate anomalies. In this work, we propose an innovative method for detecting small anomalies in highly correlated multivariate data. The developed method combines the multivariate exponentially weighted moving average (MEWMA) monitoring scheme with PCA modelling in order to enhance anomaly detection performance. Such a choice is mainly motivated by the greater ability of the MEWMA monitoring scheme to detect small changes in the process mean. The proposed PCA-based MEWMA monitoring scheme is successfully applied to ozone measurement data collected from the Upper Normandy region, France, via the network of air quality monitoring stations. The detection results of the proposed method are compared to those declared by the Air Normand air monitoring association.
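    A minimal sketch of the combined scheme: fit a PCA model on training data, then run an EWMA-smoothed T2 statistic on the retained scores. The two-sensor data, the shift size, and the tuning constants below are illustrative assumptions; the paper's actual control limits and tuning are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated "ozone-like" data: two station sensors driven by one shared factor.
f_train = rng.normal(size=(300, 1))
X_train = np.hstack([f_train, f_train]) + 0.3 * rng.normal(size=(300, 2))

f_test = rng.normal(size=(150, 1))
X_test = np.hstack([f_test, f_test]) + 0.3 * rng.normal(size=(150, 2))
X_test[100:] += 1.5          # small sustained mean shift: the anomaly to detect

def pca_mewma(X_train, X_test, n_pc=1, lam=0.2):
    """PCA model of the training data, then an EWMA-smoothed T2 on the scores."""
    mu, sd = X_train.mean(0), X_train.std(0)
    Z = (X_train - mu) / sd
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_pc].T                                  # retained PCA loadings
    S = np.atleast_2d(np.cov(Z @ P, rowvar=False))   # score covariance
    Sinv = np.linalg.inv(S * lam / (2 - lam))        # asymptotic EWMA covariance
    z, t2 = np.zeros(n_pc), []
    for x in X_test:
        z = lam * ((x - mu) / sd) @ P + (1 - lam) * z
        t2.append(float(z @ Sinv @ z))
    return np.array(t2)

t2 = pca_mewma(X_train, X_test)
print(t2[:100].mean(), t2[-30:].mean())   # in-control vs. after the shift
```

    The EWMA smoothing is what gives the chart its sensitivity to small sustained shifts: the statistic stays near its in-control level before the change and climbs steadily afterwards.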

  19. Optimizing computed tomography pulmonary angiography using right atrium bolus monitoring combined with spontaneous respiration

    Energy Technology Data Exchange (ETDEWEB)

    Min, Wang; Jian, Li; Rui, Zhai [Jining No. 1 People' s Hospital, Department of Computed Tomography, Jining City, ShanDong Province (China); Wen, Li [Jining No. 1 People' s Hospital, Department of Gastroenterology, Jining, ShanDong (China); Dai, Lun-Hou [Shandong Chest Hospital, Department of Radiology, Jinan, ShanDong (China)

    2015-09-15

    CT pulmonary angiography (CTPA) aims to provide pulmonary arterial opacification in the absence of significant pulmonary venous filling. This requires accurate timing of the imaging acquisition to ensure synchronization with the peak pulmonary artery contrast concentration. This study was designed to test the utility of right atrium (RA) monitoring in ensuring optimal timing of CTPA acquisition. Sixty patients referred for CTPA were divided into two groups. Group A (n = 30): CTPA was performed using bolus triggering from the pulmonary trunk, suspended respiration and 70 ml of contrast agent (CA). Group B (n = 30): CTPA image acquisition was triggered using RA monitoring with spontaneous respiration and 40 ml of CA. Image quality was compared. Subjective image quality, average CT values of pulmonary arteries and density difference between artery and vein pairs were significantly higher whereas CT values of pulmonary veins were significantly lower in group B (all P < 0.05). There was no significant difference between the groups in the proportion of subjects where sixth grade pulmonary arteries were opacified (P > 0.05). RA monitoring combined with spontaneous respiration to trigger image acquisition in CTPA produces optimal contrast enhancement in pulmonary arterial structures with minimal venous filling even with reduced doses of CA. (orig.)

  20. Flower pollination algorithm: A novel approach for multiobjective optimization

    Science.gov (United States)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.
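    The two pollination rules at the core of the FPA can be sketched in a single-objective form on a toy sphere function; this illustrates the algorithm's structure only, not the article's multiobjective extension, and all parameter values are illustrative.

```python
import numpy as np
from math import gamma

def fpa(f, dim=2, n=20, iters=300, p=0.8, lo=-5.0, hi=5.0, seed=0):
    """Basic flower pollination algorithm: global pollination moves flowers
    toward the current best via Levy flights; local pollination mixes two
    random flowers. Greedy replacement keeps only improvements."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, dim))
    F = np.array([f(x) for x in X])
    best = X[F.argmin()].copy()
    beta = 1.5                                    # Levy exponent
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    for _ in range(iters):
        for i in range(n):
            if rng.random() < p:                  # global pollination
                step = rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)
                x_new = X[i] + 0.1 * step * (best - X[i])
            else:                                 # local pollination
                j, k = rng.choice(n, size=2, replace=False)
                x_new = X[i] + rng.random() * (X[j] - X[k])
            x_new = np.clip(x_new, lo, hi)
            f_new = f(x_new)
            if f_new < F[i]:
                X[i], F[i] = x_new, f_new
        best = X[F.argmin()].copy()
    return best, float(F.min())

best_x, best_f = fpa(lambda x: float(np.sum(x ** 2)))   # sphere test function
print(best_x, best_f)
```

    The switch probability `p` trades off exploration (Lévy-flight jumps toward the best) against exploitation (local mixing), which is what the article tunes when extending the method to Pareto-front search.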

  1. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but are precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to the optimization of the data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.

  2. MVMO-based approach for optimal placement and tuning of supplementary damping controller

    NARCIS (Netherlands)

    Rueda Torres, J.L.; Gonzalez-Longatt, F.

    2015-01-01

    This paper introduces an approach based on the Swarm Variant of the Mean-Variance Mapping Optimization (MVMO-S) to solve the multi-scenario formulation of the optimal placement and coordinated tuning of power system supplementary damping controllers (POCDCs). The effectiveness of the approach is

  3. Sampling design optimization of a wireless sensor network for monitoring ecohydrological processes in the Babao River basin, China

    NARCIS (Netherlands)

    Ge, Y.; Wang, J.H.; Heuvelink, G.B.M.; Jin, R.; Li, X.; Wang, J.F.

    2015-01-01

    Optimal selection of observation locations is an essential task in designing an effective ecohydrological process monitoring network, which provides information on ecohydrological variables by capturing their spatial variation and distribution. This article presents a geostatistical method for mu

  4. Longevity, genes and efforts: an optimal taxation approach to prevention.

    Science.gov (United States)

    Leroux, M-L; Pestieau, P; Ponthiere, G

    2011-01-01

    This paper applies the analytical tools of optimal taxation theory to the design of the optimal subsidy on preventive behaviours, in an economy where longevity varies across agents, and depends on preventive expenditures and on longevity genes. Public intervention can be here justified on three grounds: corrections for misperceptions of the survival process and for externalities related to individual preventive behaviour, and redistribution across both earnings and genetic dimensions. The optimal subsidy on preventive expenditures is shown to depend on the combined impacts of misperception, externalities and self-selection. It is generally optimal to subsidize preventive efforts to an extent depending on the degree of individual myopia, on how productivity and genes are correlated, and on the complementarity of genes and preventive efforts in the survival function.

  5. A data fusion-based methodology for optimal redesign of groundwater monitoring networks

    Science.gov (United States)

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    In this paper, a new data fusion-based methodology is presented for spatio-temporal (S-T) redesigning of Groundwater Level Monitoring Networks (GLMNs). The kriged maps of three different criteria (i.e. marginal entropy of water table levels, estimation error variances of mean values of water table levels, and estimation values of long-term changes in water level) are combined for determining monitoring sub-areas of high and low priorities in order to consider different spatial patterns for each sub-area. The best spatial sampling scheme is selected by applying a new method, in which a regular hexagonal gridding pattern and the Thiessen polygon approach are respectively utilized in sub-areas of high and low monitoring priorities. An Artificial Neural Network (ANN) and a S-T kriging models are used to simulate water level fluctuations. To improve the accuracy of the predictions, results of the ANN and S-T kriging models are combined using a data fusion technique. The concept of Value of Information (VOI) is utilized to determine two stations with maximum information values in both sub-areas with high and low monitoring priorities. The observed groundwater level data of these two stations are considered for the power of trend detection, estimating periodic fluctuations and mean values of the stationary components, which are used for determining non-uniform sampling frequencies for sub-areas. The proposed methodology is applied to the Dehgolan plain in northwestern Iran. The results show that a new sampling configuration with 35 and 7 monitoring stations and sampling intervals of 20 and 32 days, respectively in sub-areas with high and low monitoring priorities, leads to a more efficient monitoring network than the existing one containing 52 monitoring stations and monthly temporal sampling.

  6. Potential and challenges in home care service process optimization : a route optimization approach

    OpenAIRE

    Nakari, Pentti J. E.

    2016-01-01

    Aging of the population is an increasing problem in many countries, including Finland, and it poses a challenge to public services such as home care. Vehicle routing problem (VRP) type optimization solutions are one possible way to decrease the time required for planning home visits and driving to customer addresses, as well as decreasing transportation costs. Although VRP optimization is widely and successfully applied to commercial and industrial logistics, the home care ...

  7. Structural Weight Optimization of Aircraft Wing Component Using FEM Approach.

    OpenAIRE

    Arockia Ruban M,; Kaveti Aruna

    2015-01-01

    One of the main challenges for the civil aviation industry is the reduction of its environmental impact through better fuel efficiency, achieved in part through structural optimization. Over the past years, improvements in performance and fuel efficiency have been achieved by simplifying the design of structural components and using composite materials to reduce the overall weight of the structure. This paper deals with the weight optimization of a transport aircraft with a low wing configuratio...

  8. FINANCIAL STRUCTURE OPTIMIZATION BY USING A GOAL PROGRAMMING APPROACH

    Directory of Open Access Journals (Sweden)

    Tunjo Perić

    2012-12-01

    This paper proposes a new methodology for solving multiple objective fractional linear programming problems using Taylor's formula and goal programming techniques. The proposed methodology is tested on an example of a company's financial structure optimization. The obtained results indicate the possibility of efficient application of the proposed methodology to a company's financial structure optimization as well as to other multi-criteria fractional programming problems.

  9. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    Science.gov (United States)

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy.
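    The two-dimensional design-space idea reduces, in its simplest form, to ranking candidate features by detection efficacy delivered per unit hardware cost. A toy sketch follows; the feature names and all numbers are hypothetical placeholders, not values from the paper.

```python
# Two-axis feature ranking for an implantable detector: combine detection
# efficacy with an estimated hardware (power) cost. All values are
# hypothetical placeholders for illustration.
features = {
    # name: (detection efficacy in [0, 1], estimated power in microwatts)
    "line_length":   (0.91, 4.0),
    "signal_energy": (0.88, 6.5),
    "wavelet_coeff": (0.95, 40.0),
    "fft_band":      (0.93, 25.0),
}

# Score each feature by efficacy per unit hardware cost, then rank.
scores = {name: eff / power for name, (eff, power) in features.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)   # cheapest adequate features rank first
```

    In the paper's fuller treatment, the hardware axis comes from circuit models of dynamic and leakage power rather than a single number, and an optimal combination of features (not just the single best) is selected.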

  10. TRACKING AND MONITORING OF TAGGED OBJECTS EMPLOYING PARTICLE SWARM OPTIMIZATION ALGORITHM IN A DEPARTMENTAL STORE

    Directory of Open Access Journals (Sweden)

    Indrajit Bhattacharya

    2011-05-01

    The present paper proposes a departmental store automation system based on Radio Frequency Identification (RFID) technology and the Particle Swarm Optimization (PSO) algorithm. The items in the departmental store, spread over different sections and multiple floors, are tagged with passive RFID tags. Each floor is divided into a number of zones depending on the different types of items placed in their respective racks. Each zone is equipped with one RFID reader, which constantly monitors the items in its zone and periodically sends that information to the application. The problem of systematic periodic monitoring of the store is addressed in this application so that the locations, distributions, and demands of every item in the store can be monitored intelligently. The proposed application is successfully demonstrated on a simulated case study.
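    A minimal global-best PSO of the kind such applications build on can be sketched as follows; the swarm parameters are common textbook choices and the sphere objective is a stand-in, since the paper's RFID-specific objective is not reproduced here.

```python
import numpy as np

def pso(f, dim=2, n_particles=20, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

best_x, best_f = pso(lambda p: float(np.sum(p ** 2)))   # toy sphere objective
print(best_x, best_f)
```

    For a store-monitoring use, the objective `f` would encode zone coverage or reader-placement quality instead of the sphere function used here for illustration.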

  11. Optimal on-airport monitoring of the integrity of GPS-based landing systems

    Science.gov (United States)

    Xie, Gang

    2004-11-01

    The Global Positioning System (GPS) is a satellite-based radio navigation system. The Local Area Augmentation System (LAAS) is a version of Differential GPS (DGPS) designed to reliably support aircraft precision approaches. The Integrity Monitor Testbed (IMT) is a prototype of the LAAS Ground Facility (LGF) that is used to evaluate whether the LGF can meet system integrity requirements. To ensure high integrity, the IMT has a variety of monitors to detect all possible failures. It also contains a failure-handling logic, known as Executive Monitoring (EXM), to exclude faulty measurements and recover once the failure disappears. Spatial ionospheric gradients are major threats to the LAAS. One focus of this thesis is exploring methods to quickly detect ionospheric gradients given the required low probability of false alarms. The first part of the thesis introduces GPS, LAAS, and the IMT and explains the algorithms and functionalities of IMT integrity monitors in detail. It then analyzes the failure responses of the integrity monitors under the most general measurement failure model. This analysis not only qualitatively maps the integrity monitors into the entire failure space, but also provides a tool to quantitatively compare the performance of different integrity monitors. In addition, the analysis examines the limitations of the existing monitors in detecting small but hazardous ionospheric gradients. The divergence Cumulative Sum (CUSUM) method is then derived and assessed. It can reduce the time required to detect marginal ionospheric gradients by about 30%. With the divergence CUSUM method implemented in the IMT, system integrity and performance are greatly improved. Different monitors can respond to the same failures. The last part of this thesis shows that the combination of these different monitors can detect certain failures more quickly than any individual monitor. This idea leads to a new method, called failure-specific testing, which can significantly
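    The CUSUM idea underlying the divergence monitor can be sketched with a one-sided chart on synthetic residuals; the reference value `k`, threshold `h`, and data below are illustrative, not the thesis's tuned values.

```python
import numpy as np

def cusum_detect(x, mu0=0.0, k=0.5, h=5.0):
    """One-sided CUSUM: accumulate drift above mu0 + k; alarm when the sum
    exceeds h. Returns the index of the first alarm, or -1 if none."""
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - mu0 - k))
        if s > h:
            return i
    return -1

# Nominal divergence residuals (bounded, zero-mean), then a sustained
# gradient-like bias of +2.0 units starting at sample 100.
clean = np.zeros(100)
clean[0::2], clean[1::2] = 0.4, -0.4
faulty = np.concatenate([clean, np.full(20, 2.0)])

print(cusum_detect(clean), cusum_detect(faulty))   # -> -1 103
```

    The accumulation is what buys the roughly 30% speed-up over single-sample tests for marginal gradients: each post-fault sample adds its excess over `mu0 + k` to the running sum, so even a small bias crosses the threshold within a few samples.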

  12. Plug-and-play monitoring and performance optimization for industrial automation processes

    CERN Document Server

    Luo, Hao

    2017-01-01

    Dr.-Ing. Hao Luo demonstrates the development of advanced plug-and-play (PnP) process monitoring and control systems for industrial automation processes. With the aid of the so-called Youla parameterization, a novel PnP process monitoring and control architecture (PnP-PMCA) with modularized components is proposed. To validate the developments, a case study on an industrial rolling mill benchmark is performed, and the real-time implementation on a laboratory brushless DC motor is presented. Contents: PnP Process Monitoring and Control Architecture; Real-Time Configuration Techniques for PnP Process Monitoring; Real-Time Configuration Techniques for PnP Performance Optimization; Benchmark Study and Real-Time Implementation. Target Groups: Researchers and students of Automation and Control Engineering; Practitioners in the area of Industrial and Production Engineering. The Author: Hao Luo received the Ph.D. degree at the Institute for Automatic Control and Complex Systems (AKS) at the University of Duisburg-Essen, Germany, ...

  13. Batch Statistical Process Monitoring Approach to a Cocrystallization Process.

    Science.gov (United States)

    Sarraguça, Mafalda C; Ribeiro, Paulo R S; Santos, Adenilson O Dos; Lopes, João A

    2015-12-01

    Cocrystals are defined as crystalline structures composed of two or more compounds that are solid at room temperature, held together by noncovalent bonds. Their main advantages are increased solubility, bioavailability, permeability, and stability, while retaining the bioactivity of the active pharmaceutical ingredient. The cocrystallization between furosemide and nicotinamide by solvent evaporation was monitored on-line using near-infrared spectroscopy (NIRS) as a process analytical technology tool. The near-infrared spectra were analyzed using principal component analysis. Batch statistical process monitoring was used to create control charts to track the process trajectory and define control limits. Normal and non-normal operating condition batches were performed and monitored with NIRS. The use of NIRS associated with batch statistical process models allowed the detection of abnormal variations in critical process parameters, such as the amount of solvent or the amounts of the initial components present in the cocrystallization.
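
    The batch monitoring scheme this record describes can be sketched in a few lines: fit a principal component model on in-control observations, then flag new observations whose squared prediction error (SPE) exceeds an empirical control limit. The data below are synthetic stand-ins for NIR spectra, and the component count and limit percentile are arbitrary choices, not those of the cited study.

```python
import numpy as np

# Sketch of a PCA-based control chart: fit PCA on normal operating data,
# then flag observations whose squared prediction error (SPE) exceeds a
# limit derived from the training data.

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(50, 20))        # 50 in-control "spectra"
mean = normal.mean(axis=0)
X = normal - mean
_, _, vt = np.linalg.svd(X, full_matrices=False)
P = vt[:2].T                                        # retain 2 components

def spe(x):
    """Squared prediction error of one observation under the PCA model."""
    r = (x - mean) - P @ (P.T @ (x - mean))
    return float(r @ r)

limit = np.percentile([spe(x) for x in normal], 99)  # empirical 99% limit
faulty = mean + 0.8                                  # large offset: abnormal batch
print(spe(faulty) > limit)  # the deviating observation is flagged
```

    A real deployment would also track scores with a Hotelling T² chart; the SPE chart alone already catches variation that leaves the model subspace.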

  14. Approaches to integrated monitoring for environmental health impact assessment

    Directory of Open Access Journals (Sweden)

    Liu Hai-Ying

    2012-11-01

    Although Integrated Environmental Health Monitoring (IEHM) is considered an essential tool to better understand complex environmental health issues, there is no consensus on how to develop such a programme. We reviewed four existing frameworks and eight monitoring programmes in the area of environmental health. We identified the DPSEEA (Driving Force-Pressure-State-Exposure-Effect-Action) framework as the most suitable for developing an IEHM programme for environmental health impact assessment. Our review showed that most of the existing monitoring programmes have been designed for specific purposes, resulting in a narrow scope and a limited number of parameters. This therefore limits their relevance for studying complex environmental health topics. Other challenges include limited spatial and temporal data availability, limited development of data-sharing mechanisms, heterogeneous data quality, a lack of adequate methodologies to link disparate data sources, and a low level of interdisciplinary cooperation. To overcome some of these challenges, we propose a DPSEEA-based conceptual framework for an IEHM programme that would enable monitoring and measuring the impact of environmental changes on human health. We define IEHM as ‘a systemic process to measure, analyse and interpret the state and changes of natural-eco-anthropogenic systems and its related health impact over time at the same location with causative explanations across the various compartments of the cause-effect chain’. We develop a structural work process to integrate information that is based on existing environmental health monitoring programmes. Such a framework allows the development of combined monitoring systems that exhibit a large degree of compatibility between countries and regions.

  15. Ant Colony Optimization Analysis on Overall Stability of High Arch Dam Basis of Field Monitoring

    Directory of Open Access Journals (Sweden)

    Peng Lin

    2014-01-01

    A dam ant colony optimization (D-ACO) analysis of the overall stability of high arch dams on complicated foundations is presented in this paper. A modified ant colony optimization (ACO) model is proposed for obtaining dam concrete and rock mechanical parameters. A typical dam parameter feedback problem is posed for a nonlinear back-analysis numerical model based on field-monitored deformation and ACO. The basic principle of the proposed model is the establishment of an objective function for optimizing the real concrete and rock mechanical parameters. The feedback analysis is then implemented with a modified ant colony algorithm. The algorithm performance is satisfactory, and its accuracy is verified. The m groups of feedback parameters are used to run a nonlinear FEM code, and the resulting displacement and stress distributions are discussed. A feedback analysis of the deformation of the Lijiaxia arch dam based on the modified ant colony optimization method is also conducted. By considering various material parameters obtained using different analysis methods, comparative analyses were conducted on dam displacements, stress distribution characteristics, and overall dam stability. The comparison results show that the proposed model can effectively solve for multiple feedback parameters of dam concrete and rock material and satisfies the assessment requirements of geotechnical structural engineering.

  16. Ant colony optimization analysis on overall stability of high arch dam basis of field monitoring.

    Science.gov (United States)

    Lin, Peng; Liu, Xiaoli; Chen, Hong-Xin; Kim, Jinxie

    2014-01-01

    A dam ant colony optimization (D-ACO) analysis of the overall stability of high arch dams on complicated foundations is presented in this paper. A modified ant colony optimization (ACO) model is proposed for obtaining dam concrete and rock mechanical parameters. A typical dam parameter feedback problem is posed for a nonlinear back-analysis numerical model based on field-monitored deformation and ACO. The basic principle of the proposed model is the establishment of an objective function for optimizing the real concrete and rock mechanical parameters. The feedback analysis is then implemented with a modified ant colony algorithm. The algorithm performance is satisfactory, and its accuracy is verified. The m groups of feedback parameters are used to run a nonlinear FEM code, and the resulting displacement and stress distributions are discussed. A feedback analysis of the deformation of the Lijiaxia arch dam based on the modified ant colony optimization method is also conducted. By considering various material parameters obtained using different analysis methods, comparative analyses were conducted on dam displacements, stress distribution characteristics, and overall dam stability. The comparison results show that the proposed model can effectively solve for multiple feedback parameters of dam concrete and rock material and satisfies the assessment requirements of geotechnical structural engineering.

  17. Bioaccumulation in aquatic systems: methodological approaches, monitoring and assessment

    DEFF Research Database (Denmark)

    Schäfer, Sabine; Buchmeier, Georgia; Claus, Evelyn

    2015-01-01

    Bioaccumulation, the accumulation of a chemical in an organism relative to its level in the ambient medium, is of major environmental concern. Thus, monitoring chemical concentrations in biota is widely and increasingly used for assessing the chemical status of aquatic ecosystems. In this paper, various scientific and regulatory aspects of bioaccumulation in aquatic systems and the relevant critical issues are discussed. Monitoring chemical concentrations in biota can be used for compliance checking with regulatory directives, for identification of chemical sources or event-related environmental...

  18. Monitoring Customer Satisfaction in Service Industry: A Cluster Analysis Approach

    Directory of Open Access Journals (Sweden)

    Matúš Horváth

    2012-10-01

    One of the key performance indicators of an organization's quality management system is customer satisfaction. The process of monitoring customer satisfaction is therefore an important part of the measuring processes of the quality management system. This paper deals with new ways to analyse and monitor customer satisfaction using data on how customers use the organisation's services together with customer leaving rates. The article uses cluster analysis to segment customers, with the aim of increasing the accuracy of the results and of the decisions based on them. The application example was created as part of a bachelor thesis.
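
    The customer segmentation step this record relies on can be sketched with a minimal k-means clustering, assuming two invented features (monthly service usage and leaving rate); the customer values below are illustrative, not data from the cited study.

```python
import random

# Minimal k-means sketch for customer segmentation on two features:
# (monthly service usage, customer leaving rate). Data are invented.

def kmeans(points, k, iters=20, seed=1):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each customer to its nearest center (squared distance).
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        # Recompute centers as cluster means; keep old center if cluster empty.
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Two visibly separated groups: heavy users with low leaving rate,
# light users with high leaving rate.
customers = [(9, 0.1), (10, 0.15), (11, 0.05), (1, 0.8), (2, 0.9), (1.5, 0.85)]
centers, clusters = kmeans(customers, k=2)
print(sorted(len(c) for c in clusters))  # each segment of 3 customers recovered
```

    In practice the number of segments k would be chosen with a validity index (e.g. silhouette) rather than fixed in advance.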

  19. Monitoring Customer Satisfaction in Service Industry: A Cluster Analysis Approach

    Directory of Open Access Journals (Sweden)

    Matúš Horváth

    2012-11-01

    One of the key performance indicators of an organization's quality management system is customer satisfaction. The process of monitoring customer satisfaction is therefore an important part of the measuring processes of the quality management system. This paper deals with new ways to analyse and monitor customer satisfaction using data on how customers use the organisation's services together with customer leaving rates. The article uses cluster analysis to segment customers, with the aim of increasing the accuracy of the results and of the decisions based on them. The application example was created as part of a bachelor thesis.

  20. An Ethnographic Observational Study to Evaluate and Optimize the Use of Respiratory Acoustic Monitoring in Children Receiving Postoperative Opioid Infusions.

    Science.gov (United States)

    Görges, Matthias; West, Nicholas C; Christopher, Nancy A; Koch, Jennifer L; Brodie, Sonia M; Lowlaavar, Nasim; Lauder, Gillian R; Ansermino, J Mark

    2016-04-01

    Respiratory depression in children receiving postoperative opioid infusions is a significant risk because of the interindividual variability in analgesic requirement. Detection of respiratory depression (or apnea) in these children may be improved with the introduction of automated acoustic respiratory rate (RR) monitoring. However, early detection of adverse events must be balanced with the risk of alarm fatigue. Our objective was to evaluate the use of acoustic RR monitoring in children receiving opioid infusions on a postsurgical ward and identify the causes of false alarm and optimal alarm thresholds. A video ethnographic study was performed using an observational, mixed methods approach. After surgery, an acoustic RR sensor was placed on the participant's neck and attached to a Rad87 monitor. The monitor was networked with paging for alarms. Vital signs data and paging notification logs were obtained from the central monitoring system. Webcam videos of the participant, infusion pump, and Rad87 monitor were recorded, stored on a secure server, and subsequently analyzed by 2 research nurses to identify the cause of the alarm, response, and effectiveness. Alarms occurring within a 90-second window were grouped into a single-alarm response opportunity. Data from 49 patients (30 females) with median age 14 (range, 4.4-18.8) years were analyzed. The 896 bedside vital sign threshold alarms resulted in 160 alarm response opportunities (44 low RR, 74 high RR, and 42 low SpO2). In 141 periods (88% of total), for which video was available, 65% of alarms were deemed effective (followed by an alarm-related action within 10 minutes). Nurses were the sole responders in 55% of effective alarms and the patient or parent in 20%. Episodes of desaturation (SpO2 10 bpm in 6 of 9 patients. Based on all RR samples observed, the default alarm thresholds, to serve as a starting point for each patient, would be a low RR of 6 (>10 years of age) and 10 (4-9 years of age). 

  1. Optimized two-level placement of test points for multi-objective air monitoring of the Three Gorges Reservoir area

    Institute of Scientific and Technical Information of China (English)

    XIAO Dong-hai; TAN Chun-lu; WANG Jun-qiang; ZHONG Yuan-chang

    2007-01-01

    To fit the complicated geographic conditions of the Three Gorges Reservoir area, a two-level multi-objective monitoring system was developed to monitor the atmosphere of the area. Statistical analysis of environmental monitoring data and the macro control principle were employed to configure the upper layer. The lower layer was designed by applying rules of thumb to the local terrain and the specific point sources of pollution therein. The optimized two-level system comprises an upper layer of 16 monitoring stations distributed at places of diverse geographical, ecological, economic and social characteristics, and a lower layer of 16 sub-machines at each monitoring station of the upper layer. This optimal configuration fits the complicated conditions of the Three Gorges Reservoir area, substantially cuts installation and operation costs, and provides accurate, high-resolution atmospheric monitoring data over the entire area.

  2. A Comparative Study between Optimization and Market-Based Approaches to Multi-Robot Task Allocation

    Directory of Open Access Journals (Sweden)

    Mohamed Badreldin

    2013-01-01

    This paper presents a comparative study between optimization-based and market-based approaches used for solving the multi-robot task allocation (MRTA) problem that arises in the context of multi-robot systems (MRS). The two proposed approaches are used to find the optimal allocation of a number of heterogeneous robots to a number of heterogeneous tasks. Both were extensively evaluated over a number of test scenarios to assess their capability of handling complex, heavily constrained MRS applications with extended numbers of tasks and robots. Finally, a comparative study is implemented between the two approaches, and the results show that the optimization-based approach outperforms the market-based approach in terms of optimal allocation and computational time.

  3. A correlation consistency based multivariate alarm thresholds optimization approach.

    Science.gov (United States)

    Gao, Huihui; Liu, Feifei; Zhu, Qunxiong

    2016-11-01

    Different alarm thresholds can generate different alarm data, resulting in different correlations. A new multivariate alarm-threshold optimization methodology based on the correlation consistency between process data and alarm data is proposed in this paper. Interpretative structural modeling is adopted to select the key variables. For the key variables, the correlation coefficients of the process data are calculated by Pearson correlation analysis, while the correlation coefficients of the alarm data are calculated by kernel density estimation. To ensure correlation consistency, the objective function is established as the sum of the absolute differences between these two types of correlations. The optimal thresholds are obtained using a particle swarm optimization algorithm. A case study of the Tennessee Eastman process demonstrates the effectiveness of the proposed method.
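
    The correlation-consistency objective can be sketched as follows. This toy version replaces the paper's kernel density estimation and particle swarm optimization with a Pearson correlation on binary alarm sequences and a plain grid search, and the two process variables are simulated, so it illustrates the idea rather than reproduces the method.

```python
import math
import random

# Choose alarm thresholds so that the correlation between the two alarm
# sequences stays close to the correlation of the underlying process pair.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(7)
v1 = [random.gauss(0, 1) for _ in range(2000)]
v2 = [a + random.gauss(0, 0.5) for a in v1]          # correlated process pair
r_process = pearson(v1, v2)

def inconsistency(t1, t2):
    """Objective: |process correlation - alarm-sequence correlation|."""
    a1 = [1.0 if a > t1 else 0.0 for a in v1]
    a2 = [1.0 if b > t2 else 0.0 for b in v2]
    return abs(r_process - pearson(a1, a2))

grid = [i / 10 for i in range(0, 21, 2)]             # candidate thresholds 0..2
best = min(((t1, t2) for t1 in grid for t2 in grid),
           key=lambda p: inconsistency(*p))
print(best, round(inconsistency(*best), 3))
```

    The paper optimizes a sum of such terms over many variable pairs with PSO; the single-pair grid search above is just the smallest version of the same objective.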

  4. A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios

    Directory of Open Access Journals (Sweden)

    Hiroshi Shiraishi

    2012-01-01

    This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of the return processes are unknown. We first generate simulated sample paths of the random returns by using an AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period portfolio choice problems and multiperiod ones is that the value function is time dependent. Our method accounts for this time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of our method. The results show the necessity of accounting for the time dependency of the value function.
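
    The path-generation step of such a method can be sketched with a toy AR(1) residual bootstrap: fit x_t = φ·x_{t-1} + e_t by least squares, resample the empirical residuals, and rebuild synthetic paths. The data, the AR coefficient, and the path counts below are all invented for illustration.

```python
import random

# Toy AR(1) residual bootstrap: the distribution of the innovations is
# never assumed, only resampled from the fitted residuals.

random.seed(42)
true_phi = 0.6
x = [0.0]
for _ in range(500):
    x.append(true_phi * x[-1] + random.gauss(0, 1))

# Least-squares estimate of phi (no intercept, for simplicity).
num = sum(a * b for a, b in zip(x[1:], x[:-1]))
den = sum(a * a for a in x[:-1])
phi_hat = num / den
resid = [a - phi_hat * b for a, b in zip(x[1:], x[:-1])]

def bootstrap_path(n):
    """One synthetic path driven by resampled residuals."""
    path = [x[0]]
    for _ in range(n):
        path.append(phi_hat * path[-1] + random.choice(resid))
    return path

paths = [bootstrap_path(100) for _ in range(10)]
print(round(phi_hat, 2), len(paths), len(paths[0]))
```

    Each bootstrapped path then feeds the backward evaluation of the time-dependent value function, one portfolio optimization per path and per investment time.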

  5. Deterministic global optimization an introduction to the diagonal approach

    CERN Document Server

    Sergeyev, Yaroslav D

    2017-01-01

    This book begins with a concentrated introduction to deterministic global optimization and moves forward to present new original results from the authors, who are well-known experts in the field. Multiextremal continuous problems that have an unknown structure, with Lipschitz objective functions and functions having first Lipschitz derivatives, defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced, which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for search-domain partitioning. In addition, a survey on derivative-free methods and methods using first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods, with examples of applications and problems arising in the numerical testing of global optimization algorithms, are discussed...

  6. Structured controllers for uncertain systems a stochastic optimization approach

    CERN Document Server

    Toscano, Rosario

    2013-01-01

    Structured Controllers for Uncertain Systems focuses on the development of easy-to-use design strategies for robust low-order or fixed-structure controllers (particularly the industrially ubiquitous PID controller). These strategies are based on a recently developed stochastic optimization method termed the "Heuristic Kalman Algorithm" (HKA), the use of which results in a simplified methodology that enables the solution of the structured control problem without a profusion of user-defined parameters. An overview of the main stochastic methods employable in the context of continuous non-convex optimization problems is also provided, and various optimization criteria for the design of a structured controller are considered; H∞, H2, and mixed H2/H∞ each merits a chapter to itself. Time-domain performance specifications can be easily incorporated in the design. Advances in Industrial Control aims to report and encourage the transfer of technology in control engineering. The rapid development of control technolo...

  7. Hybrid Quantum-Classical Approach to Quantum Optimal Control.

    Science.gov (United States)

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-14

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.

  8. A linear nonequilibrium thermodynamics approach to optimization of thermoelectric devices

    CERN Document Server

    Ouerdane, H; Apertet, Y; Michot, A; Abbout, A

    2013-01-01

    Improvement of thermoelectric systems in terms of performance and range of applications relies on progress in materials science and optimization of device operation. In this chapter, we focus on optimization by taking into account the interaction of the system with its environment. For this purpose, we consider the illustrative case of a thermoelectric generator coupled to two temperature baths via heat exchangers characterized by a thermal resistance, and we analyze its working conditions. Our main message is that both electrical and thermal impedance matching conditions must be met for optimal device performance. Our analysis is fundamentally based on linear nonequilibrium thermodynamics using the force-flux formalism. An outlook on mesoscopic systems is also given.

  9. An approach to identify the optimal cloud in cloud federation

    Directory of Open Access Journals (Sweden)

    Saumitra Baleshwar Govil

    2012-01-01

    Enterprises are migrating towards cloud computing for its ability to provide agility, robustness and feasibility in operations. To increase the reliability and availability of services, clouds have grown into federated clouds, i.e., unions of clouds. There are still major issues in federated clouds which, when solved, could increase satisfaction for service providers and clients alike. One such issue is selecting the optimal foreign cloud in the federation that provides services according to the client requirements. In this paper, we propose a model to select the optimal cloud service provider based on the capability and performance of the available clouds in the federation. We use two matrix models to obtain the capability and performance parametric values. These are matched with the client requirements, and the optimal foreign cloud service provider is selected.

  10. A log mining approach for process monitoring in SCADA

    NARCIS (Netherlands)

    Hadziosmanovic, Dina; Bolzoni, Damiano; Hartel, Pieter

    2012-01-01

    SCADA (Supervisory Control and Data Acquisition) systems are used for controlling and monitoring industrial processes. We propose a methodology to systematically identify potential process-related threats in SCADA. Process-related threats take place when an attacker gains user access rights and perf

  11. Mastitis diagnostics and performance monitoring: a practical approach

    NARCIS (Netherlands)

    Lam, T.J.G.M.; Olde Riekerink, R.; Sampimon, O.C.; Smith, H.E.

    2009-01-01

    In this paper a review is given of frequently used mastitis diagnostic methods in modern dairy practice. Methods used at the quarter, cow, herd and regional or national level are discussed, including their usability for performance monitoring in udder health. Future developments, such as systems in

  12. A Log Mining Approach for Process Monitoring in SCADA

    NARCIS (Netherlands)

    Hadziosmanovic, D.; Bolzoni, D.; Hartel, Pieter H.

    2012-01-01

    SCADA (Supervisory Control and Data Acquisition) systems are used for controlling and monitoring industrial processes. We propose a methodology to systematically identify potential process-related threats in SCADA. Process-related threats take place when an attacker gains user access rights and perf

  13. OPTIMAL WELL LOCATOR (OWL): A SCREENING TOOL FOR EVALUATING LOCATIONS OF MONITORING WELLS: USER'S GUIDE VERSION 1.2

    Science.gov (United States)

    The Optimal Well Locator ( OWL) program was designed and developed by USEPA to be a screening tool to evaluate and optimize the placement of wells in long term monitoring networks at small sites. The first objective of the OWL program is to allow the user to visualize the change ...

  14. Optimization based tuning approach for offset free MPC

    DEFF Research Database (Denmark)

    Olesen, Daniel Haugård; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp

    2012-01-01

    We present an optimization-based tuning procedure with certain robustness properties for an offset-free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The advantage of ARX model representations is that standard system identification techniques using convex optimization can be used to identify such models from input-output data. The stochastic model of the ARX model identified from input-output data is modified with an ARMA model designed as part of the MPC design procedure to ensure offset-free control. The ARMAX...

  15. An approach of optimal sensitivity applied in the tertiary loop of the automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Belati, Edmarcio A. [CIMATEC - SENAI, Salvador, BA (Brazil); Alves, Dilson A. [Electrical Engineering Department, FEIS, UNESP - Sao Paulo State University (Brazil); da Costa, Geraldo R.M. [Electrical Engineering Department, EESC, USP - Sao Paulo University (Brazil)

    2008-09-15

    This paper proposes an approach of optimal sensitivity applied in the tertiary loop of the automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is directly determined after a perturbation, i.e., without the need for an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the technique of optimal sensitivity, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of the automatic generation control, named the power sensitivity mode. Test results are presented to show the good performance of this approach. (author)

  16. A diaCEST MRI approach for monitoring liposomal accumulation in tumors.

    Science.gov (United States)

    Chan, Kannie W Y; Yu, Tao; Qiao, Yuan; Liu, Qiang; Yang, Ming; Patel, Himatkumar; Liu, Guanshu; Kinzler, Kenneth W; Vogelstein, Bert; Bulte, Jeff W M; van Zijl, Peter C M; Hanes, Justin; Zhou, Shibin; McMahon, Michael T

    2014-04-28

    Nanocarrier-based chemotherapy allows preferential delivery of therapeutics to tumors and has been found to improve the efficacy of cancer treatment. However, difficulties in tracking nanocarriers and evaluating their pharmacological fates in patients have limited judicious selection of patients to those who might most benefit from nanotherapeutics. To enable the monitoring of nanocarriers in vivo, we developed MRI-traceable diamagnetic Chemical Exchange Saturation Transfer (diaCEST) liposomes. The diaCEST liposomes were based on the clinical formulation of liposomal doxorubicin (i.e. DOXIL®) and were loaded with barbituric acid (BA), a small, organic, biocompatible diaCEST contrast agent. The optimized diaCEST liposomal formulation, with a BA-to-lipid ratio of 25%, exhibited 30% contrast enhancement at B1=4.7μT in vitro. The contrast was stable, with ~80% of the initial CEST signal sustained over 8 h in vitro. We used the diaCEST liposomes to monitor the response to tumor necrosis factor-alpha (TNF-α), an agent in clinical trials that increases vascular permeability and uptake of nanocarriers into tumors. After systemic administration of diaCEST liposomes to mice bearing CT26 tumors, we found an average diaCEST contrast at the BA frequency (5 ppm) of 0.4% at B1=4.7μT, whereas co-administration of TNF-α increased the contrast to 1.5%. This novel approach provides a non-radioactive, non-metallic, biocompatible, semi-quantitative, and clinically translatable approach to evaluate the tumor targeting of stealth liposomes in vivo, which may enable personalized nanomedicine.

  17. A Two-stage Optimal Network Reconfiguration Approach for Minimizing Energy Loss of Distribution Networks Using Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Wei-Tzer Huang

    2015-12-01

    This study aimed to minimize energy losses in traditional distribution networks and microgrids through a network reconfiguration and phase-balancing approach. To address this problem, an algorithm composed of a multi-objective function and operation constraints is proposed. Network connection matrices based on graph theory and the backward/forward sweep method are used to analyze power flow. A minimum-energy-loss approach is developed for network reconfiguration and phase balancing, and the particle swarm optimization (PSO) algorithm is adopted to solve this optimal combination problem. The proposed approach is tested on the IEEE 37-bus test system and the first outdoor microgrid test bed established by the Institute of Nuclear Energy Research (INER) in Taiwan. Simulation results demonstrate that the proposed two-stage approach can be applied in network reconfiguration to minimize energy loss.

  18. Approaches to monitoring biological outcomes for HPV vaccination: challenges of early adopter countries

    DEFF Research Database (Denmark)

    Wong, Charlene A; Saraiya, Mona; Hariri, Susan;

    2011-01-01

    In this review, we describe plans to monitor the impact of human papillomavirus (HPV) vaccine on biologic outcomes in selected international areas (Australia, Canada, Mexico, the Nordic countries, Scotland, and the United States) that have adopted this vaccine. This summary of monitoring plans provides a background for discussing the challenges of vaccine monitoring in settings where resources and capacity may vary. A variety of approaches that depend on existing infrastructure and resources are planned or underway for monitoring HPV vaccine impact. Monitoring HPV vaccine impact on biologic...

  19. Novel Approach to Nonlinear PID Parameter Optimization Using Ant Colony Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    Duan Hai-bin; Wang Dao-bo; Yu Xiu-fen

    2006-01-01

    This paper presents an application of an Ant Colony Optimization (ACO) algorithm to optimize the parameters in the design of a type of nonlinear PID controller. The ACO algorithm is a novel heuristic bionic algorithm based on the behaviour of real ants searching for food in nature. In order to optimize the parameters of the nonlinear PID controller using the ACO algorithm, an objective function based on position tracking error was constructed, and an elitist strategy was adopted in the improved ACO algorithm. Detailed simulation steps are presented. The resulting nonlinear PID controller achieves high control precision and quick response.
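
    The elitist ACO parameter search this record describes can be caricatured in a few lines. The sketch below searches a discrete grid of (Kp, Ki, Kd) gains with per-dimension pheromone tables; the closed-loop simulation is replaced by a toy objective (distance to an invented "ideal" gain triple) purely to keep the example self-contained, where a real tuning run would integrate the tracking error of the simulated loop.

```python
import random

# Highly simplified elitist ant-colony search over a discrete grid of PID
# gains. The grid, the "ideal" gains and all tuning constants are invented.

random.seed(3)
grid = [i / 10 for i in range(0, 51)]          # candidate gain values 0.0..5.0
target = (1.2, 0.4, 2.0)                        # stand-in "ideal" (Kp, Ki, Kd)

def cost(kp, ki, kd):
    # Toy objective standing in for the integrated tracking error.
    return (kp - target[0]) ** 2 + (ki - target[1]) ** 2 + (kd - target[2]) ** 2

tau = [[1.0] * len(grid) for _ in range(3)]     # one pheromone table per gain
best, best_idx, best_cost, first_cost = None, None, float("inf"), None

for _ in range(60):                             # iterations
    for _ in range(20):                         # ants per iteration
        idx = [random.choices(range(len(grid)), weights=t)[0] for t in tau]
        gains = tuple(grid[i] for i in idx)
        c = cost(*gains)
        if first_cost is None:
            first_cost = c
        if c < best_cost:
            best, best_idx, best_cost = gains, idx, c
    for d in range(3):                          # evaporation + elitist deposit
        tau[d] = [0.9 * t for t in tau[d]]
        tau[d][best_idx[d]] += 1.0

print(best, round(best_cost, 3))
```

    The elitist deposit reinforces only the best triple found so far, so the pheromone tables gradually concentrate the sampling around it while evaporation keeps a little exploration alive.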

  20. A decision-analytic approach to the optimal allocation of resources for endangered species consultation

    Science.gov (United States)

    Converse, Sarah J.; Shelley, Kevin J.; Morey, Steve; Chan, Jeffrey; LaTier, Andrea; Scafidi, Carolyn; Crouse, Deborah T.; Runge, Michael C.

    2011-01-01

    The resources available to support conservation work, whether time or money, are limited. Decision makers need methods to help them identify the optimal allocation of limited resources to meet conservation goals, and decision analysis is uniquely suited to assist with the development of such methods. In recent years, a number of case studies have been described that examine optimal conservation decisions under fiscal constraints; here we develop methods to look at other types of constraints, including limited staff and regulatory deadlines. In the US, Section Seven consultation, an important component of protection under the federal Endangered Species Act, requires that federal agencies overseeing projects consult with federal biologists to avoid jeopardizing species. A benefit of consultation is negotiation of project modifications that lessen impacts on species, so staff time allocated to consultation supports conservation. However, some offices have experienced declining staff, potentially reducing the efficacy of consultation. This is true of the US Fish and Wildlife Service's Washington Fish and Wildlife Office (WFWO) and its consultation work on federally-threatened bull trout (Salvelinus confluentus). To improve effectiveness, WFWO managers needed a tool to help allocate this work to maximize conservation benefits. We used a decision-analytic approach to score projects based on the value of staff time investment, and then identified an optimal decision rule for how scored projects would be allocated across bins, where projects in different bins received different time investments. We found that, given current staff, the optimal decision rule placed 80% of informal consultations (those where expected effects are beneficial, insignificant, or discountable) in a short bin where they would be completed without negotiating changes. The remaining 20% would be placed in a long bin, warranting an investment of seven days, including time for negotiation. 
For formal
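
The binned decision rule above can be sketched as a simple greedy allocation: score projects, give every project the short treatment, then promote the highest-scoring ones to the long bin until the staff budget runs out. The scores, costs in staff-days and the budget below are invented for illustration, not the WFWO values.

```python
# Illustrative sketch (names and numbers are assumptions, not the WFWO model):
# every consultation gets at least the short treatment; the highest-scoring
# projects are promoted to the long bin (7 days, with negotiation) while
# staff-days remain.
def allocate(projects, staff_days, long_cost=7.0, short_cost=1.0):
    ranked = sorted(projects, key=lambda p: p["score"], reverse=True)
    remaining = staff_days - short_cost * len(ranked)
    long_bin, short_bin = [], []
    for p in ranked:
        extra = long_cost - short_cost        # marginal cost of promotion
        if remaining >= extra:
            long_bin.append(p["name"])
            remaining -= extra
        else:
            short_bin.append(p["name"])
    return long_bin, short_bin

projects = [{"name": f"P{i}", "score": s}
            for i, s in enumerate([9.1, 3.2, 7.5, 1.0, 5.8])]
long_bin, short_bin = allocate(projects, staff_days=17)
```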

  1. Reverse convex problems: an approach based on optimality conditions

    Directory of Open Access Journals (Sweden)

    Ider Tseveendorj

    2006-01-01

    We present some results concerning reverse convex problems. Global optimality conditions for the problems with a nonsmooth reverse convex constraint are established and convergence of an algorithm in the case of linear program with an additional quadratic reverse convex constraint is studied.

  2. Particle Swarm Optimization approach to defect detection in armour ceramics.

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2017-03-01

    In this research, various extracted features were used in the development of an automated ultrasonic-sensor-based inspection system that enables defect classification in each ceramic component prior to despatch to the field. Classification is an important task, and the large number of irrelevant and redundant features commonly introduced into a dataset reduces a classifier's performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of the classification system. In the context of a multi-criteria optimization problem (i.e. minimizing the classification error rate while reducing the number of features) such as the one discussed in this research, the literature suggests that evolutionary algorithms offer good results. Besides, it is noted that Particle Swarm Optimization (PSO) has not been explored, especially in the field of classification of high-frequency ultrasonic signals. Hence, a binary-coded Particle Swarm Optimization (BPSO) technique is investigated for the implementation of feature subset selection and to optimize the classification error rate. In the proposed method, the population data are used as input to an Artificial Neural Network (ANN) based classification system to obtain the error rate, as the ANN serves as the evaluator of the PSO fitness function.
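
A minimal BPSO feature-selection loop in the spirit of the abstract: velocities are squashed through a sigmoid to give bit-flip probabilities, and the fitness is a classification error rate plus a small feature-count penalty. As assumptions, a synthetic dataset and a nearest-centroid classifier stand in for the ultrasonic signal features and the ANN evaluator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in dataset: 4 informative features + 6 noise features.
n = 200
y = rng.integers(0, 2, n)
X = np.hstack([y[:, None] + 0.5 * rng.standard_normal((n, 4)),
               rng.standard_normal((n, 6))])

def error_rate(mask):
    """Fitness: classifier error plus a small penalty on feature count."""
    if not mask.any():
        return 1.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)   # nearest-centroid rule
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred != y).mean() + 0.01 * mask.mean()

def bpso(n_particles=20, n_iter=30, w=0.7, c1=1.5, c2=1.5):
    d = X.shape[1]
    pos = rng.integers(0, 2, (n_particles, d)).astype(float)
    vel = rng.standard_normal((n_particles, d)) * 0.1
    pbest = pos.copy()
    pbest_f = np.array([error_rate(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        # binary update: sigmoid(velocity) is the probability of a 1-bit
        pos = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))).astype(float)
        f = np.array([error_rate(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g.astype(bool), pbest_f.min()

mask, fit = bpso()
```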

  3. Taxing Strategies for Carbon Emissions: A Bilevel Optimization Approach

    Directory of Open Access Journals (Sweden)

    Wei Wei

    2014-04-01

    This paper presents a quantitative and computational method to determine the optimal tax rate among generating units. To strike a balance between the reduction of carbon emissions and the profit of the energy sectors, the proposed bilevel optimization model can be regarded as a Stackelberg game between the government agency and the generation companies. The upper level, which represents the government agency, aims to limit total carbon emissions within a certain level by setting optimal tax rates among generators according to their emission performance. The lower level, which represents the decision behavior of the grid operator, tries to minimize the total production cost under the tax rates set by the government. The bilevel optimization model is finally reformulated into a mixed integer linear program (MILP) which can be solved by off-the-shelf MILP solvers. Case studies on a 10-unit system as well as a provincial power grid in China demonstrate the validity of the proposed method and its capability in practical applications.
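
The Stackelberg structure can be illustrated with a toy model: the lower level is a merit-order dispatch under taxed marginal costs (the operator's cost-minimizing response), and the upper level searches for the smallest tax multiplier whose induced dispatch meets an emission cap. The generator data and the single scalar tax knob are assumptions; the paper instead optimizes differentiated rates via an MILP reformulation.

```python
# Toy Stackelberg sketch (generator data are invented for illustration).
gens = [  # (name, capacity MWh, cost $/MWh, emissions tCO2/MWh)
    ("coal", 300, 20.0, 1.0),
    ("gas",  200, 35.0, 0.4),
    ("wind", 100, 40.0, 0.0),
]
demand, cap = 450.0, 300.0   # MWh demand, tCO2 emission cap

def dispatch(t):
    """Lower level: dispatch cheapest effective-cost units first,
    where the effective marginal cost is cost + t * emission rate."""
    order = sorted(gens, key=lambda g: g[2] + t * g[3])
    out, left, emissions = {}, demand, 0.0
    for name, capacity, cost, emis in order:
        q = min(capacity, left)
        out[name] = q
        left -= q
        emissions += q * emis
    return out, emissions

def optimal_tax(step=1.0, t_max=200.0):
    """Upper level: smallest tax multiplier whose dispatch meets the cap."""
    t = 0.0
    while t <= t_max:
        out, emis = dispatch(t)
        if emis <= cap:
            return t, out, emis
        t += step
    return None

t_star, plan, emis = optimal_tax()
```

At low tax the coal unit is dispatched first and the cap is violated; the search stops at the first multiplier that reorders the merit list enough to satisfy it.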

  4. Reverse convex problems: an approach based on optimality conditions

    OpenAIRE

    Ider Tseveendorj

    2006-01-01

    We present some results concerning reverse convex problems. Global optimality conditions for the problems with a nonsmooth reverse convex constraint are established and convergence of an algorithm in the case of linear program with an additional quadratic reverse convex constraint is studied.

  5. Multi-objective optimization approach for air traffic flow management

    Directory of Open Access Journals (Sweden)

    Fadil Rabie

    2017-01-01

    The decision-making stage was then performed with the aid of data clustering techniques to reduce the size of the Pareto-optimal set and obtain a smaller representation of the multi-objective design space, thereby making it easier for the decision-maker to find satisfactory and meaningful trade-offs and to select a preferred final design solution.
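
The clustering step can be sketched as follows: a plain k-means (implemented directly with NumPy) groups a synthetic two-objective Pareto front, and the member nearest each centroid is returned as a representative trade-off. The front shape and k = 5 are assumptions for illustration; the paper does not specify its clustering algorithm here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 2-objective Pareto front (a generic trade-off curve f2 ~ 1/f1).
f1 = np.sort(rng.uniform(0.2, 5.0, 60))
front = np.column_stack([f1, 1.0 / f1])

def kmeans_representatives(points, k=5, n_iter=50, seed=3):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([points[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    # representative = the actual Pareto solution closest to each centre,
    # so the reduced set contains only attainable designs
    reps = [points[labels == j][
                ((points[labels == j] - centers[j]) ** 2).sum(1).argmin()]
            for j in range(k) if (labels == j).any()]
    return np.array(reps)

reps = kmeans_representatives(front, k=5)
```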

  6. A comparison between remote sensing approaches to water extent monitoring

    Science.gov (United States)

    Elmi, Omid; Tourian, Mohammad Javad; Sneeuw, Nico

    2013-04-01

    Monitoring the variation of water storage over a long period is a primary issue for understanding the impact of climate change and human activities on the earth's water resources. In order to obtain the change in water volume of a lake or reservoir, in addition to water level, water extent must be determined repeatedly at an appropriate time interval. Optical satellite imagery, as a passive system, is the main source for determining coastline change, as it is easy to interpret. Optical sensors acquire the energy reflected from sunlight in various bands, from visible to near infrared; additionally, panchromatic mode provides more geometric detail. Establishing a ratio between visible bands is the most common way to extract coastlines, because with such a ratio water and land can be separated directly. Also, since the reflectance of water is distinctly less than that of soil in the infrared bands, applying a histogram threshold to an infrared band is an effective way of extracting coastlines. However, optical imagery is highly vulnerable to the occurrence of dense clouds and fog. Moreover, the coastline is hard to detect where it is covered by dense vegetation. Synthetic aperture radar (SAR), as an active system, provides an alternative source for monitoring spatial change in coastlines. Two methods for monitoring the shoreline with SAR data have been published: first, the backscatter difference is calculated between two images acquired at different times; second, the change in coastline is detected by computing the coherence of two SAR images acquired at different times. A SAR system can operate in all weather conditions, so clouds and fog do not impact its efficiency, and it can penetrate the plant canopy. However, in comparison with optical imagery, interpretation of SAR images is relatively hard in this case because of limitations in the number of bands and polarization modes, and also due to effects caused by speckle noise, slant-range imaging and shadows. The primary aim of this study is a
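
The band-ratio and histogram-threshold ideas can be sketched together: compute a normalized difference of green and near-infrared reflectance (the McFeeters NDWI form, one common band-ratio choice) and threshold it with Otsu's between-class-variance method. The synthetic tiles below are assumptions standing in for real imagery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic green and NIR reflectance tiles: left half "water" (low NIR),
# right half "land" (high NIR). Values are illustrative, not calibrated.
h, w = 64, 64
water = np.zeros((h, w), bool)
water[:, : w // 2] = True
green = np.where(water, 0.10, 0.12) + 0.01 * rng.random((h, w))
nir   = np.where(water, 0.02, 0.30) + 0.01 * rng.random((h, w))

# Band ratio: Normalized Difference Water Index (positive over water)
ndwi = (green - nir) / (green + nir)

def otsu_threshold(img, bins=128):
    """Histogram threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    mids = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    w0 = np.cumsum(p)                 # class-0 probability mass
    mu = np.cumsum(p * mids)          # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return mids[np.nanargmax(sigma_b)]

t = otsu_threshold(ndwi)
water_mask = ndwi > t
accuracy = (water_mask == water).mean()
```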

  7. A Study on the Optimal Positions of ECG Electrodes in a Garment for the Design of ECG-Monitoring Clothing for Male.

    Science.gov (United States)

    Cho, Hakyung; Lee, Joo Hyeon

    2015-09-01

    Smart clothing is a type of wearable device used for ubiquitous health monitoring. It provides comfort and efficiency in vital sign measurements and has been studied and developed in various types of monitoring platforms such as T-shirts and sports bras. However, despite these previous approaches, smart clothing for electrocardiography (ECG) monitoring has encountered a serious shortcoming related to motion artifacts caused by wearer movement. In effect, motion artifacts are one of the major problems in the practical implementation of most wearable health-monitoring devices. In ECG measurements collected by a garment, motion artifacts are usually caused by improper location of the electrode, leading to lack of contact between the electrode and skin during body motion. The aim of this study was to suggest a design for ECG-monitoring clothing that contributes to the reduction of motion artifacts. Based on clothing science theory, it was assumed in this study that the stability of the electrode in a dynamic state differs depending on the electrode location in an ECG-monitoring garment. Founded on this assumption, the effects of 56 electrode positions were determined by sectioning the surface of the garment into grids with 6 cm intervals in the front and back of the bodice. In order to determine the optimal locations of the ECG electrodes from the 56 positions, ECG measurements were collected from 10 participants at every electrode position in the garment while the wearer was in motion. The electrode locations showing both an ECG measurement rate higher than 80.0 % and a large amplitude during motion were selected as the optimal electrode locations. The results of this analysis show four electrode locations with consistently higher ECG measurement rates and larger amplitudes amongst the 56 locations. These four locations were determined to be least affected by wearer movement in this research. Based on this result, a design of the garment-formed ECG monitoring platform

  8. Online total organic carbon (TOC) monitoring for water and wastewater treatment plants processes and operations optimization

    Science.gov (United States)

    Assmann, Céline; Scott, Amanda; Biller, Dondra

    2017-08-01

    Organic measurements, such as biological oxygen demand (BOD) and chemical oxygen demand (COD), were developed decades ago in order to measure organics in water. Today, these time-consuming measurements are still used as parameters to check water treatment quality; however, the time required to generate a result, ranging from hours to days, does not allow COD or BOD to be useful process control parameters - see (1) Standard Method 5210 B; 5-day BOD Test, 1997, and (2) ASTM D1252; COD Test, 2012. Online organic carbon monitoring allows for effective process control because results are generated every few minutes. Though it does not replace the BOD or COD measurements still required for compliance reporting, it allows for smart, data-driven and rapid decision-making to improve process control and optimization or to meet compliance requirements. Thanks to smart interpretation of the generated data and the capability to take real-time actions, municipal drinking water and wastewater treatment facility operators can positively impact their OPEX (operational expenditure) efficiencies and their ability to meet regulatory requirements. This paper describes how three municipal wastewater and drinking water plants gained process insights and identified optimization opportunities thanks to the implementation of online total organic carbon (TOC) monitoring.

  9. Objective Functions for Information-Content-Based Optimal Monitoring Network Design

    Science.gov (United States)

    Weijs, S. V.; Huwald, H.; Parlange, M. B.

    2013-12-01

    Information theory has the potential to provide a common language for the quantification of uncertainty and its reduction by choosing an optimally informative monitoring network layout. Numerous different objectives based on information measures have been proposed in the recent literature, often focusing simultaneously on maximum information and minimum dependence between the chosen locations for data collection. We discuss these objective functions and conclude that a single-objective optimization of joint entropy suffices to maximize the collection of information. Minimum dependence is a secondary objective that automatically follows from the first, but has no intrinsic justification. Furthermore, it is demonstrated how the curse of dimensionality complicates the determination of information content for time series. In many cases found in the monitoring network literature, discrete multivariate joint distributions are estimated from relatively little data, leading to the occurrence of spurious dependencies in the data, which change interpretations of previously published results. The aforementioned numerical challenges stem from inherent difficulties and subjectivity in determining information content. From information-theoretical logic it is clear that the information content of data depends on the state of knowledge prior to obtaining them. Making fewer assumptions in formulating this state of knowledge leads to higher data requirements. We further clarify the role of prior information in information content by drawing an analogy with data compression.
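
Greedy single-objective maximization of joint entropy, as the abstract argues for, can be sketched on discretized series: at each step add the station that most increases the empirical joint entropy of the selected set. The synthetic gauge network below (four mutually correlated stations, four carrying independent signals) is an assumption for illustration, and the coarse quantile binning hints at the dimensionality problem the abstract raises.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic network: 8 candidate gauges, 500 time steps; stations 0-3 share
# a common signal, stations 4-7 are independent (assumed setup).
T = 500
base = rng.standard_normal(T)
series = np.array([base + 0.2 * rng.standard_normal(T) for _ in range(4)] +
                  [rng.standard_normal(T) for _ in range(4)])
# discretize into 4 bins at the global quartiles
data = np.digitize(series, np.quantile(series, [0.25, 0.5, 0.75]))

def joint_entropy(rows):
    """Shannon entropy (bits) of the empirical joint distribution."""
    _, counts = np.unique(data[rows].T, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def greedy_select(k):
    chosen = []
    for _ in range(k):
        best = max((s for s in range(len(data)) if s not in chosen),
                   key=lambda s: joint_entropy(chosen + [s]))
        chosen.append(best)
    return chosen

network = greedy_select(3)
```

Because correlated stations add little joint entropy once one of them is in the set, the greedy selection naturally avoids redundant gauges without any explicit minimum-dependence term.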

  10. The RADMED monitoring program: towards an ecosystem approach

    Directory of Open Access Journals (Sweden)

    J. L. López-Jurado

    2015-05-01

    In the Western Mediterranean, the IEO-RADMED monitoring program is already conducting many of the evaluations required under the Marine Strategy Framework Directive (MSFD) along the Spanish Mediterranean coast. The aspects of the ecosystem that are regularly sampled under this monitoring program are the physical environment and the chemical and biological variables of the water column, together with the planktonic communities, their biomass and structure. Moreover, determinations of some anthropogenic stressors on the marine environment, such as contaminants and microplastics, are under development. Data are managed and stored at the IEO Data Center, which works under the SeaDataNet infrastructure, and are also stored in the IBAMar database. In combination with remote sensing data, they are used to address open questions on the ecosystem in the Western Mediterranean Sea.

  11. The RADMED monitoring program: towards an ecosystem approach

    Science.gov (United States)

    López-Jurado, J. L.; Balbín, R.; Amengual, B.; Aparicio-González, A.; Fernández de Puelles, M. L.; García-Martínez, M. C.; Gazá, M.; Jansá, J.; Morillas-Kieffer, A.; Moyá, F.; Santiago, R.; Serra, M.; Vargas-Yáñez, M.; Vicente, L.

    2015-05-01

    In the Western Mediterranean, the IEO-RADMED monitoring program is already conducting many of the evaluations required under the Marine Strategy Framework Directive (MSFD) along the Spanish Mediterranean coast. The aspects of the ecosystem that are regularly sampled under this monitoring program are the physical environment and the chemical and biological variables of the water column, together with the planktonic communities, their biomass and structure. Moreover, determinations of some anthropogenic stressors on the marine environment, such as contaminants and microplastics, are under development. Data are managed and stored at the IEO Data Center, which works under the SeaDataNet infrastructure, and are also stored in the IBAMar database. In combination with remote sensing data, they are used to address open questions on the ecosystem in the Western Mediterranean Sea.

  12. STRUCTURAL HEALTH MONITORING SYSTEM – AN EMBEDDED SENSOR APPROACH

    Directory of Open Access Journals (Sweden)

    Dhivya. A

    2013-02-01

    Structural health monitoring is implemented to improve the maintenance of structures such as buildings and bridges. It encompasses damage detection and identification, and the protection of structures from natural disasters such as earthquakes and heavy rain. This paper proposes three modules. The first module recognizes abnormal vibration of a building due to an earthquake and raises an alert; it uses two types of sensor to detect the abnormal vibration induced by an earthquake. The second module addresses the prediction of damage to buildings after an earthquake or heavy rain; damage detection includes identification of cracks and of the moisture content of wall bricks in real buildings. The third module presents a smart auditorium used to reduce power consumption: depending on the number of audience members inside the auditorium, it can control electrical appliances such as fans, lights and speakers. In any real-time structural health monitoring system, the main issue is time synchronization, and this paper also proposes a way to overcome this general issue. ZigBee-based reliable communication is used between the client node and the server node, ZigBee being chosen for secured wireless communication between the nodes.

  13. Integrated approach to monitor water dynamics with drones

    Science.gov (United States)

    Raymaekers, Dries; De Keukelaere, Liesbeth; Knaeps, Els; Strackx, Gert; Decrop, Boudewijn; Bollen, Mark

    2017-04-01

    Remote sensing has been used for more than 20 years to estimate water quality in the open ocean and to study the evolution of vegetation on land. More recently, big improvements have been made to extend these practices to coastal and inland waters, opening new monitoring opportunities, e.g. monitoring the impact of dredging activities on the aquatic environment. While satellite sensors can provide complete coverage and historical information of the study area, they are limited in their temporal revisit time and spatial resolution. Therefore, deployment of drones can create added value and, in combination with satellite information, increase insight into the dynamics and actors of coastal and aquatic systems. Drones have the advantage of monitoring at high spatial detail (cm scale) and with high frequency, and they are flexible. One of the important water quality parameters is the suspended sediment concentration. However, retrieving sediment concentrations from unmanned systems is a challenging task. The sediment dynamics in the port of Breskens, the Netherlands, were investigated by combining information retrieved from different data sources: satellite, drone and in-situ data were collected, analysed and inserted into sediment models. As such, historical (satellite), near-real-time (drone) and predictive (sediment models) information, integrated in a spatial data infrastructure, allows data analysis to be performed and can support decision makers.

  14. Well Control Optimization using Derivative-Free Algorithms and a Multiscale Approach

    CERN Document Server

    Wang, Xiang; Feng, Qihong

    2015-01-01

    In this paper, we use numerical optimization algorithms and a multiscale approach in order to find an optimal well management strategy over the life of the reservoir. The large number of well rates for each control step makes the optimization problem more difficult and at high risk of achieving a suboptimal solution. Moreover, the optimal number of adjustments is not known a priori. Adjusting well controls too frequently will increase unnecessary well management and operation cost, and an excessively low number of control adjustments may not be enough to obtain a good yield. We investigate three derivative-free optimization algorithms, chosen for their robust and parallel nature, to determine optimal well control strategies. The algorithms chosen include generalized pattern search (GPS), particle swarm optimization (PSO) and covariance matrix adaptation evolution strategy (CMA-ES). These three algorithms encompass the breadth of available black-box optimization strategies: deterministic local search, stochas...
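
Of the three algorithms named, generalized pattern search is the simplest to sketch: poll the 2n coordinate directions around the incumbent, accept the first improving point, and halve the step size on an unsuccessful poll. The shifted quadratic below is an assumed test function standing in for the reservoir-simulator objective.

```python
# Minimal generalized pattern search (GPS) sketch for derivative-free
# minimization; step schedule and stopping tolerance are assumptions.
def gps(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    x, fx, n = list(x0), f(x0), len(x0)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        # poll the 2n coordinate directions (a positive spanning set)
        for i in range(n):
            for s in (+step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5          # refine the mesh on an unsuccessful poll
        it += 1
    return x, fx

# Usage: minimize a shifted quadratic with minimum at (3, -1)
sol, val = gps(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```

Each poll needs only function values, which is why GPS parallelizes well and tolerates noisy simulator output.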

  15. Multi-proxy monitoring approaches at Kangaroo Island, South Australia

    Science.gov (United States)

    Dixon, Bronwyn; Drysdale, Russell; Tyler, Jonathan; Goodwin, Ian

    2017-04-01

    Interpretations of geochemical signals preserved in young speleothems are greatly enhanced by comprehensive cave-site monitoring. In the light of this, a cave monitoring project is being conducted concurrently with the development of a new palaeoclimate record from Kelly Hill Cave (Kangaroo Island, South Australia). The site is strategically located because it is situated between longer-lived monitoring sites in southeastern and southwestern Australia, as well as being climatically 'upstream' from major population and agricultural centres. This study aims to understand possible controls on speleothem δ18O in Kelly Hill Cave through i. identification of local and regional δ18O drivers in precipitation; and ii. preservation and modification of climatic signals within the epikarst as indicated by dripwater δ18O. These aims are achieved through analysis of a five-year daily rainfall (amount and δ18O) dataset in conjunction with in-cave drip monitoring. Drivers of precipitation δ18O were identified through linear regression between δ18O values and local meteorological variables, air-parcel back trajectories, and synoptic-typing. Synoptically driven moisture sources were identified through the use of NCEP/NCAR climate reanalysis sea-level pressure, precipitable moisture, and outgoing longwave radiation data in order to trace moisture sources and travel mechanisms from surrounding ocean basins. Local controls on δ18O at Kelly Hill Cave are consistent with published interpretations of southern Australia sites, with oxygen isotopes primarily controlled by rainfall amount on both daily and monthly time scales. Back-trajectory analysis also supports previous observations that the Southern Ocean is the major source for moisture-bearing cold-front systems. However, synoptic typing of daily rainfall δ18O and amount extremes reveals a previously unreported tropical connection and moisture source. This tropical connection appears to be strongest in summer and autumn, but

  16. Optimal Resource Allocation for Network Protection: A Geometric Programming Approach

    CERN Document Server

    Preciado, Victor M; Enyioha, Chinwendu; Jadbabaie, Ali; Pappas, George

    2013-01-01

    We study the problem of containing spreading processes in arbitrary directed networks by distributing protection resources throughout the nodes of the network. We consider that two types of protection resources are available: (i) preventive resources able to defend nodes against the spreading (such as vaccines in a viral infection process), and (ii) corrective resources able to neutralize the spreading after it has reached a node (such as antidotes). We assume that both preventive and corrective resources have an associated cost and study the problem of finding the cost-optimal distribution of resources throughout the nodes of the network. We analyze these questions in the context of viral spreading processes in directed networks. We study the following two problems: (i) given a fixed budget, find the optimal allocation of preventive and corrective resources in the network to achieve the highest level of containment, and (ii) when a budget is not specified, find the minimum budget required to control the spreading...

  17. A free boundary approach to shape optimization problems.

    Science.gov (United States)

    Bucur, D; Velichkov, B

    2015-09-13

    The analysis of shape optimization problems involving the spectrum of the Laplace operator, such as isoperimetric inequalities, has seen in recent years a series of interesting developments, essentially as a consequence of the infusion of free boundary techniques. The main focus of this paper is to show how the analysis of a general shape optimization problem of spectral type can be reduced to the analysis of particular free boundary problems. In this survey article, we give an overview of some very recent technical tools, the so-called shape sub- and supersolutions, and show how to use them for the minimization of spectral functionals involving the eigenvalues of the Dirichlet Laplacian, under a volume constraint.

  18. Handbook of Optimization From Classical to Modern Approach

    CERN Document Server

    Snášel, Václav; Abraham, Ajith

    2013-01-01

    Optimization problems were and still are the focus of mathematics from antiquity to the present. Since the beginning of our civilization, the human race has had to confront numerous technological challenges, such as finding the optimal solution of various problems including control technologies, power source construction, applications in economy, mechanical engineering and energy distribution, amongst others. These examples encompass both ancient and modern technologies, like the first electrical energy distribution network in the USA. Some key principles, primarily of a geometric nature, date back to the ancient world, while others were formulated in the early modern era by Johannes Kepler (problem of the wine barrels), Johann Bernoulli (brachistochrone problem), Leonhard Euler (calculus of variations) and Lagrange (multiplier principle). At the beginning of the modern era, the works of L.V. Kantorovich and G.B. Dantzig (so-called linear programming) can be considered among others. This book disc...

  19. A genetic algorithm approach in interface and surface structure optimization

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jian [Iowa State Univ., Ames, IA (United States)

    2010-01-01

    The thesis is divided into two parts. In the first part, a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen for study: one is Si[001] symmetric tilted grain boundaries and the other is the Ag/Au-induced Si(111) surface. It is found that the Genetic Algorithm is very efficient in finding lowest-energy structures in both cases. Not only can existing structures from the experiments be reproduced, but many new structures can also be predicted using the Genetic Algorithm. Thus it is shown that the Genetic Algorithm is an extremely powerful tool for materials structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seemed astounding and confusing, yet the theoretical models in the paper revealed the physical insight behind the phenomena and reproduced the experimental results well.

  20. Designing Networks: A Mixed-Integer Linear Optimization Approach

    CERN Document Server

    Gounaris, Chrysanthos E; Kevrekidis, Ioannis G; Floudas, Christodoulos A

    2015-01-01

    Designing networks with specified collective properties is useful in a variety of application areas, enabling the study of how given properties affect the behavior of network models, the downscaling of empirical networks to workable sizes, and the analysis of network evolution. Despite the importance of the task, there currently exists a gap in our ability to systematically generate networks that adhere to theoretical guarantees for the given property specifications. In this paper, we propose the use of Mixed-Integer Linear Optimization modeling and solution methodologies to address this Network Generation Problem. We present a number of useful modeling techniques and apply them to mathematically express and constrain network properties in the context of an optimization formulation. We then develop complete formulations for the generation of networks that attain specified levels of connectivity, spread, assortativity and robustness, and we illustrate these via a number of computational case studies.

  1. A thermodynamic approach to the affinity optimization of drug candidates.

    Science.gov (United States)

    Freire, Ernesto

    2009-11-01

    High throughput screening and other techniques commonly used to identify lead candidates for drug development usually yield compounds with binding affinities to their intended targets in the mid-micromolar range. The affinity of these molecules needs to be improved by several orders of magnitude before they become viable drug candidates. Traditionally, this task has been accomplished by establishing structure activity relationships to guide chemical modifications and improve the binding affinity of the compounds. As the binding affinity is a function of two quantities, the binding enthalpy and the binding entropy, it is evident that a more efficient optimization would be accomplished if both quantities were considered and improved simultaneously. Here, an optimization algorithm based upon enthalpic and entropic information generated by Isothermal Titration Calorimetry is presented.
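
The thermodynamic bookkeeping behind the argument is ΔG = ΔH − TΔS and Kd = exp(ΔG/RT). The sketch below shows how improving enthalpy and entropy together moves a hit from the mid-micromolar range to nanomolar affinity; the numeric values are illustrative assumptions, not data from the paper.

```python
import math

R = 1.987e-3    # gas constant, kcal/(mol*K)
T = 298.15      # temperature, K

def dissociation_constant(dH, dS):
    """Kd (M) from binding enthalpy (kcal/mol) and entropy (kcal/(mol*K))."""
    dG = dH - T * dS                  # binding free energy, kcal/mol
    return math.exp(dG / (R * T))     # Kd = exp(dG/RT)

# A mid-micromolar screening hit vs. an optimized analogue in which both
# the enthalpic and entropic terms have been improved (illustrative values):
hit  = dissociation_constant(dH=-4.0,  dS=0.0095)   # dG ~ -6.8 kcal/mol
lead = dissociation_constant(dH=-10.0, dS=0.00765)  # dG ~ -12.3 kcal/mol
```

The roughly four-orders-of-magnitude drop in Kd illustrates why tracking ΔH and ΔS separately (e.g. by ITC) gives more leverage than optimizing the lumped affinity alone.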

  2. A triaxial accelerometer monkey algorithm for optimal sensor placement in structural health monitoring

    Science.gov (United States)

    Jia, Jingqing; Feng, Shuo; Liu, Wei

    2015-06-01

    Optimal sensor placement (OSP) technique is a vital part of the field of structural health monitoring (SHM). Triaxial accelerometers have been widely used in the SHM of large-scale structures in recent years. Triaxial accelerometers must be placed in such a way that all of the important dynamic information is obtained. At the same time, the sensor configuration must be optimal, so that test resources are conserved. The recommended practice is to select proper degrees of freedom (DOFs) based upon several criteria, and the triaxial accelerometers are placed at the nodes corresponding to these DOFs. This results in non-optimal placement of many accelerometers. A 'triaxial accelerometer monkey algorithm' (TAMA) is presented in this paper to solve OSP problems for triaxial accelerometers. The EFI3 measurement theory is modified and incorporated in the objective function to make it more adaptable to the OSP technique for triaxial accelerometers. A method of calculating the threshold value based on probability theory is proposed to improve the healthy rate of monkeys in the troop generation process. Meanwhile, the processes of harmony ladder climb and scanning watch jump are proposed and given in detail. Finally, Xinghai No. 1 Bridge in Dalian is used as a case study to demonstrate the effectiveness of TAMA. The final results obtained by TAMA are compared with those of the original monkey algorithm and EFI3 measurement, which show that TAMA can improve computational efficiency and obtain a better sensor configuration.
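
The effective independence (EFI) idea underlying EFI3 can be sketched in its basic single-axis form: iteratively delete the candidate DOF that contributes least to the linear independence of the target mode shapes, until the sensor budget is met. The random mode-shape matrix below is an assumption standing in for the bridge's finite-element model; the paper's EFI3 variant extends this to grouped triaxial DOFs.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed mode-shape matrix: 30 candidate DOFs x 5 target modes.
Phi = rng.standard_normal((30, 5))

def efi_placement(Phi, n_sensors):
    """Effective Independence: iteratively delete the DOF with the
    smallest contribution to the Fisher information matrix."""
    dofs = list(range(Phi.shape[0]))
    A = Phi.copy()
    while len(dofs) > n_sensors:
        # ED_i = diag of A (A^T A)^-1 A^T, each DOF's contribution in [0, 1]
        ed = np.einsum("ij,ji->i", A, np.linalg.solve(A.T @ A, A.T))
        worst = int(np.argmin(ed))
        dofs.pop(worst)
        A = np.delete(A, worst, axis=0)
    return dofs

sensors = efi_placement(Phi, 8)
```

Deleting the minimum-ED row at each step keeps the retained mode-shape partition as well-conditioned as possible, which is the criterion TAMA embeds in its objective function.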

  3. Margin Requirements and Portfolio Optimization: A Geometric Approach

    OpenAIRE

    Sheng Guo

    2014-01-01

    Using geometric illustrations, we investigate what implications of portfolio optimization in equilibrium can be generated by the simple mean-variance framework, under margin borrowing restrictions. First, we investigate the case of uniform marginability on all risky assets. It is shown that changing from unlimited borrowing to margin borrowing shifts the market portfolio to a riskier combination, accompanied by a higher risk premium and a lower price of risk. With the linear risk-return prefe...

  4. An approach to optimization of low-power Stirling cryocoolers

    Science.gov (United States)

    Sullivan, D. B.; Radebaugh, R.; Daney, D. E.; Zimmerman, J. E.

    1983-01-01

    A method for optimizing the design (shape of the displacer) of low power Stirling cryocoolers relative to the power required to operate the systems is described. A variational calculation which includes static conduction, shuttle and radiation losses, as well as regenerator inefficiency, was completed for coolers operating in the 300 K to 10 K range. While the calculations apply to tapered displacer machines, comparison of the results with stepped displacer cryocoolers indicates reasonable agreement.

  5. A new design approach based on differential evolution algorithm for geometric optimization of magnetorheological brakes

    Science.gov (United States)

    Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung

    2016-12-01

    In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms that are integrated in commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms often possess some noteworthy shortcomings such as the trap of solutions at local extremes, or the limited number of design variables or the difficulty of dealing with discrete design variables. Thus, to overcome these limitations and develop an efficient computation tool for optimal design of the MRBs, an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method with finite element analysis (FEA) is proposed in this paper. The proposed approach is then applied to the optimal design of MRBs with different configurations including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of MRBs are considered as discrete variables in the optimization process. The obtained optimal design results are compared with those of available optimal designs in the literature. The results reveal that the proposed method outperforms some traditional approaches.
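A minimal sketch of the DE/rand/1/bin scheme of the kind the proposed procedure couples with FEA. Here a simple sphere function stands in for the FEA-evaluated brake objective, and the population size, scale factor, and crossover rate are illustrative choices, not the paper's settings.

```python
import random

def differential_evolution(f, bounds, pop_size=20, f_scale=0.7, cr=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin; bounds is a list of (lo, hi) pairs."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct population members, none equal to the target i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)       # forced-crossover component
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < cr or j == jrand:
                    v = pop[a][j] + f_scale * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))   # clamp to bounds
                else:
                    trial.append(pop[i][j])
            trial_cost = f(trial)
            if trial_cost <= cost[i]:        # greedy one-to-one selection
                pop[i], cost[i] = trial, trial_cost
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# toy objective standing in for an FEA-evaluated braking-torque model
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
print(x_best, f_best)
```

Discrete design variables (e.g. a standard number of coil turns) can be handled by rounding trial components to the nearest admissible value before each evaluation.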

  6. A New Interpolation Approach for Linearly Constrained Convex Optimization

    KAUST Repository

    Espinoza, Francisco

    2012-08-01

    In this thesis we propose a new class of Linearly Constrained Convex Optimization methods based on the use of a generalization of Shepard's interpolation formula. We prove the properties of the surface, such as the interpolation property at the boundary of the feasible region and the convergence of the gradient to the null space of the constraints at the boundary. We explore several descent techniques such as steepest descent, two quasi-Newton methods and Newton's method. Moreover, we implement in the Matlab language several versions of the method, particularly for the case of Quadratic Programming with bounded variables. Finally, we carry out performance tests against Matlab Optimization Toolbox methods for convex optimization and implementations of the standard log-barrier and active-set methods. We conclude that the steepest descent technique seems to be the best choice so far for our method and that it is competitive with other standard methods both in performance and empirical growth order.

  7. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent, but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
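The blow-up of the out-of-sample error as r approaches 1 is easy to see numerically. The following sketch is an illustration, not the replica calculation: it assumes iid returns with identity true covariance, so the true minimum-variance portfolio is equal-weight with variance 1/N, and shows that the sample-optimized portfolio looks ever better in sample and ever worse out of sample as r = N/T grows.

```python
import numpy as np

def min_var_portfolio_variances(n_assets, t_obs, trials=200, seed=0):
    """In-sample vs true variance of the sample minimum-variance portfolio.

    True covariance is the identity, so the true optimum is w = 1/N with
    variance 1/N. Returns (mean in-sample, mean out-of-sample) variance.
    """
    rng = np.random.default_rng(seed)
    ones = np.ones(n_assets)
    ins, outs = [], []
    for _ in range(trials):
        x = rng.standard_normal((t_obs, n_assets))
        s = x.T @ x / t_obs                # sample covariance (zero-mean returns)
        w = np.linalg.solve(s, ones)
        w /= ones @ w                      # budget constraint: weights sum to 1
        ins.append(w @ s @ w)              # optimistic in-sample estimate
        outs.append(w @ w)                 # true variance, since Sigma = I
    return float(np.mean(ins)), float(np.mean(outs))

for r in (0.2, 0.5, 0.8):                  # r = N/T
    n = 20
    ins, out = min_var_portfolio_variances(n, round(n / r))
    print(f"r={r}: in-sample={ins:.4f}  out-of-sample={out:.4f}")
```

As r grows toward 1, the out-of-sample column rises roughly like (1/N)/(1 - r) while the in-sample column shrinks, in line with the abstract.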

  8. Coordinated Target Tracking via a Hybrid Optimization Approach

    Science.gov (United States)

    Wang, Yin; Cao, Yan

    2017-01-01

    Recent advances in computer science and electronics have greatly expanded the capabilities of unmanned aerial vehicles (UAV) in both defense and civil applications, such as moving ground object tracking. Due to the uncertainties of the application environments and objects’ motion, it is difficult to maintain the tracked object always within the sensor coverage area by using a single UAV. Hence, it is necessary to deploy a group of UAVs to improve the robustness of the tracking. This paper investigates the problem of tracking ground moving objects with a group of UAVs using gimbaled sensors under flight dynamic and collision-free constraints. The optimal cooperative tracking path planning problem is solved using an evolutionary optimization technique based on the framework of chemical reaction optimization (CRO). The efficiency of the proposed method was demonstrated through a series of comparative simulations. The results show that the cooperative tracking paths determined by the newly developed method allow for longer sensor coverage time under flight dynamic restrictions and safety conditions. PMID:28264425

  9. Coordinated Target Tracking via a Hybrid Optimization Approach

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2017-02-01

    Full Text Available Recent advances in computer science and electronics have greatly expanded the capabilities of unmanned aerial vehicles (UAVs) in both defense and civil applications, such as moving ground object tracking. Due to the uncertainties of the application environments and objects’ motion, it is difficult to maintain the tracked object always within the sensor coverage area by using a single UAV. Hence, it is necessary to deploy a group of UAVs to improve the robustness of the tracking. This paper investigates the problem of tracking ground moving objects with a group of UAVs using gimbaled sensors under flight dynamic and collision-free constraints. The optimal cooperative tracking path planning problem is solved using an evolutionary optimization technique based on the framework of chemical reaction optimization (CRO). The efficiency of the proposed method was demonstrated through a series of comparative simulations. The results show that the cooperative tracking paths determined by the newly developed method allow for longer sensor coverage time under flight dynamic restrictions and safety conditions.

  10. Asynchronous Multimodal Process Approach to Cross-Docking Hub Optimization

    National Research Council Canada - National Science Library

    Pawlewski, Pawel

    2015-01-01

    ... in a supply chain. It shows modern approaches to the model supply chain: multimodal, based on the idea of Physical Internet and based on Cyber-Physical Internet. The main goal of this paper is to...

  11. Convex optimization approach to the fusion of identity information

    Science.gov (United States)

    Li, Lingjie; Luo, Zhi-Quan; Wong, Kon M.; Bosse, Eloi

    1999-03-01

    We consider the problem of identity fusion for a multi-sensor target tracking system whereby sensors generate reports on the target identities. Since the sensor reports are typically fuzzy, 'incomplete' and inconsistent, we base the fusion approach on the minimization of inconsistencies between the sensor reports, using a convex Quadratic Programming (QP) and linear programming (LP) formulation. In contrast to the Dempster-Shafer evidential reasoning approach, which suffers from exponentially growing complexity, our approach is highly efficient. Moreover, our approach is capable of fusing 'ratio type' sensor reports, thus it is more general than the evidential reasoning theory. When the sensor reports are consistent, the solution generated by the new fusion method can be shown to converge to the true probability distribution. Simulation work shows that our method generates reasonable fusion results, and when only 'subset type' sensor reports are presented, it produces fusion results similar to those obtained via the evidential reasoning theory.

  12. Using Data-Mining Approaches for Wind Turbine Power Curve Monitoring: A Comparative Study

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar; Achiche, Sofiane

    2013-01-01

    Four data-mining approaches for wind turbine power curve monitoring are compared. Power curve monitoring can be applied to evaluate the turbine power output and detect deviations, causing financial loss. In this research, cluster center fuzzy logic, neural network, and k-nearest neighbor models...

  13. RF cavity design exploiting a new derivative-free trust region optimization approach

    Directory of Open Access Journals (Sweden)

    Abdel-Karim S.O. Hassan

    2015-11-01

    Full Text Available In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized instead of the underlying objective function over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using a few data points of order O(n), where n is the number of design variables. The proposed approach adopts weighted least squares fitting for updating the surrogate model instead of interpolation, which is commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparison analysis against a recent optimization technique.

  14. Optimal design and verification of temporal and spatial filters using second-order cone programming approach

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Temporal filters and spatial filters are widely used in many areas of signal processing. A number of optimal design criteria for these problems are available in the literature, and various computational techniques have been presented to optimize the chosen criteria, but these methods have many drawbacks. In this paper, we introduce a unified framework for the optimal design of temporal and spatial filters. Most of the optimal design problems of FIR filters and beamformers are included in the framework. It is shown that all the design problems can be reformulated in convex form as second-order cone programs (SOCP) and solved efficiently via the well-established interior point methods. The main advantage of our SOCP approach as compared with earlier approaches is that it includes most of the existing methods as special cases, which leads to more flexible designs. Furthermore, the SOCP approach can optimize multiple required performance measures simultaneously, which earlier approaches cannot. The SOCP approach is also developed to optimally design temporal and spatial two-dimensional filters and spatial matrix filters. Numerical results demonstrate the effectiveness of the proposed approach.

  15. A Biosystems Approach to Industrial Patient Monitoring and Diagnostic Devices

    CERN Document Server

    Baura, Gail

    2008-01-01

    A medical device is an apparatus that uses engineering and scientific principles to interface to physiology and diagnose or treat a disease. In this Lecture, we specifically consider those medical devices that are computer based, and are therefore referred to as medical instruments. Further, the medical instruments we discuss are those that incorporate system theory into their designs. We divide these types of instruments into those that provide continuous observation and those that provide a single snapshot of health information. These instruments are termed patient monitoring devices and diagnostic devices.

  16. Establishing an air pollution monitoring network for intra-urban population exposure assessment : a location-allocation approach

    Energy Technology Data Exchange (ETDEWEB)

    Kanaroglou, P.S. [McMaster Univ., Hamilton, ON (Canada). School of Geography and Geology; Jerrett, M.; Beckerman, B.; Arain, M.A. [McMaster Univ., Hamilton, ON (Canada). School of Geography and Geology]|[McMaster Univ., Hamilton, ON (Canada). McMaster Inst. of Environment and Health; Morrison, J. [Carleton Univ., Ottawa, ON (Canada). School of Computer Science; Gilbert, N.L. [Health Canada, Ottawa, ON (Canada). Air Health Effects Div; Brook, J.R. [Meteorological Service of Canada, Toronto, ON (Canada)

    2004-10-01

    A study was conducted to assess the relation between traffic-generated air pollution and health reactions ranging from childhood asthma to mortality from lung cancer. In particular, it developed a formal method of optimally locating a dense network of air pollution monitoring stations in order to derive an exposure assessment model based on the data obtained from the monitoring stations and related land use, population and biophysical information. The method for determining the locations of 100 nitrogen dioxide monitors in Toronto, Ontario focused on land use, transportation infrastructure and the distribution of at-risk populations. The exposure assessment produced reasonable estimates at the intra-urban scale. This method for locating air pollution monitors effectively maximizes sampling coverage in relation to important socio-demographic characteristics and likely pollution variability. The location-allocation approach integrates many variables into the demand surface to reconfigure a monitoring network and is especially useful for measuring traffic pollutants with fine-scale spatial variability. The method also shows great promise for improving the assessment of exposure to ambient air pollution in epidemiologic studies. 19 refs., 3 tabs., 4 figs.

  17. Lyapunov-based Low-thrust Optimal Orbit Transfer: An approach in Cartesian coordinates

    CERN Document Server

    Zhang, Hantian; Cao, Qingjie

    2014-01-01

    This paper presents a simple approach to low-thrust optimal-fuel and optimal-time transfer problems between two elliptic orbits using the Cartesian coordinate system. In this case, an orbit is described by its specific angular momentum and Laplace vectors with a free injection point. Trajectory optimization with the pseudospectral method and nonlinear programming is supported by the initial guess generated from the Chang-Chichka-Marsden Lyapunov-based transfer controller. This approach successfully solves several low-thrust optimization problems. Numerical results show that the Lyapunov-based initial guess overcomes the difficulty in optimization caused by the strong oscillation of variables in the Cartesian coordinate system. Furthermore, a comparison of the results shows that obtaining the optimal transfer solution through the polynomial approximation by utilizing Cartesian coordinates is easier than using orbital elements, which normally produce strongly nonlinear equations of motion. In this paper, the Eart...

  18. A Robust Optimization Approach Considering the Robustness of Design Objectives and Constraints

    Institute of Scientific and Technical Information of China (English)

    LIU Chun-tao; LIN Zhi-hang; ZHOU Chunojing

    2005-01-01

    The problem of robust design is treated as a multi-objective optimization issue in which the performance mean and variation are optimized and minimized respectively, while maintaining the feasibility of design constraints under uncertainty. To effectively address this issue in robust design, this paper presents a novel robust optimization approach which integrates multi-objective optimization concepts with Taguchi's crossed arrays techniques. In this approach, Pareto-optimal robust design solution sets are obtained with the aid of design of experiment set-ups, which utilize the results of Analysis of Variance to quantify relative dominance and significance of design variables. A beam design problem is used to illustrate the effectiveness of the proposed approach.

  19. A multi-objective approach in the optimization of optical systems taking into account tolerancing

    Science.gov (United States)

    de Albuquerque, Bráulio F. C.; Liao, Lin-Yao; Montes, Amauri Silva; de Sousa, Fabiano Luis; Sasián, José

    2011-10-01

    A multi-objective approach for lens design optimization was verified. The optimization problem was approached by addressing image quality and system tolerancing simultaneously, but separately. In contrast to other previously published methods, the error functions were not combined into a single merit function. As a result, the method returns a set of nondominated solutions that generates a Pareto front. Our method resulted in alternative and useful insights about the trade-off solutions for a lens design problem. This multi-objective optimization can conveniently be implemented with evolutionary methods of optimization that have established success in lens design. We provide an example of the insights and usefulness of our approach in the design of a telephoto lens system using NSGA-II, a popular multi-objective evolutionary optimization algorithm.
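The nondominated filtering that produces a Pareto front can be sketched in a few lines. The design points below are hypothetical (image-quality error, tolerance sensitivity) pairs, both to be minimized; they are not taken from the paper.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Nondominated subset (all objectives minimized), input order kept."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical lens designs: (image-quality error, tolerance sensitivity)
designs = [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 4.0)]
print(pareto_front(designs))   # → [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0)]
```

Algorithms such as NSGA-II apply this same dominance test repeatedly (with a faster sorting scheme) to rank an evolving population into successive fronts.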

  20. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    Science.gov (United States)

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios in which each approach is suitable are identified. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352

  1. Intraoperative Neurophysiological Monitoring for Endoscopic Endonasal Approaches to the Skull Base: A Technical Guide

    OpenAIRE

    Harminder Singh; Vogel, Richard W.; Lober, Robert M.; Doan, Adam T.; Matsumoto, Craig I.; Kenning, Tyler J.; Evans, James J.

    2016-01-01

    Intraoperative neurophysiological monitoring during endoscopic, endonasal approaches to the skull base is both feasible and safe. Numerous reports have recently emerged from the literature evaluating the efficacy of different neuromonitoring tests during endonasal procedures, making them relatively well-studied. The authors report on a comprehensive, multimodality approach to monitoring the functional integrity of at risk nervous system structures, including the cerebral cortex, brainstem, cr...

  2. Monitoring Wildlife Interactions with Their Environment: An Interdisciplinary Approach

    Energy Technology Data Exchange (ETDEWEB)

    Charles-Smith, Lauren E.; Domínguez, Ignacio X.; Fornaro, Robert J.; DePerno, Christopher S.; Kennedy-Stoskopf, Suzanne

    2015-12-01

    In a rapidly changing world, wildlife ecologists strive to correctly model and predict complex relationships between animals and their environment, which facilitates management decisions impacting public policy to conserve and protect delicate ecosystems. Recent advances in monitoring systems span scientific domains, including animal and weather monitoring devices and landscape classification mapping techniques. The current challenge is how to combine and use detailed output from various sources to address questions spanning multiple disciplines. The WolfScout wildlife and weather tracking system is a software tool capable of filling this niche. WolfScout automates integration of the latest technological advances in wildlife GPS collars, weather stations, drought conditions, severe weather reports, and animal demographic information. The WolfScout database stores a variety of classified landscape maps including natural and manmade features. Additionally, WolfScout’s spatial database management system allows users to calculate distances between animals’ locations and landscape characteristics, which are linked to the best approximation of environmental conditions at the animal’s location during the interaction. Through a secure website, data are exported in formats compatible with multiple software programs including R and ArcGIS. The WolfScout design promotes interoperability in data, between researchers, and software applications while standardizing analyses of animal interactions with their environment.

  3. MonALISA: An agent based, dynamic service system to monitor, control and optimize distributed systems

    Science.gov (United States)

    Legrand, I.; Newman, H.; Voicu, R.; Cirstoiu, C.; Grigoras, C.; Dobre, C.; Muraru, A.; Costan, A.; Dediu, M.; Stratan, C.

    2009-12-01

    The MonALISA (Monitoring Agents in a Large Integrated Services Architecture) framework provides a set of distributed services for monitoring, control, management and global optimization for large scale distributed systems. It is based on an ensemble of autonomous, multi-threaded, agent-based subsystems which are registered as dynamic services. They can be automatically discovered and used by other services or clients. The distributed agents can collaborate and cooperate in performing a wide range of management, control and global optimization tasks using real time monitoring information. Program summary. Program title: MonALISA Catalogue identifier: AEEZ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEZ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Caltech License - free for all non-commercial activities No. of lines in distributed program, including test data, etc.: 147 802 No. of bytes in distributed program, including test data, etc.: 25 913 689 Distribution format: tar.gz Programming language: Java, additional APIs available in Java, C, C++, Perl and Python Computer: Computing Clusters, Network Devices, Storage Systems, Large scale data intensive applications Operating system: The MonALISA service is mainly used in Linux; the MonALISA client runs on all major platforms (Windows, Linux, Solaris, MacOS). Has the code been vectorized or parallelized?: It is a multithreaded application. It will efficiently use all the available processors. RAM: for the MonALISA service the minimum required memory is 64 MB; if the JVM is started allocating more memory this will be used for internal caching. The MonALISA client requires typically 256-512 MB of memory. Classification: 6.5 External routines: Requires Java: JRE or JDK to run. These external packages are used (they are included in the distribution): JINI, JFreeChart, PostgreSQL (optional). Nature of problem: To monitor and control

  4. The timing of terrorist attacks: An optimal stopping approach

    Directory of Open Access Journals (Sweden)

    Thomas Jensen

    2016-02-01

    Full Text Available I use a simple optimal stopping model to derive policy relevant insights on the timing of one-shot attacks by small autonomous terrorist units or “lone wolf” individuals. A main insight is that an increase in proactive counterterrorism measures can lead to a short term increase in the number of attempted terrorist attacks because it makes it more risky for existing terrorist units to pursue further development of capabilities. This is consistent with the events in London in 2005 where a terrorist attack on 7 July was followed by a similar but unsuccessful attack two weeks later.
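The comparative static behind this insight can be reproduced in a stylized finite-horizon stopping model solved by backward induction. The payoff structure and all parameter values below are illustrative assumptions, not the paper's specification: stopping yields the current capability (read as a success probability), while waiting one period grows capability but risks detection, which ends the game with payoff zero.

```python
def first_attack_period(p_detect, horizon=10, c0=0.2, growth=0.1):
    """First period at which stopping is optimal along the undetected path.

    Stop now (payoff = current capability) or wait a period to grow
    capability while risking detection with probability p_detect.
    """
    caps = [min(c0 + t * growth, 1.0) for t in range(horizon + 1)]
    value = [0.0] * (horizon + 1)
    value[horizon] = caps[horizon]          # forced to stop at the horizon
    for t in range(horizon - 1, -1, -1):    # backward induction
        value[t] = max(caps[t], (1 - p_detect) * value[t + 1])
    for t in range(horizon + 1):            # earliest period where stopping wins
        if t == horizon or caps[t] >= (1 - p_detect) * value[t + 1]:
            return t

print(first_attack_period(0.05), first_attack_period(0.30))
```

With these toy numbers, raising the per-period detection probability from 5% to 30% moves the optimal attack from period 8 to period 1: stronger proactive counterterrorism makes waiting to build capability less attractive, consistent with the abstract's main insight.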

  5. Optimization Approach for Detecting the Critical Data on a Database

    CERN Document Server

    Alluvada, Prashanth

    2008-01-01

    Through the purposeful introduction of malicious transactions into randomly selected nodes of a (database) graph, soiled and clean segments are identified. Soiled and clean measures corresponding to those segments are then computed. These measures are used to recast the problem of detecting critical database elements as an optimization problem over the graph. This method is universally applicable over a large class of graphs (including directed, weighted, disconnected, cyclic) that occur in several contexts of databases. A generalization argument is presented which extends the critical data problem to abstract settings.

  6. Targeted Proteomics Approaches To Monitor Microbial Activity In Basalt Aquifer

    Science.gov (United States)

    Paszczynski, A. J.; Paidisetti, R.

    2007-12-01

    Microorganisms play a major role in biogeochemical cycles of the Earth. Information regarding microbial community composition can be very useful for environmental monitoring since the short generation times of microorganisms allow them to respond rapidly to changing environmental conditions. Microbially mediated attenuation of toxic chemicals offers great potential for the restoration of contaminated environments in an ecologically acceptable manner. Current knowledge regarding the structure and functional activities of microbial communities is limited, but more information is being acquired every day through many genomic- and proteomic-based methods. As of today, only a small fraction of the Earth's microorganisms has been cultured, and so most of the information regarding the biodegradation and therapeutic potentials of these uncultured microorganisms remains unknown. Sequence analysis of DNA and/or RNA has been used for identifying specific microorganisms, to study the community composition, and to monitor gene expression, providing limited information about the metabolic state of a given microbial system. Proteomic studies can reveal information regarding the real-time metabolic state of the microbial communities, thereby aiding in understanding their interaction with the environment. In the research described here, the involvement of microbial communities in the degradation of anthropogenic contaminants such as trichloroethylene (TCE) was studied using mass spectrometry-based proteomics. The co-metabolic degradation of TCE in the groundwater of the Snake River Plain Aquifer at the Test Area North (TAN) site of Idaho National Laboratory (INL) was monitored by the characterization of peptide sequences of enzymes such as methane monooxygenases (MMOs). MMOs, expressed by methanotrophic bacteria, are involved in the oxidation of methane and non-specific co-metabolic oxidation of TCE. We developed a time-course cell lysis method to release proteins from complex microbial

  7. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Science.gov (United States)

    Chiadamrong, N.; Piyathanavong, V.

    2017-04-01

    Models that aim to optimize the design of supply chain networks have gained more interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such an optimization problem. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that run until the difference between subsequent solutions satisfies the pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which shows results closer to optimal, obtained with much faster solving time, than those of the conventional simulation-based optimization model. The efficacy of this proposed hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.

  8. Optimized Autonomous Space In-situ Sensor-Web for volcano monitoring

    Science.gov (United States)

    Song, W.-Z.; Shirazi, B.; Kedar, S.; Chien, S.; Webb, F.; Tran, D.; Davis, A.; Pieri, D.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.

    2008-01-01

    In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) is developing a prototype dynamic and scalable hazard monitoring sensor-web and applying it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) will have two-way communication capability between ground and space assets, use both space and ground data for optimal allocation of limited power and bandwidth resources on the ground, and use smart management of competing demands for limited space assets. It will also enable scalability and seamless infusion of future space and in-situ assets into the sensor-web. The prototype will be focused on volcano hazard monitoring at Mount St. Helens, which has been active since October 2004. The system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO-1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) an efficient self-organization algorithm for the sensor-web topology to support efficient data communication and command control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real-time; and 4) remote network management and reprogramming tools. The space and in-situ control components of the system will be

  9. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    Science.gov (United States)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
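    The first step of the decomposition, finding the shortest-distance tree of the looped network, can be sketched with Dijkstra's algorithm. This is a minimal illustration only; the node names, pipe lengths and the `shortest_distance_tree` helper are invented for the example and are not taken from the paper:

```python
import heapq

def shortest_distance_tree(graph, source):
    """Dijkstra's algorithm returning, for each node, its distance from
    the source and its predecessor; the predecessor links define the
    shortest-distance tree used in step one of the decomposition."""
    dist = {source: 0.0}
    pred = {source: None}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                pred[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, pred

# Hypothetical looped network: node -> [(neighbour, pipe length), ...]
network = {
    "R": [("A", 100), ("B", 150)],
    "A": [("B", 80), ("C", 120)],
    "B": [("C", 60)],
    "C": [],
}
dist, pred = shortest_distance_tree(network, "R")
tree_edges = {(pred[n], n) for n in pred if pred[n] is not None}
# Edges not in the tree (the chords) would receive minimum pipe sizes
# in step two, before the DE refinement of step three.
```

    In step two, only the tree pipes are sized by the NLP solver; the chords identified above keep the minimum allowable diameters.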

  10. Modified Lagrangian and Least Root Approaches for General Nonlinear Optimization Problems

    Institute of Scientific and Technical Information of China (English)

    W. Oettli; X.Q. Yang

    2002-01-01

    In this paper we study nonlinear Lagrangian methods for optimization problems with side constraints. Nonlinear Lagrangian dual problems are introduced and their relations with the original problem are established. Moreover, a least root approach is investigated for these optimization problems.

  11. Optimal aggregation of noisy observations: A large deviations approach

    Energy Technology Data Exchange (ETDEWEB)

    Murayama, Tatsuto; Davis, Peter, E-mail: murayama@cslab.kecl.ntt.co.j, E-mail: davis@cslab.kecl.ntt.co.j [NTT Communication Science Laboratories, NTT Corporation, 2-4, Hikaridai, Seika-cho, Keihanna, Kyoto 619-0237 (Japan)

    2010-06-01

    Sensing and data aggregation tasks in distributed systems should not be considered as separate issues. The quality of collective estimation involves a fundamental tradeoff between sensing quality, which can be increased by increasing the number of sensors, and aggregation quality under a given network capacity, which decreases if the number of sensors is too large. In this paper, we examine a system-level strategy for optimal aggregation of data from an ensemble of independent sensors. In particular, we consider large-scale aggregation from very many sensors, in which case the network capacity diverges to infinity. By applying large deviations techniques, we then establish the following result: larger-scale aggregation always outperforms smaller-scale aggregation at high noise levels, while below a critical value of noise there exist moderate aggregation scales at which optimal estimation is realized. At the critical value of noise, there is an abrupt change in the behavior of a parameter characterizing the aggregation strategy, similar to a phase transition in statistical physics.

  12. Global path planning approach based on ant colony optimization algorithm

    Institute of Scientific and Technical Information of China (English)

    WEN Zhi-qiang; CAI Zi-xing

    2006-01-01

    The ant colony optimization (ACO) algorithm was modified to optimize the global path. In order to simulate real ant colonies, the concepts of a neighboring area and a smell area were introduced, based on the foraging behavior of ant colonies and the characteristics of food. The former ensures the diversity of paths and the latter ensures that each ant can reach the goal. The whole path was then divided into three parts, and ACO was used to search the second part. When the three partial paths were adjusted, the final path was found. Valid and invalid paths were defined to ensure path validity. Finally, pheromone search strategies were applied to find the optimum path. However, when only the pheromone is used in the search, ACO converges prematurely. To avoid this premature convergence, a hybrid ant colony algorithm (HACO) combining pheromone search and random search was used to find the optimum path. A comparison between ACO and HACO shows that HACO can be used to find the shortest path.
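    The pheromone-plus-random search idea can be sketched on a toy graph. The graph, all parameter values and the `aco_shortest_path` helper below are illustrative assumptions, not the authors' HACO implementation:

```python
import random

def aco_shortest_path(graph, start, goal, n_ants=20, n_iters=30,
                      evaporation=0.5, seed=1):
    """Minimal ant colony search for a short start->goal path.
    Pheromone-biased random walks play the role of the pheromone
    search; their stochastic choice supplies the random component."""
    random.seed(seed)
    pher = {(u, v): 1.0 for u in graph for v, _ in graph[u]}
    best_path, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            node, path, length, visited = start, [start], 0.0, {start}
            while node != goal:
                choices = [(v, w) for v, w in graph[node] if v not in visited]
                if not choices:
                    break  # dead end: invalid path, discard
                weights = [pher[(node, v)] / w for v, w in choices]
                v, w = random.choices(choices, weights=weights)[0]
                path.append(v); length += w; visited.add(v); node = v
            if node == goal and length < best_len:
                best_path, best_len = path, length
        # evaporation, then reinforcement along the best path so far
        for e in pher:
            pher[e] *= (1.0 - evaporation)
        if best_path:
            for u, v in zip(best_path, best_path[1:]):
                pher[(u, v)] += 1.0 / best_len
    return best_path, best_len

# Hypothetical graph whose shortest S->G path is S-A-G with length 3.
g = {"S": [("A", 1), ("B", 4)], "A": [("G", 2)], "B": [("G", 1)], "G": []}
path, length = aco_shortest_path(g, "S", "G")
```

    Ants that hit a dead end are discarded, mirroring the valid/invalid path distinction in the abstract.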

  13. A Simulated Annealing Approach for the Train Design Optimization Problem

    Directory of Open Access Journals (Sweden)

    Federico Alonso-Pecina

    2017-01-01

    Full Text Available The Train Design Optimization Problem concerns making optimal decisions on the number and movement of locomotives and crews through a railway network, so as to satisfy requested pick-up and delivery of car blocks at stations. In a mathematical programming formulation, the objective function to minimize is composed of the costs associated with the movement of locomotives and cars, the loading/unloading operations, the number of locomotives, and the crews' return to their departure stations. The constraints include upper bounds on the number of car blocks per locomotive, the number of car block swaps, and the number of locomotives passing through railroad segments. We propose a heuristic method to solve this highly combinatorial problem in two steps. The first finds an initial, feasible solution by means of an ad hoc algorithm. The second step uses the simulated annealing concept to improve the initial solution, followed by a procedure aiming to further reduce the number of needed locomotives. We show that our results are competitive with those found in the literature.
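    The improvement step can be illustrated with a generic simulated annealing skeleton. The toy cost function below (matching a 5-bit target pattern by flipping one "design bit" per move) merely stands in for the train design objective; nothing here reproduces the authors' ad hoc construction or locomotive-reduction procedures:

```python
import math
import random

def simulated_annealing(initial, neighbour, cost, t0=10.0, cooling=0.995,
                        n_iters=5000, seed=0):
    """Generic simulated annealing: start from a feasible solution,
    propose a neighbour, accept worse moves with a temperature-dependent
    probability, and remember the best solution seen."""
    random.seed(seed)
    current = best = initial
    t = t0
    for _ in range(n_iters):
        cand = neighbour(current)
        delta = cost(cand) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
        if cost(current) < cost(best):
            best = current
        t *= cooling   # geometric cooling schedule
    return best

# Toy stand-in for the train design objective: recover the 5-bit
# pattern 7 (0b00111) starting from 21 (0b10101).
cost = lambda x: bin(x ^ 7).count("1")        # Hamming distance to target
neighbour = lambda x: x ^ (1 << random.randrange(5))  # flip one random bit
best = simulated_annealing(21, neighbour, cost)
```

    As the temperature decays, the acceptance rule degenerates into hill climbing, which is why a best-so-far record is kept separately from the current solution.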

  14. A stochastic optimization approach for integrated urban water resource planning.

    Science.gov (United States)

    Huang, Y; Chen, J; Zeng, S; Sun, F; Dong, X

    2013-01-01

    Urban water is facing the challenges of both scarcity and water quality deterioration. Consideration of nonconventional water resources has increasingly become essential in urban water resource planning over the last decade. In addition, rapid urbanization and economic development have led to increasingly uncertain water demand and fragile water infrastructure. Planning of urban water resources thus needs not only an integrated consideration of both conventional and nonconventional urban water resources, including reclaimed wastewater and harvested rainwater, but also the ability to design under gross future uncertainties for better reliability. This paper developed an integrated nonlinear stochastic optimization model for urban water resource evaluation and planning in order to optimize urban water flows. It accounts for not only water quantity but also water quality, from different sources and for different uses with different costs. The model was successfully applied to a case study in Beijing, which is facing a significant water shortage. The results reveal how various urban water resources could be cost-effectively allocated by different planning alternatives and how their reliabilities would change.

  15. Discovery and Optimization of Materials Using Evolutionary Approaches.

    Science.gov (United States)

    Le, Tu C; Winkler, David A

    2016-05-25

    Materials science is undergoing a revolution, generating valuable new materials such as flexible solar panels, biomaterials and printable tissues, new catalysts, polymers, and porous materials with unprecedented properties. However, the number of potentially accessible materials is immense. Artificial evolutionary methods such as genetic algorithms, which explore large, complex search spaces very efficiently, can be applied to the identification and optimization of novel materials more rapidly than by physical experiments alone. Machine learning models can augment experimental measurements of materials fitness to accelerate the identification of useful and novel materials in vast materials composition or property spaces. This review discusses the problems of large materials spaces and the types of evolutionary algorithms employed to identify or optimize materials; explains how materials can be represented mathematically as genomes; describes fitness landscapes and mutation operators commonly employed in materials evolution; and provides a comprehensive summary of published research on the use of evolutionary methods to generate new catalysts, phosphors, and a range of other materials. The review identifies the potential for evolutionary methods to revolutionize a wide range of manufacturing, medical, and materials-based industries.
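    The genome/fitness/mutation machinery described above can be sketched in a few lines. Everything below (the real-valued genome, the toy target-matching fitness, and all parameter values) is an illustrative assumption, not drawn from any particular study:

```python
import random

def evolve(fitness, genome_len=8, pop_size=40, generations=60,
           mutation_rate=0.2, seed=3):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover and clipped Gaussian mutation over real-valued genomes
    (e.g. hypothetical composition fractions)."""
    random.seed(seed)
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    def tournament():
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, genome_len)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mutation_rate:        # Gaussian mutation
                i = random.randrange(genome_len)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Hypothetical fitness: closeness to an ideal mix unknown to the GA.
target = [0.1, 0.9, 0.4, 0.6, 0.2, 0.8, 0.5, 0.3]
fitness = lambda g: -sum((a - b) ** 2 for a, b in zip(g, target))
best = evolve(fitness)
```

    In a real study the fitness call would be an experimental measurement or a machine learning surrogate, which is exactly where the acceleration discussed in the review comes from.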

  16. Comparison of radon doses based on different radon monitoring approaches.

    Science.gov (United States)

    Vaupotič, Janja; Smrekar, Nataša; Žunić, Zora S

    2017-04-01

    In 43 places (23 schools, 3 kindergartens, 16 offices and one dwelling), indoor radon was monitored as an intercomparison experiment, using α-scintillation cells (SC - Jožef Stefan Institute, Slovenia), various kinds of solid state nuclear track detectors (KfK - Karlsruhe Institute of Technology, Germany; UFO - National Institute of Radiological Sciences, Chiba, Japan; RET - University College Dublin, Ireland) and active electronic devices (EQF, Sarad, Germany). At the same place, the radon levels and, consequently, the effective doses obtained with the different radon devices differed substantially (by a factor of 2 or more), and no regularity was observed as to which detector would show a higher or lower dose.

  17. Ultrasonic Approach to Noninvasive Temperature Monitoring During Microwave Thermotherapy

    Directory of Open Access Journals (Sweden)

    J. Vrba

    2001-06-01

    Full Text Available Microwave thermotherapy (MT) is an oncological treatment. At present, invasive thermometer probes are used clinically for temperature measurement during MT. Any invasive handling of tumors is high-risk. A new possible method of noninvasive monitoring of the temperature distribution in tissue has been developed. An MT treatment of experimentally induced pedicle tumors of the rat was prepared. For 100 rat samples, a strong correlation between the mean gray level in the ROIs of the ultrasound images and the invasively measured temperature in the range 37-44 °C was found. The correlation coefficient of the mean gray level and the invasively measured temperature is 0.96 ± 0.05. A system for representing changes of the spatial temperature distribution of the whole tumor during MT is presented.

  18. Ground tilt monitoring at Phlegraean Fields (Italy): a methodological approach

    Directory of Open Access Journals (Sweden)

    C. Del Gaudio

    2003-06-01

    Full Text Available Among the geodetic methods used for monitoring ground deformation in volcanic areas, tiltmetry is the most rapid technique, and it is therefore used by almost all volcanological observatories in the world. The deformation of a volcanic edifice is not only the result of endogenous causes (i.e., dyke injection or magma rising), but also of non-tectonic environmental factors. Such disturbances cannot be removed completely, but they can be reduced. This article outlines the main sources of error affecting the signals recorded by the Phlegraean tilt network, such as the dependence of the tilt response on temperature and the thermoelastic effect on ground deformation. The analytical procedure used to evaluate such errors and reduce them is explained. An application to data acquired from the tilt network during two distinct phases of ground uplift and subsidence of the Phlegraean Fields is reported.

  19. Hyper Heuristic Approach for Design and Optimization of Satellite Launch Vehicle

    Institute of Scientific and Technical Information of China (English)

    Amer Farhan RAFIQUE; HE Linshu; Ali KAMRAN; Qasim ZEESHAN

    2011-01-01

    A satellite launch vehicle lies at the crossroads of multiple challenging technologies, and its design and optimization present a typical example of a multidisciplinary design and optimization (MDO) process. The complexity of the problem demands a highly efficient and effective algorithm that can optimize the design. A hyper-heuristic approach (HHA) based on meta-heuristics is applied to the optimization of an air-launched satellite launch vehicle (ASLV). A non-learning random function (NLRF) is proposed to control low-level meta-heuristics (LLMHs), which increases the certainty of finding a global solution, an essential ingredient in the conceptual design phase of aerospace systems. A comprehensive empirical study is performed to evaluate the performance advantages of the proposed approach over popular non-gradient-based optimization methods. The design of the ASLV encompasses aerodynamics, propulsion, structure, stage layout, mass distribution, and trajectory modules connected by a multidisciplinary feasible design approach. This approach formulates explicit system-level goals and then hands the design optimization process entirely over to the optimizer. This distinctive approach to launch vehicle system design relieves engineers from the tedious, iterative task and enables them to improve their component-level models. Mass drives vehicle performance and cost, and so it is considered the core of the vehicle design process. Therefore, gross launch mass is minimized in the HHA.

  20. Probabilistic safety assessment and optimal control of hazardous technological systems. A marked point process approach

    Energy Technology Data Exchange (ETDEWEB)

    Holmberg, J. [VTT Automation, Espoo (Finland)

    1997-04-01

    The thesis models risk management as an optimal control problem for a stochastic process. The approach classifies the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant. 62 refs. The thesis also includes five previous publications by the author.

  1. Optimal life-style mix: an inductive approach

    NARCIS (Netherlands)

    R. Veenhoven

    2009-01-01

    There are three approaches to assessing life balance: (a) assessing how well the reality of a person’s life fits preconceptions of what a balanced life should be, (b) assessing how balanced people think their own life is, and (c) assessing which lifestyle mixes yield the most happiness. In theory, t

  2. An innovative multi-source approach for environmental monitoring of landfills

    Science.gov (United States)

    Manzo, Ciro; Mei, Alessandro; Paciucci, Lucia; Bassani, Cristiana

    2016-04-01

    This paper describes the application of a downscaling approach, based on products obtained from remote sensing and in situ surveys, to the geo-environmental analysis of a landfill site located in the San Giovanni in Fiore municipality (CS) in Southern Italy (Calabria District). The study focused on optimizing techniques for monitoring the landfill area by optical remote sensing, a crucial issue since the usual investigation methods are expensive and time-consuming. This approach integrated data with different spectral and spatial resolutions, extracting parameters descriptive of surface conditions. The use of remote sensing provided a synoptic perspective over temporal and spatial ranges, useful for monitoring different environmental matrices and assessing biogas and leachate migration. The multispectral data of WorldView-2 (2012) and Pléiades (2014 and 2015), operating in the range from visible to near-infrared, were adopted for the retrieval of indices descriptive of vegetation and soil targets at high spatial resolution. The orthophoto dataset complemented the temporal analysis not covered by the spectral imagery, showing a general increase in land consumption and highlighting areas with no or senescent vegetation cover. This evidence, due to intensive human activities and to geological, hydraulic and land cover conditions, provided the general setting of the area and its evolution, identifying ongoing processes in the study area. The Multispectral Infrared and Visible Imaging Spectrometer (MIVIS) airborne sensor extended the remote sensing analysis up to the thermal domain, highlighting surface anomalies of the landfill capping linked to local phenomena such as biogas migration or local humidity in the ground. The products obtained from remote sensing data processing were validated by in situ analysis. Evidence of ground anomalies was collected by field surveys and

  3. [Triazole antifungal agents: practice guidelines of therapeutic drug monitoring and perspectives in treatment optimization].

    Science.gov (United States)

    Scodavolpe, Simon; Quaranta, Sylvie; Lacarelle, Bruno; Solas, Caroline

    2014-01-01

    Antifungal triazole agents (fluconazole, voriconazole, itraconazole and posaconazole) are widely used for the management of invasive fungal infections (IFI). These drugs are indicated both for the prophylaxis and treatment of IFI, particularly in candidiasis and aspergillosis, a major cause of mortality in immunocompromised patients. Due to a large interindividual pharmacokinetic variability leading to sub-therapeutic or toxic concentrations, and to concentration-efficacy and/or concentration-toxicity relationships, therapeutic drug monitoring (TDM) of the antifungal triazoles is fully justified. This review provides an overview of the literature data confirming the usefulness of such TDM and its level of evidence, as well as practical guidelines for its implementation. In addition, we discuss the interest of new tools to improve the clinical management of IFI, such as genotyping tests optimizing the initial voriconazole dosing regimen, or the development of a new solid oral tablet of posaconazole improving its bioavailability and limiting absorption disorders.

  4. Optimization of Kicker Pulse Bump by Using a SR Monitor at the Photon Factory

    CERN Document Server

    Mitsuhashi, Toshiyuki

    2005-01-01

    We plan to operate the Photon Factory storage ring in top-up injection mode from 2006. To realize this operation mode, residual coherent oscillation of the stored beam due to errors in the injection pulse bump is one of the most serious problems. To reduce the error in the injection pulse bump, we calibrated the kicking angles of the injection kicker magnets by means of turn-by-turn instantaneous observation of the beam profile. We have an SR monitor inside the injection pulse bump. By measuring the turn-by-turn beam position after the excitation of a kicker magnet, we can calibrate the kick angle of that kicker magnet. Using this calibration, we optimized the injection pulse bump. As a result, we reduced the amplitude of the remaining coherent oscillation to less than 1/4 of the 1σ beam size.

  5. An Optimization-Based Approach to Injector Element Design

    Science.gov (United States)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)

    2000-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, deltaP(sub f), oxidizer pressure drop, deltaP(sub o), combustor length, L(sub comb), and full cone swirl angle, theta, (for the swirl element) or impingement half-angle, alpha, (for the impinging element) at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues.

  6. Geometry Optimization Approaches of Inductively Coupled Printed Spiral Coils for Remote Powering of Implantable Biomedical Sensors

    Directory of Open Access Journals (Sweden)

    Sondos Mehri

    2016-01-01

    Full Text Available Electronic biomedical implantable sensors need power to operate. Among the main reported approaches, the inductive link is the most commonly used method for remote powering of such devices. Power efficiency is the most important characteristic to be considered when designing inductive links to transfer energy to implantable biomedical sensors. The maximum power efficiency is obtained for maximum coupling and quality factors of the coils, and is generally limited because the coupling between the inductors is usually very small. This paper deals with geometry optimization of inductively coupled printed spiral coils for powering a given implantable sensor system. To this end, Iterative Procedure (IP) and Genetic Algorithm (GA) analytic optimization approaches are proposed. Both approaches implement simple mathematical models that approximate the coil parameters and the link efficiency values. Using numerical simulations based on the Finite Element Method (FEM) and experimental validation, the proposed analytic approaches are shown to give improved, accurate performance results in comparison with a reference design case. The analytical GA and IP optimization methods are also compared to a purely numerical optimization approach based on the Finite Element Method (GA-FEM). Numerical and experimental validations confirmed the accuracy and effectiveness of the analytical optimization approaches in designing optimal coil geometries for the best efficiency values.
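    The kind of search such approaches perform can be sketched using the standard closed-form for the maximum efficiency of a two-coil link, eta = k^2*Q1*Q2 / (1 + sqrt(1 + k^2*Q1*Q2))^2. The geometry-to-(k, Q) models below are toy stand-ins for the paper's analytic coil models, and the numbers are hypothetical:

```python
import math

def link_efficiency(k, q1, q2):
    """Maximum power-transfer efficiency of a two-coil inductive link
    as a function of coupling k and coil quality factors (a standard
    closed-form, used here in place of the paper's detailed models)."""
    x = k * k * q1 * q2
    return x / (1.0 + math.sqrt(1.0 + x)) ** 2

def coil_model(d_out):
    """Toy geometry model: outer diameter trades coupling against Q."""
    k = 0.05 * math.exp(-((d_out - 30.0) / 10.0) ** 2)  # toy coupling
    q1 = 40.0 + 0.5 * d_out                             # toy quality factor
    return k, q1

# Sweep a hypothetical outer-diameter range; the implant coil's quality
# factor is fixed at an assumed Q2 = 60.
candidates = []
for d in range(10, 51):
    k, q1 = coil_model(d)
    candidates.append((d, link_efficiency(k, q1, 60.0)))
best_d, best_eta = max(candidates, key=lambda t: t[1])
```

    Because eta is monotone in k^2*Q1*Q2, the sweep simply locates the geometry that maximizes that product, which is what the IP and GA searches do over a richer parameter space.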

  7. Monitoring Debris Flows Using Spatial Filtering and Entropy Determination Approaches

    Directory of Open Access Journals (Sweden)

    Hung-Ming Kao

    2013-01-01

    Full Text Available We developed an automatic debris flow warning system in this study. The system uses a fixed video camera mounted over mountainous streams with a high risk of debris flows. The focus of this study is an automatic algorithm for detecting debris flows with a low computational effort, which facilitates real-time implementation. The algorithm is based on a moving object detection technique that detects debris flows by comparing video frames. Background subtraction is the kernel of the algorithm for reducing the computational effort, but the non-rigid properties of the flow and the color similarity between the object and the background introduce some difficulties. Therefore, we used several spatial filtering approaches to increase the performance of the background subtraction. To increase the accuracy, entropy is used with histogram analysis to identify whether a debris flow has occurred. The modified background subtraction approach, using spatial filtering and entropy determination, is adopted to overcome errors in motion detection caused by the non-rigid properties and color similarity. The results of this study show that the approach described here improves performance while also reducing the computational effort.
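    The detection pipeline can be sketched as background subtraction followed by histogram entropy on the foreground. The 4x4 "frames", threshold and bin count are toy assumptions, and the paper's spatial filtering stage is omitted for brevity:

```python
import math

def shannon_entropy(frame, bins=8):
    """Entropy of the grey-level histogram of a frame."""
    hist = [0] * bins
    n = 0
    for row in frame:
        for px in row:
            hist[min(bins - 1, px * bins // 256)] += 1
            n += 1
    return -sum(c / n * math.log2(c / n) for c in hist if c)

def foreground(frame, background, threshold=30):
    """Background subtraction: keep pixels that differ strongly from
    the static-camera background estimate, zero out the rest."""
    return [[px if abs(px - bg) > threshold else 0
             for px, bg in zip(r, br)] for r, br in zip(frame, background)]

# Toy frames: a uniform background, a quiet frame, and a frame with a
# textured moving region (a stand-in for a debris-flow front).
background = [[100] * 4 for _ in range(4)]
quiet = [[102] * 4 for _ in range(4)]
event = [[100, 180, 60, 100], [220, 40, 200, 100],
         [100, 160, 90, 100], [100, 100, 30, 100]]
low = shannon_entropy(foreground(quiet, background))
high = shannon_entropy(foreground(event, background))
# A high foreground-histogram entropy flags a candidate debris flow.
```

    A quiet scene collapses to a single histogram bin (zero entropy), while a textured foreground spreads counts across bins, so a simple entropy threshold separates the two cases.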

  8. Optimal Charging of Electric Drive Vehicles: A Dynamic Programming Approach

    DEFF Research Database (Denmark)

    Delikaraoglou, Stefanos; Capion, Karsten Emil; Juul, Nina

    2013-01-01

    With the integration of fluctuating renewable production into the electricity system, electric-drive vehicles may contribute to the resulting need for flexibility, given that the market conditions provide sufficient economic incentive. To investigate this, we consider the short-term management of electric vehicles in a market environment. From the perspective of vehicle operators participating in the electricity spot market, the problem is to optimally charge and discharge the vehicles in response to spot market prices. We consider the case of a vehicle owner who is a price-taker and that of a fleet operator who can influence prices. In both cases, we show how the problem is amenable to dynamic programming with respectively linear and quadratic costs. With discretization of the state space, however, the problem of fleet operation is prone to suffer from the curse of dimensionality and...
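    For the price-taker case with linear costs, the dynamic program can be sketched over a discretized state of charge. The prices, the state grid and the `optimal_charging` helper are invented for illustration:

```python
def optimal_charging(prices, soc_levels, start, target, max_step=1, eff=1.0):
    """Backward dynamic program for a price-taking vehicle owner:
    choose an hourly charge/discharge step so as to end at `target`
    state of charge at minimum cost (discharging earns the spot price)."""
    INF = float("inf")
    n = len(prices)
    # cost[t][s]: minimal cost from hour t onward, starting in state s
    cost = [[INF] * len(soc_levels) for _ in range(n + 1)]
    cost[n] = [0.0 if s == target else INF for s in range(len(soc_levels))]
    plan = [[0] * len(soc_levels) for _ in range(n)]
    for t in range(n - 1, -1, -1):
        for s in range(len(soc_levels)):
            for step in range(-max_step, max_step + 1):
                s2 = s + step
                if 0 <= s2 < len(soc_levels) and cost[t + 1][s2] < INF:
                    energy = soc_levels[s2] - soc_levels[s]
                    c = prices[t] * energy / eff + cost[t + 1][s2]  # linear cost
                    if c < cost[t][s]:
                        cost[t][s], plan[t][s] = c, step
    return cost[0][start], plan

# Hypothetical spot prices over 4 hours; SOC discretized into 0..3 units.
prices = [50.0, 10.0, 80.0, 30.0]
total_cost, plan = optimal_charging(prices, [0, 1, 2, 3], start=0, target=2)
```

    With these prices, the optimal plan charges in the first two hours, discharges into the 80 price, and recharges at 30, arbitraging the price spread while still meeting the target state of charge. The curse of dimensionality mentioned in the abstract appears when the single SOC state is replaced by a state per vehicle in a fleet.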

  9. A simple approach to metal hydride alloy optimization

    Science.gov (United States)

    Lawson, D. D.; Miller, C.; Landel, R. F.

    1976-01-01

    Certain metals and related alloys can combine with hydrogen in a reversible fashion, so that on being heated they release a portion of the gas. Such materials may find application in the large-scale storage of hydrogen. Metals and alloys which show a high dissociation pressure at low temperatures and a low endothermic heat of dissociation, and are therefore desirable for hydrogen storage, give values of the Hildebrand-Scott solubility parameter that lie between 100-118 Hildebrands (Ref. 1), close to that of dissociated hydrogen. All of the less practical storage systems give much lower values of the solubility parameter. By using the Hildebrand solubility parameter as a criterion, and applying the mixing rule to combinations of known alloys and solid solutions, correlations are made to optimize alloy compositions and maximize hydrogen storage capacity.
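    The mixing-rule screening can be sketched as a fraction-weighted average of component solubility parameters. The two component values and the composition grid below are hypothetical; only the 100-118 Hildebrand target window comes from the abstract:

```python
def mixture_parameter(components, fractions):
    """Linear mixing rule: solubility parameter of a mixture as the
    fraction-weighted mean of its components' parameters."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return sum(d * f for d, f in zip(components, fractions))

# Hypothetical component parameters (in Hildebrands); screen binary
# A/B mixes for the 100-118 window cited for hydrogen-storage alloys.
delta_a, delta_b = 92.0, 130.0
grid = [x / 100.0 for x in range(0, 101, 5)]   # B-fraction in 5% steps
candidates = [f for f in grid
              if 100.0 <= mixture_parameter([delta_a, delta_b],
                                            [1 - f, f]) <= 118.0]
```

    For these assumed endpoints the window is hit for B-fractions between roughly 21% and 68%, so the 5% grid keeps the compositions from 0.25 to 0.65.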

  10. Optimization of a Large-scale Microseismic Monitoring Network in Northern Switzerland

    Science.gov (United States)

    Kraft, T.; Husen, S.; Mignan, A.; Bethmann, F.

    2011-12-01

    We have performed a computer-aided network optimization for a regional-scale microseismic network in northeastern Switzerland. The goal of the optimization was to find the geometry and size of the network that assures a location precision of 0.5 km in epicenter and 2.0 km in focal depth for earthquakes of magnitude ML >= 1.0, taking into account 67 existing stations in Switzerland, Germany and Austria, and the expected detectability of ML 1 earthquakes in the study area. The optimization was based on the simulated annealing approach of Hardt and Scherbaum (1993), which aims to minimize the volume of the error ellipsoid of the linearized earthquake location problem (D-criterion). We have extended their algorithm to: calculate traveltimes of seismic body waves using a finite-differences raytracer and the three-dimensional velocity model of Switzerland; calculate seismic body wave amplitudes at arbitrary stations assuming a Brune source model and using scaling relations recently derived for Switzerland; and estimate the noise level at arbitrary locations within Switzerland using a first-order ambient seismic noise model based on 14 land-use classes defined by the EU project CORINE and open GIS data. Considering the 67 existing stations in Switzerland, Germany and Austria, optimizations for networks of 10 to 35 new stations were calculated with respect to 2240 synthetic earthquakes of magnitude ML = 0.8-1.1. We incorporated the case of non-detections by considering only earthquake-station pairs with an expected signal-to-noise ratio larger than 10 for the considered body wave. Station noise levels were derived from measured ground motion for existing stations and from the first-order ambient noise model for new sites. The stability of the optimization result was tested by repeated optimization runs with changing initial conditions. Due to the highly nonlinear nature and size of the problem, station locations in the individual solutions show small

  11. Nonlinear approach for oil field optimization based on gas lift optimization

    Energy Technology Data Exchange (ETDEWEB)

    Khamehchi, Ehsan; Rashidi, Fariborz [Amirkabir Univ. of Technology, Tehran (Iran). Faculty of Chemical Engineering; Karimi, Behrooz [Amirkabir Univ. of Technology, Tehran (Iran). Faculty of Industrial Engineering; Pourafshary, Peyman [Tehran Univ. (Iran). Petroleum Engineering Inst.

    2009-12-15

    When the initial energy of a virgin reservoir is not sufficient, or when this energy falls below a certain limit after a production history, the production rates won't be able to meet economic margins. It is then time for artificial lift methods to come to aid, among which gas lift is the most commonly used. Although gas lift has been in use for over a century, its overall optimization is still a challenging problem. When the injection gas is of limited supply, the problem is finding the best gas allocation scheme. However, there are ever more cases emerging in certain geographic localities where the gas supplies are practically unlimited. The optimization problem then relates entirely to the wellbore and completion string and fully engages with multiphase flow concepts. In the present study an intelligent genetic algorithm has been developed to simultaneously optimize all role-playing factors, namely gas injection rate, injection depth and tubing diameter, towards the maximum oil production rate, with the water cut and injection pressure as the restrictions. The computations and real field data are mutually compared. (orig.)

  12. A sensor network based virtual beam-like structure method for fault diagnosis and monitoring of complex structures with Improved Bacterial Optimization

    Science.gov (United States)

    Wang, H.; Jing, X. J.

    2017-02-01

    This paper proposes a novel method for the fault diagnosis of complex structures based on an optimized virtual beam-like structure approach. A complex structure can be regarded as a combination of numerous virtual beam-like structures, considering the vibration transmission path from vibration sources to each sensor. Each structural 'virtual beam' consists of a sensor chain obtained automatically by an Improved Bacterial Optimization Algorithm (IBOA). This biologically inspired optimization method is proposed for solving the discrete optimization problem of selecting the optimal virtual beam for fault diagnosis. The virtual beam-like-structure approach needs little prior knowledge; it neither requires stationary response data nor is confined to a specific structural design, and it is easy to implement within a sensor network attached to the monitored structure. The proposed fault diagnosis method has been tested on the detection of loosening screws located at varying positions in a real satellite-like model. Compared with empirical methods, the proposed virtual beam-like structure method has proved to be effective and more reliable for fault localization.

  13. Optimal Diagnostic Approaches for Patients with Suspected Small Bowel Disease

    Science.gov (United States)

    Kim, Jae Hyun; Moon, Won

    2016-01-01

    While the domain of gastrointestinal endoscopy has made great strides over the last several decades, endoscopic assessment of the small bowel continues to be challenging. Recently, with the development of new technology including video capsule endoscopy, device-assisted enteroscopy, and computed tomography/magnetic resonance enterography, a more thorough investigation of the small bowel is possible. In this article, we review the systematic approach for patients with suspected small bowel disease based on these advanced endoscopic and imaging systems. PMID:27334413

  14. Combining Exact and Heuristic Approaches for Discrete Optimization

    Science.gov (United States)

    2009-02-18

    XPRESS. This success has stimulated the need for methodology to solve even much larger problems and the desire to solve problems in real-time. We...problems that cannot be solved to optimality using the leading commercial solvers such as CPLEX and XPRESS. These problems are either too large or too...an improved solution is found then update the global solution end if end while. The key to making this approach work is problem dependent. We

  15. A Powerful Optimization Approach for the Multi Channel Dissemination Networks

    CERN Document Server

    Al-Mogren, Ahmad Saad

    2010-01-01

    In the wireless environment, dissemination techniques may improve data access for the users. In this paper, we give a description of a dissemination architecture that fits the overall telecommunication network. This architecture is designed to provide efficient data access and power saving for the mobile units. A concurrency control approach, MCD, is suggested for data consistency and conflict checking. A performance study shows that the power consumption, space overhead, and response time associated with MCD are far lower than those of previous techniques.

  16. Two image denoising approaches based on wavelet neural network and particle swarm optimization

    Institute of Scientific and Technical Information of China (English)

    Yunyi Yan; Baolong Guo

    2007-01-01

    Two image denoising approaches based on a wavelet neural network (WNN) optimized by particle swarm optimization (PSO) are proposed. The noisy image is filtered by modified median filtering (MMF). Feature values are extracted based on the MMF and then normalized in order to avoid data scattering. In approach 1, the WNN identifies pixels that are uncorrupted but were altered by MMF; these pixels are restored to their original values, while the others are retained. In approach 2, the WNN distinguishes the corrupted pixels, which are then replaced by the MMF results, while the other pixels are retained. In both approaches, the WNN can be seen as a classifier that distinguishes corrupted from uncorrupted pixels. PSO is adopted to optimize and train the WNN because of its low requirements and easy employment. Experiments have shown that in terms of peak signal-to-noise ratio (PSNR) and subjective image quality, both proposed approaches are superior to traditional median filtering.
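
    The PSO trainer described above can be illustrated with a bare-bones sketch (a toy quadratic stands in for the WNN training cost; all parameter values and function names here are illustrative assumptions, not the authors' implementation):

```python
import random

def pso(cost, bounds, n_particles=12, iters=150, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: each particle tracks its personal
    best, and the swarm shares a global best that pulls all particles."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i, x in enumerate(X):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[d])
                           + c2 * rng.random() * (gbest[d] - x[d]))
                x[d] += V[i][d]
            if cost(x) < cost(pbest[i]):
                pbest[i] = x[:]
                if cost(x) < cost(gbest):
                    gbest = x[:]
    return gbest

# toy stand-in for a training cost, minimized at (3, -1)
cost = lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2
best = pso(cost, [(-10, 10), (-10, 10)])
```

    With these standard inertia/acceleration settings the swarm converges quickly on smooth low-dimensional costs, which is what makes PSO attractive as a low-overhead trainer.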

  17. Tissue viability monitoring: a multi-sensor wearable platform approach

    Science.gov (United States)

    Mathur, Neha; Davidson, Alan; Buis, Arjan; Glesk, Ivan

    2016-12-01

    Health services worldwide are seeking ways to improve patient care for amputees suffering from diabetes, and at the same time reduce costs. The monitoring of residual limb temperature, interface pressure and gait can be a useful indicator of tissue viability in lower limb amputees especially to predict the occurrence of pressure ulcers. This is further exacerbated by elevated temperatures and humid micro environment within the prosthesis which encourages the growth of bacteria and skin breakdown. Wearable systems for prosthetic users have to be designed such that the sensors are minimally obtrusive and reliable enough to faithfully record movement and physiological signals. A mobile sensor platform has been developed for use with the lower limb prosthetic users. This system uses an Arduino board that includes sensors for temperature, gait, orientation and pressure measurements. The platform transmits sensor data to a central health authority database server infrastructure through the Bluetooth protocol at a suitable sampling rate. The data-sets recorded using these systems are then processed using machine learning algorithms to extract clinically relevant information from the data. Where a sensor threshold is reached a warning signal can be sent wirelessly together with the relevant data to the patient and appropriate medical personnel. This knowledge is also useful in establishing biomarkers related to a possible deterioration in a patient's health or for assessing the impact of clinical interventions.

  18. [Vasomotor tone and CPB: monitoring components, practical and therapeutic approaches].

    Science.gov (United States)

    Isetta, C; Janot, N

    2012-05-01

    The vasomotor tone is an essential determinant of blood pressure. Vascular resistance is the result of a calculation involving vasomotor tone, blood flow and blood viscosity. Vascular tone is modulated by the sympathetic system and by the direct actions of drugs (the patient's pathology, anaesthesia). Pressure and flow allow vascular tone to be assessed. A decrease in vasomotor tone lowers the mean arterial pressure and may cause intense vasoplegia, with arterial vascular resistance below 800 dyn·s·cm-5, leading to a lack of tissue oxygenation. Vasomotor paralysis can be caused by the patient's medications or by an intense inflammatory reaction starting at the onset of extracorporeal circulation. Monitoring parameters of extracorporeal circulation such as pressure, flow, arterial and venous oxygen saturation, blood level in the venous reservoir and, by extension, blood gases, haemoglobin, CO2 partial pressure at the oxygenator vent, bispectral index and cerebral tissue oxygen saturation are reviewed. They reveal the consequences of vasoplegia and indicate whether tissue oxygenation is adequate. Adequate oxygenation may be obtained by using vasopressors (ephedrine, norepinephrine, terbutaline and vasopressin) or methylene blue, by increasing blood viscosity (erythrocytes) and blood flow, or even by inducing hypothermia.

  19. Optimal Fair Scheduling in S-TDMA Sensor Networks for Monitoring River Plumes

    Directory of Open Access Journals (Sweden)

    Miguel-Angel Luque-Nieto

    2016-01-01

    Full Text Available Underwater wireless sensor networks (UWSNs are a promising technology to provide oceanographers with environmental data in real time. Suitable network topologies to monitor estuaries are formed by strings coming together to a sink node. This network may be understood as an oriented graph. A number of MAC techniques can be used in UWSNs, but Spatial-TDMA is preferred for fixed networks. In this paper, a scheduling procedure to obtain the optimal fair frame is presented, under ideal conditions of synchronization and transmission errors. The main objective is to find the theoretical maximum throughput by overlapping the transmissions of the nodes while keeping a balanced received data rate from each sensor, regardless of its location in the network. The procedure searches for all cliques of the compatibility matrix of the network graph and solves a Multiple-Vector Bin Packing (MVBP problem. This work addresses the optimization problem and provides analytical and numerical results for both the minimum frame length and the maximum achievable throughput.
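
    The clique-search step can be sketched for a toy network (the 4-node compatibility matrix below is invented for illustration, and brute-force enumeration stands in for the paper's full MVBP formulation):

```python
from itertools import combinations

def all_cliques(compat):
    """Enumerate every clique (set of mutually compatible transmitters) of a
    symmetric 0/1 compatibility matrix by brute force -- adequate for the
    small node counts of a string-topology sensor network."""
    n = len(compat)
    return [set(nodes)
            for r in range(1, n + 1)
            for nodes in combinations(range(n), r)
            if all(compat[a][b] for a, b in combinations(nodes, 2))]

# 4-node toy string: nodes 0 and 2 may transmit together, as may 1 and 3
compat = [
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
cliques = all_cliques(compat)
best = max(cliques, key=len)  # a largest set of overlapped transmissions
```

    Each frame slot is then filled with one clique; packing the cliques so that every sensor gets a balanced share of slots is the Multiple-Vector Bin Packing problem the paper solves.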

  20. A new digital approach to design multivariable robust optimal control systems

    Institute of Scientific and Technical Information of China (English)

    LIU Xiang; CHEN Lin; SUN You-xian

    2005-01-01

    This paper presents a new design of robust optimal controller for multivariable systems. The row characteristic functions of a linear multivariable system and dynamic decoupling of its equivalent system were applied to change the transfer function matrix of a closed-loop system into a normal function matrix, so that robust H∞ optimal stability is guaranteed. Furthermore, for the decoupled equivalent control system, the l∞ optimization approach is used so that the closed-loop system achieves optimal time-domain indexes. A successful application on a heater control system verified the excellence of the new control scheme.

  1. On the equivalent static loads approach for dynamic response structural optimization

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    2014-01-01

    The equivalent static loads algorithm is an increasingly popular approach to solve dynamic response structural optimization problems. The algorithm is based on solving a sequence of related static response structural optimization problems with the same objective and constraint functions... as the original problem. The optimization theoretical foundation of the algorithm is mainly developed in Park and Kang (J Optim Theory Appl 118(1):191–200, 2003). In that article it is shown, for a certain class of problems, that if the equivalent static loads algorithm terminates then the KKT conditions...

  2. A comparison between gradient descent and stochastic approaches for parameter optimization of a sea ice model

    Science.gov (United States)

    Sumata, H.; Kauker, F.; Gerdes, R.; Köberle, C.; Karcher, M.

    2013-07-01

    Two types of optimization methods were applied to a parameter optimization problem in a coupled ocean-sea ice model of the Arctic, and applicability and efficiency of the respective methods were examined. One optimization utilizes a finite difference (FD) method based on a traditional gradient descent approach, while the other adopts a micro-genetic algorithm (μGA) as an example of a stochastic approach. The optimizations were performed by minimizing a cost function composed of model-data misfit of ice concentration, ice drift velocity and ice thickness. A series of optimizations were conducted that differ in the model formulation ("smoothed code" versus standard code) with respect to the FD method and in the population size and number of possibilities with respect to the μGA method. The FD method fails to estimate optimal parameters due to the ill-shaped nature of the cost function caused by the strong non-linearity of the system, whereas the genetic algorithms can effectively estimate near optimal parameters. The results of the study indicate that the sophisticated stochastic approach (μGA) is of practical use for parameter optimization of a coupled ocean-sea ice model with a medium-sized horizontal resolution of 50 km × 50 km as used in this study.
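
    The micro-genetic algorithm (μGA) idea, a very small population with elitism and periodic re-randomization in place of mutation, can be sketched as follows (the restart rule, parameter values and toy cost are illustrative assumptions, not the study's ocean-sea ice cost function):

```python
import random

def micro_ga(cost, bounds, pop_size=5, generations=200, seed=0):
    """Micro-genetic algorithm sketch: tiny population, elitism, uniform
    crossover, and a restart around the elite whenever diversity is lost."""
    rng = random.Random(seed)
    rand_ind = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[0]
        # restart: keep the elite, re-randomize the rest when converged
        spread = max(abs(x - e) for ind in pop for x, e in zip(ind, elite))
        if spread < 1e-3:
            pop = [elite] + [rand_ind() for _ in range(pop_size - 1)]
            continue
        # uniform crossover of the elite with tournament-selected mates
        children = [elite]
        while len(children) < pop_size:
            mate = min(rng.sample(pop, 2), key=cost)
            children.append([e if rng.random() < 0.5 else m
                             for e, m in zip(elite, mate)])
        pop = children
    return min(pop, key=cost)

# toy stand-in for the model-data misfit, minimized at (1, -2)
cost = lambda p: (p[0] - 1) ** 2 + 10 * (p[1] + 2) ** 2
best = micro_ga(cost, [(-5, 5), (-5, 5)])
```

    Elitism keeps the best candidate monotonically improving across restarts, which is why a μGA can tolerate an ill-shaped cost surface that defeats finite-difference gradients.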

  3. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    Science.gov (United States)

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.

  4. A combined stochastic programming and optimal control approach to personal finance and pensions

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming (SLP) approach, integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement.... However, an explicit solution may not exist, especially when we want to deal with constraints such as limits on the portfolio composition, limits on the insured sum, the inclusion of transaction costs, or taxes on capital gains, which are important issues regularly mentioned in the scientific literature.... Two applications are considered: (A) optimal investment, consumption and insured sum for an individual maximizing the expected utility of consumption and bequest, and (B) optimal investment for a pension saver who wishes to maximize the expected utility of retirement benefits. Numerical results show...

  5. Ultrasonic monitoring of fish thawing process optimal time of thawing and effect of freezing/thawing

    Directory of Open Access Journals (Sweden)

    Youssef Ait El Kadi

    2013-09-01

    Full Text Available Introduction. Fish quality is traditionally controlled by chemical and microbiological analysis. Non-destructive control presents an enormous professional interest thanks to the technical contribution and precision of the analysis to which it leads. This paper presents the results obtained from a characterisation of the fish thawing process by the ultrasonic technique, with monitoring of thermal processing from frozen to defrosted states. Material and methods. The study was carried out on fish of the types red drum and salmon, cut into fillets of 15 mm thickness. After being frozen at -20°C, each sample was enclosed in a plexiglas vessel with parallel walls at an ambient temperature of 30°C and excited in perpendicular incidence at 0.5 MHz by an ultrasonic pulser-receiver Sofranel 5052PR. The measurement technique consists in studying the signals reflected by the fish during its thawing; specific signal processing techniques are implemented to deduce information characterizing the state of the fish and its thawing process, by examining the evolution of the position of the echoes reflected by the sample and the viscoelastic parameters of the fish during its thawing. Results. The obtained results show a relationship between the thermal state of the fish and its acoustic properties, which allowed deducing the optimal time of the first thawing in order to restrict the growth of microbial flora. For salmon, the results show a decrease of 36% in the time of the second thawing and an increase of 10.88% in the phase velocity, with a decrease of 65.5% in the peak-to-peak voltage of the reflected signal, thus a decrease in the acoustic impedance. Conclusions. This study shows an optimal time and an evolution rate of thawing specific to each type of fish, and a correlation between the acoustic behavior of the fish and its thermal state, which confirms that this technique of ultrasonic monitoring can substitute for control using destructive chemical analysis in order to monitor

  6. Optimization Approaches for Designing a Novel 4-Bit Reversible Comparator

    Science.gov (United States)

    Zhou, Ri-gui; Zhang, Man-qun; Wu, Qian; Li, Yan-cheng

    2013-02-01

    Reversible logic is a rapidly developing research field that has received much attention in recent years for its ability to minimize energy consumption in computation. This paper constructs a new 4×4 reversible gate called the ZRQ gate to build a quantum adder and subtractor. Meanwhile, a novel 1-bit reversible comparator using the proposed ZRQC module, built on the ZRQ gate, is proposed with the minimum number of reversible gates and minimum quantum cost. In addition, this paper presents a novel 4-bit reversible comparator based on the 1-bit reversible comparator. One of the vital considerations in optimizing reversible logic is to design reversible logic circuits with the minimum number of parameters. The proposed reversible comparators obtain superiority in terms of the number of reversible gates, input constants, garbage outputs, unit delays and quantum costs compared with existing circuits. Finally, MATLAB simulation software is used to test and verify the correctness of the proposed 4-bit reversible comparator.

  7. Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach

    Science.gov (United States)

    Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar

    2013-06-01

    We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
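
    The idea of trading only when the portfolio breaches thresholds can be sketched for the two-asset case (the parameter names and cost model below are illustrative; the paper derives the optimal threshold under log-normal prices, which this sketch does not do):

```python
def threshold_rebalance(price_ratios, target=0.5, eps=0.1, cost=0.01, wealth=1.0):
    """Two-asset threshold rebalanced portfolio: hold the target weight, and
    trade back to it (paying proportional costs) only when the first asset's
    weight drifts outside [target - eps, target + eps]."""
    w1 = wealth * target          # capital in asset 1
    w2 = wealth - w1              # capital in asset 2
    for r1, r2 in price_ratios:   # per-period relative price changes
        w1 *= r1
        w2 *= r2
        frac = w1 / (w1 + w2)
        if not (target - eps <= frac <= target + eps):
            total = w1 + w2
            traded = abs(w1 - total * target)
            total -= cost * traded            # proportional transaction cost
            w1, w2 = total * target, total * (1 - target)
    return w1 + w2

flat = threshold_rebalance([(1.0, 1.0)] * 5)      # no drift: never trades
volatile = threshold_rebalance([(1.3, 0.8), (0.8, 1.3)] * 10)
```

    Keeping trades rare is what makes the strategy viable under proportional costs: inside the no-trade band the portfolio simply drifts with the market.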

  8. Contemporary nutrition approaches to optimize elite marathon performance.

    Science.gov (United States)

    Stellingwerff, Trent

    2013-09-01

    The professionalization of any sport must include an appreciation for how and where nutrition can positively affect training adaptation and/or competition performance. Furthermore, there is an ever-increasing importance of nutrition in sports that feature very high training volumes and are of a long enough duration that both glycogen and fluid balance can limit performance. Indeed, modern marathon training programs and racing satisfy these criteria and are uniquely suited to benefit from nutritional interventions. Given that muscle glycogen is limiting during a 2-h marathon, optimizing carbohydrate (CHO) intake and delivery is of maximal importance. Furthermore, the last 60 y of marathon performance have seen lighter and smaller marathoners, which enhances running economy and heat dissipation and increases CHO delivery per kg body mass. Finally, periodically training under conditions of low CHO availability (eg, low muscle glycogen) or periods of mild fluid restriction may actually further enhance the adaptive responses to training. Accordingly, this commentary highlights these key nutrition and hydration interventions that have emerged over the last several years and explores how they may assist in world-class marathon performance.

  9. Dynamic Range Size Analysis of Territorial Animals: An Optimality Approach.

    Science.gov (United States)

    Tao, Yun; Börger, Luca; Hastings, Alan

    2016-10-01

    Home range sizes of territorial animals are often observed to vary periodically in response to seasonal changes in foraging opportunities. Here we develop the first mechanistic model focused on the temporal dynamics of home range expansion and contraction in territorial animals. We demonstrate how simple movement principles can lead to a rich suite of range size dynamics, by balancing foraging activity with defensive requirements and incorporating optimal behavioral rules into mechanistic home range analysis. Our heuristic model predicts three general temporal patterns that have been observed in empirical studies across multiple taxa. First, a positive correlation between age and territory quality promotes shrinking home ranges over an individual's lifetime, with maximal range size variability shortly before the adult stage. Second, poor sensory information, low population density, and large resource heterogeneity may all independently facilitate range size instability. Finally, aggregation behavior toward forage-rich areas helps produce divergent home range responses between individuals from different age classes. This model has broad applications for addressing important unknowns in animal space use, with potential applications also in conservation and health management strategies.

  10. A moment-based approach for DVH-guided radiotherapy treatment plan optimization

    Science.gov (United States)

    Zarepisheh, M.; Shakourifar, M.; Trigila, G.; Ghomi, P. S.; Couzens, S.; Abebe, A.; Noreña, L.; Shang, W.; Jiang, Steve B.; Zinchenko, Y.

    2013-03-01

    The dose-volume histogram (DVH) is a clinically relevant criterion to evaluate the quality of a treatment plan. It is hence desirable to incorporate DVH constraints into treatment plan optimization for intensity modulated radiation therapy. Yet, the direct inclusion of the DVH constraints into a treatment plan optimization model typically leads to great computational difficulties due to the non-convex nature of these constraints. To overcome this critical limitation, we propose a new convex-moment-based optimization approach. Our main idea is to replace the non-convex DVH constraints by a set of convex moment constraints. In turn, the proposed approach is able to generate a Pareto-optimal plan whose DVHs are close to, or if possible even outperform, the desired DVHs. In particular, our experiment on a prostate cancer patient case demonstrates the effectiveness of this approach by employing two and three moment formulations to approximate the desired DVHs.
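
    The flavour of the moment surrogate can be sketched numerically: the k-th generalized dose moment is convex in the dose vector, and matching a few low-order moments of a reference dose distribution pushes the plan's DVH toward the reference DVH (the function below is an illustrative reading of the idea, not the paper's optimization model):

```python
def dose_moments(dose, orders=(1, 2, 3)):
    """Generalized dose moments m_k = (mean(d_i ** k)) ** (1/k).
    m_1 is the mean dose; higher orders weight hot spots more heavily."""
    n = len(dose)
    return [(sum(d ** k for d in dose) / n) ** (1.0 / k) for k in orders]

uniform = dose_moments([2.0, 2.0, 2.0])   # all moments equal the dose level
spread = dose_moments([1.0, 3.0])         # moments grow with the order
```

    Constraining a handful of such moments is convex, unlike constraining the DVH curve itself, which is the substitution the paper exploits.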

  11. An Approach to Optimize the Fusion Coefficients for Land Cover Information Enhancement with Multisensor Data

    Science.gov (United States)

    Garg, Akanksha; Brodu, Nicolas; Yahia, Hussein; Singh, Dharmendra

    2016-04-01

    This paper explores a novel data fusion method applying a machine learning approach for the optimal weighted fusion of multisensor data, helping to extract the maximum information about any land cover. A considerable amount of research has been carried out on multisensor data fusion, but obtaining an optimal fusion for the enhancement of land cover information using random weights remains ambiguous. There is therefore a need for a land cover monitoring system that can provide the maximum information about the land cover, which is generally not possible with single-sensor data, and a necessity to develop techniques by which the information in multisensor data can be utilized optimally. Machine learning is one of the best ways to optimize this type of information. In this paper, the weights required for the fusion of each sensor's data have been critically analyzed, and it is observed that the fusion is quite sensitive to these weights. Therefore, different combinations of weights have been tested exhaustively with the aim of developing a relationship between the weights and the classification accuracy of the fused data. This relationship can be optimized through machine learning techniques like the Support Vector Machine (SVM). In the present study, the experiment has been carried out for PALSAR (Phased Array L-Band Synthetic Aperture RADAR) and MODIS (Moderate Resolution Imaging Spectroradiometer) data. PALSAR is fully polarimetric data with HH, HV and VV polarizations at good spatial resolution (25 m), and NDVI (Normalized Difference Vegetation Index) is a good indicator of vegetation, utilizing different bands (red and NIR) of freely available MODIS data at 250 m resolution. First of all, the resolution of NDVI has been enhanced from 250 m to 25 m (10 times) using a modified discrete wavelet transform (DWT) to bring it to the same scale as PALSAR. Then, the differently polarized PALSAR data (HH, HV, VV) have been fused with the resolution-enhanced NDVI

  12. A HyperSpectral Imaging (HSI) approach for bio-digestate real time monitoring

    Science.gov (United States)

    Bonifazi, Giuseppe; Fabbri, Andrea; Serranti, Silvia

    2014-05-01

    One of the key issues in developing Good Agricultural Practices (GAP) is the optimal utilisation of fertilisers and herbicides to reduce the impact of nitrates on soils and the environment. In traditional agricultural practices, these substances were provided to the soils through the use of chemical products (inorganic/organic fertilizers, soil improvers/conditioners, etc.), usually associated with several major environmental problems, such as: water pollution and contamination, fertilizer dependency, soil acidification, trace mineral depletion, over-fertilization, high energy consumption, contribution to climate change, impacts on mycorrhizas, and lack of long-term sustainability. For this reason, the agricultural market is more and more interested in the utilisation of organic fertilisers and soil improvers. Among organic fertilizers, there is an emerging interest in digestate, a sub-product resulting from anaerobic digestion (AD) processes. Several studies confirm the high quality of digestate if used as an organic fertilizer and soil improver/conditioner. Digestate, in fact, is somewhat similar to compost: AD converts a major part of organic nitrogen to ammonia, which is then directly available to plants as nitrogen. In this paper, new analytical tools based on HyperSpectral Imaging (HSI) sensing devices, and related detection architectures, are presented and discussed in order to define and apply simple-to-use, reliable, robust and low-cost strategies for implementing innovative smart detection engines for digestate characterization and monitoring. The approach aims to utilize this "waste product" as a valuable organic fertilizer and soil conditioner, in a reduced-impact and "ad hoc" soil fertilisation perspective. Furthermore, the possibility of simultaneously utilizing the HSI approach to realize a real-time physical-chemical characterisation of agricultural soils (i.e. detection of nitrogen, phosphorus, etc.) could

  13. An Improved Particle Swarm Optimization Based on Deluge Approach for Enhanced Hierarchical Cache Optimization in IPTV Networks

    Directory of Open Access Journals (Sweden)

    M. Somu

    2014-05-01

    Full Text Available In recent years, IP networks have been considered as a new delivery network for TV services. A majority of the telecommunication industries have used IP networks to offer on-demand services and linear TV services, as they can offer two-way and high-speed communication. In order to effectively and economically utilize the IP network, caching is the technique that is usually preferred. In an IPTV system, a managed network is utilized to deliver TV services; requests for Video on Demand (VOD) objects are usually concentrated intensively within a limited period, and user preferences fluctuate dynamically. Furthermore, the VOD content updates often under the control of IPTV providers. In order to minimize this traffic and the overall network cost, a segment of the video content is stored in caches closer to subscribers, for example, at the Digital Subscriber Line Access Multiplexer (DSLAM), a Central Office (CO) and an Intermediate Office (IO). The major problem addressed in this approach is to determine the optimal cache memory that should be assigned in order to attain maximum cost effectiveness. This approach uses an effective Great Deluge algorithm based Particle Swarm Optimization (GDPSO) approach for attaining the optimal cache memory size, which in turn minimizes the overall network cost. The analysis shows that hierarchical distributed caching can save significant network cost through the utilization of the GDPSO algorithm.
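
    The Great Deluge component can be sketched in isolation (the abstract does not detail how it is hybridized with PSO, so the toy below shows only the acceptance rule: accept any move whose cost stays under a steadily lowered "water level"; the cost function and decay rate are invented for illustration):

```python
import random

def great_deluge(cost, neighbor, x0, decay=0.995, iters=2000, seed=1):
    """Great Deluge search (minimization): candidates are accepted while
    their cost stays below a water level that is lowered every iteration,
    tolerating uphill moves early and turning greedy late."""
    rng = random.Random(seed)
    x = best = x0
    level = cost(x0)
    for _ in range(iters):
        cand = neighbor(x, rng)
        if cost(cand) <= level:        # the flood rule
            x = cand
            if cost(x) < cost(best):
                best = x
        level *= decay                 # lower the water level
    return best

# toy: choose an integer cache size in [1, 100] with a convex cost
cost = lambda s: (s - 37) ** 2
neighbor = lambda s, rng: min(100, max(1, s + rng.choice([-3, -2, -1, 1, 2, 3])))
best = great_deluge(cost, neighbor, x0=90)
```

    The single decaying level makes the acceptance schedule easier to tune than a simulated-annealing temperature, which is the usual argument for Deluge-style hybrids.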

  14. Optimization Approaches for Designing Quantum Reversible Arithmetic Logic Unit

    Science.gov (United States)

    Haghparast, Majid; Bolhassani, Ali

    2016-03-01

    Reversible logic is emerging as a promising alternative for applications in low-power design and quantum computation in recent years due to its ability to reduce power dissipation, which is an important research area in low power VLSI and ULSI designs. Many important contributions have been made in the literature towards the reversible implementations of arithmetic and logical structures; however, there have not been many efforts directed towards efficient approaches for designing a reversible Arithmetic Logic Unit (ALU). In this study, three efficient approaches are presented and their implementations in the design of reversible ALUs are demonstrated. Three new designs of reversible one-digit arithmetic logic units for quantum arithmetic have been presented in this article. This paper provides explicit constructions of reversible ALUs effecting basic arithmetic operations with respect to the minimization of cost metrics. The architectures of the designs have been proposed, in which each block is realized using elementary quantum logic gates. Then, reversible implementations of the proposed designs are analyzed and evaluated. The results demonstrate that the proposed designs are cost-effective compared with their existing counterparts. All the scales are in the nanometric area.

  15. Optimizing denominator data estimation through a multimodel approach

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2014-05-01

    Full Text Available To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when high spatial resolution is applied. This study explores possibilities of improving the accuracy of livestock distribution maps without additional samples, using spatial modelling based on regression tree forest models, developed using subsets of the Uganda 2008 Livestock Census data, and several covariates. The accuracy of these spatial models, as well as the accuracy of an ensemble of a spatial model and direct estimate, was compared to direct estimates and "true" livestock figures based on the entire dataset. The new approach is shown to effectively increase the livestock estimate accuracy (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes and the multimodel approach presented here, indicating a more efficient use of financial resources.
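
    The ensemble step, combining a spatial-model prediction with a direct survey estimate, can be sketched with a simple inverse-variance weighting; this is an illustrative stand-in, as the abstract does not state the exact combination rule:

```python
def ensemble_estimate(direct, model, var_direct, var_model):
    """Combine a direct survey estimate with a spatial-model prediction,
    weighting each inversely to its variance (an illustrative rule)."""
    w = (1.0 / var_direct) / (1.0 / var_direct + 1.0 / var_model)
    return w * direct + (1.0 - w) * model

# equal confidence: the ensemble is the midpoint of the two estimates
mid = ensemble_estimate(100.0, 80.0, 25.0, 25.0)
# a very noisy direct estimate: the ensemble leans on the model
leaning = ensemble_estimate(100.0, 80.0, 1e9, 1.0)
```

    The appeal of any such ensemble is the same as in the paper: when the direct estimate is built from few samples (high variance), the model carries more of the weight, recovering accuracy without extra survey cost.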

  16. Innovative biological approaches for monitoring and improving water quality

    Directory of Open Access Journals (Sweden)

    Sanja eAracic

    2015-08-01

    Full Text Available Water quality is largely influenced by the abundance and diversity of indigenous microbes present within an aquatic environment. Physical, chemical and biological contaminants from anthropogenic activities can accumulate in aquatic systems causing detrimental ecological consequences. Approaches exploiting microbial processes are now being utilized for the detection, and removal or reduction of contaminants. Contaminants can be identified and quantified in situ using microbial whole-cell biosensors, negating the need for water samples to be tested off-site. Similarly, the innate biodegradative processes can be enhanced through manipulation of the composition and/or function of the indigenous microbial communities present within the contaminated environments. Biological contaminants, such as detrimental/pathogenic bacteria, can be specifically targeted and reduced in number using bacteriophages. This mini-review discusses the potential application of whole-cell microbial biosensors for the detection of contaminants, the exploitation of microbial biodegradative processes for environmental restoration and the manipulation of microbial communities using phages.

  17. Land Degradation Monitoring in the Ordos Plateau of China Using an Expert Knowledge and BP-ANN-Based Approach

    Directory of Open Access Journals (Sweden)

    Yaojie Yue

    2016-11-01

    Full Text Available Land degradation monitoring is of vital importance to provide scientific information for promoting sustainable land utilization. This paper presents an expert knowledge and BP-ANN-based approach to detect and monitor land degradation, in an effort to overcome the deficiencies of image classification and vegetation index-based approaches. The proposed approach consists of three generic steps: (1) extraction of knowledge on the relationship between land degradation degree and its predisposing factors, NDVI and albedo, from domain experts; (2) establishment of a land degradation detection model based on the BP-ANN algorithm; and (3) land degradation dynamic analysis. A comprehensive analysis was conducted on the development of land degradation in the Ordos Plateau of China in 1990, 2000 and 2010. The results indicate that the proposed approach is reliable for monitoring land degradation, with an overall accuracy of 91.2%. From 1990 to 2010, a reversal of land degradation is observed in the Ordos Plateau. Regions with relatively high land degradation dynamics were mostly located in the northeast of the Ordos Plateau, and most regions have shifted from hot spots of land degradation to less changed areas. It is suggested that land utilization optimization plays a key role in effective land degradation control. However, such strategies should target the main negative factors causing land degradation, and the land use type and its quantity must meet the demand of the population and be reconciled with natural conditions. Results from this case study suggest that the expert knowledge and BP-ANN-based approach is effective in mapping land degradation.
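    A minimal sketch of such a neural-network detector, assuming a hypothetical expert labeling rule (low NDVI plus high albedo implies more severe degradation; the thresholds, ranges, and class counts below are invented for illustration, and a standard MLP stands in for the paper's BP-ANN):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)

    # Invented expert labeling: a degradation score rises as NDVI falls and
    # albedo rises; thresholds split it into four degradation classes.
    n = 600
    ndvi = rng.uniform(0.05, 0.80, n)
    albedo = rng.uniform(0.10, 0.40, n)
    score = (0.80 - ndvi) + (albedo - 0.10)
    label = np.digitize(score, [0.3, 0.5, 0.7])  # 0 = none ... 3 = severe

    X = np.column_stack([ndvi, albedo])
    net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                        max_iter=2000, random_state=1)
    net.fit(X[:500], label[:500])       # expert-labeled samples as training data
    acc = net.score(X[500:], label[500:])
    print(f"hold-out accuracy: {acc:.2f}")
    ```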

  18. Global Crop Monitoring: A Satellite-Based Hierarchical Approach

    Directory of Open Access Journals (Sweden)

    Bingfang Wu

    2015-04-01

    Full Text Available Taking advantage of multiple new remote sensing data sources, especially from Chinese satellites, the CropWatch system has expanded the scope of its international analyses through the development of new indicators and an upgraded operational methodology. The approach adopts a hierarchical system covering four spatial levels of detail: global, regional, national (thirty-one key countries, including China) and “sub-country” (for the nine largest countries). The thirty-one countries encompass more than 80% of both production and exports of maize, rice, soybean and wheat. The methodology relies on climatic and remote sensing indicators at different scales. The global patterns of crop environmental growing conditions are first analyzed with indicators for rainfall, temperature, photosynthetically active radiation (PAR) as well as potential biomass. At the regional scale, the indicators pay more attention to crops and include the Vegetation Health Index (VHI), Vegetation Condition Index (VCI), Cropped Arable Land Fraction (CALF) as well as Cropping Intensity (CI). Together, they characterize crop situation, farming intensity and stress. CropWatch carries out detailed crop condition analyses at the national scale with a comprehensive array of variables and indicators. The Normalized Difference Vegetation Index (NDVI), cropped areas and crop conditions are integrated to derive food production estimates. For the nine largest countries, CropWatch zooms into the sub-national units to acquire detailed information on crop condition and production by including new indicators (e.g., crop type proportion). Based on trend analysis, CropWatch also issues crop production supply outlooks, covering both long-term variations and short-term dynamic changes in key food exporters and importers. The hierarchical approach adopted by CropWatch is the basis of the analyses of climatic and crop condition assessments published in the quarterly “CropWatch bulletin”.
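    Two of the regional indicators named above have standard closed forms (Kogan's condition indices); a minimal sketch, with the 0.5 VHI weighting used here as a common default rather than CropWatch's documented choice:

    ```python
    def vci(ndvi, ndvi_min, ndvi_max):
        """Vegetation Condition Index: current NDVI rescaled to 0-100 between
        the multi-year minimum and maximum for the same pixel and period."""
        return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

    def tci(t, t_min, t_max):
        """Temperature Condition Index: inverted scaling, since hotter-than-
        usual conditions indicate vegetation stress."""
        return 100.0 * (t_max - t) / (t_max - t_min)

    def vhi(vci_value, tci_value, alpha=0.5):
        """Vegetation Health Index: weighted blend of VCI and TCI."""
        return alpha * vci_value + (1.0 - alpha) * tci_value

    print(vci(0.45, 0.20, 0.70))                                  # 50.0
    print(vhi(vci(0.45, 0.20, 0.70), tci(305.0, 290.0, 310.0)))   # 37.5
    ```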

  19. Effective sensing approach for assessment and monitoring of in-situ biodegradation in a subsurface environment

    Science.gov (United States)

    Li, Dong X.

    1999-02-01

    Rapid assessment and monitoring of biological conditions in a subsurface environment is becoming more and more important as bioremediation approaches become widely used in environmental cleanup. Remediation monitoring is also more challenging for in-situ remedial approaches, such as bioventing, biosparging, or passive bioremediation, where conventional 'inlet' and 'outlet' monitoring can no longer be applied. A sensing approach using subsurface chemical sensors offers a cost-effective alternative for remediation monitoring. Additional benefits of deploying subsurface sensors include continuous and unattended measurement with minimum disturbance to the subsurface condition. In a series of field studies, an electrochemical oxygen sensor, a non-dispersive infrared (NDIR) carbon dioxide sensor, and two hydrocarbon sensors were employed for monitoring in-situ bioremediation of petroleum hydrocarbon contaminated soils. Biodegradation rates were effectively measured through an in-situ respiration measurement using subsurface oxygen and carbon dioxide sensors. The high sensitivity of the carbon dioxide sensor to small changes in concentration enables rapid respiration measurements. Subsurface hydrocarbon sensors offer a means to monitor the progress of remediation and the migration of contaminant vapors during the remediation. The chemical sensors tested are clearly cost effective for remediation monitoring. The strengths of the oxygen and carbon dioxide sensors are complementary to each other. Strengths and limitations of different hydrocarbon sensors were also noted. Balancing cost and performance of sensors is crucial for environmental remediation applications.
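    The respiration-test analysis described here reduces, in its simplest form, to fitting a slope to the post-shutdown oxygen decline; a sketch with invented sensor readings (the further conversion of the O2 utilization rate to a biodegradation rate, via hydrocarbon oxidation stoichiometry, is omitted):

    ```python
    import numpy as np

    # Invented post-shutdown sensor readings: subsurface O2 (%) vs. time (h).
    hours = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0])
    o2_pct = np.array([20.9, 19.6, 18.4, 17.1, 15.9, 14.8, 13.5])

    # O2 utilization rate = negative slope of a linear fit, the standard
    # reduction of an in-situ respiration test.
    slope, intercept = np.polyfit(hours, o2_pct, 1)
    print(f"O2 utilization rate: {-slope:.3f} %/h")  # ~0.306 %/h here
    ```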

  20. An approach for the reliability based design optimization of laminated composites

    Science.gov (United States)

    Holdorf Lopez, Rafael; Lemosse, Didier; Souza de Cursi, José Eduardo; Rojas, Jhojan; El-Hami, Abdelkhalak

    2011-10-01

    This article aims at optimizing laminated composite plates taking into account uncertainties in the structural dimensions. As laminated composites require a global optimization tool, the Particle Swarm Optimization (PSO) method is employed. A new Reliability Based Design Optimization (RBDO) methodology based on safety factors is presented and coupled with PSO. Such safety factors are derived from the Karush-Kuhn-Tucker optimality conditions of the reliability index approach and eliminate the need for reliability analysis in RBDO. The plate weight minimization is the objective function of the optimization process. The results show that the coupling of the evolutionary algorithm with the safety-factor method proposed in this article successfully performs the RBDO of laminated composite structures.
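    A minimal PSO with a penalty standing in for the safety-factor constraint can illustrate the coupling (the two-variable "weight" objective and constraint below are toy stand-ins for the laminate problem, not the article's formulation):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy constrained design problem: minimize f = x1 + x2 subject to a
    # "safety factor" constraint x1 * x2 >= 2, via a quadratic penalty.
    # Analytic optimum: x1 = x2 = sqrt(2), f ~ 2.83.
    def objective(x):
        g = 2.0 - x[0] * x[1]              # feasible when g <= 0
        return x[0] + x[1] + 1e3 * max(g, 0.0) ** 2

    # Minimal particle swarm optimizer
    n_part, n_iter, dim = 30, 200, 2
    lo, hi = 0.1, 5.0
    pos = rng.uniform(lo, hi, (n_part, dim))
    vel = np.zeros((n_part, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()

    print(gbest, objective(gbest))
    ```

    The article's safety factors replace the reliability analysis itself; the penalty here only mimics how a constraint steers the swarm.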

  1. Phase Noise Monitor and Reduction by Parametric Saturation Approach in Phase Modulation Systems

    Institute of Scientific and Technical Information of China (English)

    XU Ming; ZHOU Zhen; PU Xiao; JI Jian-Hua; YANG Shu-Wen

    2011-01-01

    Nonlinear phase noise (NLPN) is investigated theoretically and numerically and shown to be mitigated by a parametric saturation approach in DPSK systems. The nonlinear propagation equation, which incorporates both linear and nonlinear phase, is analyzed with parametric saturation processing (PSP). The NLPN is extracted and monitored via the power change factors in the DPSK system. This process can be realized by an optical PSP limiter and a novel apparatus with a feedback MZI. The monitoring range of phase noise is 0°-90°, which may be reduced to 0°-45° if the monitored factor is the Stokes wave rather than the anti-Stokes wave. It is shown that DPSK signal performance can be improved by the parametric saturation approach.

  2. Sequential optimal monitoring network design and iterative spatial estimation of pollutant concentration for identification of unknown groundwater pollution source locations.

    Science.gov (United States)

    Prakash, Om; Datta, Bithin

    2013-07-01

    One of the difficulties in accurate characterization of unknown groundwater pollution sources is the uncertainty regarding the number and the location of such sources. Only when the number of source locations is estimated with some degree of certainty can the characterization of the sources in terms of location, magnitude, and activity duration be meaningful. A fairly good knowledge of source locations can substantially decrease the degree of nonuniqueness in the set of possible aquifer responses to subjected geochemical stresses. A methodology is developed that uses a sequence of dedicated monitoring network designs and implementations to screen and identify the possible source locations. The proposed methodology utilizes a combination of spatial interpolation of concentration measurements and simulated annealing as the optimization algorithm for optimal design of the monitoring network. These monitoring networks are designed and implemented sequentially. The sequential design is based on iterative pollutant concentration measurement information from the sequentially designed monitoring networks. The optimal monitoring network design utilizes concentration gradient information from the monitoring network at the previous iteration to define the objective function. The feedback-information-based iterative methodology is shown to be effective in estimating the source locations when no such information is initially available. This unknown pollution source location identification methodology should be very useful as a screening model for subsequent accurate estimation of the unknown pollution sources in terms of location, magnitude, and activity duration.
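    The monitoring network design step can be sketched as a combinatorial selection problem solved by simulated annealing (the coverage objective below is a simple geometric proxy for the concentration-gradient-based objective the paper actually uses, and the candidate locations are invented):

    ```python
    import math
    import random

    random.seed(3)

    # Hypothetical candidate well locations on a unit square
    cand = [(random.random(), random.random()) for _ in range(40)]
    k = 8  # number of wells the monitoring budget allows

    def coverage_cost(sel):
        # Proxy objective: worst-case distance from any candidate location to
        # its nearest selected well (smaller = better spatial coverage).
        return max(min(math.dist(c, cand[s]) for s in sel) for c in cand)

    cur = random.sample(range(40), k)
    cur_cost = coverage_cost(cur)
    best, best_cost = cur[:], cur_cost
    T = 1.0
    for _ in range(2000):
        nxt = cur[:]
        i = random.randrange(k)                  # swap one well for an unused one
        nxt[i] = random.choice([j for j in range(40) if j not in cur])
        c = coverage_cost(nxt)
        # Accept improvements always, and worse moves with Boltzmann probability
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / T):
            cur, cur_cost = nxt, c
            if c < best_cost:
                best, best_cost = nxt[:], c
        T *= 0.998                               # geometric cooling schedule

    print(best, best_cost)
    ```

    In the paper this inner loop would be rerun at each sequential design stage, with the objective rebuilt from the newly measured concentrations.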

  3. A Structural Approach to Performance Monitoring of Waste Sites: Obtaining Actionable Information

    Science.gov (United States)

    Mattson, E. D.; Versteeg, R.; Ankeny, M.; Richardson, A.

    2005-05-01

    Both government and non-government agencies are faced with the challenge of long-term monitoring of waste sites and landfills. Such monitoring should provide actionable information on how these sites are evolving, including (but not limited to) information on the success of remedial treatment methods (either active or passive), compliance with regulatory standards, and evolution of system behavior associated with these sites. Current monitoring efforts suffer from the lack of integration between data collection, data management, information extraction and information use. An alternative to such efforts is the use of a structural approach to performance monitoring developed at Idaho National Laboratory (INL). This approach has the following characteristics: (1) tight integration between monitoring objectives and data collection efforts; (2) well-structured storage of all relevant monitoring data; (3) establishment of transparent, reproducible procedures for translation of data to information (including coupling of data to models); and (4) development of a web-based interface to the monitoring system, providing easy access to data and results by multiple stakeholders. We will discuss several examples of the implementation of the INL monitoring system, including an EPA Superfund site and several landfill sites.

  4. Muscle damage during minimally invasive surgical total knee arthroplasty traditional versus optimized subvastus approach.

    Science.gov (United States)

    Rossi, Roberto; Maiello, Alessio; Bruzzone, Matteo; Bonasia, Davide Edoardo; Blonna, Davide; Castoldi, Filippo

    2011-08-01

    Decreased muscle damage is reported as an advantage of minimally invasive surgical (MIS) approaches in total knee arthroplasty (TKA). The purpose of this study was to evaluate the anatomy of vastus medialis obliquus (VMO) tendon at its patellar insertion as well as to determine the amount and location of muscle damage comparing traditional subvastus approach and optimized subvastus approach. TKAs were performed in ten human cadavers (20 knees). In each specimen, one knee underwent the traditional subvastus approach and the contralateral knee the optimized subvastus approach. The risk of tearing and damaging the VMO muscle during the traditional subvastus approach is significantly higher (70% of the cases) compared to the optimized technique (30%). The amount of damage to the VMO muscle using the traditional subvastus approach was: 80% of the muscle's width in two cases, 60% in three cases, and 30% in two. The damage created by the optimized subvastus approach occurred along the edge of the tendon and the first fibers of the VMO muscle close to the muscle-tendon junction (less than 20% of muscle's width). Clinical studies are needed to determine the functional implications of these findings.

  5. A Genetic Algorithms-based Approach for Optimized Self-protection in a Pervasive Service Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Ingstrup, Mads; Hansen, Klaus Marius

    2009-01-01

    the constraints of heterogeneous devices and networks. In this paper, we present a Genetic Algorithms-based approach for obtaining optimized security configurations at run time, supported by a set of security OWL ontologies and an event-driven framework. This approach has been realized as a prototype for self-protection.

  6. Optimized approach to cine MRI of uterine peristalsis.

    Science.gov (United States)

    Liu, Shanshan; Zhang, Qi; Yin, Chengying; Liu, Song; Chan, Queenie; Chen, Weibo; He, Jian; Zhu, Bin

    2016-12-01

    To determine the optimal slice thickness, playback rate, and scan time for imaging uterine peristalsis with 3.0T magnetic resonance imaging (MRI). In all, 23 young female volunteers underwent a 3.0T MRI scan with slice thicknesses of 3 mm (Cine3mm), 5 mm (Cine5mm), and 7 mm (Cine7mm) for 6 minutes. Subjective image quality score, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs) of those MR images were evaluated by two radiologists independently. The number, intensity, and direction of uterine peristalsis at the different thicknesses were compared at various playback rates. Also, the peristalsis frequency was counted and compared across acquisition durations (1-6 minutes). The subjective image quality score, peristalsis number, and intensity were significantly higher in Cine7mm and Cine5mm than in Cine3mm (P < 0.05), and the SNRs and CNRs of Cine7mm were significantly higher than those of Cine3mm and Cine5mm (P < 0.05). The peristalsis frequency at 3, 4, 5, and 6 minutes was significantly higher than that at 1 minute and 2 minutes (P < 0.05). We recommend a slice thickness of 5 mm or 7 mm and a scan time of 3 minutes for uterine peristalsis with 3.0T MRI, and a playback rate of 12× or 15× the actual speed for peristalsis observation. J. Magn. Reson. Imaging 2016;44:1397-1404. © 2016 International Society for Magnetic Resonance in Medicine.

  7. Optimal Control of Vertically Transmitted Disease: An Integrated Approach

    Directory of Open Access Journals (Sweden)

    Samit Bhattacharyya

    2010-01-01

    horizontal transmission, administration of the antiviral drug to infected individuals lessens the chance of vertical transmission. Thus the vaccine and the antiviral drug play different roles in controlling the disease, which has both vertical and horizontal transmission. We develop a 3D Susceptible–Infected–Recovered model with vaccination of susceptibles and antiviral treatment of the infected, and consider a control-theoretic approach using the Pontryagin maximum principle to analyse the cost-effectiveness of the control process. Our results demonstrate that a mixed intervention strategy of vaccination and antiviral drug in a proper ratio is the most effective way to control the disease. We show that the cost-effectiveness of both intervention strategies depends intimately on disease-related parameters, such as the force of infection, the probability of infection passing to offspring from infected mothers, loss of immunity or reinfection, and also on the cost of treatment.

  8. An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method

    Directory of Open Access Journals (Sweden)

    Jibum Kim

    2014-01-01

    Full Text Available We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton’s method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods for when the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
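    A minimal version of the core idea, i.e. Newton's method with an identity-shift Hessian modification applied when the Hessian fails a Cholesky test (one of several possible modification strategies; the mesh-quality objective is replaced by the Rosenbrock function for illustration):

    ```python
    import numpy as np

    def newton(grad, hess, x0, tol=1e-10, max_iter=100):
        """Newton's method; if the Hessian fails a Cholesky test it is
        shifted by tau*I until positive definite, so the Newton step is
        always a descent direction."""
        x = np.asarray(x0, dtype=float)
        I = np.eye(len(x))
        for _ in range(max_iter):
            g, H = grad(x), hess(x)
            if np.linalg.norm(g) < tol:
                break
            tau = 0.0
            while True:
                try:
                    np.linalg.cholesky(H + tau * I)   # SPD check
                    break
                except np.linalg.LinAlgError:
                    tau = max(2.0 * tau, 1e-3)        # bump the shift
            x = x + np.linalg.solve(H + tau * I, -g)  # (modified) Newton step
        return x

    # Stand-in objective: the Rosenbrock function, whose Hessian can be
    # indefinite away from the minimizer, exercising the modification above.
    grad = lambda x: np.array([-2.0 * (1 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
                               200.0 * (x[1] - x[0]**2)])
    hess = lambda x: np.array([[2.0 - 400.0 * x[1] + 1200.0 * x[0]**2, -400.0 * x[0]],
                               [-400.0 * x[0], 200.0]])
    x_star = newton(grad, hess, [-1.2, 1.0])
    print(x_star)  # converges to the minimizer (1, 1)
    ```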

  9. A complex systems approach to planning, optimization and decision making for energy networks

    Energy Technology Data Exchange (ETDEWEB)

    Beck, Jessica; Kempener, Ruud [School of Chemical and Biomolecular Engineering, Building J01, University of Sydney, NSW 2006 (Australia); Cohen, Brett [Department of Chemical Engineering, University of Cape Town, Rondebosch (South Africa); Petrie, Jim [School of Chemical and Biomolecular Engineering, Building J01, University of Sydney, NSW 2006 (Australia); Department of Chemical Engineering, University of Cape Town, Rondebosch (South Africa)

    2008-08-15

    This paper explores a new approach to planning and optimization of energy networks, using a mix of global optimization and agent-based modeling tools. This approach takes account of techno-economic, environmental and social criteria, and engages explicitly with inherent network complexity in terms of the autonomous decision-making capability of individual agents within the network, who may choose not to act as economic rationalists. This is an important consideration from the standpoint of meeting sustainable development goals. The approach attempts to set targets for energy planning, by determining preferred network development pathways through multi-objective optimization. The viability of such plans is then explored through agent-based models. The combined approach is demonstrated for a case study of regional electricity generation in South Africa, with biomass as feedstock. (author)

  10. A hardware/software co-optimization approach for embedded software of MP3 decoder

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wei; LIU Peng; ZHAI Zhi-bo

    2007-01-01

    In order to improve the efficiency of embedded software running on a processor core, this paper proposes a hardware/software co-optimization approach for embedded software from the system point of view. The proposed stepwise methods aim at exploiting the structure and the resources of the processor as much as possible for software algorithm optimization. To achieve low memory usage and a low frequency requirement for the same performance, this co-optimization approach was used to optimize the embedded software of an MP3 decoder based on a 16-bit fixed-point DSP core. After the optimization, decoding 128 kbps, 44.1 kHz stereo MP3 on the DSP evaluation platform requires 45.9 MIPS and 20.4 kbytes of memory. The optimization achieves reductions of 65.6% in memory and 49.6% in frequency compared with the compiler-generated floating-point implementation. The experimental results demonstrate the effectiveness of the hardware/software co-optimization approach, which depends on both the algorithm and the architecture.
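    The floating-point-to-fixed-point conversion underlying such 16-bit DSP ports is commonly done in Q15 format; a sketch of the basic helpers (illustrative, not the paper's code):

    ```python
    # Q15 fixed-point helpers of the kind used when porting floating-point
    # signal-processing code to a 16-bit DSP.
    Q = 15
    SCALE = 1 << Q          # 32768

    def to_q15(x: float) -> int:
        """Quantize a value in [-1, 1) to a 16-bit Q15 integer, saturating."""
        return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

    def q15_mul(a: int, b: int) -> int:
        """16x16 -> 32-bit multiply, then shift back to Q15 (as a DSP MAC does)."""
        return (a * b) >> Q

    def from_q15(a: int) -> float:
        """Convert a Q15 integer back to a float for inspection."""
        return a / SCALE

    a, b = to_q15(0.5), to_q15(-0.25)
    print(from_q15(q15_mul(a, b)))   # -0.125
    ```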

  11. A unified modeling approach for physical experiment design and optimization in laser driven inertial confinement fusion

    Energy Technology Data Exchange (ETDEWEB)

    Li, Haiyan [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Huang, Yunbao, E-mail: Huangyblhy@gmail.com [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Jiang, Shaoen, E-mail: Jiangshn@vip.sina.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Jing, Longfei, E-mail: scmyking_2008@163.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Tianxuan, Huang; Ding, Yongkun [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China)

    2015-11-15

    Highlights: • A unified modeling approach for physical experiment design is presented. • Any laser facility can be flexibly defined and included with two scripts. • Complex targets and laser beams can be parametrically modeled for optimization. • Automatic mapping of laser beam energy facilitates target shape optimization. - Abstract: Physical experiment design and optimization is essential for laser driven inertial confinement fusion due to the high cost of each shot. However, only limited experiments with simple structures or shapes on several laser facilities can be designed and evaluated in available codes, and targets are usually defined by programming, which makes complex-shape target design and optimization on arbitrary laser facilities difficult. A unified modeling approach for physical experiment design and optimization on any laser facility is presented in this paper. Its core ideas are: (1) any laser facility can be flexibly defined and included with two scripts; (2) complex-shape targets and laser beams can be parametrically modeled based on features; (3) an automatic scheme for mapping laser beam energy onto discrete mesh elements of targets enables targets or laser beams to be optimized without any additional interactive modeling or programming; and (4) computation algorithms are additionally presented to efficiently evaluate radiation symmetry on the target. Finally, examples are demonstrated to validate the significance of this unified modeling approach for physical experiment design and optimization in laser driven inertial confinement fusion.

  12. A new approach for monitoring ebolavirus in wild great apes.

    Directory of Open Access Journals (Sweden)

    Patricia E Reed

    2014-09-01

    Full Text Available Central Africa is a "hotspot" for emerging infectious diseases (EIDs) of global and local importance, and a current outbreak of ebolavirus is affecting multiple countries simultaneously. Ebolavirus is suspected to have caused recent declines in resident great apes. While ebolavirus vaccines have been proposed as an intervention to protect apes, their effectiveness would be improved if we could diagnostically confirm Ebola virus disease (EVD) as the cause of die-offs, establish ebolavirus geographical distribution, identify immunologically naïve populations, and determine whether apes survive virus exposure. Here we report the first successful noninvasive detection of antibodies against Ebola virus (EBOV) from wild ape feces. Using this method, we have been able to identify gorillas with antibodies to EBOV at an overall prevalence rate reaching 10% on average, demonstrating that EBOV exposure or infection is not uniformly lethal in this species. Furthermore, evidence of antibodies was identified in gorillas previously thought to be unexposed to EBOV (protected from exposure by rivers acting as topological barriers of transmission). Our new approach will contribute to a strategy to protect apes from future EBOV infections by early detection of increased incidence of exposure, by identifying immunologically naïve at-risk populations as potential targets for vaccination, and by providing a means to track vaccine efficacy if such intervention is deemed appropriate. Finally, since human EVD is linked to contact with infected wildlife carcasses, efforts aimed at identifying great ape outbreaks could have a profound impact on public health in local communities, where EBOV causes case-fatality rates of up to 88%.

  13. Topology Optimization of Constrained Layer Damping on Plates Using Method of Moving Asymptote (MMA) Approach

    Directory of Open Access Journals (Sweden)

    Zheng Ling

    2011-01-01

    Full Text Available Damping treatments have been extensively used as a powerful means to damp out structural resonant vibrations. Usually, damping materials fully cover the surface of plates. The drawbacks of this conventional treatment are also obvious: added mass and excess material consumption. It is therefore not always economical and effective from a design optimization view. In this paper, a topology optimization approach is presented to maximize the modal damping ratio of a plate with constrained layer damping treatment. The governing equation of motion of the plate is derived on the basis of an energy approach. A finite element model describing the dynamic performance of the plate is developed and used along with an optimization algorithm to determine the optimal topologies of the constrained layer damping layout on the plate. The damping of the viscoelastic layer is modeled by the complex modulus formula. Considering the vibration and energy dissipation modes of the plate with constrained layer damping treatment, damping material density and volume fraction are taken as the design variable and constraint, respectively, while the modal damping ratio of the plate is assigned as the objective function. The sensitivity of the modal damping ratio to the design variable is derived, and the Method of Moving Asymptote (MMA) is adopted to search for the optimized topologies of the constrained layer damping layout on the plate. Numerical examples demonstrate the effectiveness of the proposed topology optimization approach; the results show that vibration energy dissipation of the plates can be enhanced by the optimal constrained layer damping layout. This optimization technology can be further extended to vibration attenuation of sandwich cylindrical shells, which constitute the major building block of many critical structures such as aircraft cabins, submarine hulls, and the bodies of rockets and missiles.

  14. Implementing and Innovating Marine Monitoring Approaches for Assessing Marine Environmental Status

    KAUST Repository

    Danovaro, Roberto

    2016-11-23

    Marine environmental monitoring has tended to focus on site-specific methods of investigation. These traditional methods have low spatial and temporal resolution and are relatively labor intensive per unit area/time that they cover. To implement the Marine Strategy Framework Directive (MSFD), European Member States are required to improve marine monitoring and design monitoring networks. This can be achieved by developing and testing innovative and cost-effective monitoring systems, as well as indicators of environmental status. Here, we present several recently developed methodologies and technologies to improve marine biodiversity indicators and monitoring methods. The innovative tools are discussed concerning the technologies presently utilized as well as the advantages and disadvantages of their use in routine monitoring. In particular, the present analysis focuses on: (i) molecular approaches, including microarray, Real Time quantitative PCR (qPCR), and metagenetic (metabarcoding) tools; (ii) optical (remote) sensing and acoustic methods; and (iii) in situ monitoring instruments. We also discuss their applications in marine monitoring within the MSFD through the analysis of case studies in order to evaluate their potential utilization in future routine marine monitoring. We show that these recently-developed technologies can present clear advantages in accuracy, efficiency and cost.

  15. Implementing and innovating marine monitoring approaches for assessing marine environmental status

    Directory of Open Access Journals (Sweden)

    Roberto Danovaro

    2016-11-01

    Full Text Available Marine environmental monitoring has tended to focus on site-specific methods of investigation. These traditional methods have low spatial and temporal resolution and are relatively labour intensive per unit area/time that they cover. To implement the Marine Strategy Framework Directive (MSFD), European Member States are required to improve marine monitoring and design monitoring networks. This can be achieved by developing and testing innovative and cost-effective monitoring systems, as well as indicators of environmental status. Here, we present several recently developed methodologies and technologies to improve marine biodiversity indicators and monitoring methods. The innovative tools are discussed concerning the technologies presently utilized as well as the advantages and disadvantages of their use in routine monitoring. In particular, the present analysis focuses on: (i) molecular approaches, including microarray, Real Time quantitative PCR (qPCR), and metagenetic (metabarcoding) tools; (ii) optical (remote) sensing and acoustic methods; and (iii) in situ monitoring instruments. We also discuss their applications in marine monitoring within the MSFD through the analysis of case studies in order to evaluate their potential utilization in future routine marine monitoring. We show that these recently-developed technologies can present clear advantages in accuracy, efficiency and cost.

  16. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    Science.gov (United States)

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in the engineering optimization of industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve optimization efficiency for industrial dynamic processes; costate gradient formulae are employed, and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustration. The research results show that the proposed fast approach saves at least 90% of the computation time compared with the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
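    The baseline CVP transcription (before any fast approximation) can be sketched on a textbook problem with a known analytic optimum (the system, horizon, and segment count below are illustrative, not one of the paper's benchmarks):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # CVP sketch: the control u(t) on [0, 1] is parameterized as N
    # piecewise-constant values, turning the optimal control problem
    #   min  integral of (x^2 + u^2) dt,   x' = u,   x(0) = 1
    # into a finite-dimensional NLP for a standard optimizer.
    # The analytic optimum of the continuous problem is tanh(1) ~ 0.7616.
    N, M = 10, 50            # control segments, Euler substeps per segment
    dt = 1.0 / (N * M)

    def cost(u):
        x, J = 1.0, 0.0
        for k in range(N):
            for _ in range(M):
                J += (x * x + u[k] * u[k]) * dt   # running cost (rectangle rule)
                x += u[k] * dt                    # x' = u, explicit Euler
        return J

    res = minimize(cost, np.zeros(N), method="L-BFGS-B")
    print(res.fun)   # close to tanh(1) up to discretization error
    ```

    Each cost evaluation re-simulates the dynamics, which is exactly the expense the paper's costate-gradient and fast approximate scheme target.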

  17. An Informatics Approach to Demand Response Optimization in Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Aman, Saima; Cao, Baohua; Giakkoupis, Mike; Kumbhare, Alok; Zhou, Qunzhi; Paul, Donald; Fern, Carol; Sharma, Aditya; Prasanna, Viktor K

    2011-03-03

    Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low-latency response, cloud platforms for scalable operations, and privacy policies to mitigate information leakage in an information-rich environment. Such an informatics approach is being used in the DoE-sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los Angeles Smart Grid.

  18. 78 FR 48173 - Guidance for Industry on Oversight of Clinical Investigations-A Risk-Based Approach to Monitoring...

    Science.gov (United States)

    2013-08-07

    ... Investigations--A Risk-Based Approach to Monitoring; Availability AGENCY: Food and Drug Administration, HHS... guidance for industry entitled ``Oversight of Clinical Investigations--A Risk-Based Approach to Monitoring.'' This guidance assists sponsors in developing risk-based monitoring strategies and plans for...

  19. Vibration Sensor Approaches for the Monitoring of Sand Production in Bohai Bay

    OpenAIRE

    Kai Wang; Zhiguo Liu; Gang Liu; Longtao Yi; Kui Yang; Shiqi Peng; Man Chen

    2015-01-01

    The real-time monitoring of sand production has always been an important issue in the process of oil production in offshore fields. This paper illustrates a new alternative vibration sensor approach to monitor sand production. A special broadband sensor was selected. Then time-frequency analysis, a characteristic sand-frequency-band filter method, and a peak searching-denoising method were proposed to enhance the detection of sand vibration signals in strong background noises o...

  20. Monitoring the Ocean Acoustic Environment: A Model-Based Detection Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J.V.; Sullivan, E.J.

    2000-03-13

    A model-based approach is applied in the development of a processor designed to passively monitor an ocean acoustic environment along with its associated variations. The technique employs an adaptive, model-based processor embedded in a sequential likelihood detection scheme. The trade-off between state-based and innovations-based monitor designs is discussed conceptually. The underlying theory for the innovations-based design is briefly developed and applied to a simulated data set.
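
A minimal sketch of an innovations-based sequential likelihood detector in the spirit described above: a log-likelihood ratio is accumulated over the innovations sequence and compared against two thresholds. The Gaussian mean-shift hypotheses, threshold values, and deterministic synthetic data are illustrative assumptions, not the paper's processor:

```python
import math

def sprt(innovations, mu1=1.0, sigma=1.0, thresh=5.0):
    """Sequential log-likelihood ratio test on an innovations stream:
    H0, zero-mean Gaussian innovations (nominal environment), versus
    H1, innovations mean shifted to mu1 (changed environment)."""
    llr = 0.0
    for k, e in enumerate(innovations):
        # Gaussian log-likelihood ratio increment for one innovation
        llr += (mu1 * e - 0.5 * mu1 * mu1) / (sigma ** 2)
        if llr >= thresh:
            return "change", k
        if llr <= -thresh:
            return "nominal", k
    return "undecided", len(innovations) - 1

# Deterministic synthetic innovations (stand-ins for predictor residuals)
nominal = [0.3 * math.sin(0.2 * k) for k in range(100)]
shifted = [1.0 + 0.3 * math.sin(0.2 * k) for k in range(100)]
```

In a full model-based design the innovations would come from an adaptive predictor of the acoustic field; here they are fed in directly to isolate the detection logic.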

  1. ANFIS Approach for Optimal Selection of Reusable Components

    Directory of Open Access Journals (Sweden)

    K.S. Ravichandran

    2012-12-01

    Full Text Available In a growing world, the development of modern software systems requires large-scale manpower, high development cost, long completion times and a high risk to maintaining software quality. The Component-Based Software Development (CBSD) approach is based on the concept of developing modern software systems by selecting appropriate reusable or COTS (Commercial Off-The-Shelf) components and then assembling them within a well-defined software architecture. Proper selection of COTS components reduces manpower, development cost, product completion time, risk and maintenance cost, and also yields a high-quality software product. In this paper, we develop an automated component selection process using an Adaptive Neuro-Fuzzy Inference System (ANFIS)-based technique with 14 reusable-component parameters, for the first time in this field. To further increase the accuracy of the model, a Fuzzy-Weighted-Relational-Coefficient (FWRC) matrix is derived between the components and CBS development using these 14 component parameters, namely, Reliability, Stability, Portability, Consistency, Completeness, Interface & Structural Complexity, Understandability of Software Documents, Security, Usability, Accuracy, Compatibility, Performance, Serviceability and Customizable. The recent literature reveals that almost all researchers have designed a general fuzzy-design rule for the component selection problem across all kinds of software architecture, which leads to poor selection of components; this paper instead suggests adopting a specific fuzzy-design rule for each software architecture application when selecting reusable components. Finally, it is concluded that the selection of reusable components through ANFIS performs better than the other models discussed so far.

  2. Realizing an Optimization Approach Inspired from Piaget’s Theory on Cognitive Development

    Directory of Open Access Journals (Sweden)

    Utku Kose

    2015-09-01

    Full Text Available The objective of this paper is to introduce an artificial intelligence based optimization approach inspired by Piaget’s theory of cognitive development. The approach is designed around the essential processes that an individual may experience while learning something new or improving his or her knowledge; these processes are associated with Piaget’s ideas on an individual’s cognitive development. The approach expressed in this paper is a simple algorithm employing swarm intelligence oriented tasks to solve single-objective optimization problems. To evaluate the effectiveness of this early version of the algorithm, tests were run on several benchmark functions. The obtained results show that the approach / algorithm can be an alternative to existing methods for single-objective optimization. The authors have suggested the name Cognitive Development Optimization Algorithm (CoDOA) for this intelligent optimization approach.

  3. A quality by design approach to optimization of emulsions for electrospinning using factorial and D-optimal designs.

    Science.gov (United States)

    Badawi, Mariam A; El-Khordagui, Labiba K

    2014-07-16

    Emulsion electrospinning is a multifactorial process used to generate nanofibers loaded with hydrophilic drugs or macromolecules for diverse biomedical applications. Emulsion electrospinnability is greatly impacted by the emulsion pharmaceutical attributes. The aim of this study was to apply a quality by design (QbD) approach based on design of experiments as a risk-based proactive approach to achieve predictable critical quality attributes (CQAs) in w/o emulsions for electrospinning. Polycaprolactone (PCL)-thickened w/o emulsions containing doxycycline HCl were formulated using a Span 60/sodium lauryl sulfate (SLS) emulsifier blend. The identified emulsion CQAs (stability, viscosity and conductivity) were linked with electrospinnability using a 3^3 factorial design to optimize emulsion composition for phase stability and a D-optimal design to optimize stable emulsions for viscosity and conductivity after shifting the design space. The three independent variables, emulsifier blend composition, organic:aqueous phase ratio and polymer concentration, had a significant effect (p < 0.05) on the emulsion CQAs. The QbD approach supported quality in electrospinnable emulsions, allowing development of hydrophilic drug-loaded nanofibers with desired morphological characteristics.
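
The 3^3 factorial design mentioned above enumerates every combination of three levels of the three factors. A minimal sketch follows; the factor names and coded levels are hypothetical placeholders, not the study's actual settings:

```python
from itertools import product

# Hypothetical coded levels (-1, 0, +1) for the three factors named in the
# abstract; the real level values are not taken from the study.
FACTORS = {
    "emulsifier_blend": [-1, 0, 1],   # Span 60 : SLS ratio, coded
    "phase_ratio":      [-1, 0, 1],   # organic : aqueous, coded
    "pcl_conc":         [-1, 0, 1],   # PCL concentration, coded
}

def full_factorial(factors):
    """Enumerate every level combination: a 3^3 design has 27 runs."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

design = full_factorial(FACTORS)
```

Each of the 27 runs is then executed and the measured CQAs are regressed against the coded factors; the D-optimal follow-up design restricts attention to the stable region of this space.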

  4. Optimization of Deep Sedation with Spontaneous Respiration for Therapeutic Endoscopy Combining Propofol and Bispectral Index Monitoring

    Directory of Open Access Journals (Sweden)

    Kohei Matsumoto

    2015-01-01

    Full Text Available Background/Aims. This study aimed to establish optimal propofol anesthesia for therapeutic endoscopy, which has not been established. Methodology. We retrospectively investigated data on 89 patients who underwent upper-GI endoscopic submucosal dissection or endoscopic mucosal resection under anesthesia with propofol. Examined doses of propofol were changed according to efficacy and/or adverse events and classified into 5 periods. A bispectral index (BIS) monitor was used in Period 5 to decrease the incidence of adverse events caused by oversedation. The initial dose of propofol was administered after bolus injection of pethidine hydrochloride (0.5 mg/kg), and 1.0 mL of propofol was added every minute until the patients fell asleep. Continuous and bolus infusion were performed to maintain sedation. When the patient moved or an adverse event occurred, the examined maintenance dose was increased or decreased by 5 mL/h regardless of body weight. Results. Dose combinations (introduction : maintenance) and patient numbers for each period were as follows: Period 1 (n=27), 0.5 mg/kg : 5 mg/kg/h; Period 2 (n=11), 0.33 mg/kg : 3.3 mg/kg/h; Period 3 (n=7), 0.5 mg/kg : 3.3 mg/kg/h; Period 4 (n=14), 0.5 mg/kg : 2.5 mg/kg/h; Period 5 (n=30), 0.5 mg/kg : 2.5 mg/kg/h, using the BIS monitor. During Period 5, an adverse event occurred in 10.0% of patients, which was lower than in Periods 1–4. Conclusions. The Period 5 propofol anesthesia protocol with BIS monitoring could be safe and useful for therapeutic endoscopy under deep sedation with spontaneous respiration.

  5. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    Science.gov (United States)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer-vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradients (HOG), has been made with respect to their capability to generate efficient and differentiable feature vectors for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability for the HOG feature extraction technique towards non-destructive quality inspection with appreciably low false alarm rates as compared to other techniques. Thereby, a robust and optimal image-feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for financially and commercially competitive industrial growth.

  6. Parameter Optimization of Information Channels for Laser Fluorescence Method of Vegetation Monitoring

    Directory of Open Access Journals (Sweden)

    M. L. Belov

    2015-01-01

    Full Text Available Nowadays, there is growing interest in applying remote monitoring and accounting systems in agriculture. One promising area of remote vegetation monitoring is fluorescence analysis, since it can potentially sense plant stress from the characteristics of the plants' fluorescent radiation. The shape of the fluorescence spectrum of vegetation in the normal condition differs from that of vegetation under stress, so plant stress can potentially be sensed by recording information about the shape of the fluorescence spectra. Analysis of the fluorescence spectrum shape can be replaced by analysis of fluorescence intensities in several spectral bands, which simplifies the problem. Various devices have been developed for laser fluorescence sensing of plant stress, but many practically important issues remain unclear. Most of these issues concern the parameters of the receiving channels that record the information signals:
- how many information spectral bands are best to use;
- what the best width of these information spectral bands is;
- what the best threshold value for the threshold algorithm is, and whether there is a better algorithm for processing the measurement data.
This work uses mathematical modeling based on experimentally measured fluorescence spectra to determine the parameters of information channels that are optimal, in terms of the probabilities of correct detection and false alarm, for the laser fluorescence method of sensing plant stress: the central wavelengths of the information spectral bands, their spectral widths, and the parameters of the processing algorithm in the case of two information spectral channels.
It is shown that using an additional third information spectral band allows you to
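
A two-channel threshold algorithm of the kind discussed above can be sketched as a ratio test on the intensities recorded in the two information bands. The threshold value and the direction of the stress-induced shift are hypothetical placeholders, not values from the study:

```python
def classify_stress(band1_intensity, band2_intensity, threshold=1.2):
    """Two-channel threshold algorithm: flag stress when the ratio of
    fluorescence intensities in the two information bands exceeds a
    calibrated threshold. Threshold and shift direction are assumptions."""
    if band2_intensity == 0:
        return "stressed"   # degenerate reading: flag for inspection
    ratio = band1_intensity / band2_intensity
    return "stressed" if ratio > threshold else "normal"
```

Choosing the two band centers, their widths, and this threshold to maximize detection probability at a fixed false-alarm rate is exactly the optimization the abstract describes.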

  7. Hybrid approaches to clinical trial monitoring: Practical alternatives to 100% source data verification

    Directory of Open Access Journals (Sweden)

    Sourabh De

    2011-01-01

    Full Text Available For years, the vast majority of the clinical trial industry has followed the tenet of 100% source data verification (SDV). This has been driven partly by an overcautious approach linking data quality to the extent of monitoring and SDV, and partly by a desire to stay on the safe side of regulations. The regulations, however, do not state any upper or lower limits for SDV. What they expect from researchers and sponsors are methodologies that ensure data quality. How the industry achieves this is open to innovation: application of statistical methods, targeted and remote monitoring, real-time reporting, adaptive monitoring schedules, etc. In short, hybrid approaches to monitoring. Coupled with concepts of optimum on-site monitoring and SDV and off-site monitoring techniques, it should be possible to reduce the time required for SDV, leaving more time for other productive activities. Organizations stand to gain directly or indirectly from such savings, whether by diverting the funds back to the R&D pipeline, investing more in technology infrastructure to support large trials, or simply increasing trial sample sizes. Whether this also improves the work-life balance of monitors, who may then travel on a less hectic schedule for the same level of quality and productivity, can be judged only with more evidence from the field.

  8. U.S. EPA OPTIMAL WELL LOCATOR (OWL): A SCREENING TOOL FOR EVALUATING LOCATIONS OF MONITORING WELLS (ROCKY GAP, MD)

    Science.gov (United States)

    The Optimal Well Locator (OWL) uses linear regression to fit a plane to the elevation of the water table in monitoring wells in each round of sampling. The slope of the fitted plane is used to predict the direction and gradient of ground water flow. Along with ...
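
OWL's plane-fitting step can be sketched directly: least squares fits z = a·x + b·y + c to the well-head elevations, and the fitted slope (a, b) gives the hydraulic gradient, with flow pointing down-gradient. The well coordinates below are hypothetical, chosen to lie on a known plane:

```python
def det3(m):
    """Determinant of a 3x3 matrix (for Cramer's rule)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_plane(wells):
    """Least-squares plane z = a*x + b*y + c through (x, y, head) points,
    solved from the 3x3 normal equations by Cramer's rule."""
    n = len(wells)
    sx = sum(x for x, _, _ in wells)
    sy = sum(y for _, y, _ in wells)
    sz = sum(z for _, _, z in wells)
    sxx = sum(x * x for x, _, _ in wells)
    syy = sum(y * y for _, y, _ in wells)
    sxy = sum(x * y for x, y, _ in wells)
    sxz = sum(x * z for x, _, z in wells)
    syz = sum(y * z for _, y, z in wells)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    d = det3(A)
    coeffs = []
    for i in range(3):               # Cramer's rule, column replacement
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = rhs[r]
        coeffs.append(det3(Ai) / d)
    return tuple(coeffs)             # (a, b, c)

# Hypothetical wells lying exactly on the plane z = -0.01x + 0.02y + 100
wells = [(0, 0, 100.0), (100, 0, 99.0), (0, 100, 102.0), (100, 100, 101.0)]
a, b, c = fit_plane(wells)
flow_direction = (-a, -b)            # ground water flows down-gradient
```

With noisy field heads the same normal equations give the best-fit plane, and repeating the fit per sampling round tracks changes in flow direction over time.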

  9. Using Maximum Entropy Modeling for Optimal Selection of Sampling Sites for Monitoring Networks

    Directory of Open Access Journals (Sweden)

    Paul H. Evangelista

    2011-05-01

    Full Text Available Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
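
The iterative most-dissimilar-site selection can be approximated by a greedy farthest-point rule in a standardized environmental feature space. This is a simplified stand-in for the MaxEnt-based procedure in the abstract, with toy candidate sites:

```python
def dissimilarity(a, b):
    """Euclidean distance in an (assumed pre-standardized) space of
    environmental factors such as temperature, precipitation, elevation."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_sites(candidates, n_sites):
    """Greedy rule: seed with the first candidate, then repeatedly add the
    candidate whose nearest already-chosen site is farthest away, so each
    new site is maximally dissimilar to everything selected so far."""
    chosen = [candidates[0]]
    while len(chosen) < n_sites:
        remaining = [c for c in candidates if c not in chosen]
        best = max(remaining,
                   key=lambda c: min(dissimilarity(c, s) for s in chosen))
        chosen.append(best)
    return chosen

# Toy candidate sites as standardized (temperature, precipitation, elevation)
SITES = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.1), (1.0, 1.0, 1.0),
         (0.9, 1.1, 1.0), (-1.0, 0.5, -0.5)]
picked = select_sites(SITES, 3)
```

Note how the near-duplicate of the seed site is never picked: redundancy in environmental space is exactly what the iterative selection is designed to avoid.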

  10. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.

  11. An assembly oriented design and optimization approach for mechatronic system engineering

    Directory of Open Access Journals (Sweden)

    Marconnet Bertrand

    2017-01-01

    Full Text Available Today, companies involved in product development in the “Industry 4.0” era need to manage all the information required across the product's entire lifecycle in order to optimize product-process integration as much as possible. In this paper, a Product Lifecycle Management (PLM) approach is proposed to facilitate product-process information exchange by considering design constraints and rules from DFMA (Design For Manufacturing and Assembly) guidelines. Indeed, anticipating these manufacturing and assembly constraints in the product design process reduces both costs and Time To Market (TTM), and avoids repeating mistakes. The paper details the application of multi-objective optimization algorithms after considering DFMA constraints in a PLM approach. A case study based on an original mechatronic system concept is presented and improved through product-process integrated design, optimization and simulation loops, using numerical optimization and FEM (Finite Element Method) methods and tools.

  12. A graph-based ant colony optimization approach for process planning.

    Science.gov (United States)

    Wang, JinFeng; Fan, XiaoLiang; Wan, Shuting

    2014-01-01

    The complex process planning problem is modeled as a combinatorial optimization problem with constraints in this paper. An ant colony optimization (ACO) approach has been developed to solve the process planning problem by simultaneously considering activities such as sequencing operations, selecting manufacturing resources, and determining setup plans to achieve the optimal process plan. A weighted directed graph is constructed to describe the operations, the precedence constraints between operations, and the possible paths between operation nodes. A representation of the process plan is described based on this weighted directed graph. The ant colony traverses the necessary nodes on the graph to reach the optimal solution with the objective of minimizing total production cost (TPC). Two cases have been carried out to study the influence of various ACO parameters on system performance. Extensive comparative experiments demonstrate the feasibility and efficiency of the proposed approach.
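
A minimal ACO sketch over a weighted directed graph of operations, in the spirit of the abstract: ants build start-to-end plans with pheromone-and-cost-biased selection, and pheromone deposited in inverse proportion to plan cost biases later ants toward low-TPC plans. The graph, costs, and parameter values are illustrative assumptions, not the paper's model:

```python
import random

# Toy operation graph: nodes are operations, edge weights stand in for the
# production cost of the next operation (machines and setups abstracted).
GRAPH = {
    "start": {"op1a": 4.0, "op1b": 2.0},
    "op1a":  {"op2": 1.0},
    "op1b":  {"op2": 5.0},
    "op2":   {"end": 1.0},
    "end":   {},
}

def aco_plan(graph, source="start", target="end",
             ants=30, iters=30, rho=0.5, q=1.0, seed=1):
    """Each ant builds a source->target path by roulette selection biased
    by pheromone tau and inverse edge cost; pheromone is evaporated, then
    reinforced in inverse proportion to the path's total cost."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_cost = None, float("inf")
    for _ in range(iters):
        paths = []
        for _ in range(ants):
            node, path, cost = source, [source], 0.0
            while node != target:
                edges = list(graph[node].items())
                weights = [tau[(node, v)] / w for v, w in edges]
                r = rng.random() * sum(weights)
                acc, pick = 0.0, edges[-1]
                for edge, wt in zip(edges, weights):
                    acc += wt
                    if r <= acc:
                        pick = edge
                        break
                node = pick[0]
                path.append(node)
                cost += pick[1]
            paths.append((path, cost))
            if cost < best_cost:
                best_path, best_cost = path, cost
        for e in tau:                      # evaporation
            tau[e] *= (1.0 - rho)
        for path, cost in paths:           # deposit
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += q / cost
    return best_path, best_cost
```

On this toy graph the cheapest plan is start, op1a, op2, end with total cost 6.0; the full problem adds precedence and resource constraints to the same graph-walk scheme.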

  13. An Efficient Approach to a Class of Non-smooth Optimization Problems

    Institute of Scientific and Technical Information of China (English)

    李兴斯

    1994-01-01

    This paper presents an entropy-based smoothing technique for solving a class of non-smooth optimization problems that are in some way related to the maximum function. The basic idea of this approach is to replace the non-smooth maximum function with a smooth one, called the aggregate function, which is derived by employing the maximum entropy principle, and whose useful properties are proved. With this smoothing technique, both unconstrained and constrained minimax problems are transformed into unconstrained optimization problems of smooth functions, so that this class of non-smooth optimization problems can be solved by existing unconstrained optimization software for smooth functions. The present approach can be implemented very easily on computers, with fast and stable convergence.
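
The aggregate function described above is, in the standard entropy-smoothing literature, the log-sum-exp approximation of the maximum; a commonly stated form (from the general literature, not reproduced from this paper) is:

```latex
% Smooth aggregate function for \max_i f_i(x), smoothing parameter p > 0
F_p(x) = \frac{1}{p} \ln \sum_{i=1}^{m} \exp\bigl(p\, f_i(x)\bigr),
\qquad
\max_{1 \le i \le m} f_i(x) \;\le\; F_p(x) \;\le\; \max_{1 \le i \le m} f_i(x) + \frac{\ln m}{p}.
```

So F_p is smooth, overestimates the maximum by at most (ln m)/p, and converges uniformly to it as p grows, which is what lets standard smooth solvers handle the minimax problem.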

  14. Design of parametric fault detection systems:An H-infinity optimization approach

    Institute of Scientific and Technical Information of China (English)

    Maiying ZHONG; Chuanfeng MA; Steven X.DING

    2005-01-01

    Problems related to the design of observer-based parametric fault detection (PFD) systems are studied. The core of our study is first to describe the faults occurring in system actuators, sensors and components in the form of additive parameter deviations, and then to transform the PFD problems into a similar additive fault setup, based on which an optimal observer-based fault detection approach is proposed. A constructive solution, optimal in the sense of minimizing a certain performance index, is developed. The main results consist of defining parametric fault detectability, formulating a PFD optimization problem and developing its solution. A numerical example demonstrates the effectiveness of the proposed approach.

  15. Optimal and Sustainable Exchange Rate Regimes; A Simple Game-Theoretic Approach

    OpenAIRE

    Masahiro Kawai

    1992-01-01

    This paper examines the question of how to design an optimal and sustainable exchange rate regime in a world economy of two interdependent countries. It develops a Barro-Gordon type two-country model and compares noncooperative equilibria under different assumptions about monetary policy credibility and different exchange rate regimes. Using a two-stage game approach to the strategic choice of policy instruments, it identifies optimal (in a Pareto sense) and sustainable (self-enforcing) exchang...

  16. Hybrid Optimization Approach for the Design of Mechanisms Using a New Error Estimator

    Directory of Open Access Journals (Sweden)

    A. Sedano

    2012-01-01

    Full Text Available A hybrid optimization approach for the design of linkages is presented. The method is applied to the dimensional synthesis of mechanisms and combines the merits of both stochastic and deterministic optimization. The stochastic optimization approach is based on a real-valued evolutionary algorithm (EA) and is used for extensive exploration of the design variable space when searching for the best linkage. The deterministic approach uses a local optimization technique to improve efficiency by reducing the high CPU time that EA techniques require in this kind of application. To that end, the deterministic approach is implemented in the evolutionary algorithm in two stages. The first stage is the fitness evaluation, where the deterministic approach is used to obtain an effective new error estimator. In the second stage, the deterministic approach refines the solution provided by the evolutionary part of the algorithm. The new error estimator enables the evaluation of the different individuals in each generation, avoiding the removal of well-adapted linkages that other methods would not detect. The efficiency, robustness, and accuracy of the proposed method are tested on the design of a mechanism in two examples.

  17. NEW APPROACH FOR RELIABILITY-BASED DESIGN OPTIMIZATION: MINIMUM ERROR POINT

    Institute of Scientific and Technical Information of China (English)

    LIU Deshun; YUE Wenhui; ZHU Pingyu; DU Xiaoping

    2006-01-01

    Conventional reliability-based design optimization (RBDO) requires the most probable point (MPP) method for the probabilistic analysis of the reliability constraints. A new approach, called the minimum error point (MEP) method or the MEP-based method, is presented for reliability-based design optimization; its idea is to minimize the error produced by approximating the performance functions. The MEP-based method uses the first-order Taylor expansion at the MEP instead of the MPP. Examples demonstrate that MEP-based design optimization can ensure product reliability at the required level, which is imperative for many important engineering systems. The MEP-based reliability design optimization method is feasible and is considered an alternative for solving reliability design optimization problems. The MEP-based method is more robust than the commonly used MPP-based method for some irregular performance functions.

  18. Optimizing water supply and hydropower reservoir operation rule curves: An imperialist competitive algorithm approach

    Science.gov (United States)

    Afshar, Abbas; Emami Skardi, Mohammad J.; Masoumi, Fariborz

    2015-09-01

    Efficient reservoir management requires the implementation of generalized optimal operating policies that manage storage volumes and releases while optimizing a single objective or multiple objectives. Reservoir operating rules stipulate the actions that should be taken under the current state of the system. This study develops a set of piecewise linear operating rule curves for water supply and hydropower reservoirs, employing an imperialist competitive algorithm in a parameterization-simulation-optimization approach. The adaptive penalty method is used for constraint handling and proved to work efficiently in the proposed scheme. Its performance is tested deriving an operation rule for the Dez reservoir in Iran. The proposed modelling scheme converged to near-optimal solutions efficiently in the case examples. It was shown that the proposed optimum piecewise linear rule may perform quite well in reservoir operation optimization as the operating period extends from very short to fairly long periods.
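
A piecewise-linear operating rule of the kind parameterized above maps current storage to a release by interpolating between breakpoints; optimizing the rule means optimizing those breakpoints against simulated system performance. The breakpoints below are hypothetical, not the optimized Dez reservoir rule:

```python
def rule_release(storage, curve):
    """Piecewise-linear operating rule: linearly interpolate the release
    from current storage using (storage, release) breakpoints sorted by
    storage; clamp to the end releases outside the breakpoint range."""
    if storage <= curve[0][0]:
        return curve[0][1]
    if storage >= curve[-1][0]:
        return curve[-1][1]
    for (s0, r0), (s1, r1) in zip(curve, curve[1:]):
        if s0 <= storage <= s1:
            frac = (storage - s0) / (s1 - s0)
            return r0 + frac * (r1 - r0)

# Hypothetical breakpoints (storage, release): releases are hedged when
# storage is low and grow as the reservoir fills.
CURVE = [(0.0, 0.0), (100.0, 20.0), (300.0, 50.0), (500.0, 80.0)]
```

In the parameterization-simulation-optimization scheme, the optimizer proposes breakpoint values, a simulation applies this rule over the operating horizon, and the resulting objective (with penalties for constraint violations) scores the candidate rule.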

  19. A new IPSO-SA approach for cardinality constrained portfolio optimization

    Directory of Open Access Journals (Sweden)

    Marzieh Mozafari

    2011-04-01

    Full Text Available The problem of portfolio optimization has always been a key concern for investors. This paper addresses a realistic portfolio optimization problem with floor, ceiling, and cardinality constraints. The problem is a mixed-integer quadratic program for which traditional optimization methods fail to find the optimal solution efficiently. The present paper develops a new hybrid approach, based on an improved particle swarm optimization (IPSO) and a modified simulated annealing (SA) method, to find the cardinality-constrained efficient frontier. The proposed algorithm benefits from the simple characteristics of PSO, with adaptive inertia weights and a constriction factor. In addition, incorporating an SA procedure into IPSO helps to escape local optima and improves the precision of convergence. Computational results on benchmark problems with up to 225 assets show that the proposed algorithm outperforms not only the standard PSO but also the other heuristic algorithms previously presented for the cardinality-constrained portfolio problem.

  20. PID control design for chaotic synchronization using a tribes optimization approach

    Energy Technology Data Exchange (ETDEWEB)

    Santos Coelho, Leandro dos [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: leandro.coelho@pucpr.br; Andrade Bernert, Diego Luis de [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: dbernert@gmail.com

    2009-10-15

    Recently, the investigation of synchronization and control problems for discrete chaotic systems has stimulated a wide range of research activity, including both theoretical studies and practical applications. This paper deals with the tuning of a proportional-integral-derivative (PID) controller using a modified Tribes optimization algorithm based on a truncated chaotic Zaslavskii map (MTribes) for the synchronization of two identical discrete chaotic systems subject to different initial conditions. The Tribes algorithm is inspired by the social behavior of bird flocking and is an adaptive optimization procedure that does not require sociometric or swarm-size parameter tuning. Numerical simulations are given to show the effectiveness of the proposed synchronization method. In addition, comparisons of the MTribes optimization algorithm with other continuous optimization methods, including the classical Tribes algorithm and particle swarm optimization approaches, are presented.
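
The tuning loop in the abstract amounts to searching PID gain space against a simulation-based cost. The sketch below substitutes plain random search for the MTribes optimizer and uses an assumed stable first-order plant rather than a chaotic map, so only the tune-by-simulation pattern is illustrated:

```python
import random

def pid_cost(gains, setpoint=1.0, steps=60):
    """Accumulated absolute tracking error of a PID loop around an assumed
    discrete first-order plant (not a system from the paper)."""
    kp, ki, kd = gains
    y = integ = prev_err = cost = 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err
        u = kp * err + ki * integ + kd * (err - prev_err)
        prev_err = err
        y = 0.9 * y + 0.1 * u      # assumed plant dynamics
        cost += abs(err)
    return cost

def tune_pid(trials=300, seed=3):
    """Plain random search over gain space, a deliberately simple stand-in
    for the MTribes swarm optimizer used in the paper."""
    rng = random.Random(seed)
    best_gains = (1.0, 0.0, 0.0)           # untuned baseline
    best_cost = pid_cost(best_gains)
    for _ in range(trials):
        gains = (rng.uniform(0.0, 5.0),
                 rng.uniform(0.0, 1.0),
                 rng.uniform(0.0, 2.0))
        cost = pid_cost(gains)
        if cost < best_cost:
            best_gains, best_cost = gains, cost
    return best_gains, best_cost
```

For the synchronization problem itself, the cost would instead measure the divergence between the two chaotic trajectories under the candidate controller.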