WorldWideScience

Sample records for end-use disaggregation algorithm

  1. Spatial Disaggregation of Areal Rainfall Using Two Different Artificial Neural Networks Models

    Directory of Open Access Journals (Sweden)

    Sungwon Kim

    2015-06-01

    The objective of this study is to develop artificial neural network (ANN) models, including multilayer perceptron (MLP) and Kohonen self-organizing feature map (KSOFM) models, for spatial disaggregation of areal rainfall in the Wi-stream catchment, an International Hydrological Program (IHP) representative catchment in South Korea. A three-layer MLP model, using three training algorithms, was used to estimate areal rainfall. The Levenberg–Marquardt training algorithm was found to be more sensitive to the number of hidden nodes than the conjugate gradient and quickprop training algorithms. Results showed that network structures of 11-5-1 (conjugate gradient and quickprop) and 11-3-1 (Levenberg–Marquardt) were the best for estimating areal rainfall with the MLP model. The network structures of 1-5-11 (conjugate gradient and quickprop) and 1-3-11 (Levenberg–Marquardt), the inverses of the best MLP networks for estimating areal rainfall, were identified for spatial disaggregation of areal rainfall. The KSOFM model was compared with the MLP model for spatial disaggregation of areal rainfall. Both the MLP and KSOFM models could disaggregate areal rainfall into individual point rainfall using spatial concepts.
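
    As a rough illustration of the inverse-network idea, the sketch below trains a 1-input, 11-output MLP to map areal rainfall back to point rainfall, mirroring the 1-3-11 structure reported above. The data are synthetic stand-ins (not the Wi-stream record), and scikit-learn offers no Levenberg–Marquardt solver, so L-BFGS stands in.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 11 point-rainfall series (mm) at gauges,
# and their areal average over the catchment.
point_rain = rng.gamma(shape=2.0, scale=5.0, size=(500, 11))
areal_rain = point_rain.mean(axis=1, keepdims=True)

# Inverse network (1-3-11): areal rainfall in, 11 point rainfalls out.
inverse_net = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                           solver="lbfgs", max_iter=2000, random_state=0)
inverse_net.fit(areal_rain, point_rain)

# Disaggregate a new areal value into 11 point-rainfall estimates.
print(inverse_net.predict([[12.5]]).round(2))
```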

  2. Is disaggregation the holy grail of energy efficiency? The case of electricity

    International Nuclear Information System (INIS)

    Carrie Armel, K.; Gupta, Abhay; Shrimali, Gireesh; Albert, Adrian

    2013-01-01

    This paper aims to address two timely energy problems. First, significant low-cost energy reductions can be made in the residential and commercial sectors, but these savings have not been achieved to date. Second, billions of dollars are being spent to install smart meters, yet the energy-saving and financial benefits of this infrastructure – without careful consideration of the human element – will not reach their full potential. We believe that we can address these problems by strategically marrying them, using disaggregation. Disaggregation refers to a set of statistical approaches for extracting end-use and/or appliance-level data from an aggregate, or whole-building, energy signal. In this paper, we explain how appliance-level data afford numerous benefits, and why using disaggregation algorithms in conjunction with smart meters is the most cost-effective and scalable solution for obtaining these data. We review disaggregation algorithms and their requirements, and evaluate the extent to which smart meters can meet those requirements. Research, technology, and policy recommendations are also outlined. - Highlights: ► Appliance energy use data can produce many consumer, industry, and policy benefits. ► Disaggregating smart meter data is the most cost-effective and scalable solution. ► We review algorithm requirements, and the ability of smart meters to meet them. ► Current technology identifies ∼10 appliances; minor upgrades could identify more. ► Research, technology, and policy recommendations for moving forward are outlined.
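
    For readers new to the topic, a toy combinatorial-optimization disaggregator shows the basic NILM idea in a few lines; this is the simplest textbook formulation, not any specific algorithm reviewed in the paper, and the appliance names and wattages are illustrative.

```python
from itertools import product

# Illustrative appliance signatures (watts); real NILM uses richer features
# such as transients, harmonics, and time-of-day statistics.
appliances = {"fridge": 120, "heater": 1500, "tv": 90, "washer": 500}

def disaggregate(total_watts):
    """Return the on/off assignment whose summed draw best matches the meter."""
    names = list(appliances)
    best, best_err = None, float("inf")
    for states in product([0, 1], repeat=len(names)):
        estimate = sum(s * appliances[n] for s, n in zip(states, names))
        err = abs(total_watts - estimate)
        if err < best_err:
            best, best_err = dict(zip(names, states)), err
    return best, best_err

print(disaggregate(1710))  # -> fridge, heater and tv on; error 0
```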

  3. Load Disaggregation Technologies: Real World and Laboratory Performance

    Energy Technology Data Exchange (ETDEWEB)

    Mayhorn, Ebony T.; Sullivan, Greg P.; Petersen, Joseph M.; Butner, Ryan S.; Johnson, Erica M.

    2016-09-28

    Low-cost interval metering and communication technology improvements over the past ten years have enabled load disaggregation (or non-intrusive load monitoring) technologies to mature and better estimate and report the energy consumption of individual end-use loads. With the appropriate performance characteristics, these technologies have the potential to enable many utility- and customer-facing applications such as billing transparency, itemized demand and energy consumption, appliance diagnostics, commissioning, energy efficiency savings verification, load shape research, and demand response measurement. However, there has been much skepticism concerning the ability of load disaggregation products to accurately identify and estimate the energy consumption of end uses, which has hindered widespread market adoption. A contributing factor is that common test methods and metrics are not available to evaluate performance without performing large-scale field demonstrations and pilots, which can be costly when developing such products. Without common and cost-effective methods of evaluation, more developed disaggregation technologies will continue to be slow to market and potential users will remain uncertain about their capabilities. This paper reviews recent field studies and laboratory tests of disaggregation technologies. Several factors are identified that are important to consider in test protocols, so that the results reflect real-world performance. Potential metrics are examined to highlight their effectiveness in quantifying disaggregation performance. This analysis is then used to suggest performance metrics that are meaningful and of value to potential users and that will enable researchers/developers to identify beneficial ways to improve their technologies.

  4. Context-Based Energy Disaggregation in Smart Homes

    Directory of Open Access Journals (Sweden)

    Francesca Paradiso

    2016-01-01

    In this paper, we address the problem of energy conservation and optimization in residential environments by providing users with useful information to solicit a change in consumption behavior. Taking care to strictly limit the costs of installation and management, our work proposes a Non-Intrusive Load Monitoring (NILM) approach, which consists of disaggregating the whole-house power consumption into the individual portions associated with each device. State-of-the-art NILM algorithms need monitoring data sampled at high frequency, thus requiring high costs for data collection and management. In this paper, we propose an NILM approach that relaxes the requirements on monitoring data, since it uses total active power measurements gathered at low frequency (about 1 Hz). The proposed approach is based on the use of Factorial Hidden Markov Models (FHMM) in conjunction with context information related to the user's presence in the house and the hourly utilization of appliances. Through a set of tests, we investigated how the use of these additional context-awareness features could improve disaggregation results with respect to the basic FHMM algorithm. The tests were performed using Tracebase, an open dataset consisting of data gathered from real home environments.
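
    A minimal sketch of the FHMM-plus-context idea follows; the appliances, wattages, and hourly priors are hypothetical, and it runs exact Viterbi over the small joint state space rather than the factorized inference a real FHMM implementation would use. The point is how the hour-of-day prior reweights the emission so that ambiguous aggregate readings resolve toward contextually plausible appliances.

```python
import numpy as np
from itertools import product

# Toy FHMM-style decoder: three two-state (off/on) appliance chains observed
# only through their summed power, with an hour-of-day usage prior as context.
WATTS = np.array([100.0, 1200.0, 60.0])        # hypothetical on-power draws
STATES = list(product([0, 1], repeat=3))       # 8 joint on/off states
STAY = 0.9                                     # per-appliance self-transition prob.

def log_emit(obs, state, hour, prior, sigma=30.0):
    """Gaussian likelihood of the aggregate reading, reweighted by the
    context prior: prior[hour][i] = P(appliance i is on at that hour)."""
    mu = float(np.dot(state, WATTS))
    ll = -0.5 * ((obs - mu) / sigma) ** 2
    lp = sum(np.log(prior[hour][i] if on else 1.0 - prior[hour][i])
             for i, on in enumerate(state))
    return ll + lp

def log_trans(s, t):
    p = 1.0
    for a, b in zip(s, t):
        p *= STAY if a == b else 1.0 - STAY
    return np.log(p)

def decode(readings, hours, prior):
    """Viterbi over the joint state space (exact here; real FHMMs factorize)."""
    k = len(STATES)
    score = np.array([log_emit(readings[0], s, hours[0], prior) for s in STATES])
    back = []
    for obs, h in zip(readings[1:], hours[1:]):
        step, ptr = np.empty(k), np.empty(k, dtype=int)
        for j, s in enumerate(STATES):
            cand = score + np.array([log_trans(q, s) for q in STATES])
            ptr[j] = int(np.argmax(cand))
            step[j] = cand[ptr[j]] + log_emit(obs, s, h, prior)
        score, back = step, back + [ptr]
    path = [int(np.argmax(score))]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return [STATES[j] for j in reversed(path)]

# Heater unlikely before 18:00; the prior steers ambiguous readings accordingly.
prior = {h: [0.6, 0.05 if h < 18 else 0.5, 0.3] for h in range(24)}
print(decode([160.0, 1360.0, 1260.0], [17, 18, 19], prior))
# -> [(1, 0, 1), (1, 1, 1), (0, 1, 1)]
```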

  5. Multisite rainfall downscaling and disaggregation in a tropical urban area

    Science.gov (United States)

    Lu, Y.; Qin, X. S.

    2014-02-01

    A systematic downscaling-disaggregation study was conducted over Singapore Island, with the aim of generating high spatial and temporal resolution rainfall data under future climate-change conditions. The study consisted of two major components. The first part was an inter-comparison of various alternative downscaling and disaggregation methods based on observed data. This included (i) single-site generalized linear model (GLM) plus K-nearest neighbor (KNN) (S-G-K) vs. multisite GLM (M-G) for spatial downscaling, (ii) HYETOS vs. KNN for single-site disaggregation, and (iii) KNN vs. MuDRain (Multivariate Rainfall Disaggregation tool) for multisite disaggregation. The results revealed that, for multisite downscaling, M-G performs better than S-G-K in covering the observed data, with a lower RMSE value; for single-site disaggregation, KNN preserves the basic statistics (i.e. standard deviation, lag-1 autocorrelation and probability of a wet hour) better than HYETOS; for multisite disaggregation, MuDRain outperforms KNN in fitting inter-station correlations. In the second part of the study, an integrated downscaling-disaggregation framework based on M-G, KNN, and MuDRain was used to generate hourly rainfall at multiple sites. The results indicated that the downscaled and disaggregated rainfall data based on multiple ensembles from HadCM3 for the period from 1980 to 2010 covered the observed mean rainfall amounts and extremes well, and also reasonably preserved the spatial correlations at both daily and hourly timescales. The framework was also used to project future rainfall conditions under the HadCM3 SRES A2 and B2 scenarios. The results indicated that the annual rainfall amount could decrease by up to 5% at the end of this century, but that wet-season rainfall and extreme hourly rainfall could increase notably.
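
    The KNN disaggregation step can be pictured as a "method of fragments": match a daily total to similar observed days and borrow one of their hourly profiles. The sketch below uses hypothetical data and is a simplification of the KNN schemes compared in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical historical record: 365 days of hourly rainfall (mm).
hist_hourly = rng.gamma(0.3, 2.0, size=(365, 24))
hist_daily = hist_hourly.sum(axis=1)
hist_frac = hist_hourly / hist_daily[:, None]

def knn_disaggregate(daily_total, k=5):
    """Disaggregate a daily total to hours by sampling one of the k nearest
    historical days and reusing its observed hourly fractions."""
    nearest = np.argsort(np.abs(hist_daily - daily_total))[:k]
    donor = rng.choice(nearest)
    return daily_total * hist_frac[donor]

hourly = knn_disaggregate(18.0)
print(hourly.round(2), round(float(hourly.sum()), 6))  # sums back to 18.0
```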

  6. Soil map disaggregation improved by soil-landscape relationships, area-proportional sampling and random forest implementation

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan

    Detailed soil information is often needed to support agricultural practices, environmental protection and policy decisions. Several digital approaches can be used to map soil properties based on field observations. When soil observations are sparse or missing, an alternative approach is to disaggregate existing conventional soil maps. At present, the DSMART algorithm represents the most sophisticated approach for disaggregating conventional soil maps (Odgers et al., 2014). The algorithm relies on classification trees trained from resampled points, which are assigned classes according… …implementation generally improved the algorithm's ability to predict the correct soil class. The implementation of soil-landscape relationships and area-proportional sampling generally increased the calculation time, while the random forest implementation reduced the calculation time. In the most successful…

  7. Streamflow disaggregation: a nonlinear deterministic approach

    Directory of Open Access Journals (Sweden)

    B. Sivakumar

    2004-01-01

    This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space to represent the transformation dynamics; and (2) use of a local approximation (nearest-neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. Further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
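
    The two steps can be sketched directly: delay-embed the coarse series into phase space, then disaggregate each coarse value using the average fine-scale split of its k nearest historical neighbours. This is a simplified rendering of the approach on synthetic data (2-day to daily only), not the authors' exact implementation.

```python
import numpy as np

def embed(series, dim, tau=1):
    """Delay-embed a 1-D series into dim-dimensional phase-space vectors."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

def disaggregate_2day(coarse, fine_hist, coarse_hist, dim=3, k=10):
    """Split each 2-day flow into two daily flows using the mean daily split
    of the k nearest neighbours in the reconstructed phase space."""
    X = embed(coarse_hist, dim)
    splits = fine_hist[dim - 1:]            # daily pair aligned with each state
    hist = list(coarse_hist[-(dim - 1):])
    out = []
    for v in coarse:
        state = np.array(hist[-(dim - 1):] + [v])
        idx = np.argsort(np.linalg.norm(X - state, axis=1))[:k]
        frac = splits[idx].mean(axis=0)
        out.append(v * frac / frac.sum())   # neighbours' mean daily fractions
        hist.append(v)
    return np.array(out)

daily = np.random.default_rng(2).gamma(2.0, 50.0, 400)  # synthetic daily flows
pairs, two_day = daily.reshape(200, 2), daily.reshape(200, 2).sum(axis=1)
print(disaggregate_2day(two_day[-5:], pairs[:-5], two_day[:-5]).round(1))
```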

  8. Ends and Ways: The Algorithmic Politics of Network Neutrality

    Directory of Open Access Journals (Sweden)

    Fenwick McKelvey

    2010-01-01

    The Internet in Canada is an assemblage of private and public networks. A variety of institutions and networking codes manage these networks. Conflicts exist between these parties despite their interconnection. Tensions heightened when commercial ISPs began managing traffic on their networks using sophisticated routing algorithms. Concerned parties demanded legislation based on a network neutrality principle to prevent undue discrimination. While the network neutrality controversy has been addressed as a question of public policy, the controversy also includes a conflict between the various codes constituting networks in Canada. This conflict involves two key types of networking software that manifest incongruous networks. Their algorithms, the logics embedded in code, differentiate the types of networking code. The two types of algorithms are Quality of Service and End-to-End. These algorithms treat different modalities of Internet communication differently, in part due to their deployment by different institutions. Quality of Service allows for the tiering of traffic by carriers. Commercial carriers have popularized this algorithm to promote value-added services and prevent network congestion. End-to-end algorithms, on the other hand, enforce a strict equality between modalities of communication. Peer-to-peer applications have popularized an extreme version of the end-to-end algorithm, treating all nodes as equals. The popularity and growth of both these algorithms pull the Internet in different directions, creating conflicts over its future. Through an extended review of these two algorithms and their intersection, this paper confronts how code plays a role in the network neutrality controversy.

  9. A GIS-based disaggregate spatial watershed analysis using RADAR data

    International Nuclear Information System (INIS)

    Al-Hamdan, M.

    2002-01-01

    Hydrology is the study of water in all its forms, origins, and destinations on the earth. This paper develops a novel modeling technique using a geographic information system (GIS) to facilitate watershed hydrological routing using RADAR data. The RADAR rainfall data, segmented into 4 km by 4 km blocks, divide the watershed into several sub-basins which are modeled independently. A case study for the GIS-based disaggregate spatial watershed analysis using RADAR data is provided for South Fork Cowikee Creek near Batesville, Alabama. All the data necessary to complete the analysis are maintained in the ArcView GIS software. This paper concludes that GIS-based disaggregate spatial watershed analysis using RADAR data is a viable method to calculate hydrological routing for large watersheds. (author)

  10. Disaggregation of sectors in Social Accounting Matrices using a customized Wolsky method

    OpenAIRE

    BARRERA-LOZANO Margarita; MAINAR CAUSAPÉ ALFREDO; VALLÉS FERRER José

    2014-01-01

    The aim of this work is to enable the implementation of disaggregation processes for specific and homogeneous sectors in Social Accounting Matrices (SAMs), while taking into account the difficulties in data collection from these types of sectors. The method proposed is based on the Wolsky technique, customized for the disaggregation of Social Accounting Matrices, within the current-facilities framework. The Spanish Social Accounting Matrix for 2008 is used as a benchmark for the analysis, and...

  11. New Insight into the Finance-Energy Nexus: Disaggregated Evidence from Turkish Sectors

    Directory of Open Access Journals (Sweden)

    Mert Topcu

    2017-01-01

    As the reshaped energy economics literature has adopted new variables in the energy demand function, the number of papers examining the relationship between financial development and energy consumption at the aggregate level has been increasing over the last few years. This paper, however, proposes a new framework using disaggregated data and investigates the nexus between financial development and sectoral energy consumption in Turkey. To this end, panel time series regression and causality techniques are adopted over the period 1989–2011. Empirical results confirm that financial development does have a significant impact on energy consumption, even with disaggregated data. It is also shown that the effect of financial development is larger in energy-intensive industries than in less energy-intensive ones.

  12. Converged photonic data storage and switch platform for exascale disaggregated data centers

    Science.gov (United States)

    Pitwon, R.; Wang, K.; Worrall, A.

    2017-02-01

    We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.

  13. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in the literature are usually deterministic and make use of auxiliary indicators, such as land cover, to spatially distribute exposures. As the dependence between the auxiliary indicator and the disaggregated number of exposures is generally imperfect, uncertainty arises in disaggregation. This paper therefore proposes a probabilistic disaggregation model that considers the uncertainty in the disaggregation, taking basis in the scaled Dirichlet distribution. The proposed probabilistic disaggregation model is applied to a portfolio of residential buildings in the Canton of Bern, Switzerland, subject to flood risk. Thereby, the model is verified…
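
    The gist of such a probabilistic disaggregation can be sketched with an ordinary Dirichlet draw centred on auxiliary-indicator shares (the paper's scaled Dirichlet formulation is richer); repeated draws express the disaggregation uncertainty. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def probabilistic_disaggregate(total_exposures, indicator_share,
                               concentration=50.0, draws=1000):
    """Monte Carlo disaggregation: cell shares drawn from a Dirichlet centred
    on the auxiliary indicator (e.g. built-up land-cover fraction per cell).
    Lower concentration = weaker trust in the indicator = wider uncertainty."""
    alpha = concentration * indicator_share / indicator_share.sum()
    counts = rng.dirichlet(alpha, size=draws) * total_exposures
    return counts.mean(axis=0), counts.std(axis=0)

share = np.array([0.5, 0.3, 0.15, 0.05])   # hypothetical land-cover weights
mean, sd = probabilistic_disaggregate(200, share)
print(mean.round(1), sd.round(1))          # means sum to 200 exposures
```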

  14. Daily disaggregation of simulated monthly flows using different rainfall datasets in southern Africa

    Directory of Open Access Journals (Sweden)

    D.A. Hughes

    2015-09-01

    New hydrological insights for the region: There are substantial regional differences in the success of the monthly hydrological model, which inevitably affects the success of the daily disaggregation results. There are also regional differences in the success of using global rainfall datasets (Climatic Research Unit (CRU) datasets for monthly data; National Oceanic and Atmospheric Administration African Rainfall Climatology, version 2 (ARC2) satellite data for daily data). The overall conclusion is that the disaggregation method presents a parsimonious approach to generating daily flow simulations from existing monthly simulations and that these daily flows are likely to be useful for some purposes (e.g. water quality modelling), but less so for others (e.g. peak flow analysis).
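
    A parsimonious disaggregation of this kind can be pictured as spreading each monthly volume over days in proportion to a daily rainfall-driven shape plus a baseflow floor; the sketch below illustrates that idea with made-up numbers and is not the authors' exact scheme.

```python
import numpy as np

def disaggregate_month(monthly_flow, daily_rain, baseflow_frac=0.3, smooth=3):
    """Spread a simulated monthly flow volume over days: a constant baseflow
    share plus a share proportional to smoothed daily rainfall."""
    kernel = np.ones(smooth) / smooth
    shape = np.convolve(daily_rain, kernel, mode="same")
    if shape.sum() == 0:                     # dry month: uniform split
        shape = np.ones_like(daily_rain, dtype=float)
    quick = (1 - baseflow_frac) * monthly_flow * shape / shape.sum()
    base = baseflow_frac * monthly_flow / len(daily_rain)
    return base + quick

rain = np.array([0, 0, 12, 30, 5, 0, 0, 8, 0, 0] * 3)  # 30 illustrative days
daily = disaggregate_month(90.0, rain)
print(round(float(daily.sum()), 6))          # mass preserved: 90.0
```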

  15. A Peltier-based freeze-thaw device for meteorite disaggregation

    Science.gov (United States)

    Ogliore, R. C.

    2018-02-01

    A Peltier-based freeze-thaw device for the disaggregation of meteorite or other rock samples is described. Meteorite samples are kept in six water-filled cavities inside a thin-walled Al block. This block is held between two Peltier coolers that are automatically cycled between cooling and warming. One cycle takes approximately 20 min. The device can run unattended for months, allowing for ˜10 000 freeze-thaw cycles that will disaggregate meteorites even with relatively low porosity. This device was used to disaggregate ordinary and carbonaceous chondrite regolith breccia meteorites to search for micrometeoroid impact craters.

  16. Development of a Disaggregation Framework toward the Estimation of Subdaily Reference Evapotranspiration: 2- Estimation of Subdaily Reference Evapotranspiration Using Disaggregated Weather Data

    Directory of Open Access Journals (Sweden)

    F. Parchami Araghi

    2016-09-01

    Introduction: Subdaily estimates of reference evapotranspiration (ETo) are needed in many applications such as dynamic agro-hydrological modeling. However, in many regions, the lack of subdaily weather data has hampered efforts to quantify subdaily ETo. In the first paper presented, a physically based framework was developed to disaggregate the daily weather data needed for estimating subdaily ETo, including air temperature, wind speed, dew point, actual vapour pressure, relative humidity, and solar radiation. The main purpose of this study was to estimate subdaily ETo using disaggregated daily data derived from the disaggregation framework developed in the first paper. Materials and Methods: Subdaily ETo estimates were made using the ASCE and FAO-56 Penman–Monteith models (ASCE-PM and FAO56-PM, respectively) and subdaily weather data derived from the daily-to-subdaily weather data disaggregation framework. To this end, long-term daily weather data from the Abadan (59 years) and Ahvaz (50 years) synoptic weather stations were collected. A sensitivity analysis of the Penman–Monteith model to the different meteorological variables (daily air temperature, wind speed at 2 m height, actual vapor pressure, and solar radiation) was carried out using partial derivatives of the Penman–Monteith equation. The capability of the two models to retrieve daily ETo was evaluated using the root mean square error RMSE (mm), the mean error ME (mm), the mean absolute error MAE (mm), the Pearson correlation coefficient r (-), and the Nash–Sutcliffe model efficiency coefficient EF (-). Different contributions to the overall error were decomposed using a regression-based method. Results and Discussion: The results of the sensitivity analysis showed that daily air temperature and actual vapor pressure are the most significant meteorological variables affecting the ETo estimates. In contrast, low sensitivity…

  17. A Replication of "Using self-esteem to disaggregate psychopathy, narcissism, and aggression (2013)"

    Directory of Open Access Journals (Sweden)

    Durand, Guillaume

    2016-09-01

    The present study is a replication of Falkenbach, Howe, and Falki (2013), "Using self-esteem to disaggregate psychopathy, narcissism, and aggression," Personality and Individual Differences, 54(7), 815-820.

  18. An Iterative Load Disaggregation Approach Based on Appliance Consumption Pattern

    Directory of Open Access Journals (Sweden)

    Huijuan Wang

    2018-04-01

    Non-intrusive load monitoring (NILM), i.e. monitoring single-appliance consumption levels by decomposing the aggregated energy consumption, is a novel and economical technology that is beneficial to energy utilities and to the development of energy demand management strategies. The hardware costs of high-frequency sampling and the computational complexity of the algorithms have hampered large-scale NILM application, while low-frequency sampling data show poor performance in event detection when multiple appliances are turned on simultaneously. In this paper, we contribute an iterative load disaggregation approach based on appliance consumption patterns (ILDACP). Our approach combines the Fuzzy C-means clustering algorithm, which provides an initial appliance operating status, with sub-sequence searching Dynamic Time Warping, which retrieves individual energy consumption based on the typical power consumption pattern. Results show that the proposed approach is effective in accurately disaggregating power consumption and is suitable for situations where different appliances operate simultaneously. Also, the approach has lower computational complexity than the Hidden Markov Model method and is easy to implement in the household without installing special equipment.
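
    The sub-sequence searching ingredient can be illustrated with a small DTW distance slid across the aggregate signal to locate where a typical appliance pattern occurs; the signals and wattages below are hypothetical, and the quadratic-time DTW is the plain textbook version.

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def find_pattern(aggregate, pattern):
    """Slide the typical consumption pattern over the aggregate signal and
    return the start index with the smallest DTW distance."""
    scores = [dtw(aggregate[s:s + len(pattern)], pattern)
              for s in range(len(aggregate) - len(pattern) + 1)]
    return int(np.argmin(scores))

kettle = np.array([0, 1800, 1800, 1800, 0], dtype=float)
agg = np.concatenate([np.full(20, 150.0), kettle + 150.0, np.full(15, 150.0)])
print(find_pattern(agg, kettle + 150.0))  # -> 20
```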

  19. Location Assisted Vertical Handover Algorithm for QoS Optimization in End-to-End Connections

    DEFF Research Database (Denmark)

    Dam, Martin S.; Christensen, Steffen R.; Mikkelsen, Lars M.

    2012-01-01

    …implementation on Android-based tablets. The simulations cover a wide range of scenarios for two mobile users in an urban area with ubiquitous cellular coverage, and show that our algorithm leads to increased throughput, with fewer handovers, when considering the end-to-end connection, compared to other handover schemes…

  20. End-use energy characterization and conservation potentials at DoD Facilities: An analysis of electricity use at Fort Hood, Texas

    Energy Technology Data Exchange (ETDEWEB)

    Akbari, H.; Konopacki, S.

    1995-05-01

    This report discusses the application of LBL's End-use Disaggregation Algorithm (EDA) to a DoD installation and presents hourly reconciled end-use data for all major building types and end uses. The project initially focused on achieving these objectives and pilot-testing the methodology at Fort Hood, Texas. Fort Hood, with over 5000 buildings, was determined to have representative samples of nearly all of the major building types in use on DoD installations. These building types at Fort Hood include: office, administration, vehicle maintenance, shop, hospital, grocery store, retail store, car wash, church, restaurant, single-family detached housing, two- and four-plex housing, and apartment buildings. Up to 11 end uses were developed for each prototype, consisting of 9 electric and 2 gas; however, only electric end uses were reconciled against known data and weather conditions. The electric end uses are space cooling, ventilation, cooking, miscellaneous/plugs, refrigeration, exterior lighting, interior lighting, process loads, and street lighting. The gas end uses are space heating and hot water heating. Space-heating energy-use intensities were simulated only. The EDA was applied to 10 separate feeders from the three substations at Fort Hood. The results from the analyses of these ten feeders were extrapolated to estimate energy use by end use for the entire installation. The results show that administration, residential, and barracks buildings are the largest consumers of electricity, with a total of 250 GWh per year (74% of annual consumption). By end use, cooling, ventilation, miscellaneous, and indoor lighting consume almost 84% of total electricity use. The contribution to the peak power demand is highest for the residential sector (35%, 24 MW), followed by administration buildings (30%) and barracks (14%). For the entire Fort Hood installation, cooling is 54% of the peak demand (38 MW), followed by interior lighting at 18% and miscellaneous end uses at 12%.
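
    The reconciliation step can be pictured as scaling simulated hourly end-use profiles so that their sum reproduces the metered feeder load hour by hour. The sketch below shows that idea only; it is not LBL's actual EDA implementation, and the numbers are illustrative.

```python
import numpy as np

def reconcile(simulated, metered):
    """Scale simulated hourly end-use profiles (end_use x hour) so that the
    column sums reproduce the metered whole-feeder hourly load."""
    sim_total = simulated.sum(axis=0)
    factors = np.divide(metered, sim_total, out=np.ones_like(metered),
                        where=sim_total > 0)
    return simulated * factors             # broadcast per-hour factor

# Illustrative 3-end-use, 4-hour example (kW): cooling, lighting, plugs.
sim = np.array([[50.0, 80.0, 120.0, 90.0],
                [30.0, 30.0, 40.0, 35.0],
                [20.0, 25.0, 30.0, 25.0]])
meter = np.array([110.0, 120.0, 200.0, 160.0])
print(reconcile(sim, meter).sum(axis=0))   # matches: [110. 120. 200. 160.]
```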

  1. Probabilistic disaggregation model with application to natural hazard risk assessment of portfolios

    OpenAIRE

    Custer, Rocco; Nishijima, Kazuyoshi

    2012-01-01

    In natural hazard risk assessment, a resolution mismatch between hazard data and aggregated exposure data is often observed. A possible solution to this issue is the disaggregation of exposure data to match the spatial resolution of hazard data. Disaggregation models available in the literature are usually deterministic and make use of auxiliary indicators, such as land cover, to spatially distribute exposures. As the dependence between the auxiliary indicator and the disaggregated number of exposures is ...

  2. Command Disaggregation Attack and Mitigation in Industrial Internet of Things

    Directory of Open Access Journals (Sweden)

    Peng Xun

    2017-10-01

    A cyber-physical attack in the industrial Internet of Things can cause severe damage to the physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators like programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different types of command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to the wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We have designed the components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate the different levels of impact of the various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework.

  3. Command Disaggregation Attack and Mitigation in Industrial Internet of Things.

    Science.gov (United States)

    Xun, Peng; Zhu, Pei-Dong; Hu, Yi-Fan; Cui, Peng-Shuai; Zhang, Yan

    2017-10-21

    A cyber-physical attack in the industrial Internet of Things can cause severe damage to the physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators like programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different types of command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to the wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We have designed the components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate the different levels of impact of the various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework.

  4. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10 min time step is often necessary. Such information is not always available. Rainfall disaggregation models thus have to be applied to produce that short-time-resolution information from coarser rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10 min rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10 min rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated using different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory, as well as the performance of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are, in increasing order of complexity: constant model, linear model (Ben Haha, 2001), Ormsbee Deterministic model (Ormsbee, 1989), Artificial Neural Network based model (Burian et al., 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), Multiplicative Cascade based model (Olsson and Berndtsson, 1998), and Ormsbee Stochastic model (Ormsbee, 1989). The 625 rainy hours used for the evaluation (with an hourly rainfall amount greater than 5 mm) were extracted from the 21-year chronological rainfall series (10 min time step) observed at the Pully meteorological station, Switzerland. The models were also evaluated when applied to different rainfall classes depending on the season first and on the…
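
    For concreteness, the baseline constant model, and one plausible mass-conserving linear variant, can be written in a few lines; the linear weighting here is an illustrative guess, not Ben Haha's formulation.

```python
import numpy as np

def constant_model(hourly_mm):
    """Baseline: split the hourly depth into six equal 10 min amounts."""
    return np.full(6, hourly_mm / 6.0)

def linear_model(prev_mm, hourly_mm, next_mm):
    """Illustrative linear variant: tilt the six 10 min weights toward the
    wetter neighbouring hour while conserving the hourly total."""
    w = np.linspace(prev_mm + 1e-9, next_mm + 1e-9, 6)
    return hourly_mm * w / w.sum()

print(constant_model(9.0))                    # six values of 1.5 mm
print(linear_model(2.0, 9.0, 6.0).round(2))   # rising within the hour
```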

  5. SU-F-P-39: End-To-End Validation of a 6 MV High Dose Rate Photon Beam, Configured for Eclipse AAA Algorithm Using Golden Beam Data, for SBRT Treatments Using RapidArc

    Energy Technology Data Exchange (ETDEWEB)

    Ferreyra, M; Salinas Aranda, F; Dodat, D; Sansogne, R; Arbiser, S [Vidt Centro Medico, Ciudad Autonoma de Buenos Aires (Argentina)]

    2016-06-15

    Purpose: To use end-to-end testing to validate a 6 MV high dose rate photon beam, configured for the Eclipse AAA algorithm using Golden Beam Data (GBD), for SBRT treatments using RapidArc. Methods: Beam data were configured for the Varian Eclipse AAA algorithm using the GBD provided by the vendor. Transverse and diagonal dose profiles, PDDs and output factors down to a field size of 2 × 2 cm² were measured on a Varian Trilogy linac and compared with the GBD library using 2%/2 mm 1D gamma analysis. The MLC transmission factor and dosimetric leaf gap were determined to characterize the MLC in Eclipse. Mechanical and dosimetric tests were performed combining different gantry rotation speeds, dose rates and leaf speeds to evaluate the delivery system performance against VMAT accuracy requirements. An end-to-end test was implemented by planning several SBRT RapidArc treatments on a CIRS 002LFC IMRT Thorax Phantom. The CT scanner calibration curve was acquired and loaded in Eclipse. A PTW 31013 ionization chamber was used with a Keithley 35617EBS electrometer for absolute point dose measurements in water and lung-equivalent inserts. TPS-calculated planar dose distributions were compared to those measured using the EPID and MapCheck, as an independent verification method. Results were evaluated with gamma criteria of 2% dose difference and 2 mm DTA for 95% of points. Results: The GBD set vs. measured data passed 2%/2 mm 1D gamma analysis even for small fields. Machine performance tests showed that results are independent of the machine delivery configuration, as expected. Absolute point dosimetry comparison agreed within 4% for the worst-case scenario in lung. Over 97% of the points evaluated in the dose distributions passed the gamma index analysis. Conclusion: The Eclipse AAA configuration of the 6 MV high dose rate photon beam using GBD proved efficient. End-to-end test dose calculation results indicate it can be used clinically for SBRT using RapidArc.
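
    The recurring 2%/2 mm comparison can be sketched as a simple 1D gamma analysis with global normalization; the dose profiles below are hypothetical stand-ins.

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions_mm,
                    dose_tol=0.02, dta_mm=2.0):
    """1D gamma analysis: for each reference point, minimize over the
    evaluated profile sqrt((dDose/tol)^2 + (dist/DTA)^2); pass if <= 1."""
    norm = dose_tol * ref_dose.max()       # global normalization (2% of max)
    gammas = []
    for x_r, d_r in zip(positions_mm, ref_dose):
        dd = (eval_dose - d_r) / norm
        dx = (positions_mm - x_r) / dta_mm
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

x = np.arange(0.0, 100.0, 1.0)             # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)    # hypothetical dose profile
meas = 1.01 * ref                          # 1% global scaling error
print(f"{gamma_pass_rate(ref, meas, x):.1f}% of points pass 2%/2 mm")
```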

  6. Characteristics and Performance of Existing Load Disaggregation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Mayhorn, Ebony T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sullivan, Greg P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Butner, Ryan S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hao, He [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Baechler, Michael C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-04-10

    Non-intrusive load monitoring (NILM) or non-intrusive appliance load monitoring (NIALM) is an analytic approach to disaggregate building loads based on a single metering point. This advanced load monitoring and disaggregation technique has the potential to provide an alternative to high-priced traditional sub-metering and to enable innovative approaches for energy conservation, energy efficiency, and demand response. However, since the inception of the concept in the 1980s, evaluations of these technologies have focused on reporting performance accuracy without investigating the sources of inaccuracies or fully understanding and articulating the meaning of the metrics used to quantify performance. As a result, the market for, as well as advances in, these technologies has been slow to mature. To improve the market for NILM technologies, there has to be confidence that deployment will lead to benefits. In reality, not every end-user and application that this technology may enable requires the highest levels of performance accuracy to produce benefits. Also, there are other important characteristics that need to be considered, which may affect the appeal of NILM products to certain market targets (i.e. residential and commercial building consumers) and their suitability for particular applications. These characteristics include the following: 1) ease of use, the level of expertise/bandwidth required to properly use the product; 2) ease of installation, the level of expertise required to install the product along with hardware needs that impact product cost; and 3) ability to inform decisions and actions, whether the energy outputs received by end-users (e.g. third-party applications, residential users, building operators, etc.) empower decisions and actions to be taken at the time frames required for certain applications. Therefore, stakeholders, researchers, and other interested parties should be kept abreast of the evolving capabilities, uses, and characteristics…

  7. Technology data characterizing refrigeration in commercial buildings: Application to end-use forecasting with COMMEND 4.0

    Energy Technology Data Exchange (ETDEWEB)

    Sezgen, O.; Koomey, J.G.

    1995-12-01

    In the United States, energy consumption is increasing most rapidly in the commercial sector. Consequently, the commercial sector is becoming an increasingly important target for state and federal energy policies and for utility-sponsored demand-side management (DSM) programs. The rapid growth in commercial-sector energy consumption also makes it important for analysts working on energy policy and DSM issues to have access to energy end-use forecasting models that include more detailed representations of energy-using technologies in the commercial sector. These new forecasting models disaggregate energy consumption not only by fuel type, end use, and building type, but also by specific technology. The disaggregation of the refrigeration end use in terms of specific technologies, however, is complicated by several factors. First, the number of configurations of refrigeration cases and systems is quite large. Also, energy use is a complex function of the refrigeration-case properties and the refrigeration-system properties. The Electric Power Research Institute's (EPRI's) Commercial End-Use Planning System (COMMEND 4.0) and the associated data development presented in this report attempt to address the above complications and create a consistent forecasting framework. Expanding end-use forecasting models so that they address individual technology options requires characterization of the present floorstock in terms of service requirements, energy technologies used, and the cost-efficiency attributes of the energy technologies that consumers may choose for new buildings and retrofits. This report describes the process by which we collected refrigeration technology data. The data were generated for COMMEND 4.0 but are also generally applicable to other end-use forecasting frameworks for the commercial sector.

  8. India Energy Outlook: End Use Demand in India to 2020

    Energy Technology Data Exchange (ETDEWEB)

    de la Rue du Can, Stephane; McNeil, Michael; Sathaye, Jayant

    2009-03-30

    Integrated economic models have been used to project both baseline and mitigation greenhouse gas emissions scenarios at the country and global levels. Results of these scenarios are typically presented at the sectoral level, such as industry, transport, and buildings, without further disaggregation. Recently, a keen interest has emerged in constructing bottom-up scenarios where technical energy saving potentials can be displayed in detail (IEA, 2006b; IPCC, 2007; McKinsey, 2007). Analysts interested in particular technologies and policies require detailed information to understand specific mitigation options in relation to business-as-usual trends. However, the limited information available for developing countries often poses a problem. In this report, we focus on analyzing energy use in India in greater detail. Results shown for the residential and transport sectors are taken from a previous report (de la Rue du Can, 2008). A complete picture of energy use at disaggregated levels is drawn to understand how energy is used in India and to put the different sources of end-use energy consumption in perspective. For each sector, drivers of energy use and technology are identified. Trends are then analyzed and used to project future growth. The results of this report provide valuable inputs to the elaboration of realistic energy efficiency scenarios.

  9. Effect of natural antioxidants on the aggregation and disaggregation ...

    African Journals Online (AJOL)

    Conclusion: High antioxidant activities were positively correlated with the inhibition of Aβ aggregation, although not with the disaggregation of pre-formed Aβ aggregates. Nevertheless, potent antioxidants may be helpful in treating Alzheimer's disease. Keywords: Alzheimer's disease, β-Amyloid, Aggregation, Disaggregation ...

  10. Smart Metering and Water End-Use Data: Conservation Benefits and Privacy Risks

    Directory of Open Access Journals (Sweden)

    Damien P. Giurco

    2010-08-01

    Smart metering technology for residential buildings is being trialed and rolled out by water utilities to assist with improved urban water management in a future affected by climate change. The technology can provide near real-time monitoring of where water is used in the home, disaggregated by end use (shower, toilet, clothes washing, garden irrigation, etc.). This paper explores questions regarding the degree of information detail required to assist utilities in targeting demand management programs and informing customers of their usage patterns, whilst ensuring the privacy concerns of residents are upheld.

  11. End User Perceptual Distorted Scenes Enhancement Algorithm Using Partition-Based Local Color Values for QoE-Guaranteed IPTV

    Science.gov (United States)

    Kim, Jinsul

    In this letter, we propose a distorted-scene enhancement algorithm to provide an end-user perceptual QoE-guaranteed IPTV service. Block edge detection with a weight factor and a partition-based local color values method can be applied to degraded video frames affected by network transmission errors, such as out-of-order delivery, jitter, and packet loss, to improve QoE efficiently. Quality metric results after applying the proposed enhancement algorithm show that the distorted scenes are restored better than with other methods.

  12. Disaggregated Futures-Only Commitments of Traders

    Data.gov (United States)

    Commodity Futures Trading Commission — The Disaggregated Futures-Only Commitments of Traders dataset provides a breakdown of each week's open interest for agriculture, energy, metals, lumber, and...

  13. Specific effect of the linear charge density of the acid polysaccharide on thermal aggregation/ disaggregation processes in complex carrageenan/lysozyme systems

    NARCIS (Netherlands)

    Antonov, Y.; Zhuravleva, I.; Cardinaels, R.M.; Moldenaers, P.

    2017-01-01

    We study thermal aggregation and disaggregation processes in complex carrageenan/lysozyme systems with a different linear charge density of the sulphated polysaccharide. To this end, we determine the temperature dependency of the turbidity and the intensity size distribution functions in complex…

  14. Localization of SDGs through Disaggregation of KPIs

    Directory of Open Access Journals (Sweden)

    Manohar Patole

    2018-03-01

    Full Text Available The United Nation’s Agenda 2030 and Sustainable Development Goals (SDGs pick up where the Millennium Development Goals (MDGs left off. The SDGs set forth a formidable task for the global community and international sustainable development over the next 15 years. Learning from the successes and failures of the MDGs, government officials, development experts, and many other groups understood that localization is necessary to accomplish the SDGs but how and what to localize remain as questions to be answered. The UN Inter-Agency and Expert Group on Sustainable Development Goals (UN IAEG-SDGs sought to answer these questions through development of metadata behind the 17 goals, 169 associated targets and corresponding indicators of the SDGs. Data management is key to understanding how and what to localize, but, to do it properly, the data and metadata needs to be properly disaggregated. This paper reviews the utilization of disaggregation analysis for localization and demonstrates the process of identifying opportunities for subnational interventions to achieve multiple targets and indicators through the formation of new integrated key performance indicators. A case study on SDG 6: Clean Water and Sanitation is used to elucidate these points. The examples presented here are only illustrative—future research and the development of an analytical framework for localization and disaggregation of the SDGs would be a valuable tool for national and local governments, implementing partners and other interested parties.

  15. Cellular Handling of Protein Aggregates by Disaggregation Machines.

    Science.gov (United States)

    Mogk, Axel; Bukau, Bernd; Kampinga, Harm H

    2018-01-18

    Both acute proteotoxic stresses that unfold proteins and expression of disease-causing mutant proteins that expose aggregation-prone regions can promote protein aggregation. Protein aggregates can interfere with cellular processes and deplete factors crucial for protein homeostasis. To cope with these challenges, cells are equipped with diverse folding and degradation activities to rescue or eliminate aggregated proteins. Here, we review the different chaperone disaggregation machines and their mechanisms of action. In all these machines, the coating of protein aggregates by Hsp70 chaperones represents the conserved, initializing step. In bacteria, fungi, and plants, Hsp70 recruits and activates Hsp100 disaggregases to extract aggregated proteins. In the cytosol of metazoa, Hsp70 is empowered by a specific cast of J-protein and Hsp110 co-chaperones allowing for standalone disaggregation activity. Both types of disaggregation machines are supported by small Hsps that sequester misfolded proteins.

  16. An economic analysis of disaggregation of space assets: Application to GPS

    Science.gov (United States)

    Hastings, Daniel E.; La Tour, Paul A.

    2017-05-01

    New ideas, technologies and architectural concepts are emerging with the potential to reshape the space enterprise. One of those new architectural concepts is the idea that rather than aggregating payloads onto large very high performance buses, space architectures should be disaggregated with smaller numbers of payloads (as small as one) per bus and the space capabilities spread across a correspondingly larger number of systems. The primary rationale is increased survivability and resilience. The concept of disaggregation is examined from an acquisition cost perspective. A mixed system dynamics and trade space exploration model is developed to look at long-term trends in the space acquisition business. The model is used to examine the question of how different disaggregated GPS architectures compare in cost to the well-known current GPS architecture. A generation-over-generation examination of policy choices is made possible through the application of soft systems modeling of experience and learning effects. The assumptions that are allowed to vary are: design lives, production quantities, non-recurring engineering and time between generations. The model shows that there is always a premium in the first generation to be paid to disaggregate the GPS payloads. However, it is possible to construct survivable architectures where the premium after two generations is relatively low.

  17. Combination of Rivest-Shamir-Adleman Algorithm and End of File Method for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Amalia, Amalia; Elviwani

    2018-03-01

    Data security is one of the crucial issues in the delivery of information. One way to secure data is to encode it into something that is not comprehensible to human beings using cryptographic techniques. The Rivest-Shamir-Adleman (RSA) cryptographic algorithm has been proven robust for securing messages. Since this algorithm uses two different keys (i.e., a public key and a private key) at the time of encryption and decryption, it is classified as an asymmetric cryptographic algorithm. Steganography is a method used to secure a message by inserting the bits of the message into a larger medium such as an image. One of the known steganography methods is End of File (EoF). In this research, the ciphertext resulting from the RSA algorithm is compiled into an array and appended to the end of the image. The result of the EoF method is an image with a line of black gradations beneath it; this line contains the secret message. This combination of cryptography and steganography is expected to increase the security of the message, since the message encryption technique (RSA) is combined with the data hiding technique (EoF).
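
    A self-contained toy of the combination is sketched below: textbook RSA with tiny primes, and the ciphertext appended after an EOF marker so that image viewers ignore the trailing bytes. The marker and key sizes are illustrative only; a real system would use a vetted cryptography library with proper padding.

```python
# Toy RSA + End-of-File steganography: encrypt a message with textbook RSA
# (tiny illustrative primes; no padding), then append the ciphertext after
# the image's normal end, where viewers ignore trailing bytes.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)                        # modular inverse (Python 3.8+)

MARKER = b"::EOF::"                        # illustrative separator

def rsa_encrypt(msg):
    return [pow(b, e, n) for b in msg]     # per-byte, since each byte < n

def rsa_decrypt(cipher):
    return bytes(pow(c, d, n) for c in cipher)

def embed(image_path, out_path, msg):
    payload = MARKER + b",".join(str(c).encode() for c in rsa_encrypt(msg))
    with open(image_path, "rb") as f:
        data = f.read()
    with open(out_path, "wb") as f:
        f.write(data + payload)

def extract(stego_path):
    with open(stego_path, "rb") as f:
        data = f.read()
    cipher = [int(tok) for tok in data.split(MARKER)[-1].split(b",")]
    return rsa_decrypt(cipher)
```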

  18. Disaggregating radar-derived rainfall measurements in East Azarbaijan, Iran, using a spatial random-cascade model

    Science.gov (United States)

    Fouladi Osgouei, Hojjatollah; Zarghami, Mahdi; Ashouri, Hamed

    2017-07-01

    The availability of spatial, high-resolution rainfall data is one of the most essential needs in the study of water resources. These data are extremely valuable in providing flood awareness for dense urban and industrial areas. The first part of this paper applies an optimization-based method to the calibration of radar data against ground rainfall gauges. Then, the climatological Z-R relationship for the Sahand radar, located in the East Azarbaijan province of Iran, is obtained with the help of three adjacent rainfall stations. The new climatological Z-R relationship, with a power-law form, shows acceptable statistical performance, making it suitable for radar-rainfall estimation from the Sahand radar outputs. The second part of the study develops a new heterogeneous random-cascade model for spatially disaggregating the rainfall data resulting from the power-law model. This model is applied to the radar-rainfall image data to disaggregate rainfall data with a coverage area of 512 × 512 km² to a resolution of 32 × 32 km². Results show that the proposed model has a good ability to disaggregate rainfall data, which may lead to improvements in precipitation forecasting, and ultimately better water-resources management in this arid region, including Urmia Lake.
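
    A homogeneous multiplicative random cascade (simpler than the heterogeneous model developed in the paper) conveys the mechanics: each cell splits into 2 × 2 children whose random weights are renormalized so that the parent mean is conserved, and four halvings take 512 km down to 32 km. Parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def cascade_disaggregate(field, levels=4, sigma=0.6):
    """Disaggregate a rainfall field with a multiplicative random cascade:
    each cell splits into 2x2 children whose lognormal weights are
    renormalized per block so the parent mean depth is conserved."""
    for _ in range(levels):                # 512 km -> 32 km is 4 halvings
        ny, nx = field.shape
        child = np.repeat(np.repeat(field, 2, axis=0), 2, axis=1)
        w = rng.lognormal(mean=0.0, sigma=sigma, size=child.shape)
        block_sum = w.reshape(ny, 2, nx, 2).sum(axis=(1, 3))
        w /= np.repeat(np.repeat(block_sum, 2, axis=0), 2, axis=1)
        field = 4.0 * child * w            # block mean equals parent value
    return field

coarse = np.array([[20.0]])                # one 512 km cell, mean depth in mm
fine = cascade_disaggregate(coarse)        # 16 x 16 grid of 32 km cells
print(fine.shape, round(float(fine.mean()), 6))  # (16, 16) 20.0, conserved
```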

  19. Disaggregated Futures and Options Commitments of Traders

    Data.gov (United States)

    Commodity Futures Trading Commission — The Disaggregated Futures and Options Commitments of Traders dataset provides a breakdown of each week's open interest for agriculture, energy, metals, lumber, and...

  20. Erosion of atmospherically deposited radionuclides as affected by soil disaggregation mechanisms

    International Nuclear Information System (INIS)

    Claval, D.; Garcia-Sanchez, L.; Real, J.; Rouxel, R.; Mauger, S.; Sellier, L.

    2004-01-01

    The interactions of soil disaggregation with radionuclide erosion were studied under controlled conditions in the laboratory on samples from a loamy silty-sandy soil. The fate of 134Cs and 85Sr was monitored on soil aggregates and on small plots, with time resolution ranging from minutes to hours after contamination. Analytical experiments reproducing disaggregation mechanisms on aggregates showed that disaggregation controls both erosion and sorption. Compared to differential swelling, air explosion mobilized the most material by producing finer particles and increasing sorption five-fold. For all the mechanisms studied, a significant part of the contamination was still unsorbed on the aggregates after an hour. Global experiments on contaminated sloping plots submitted to artificial rainfall showed radionuclide erosion fluctuations and their origin. Wet radionuclide deposition increased short-term erosion by 50% compared to dry deposition. A developed soil crust, when contaminated, decreased radionuclide erosion by a factor of 2 compared to other initial soil states. These erosion fluctuations were more significant for 134Cs than for 85Sr, which is known to have a better affinity for the soil matrix. These findings confirm the role of disaggregation in radionuclide erosion. Our data support a conceptual model of radionuclide erosion at the small-plot scale in two steps: (1) radionuclide non-equilibrium sorption on mobile particles, resulting from simultaneous sorption and disaggregation during wet deposition, and (2) later radionuclide transport by runoff with suspended matter.

  1. Disaggregate energy consumption and industrial production in South Africa

    International Nuclear Information System (INIS)

    Ziramba, Emmanuel

    2009-01-01

    This paper assesses the relationship between disaggregate energy consumption and industrial output in South Africa by undertaking a cointegration analysis using annual data from 1980 to 2005. We also investigate the causal relationships between the various disaggregate forms of energy consumption and industrial production. Our results imply that industrial production and employment are long-run forcing variables for electricity consumption. Applying the [Toda, H.Y., Yamamoto, T., 1995. Statistical inference in vector autoregressions with possibly integrated processes. Journal of Econometrics 66, 225-250] technique to Granger causality, we find bi-directional causality between oil consumption and industrial production. For the other forms of energy consumption, there is evidence in support of the energy neutrality hypothesis. There is also evidence of causality between employment and electricity consumption, as well as of coal consumption causing employment.

  2. Photoinduced disaggregation of TiO₂ nanoparticles enables transdermal penetration.

    Directory of Open Access Journals (Sweden)

    Samuel W Bennett

    Under many aqueous conditions, metal oxide nanoparticles attract other nanoparticles and grow into fractal aggregates as the result of a balance between electrostatic and van der Waals interactions. Although particle coagulation has been studied for over a century, the effect of light on the state of aggregation is not well understood. Since nanoparticle mobility and toxicity have been shown to be a function of aggregate size, and generally increase as size decreases, photo-induced disaggregation may have significant effects. We show that ambient light and other light sources can partially disaggregate nanoparticles from the aggregates and increase the dermal transport of nanoparticles, such that small nanoparticle clusters can readily diffuse into and through the dermal profile, likely via the interstitial spaces. The discovery of photoinduced disaggregation presents a new phenomenon that has not been previously reported or considered in coagulation theory or transdermal toxicological paradigms. Our results show that after just a few minutes of light, the hydrodynamic diameter of TiO₂ aggregates is reduced from ∼280 nm to ∼230 nm. We exposed pigskin to the nanoparticle suspension and found 200 mg kg⁻¹ of TiO₂ for skin that was exposed to nanoparticles in the presence of natural sunlight and only 75 mg kg⁻¹ for skin exposed to dark conditions, indicating the influence of light on NP penetration. These results suggest that photoinduced disaggregation may have important health implications.

  3. Development of an Asset Value Map for Disaster Risk Assessment in China by Spatial Disaggregation Using Ancillary Remote Sensing Data.

    Science.gov (United States)

    Wu, Jidong; Li, Ying; Li, Ning; Shi, Peijun

    2018-01-01

    The extent of economic losses due to a natural hazard and disaster depends largely on the spatial distribution of asset values in relation to the hazard intensity distribution within the affected area. Given that statistical data on asset value are collected by administrative units in China, generating spatially explicit asset exposure maps remains a key challenge for rapid post-disaster economic loss assessment. The goal of this study is to introduce a top-down (or downscaling) approach to disaggregate administrative-unit-level asset values to the grid-cell level. To do so, finding highly correlated "surrogate" indicators is key. A combination of three data sets (a nighttime light grid, the LandScan population grid, and a road density grid) is used as ancillary asset density information for spatializing the asset value. As a result, a high-spatial-resolution asset value map of China for 2015 is generated. The spatial data set contains aggregated economic value at risk at 30 arc-second resolution. The accuracy of the spatial disaggregation reflects redistribution errors introduced by the disaggregation process as well as errors from the original ancillary data sets. The overall accuracy of the results proves to be promising. The example of using the developed disaggregated asset value map in the exposure assessment of watersheds demonstrates that the data set offers immense analytical flexibility for overlay analysis according to the hazard extent. This product will help current efforts to analyze the spatial characteristics of exposure and to uncover the contributions of both physical and social drivers of natural hazards and disasters across space and time.
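
    The top-down idea reduces to weighting grid cells by a combination of the ancillary layers and distributing each administrative unit's total proportionally. The sketch below uses hypothetical layer values and weights; the study's actual combination of the three data sets may differ.

```python
import numpy as np

def disaggregate_assets(unit_total, nightlight, population, road_density,
                        weights=(0.4, 0.4, 0.2)):
    """Spread an administrative unit's asset value over its grid cells in
    proportion to a weighted blend of min-max normalized ancillary layers."""
    norm = []
    for layer in (nightlight, population, road_density):
        span = layer.max() - layer.min()
        norm.append((layer - layer.min()) / span if span > 0
                    else np.ones_like(layer))
    score = sum(w * l for w, l in zip(weights, norm)) + 1e-12
    return unit_total * score / score.sum()

# Six hypothetical cells inside one administrative unit.
nl = np.array([5.0, 40.0, 60.0, 10.0, 0.0, 3.0])      # nighttime light
pop = np.array([120.0, 800.0, 950.0, 200.0, 10.0, 60.0])
road = np.array([0.2, 1.5, 2.0, 0.4, 0.0, 0.1])       # km per km^2
print(disaggregate_assets(1e9, nl, pop, road).round(0))  # sums to 1e9
```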

  4. Evolution of an intricate J-protein network driving protein disaggregation in eukaryotes.

    Science.gov (United States)

    Nillegoda, Nadinath B; Stank, Antonia; Malinverni, Duccio; Alberts, Niels; Szlachcic, Anna; Barducci, Alessandro; De Los Rios, Paolo; Wade, Rebecca C; Bukau, Bernd

    2017-05-15

    Hsp70 participates in a broad spectrum of protein folding processes extending from nascent chain folding to protein disaggregation. This versatility in function is achieved through a diverse family of J-protein cochaperones that select substrates for Hsp70. Substrate selection is further tuned by transient complexation between different classes of J-proteins, which expands the range of protein aggregates targeted by metazoan Hsp70 for disaggregation. We assessed the prevalence and evolutionary conservation of J-protein complexation and cooperation in disaggregation. We find the emergence of a eukaryote-specific signature for interclass complexation of canonical J-proteins. Consistently, complexes exist in yeast and human cells, but not in bacteria, and correlate with cooperative action in disaggregation in vitro. Signature alterations exclude some J-proteins from networking, which ensures correct J-protein pairing, functional network integrity and J-protein specialization. This fundamental change in J-protein biology during the prokaryote-to-eukaryote transition allows for increased fine-tuning and broadening of Hsp70 function in eukaryotes.

  5. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
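One plausible reading of "modelling the anomalies in rank-space" is to store the training-period multiplicative anomalies sorted by the magnitude of the disaggregated value, and to look up the anomaly by the rank of the value being corrected instead of applying a temporal mean. A hedged sketch for a single grid cell (the paper's exact scheme may differ; all data below are synthetic):

```python
import numpy as np

def rank_based_correction(disagg_train, obs_train, disagg_now):
    """Correct one disaggregated value using a rank-indexed multiplicative
    anomaly (observed / disaggregated) from the training climatology."""
    anomalies = np.where(disagg_train > 0,
                         obs_train / np.maximum(disagg_train, 1e-9), 1.0)
    sorted_anom = anomalies[np.argsort(disagg_train)]   # anomalies by rank
    rank = np.searchsorted(np.sort(disagg_train), disagg_now)
    rank = min(rank, len(sorted_anom) - 1)
    return disagg_now * sorted_anom[rank]

rng = np.random.default_rng(1)
coarse = rng.gamma(2.0, 5.0, size=300)           # disaggregated training series
fine = coarse * rng.lognormal(0.0, 0.3, 300)     # synthetic fine-scale "obs"
corrected = rank_based_correction(coarse, fine, disagg_now=12.5)
```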

  6. Disaggregate energy consumption and industrial production in South Africa

    Energy Technology Data Exchange (ETDEWEB)

    Ziramba, Emmanuel [Department of Economics, University of South Africa, P.O Box 392, UNISA 0003 (South Africa)

    2009-06-15

    This paper tries to assess the relationship between disaggregate energy consumption and industrial output in South Africa by undertaking a cointegration analysis using annual data from 1980 to 2005. We also investigate the causal relationships between the various disaggregate forms of energy consumption and industrial production. Our results imply that industrial production and employment are long-run forcing variables for electricity consumption. Applying the [Toda, H.Y., Yamamoto, T., 1995. Statistical inference in vector autoregressions with possibly integrated processes. Journal of Econometrics 66, 225-250] technique to Granger-causality, we find bi-directional causality between oil consumption and industrial production. For the other forms of energy consumption, there is evidence in support of the energy neutrality hypothesis. There is also evidence of causality between employment and electricity consumption as well as coal consumption causing employment. (author)
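The causality machinery referenced here can be illustrated with a standard Granger test; the Toda-Yamamoto variant additionally augments the VAR with d_max extra lags in levels and restricts the Wald test to the first p lags. A sketch on synthetic series (the paper's data are not reproduced):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 40                                    # hypothetical annual observations
oil = rng.normal(size=n).cumsum()         # stand-in for oil consumption
output = 0.5 * np.roll(oil, 1) + rng.normal(size=n)  # lagged dependence

data = pd.DataFrame({"industrial_output": output, "oil_consumption": oil})
# Tests whether the second column Granger-causes the first; prints the
# F and chi-square statistics for each lag up to maxlag.
results = grangercausalitytests(data, maxlag=2)
```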

  7. Automatic detection of end-diastole and end-systole from echocardiography images using manifold learning

    International Nuclear Information System (INIS)

    Gifani, Parisa; Behnam, Hamid; Shalbaf, Ahmad; Sani, Zahra Alizadeh

    2010-01-01

The automatic detection of end-diastole and end-systole frames of echocardiography images is the first step for calculation of the ejection fraction, stroke volume and some other features related to heart motion abnormalities. In this paper, the manifold learning algorithm is applied on 2D echocardiography images to find out the relationship between the frames of one cycle of heart motion. By this approach, the nonlinear information embedded in sequential images is represented in a two-dimensional manifold by the locally linear embedding (LLE) algorithm, and each image is depicted by a point on the reconstructed manifold. There are three dense regions on the manifold which correspond to the three phases of the cardiac cycle ('isovolumetric contraction', 'isovolumetric relaxation', 'reduced filling'), wherein there is no prominent change in ventricular volume. Since the end-systolic and end-diastolic frames lie in the isovolumic phases of the cardiac cycle, the dense regions can be used to find these frames. By calculating the distance between consecutive points on the manifold, the isovolumic frames are mapped onto the three minimums of the distance diagrams, which were used to select the corresponding images. The minimum correlation between these images leads to detection of end-systole and end-diastole frames. The results on six healthy volunteers have been validated by an experienced echocardiologist and depict the usefulness of the presented method.
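A schematic reconstruction of this pipeline, assuming scikit-learn's LLE as the embedding step: embed the frames, measure consecutive-point distances on the manifold, and take the smallest steps as isovolumic-phase candidates. The array shapes and neighbor count are assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def isovolumic_candidates(frames, n_neighbors=8):
    """Embed one cardiac cycle of frames (n_frames, h, w) with LLE and
    return the frame indices with the smallest manifold step, i.e. the
    dense regions where end-systole/end-diastole are then searched."""
    X = frames.reshape(len(frames), -1).astype(float)
    emb = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                 n_components=2).fit_transform(X)
    step = np.linalg.norm(np.diff(emb, axis=0), axis=1)
    return np.argsort(step)[:3]

frames = np.random.default_rng(3).random((30, 16, 16))  # stand-in cine loop
print(isovolumic_candidates(frames))
```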

  8. T-wave end detection using neural networks and Support Vector Machines.

    Science.gov (United States)

    Suárez-León, Alexander Alexeis; Varon, Carolina; Willems, Rik; Van Huffel, Sabine; Vázquez-Seisdedos, Carlos Román

    2018-05-01

In this paper we propose a new approach for detecting the end of the T-wave in the electrocardiogram (ECG) using Neural Networks and Support Vector Machines. Both multilayer perceptron (MLP) neural networks and Fixed-Size Least-Squares Support Vector Machines (FS-LSSVM) were used as regression algorithms to determine the end of the T-wave. Different strategies for selecting the training set, such as random selection, k-means, robust clustering and maximum quadratic (Rényi) entropy, were evaluated. Individual parameters were tuned for each method during training and the results are given for the evaluation set. A comparison between the MLP and FS-LSSVM approaches was performed. Finally, a fair comparison of the FS-LSSVM method with other state-of-the-art algorithms for detecting the end of the T-wave was included. The experimental results show that FS-LSSVM approaches are more suitable as regression algorithms than MLP neural networks. Despite the small training sets used, the FS-LSSVM methods outperformed the state-of-the-art techniques. FS-LSSVM can be successfully used as a T-wave end detection algorithm in ECG even with small training set sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Commercial demand for energy: a disaggregated approach. [Model validation for 1970-1975; forecasting to 2000]

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, J.R.; Cohn, S.; Cope, J.; Johnson, W.S.

    1978-04-01

    This report describes the structure and forecasting accuracy of a disaggregated model of commercial energy use recently developed at Oak Ridge National Laboratory. The model forecasts annual commercial energy use by ten building types, five end uses, and four fuel types. Both economic (utilization rate, fuel choice, capital-energy substitution) and technological factors (equipment efficiency, thermal characteristics of buildings) are explicitly represented in the model. Model parameters are derived from engineering and econometric analysis. The model is then validated by simulating commercial energy use over the 1970--1975 time period. The model performs well both with respect to size of forecast error and ability to predict turning points. The model is then used to evaluate the energy-use implications of national commercial buildings standards based on the ASHRAE 90-75 recommendations. 10 figs., 12 tables, 14 refs.

  10. Spatial and temporal disaggregation of transport-related carbon dioxide emissions in Bogota - Colombia

    Science.gov (United States)

    Hernandez-Gonzalez, L. A.; Jimenez Pizarro, R.; Rojas, N. Y.

    2011-12-01

    As a result of rapid urbanization during the last 60 years, 75% of the Colombian population now lives in cities. Urban areas are net sources of greenhouse gases (GHG) and contribute significantly to national GHG emission inventories. The development of scientifically-sound GHG mitigation strategies require accurate GHG source and sink estimations. Disaggregated inventories are effective mitigation decision-making tools. The disaggregation process renders detailed information on the distribution of emissions by transport mode, and the resulting a priori emissions map allows for optimal definition of sites for GHG flux monitoring, either by eddy covariance or inverse modeling techniques. Fossil fuel use in transportation is a major source of carbon dioxide (CO2) in Bogota. We present estimates of CO2 emissions from road traffic in Bogota using the Intergovernmental Panel on Climate Change (IPCC) reference method, and a spatial and temporal disaggregation method. Aggregated CO2 emissions from mobile sources were estimated from monthly and annual fossil fuel (gasoline, diesel and compressed natural gas - CNG) consumption statistics, and estimations of bio-ethanol and bio-diesel use. Although bio-fuel CO2 emissions are considered balanced over annual (or multi-annual) agricultural cycles, we included them since CO2 generated by their combustion would be measurable by a net flux monitoring system. For the disaggregation methodology, we used information on Bogota's road network classification, mean travel speed and trip length for each vehicle category and road type. The CO2 emission factors were taken from recent in-road measurements for gasoline- and CNG-powered vehicles and also estimated from COPERT IV. We estimated emission factors for diesel from surveys on average trip length and fuel consumption. Using IPCC's reference method, we estimate Bogota's total transport-related CO2 emissions for 2008 (reference year) at 4.8 Tg CO2. The disaggregation method estimation is

  11. Long-run relationship between sectoral productivity and energy consumption in Malaysia: An aggregated and disaggregated viewpoint

    International Nuclear Information System (INIS)

    Rahman, Md Saifur; Junsheng, Ha; Shahari, Farihana; Aslam, Mohamed; Masud, Muhammad Mehedi; Banna, Hasanul; Liya, Ma

    2015-01-01

    This paper investigates the causal relationship between energy consumption and economic productivity in Malaysia at both aggregated and disaggregated levels. The investigation utilises total and sectoral (industrial and manufacturing) productivity growth during the 1971–2012 period using the modified Granger causality test proposed by Toda and Yamamoto [1] within a multivariate framework. The economy of Malaysia was found to be energy dependent at aggregated and disaggregated levels of national and sectoral economic growth. However, at the disaggregate level, inefficient energy use is particularly identified with electricity and coal consumption patterns and their Granger-caused negative effects upon GDP (Gross Domestic Product) and manufacturing growth. These findings suggest that policies should focus more on improving energy efficiency and energy saving. Furthermore, since emissions are found to have a close relationship to economic output at national and sectoral levels, green technologies are of the highest necessity. - Highlights: • At aggregate level, energy consumption significantly influences GDP (Gross Domestic Product). • At disaggregate level, electricity & coal consumption does not help output growth. • Mineral and waste are found to positively Granger cause GDP. • The results reveal strong interactions between emissions and economic growth.

  12. Disaggregating Assessment to Close the Loop and Improve Student Learning

    Science.gov (United States)

    Rawls, Janita; Hammons, Stacy

    2015-01-01

    This study examined student learning outcomes for accelerated degree students as compared to conventional undergraduate students, disaggregated by class levels, to develop strategies for closing the loop with assessment. Using the National Survey of Student Engagement, critical thinking and oral and written communication outcomes were…

  13. Aggregating and Disaggregating Flexibility Objects

    DEFF Research Database (Denmark)

    Siksnys, Laurynas; Valsomatzis, Emmanouil; Hose, Katja

    2015-01-01

    In many scientific and commercial domains we encounter flexibility objects, i.e., objects with explicit flexibilities in a time and an amount dimension (e.g., energy or product amount). Applications of flexibility objects require novel and efficient techniques capable of handling large amounts… and aiming at energy balancing during aggregation. In more detail, this paper considers the complete life cycle of flex-objects: aggregation, disaggregation, associated requirements, efficient incremental computation, and balance aggregation techniques. Extensive experiments based on real-world data from…

  14. Disaggregation of MODIS surface temperature over an agricultural area using a time series of Formosat-2 images

    OpenAIRE

    Merlin, O.; Duchemin, Benoit; Hagolle, O.; Jacob, Frédéric; Coudert, B.; Chehbouni, Abdelghani; Dedieu, G.; Garatuza, J.; Kerr, Yann

    2010-01-01

    The temporal frequency of the thermal data provided by current spaceborne high-resolution imagery systems is inadequate for agricultural applications. As an alternative to the lack of high-resolution observations, kilometric thermal data can be disaggregated using a green (photosynthetically active) vegetation index e.g. NDVI (Normalized Difference Vegetation Index) collected at high resolution. Nevertheless, this approach is only valid in the condition...

  15. Disaggregated energy consumption and GDP in Taiwan: A threshold co-integration analysis

    International Nuclear Information System (INIS)

    Hu, J.-L.; Lin, C.-H.

    2008-01-01

    Energy consumption growth is much higher than economic growth for Taiwan in recent years, worsening its energy efficiency. This paper provides a solid explanation by examining the equilibrium relationship between GDP and disaggregated energy consumption under a non-linear framework. The threshold co-integration test with asymmetric dynamic adjusting processes proposed by Hansen and Seo [Hansen, B.E., Seo, B., 2002. Testing for two-regime threshold cointegration in vector error-correction models. Journal of Econometrics 110, 293-318.] is applied. Non-linear co-integrations between GDP and disaggregated energy consumptions are confirmed except for oil consumption. The two-regime vector error-correction models (VECM) show that the adjustment process of energy consumption toward equilibrium is highly persistent when an appropriate threshold is reached. There is mean-reverting behavior when the threshold is reached, making aggregate and disaggregated energy consumptions grow faster than GDP in Taiwan.

  16. Disaggregating Orders of Water Scarcity - The Politics of Nexus in the Wami-Ruvu River Basin, Tanzania

    Directory of Open Access Journals (Sweden)

    Anna Mdee

    2017-02-01

    This article considers the dilemma of managing competing uses of surface water in ways that respond to social, ecological and economic needs. Current approaches to managing competing water use, such as Integrated Water Resources Management (IWRM) and the concept of the water-energy-food nexus, do not adequately disaggregate the political nature of water allocations. This is analysed using Mehta's (2014) framework on orders of scarcity to disaggregate narratives of water scarcity in two ethnographic case studies in the Wami-Ruvu River Basin in Tanzania: one of a mountain river that provides water to urban Morogoro, and another of a large donor-supported irrigation scheme on the Wami River. These case studies allow us to explore different interfaces in the food-water-energy nexus. The article makes two points: that disaggregating water scarcity is essential for analysing the nexus; and that current institutional frameworks (such as IWRM) mask the political nature of the nexus, and therefore do not provide an adequate platform for adjudicating the interfaces of competing water use.

  17. Evaluation of Techniques to Detect Significant Network Performance Problems using End-to-End Active Network Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cottrell, R.Les; Logg, Connie; Chhaparia, Mahesh; /SLAC; Grigoriev, Maxim; /Fermilab; Haro, Felipe; /Chile U., Catolica; Nazir, Fawad; /NUST, Rawalpindi; Sandford, Mark

    2006-01-25

    End-to-end fault and performance problem detection in wide area production networks is becoming increasingly hard as the complexity of the paths, the diversity of the performance, and dependency on the network increase. Several monitoring infrastructures are built to monitor different network metrics and collect monitoring information from thousands of hosts around the globe. Typically there are hundreds to thousands of time-series plots of network metrics which need to be looked at to identify network performance problems or anomalous variations in the traffic. Furthermore, most commercial products rely on a comparison with user-configured static thresholds and often require access to SNMP-MIB information, to which a typical end-user does not usually have access. In our paper we propose new techniques to detect network performance problems proactively in close to real time, without relying on static thresholds or SNMP-MIB information. We describe and compare the use of several different algorithms that we have implemented to detect persistent network problems using anomalous variation analysis in real end-to-end Internet performance measurements. We also provide methods and/or guidance for how to set the user-settable parameters. The measurements are based on active probes running on 40 production network paths with bottlenecks varying from 0.5 Mbit/s to 1000 Mbit/s. For well-behaved data (no missed measurements and no very large outliers) with small seasonal changes most algorithms identify similar events. We compare the algorithms' robustness with respect to false positives and missed events, especially when there are large seasonal effects in the data. Our proposed techniques cover a wide variety of network paths and traffic patterns. We also discuss the applicability of the algorithms in terms of their intuitiveness, their speed of execution as implemented, and areas of applicability. Our encouraging results compare and evaluate the accuracy of our…

  18. Equity in health care financing in Palestine: the value-added of the disaggregate approach.

    Science.gov (United States)

    Abu-Zaineh, Mohammad; Mataria, Awad; Luchini, Stéphane; Moatti, Jean-Paul

    2008-06-01

    This paper analyzes the redistributive effect and progressivity associated with the current health care financing schemes in the Occupied Palestinian Territory, using data from the first Palestinian Household Health Expenditure Survey conducted in 2004. The paper goes beyond the commonly used "aggregate summary index approach" to apply a more detailed "disaggregate approach". Such an approach is borrowed from the general economic literature on taxation, and examines redistributive and vertical effects over specific parts of the income distribution, using the dominance criterion. In addition, the paper employs a bootstrap method to test for the statistical significance of the inequality measures. While both the aggregate and disaggregate approaches confirm the pro-rich and regressive character of out-of-pocket payments, the aggregate approach does not ascertain the potential progressive feature of any of the available insurance schemes. The disaggregate approach, however, significantly reveals a progressive aspect, for over half of the population, of the government health insurance scheme, and demonstrates that the regressivity of the out-of-pocket payments is most pronounced among the worst-off classes of the population. Recommendations are advanced to improve the performance of the government insurance schemes to enhance its capacity in limiting inequalities in health care financing in the Occupied Palestinian Territory.

  19. Prediction of kharif rice yield at Kharagpur using disaggregated extended range rainfall forecasts

    Science.gov (United States)

    Dhekale, B. S.; Nageswararao, M. M.; Nair, Archana; Mohanty, U. C.; Swain, D. K.; Singh, K. K.; Arunbabu, T.

    2017-08-01

    The Extended Range Forecasts System (ERFS) has been generating monthly and seasonal forecasts on a real-time basis throughout the year over India since 2009. India is one of the major rice producers and consumers in South Asia; more than 50% of the Indian population depends on rice as a staple food. Rice is mainly grown in the kharif season, which contributed 84% of the total annual rice production of the country. Rice cultivation in India is largely rainfed and depends on the rains, so the reliability of the rainfall forecast plays a crucial role in planning the kharif rice crop. In the present study, an attempt has been made to test the reliability of seasonal and sub-seasonal ERFS summer monsoon rainfall forecasts for kharif rice yield predictions at Kharagpur, West Bengal, by using the CERES-Rice (DSSATv4.5) model. These ERFS forecasts are produced as monthly and seasonal mean values and are converted into daily sequences with stochastic weather generators for use with crop growth models. The daily sequences are generated from ERFS seasonal (June-September) and sub-seasonal (July-September, August-September, and September) summer monsoon rainfall forecasts, which are used as input to the CERES-Rice crop simulation model for crop yield prediction in hindcast (1985-2008) and real-time (2009-2015) modes. The yield simulated using India Meteorological Department (IMD) observed daily rainfall data is considered as the baseline yield for evaluating the performance of predicted yields using the ERFS forecasts. The findings revealed that stochastic disaggregation can be used to disaggregate the monthly/seasonal ERFS forecasts into daily sequences. The year-to-year variability in rice yield at Kharagpur is efficiently predicted by using the ERFS forecast products in hindcast as well as real time, and significant enhancement in the prediction skill is noticed with advancement in the season due to incorporation of observed weather data, which reduces uncertainty of…

  20. Disaggregating Qualitative Data from Asian American College Students in Campus Racial Climate Research and Assessment

    Science.gov (United States)

    Museus, Samuel D.; Truong, Kimberly A.

    2009-01-01

    This article highlights the utility of disaggregating qualitative research and assessment data on Asian American college students. Given the complexity of and diversity within the Asian American population, scholars have begun to underscore the importance of disaggregating data in the empirical examination of Asian Americans, but most of those…

  1. Carbon emissions, energy consumption and economic growth: An aggregate and disaggregate analysis of the Indian economy

    International Nuclear Information System (INIS)

    Ahmad, Ashfaq; Zhao, Yuhuan; Shahbaz, Muhammad; Bano, Sadia; Zhang, Zhonghua; Wang, Song; Liu, Ya

    2016-01-01

    This study investigates the long and short run relationships among carbon emissions, energy consumption and economic growth in India at the aggregated and disaggregated levels during 1971–2014. The autoregressive distributed lag model is employed for the cointegration analyses and the vector error correction model is applied to determine the direction of causality between variables. Results show that a long run cointegration relationship exists and that the environmental Kuznets curve is validated at the aggregated and disaggregated levels. Furthermore, energy (total energy, gas, oil, electricity and coal) consumption has a positive relationship with carbon emissions and a feedback effect exists between economic growth and carbon emissions. Thus, energy-efficient technologies should be used in domestic production to mitigate carbon emissions at the aggregated and disaggregated levels. The present study provides policy makers with new directions in drafting comprehensive policies with lasting impacts on the economy, energy consumption and environment towards sustainable development. - Highlights: •Relationships among carbon emissions, energy consumption and economic growth are investigated. •The EKC exists at aggregated and disaggregated levels for India. •All energy resources have positive effects on carbon emissions. •Gas energy consumption is less polluting than other energy sources in India.

  2. The use of continuous functions for a top-down temporal disaggregation of emission inventories

    International Nuclear Information System (INIS)

    Kalchmayr, M.; Orthofer, R.

    1997-11-01

    This report is a documentation of a presentation at the International Speciality Conference 'The Emission Inventory: Planning for the Future', October 28-30, 1997, in Research Triangle Park, North Carolina, USA. The Conference was organized by the Air and Waste Management Association (AWMA) and the U.S. Environmental Protection Agency. Emission data with high temporal resolution are necessary to analyze the relationship between emissions and their impacts. In many countries, however, emission inventories refer only to the annual countrywide emission sums, because the underlying data (traffic, energy, industry statistics) are available only for statistically relevant territorial units and for longer time periods. This paper describes a method for the temporal disaggregation of yearly emission sums through the application of continuous functions which simulate emission-generating activities. The temporal patterns of the activities are derived through an overlay of annual, weekly and diurnal variation functions which are based on statistical data of the relevant activities. If applied to annual emission data, these combined functions describe the dynamic patterns of emissions over the year. The main advantage of the continuous-functions method is that temporal emission patterns can be smoothed throughout one year, thus eliminating some of the major drawbacks of the traditional standardized fixed quota system. For handling in models, the continuous functions and their parameters can be included directly and the emission quota calculated directly for a certain hour of the year. The usefulness of the method is demonstrated with NMVOC emission data for Austria. Temporally disaggregated emission data can be used as input for ozone models as well as for visualization and animation of the emission dynamics. The analysis of the temporal dynamics of emission source strengths, e.g. during critical hours for ozone generation in summer, allows the implementation of efficient emission reduction…
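A toy rendering of the overlay idea: three continuous periodic activity profiles are multiplied, normalized to hourly quotas, and applied to an annual total. The functional forms, amplitudes, and the Monday-start week below are illustrative assumptions, not the report's fitted curves:

```python
import numpy as np

hours = np.arange(8760)
# Illustrative continuous activity profiles (not the report's fitted curves):
annual = 1 + 0.3 * np.sin(2 * np.pi * (hours / 8760 - 0.25))  # seasonal cycle
weekly = np.where((hours // 24) % 7 < 5, 1.1, 0.75)           # weekday/weekend
diurnal = 1 + 0.5 * np.sin(2 * np.pi * ((hours % 24) - 6) / 24)

profile = annual * weekly * diurnal
profile /= profile.sum()                  # hourly quotas sum to one

annual_emission = 120_000.0               # t NMVOC per year, hypothetical
hourly_emission = annual_emission * profile
assert np.isclose(hourly_emission.sum(), annual_emission)
```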

  3. Disaggregation of small, cohesive rubble pile asteroids due to YORP

    Science.gov (United States)

    Scheeres, D. J.

    2018-04-01

    The implication of small amounts of cohesion within relatively small rubble pile asteroids is investigated with regard to their evolution under the persistent presence of the YORP effect. We find that below a characteristic size, which is a function of cohesive strength, density and other properties, rubble pile asteroids can enter a "disaggregation phase" in which they are subject to repeated fissions after which the formation of a stabilizing binary system is not possible. Once this threshold is passed rubble pile asteroids may be disaggregated into their constituent components within a finite time span. These constituent components will have their own spin limits - albeit potentially at a much higher spin rate due to the greater strength of a monolithic body. The implications of this prediction are discussed and include modification of size distributions, prevalence of monolithic bodies among meteoroids and the lifetime of small rubble pile bodies in the solar system. The theory is then used to place constraints on the strength of binary asteroids characterized as a function of their type.

  4. Assessing a disaggregated energy input: using confidence intervals around translog elasticity estimates

    International Nuclear Information System (INIS)

    Hisnanick, J.J.; Kyer, B.L.

    1995-01-01

    The role of energy in the production of manufacturing output has been debated extensively in the literature, particularly its relationship with capital and labor. In an attempt to provide some clarification in this debate, a two-step methodology was used. First, under the assumption of a five-factor production function specification, we distinguished between electric and non-electric energy and assessed each component's relationship with capital and labor. Second, we calculated both the Allen and price elasticities and constructed 95% confidence intervals around these values. Our approach led to the following conclusions: that the disaggregation of the energy input into electric and non-electric energy is justified; that capital and electric energy and capital and non-electric energy are substitutes, while labor and electric energy and labor and non-electric energy are complements in production; and that capital and energy are substitutes, while labor and energy are complements. (author)

  5. A Bayes Theory-Based Modeling Algorithm to End-to-end Network Traffic

    Directory of Open Access Journals (Sweden)

    Zhao Hong-hao

    2016-01-01

    Recently, network traffic has been increasing exponentially due to all kinds of applications, such as the mobile Internet, smart cities, smart transportation, the Internet of Things, and so on, making end-to-end network traffic more important for traffic engineering. End-to-end traffic estimation is usually highly difficult. This paper proposes a Bayes theory-based method to model end-to-end network traffic. Firstly, the end-to-end network traffic is described as an independent identically distributed normal process. Then Bayes theory is used to characterize the end-to-end network traffic. By calculating the parameters, the model is determined correctly. Simulation results show that our approach is feasible and effective.
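For an i.i.d. normal traffic process, the Bayesian step reduces to a conjugate update of the mean. A minimal sketch assuming a known noise variance (the prior values and units below are ours, not the paper's):

```python
import numpy as np

def posterior_mean_params(traffic, prior_mean, prior_var, noise_var):
    """Conjugate normal update for the mean of an i.i.d. normal
    end-to-end traffic series with known noise variance."""
    n = len(traffic)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + traffic.sum() / noise_var)
    return post_mean, post_var

measured = np.random.default_rng(4).normal(100.0, 10.0, size=50)  # Mbit/s
mean, var = posterior_mean_params(measured, prior_mean=80.0,
                                  prior_var=400.0, noise_var=100.0)
```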

  6. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require the use of long rainfall records at fine time scales, varying from daily down to a 1-min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process reproduces adequately the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
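The adjusting procedures reconcile synthetic fine-scale depths with the observed coarser total; the proportional variant is the simplest. Sketched in Python here for consistency with the other examples, although the paper's implementation is the R package HyetosMinute:

```python
import numpy as np

def proportional_adjust(fine_depths, coarse_total):
    """Rescale synthetic fine-scale rainfall depths so they aggregate
    exactly to the observed coarser-scale total."""
    s = fine_depths.sum()
    return fine_depths if s == 0 else fine_depths * (coarse_total / s)

# e.g. twelve synthetic 5-min depths forced to match an hourly gauge value
synthetic = np.random.default_rng(5).exponential(0.4, size=12)
adjusted = proportional_adjust(synthetic, coarse_total=6.2)
assert np.isclose(adjusted.sum(), 6.2)
```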

  7. A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case

    Directory of Open Access Journals (Sweden)

    Chun-Wei Tsai

    2014-01-01

    This paper presents a simple but efficient algorithm for reducing the computation time of the genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis of the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA.
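The core observation, that genes shared by every individual can be frozen, amounts to a per-locus scan of the population. A minimal sketch (the binary genomes and names are ours; the paper applies the idea to TSP tours):

```python
def common_genes(population):
    """Return {position: allele} for loci where every individual agrees;
    these can be saved away to skip redundant work in later generations."""
    fixed = {}
    for i in range(len(population[0])):
        alleles = {individual[i] for individual in population}
        if len(alleles) == 1:
            fixed[i] = population[0][i]
    return fixed

population = [[1, 0, 1, 1], [1, 1, 1, 0], [1, 0, 1, 0]]
print(common_genes(population))   # {0: 1, 2: 1}
```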

  8. Probabilistic disaggregation of a spatial portfolio of exposure for natural hazard risk assessment

    DEFF Research Database (Denmark)

    Custer, Rocco; Nishijima, Kazuyoshi

    2018-01-01

    In natural hazard risk assessment, situations are encountered where information on the portfolio of exposure is only available in a spatially aggregated form, hindering a precise risk assessment. Recourse might be found in the spatial disaggregation of the portfolio of exposure to the resolution… of a portfolio of buildings in two communes in Switzerland, and the results are compared to sample observations. The relevance of probabilistic disaggregation uncertainty in natural hazard risk assessment is illustrated with the example of a simple flood risk assessment…

  9. Methodology for getting the end use of energy in the industrial sector from Parana State

    International Nuclear Information System (INIS)

    Haag Filho, A.

    1990-03-01

    A methodology is presented for a low-cost survey of energy use in the industrial sector of Parana state, aiming to supply data with the desired reliability and disaggregation. The obtained data shall provide elements for the adoption of short-term actions as well as serve as a basis for the elaboration of medium- and long-term scenarios. The survey shall be conducted throughout the state, comprising all fields of activity and having the following objectives: determine the state's energy consumption profile by industrial segment and by end use of energy; determine the state's energy profile with the spatial distribution of consumption; and detect the industrial segments which are more sensitive to energy substitution and/or energy conservation programs. (author)

  10. The Behaviour of Disaggregated Public Expenditures and Income in Malaysia

    OpenAIRE

    Tang, Chor-Foon; Lau, Evan

    2011-01-01

    The present study attempts to re-investigate the behaviour of disaggregated public expenditure data and national income for Malaysia. This study covers the sample period of annual data from 1960 to 2007. The Bartlett-corrected trace tests proposed by Johansen (2002) were used to ascertain the presence of a long-run equilibrium relationship between public expenditures and national income. The results show one cointegrating vector for each specification of public expenditures. The relatively new…

  11. End-to-End Airplane Detection Using Transfer Learning in Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Zhong Chen

    2018-01-01

    Airplane detection in remote sensing images remains a challenging problem due to the complexity of backgrounds. In recent years, with the development of deep learning, object detection has also achieved great breakthroughs. For object detection tasks in natural images, such as the PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning) VOC (Visual Object Classes) Challenge, the major trend of current development is to use a large amount of labeled classification data to pre-train a deep neural network as a base network, and then use a small amount of annotated detection data to fine-tune the network for detection. In this paper, we use object detection technology based on deep learning for airplane detection in remote sensing images. In addition to using some characteristics of remote sensing images, some new data augmentation techniques have been proposed. We also use transfer learning and adopt a single deep convolutional neural network and limited training samples to implement end-to-end trainable airplane detection. Classification and positioning are no longer divided into multistage tasks; end-to-end detection attempts to combine them for optimization, which ensures an optimal solution for the final stage. In our experiment, we use remote sensing images of airports collected from Google Earth. The experimental results show that the proposed algorithm is highly accurate and meaningful for remote sensing object detection.
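The pre-train/fine-tune recipe described is conventionally expressed as freezing a pre-trained backbone and training a small new head. A schematic PyTorch sketch (a classification head stands in for the paper's detection head, and the weight tag is torchvision's, not the authors'):

```python
import torch
import torchvision

# Load an ImageNet-pre-trained backbone and replace its final layer with
# a two-class (airplane / background) head; only the head is trained.
model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False                       # freeze the base network
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # new trainable head
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```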

  12. Automatic detection of end-diastolic and end-systolic frames in 2D echocardiography.

    Science.gov (United States)

    Zolgharni, Massoud; Negoita, Madalina; Dhutia, Niti M; Mielewczik, Michael; Manoharan, Karikaran; Sohaib, S M Afzal; Finegold, Judith A; Sacchi, Stefania; Cole, Graham D; Francis, Darrel P

    2017-07-01

    Correctly selecting the end-diastolic and end-systolic frames on a 2D echocardiogram is important and challenging, for both human experts and automated algorithms. Manual selection is time-consuming and subject to uncertainty, and may affect the results obtained, especially for advanced measurements such as myocardial strain. We developed and evaluated algorithms which can automatically extract global and regional cardiac velocity, and identify end-diastolic and end-systolic frames. We acquired apical four-chamber 2D echocardiographic video recordings from 19 patients, each at least 10 heartbeats long and recorded twice at frame rates of 52 and 79 frames/s, yielding 38 recordings. Five experienced echocardiographers independently marked end-systolic and end-diastolic frames for the first 10 heartbeats of each recording. The automated algorithm also did this. Using the average of the time points identified by the five human operators as the reference gold standard, the individual operators had a root mean square difference from that gold standard of 46.5 ms. The algorithm had a root mean square difference from the human gold standard of 40.5 ms (P<.0001). Put another way, the algorithm-identified time point was an outlier in 122/564 heartbeats (21.6%), whereas the average human operator was an outlier in 254/564 heartbeats (45%). An automated algorithm can identify the end-systolic and end-diastolic frames with performance indistinguishable from that of human experts. This saves staff time, which could therefore be invested in assessing more beats, and reduces uncertainty about the reliability of the choice of frame. © 2017, Wiley Periodicals, Inc.

  13. Improved AODV route recovery in mobile ad-hoc networks using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Ahmad Maleki

    2014-09-01

    An important issue in the ad-hoc on-demand distance vector (AODV) routing protocol is route failure caused by node mobility in MANETs. The AODV protocol requires a new route discovery procedure whenever a route breaks, and these frequent route discoveries increase transmission delays and routing overhead. The present study proposes a new method for AODV using a genetic algorithm to improve the route recovery mechanism. When a failure occurs in a route, the proposed method (GAAODV) makes decisions regarding the QoS parameters to select source or local repair. The task of the genetic algorithm is to find an appropriate combination of weights to optimize end-to-end delay. This paper evaluates the metrics of routing overhead, average end-to-end delay, and packet delivery ratio. Comparison of the new algorithm and AODV (RFC 3561) using an NS-2 simulator shows that GAAODV obtains better results for the QoS parameters.
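The GA individuals here encode weight combinations for a QoS objective. A hedged sketch of such a weighted-sum fitness (the metric names and signs are illustrative, not the paper's parameterization):

```python
def route_fitness(weights, metrics):
    """Lower is better: a weighted sum of delay, overhead and loss that a
    GA can minimize when choosing between source and local repair."""
    w_delay, w_overhead, w_loss = weights
    return (w_delay * metrics["end_to_end_delay"]
            + w_overhead * metrics["routing_overhead"]
            + w_loss * (1.0 - metrics["packet_delivery_ratio"]))

sample = {"end_to_end_delay": 0.12, "routing_overhead": 0.30,
          "packet_delivery_ratio": 0.95}
print(route_fitness((0.5, 0.2, 0.3), sample))
```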

  14. Energy, cost, and emission end-use profiles of homes: An Ontario (Canada) case study

    International Nuclear Information System (INIS)

    Aydinalp Koksal, Merih; Rowlands, Ian H.; Parker, Paul

    2015-01-01

    Highlights: • Hourly electricity consumption data of seven end-uses from 25 homes are analyzed. • Hourly load, cost, and emission profiles of end-uses are developed and categorized. • Side-by-side analysis of energy, cost, and environmental effects is conducted. • Behaviour and outdoor temperature based end-uses are determined. • Share of each end-use in the total daily load, cost, and emission is determined. - Abstract: Providing information on the temporal distributions of residential electricity end-uses plays a major role in determining the potential savings in residential electricity demand, cost, and associated emissions. While the majority of the studies on disaggregated residential electricity end-use data provided hourly usage profiles of major appliances, only a few of them presented analysis on the effect of hourly electricity consumption of some specific end-uses on household costs and emissions. This study presents side-by-side analysis of energy, cost, and environment effects of hourly electricity consumption of the main electricity end-uses in a sample of homes in the Canadian province of Ontario. The data used in this study are drawn from a larger multi-stakeholder project in which electricity consumption of major end-uses at 25 homes in Milton, Ontario, was monitored in five-minute intervals for six-month to two-year periods. In addition to determining the hourly price of electricity during the monitoring period, the hourly carbon intensity is determined using fuel type hourly generation and the life cycle greenhouse gas intensities specifically determined for Ontario’s electricity fuel mix. The hourly load, cost, and emissions profiles are developed for the central air conditioner, furnace, clothes dryer, clothes washer, dishwasher, refrigerator, and stove and then grouped into eight day type categories. The side-by-side analysis of categorized load, cost, and emission profiles of the seven electricity end-uses provided information on

  15. Disaggregating and mapping crop statistics using hypertemporal remote sensing

    Science.gov (United States)

    Khan, M. R.; de Bie, C. A. J. M.; van Keulen, H.; Smaling, E. M. A.; Real, R.

    2010-02-01

    Governments compile their agricultural statistics in tabular form by administrative area, which gives no clue to the exact locations where specific crops are actually grown. Such data are poorly suited for early warning and assessment of crop production. 10-daily satellite image time series of Andalucia, Spain, acquired since 1998 by the SPOT Vegetation instrument, in combination with reported crop area statistics, were used to produce the required crop maps. Firstly, the 10-daily (1998-2006) 1-km resolution SPOT-Vegetation NDVI images were used to stratify the study area into 45 map units through an iterative unsupervised classification process. Each unit represents an NDVI profile showing changes in vegetation greenness over time, which is assumed to relate to the types of land cover and land use present. Secondly, the areas of NDVI units and the reported cropped areas by municipality were used to disaggregate the crop statistics. Adjusted R-squares were 98.8% for rainfed wheat, 97.5% for rainfed sunflower, and 76.5% for barley. Relating statistical data on areas cropped by municipality with the NDVI-based unit map showed that the selected crops were significantly related to specific NDVI-based map units. Other NDVI profiles did not relate to the studied crops and represented other types of land use or land cover. The results were validated using primary field data. These data were collected by the Spanish government from 2001 to 2005 through grid sampling within agricultural areas; each grid (block) contains three 700 m × 700 m segments. The validation showed 68%, 31% and 23% variability explained (adjusted R-squares) between the three produced maps and the thousands of segments. The relatively low values are mainly caused by variability within the delineated NDVI units; the units are internally heterogeneous. Variability between units is properly captured. The maps must accordingly be considered "small scale maps". These maps can be used to monitor crop performance of…
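The disaggregation step here is essentially a regression of reported municipal crop areas on the per-municipality areas of each NDVI unit; the fitted non-negative coefficients are the crop fractions of each unit. A hedged sketch with synthetic numbers (the unit count, areas, and fractions are invented for illustration):

```python
import numpy as np
from scipy.optimize import nnls

# Rows: municipalities; columns: NDVI map units. A[i, j] is the area of
# unit j inside municipality i and b[i] the reported crop area; the fitted
# non-negative x[j] is the crop fraction of each unit. Synthetic numbers.
rng = np.random.default_rng(6)
A = rng.uniform(0, 50, size=(30, 5))            # unit areas per municipality
true_frac = np.array([0.6, 0.0, 0.2, 0.0, 0.1])
b = A @ true_frac + rng.normal(0, 1, size=30)   # reported cropped areas
frac, _ = nnls(A, b)                            # per-unit crop fractions
```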

  16. GIS aided spatial disaggregation of emission inventories

    International Nuclear Information System (INIS)

    Orthofer, R.; Loibl, W.

    1995-10-01

    We have applied our method to produce detailed NMVOC and NOx emission density maps for Austria. While theoretical average emission densities for the whole country would be only 5 t NMVOC and 2.5 t NOx per km², the actual emission densities range from zero in the many uninhabited areas up to more than 3,000 t/km² along major highways. In Austria, small scale disaggregation is necessary particularly because of the differentiated topography and population patterns in alpine valleys. (author)

  17. Amyloid formation and disaggregation of α-synuclein and its tandem repeat (α-TR)

    International Nuclear Information System (INIS)

    Bae, Song Yi; Kim, Seulgi; Hwang, Heejin; Kim, Hyun-Kyung; Yoon, Hyun C.; Kim, Jae Ho; Lee, SangYoon; Kim, T. Doohun

    2010-01-01

    Research highlights: → Formation of α-synuclein amyloid fibrils by [BIMbF3Im]. → Disaggregation of amyloid fibrils by epigallocatechin gallate (EGCG) and baicalein. → Amyloid formation of the α-synuclein tandem repeat (α-TR). -- Abstract: The aggregation of α-synuclein is clearly related to the pathogenesis of Parkinson's disease. Therefore, detailed understanding of the mechanism of fibril formation is highly valuable for the development of clinical treatments and also of diagnostic tools. Here, we have investigated the interaction of α-synuclein with ionic liquids by using several biochemical techniques including Thioflavin T assays and transmission electron microscopy (TEM). Our data show that rapid formation of α-synuclein amyloid fibrils was stimulated by 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [BIMbF3Im], and that these fibrils could be disaggregated by polyphenols such as epigallocatechin gallate (EGCG) and baicalein. Furthermore, the effect of [BIMbF3Im] on the α-synuclein tandem repeat (α-TR) in the aggregation process was studied.

  18. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through flight software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that…

  19. Long term building energy demand for India: Disaggregating end use energy services in an integrated assessment modeling framework

    International Nuclear Information System (INIS)

    Chaturvedi, Vaibhav; Eom, Jiyong; Clarke, Leon E.; Shukla, Priyadarshi R.

    2014-01-01

    With increasing population, income, and urbanization, meeting the energy service demands for the building sector will be a huge challenge for Indian energy policy. Although there is broad consensus that the Indian building sector will grow and evolve over the coming century, there is little understanding of the potential nature of this evolution over the longer term. The present study uses a technologically detailed, service based building energy model nested in the long term, global, integrated assessment framework, GCAM, to produce scenarios of the evolution of the Indian buildings sector up through the end of the century. The results support the idea that as India evolves toward developed country per-capita income levels, its building sector will largely evolve to resemble those of the currently developed countries (heavy reliance on electricity both for increasing cooling loads and a range of emerging appliance and other plug loads), albeit with unique characteristics based on its climate conditions (cooling dominating heating and even more so with climate change), on fuel preferences that may linger from the present (for example, a preference for gas for cooking), and vestiges of its development path (including remnants of rural poor that use substantial quantities of traditional biomass). - Highlights: ► Building sector final energy demand in India will grow to over five times by century end. ► Space cooling and appliance services will grow substantially in the future. ► Energy service demands will be met predominantly by electricity and gas. ► Urban centers will face huge demand for floor space and building energy services. ► Carbon tax policy will have little effect on reducing building energy demands

  20. Statistical Models for Disaggregation and Reaggregation of Natural Gas Consumption Data

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Konár, Ondřej; Malý, Marek; Kasanický, Ivan; Pelikán, Emil

    2015-01-01

    Roč. 42, č. 5 (2015), s. 921-937 ISSN 0266-4763 Institutional support: RVO:67985807 Keywords: natural gas consumption * semiparametric model * standardized load profiles * aggregation * disaggregation * 62P30 Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.419, year: 2015

  1. System of end-to-end symmetric database encryption

    Science.gov (United States)

    Galushka, V. V.; Aydinyan, A. R.; Tsvetkova, O. L.; Fathi, V. A.; Fathi, D. V.

    2018-05-01

    The article is devoted to the actual problem of protecting databases from information leakage that occurs while bypassing access control mechanisms. To solve this problem, it is proposed to use end-to-end data encryption, implemented at the end nodes of the interaction between information system components using one of the symmetric cryptographic algorithms. For this purpose, a key management method designed for use in a multi-user system has been developed and described, based on a distributed key representation model in which part of the key is stored in the database and the other part is obtained by transforming the user's password. In this case, the key is calculated immediately before the cryptographic transformations and is not stored in memory after the completion of these transformations. Algorithms for registering and authorizing a user, as well as changing his password, are described, and the methods for calculating parts of the key when performing these operations are provided.
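A minimal sketch of the distributed key idea: one share is stored in the database, the other is derived from the password, and the working key exists only transiently at the end node. The XOR combination, PBKDF2 parameters, and AES-GCM choice are our assumptions, not necessarily the article's construction:

```python
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(password: str, stored_share: bytes, salt: bytes) -> bytes:
    """Combine the database-stored key share with a share derived from
    the user's password; the resulting key is never persisted."""
    pw_share = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return bytes(a ^ b for a, b in zip(pw_share, stored_share))

salt, stored_share, nonce = os.urandom(16), os.urandom(32), os.urandom(12)
key = derive_key("user-password", stored_share, salt)        # at the end node
ciphertext = AESGCM(key).encrypt(nonce, b"row data", None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"row data"
```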

  2. MRI simulation: end-to-end testing for prostate radiation therapy using geometric pelvic MRI phantoms

    International Nuclear Information System (INIS)

    Sun, Jidi; Menk, Fred; Lambert, Jonathan; Martin, Jarad; Denham, James W; Greer, Peter B; Dowling, Jason; Rivest-Henault, David; Pichler, Peter; Parker, Joel; Arm, Jameen; Best, Leah

    2015-01-01

    Clinical implementation of MRI simulation or MRI-alone treatment planning requires comprehensive end-to-end testing to ensure an accurate process. The purpose of this study was to design and build a geometric phantom simulating a human male pelvis that is suitable for both CT and MRI scanning and use it to test geometric and dosimetric aspects of MRI simulation including treatment planning and digitally reconstructed radiograph (DRR) generation. A liquid-filled pelvic shaped phantom with simulated pelvic organs was scanned in a 3T MRI simulator with dedicated radiotherapy couch-top, laser bridge and pelvic coil mounts. A second phantom with the same external shape but with an internal distortion grid was used to quantify the distortion of the MR image. Both phantoms were also CT scanned as the gold standard for both geometry and dosimetry. Deformable image registration was used to quantify the MR distortion. Dose comparison was made using a seven-field IMRT plan developed on the CT scan with the fluences copied to the MR image and recalculated using bulk electron densities. Without correction, the maximum distortion of the MR compared with the CT scan was 7.5 mm across the pelvis, while this was reduced to 2.6 and 1.7 mm by the vendor's 2D and 3D correction algorithms, respectively. Within the locations of the internal organs of interest, the distortion was <1.5 and <1 mm with the 2D and 3D correction algorithms, respectively. The dose at the prostate isocentre calculated on CT and MRI images differed by 0.01% (1.1 cGy). Positioning shifts were within 1 mm when setup was performed using MRI-generated DRRs compared to setup using CT DRRs. The MRI pelvic phantom allows end-to-end testing of the MRI simulation workflow with comparison to the gold-standard CT based process. MRI simulation was found to be geometrically accurate with organ dimensions, dose distributions and DRR-based setup within acceptable limits compared to CT. (paper)

  3. Performance Enhancements of UMTS networks using end-to-end QoS provisioning

    DEFF Research Database (Denmark)

    Wang, Haibo; Prasad, Devendra; Teyeb, Oumer

    2005-01-01

    This paper investigates end-to-end (E2E) quality of service (QoS) provisioning approaches for UMTS networks together with a DiffServ IP network. The effort was put on mapping QoS classes from DiffServ to UMTS, access control (AC), and buffering and scheduling optimization. The DiffServ Code Point (DSCP) was utilized throughout the UMTS QoS provisioning to differentiate the different types of traffic. The overall algorithm was optimized to guarantee the E2E QoS parameters of each service class, especially for real-time applications, as well as to improve bandwidth utilization. Simulation shows that the enhanced...
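
    The class mapping at the heart of such a scheme can be as simple as a lookup table. The sketch below is a hypothetical DSCP-to-UMTS-class assignment for illustration; the paper's actual mapping is not given in the abstract:

```python
# Hypothetical DSCP -> UMTS traffic-class mapping (illustrative values).
# 46 = EF (VoIP), 34 = AF41 (video), 26 = AF31 (signalling/web), 0 = best effort.
DSCP_TO_UMTS = {
    46: "conversational",
    34: "streaming",
    26: "interactive",
    0:  "background",
}

def umts_class(dscp: int) -> str:
    # Unknown code points fall back to the background class
    return DSCP_TO_UMTS.get(dscp, "background")
```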

  4. Load Disaggregation via Pattern Recognition: A Feasibility Study of a Novel Method in Residential Building

    Directory of Open Access Journals (Sweden)

    Younghoon Kwak

    2018-04-01

    Full Text Available In response to the need to improve energy-saving processes in older buildings, especially residential ones, this paper describes the potential of a novel method of disaggregating loads in light of the load patterns of household appliances observed in residential buildings. Experiments were designed to be applicable to general residential buildings, and four types of commonly used appliances were selected to verify the method. The method assumes that loads are measured by a single primary meter and then disaggregated. Following the metering of household appliances and an analysis of the usage patterns of each type, values of electric current were entered into a Hidden Markov Model (HMM) to formulate predictions. The HMM was then run repeatedly to bring the predicted data close to the measured data, while errors between predicted and measured data were evaluated against a tolerance. When the method was tested over 4 days, the matching rates of the load disaggregation outcomes for the household appliances (i.e., laptop, refrigerator, TV, and microwave) were 0.994, 0.992, 0.982, and 0.988, respectively. The proposed method can provide insights into how and where energy is consumed within such buildings. As a result, effective and systematic energy-saving measures can be derived even in buildings in which monitoring sensors and measurement equipment are not installed.
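
    A minimal sketch of the HMM step, assuming the third-party hmmlearn package and synthetic two-appliance data; the paper's exact model topology and iteration loop are not specified in the abstract:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party: pip install hmmlearn

# Synthetic aggregate current: a fridge cycling plus an occasional microwave
rng = np.random.default_rng(0)
fridge = (rng.random(2000) < 0.5).astype(float) * 1.2   # amps when on
micro = (rng.random(2000) < 0.1).astype(float) * 6.5
current = (fridge + micro + rng.normal(0, 0.05, 2000)).reshape(-1, 1)

# One hidden state per on/off combination of the appliances (4 here)
model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=100)
model.fit(current)
states = model.predict(current)

# Each state mean approximates the combined draw of the appliances active
# in that state; matching means to appliance signatures disaggregates the load.
print(np.sort(model.means_.ravel()))
```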

  5. A Label Correcting Algorithm for Partial Disassembly Sequences in the Production Planning for End-of-Life Products

    Directory of Open Access Journals (Sweden)

    Pei-Fang (Jennifer) Tsai

    2012-01-01

    Full Text Available Remanufacturing of used products has become a strategic issue for cost-sensitive businesses. Because the supply of end-of-life (EoL) products is uncertain, reverse logistics can only be sustainable with dynamic production planning for the disassembly process. This research investigates the sequencing of disassembly operations as a single-period partial disassembly optimization (SPPDO) problem that minimizes total disassembly cost. An AND/OR graph representation is used to include all disassembly sequences of a returned product. A label correcting algorithm is proposed to find an optimal partial disassembly plan when a specific reusable subpart is to be retrieved from the original return. Then, a heuristic procedure that utilizes this polynomial-time algorithm is presented to solve the SPPDO problem. Numerical examples demonstrate the effectiveness of the solution procedure.
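
    The label-correcting core of such a procedure, sketched on an ordinary weighted digraph whose nodes stand for disassembly states; handling true AND/OR hyperarcs requires more bookkeeping than shown here, and all names and costs are illustrative:

```python
from collections import deque

def label_correcting(succ, cost, source, target):
    """Label-correcting shortest path (Bellman-Ford with a candidate queue):
    node labels are repeatedly improved until no arc can reduce any label."""
    dist, pred, queue = {source: 0.0}, {}, deque([source])
    while queue:
        u = queue.popleft()
        for v in succ.get(u, ()):
            cand = dist[u] + cost[u, v]
            if cand < dist.get(v, float("inf")):
                dist[v], pred[v] = cand, u
                queue.append(v)
    if target not in dist:
        return float("inf"), []
    path = [target]
    while path[-1] != source:
        path.append(pred[path[-1]])
    return dist[target], path[::-1]

# Toy state graph: nodes are disassembly states, arcs are operations
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
cost = {("A", "B"): 4.0, ("A", "C"): 1.5, ("B", "D"): 1.0, ("C", "D"): 5.0}
print(label_correcting(succ, cost, "A", "D"))  # (5.0, ['A', 'B', 'D'])
```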

  6. AN ACTIVE-PASSIVE COMBINED ALGORITHM FOR HIGH SPATIAL RESOLUTION RETRIEVAL OF SOIL MOISTURE FROM SATELLITE SENSORS (Invited)

    Science.gov (United States)

    Lakshmi, V.; Mladenova, I. E.; Narayan, U.

    2009-12-01

    Soil moisture is known to be an essential factor in controlling the partitioning of rainfall into surface runoff and infiltration, and of solar energy into latent and sensible heat fluxes. Remote sensing has long proven its capability to obtain soil moisture in near real time. At present, however, the Advanced Microwave Scanning Radiometer (AMSR-E) on board NASA's Aqua platform is the only satellite sensor that supplies a soil moisture product, and its coarse spatial resolution (~50 km at 6.9 GHz) strongly limits its applicability to small-scale studies. A very promising technique for spatial disaggregation by combining radar and radiometer observations has been demonstrated by the authors, using a methodology based on the assumption that any change in measured brightness temperature and backscatter from one time step to the next is due primarily to a change in soil wetness. The approach uses radiometric estimates of soil moisture at the lower resolution to compute the sensitivity of radar to soil moisture at that resolution. This estimate of sensitivity is then disaggregated using vegetation water content, vegetation type and soil texture information, which are the variables that determine the radar sensitivity to soil moisture and are generally available at the scale of the radar observation. This change detection algorithm is applied to several locations. We have used aircraft-observed active and passive data over the Walnut Creek watershed in central Iowa in 2002, the Little Washita watershed in Oklahoma in 2003, and the Murrumbidgee catchment in southeastern Australia for 2006. These locations have different soil and land cover conditions, which makes for a rigorous test of the disaggregation algorithm. Furthermore, we compare the derived high spatial resolution soil moisture to in-situ sampling and ground observation networks
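
    The change-detection principle can be written in a few lines: estimate the radar sensitivity at the radiometer scale from two passes, redistribute it with a vegetation/soil weight field, then convert per-pixel backscatter change into soil moisture change. A sketch with hypothetical inputs, not the authors' operational code:

```python
import numpy as np

def disaggregate_change(sm0, sm1, sigma0_t0, sigma0_t1, veg_weight):
    """sm0, sm1: coarse radiometer soil moisture at two times (scalars).
    sigma0_*: fine-resolution radar backscatter (dB) arrays for the same
    footprint. veg_weight: normalized weight field (mean == 1) built from
    vegetation water content, vegetation type and soil texture."""
    # Radar sensitivity to soil moisture, estimated at the radiometer scale
    S_coarse = (sigma0_t1.mean() - sigma0_t0.mean()) / (sm1 - sm0)
    # Distribute the sensitivity to fine pixels
    S_fine = S_coarse * veg_weight
    # Change detection: per-pixel SM from per-pixel backscatter change
    return sm0 + (sigma0_t1 - sigma0_t0) / S_fine

w = np.ones((10, 10))                      # hypothetical uniform weights
s0 = np.random.normal(-12.0, 0.5, (10, 10))
s1 = s0 + 1.0                              # wetter surface, higher backscatter
print(disaggregate_change(0.15, 0.25, s0, s1, w).round(3))
```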

  7. Determining the disaggregated economic value of irrigation water in the Musi sub-basin in India

    NARCIS (Netherlands)

    Hellegers, P.J.G.J.; Davidson, B.

    2010-01-01

    In this paper the residual method is used to determine the disaggregated economic value of irrigation water used in agriculture across crops, zones and seasons. This method relies on the premise that the value of a good (its price multiplied by its quantity) is equal to the summation of the quantity of each
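
    In practice the residual calculation is simple arithmetic: revenue is attributed first to all non-water inputs at market prices, and whatever remains is attributed to water. A worked example with hypothetical numbers:

```python
# Hypothetical per-hectare budget for one crop/zone/season
revenue = 1200.0          # crop price x yield (currency/ha)
non_water_costs = 950.0   # seed, labour, fertiliser, capital at market prices
water_used = 5000.0       # m3/ha

residual_value = (revenue - non_water_costs) / water_used  # currency per m3
print(f"{residual_value:.3f} per m3")  # 0.050
```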

  8. HIV/AIDS National Strategic Plans of Sub-Saharan African countries: an analysis for gender equality and sex-disaggregated HIV targets.

    Science.gov (United States)

    Sherwood, Jennifer; Sharp, Alana; Cooper, Bergen; Roose-Snyder, Beirne; Blumenthal, Susan

    2017-12-01

    National Strategic Plans (NSPs) for HIV/AIDS are country planning documents that set priorities for programmes and services, including a set of targets to quantify progress toward national and international goals. The inclusion of sex-disaggregated targets and targets to combat gender inequality is important given the high disease burden among young women and adolescent girls in Sub-Saharan Africa, yet no comprehensive gender-focused analysis of NSP targets has been performed. This analysis quantitatively evaluates national HIV targets, included in NSPs from eighteen Sub-Saharan African countries, for sex-disaggregation. Additionally, NSP targets aimed at reducing gender-based inequality in health outcomes are compiled and inductively coded to report common themes. On average, in the eighteen countries included in this analysis, 31% of NSP targets include sex-disaggregation (range 0-92%). Three countries disaggregated a majority (>50%) of their targets by sex. Sex-disaggregation in data reporting was more common for targets related to the early phases of the HIV care continuum: 83% of countries included any sex-disaggregated targets for HIV prevention, 56% for testing and linkage to care, 22% for improving antiretroviral treatment coverage, and 11% for retention in treatment. The most common target to reduce gender inequality was to prevent gender-based violence (present in 50% of countries). Other commonly incorporated target areas related to improving women's access to family planning, human and legal rights, and decision-making power. The inclusion of sex-disaggregated targets in national planning is vital to ensure that programmes make progress for all population groups. Improving the availability and quality of indicators to measure gender inequality, as well as evaluating programme outcomes by sex, is critical to tracking this progress. This analysis reveals an urgent need to set specific and separate targets for men and women in order to achieve an equitable

  9. HIV/AIDS National Strategic Plans of Sub-Saharan African countries: an analysis for gender equality and sex-disaggregated HIV targets

    Science.gov (United States)

    Sherwood, Jennifer; Sharp, Alana; Cooper, Bergen; Roose-Snyder, Beirne; Blumenthal, Susan

    2017-01-01

    National Strategic Plans (NSPs) for HIV/AIDS are country planning documents that set priorities for programmes and services, including a set of targets to quantify progress toward national and international goals. The inclusion of sex-disaggregated targets and targets to combat gender inequality is important given the high disease burden among young women and adolescent girls in Sub-Saharan Africa, yet no comprehensive gender-focused analysis of NSP targets has been performed. This analysis quantitatively evaluates national HIV targets, included in NSPs from eighteen Sub-Saharan African countries, for sex-disaggregation. Additionally, NSP targets aimed at reducing gender-based inequality in health outcomes are compiled and inductively coded to report common themes. On average, in the eighteen countries included in this analysis, 31% of NSP targets include sex-disaggregation (range 0–92%). Three countries disaggregated a majority (>50%) of their targets by sex. Sex-disaggregation in data reporting was more common for targets related to the early phases of the HIV care continuum: 83% of countries included any sex-disaggregated targets for HIV prevention, 56% for testing and linkage to care, 22% for improving antiretroviral treatment coverage, and 11% for retention in treatment. The most common target to reduce gender inequality was to prevent gender-based violence (present in 50% of countries). Other commonly incorporated target areas related to improving women’s access to family planning, human and legal rights, and decision-making power. The inclusion of sex-disaggregated targets in national planning is vital to ensure that programmes make progress for all population groups. Improving the availability and quality of indicators to measure gender inequality, as well as evaluating programme outcomes by sex, is critical to tracking this progress. This analysis reveals an urgent need to set specific and separate targets for men and women in order to achieve

  10. Employment in Disequilibrium: a Disaggregated Approach on a Panel of French Firms

    OpenAIRE

    Brigitte Dormont

    1989-01-01

    The purpose of this paper is to understand disequilibrium phenomena at a disaggregated level. By using data on French firms, we carry out the estimation of labor demand model with two regimes, which correspond to the Keynesian and classical hypotheses. The results enable us to characterize classical firms as being particularly good performers: they have more rapid growth, younger productive plant and higher productivity gains and profitability. Classical firms stand out, with respect to their...

  11. A Bayes Theory-Based Modeling Algorithm to End-to-end Network Traffic

    OpenAIRE

    Zhao Hong-hao; Meng Fan-bo; Zhao Si-wen; Zhao Si-hang; Lu Yi

    2016-01-01

    Recently, network traffic has been increasing exponentially due to all kinds of applications, such as the mobile Internet, smart cities, smart transportation, the Internet of Things, and so on, and end-to-end network traffic has become more important for traffic engineering. End-to-end traffic estimation is usually highly difficult. This paper proposes a Bayes theory-based method to model end-to-end network traffic. Firstly, the end-to-end network traffic is described as an independent identically distrib...

  12. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    Science.gov (United States)

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829

  13. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    Science.gov (United States)

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research.

  14. End-to-End Multimodal Emotion Recognition Using Deep Neural Networks

    Science.gov (United States)

    Tzirakis, Panagiotis; Trigeorgis, George; Nicolaou, Mihalis A.; Schuller, Bjorn W.; Zafeiriou, Stefanos

    2017-12-01

    Automatic affect recognition is a challenging task due to the various modalities with which emotions can be expressed. Applications can be found in many domains, including multimedia retrieval and human-computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using auditory and visual modalities. To capture the emotional content of various styles of speaking, robust features need to be extracted. To this end, we utilize a Convolutional Neural Network (CNN) to extract features from the speech, while for the visual modality we use a deep residual network (ResNet) of 50 layers. In addition to the importance of feature extraction, the machine learning algorithm also needs to be insensitive to outliers while being able to model context. To tackle this problem, Long Short-Term Memory (LSTM) networks are utilized. The system is then trained in an end-to-end fashion where, by also taking advantage of the correlations of each of the streams, we manage to significantly outperform traditional approaches based on auditory and visual handcrafted features for the prediction of spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.
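
    A compact PyTorch sketch of this kind of architecture; the ResNet-50 visual backbone follows the abstract, but the layer sizes, the 1-D audio front-end and the two-dimensional arousal/valence head are assumptions for illustration, not the authors' exact network:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50  # visual backbone (torchvision >= 0.13)

class AVEmotion(nn.Module):
    """Minimal audio-visual arousal/valence regressor (illustrative sizes)."""
    def __init__(self, hidden=256):
        super().__init__()
        # 1-D CNN over raw speech samples
        self.audio = nn.Sequential(
            nn.Conv1d(1, 40, kernel_size=20, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        vis = resnet50(weights=None)
        vis.fc = nn.Identity()              # expose 2048-d visual features
        self.visual = vis
        self.lstm = nn.LSTM(40 + 2048, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)    # arousal, valence

    def forward(self, wav, frames):
        # wav: (B, T, 1, L) audio chunks; frames: (B, T, 3, H, W) face crops
        B, T = wav.shape[:2]
        a = self.audio(wav.flatten(0, 1)).view(B, T, -1)
        v = self.visual(frames.flatten(0, 1)).view(B, T, -1)
        out, _ = self.lstm(torch.cat([a, v], dim=-1))  # fuse, model context
        return self.head(out)               # per-timestep predictions

model = AVEmotion()
y = model(torch.randn(2, 4, 1, 1600), torch.randn(2, 4, 3, 96, 96))
print(y.shape)  # torch.Size([2, 4, 2])
```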

  15. Symmetric Stream Cipher using Triple Transposition Key Method and Base64 Algorithm for Security Improvement

    Science.gov (United States)

    Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur

    2017-12-01

    Symmetric cryptographic algorithms are known to have many weaknesses in the encryption process compared with asymmetric algorithms. A symmetric stream cipher works by XORing the plaintext with a key. To improve the security of the symmetric stream cipher, an improvement is made using a Triple Transposition Key, developed from the transposition cipher, together with the Base64 algorithm for the final encoding step. Experiments show that the resulting ciphertext is good enough and very random.
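
    A minimal sketch of this pipeline, an XOR stream stage, three columnar transpositions, then Base64, with hypothetical keys; the paper's exact transposition construction is not given in the abstract:

```python
import base64

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Stream-cipher stage: XOR with a repeating key
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def transpose(data: bytes, order: list) -> bytes:
    # Columnar transposition: pad, write row-wise, read columns in key order
    cols = len(order)
    data += b"\x00" * ((-len(data)) % cols)
    rows = [data[i:i + cols] for i in range(0, len(data), cols)]
    return bytes(row[c] for c in order for row in rows)

def encrypt(plain: bytes, key: bytes, k1, k2, k3) -> str:
    out = xor_stream(plain, key)           # stream-cipher stage
    for order in (k1, k2, k3):             # triple transposition stage
        out = transpose(out, order)
    return base64.b64encode(out).decode()  # Base64 finishing stage

print(encrypt(b"attack at dawn", b"secret", [2, 0, 1], [1, 2, 0], [0, 2, 1]))
```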

  16. A simple two stage optimization algorithm for constrained power economic dispatch

    International Nuclear Information System (INIS)

    Huang, G.; Song, K.

    1994-01-01

    A simple two-stage optimization algorithm is proposed and investigated for fast computation of constrained power economic dispatch problems. The method is a simple demonstration of the hierarchical aggregation-disaggregation (HAD) concept. The algorithm first solves an aggregated problem to obtain an initial solution. This aggregated problem turns out to be the classical economic dispatch formulation, and it can be solved in 1% of the overall computation time. In the second stage, a linear programming method finds the optimal solution that satisfies the power balance constraints, generation and transmission inequality constraints, and security constraints. Implementation of the algorithm for IEEE systems and EPRI Scenario systems shows that the two-stage method obtains an average speedup ratio of 10.64 compared to the classical LP-based method
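
    A minimal sketch of the two stages under simplifying assumptions (linear costs, a single balance constraint; transmission and security limits would enter as extra LP rows), using SciPy; the unit data are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-unit system: marginal costs, limits, and demand
c  = np.array([20.0, 25.0, 40.0])    # $/MWh
lo = np.array([10.0, 10.0, 5.0])     # MW minimums
hi = np.array([100.0, 80.0, 50.0])   # MW maximums
demand = 180.0

# Stage 1 (aggregated): merit-order dispatch ignoring network constraints
p0, rest = lo.copy(), demand - lo.sum()
for i in np.argsort(c):
    take = min(hi[i] - lo[i], rest)
    p0[i] += take
    rest -= take

# Stage 2: LP refinement enforcing the balance constraint exactly
res = linprog(c, A_eq=[np.ones(3)], b_eq=[demand],
              bounds=list(zip(lo, hi)), method="highs")
print(p0, res.x)  # initial solution, refined optimum
```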

  17. Robust Spectrum Sensing Demonstration Using a Low-Cost Front-End Receiver

    Directory of Open Access Journals (Sweden)

    Daniele Borio

    2015-01-01

    Full Text Available Spectrum Sensing (SS) is an important function in Cognitive Radio (CR) to detect primary users. The design of SS algorithms is one of the most challenging tasks in CR and requires innovative hardware and software solutions to maximize detection probability while keeping the false alarm probability low. Although several SS algorithms have been developed in the specialized literature, limited work has been done to practically demonstrate the feasibility of this function on platforms with significant computational and hardware constraints. In this paper, SS is demonstrated using a low-cost TV tuner as an agile front-end for sensing a large portion of the Ultra-High Frequency (UHF) spectrum. The problems encountered and the limitations imposed by the front-end are analysed along with the solutions adopted. Finally, the spectrum sensor developed is implemented on an Android device and SS is demonstrated using a smartphone.
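
    Energy detection is the usual baseline detector in such low-cost demonstrations (the abstract does not name the paper's exact algorithm); a sketch with a Gaussian-approximated threshold set from the noise variance and a target false-alarm probability:

```python
import numpy as np
from scipy.stats import norm

def energy_detector(x, noise_var, pfa=0.01):
    """Classic energy detector: compare average received power with a
    threshold derived from the noise variance and the target Pfa."""
    n = len(x)
    stat = np.mean(np.abs(x) ** 2)
    # Gaussian approximation of the chi-square statistic for large n
    thresh = noise_var * (1 + norm.ppf(1 - pfa) / np.sqrt(n))
    return stat > thresh, stat, thresh

rng = np.random.default_rng(0)
noise = rng.normal(size=4096) + 1j * rng.normal(size=4096)  # noise-only IQ
print(energy_detector(noise, noise_var=2.0))  # should mostly report False
```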

  18. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems' handling of off-nominal missions and fault tolerance. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA has also formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to the flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S

  19. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  20. Modelling OAIS Compliance for Disaggregated Preservation Services

    Directory of Open Access Journals (Sweden)

    Gareth Knight

    2007-07-01

    Full Text Available The reference model for the Open Archival Information System (OAIS is well established in the research community as a method of modelling the functions of a digital repository and as a basis in which to frame digital curation and preservation issues. In reference to the 5th anniversary review of the OAIS, it is timely to consider how it may be interpreted by an institutional repository. The paper examines methods of sharing essential functions and requirements of an OAIS between two or more institutions, outlining the practical considerations of outsourcing. It also details the approach taken by the SHERPA DP Project to introduce a disaggregated service model for institutional repositories that wish to implement preservation services.

  1. Greenhouse gas profiling by infrared-laser and microwave occultation: retrieval algorithm and demonstration results from end-to-end simulations

    Directory of Open Access Journals (Sweden)

    V. Proschek

    2011-10-01

    Full Text Available Measuring greenhouse gas (GHG) profiles with global coverage and high accuracy and vertical resolution in the upper troposphere and lower stratosphere (UTLS) is key for improved monitoring of GHG concentrations in the free atmosphere. In this respect a new satellite mission concept, adding an infrared-laser part to the already well-studied microwave occultation technique, exploits the joint propagation of infrared-laser and microwave signals between Low Earth Orbit (LEO) satellites. This synergetic combination, referred to as the LEO-LEO microwave and infrared-laser occultation (LMIO) method, enables retrieval of thermodynamic profiles (pressure, temperature, humidity) and accurate altitude levels from the microwave signals, and GHG profiles from the simultaneously measured infrared-laser signals. However, due to the novelty of the LMIO method, a retrieval algorithm for GHG profiling was not yet available. Here we introduce such an algorithm for retrieving GHGs from LEO-LEO infrared-laser occultation (LIO) data, applied as a second step after retrieving thermodynamic profiles from LEO-LEO microwave occultation (LMO) data. We thoroughly describe the LIO retrieval algorithm and unveil the synergy with the LMO-retrieved pressure, temperature, and altitude information. We furthermore demonstrate the effective independence of the GHG retrieval results from background (a priori) information in discussing demonstration results from LMIO end-to-end simulations for a representative set of GHG profiles, including carbon dioxide (CO2), water vapor (H2O), methane (CH4), and ozone (O3). The GHGs except for ozone are well retrieved throughout the UTLS, while ozone is well retrieved from about 10 km to 15 km upwards, since the ozone layer resides in the lower stratosphere. The GHG retrieval errors are generally smaller than 1% to 3% r.m.s., at a vertical resolution of about 1 km. The retrieved profiles also appear unbiased, which points
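
    The core of a laser-occultation GHG retrieval is differential absorption along the ray path. A single-ray Beer-Lambert sketch of the principle (the operational algorithm works on full profiles with spherical geometry and the LMO-derived air density, so all symbols here are illustrative assumptions):

```python
import numpy as np

def ghg_vmr(I_on, I_off, delta_sigma, path_cm, n_air):
    """I_on / I_off: received intensities at the absorbing and reference
    wavelengths; delta_sigma: absorption cross-section difference (cm^2);
    path_cm: effective absorption path (cm); n_air: air number density
    (cm^-3) from the microwave (LMO) thermodynamic retrieval."""
    tau = -np.log(I_on / I_off)              # differential optical depth
    n_gas = tau / (delta_sigma * path_cm)    # Beer-Lambert, molecules/cm^3
    return n_gas / n_air                     # volume mixing ratio

print(ghg_vmr(I_on=0.82, I_off=1.0, delta_sigma=1e-22,
              path_cm=3e7, n_air=2.5e19))   # ~2.6e-5, i.e. tens of ppm
```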

  2. Spatial and temporal disaggregation of anthropogenic CO2 emissions from the City of Cape Town

    Directory of Open Access Journals (Sweden)

    Alecia Nickless

    2015-11-01

    Full Text Available This paper describes the methodology used to spatially and temporally disaggregate carbon dioxide emission estimates for the City of Cape Town, to be used for a city-scale atmospheric inversion estimating carbon dioxide fluxes. Fossil fuel emissions were broken down into emissions from road transport, domestic emissions, industrial emissions, and airport and harbour emissions. Using spatially explicit information on vehicle counts, and an hourly scaling factor, vehicle emissions estimates were obtained for the city. Domestic emissions from fossil fuel burning were estimated from household fuel usage information and spatially disaggregated population data from the 2011 national census. Fuel usage data were used to derive industrial emissions from listed activities, which included emissions from power generation, and these were distributed spatially according to the source point locations. The emissions from the Cape Town harbour and the international airport were determined from vessel and aircraft count data, respectively. For each emission type, error estimates were determined through error propagation techniques. The total fossil fuel emission field for the city was obtained by summing the spatial layers for each emission type, accumulated for the period of interest. These results will be used in a city-scale inversion study, and this method implemented in the future for a national atmospheric inversion study.
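
    The final summation and error propagation step can be sketched directly; the per-sector grids and 1-sigma uncertainty fields below are hypothetical stand-ins for the paper's road, domestic, industrial and port/airport layers:

```python
import numpy as np

# Hypothetical per-sector emission grids (t CO2 per cell) on one city grid
road, domestic, industry, port = np.random.rand(4, 100, 100) * 10
# Matching 1-sigma uncertainty grids from the per-sector error propagation
s_road, s_dom, s_ind, s_port = np.random.rand(4, 100, 100)

total = road + domestic + industry + port
# Independent errors add in quadrature (standard error propagation)
sigma_total = np.sqrt(s_road**2 + s_dom**2 + s_ind**2 + s_port**2)
print(total.sum(), sigma_total.mean())
```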

  3. Technological shape and size: A disaggregated perspective on sectoral innovation systems in renewable electrification pathways

    DEFF Research Database (Denmark)

    Hansen, Ulrich Elmer; Gregersen, Cecilia; Lema, Rasmus

    2018-01-01

    The sectoral innovation system perspective has been developed as an analytical framework to analyse and understand innovation dynamics within and across various sectors. Most of the research conducted on sectoral innovation systems has focused on an aggregate-level analysis of entire sectors. This paper argues that a disaggregated (sub-sectoral) focus is more suited to policy-oriented work on the development and diffusion of renewable energy, particularly in countries with rapidly developing energy systems and open technology choices. It focuses on size, distinguishing between small-scale (mini... The disaggregated perspective has important analytical implications because it allows us to identify trajectories that cut across conventionally defined core technologies. This is important for ongoing discussions of electrification pathways in developing countries. We conclude the paper by distilling...

  4. Disaggregate energy consumption and industrial output in the United States

    International Nuclear Information System (INIS)

    Ewing, Bradley T.; Sari, Ramazan; Soytas, Ugur

    2007-01-01

    This paper investigates the effect of disaggregate energy consumption on industrial output in the United States. Most of the related research utilizes aggregate data which may not indicate the relative strength or explanatory power of various energy inputs on output. We use monthly data and employ the generalized variance decomposition approach to assess the relative impacts of energy and employment on real output. Our results suggest that unexpected shocks to coal, natural gas and fossil fuel energy sources have the highest impacts on the variation of output, while several renewable sources exhibit considerable explanatory power as well. However, none of the energy sources explain more of the forecast error variance of industrial output than employment

  5. A novel PON based UMTS broadband wireless access network architecture with an algorithm to guarantee end to end QoS

    Science.gov (United States)

    Sana, Ajaz; Hussain, Shahab; Ali, Mohammed A.; Ahmed, Samir

    2007-09-01

    In this paper we propose a novel Passive Optical Network (PON)-based broadband wireless access network architecture to provide multimedia services (video telephony, video streaming, mobile TV, mobile e-mail, etc.) to mobile users. In conventional wireless access networks, the base stations (Node B) and Radio Network Controllers (RNC) are connected by point-to-point T1/E1 lines (the Iub interface). The T1/E1 lines are expensive and add to operating costs. Also, the resources (transceivers and T1/E1) are dimensioned for peak-hour traffic, so most of the time the dedicated resources are idle and wasted. Furthermore, the T1/E1 lines are not capable of supporting the bandwidth (BW) required by the next-generation wireless multimedia services proposed by High Speed Packet Access (HSPA, Rel. 5) for the Universal Mobile Telecommunications System (UMTS) and Evolution-Data Optimized (EV-DO) for Code Division Multiple Access 2000 (CDMA2000). The proposed PON-based backhaul can provide gigabit data rates, and the Iub interface can be dynamically shared by Node Bs: BW is dynamically allocated, and unused BW from lightly loaded Node Bs is assigned to heavily loaded Node Bs. We also propose a novel algorithm to provide end-to-end Quality of Service (QoS) between the RNC and the user equipment. The algorithm provides QoS bounds in the wired domain as well as in the wireless domain, with compensation for wireless link errors: because of the air interface, there can be times when the user equipment (UE) is unable to communicate with the Node B (usually referred to as a link error), and such link errors are bursty and location-dependent. In the proposed approach, the scheduler at the Node B maps QoS priorities and weights into the wireless MAC. Compensation for errored links is provided by swapping service between active users, and the user data is divided into flows, with flows allowed to lag or lead. The algorithm guarantees (1) delay and throughput for error-free flows, (2) short-term fairness
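
    A toy version of the dynamic bandwidth sharing idea: each Node B first receives its guaranteed share, then spare capacity is redistributed in proportion to unmet demand. The numbers and function names are illustrative, not the paper's allocation algorithm:

```python
def allocate(capacity, demands, guaranteed):
    """Grant min(demand, guaranteed) to every Node B, then share leftover
    capacity among overloaded Node Bs in proportion to their unmet demand."""
    grant = [min(d, g) for d, g in zip(demands, guaranteed)]
    spare = capacity - sum(grant)
    unmet = [max(d - g, 0) for d, g in zip(demands, guaranteed)]
    total_unmet = sum(unmet)
    if total_unmet > 0 and spare > 0:
        grant = [g + spare * u / total_unmet for g, u in zip(grant, unmet)]
    return grant

# Three Node Bs sharing a 1 Gb/s Iub: the idle one keeps only what it uses
print(allocate(1000, [500, 100, 600], [400, 400, 400]))
```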

  6. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods to make files more secure. One of those methods is cryptography, a method of securing a file by transforming it into hidden code covering the original content, so that anyone without the key cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem, which uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm while using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, when the file is encrypted with the TEA algorithm, the ciphertext consists of ASCII (American Standard Code for Information Interchange) characters rendered as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters of plaintext.
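
    The symmetric stage is the standard 32-round TEA block cipher; a minimal sketch of one block encryption (the LUC key-exchange stage is omitted, and the 128-bit key shown is hypothetical):

```python
def tea_encrypt_block(v0, v1, key):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key
    (four 32-bit words) using the standard 32-round TEA schedule."""
    delta, s, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(32):
        s = (s + delta) & mask
        v0 = (v0 + ((((v1 << 4) & mask) + key[0]) ^ (v1 + s)
                    ^ ((v1 >> 5) + key[1]))) & mask
        v1 = (v1 + ((((v0 << 4) & mask) + key[2]) ^ (v0 + s)
                    ^ ((v0 >> 5) + key[3]))) & mask
    return v0, v1

# Hypothetical 128-bit key; in the paper this key is itself encrypted with LUC
key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
print([hex(w) for w in tea_encrypt_block(0x12345678, 0x9ABCDEF0, key)])
```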

  7. Validating CDIAC's population-based approach to the disaggregation of within-country CO2 emissions

    International Nuclear Information System (INIS)

    Cushman, R.M.; Beauchamp, J.J.; Brenkert, A.L.

    1998-01-01

    The Carbon Dioxide Information Analysis Center produces and distributes a data base of CO 2 emissions from fossil-fuel combustion and cement production, expressed as global, regional, and national estimates. CDIAC also produces a companion data base, expressed on a one-degree latitude-longitude grid. To do this gridding, emissions within each country are spatially disaggregated according to the distribution of population within that country. Previously, the lack of within-country emissions data prevented a validation of this approach. But emissions inventories are now becoming available for most US states. An analysis of these inventories confirms that population distribution explains most, but not all, of the variance in the distribution of CO 2 emissions within the US. Additional sources of variance (coal production, non-carbon energy sources, and interstate electricity transfers) are explored, with the hope that the spatial disaggregation of emissions can be improved
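
    The population-based disaggregation itself reduces to allocating each national total in proportion to gridded population; a sketch with hypothetical arrays:

```python
import numpy as np

national_total = 5.1e9           # t CO2 for one country (hypothetical)
pop = np.random.rand(180, 360)   # gridded population within that country

# Each cell receives the national total in proportion to its population
grid_emissions = national_total * pop / pop.sum()
print(grid_emissions.sum())      # equals national_total by construction
```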

  8. Energy consumption and economic growth: Evidence from China at both aggregated and disaggregated levels

    International Nuclear Information System (INIS)

    Yuan Jiahai; Kang Jiangang; Zhao Changhong; Hu Zhaoguang

    2008-01-01

    Using a neo-classical aggregate production model where capital, labor and energy are treated as separate inputs, this paper tests for the existence and direction of causality between output growth and energy use in China, both for aggregated total energy and at the disaggregated level of coal, oil and electricity consumption. Using the Johansen cointegration technique, the empirical findings indicate that there exists long-run cointegration among output, labor, capital and energy use in China at the aggregated level and at all three disaggregated levels. Then, using a VEC specification, the short-run dynamics of the variables of interest are tested, indicating that Granger causality runs from electricity and oil consumption to GDP, but not from coal and total energy consumption to GDP. On the other hand, short-run Granger causality exists from GDP to total energy, coal and oil consumption, but not from GDP to electricity consumption. We thus propose policy suggestions to resolve the energy and sustainable development dilemma in China: enhancing energy supply security and guaranteeing energy supply, in the short run especially by providing adequate electric power supply and setting up a national strategic oil reserve; improving energy efficiency to save energy; diversifying energy sources, energetically exploiting renewable energy and drawing up corresponding policies and measures; and finally, in the long run, transforming the development pattern and cutting reliance on resource- and energy-dependent industries

  9. Spatial accuracy of a simplified disaggregation method for traffic emissions applied in seven mid-sized Chilean cities

    Science.gov (United States)

    Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans

    The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved, with correlation values >0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, where markedly lower correlation values were obtained; in such cases it offers at best a first overview of the spatial distribution of the emissions generated by traffic activities.

  10. Network analysis on skype end-to-end video quality

    NARCIS (Netherlands)

    Exarchakos, Georgios; Druda, Luca; Menkovski, Vlado; Liotta, Antonio

    2015-01-01

    Purpose – This paper aims to examine the efficiency of Quality of Service (QoS)-based adaptive streaming with regard to perceived quality (Quality of Experience, QoE). Although QoS parameters are extensively used even by high-end adaptive streaming algorithms, the achieved QoE fails to justify their use

  11. Navigating between Disaggregating Nation States and Entrenching Processes of Globalisation

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2007-01-01

    on the international community for its economic survival, this dependency on the global has as a consequence that it rolls back aspects of national sovereignty, thus opening up the national hinterland to further international influences. These developments initiate a process of disaggregating state and nation, meaning that a gradual disarticulation of the relationship between state and nation produces new societal spaces, which are contested by non-statist interest groups and transnational, more or less deterritorialised, ethnically affiliated groups and networks. The argument forwarded in this article is that the ethnic Chinese...

  12. Convergence of in-Country Prices for the Turkish Economy : A Panel Data Search for the PPP Hypothesis Using Sub-Regional Disaggregated Data

    Directory of Open Access Journals (Sweden)

    Mustafa METE

    2014-12-01

    Full Text Available This paper examines whether in-country prices in the Turkish economy can be characterized by a stationary relationship, lending support to long-run purchasing power parity in economic theory. For this purpose, a sub-regional categorization of the economy is considered over the investigation period 2005-2012 and, following Esaka (2003), the study uses a panel estimation framework consisting of 12 disaggregated consumer price indices to test whether the relative prices of goods between sub-regions of the Turkish economy can be represented by stationary time series.

  13. Disaggregating measurement uncertainty from population variability and Bayesian treatment of uncensored results

    International Nuclear Information System (INIS)

    Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.; Watson, David J.; Lynch, Timothy P.; Antonio, Cheryl L.; Birchall, Alan; Anderson, Kevin K.; Zharov, Peter

    2012-01-01

    In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a prior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
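
    A method-of-moments sketch of the disaggregation step and the normal-normal Bayes update it enables; the paper's priors are further constrained to be non-negative, which this simplified normal version ignores, and all names and data are illustrative:

```python
import numpy as np

def split_variance(x, u):
    """Method of moments: observed variance = population variance
    + average measurement variance (measurements assumed unbiased)."""
    pop_var = max(np.var(x, ddof=1) - np.mean(u ** 2), 0.0)
    return x.mean(), pop_var

def normal_posterior(xi, ui, mu, pop_var):
    """Normal prior N(mu, pop_var) combined with likelihood N(xi, ui^2)."""
    if pop_var == 0.0:
        return mu, 0.0
    w = pop_var / (pop_var + ui ** 2)       # shrinkage weight
    return w * xi + (1 - w) * mu, w * ui ** 2

# Example: near-zero results, some negative, as in baseline bioassay data
rng = np.random.default_rng(1)
true = rng.gamma(2.0, 0.5, size=500)        # non-negative measurands
x = true + rng.normal(0, 1.0, size=500)     # noisy results can go negative
mu, pv = split_variance(x, np.ones(500))
print(mu, pv, normal_posterior(x[0], 1.0, mu, pv))
```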

  14. Flexible hydrological modeling - Disaggregation from lumped catchment scale to higher spatial resolutions

    Science.gov (United States)

    Tran, Quoc Quan; Willems, Patrick; Pannemans, Bart; Blanckaert, Joris; Pereira, Fernando; Nossent, Jiri; Cauwenberghs, Kris; Vansteenkiste, Thomas

    2015-04-01

    Based on an international literature review on model structures of existing rainfall-runoff and hydrological models, a generalized model structure is proposed. It consists of different types of meteorological components, storage components, splitting components and routing components. They can be spatially organized in a lumped way, or on a grid, spatially interlinked by source-to-sink or grid-to-grid (cell-to-cell) routing. The grid size of the model can be chosen depending on the application. The user can select/change the spatial resolution depending on the needs and/or the evaluation of the accuracy of the model results, or use different spatial resolutions in parallel for different applications. Major research questions addressed during the study are: How can we assure consistent results of the model at any spatial detail? How can we avoid strong or sudden changes in model parameters and corresponding simulation results, when one moves from one level of spatial detail to another? How can we limit the problem of overparameterization/equifinality when we move from the lumped model to the spatially distributed model? The proposed approach is a step-wise one, where first the lumped conceptual model is calibrated using a systematic, data-based approach, followed by a disaggregation step where the lumped parameters are disaggregated based on spatial catchment characteristics (topography, land use, soil characteristics). In this way, disaggregation can be done down to any spatial scale, and consistently among scales. Only few additional calibration parameters are introduced to scale the absolute spatial differences in model parameters, but keeping the relative differences as obtained from the spatial catchment characteristics. After calibration of the spatial model, the accuracies of the lumped and spatial models were compared for peak, low and cumulative runoff total and sub-flows (at downstream and internal gauging stations). For the distributed models, additional
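
    One way to make the disaggregation step concrete: keep the relative spatial pattern given by catchment characteristics, introduce a single calibrated stretch factor, and preserve the lumped value on average so results stay consistent across resolutions. A sketch under those assumptions (names hypothetical):

```python
import numpy as np

def disaggregate_parameter(p_lumped, spatial_factor, stretch=1.0):
    """Disaggregate a lumped parameter to a grid: the relative pattern comes
    from catchment characteristics (topography, land use, soil), a single
    calibrated 'stretch' scales the spatial contrasts, and the lumped value
    is preserved on average."""
    rel = spatial_factor / spatial_factor.mean()   # relative pattern, mean 1
    grid = rel ** stretch                          # stretch=0 -> uniform field
    return p_lumped * grid / grid.mean()           # re-impose the lumped mean

soil_depth = np.random.rand(50, 50) + 0.5          # hypothetical characteristic
print(disaggregate_parameter(120.0, soil_depth, stretch=0.8).mean())  # ~120.0
```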

  15. Disaggregated seismic hazard and the elastic input energy spectrum: An approach to design earthquake selection

    Science.gov (United States)

    Chapman, Martin Colby

    1998-12-01

    The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity (V_ea) and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_ea is uniformly less than that of PSV, indicating that V_ea can be predicted with slightly less uncertainty as a function of magnitude, distance and site classification. The effects of site class are important at frequencies less than a few Hertz. The regression
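
    For reference, the "equivalent velocity" form of the absolute input energy used above is the standard conversion of energy per unit mass to velocity units; reconstructed in the usual notation (the abstract itself gives no formula):

```latex
V_{ea} = \sqrt{\frac{2\,E_{Ia}}{m}}
```

where E_{Ia} is the absolute input energy and m is the oscillator mass.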

  16. Joint control algorithm in access network

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    To deal with long probing delay and inaccurate probing results in the endpoint admission control method, a joint local and end-to-end admission control algorithm is proposed, which introduces local probing of the access network in addition to end-to-end probing. Through local probing, the algorithm accurately estimates the resource status of the access network. Simulation shows that this algorithm can improve admission control performance and reduce users' average waiting time when the access network is heavily loaded.

  17. The Disaggregation of Value-Added Test Scores to Assess Learning Outcomes in Economics Courses

    Science.gov (United States)

    Walstad, William B.; Wagner, Jamie

    2016-01-01

    This study disaggregates posttest, pretest, and value-added or difference scores in economics into four types of economic learning: positive, retained, negative, and zero. The types are derived from patterns of student responses to individual items on a multiple-choice test. The micro and macro data from the "Test of Understanding in College…
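
    The four types follow directly from the pattern of pre- and posttest answers on each item; a sketch of the classification rule as read from the abstract (the field names are hypothetical):

```python
def learning_type(pre_correct: bool, post_correct: bool) -> str:
    """Classify one student-item pair into the four learning types."""
    if not pre_correct and post_correct:
        return "positive"   # learned between pretest and posttest
    if pre_correct and post_correct:
        return "retained"   # knew it before and still knows it
    if pre_correct and not post_correct:
        return "negative"   # lost between pretest and posttest
    return "zero"           # never knew it

print(learning_type(pre_correct=False, post_correct=True))  # positive
```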

  18. A Practical Methodology for Disaggregating the Drivers of Drug Costs Using Administrative Data.

    Science.gov (United States)

    Lungu, Elena R; Manti, Orlando J; Levine, Mitchell A H; Clark, Douglas A; Potashnik, Tanya M; McKinley, Carol I

    2017-09-01

    Prescription drug expenditures represent a significant component of health care costs in Canada, with estimates of $28.8 billion spent in 2014. Identifying the major cost drivers and the effect they have on prescription drug expenditures allows policy makers and researchers to interpret current cost pressures and anticipate future expenditure levels. To identify the major drivers of prescription drug costs and to develop a methodology to disaggregate the impact of each of the individual drivers. The methodology proposed in this study uses the Laspeyres approach for cost decomposition. This approach isolates the effect of the change in a specific factor (e.g., price) by holding the other factor(s) (e.g., quantity) constant at the base-period value. The Laspeyres approach is expanded to a multi-factorial framework to isolate and quantify several factors that drive prescription drug cost. Three broad categories of effects are considered: volume, price and drug-mix effects. For each category, important sub-effects are quantified. This study presents a new and comprehensive methodology for decomposing the change in prescription drug costs over time including step-by-step demonstrations of how the formulas were derived. This methodology has practical applications for health policy decision makers and can aid researchers in conducting cost driver analyses. The methodology can be adjusted depending on the purpose and analytical depth of the research and data availability. © 2017 Journal of Population Therapeutics and Clinical Pharmacology. All rights reserved.
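
    The Laspeyres decomposition described above has a direct implementation: each effect holds the other factor at base-period values, and the remainder is the mix/interaction term. A sketch with hypothetical two-drug data, not the paper's full multi-factor formulas:

```python
def laspeyres_decomposition(p0, q0, p1, q1):
    """Split a change in drug cost into price, volume and residual
    (drug-mix) effects, holding the other factor at base-period values."""
    cost0 = sum(p * q for p, q in zip(p0, q0))
    cost1 = sum(p * q for p, q in zip(p1, q1))
    price_effect = sum((b - a) * q for a, b, q in zip(p0, p1, q0))
    volume_effect = sum((d - c) * p for c, d, p in zip(q0, q1, p0))
    mix_effect = (cost1 - cost0) - price_effect - volume_effect
    return price_effect, volume_effect, mix_effect

# Two drugs: the price of one rises while utilisation shifts to the other
print(laspeyres_decomposition([1.0, 2.0], [100, 50], [1.1, 2.0], [80, 90]))
# (10.0, 60.0, -2.0): price, volume, and mix contributions to the change
```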

  19. Disaggregated Energy Consumption and Sectoral Outputs in Thailand: ARDL Bound Testing Approach

    OpenAIRE

    Thurai Murugan Nathan; Venus Khim-Sen Liew; Wing-Keung Wong

    2016-01-01

    From an economic perspective, energy-output relationship studies have become increasingly popular in recent times, partly fuelled by a need to understand the effect of energy on production outputs rather than overall GDP. This study dealt with disaggregated energy consumption and outputs of some major economic sectors in Thailand. ARDL bound testing approach was employed to examine the co-integration relationship. The Granger causality test of the aforementioned ARDL framework was done to inv...

  20. An Agent-Based Framework for E-Commerce Information Retrieval Management Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Floarea NASTASE

    2009-01-01

    Full Text Available The paper addresses the issue of improving retrieval performance management for retrieval from document collections that exist on the Internet. It also proposes a solution that uses the benefits of agent technology and genetic algorithms in the process of information retrieval management. The most important paradigms of information retrieval are mentioned, with the goal of making the advantages of the genetic-algorithm-based one more evident. Within the paper, a genetic algorithm that can be used for the proposed solution is detailed, and a comparative description of the dynamic and static variants of the proposed solution is given. At the end, new future directions are outlined based on the elements presented in this paper. The future results look very encouraging.
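
    The GA machinery such a solution relies on can be sketched generically: a binary chromosome selecting query terms, with elitism, one-point crossover and bit-flip mutation. The fitness function below is a toy stand-in for a retrieval-precision measure, and all parameters are assumptions:

```python
import random

def genetic_search(fitness, n_genes, pop=30, gens=50, pm=0.1):
    """Plain generational GA over binary term-selection chromosomes."""
    P = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=fitness, reverse=True)
        nxt = scored[:2]                            # elitism: keep the best two
        while len(nxt) < pop:
            a, b = random.sample(scored[:10], 2)    # parents from the top decile
            cut = random.randrange(1, n_genes)
            child = a[:cut] + b[cut:]               # one-point crossover
            child = [g ^ (random.random() < pm) for g in child]  # mutation
            nxt.append(child)
        P = nxt
    return max(P, key=fitness)

# Toy fitness: prefer chromosomes that keep terms 0, 3 and 5 switched on
best = genetic_search(lambda c: c[0] + c[3] + c[5] - 0.1 * sum(c), 8)
print(best)
```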

  1. Disaggregation of SMOS soil moisture over West Africa using the Temperature and Vegetation Dryness Index based on SEVIRI land surface parameters

    DEFF Research Database (Denmark)

    Tagesson, T.; Horion, S.; Nieto, H.

    2018-01-01

    the Temperature and Vegetation Dryness Index (TVDI) that served as SM proxy within the disaggregation process. West Africa (3 N, 26 W; 28 N, 26 E) was selected as a case study as it presents both an important North-South climate gradient and a diverse range of ecosystem types. The main challenge was to set up...... resolution of SMOS SM, with potential application for local drought/flood monitoring of importance for the livelihood of the population of West Africa....

  2. A disaggregate model to predict the intercity travel demand

    Energy Technology Data Exchange (ETDEWEB)

    Damodaran, S.

    1988-01-01

    This study was directed towards developing disaggregate models to predict intercity travel demand in Canada. A conceptual framework for intercity travel behavior was proposed; under this framework, a nested multinomial model structure that combined mode choice and trip generation was developed. The CTS (Canadian Travel Survey) data base was used for testing the structure and for determining the viability of using this data base for intercity travel-demand prediction. Mode-choice and trip-generation models were calibrated for four modes (auto, bus, rail and air) for both business and non-business trips. The models were linked through the inclusive value variable, also referred to as the log sum of the denominator in the literature. Results of the study indicated that the structure used in this study could be applied to intercity travel-demand modeling. However, some limitations of the data base were identified. It is believed that, with some modifications, the CTS data could be used for predicting intercity travel demand. Future research can identify the factors affecting intercity travel behavior, which will facilitate collection of useful data for intercity travel prediction and policy analysis.
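
    The linking device is the standard nested-logit inclusive value: the log of the summed exponentiated mode utilities from the lower (mode-choice) nest enters the upper (trip-generation) model. A sketch with hypothetical utilities:

```python
import numpy as np

V = {"auto": 1.2, "bus": 0.3, "rail": 0.8, "air": -0.5}  # hypothetical utilities
v = np.array(list(V.values()))

probs = np.exp(v) / np.exp(v).sum()        # mode choice (lower nest, MNL)
inclusive_value = np.log(np.exp(v).sum())  # the "logsum"; feeds trip generation
print(dict(zip(V, probs.round(3))), round(inclusive_value, 3))
```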

  3. Disaggregated regulation in network sections: The normative and positive theory; Disaggregierte Regulierung in Netzsektoren: Normative und positive Theorie

    Energy Technology Data Exchange (ETDEWEB)

    Knieps, G. [Inst. fuer Verkehrswissenschaft und Regionalpolitik, Albert-Ludwigs-Univ. Freiburg i.B. (Germany)

    2007-09-15

    The article deals with the interaction of the normative and positive theories of regulation. Those parts of the network that need regulation can be localized and regulated with the help of the normative theory of monopolistic bottlenecks. Using positive theory, the basic elements of a regulation mandate in the sense of disaggregated regulation economics are derived.

  4. Telephony Over IP: A QoS Measurement-Based End to End Control Algorithm

    Directory of Open Access Journals (Sweden)

    Luigi Alcuri

    2004-12-01

    Full Text Available This paper presents a method for admitting voice calls in Telephony over IP (ToIP) scenarios. This method, called QoS-Weighted CAC, aims to guarantee quality of service to telephony applications. We use a measurement-based call admission control algorithm, which detects congested network links through feedback on overall link utilization. This feedback is based on measurements of packet delivery latencies of voice-over-IP connections at the edges of the transport network. In this way we introduce a closed-loop control method, which is able to auto-adapt the quality margin on the basis of network load and specific service-level requirements. Moreover, we evaluate the difference in performance achieved by different queue management configurations in guaranteeing quality of service to telephony applications; our goal was to evaluate the weight of edge-router queue configuration in a complex, realistic telephony-over-IP scenario. We compare many well-known queue scheduling algorithms, such as SFQ, WRR, RR, WIRR, and Priority. This comparison aims to locate queue schedulers in a more general control-scheme context where different elements, such as DiffServ marking and admission control algorithms, contribute to the overall quality of service required by real-time voice conversations. By means of software simulations we compare this solution with other call admission methods already described in the scientific literature (in particular Measured Sum, Bandwidth Equivalent with Hoeffding Bounds, and Simple Measure CAC) in order to locate the proposed method in a more general control-scheme context. On the basis of the results we highlight the possible advantages of this QoS-Weighted solution in comparison with other similar CAC solutions on the planes of complexity, stability, management, tunability to service-level requirements, and compatibility with actual network implementations.
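
    A toy sketch of the measurement-based, latency-fed admission rule: probed edge-to-edge delays adapt the admission margin, and a call is admitted only if the measured load plus its bandwidth stays under that margin. All names, thresholds and the adaptation rule are illustrative, not the paper's exact algorithm:

```python
class QoSWeightedCAC:
    """Measurement-based CAC sketch with an adaptive quality margin."""
    def __init__(self, capacity, base_margin=0.9):
        self.capacity = capacity        # link capacity (e.g. kb/s)
        self.margin = base_margin       # fraction of capacity usable by voice
        self.measured_load = 0.0        # updated from traffic measurements

    def update_from_probe(self, delay_ms, target_ms):
        # Shrink the admission margin as probed delays approach the target
        self.margin = max(0.5, min(0.95,
                                   0.9 * target_ms / max(delay_ms, 1e-9)))

    def admit(self, call_bw):
        return self.measured_load + call_bw <= self.margin * self.capacity

cac = QoSWeightedCAC(capacity=10_000)
cac.measured_load = 8_500
cac.update_from_probe(delay_ms=120, target_ms=100)  # congestion: tighten margin
print(cac.margin, cac.admit(call_bw=64))
```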

  5. The Long-Run Macroeconomic Effects of Aid and Disaggregated Aid in Ethiopia

    DEFF Research Database (Denmark)

    Gebregziabher, Fiseha Haile

    2014-01-01

    positively, whereas it is negatively associated with government consumption. Our results concerning the impacts of disaggregated aid stand in stark contrast to earlier work. Bilateral aid increases investment and GDP and is negatively associated with government consumption, whereas multilateral aid is only...... positively associated with imports. Grants contribute to GDP, investment and imports, whereas loans affect none of the variables. Finally, there is evidence to suggest that multilateral aid and loans have been disbursed in a procyclical fashion...

  6. An initial assessment of a SMAP soil moisture disaggregation scheme using TIR surface evaporation data over the continental United States

    Science.gov (United States)

    Mishra, Vikalp; Ellenburg, W. Lee; Griffin, Robert E.; Mecikalski, John R.; Cruise, James F.; Hain, Christopher R.; Anderson, Martha C.

    2018-06-01

    The Soil Moisture Active Passive (SMAP) mission is dedicated to global soil moisture mapping. Typically, an L-band microwave radiometer has a spatial resolution on the order of 36-40 km, which is too coarse for many specific hydro-meteorological and agricultural applications. With the failure of the SMAP active radar within three months of becoming operational, an intermediate (9-km) and finer (3-km) scale soil moisture product solely from the SMAP mission is no longer possible. Therefore, the focus of this study is a disaggregation of the 36-km resolution SMAP passive-only surface soil moisture (SSM) to spatial scales of 3 km and 9 km using the Soil Evaporative Efficiency (SEE) approach. The SEE was computed using thermal-infrared (TIR) estimates of surface evaporation over the Continental U.S. (CONUS). The disaggregation results were compared with the 3 months of SMAP-Active (SMAP-A) and Active/Passive (AP) products, while comparisons with SMAP-Enhanced (SMAP-E), SMAP-Passive (SMAP-P), as well as with more than 180 Soil Climate Analysis Network (SCAN) stations across CONUS, were performed for a 19-month period. At the 9-km spatial scale, the TIR-downscaled data correlated strongly with the SMAP-E SSM both spatially (r = 0.90) and temporally (r = 0.87). In comparison with SCAN observations, overall correlations of 0.49 and 0.47, biases of -0.022 and -0.019, and unbiased RMSDs of 0.105 and 0.100 were found for SMAP-E and TIR-downscaled SSM across the Continental U.S., respectively. At the 3-km scale, TIR-downscaled and SMAP-A had a mean temporal correlation of only 0.27. In terms of gain statistics, the highest percentage of SCAN sites with positive gains (>55%) was observed with the TIR-downscaled SSM at 9 km. Overall, the TIR-based downscaled SSM showed strong correspondence with SMAP-E; against SCAN observations both products performed similarly, although the gain statistics show that the TIR-downscaled SSM slightly outperformed SMAP-E.
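
    The core of such a scheme can be sketched in a few lines of Python: redistribute each coarse-pixel soil moisture value according to the fine-scale deviation of SEE from its coarse-pixel mean. The linear gain g below is an assumption for illustration; the study derives the relationship from a SEE-soil moisture model rather than using a fixed constant.

    import numpy as np

    def disaggregate(sm_coarse, see_fine, g=0.3):
        """sm_coarse: one 36-km soil moisture value (m3/m3);
        see_fine: 2-D array of SEE values inside that coarse pixel."""
        return sm_coarse + g * (see_fine - see_fine.mean())

    see = np.random.uniform(0.2, 0.8, size=(4, 4))  # synthetic 9-km SEE field
    print(disaggregate(0.25, see))                  # fine-scale SSM estimates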

  7. Visualization for Hyper-Heuristics: Back-End Processing

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Luke [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-01

    Modern society is faced with increasingly complex problems, many of which can be formulated as generate-and-test optimization problems. Yet, general-purpose optimization algorithms may sometimes require too much computational time. In these instances, hyperheuristics may be used. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario, finding the solution significantly faster than its predecessor. However, it may be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics and an easy-to-understand scientific visualization for the produced solutions. To support the development of this GUI, my portion of the research involved developing algorithms that would allow for parsing of the data produced by the hyper-heuristics. This data would then be sent to the front-end, where it would be displayed to the end user.

  8. ESTIMATION OF COBB-DOUGLAS AND TRANSLOG PRODUCTION FUNCTIONS WITH CAPITAL AND GENDER DISAGGREGATED LABOR INPUTS IN THE USA

    Directory of Open Access Journals (Sweden)

    Gertrude Sebunya Muwanga

    2018-01-01

    Full Text Available This is an empirical investigation of the homogeneity of gender-disaggregated labor using Cobb-Douglas and single-/multi-factor translog production functions, and labor productivity functions, for the USA. The results, based on the single-factor translog model, indicated that: an increase in the capital/female labor ratio increases aggregate output; male labor is more productive than female labor, which is more productive than capital; a simultaneous increase in the quantity allocated and the productivity of an input leads to an increase in output; female labor productivity has grown more slowly than male labor productivity; it is much easier to substitute male labor for capital than female labor; and the three inputs are neither perfect substitutes nor perfect complements. As a consequence, male and female labor are not homogeneous inputs. Efforts to investigate the factors influencing gender-disaggregated labor productivity, and to design policies that achieve gender parity in numbers/productivity in the labor force and increase the ease of substitutability between male and female labor, are required.

  9. Optimization on robot arm machining by using genetic algorithms

    Science.gov (United States)

    Liu, Tung-Kuan; Chen, Chiu-Hung; Tsai, Shang-En

    2007-12-01

    In this study, an optimization problem in robot arm machining is formulated and solved using genetic algorithms (GAs). The proposed approach adopts a direct kinematics model and utilizes the GA's global search ability to find the optimum solution. The direct kinematics equations of the robot arm are formulated and can be used to compute the end-effector coordinates. Based on these, the objective of optimum machining along a set of points can be evaluated evolutionarily via the distance between the machining points and the end-effector positions. Besides, a 3D CAD application, CATIA, is used to build the 3D models of the robot arm, work-pieces and their components. A simulated experiment in CATIA is used to verify the computational results first, and practical control of the robot arm through the RS232 port is also performed. From the results, the approach proves robust and suitable for most machining needs when robot arms are adopted as machining tools.
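
    Since the paper's implementation is not reproduced here, the following is a minimal Python sketch of the idea under stated assumptions: chromosomes are joint angles, the fitness is the negative distance between the end-effector computed from direct kinematics and a target machining point, and a planar two-link arm stands in for the real robot. Link lengths and GA settings are illustrative.

    import math, random

    L1, L2 = 1.0, 0.8                      # assumed link lengths

    def end_effector(genes):
        # Direct kinematics of a planar 2-link arm.
        t1, t2 = genes
        return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
                L1 * math.sin(t1) + L2 * math.sin(t1 + t2))

    def fitness(genes, target):
        x, y = end_effector(genes)
        return -math.hypot(x - target[0], y - target[1])  # higher is better

    def ga(target, pop_size=50, gens=200, mut=0.1):
        pop = [[random.uniform(-math.pi, math.pi) for _ in range(2)]
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda g: fitness(g, target), reverse=True)
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
                if random.random() < mut:                     # mutation
                    child[random.randrange(2)] += random.gauss(0, 0.1)
                children.append(child)
            pop = parents + children
        return max(pop, key=lambda g: fitness(g, target))

    best = ga(target=(1.2, 0.9))
    print(end_effector(best))   # should land close to the target point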

  10. The Economic Impact of Higher Education Institutions in Ireland: Evidence from Disaggregated Input-Output Tables

    Science.gov (United States)

    Zhang, Qiantao; Larkin, Charles; Lucey, Brian M.

    2017-01-01

    While there has been a long history of modelling the economic impact of higher education institutions (HEIs), little research has been undertaken in the context of Ireland. This paper provides, for the first time, a disaggregated input-output table for Ireland's higher education sector. The picture painted overall is a higher education sector that…

  11. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    Science.gov (United States)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from real world is shown as a proof of concept.
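
    A minimal Python sketch of the two ingredients described above is given below: a mixed-type generator (discrete wet/dry occurrence multiplied by continuous depths) and an adjusting step that restores exact consistency with a given coarse-scale total. The Bernoulli and gamma distribution choices and the proportional adjusting rule are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    rng = np.random.default_rng(42)

    def generate_fine(n, p_wet=0.3, shape=0.7, scale=2.0):
        wet = rng.random(n) < p_wet              # discrete intermittency
        depths = rng.gamma(shape, scale, n)      # continuous rainfall depths
        return wet * depths

    def adjust(fine, coarse_total):
        s = fine.sum()
        if s == 0:
            return fine                          # an all-dry interval stays dry
        return fine * (coarse_total / s)         # preserves the wet/dry pattern

    fine = adjust(generate_fine(24), coarse_total=12.5)
    print(fine.sum())   # exactly 12.5 by construction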

  12. An adaptive inverse kinematics algorithm for robot manipulators

    Science.gov (United States)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.

  13. Object Detection and Tracking using Modified Diamond Search Block Matching Motion Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Apurva Samdurkar

    2018-06-01

    Full Text Available Object tracking is one of the main fields within computer vision. Amongst the various methods/approaches for object detection and tracking, the background subtraction approach makes the detection of objects easier. The proposed block matching algorithm is then applied to the detected object to generate the motion vectors. The existing diamond search (DS) and cross diamond search (CDS) algorithms are studied, and experiments are carried out on various standard video data sets and user-defined data sets. Based on the study and analysis of these two existing algorithms, a modified diamond search pattern (MDS) algorithm is proposed, using a small diamond shape search pattern in the initial step and a large diamond shape (LDS) in further steps for motion estimation. The initial search pattern consists of five points in a small diamond shape and gradually grows into a large diamond shape pattern, based on the point with the minimum cost function. The algorithm ends with the small diamond shape pattern. The proposed MDS algorithm finds smaller motion vectors and uses fewer search points than the existing DS and CDS algorithms. Further, object detection is carried out using the background subtraction approach and, finally, the MDS motion estimation algorithm is used for tracking the object in color video sequences. The experiments are carried out using different video data sets containing a single object. The results are evaluated and compared using evaluation parameters such as average search points per frame and average computational time per frame. The experimental results show that MDS performs better than DS and CDS on average search points and average computation time.
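
    The following Python sketch illustrates the search-pattern logic described above: probe a small diamond first, switch to a large diamond while the minimum keeps moving, then finish with a small-diamond refinement. The 8x8 block size and the sum-of-absolute-differences (SAD) cost are conventional choices assumed here, not taken from the paper.

    import numpy as np

    SMALL = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]
    LARGE = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0),
             (1, 1), (1, -1), (-1, 1), (-1, -1)]

    def sad(cur, ref, bx, by, dx, dy, n=8):
        y, x = by + dy, bx + dx
        if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
            return np.inf                     # candidate block leaves the frame
        block = cur[by:by + n, bx:bx + n].astype(int)
        cand = ref[y:y + n, x:x + n].astype(int)
        return np.abs(block - cand).sum()

    def mds(cur, ref, bx, by):
        cx = cy = 0
        # Small diamond once, large diamond until the minimum stops moving,
        # then a final small-diamond refinement step.
        for pattern, once in ((SMALL, True), (LARGE, False), (SMALL, True)):
            while True:
                costs = {(cx + dx, cy + dy): sad(cur, ref, bx, by, cx + dx, cy + dy)
                         for dx, dy in pattern}
                best = min(costs, key=costs.get)
                moved = best != (cx, cy)
                cx, cy = best
                if once or not moved:
                    break
        return cx, cy

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64))
    cur = np.roll(ref, (2, 1), axis=(0, 1))   # frame shifted by a known motion
    print(mds(cur, ref, bx=24, by=24))        # estimated (dx, dy) for the block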

  14. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes for…

  15. Energy consumption, carbon emissions and economic growth in Saudi Arabia: An aggregate and disaggregate analysis

    International Nuclear Information System (INIS)

    Alkhathlan, Khalid; Javid, Muhammad

    2013-01-01

    The objective of this study is to examine the relationship among economic growth, carbon emissions and energy consumption at the aggregate and disaggregate levels. For the aggregate energy consumption model, we use total energy consumption per capita and CO2 emissions per capita based on the total energy consumption. For the disaggregate analysis, we used oil, gas and electricity consumption models along with their respective CO2 emissions. The long-term income elasticities of carbon emissions in three of the four models are positive and higher than their estimated short-term income elasticities. These results suggest that carbon emissions increase with the increase in per capita income which supports the belief that there is a monotonically increasing relationship between per capita carbon emissions and per capita income for the aggregate model and for the oil and electricity consumption models. The long- and short-term income elasticities of carbon emissions are negative for the gas consumption model. This result indicates that if the Saudi Arabian economy switched from oil to gas consumption, then an increase in per capita income would reduce carbon emissions. The results also suggest that electricity is less polluting than other sources of energy. - Highlights: • Carbon emissions increase with the increase in per capita income in Saudi Arabia. • The income elasticity of CO2 is negative for the gas consumption model. • The income elasticity of CO2 is positive for the oil consumption model. • The results suggest that electricity is less polluting than oil and gas

  16. Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard

    International Nuclear Information System (INIS)

    Jha, Abhinav K; Kupinski, Matthew A; Rodríguez, Jeffrey J; Stephen, Renu M; Stopeck, Alison T

    2012-01-01

    In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both the ensemble mean square error and precision. We also propose consistency checks for this evaluation technique. (paper)

  17. Fully 3D PET image reconstruction using a fourier preconditioned conjugate-gradient algorithm

    International Nuclear Information System (INIS)

    Fessler, J.A.; Ficaro, E.P.

    1996-01-01

    Since the data sizes in fully 3D PET imaging are very large, iterative image reconstruction algorithms must converge in very few iterations to be useful. One can improve the convergence rate of the conjugate-gradient (CG) algorithm by incorporating preconditioning operators that approximate the inverse of the Hessian of the objective function. If the 3D cylindrical PET geometry were not truncated at the ends, then the Hessian of the penalized least-squares objective function would be approximately shift-invariant, i.e. G'G would be nearly block-circulant, where G is the system matrix. We propose a Fourier preconditioner based on this shift-invariant approximation to the Hessian. Results show that this preconditioner significantly accelerates the convergence of the CG algorithm with only a small increase in computation
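
    The following Python toy (a 1-D circulant stand-in, not a PET system model) illustrates the mechanics: the Hessian-vector product is a convolution, and the preconditioner divides by the filter's spectrum in the Fourier domain. The kernel is made up; in the real algorithm the preconditioner only approximates the inverse Hessian, whereas here it happens to be exact.

    import numpy as np

    n = 64
    kernel = np.zeros(n)
    kernel[0], kernel[1], kernel[-1] = 0.6, 0.2, 0.2   # made-up SPD filter
    H = np.real(np.fft.fft(kernel))                    # circulant eigenvalues

    def A(x):                                          # Hessian-vector product
        return np.real(np.fft.ifft(H * np.fft.fft(x)))

    def M_inv(r):                                      # Fourier preconditioner
        return np.real(np.fft.ifft(np.fft.fft(r) / H))

    def pcg(b, iters=20):
        x = np.zeros_like(b)
        r = b - A(x)
        z = M_inv(r)
        p = z.copy()
        for _ in range(iters):
            Ap = A(p)
            alpha = (r @ z) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap
            z_new = M_inv(r_new)
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        return x

    b = np.random.default_rng(0).normal(size=n)
    x = pcg(b)   # converges almost immediately since M_inv is exact here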

  18. Providing end-to-end QoS for multimedia applications in 3G wireless networks

    Science.gov (United States)

    Guo, Katherine; Rangarajan, Samapth; Siddiqui, M. A.; Paul, Sanjoy

    2003-11-01

    As the usage of wireless packet data services increases, wireless carriers today are faced with the challenge of offering multimedia applications with QoS requirements within current 3G data networks. End-to-end QoS requires support at the application, network, link and medium access control (MAC) layers. We discuss existing CDMA2000 network architecture and show its shortcomings that prevent supporting multiple classes of traffic at the Radio Access Network (RAN). We then propose changes in RAN within the standards framework that enable support for multiple traffic classes. In addition, we discuss how Session Initiation Protocol (SIP) can be augmented with QoS signaling for supporting end-to-end QoS. We also review state of the art scheduling algorithms at the base station and provide possible extensions to these algorithms to support different classes of traffic as well as different classes of users.

  19. POVERTY AND CALORIE DEPRIVATION ACROSS SOCIO-ECONOMIC GROUPS IN RURAL INDIA: A DISAGGREGATED ANALYSIS

    OpenAIRE

    Gupta, Abha; Mishra, Deepak K.

    2013-01-01

    This paper examines the linkages between calorie deprivation and poverty in rural India at a disaggregated level. It aims to explore the trends and pattern in levels of nutrient intake across social and economic groups. A spatial analysis at the state and NSS-region level unravels the spatial distribution of calorie deprivation in rural India. The gap between incidence of poverty and calorie deprivation has also been investigated. The paper also estimates the factors influencing calorie depri...

  20. Weighted-DESYNC and Its Application to End-to-End Throughput Fairness in Wireless Multihop Network

    Directory of Open Access Journals (Sweden)

    Ui-Seong Yu

    2017-01-01

    Full Text Available The end-to-end throughput of a routing path in a wireless multihop network is restricted by a bottleneck node that has the smallest bandwidth among the nodes on the routing path. In this study, we propose a method for resolving the bottleneck-node problem in multihop networks, which is based on the multihop DESYNC (MH-DESYNC) algorithm, a bioinspired resource allocation method developed for use in multihop environments that enables fair resource allocation among nearby (up to two hops) neighbors. Based on MH-DESYNC, we newly propose weighted-DESYNC (W-DESYNC) as a tool to artificially control the amount of resource allocated to a specific user and thus to achieve throughput fairness over a routing path. The proposed W-DESYNC employs the weight factor of a link to determine the amount of bandwidth allocated to a node. By letting the weight factor be the link quality of a routing path and making it the same across the routing path via the Cucker-Smale flocking model, we obtain throughput fairness over the routing path. The simulation results show that the proposed algorithm achieves throughput fairness over a routing path and can increase the total end-to-end throughput in wireless multihop networks.

  1. The influence of energy consumption of China on its real GDP from aggregated and disaggregated viewpoints

    International Nuclear Information System (INIS)

    Zhang, Wei; Yang, Shuyun

    2013-01-01

    This paper investigated the causal relationship between energy consumption and gross domestic product (GDP) in China at both the aggregated and disaggregated levels during the period 1978–2009, using a modified version of the Granger (1969) causality test proposed by Toda and Yamamoto (1995) within a multivariate framework. The empirical results suggested the existence of a negative bi-directional Granger causality between aggregated energy consumption and real GDP. At the disaggregated level of energy consumption, the results were more complicated. For coal, the empirical findings suggested a negative bi-directional Granger causality between coal consumption and real GDP. However, for oil and gas, the empirical findings suggested a positive bi-directional Granger causality between oil as well as gas consumption and real GDP. Though these results supported the feedback hypothesis, the negative relationship might be attributed to the growing economy's production shifting towards less energy-intensive sectors and to excessive energy consumption in relatively unproductive sectors. The results indicated that policies reducing aggregated energy consumption and promoting energy conservation may boost China's economic growth. - Highlights: ► A negative bi-directional Granger causality runs from energy consumption to real GDP. ► The same result runs from coal consumption to real GDP, but for oil and gas it does not. ► The results partly derive from excessive energy consumption in unproductive sectors. ► Reducing aggregated energy consumption probably promotes the development of China's economy
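
    For readers who want to reproduce the flavor of the test, the sketch below fits a lag-augmented VAR in Python with statsmodels on synthetic series; the variable names, lag order p and maximum integration order d_max are placeholders. Note that a faithful Toda-Yamamoto test must exclude the d_max augmentation lag(s) from the Wald restriction, which the plain test_causality call below does not do, so this is only an approximation of the procedure.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(1)
    df = pd.DataFrame({"gdp": rng.normal(size=200).cumsum(),
                       "energy": rng.normal(size=200).cumsum()})

    p, d_max = 2, 1                  # chosen lag order and max integration order
    res = VAR(df).fit(p + d_max)     # lag-augmented VAR estimated in levels
    print(res.test_causality("gdp", ["energy"], kind="wald").summary())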

  2. Using qualitative research to inform development of a diagnostic algorithm for UTI in children.

    Science.gov (United States)

    de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D

    2013-06-01

    Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. The aim was to investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Assessment funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in young children presenting to primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these, for presence or absence and terminology, with the DUTY CRF. Despite much agreement between the clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF, and some included items that could have been reworded to improve understanding and the final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.

  3. Residential energy use in Mexico: Structure, evolution, environmental impacts, and savings potential

    Energy Technology Data Exchange (ETDEWEB)

    Masera, O.; Friedmann, R.; deBuen, O.

    1993-05-01

    This article examines the characteristics of residential energy use in Mexico, its environmental impacts, and the savings potential of the major end-uses. The main options and barriers to increase the efficiency of energy use are discussed. The energy analysis is based on a disaggregation of residential energy use by end-uses. The dynamics of the evolution of the residential energy sector during the past 20 years are also addressed when the information is available. Major areas for research and for innovative decision-making are identified and prioritized.

  4. NIR-Red Spectra-Based Disaggregation of SMAP Soil Moisture to 250 m Resolution Based on SMAPEx-4/5 in Southeastern Australia

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2017-01-01

    Full Text Available To meet the demands of regional hydrological and agricultural applications, a new method named near infrared-red (NIR-red) spectra-based disaggregation (NRSD) was proposed to disaggregate Soil Moisture Active Passive (SMAP) products from 36 km to 250 m resolution. The NRSD combines the proposed normalized soil moisture index (NSMI) with SMAP data to obtain 250 m resolution soil moisture mapping. The experiment was conducted in southeastern Australia during the SMAP Experiments (SMAPEx) 4/5 and validated with the in situ SMAPEx network. Results showed that NRSD performed a decent downscaling (root-mean-square error (RMSE) = 0.04 m3/m3 and 0.12 m3/m3 during SMAPEx-4 and SMAPEx-5, respectively). Based on the validation, it was found that the proposed NSMI is a new alternative indicator for denoting the heterogeneity of soil moisture at sub-kilometer scales. Owing to the excellent performance of the NSMI, NRSD has a higher overall accuracy, a finer spatial representation within SMAP pixels and a wider applicable scope (on usability tests for land cover, vegetation density and drought condition) than the disaggregation based on physical and theoretical scale change (DISPATCH) has at 250 m resolution. This reveals that the NRSD method is expected to provide soil moisture mapping at 250 m resolution for large-scale hydrological and agricultural studies.

  5. Road network selection for small-scale maps using an improved centrality-based algorithm

    Directory of Open Access Journals (Sweden)

    Roy Weiss

    2014-12-01

    Full Text Available The road network is one of the key feature classes in topographic maps and databases. In the task of deriving road networks for products at smaller scales, road network selection forms a prerequisite for all other generalization operators, and is thus a fundamental operation in the overall process of topographic map and database production. The objective of this work was to develop an algorithm for automated road network selection from a large-scale (1:10,000) to a small-scale database (1:200,000). The project was pursued in collaboration with swisstopo, the national mapping agency of Switzerland, with generic mapping requirements in mind. Preliminary experiments suggested that a selection algorithm based on betweenness centrality performed best for this purpose, yet also exposed problems. The main contribution of this paper thus consists of four extensions that address deficiencies of the basic centrality-based algorithm and lead to a significant improvement of the results. The first two extensions improve the formation of strokes concatenating the road segments, which is crucial since strokes provide the foundation upon which the network centrality measure is computed. Thus, the first extension ensures that roundabouts are detected and collapsed, thus avoiding interruptions of strokes by roundabouts, while the second introduces additional semantics into the process of stroke formation, allowing longer and more plausible strokes to be built. The third extension detects areas of high road density (i.e., urban areas) using density-based clustering and then locally increases the threshold of the centrality measure used to select road segments, such that more thinning takes place in those areas. Finally, since the basic algorithm tends to create dead-ends—which however are not tolerated in small-scale maps—the fourth extension reconnects these dead-ends to the main network, searching for the best path in the main heading of the dead-end.
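
    A minimal Python sketch of the basic centrality-based selection step (without the four extensions, the stroke building, or the density-based threshold adjustment) might look as follows; the random geometric graph stands in for a real road graph, and the retention share is an arbitrary generalization budget.

    import networkx as nx

    G = nx.random_geometric_graph(200, 0.125, seed=7)   # stand-in road graph
    bc = nx.edge_betweenness_centrality(G)              # rank road segments

    keep_share = 0.4                                    # generalization budget
    n_keep = int(keep_share * G.number_of_edges())
    kept = sorted(bc, key=bc.get, reverse=True)[:n_keep]

    S = G.edge_subgraph(kept).copy()                    # selected small-scale net
    print(S.number_of_edges(), "of", G.number_of_edges(), "segments kept")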

  6. Neighbor Discovery Algorithm in Wireless Local Area Networks Using Multi-beam Directional Antennas

    Science.gov (United States)

    Wang, Jin; Peng, Wei; Liu, Song

    2017-10-01

    Neighbor discovery is an important step for Wireless Local Area Networks (WLAN), and the use of multi-beam directional antennas can greatly improve the network performance. However, most neighbor discovery algorithms in WLAN based on multi-beam directional antennas can only work effectively in synchronous systems, not in asynchronous ones, and collisions at the AP remain a bottleneck for neighbor discovery. In this paper, we propose two asynchronous neighbor discovery algorithms: asynchronous hierarchical scanning (AHS) and asynchronous directional scanning (ADS). Both of them are based on a three-way handshaking mechanism. AHS and ADS reduce collisions at the AP in a hierarchical way and a directional way, respectively. In the end, the performance of AHS and ADS is tested on OMNeT++. Moreover, the different application scenarios and the factors affecting the performance of these algorithms are analyzed. The simulation results show that AHS is suitable for densely populated scenes around the AP, while ADS is suitable when most of the neighboring nodes are far from the AP.

  7. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is the science and art of maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into ciphertext, which is unreadable and meaningless so that it cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, a monoalphabetic algorithm and an XOR algorithm are combined to form a super-encryption. The monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the logic operation XOR. Since the monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so the data integrity is still ensured.
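
    A compact, runnable Python sketch of the described scheme is given below. The keyword-based table construction and the repeating-key XOR are common textbook choices assumed here; the paper's exact key handling may differ.

    import string

    def mono_table(keyword):
        # Keyword letters first (deduplicated), then the rest of the alphabet.
        seen = dict.fromkeys(c for c in keyword.upper() if c.isalpha())
        cipher = "".join(seen) + "".join(
            c for c in string.ascii_uppercase if c not in seen)
        return str.maketrans(string.ascii_uppercase, cipher)

    def xor_bytes(data, key):
        # Repeating-key XOR over raw bytes (its own inverse).
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def super_encrypt(plain, keyword, xor_key):
        stage1 = plain.upper().translate(mono_table(keyword))  # substitution
        return xor_bytes(stage1.encode(), xor_key)             # then XOR

    def super_decrypt(cipher, keyword, xor_key):
        stage1 = xor_bytes(cipher, xor_key).decode()
        inverse = {v: k for k, v in mono_table(keyword).items()}
        return stage1.translate(inverse)

    ct = super_encrypt("HELLO WORLD", "SECRET", b"k3y")
    print(super_decrypt(ct, "SECRET", b"k3y"))   # -> HELLO WORLD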

  8. Accurate fault location algorithm on power transmission lines with use of two-end unsynchronized measurements

    Directory of Open Access Journals (Sweden)

    Mohamed Dine

    2012-01-01

    Full Text Available This paper presents a new approach to fault location on power transmission lines. This approach uses two-end unsynchronized measurements of the line and benefits from the advantages of digital technology and numerical relaying, which are available today and can easily be applied for off-line analysis. The approach is to modify the apparent impedance method using a very simple first-order formula. The new method is independent of fault resistance, source impedances and pre-fault currents. In addition, the data volume communicated between relays is small enough to be transmitted easily using a digital protection channel. The proposed approach is tested via digital simulation using MATLAB, and the test results corroborate the superior performance of the proposed approach.

  9. Designing algorithms using CAD technologies

    Directory of Open Access Journals (Sweden)

    Alin IORDACHE

    2008-01-01

    Full Text Available A representative example of a modular eLearning-platform application, ‘Logical diagrams’, is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application is trying to solve concerns young programmers who forget about the fundamentals of the domain, algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, connected to one another to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.

  10. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.

    Science.gov (United States)

    Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A

    2017-03-01

    Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods, combining hand-crafted image feature descriptors with various classifiers, are not able to effectively improve the accuracy rate and meet the high requirements of classification of biomedical images. The same also holds true for artificial neural network models directly trained with limited biomedical images as training data, or directly used as a black box to extract deep features based on another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. In our model, we do not need to manually design the feature space, seek an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technological difficulties in the adoption of traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long waits for training a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works. With the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. We propose a robust
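
    In modern terms, the described pipeline amounts to fine-tuning a pre-trained CNN end-to-end on the target images. Below is a minimal PyTorch sketch under stated assumptions: the ResNet-18 backbone, the three-class head, and the optimizer settings are all illustrative and do not reproduce the paper's actual architecture.

    import torch
    import torch.nn as nn
    from torchvision import models

    num_classes = 3                                   # e.g. lesion categories
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        # images: (B, 3, 224, 224) float tensor; labels: (B,) long tensor.
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()      # fine-tunes the whole network end-to-end
        optimizer.step()
        return loss.item()

    loss = train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 2, 0]))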

  11. Distributed consensus for metamorphic systems using a gossip algorithm for CAT(0) metric spaces

    Science.gov (United States)

    Bellachehab, Anass; Jakubowicz, Jérémie

    2015-01-01

    We present an application of distributed consensus algorithms to metamorphic systems. A metamorphic system is a set of identical units that can self-assemble to form a rigid structure. For instance, one can think of a robotic arm composed of multiple links connected by joints. The system can change its shape in order to adapt to different environments via reconfiguration of its constituting units. We assume in this work that several metamorphic systems form a network: two systems are connected whenever they are able to communicate with each other. The aim of this paper is to propose a distributed algorithm that synchronizes all the systems in the network. Synchronizing means that all the systems should end up having the same configuration. This aim is achieved in two steps: (i) we cast the problem as a consensus problem on a metric space and (ii) we use a recent distributed consensus algorithm that only makes use of metrical notions.

  12. Neutron spectrum unfolding using genetic algorithm in a Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Suman, Vitisha [Health Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Sarkar, P.K., E-mail: pksarkar02@gmail.com [Manipal Centre for Natural Sciences, Manipal University, Manipal 576104 (India)

    2014-02-11

    A spectrum unfolding technique GAMCD (Genetic Algorithm and Monte Carlo based spectrum Deconvolution) has been developed using the genetic algorithm methodology within the framework of Monte Carlo simulations. Each Monte Carlo history starts with initial solution vectors (population) as randomly generated points in the hyper dimensional solution space that are related to the measured data by the response matrix of the detection system. The transition of the solution points in the solution space from one generation to another are governed by the genetic algorithm methodology using the techniques of cross-over (mating) and mutation in a probabilistic manner adding new solution points to the population. The population size is kept constant by discarding solutions having lesser fitness values (larger differences between measured and calculated results). Solutions having the highest fitness value at the end of each Monte Carlo history are averaged over all histories to obtain the final spectral solution. The present method shows promising results in neutron spectrum unfolding for both under-determined and over-determined problems with simulated test data as well as measured data when compared with some existing unfolding codes. An attractive advantage of the present method is the independence of the final spectra from the initial guess spectra.

  13. Neutron spectrum unfolding using genetic algorithm in a Monte Carlo simulation

    International Nuclear Information System (INIS)

    Suman, Vitisha; Sarkar, P.K.

    2014-01-01

    A spectrum unfolding technique GAMCD (Genetic Algorithm and Monte Carlo based spectrum Deconvolution) has been developed using the genetic algorithm methodology within the framework of Monte Carlo simulations. Each Monte Carlo history starts with initial solution vectors (population) as randomly generated points in the hyper dimensional solution space that are related to the measured data by the response matrix of the detection system. The transition of the solution points in the solution space from one generation to another are governed by the genetic algorithm methodology using the techniques of cross-over (mating) and mutation in a probabilistic manner adding new solution points to the population. The population size is kept constant by discarding solutions having lesser fitness values (larger differences between measured and calculated results). Solutions having the highest fitness value at the end of each Monte Carlo history are averaged over all histories to obtain the final spectral solution. The present method shows promising results in neutron spectrum unfolding for both under-determined and over-determined problems with simulated test data as well as measured data when compared with some existing unfolding codes. An attractive advantage of the present method is the independence of the final spectra from the initial guess spectra

  14. End to end adaptive congestion control in TCP/IP networks

    CERN Document Server

    Houmkozlis, Christos N

    2012-01-01

    This book provides an adaptive control theory perspective on designing congestion controls for packet-switching networks. Relevant to a wide range of disciplines and industries, including the music industry, computers, image trading, and virtual groups, the text extensively discusses source oriented, or end to end, congestion control algorithms. The book empowers readers with clear understanding of the characteristics of packet-switching networks and their effects on system stability and performance. It provides schemes capable of controlling congestion and fairness and presents real-world app

  15. L-band brightness temperature disaggregation for use with S-band and C-band radiometer data for WCOM

    Science.gov (United States)

    Yao, P.; Shi, J.; Zhao, T.; Cosh, M. H.; Bindlish, R.

    2017-12-01

    There are two passive microwave sensors onboard the Water Cycle Observation Mission (WCOM): a synthetic aperture radiometer operating at L-, S- and C-bands and a scanning microwave radiometer operating from C- to W-band. This provides a unique opportunity to disaggregate L-band brightness temperature (soil moisture) with S-band and C-band radiometer data. In this study, passive-only downscaling methodologies are developed and evaluated. Based on radiative transfer modeling, it was found that the brightness temperatures (TBs) at L-band and S-band exhibit a linear relationship, while there is an exponential relationship between L-band and C-band. We carried out the downscaling by two methods: (1) downscaling with L-S-C band passive measurements at the same incidence angle from the payload IMI; (2) downscaling with L-C band passive measurements at different incidence angles from the payloads IMI and PMI. The downscaling method with L-S bands at the same incidence angle was first evaluated using SMEX02 data; the RMSEs are 2.69 K and 1.52 K for H and V polarization, respectively. The downscaling method with L-C bands at different incidence angles was developed using SMEX03 data; the RMSEs are 2.97 K and 2.68 K for H and V polarization, respectively. These results show that high-resolution L-band brightness temperature and soil moisture products could be generated from the future WCOM passive-only observations.
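
    The linear L-S relationship suggests a simple calibrate-then-apply scheme, sketched below in Python with synthetic numbers: fit L = a*S + b at the coarse scale, then apply the fit to fine-scale S-band TBs. The exponential L-C relationship would be handled analogously, e.g. by fitting in log space; all coefficients below are made up.

    import numpy as np

    rng = np.random.default_rng(3)
    tb_s_coarse = rng.uniform(220, 280, 100)          # S-band TB, coarse pixels
    tb_l_coarse = 0.9 * tb_s_coarse + 20 + rng.normal(0, 1, 100)

    a, b = np.polyfit(tb_s_coarse, tb_l_coarse, 1)    # calibrate L = a*S + b

    tb_s_fine = rng.uniform(220, 280, (4, 4))         # S-band field in one pixel
    tb_l_fine = a * tb_s_fine + b                     # disaggregated L-band TB
    print(a, b)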

  16. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The newly proposed algorithm is data driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...

  17. Economic dispatch using chaotic bat algorithm

    International Nuclear Information System (INIS)

    Adarsh, B.R.; Raghunathan, T.; Jayabarathi, T.; Yang, Xin-She

    2016-01-01

    This paper presents the application of a new metaheuristic optimization algorithm, the chaotic bat algorithm for solving the economic dispatch problem involving a number of equality and inequality constraints such as power balance, prohibited operating zones and ramp rate limits. Transmission losses and multiple fuel options are also considered for some problems. The chaotic bat algorithm, a variant of the basic bat algorithm, is obtained by incorporating chaotic sequences to enhance its performance. Five different example problems comprising 6, 13, 20, 40 and 160 generating units are solved to demonstrate the effectiveness of the algorithm. The algorithm requires little tuning by the user, and the results obtained show that it either outperforms or compares favorably with several existing techniques reported in literature. - Highlights: • The chaotic bat algorithm, a new metaheuristic optimization algorithm has been used. • The problem solved – the economic dispatch problem – is nonlinear, discontinuous. • It has number of equality and inequality constraints. • The algorithm has been demonstrated to be applicable on high dimensional problems.
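
    A compact Python sketch of the idea on a toy three-unit dispatch is given below: quadratic fuel costs plus a penalty for violating the power balance, with a logistic chaotic map driving the frequency and pulse-rate quantities that are fixed constants in the basic bat algorithm. All coefficients, bounds and algorithm settings are illustrative assumptions, not the paper's test systems.

    import numpy as np

    rng = np.random.default_rng(5)
    a = np.array([0.01, 0.012, 0.008])      # made-up quadratic cost terms
    b = np.array([2.0, 1.8, 2.2])           # made-up linear cost terms
    pmin, pmax, demand = 10.0, 100.0, 180.0

    def cost(P):
        # Fuel cost plus a heavy penalty on power-balance violation.
        return (a * P**2 + b * P).sum() + 1e3 * abs(P.sum() - demand)

    def chaotic_bat(n_bats=30, iters=500):
        P = rng.uniform(pmin, pmax, (n_bats, 3))   # bat positions (unit outputs)
        V = np.zeros_like(P)                       # bat velocities
        best = P[np.argmin([cost(p) for p in P])].copy()
        ch = 0.7                                   # chaotic state in (0, 1)
        for _ in range(iters):
            ch = 4.0 * ch * (1.0 - ch)             # logistic chaotic map
            freq = ch * rng.random(n_bats)         # chaos-driven frequencies
            V += (P - best) * freq[:, None]
            P = np.clip(P + V, pmin, pmax)
            for i in range(n_bats):
                if rng.random() > ch:              # chaos-driven pulse rate
                    P[i] = np.clip(best + 0.01 * rng.normal(size=3), pmin, pmax)
                if cost(P[i]) < cost(best):
                    best = P[i].copy()
        return best

    sol = chaotic_bat()
    print(sol, cost(sol))   # dispatch that approximately meets the demand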

  18. End-to-end tests using alanine dosimetry in scanned proton beams

    Science.gov (United States)

    Carlino, A.; Gouldstone, C.; Kragl, G.; Traneus, E.; Marrale, M.; Vatnitsky, S.; Stock, M.; Palmans, H.

    2018-03-01

    This paper describes end-to-end test procedures as the last fundamental step of medical commissioning before starting clinical operation of the MedAustron synchrotron-based pencil beam scanning (PBS) therapy facility with protons. One in-house homogeneous phantom and two anthropomorphic heterogeneous (head and pelvis) phantoms were used for end-to-end tests at MedAustron. The phantoms were equipped with alanine detectors, radiochromic films and ionization chambers. The correction for the ‘quenching’ effect of alanine pellets was implemented in the Monte Carlo platform of the evaluation version of RayStation TPS. During the end-to-end tests, the phantoms were transferred through the workflow like real patients to simulate the entire clinical workflow: immobilization, imaging, treatment planning and dose delivery. Different clinical scenarios of increasing complexity were simulated: delivery of a single beam, two oblique beams without and with range shifter. In addition to the dose comparison in the plastic phantoms the dose obtained from alanine pellet readings was compared with the dose determined with the Farmer ionization chamber in water. A consistent systematic deviation of about 2% was found between alanine dosimetry and the ionization chamber dosimetry in water and plastic materials. Acceptable agreement of planned and delivered doses was observed together with consistent and reproducible results of the end-to-end testing performed with different dosimetric techniques (alanine detectors, ionization chambers and EBT3 radiochromic films). The results confirmed the adequate implementation and integration of the new PBS technology at MedAustron. This work demonstrates that alanine pellets are suitable detectors for end-to-end tests in proton beam therapy and the developed procedures with customized anthropomorphic phantoms can be used to support implementation of PBS technology in clinical practice.

  19. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, many deterministic algorithms, such as Euler's algorithm, Kraitchik's, and variants of Pollard's algorithms, have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard's rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate for factorizing smaller RSA moduli, its factorization speed is much slower than that of Pollard's rho algorithm.
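
    For reference, the deterministic baseline is easy to state concretely. A minimal Python implementation of Pollard's rho with Floyd cycle detection follows; the demonstration modulus is tiny, whereas real RSA moduli are far larger.

    import math, random

    def pollards_rho(n):
        if n % 2 == 0:
            return 2
        while True:
            c = random.randrange(1, n)
            f = lambda x: (x * x + c) % n   # pseudo-random iteration map
            x = y = 2
            d = 1
            while d == 1:
                x = f(x)                    # tortoise: one step
                y = f(f(y))                 # hare: two steps
                d = math.gcd(abs(x - y), n)
            if d != n:                      # retry with a new c on failure
                return d

    n = 8051                                # = 83 * 97, a toy RSA-style modulus
    p = pollards_rho(n)
    print(p, n // p)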

  20. Optimal Pid Controller Design Using Adaptive Vurpso Algorithm

    Science.gov (United States)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. Then, an optimal design of a Proportional-Integral-Derivative (PID) controller is obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and the local exploration abilities in the proposed algorithm. This operation helps the system reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm over the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster, in less computation time, to a global optimum. The proposed AVURPSO can be used in diverse areas of optimization problems such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.

  1. Short circuit: Disaggregation of adrenocorticotropic hormone and cortisol levels in HIV-positive, methamphetamine-using men who have sex with men.

    Science.gov (United States)

    Carrico, Adam W; Rodriguez, Violeta J; Jones, Deborah L; Kumar, Mahendra

    2018-01-01

    This study examined if methamphetamine use alone (METH + HIV-) and methamphetamine use in combination with HIV (METH + HIV+) were associated with hypothalamic-pituitary-adrenal (HPA) axis dysregulation as well as insulin resistance relative to a nonmethamphetamine-using, HIV-negative comparison group (METH-HIV-). Using an intact groups design, serum levels of HPA axis hormones in 46 METH + HIV- and 127 METH + HIV+ men who have sex with men (MSM) were compared to 136 METH-HIV- men. There were no group differences in prevailing adrenocorticotropic hormone (ACTH) or cortisol levels, but the association between ACTH and cortisol was moderated by METH + HIV+ group (β = -0.19, p < .05). Compared to METH-HIV- men, METH + HIV+ MSM displayed 10% higher log10 cortisol levels per standard deviation lower ACTH. Both groups of methamphetamine-using MSM had lower insulin resistance and greater syndemic burden (i.e., sleep disturbance, severe depression, childhood trauma, and polysubstance use disorder) compared to METH-HIV- men. However, the disaggregated functional relationship between ACTH and cortisol in METH + HIV+ MSM was independent of these factors. Further research is needed to characterize the bio-behavioral pathways that explain dysregulated HPA axis functioning in HIV-positive, methamphetamine-using MSM. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Modeling Stochastic Energy and Water Consumption to Manage Residential Water Uses

    Science.gov (United States)

    Abdallah, A. M.; Rosenberg, D. E.; Water; Energy Conservation

    2011-12-01

    Water-energy linkages have received growing attention from water and energy utilities as they recognize that collaborative efforts can implement more effective conservation and efficiency improvement programs at lower cost and with less effort. To date, limited energy-water household data has allowed only deterministic analysis for average, representative households and required coarse assumptions - such as treating the water heater (the primary energy use in a home apart from heating and cooling) as a single end use. Here, we use recently available disaggregated hot and cold water household end-use data to estimate water and energy consumption for toilet, shower, faucet, dishwasher, laundry machine, leaks, and other household uses, and the savings from appliance retrofits. The disaggregated hot water and bulk water end-use data were previously collected by the USEPA for 96 single-family households in Seattle, WA, Oakland, CA, and Tampa, FL, between 2000 and 2003, for two weeks before and four weeks after each household was retrofitted with water-efficient appliances. Using the disaggregated data, we developed a stochastic model that represents the factors that influence water use for each appliance: behavioral (use frequency and duration), demographic (household size), and technological (use volume or flow rate). We also include stochastic factors that govern the energy to heat hot water: hot water fraction (percentage of hot water volume to total water volume used in a certain end-use event), heater water intake and dispense temperatures, and the energy source for the heater (gas, electric, etc.). From the empirical household end-use data, we derive stochastic probability distributions for each water and energy factor, where each distribution represents the range and likelihood of values that the factor may take. The uncertainty of the stochastic water and energy factors is propagated using Monte Carlo simulations to calculate the composite probability distribution for water
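
    The propagation step can be illustrated with a minimal Python Monte Carlo sketch for a single end use (showers): sample behavioral, demographic and technological factors from assumed distributions and compute daily water use and water-heating energy. All distribution parameters below are illustrative, not the USEPA study's fitted values.

    import numpy as np

    rng = np.random.default_rng(11)
    N = 10_000                                   # Monte Carlo replicates

    events = rng.poisson(1.8, N)                 # showers per household per day
    duration = rng.lognormal(2.0, 0.4, N)        # minutes per shower
    flow = rng.normal(7.5, 1.5, N).clip(4, 12)   # liters per minute
    hot_frac = rng.beta(6, 3, N)                 # hot-water fraction of volume
    dT = rng.normal(35, 4, N)                    # heater temperature rise (C)

    volume_l = events * duration * flow          # daily shower water use (L)
    # 1 L of water ~ 1 kg; 4.186 kJ/(kg*K); divide by 3600 to convert kJ to kWh.
    energy_kwh = volume_l * hot_frac * 4.186 * dT / 3600.0

    print(f"median water {np.median(volume_l):.0f} L/day, "
          f"median energy {np.median(energy_kwh):.1f} kWh/day")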

  3. Automatic boiling water reactor loading pattern design using ant colony optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Wang, C.-D. [Department of Engineering and System Science, National Tsing Hua University, 101, Section 2 Kuang Fu Road, Hsinchu 30013, Taiwan (China); Nuclear Engineering Division, Institute of Nuclear Energy Research, No. 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan (China)], E-mail: jdwang@iner.gov.tw; Lin Chaung [Department of Engineering and System Science, National Tsing Hua University, 101, Section 2 Kuang Fu Road, Hsinchu 30013, Taiwan (China)

    2009-08-15

    An automatic boiling water reactor (BWR) loading pattern (LP) design methodology was developed using the rank-based ant system (RAS), which is a variant of the ant colony optimization (ACO) algorithm. To reduce design complexity, only the fuel assemblies (FAs) of one-eighth of the core positions were determined using the RAS algorithm, and the corresponding FAs were then loaded into the other parts of the core. Heuristic information was adopted to exclude the selection of inappropriate FAs, which reduces the search space and, thus, the computation time. Once an LP was determined, the Haling cycle length, beginning-of-cycle (BOC) shutdown margin (SDM), and Haling end-of-cycle (EOC) maximum fraction of limit for critical power ratio (MFLCPR) were calculated using the SIMULATE-3 code and used to evaluate the LP for updating the pheromone of the RAS. The developed design methodology was demonstrated using FAs of a reference cycle of the BWR6 nuclear power plant. The results show that the designed LP can be obtained within reasonable computation time and has a longer cycle length than that of the original design.

  4. Analysis of aggregation and disaggregation effects for grid-based hydrological models and the development of improved precipitation disaggregation procedures for GCMs

    Directory of Open Access Journals (Sweden)

    H. S. Wheater

    1999-01-01

    Full Text Available Appropriate representation of hydrological processes within atmospheric General Circulation Models (GCMs) is important with respect to internal model dynamics (e.g. surface feedback effects on atmospheric fluxes, continental runoff production) and to simulation of terrestrial impacts of climate change. However, at the scale of a GCM grid-square, several methodological problems arise. Spatial disaggregation of grid-square average climatological parameters is required in particular to produce appropriate point intensities from average precipitation. Conversely, aggregation of land surface heterogeneity is necessary for grid-scale or catchment scale application. The performance of grid-based hydrological models is evaluated for two large (10⁴ km²) UK catchments. Simple schemes, using sub-grid averages of individual land use at 40 km scale and with no calibration, perform well at the annual time-scale and, with the addition of a (calibrated) routing component, at the daily and monthly time-scale. Decoupling of hillslope and channel routing does not necessarily improve performance or identifiability. Scale dependence is investigated through application of distribution functions for rainfall and soil moisture at 100 km scale. The results depend on climate, but show interdependence of the representation of sub-grid rainfall and soil moisture distribution. Rainfall distribution is analysed directly using radar rainfall data from the UK and the Arkansas Red River, USA. Among other properties, the scale dependence of spatial coverage upon radar pixel resolution and GCM grid-scale, as well as the serial correlation of coverages, are investigated. This leads to a revised methodology for GCM application, as a simple extension of current procedures. A new location-based approach using an image processing technique is then presented, to allow for the preservation of the spatial memory of the process.

  5. An Alternative Approach to the Operation of Multinational Reservoir Systems: Application to the Amistad & Falcon System (Lower Rio Grande/Río Bravo)

    Science.gov (United States)

    Serrat-Capdevila, A.; Valdes, J. B.

    2005-12-01

    An optimization approach for the operation of international multi-reservoir systems is presented. The approach uses Stochastic Dynamic Programming (SDP) algorithms, both steady-state and real-time, to develop two models. In the first model, the reservoirs and flows of the system are aggregated to yield an equivalent reservoir, and the resulting operating policies are disaggregated using a non-linear optimization procedure for each reservoir and for each nation's water balance. In the second model, a multi-reservoir approach is applied, disaggregating the releases for each country's water share in each reservoir. The non-linear disaggregation algorithm uses SDP-derived operating policies as boundary conditions for a local time-step optimization. Finally, the performance of the different approaches and methods is compared. These models are applied to the Amistad-Falcon International Reservoir System as part of a binational dynamic modeling effort to develop a decision support tool for better management of the water resources in the Lower Rio Grande Basin, which is currently enduring a severe drought.
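
    As a rough illustration of the steady-state SDP step behind the aggregated-reservoir model, the sketch below runs a value iteration over discretized storage, release and inflow grids. The storage grid, inflow probabilities and the benefit function are placeholder assumptions, not the authors' formulation; releases are assumed to include 0 so a feasible action always exists.

```python
import numpy as np

def sdp_policy(storages, releases, inflows, p_inflow, benefit,
               n_iter=200, beta=0.99):
    """Steady-state stochastic DP for one aggregated reservoir:
    V(s) = max_r  sum_q p(q) [ benefit(s, r) + beta * V(s') ],
    with s' = clip(s - r + q) mapped to the nearest storage grid point."""
    V = np.zeros(len(storages))
    policy = np.zeros(len(storages), dtype=int)
    for _ in range(n_iter):
        V_new = np.empty_like(V)
        for i, s in enumerate(storages):
            best, best_r = -np.inf, 0
            for j, r in enumerate(releases):
                if r > s:                      # cannot release more than stored
                    continue
                value = 0.0
                for q, p in zip(inflows, p_inflow):
                    s_next = min(max(s - r + q, storages[0]), storages[-1])
                    k = np.abs(storages - s_next).argmin()  # nearest grid state
                    value += p * (benefit(s, r) + beta * V[k])
                if value > best:
                    best, best_r = value, j
            V_new[i], policy[i] = best, best_r
        V = V_new
    return policy   # index of the optimal release for each storage state
```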

  6. Use of Genetic Algorithms to solve Inverse Problems in Relativistic Hydrodynamics

    Science.gov (United States)

    Guzmán, F. S.; González, J. A.

    2018-04-01

    We present the use of Genetic Algorithms (GAs) as a strategy to solve inverse problems associated with models of relativistic hydrodynamics. The signal we consider to emulate an observation is the density of a relativistic gas, measured at a point where a shock is traveling. This shock is generated numerically out of a Riemann problem with mildly relativistic conditions. The inverse problem we propose is the prediction of the initial conditions of density, velocity and pressure of the Riemann problem that gave origin to that signal. For this we use the density, velocity and pressure of the gas at both sides of the discontinuity as the six genes of an organism, initially with random values within a tolerance. We then prepare an initial population of N of these organisms and evolve them using methods based on GAs. The organism with the best fitness of each generation is compared to the signal, and the process ends when the set of initial conditions of the organisms of a later generation fits the signal within a tolerance.
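
    A minimal sketch of the genetic search described above, assuming the six genes are the left/right density, velocity and pressure of the Riemann problem and that a forward hydrodynamic solver is available as a black box (stubbed here as forward_model); the selection, crossover and mutation choices are illustrative, not the authors' exact operators.

```python
import random

N_GENES = 6   # (rho_L, v_L, p_L, rho_R, v_R, p_R) of the Riemann problem

def fitness(genes, signal, forward_model):
    """Negative misfit between the observed density trace and the trace
    produced by evolving the candidate initial data; larger is better."""
    predicted = forward_model(genes)            # stub for the hydro code
    return -sum((a - b) ** 2 for a, b in zip(predicted, signal))

def evolve(signal, forward_model, bounds, pop_size=50, generations=100):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: fitness(g, signal, forward_model),
                        reverse=True)
        parents = scored[:pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_GENES)  # one-point crossover
            child = a[:cut] + b[cut:]
            k = random.randrange(N_GENES)       # Gaussian point mutation
            lo, hi = bounds[k]
            child[k] = min(max(child[k] + random.gauss(0, 0.05 * (hi - lo)), lo), hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, signal, forward_model))
```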

  7. Using trend templates in a neonatal seizure algorithm improves detection of short seizures in a foetal ovine model.

    Science.gov (United States)

    Zwanenburg, Alex; Andriessen, Peter; Jellema, Reint K; Niemarkt, Hendrik J; Wolfs, Tim G A M; Kramer, Boris W; Delhaas, Tammo

    2015-03-01

    Seizures below one minute in duration are difficult to assess correctly using seizure detection algorithms. We aimed to improve neonatal detection algorithm performance for short seizures through the use of trend templates for seizure onset and end. Bipolar EEG was recorded within a transiently asphyxiated ovine model at 0.7 gestational age, a common experimental model for studying brain development in humans of 30-34 weeks of gestation. Transient asphyxia led to electrographic seizures within 6-8 h. A total of 3159 seizures, 2386 of them shorter than one minute, were annotated in 1976 hour-long EEG recordings from 17 foetal lambs. To capture EEG characteristics, five features, sensitive to seizures, were calculated and used to derive trend information. Feature values and trend information were used as input for support vector machine classification and subsequently post-processed. Performance metrics, calculated after post-processing, were compared between analyses with and without employing trend information. Detector performance was assessed after five-fold cross-validation conducted ten times with random splits. The use of trend templates for seizure onset and end in a neonatal seizure detection algorithm significantly improves the correct detection of short seizures using two-channel EEG recordings, from 54.3% (52.6-56.1) to 59.5% (58.5-59.9) at an FDR of 2.0 (median (range); p < 0.05), improving the detection of short seizures by EEG monitoring at the NICU.

  8. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to directions in supervised mode. The data set images are collected under a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment tracks a desired path composed of straight and curved lines, while the obstacle avoidance experiment aims to avoid obstacles indoors. Finally, we get a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The result confirms the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
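
    The two augmentations named in the abstract are easy to reproduce. A minimal NumPy sketch, assuming 2-D grayscale images scaled to [0, 1]; the noise levels are illustrative, not the paper's values.

```python
import numpy as np

def augment(image, sigma=0.05, sp_fraction=0.02, rng=None):
    """Return two augmented copies of a 2-D [0, 1]-scaled image: one with
    additive Gaussian noise, one with salt-and-pepper noise, as used in the
    paper to curb overfitting (parameter values assumed here)."""
    rng = rng or np.random.default_rng()
    gaussian = np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)
    salt_pepper = image.copy()
    mask = rng.random(image.shape) < sp_fraction      # pixels to corrupt
    salt_pepper[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return gaussian, salt_pepper
```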

  9. Reconstruction of lower end of radius using vascularized upper end of fibula

    Directory of Open Access Journals (Sweden)

    Koul Ashok

    2007-01-01

    Full Text Available Background: Giant cell tumor is a fairly common locally invasive tumor in young adults. The lower end of the radius is the second commonest site for this tumor. The most common treatment for this tumor is curettage with or without bone grafting, but it carries a significant rate of recurrence. Excision is the treatment of choice, especially for cases in which the cortex has been breached. After excision of the distal end of the radius, different procedures have been described to reconstruct the defect of the distal radius. These include partial arthrodesis and hemiarthroplasty using the upper end of the fibula. The upper end of the fibula has a morphological resemblance to the lower end of the radius and has been used to replace the latter. Traditionally it was used as a 'free' (non-vascularized) graft. More recently the upper end of the fibula has been transferred as a vascularized transfer for the same purpose. Though vascularized transfer should be expected to be more physiological, its superiority over the technically simpler non-vascularized transfer has not been conclusively proven. Materials and Methods: Two patients are presented who had giant cell tumor of the distal radius. They underwent wide local excision and reconstruction with free vascularized upper end of the fibula. Result: Follow-up periods were two and a half years and 12 months, respectively. Both patients have returned to routine work. One patient has an excellent functional result and the other a good result. Conclusion: Vascularized upper end of fibula transfer is a reliable method of reconstruction for loss of the distal end of the radius that restores local anatomy and physiology.

  10. Improved event positioning in a gamma ray detector using an iterative position-weighted centre-of-gravity algorithm.

    Science.gov (United States)

    Liu, Chen-Yi; Goertzen, Andrew L

    2013-07-21

    An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
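
    A minimal sketch of an iterative Gaussian-weighted centre-of-gravity positioner of the kind described, assuming a 16-channel (4 x 4) readout with known channel centre coordinates; the weighting width sigma is the tunable parameter the study optimizes per scintillator configuration, and the values here are illustrative.

```python
import numpy as np

def iterative_weighted_cog(signals, xy, sigma=1.6, n_iter=20, tol=1e-3):
    """Position a scintillation event from an array of SiPM channel signals.
    signals: (16,) channel amplitudes; xy: (16, 2) channel centres (mm).
    Each pass re-weights channels by a Gaussian centred on the current
    estimate, suppressing channels far from the event; this sharpens the
    flood histogram relative to the plain centre of gravity."""
    pos = (signals[:, None] * xy).sum(axis=0) / signals.sum()  # plain CoG seed
    for _ in range(n_iter):
        d2 = ((xy - pos) ** 2).sum(axis=1)
        w = signals * np.exp(-d2 / (2.0 * sigma ** 2))
        new_pos = (w[:, None] * xy).sum(axis=0) / w.sum()
        if np.linalg.norm(new_pos - pos) < tol:       # converged
            break
        pos = new_pos
    return pos
```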

  11. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan; Haji Ali, Abdul Lateef; Nobile, Fabio; von Schwerin, Erik; Tempone, Raul

    2014-01-01

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied.

  12. Analysis of a DSM program using an end use model; End use model wo mochiita DSM program no bunseki

    Energy Technology Data Exchange (ETDEWEB)

    Asano, H.; Takahashi, M.; Okada, K. [Central Research Institute of Electric Power Industry, Tokyo (Japan)

    1997-01-30

    An end use model of the kind used in the United States, which is advanced in demand-side management (DSM), was applied to explore possibilities for designing and evaluating Japan's future DSM measures. The end use model estimates energy demand from such factors as device characteristics, meteorological data, energy prices, user characteristics, market characteristics and DSM measures. The model calculates energy demand by end use, essentially by multiplying assumptions on unit device consumption, device ownership rate, and number of users. A representative end use model that handles load shapes is the hourly electric load model (HELM), which estimates an annual load curve and predicts the maximum system load. The present study estimated the demand of residential air conditioners on the day of maximum summer load in a reference year, the load on the maximum-load day of a projected year, and the weather sensitivity of loads. 5 refs., 5 figs.

  13. Synthesis of Greedy Algorithms Using Dominance Relations

    Science.gov (United States)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
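
    For concreteness, the activity-selection example mentioned above admits a simple dominance argument: among the remaining compatible activities, the one finishing earliest dominates the rest, since any solution extending a dominated choice can be rewritten to extend the dominating one. The sketch below illustrates that greedy rule; it is an illustration of the idea, not the paper's synthesis framework.

```python
def activity_selection(intervals):
    """Greedy activity selection justified by a dominance relation: among all
    activities compatible with the schedule so far, the one with the earliest
    finish time dominates, so it is always safe to take it."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:            # compatible with what we have
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(activity_selection([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10)]))
```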

  14. Assessing gendered roles in water decision-making in semi-arid regions through sex-disaggregated water data with UNESCO-WWAP gender toolkit

    Science.gov (United States)

    Miletto, Michela; Greco, Francesca; Belfiore, Elena

    2017-04-01

    Global climate change is expected to exacerbate current and future stresses on water resources from population growth and land use, and to increase the frequency and severity of droughts and floods. Women are more vulnerable to the effects of climate change than men, not only because they constitute the majority of the world's poor but also because they are more dependent for their livelihood on natural resources that are threatened by climate change. In addition, social, economic and political barriers often limit their coping capacity. Women play a key role in the provision, management and safeguarding of water; nonetheless, gender inequality in water management frameworks persists around the globe. Sound data are essential to inform decisions and support effective policies. Disaggregating water data by sex is crucial to analyse gendered roles in the water realm and inform gender-sensitive water policies in light of the global commitments to gender equality of Agenda 2030. In view of this scenario, WWAP has created an innovative toolkit for sex-disaggregated water data collection, the result of participatory work by more than 35 experts in the WWAP Working Group on Sex-Disaggregated Indicators (http://www.unesco.org/new/en/natural-sciences/environment/water/wwap/water-and-gender/un-wwap-working-group-on-gender-disaggregated-indicators/#c1430774). The WWAP toolkit contains four tools: the methodology (Seager J., WWAP UNESCO, 2015), a set of key indicators, the guideline (Pangare V., WWAP UNESCO, 2015) and a questionnaire for field survey. The WWAP key gender-sensitive indicators address water resources management, aspects of water quality and agricultural uses, and water resources governance and management, and investigate unaccounted labour according to gender and age. Managing water resources is key for climate adaptation. Women are particularly sensitive to water quality and the health of water-dependent ecosystems, often a source of food and job opportunities

  15. A Rear-End Collision Avoidance Scheme for Intelligent Transportation System

    Directory of Open Access Journals (Sweden)

    Chen Chen

    2016-01-01

    Full Text Available In this paper, a rear-end collision control model is proposed using the fuzzy logic control scheme for autonomous or cruising vehicles in Intelligent Transportation Systems (ITSs). Through detailed analysis of car-following cases, our controller is built on a set of reasonable control rules. In addition, a genetic algorithm is introduced to refine the initial fuzzy rules in light of the characteristics of rear-end collisions, reducing computational complexity while maintaining accuracy. Numerical results indicate that our Genetic-algorithm-optimized Fuzzy Logic Controller (GFLC) outperforms the traditional fuzzy logic controller in terms of better safety guarantees and higher traffic efficiency.

  16. Automated Identification of Initial Storm Electrification and End-of-Storm Electrification Using Electric Field Mill Sensors

    Science.gov (United States)

    Maier, Launa M.; Huddleston, Lisa L.

    2017-01-01

    Kennedy Space Center (KSC) operations are located in a region which experiences one of the highest lightning densities in the United States. As a result, on average, KSC loses almost 30 minutes of operational availability each day for lightning-sensitive activities. KSC is investigating using existing instrumentation and automated algorithms to improve the timeliness and accuracy of lightning warnings. Additionally, the automation routines will issue warnings on a grid to minimize the under-warnings associated with not being located in the center of the warning area and the over-warnings associated with encompassing too large an area. This study discusses utilization of electric field mill data to provide improved warning times. Specifically, this paper demonstrates the improved performance of an enveloping algorithm applied to the electric field mill data, compared with the electric field zero crossing, for identifying initial storm electrification. End-of-Storm-Oscillation (EOSO) identification algorithms are also analyzed to identify any performance improvement compared with waiting 30 minutes after the last lightning flash.

  17. Optimal path planning for a mobile robot using cuckoo search algorithm

    Science.gov (United States)

    Mohanty, Prases K.; Parhi, Dayal R.

    2016-03-01

    The shortest/optimal path planning is essential for efficient operation of autonomous vehicles. In this article, a new nature-inspired meta-heuristic algorithm is applied to mobile robot path planning in an unknown or partially known environment populated by a variety of static obstacles. This meta-heuristic algorithm is based on the Lévy flight behaviour and brood parasitic behaviour of cuckoos. A new objective function is formulated between the robot, the target and the obstacles, which satisfies the conditions of obstacle avoidance and target-seeking behaviour for robots present in the terrain. Depending upon the objective function value of each nest (cuckoo) in the swarm, the robot avoids obstacles and proceeds towards the target. A smooth optimal trajectory is framed with this algorithm as the robot reaches its goal. Some simulation and experimental results are presented at the end of the paper to show the effectiveness of the proposed navigational controller.
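
    A minimal sketch of cuckoo search with Lévy flights, assuming the path-planning objective has been wrapped into a scalar cost function over waypoint coordinates; the Lévy steps use Mantegna's algorithm, and all parameter values and the search bounds are illustrative, not the paper's settings.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Lévy-distributed step via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(cost, dim, n_nests=25, pa=0.25, alpha=0.01, iters=500):
    """Minimize `cost` (e.g. a weighted obstacle-repulsion / target-attraction
    objective for a waypoint) over R^dim."""
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=cost)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            trial = [x + alpha * levy_step() * (x - b)
                     for x, b in zip(nest, best)]        # Lévy flight about best
            if cost(trial) < cost(nest):
                nests[i] = trial
        for i in range(n_nests):                         # abandon poor nests
            if random.random() < pa:
                nests[i] = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=cost)
    return best
```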

  18. Sleep/wake scheduling scheme for minimizing end-to-end delay in multi-hop wireless sensor networks

    Directory of Open Access Journals (Sweden)

    Madani Sajjad

    2011-01-01

    Full Text Available We present a sleep/wake schedule protocol for minimizing end-to-end delay in event-driven multi-hop wireless sensor networks. In contrast to generic sleep/wake scheduling schemes, our proposed algorithm performs scheduling that depends on traffic loads. Nodes adapt their sleep/wake schedule based on traffic loads in response to three important factors: (a) the distance of the node from the sink node, (b) the importance of the node's location from a connectivity perspective, and (c) whether the node is in the proximity of an event. Using these heuristics, the proposed scheme reduces end-to-end delay and maximizes throughput by minimizing congestion at nodes with heavy traffic loads. Simulations are carried out to evaluate the performance of the proposed protocol by comparing it with the S-MAC and Anycast protocols. Simulation results demonstrate that the proposed protocol significantly reduces end-to-end delay and improves other QoS parameters, such as average energy per packet, average delay, packet loss ratio, throughput, and coverage lifetime.

  19. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    Science.gov (United States)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The challenge in an optimization problem with many local optima, known as a multimodal optimization problem, is to find the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the Artificial Bee Colony (ABC) algorithm. The performance of the ABC algorithm is better than or similar to those of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it suffers from premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares well with other local optimization methods. Building on the advantages of both, this paper proposes a hybrid of the artificial bee colony algorithm and the BFGS algorithm to solve multimodal optimization problems. In the first step, the ABC algorithm is run to find a point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method can overcome the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
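
    The two-step hybrid is straightforward to prototype. A sketch assuming SciPy is available, with a simplified ABC (the onlooker phase is omitted for brevity) feeding its best food source to a BFGS polish; the test function (Himmelblau) and all parameters are illustrative, not the paper's configuration.

```python
import numpy as np
from scipy.optimize import minimize

def abc_bfgs(f, bounds, n_food=20, limit=30, cycles=200, rng=None):
    """Hybrid of a simplified Artificial Bee Colony global search and a BFGS
    local refinement: ABC supplies the starting point, BFGS polishes it."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    foods = rng.uniform(lo, hi, size=(n_food, len(lo)))
    trials = np.zeros(n_food)
    for _ in range(cycles):
        for i in range(n_food):                      # employed-bee phase
            k = rng.integers(n_food)
            j = rng.integers(len(lo))
            cand = foods[i].copy()
            cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            cand = np.clip(cand, lo, hi)
            if f(cand) < f(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        worn = trials > limit                        # scout-bee phase
        foods[worn] = rng.uniform(lo, hi, size=(int(worn.sum()), len(lo)))
        trials[worn] = 0
    x0 = min(foods, key=f)                           # best food source found
    return minimize(f, x0, method="BFGS")            # local BFGS polish

result = abc_bfgs(lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2,
                  bounds=[(-5, 5), (-5, 5)])
```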

  20. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ³ n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  1. Improved genetic algorithms using inverse-elitism; Gyakuerito senryaku wo mochiita kairyo identeki algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kawanishi, H.; Hagiwara, M. [Keio University, Tokyo (Japan)

    1998-05-01

    Improved Genetic Algorithms (GAs) have been proposed in this paper. We have directed our attention to 'selection' and 'crossover' in GAs, and novel strategies for both are used in the proposed method. Various selection strategies have been used in conventional GAs, such as elitism, tournament, ranking, roulette wheel, and the expected value model. These are not always effective, since they refer only to the fitness of each chromosome. We have developed the following techniques to improve conventional GAs: 'inverse-elitism' as a selection strategy, and a variable crossover range as a crossover strategy. In inverse-elitism, an inverse-elite whose gene values are reversed from those in the corresponding elite is produced. This strategy greatly contributes to the diversification of chromosomes. As for the variable crossover range, we combine the following crossover techniques: in one, the crossover range is gradually narrowed from wide to narrow to carry out global search in the beginning and local search at the end; in the other, the crossover range is varied from narrow to wide. We confirmed the validity and superior performance of the proposed method by computer simulations. 18 refs., 9 figs., 3 tabs.
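
    A minimal sketch of the inverse-elitism idea, assuming real-valued genes in known bounds and interpreting 'reversed' as reflecting each gene within its bounds; that interpretation is an assumption about the operator, not the authors' exact definition.

```python
def inverse_elite(elite, lo=0.0, hi=1.0):
    """Inverse-elitism (assumed form): build a chromosome whose gene values
    are reversed from the elite's by reflecting within the bounds,
    g -> lo + hi - g, placing the new individual far from the elite
    to diversify the population."""
    return [lo + hi - g for g in elite]

def select_next_generation(population, fitness, lo=0.0, hi=1.0):
    """Keep the elite, inject its inverse, fill the rest by rank order."""
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[0]
    return [elite, inverse_elite(elite, lo, hi)] + ranked[1:len(population) - 1]
```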

  2. Calorimetry end-point predictions

    International Nuclear Information System (INIS)

    Fox, M.A.

    1981-01-01

    This paper describes a portion of the work presently in progress at Rocky Flats in the field of calorimetry. In particular, calorimetry end-point predictions are outlined. The problems associated with end-point predictions and the progress made in overcoming these obstacles are discussed. The two major problems, noise and an accurate description of the heat function, are dealt with to obtain the most accurate results. Data are taken from an actual calorimeter and processed by means of three different noise reduction techniques. The processed data are then utilized by one to four algorithms, depending on the accuracy desired, to determine the end-point

  3. Time-domain analysis of planar microstrip devices using a generalized Yee-algorithm based on unstructured grids

    Science.gov (United States)

    Gedney, Stephen D.; Lansing, Faiza

    1993-01-01

    The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm that it is based on unstructured and irregular grids. The robustness of the generalized Yee-algorithm is that structures containing curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high performance computers in a highly efficient manner.
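
    The generalized algorithm operates on unstructured dual grids, but the underlying leapfrog time-marching of Faraday's and Ampère's laws is the same as in the classical structured Yee scheme. The following 1-D illustration of that time-marching is a textbook sketch in normalized units, not the unstructured-grid method itself.

```python
import numpy as np

def yee_1d(n_cells=200, n_steps=500):
    """Leapfrog time-marching of Maxwell's curl equations on a 1-D staggered
    (Yee) grid: E and H live on interleaved points and are updated alternately
    from Faraday's and Ampere's laws (normalized units)."""
    Ez = np.zeros(n_cells)
    Hy = np.zeros(n_cells - 1)
    c = 0.5                              # Courant number dt/dx (<= 1 for stability)
    for step in range(n_steps):
        Hy += c * np.diff(Ez)            # Faraday's law, 1-D curl of E
        Ez[1:-1] += c * np.diff(Hy)      # Ampere's law, 1-D curl of H
        Ez[n_cells // 4] += np.exp(-((step - 30) / 10.0) ** 2)  # soft source
    return Ez
```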

  4. Availability and End-to-end Reliability in Low Duty Cycle Multihop Wireless Sensor Networks.

    Science.gov (United States)

    Suhonen, Jukka; Hämäläinen, Timo D; Hännikäinen, Marko

    2009-01-01

    A wireless sensor network (WSN) is an ad-hoc technology that may even consist of thousands of nodes, which necessitates autonomic, self-organizing and multihop operations. A typical WSN node is battery powered, which makes the network lifetime the primary concern. The highest energy efficiency is achieved with low duty cycle operation; however, this alone is not enough. WSNs are deployed for different uses, each requiring acceptable Quality of Service (QoS). Due to the unique characteristics of WSNs, such as dynamic wireless multihop routing and resource constraints, the legacy QoS metrics are not feasible as such. We give a new definition to measure and implement QoS in low duty cycle WSNs, namely availability and reliability. Then, we analyze the effect of duty cycling on reaching the availability and reliability. The results are obtained by simulations with ZigBee and proprietary TUTWSN protocols. Based on the results, we also propose a data forwarding algorithm suitable for resource constrained WSNs that guarantees end-to-end reliability while adding a small overhead that is relative to the packet error rate (PER). The forwarding algorithm guarantees reliability up to 30% PER.

  5. Availability and End-to-end Reliability in Low Duty Cycle MultihopWireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Timo D. Hämäläinen

    2009-03-01

    Full Text Available A wireless sensor network (WSN) is an ad-hoc technology that may even consist of thousands of nodes, which necessitates autonomic, self-organizing and multihop operations. A typical WSN node is battery powered, which makes the network lifetime the primary concern. The highest energy efficiency is achieved with low duty cycle operation; however, this alone is not enough. WSNs are deployed for different uses, each requiring acceptable Quality of Service (QoS). Due to the unique characteristics of WSNs, such as dynamic wireless multihop routing and resource constraints, the legacy QoS metrics are not feasible as such. We give a new definition to measure and implement QoS in low duty cycle WSNs, namely availability and reliability. Then, we analyze the effect of duty cycling on reaching the availability and reliability. The results are obtained by simulations with ZigBee and proprietary TUTWSN protocols. Based on the results, we also propose a data forwarding algorithm suitable for resource constrained WSNs that guarantees end-to-end reliability while adding a small overhead that is relative to the packet error rate (PER). The forwarding algorithm guarantees reliability up to 30% PER.

  6. METHODS OF ASSESSING THE DEGREE OF DESTRUCTION OF RUBBER PRODUCTS USING COMPUTER VISION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    A. A. Khvostov

    2015-01-01

    Full Text Available For the technical inspection of rubber products, video-scope methods for analyzing the degree of destruction and aging of rubber in an aggressive environment are essential. The main factor determining the degree of destruction of a rubber product is the degree of crack coverage, which can be described by the total crack area, crack perimeter, geometric shape and other parameters. Creating a methodology for assessing the degree of destruction of rubber products raises the problem of developing a machine vision algorithm for estimating a sample's crack coverage and characterizing its fractures. To develop the image processing algorithm, experimental studies were performed on the artificial aging of several product samples made from different rubbers. In the course of the experiments, several series of images of the vulcanizates were obtained in real time. First, the image array is light-stabilized using a Gaussian filter. Thereafter, a binarization operation is applied to each image. The Canny algorithm is used to highlight the contours of the sample's surface damage. The detected contours are converted into arrays of pixels. However, a single crack may be split across several contours, so an algorithm was developed to combine contours using a minimum-distance criterion. Finally, the morphological features of each contour (area, perimeter, length, width, angle of inclination, Minkowski dimension) are calculated. Plots of the destruction parameters obtained by the method for the rubber product samples are shown. The developed method makes it possible to automate the assessment of the degree of aging of rubber products in telemetry systems and to study the dynamics of the polymer aging process
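
    A rough OpenCV sketch of the pipeline described above (Gaussian smoothing, binarization, Canny edge detection, minimum-distance contour merging, then per-contour features). The thresholds and the merge gap are illustrative, and the greedy merge is one plausible reading of the paper's merging criterion, not its exact algorithm.

```python
import cv2
import numpy as np

def crack_contours(gray, min_gap=10.0):
    """Detect crack contours in an 8-bit grayscale image and merge contours
    whose closest points are nearer than `min_gap` pixels (a single crack
    often breaks into several contours)."""
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(smooth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    merged = [c.reshape(-1, 2).astype(float) for c in contours]
    changed = True
    while changed:                       # greedy merge by minimum distance
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                d = np.min(np.linalg.norm(
                    merged[i][:, None] - merged[j][None], axis=2))
                if d < min_gap:
                    merged[i] = np.vstack([merged[i], merged[j]])
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    # Basic morphological features per merged crack contour.
    return [{"area": cv2.contourArea(c.astype(np.float32)),
             "perimeter": cv2.arcLength(c.astype(np.float32), False)}
            for c in merged]
```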

  7. Optimizing models for production and inventory control using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Dragan S. Pamučar

    2012-01-01

    Full Text Available In order to make the Economic Production Quantity (EPQ) model more applicable to real-world production and inventory control problems, in this paper we expand the model by allowing some imperfect items, such as reworks, among the different product types being produced. In addition, there may be more than one product and supplier, along with warehouse space and budget limitations. We show that the model of the problem is a constrained non-linear integer program and propose a genetic algorithm to solve it. Moreover, a design of experiments is employed to calibrate the parameters of the algorithm for different problem sizes. In the end, a numerical example is presented to demonstrate the application of the proposed methodology.

  8. Improved Collaborative Filtering Algorithm using Topic Model

    Directory of Open Access Journals (Sweden)

    Liu Na

    2016-01-01

    Full Text Available Collaborative filtering algorithms make use of interaction ratings between users and items to generate recommendations. Similarity among users or items is mostly calculated based on ratings, without considering explicit properties of the users or items involved. In this paper, we propose a collaborative filtering algorithm using a topic model. We describe the user-item matrix as a document-word matrix, so that users are represented as random mixtures over items and each item is characterized by a distribution over users. The experiments showed that the proposed algorithm achieved better performance than other state-of-the-art algorithms on the MovieLens data sets.
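
    A compact sketch of the idea using scikit-learn's LDA, treating the user-item matrix as a document-word matrix exactly as the abstract describes; the topic count and the recommendation scoring are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

def topic_cf(ratings, n_topics=20, top_n=10):
    """Treat the user-item rating matrix as document-word counts: users are
    documents, items are words. LDA yields each user as a mixture over topics
    and each topic as a distribution over items; their product scores unseen
    items for recommendation."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    user_topics = lda.fit_transform(ratings)          # users x topics
    topic_items = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
    scores = user_topics @ topic_items                # users x items affinity
    scores[ratings > 0] = -np.inf                     # mask already-rated items
    return np.argsort(-scores, axis=1)[:, :top_n]     # top-N items per user
```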

  9. SCALCE: boosting sequence compression algorithms using locally consistent encoding.

    Science.gov (United States)

    Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk

    2012-12-01

    The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip

  10. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    Science.gov (United States)

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  11. Evaluation of dose calculation algorithms using the treatment planning system XiO with tissue heterogeneity correction turned on

    International Nuclear Information System (INIS)

    Fairbanks, Leandro R.; Barbi, Gustavo L.; Silva, Wiliam T.; Reis, Eduardo G.F.; Borges, Leandro F.; Bertucci, Edenyse C.; Maciel, Marina F.; Amaral, Leonardo L.

    2011-01-01

    Since the cross-section for various radiation interactions depends upon the tissue material, the presence of heterogeneities affects the final dose delivered. This paper aims to analyze how different treatment planning algorithms (Fast Fourier Transform, Convolution, Superposition, Fast Superposition and Clarkson) behave when heterogeneity corrections are used. To that end, a Farmer-type ionization chamber was positioned reproducibly (during CT as well as irradiation) inside several phantoms made of aluminum, bone, cork and solid water slabs. The percent difference between the dose measured and that calculated by the various algorithms was less than 5%. The Convolution method shows better results for high density materials (difference ∼1%), whereas the Superposition algorithm is more accurate for low densities (around 1.1%). (author)

  12. Reversible end-to-end assembly of gold nanorods using a disulfide-modified polypeptide

    International Nuclear Information System (INIS)

    Walker, David A; Gupta, Vinay K

    2008-01-01

    Directing the self-assembly of colloidal particles into nanostructures is of great interest in nanotechnology. Here, reversible end-to-end assembly of gold nanorods (GNR) is induced by pH-dependent changes in the secondary conformation of a disulfide-modified poly(L-glutamic acid) (SSPLGA). The disulfide anchoring group drives chemisorption of the polyacid onto the end of the gold nanorods in an ethanolic solution. A layer of poly(vinyl pyrrolidone) is adsorbed on the positively charged, surfactant-stabilized GNR to screen the surfactant bilayer charge and provide stability for dispersion of the GNR in ethanol. For comparison, irreversible end-to-end assembly using a bidentate ligand, namely 1,6-hexanedithiol, is also performed. Characterization of the modified GNR and its end-to-end linking behavior using SSPLGA and hexanedithiol is performed using dynamic light scattering (DLS), UV-vis absorption spectroscopy and transmission electron microscopy (TEM). Experimental results show that, in a colloidal solution of GNR-SSPLGA at a pH∼3.5, where the PLGA is in an α-helical conformation, the modified GNR self-assemble into one-dimensional nanostructures. The linking behavior can be reversed by increasing the pH (>8.5) to drive the conformation of the polypeptide to a random coil and this reversal with pH occurs rapidly within minutes. Cycling the pH multiple times between low and high pH values can be used to drive the formation of the nanostructures of the GNR and disperse them in solution.

  13. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy

    Directory of Open Access Journals (Sweden)

    Dong Zhou

    2016-01-01

    Full Text Available Lateral penumbra of the multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf position dependent and largely attributed to the leaf end shape. In our study, an analytical method for leaf end induced lateral penumbra modelling is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in the parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is carried out to approximate the Pareto frontier. Results show that for the circular arc leaf end the objective function is convex, and convergence to the optimal solution is guaranteed using a gradient based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves the minimal standard deviation, while with a B-spline the minimum penumbra mean is obtained. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into the leaf end shape design of multileaf collimators.

  14. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy

    Science.gov (United States)

    Zhou, Dong; Zhang, Hui; Ye, Peiqing

    2016-01-01

    Lateral penumbra of the multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf position dependent and largely attributed to the leaf end shape. In our study, an analytical method for leaf end induced lateral penumbra modelling is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in the parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is carried out to approximate the Pareto frontier. Results show that for the circular arc leaf end the objective function is convex, and convergence to the optimal solution is guaranteed using a gradient based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves the minimal standard deviation, while with a B-spline the minimum penumbra mean is obtained. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into the leaf end shape design of multileaf collimators. PMID:27110274

  15. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy.

    Science.gov (United States)

    Zhou, Dong; Zhang, Hui; Ye, Peiqing

    2016-01-01

    Lateral penumbra of the multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf position dependent and largely attributed to the leaf end shape. In our study, an analytical method for leaf end induced lateral penumbra modelling is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in the parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is carried out to approximate the Pareto frontier. Results show that for the circular arc leaf end the objective function is convex, and convergence to the optimal solution is guaranteed using a gradient based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves the minimal standard deviation, while with a B-spline the minimum penumbra mean is obtained. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into the leaf end shape design of multileaf collimators.

  16. A Novel Magnetic Actuation Scheme to Disaggregate Nanoparticles and Enhance Passage across the Blood–Brain Barrier

    Directory of Open Access Journals (Sweden)

    Ali Kafash Hoshiar

    2017-12-01

    Full Text Available The blood–brain barrier (BBB) hinders drug delivery to the brain. Despite various efforts to develop preprogrammed actuation schemes for magnetic drug delivery, the unmodeled aggregation phenomenon limits drug delivery performance. This paper proposes a novel scheme with an aggregation model for a feed-forward magnetic actuation design. A simulation platform for aggregated particle delivery is developed and an actuation scheme is proposed to deliver aggregated magnetic nanoparticles (MNPs) using a discontinuous asymmetrical magnetic actuation. The experimental results with a Y-shaped channel indicated the success of the proposed scheme in steering and disaggregation. The delivery performance of the developed scheme was examined using a realistic, three-dimensional (3D) vessel simulation. Furthermore, the proposed scheme enhanced the transport and uptake of MNPs across the BBB in mice. The scheme presented here facilitates the passage of particles across the BBB to the brain using an electromagnetic actuation scheme.

  17. Optimization of Algorithms Using Extensions of Dynamic Programming

    KAUST Repository

    AbouEisha, Hassan M.

    2017-04-09

    We study and answer questions related to the complexity of various important problems such as multi-frontal solvers for the hp-adaptive finite element method, sorting, and majority. We advocate the use of dynamic programming as a viable tool to study optimal algorithms for these problems. The main approach used to attack these problems is to model classes of algorithms that may solve a problem using a discrete model of computation, then define cost functions on this discrete structure that reflect different complexity measures of the represented algorithms. As a last step, dynamic programming algorithms are designed and used to optimize those models (algorithms) and to obtain exact results on the complexity of the studied problems. The first part of the thesis presents a novel model of computation (element partition tree) that represents a class of algorithms for multi-frontal solvers along with cost functions reflecting various complexity measures such as time and space. It then introduces dynamic programming algorithms for multi-stage and bi-criteria optimization of element partition trees. In addition, it presents results based on optimal element partition trees for famous benchmark meshes such as meshes with point and edge singularities. New improved heuristics for those benchmark meshes were obtained based on insights from the optimal results found by our algorithms. The second part of the thesis starts by introducing a general problem to which different problems can be reduced, and shows how to use a decision table to model such a problem. We describe how decision trees and decision tests for this table correspond to adaptive and non-adaptive algorithms for the original problem. We present exact bounds on the average time complexity of adaptive algorithms for the eight-element sorting problem. Then bounds on adaptive and non-adaptive algorithms for a variant of the majority problem are introduced. Adaptive algorithms are modeled as decision trees whose depth

  18. Portfolio selection using genetic algorithms | Yahaya | International ...

    African Journals Online (AJOL)

    In this paper, one of the nature-inspired evolutionary algorithms – a Genetic Algorithms (GA) was used in solving the portfolio selection problem (PSP). Based on a real dataset from a popular stock market, the performance of the algorithm in relation to those obtained from one of the popular quadratic programming (QP) ...

  19. Efficient Record Linkage Algorithms Using Complete Linkage Clustering.

    Science.gov (United States)

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Datasets from different agencies often contain records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. Many of the available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient as well as reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times.
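
    A small sketch of the core idea: pairwise record distances fed to complete-linkage hierarchical clustering, with the dendrogram cut so that every pair inside a cluster is within a distance threshold (the complete-linkage guarantee). The string similarity, threshold, and toy records are illustrative; the paper's algorithms add blocking, deduplication and parallelism on top.

```python
import numpy as np
from difflib import SequenceMatcher
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def link_records(records, max_dist=0.35):
    """Group records that likely refer to the same individual using
    complete-linkage hierarchical clustering on string distances."""
    n = len(records)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            sim = SequenceMatcher(None, records[i], records[j]).ratio()
            dist[i, j] = dist[j, i] = 1.0 - sim
    Z = linkage(squareform(dist), method="complete")
    # Cut so no cluster contains a pair farther apart than max_dist.
    return fcluster(Z, t=max_dist, criterion="distance")

print(link_records(["john a smith 1970", "jon a smith 1970", "mary jones 1985"]))
```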

  20. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    Science.gov (United States)

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has recently been proven a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach, and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step in any UWB radar imaging system, and the artifact removal algorithms considered to date have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm; these modifications are shown to achieve good localization accuracy and fewer false positives. The main contribution, however, is an artifact removal algorithm based on statistical methods, which achieves even better performance at much lower computational complexity.

  1. Foreign labor and regional labor markets: aggregate and disaggregate impact on growth and wages in Danish regions

    DEFF Research Database (Denmark)

    Schmidt, Torben Dall; Jensen, Peter Sandholt

    2013-01-01

    non-negative effects on the job opportunities for Danish workers in regional labor markets, whereas the evidence of a regional wage growth effect is mixed. We also present disaggregated results focusing on regional heterogeneity of business structures, skill levels and backgrounds of foreign labor....... The results are interpreted within a specific Danish labor market context and the associated regional outcomes. This adds to previous findings and emphasizes the importance of labor market institutions for the effect of foreign labor on regional employment growth....

  2. Bouc–Wen hysteresis model identification using Modified Firefly Algorithm

    International Nuclear Information System (INIS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-01-01

    The parameters of the Bouc–Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc–Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, the Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc–Wen model parameters. Finally, the proposed method is used to find the Bouc–Wen model parameters from experimental data, and the obtained model is found to be in good agreement with the measured data. - Highlights: • We describe a new method to find the Bouc–Wen hysteresis model parameters. • We propose a Modified Firefly Algorithm. • We compare our method with existing methods to find that the proposed method performs better. • We use our model to fit experimental results. Good agreement is found

  3. Bouc–Wen hysteresis model identification using Modified Firefly Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Zaman, Mohammad Asif, E-mail: zaman@stanford.edu [Department of Electrical Engineering, Stanford University (United States); Sikder, Urmita [Department of Electrical Engineering and Computer Sciences, University of California, Berkeley (United States)

    2015-12-01

    The parameters of the Bouc–Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc–Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, the Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc–Wen model parameters. Finally, the proposed method is used to find the Bouc–Wen model parameters from experimental data, and the obtained model is found to be in good agreement with the measured data. - Highlights: • We describe a new method to find the Bouc–Wen hysteresis model parameters. • We propose a Modified Firefly Algorithm. • We compare our method with existing methods to find that the proposed method performs better. • We use our model to fit experimental results. Good agreement is found.
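
    A minimal firefly-algorithm sketch for parameter identification, where f would be the squared error between measured data and the Bouc–Wen model response. The decaying randomization weight stands in for the paper's dynamic process control parameters (an assumption), and all constants are illustrative.

```python
import numpy as np

def firefly_minimize(f, dim, n=25, iters=300, beta0=1.0, gamma=1.0, rng=None):
    """Minimal firefly algorithm: each firefly moves toward every brighter one
    with attractiveness beta0 * exp(-gamma * r^2), plus a random walk whose
    weight alpha decays over iterations."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(-5, 5, size=(n, dim))
    for t in range(iters):
        alpha = 0.5 * (0.97 ** t)                 # decaying randomization weight
        cost = np.array([f(xi) for xi in x])      # brightness from current swarm
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:             # j is brighter (lower cost)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
    return min(x, key=f)

# Usage: f(params) = sum of squared residuals of the Bouc-Wen response
# against measured hysteresis data; dim = number of model parameters.
```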

  4. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  5. Disaggregating Hot Water Use and Predicting Hot Water Waste in Five Test Homes

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, H.; Wade, J.

    2014-04-01

    While it is important to make the equipment (or 'plant') in a residential hot water system more efficient, the hot water distribution system also affects overall system performance and energy use. Energy wasted in heating water that is not used is estimated to be on the order of 10 to 30 percent of total domestic hot water (DHW) energy use. This field monitoring project installed temperature sensors on the distribution piping (on trunks and near fixtures) and programmed a data logger to collect data at 5 second intervals whenever there was a hot water draw. These data were used to assign hot water draws to specific end uses in the home as well as to determine the portion of each hot water draw that was deemed useful (i.e., above a temperature threshold at the fixture). Five houses near Syracuse, NY, were monitored. Overall, the procedures were able to successfully assign about 50% of the water draws to specific end uses, but these assigned draws accounted for about 95% of the total hot water use in each home. The amount of hot water deemed useful ranged from a low of 75% at one house to a high of 91% at another. At three of the houses, new water heaters and distribution improvements were implemented during the monitoring period, and the impact of these improvements on hot water use and delivery efficiency was evaluated.

  6. Disaggregating Hot Water Use and Predicting Hot Water Waste in Five Test Homes

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, Hugh [ARIES Collaborative, New York, NY (United States); Wade, Jeremy [ARIES Collaborative, New York, NY (United States)

    2014-04-01

    While it is important to make the equipment (or "plant") in a residential hot water system more efficient, the hot water distribution system also affects overall system performance and energy use. Energy wasted in heating water that is not used is estimated to be on the order of 10%-30% of total domestic hot water (DHW) energy use. This field monitoring project installed temperature sensors on the distribution piping (on trunks and near fixtures) in five houses near Syracuse, NY, and programmed a data logger to collect data at 5 second intervals whenever there was a hot water draw. These data were used to assign hot water draws to specific end uses in the home as well as to determine the portion of each hot water draw that was deemed useful (i.e., above a temperature threshold at the fixture). Overall, the procedures were able to successfully assign about 50% of the water draws to specific end uses, but these assigned draws accounted for about 95% of the total hot water use in each home. The amount of hot water deemed useful ranged from a low of 75% at one house to a high of 91% at another. At three of the houses, new water heaters and distribution improvements were implemented during the monitoring period, and the impact of these improvements on hot water use and delivery efficiency was evaluated.

  7. How sex- and age-disaggregated data and gender and generational analyses can improve humanitarian response.

    Science.gov (United States)

    Mazurana, Dyan; Benelli, Prisca; Walker, Peter

    2013-07-01

    Humanitarian aid remains largely driven by anecdote rather than by evidence. The contemporary humanitarian system has significant weaknesses with regard to data collection, analysis, and action at all stages of response to crises involving armed conflict or natural disaster. This paper argues that humanitarian actors can best determine and respond to vulnerabilities and needs if they use sex- and age-disaggregated data (SADD) and gender and generational analyses to help shape their assessments of crisis-affected populations. Through case studies, the paper shows how gaps in information on sex and age limit the effectiveness of humanitarian response in all phases of a crisis. The case studies serve to show how proper collection, use, and analysis of SADD enable operational agencies to deliver assistance more effectively and efficiently. The evidence suggests that the employment of SADD and gender and generational analyses assists in saving lives and livelihoods in a crisis. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  8. Integration properties of disaggregated solar, geothermal and biomass energy consumption in the U.S

    International Nuclear Information System (INIS)

    Apergis, Nicholas; Tsoumas, Chris

    2011-01-01

    This paper investigates the integration properties of disaggregated solar, geothermal and biomass energy consumption in the U.S. The analysis is performed for the 1989-2009 period and covers all sectors which use these types of energy, i.e., transportation, residential, industrial, electric power and commercial. The results suggest that there are differences in the order of integration depending on both the type of energy and the sector involved. Moreover, the inclusion of structural breaks traced from the regulatory changes for these energy types seems to affect the order of integration for each series. - Highlights: → Increasing importance of renewable energy sources. → Integration properties of solar, geothermal and biomass energy consumption in the U.S. → The results show differences in the order of integration depending on the type of energy. → Structural breaks traced for these energy types affect the order of integration. → The order of integration is less than 1, so energy conservation policies are transitory.

  9. Household energy consumption in the UK: A highly geographically and socio-economically disaggregated model

    International Nuclear Information System (INIS)

    Druckman, A.; Jackson, T.

    2008-01-01

    Devising policies for a low carbon society requires a careful understanding of energy consumption in different types of households. In this paper, we explore patterns of UK household energy use and associated carbon emissions at national level and also at high levels of socio-economic and geographical disaggregation. In particular, we examine specific neighbourhoods with contrasting levels of deprivation, and typical 'types' (segments) of UK households based on socio-economic characteristics. Results support the hypothesis that different segments have widely differing patterns of consumption. We show that household energy use and associated carbon emissions are both strongly, but not solely, related to income levels. Other factors, such as the type of dwelling, tenure, household composition and rural/urban location are also extremely important. The methodology described in this paper can be used in various ways to inform policy-making. For example, results can help in targeting energy efficiency measures; trends from time series results will form a useful basis for scenario building; and the methodology may be used to model expected outcomes of possible policy options, such as personal carbon trading or a progressive tax regime on household energy consumption

  10. Methodology to Assess No Touch Audit Software Using Field Data

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Jie [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States); Langner, M. Rois [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-10-01

    The research presented in this report builds upon these previous efforts and proposes a set of tests to assess no touch audit tools using real utility bill and on-site data. The proposed assessment methodology explicitly investigates the behaviors of the monthly energy end uses with respect to outdoor temperature, i.e., the building energy signature, to help understand the tool's disaggregation accuracy. The project team collaborated with Field Diagnostic Services, Inc. (FDSI) to identify appropriate test sites for the evaluation.

  11. A Motion Estimation Algorithm Using DTCWT and ARPS

    Directory of Open Access Journals (Sweden)

    Unan Y. Oktiawati

    2013-09-01

    Full Text Available In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT) and the Adaptive Rood Pattern Search (ARPS) block search is presented. The proposed algorithm first transforms each video sequence with DTCWT. Frame n of the video sequence is used as a reference input and frame n+2 is used to find the motion vector. Next, the ARPS block search algorithm is carried out, followed by an inverse DTCWT. Motion compensation is then carried out on each inverse-transformed frame n and motion vector. The results show that PSNR can be improved for mobile devices without degrading visual quality. The proposed algorithm also uses less memory than the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices. Another contribution is the visual quality scoring system as used in section 6.

  12. Conjugate gradient algorithms using multiple recursions

    Energy Technology Data Exchange (ETDEWEB)

    Barth, T.; Manteuffel, T.

    1996-12-31

    Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.
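
    For readers unfamiliar with the single short recursion discussed here, the following is a minimal sketch of the standard conjugate gradient iteration for a symmetric positive definite system (the baseline case, not the unitary or shifted-unitary extension the record describes):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Standard CG for symmetric positive definite A, using the familiar
    single (coupled) short recursion for the direction vectors."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x                     # initial residual
    p = r.copy()                      # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # the single short recursion
        rs = rs_new
    return x

# Example on a small SPD system; result matches np.linalg.solve(A, b)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))
```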

  13. Value of time determination for the city of Alexandria based on a disaggregate binary mode choice model

    Directory of Open Access Journals (Sweden)

    Mounir Mahmoud Moghazy Abdel-Aal

    2017-12-01

    Full Text Available In the travel demand modeling field, mode choice is the most important decision that affects the resulting road congestion. The behavioral nature of disaggregate models and their advantages over aggregate models have led to their extensive use. This paper proposes a framework to determine the value of time (VoT) for the city of Alexandria by calibrating a disaggregate, linear-in-parameters, utility-based binary logit mode choice model of the city. The mode attributes (travel time and travel cost) along with traveler attributes (car ownership and income) were selected as the utility attributes of the basic model formulation, which included 5 models. Three additional alternative utility formulations based on transformations of the mode attributes, including relative travel cost (cost divided by income), log(travel time), and the combination of the two transformations together, were introduced. The parameter estimation procedure was based on the likelihood maximization technique and was performed in EXCEL. Out of 20 models estimated, only 2 models are considered successful in terms of the correct signs of the parameter estimates and the magnitude of their significance (t-statistic values). The determination of the VoT also serves in model validation. The best two models estimated the value of time at LE 11.30/hr and LE 14.50/hr, with relative errors of +3.7% and +33.0%, respectively, against the hourly salary of LE 10.9/hr. The proposed two models prove to be sensitive to trip time and income levels as factors affecting the choice mechanism. A sensitivity analysis was performed and showed that the model with the higher relative error is marginally more robust. Keywords: Transportation modeling, Binary mode choice, Parameter estimation, Value of time, Likelihood maximization, Sensitivity analysis
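
    The estimation step lends itself to a compact sketch. Below is a minimal illustration of binary logit likelihood maximization and VoT extraction in Python with synthetic data; the variable ordering, coefficients, and data are entirely hypothetical and do not reproduce the Alexandria models:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: columns are (time diff, cost diff, car ownership,
# income); y = 1 if the traveler chose mode 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
true_beta = np.array([-0.8, -0.5, 0.6, 0.3])   # assumed, for simulation only
y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

def neg_log_likelihood(beta, X, y):
    """Negative log-likelihood of a linear-in-parameters binary logit."""
    u = X @ beta                       # utility difference between modes
    p = 1.0 / (1.0 + np.exp(-u))       # probability of choosing mode 1
    eps = 1e-12                        # guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

res = minimize(neg_log_likelihood, x0=np.zeros(4), args=(X, y), method="BFGS")
beta_hat = res.x
# VoT is the marginal rate of substitution of time for cost
vot = beta_hat[0] / beta_hat[1]
print("estimates:", beta_hat.round(3), " VoT (time/cost):", round(vot, 2))
```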

  14. Predicting Students’ Performance using Modified ID3 Algorithm

    OpenAIRE

    Ramanathan L; Saksham Dhanda; Suresh Kumar D

    2013-01-01

    The ability to predict the performance of students is very crucial in our present education system. We can use data mining concepts for this purpose. The ID3 algorithm is one of the best-known algorithms used today to generate decision trees. But this algorithm has a shortcoming: it is biased toward attributes with many values. So, this research aims to overcome this shortcoming of the algorithm by using gain ratio (instead of information gain) as well as by giving weights to each attribute at every...
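
    A minimal sketch of the gain ratio computation the record proposes as a replacement for information gain; the toy attributes and labels are our own illustration:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(attribute, labels):
    """Information gain divided by split information, which penalizes
    attributes with many distinct values (the classic ID3 bias)."""
    n = len(labels)
    cond_entropy, split_info = 0.0, 0.0
    for v in set(attribute):
        subset = [l for a, l in zip(attribute, labels) if a == v]
        w = len(subset) / n
        cond_entropy += w * entropy(subset)
        split_info -= w * np.log2(w)
    info_gain = entropy(labels) - cond_entropy
    return info_gain / split_info if split_info > 0 else 0.0

# Hypothetical example: attribute 'grade' versus a pass/fail label
grades = ['A', 'A', 'B', 'B', 'C', 'C']
passed = ['y', 'y', 'y', 'n', 'n', 'n']
print(round(gain_ratio(grades, passed), 3))
```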

  15. Optimization of Multipurpose Reservoir Operation with Application Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Elahe Fallah Mehdipour

    2012-12-01

    Full Text Available Optimal operation of multipurpose reservoirs is one of the complex and sometimes nonlinear problems in the field of multi-objective optimization. Evolutionary algorithms are optimization tools that search the decision space using a simulation of natural biological evolution and present a set of points as the optimum solutions of a problem. In this research, the application of multi-objective particle swarm optimization (MOPSO) to the optimal operation of the Bazoft reservoir with different objectives, including generating hydropower energy, supplying downstream demands (drinking, industry and agriculture), recreation, and flood control, has been considered. In this regard, solution sets of the MOPSO algorithm for pairwise combinations of objectives were first compared with compromise programming (CP) using different weighting and power coefficients; the MOPSO algorithm was more capable than CP of finding solutions with an appropriate distribution in all combinations of objectives, and these solutions dominated the CP solutions. Then, the end points of the solution set from the MOPSO algorithm were compared with nonlinear programming (NLP) results. Results showed that the MOPSO algorithm, with a 0.3 percent difference from the NLP results, is more capable of presenting optimum solutions at the end points of the solution set.

  16. Behavioural modelling using the MOESP algorithm, dynamic neural networks and the Bartels-Stewart algorithm

    NARCIS (Netherlands)

    Schilders, W.H.A.; Meijer, P.B.L.; Ciggaar, E.

    2008-01-01

    In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform

  17. Disaggregation of remotely sensed soil moisture under all sky condition using machine learning approach in Northeast Asia

    Science.gov (United States)

    Kim, S.; Kim, H.; Choi, M.; Kim, K.

    2016-12-01

    Estimating the spatiotemporal variation of soil moisture is crucial to hydrological applications such as flood, drought, and near real-time climate forecasting. Recent advances in space-based passive microwave measurements allow frequent monitoring of surface soil moisture at a global scale, and downscaling approaches have been applied to improve the spatial resolution of passive microwave products for local-scale applications. However, most downscaling methods using optical and thermal datasets are valid only in cloud-free conditions; thus, a renewed downscaling method for all-sky conditions is necessary to establish the spatiotemporal continuity of datasets at fine resolution. In the present study, the Support Vector Machine (SVM) technique was utilized to downscale satellite-based soil moisture retrievals. The 0.1- and 0.25-degree resolution daily Land Parameter Retrieval Model (LPRM) L3 soil moisture datasets from the Advanced Microwave Scanning Radiometer 2 (AMSR2) were disaggregated over Northeast Asia in 2015. Optically derived estimates of surface temperature (LST), normalized difference vegetation index (NDVI), and cloud products were obtained from the MODerate Resolution Imaging Spectroradiometer (MODIS) for the purpose of downscaling soil moisture to a finer resolution under all-sky conditions. Furthermore, a comparison between in situ and downscaled soil moisture products was conducted to quantitatively assess accuracy. Results showed that the downscaled soil moisture under all-sky conditions not only preserves the quality of AMSR2 LPRM soil moisture at 1 km resolution, but also attains higher spatial data coverage. From this research we expect that time-continuous monitoring of soil moisture at a fine scale, regardless of weather conditions, will become available.
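
    A minimal sketch of the downscaling idea using scikit-learn's SVR: train on coarse-scale predictor/retrieval pairs, then apply the learned relationship to fine-scale predictors. The predictor set, units, and synthetic data here are illustrative assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical training set: coarse-pixel predictors aggregated to the
# microwave grid (LST, NDVI, elevation) paired with the coarse soil
# moisture retrieval as the regression target.
rng = np.random.default_rng(42)
n_coarse = 500
X_coarse = np.column_stack([
    rng.uniform(280, 320, n_coarse),   # LST (K)
    rng.uniform(0.1, 0.8, n_coarse),   # NDVI
    rng.uniform(0, 1500, n_coarse),    # elevation (m)
])
sm_coarse = (0.45 - 0.001 * (X_coarse[:, 0] - 280)
             + 0.2 * X_coarse[:, 1] + rng.normal(0, 0.02, n_coarse))

# Train the SVM on the coarse-scale relationship ...
model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(X_coarse, sm_coarse)

# ... then apply it to fine-scale predictors to disaggregate the retrieval.
X_fine = np.column_stack([
    rng.uniform(280, 320, 10),
    rng.uniform(0.1, 0.8, 10),
    rng.uniform(0, 1500, 10),
])
print(model.predict(X_fine).round(3))
```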

  18. Adaptive sensor fusion using genetic algorithms

    International Nuclear Information System (INIS)

    Fitzgerald, D.S.; Adams, D.G.

    1994-01-01

    Past attempts at sensor fusion have used some form of Boolean logic to combine the sensor information. As an alternative, an adaptive ''fuzzy'' sensor fusion technique is described in this paper. This technique exploits the robust capabilities of fuzzy logic in the decision process as well as the optimization features of the genetic algorithm. This paper presents a brief background on fuzzy logic and genetic algorithms and how they are used in an online implementation of adaptive sensor fusion

  19. Android Malware Classification Using K-Means Clustering Algorithm

    Science.gov (United States)

    Hamid, Isredza Rahmi A.; Syafiqah Khalid, Nur; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Chai Wen, Chuah

    2017-08-01

    Malware is designed to gain access to or damage a computer system without the user's knowledge. Attackers also exploit malware to commit crime or fraud. This paper proposes an Android malware classification approach based on the K-Means clustering algorithm. We evaluate the proposed model in terms of accuracy using machine learning algorithms. Two datasets, VirusTotal and Malgenome, were selected to demonstrate the practice of the K-Means clustering algorithm. We classify the Android malware into three clusters: ransomware, scareware, and goodware. Nine features were considered for each type of dataset: Lock Detected, Text Detected, Text Score, Encryption Detected, Threat, Porn, Law, Copyright, and Moneypak. We used IBM SPSS Statistics software for data classification and WEKA tools to evaluate the built clusters. The proposed K-Means clustering algorithm shows promising results with high accuracy when tested using the Random Forest algorithm.
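
    A minimal sketch of the clustering step with scikit-learn, assuming a hypothetical nine-column feature matrix in place of the VirusTotal/Malgenome data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per sample, nine columns matching
# the features named in the abstract (Lock Detected, Text Detected, ...).
rng = np.random.default_rng(7)
features = rng.random((300, 9))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(features)

# Semantic labels (ransomware / scareware / goodware) must be assigned
# afterwards, e.g. by inspecting centroids or by votes of known samples.
for k in range(3):
    print(f"cluster {k}: {np.sum(cluster_ids == k)} samples, "
          f"centroid head: {kmeans.cluster_centers_[k][:3].round(2)}")
```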

  20. A Clustering Approach Using Cooperative Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Wenping Zou

    2010-01-01

    Full Text Available Artificial Bee Colony (ABC) is one of the most recently introduced algorithms, based on the intelligent foraging behavior of a honey bee swarm. This paper presents an extended ABC algorithm, namely the Cooperative Artificial Bee Colony (CABC), which significantly improves the original ABC in solving complex optimization problems. Clustering is a popular data analysis and data mining technique; therefore, the CABC can be used for solving clustering problems. In this work, the CABC algorithm is first used for optimizing six widely used benchmark functions, and the comparative results produced by ABC, Particle Swarm Optimization (PSO), and its cooperative version (CPSO) are studied. Second, the CABC algorithm is used for data clustering on several benchmark data sets. The performance of the CABC algorithm is compared with the PSO, CPSO, and ABC algorithms on clustering problems. The simulation results show that the proposed CABC outperforms the other three algorithms in terms of accuracy, robustness, and convergence speed.

  1. The CMS Tracker Readout Front End Driver

    CERN Document Server

    Foudas, C.; Ballard, D.; Church, I.; Corrin, E.; Coughlan, J.A.; Day, C.P.; Freeman, E.J.; Fulcher, J.; Gannon, W.J.F.; Hall, G.; Halsall, R.N.J.; Iles, G.; Jones, J.; Leaver, J.; Noy, M.; Pearson, M.; Raymond, M.; Reid, I.; Rogers, G.; Salisbury, J.; Taghavi, S.; Tomalin, I.R.; Zorba, O.

    2004-01-01

    The Front End Driver, FED, is a 9U 400mm VME64x card designed for reading out the Compact Muon Solenoid, CMS, silicon tracker signals transmitted by the APV25 analogue pipeline Application Specific Integrated Circuits. The FED receives the signals via 96 optical fibers at a total input rate of 3.4 GB/sec. The signals are digitized and processed by applying algorithms for pedestal and common mode noise subtraction. Algorithms that search for clusters of hits are used to further reduce the input rate. Only the cluster data along with trigger information of the event are transmitted to the CMS data acquisition system using the S-LINK64 protocol at a maximum rate of 400 MB/sec. All data processing algorithms on the FED are executed in large on-board Field Programmable Gate Arrays. Results on the design, performance, testing and quality control of the FED are presented and discussed.

  2. Realization of Deutsch-like algorithm using ensemble computing

    International Nuclear Information System (INIS)

    Wei Daxiu; Luo Jun; Sun Xianping; Zeng Xizhi

    2003-01-01

    The Deutsch-like algorithm [Phys. Rev. A. 63 (2001) 034101] distinguishes between even and odd query functions using fewer function calls than its possible classical counterpart in a two-qubit system. But a similar method cannot be applied to a multi-qubit system. We propose a new approach for solving the Deutsch-like problem using ensemble computing. The proposed algorithm needs an ancillary qubit and can be easily extended to multi-qubit systems with one query. Our ensemble algorithm, beginning with an easily prepared initial state, has three main steps. The classifications of the functions can be obtained directly from the spectra of the ancilla qubit. We also demonstrate the new algorithm in a four-qubit molecular system using nuclear magnetic resonance (NMR). One hydrogen and three carbons are selected as the four qubits, and one of the carbons is the ancilla qubit. We chose two unitary transformations, corresponding to two functions (one odd function and one even function), to validate the ensemble algorithm. The results show that our experiment was successful and that our ensemble algorithm for solving the Deutsch-like problem is viable

  3. A filtering method to generate high quality short reads using illumina paired-end technology.

    Science.gov (United States)

    Eren, A Murat; Vineis, Joseph H; Morrison, Hilary G; Sogin, Mitchell L

    2013-01-01

    Consensus between independent reads improves the accuracy of genome and transcriptome analyses, however lack of consensus between very similar sequences in metagenomic studies can and often does represent natural variation of biological significance. The common use of machine-assigned quality scores on next generation platforms does not necessarily correlate with accuracy. Here, we describe using the overlap of paired-end, short sequence reads to identify error-prone reads in marker gene analyses and their contribution to spurious OTUs following clustering analysis using QIIME. Our approach can also reduce error in shotgun sequencing data generated from libraries with small, tightly constrained insert sizes. The open-source implementation of this algorithm in Python programming language with user instructions can be obtained from https://github.com/meren/illumina-utils.

  4. Using the Perceptron Algorithm to Find Consistent Hypotheses

    OpenAIRE

    Anthony, M.; Shawe-Taylor, J.

    1993-01-01

    The perceptron learning algorithm yields quite naturally an algorithm for finding a linearly separable boolean function consistent with a sample of such a function. Using the idea of a specifying sample, we give a simple proof that this algorithm is not efficient, in general.
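
    A minimal sketch of the construction: run the perceptron rule to completion on a linearly separable boolean sample to obtain a consistent hypothesis (the paper's point is that the number of updates, though finite, is not efficiently bounded in general):

```python
import numpy as np

def perceptron_consistent(X, y, max_epochs=1000):
    """Run the perceptron rule until the hypothesis is consistent with the
    sample; halting is guaranteed only if the sample is linearly separable.
    Returns weights w and bias b with sign(w.x + b) == y on every example."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):           # labels y in {-1, +1}
            if yi * (w @ xi + b) <= 0:     # misclassified (or on boundary)
                w += yi * xi
                b += yi
                mistakes += 1
        if mistakes == 0:
            return w, b                    # consistent hypothesis found
    raise RuntimeError("no consistent hypothesis found in the epoch budget")

# Example: a separable boolean function (AND over {-1,+1} inputs)
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
print(perceptron_consistent(X, y))
```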

  5. End point control of an actinide precipitation reactor

    International Nuclear Information System (INIS)

    Muske, K.R.

    1997-01-01

    The actinide precipitation reactors in the nuclear materials processing facility at Los Alamos National Laboratory are used to remove actinides and other heavy metals from the effluent streams generated during the purification of plutonium. These effluent streams consist of hydrochloric acid solutions, ranging from one to five molar in concentration, in which actinides and other metals are dissolved. The actinides present are plutonium and americium. Typical actinide loadings range from one to five grams per liter. The most prevalent heavy metals are iron, chromium, and nickel, which are due to stainless steel. Removal of these metals from solution is accomplished by hydroxide precipitation during the neutralization of the effluent. An end point control algorithm for the semi-batch actinide precipitation reactors at Los Alamos National Laboratory is described. The algorithm is based on an equilibrium solubility model of the chemical species in solution. This model is used to predict the amount of base hydroxide necessary to reach the end point of the actinide precipitation reaction. The model parameters are updated by on-line pH measurements

  6. Sleep/wake scheduling scheme for minimizing end-to-end delay in multi-hop wireless sensor networks

    OpenAIRE

    Madani Sajjad; Nazir Babar; Hasbullah Halabi

    2011-01-01

    We present a sleep/wake schedule protocol for minimizing end-to-end delay for event-driven multi-hop wireless sensor networks. In contrast to generic sleep/wake scheduling schemes, our proposed algorithm performs scheduling that is dependent on traffic loads. Nodes adapt their sleep/wake schedule based on traffic loads in response to three important factors, (a) the distance of the node from the sink node, (b) the importance of the node's location from connectivity's perspective, and...

  7. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter; Cohen, Albert; Dahmen, Wolfgang; DeVore, Ronald

    2014-01-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335-1353; Mach. Learn. 66 (2007) 209-242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of a parameter of the margin conditions and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  8. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter

    2014-12-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335-1353; Mach. Learn. 66 (2007) 209-242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of a parameter of the margin conditions and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  9. Using a genetic algorithm to solve fluid-flow problems

    International Nuclear Information System (INIS)

    Pryor, R.J.

    1990-01-01

    Genetic algorithms are based on the mechanics of the natural selection and natural genetics processes. These algorithms are finding increasing application to a wide variety of engineering optimization and machine learning problems. In this paper, the authors demonstrate the use of a genetic algorithm to solve fluid flow problems. Specifically, the authors use the algorithm to solve the one-dimensional flow equations for a pipe
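
    A minimal sketch of the idea in Python rather than the authors' implementation: a real-coded genetic algorithm whose fitness is the residual of a toy steady pipe-flow balance. The Darcy-Weisbach form and all constants here are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pipe-flow residual: find velocity v satisfying the pressure-drop
# balance dp = f * (L/D) * rho * v^2 / 2 (assumed constants, SI units).
f, L, D, rho, dp = 0.02, 10.0, 0.1, 1000.0, 4000.0
def residual(v):
    return abs(f * (L / D) * rho * v**2 / 2 - dp)

def genetic_solve(pop_size=50, generations=200, mut_sigma=0.1):
    pop = rng.uniform(0.0, 10.0, pop_size)          # candidate velocities
    for _ in range(generations):
        fitness = -np.array([residual(v) for v in pop])
        # Tournament selection: keep the better of two random parents
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]],
                           pop[idx[:, 0]], pop[idx[:, 1]])
        # Arithmetic crossover between consecutive parents
        partners = np.roll(parents, 1)
        alpha = rng.random(pop_size)
        children = alpha * parents + (1 - alpha) * partners
        # Gaussian mutation keeps diversity in the population
        children += rng.normal(0.0, mut_sigma, pop_size)
        pop = np.clip(children, 0.0, 10.0)
    return pop[np.argmin([residual(v) for v in pop])]

v_best = genetic_solve()   # analytic solution here is v = 2 m/s
print(f"v = {v_best:.3f} m/s, residual = {residual(v_best):.2e}")
```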

  10. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper, a method for testing rate-control algorithms using a video source model is suggested. The proposed method allows algorithm testing to be significantly improved over a large test set.

  11. PM1 steganographic algorithm using ternary Hamming Code

    Directory of Open Access Journals (Sweden)

    Kamil Kaczyński

    2015-12-01

    Full Text Available The PM1 algorithm is a modification of the well-known LSB steganographic algorithm. It has increased resistance to selected steganalytic attacks and increased embedding efficiency. Due to its uniqueness, the PM1 algorithm allows the use of a larger alphabet of symbols, making it possible to further increase steganographic capacity. In this paper, we present a modified PM1 algorithm which utilizes so-called syndrome coding and the ternary Hamming code. The modified algorithm has increased embedding efficiency, which means fewer changes introduced to the carrier and increased capacity. Keywords: steganography, linear codes, PM1, LSB, ternary Hamming code

  12. FINGERPRINT PATTERN RECOGNITION OF RIDGE ENDINGS AND BIFURCATION POINTS USING MINUTIAE EXTRACTION WITH THE CROSSING NUMBER METHOD

    Directory of Open Access Journals (Sweden)

    I Putu Dody Lesmana

    2012-09-01

    Full Text Available Biometrics is a development of basic identification methods that uses natural human characteristics as its basis. One biometric system that is often used is the fingerprint. A fingerprint matching system can be built by extracting minutiae information; the information from minutiae extraction comprises ridge endings and bifurcations. The technique offered in this paper is based on the extraction of minutiae from a fingerprint image using the crossing number (CN) method to find ridge endings and bifurcation points by scanning each ridge point. False minutiae structures may be introduced into the fingerprint image due to hole and spur structures, so it is necessary to test the validity of each minutiae point to eliminate false minutiae. Experiments are first conducted to assess how well the crossing number method is able to extract the minutiae points. The minutiae validation algorithm is then evaluated to see how effective the algorithm is in detecting false minutiae. From the experimental results using the crossing number method, it can be deduced that all ridge points corresponding to ridge endings and bifurcation points were detected successfully. However, there are a few cases where the extracted minutiae do not correspond to true minutiae points due to hole and spur structures. Applying the minutiae validation algorithm cancels out the false ridge endings created by spur structures and the false bifurcations created by hole structures.
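
    A minimal sketch of the crossing number computation on a binarized, thinned ridge image; the neighbourhood ordering and the toy image are our own illustration:

```python
import numpy as np

def minutiae_crossing_number(skeleton):
    """Classify ridge pixels of a thinned binary fingerprint image.
    CN is half the sum of absolute differences between consecutive
    neighbours around a pixel: CN == 1 -> ridge ending, CN == 3 -> bifurcation."""
    endings, bifurcations = [], []
    rows, cols = skeleton.shape
    # the 8 neighbours, visited in circular (clockwise) order
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            if skeleton[i, j] != 1:
                continue
            p = [skeleton[i + di, j + dj] for di, dj in offsets]
            cn = sum(abs(p[k] - p[(k + 1) % 8]) for k in range(8)) // 2
            if cn == 1:
                endings.append((i, j))
            elif cn == 3:
                bifurcations.append((i, j))
    return endings, bifurcations

# Tiny example: a short ridge segment has a ridge ending at each end
img = np.zeros((5, 5), dtype=int)
img[2, 1:4] = 1
print(minutiae_crossing_number(img))
```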

  13. Note on the End Game in Homotopy Zero Curve Tracking

    OpenAIRE

    Sosonkina, Masha; Watson, Layne T.; Stewart, David E.

    1995-01-01

    Homotopy algorithms to solve a nonlinear system of equations f(x)=0 involve tracking the zero curve of a homotopy map p(a,theta,x) from theta=0 until theta=1. When the algorithm nears or crosses the hyperplane theta=1, an "end game" phase is begun to compute the solution x(bar) satisfying p(a,theta,x(bar))=f(x(bar))=0. This note compares several end game strategies, including the one implemented in the normal flow code FIXPNF in the homotopy software package HOMPACK.

  14. PERFORMANCE ANALYSIS OF SET PARTITIONING IN HIERARCHICAL TREES (SPIHT) ALGORITHM FOR A FAMILY OF WAVELETS USED IN COLOR IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    A. Sreenivasa Murthy

    2014-11-01

    Full Text Available With the spurt in the amount of data (image, video, audio, speech, and text) available on the net, there is a huge demand for memory and bandwidth savings. One has to achieve this while maintaining the quality and fidelity of the data at a level acceptable to the end user. The wavelet transform is an important and practical tool for data compression. Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Among all wavelet transform and zero-tree quantization based image compression algorithms, SPIHT has become the benchmark state-of-the-art algorithm because it is simple to implement and yields good results. In this paper we present a comparative study of various wavelet families for image compression with the SPIHT algorithm. We have conducted experiments with Daubechies, Coiflet, Symlet, Bi-orthogonal, Reverse Bi-orthogonal, and Demeyer wavelet types. The resulting image quality is measured objectively, using peak signal-to-noise ratio (PSNR), and subjectively, using perceived image quality (human visual perception, HVP for short). The resulting reduction in image size is quantified by the compression ratio (CR).

  15. Optical flow optimization using parallel genetic algorithm

    Science.gov (United States)

    Zavala-Romero, Olmo; Botella, Guillermo; Meyer-Bäse, Anke; Meyer Base, Uwe

    2011-06-01

    A new approach to optimizing the parameters of a gradient-based optical flow model using a parallel genetic algorithm (GA) is proposed. The main characteristics of the optical flow algorithm are its bio-inspiration and robustness against contrast, static patterns, and noise, besides working consistently with several optical illusions where other algorithms fail. This model depends on many parameters, which determine the number of channels, the orientations required, and the length and shape of the kernel functions used in the convolution stage, among many more. The GA is used to find a set of parameters which improve the accuracy of the optical flow on inputs where ground-truth data is available. This set of parameters helps in understanding which of them are better suited for each type of input, and can be used to estimate the parameters of the optical flow algorithm when used with videos that share similar characteristics. The proposed implementation takes into account the embarrassingly parallel nature of the GA and uses the OpenMP Application Programming Interface (API) to speed up the process of estimating an optimal set of parameters. The information obtained in this work can be used to dynamically reconfigure systems, with potential applications in robotics, medical imaging, and tracking.

  16. Automatable algorithms to identify nonmedical opioid use using electronic data: a systematic review.

    Science.gov (United States)

    Canan, Chelsea; Polinski, Jennifer M; Alexander, G Caleb; Kowal, Mary K; Brennan, Troyen A; Shrank, William H

    2017-11-01

    Improved methods to identify nonmedical opioid use can help direct health care resources to individuals who need them. Automated algorithms that use large databases of electronic health care claims or records for surveillance are a potential means to achieve this goal. In this systematic review, we reviewed the utility, attempts at validation, and application of such algorithms to detect nonmedical opioid use. We searched PubMed and Embase for articles describing automatable algorithms that used electronic health care claims or records to identify patients or prescribers with likely nonmedical opioid use. We assessed algorithm development, validation, and performance characteristics and the settings where they were applied. Study variability precluded a meta-analysis. Of 15 included algorithms, 10 targeted patients, 2 targeted providers, 2 targeted both, and 1 identified medications with high abuse potential. Most patient-focused algorithms (67%) used prescription drug claims and/or medical claims, with diagnosis codes of substance abuse and/or dependence as the reference standard. Eleven algorithms were developed via regression modeling. Four used natural language processing, data mining, audit analysis, or factor analysis. Automated algorithms can facilitate population-level surveillance. However, there is no true gold standard for determining nonmedical opioid use. Users must recognize the implications of identifying false positives and, conversely, false negatives. Few algorithms have been applied in real-world settings. Automated algorithms may facilitate identification of patients and/or providers most likely to need more intensive screening and/or intervention for nonmedical opioid use. Additional implementation research in real-world settings would clarify their utility. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  17. End-to-end System Performance Simulation: A Data-Centric Approach

    Science.gov (United States)

    Guillaume, Arnaud; Laffitte de Petit, Jean-Luc; Auberger, Xavier

    2013-08-01

    In the early times of space industry, the feasibility of Earth observation missions was directly driven by what could be achieved by the satellite. It was clear to everyone that the ground segment would be able to deal with the small amount of data sent by the payload. Over the years, the amounts of data processed by the spacecrafts have been increasing drastically, leading to put more and more constraints on the ground segment performances - and in particular on timeliness. Nowadays, many space systems require high data throughputs and short response times, with information coming from multiple sources and involving complex algorithms. It has become necessary to perform thorough end-to-end analyses of the full system in order to optimise its cost and efficiency, but even sometimes to assess the feasibility of the mission. This paper presents a novel framework developed by Astrium Satellites in order to meet these needs of timeliness evaluation and optimisation. This framework, named ETOS (for “End-to-end Timeliness Optimisation of Space systems”), provides a modelling process with associated tools, models and GUIs. These are integrated thanks to a common data model and suitable adapters, with the aim of building suitable space systems simulators of the full end-to-end chain. A big challenge of such environment is to integrate heterogeneous tools (each one being well-adapted to part of the chain) into a relevant timeliness simulation.

  18. Acoustic change detection algorithm using an FM radio

    Science.gov (United States)

    Goldman, Geoffrey H.; Wolfe, Owen

    2012-06-01

    The U.S. Army is interested in developing low-cost, low-power, non-line-of-sight sensors for monitoring human activity. One modality that is often overlooked is active acoustics using sources of opportunity such as speech or music. Active acoustics can be used to detect human activity by generating acoustic images of an area at different times, then testing for changes among the imagery. A change detection algorithm was developed to detect physical changes in a building, such as a door changing position or a large box being moved, using acoustic sources of opportunity. The algorithm is based on cross-correlating the acoustic signals measured from two microphones. The performance of the algorithm was demonstrated using data generated with a hand-held FM radio as a sound source and two microphones. The algorithm could detect a door being opened in a hallway.
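
    A minimal sketch of the cross-correlation comparison in numpy; the lag window, simulated echo paths, and the change score are our own illustrative choices rather than the paper's processing chain:

```python
import numpy as np

def cross_correlation_signature(mic_a, mic_b, max_lag=256):
    """Normalized cross-correlation between two microphone channels over a
    window of lags; this acts as a crude 'acoustic image' of the room."""
    a = (mic_a - mic_a.mean()) / (mic_a.std() * len(mic_a))
    b = (mic_b - mic_b.mean()) / mic_b.std()
    full = np.correlate(a, b, mode="full")
    mid = len(full) // 2
    return full[mid - max_lag: mid + max_lag + 1]

def change_score(sig_before, sig_after):
    """1 - correlation coefficient between two signatures; larger values
    suggest a physical change in the acoustic environment."""
    return 1.0 - np.corrcoef(sig_before, sig_after)[0, 1]

# Simulated measurements before and after a reflector moves: the echo
# delay between the two microphones shifts from 40 to 55 samples.
rng = np.random.default_rng(3)
src = rng.normal(size=10000)                     # FM-radio-like excitation
s1 = cross_correlation_signature(
    src, np.roll(src, 40) + 0.01 * rng.normal(size=10000))
s2 = cross_correlation_signature(
    src, np.roll(src, 55) + 0.01 * rng.normal(size=10000))
print(f"change score: {change_score(s1, s2):.3f}")
```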

  19. A Clustal Alignment Improver Using Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Thomsen, Rene; Fogel, Gary B.; Krink, Thimo

    2002-01-01

    Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EA) by using MSA solutions obtained from the well-known Clustal V algorithm as a candidate solution seed of the initial EA population. Our results clearly show...

  20. A method of evolving novel feature extraction algorithms for detecting buried objects in FLIR imagery using genetic programming

    Science.gov (United States)

    Paino, A.; Keller, J.; Popescu, M.; Stone, K.

    2014-06-01

    In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared Imagery (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.

  1. SU-E-J-25: End-To-End (E2E) Testing On TomoHDA System Using a Real Pig Head for Intracranial Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Corradini, N; Leick, M; Bonetti, M; Negretti, L [Clinica Luganese, Radiotherapy Center, Lugano (Switzerland)

    2015-06-15

    Purpose: To determine the MVCT imaging uncertainty on the TomoHDA system for intracranial radiosurgery treatments. To determine the end-to-end (E2E) overall accuracy of the TomoHDA system for intracranial radiosurgery. Methods: A pig head was obtained from the butcher, cut coronally through the brain, and preserved in formaldehyde. The base of the head was fixed to a positioning plate allowing precise movement, i.e. translation and rotation, in all 6 axes. A repeatability test was performed on the pig head to determine uncertainty in the image bone registration algorithm. Furthermore, the test studied images with MVCT slice thicknesses of 1 and 3 mm in unison with differing scan lengths. A sensitivity test was performed to determine the registration algorithm’s ability to find the absolute position of known translations/rotations of the pig head. The algorithm’s ability to determine absolute position was compared against that of manual operators, i.e. a radiation therapist and radiation oncologist. Finally, E2E tests for intracranial radiosurgery were performed by measuring the delivered dose distributions within the pig head using Gafchromic films. Results: The repeatability test uncertainty was lowest for the MVCTs of 1-mm slice thickness, which measured less than 0.10 mm and 0.12 deg for all axes. For the sensitivity tests, the bone registration algorithm performed better than human eyes and a maximum difference of 0.3 mm and 0.4 deg was observed for the axes. E2E test results in absolute position difference measured 0.03 ± 0.21 mm in x-axis and 0.28 ± 0.18 mm in y-axis. A maximum difference of 0.32 and 0.66 mm was observed in x and y, respectively. The average peak dose difference between measured and calculated dose was 2.7 cGy or 0.4%. Conclusion: Our tests using a pig head phantom estimate the TomoHDA system to have a submillimeter overall accuracy for intracranial radiosurgery.

  2. SU-E-J-25: End-To-End (E2E) Testing On TomoHDA System Using a Real Pig Head for Intracranial Radiosurgery

    International Nuclear Information System (INIS)

    Corradini, N; Leick, M; Bonetti, M; Negretti, L

    2015-01-01

    Purpose: To determine the MVCT imaging uncertainty on the TomoHDA system for intracranial radiosurgery treatments. To determine the end-to-end (E2E) overall accuracy of the TomoHDA system for intracranial radiosurgery. Methods: A pig head was obtained from the butcher, cut coronally through the brain, and preserved in formaldehyde. The base of the head was fixed to a positioning plate allowing precise movement, i.e. translation and rotation, in all 6 axes. A repeatability test was performed on the pig head to determine uncertainty in the image bone registration algorithm. Furthermore, the test studied images with MVCT slice thicknesses of 1 and 3 mm in unison with differing scan lengths. A sensitivity test was performed to determine the registration algorithm’s ability to find the absolute position of known translations/rotations of the pig head. The algorithm’s ability to determine absolute position was compared against that of manual operators, i.e. a radiation therapist and radiation oncologist. Finally, E2E tests for intracranial radiosurgery were performed by measuring the delivered dose distributions within the pig head using Gafchromic films. Results: The repeatability test uncertainty was lowest for the MVCTs of 1-mm slice thickness, which measured less than 0.10 mm and 0.12 deg for all axes. For the sensitivity tests, the bone registration algorithm performed better than human eyes and a maximum difference of 0.3 mm and 0.4 deg was observed for the axes. E2E test results in absolute position difference measured 0.03 ± 0.21 mm in x-axis and 0.28 ± 0.18 mm in y-axis. A maximum difference of 0.32 and 0.66 mm was observed in x and y, respectively. The average peak dose difference between measured and calculated dose was 2.7 cGy or 0.4%. Conclusion: Our tests using a pig head phantom estimate the TomoHDA system to have a submillimeter overall accuracy for intracranial radiosurgery

  3. A DISAGGREGATED MEASURES APPROACH OF POVERTY STATUS OF FARMING HOUSEHOLDS IN KWARA STATE, NIGERIA

    Directory of Open Access Journals (Sweden)

    Grace Oluwabukunmi Akinsola

    2016-12-01

    Full Text Available In a bid to strengthen the agricultural sector in Nigeria, the Kwara State Government invited thirteen Zimbabwean farmers to participate in agricultural production in Kwara State in 2004. The main objective of this study was therefore to examine the effect of the activities of these foreign farmers on local farmers' poverty status. A questionnaire was administered to the heads of farming households. A total of 240 respondents were used for the study, comprising 120 contact and 120 non-contact heads of farming households. The analytical tools employed included descriptive statistics and the Foster, Greer and Thorbecke method. The results indicated that the non-contact farming households are poorer than the contact farming households. Using the disaggregated poverty profile, poverty is most severe among the age group above 60 years. The intensity of poverty is also higher among the married group than among singles. Based on education level, poverty is most severe among those without any formal education. It is therefore recommended that a minimum of secondary school education be encouraged among the farming households to prevent a higher incidence of poverty in the study area.
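
    A minimal sketch of the Foster-Greer-Thorbecke measures used in such disaggregated poverty profiles; the incomes and the poverty line are hypothetical:

```python
import numpy as np

def fgt_index(incomes, poverty_line, alpha):
    """Foster-Greer-Thorbecke poverty measure P_alpha:
    alpha = 0 -> headcount ratio, 1 -> poverty gap, 2 -> severity."""
    y = np.asarray(incomes, dtype=float)
    poor = y < poverty_line
    gaps = np.where(poor, (poverty_line - y) / poverty_line, 0.0)
    if alpha == 0:
        return poor.mean()        # avoids the 0**0 == 1 pitfall in numpy
    return np.mean(gaps ** alpha)

# Hypothetical incomes for two groups of farming households
contact = np.array([120, 150, 95, 200, 80, 170])
non_contact = np.array([60, 90, 110, 70, 40, 130])
z = 100.0                          # illustrative poverty line
for name, group in [("contact", contact), ("non-contact", non_contact)]:
    p0, p1, p2 = (fgt_index(group, z, a) for a in (0, 1, 2))
    print(f"{name:12s} P0={p0:.2f} P1={p1:.3f} P2={p2:.4f}")
```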

  4. Algorithms for monitoring warfarin use: Results from Delphi Method.

    Science.gov (United States)

    Kano, Eunice Kazue; Borges, Jessica Bassani; Scomparini, Erika Burim; Curi, Ana Paula; Ribeiro, Eliane

    2017-10-01

    Warfarin stands as the most prescribed oral anticoagulant. New oral anticoagulants have been approved recently; however, their use is limited and the reversibility techniques of the anticoagulation effect are little known. Thus, our study's purpose was to develop algorithms for therapeutic monitoring of patients taking warfarin based on the opinion of physicians who prescribe this medicine in their clinical practice. The development of the algorithm was performed in two stages, namely: (i) literature review and (ii) algorithm evaluation by physicians using a Delphi Method. Based on the articles analyzed, two algorithms were developed: "Recommendations for the use of warfarin in anticoagulation therapy" and "Recommendations for the use of warfarin in anticoagulation therapy: dose adjustment and bleeding control." Later, these algorithms were analyzed by 19 medical doctors that responded to the invitation and agreed to participate in the study. Of these, 16 responded to the first round, 11 to the second and eight to the third round. A 70% consensus or higher was reached for most issues and at least 50% for six questions. We were able to develop algorithms to monitor the use of warfarin by physicians using a Delphi Method. The proposed method is inexpensive and involves the participation of specialists, and it has proved adequate for the intended purpose. Further studies are needed to validate these algorithms, enabling them to be used in clinical practice.

  5. Seismic noise attenuation using an online subspace tracking algorithm

    Science.gov (United States)

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method by leaving less residual noise while saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.

  6. A filtering method to generate high quality short reads using illumina paired-end technology.

    Directory of Open Access Journals (Sweden)

    A Murat Eren

    Full Text Available Consensus between independent reads improves the accuracy of genome and transcriptome analyses, however lack of consensus between very similar sequences in metagenomic studies can and often does represent natural variation of biological significance. The common use of machine-assigned quality scores on next generation platforms does not necessarily correlate with accuracy. Here, we describe using the overlap of paired-end, short sequence reads to identify error-prone reads in marker gene analyses and their contribution to spurious OTUs following clustering analysis using QIIME. Our approach can also reduce error in shotgun sequencing data generated from libraries with small, tightly constrained insert sizes. The open-source implementation of this algorithm in Python programming language with user instructions can be obtained from https://github.com/meren/illumina-utils.

  7. A Wavelet Analysis-Based Dynamic Prediction Algorithm to Network Traffic

    Directory of Open Access Journals (Sweden)

    Meng Fan-Bo

    2016-01-01

    Full Text Available Network traffic is a significantly important parameter for network traffic engineering, yet it is highly dynamic in nature. Accordingly, it is difficult to directly predict the traffic volume of end-to-end flows. This paper proposes a new prediction algorithm for network traffic using wavelet analysis. First, network traffic is converted into the time-frequency domain to capture its time-frequency features. Second, network traffic is modeled separately in each frequency component of the time-frequency domain. Finally, we build the prediction model for network traffic, and the corresponding prediction algorithm is presented to attain network traffic prediction. Simulation results indicate that our approach is promising.

  8. Improved Global Ocean Color Using Polymer Algorithm

    Science.gov (United States)

    Steinmetz, Francois; Ramon, Didier; Deschamps, ierre-Yves; Stum, Jacques

    2010-12-01

    A global ocean color product has been developed based on the use of the POLYMER algorithm to correct atmospheric scattering and sun glint and to process the data to a Level 2 ocean color product. Thanks to the use of this algorithm, the coverage and accuracy of the MERIS ocean color product have been significantly improved when compared to the standard product, therefore increasing its usefulness for global ocean monitoring applications like GLOBCOLOUR. We will present the latest developments of the algorithm, its first application to MODIS data and its validation against in-situ data from the MERMAID database. Examples will be shown of global NRT chlorophyll maps produced by CLS with POLYMER for operational applications like fishing or the oil and gas industry, as well as its use by Scripps for a NASA study of the Beaufort and Chukchi seas.

  9. Duality reconstruction algorithm for use in electrical impedance tomography

    International Nuclear Information System (INIS)

    Abdullah, M.Z.; Dickin, F.J.

    1996-01-01

    A duality reconstruction algorithm for solving the inverse problem in electrical impedance tomography (EIT) is described. In this method, an algorithm based on the Geselowitz compensation (GC) theorem is used first to reconstruct an approximate version of the image. It is then fed as first-guess data to the modified Newton-Raphson (MNR) algorithm, which iteratively corrects the image until a final acceptable solution is reached. The implementation of the GC- and MNR-based algorithms using the finite element method will be discussed. Reconstructed images produced by the algorithm will also be presented. Consideration is also given to the most computationally intensive aspects of the algorithm, namely the inversion of the large and sparse matrices. The methods taken to approximately compute the inverse of those matrices will be outlined. (author)

  10. Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture

    Science.gov (United States)

    Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek

    2015-01-01

    This paper evaluates the potential of the embedded Graphics Processing Unit in the Nvidia Tegra K1 for onboard processing. The performance is compared to a general purpose multi-core CPU and a full-fledged GPU accelerator. This study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) Algorithm. The Tegra K1 achieved 51 for the ACCA algorithm and 20 for the dimension reduction algorithm, as compared to the performance of the high-end 8-core server Intel Xeon CPU with 13.5 times higher power consumption.

  11. Faster algorithms for RNA-folding using the Four-Russians method.

    Science.gov (United States)

    Venkatachalam, Balaji; Gusfield, Dan; Frid, Yelena

    2014-03-06

    The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n^3) time using Nussinov's dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time for certain dynamic programming algorithms by a multiplicative factor after a preprocessing step where solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n^3/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. We simplify the algorithm and the analysis by doing the preprocessing once prior to the algorithm computation. We call this the two-vector method. We also show variants where, instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n^2) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n^2/log n). We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas used to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds, and the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20. The source code for the algorithms is available at http://github.com/ijalabv/FourRussiansRNAFolding.
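
    A minimal sketch of the baseline O(n^3) Nussinov recurrence referenced above (without the Four-Russians speedup, and ignoring the minimum hairpin-loop length for simplicity):

```python
def nussinov(seq):
    """Maximum number of non-crossing complementary base pairings in an
    RNA sequence, via the classic O(n^3) dynamic program. Wobble G-U
    pairs are included here by assumption."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):                 # subsequence length - 1
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]              # case: i left unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)   # i pairs with j
            for k in range(i + 1, j):        # case: bifurcation at k
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # 3 nested pairs around the AAA loop
```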

  12. Selection of views to materialize using simulated annealing algorithms

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin

    2002-03-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right set of views to materialize for answering a given set of queries; the goal is the minimization of the combined query evaluation and view maintenance costs. In this paper, we design algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms to solve this problem: first, we explore simulated annealing to optimize the selection of materialized views; then we demonstrate the approach with experiments. A performance study shows that the proposed algorithm gives an optimal solution.
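
    As an illustration of the search strategy the authors employ, here is a generic simulated-annealing skeleton for subset selection in Python. The single-view flip move and the cost() callable (standing in for query evaluation plus view maintenance cost) are placeholder assumptions, not the paper's actual model.

        import math, random

        # Generic simulated annealing over subsets of candidate views.
        # cost(view_set) is a user-supplied placeholder for the combined
        # query-evaluation and view-maintenance cost model.
        def anneal(candidates, cost, t0=1000.0, cooling=0.95, steps=10000):
            current = set(random.sample(candidates, len(candidates) // 2))
            best, t = set(current), t0
            for _ in range(steps):
                neighbour = set(current)
                v = random.choice(candidates)          # flip one view in/out
                neighbour.symmetric_difference_update({v})
                delta = cost(neighbour) - cost(current)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current = neighbour                # accept move
                    if cost(current) < cost(best):
                        best = set(current)
                t *= cooling                           # cooling schedule
            return best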

  13. Optimizing Raytracing Algorithm Using CUDA

    Directory of Open Access Journals (Sweden)

    Sayed Ahmadreza Razian

    2017-11-01

    The results show that one can generate at least 11 frames per second in HD (720p) resolution with the GPU of a GT 840M graphics card, using the trace method. If a better graphics card is employed, this algorithm and program can be used to generate real-time animation.

  14. An algorithm for ranking assignments using reoptimization

    DEFF Research Database (Denmark)

    Pedersen, Christian Roed; Nielsen, Lars Relund; Andersen, Kim Allan

    2008-01-01

    We consider the problem of ranking assignments according to cost in the classical linear assignment problem. An algorithm partitioning the set of possible assignments, as suggested by Murty, is presented where, for each partition, the optimal assignment is calculated using a new reoptimization technique. Computational results for the new algorithm are presented...

  15. Method and apparatus for shape and end position determination using an optical fiber

    Science.gov (United States)

    Moore, Jason P. (Inventor)

    2010-01-01

    A method of determining the shape of an unbound optical fiber includes collecting strain data along a length of the fiber, calculating curvature and bending direction data of the fiber using the strain data, curve-fitting the curvature and bending direction data to derive curvature and bending direction functions, calculating a torsion function using the bending direction function, and determining the 3D shape from the curvature, bending direction, and torsion functions. An apparatus for determining the 3D shape of the fiber includes a fiber optic cable unbound with respect to a protective sleeve, strain sensors positioned along the cable, and a controller in communication with the sensors. The controller has an algorithm for determining a 3D shape and end position of the fiber by calculating a set of curvature and bending direction data, deriving curvature, bending, and torsion functions, and solving Frenet-Serret equations using these functions.
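
    The final step described above, recovering the 3D shape from the curvature, bending direction, and torsion functions, amounts to integrating the Frenet-Serret equations. Below is a rough Python sketch using a simple Euler scheme; the step size, initial frame, and function signatures are assumptions for illustration, not the patented method.

        import numpy as np

        # Integrate the Frenet-Serret equations T' = k*N, N' = -k*T + t*B,
        # B' = -t*N with forward Euler, given curvature kappa(s) and
        # torsion tau(s) along the fiber. Initial frame is assumed axis-aligned.
        def fiber_shape(kappa, tau, length, steps=1000):
            ds = length / steps
            r = np.zeros(3)
            T, N, B = np.eye(3)               # initial Frenet frame (rows)
            points = [r.copy()]
            for i in range(steps):
                s = i * ds
                k, t = kappa(s), tau(s)
                T, N, B = (T + ds * k * N,
                           N + ds * (-k * T + t * B),
                           B + ds * (-t * N))
                r = r + ds * T                # advance along the tangent
                points.append(r.copy())
            return np.array(points)           # sampled 3D shape, end position last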

  16. ANOMALY DETECTION IN NETWORKING USING HYBRID ARTIFICIAL IMMUNE ALGORITHM

    Directory of Open Access Journals (Sweden)

    D. Amutha Guka

    2012-01-01

    Especially in today’s network scenario, where computers are interconnected through the internet, the security of an information system is a very important issue. Because no system can be absolutely secure, the timely and accurate detection of anomalies is necessary. The main aim of this research paper is to improve anomaly detection by using a Hybrid Artificial Immune Algorithm (HAIA), which is based on Artificial Immune Systems (AIS) and Genetic Algorithms (GA). In this research work, the HAIA approach is used to develop a Network Anomaly Detection System (NADS). The detector set is generated using GA, and the anomalies are identified using the Negative Selection Algorithm (NSA), which is based on AIS. The HAIA algorithm is tested with the KDD Cup 99 benchmark dataset, with the detection rate used to measure the effectiveness of the NADS. The results and consistency of the HAIA are compared with earlier approaches; the proposed algorithm gives the best results.

  17. Patient adaptive control of end-effector based gait rehabilitation devices using a haptic control framework.

    Science.gov (United States)

    Hussein, Sami; Kruger, Jörg

    2011-01-01

    Robot-assisted training has proven beneficial as an extension of conventional therapy for improving rehabilitation outcome. Further facilitation of this positive impact is expected from the application of cooperative control algorithms that increase the patient's contribution to the training effort according to his or her level of ability. This paper presents an approach to cooperative training for end-effector based gait rehabilitation devices, and thereby provides the basis for establishing sophisticated cooperative control methods in this class of devices. It uses a haptic control framework to synthesize and render complex, task-specific training environments composed of polygonal primitives. Training assistance is integrated into the haptic control framework as part of the environment: a compliant window is moved along a nominal training trajectory, compliantly guiding and supporting the foot motion, and the level of assistance is adjusted via the stiffness of the moving window. Further, an iterative learning algorithm is used to automatically adjust this assistance level. Stable haptic rendering of the dynamic training environments and adaptive movement assistance have been evaluated in two example training scenarios: treadmill walking and stair climbing. Data from preliminary trials with one healthy subject are provided in this paper. © 2011 IEEE

  18. DATA SECURITY IN LOCAL AREA NETWORK BASED ON FAST ENCRYPTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    G. Ramesh

    2010-06-01

    Hacking is one of the greatest problems in wireless local area networks. Many algorithms have been used to prevent outside attacks from eavesdropping and to ensure that data are transferred to the end-user safely and correctly. In this paper, a new symmetrical encryption algorithm is proposed that prevents outside attacks. The new algorithm avoids key exchange between users and reduces the time taken for encryption and decryption. It operates at a high data rate in comparison with the Data Encryption Standard (DES), Triple DES (TDES), Advanced Encryption Standard (AES-256), and RC6 algorithms. The new algorithm is applied successfully to both text files and voice messages.

  19. Seismic noise attenuation using an online subspace tracking algorithm

    NARCIS (Netherlands)

    Zhou, Yatong; Li, Shuhua; Zhang, D.; Chen, Yangkang

    2018-01-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient

  20. Supercomputer implementation of finite element algorithms for high speed compressible flows. Progress report, period ending 30 June 1986

    International Nuclear Information System (INIS)

    Thornton, E.A.; Ramakrishnan, R.

    1986-06-01

    Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock-capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models is compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors of length of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extensions of the vectorization procedure for predicting 2D viscous and 3D inviscid flows are demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures to realistic problems that require hundreds of thousands of nodes

  1. Loading pattern optimization using ant colony algorithm

    International Nuclear Information System (INIS)

    Hoareau, Fabrice

    2008-01-01

    Electricite de France (EDF) operates 58 nuclear power plants (NPPs) of the Pressurized Water Reactor type. The loading pattern optimization of these NPPs is currently done by EDF expert engineers. Within this framework, EDF R and D has developed automatic optimization tools that assist the experts. LOOP is an industrial tool, developed by EDF R and D and based on a simulated annealing algorithm. In order to improve the results of such automatic tools, new optimization methods have to be tested. Ant Colony Optimization (ACO) algorithms are recent methods that have given very good results on combinatorial optimization problems. In order to evaluate the performance of such methods on loading pattern optimization, direct comparisons between LOOP and a mock-up based on the Max-Min Ant System algorithm (a particular variant of ACO algorithms) were made on realistic test-cases. It is shown that the results obtained by the ACO mock-up are very similar to those of LOOP. Future research will consist in improving these encouraging results by using parallelization and by hybridizing the ACO algorithm with local search procedures. (author)

  2. Analysis of Fuel Cell Markets in Japan and the US: Experience Curve Development and Cost Reduction Disaggregation

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Max [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Smith, Sarah J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sohn, Michael D. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-07-15

    Fuel cells are both a longstanding and emerging technology for stationary and transportation applications, and their future use will likely be critical for the deep decarbonization of global energy systems. As we look into future applications, a key challenge for policy-makers and technology market forecasters who seek to track and/or accelerate their market adoption is the ability to forecast market costs of the fuel cells as technology innovations are incorporated into market products. Specifically, there is a need to estimate technology learning rates, which are rates of cost reduction versus production volume. Unfortunately, no literature exists for forecasting future learning rates for fuel cells. In this paper, we look retrospectively to estimate learning rates for two fuel cell deployment programs: (1) the micro-combined heat and power (CHP) program in Japan, and (2) the Self-Generation Incentive Program (SGIP) in California. These two examples have a relatively broad set of historical market data and thus provide an informative and international comparison of distinct fuel cell technologies and government deployment programs. We develop a generalized procedure for disaggregating experience-curve cost-reductions in order to disaggregate the Japanese fuel cell micro-CHP market into its constituent components, and we derive and present a range of learning rates that may explain observed market trends. Finally, we explore the differences in the technology development ecosystem and market conditions that may have contributed to the observed differences in cost reduction and draw policy observations for the market adoption of future fuel cell technologies. The scientific and policy contributions of this paper are the first comparative experience curve analysis of past fuel cell technologies in two distinct markets, and the first quantitative comparison of a detailed cost model of fuel cell systems with actual market data. The resulting approach is applicable to

  3. Development of morphing algorithms for Histfactory using information geometry

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Anjishnu; Brock, Ian [University of Bonn (Germany); Cranmer, Kyle [New York University (United States)

    2016-07-01

    Many statistical analyses are based on likelihood fits. In any likelihood fit we try to incorporate all uncertainties, both systematic and statistical. We generally have distributions for the nominal and ±1 σ variations of a given uncertainty. Using that information, Histfactory morphs the distributions for any arbitrary value of the given uncertainties. In this talk, a new morphing algorithm will be presented, which is based on information geometry. The algorithm uses the information about the difference between various probability distributions. Subsequently, we map this information onto geometrical structures and develop the algorithm on the basis of different geometrical properties. Apart from varying all nuisance parameters together, this algorithm can also probe both small (< 1 σ) and large (> 2 σ) variations. It will also be shown how this algorithm can be used for interpolating other forms of probability distributions.

  4. END-OF-USE PRODUCTS IN REVERSE LOGISTICS

    OpenAIRE

    Marta Starostka-Patyk

    2007-01-01

    Reverse logistics is a very useful tool for enterprises which have to deal with end-of-use products. Forward logistics is not able to manage them, because they show up at the beginning of the reverse supply chain. That is the reason for the growing importance of reverse flows. Reverse logistics is a quite new logistics system. This paper presents the idea of reverse logistics and the problems of end-of-use products.

  5. Users’ Perceptions Using Low-End and High-End Mobile-Rendered HMDs: A Comparative Study

    Directory of Open Access Journals (Sweden)

    M.-Carmen Juan

    2018-02-01

    Currently, it is possible to combine Mobile-Rendered Head-Mounted Displays (MR HMDs) with smartphones to create Augmented Reality platforms. The differences between these types of platforms can affect the user’s experience and satisfaction. This paper presents a study that analyses the users’ perception when using the same Augmented Reality app with two MR HMDs (low-end and high-end). Our study evaluates the user’s experience taking into account several factors (control, sensory, distraction, ergonomics and realism). An Augmented Reality app was developed to carry out the comparison for the two MR HMDs; the application had exactly the same visual appearance and functionality on both devices. Forty adults participated in our study. From the results, there were no statistically significant differences in the users’ experience for the different factors when using the two MR HMDs, except for the ergonomic factors, in favour of the high-end MR HMD. Even though the scores for the high-end MR HMD were higher in nearly all of the questions, both MR HMDs provided a very satisfying viewing experience with very high scores. The results were independent of gender and age. The participants rated the high-end MR HMD as the best one. Nevertheless, when asked which MR HMD they would buy, the participants chose the low-end MR HMD, taking its price into account.

  6. Infrastructure system restoration planning using evolutionary algorithms

    Science.gov (United States)

    Corns, Steven; Long, Suzanna K.; Shoberg, Thomas G.

    2016-01-01

    This paper presents an evolutionary algorithm to address restoration issues for supply chain interdependent critical infrastructure. Rapid restoration of infrastructure after a large-scale disaster is necessary to sustain a nation's economy and security, but such long-term restoration has not been investigated as thoroughly as initial rescue and recovery efforts. A model of the Greater Saint Louis Missouri area was created and a disaster scenario simulated. An evolutionary algorithm is used to determine the order in which the bridges should be repaired based on indirect costs. Solutions were evaluated based on the reduction of indirect costs and the restoration of transportation capacity. When compared to a greedy algorithm, the evolutionary algorithm solution reduced indirect costs by approximately 12.4% by restoring automotive travel routes for workers and re-establishing the flow of commodities across the three rivers in the Saint Louis area.

  7. Reactor controller design using genetic algorithms with simulated annealing

    International Nuclear Information System (INIS)

    Erkan, K.; Buetuen, E.

    2000-01-01

    This chapter presents a digital control system for the ITU TRIGA Mark-II reactor using genetic algorithms with simulated annealing. The basic principles of genetic algorithms for problem solving are inspired by the mechanism of natural selection, a biological process in which stronger individuals are likely to be the winners in a competing environment. Genetic algorithms use a direct analogy of natural evolution. They are global search techniques for optimisation, but they are poor at hill-climbing, whereas simulated annealing has the ability of probabilistic hill-climbing. The two techniques are therefore combined here to obtain a fine-tuned algorithm that yields faster convergence and a more accurate search, by introducing a simulated-annealing-like mutation operator and an adaptive cooling schedule. In control system design, there are currently no systematic approaches for choosing the controller parameters to obtain a desired performance; the parameters are usually determined by trial and error through simulation and experimental analysis. Here, the genetic algorithm automatically and efficiently searches for a set of controller parameters that gives better performance. (orig.)

  8. Semi-automated categorization of open-ended questions

    Directory of Open Access Journals (Sweden)

    Matthias Schonlau

    2016-08-01

    Text data from open-ended questions in surveys are difficult to analyze and are frequently ignored. Yet open-ended questions are important because they do not constrain respondents’ answer choices. Where open-ended questions are necessary, multiple human coders sometimes hand-code answers into one of several categories. At the same time, computer scientists have made impressive advances in text mining that may allow the automation of such coding. However, automated algorithms do not achieve an overall accuracy high enough to entirely replace humans. We therefore categorize open-ended questions soliciting narrative responses using text mining for easy-to-categorize answers and humans for the remainder, using expected accuracies to guide the choice of the threshold delineating between “easy” and “hard”. Employing multinomial boosting avoids the common practice of converting machine learning “confidence scores” into pseudo-probabilities. This approach is illustrated with examples from open-ended questions related to respondents’ advice to a patient in a hypothetical dilemma, a follow-up probe related to respondents’ perception of disclosure/privacy risk, and a question on reasons for quitting smoking from a follow-up survey from the Ontario Smoker’s Helpline. Targeting 80% combined accuracy, we found that 54%-80% of the data could be categorized automatically in research surveys.
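
    The routing rule described above can be stated in a few lines. The sketch below assumes a scikit-learn-style classifier exposing predict_proba; the function name and threshold handling are illustrative, not the authors' implementation.

        # Route each open-ended answer to automatic or human coding based on
        # the classifier's top-class confidence versus a chosen threshold.
        def route_answers(classifier, answers, threshold):
            auto, manual = [], []
            for text in answers:
                probs = classifier.predict_proba([text])[0]
                label, conf = probs.argmax(), probs.max()
                (auto if conf >= threshold else manual).append((text, label))
            return auto, manual   # `manual` goes to human coders

    The threshold itself would be chosen, as the abstract describes, from the expected accuracy of the automatic side against the target combined accuracy.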

  9. Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Ahmed Azouaoui

    2012-01-01

    A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature, which use the code itself. Hence, this new approach reduces the complexity of decoding codes of high rates. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competing decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.

  10. PASSion: a pattern growth algorithm-based pipeline for splice junction detection in paired-end RNA-Seq data.

    Science.gov (United States)

    Zhang, Yanju; Lameijer, Eric-Wubbo; 't Hoen, Peter A C; Ning, Zemin; Slagboom, P Eline; Ye, Kai

    2012-02-15

    RNA-Seq is a powerful technology for the study of transcriptome profiles that uses deep-sequencing technologies. Moreover, it may be used for cellular phenotyping and may help establish the etiology of diseases characterized by abnormal splicing patterns. In RNA-Seq, the exact nature of splicing events is buried in the reads that span exon-exon boundaries. The accurate and efficient mapping of these reads to the reference genome is a major challenge. We developed PASSion, a pattern growth algorithm-based pipeline for splice site detection in paired-end RNA-Seq reads. Comparing the performance of PASSion to three existing RNA-Seq analysis pipelines, TopHat, MapSplice and HMMSplicer, revealed that PASSion is competitive with these packages. Moreover, the performance of PASSion is not affected by read length or coverage. It performs better than the other three approaches when detecting junctions in highly abundant transcripts. PASSion also has the ability to detect junctions that do not have known splicing motifs, which cannot be found by the other tools. On the two public RNA-Seq datasets, PASSion predicted ≈ 137,000 and 173,000 splicing events, of which on average 82% are known junctions annotated in the Ensembl transcript database and 18% are novel. In addition, our package can discover differential and shared splicing patterns among multiple samples. The code and utilities can be freely downloaded from https://trac.nbic.nl/passion and ftp://ftp.sanger.ac.uk/pub/zn1/passion.

  11. Advanced defect detection algorithm using clustering in ultrasonic NDE

    Science.gov (United States)

    Gongzhang, Rui; Gachagan, Anthony

    2016-02-01

    A range of materials used in industry exhibit scattering properties which limit ultrasonic NDE. Many algorithms have been proposed to enhance defect detection ability, such as the well-known Split Spectrum Processing (SSP) technique. Scattering noise usually cannot be fully removed, and the remaining noise can easily be confused with real feature signals, becoming artefacts during the image interpretation stage. This paper presents an advanced algorithm to further reduce the influence of artefacts remaining in A-scan data after processing with a conventional defect detection algorithm. The raw A-scan data can be acquired from either a traditional single transducer or a phased array configuration. The proposed algorithm uses unsupervised machine learning to cluster segmental defect signals from pre-processed A-scans into different classes. The distinction and similarity between each class and an ensemble of randomly selected noise segments can be observed by applying a classification algorithm. Each class is then labelled as `legitimate reflector' or `artefact' based on this observation, and the expected probability of detection (PoD) and probability of false alarm (PFA) are determined. To facilitate data collection and validate the proposed algorithm, a 5 MHz linear array transducer was used to collect A-scans from both austenitic steel and Inconel samples. Each pulse-echo A-scan was pre-processed using SSP, and the subsequent application of the proposed clustering algorithm provided an additional reduction in PFA while maintaining PoD for both samples, compared with SSP alone.

  12. Mobile Ad Hoc Network Energy Cost Algorithm Based on Artificial Bee Colony

    Directory of Open Access Journals (Sweden)

    Mustafa Tareq

    2017-01-01

    A mobile ad hoc network (MANET) is a collection of mobile nodes that dynamically form a temporary network without using any existing network infrastructure. A MANET selects a path with a minimal number of intermediate nodes to reach the destination node. As the distance between nodes increases, the required transmission power increases. The power level of nodes affects the ease with which a route is constituted between a pair of nodes. This study utilizes the swarm intelligence technique, through the artificial bee colony (ABC) algorithm, to optimize the energy consumption of the dynamic source routing (DSR) protocol in a MANET. The proposed algorithm is called bee DSR (BEEDSR). The ABC algorithm is used to identify the optimal path from the source to the destination to overcome energy problems. The performance of the BEEDSR algorithm is compared with the DSR and bee-inspired (BeeIP) protocols. The comparison was conducted based on average energy consumption, average throughput, average end-to-end delay, routing overhead, and packet delivery ratio performance metrics, varying the node speed and packet size. The BEEDSR algorithm is superior to the other protocols in terms of energy conservation and delay degradation relating to node speed and packet size.

  13. Contextualising Water Use in Residential Settings: A Survey of Non-Intrusive Techniques and Approaches

    Directory of Open Access Journals (Sweden)

    Davide Carboni

    2016-05-01

    Water monitoring in households is important to ensure the sustainability of fresh water reserves on our planet. It provides stakeholders with the statistics required to formulate optimal strategies in residential water management. However, this should not be prohibitive, and appliance-level water monitoring cannot practically be achieved by deploying sensors on every faucet or water-consuming device of interest, due to the higher hardware costs and complexity, not to mention the risk of accidental leakages that can derive from the extra plumbing needed. Machine learning and data mining are promising techniques for analysing monitored data to obtain non-intrusive water usage disaggregation, because they can discern water usage from aggregated data acquired at a single point of observation. This paper provides an overview of water usage disaggregation systems and the related techniques adopted for water event classification. The state-of-the-art algorithms and testbeds used for fixture recognition are reviewed, and a discussion of the prominent challenges and future research is also included.

  14. Hyperforin prevents beta-amyloid neurotoxicity and spatial memory impairments by disaggregation of Alzheimer's amyloid-beta-deposits.

    Science.gov (United States)

    Dinamarca, M C; Cerpa, W; Garrido, J; Hancke, J L; Inestrosa, N C

    2006-11-01

    The major protein constituent of amyloid deposits in Alzheimer's disease (AD) is the amyloid beta-peptide (Abeta). In the present work, we have determined the effect of hyperforin, an acylphloroglucinol compound isolated from Hypericum perforatum (St John's Wort), on Abeta-induced spatial memory impairments and on Abeta neurotoxicity. We report here that hyperforin: (1) decreases amyloid deposit formation in rats injected with amyloid fibrils in the hippocampus; (2) decreases the neuropathological changes and behavioral impairments in a rat model of amyloidosis; and (3) prevents Abeta-induced neurotoxicity in hippocampal neurons, both from amyloid fibrils and from Abeta oligomers, avoiding the increase in reactive oxygen species associated with amyloid toxicity. Both effects could be explained by the capacity of hyperforin to disaggregate amyloid deposits in a dose- and time-dependent manner and to decrease Abeta aggregation and amyloid formation. Altogether, this evidence suggests that hyperforin may be useful in decreasing amyloid burden and toxicity in AD patients, and may be a putative therapeutic agent against the disease.

  15. Using Genetic Algorithms in Secured Business Intelligence Mobile Applications

    Directory of Open Access Journals (Sweden)

    Silvia TRIF

    2011-01-01

    The paper aims to assess the use of genetic algorithms for training neural networks used in secured Business Intelligence mobile applications. A comparison is made between the classic back-propagation method and genetic-algorithm-based training. The design of these algorithms is presented, and a comparative study is carried out to determine the better way of training neural networks from the point of view of time and memory usage. The results show that genetic-algorithm-based training offers better performance and memory usage than back-propagation, and that such algorithms are fit to be implemented on mobile devices.

  16. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    This article proposes a constrained clustering algorithm with performance competitive with, and computation time lower than, state-of-the-art methods; it consists of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering, which uses constraints as background knowledge, although easy to implement and quick, has insufficient performance compared with metric learning-based methods. Since it simply adds a function to the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow, depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of both the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In the framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrated that our method has performance competitive with state-of-the-art constrained clustering methods for most data sets, while taking much less computation time. The experiments also demonstrated the effectiveness of controlling the constraint priorities by using the boosting principle, and that our constrained k-means algorithm functions correctly as a weak learner for boosting.
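
    The constraint check that the article describes as a function added to the data assignment process looks roughly like the following COP-KMEANS-style sketch in Python; the data representation is an assumption for illustration, not the authors' code.

        # Return True if assigning `point` to `cluster_id` would violate a
        # must-link (partner forced into a different cluster) or cannot-link
        # (partner already in the same cluster) constraint.
        def violates(point, cluster_id, assignment, must_link, cannot_link):
            for a, b in must_link:
                other = b if a == point else a if b == point else None
                if other is not None and assignment.get(other, cluster_id) != cluster_id:
                    return True
            for a, b in cannot_link:
                other = b if a == point else a if b == point else None
                if other is not None and assignment.get(other) == cluster_id:
                    return True
            return False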

  17. Silicon Photonics towards Disaggregation of Resources in Data Centers

    Directory of Open Access Journals (Sweden)

    Miltiadis Moralis-Pegios

    2018-01-01

    In this paper, we demonstrate two subsystems based on Silicon Photonics that address the network requirements imposed by the disaggregation of resources in Data Centers. The first utilizes a 4 × 4 Silicon photonics switching matrix, employing Mach-Zehnder Interferometers (MZIs) with electro-optical phase shifters, directly controlled by a high-speed Field Programmable Gate Array (FPGA) board, for the successful implementation of a Bloom-Filter (BF) label-forwarding scheme. The FPGA is responsible for extracting the BF label from the incoming optical packets, carrying out the BF-based forwarding function, determining the appropriate switching state, and generating the corresponding control signals to convey incoming packets to the desired output port of the matrix. The BF-label-based packet forwarding scheme allows rapid reconfiguration of the optical switch, while at the same time reducing the memory requirements of the node’s lookup table. Successful operation for 10 Gb/s data packets is reported for a 1 × 4 routing layout. The second subsystem utilizes three integrated spiral waveguides, with a record-high delay-versus-footprint efficiency of 2.6 ns/mm2, along with two Semiconductor Optical Amplifier Mach-Zehnder Interferometer (SOA-MZI) wavelength converters, to construct a variable optical buffer and a Time Slot Interchange module. Error-free on-chip variable-delay buffering from 6.5 ns up to 17.2 ns and successful timeslot interchanging for 10 Gb/s optical packets are presented.

  18. Optimum design for rotor-bearing system using advanced genetic algorithm

    International Nuclear Information System (INIS)

    Kim, Young Chan; Choi, Seong Pil; Yang, Bo Suk

    2001-01-01

    This paper describes a combinational method to compute the global and local solutions of optimization problems. The present hybrid algorithm uses both a genetic algorithm and a local concentrated search algorithm (e.g., the simplex method). The hybrid algorithm is not only faster than the standard genetic algorithm but also supplies a more accurate solution; in addition, it can find both the global and local optimum solutions. The present algorithm can be applied to minimize the resonance response (Q factor) and to place the critical speeds as far from the operating speed as possible. These factors play very important roles in designing a rotor-bearing system under dynamic behavior constraints. In the present work, the shaft diameter, the bearing length, and the clearance are used as the design variables.

  19. Multi-User Identification-Based Eye-Tracking Algorithm Using Position Estimation

    Directory of Open Access Journals (Sweden)

    Suk-Ju Kang

    2016-12-01

    This paper proposes a new multi-user eye-tracking algorithm using position estimation. Conventional eye-tracking algorithms are typically suitable only for a single user and thereby cannot be used in a multi-user system. Even when they can be used to track the eyes of multiple users, their detection accuracy is low and they cannot identify multiple users individually. The proposed algorithm solves these problems and enhances the detection accuracy. Specifically, the proposed algorithm adopts a classifier to detect faces in the red, green, and blue (RGB) and depth images. It then calculates features based on the histogram of oriented gradients for the detected facial region to identify multiple users, and selects the template that best matches each user from a pre-determined face database. Finally, the proposed algorithm extracts the final eye positions based on anatomical proportions. Simulation results show that the proposed algorithm improved the average F1 score by up to 0.490 compared with benchmark algorithms.

  20. Algorithms and architectures of artificial intelligence

    CERN Document Server

    Tyugu, E

    2007-01-01

    This book gives an overview of methods developed in artificial intelligence for search, learning, problem solving and decision-making. It gives an overview of algorithms and architectures of artificial intelligence that have reached the degree of maturity when a method can be presented as an algorithm, or when a well-defined architecture is known, e.g. in neural nets and intelligent agents. It can be used as a handbook for a wide audience of application developers who are interested in using artificial intelligence methods in their software products. Parts of the text are rather independent, so that one can look into the index and go directly to a description of a method presented in the form of an abstract algorithm or an architectural solution. The book can be used also as a textbook for a course in applied artificial intelligence. Exercises on the subject are added at the end of each chapter. Neither programming skills nor specific knowledge in computer science are expected from the reader. However, some p...

  1. Increasing operations profitability using an end-to-end, wireless internet, gas monitoring system

    Energy Technology Data Exchange (ETDEWEB)

    McDougall, M. [Northrock Resources Ltd., AB (Canada); Benterud, K. [zed.i solutions, inc., Calgary, AB (Canada)

    2004-10-01

    Implementation by Northrock Resources Ltd., a wholly-owned subsidiary of Unocal Corporation, of a fully integrated end-to-end gas measurement and production analysis system is discussed. The system, dubbed Smart-Alek(TM), utilizes public wireless communications and a web-browser-only delivery system to provide seamless well visibility on a desktop computer. Smart-Alek(TM) is an example of a new type of end-to-end electronic gas flow measurement system known as FINE(TM), an acronym for Field Intelligence Network and End-User Interface. The system delivers easy-to-use, complete, reliable and cost-effective production information, far more effectively than is possible with conventional SCADA technology. By installing the system, Northrock was able to increase gas volumes through more accurate electronic flow measurement in place of mechanical charts, with very low technical maintenance and at a reduced operating cost. It is emphasized that deploying the technology alone produces only partial benefits; to realize the full benefits it is also essential to change grass-roots operating practices, aiming at timely decision-making at the field level. 5 refs., 5 figs.

  2. Virtual machine consolidation enhancement using hybrid regression algorithms

    Directory of Open Access Journals (Sweden)

    Amany Abdelsamea

    2017-11-01

    Cloud computing data centers are growing rapidly in both number and capacity to meet the increasing demands for highly-responsive computing and massive storage. Such data centers consume enormous amounts of electrical energy, resulting in high operating costs and carbon dioxide emissions. The reason for this extremely high energy consumption is not just the quantity of computing resources and the power inefficiency of hardware, but rather lies in the inefficient usage of these resources. VM consolidation involves live migration of VMs, hence the capability of transferring a VM between physical servers with close to zero downtime. It is an effective way to improve the utilization of resources and increase energy efficiency in cloud data centers. VM consolidation consists of host overload/underload detection, VM selection and VM placement. Most of the current VM consolidation approaches apply either heuristic-based techniques, such as static utilization thresholds and decision-making based on statistical analysis of historical data, or simply periodic adaptation of the VM allocation, and most of those algorithms rely on CPU utilization only for host overload detection. In this paper we propose using hybrid factors to enhance VM consolidation. Specifically, we developed a multiple regression algorithm that uses CPU utilization, memory utilization and bandwidth utilization for host overload detection. The proposed algorithm, Multiple Regression Host Overload Detection (MRHOD), significantly reduces energy consumption while ensuring a high level of adherence to Service Level Agreements (SLAs), since it gives a real indication of host utilization based on three parameters (CPU, memory and bandwidth utilization) instead of one parameter only (CPU utilization). Through simulations we show that our approach reduces power consumption by a factor of 6 compared to single-factor algorithms using random workload. Also, using PlanetLab workload traces we show that MRHOD improves
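
    A minimal sketch of the multiple-regression idea follows, assuming a plain least-squares fit of a combined load indicator on CPU, memory and bandwidth utilization histories; the target variable and the 0.9 threshold are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        # Fit load ~ [1, cpu, mem, bw] by least squares, then flag the host
        # if the predicted load for the latest observation exceeds a threshold.
        def overloaded(history, threshold=0.9):
            X = np.column_stack([history['cpu'], history['mem'], history['bw']])
            X = np.column_stack([np.ones(len(X)), X])      # intercept term
            y = history['load']                            # assumed combined-load target
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            latest = np.concatenate(([1.0], X[-1, 1:]))    # most recent utilizations
            return latest @ beta >= threshold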

  3. The importance of disaggregated freight flow forecasts to inform transport infrastructure investments

    Directory of Open Access Journals (Sweden)

    Jan H. Havenga

    2013-09-01

    This article presents the results of a comprehensive disaggregated commodity flow model for South Africa. The wealth of data available enables a segmented analysis of future freight transportation demand in order to assist with the prioritisation of transportation investments, the development of transport policy and the growth of the logistics service provider industry. In 2011, economic demand for commodities in South Africa’s competitive surface-freight transport market amounted to 622 million tons and is predicted to increase to 1834 million tons by 2041, a compound annual growth rate of 3.67%. Fifty percent of corridor freight constitutes break bulk; intermodal solutions are therefore critical in South Africa. Scenario analysis indicates that 80% of corridor break-bulk tons can be serviced by four intermodal facilities – in Gauteng, Durban, Cape Town and Port Elizabeth. This would allow for the development of an investment planning hierarchy, enable industry targeting (through commodity visibility), ensure capacity development ahead of demand, and lower the cost of logistics in South Africa.

  4. Separation of left and right lungs using 3D information of sequential CT images and a guided dynamic programming algorithm

    Science.gov (United States)

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    Objective: This article presents a new computerized scheme that aims to accurately and robustly separate the left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs, even in cases with especially severe and multiple connections, using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points. Results: The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming while avoiding the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between the left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104
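
    The guided dynamic programming step can be illustrated as a minimum-cost path search through a cost image between selected start and end points, as in the Python sketch below. The band restriction standing in for the "guidance" and all parameter choices are assumptions for illustration, not the authors' implementation.

        import numpy as np

        # Minimum-cost top-to-bottom path through `cost`, restricted to a band
        # of columns around the start/end points (the "guided" part).
        def separating_path(cost, start_col, end_col, band=20):
            rows, cols = cost.shape
            acc = np.full((rows, cols), np.inf)
            acc[0, start_col] = cost[0, start_col]
            lo = max(0, min(start_col, end_col) - band)
            hi = min(cols, max(start_col, end_col) + band)
            for r in range(1, rows):
                for c in range(lo, hi):                 # stay inside the band
                    prev = acc[r - 1, max(0, c - 1):min(cols, c + 2)]
                    acc[r, c] = cost[r, c] + prev.min()
            path = [(rows - 1, end_col)]                # trace back the path
            for r in range(rows - 1, 0, -1):
                c = path[-1][1]
                window = range(max(0, c - 1), min(cols, c + 2))
                path.append((r - 1, min(window, key=lambda cc: acc[r - 1, cc])))
            return path[::-1]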

  5. Evacuation route planning during nuclear emergency using genetic algorithm

    International Nuclear Information System (INIS)

    Suman, Vitisha; Sarkar, P.K.

    2012-01-01

    In the nuclear industry, routing in case of any emergency is a cause of concern and of great importance: even the smallest amount of time saved in the affected region saves a large amount of otherwise received dose. The genetic algorithm, an optimization technique, has a great ability to search for the optimal path from the affected region to a destination station in a spatially addressed problem. Usually heuristic algorithms are used to carry out these types of search strategies, but due to the lack of global sampling in the feasible solution space, these algorithms have a considerable possibility of being trapped in local optima. Routing problems are mainly search problems for finding the shortest distance within a time limit that covers the required number of stations, taking care of traffic, road quality, population size, etc. Decision-makers often lack formal mechanisms to help them explore the solution space of their problem, which unduly constrains their assumptions about the number and range of options available. The genetic algorithm provides a way to optimize a multi-parameter constrained problem with ease: its capacity to generate a range of options, search the solution space, and selectively focus on promising combinations of criteria makes it ideally suited to such complex spatial decision problems. Emergency response and routing can thus be made efficient in accessing the closest facilities and determining the shortest route. Accuracy and care in creating the database further improve the quality of the final output, and the search space can be utilized to its greatest extent.

  6. Screening California Current fishery management scenarios using the Atlantis end-to-end ecosystem model

    Science.gov (United States)

    Kaplan, Isaac C.; Horne, Peter J.; Levin, Phillip S.

    2012-09-01

    End-to-end marine ecosystem models link climate and oceanography to the food web and human activities. These models can be used as forecasting tools, to strategically evaluate management options and to support ecosystem-based management. Here we report the results of such forecasts in the California Current, using an Atlantis end-to-end model. We worked collaboratively with fishery managers at NOAA’s regional offices and staff at the National Marine Sanctuaries (NMS) to explore the impact of fishery policies on management objectives at different spatial scales, from single Marine Sanctuaries to the entire Northern California Current. In addition to examining Status Quo management, we explored the consequences of several gear switching and spatial management scenarios. Of the scenarios that involved large scale management changes, no single scenario maximized all performance metrics. Any policy choice would involve trade-offs between stakeholder groups and policy goals. For example, a coast-wide 25% gear shift from trawl to pot or longline appeared to be one possible compromise between an increase in spatial management (which sacrificed revenue) and scenarios such as the one consolidating bottom impacts to deeper areas (which did not perform substantially differently from Status Quo). Judged on a coast-wide scale, most of the scenarios that involved minor or local management changes (e.g. within Monterey Bay NMS only) yielded results similar to Status Quo. When impacts did occur in these cases, they often involved local interactions that were difficult to predict a priori based solely on fishing patterns. However, judged on the local scale, deviation from Status Quo did emerge, particularly for metrics related to stationary species or variables (i.e. habitat and local metrics of landed value or bycatch). We also found that isolated management actions within Monterey Bay NMS would cause local fishers to pay a cost for conservation, in terms of reductions in landed

  7. Calibration of neural networks using genetic algorithms, with application to optimal path planning

    Science.gov (United States)

    Smith, Terence R.; Pitney, Gilbert A.; Greenwood, Daniel

    1987-01-01

    Genetic algorithms (GA) are used to search the synaptic weight space of artificial neural systems (ANS) for weight vectors that optimize some network performance function. GAs do not suffer from some of the architectural constraints involved with other techniques and it is straightforward to incorporate terms into the performance function concerning the metastructure of the ANS. Hence GAs offer a remarkably general approach to calibrating ANS. GAs are applied to the problem of calibrating an ANS that finds optimal paths over a given surface. This problem involves training an ANS on a relatively small set of paths and then examining whether the calibrated ANS is able to find good paths between arbitrary start and end points on the surface.
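
    The weight-space search described above can be sketched as follows: a population of weight vectors is evolved under a fitness function that scores network performance. The truncation selection and Gaussian mutation below are generic GA choices for illustration, not necessarily those of the paper.

        import numpy as np

        # Evolve weight vectors of dimension `dim` to maximize `fitness`,
        # e.g. fitness(w) = -path_cost(network_output(w)) for path planning.
        def evolve_weights(fitness, dim, pop=50, gens=200, sigma=0.1):
            population = np.random.randn(pop, dim)
            for _ in range(gens):
                scores = np.array([fitness(w) for w in population])
                parents = population[scores.argsort()[-pop // 2:]]  # keep top half
                picks = np.random.randint(len(parents), size=pop - len(parents))
                children = parents[picks] + sigma * np.random.randn(
                    pop - len(parents), dim)                        # Gaussian mutation
                population = np.vstack([parents, children])
            return population[np.argmax([fitness(w) for w in population])]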

  8. Detection of Illegitimate Emails using Boosting Algorithm

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    In this paper, we report on experiments to detect illegitimate emails using a boosting algorithm. We call an email illegitimate if it is not useful for the receiver or for society. We have divided the problem into two major areas of illegitimate email detection: suspicious email detection and spam email detection. For our desired task, we have applied a boosting technique. With the use of boosting we can achieve high accuracy with traditional classification algorithms. When using boosting one has to choose a suitable weak learner as well as the number of boosting iterations. In this paper, we...
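
    A minimal sketch of boosting for email classification, using scikit-learn's AdaBoost with a decision stump as the weak learner (the parameter name `estimator` follows recent scikit-learn releases); the TF-IDF features and 100 iterations are illustrative assumptions, not the authors' setup.

        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.tree import DecisionTreeClassifier

        # Weak learner: a depth-1 decision tree; n_estimators is the number
        # of boosting iterations mentioned in the abstract.
        model = make_pipeline(
            TfidfVectorizer(),
            AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                               n_estimators=100),
        )
        # model.fit(train_emails, train_labels); model.predict(test_emails)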

  9. A Full Front End Chain for Drift Chambers

    Energy Technology Data Exchange (ETDEWEB)

    Chiarello, G. [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Università del Salento, Lecce (Italy); Corvaglia, A.; Grancagnolo, F. [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Panareo, M. [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Università del Salento, Lecce (Italy); Pepino, A., E-mail: aurora.pepino@le.infn.it [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Università del Salento, Lecce (Italy); Primiceri, P. [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Tassielli, G. [Istituto Nazionale di Fisica Nucleare, Lecce (Italy); Fermilab, Batavia, Illinois (United States); Università Marconi, Roma (Italy)

    2014-03-01

    We developed a high-performance full chain for drift chamber signal processing. The Front End electronics is a multistage amplifier board based on high-performance commercial devices. In addition, a fast readout algorithm for Cluster Counting and Timing purposes has been implemented on a Xilinx Virtex-4 core FPGA. The algorithm analyzes and stores data coming from a Helium-based drift tube and represents the outcome of balancing efficiency against high-speed performance.

  10. Accelerating the XGBoost algorithm using GPU computing

    Directory of Open Access Journals (Sweden)

    Rory Mitchell

    2017-07-01

    We present a CUDA-based implementation of a decision tree construction algorithm within the gradient boosting library XGBoost. The tree construction algorithm is executed entirely on the graphics processing unit (GPU) and shows high performance with a variety of datasets and settings, including sparse input matrices. Individual boosting iterations are parallelised, combining two approaches: an interleaved approach is used for shallow trees, switching to a more conventional radix-sort-based approach for larger depths. We show speedups of between 3× and 6× using a Titan X compared to a 4-core i7 CPU, and 1.2× using a Titan X compared to 2× Xeon CPUs (24 cores). We show that it is possible to process the Higgs dataset (10 million instances, 28 features) entirely within GPU memory. The algorithm is made available as a plug-in within the XGBoost library and fully supports all XGBoost features including classification, regression and ranking tasks.
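
    Enabling the plug-in from Python looks roughly like this; the 'gpu_hist' spelling follows older XGBoost releases, while newer releases express the same configuration as tree_method='hist' with device='cuda'. The toy data stands in for a real training set.

        import numpy as np
        import xgboost as xgb

        X = np.random.rand(256, 28)                # toy stand-in for real features
        y = np.random.randint(0, 2, size=256)      # toy binary labels
        dtrain = xgb.DMatrix(X, label=y)
        params = {'tree_method': 'gpu_hist',       # GPU tree construction plug-in
                  'objective': 'binary:logistic'}
        booster = xgb.train(params, dtrain, num_boost_round=100)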

  11. Seasonal streamflow prediction by a combined climate-hydrologic system for river basins of Taiwan

    Science.gov (United States)

    Kuo, Chun-Chao; Gan, Thian Yew; Yu, Pao-Shan

    2010-06-01

    Summary: A combined climate-hydrologic system with three components was developed to predict the streamflow of two river basins of Taiwan at one-season (3-month) lead time for the NDJ and JFM seasons. The first component consists of the wavelet-based ANN-GA model (Artificial Neural Network calibrated by a Genetic Algorithm), which predicts the seasonal rainfall by using selected sea surface temperatures (SST) as predictors, given that SST is generally predictable by climate models up to 6-month lead time. For the second component, three disaggregation models, Valencia and Schaake (VS), Lane, and the Canonical Random Cascade Model (CRCM), were tested by comparing the accuracy with which they disaggregate seasonal rainfall to 3-day time scale rainfall data. The third component consists of a continuous rainfall-runoff model modified from HBV (called the MHBV) and calibrated by a global optimization algorithm against the observed rainfall and streamflow data of the Shihmen and Tsengwen river basins of Taiwan. The proposed system was tested by first disaggregating the seasonal rainfall predicted by the ANN-GA to rainfall at a 3-day time step using the Lane model; the disaggregated rainfall data were then used to drive the calibrated MHBV to predict the streamflow for both river basins at a 3-day time step up to a season's lead time. Overall, the streamflow predicted by this combined system for the NDJ season, which is better than that for the JFM season, will be useful for the seasonal planning and management of water resources of these two river basins of Taiwan.

  12. Industrial Computed Tomography using Proximal Algorithm

    KAUST Repository

    Zang, Guangming

    2016-04-14

    In this thesis, we present ProxiSART, a flexible proximal framework for robust 3D cone-beam tomographic reconstruction based on the Simultaneous Algebraic Reconstruction Technique (SART). We derive the proximal operator for the SART algorithm and use it to minimize the data term in a proximal algorithm. We show the flexibility of the framework by plugging in different powerful regularizers, and show its robustness in achieving better reconstruction results in the presence of noise and when using fewer projections. We compare our framework to state-of-the-art methods and existing popular tomography reconstruction software packages, on both synthetic and real datasets, and show superior reconstruction quality, especially from noisy data and a small number of projections.

  13. Dynamic Synchronous Capture Algorithm for an Electromagnetic Flowmeter.

    Science.gov (United States)

    Fanjiang, Yong-Yi; Lu, Shih-Wei

    2017-04-10

    This paper proposes a dynamic synchronous capture (DSC) algorithm to calculate the flow rate for an electromagnetic flowmeter. The DSC algorithm accurately calculates the flow rate signal and efficiently converts the analog signal, upgrading the execution performance of a microcontroller unit (MCU). Furthermore, it reduces interference from abnormal noise, and it is extremely steady and independent of fluctuations in the flow measurement. Moreover, it calculates the current flow rate signal (m/s) immediately. The DSC algorithm can be applied on a current general-purpose MCU firmware platform without using DSP (Digital Signal Processing) or a high-speed, high-end MCU platform, and signal amplification by hardware reduces the demand for ADC accuracy, which reduces the cost.

  14. An Early Fire Detection Algorithm Using IP Cameras

    Directory of Open Access Journals (Sweden)

    Hector Perez-Meana

    2012-05-01

    The presence of smoke is the first symptom of fire; therefore, to achieve early fire detection, accurate and quick estimation of the presence of smoke is very important. In this paper we propose an algorithm to detect the presence of smoke using video sequences captured by Internet Protocol (IP) cameras, in which important features of smoke, such as color, motion and growth properties, are employed. For efficient smoke detection on the IP camera platform, a detection algorithm must operate directly in the Discrete Cosine Transform (DCT) domain to reduce computational cost, avoiding the complete decoding process required by algorithms that operate in the spatial domain. In the proposed algorithm, the DCT inter-transformation technique is used to increase the detection accuracy without an inverse DCT operation. In the proposed scheme, the candidate smoke regions are first estimated using the motion and color properties of smoke; next, the noise is reduced using morphological operations. Finally, the growth properties of the candidate smoke regions are analyzed over time using the connected component labeling technique. Evaluation results show that a feasible smoke detection method is obtained, with false negative and false positive error rates approximately equal to 4% and 2%, respectively.

  15. Algorithm for detection of the broken phase conductor in the radial networks

    Directory of Open Access Journals (Sweden)

    Ostojić Mladen M.

    2016-01-01

    Full Text Available The paper presents an algorithm for a directional relay to be used for detection of a broken phase conductor in radial networks. The algorithm uses synchronized voltages, measured at the beginning and at the end of the line, as input signals. During the process, the measured voltages are phase-compared. On the basis of the normalized energy, the direction to the break point of the phase conductor is detected. A radial network model which simulates the broken phase conductor was developed with the Matlab/Simulink software package. The simulations generated the required input signals with which the algorithm was tested. The development of the algorithm, the formation of the simulation model, and the test results of the proposed algorithm are presented in this paper.

  16. Computational algorithm for lifetime exposure to antimicrobials in pigs using register data-The LEA algorithm.

    Science.gov (United States)

    Birkegård, Anna Camilla; Andersen, Vibe Dalhoff; Halasa, Tariq; Jensen, Vibeke Frøkjær; Toft, Nils; Vigre, Håkan

    2017-10-01

    Accurate and detailed data on antimicrobial exposure in pig production are essential when studying the association between antimicrobial exposure and antimicrobial resistance. Due to difficulties in obtaining primary data on antimicrobial exposure in a large number of farms, there is a need for a robust and valid method to estimate the exposure using register data. An approach that estimates the antimicrobial exposure in every rearing period during the lifetime of a pig using register data was developed into a computational algorithm. In this approach, data from national registers on antimicrobial purchases, movements of pigs, and farm demographics registered at farm level are used. The algorithm traces batches of pigs retrospectively from slaughter to the farm(s) that housed the pigs during their finisher, weaner, and piglet periods. Subsequently, the algorithm estimates the antimicrobial exposure as the number of Animal Defined Daily Doses for treatment of one kg pig in each of the rearing periods. Thus, the antimicrobial purchase data at farm level are translated into antimicrobial exposure estimates at batch level. A batch of pigs is defined here as pigs sent to slaughter on the same day from the same farm. In this study we present, validate, and optimise a computational algorithm that calculates the lifetime exposure to antimicrobials for slaughter pigs. The algorithm was evaluated by comparing the computed estimates to data on antimicrobial usage from farm records in 15 farm units. We found a good positive correlation between the two estimates. The algorithm was run for Danish slaughter pigs sent to slaughter from January to March 2015 from farms with more than 200 finishers to estimate the proportion of farms that it was applicable to. In the final process, the algorithm was successfully run for batches of pigs originating from 3026 farms with finisher units (77% of the initial population). This number can be increased if more accurate register data can be
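    A minimal sketch of the final translation step, turning farm-level purchase data into batch-level exposure per rearing period, is shown below. The input structures (dictionaries keyed by farm and period) and the per-kg normalization are simplified stand-ins for the Danish register extracts the algorithm actually consumes:

        def lifetime_exposure(batch_farms, purchases, herd_kg):
            """batch_farms: {"piglet": farm, "weaner": farm, "finisher": farm},
            as recovered by tracing the batch back from slaughter.
            purchases: {(farm, period): ADDs purchased};
            herd_kg: {(farm, period): standard kg of pig housed in that period}.
            All inputs are hypothetical stand-ins for the register data."""
            exposure = {}
            for period, farm in batch_farms.items():
                adds = purchases.get((farm, period), 0.0)
                kg = herd_kg.get((farm, period), 1.0)
                exposure[period] = adds / kg  # ADDs per kg pig in this rearing period
            return exposure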

  17. Document Organization Using Kohonen's Algorithm.

    Science.gov (United States)

    Guerrero Bote, Vicente P.; Moya Anegon, Felix de; Herrero Solana, Victor

    2002-01-01

    Discussion of the classification of documents from bibliographic databases focuses on a method of vectorizing reference documents from LISA (Library and Information Science Abstracts) which permits their topological organization using Kohonen's algorithm. Analyzes possibilities of this type of neural network with respect to the development of…

  18. High speed numerical integration algorithm using FPGA | Razak ...

    African Journals Online (AJOL)

    Conventionally, numerical integration algorithms are executed in software and are time consuming to accomplish. Field Programmable Gate Arrays (FPGAs) can be used as a much faster, very efficient, and reliable alternative for implementing numerical integration algorithms. This paper proposes a hardware implementation of four ...

  19. Solving Multiobjective Optimization Problems Using Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Wenping Zou

    2011-01-01

    Full Text Available Multiobjective optimization has been a difficult problem and a research focus in the fields of science and engineering. This paper presents a novel algorithm based on the artificial bee colony (ABC) to deal with multiobjective optimization problems. ABC is one of the most recently introduced algorithms, based on the intelligent foraging behavior of a honey bee swarm. It uses fewer control parameters, and it can be efficiently used for solving multimodal and multidimensional optimization problems. Our algorithm uses the concept of Pareto dominance to determine the flight direction of a bee, and it maintains the nondominated solution vectors found so far in an external archive. The proposed algorithm is validated using standard test problems, and simulation results show that the proposed approach is highly competitive and can be considered a viable alternative for solving multiobjective optimization problems.
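    The Pareto-dominance test and the external-archive update at the heart of such a multiobjective ABC can be sketched in a few lines of Python (minimization and a simple unbounded archive are assumed; the paper's archive handling may differ):

        import numpy as np

        def dominates(f1, f2):
            # True if objective vector f1 Pareto-dominates f2 (minimization)
            f1, f2 = np.asarray(f1), np.asarray(f2)
            return bool(np.all(f1 <= f2) and np.any(f1 < f2))

        def update_archive(archive, candidate):
            # keep only the nondominated objective vectors found so far
            if any(dominates(a, candidate) for a in archive):
                return archive                                # candidate is dominated
            archive = [a for a in archive if not dominates(candidate, a)]
            archive.append(candidate)
            return archive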

  20. Start/End Delays of Voiced and Unvoiced Speech Signals

    Energy Technology Data Exchange (ETDEWEB)

    Herrnstein, A

    1999-09-24

    Recent experiments using low-power EM-radar-like sensors (e.g., GEMs) have demonstrated a new method for measuring vocal fold activity and the onset times of voiced speech, as vocal fold contact begins to take place. Similarly, the end time of a voiced speech segment can be measured. Secondly, it appears that in most normal uses of American English speech, unvoiced-speech segments directly precede or directly follow voiced-speech segments. For many applications, it is useful to know typical duration times of these unvoiced speech segments. A corpus of spoken "Timit" words, phrases, and sentences, assembled earlier and recorded using simultaneously measured acoustic and EM-sensor glottal signals from 16 male speakers, was used for this study. By inspecting the onset (or end) of unvoiced speech using the acoustic signal, and the onset (or end) of voiced speech using the EM-sensor signal, the average duration times for unvoiced segments preceding the onset of vocalization were found to be 300 ms, and for following segments, 500 ms. An unvoiced speech period is then defined in time, first by using the onset of the EM-sensed glottal signal as the onset-time marker for the voiced speech segment and the end marker for the unvoiced segment. Then, by subtracting 300 ms from the onset time mark of voicing, the unvoiced speech segment start time is found. Similarly, the times for a following unvoiced speech segment can be found. While data of this nature have proven to be useful for work in our laboratory, a great deal of additional work remains to validate such data for use with general populations of users. These procedures have been useful for applying optimal processing algorithms over time segments of unvoiced, voiced, and non-speech acoustic signals. For example, these data appear to be of use in speaker validation, in vocoding, and in denoising algorithms.

  1. Choice of crystal surface finishing for a dual-ended readout depth-of-interaction (DOI) detector

    International Nuclear Information System (INIS)

    Fan, Peng; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Wei, Qingyang; Yao, Rutao

    2016-01-01

    The objective of this study was to choose the crystal surface finishing for a dual-ended readout (DER) DOI detector. Through Monte Carlo simulations and experimental studies, we evaluated 4 crystal surface finishing options as combinations of crystal surface polishing (diffuse or specular) and reflector (diffuse or specular) options on a DER detector. We also tested one linear and one logarithm DOI calculation algorithm. The figures of merit used were DOI resolution, DOI positioning error, and energy resolution. Both the simulation and experimental results show that (1) choosing a diffuse type in either surface polishing or reflector would improve DOI resolution but degrade energy resolution; (2) crystal surface finishing with a diffuse polishing combined with a specular reflector appears a favorable candidate with a good balance of DOI and energy resolution; and (3) the linear and logarithm DOI calculation algorithms show overall comparable DOI error, and the linear algorithm was better for photon interactions near the ends of the crystal while the logarithm algorithm was better near the center. These results provide useful guidance in DER DOI detector design in choosing the crystal surface finishing and DOI calculation methods. (paper)

  2. Rendezvous maneuvers using Genetic Algorithm

    International Nuclear Information System (INIS)

    Dos Santos, Denílson Paulo Souza; De Almeida Prado, Antônio F Bertachini; Teodoro, Anderson Rodrigo Barretto

    2013-01-01

    The present paper studies rendezvous orbital maneuvers, that is, orbital transfers in which a spacecraft has to change its orbit to meet another spacecraft travelling in a different orbit. The transfer is accomplished using multi-impulsive control. A genetic algorithm is used to find the transfers that have minimum fuel consumption.

  3. Parameter Estimation of Damped Compound Pendulum Using Bat Algorithm

    Directory of Open Access Journals (Sweden)

    Saad Mohd Sazli

    2016-01-01

    Full Text Available In this study, parameter identification of a damped compound pendulum system is proposed using one of the most promising nature-inspired algorithms, the Bat Algorithm (BA). The procedure used to achieve parameter identification of the experimental system consists of input-output data collection, ARX model order selection, and parameter estimation using the BA method. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the autoregressive with exogenous input (ARX) model. The performance of the model is validated using the mean squared error (MSE) between the actual and predicted output responses of the models. Finally, a comparative study is conducted between BA and a conventional estimation method (i.e., Least Squares). Based on the results obtained, the MSE produced by the Bat Algorithm (BA) outperformed that of the Least Squares (LS) method.

  4. An extension theory-based maximum power tracker using a particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang

    2014-01-01

    Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady state performances in the tracking process are improved. • The proposed MPPT can automatically tune the tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for a photovoltaic (PV) power generation system. Integrating extension theory with the conventional perturb-and-observe method, a maximum power point (MPP) tracker is made able to automatically tune its tracking step size by means of category recognition along a P–V characteristic curve. Accordingly, the transient and steady state performances of the tracking process are improved. Furthermore, an optimization approach based on a particle swarm optimization (PSO) algorithm is proposed to reduce the complexity of determining the weighting values. At the end of this work, the simulated improvement in tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller
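    A minimal PSO loop of the kind used to determine such weighting values is sketched below. The inertia and acceleration coefficients are common textbook defaults rather than values from the paper, and f stands for whatever tracking-error objective is being minimized:

        import numpy as np

        def pso(f, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=1.0):
            rng = np.random.default_rng(0)
            x = rng.uniform(lo, hi, (n, dim))              # particle positions
            v = np.zeros((n, dim))                         # particle velocities
            pbest, pval = x.copy(), np.array([f(p) for p in x])
            g = pbest[pval.argmin()].copy()                # global best position
            for _ in range(iters):
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                better = val < pval                        # update personal bests
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[pval.argmin()].copy()
            return g, pval.min()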

  5. Time-Delay System Identification Using Genetic Algorithm

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Seested, Glen Thane

    2013-01-01

    Due to the unknown dead-time coefficient, time-delay system identification turns out to be a non-convex optimization problem. This paper investigates the identification of a simple time-delay system, named First-Order-Plus-Dead-Time (FOPDT), by using the Genetic Algorithm (GA) technique.

  6. Using ADOPT Algorithm and Operational Data to Discover Precursors to Aviation Adverse Events

    Science.gov (United States)

    Janakiraman, Vijay; Matthews, Bryan; Oza, Nikunj

    2018-01-01

    The US National Airspace System (NAS) is making its transition to the NextGen system, and assuring safety is one of the top priorities in NextGen. At present, safety is managed reactively (corrected after the occurrence of an unsafe event). While this strategy works for current operations, it may soon become ineffective for future airspace designs and high-density operations. There is a need for proactive management of safety risks by identifying hidden and "unknown" risks and evaluating their impacts on future operations. To this end, NASA Ames has developed data mining algorithms that find anomalies and precursors (high-risk states) of safety issues in the NAS. In this paper, we describe a recently developed algorithm called ADOPT that analyzes large volumes of data and automatically identifies precursors from real-world data. Precursors help in detecting safety risks early so that the operator can mitigate the risk in time. In addition, precursors help identify causal factors and help predict the safety incident. The ADOPT algorithm scales well to large data sets and to multidimensional time series, reduces analyst time significantly, and quantifies multiple safety risks, giving a holistic view of safety, among other benefits. This paper details the algorithm and includes several case studies to demonstrate its application to discovering both "known" and "unknown" safety precursors in aviation operations.

  7. Video game for learning and metaphorization of recursive algorithms

    Directory of Open Access Journals (Sweden)

    Ricardo Inacio Alvares Silva

    2013-09-01

    Full Text Available The learning of recursive algorithms in computer programming is problematic, because their execution and resolution are not natural to the way of thinking people are trained in and used to from a young age. As with other topics in algorithms, metaphors are used to draw parallels between the abstract and the concrete and to help in understanding how recursive algorithms operate. However, the classic metaphors employed in this area, such as computing a factorial recursively or the Towers of Hanoi game, may only add confusion or prove insufficient. In this work, we produced a computer game to assist students in computing courses in learning recursive algorithms. It was designed to have the characteristics of a regular video game, with narrative and classical gameplay elements commonly found in this kind of product. The aid to learning occurs through metaphorization, in other words, through experiences provided by game situations that refer to recursive algorithms. To this end, we designed and embedded in the game four valid metaphors related to the theory, along with other minor references to the subject.

  8. Photon Counting Using Edge-Detection Algorithm

    Science.gov (United States)

    Gin, Jonathan W.; Nguyen, Danh H.; Farr, William H.

    2010-01-01

    -bit comparator, which digitizes the input referenced to an adjustable threshold value. This results in four independent serial sample streams of binary 1s and 0s, which are ORed together at rates up to 10 GHz. This single serial stream is then deserialized by a factor of 16 to create 16 signal lines at a rate of 622.5 MHz or lower for input to a high-speed digital processor assembly. The new design and corresponding hardware can be employed with a quad-photon-counting detector capable of handling photon rates on the order of multi-gigaphotons per second, whereas the prior art was only capable of handling a single input at one quarter of the flux rate. Additionally, the hardware edge-detection algorithm has provided the ability to process 3 to 10 times higher photon flux rates than previously possible by removing the requirement that photon-counting detector output pulses on multiple channels being ORed not overlap; now, only the leading edges of the pulses are required to not overlap. This new photon-counting digitizer hardware architecture supports a universal front end for an optical communications receiver operating at data rates from kilobits to over one gigabit per second to meet increased mission data volume requirements.

  9. Bernstein Algorithm for Vertical Normalization to 3NF Using Synthesis

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2013-07-01

    Full Text Available This paper demonstrates the use of the Bernstein algorithm for vertical normalization to 3NF using synthesis. The aim of the paper is to provide an algorithm for database normalization and to present a set of steps which minimize redundancy in order to increase database management efficiency, and to specify tests and algorithms for testing and proving reversibility (i.e., proving that the normalization did not cause loss of information). Using the steps of the Bernstein algorithm, the paper gives examples of vertical normalization to 3NF through synthesis and proposes a test and an algorithm to demonstrate the reversibility of the decomposition. The paper also explains that the reasons for generating normal forms are to facilitate data search and to eliminate data redundancy as well as delete, insert, and update anomalies, and it explains how anomalies develop, using examples.
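    The synthesis step at the core of the algorithm, grouping a minimal cover of functional dependencies by left-hand side and emitting one relation per group, can be sketched as follows. Computing the minimal cover, merging relations with equivalent keys, and ensuring a key relation exists are omitted here and assumed done beforehand:

        def synthesize_3nf(min_cover):
            # min_cover: {determinant (frozenset of attributes): set of dependent attributes}
            # returns one 3NF relation schema (attribute set) per determinant group
            return [set(lhs) | rhs for lhs, rhs in min_cover.items()]

        # e.g. synthesize_3nf({frozenset({"student", "course"}): {"grade"},
        #                      frozenset({"course"}): {"teacher"}})
        # -> [{"student", "course", "grade"}, {"course", "teacher"}]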

  10. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    Science.gov (United States)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that accurately reconstructs MR images from under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding-based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log-thresholding, soft-thresholding, and hard-thresholding techniques at different reduction factors.
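    The idea can be sketched as an ISTA-style iteration whose shrinkage step is replaced by a generalized p-thresholding rule. The shrinkage formula below is one common generalization (ordinary soft thresholding is recovered at p = 1) and is an assumption rather than the paper's exact operator; A stands for the undersampled measurement operator in matrix form:

        import numpy as np

        def p_threshold(x, t, p=0.5):
            # generalized shrinkage: reduces to soft thresholding when p = 1
            mag = np.maximum(np.abs(x), 1e-12)
            return np.sign(x) * np.maximum(mag - t * mag ** (p - 1.0), 0.0)

        def ista_p(A, b, t=0.01, p=0.5, iters=200):
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                x = x - (A.T @ (A @ x - b)) / L    # gradient step on the data term
                x = p_threshold(x, t / L, p)       # sparsity-promoting thresholding
            return x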

  11. End-point detection in potentiometric titration by continuous wavelet transform.

    Science.gov (United States)

    Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W

    2009-10-15

    The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or the type of analyte and/or the shape of the titration curve. Signal imperfections, as well as random noise or spikes, have no influence on the operation of the procedure. The optimization of the new algorithm was done using simulated curves, and then experimental data were considered. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms. But in the case of noisy or badly shaped curves, the presented approach works well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. Therefore, the proposed algorithm may be useful in the interpretation of experimental data and also in the automation of typical titration analysis, especially when random noise interferes with the analytical signal.
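    The mechanism can be illustrated with a stand-in wavelet: convolving the titration curve with an antisymmetric (step-edge) wavelet at several scales and taking the strongest summed response locates the inflection point. The Haar-like wavelet and the scale set below only approximate the dedicated mother wavelet the authors construct:

        import numpy as np

        def haar_like(width):
            # antisymmetric step-edge detector standing in for the dedicated wavelet
            w = np.ones(width)
            w[: width // 2] = -1.0
            return w / width

        def endpoint_index(emf, scales=(4, 8, 16, 32)):
            # sum absolute wavelet responses across scales; the end-point is the
            # sample where the multi-scale edge response peaks
            response = np.zeros(len(emf))
            for s in scales:
                response += np.abs(np.convolve(emf, haar_like(s), mode="same"))
            return int(np.argmax(response))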

  12. Genetic algorithms and their use in Geophysical Problems

    Energy Technology Data Exchange (ETDEWEB)

    Parker, Paul B. [Univ. of California, Berkeley, CA (United States)

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low mutation rate (about half of the inverse of the population size) is crucial for optimal results, but the choice of crossover method and rate does not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection method due to its simplicity and its autoscaling properties. However, if a proportional selection method is used, such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca Mountain using gravity data, the second an inversion for velocity structure in the crust of the South Island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver-function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems
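    The recommended settings translate directly into code. The sketch below implements one generation with tournament selection, one-point crossover, and a mutation rate near half the inverse of the population size; real-coded chromosomes, Gaussian mutation, an even population size, and a fitness to be maximized are assumptions made for brevity:

        import numpy as np

        def tournament(pop, fitness, rng, k=2):
            # pick k individuals at random and keep the fittest
            idx = rng.integers(0, len(pop), k)
            return pop[idx[np.argmax(fitness[idx])]].copy()

        def ga_generation(pop, fitness, rng, mut_rate=None):
            n, d = pop.shape                        # n assumed even, d > 1
            mut_rate = mut_rate if mut_rate is not None else 0.5 / n
            children = np.empty_like(pop)
            for i in range(0, n, 2):
                p1 = tournament(pop, fitness, rng)
                p2 = tournament(pop, fitness, rng)
                cut = rng.integers(1, d)            # one-point crossover
                children[i], children[i + 1] = p1, p2
                children[i, cut:], children[i + 1, cut:] = p2[cut:], p1[cut:]
            mutate = rng.random(children.shape) < mut_rate
            children[mutate] += rng.normal(0.0, 0.1, mutate.sum())
            return children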

  13. Disaggregated Imaging Spacecraft Constellation Optimization with a Genetic Algorithm

    Science.gov (United States)

    2014-03-27

    [Thesis front matter: Air Force Institute of Technology, Air University, Air Education and Training Command; submitted in partial fulfillment of the requirements for the degree.] Disaggregation distributes a monolithic spacecraft's capability across "distinct modules which, once 'assembled' on orbit, deliver the capability of the original monolithic system [5]." Jerry Sellers includes a comic in

  14. Using Alternative Multiplication Algorithms to "Offload" Cognition

    Science.gov (United States)

    Jazby, Dan; Pearn, Cath

    2015-01-01

    When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…

  15. Multivariate exploration of non-intrusive load monitoring via spatiotemporal pattern network

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chao; Akintayo, Adedotun; Jiang, Zhanhong; Henze, Gregor P.; Sarkar, Soumik

    2018-02-01

    Non-intrusive load monitoring (NILM) of electrical demand for the purpose of identifying load components has thus far mostly been studied using univariate data, e.g., using only whole-building electricity consumption time series to identify a certain type of end-use such as lighting load. However, using additional variables in the form of multivariate time series data may provide more information in terms of extracting distinguishable features in the context of energy disaggregation. In this work, a novel probabilistic graphical modeling approach, namely the spatiotemporal pattern network (STPN), is proposed for energy disaggregation using multivariate time-series data. The STPN framework is shown to be capable of handling diverse types of multivariate time series to improve energy disaggregation performance. The technique outperforms the state-of-the-art factorial hidden Markov model (FHMM) and combinatorial optimization (CO) techniques in multiple real-life test cases. Furthermore, based on two homes' aggregate electric consumption data, a similarity metric is defined for the energy disaggregation of one home using a model trained on the other home (i.e., the out-of-sample case). The proposed similarity metric allows us to enhance scalability via learning supervised models for a few homes and deploying such models to many other similar but unmodeled homes with significantly high disaggregation accuracy.
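    For contrast, the combinatorial optimization (CO) baseline that the STPN is compared against is compact to sketch: at each time step, choose the on/off combination of known appliance ratings whose summed power best matches the aggregate reading. The appliance names and wattages below are illustrative:

        import numpy as np
        from itertools import product

        def co_disaggregate(aggregate, appliance_watts):
            names = list(appliance_watts)
            rated = np.array([appliance_watts[n] for n in names])
            states = np.array(list(product([0, 1], repeat=len(names))))  # all 2^K on/off combos
            totals = states @ rated
            result = []
            for w in aggregate:
                best = states[np.argmin(np.abs(totals - w))]  # closest explanation of the reading
                result.append({n: bool(s) for n, s in zip(names, best)})
            return result

        # e.g. co_disaggregate([2150, 60], {"kettle": 2000, "lamp": 60, "fridge": 90})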

  16. ENHANCED PROVISIONING ALGORITHM FOR VIRTUAL PRIVATE NETWORK IN HOSE MODEL WITH QUALITY OF SERVICE SUPPORT USING WAXMAN MODEL

    Directory of Open Access Journals (Sweden)

    R. Ravi

    2011-03-01

    Full Text Available As Internet usage grows exponentially, network security issues become increasingly important. Network security measures are needed to protect data during transmission. Various security controls are used to prevent access by hackers to networks: firewalls, virtual private networks, and encryption algorithms. Of these, the virtual private network plays a vital role in preventing hackers from accessing networks. A Virtual Private Network (VPN) provides end users with a way to privately access information on their network over a public network infrastructure such as the internet. Using a technique called "tunneling", data packets are transmitted across a public routed network, such as the internet, in a way that simulates a point-to-point connection. Virtual private networks provide customers with a secure and low-cost communication environment. The basic structure of the virtual circuit is to create a logical path from the source port to the destination port. This path may incorporate many hops between routers for the formation of the circuit. The final logical path, or virtual circuit, acts in the same way as a direct connection between the two ports. The K-Cost Optimized Delay Satisfied Virtual Private Network Tree Provisioning Algorithm (KCDVT) connects VPN nodes using a tree structure and attempts to optimize the total bandwidth reserved on the edges of the VPN tree while satisfying the delay requirement. It also allows sharing of bandwidth on the links to improve performance. The proposed KCDVT algorithm computes the optimal VPN tree. The performance of the proposed algorithm is analyzed, in comparison with the Breadth First Search algorithm, in terms of cost, number of nodes, number of VPN nodes, delay, asymmetric ratio, and delay with constraints. The KCDVT performs better than the Breadth First Search algorithm.

  17. Development of Educational Support System for Algorithm using Flowchart

    Science.gov (United States)

    Ohchi, Masashi; Aoki, Noriyuki; Furukawa, Tatsuya; Takayama, Kanta

    Recently, information technology has become indispensable for business and industrial development. However, an insufficient number of software developers has become a social problem. To solve this problem, it is necessary to develop and implement an environment for learning algorithms and programming languages. In this paper, we describe an algorithm study support system for programmers using flowcharts. Since the proposed system uses a Graphical User Interface (GUI), it becomes easy for a programmer to understand the algorithm in programs.

  18. Analysis of Online DBA Algorithm with Adaptive Sleep Cycle in WDM EPON

    Science.gov (United States)

    Pajčin, Bojan; Matavulj, Petar; Radivojević, Mirjana

    2018-05-01

    In order to manage Quality of Service (QoS) and energy efficiency in the optical access network, an online Dynamic Bandwidth Allocation (DBA) algorithm with an adaptive sleep cycle is presented. This DBA algorithm can allocate additional bandwidth to the end user within a single sleep cycle, whose duration changes depending on the current buffer occupancy. The purpose of this DBA algorithm is to tune the duration of the sleep cycle to the network load, so as to serve the end user without violating strict QoS requirements under all network operating conditions.

  19. Video Segmentation Using Fast Marching and Region Growing Algorithms

    Directory of Open Access Journals (Sweden)

    Eftychis Sifakis

    2002-04-01

    Full Text Available The algorithm presented in this paper comprises three main stages: (1) classification of the image sequence and, in the case of a moving camera, parametric motion estimation; (2) change detection having as reference a fixed frame, an appropriately selected frame, or a displaced frame; and (3) object localization using local colour features. The image sequence classification is based on statistical tests on the frame difference. The change detection module uses a two-label fast marching algorithm. Finally, the object localization uses a region growing algorithm based on colour similarity. Video object segmentation results are shown using the COST 211 data set.

  20. Analysis on learning curves of end-use appliances for the establishment of price-sensitivity load model in competitive electricity market

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Sung Wook; Kim, Jung Hoon [Hongik University (Korea); Song, Kyung Bin [Keimyung University (Korea); Choi, Joon Young [Jeonju University (Korea)

    2001-07-01

    The change of electricity charges from cost-based to price-based, due to the introduction of electricity market competition, lets consumers choose among a variety of charge schemes and causes a portion of loads to be affected by this change. In addition, an index is required that captures the price volatility experienced on the power exchange under gaming and strategic bidding by suppliers seeking to increase profits. Therefore, in order to find a mathematical model of loads that respond sensitively to price, a price-sensitive load model is needed. Moreover, the development of state-of-the-art technologies affects the electricity price, so the diffusion of high-efficiency end-uses and their prices affect load patterns. This paper presents an analysis of learning-curve algorithms used to investigate the correlation between end-use prices and load patterns. (author). 6 refs., 4 figs., 4 tabs.

  1. A novel evaluation of two related and two independent algorithms for eye movement classification during reading.

    Science.gov (United States)

    Friedman, Lee; Rigas, Ioannis; Abdulin, Evgeny; Komogortsev, Oleg V

    2018-05-15

    Nyström and Holmqvist have published a method for the classification of eye movements during reading (ONH) (Nyström & Holmqvist, 2010). When we applied this algorithm to our data, the results were not satisfactory, so we modified the algorithm (now the MNH) to better classify our data. The changes included: (1) reducing the amount of signal filtering, (2) excluding a new type of noise, (3) removing several adaptive thresholds and replacing them with fixed thresholds, (4) changing the way that the start and end of each saccade was determined, (5) employing a new algorithm for detecting PSOs, and (6) allowing a fixation period to either begin or end with noise. A new method for the evaluation of classification algorithms is presented. It was designed to provide comprehensive feedback to an algorithm developer, in a time-efficient manner, about the types and numbers of classification errors that an algorithm produces. This evaluation was conducted by three expert raters independently, across 20 randomly chosen recordings, each classified by both algorithms. The MNH made many fewer errors in determining when saccades start and end, and it also detected some fixations and saccades that the ONH did not. The MNH fails to detect very small saccades. We also evaluated two additional algorithms: the EyeLink Parser and a more current, machine-learning-based algorithm. The EyeLink Parser tended to find more saccades that ended too early than did the other methods, and we found numerous problems with the output of the machine-learning-based algorithm.

  2. System engineering approach to GPM retrieval algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rose, C. R. (Chris R.); Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution, and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system, and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the SRT; it uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting-layer model based on stratified spheres. With the N0 and D0

  3. Conditions for the Occurrence of Slaking and Other Disaggregation Processes under Rainfall

    Directory of Open Access Journals (Sweden)

    Frédéric Darboux

    2016-07-01

    Full Text Available Under rainfall conditions, aggregates may suffer breakdown by different mechanisms. Slaking is a very efficient breakdown mechanism; however, its occurrence under rainfall conditions has not been demonstrated. Therefore, the aim of this study was to evaluate the occurrence of slaking under rain. Two soils with silt loam (SL) and clay loam (CL) textures were analyzed. Two classes of aggregates were utilized: 1–3 mm and 3–5 mm. The aggregates were submitted to stability tests and to high-intensity (90 mm·h⁻¹) and low-intensity (28 mm·h⁻¹) rainfalls with different kinetic energy impacts (large and small raindrops), using a rainfall simulator. The fragment size distributions were determined both after the stability tests and after the rainfall simulations, with the calculation of the mean weighted diameter (MWD). After the stability tests, the SL presented smaller MWDs than the CL for all stability tests. In both soils the lowest MWD was obtained using the fast wetting test, showing they were sensitive to slaking. For both soils and the two aggregate classes evaluated, the MWDs were recorded from the very beginning of the rainfall event under the four rainfall conditions. The occurrence of slaking in the evaluated soils was not verified under the simulated rainfall conditions studied. The early disaggregation was strongly related to the cumulative kinetic energy, pointing to the occurrence of mechanical breakdown. Because slaking requires a very high wetting rate on initially dry aggregates, it seems unlikely to occur under field conditions, except perhaps under furrow irrigation.

  4. STEGANOGRAPHY FOR TWO AND THREE LSBs USING EXTENDED SUBSTITUTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    R.S. Gutte

    2013-03-01

    Full Text Available The security of data on the internet has become a priority. Even if a message is encrypted using a strong cryptography algorithm, it cannot avoid the suspicion of an intruder. This paper proposes an approach in which data is encrypted using an Extended Substitution Algorithm and the resulting cipher text is then concealed at two or three LSB positions of the carrier image. The algorithm covers almost all types of symbols and alphabets. The encrypted text is concealed variably into the LSBs, making it a stronger approach. The visible characteristics of the carrier image before and after concealment remain almost the same. The algorithm has been implemented using Matlab.
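    The concealment step can be sketched independently of the cipher. Assuming the Extended Substitution Algorithm has already produced a bit stream (any cipher output works for the sketch, and this is NumPy rather than the paper's Matlab), embedding into the two (or three) least significant bits of a grayscale carrier looks like this:

        import numpy as np

        def embed_lsb(carrier, payload_bits, n_lsb=2):
            # pack the payload n_lsb bits at a time and write them into the
            # n_lsb least significant bits of successive pixels
            flat = carrier.astype(np.uint8).ravel().copy()
            bits = np.asarray(payload_bits, dtype=np.uint8)
            chunks = bits[: (len(bits) // n_lsb) * n_lsb].reshape(-1, n_lsb)
            values = chunks @ (1 << np.arange(n_lsb - 1, -1, -1))  # bits -> small ints
            mask = np.uint8((0xFF << n_lsb) & 0xFF)
            flat[: len(values)] &= mask                 # clear the target LSBs
            flat[: len(values)] |= values.astype(np.uint8)
            return flat.reshape(carrier.shape)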

  5. Optimization of Straight Cylindrical Turning Using Artificial Bee Colony (ABC) Algorithm

    Science.gov (United States)

    Prasanth, Rajanampalli Seshasai Srinivasa; Hans Raj, Kandikonda

    2017-04-01

    The artificial bee colony (ABC) algorithm, which mimics the intelligent foraging behavior of honey bees, is increasingly gaining acceptance in the field of process optimization, as it is capable of handling nonlinearity, complexity, and uncertainty. Straight cylindrical turning is a complex and nonlinear machining process which involves the selection of appropriate cutting parameters that affect the quality of the workpiece. This paper presents the estimation of optimal cutting parameters of the straight cylindrical turning process using the ABC algorithm. The ABC algorithm is first tested on four benchmark problems of numerical optimization, and its performance is compared with the genetic algorithm (GA) and the ant colony optimization (ACO) algorithm. Results indicate that the rate of convergence of the ABC algorithm is better than that of GA and ACO. Then, the ABC algorithm is used to predict optimal cutting parameters such as cutting speed, feed rate, depth of cut, and tool nose radius to achieve a good surface finish. Results indicate that the ABC algorithm estimated a comparable surface finish when compared with a real-coded genetic algorithm and a differential evolution algorithm.

  6. Analysis algorithm for digital data used in nuclear spectroscopy

    CERN Document Server

    AUTHOR|(CDS)2085950; Sin, Mihaela

    Data obtained from digital acquisition systems used in nuclear spectroscopy experiments must be converted by a dedicated algorithm in order to extract the physical quantities of interest. I report here the development of an algorithm capable of reading digital data, discriminating between random and true signals, and converting the results into a format readable by a special data analysis program package used to interpret nuclear spectra and to create coincidence matrices. The algorithm can be used in any nuclear spectroscopy experimental setup provided that digital acquisition modules are involved. In particular, it was used to treat data obtained from the IS441 experiment at ISOLDE, where the beta decay of 80Zn was investigated as part of ultra-fast timing studies of neutron-rich Zn nuclei. The results obtained for the half-lives of 80Zn and 80Ga were in very good agreement with previous measurements. This fact proved unquestionably that the conversion algorithm works. Another remarkable result was the improve...

  7. Pose estimation for augmented reality applications using genetic algorithm.

    Science.gov (United States)

    Yu, Ying Kin; Wong, Kin Hong; Chang, Michael Ming Yuen

    2005-12-01

    This paper describes a genetic algorithm that tackles the pose-estimation problem in computer vision. Our genetic algorithm can find the rotation and translation of an object accurately when the three-dimensional structure of the object is given. In our implementation, each chromosome encodes both the pose and the indexes of the selected point features of the object. Instead of only searching for the pose, as in existing work, our algorithm at the same time searches for a set containing the most reliable feature points in the process. This mismatch-filtering strategy successfully makes the algorithm more robust in the presence of point mismatches and outliers in the images. Our algorithm has been tested with both synthetic and real data with good results. The accuracy of the recovered pose is compared to that of existing algorithms. Our approach outperformed Lowe's method and the other two genetic algorithms in the presence of point mismatches and outliers. In addition, it has been used to estimate the pose of a real object, showing that the proposed method is applicable to augmented reality applications.

  8. Size and importance of small electrical end uses in households

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, J R; Zogg, R A; Alberino, D L

    1998-07-01

    Miscellaneous end uses (an energy-consumption category in the residential sector) have recently become more important than ever before. Miscellaneous end uses are a collection of numerous end uses (often unrelated in technology or market characteristics) that individually are small consumers but that, when grouped together, can become notable in size. The Annual Energy Outlook 1998, published by the Energy Information Administration (EIA), suggests that about 32% of residential electricity use in 1996 is attributable to miscellaneous end uses (21% from the Other Uses category and 11% from other miscellaneous categories). The EIA predicts this consumption will grow to about 47% of residential electricity use by 2010. Other studies have shown substantial consumption in this category, and forecast substantial future growth as well. However, it is not clear that the current accounting structure of the miscellaneous category is the most appropriate one, nor that the forecast growth in consumption will materialize. A bottom-up study of a collection of miscellaneous electric end uses was performed to better understand this complex, ill-defined category. Initial results show that many end uses can be categorized more appropriately, such as furnace fans, which belong in Space Heating. A recommended categorization reduces the Other Uses category from 21% to 12% of the electric consumption estimated for 1996. Thus, the consumption from miscellaneous end uses is not nearly as large as thought. Furthermore, the growth rate associated with small end uses is projected to be lower relative to projections from other sources.

  9. Parametric optimization of CNC end milling using entropy ...

    African Journals Online (AJOL)

    Parametric optimization of CNC end milling using entropy measurement technique combined with grey-Taguchi method. ... International Journal of Engineering, Science and Technology ... Keywords: CNC end milling, surface finish, material removal rate (MRR), entropy measurement technique, Taguchi method ...

  10. Using neural networks to speed up optimization algorithms

    CERN Document Server

    Bazan, M

    2000-01-01

    The paper presents the application of radial-basis-function (RBF) neural networks to speed up deterministic search algorithms used for the design and optimization of superconducting LHC magnets. The optimization of the iron yoke of the main dipoles requires a number of numerical field computations per trial solution as the field quality depends on the excitation of the magnets. This results in computation times of about 30 minutes for each objective function evaluation (on a DEC-Alpha 600/333) and only the most robust (deterministic) optimization algorithms can be applied. Using a RBF function approximator, the achieved speed-up of the search algorithm is in the order of 25% for problems with two parameters and about 18% for problems with three and five design variables. (13 refs).

  11. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
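    For reference, the fixed-support SAD cost that such architectures accelerate is compact to state in code; an edge-directed variant would evaluate the same cost only at edge pixels instead of at every pixel. Rectified grayscale inputs and a brute-force disparity search are assumed:

        import numpy as np

        def sad_disparity(left, right, max_disp=32, win=5):
            h, w = left.shape
            half = win // 2
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(half, h - half):
                for x in range(half + max_disp, w - half):
                    patch = left[y - half:y + half + 1,
                                 x - half:x + half + 1].astype(np.int32)
                    costs = [np.abs(patch - right[y - half:y + half + 1,
                                                  x - d - half:x - d + half + 1]
                                    .astype(np.int32)).sum()
                             for d in range(max_disp)]
                    disp[y, x] = int(np.argmin(costs))  # winner-takes-all match
            return disp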

  12. An accident diagnosis algorithm using long short-term memory

    Directory of Open Access Journals (Sweden)

    Jaemin Yang

    2018-05-01

    Full Text Available Accident diagnosis is one of the complex tasks for nuclear power plant (NPP) operators. In abnormal or emergency situations, the diagnostic activity of the NPP states is burdensome though necessary. Numerous computer-based methods and operator support systems have been suggested to address this problem. Among them, the recurrent neural network (RNN) has performed well at analyzing time series data. This study proposes an algorithm for accident diagnosis using long short-term memory (LSTM), a kind of RNN that overcomes the limitations of standard RNNs in reflecting long time dependencies. The algorithm consists of preprocessing, the LSTM network, and postprocessing. In the LSTM-based algorithm, preprocessed input variables are processed to output the accident diagnosis results. The outputs are also postprocessed using softmax to determine the ranking of accident diagnosis results with probabilities. This algorithm was trained using a compact nuclear simulator for several accidents: a loss of coolant accident, a steam generator tube rupture, and a main steam line break. The trained algorithm was also tested to demonstrate the feasibility of diagnosing NPP accidents. Keywords: Accident Diagnosis, Long Short-term Memory, Recurrent Neural Network, Softmax
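    A minimal PyTorch sketch of such a network: an LSTM over the preprocessed plant variables followed by a softmax head that ranks the candidate accident classes. The layer sizes, the number of input variables, and the mapping to three classes are illustrative, not the study's exact configuration:

        import torch
        import torch.nn as nn

        class DiagnosisLSTM(nn.Module):
            def __init__(self, n_vars=20, hidden=64, n_classes=3):
                super().__init__()
                self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)   # e.g. LOCA / SGTR / MSLB

            def forward(self, x):                  # x: (batch, time, n_vars)
                out, _ = self.lstm(x)
                logits = self.head(out[:, -1, :])  # last step summarizes the sequence
                return torch.softmax(logits, dim=-1)  # ranked diagnoses with probabilities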

  13. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and optimize directly the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than that of beamlet-based optimization because of the complex dependence of the dose on the field shapes, and their weights. In this work we report a genetic algorithm for segment-based optimization. Different from a gradient iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the position of the left-bank leaves of each segment, the second for the position of the right-bank and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that the genetic optimization of segment shapes and weights can produce highly conformal dose distribution. In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans that are comparable with the plans obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning

  14. Automatic Data Filter Customization Using a Genetic Algorithm

    Science.gov (United States)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can simply be filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data set and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds, outside of which all data are rejected, was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values, to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
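    The fitness of one chromosome of left/right thresholds can be sketched as below: a sounding is rejected if it falls outside any per-dimension window, and the score rewards rejecting soundings that would have failed while penalizing the loss of good ones. The 5:1 penalty weight is a hypothetical choice, not the paper's:

        import numpy as np

        def filter_fitness(thresholds, X, ok):
            # thresholds: (n_dims, 2) array of (left, right) limits per dimension
            # X: (n_samples, n_dims) sounding features; ok: True where retrieval succeeded
            lo, hi = thresholds[:, 0], thresholds[:, 1]
            rejected = np.any((X < lo) | (X > hi), axis=1)  # outside any window
            bad_tossed = np.sum(rejected & ~ok)             # useless runs removed (good)
            good_tossed = np.sum(rejected & ok)             # useful runs lost (bad)
            return bad_tossed - 5.0 * good_tossed           # hypothetical trade-off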

  15. Fuzzy cluster means algorithm for the diagnosis of confusable disease

    African Journals Online (AJOL)

    ... end platform while Microsoft Access was used as the database application. The system gives a measure of each disease within a set of confusable diseases. The proposed system had a classification accuracy of 60%. Keywords: Artificial Intelligence, expert system, fuzzy cluster-means algorithm, physician, diagnosis ...

  16. The Texas medication algorithm project: clinical results for schizophrenia.

    Science.gov (United States)

    Miller, Alexander L; Crismon, M Lynn; Rush, A John; Chiles, John; Kashner, T Michael; Toprac, Marcia; Carmody, Thomas; Biggs, Melanie; Shores-Wilson, Kathy; Chiles, Judith; Witte, Brad; Bow-Thomas, Christine; Velligan, Dawn I; Trivedi, Madhukar; Suppes, Trisha; Shon, Steven

    2004-01-01

    In the Texas Medication Algorithm Project (TMAP), patients were given algorithm-guided treatment (ALGO) or treatment as usual (TAU). The ALGO intervention included a clinical coordinator to assist the physicians and administer a patient and family education program. The primary comparison in the schizophrenia module of TMAP was between patients seen in clinics in which ALGO was used (n = 165) and patients seen in clinics in which no algorithms were used (n = 144). A third group of patients, seen in clinics using an algorithm for bipolar or major depressive disorder but not for schizophrenia, was also studied (n = 156). The ALGO group had modestly greater improvement in symptoms (Brief Psychiatric Rating Scale) during the first quarter of treatment. The TAU group caught up by the end of 12 months. Cognitive functions were more improved in ALGO than in TAU at 3 months, and this difference was greater at 9 months (the final cognitive assessment). In secondary comparisons of ALGO with the second TAU group, the greater improvement in cognitive functioning was again noted, but the initial symptom difference was not significant.

  17. A new technique for end-to-end ureterostomy in the rat, using an indwelling reabsorbable stent.

    Science.gov (United States)

    Carmignani, G; Farina, F P; De Stefani, S; Maffezzini, M

    1983-01-01

    The restoration of the continuity of the urinary tract represents one of the major problems in rat renal transplantation. End-to-end ureterostomy is the most physiologically effective technique; however, it involves noteworthy technical difficulties because of the extremely thin caliber of the ureter in the rat and the high incidence of postoperative hydronephrosis. We describe a new technique for end-to-end ureterostomy in the rat, in which the use of an absorbable ureteral stent is recommended. A 5-0 plain catgut thread is used as a stent. The anastomosis is performed under an operating microscope at ×25–40 magnification with interrupted sutures of 11-0 Vicryl. The use of the indwelling stent facilitates the performance of the anastomosis and yields optimal results. The macroscopic, radiological, and histological controls in a group of rats operated on with this technique showed a very high percentage of success with no complications, a result undoubtedly superior to that obtained with conventional methods.

  18. Workflow Scheduling Using Hybrid GA-PSO Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ahmad M. Manasrah

    2018-01-01

    Full Text Available Cloud computing environments provide several on-demand services and resource sharing for clients. Business processes are managed using workflow technology over the cloud, which represents one of the challenges in using resources in an efficient manner due to the dependencies between the tasks. In this paper, a hybrid GA-PSO algorithm is proposed to allocate tasks to the resources efficiently. The hybrid GA-PSO algorithm aims to reduce the makespan and the cost and to balance the load of the dependent tasks over the heterogeneous resources in cloud computing environments. The experimental results show that the GA-PSO algorithm decreases the total execution time of the workflow tasks in comparison with the GA, PSO, HSGA, WSGA, and MTCT algorithms. Furthermore, it reduces the execution cost. In addition, it improves the load balancing of the workflow application over the available resources. Finally, the obtained results also prove that the proposed algorithm converges to optimal solutions faster and with higher quality compared to the other algorithms.

  19. Thinning an object boundary on digital image using pipelined algorithm

    International Nuclear Information System (INIS)

    Dewanto, S.; Aliyanta, B.

    1997-01-01

    In digital image processing, a thinning process applied to an object boundary is required to analyze the image structure with measurements of parameters such as the area and circumference of the image object. The process needs a sufficiently large memory and is time consuming if all the image pixels are stored in memory and the subsequent processing is done only after all the pixels have been transformed. A pipelined algorithm can reduce the time used in the process. This algorithm uses a buffer memory whose size can be adjusted; the next thinning step does not need to wait for the transformation of all pixels. This paper describes the pipelined algorithm with some results of its use on digital images.
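
    A minimal sketch of the pipelining idea, assuming rows of a binary image arrive one at a time: a three-row buffer is enough to apply a local rule to the middle row, so processing overlaps with input and the whole image never has to be held in memory. The deletion rule here is a simple boundary peel for illustration only, not a connectivity-preserving thinning operator.

    from collections import deque

    def pipelined_thin(rows):
        # rows: iterable of equal-length lists of 0/1 pixels
        buf = deque(maxlen=3)
        for row in rows:
            buf.append(row)
            if len(buf) == 3:
                top, mid, bot = buf
                out = list(mid)
                for x in range(1, len(mid) - 1):
                    # Delete a foreground pixel whose 4-neighbourhood is
                    # not entirely foreground (i.e. a boundary pixel).
                    if mid[x] and not (top[x] and bot[x]
                                       and mid[x - 1] and mid[x + 1]):
                        out[x] = 0
                yield out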

  20. Evolving markets and new end use gas technologies

    International Nuclear Information System (INIS)

    Overall, J.

    1995-01-01

    End use gas technologies and products for residential, commercial, and industrial uses were reviewed, and the markets and market drivers needed for end use technologies in the different types of markets were summarized. The range of end use technologies included: gas fireplaces, combination heating/water heating systems, integrated appliances such as heating/ventilation units, gas cooling, and space cooling for commercial markets. The present and future status of each product market was discussed. Growing markets such as cogeneration and gas turbine technology also received attention, along with regulatory and environmental concerns. The need to be knowledgeable about current market drivers and to introduce new ones, and the evolution of technology, were emphasized as means by which the industry will continue to be able to exert a decisive influence on the direction of these markets.

  1. Computational algorithm for lifetime exposure to antimicrobials in pigs using register data − the LEA algorithm

    DEFF Research Database (Denmark)

    Birkegård, Anna Camilla; Dalhoff Andersen, Vibe; Hisham Beshara Halasa, Tariq

    2017-01-01

    Accurate and detailed data on antimicrobial exposure in pig production are essential when studying the association between antimicrobial exposure and antimicrobial resistance. Due to difficulties in obtaining primary data on antimicrobial exposure in a large number of farms, there is a need for a robust and valid method to estimate the exposure using register data. An approach that estimates the antimicrobial exposure in every rearing period during the lifetime of a pig using register data was developed into a computational algorithm. In this approach data from national registers on antimicrobial purchases, movements of pigs and farm demographics registered at farm level are used. The algorithm traces batches of pigs retrospectively from slaughter to the farm(s) that housed the pigs during their finisher, weaner, and piglet period. Subsequently, the algorithm estimates the antimicrobial exposure...

  2. Multi-objective optimization of in-situ bioremediation of groundwater using a hybrid metaheuristic technique based on differential evolution, genetic algorithms and simulated annealing

    Directory of Open Access Journals (Sweden)

    Kumar Deepak

    2015-12-01

    Full Text Available Groundwater contamination due to leakage of gasoline is one of the several causes which affect the groundwater environment by polluting it. In the past few years, in-situ bioremediation has attracted researchers because of its ability to remediate the contaminant at its site with a low cost of remediation. This paper proposes the use of a new hybrid algorithm to optimize a multi-objective function which includes the cost of remediation as the first objective and the residual contaminant at the end of the remediation period as the second objective. The hybrid algorithm was formed by combining the methods of Differential Evolution, Genetic Algorithms and Simulated Annealing. Support Vector Machines (SVM) were used as a virtual simulator for the biodegradation of contaminants in the groundwater flow. The results obtained from the hybrid algorithm were compared with Differential Evolution (DE), Non-Dominated Sorting Genetic Algorithm (NSGA-II) and Simulated Annealing (SA). It was found that the proposed hybrid algorithm was capable of providing the best solution. Fuzzy logic was used to find the best compromising solution and finally a pumping rate strategy for groundwater remediation was presented for the best compromising solution. The results show that the cost incurred for the best compromising solution is intermediate between the highest and lowest costs incurred for the other non-dominated solutions.
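
    The combination can be pictured as one evolutionary step per individual: a DE mutation, a GA-style crossover, then an SA acceptance test. The sketch below applies this to a generic scalarized two-objective function; the stand-in objective, the bounds, the equal weights and all constants are illustrative assumptions, and the SVM surrogate of the biodegradation model is not reproduced.

    import math
    import random

    def hybrid_step(pop, f, bounds, F=0.8, CR=0.9, T=1.0):
        # pop: list of candidate vectors (at least 4); f: scalar objective
        lo, hi = bounds
        new_pop = []
        for i, x in enumerate(pop):
            a, b, c = random.sample(
                [p for j, p in enumerate(pop) if j != i], 3)
            # DE/rand/1 mutation, clipped to the bounds
            v = [min(hi, max(lo, a[k] + F * (b[k] - c[k])))
                 for k in range(len(x))]
            # GA-style binomial crossover between target and mutant
            jr = random.randrange(len(x))
            u = [v[k] if (random.random() < CR or k == jr) else x[k]
                 for k in range(len(x))]
            # SA acceptance: keep a worse trial with probability exp(-d/T)
            d = f(u) - f(x)
            new_pop.append(u if d < 0 or random.random() < math.exp(-d / T)
                           else x)
        return new_pop

    def f(x):
        # Stand-in scalarization: 0.5*cost + 0.5*residual contaminant
        cost = sum(xi ** 2 for xi in x)
        residual = sum(abs(xi - 1.0) for xi in x)
        return 0.5 * cost + 0.5 * residual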

  3. Generating Daily Synthetic Landsat Imagery by Combining Landsat and MODIS Data.

    Science.gov (United States)

    Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao

    2015-09-18

    Owing to low temporal resolution and cloud interference, there is a shortage of high spatial resolution remote sensing data. To address this problem, this study introduces a modified spatial and temporal data fusion approach (MSTDFA) to generate daily synthetic Landsat imagery. This algorithm was designed to avoid the limitations of the conditional spatial temporal data fusion approach (STDFA), including the constant window for disaggregation and the sensor difference. An adaptive window size selection method is proposed in this study to select the best window size and moving steps for the disaggregation of coarse pixels. The linear regression method is used to remove the influence of differences between sensor systems using the disaggregated mean coarse reflectance, with testing and validation in two study areas located in Xinjiang Province, China. The results show that the MSTDFA algorithm can generate daily synthetic Landsat imagery with a high correlation coefficient (R) ranging from 0.646 to 0.986 between synthetic images and the actual observations. We further show that MSTDFA can be applied to 250 m 16-day MODIS MOD13Q1 products and Landsat Normalized Difference Vegetation Index (NDVI) data by generating a synthetic NDVI image highly similar to the actual Landsat NDVI observation, with a high R of 0.97.

  4. DNA Cryptography and Deep Learning using Genetic Algorithm with NW algorithm for Key Generation.

    Science.gov (United States)

    Kalsi, Shruti; Kaur, Harleen; Chang, Victor

    2017-12-05

    Cryptography is not only a science of applying complex mathematics and logic to design strong methods to hide data, called encryption, but also to retrieve the original data back, called decryption. The purpose of cryptography is to transmit a message between a sender and receiver such that an eavesdropper is unable to comprehend it. To accomplish this, we need not only a strong algorithm, but a strong key and a strong concept for the encryption and decryption process. We have introduced a concept of DNA Deep Learning Cryptography, which is defined as a technique of concealing data in terms of DNA sequences and deep learning. In the cryptographic technique, each letter of the alphabet is converted into a different combination of the four bases, namely Adenine (A), Cytosine (C), Guanine (G) and Thymine (T), which make up human deoxyribonucleic acid (DNA). Actual implementations with DNA do not exceed the laboratory level and are expensive. To bring DNA computing to a digital level, easy and effective algorithms are proposed in this paper. In the proposed work we introduce, first, a method and its implementation for key generation based on the theory of natural selection using a Genetic Algorithm with the Needleman-Wunsch (NW) algorithm and, second, a method for the implementation of encryption and decryption based on DNA computing using the biological operations of transcription, translation, DNA sequencing and deep learning.
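
    The base-encoding step can be illustrated with the common two-bit mapping 00→A, 01→C, 10→G, 11→T; the mapping is an assumption for illustration, and the GA/NW key generation and the transcription/translation layers of the proposed scheme are not reproduced here.

    BASE = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
    INV = {v: k for k, v in BASE.items()}

    def to_dna(text):
        # Pack UTF-8 bytes into bits, then map every 2 bits to one base.
        bits = ''.join(f'{byte:08b}' for byte in text.encode('utf-8'))
        return ''.join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def from_dna(seq):
        bits = ''.join(INV[base] for base in seq)
        data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
        return data.decode('utf-8')

    assert from_dna(to_dna('key')) == 'key'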

  5. Vehicle routing problem with time windows using natural inspired algorithms

    Science.gov (United States)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

    The process of distribution of goods needs a strategy to minimize the total cost spent on operational activities. However, several constraints have to be satisfied, namely the capacity of the vehicles and the service times of the customers. This Vehicle Routing Problem with Time Windows (VRPTW) poses a complex constrained problem. This paper proposes nature-inspired algorithms for dealing with the constraints of the VRPTW, involving the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm to achieve simpler and faster convergence. From the computational results, these algorithms give good performance in finding the minimized total distance. A higher population size leads to better computational performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of the Bat Algorithm and Simulated Annealing when dealing with big data.

  6. Developing an Enhanced Lightning Jump Algorithm for Operational Use

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Overall Goals: 1. Build on the lightning jump framework set through previous studies. 2. Understand what typically occurs in nonsevere convection with respect to increases in lightning. 3. Ultimately develop a lightning jump algorithm for use on the Geostationary Lightning Mapper (GLM). Four lightning jump algorithm configurations were developed (2σ, 3σ, Threshold 10 and Threshold 8). Five algorithms were tested on a population of 47 nonsevere and 38 severe thunderstorms. Results indicate that the 2σ algorithm performed best over the entire thunderstorm sample set, with a POD of 87%, a FAR of 35%, a CSI of 59% and an HSS of 75%.
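
    A minimal sketch of the 2σ configuration, assuming total flash rates sampled at a fixed interval: a jump is flagged when the latest rate of change exceeds the mean of the recent rate-of-change history by more than sigma_level standard deviations. The window length and all operational bookkeeping are assumptions.

    from statistics import mean, stdev

    def lightning_jump(flash_rates, sigma_level=2.0, history=5):
        # flash_rates: total flash rate per fixed time step, oldest first
        dfrdt = [b - a for a, b in zip(flash_rates, flash_rates[1:])]
        if len(dfrdt) < history + 1:
            return False                      # not enough history yet
        past, current = dfrdt[-(history + 1):-1], dfrdt[-1]
        return current > mean(past) + sigma_level * stdev(past)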

  7. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition, including happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, which are a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.

  8. Measuring the energy intensity of domestic activities from smart meter data

    International Nuclear Information System (INIS)

    Stankovic, L.; Stankovic, V.; Liao, J.; Wilson, C.

    2016-01-01

    Highlights: • Innovative method linking appliance usage and energy use with domestic activities. • Inferring the energy and time use profile of activities based on smart meter data. • Standardised metrics quantifying energy intensity + temporal routines of activities. • Insights from analysing electricity consumption through the lens of activities. - Abstract: Household electricity consumption can be broken down into appliance end-use through a variety of methods such as modelling, sub-metering, load disaggregation or non-intrusive appliance load monitoring (NILM). We advance and complement this important field of energy research through an innovative methodology that characterises the energy consumption of domestic life by making the linkages between appliance end-use and activities through an ontology built from qualitative data about the household and NILM data. We use activities as a descriptive term for the common ways households spend their time at home. These activities, such as cooking or laundering, are meaningful to households’ own lived experience. Thus, besides strictly technical algorithmic approaches for processing quantitative smart meter data, we also draw on social science time use approaches and interview and ethnography data. Our method disaggregates a household's total electricity load down to appliance level and provides the start time, duration, and total electricity consumption for each occurrence of appliance usage. We then make inferences about activities occurring in the home by combining these disaggregated data with an ontology that formally specifies the relationships between electricity-using appliances and activities. We also propose two novel standardised metrics to enable easy quantifiable comparison within and across households of the energy intensity and routine of activities of interest. Finally, we demonstrate our results over a sample of ten households with an in-depth analysis of which activities can be inferred with
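
    A minimal sketch of the inference step, assuming a hand-written appliance-to-activity ontology and NILM output given as (appliance, start hour, duration in minutes, kWh) tuples; the appliance names and mappings here are illustrative assumptions, whereas the paper builds its ontology from qualitative household data.

    ONTOLOGY = {
        'kettle': 'cooking', 'oven': 'cooking', 'microwave': 'cooking',
        'washing_machine': 'laundering', 'tumble_dryer': 'laundering',
        'tv': 'entertainment',
    }

    def activities(events):
        # Aggregate appliance occurrences and energy per inferred activity.
        summary = {}
        for appliance, start, duration, kwh in events:
            activity = ONTOLOGY.get(appliance)
            if activity is None:
                continue                      # appliance not in the ontology
            entry = summary.setdefault(activity, {'occurrences': 0, 'kwh': 0.0})
            entry['occurrences'] += 1
            entry['kwh'] += kwh
        return summary

    print(activities([('kettle', 7, 3, 0.1), ('tv', 20, 120, 0.25)]))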

  9. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    Science.gov (United States)

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    Cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific applications task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.

  10. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    Science.gov (United States)

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    Cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific applications task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  11. Optimization Shape of Variable Capacitance Micromotor Using Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    A. Ketabi

    2010-01-01

    Full Text Available A new method for the optimum shape design of a variable capacitance micromotor (VCM) using Differential Evolution (DE), a stochastic search algorithm, is presented. In this optimization exercise, the objective function aims to maximize the torque value and minimize the torque ripple, where the geometric parameters are considered to be the variables. The optimization process is carried out using a combination of the DE algorithm and FEM analysis. The fitness value is calculated by FEM analysis using COMSOL 3.4, and the DE algorithm is realized in MATLAB 7.4. The proposed method is applied to a VCM with 8 poles at the stator and 6 poles at the rotor. The results show that the micromotor optimized using the DE algorithm had a higher torque value and lower torque ripple, indicating the validity of this methodology for VCM design.
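
    A minimal DE/rand/1/bin loop of the kind the paper uses, with the FEM torque evaluation replaced by a generic fitness callable; in the paper every fitness value comes from a COMSOL field solution of the candidate geometry, and all parameter values here (np_, F, CR) are illustrative assumptions.

    import random

    def de_optimize(fitness, dim, bounds, np_=20, F=0.5, CR=0.9, gens=200):
        # np_ must be at least 4 so that three distinct donors exist
        lo, hi = bounds
        pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
        cost = [fitness(x) for x in pop]
        for _ in range(gens):
            for i in range(np_):
                a, b, c = random.sample(
                    [x for j, x in enumerate(pop) if j != i], 3)
                jr = random.randrange(dim)
                # Mutation a + F*(b - c), binomial crossover with the target
                trial = [min(hi, max(lo, a[k] + F * (b[k] - c[k])))
                         if (random.random() < CR or k == jr) else pop[i][k]
                         for k in range(dim)]
                tc = fitness(trial)
                if tc <= cost[i]:                 # greedy selection
                    pop[i], cost[i] = trial, tc
        best = min(range(np_), key=lambda i: cost[i])
        return pop[best], cost[best]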

  12. Optimal Intermittent Dose Schedules for Chemotherapy Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Nadia ALAM

    2013-08-01

    Full Text Available In this paper, a design method for optimal cancer chemotherapy schedules via a genetic algorithm (GA) is presented. The design targets the key objective of chemotherapy, to minimize the size of the cancer tumor after a predefined time while keeping toxic side effects within limits. This is a difficult target to achieve using conventional clinical methods due to the poor therapeutic indices of existing anti-cancer drugs. Moreover, there are clinical limitations in treatment administration to maintain continuous treatment, and carefully decided rest periods are recommended for the patient's comfort. Three intermittent drug scheduling schemes are presented in this paper, where the GA is used to optimize the dose quantities and timings while satisfying several treatment constraints. All three schemes are found to be effective in the total elimination of the cancer tumor after an agreed treatment length. The number of cancer cells is found to be zero at the end of the treatment for all three cases, with tolerable toxicity. Finally, two of the schemes, “Fixed interval variable dose” (FIVD) and “Periodic dose”, which are periodic in character, have been emphasized due to their additional simplicity of administration, along with friendliness to patients and favorable responses to the designed treatment schedules. Therefore the proposed design method is capable of planning effective, simple, patient-friendly and acceptable chemotherapy schedules.

  13. Strength Pareto Evolutionary Algorithm using Self-Organizing Data Analysis Techniques

    Directory of Open Access Journals (Sweden)

    Ionut Balan

    2015-03-01

    Full Text Available Multiobjective optimization is widely used in solving problems from a variety of areas. To solve such problems, a set of algorithms has been developed, most of them based on evolutionary techniques. One of the algorithms in this class which gives quite good results is SPEA2, the method on which the algorithm proposed in this paper is based. The results in this paper are obtained by running these two algorithms on a flow-shop problem.

  14. Environmental benefits of electrification and end-use efficiency

    International Nuclear Information System (INIS)

    McMenamin, J.S.; Monforte, F.A.; Sioshansi, F.P.

    1997-01-01

    Significant reductions in greenhouse gases and criteria pollutants can be achieved through continued substitution of clean, efficient electrotechnologies for fossil fuel-based technologies. Continued improvements in the efficiency of electrical appliances already in use will further increase the environmental benefits of electricity. Over the last several decades, electricity use in the US has grown strongly. Over the 35-year period 1960-95, electric utility sales increased more than fourfold, from under 700 billion kWh (BkWh) to almost 3,000 BkWh. This increase was due, in part, to a growing economy, but it also reflects the increasingly broad application of electricity to provide comfort, convenience, entertainment, safety and productivity. Reflecting this expanding role, energy used for electricity generation by utilities has nearly doubled, increasing from 19 percent of US primary energy use in 1960 to about 36 percent in 1995. Environmental factors have also provided support to policies that promote improved end-use efficiency. More efficient end-use equipment allows consumers to obtain the same level of end-use services with less electricity. Reduced electricity consumption levels imply reduced generation requirements and therefore, lower levels of emissions associated with generation. Beginning in the mid-1970s, and stimulated by abrupt increases in fossil fuel prices, both government and utility policies began to emphasize end-use efficiency.

  15. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  16. Implementation of trigonometric function using CORDIC algorithms

    Science.gov (United States)

    Mokhtar, A. S. N.; Ayub, M. I.; Ismail, N.; Daud, N. G. Nik

    2018-02-01

    In 1959, Jack E. Volder presented a brand new formula for the real-time solution of the equations arising in navigation systems. This new algorithm was a most beneficial replacement of analog navigation systems by digital ones. The CORDIC (Coordinate Rotation Digital Computer) algorithm is used for the rapid calculation of elementary functions such as trigonometric functions, multiplication, division and logarithms, and also for various conversions such as the conversion from rectangular to polar coordinates and conversions between binary-coded information. At the present time the CORDIC algorithm has many applications in the fields of communication, signal processing, 3-D graphics, and others. This paper presents the implementation of trigonometric functions using the CORDIC algorithm in rotation mode for the circular coordinate system. The CORDIC technique is used to generate output angles in the range 0° to 90°, and error analysis is a concern. The results showed that the average percentage error is about 0.042% for angles between 0° and 90°, but the average percentage error rises to about 45% at angles of 90° and above. So, this method is very accurate in the first quadrant. The mirror-properties method is used to find angles in the second, third and fourth quadrants.
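
    A minimal floating-point sketch of rotation-mode CORDIC for the circular coordinate system: each iteration needs only a shift-and-add style update, with the arctangent table and the gain K precomputed (a hardware version would use fixed-point words). Convergence holds for inputs within roughly ±90°, which is why the mirror properties are needed for the other quadrants.

    import math

    N = 16                                       # number of iterations
    ATAN = [math.atan(2.0 ** -i) for i in range(N)]
    K = 1.0                                      # inverse CORDIC gain
    for i in range(N):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    def cordic_sin_cos(theta):
        # theta in radians, |theta| <= pi/2
        x, y, z = K, 0.0, theta
        for i in range(N):
            d = 1.0 if z >= 0 else -1.0          # rotation direction
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * ATAN[i]
        return y, x                              # (sin(theta), cos(theta))

    s, c = cordic_sin_cos(math.radians(30.0))    # approx. 0.5 and 0.866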

  17. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    Science.gov (United States)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead to the microseismic event locations from raw 3C data. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for usual 2D and 3D scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.

  18. Review of the convolution algorithm for evaluating service integrated systems

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk

    1997-01-01

    In this paper we give a review of the applicability of the convolution algorithm. By this we are able to evaluate communication networks end-to-end with e.g. BPP multi-rate traffic models insensitive to the holding time distribution. Rearrangement, minimum allocation, and maximum allocation...

  19. A TLD dose algorithm using artificial neural networks

    International Nuclear Information System (INIS)

    Moscovitch, M.; Rotunda, J.E.; Tawil, R.A.; Rathbone, B.A.

    1995-01-01

    An artificial neural network was designed and used to develop a dose algorithm for a multi-element thermoluminescence dosimeter (TLD). The neural network architecture is based on the concept of a functional link network (FLN). A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on neural networks is fundamentally different from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with given responses of a multi-element dosimeter (input) many times. The algorithm, being trained that way, is eventually capable of producing its own unique solution to similar (but not exactly the same) dose calculation problems. For personal dosimetry, the output consists of the desired dose components: deep dose, shallow dose and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. The neural network approach was applied to the Harshaw Type 8825 TLD and was shown to significantly improve the performance of this dosimeter, well within the U.S. accreditation requirements for personnel dosimeters.

  20. An integral conservative gridding-algorithm using Hermitian curve interpolation.

    Science.gov (United States)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to

  1. An integral conservative gridding-algorithm using Hermitian curve interpolation

    International Nuclear Information System (INIS)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-01-01

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to

  2. Visualization for Hyper-Heuristics. Front-End Graphical User Interface

    Energy Technology Data Exchange (ETDEWEB)

    Kroenung, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-03-01

    Modern society is faced with ever more complex problems, many of which can be formulated as generate-and-test optimization problems. General-purpose optimization algorithms are not well suited for real-world scenarios where many instances of the same problem class need to be repeatedly and efficiently solved because they are not targeted to a particular scenario. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario. While such automated design has great advantages, it can often be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues of usability by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics to support practitioners, as well as scientific visualization of the produced automated designs. My contributions to this project are exhibited in the user-facing portion of the developed system and the detailed scientific visualizations created from back-end data.

  3. Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.

    Science.gov (United States)

    Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S

    2013-01-01

    The use of Geographic Information Systems has increased considerably since the eighties and nineties. As one of their most demanding applications we can mention shortest path search. Several studies about shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms. This algorithm is not well suited for shortest path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors using heuristics to reduce the run time of shortest path search. One of the most used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm in reduced graphs. It shows that the cost of the path found in this work is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path, applying the proposed algorithm, Dijkstra's algorithm and the A* algorithm, are compared. This comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms.
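
    For reference, the baseline the article modifies is Dijkstra's algorithm with a priority queue; the sketch below returns the cost and the path on an adjacency-list graph. The reduced-graph construction itself (which vertices get collapsed) is not reproduced here.

    import heapq

    def dijkstra(graph, source, target):
        # graph: {node: [(neighbour, edge_cost), ...]} with costs >= 0
        dist = {source: 0.0}
        prev = {}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == target:
                break
            if d > dist.get(u, float('inf')):
                continue  # stale queue entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [], target
        while node == source or node in prev:
            path.append(node)
            if node == source:
                break
            node = prev[node]
        return dist.get(target, float('inf')), path[::-1]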

  4. Otsu Based Optimal Multilevel Image Thresholding Using Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    N. Sri Madhava Raja

    2014-01-01

    Full Text Available A histogram-based multilevel thresholding approach is proposed using a Brownian distribution (BD) guided firefly algorithm (FA). A bounded search technique is also presented to improve the optimization accuracy with fewer search iterations. Otsu's between-class variance function is maximized to obtain optimal threshold levels for gray scale images. The performance of the proposed algorithm is demonstrated by considering twelve benchmark images and is compared with existing FA algorithms such as Lévy flight (LF) guided FA and random operator guided FA. The performance assessment comparison between the proposed and existing firefly algorithms is carried out using prevailing parameters such as the objective function, standard deviation, peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, and CPU search time. The results show that BD guided FA provides a better objective function, PSNR, and SSIM, whereas LF based FA provides faster convergence with relatively lower CPU time.
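
    The quantity the firefly search maximizes is Otsu's between-class variance evaluated at a candidate set of thresholds. A minimal sketch of that objective for a 256-bin grey-level histogram follows (the FA search loop itself is omitted; thresholds are assumed to lie in 1..255).

    def between_class_variance(hist, thresholds):
        # hist: list of 256 pixel counts; thresholds: grey levels in 1..255
        total = sum(hist)
        bounds = [0] + sorted(thresholds) + [256]
        mu_total = sum(i * h for i, h in enumerate(hist)) / total
        var = 0.0
        for lo, hi in zip(bounds, bounds[1:]):
            w = sum(hist[lo:hi]) / total              # class probability
            if w == 0:
                continue
            mu = sum(i * hist[i] for i in range(lo, hi)) / (w * total)
            var += w * (mu - mu_total) ** 2           # between-class term
        return var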

  5. Algorithms

    Indian Academy of Sciences (India)

    Algorithms for (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language...

  6. Energy efficient data sorting using standard sorting algorithms

    KAUST Repository

    Bunse, Christian; Hö pfner, Hagen; Roychoudhury, Suman; Mansour, Essam

    2011-01-01

    Protecting the environment by saving energy and thus reducing carbon dioxide emissions is one of today's hottest and most challenging topics. Although the perspective for reducing energy consumption from ecological and business perspectives is clear, from a technological point of view the realization, especially for mobile systems, still falls behind expectations. Novel strategies that allow (software) systems to dynamically adapt themselves at runtime can be effectively used to reduce energy consumption. This paper presents a case study that examines the impact of using an energy management component that dynamically selects and applies the "optimal" sorting algorithm, from an energy perspective, during multi-party mobile communication. Interestingly, the results indicate that algorithmic performance is not key and that dynamically switching algorithms at runtime does have a significant impact on energy consumption. © Springer-Verlag Berlin Heidelberg 2011.

  7. PWR loading pattern optimization using Harmony Search algorithm

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.

    2013-01-01

    Highlights: ► Numerical results reveal that the HS method is reliable. ► The great advantage of HS is a significant gain in computational cost. ► On average, the final band width of search fitness values is narrow. ► Our experiments show that the search approaches the optimal value fast. - Abstract: In this paper a core reloading technique using Harmony Search, HS, is presented in the context of finding an optimal configuration of fuel assemblies, FA, in pressurized water reactors. To implement and evaluate the proposed technique, a Harmony Search along Nodal Expansion Code for 2-D geometry, HSNEC2D, is developed to obtain a nearly optimal arrangement of fuel assemblies in PWR cores. This code consists of two sections, including the Harmony Search algorithm and Nodal Expansion modules using fourth-degree flux expansion, which solve two-dimensional multi-group diffusion equations with one node per fuel assembly. Two optimization test problems are investigated to demonstrate the HS algorithm's capability of converging to a near optimal loading pattern in the fuel management field and other subjects. Results, convergence rate and reliability of the method are quite promising and show that the HS algorithm performs very well and is comparable to other competitive algorithms such as the Genetic Algorithm and Particle Swarm Intelligence. Furthermore, implementation of the nodal expansion technique along with HS considerably reduces the computational time needed to process and analyze optimization in core fuel management problems.
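
    A minimal continuous Harmony Search loop is sketched below with a placeholder fitness; HSNEC2D instead couples a discrete loading-pattern encoding to the nodal-expansion core solver, so treat this purely as an illustration of the HS mechanics. Parameter names (hms, hmcr, par, bw) follow common HS usage and all values are assumptions.

    import random

    def harmony_search(fitness, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                       bw=0.05, iters=1000):
        lo, hi = bounds
        memory = [[random.uniform(lo, hi) for _ in range(dim)]
                  for _ in range(hms)]
        scores = [fitness(h) for h in memory]
        for _ in range(iters):
            new = []
            for k in range(dim):
                if random.random() < hmcr:
                    v = random.choice(memory)[k]      # memory consideration
                    if random.random() < par:
                        v += random.uniform(-bw, bw)  # pitch adjustment
                else:
                    v = random.uniform(lo, hi)        # random selection
                new.append(min(hi, max(lo, v)))
            s = fitness(new)
            worst = max(range(hms), key=lambda i: scores[i])
            if s < scores[worst]:                     # replace worst harmony
                memory[worst], scores[worst] = new, s
        best = min(range(hms), key=lambda i: scores[i])
        return memory[best], scores[best]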

  8. Node-Dependence-Based Dynamic Incentive Algorithm in Opportunistic Networks

    Directory of Open Access Journals (Sweden)

    Ruiyun Yu

    2014-01-01

    Full Text Available Opportunistic networks lack end-to-end paths between source nodes and destination nodes, so communications are mainly carried out by the “store-carry-forward” strategy. Selfish behaviors of rejecting packet relay requests will severely worsen the network performance. Incentives are an efficient way to reduce selfish behaviors and hence improve the reliability and robustness of the networks. In this paper, we propose the node-dependence-based dynamic gaming incentive (NDI) algorithm, which exploits dynamic repeated gaming to motivate nodes to relay packets for other nodes. The NDI algorithm presents a mechanism for tolerating selfish behaviors of nodes. Reward and punishment methods are also designed based on the node dependence degree. Simulation results show that the NDI algorithm is effective in increasing the delivery ratio and decreasing average latency when there are a lot of selfish nodes in the opportunistic networks.

  9. A new reconstruction algorithm for use with capacitance-based tomography

    Directory of Open Access Journals (Sweden)

    Ø. Isaksen

    1994-01-01

    Full Text Available A new reconstruction algorithm for use with capacitance-based process tomography is proposed. A numerical simulator, capable of calculating the capacitances for a particular sensor configuration and flow regime, is used together with a parameter representation of the dielectric distribution and an optimization algorithm. The algorithm calculates these parameters, and hence the dielectric distribution, by minimizing a function defined as a weighted sum of squared differences between the measured and estimated capacitances. The method is tested using both synthetic and experimental data, and the results are compared with results from the commonly used Linear Back Projection (LBP) algorithm. The method is capable of obtaining the correct parameter values for all the flow regimes tested, and provides a better estimate than the LBP method. The method proves to be very promising, and is a step towards quantitative capacitance tomography.

  10. An efficient community detection algorithm using greedy surprise maximization

    International Nuclear Information System (INIS)

    Jiang, Yawen; Jia, Caiyan; Yu, Jian

    2014-01-01

    Community detection is an important and crucial problem in complex network analysis. Although classical modularity function optimization approaches are widely used for identifying communities, the modularity function (Q) suffers from its resolution limit. Recently, the surprise function (S) was experimentally proved to be better than the Q function. However, up until now, there has been no algorithm available to perform searches to directly determine the maximal surprise values. In this paper, considering the superiority of the S function over the Q function, we propose an efficient community detection algorithm called AGSO (algorithm based on greedy surprise optimization) and its improved version FAGSO (fast-AGSO), which are based on greedy surprise optimization and do not suffer from the resolution limit. In addition, (F)AGSO does not need the number of communities K to be specified in advance. Tests on experimental networks show that (F)AGSO is able to detect optimal partitions in both simple and even more complex networks. Moreover, algorithms based on surprise maximization perform better than those algorithms based on modularity maximization, including Blondel–Guillaume–Lambiotte–Lefebvre (BGLL), Clauset–Newman–Moore (CNM) and the other state-of-the-art algorithms such as Infomap, order statistics local optimization method (OSLOM) and label propagation algorithm (LPA). (paper)

  11. Healthcare Energy End-Use Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Sheppy, M.; Pless, S.; Kung, F.

    2014-08-01

    NREL partnered with two hospitals (MGH and SUNY UMU) to collect data on the energy used for multiple thermal and electrical end-use categories, including preheat, heating, and reheat; humidification; service water heating; cooling; fans; pumps; lighting; and select plug and process loads. Additional data from medical office buildings were provided for an analysis focused on plug loads. Facility managers, energy managers, and engineers in the healthcare sector will be able to use these results to more effectively prioritize and refine the scope of investments in new metering and energy audits.

  12. A practical guide to data structures and algorithms using Java

    CERN Document Server

    Goldman, Sally A

    2007-01-01

    Although traditional texts present isolated algorithms and data structures, they do not provide a unifying structure and offer little guidance on how to appropriately select among them. Furthermore, these texts furnish little, if any, source code and leave many of the more difficult aspects of the implementation as exercises. A fresh alternative to conventional data structures and algorithms books, A Practical Guide to Data Structures and Algorithms using Java presents comprehensive coverage of fundamental data structures and algorithms in a unifying framework with full implementation details.

  13. Image Encryption Using a Lightweight Stream Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Saeed Bahrami

    2012-01-01

    Full Text Available Security of multimedia data, including images and video, is one of the basic requirements for telecommunications and computer networks. In this paper, we consider a simple and lightweight stream encryption algorithm for image encryption, and a series of tests is performed to confirm the suitability of the described encryption algorithm. These tests include a visual test, histogram analysis, information entropy, encryption quality, correlation analysis, differential analysis, and performance analysis. Based on this analysis, it can be concluded that the present algorithm, in comparison to the A5/1 and W7 stream ciphers, has the same security level, is better in terms of speed of performance, and can be used for real-time applications.

  14. Design optimization and analysis of selected thermal devices using self-adaptive Jaya algorithm

    International Nuclear Information System (INIS)

    Rao, R.V.; More, K.C.

    2017-01-01

    Highlights: • A self-adaptive Jaya algorithm is proposed for the optimal design of thermal devices. • Optimization of a heat pipe, cooling tower, heat sink and thermo-acoustic prime mover is presented. • Results of the proposed algorithm are better than those of the other optimization techniques. • The proposed algorithm may be conveniently used for the optimization of other devices. - Abstract: The present study explores the use of an improved Jaya algorithm called the self-adaptive Jaya algorithm for the optimal design of selected thermal devices, viz. heat pipe, cooling tower, honeycomb heat sink and thermo-acoustic prime mover. Four different optimization case studies of the selected thermal devices are presented. Researchers had attempted the same design problems in the past using the niched Pareto genetic algorithm (NPGA), response surface method (RSM), leap-frog optimization program with constraints (LFOPC) algorithm, teaching-learning based optimization (TLBO) algorithm, grenade explosion method (GEM) and multi-objective genetic algorithm (MOGA). The results achieved by using the self-adaptive Jaya algorithm are compared with those achieved by using the NPGA, RSM, LFOPC, TLBO, GEM and MOGA algorithms. The self-adaptive Jaya algorithm proves superior to the other optimization methods in terms of the results, computational effort and function evaluations.
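
    The core Jaya update uses no algorithm-specific tuning parameters: every candidate moves toward the current best solution and away from the current worst. A minimal sketch of one such move follows; the self-adaptive variant of the paper additionally adapts the population size, which is not shown, and the bounds are an assumption.

    import random

    def jaya_move(x, best, worst, bounds):
        # x, best, worst: candidate vectors; bounds = (lower, upper)
        lo, hi = bounds
        new = []
        for k in range(len(x)):
            r1, r2 = random.random(), random.random()
            v = x[k] + r1 * (best[k] - abs(x[k])) - r2 * (worst[k] - abs(x[k]))
            new.append(min(hi, max(lo, v)))
        return new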

  15. Solid-state personal dosimeter using dose conversion algorithm

    International Nuclear Information System (INIS)

    Lee, B.J.; Lee, Wanno; Cho, Gyuseong; Chang, S.Y.; Rho, S.R.

    2003-01-01

    Solid-state personal dosimeters using semiconductor detectors have been widely used because of their simplicity and real-time operation. In this paper, a personal dosimeter based on a silicon PIN photodiode has been optimally designed by the Monte Carlo method and also developed. For the performance test, the developed dosimeter was irradiated within the energy range between 50 keV and 1.25 MeV and at exposure dose rates between 3 mR/h and 25 R/h. A thickness of 0.2 mm Cu and 1.0 mm Al was selected as the optimal filter based on the simulation results. To minimize the nonlinear energy dependence of the sensitivity, a dose conversion algorithm was presented, which is able to consider the pulse number as well as the pulse amplitudes related to the absorbed energies. The sensitivities of dosimeters developed with the proposed algorithm and with the conventional method were compared and analyzed in detail. When the dose conversion algorithm was used, the linearity of the sensitivity was better by about 38%. This dosimeter will be used above 65 keV within a relative response of ±10% with respect to 137 Cs.

  16. Reasoning about Grover's Quantum Search Algorithm using Probabilistic wp

    NARCIS (Netherlands)

    Butler, M.J.; Hartel, Pieter H.

    Grover's search algorithm is designed to be executed on a quantum mechanical computer. In this paper, the probabilistic wp-calculus is used to model and reason about Grover's algorithm. It is demonstrated that the calculus provides a rigorous programming notation for modelling this and other quantum algorithms.

  17. Models and algorithms for biomolecules and molecular networks

    CERN Document Server

    DasGupta, Bhaskar

    2016-01-01

    By providing expositions to modeling principles, theories, computational solutions, and open problems, this reference presents a full scope on relevant biological phenomena, modeling frameworks, technical challenges, and algorithms. * Up-to-date developments of structures of biomolecules, systems biology, advanced models, and algorithms * Sampling techniques for estimating evolutionary rates and generating molecular structures * Accurate computation of probability landscape of stochastic networks, solving discrete chemical master equations * End-of-chapter exercises

  18. Developing robust arsenic awareness prediction models using machine learning algorithms.

    Science.gov (United States)

    Singh, Sushant K; Taylor, Robert W; Rahman, Mohammad Mahmudur; Pradhan, Biswajeet

    2018-04-01

    Arsenic awareness plays a vital role in ensuring the sustainability of arsenic mitigation technologies. Thus far, however, few studies have dealt with the sustainability of such technologies and its associated socioeconomic dimensions. As a result, arsenic awareness prediction has not yet been fully conceptualized. Accordingly, this study evaluated arsenic awareness among arsenic-affected communities in rural India, using a structured questionnaire to record socioeconomic, demographic, and other sociobehavioral factors with an eye to assessing their association with and influence on arsenic awareness. First a logistic regression model was applied and its results compared with those produced by six state-of-the-art machine-learning algorithms (Support Vector Machine [SVM], Kernel-SVM, Decision Tree [DT], k-Nearest Neighbor [k-NN], Naïve Bayes [NB], and Random Forests [RF]) as measured by their accuracy at predicting arsenic awareness. Most (63%) of the surveyed population was found to be arsenic-aware. Significant arsenic awareness predictors were divided into three types: (1) socioeconomic factors: caste, education level, and occupation; (2) water and sanitation behavior factors: number of family members involved in water collection, distance traveled and time spent for water collection, places for defecation, and materials used for handwashing after defecation; and (3) social capital and trust factors: presence of anganwadi and people's trust in other community members, NGOs, and private agencies. Moreover, individuals having larger social networks contributed positively to arsenic awareness in the communities. Results indicated that both the SVM and the RF algorithms outperformed the others at overall prediction of arsenic awareness, a nonlinear classification problem. Lower-caste, less educated, and unemployed members of the population were found to be the most vulnerable, requiring immediate arsenic mitigation. To this end, local social institutions and NGOs could play a
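
    A hedged sketch of this kind of classifier comparison with scikit-learn, assuming the survey responses have already been encoded into a numeric feature matrix X and a binary awareness label y; the hyperparameters, the 5-fold cross-validation and the accuracy metric are illustrative assumptions, not the study's exact protocol.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def compare_models(X, y):
        # Mean cross-validated accuracy per model, as one simple yardstick.
        models = {
            'RF': RandomForestClassifier(n_estimators=200, random_state=0),
            'SVM-linear': SVC(kernel='linear'),
            'Kernel-SVM': SVC(kernel='rbf'),
        }
        return {name: cross_val_score(m, X, y, cv=5, scoring='accuracy').mean()
                for name, m in models.items()}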

  19. Using wound care algorithms: a content validation study.

    Science.gov (United States)

    Beitz, J M; van Rijswijk, L

    1999-09-01

    Valid and reliable heuristic devices facilitating optimal wound care are lacking. The objectives of this study were to establish content validation data for a set of wound care algorithms, to identify their associated strengths and weaknesses, and to gain insight into the wound care decision-making process. Forty-four registered nurse wound care experts were surveyed and interviewed at national and regional educational meetings. Using a cross-sectional study design and an 83-item, 4-point Likert-type scale, this purposive sample was asked to quantify the degree of validity of the algorithms' decisions and components. Participants' comments were tape-recorded, transcribed, and themes were derived. On a scale of 1 to 4, the mean score of the entire instrument was 3.47 (SD +/- 0.87), the instrument's Content Validity Index was 0.86, and the individual Content Validity Index of 34 of 44 participants was > 0.8. Item scores were lower for those related to packing deep wounds and for items lacking valid and reliable definitions. The wound care algorithms studied proved valid. However, the lack of valid and reliable wound assessment and care definitions hinders optimal use of these instruments. Further research documenting their clinical use is warranted. Research-based practice recommendations should direct the development of future valid and reliable algorithms designed to help nurses provide optimal wound care.

  20. Pattern Nulling of Linear Antenna Arrays Using Backtracking Search Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kerim Guney

    2015-01-01

    Full Text Available An evolutionary method based on the backtracking search optimization algorithm (BSA) is proposed for linear antenna array pattern synthesis with prescribed nulls at interference directions. Pattern nulling is obtained by controlling only the amplitude, position, and phase of the antenna array elements. BSA is an innovative metaheuristic technique based on an iterative process. Various numerical examples of linear array patterns with prescribed single, multiple, and wide nulls are given to illustrate the performance and flexibility of BSA. The results obtained by BSA are compared with the results of the following seventeen algorithms: particle swarm optimization (PSO), genetic algorithm (GA), modified touring ant colony algorithm (MTACO), quadratic programming method (QPM), bacterial foraging algorithm (BFA), bees algorithm (BA), clonal selection algorithm (CLONALG), plant growth simulation algorithm (PGSA), tabu search algorithm (TSA), memetic algorithm (MA), nondominated sorting GA-2 (NSGA-2), multiobjective differential evolution (MODE), decomposition with differential evolution (MOEA/D-DE), comprehensive learning PSO (CLPSO), harmony search algorithm (HSA), seeker optimization algorithm (SOA), and mean variance mapping optimization (MVMO). The simulation results show that linear antenna array synthesis using BSA provides low side-lobe levels and deep null levels.

  1. Manifold absolute pressure estimation using neural network with hybrid training algorithm.

    Directory of Open Access Journals (Sweden)

    Mohd Taufiq Muslim

    Full Text Available In a modern small gasoline engine fuel injection system, the load of the engine is estimated based on the measurement of the manifold absolute pressure (MAP) sensor, which is placed in the intake manifold. This paper presents a more economical approach to estimating the MAP by using only the measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network by combining the Levenberg-Marquardt (LM) algorithm, Bayesian Regularization (BR) algorithm and Particle Swarm Optimization (PSO) algorithm. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields a better network performance than the first variant of the hybrid algorithm, LM, LM with BR and PSO, by estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the other algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions by showing a closer MAP estimation to the actual value.

  2. Manifold absolute pressure estimation using neural network with hybrid training algorithm.

    Science.gov (United States)

    Muslim, Mohd Taufiq; Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli

    2017-01-01

    In a modern small gasoline engine fuel injection system, the load of the engine is estimated based on the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only the measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, the Bayesian Regularization (BR) algorithm and the Particle Swarm Optimization (PSO) algorithm. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields a better network performance than the first variant of the hybrid algorithm, LM, LM with BR and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions by showing a closer MAP estimation to the actual value.
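
    As a minimal sketch of the Levenberg-Marquardt ingredient of the hybrid scheme above, the following fits a single-hidden-layer tanh network mapping (throttle, engine speed) to MAP with SciPy's LM solver. The training data are fabricated stand-ins for the paper's measurements, and the BR and PSO stages of the hybrid are omitted.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Fabricated stand-in for (throttle position, engine speed) -> MAP training data.
X = rng.uniform([0.0, 800.0], [1.0, 6000.0], size=(200, 2))
X_n = (X - X.mean(axis=0)) / X.std(axis=0)            # normalized inputs
y = 40.0 + 55.0 * X[:, 0] * (0.5 + 0.5 * np.tanh(X[:, 1] / 3000.0))  # fake MAP, kPa

H = 6  # hidden units of a single-hidden-layer tanh network

def unpack(p):
    W1 = p[:2 * H].reshape(H, 2)
    b1 = p[2 * H:3 * H]
    W2 = p[3 * H:4 * H]
    b2 = p[4 * H]
    return W1, b1, W2, b2

def residuals(p):
    W1, b1, W2, b2 = unpack(p)
    hidden = np.tanh(X_n @ W1.T + b1)
    return hidden @ W2 + b2 - y           # vector of training errors

p0 = 0.1 * rng.standard_normal(4 * H + 1)
fit = least_squares(residuals, p0, method='lm')       # Levenberg-Marquardt
print("RMS training error (kPa):", float(np.sqrt(np.mean(fit.fun ** 2))))
```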

  3. VHDL Implementation of Feature-Extraction Algorithm for the PANDA Electromagnetic Calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Löhner, H.; Tambave, G.

    2010-01-01

    The feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The use of modified firmware with the running on-line

  4. Using transformation algorithms to estimate (co)variance ...

    African Journals Online (AJOL)

    REML) procedures by a diagonalization approach is extended to multiple traits by the use of canonical transformations. A computing strategy is developed for use on large data sets employing two different REML algorithms for the estimation of ...

  5. Optimized coincidence Doppler broadening spectroscopy using deconvolution algorithms

    International Nuclear Information System (INIS)

    Ho, K.F.; Ching, H.M.; Cheng, K.W.; Beling, C.D.; Fung, S.; Ng, K.P.

    2004-01-01

    In the last few years a number of excellent deconvolution algorithms have been developed for use in "de-blurring" 2D images. Here we report briefly on one such algorithm we have studied, which uses the non-negativity constraint to optimize the regularization and which is applied to the 2D image-like data produced in Coincidence Doppler Broadening Spectroscopy (CDBS). The system instrumental resolution functions are obtained using the 514 keV line from 85Sr. The technique, when applied to a series of well-annealed polycrystalline metals, gives two-photon momentum data of a quality comparable to that obtainable using 1D Angular Correlation of Annihilation Radiation (ACAR). (orig.)

  6. [Algorithms of artificial neural networks--practical application in medical science].

    Science.gov (United States)

    Stefaniak, Bogusław; Cholewiński, Witold; Tarkowska, Anna

    2005-12-01

    Artificial Neural Networks (ANN) can be an alternative and complementary tool to typical statistical analysis. However, in spite of the many ready-to-use computer implementations of various ANN algorithms, artificial intelligence is still relatively rarely applied to data processing. This paper presents practical aspects of the scientific application of ANN in medicine using widely available algorithms. Several main steps of analysis with ANN are discussed, from material selection and its division into groups to the final quality assessment of the obtained results. The most frequent, typical sources of error, as well as a comparison of the ANN method with modeling by regression analysis, are also described.

  7. Dosimetric comparison of lung stereotactic body radiotherapy treatment plans using averaged computed tomography and end-exhalation computed tomography images: Evaluation of the effect of different dose-calculation algorithms and prescription methods

    Energy Technology Data Exchange (ETDEWEB)

    Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro, E-mail: m_nkmr@kuhp.kyoto-u.ac.jp; Matsuo, Yukinori; Ueki, Nami; Nakamura, Akira; Iizuka, Yusuke; Mampuya, Wambaka Ange; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D95, D90, D50, and D2 of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods; the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5%pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1%pt when the XVMC and AXB were used were greater than those associated with the use of the AAA, regardless of the dose-prescription method employed. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3%pts, regardless of the dose

  8. Dosimetric comparison of lung stereotactic body radiotherapy treatment plans using averaged computed tomography and end-exhalation computed tomography images: Evaluation of the effect of different dose-calculation algorithms and prescription methods

    International Nuclear Information System (INIS)

    Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro; Matsuo, Yukinori; Ueki, Nami; Nakamura, Akira; Iizuka, Yusuke; Mampuya, Wambaka Ange; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D95, D90, D50, and D2 of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods; the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5%pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1%pt when the XVMC and AXB were used were greater than those associated with the use of the AAA, regardless of the dose-prescription method employed. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3%pts, regardless of the dose-calculation algorithm or the

  9. Market potential for solar thermal energy supply systems in the United States industrial and commercial sectors: 1990--2030

    International Nuclear Information System (INIS)

    1991-12-01

    This report revises and extends previous work sponsored by the US DOE on the potential industrial market in the United States for solar thermal energy systems and presents a new analysis of the commercial sector market potential. Current and future industrial process heat demand and commercial water heating, space heating and space cooling end-use demands are estimated. The PC Industrial Model (PCIM) and the commercial modules of the Building Energy End-Use Model (BEEM) used by the DOE's Energy Information Administration (EIA) to support the recent National Energy Strategy (NES) analysis are used to forecast industrial and commercial end-use energy demand respectively. Energy demand is disaggregated by US Census region to account for geographic variation in solar insolation and regional variation in cost of alternative natural gas-fired energy sources. The industrial sector analysis also disaggregates demand by heat medium and temperature range to facilitate process end-use matching with appropriate solar thermal energy supply technologies. The commercial sector analysis disaggregates energy demand by three end uses: water heating, space heating, and space cooling. Generic conceptual designs are created for both industrial and commercial applications. Levelized energy costs (LEC) are calculated for industrial sector applications employing low temperature flat plate collectors for process water preheat; parabolic troughs for intermediate temperature process steam and direct heat industrial application; and parabolic dish technologies for high temperature, direct heat industrial applications. LEC are calculated for commercial sector applications employing parabolic trough technologies for low temperature water and space heating. Cost comparisons are made with natural gas-fired sources for both the industrial market and the commercial market assuming fuel price escalation consistent with NES reference case scenarios for industrial and commercial sector gas markets

  10. Automatic Circuit Design and Optimization Using Modified PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Subhash Patel

    2016-04-01

    Full Text Available In this work, we propose a modified PSO algorithm-based optimizer for automatic circuit design. The performance of the modified PSO algorithm is compared with two other evolutionary algorithms, namely the ABC algorithm and the standard PSO algorithm, by designing a two-stage CMOS operational amplifier and a bulk-driven OTA in 130 nm technology. The results show the robustness of the proposed algorithm. With the modified PSO algorithm, the average design error for the two-stage op-amp is only 0.054%, in contrast to 3.04% for the standard PSO algorithm and 5.45% for the ABC algorithm. For the bulk-driven OTA, the average design error is 1.32% with MPSO, compared to 4.70% with the ABC algorithm and 5.63% with the standard PSO algorithm.

  11. Development of antibiotic regimens using graph based evolutionary algorithms.

    Science.gov (United States)

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and of the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph-based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph-based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use, and to reduce the risk of spreading antibiotic-resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram-positive and Gram-negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimens. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems.

  12. Worm Algorithm for CP(N-1) Model

    CERN Document Server

    Rindlisbacher, Tobias

    2017-01-01

    The CP(N-1) model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP(N-1) on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm that also works at finite density has so far been tested for simulating the lattice CP(N-1) model. To this end we propose a new type of worm algorithm which is appropriate to simulate the lattice CP(N-1) model in a dual, flux-variables based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves where a defect is just moved from one lattice site to the next, our algorithm additionally allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP(N-1) l...

  13. Planning the FUSE Mission Using the SOVA Algorithm

    Science.gov (United States)

    Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly

    2011-01-01

    Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives. For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of spacecraft attitude-control reaction wheels, and striving for other strategic objectives. A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step by use of strategic objectives as defined by use of fuzzy inference systems. SOVA utilizes a variant of a graph-search algorithm known as the A* search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search.
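
    The scheduling search described above builds on a variant of A*. A generic sketch of plain A* follows; the toy graph and admissible heuristic table are invented, standing in for FUSE's far richer task-scheduling state space.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A* search.

    neighbors(node) yields (next_node, step_cost) pairs;
    heuristic(node) must never overestimate the remaining cost.
    """
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, [*path, nxt]))
    return None

# Toy graph standing in for a task-scheduling state space.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 2)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 2, 'D': 0}
print(a_star('A', 'D', lambda n: graph[n], lambda n: h[n]))
```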

  14. Portfolio optimization by using linear programing models based on genetic algorithm

    Science.gov (United States)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by the absolute standard deviation, and that each investor has a risk tolerance for the investment portfolio. The investment portfolio optimization problem is formulated as a linear programming model, and the optimum solution of the linear program is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. The analysis shows that portfolio optimization performed with the genetic algorithm approach produces a more efficient portfolio than optimization performed with a linear programming algorithm approach. Therefore, genetic algorithms can be considered an alternative for investment portfolio optimization, particularly when using linear programming models.
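
    As a hedged illustration of evolving portfolio weights with a genetic algorithm, the sketch below maximizes mean return penalized by absolute-deviation risk above a tolerance. It searches the weights directly rather than through the paper's linear programming formulation, and the return data are simulated, not Indonesian market data.

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(0.001, 0.02, size=(250, 5))     # hypothetical daily returns of 5 assets
risk_tolerance = 0.01                           # investor's cap on absolute deviation

def fitness(w):
    port = R @ w
    mad = np.mean(np.abs(port - port.mean()))   # absolute-deviation risk measure
    return port.mean() - 10.0 * max(0.0, mad - risk_tolerance)

def normalize(w):
    w = np.clip(w, 0.0, None)                   # long-only weights
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

pop = [normalize(rng.random(5)) for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:20]
    children = []
    for _ in range(40):
        a, b = rng.choice(20, size=2, replace=False)
        cut = int(rng.integers(1, 5))
        child = np.concatenate([elite[a][:cut], elite[b][cut:]])  # one-point crossover
        if rng.random() < 0.3:
            child = child + rng.normal(0.0, 0.05, 5)              # mutation
        children.append(normalize(child))
    pop = elite + children

best = max(pop, key=fitness)
print("weights:", np.round(best, 3), "fitness:", float(fitness(best)))
```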

  15. A new accurate curvature matching and optimal tool based five-axis machining algorithm

    International Nuclear Information System (INIS)

    Lin, Than; Lee, Jae Woo; Bohez, Erik L. J.

    2009-01-01

    Free-form surfaces are widely used in CAD systems to describe the part surface. Today, the most advanced machining of free-form surfaces is done in five-axis machining using a flat end mill cutter. However, five-axis machining requires complex algorithms for gouging avoidance and collision detection, and powerful computer-aided manufacturing (CAM) systems to support various operations. An accurate and efficient method is proposed for five-axis CNC machining of free-form surfaces. The proposed algorithm selects the best tool and plans the tool path autonomously using curvature matching and integrated inverse kinematics of the machine tool. The new algorithm uses the real cutter-contact tool path generated by the inverse kinematics, not the linearized piecewise cutter-location tool path

  16. A biodiversity indicators dashboard: addressing challenges to monitoring progress towards the Aichi biodiversity targets using disaggregated global data.

    Science.gov (United States)

    Han, Xuemei; Smyth, Regan L; Young, Bruce E; Brooks, Thomas M; Sánchez de Lozada, Alexandra; Bubb, Philip; Butchart, Stuart H M; Larsen, Frank W; Hamilton, Healy; Hansen, Matthew C; Turner, Will R

    2014-01-01

    Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's "Aichi Targets". These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity "dashboard"--a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual, indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured this dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the protection of

  17. A biodiversity indicators dashboard: addressing challenges to monitoring progress towards the Aichi biodiversity targets using disaggregated global data.

    Directory of Open Access Journals (Sweden)

    Xuemei Han

    Full Text Available Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's "Aichi Targets". These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity "dashboard"--a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual, indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured this dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the

  18. A Biodiversity Indicators Dashboard: Addressing Challenges to Monitoring Progress towards the Aichi Biodiversity Targets Using Disaggregated Global Data

    Science.gov (United States)

    Han, Xuemei; Smyth, Regan L.; Young, Bruce E.; Brooks, Thomas M.; Sánchez de Lozada, Alexandra; Bubb, Philip; Butchart, Stuart H. M.; Larsen, Frank W.; Hamilton, Healy; Hansen, Matthew C.; Turner, Will R.

    2014-01-01

    Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's “Aichi Targets”. These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity “dashboard” – a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual, indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured this dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the

  19. Evaluation of algorithms used to order markers on genetic maps.

    Science.gov (United States)

    Mollinari, M; Margarido, G R A; Vencovsky, R; Garcia, A A F

    2009-12-01

    When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F(2) populations were randomly simulated with 100 and 400 individuals, with different combinations of dominant and co-dominant markers, as well as 10 and 20% missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criteria may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria (except SALOD) investigated may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of repulsion linkage increases between them and, in this case, use of the algorithms TRY and SER associated with RIPPLE and the criterion LHMC would provide better results.
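
    For concreteness, the SARF criterion mentioned above is simply the sum of recombination fractions between adjacent markers in a candidate order; lower is better. The sketch below evaluates it by brute force on an invented four-marker example (real ordering algorithms such as TRY, SER, RCD, RECORD, or UG avoid exhaustive enumeration).

```python
from itertools import permutations

def sarf(order, rf):
    """Sum of adjacent recombination fractions for a candidate marker order;
    rf[i][j] is the pairwise recombination fraction between markers i and j."""
    return sum(rf[a][b] for a, b in zip(order, order[1:]))

# Invented 4-marker example whose true order is 0-1-2-3.
rf = [[0.00, 0.05, 0.12, 0.20],
      [0.05, 0.00, 0.06, 0.13],
      [0.12, 0.06, 0.00, 0.07],
      [0.20, 0.13, 0.07, 0.00]]

best = min(permutations(range(4)), key=lambda o: sarf(o, rf))
print(best, sarf(best, rf))
```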

  20. Optimal reservoir operation policies using novel nested algorithms

    Science.gov (United States)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse" which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", which denotes an exponential growth of the computational complexity with the state-decision space dimension. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels and 3) hydropower production that is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which in combination with the required discretization of releases for meeting the demands of downstream users leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). The nested algorithms are composed of two algorithms: 1) DP, SDP or RL and 2) a nested optimization algorithm. Depending on the way we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: 1) Simplex for linear allocation problems, and 2) the quadratic Knapsack method in the case of nonlinear problems. The novel idea is to include the nested

  1. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    Science.gov (United States)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper aims to find optimal PID controller parameters using particle swarm optimization (PSO), the Genetic Algorithm (GA) and the Simulated Annealing (SA) algorithm. The algorithms were evaluated through simulation of a chemical process and an electrical system for which the PID controller is tuned. Two different fitness functions, Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study is presented for the different algorithms based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
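
    Of the three metaheuristics named above, simulated annealing is the simplest to sketch. The following example tunes a PID controller on an assumed first-order unit-gain plant using the ITAE criterion; the plant, cooling schedule, and step sizes are illustrative choices, not the paper's benchmark settings.

```python
import math
import random

def itae_cost(kp, ki, kd, dt=0.01, t_end=5.0):
    """ITAE for a unit step on a first-order plant dy/dt = -y + u under PID control."""
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        y += dt * (-y + u)                 # Euler step of the plant
        prev_e = e
        cost += t * abs(e) * dt
    return cost

random.seed(0)
cur = [1.0, 1.0, 0.1]                      # initial Kp, Ki, Kd
cur_cost = itae_cost(*cur)
best, best_cost = cur[:], cur_cost
temp = 1.0
for _ in range(500):
    cand = [max(0.0, p + random.gauss(0.0, 0.2)) for p in cur]
    c = itae_cost(*cand)
    # Metropolis acceptance: always take improvements, sometimes take worse moves.
    if c < cur_cost or random.random() < math.exp(-(c - cur_cost) / temp):
        cur, cur_cost = cand, c
        if c < best_cost:
            best, best_cost = cand[:], c
    temp *= 0.99                           # geometric cooling
print("Kp, Ki, Kd:", [round(p, 3) for p in best], "ITAE:", round(best_cost, 4))
```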

  2. Tolerance analysis of null lenses using an end-use system performance criterion

    Science.gov (United States)

    Rodgers, J. Michael

    2000-07-01

    An effective method of assigning tolerances to a null lens is to determine the effects of null-lens fabrication and alignment errors on the end-use system itself, not simply the null lens. This paper describes a method to assign null- lens tolerances based on their effect on any performance parameter of the end-use system.

  3. Economic modeling using evolutionary algorithms : the effect of binary encoding of strategies

    NARCIS (Netherlands)

    Waltman, L.R.; Eck, van N.J.; Dekker, Rommert; Kaymak, U.

    2011-01-01

    We are concerned with evolutionary algorithms that are employed for economic modeling purposes. We focus in particular on evolutionary algorithms that use a binary encoding of strategies. These algorithms, commonly referred to as genetic algorithms, are popular in agent-based computational economics

  4. Food processing optimization using evolutionary algorithms | Enitan ...

    African Journals Online (AJOL)

    Evolutionary algorithms are widely used in single and multi-objective optimization. They are easy to use and provide solution(s) in one simulation run. They are used in food processing industries for decision making. Food processing presents constrained and unconstrained optimization problems. This paper reviews the ...

  5. Public Transport Route Finding using a Hybrid Genetic Algorithm

    OpenAIRE

    Liviu Adrian COTFAS; Andreea DIOSTEANU

    2011-01-01

    In this paper we present a public transport route finding solution based on a hybrid genetic algorithm. The algorithm uses two heuristics that take into consideration the number of trans-fers and the remaining distance to the destination station in order to improve the convergence speed. The interface of the system uses the latest web technologies to offer both portability and advanced functionality. The approach has been evaluated using the data for the Bucharest public transport network.

  6. Modular Algorithm Testbed Suite (MATS): A Software Framework for Automatic Target Recognition

    Science.gov (United States)

    2017-01-01

    Technical report NSWC PCD TR-2017-004, Naval Surface Warfare Center Panama City Division, Panama City, FL 32407-7001, 31-01-2017. The report describes the need for a flexible platform to facilitate the development and testing of ATR algorithms; to that end, NSWC PCD has created the Modular Algorithm Testbed Suite (MATS).

  7. Mathematical Use Of Polynomials Of Different End Periods Of ...

    African Journals Online (AJOL)

    This paper focuses on how polynomials of different end periods of random numbers can be used in the encryption and decryption of a message. Eight steps were used in generating information on how polynomials of different end periods of random numbers apply in the encryption and decryption of a ...

  8. A low complexity based spectrum management algorithm for ‘Near–Far’ problem in VDSL environment

    Directory of Open Access Journals (Sweden)

    Sunil Sharma

    2015-10-01

    Full Text Available In a digital subscriber line (DSL) system, crosstalk created by electromagnetic interference among twisted pairs degrades the system performance. Very high bit rate DSL (VDSL) utilizes the higher bandwidth of copper cable for data transmission. During upstream transmission, a 'Near-Far' problem occurs in the VDSL system: the far-end crosstalk (FEXT) produced by the near-end user degrades the data rate achieved by the far-end user. The effect of FEXT can be reduced by properly managing the power spectral densities (PSD) of the transmitters of the near and far users. This kind of power allocation is called dynamic spectrum management (DSM). In this paper, a new distributed DSM algorithm is proposed in which power is reduced only on those subchannels of the near-end user that create interference for the far-end user. This power back-off strategy is implemented through PSD masks on the interference-creating subchannels of the near-end user. The simulation results of the proposed algorithm show an improvement in data rate, approaching that of the optimal spectrum balancing (OSB) algorithm.

  9. Algorithms for the optimization of RBE-weighted dose in particle therapy.

    Science.gov (United States)

    Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M

    2013-01-21

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation, carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, such as the BFGS algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented as convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. Finally, we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
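
    The Fletcher-Reeves variant of conjugate gradients, reported above as the best performer, can be sketched compactly. The demo below minimizes an invented convex quadratic with an exact line search; a treatment-planning objective would substitute its own gradient and line-search routine.

```python
import numpy as np

def fletcher_reeves(grad, x0, step, iters=50, tol=1e-8):
    """Minimize a smooth function given its gradient using Fletcher-Reeves CG.

    step(x, d): line search returning the step length alpha along direction d.
    """
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = step(x, d)
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

# Demo on a convex quadratic f(x) = 0.5 x'Ax - b'x, where the exact
# line-search step is alpha = -(g.d)/(d.Ad).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b
exact_step = lambda x, d: -(grad(x) @ d) / (d @ A @ d)
print(fletcher_reeves(grad, np.zeros(2), exact_step))   # expect [1/11, 7/11]
```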

  10. Fast Dating Using Least-Squares Criteria and Algorithms.

    Science.gov (United States)

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through times. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that
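
    One of the standard baselines mentioned above, root-to-tip regression, fits a line of divergence against sampling date: the slope estimates the substitution rate and the x-intercept estimates the root date. A minimal sketch on fabricated serial samples:

```python
import numpy as np

# Fabricated serial samples: sampling year and root-to-tip divergence
# (substitutions per site) read off a rooted phylogeny.
years = np.array([1990.0, 1993.0, 1997.0, 2001.0, 2005.0, 2010.0])
divergence = np.array([0.010, 0.016, 0.024, 0.031, 0.040, 0.049])

rate, intercept = np.polyfit(years, divergence, 1)   # least-squares line
root_date = -intercept / rate                        # divergence extrapolates to 0
print(f"clock rate ~ {rate:.5f} subs/site/year, root date ~ {root_date:.1f}")
```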

  11. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.

    Science.gov (United States)

    Garro, Beatriz A; Vázquez, Roberto A

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.
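
    As a minimal sketch of evolving ANN weights with basic PSO (here only the weights, not the architecture or transfer functions the paper also evolves), the following trains a tiny tanh network on XOR using mean square error as the fitness; all sizes and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])                # XOR targets

H = 3                                             # hidden units
DIM = 2 * H + H + H + 1                           # W1, b1, W2, b2 flattened

def mse(p):
    W1 = p[:2 * H].reshape(2, H)
    b1 = p[2 * H:3 * H]
    W2 = p[3 * H:4 * H]
    b2 = p[4 * H]
    out = np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - y) ** 2))

# Basic global-best PSO over the flattened weight vector.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(-2.0, 2.0, (n, DIM))
vel = np.zeros((n, DIM))
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random((n, DIM)), rng.random((n, DIM))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("final MSE on XOR:", mse(gbest))
```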

  12. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures

  13. Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data.

    Science.gov (United States)

    Kroenke, Candyce H; Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J

    2016-03-01

    The utility of data-based algorithms in research has been questioned because of errors in the identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women's Health Initiative cohorts, and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms, one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV, using MR information to resolve discrepancies between algorithms and properly classify events based on review; we called this "triangulation". Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published, except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over the two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs.

  14. GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms

    International Nuclear Information System (INIS)

    Lacornerie, Thomas; Lisbona, Albert; Mirabel, Xavier; Lartigau, Eric; Reynaert, Nick

    2014-01-01

    The aim of the current study was to investigate the way dose is prescribed to lung lesions during SBRT using advanced dose calculation algorithms that take into account electron transport (type B algorithms). As type A algorithms do not take into account secondary electron transport, they overestimate the dose to lung lesions. Type B algorithms are more accurate, but no consensus has yet been reached regarding dose prescription. The positive clinical results obtained using type A algorithms should be used as a starting point. In the current work a dose-calculation experiment is performed, presenting different prescription methods. Three cases with three different sizes of peripheral lung lesions were planned using three different treatment platforms. For each individual case 60 Gy to the PTV was prescribed using a type A algorithm and the dose distribution was recalculated using a type B algorithm in order to evaluate the impact of the secondary electron transport. Secondly, for each case a type B algorithm was used to prescribe 48 Gy to the PTV, and the resulting doses to the GTV were analyzed. Finally, prescriptions based on specific GTV dose volumes were evaluated. When using a type A algorithm to prescribe the same dose to the PTV, the differences in median GTV doses among platforms and cases were always less than 10% of the prescription dose. Prescription to the PTV based on type B algorithms leads to a greater variability of the median GTV dose among cases and among platforms (24% and 28%, respectively). However, when 54 Gy was prescribed as the median GTV dose using a type B algorithm, the variability observed was minimal. Normalizing the prescription dose to the median GTV dose for lung lesions avoids variability among different cases and treatment platforms of SBRT when type B algorithms are used to calculate the dose. The combination of using a type A algorithm to optimize a homogeneous dose in the PTV and using a type B algorithm to prescribe the

  15. Active Noise Control Using Modified FsLMS and Hybrid PSOFF Algorithm

    Directory of Open Access Journals (Sweden)

    Ranjan Walia

    2018-04-01

    Full Text Available Active noise control is an efficient technique for noise cancellation; in this paper it is implemented with the aid of the Modified Filtered-s Least Mean Square (MFsLMS) algorithm. The Hybrid Particle Swarm Optimization and Firefly (HPSOFF) algorithm is used to identify the stability factor of the MFsLMS algorithm. The computational difficulty of the modified algorithm is reduced compared with the original Filtered-s Least Mean Square (FsLMS) algorithm. The noise sources are removed from the signal and the result is compared with the existing FsLMS algorithm. The performance of the system is established with the normalized mean square error for two different types of noise. The proposed method is also compared with existing algorithms for the same purposes.
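
    FsLMS extends the classic filtered-x LMS (FxLMS) update with a functional expansion of the reference signal. As a hedged, linear-only sketch of that family, the following simulates FxLMS with an assumed-known secondary path; the path coefficients and step size are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 5000, 16
x = rng.standard_normal(N)               # reference noise picked up upstream
p = np.array([0.0, 0.4, 0.25, 0.1])      # hypothetical primary acoustic path
s = np.array([0.0, 0.6, 0.3])            # secondary path, assumed perfectly known
mu = 0.01                                # LMS step size

d = np.convolve(x, p)[:N]                # disturbance arriving at the error mic
xf = np.convolve(x, s)[:N]               # reference filtered through the secondary path
w = np.zeros(L)                          # adaptive control filter
ybuf = np.zeros(len(s))                  # recent anti-noise samples for s-filtering
e_hist = np.zeros(N)

for n in range(N):
    xw = x[max(0, n - L + 1):n + 1][::-1]
    yn = w[:len(xw)] @ xw                # anti-noise sample from the controller
    ybuf = np.roll(ybuf, 1)
    ybuf[0] = yn
    e = d[n] - s @ ybuf                  # residual measured at the error sensor
    xfw = xf[max(0, n - L + 1):n + 1][::-1]
    w[:len(xfw)] += mu * e * xfw         # FxLMS weight update
    e_hist[n] = e

print("residual power, first vs last 500 samples:",
      float(np.mean(e_hist[:500] ** 2)), float(np.mean(e_hist[-500:] ** 2)))
```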

  16. Genomic multiple sequence alignments: refinement using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Lefkowitz Elliot J

    2005-08-01

    Full Text Available Abstract Background Genomic sequence data cannot be fully appreciated in isolation. Comparative genomics – the practice of comparing genomic sequences from different species – plays an increasingly important role in understanding the genotypic differences between species that result in phenotypic differences as well as in revealing patterns of evolutionary relationships. One of the major challenges in comparative genomics is producing a high-quality alignment between two or more related genomic sequences. In recent years, a number of tools have been developed for aligning large genomic sequences. Most utilize heuristic strategies to identify a series of strong sequence similarities, which are then used as anchors to align the regions between the anchor points. The resulting alignment is globally correct, but in many cases is suboptimal locally. We describe a new program, GenAlignRefine, which improves the overall quality of global multiple alignments by using a genetic algorithm to improve local regions of alignment. Regions of low quality are identified, realigned using the program T-Coffee, and then refined using a genetic algorithm. Because a better COFFEE (Consistency based Objective Function For alignmEnt Evaluation) score generally reflects greater alignment quality, the algorithm searches for an alignment that yields a better COFFEE score. To improve the intrinsic slowness of the genetic algorithm, GenAlignRefine was implemented as a parallel, cluster-based program. Results We tested the GenAlignRefine algorithm by running it on a Linux cluster to refine sequences from a simulation, as well as refine a multiple alignment of 15 Orthopoxvirus genomic sequences approximately 260,000 nucleotides in length that initially had been aligned by Multi-LAGAN. It took approximately 150 minutes for a 40-processor Linux cluster to optimize some 200 fuzzy (poorly aligned) regions of the orthopoxvirus alignment. Overall sequence identity increased only

  17. Large-Scale Parallel Viscous Flow Computations using an Unstructured Multigrid Algorithm

    Science.gov (United States)

    Mavriplis, Dimitri J.

    1999-01-01

    The development and testing of a parallel unstructured agglomeration multigrid algorithm for steady-state aerodynamic flows is discussed. The agglomeration multigrid strategy uses a graph algorithm to construct the coarse multigrid levels from the given fine grid, similar to an algebraic multigrid approach, but operates directly on the non-linear system using the FAS (Full Approximation Scheme) approach. The scalability and convergence rate of the multigrid algorithm are examined on the SGI Origin 2000 and the Cray T3E. An argument is given which indicates that the asymptotic scalability of the multigrid algorithm should be similar to that of its underlying single grid smoothing scheme. For medium size problems involving several million grid points, near perfect scalability is obtained for the single grid algorithm, while only a slight drop-off in parallel efficiency is observed for the multigrid V- and W-cycles, using up to 128 processors on the SGI Origin 2000, and up to 512 processors on the Cray T3E. For a large problem using 25 million grid points, good scalability is observed for the multigrid algorithm using up to 1450 processors on a Cray T3E, even when the coarsest grid level contains fewer points than the total number of processors.

  18. Towards Automatic Controller Design using Multi-Objective Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Pedersen, Gerulf

    In order to design the controllers of tomorrow, a need has risen for tools that can aid in the design of these. A desire to use evolutionary computation as a tool to achieve that goal is what gave inspiration for the work contained in this thesis. After having studied the foundations of evolutionary computation, a choice was made to use multi-objective algorithms for the purpose of aiding in automatic controller design. More specifically, the choice was made to use the Non-dominated Sorting Genetic Algorithm II (NSGAII), which is one of the most potent algorithms currently in use for automatic controller design. However, because the field of evolutionary computation is relatively unknown in the field of control engineering, this thesis also includes a comprehensive introduction to the basic field of evolutionary computation as well as a description of how the field has previously been applied.

  19. Mining the National Career Assessment Examination Result Using Clustering Algorithm

    Science.gov (United States)

    Pagudpud, M. V.; Palaoag, T. T.; Padirayon, L. M.

    2018-03-01

    Education is an essential process today, which prompts authorities to discover and establish innovative strategies for educational improvement. This study applied data mining using a clustering technique for knowledge extraction from the National Career Assessment Examination (NCAE) results in the Division of Quirino. The NCAE is an examination given to all grade 9 students in the Philippines to assess their aptitudes in different domains. Clustering the students is helpful in identifying students' learning considerations. With the use of the RapidMiner tool, clustering algorithms such as Density-Based Spatial Clustering of Applications with Noise (DBSCAN), k-means, k-medoid, expectation maximization clustering, and support vector clustering were analyzed. The silhouette indexes of these clustering algorithms were compared, and the results showed that the k-means algorithm with k = 3 and a silhouette index of 0.196 is the most appropriate clustering algorithm to group the students. Three groups were formed: 477 students in the determined group (cluster 0), 310 proficient students (cluster 1) and 396 developing students (cluster 2). The data mining technique used in this study is essential in extracting useful information from the NCAE results to better understand the abilities of students, which in turn is a good basis for adopting teaching strategies.
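
    The silhouette-based model selection described above can be reproduced in a few lines with scikit-learn. The sketch below clusters fabricated stand-in score vectors (not the NCAE data) for several values of k and prints the silhouette index of each.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(4)
# Fabricated stand-in for per-student scores in five NCAE domains.
scores = np.vstack([rng.normal(50, 8, (400, 5)),
                    rng.normal(70, 6, (300, 5)),
                    rng.normal(35, 7, (350, 5))])

for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
    print(k, round(float(silhouette_score(scores, labels)), 3))
```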

  20. Green cloud environment by using robust planning algorithm

    Directory of Open Access Journals (Sweden)

    Jyoti Thaman

    2017-11-01

    Full Text Available Cloud computing provides a framework for seamless access to resources through a network. Access to resources is quantified through SLAs between service providers and users. Service providers try to best exploit their resources and reduce idle times of the resources. Growing energy concerns further make the life of service providers miserable. Users' requests are served by allocating user tasks to resources in Cloud and Grid environments through scheduling and planning algorithms. With only a few planning algorithms in existence, planning and scheduling algorithms are rarely differentiated. This paper proposes a robust hybrid planning algorithm, Robust Heterogeneous-Earliest-Finish-Time (RHEFT), for binding tasks to VMs. The allocation of tasks to VMs is based on a novel task matching algorithm called Interior Scheduling. The performance of the proposed RHEFT algorithm is compared with Heterogeneous-Earliest-Finish-Time (HEFT) and Distributed HEFT (DHEFT) for various parameters like utilization ratio, makespan, speed-up and energy consumption. RHEFT's consistent performance against HEFT and DHEFT establishes the robustness of the hybrid planning algorithm through rigorous simulations.

  1. Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms

    Directory of Open Access Journals (Sweden)

    Zhongyi Hu

    2013-01-01

    Full Text Available Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature and in the commercial electricity markets literature as well. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Considering that the performance of SVR highly depends on its parameters, this study proposed a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than the other four evolutionary algorithm based SVR models and three well-known forecasting models, but also outperform the hybrid algorithms in the related existing literature.
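
    Since FA-MA is essentially a parameter search wrapped around SVR, a hedged sketch can convey the structure with a plain random search in its place (the firefly exploration and pattern-search refinement are omitted). The load series below is synthetic, not market data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# Synthetic hourly load with a daily cycle, turned into a lagged regression problem.
load = 100 + 10 * np.sin(np.arange(600) * 2 * np.pi / 24) + rng.normal(0, 1, 600)
X = np.column_stack([load[i:-(3 - i)] for i in range(3)])   # three lagged inputs
y = load[3:]

best = None
for _ in range(30):            # random parameter search standing in for FA-MA
    C = 10 ** rng.uniform(-1, 3)
    gamma = 10 ** rng.uniform(-4, 0)
    eps = 10 ** rng.uniform(-3, 0)
    score = cross_val_score(SVR(C=C, gamma=gamma, epsilon=eps), X, y,
                            cv=3, scoring='neg_mean_absolute_error').mean()
    if best is None or score > best[0]:
        best = (score, C, gamma, eps)

print("cv MAE:", -best[0], "(C, gamma, epsilon):", best[1:])
```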

  2. Dynamic Vehicle Routing Using an Improved Variable Neighborhood Search Algorithm

    Directory of Open Access Journals (Sweden)

    Yingcheng Xu

    2013-01-01

    Full Text Available In order to effectively solve the dynamic vehicle routing problem with time windows, a mathematical model is established and an improved variable neighborhood search algorithm is proposed. In the algorithm, customer allocation and route planning for the initial solution are completed by a clustering method. Hybrid insert and exchange operators are used in the shaking process, a later optimization process is applied to improve the solution space, and a best-improvement strategy is adopted, which allows the algorithm to achieve a better balance between solution quality and running time. The idea of simulated annealing is introduced to control the acceptance of new solutions, and the influences of arrival time, geographical distribution, and time window range on route selection are analyzed. In the experiment, the proposed algorithm is applied to solve DVRP instances of different sizes. Comparison with other algorithms shows that the algorithm is effective and feasible.
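
    The shaking-plus-local-search skeleton of variable neighborhood search can be sketched on a plain routing tour. Below, exchange moves provide the shaking and 2-opt the local search; the eight random customers are invented, and the time-window and dynamic aspects of the paper are omitted.

```python
import random

def route_len(route, dist):
    """Tour length, returning to the start."""
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

def two_opt(route, dist):
    """First-improvement 2-opt local search."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 1):
            for j in range(i + 1, len(route)):
                cand = route[:i] + route[i:j][::-1] + route[j:]
                if route_len(cand, dist) < route_len(route, dist):
                    route, improved = cand, True
    return route

def shake(route, k):
    """k random exchange moves (the insert/exchange shaking idea, simplified)."""
    r = route[:]
    for _ in range(k):
        i, j = random.sample(range(len(r)), 2)
        r[i], r[j] = r[j], r[i]
    return r

def vns(route, dist, k_max=4, iters=50):
    best = two_opt(route, dist)
    for _ in range(iters):
        for k in range(1, k_max + 1):
            cand = two_opt(shake(best, k), dist)
            if route_len(cand, dist) < route_len(best, dist):
                best = cand
                break                    # improvement: restart from neighborhood 1
    return best

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(8)]   # invented customers
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]
best = vns(list(range(8)), dist)
print(best, round(route_len(best, dist), 3))
```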

  3. Public Transport Route Finding using a Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Liviu Adrian COTFAS

    2011-01-01

    Full Text Available In this paper we present a public transport route finding solution based on a hybrid genetic algorithm. The algorithm uses two heuristics that take into consideration the number of trans-fers and the remaining distance to the destination station in order to improve the convergence speed. The interface of the system uses the latest web technologies to offer both portability and advanced functionality. The approach has been evaluated using the data for the Bucharest public transport network.

  4. Prediction of customer behaviour analysis using classification algorithms

    Science.gov (United States)

    Raju, Siva Subramanian; Dhandayudam, Prabha

    2018-04-01

    Customer relationship management plays a crucial role in analyzing customer behavior patterns and their value to an enterprise. Customer data can be analyzed efficiently using various data mining techniques, with the goal of developing business strategies and enhancing the business. In this paper, three classification models (NB, J48, and MLPNN) are studied and evaluated. The three classifiers are compared on three performance measures (accuracy, sensitivity, specificity), and the experimental results show that the J48 algorithm achieves better accuracy than the NB and MLPNN algorithms.
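
    A hedged sketch of such a comparison using scikit-learn, with DecisionTreeClassifier standing in for WEKA's J48 (both are C4.5-style trees); the data set, split, and model settings are illustrative, not the paper's setup.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import confusion_matrix

        def evaluate(model, X, y):
            Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
            pred = model.fit(Xtr, ytr).predict(Xte)
            tn, fp, fn, tp = confusion_matrix(yte, pred).ravel()  # binary labels
            return {"accuracy": (tp + tn) / (tp + tn + fp + fn),
                    "sensitivity": tp / (tp + fn),
                    "specificity": tn / (tn + fp)}

        # Usage, given a feature matrix X and binary labels y:
        # for name, m in [("NB", GaussianNB()),
        #                 ("J48-like", DecisionTreeClassifier()),
        #                 ("MLPNN", MLPClassifier(max_iter=500))]:
        #     print(name, evaluate(m, X, y))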

  5. Synthesis of concentric circular antenna arrays using dragonfly algorithm

    Science.gov (United States)

    Babayigit, B.

    2018-05-01

    Due to the strong non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs with low sidelobes using DA. The effectiveness of the proposed DA is investigated in two cases (with and without a centre element) of two three-ring CCAA designs (having 4-, 6-, 8-element or 8-, 10-, 12-element rings). The radiation pattern of each design case is obtained by finding the optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.

  6. A self-organizing algorithm for modeling protein loops.

    Directory of Open Access Journals (Sweden)

    Pu Liu

    2009-08-01

    Full Text Available Protein loops, the flexible short segments connecting two stable secondary structural units in proteins, play a critical role in protein structure and function. Constructing chemically sensible conformations of protein loops that seamlessly bridge the gap between the anchor points without introducing any steric collisions remains an open challenge. A variety of algorithms have been developed to tackle the loop closure problem, ranging from inverse kinematics to knowledge-based approaches that utilize pre-existing fragments extracted from known protein structures. However, many of these approaches focus on the generation of conformations that mainly satisfy the fixed end point condition, leaving the steric constraints to be resolved in subsequent post-processing steps. In the present work, we describe a simple solution that simultaneously satisfies not only the end point and steric conditions, but also chirality and planarity constraints. Starting from random initial atomic coordinates, each individual conformation is generated independently by using a simple alternating scheme of pairwise distance adjustments of randomly chosen atoms, followed by fast geometric matching of the conformationally rigid components of the constituent amino acids. The method is conceptually simple, numerically stable and computationally efficient. Very importantly, additional constraints, such as those derived from NMR experiments, hydrogen bonds or salt bridges, can be incorporated into the algorithm in a straightforward and inexpensive way, making the method ideal for solving more complex multi-loop problems. The remarkable performance and robustness of the algorithm are demonstrated on a set of protein loops of length 4, 8, and 12 that have been used in previous studies.

  7. Fast prediction of RNA-RNA interaction using heuristic algorithm.

    Science.gov (United States)

    Montaseri, Soheila

    2015-01-01

    Interaction between two RNA molecules plays a crucial role in many medical and biological processes such as gene expression regulation. In this process, an RNA molecule prohibits the translation of another RNA molecule by establishing stable interactions with it. Several algorithms have been developed to predict the structure of RNA-RNA interactions, and high computational time is a common challenge in most of them. In this context, a heuristic method is introduced to accurately predict the interaction between two RNAs based on minimum free energy (MFE). This algorithm uses a few dot matrices for finding the secondary structure of each RNA and the binding sites between the two RNAs. Furthermore, a parallel version of this method is presented; we describe the algorithm's concurrency and parallelism for a multicore chip. The proposed algorithm has been evaluated on datasets including CopA-CopT, R1inv-R2inv, Tar-Tar*, DIS-DIS, and IncRNA54-RepZ in Escherichia coli. The method has high validity and efficiency, and it runs in less computational time than other approaches.

  8. The HSBQ Algorithm with Triple-play Services for Broadband Hybrid Satellite Constellation Communication System

    Directory of Open Access Journals (Sweden)

    Anupon Boriboon

    2016-07-01

    Full Text Available The HSBQ algorithm is an active queue management algorithm that aims to avoid high packet loss rates and to keep the stream queue stable; the underlying problem is calculating a drop probability that achieves both queue-length stability and bandwidth fairness. This paper proposes HSBQ, which drops packets before the queues overflow at the gateways, so that the end nodes can respond to congestion before overflow occurs. The algorithm uses the change in the average queue length to adjust the amount by which the mark (or drop) probability is changed, and it adjusts the queue weight used to estimate the average queue length based on the rate. The results show that HSBQ maintains a stable stream queue better than congestion-metric algorithms without flow information as the rate of the hybrid satellite network changes dramatically, and the empirical evidence demonstrates that HSBQ offers better quality of service than the queue control mechanisms traditionally used in hybrid satellite networks.
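
    The abstract names two mechanisms: an exponentially weighted moving average (EWMA) of the queue length whose weight adapts to the rate, and a mark/drop probability steered by how that average moves. The RED-style sketch below illustrates only those two mechanisms; the thresholds, step sizes, and adaptation rule are illustrative, not the HSBQ specification.

        class AdaptiveQueue:
            """RED-style marking: an EWMA of queue length drives a drop probability."""
            def __init__(self, min_th=20, max_th=80, w0=0.02):
                self.avg, self.p = 0.0, 0.0
                self.min_th, self.max_th, self.w0 = min_th, max_th, w0

            def on_packet(self, queue_len, rate_ratio):
                w = min(1.0, self.w0 * max(1.0, rate_ratio))   # weight adapts to rate
                prev = self.avg
                self.avg = (1 - w) * self.avg + w * queue_len  # EWMA of queue length
                rising = self.avg > prev
                if self.avg > self.max_th or (self.avg > self.min_th and rising):
                    self.p = min(1.0, self.p + 0.01)           # queue building: drop more
                else:
                    self.p = max(0.0, self.p - 0.005)          # queue draining: drop less
                return self.p    # caller marks/drops the packet with probability p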

  9. Improved multilayer OLED architecture using evolutionary genetic algorithm

    International Nuclear Information System (INIS)

    Quirino, W.G.; Teixeira, K.C.; Legnani, C.; Calil, V.L.; Messer, B.; Neto, O.P. Vilela; Pacheco, M.A.C.; Cremona, M.

    2009-01-01

    Organic light-emitting diodes (OLEDs) constitute a new class of emissive devices, which present high efficiency and low voltage operation, among other advantages over current technology. Multilayer architecture (M-OLED) is generally used to optimize these devices, especially to overcome the suppression of light emission due to exciton recombination near the metal layers. However, improvements in recombination, transport, and charge injection can also be achieved by blending electron- and hole-transporting layers into the same layer. Devices with a graded emissive region can provide promising results regarding quantum and power efficiency, as well as brightness. The massive number of possible configurations, however, suggests that a search algorithm is better suited to this problem. In this work, multilayer OLEDs were simulated and fabricated using genetic algorithms (GAs) as the evolutionary strategy to improve their efficiency. Genetic algorithms are stochastic algorithms based on genetic inheritance and the Darwinian strife for survival. In our simulations, a 50 nm wide graded region divided into five equally sized layers was assumed. The relative concentrations of the materials within each layer were optimized to obtain the lowest V/J^0.5 ratio, where V is the applied voltage and J the current density. The best M-OLED architecture obtained by the genetic algorithm presented a V/J^0.5 ratio nearly 7% lower than the value reported in the literature. To check the experimental validity of the improved simulation results, two M-OLEDs with different architectures were fabricated by thermal deposition in a high-vacuum environment. The results of the comparison between simulation and experiment are presented and discussed.

  10. Inertial measurement unit–based iterative pose compensation algorithm for low-cost modular manipulator

    Directory of Open Access Journals (Sweden)

    Yunhan Lin

    2016-01-01

    Full Text Available End-effector pose correction and compensation are necessary means of realizing accurate motion control of a manipulator. In this article, we first establish the kinematic model and error model of the modular manipulator (WUST-ARM), and then discuss the measurement methods and precision of the inertial measurement unit (IMU) sensor. The IMU sensor is mounted on the end-effector of the modular manipulator to obtain the real-time pose of the end-effector. Finally, a new IMU-based iterative pose compensation algorithm is proposed. Applying this algorithm in pose compensation experiments on a modular manipulator composed of low-cost rotation joints shows that the IMU achieves higher precision in the static state and that, after a brief delay once the end-effector reaches the target point, it feeds an accurate error compensation angle back to the control system; after compensation, the errors of the roll, pitch, and yaw angles reach 0.05°, 0.01°, and 0.27°, respectively. This proves that this low-cost method provides a new way to improve the end-effector pose of low-cost modular manipulators.

  11. Realizing directional cloning using sticky ends produced by 3ʹ-5ʹ ...

    Indian Academy of Sciences (India)

    The Klenow fragment (KF) has been used as a tool enzyme to make blunt ends. Its 5′-3′ polymerase activity can extend a 5′ overhanging sticky end to a blunt end, and its 3′-5′ exonuclease activity can cleave a 3′ overhanging sticky end to a blunt end. The blunt end is useful for cloning. Here, we for the first ...

  12. Implementation of digital image encryption algorithm using logistic function and DNA encoding

    Science.gov (United States)

    Suryadi, MT; Satria, Yudi; Fauzi, Muhammad

    2018-03-01

    Cryptography is a method of securing information that may take the form of a digital image. Building on past research, an encryption algorithm using the logistic function and DNA encoding was proposed to increase the security level of chaos-based and DNA-based encryption algorithms. The algorithm uses DNA encoding to map pixel values to DNA bases and scrambles them with DNA addition, DNA complement, and XOR operations. The logistic function serves as the random number generator needed by the DNA complement and XOR operations. Test results show that the PSNR values of the cipher images are 7.98-7.99, the entropy values are close to 8, the histograms of the cipher images are uniformly distributed, and the correlation coefficients of the cipher images are near 0. Thus, the cipher image can be decrypted perfectly, and the encryption algorithm has good resistance to entropy and statistical attacks.
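
    The two building blocks named here are standard: the logistic map x_{n+1} = r*x_n*(1 - x_n), which behaves chaotically for r near 4 and can be quantized into key bytes, and 2-bits-per-base DNA encoding with base-wise XOR. A minimal sketch under one common encoding rule; the parameters and rule table are illustrative, not the paper's exact scheme.

        def logistic_stream(x0, r=3.99, n=1):
            x = x0
            for _ in range(n):
                x = r * x * (1.0 - x)         # chaotic for r close to 4
                yield int(x * 256) & 0xFF     # quantize the state to a key byte

        DNA = ["A", "C", "G", "T"]            # 2-bit codes 00, 01, 10, 11

        def byte_to_dna(b):
            return [DNA[(b >> s) & 3] for s in (6, 4, 2, 0)]

        def dna_xor(a, b):                    # XOR on the 2-bit codes of two bases
            return DNA[DNA.index(a) ^ DNA.index(b)]

        # Usage: one key byte from the map, XORed base-wise with one pixel byte.
        # key = next(logistic_stream(0.4137))
        # cipher = [dna_xor(p, k) for p, k in zip(byte_to_dna(pixel), byte_to_dna(key))]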

  13. Construction Example for Algebra System Using Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    FangAn Deng

    2015-01-01

    Full Text Available Constructing an example of an algebra system verifies the existence of a complex algebra system, and it is an NP-hard problem. In this paper, to solve this kind of problem, a mathematical optimization model for constructing examples of algebra systems is first established. Second, an improved harmony search algorithm based on the NGHS algorithm (INGHS) is proposed to find as many solutions as possible for the optimization model; to balance exploration and exploitation in the search process, INGHS uses a global-best strategy and a dynamic parameter adjustment method. Finally, nine construction examples of algebra systems are used to evaluate the optimization model and the performance of INGHS. The experimental results show that the proposed algorithm performs strongly on complex construction-example problems of algebra systems.

  14. SHARPEN-Systematic Hierarchical Algorithms for Rotamers and Proteins on an Extended Network

    KAUST Repository

    Loksha, Ilya V.

    2009-04-30

    Algorithms for discrete optimization of proteins play a central role in recent advances in protein structure prediction and design. We wish to improve the resources available for computational biologists to rapidly prototype such algorithms and to easily scale these algorithms to many processors. To that end, we describe the implementation and use of two new open source resources, citing potential benefits over existing software. We discuss CHOMP, a new object-oriented library for macromolecular optimization, and SHARPEN, a framework for scaling CHOMP scripts to many computers. These tools allow users to develop new algorithms for a variety of applications including protein repacking, protein-protein docking, loop rebuilding, or homology model remediation. Particular care was taken to allow modular energy function design; protein conformations may currently be scored using either the OPLSaa molecular mechanical energy function or an all-atom semiempirical energy function employed by Rosetta. © 2009 Wiley Periodicals, Inc.

  15. Optimized hyperspectral band selection using hybrid genetic algorithm and gravitational search algorithm

    Science.gov (United States)

    Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie

    2015-12-01

    The serious information redundancy in hyperspectral images (HIs) does not contribute to data analysis accuracy; instead it requires expensive computational resources. Consequently, to identify the most useful and valuable information in HIs and thereby improve the accuracy of data analysis, this paper proposes a novel hyperspectral band selection method using a hybrid genetic algorithm and gravitational search algorithm (GA-GSA). In the proposed method, the GA-GSA is first mapped to binary space. Then, the accuracy of a support vector machine (SVM) classifier and the number of selected spectral bands are used to measure the discriminative capability of a band subset. Finally, the band subset that has the smallest number of spectral bands while covering the most useful and valuable information is obtained. To verify the effectiveness of the proposed method, studies conducted on an AVIRIS image against two recently proposed state-of-the-art GSA variants are presented. The experimental results reveal the superiority of the proposed method and indicate that it can considerably reduce data storage costs and efficiently identify band subsets with stable, high classification precision.

  16. An adaptive occlusion culling algorithm for use in large VEs

    DEFF Research Database (Denmark)

    Bormann, Karsten

    2000-01-01

    The Hierarchical Occlusion Map algorithm is combined with Frustum Slicing to give a simpler occlusion-culling algorithm that more adequately caters to large, open VEs. The algorithm adapts to the level of visual congestion and is well suited for use with large, complex models with a long mean free line of sight ('the great outdoors'), models for which it is not feasible to construct, or search, a database of occluders to be rendered each frame.

  17. Biomass Resource Allocation among Competing End Uses

    Energy Technology Data Exchange (ETDEWEB)

    Newes, E.; Bush, B.; Inman, D.; Lin, Y.; Mai, T.; Martinez, A.; Mulcahy, D.; Short, W.; Simpkins, T.; Uriarte, C.; Peck, C.

    2012-05-01

    The Biomass Scenario Model (BSM) is a system dynamics model developed by the U.S. Department of Energy as a tool to better understand the interaction of complex policies and their potential effects on the biofuels industry in the United States. However, it does not currently have the capability to account for allocation of biomass resources among the various end uses, which limits its utilization in analysis of policies that target biomass uses outside the biofuels industry. This report provides a more holistic understanding of the dynamics surrounding the allocation of biomass among uses that include traditional use, wood pellet exports, bio-based products and bioproducts, biopower, and biofuels by (1) highlighting the methods used in existing models' treatments of competition for biomass resources; (2) identifying coverage and gaps in industry data regarding the competing end uses; and (3) exploring options for developing models of biomass allocation that could be integrated with the BSM to actively exchange and incorporate relevant information.

  18. Harmonic elimination in diode-clamped multilevel inverter using evolutionary algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Barkati, Said [Laboratoire d' analyse des Signaux et Systemes (LASS), Universite de M' sila, BP. 166, rue Ichbilia 28000 M' sila (Algeria); Baghli, Lotfi [Groupe de Recherche en Electrotechnique et Electronique de Nancy (GREEN), CNRS UMR 7030, Universite Henri Poincare Nancy 1, BP. 239, 54506 Vandoeuvre-les-Nancy (France); Berkouk, El Madjid; Boucherit, Mohamed-Seghir [Laboratoire de Commande des Processus (LCP), Ecole Nationale Polytechnique, BP. 182, 10 Avenue Hassen Badi, 16200 El Harrach, Alger (Algeria)

    2008-10-15

    This paper describes two evolutionary algorithms for the optimized harmonic stepped-waveform technique. Genetic algorithms and particle swarm optimization are applied to compute the switching angles in a three-phase seven-level inverter to produce the required fundamental voltage while, at the same time, specified harmonics are eliminated. Furthermore, these algorithms are also used to solve the starting point problem of the Newton-Raphson conventional method. This combination provides a very effective method for the harmonic elimination technique. This strategy is useful for different structures of seven-level inverters. The diode-clamped topology is considered in this study. (author)

  19. Optimization of Support Structures for Offshore Wind Turbines Using Genetic Algorithm with Domain-Trimming

    Directory of Open Access Journals (Sweden)

    Mohammad AlHamaydeh

    2017-01-01

    Full Text Available The powerful genetic algorithm optimization technique is augmented with an innovative “domain-trimming” modification. The resulting adaptive, high-performance technique is called Genetic Algorithm with Domain-Trimming (GADT). As a proof of concept, the GADT is applied to a widely used benchmark problem: the 10-dimensional truss optimization benchmark, which has well-documented global and local minima. The GADT is shown to outperform several published solutions. Subsequently, the GADT is deployed on three-dimensional structural design optimization for offshore wind turbine support structures. The design problem involves complex least-weight topology as well as member size optimization. The GADT is applied to two popular design alternatives: tripod and quadropod jackets. Both versions of the optimization problem are nonlinearly constrained, with the material weight of the supporting truss as the objective function. The design variables are the truss members' end-node coordinates and the cross-sectional areas of the truss members, whereas the constraints are the maximum stresses in the members and the maximum displacements of the nodes. These constraints are managed via dynamically modified, nonstationary penalty functions. The structures are subject to gravity, wind, wave, and earthquake loading conditions. The results show that the GADT method is superior in finding the best discovered optimal solutions.

  20. DNA-based watermarks using the DNA-Crypt algorithm

    Directory of Open Access Journals (Sweden)

    Barnekow Angelika

    2007-05-01

    Full Text Available Abstract Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.

  1. DNA-based watermarks using the DNA-Crypt algorithm.

    Science.gov (United States)

    Heider, Dominik; Barnekow, Angelika

    2007-05-29

    The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.

  2. DNA-based watermarks using the DNA-Crypt algorithm

    Science.gov (United States)

    Heider, Dominik; Barnekow, Angelika

    2007-01-01

    Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate, and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434

  3. Clustering performance comparison using K-means and expectation maximization algorithms.

    Science.gov (United States)

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-11-14

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representative clustering algorithms are K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
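
    A minimal sketch of the two clustering steps using scikit-learn, where GaussianMixture is the EM implementation; the red-wine data and the follow-up logistic regression on the cluster labels are omitted.

        # K-means (hard assignments) versus EM via a Gaussian mixture (soft
        # posteriors, hardened here with predict) on the same feature matrix X.
        from sklearn.cluster import KMeans
        from sklearn.mixture import GaussianMixture

        def cluster_both(X, k=2, seed=0):
            km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
            em = GaussianMixture(n_components=k, random_state=seed).fit(X)
            return km.labels_, em.predict(X)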

  4. Parallel Algorithms for Switching Edges in Heterogeneous Graphs.

    Science.gov (United States)

    Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-06-01

    An edge switch is an operation on a graph (or network) in which two edges are selected randomly and one end vertex of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed-memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for that problem, achieving a speedup of 925 using 1024 processors.
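
    A sequential sketch of one edge-switch step that preserves simplicity by rejecting switches that would create a self-loop or a parallel edge; the paper's contribution, the distributed-memory parallelization, is not shown.

        import random

        def edge_switch(edges, adj):
            """edges: list of (u, v) tuples; adj: dict vertex -> set of neighbors."""
            (u, v), (x, y) = random.sample(edges, 2)
            # proposed switch: (u,v), (x,y) -> (u,y), (x,v)
            if u in (x, y) or v in (x, y):      # shared vertex: reject (self-loop risk)
                return False
            if y in adj[u] or v in adj[x]:      # would create a parallel edge
                return False
            edges.remove((u, v)); edges.remove((x, y))
            edges += [(u, y), (x, v)]
            adj[u].discard(v); adj[v].discard(u)
            adj[x].discard(y); adj[y].discard(x)
            adj[u].add(y); adj[y].add(u)
            adj[x].add(v); adj[v].add(x)
            return True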

  5. Cable Damage Detection System and Algorithms Using Time Domain Reflectometry

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G A; Robbins, C L; Wade, K A; Souza, P R

    2009-03-24

    This report describes the hardware system and the set of algorithms we have developed for detecting damage in cables for the Advanced Development and Process Technologies (ADAPT) Program. This program is part of the W80 Life Extension Program (LEP). The system could be generalized for application to other systems in the future. Critical cables can undergo various types of damage (e.g., short circuits, open circuits, punctures, compression) that manifest as changes in the dielectric/impedance properties of the cables. For our specific problem, only one end of the cable is accessible, and no exemplars of actual damage are available. This work addresses the detection of dielectric/impedance anomalies in transient time domain reflectometry (TDR) measurements on the cables. The approach is to interrogate the cable using TDR techniques, in which a known pulse is inserted into the cable and reflections from the cable are measured. The key operating principle is that any important cable damage will manifest itself as an electrical impedance discontinuity that can be measured in the TDR response signal. Machine learning classification algorithms are effectively eliminated from consideration, because only a small number of cables is available for testing, so a sufficient sample size is not attainable. Nonetheless, a key requirement is to achieve very high probability of detection and very low probability of false alarm. The approach is to compare TDR signals from possibly damaged cables to signals or an empirical model derived from reference cables that are known to be undamaged. This requires that the TDR signals be reasonably repeatable from test to test on the same cable, and from cable to cable. Empirical studies show that repeatability is the 'long pole in the tent' for damage detection, because it has been difficult to achieve reasonable repeatability. This one factor dominated the project. The two-step model

  6. An Improved Piecewise Linear Chaotic Map Based Image Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Yuping Hu

    2014-01-01

    Full Text Available An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Owing to the chaotic system's sensitivity to initial key values and system parameters, and to its ergodicity, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are processed not in index order but alternately from the beginning and the end of the image, and cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme achieves good encryption results and that its key space is large enough to resist brute-force attacks.
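
    For reference, the standard piecewise linear chaotic map (PWLCM) on which the modified map is based, quantized into key bytes for the diffusion stage; the paper's specific modification is not reproduced, and the parameter values are illustrative.

        def pwlcm(x, p):
            """One iteration; control parameter 0 < p < 0.5, state 0 <= x < 1."""
            if x < p:
                return x / p
            if x < 0.5:
                return (x - p) / (0.5 - p)
            return pwlcm(1.0 - x, p)          # the map is symmetric on [0.5, 1)

        def keystream(x0, p, n):
            x, out = x0, []
            for _ in range(n):
                x = pwlcm(x, p)
                out.append(int(x * 256) & 0xFF)  # quantized key bytes for diffusion
            return out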

  7. Designing and implementing of improved cryptographic algorithm using modular arithmetic theory

    Directory of Open Access Journals (Sweden)

    Maryam Kamarzarrin

    2015-05-01

    Full Text Available Maintaining the privacy and security of people's information are two of the most important principles of electronic health plans. One method of providing privacy and securing information is a public-key cryptography system. In this paper, we compare two algorithms, common exponentiation and fast exponentiation, for enhancing the efficiency of public-key cryptography. We show that a system designed with the fast exponentiation algorithm has higher speed and performance, together with lower power consumption and occupied area, than one designed with the common exponentiation algorithm. Although systems designed with the common exponentiation algorithm are slower and perform worse, they are less complex and easier to design than those using fast exponentiation. In this paper, we examine and compare these two methods of exponentiation and observe their performance impact in hardware, implemented in VHDL on an FPGA.
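
    The two strategies being compared are standard. Common exponentiation performs one modular multiplication per unit of the exponent, while fast (square-and-multiply) exponentiation needs only about log2(exp) squarings, which is what makes RSA-scale exponents practical. A Python sketch of both:

        def common_pow(base, exp, mod):
            r = 1
            for _ in range(exp):                 # O(exp) modular multiplications
                r = (r * base) % mod
            return r

        def fast_pow(base, exp, mod):
            r, b = 1, base % mod
            while exp:                           # O(log exp) squarings/multiplications
                if exp & 1:                      # multiply in b when the bit is set
                    r = (r * b) % mod
                b = (b * b) % mod                # square for the next bit
                exp >>= 1
            return r

        # common_pow(5, 117, 19) == fast_pow(5, 117, 19) == pow(5, 117, 19)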

  8. Predicting Smoking Status Using Machine Learning Algorithms and Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Charles Frank

    2018-03-01

    Full Text Available Smoking has been proven to negatively affect health in a multitude of ways. As of 2009, smoking has been considered the leading cause of preventable morbidity and mortality in the United States, continuing to plague the country’s overall health. This study investigates the viability and effectiveness of some machine learning algorithms for predicting the smoking status of patients based on their blood test and vital sign readings. The analysis is divided into two parts. In part 1, we use one-way ANOVA analysis with the SAS tool to show the statistically significant difference in blood test readings between smokers and non-smokers. The results show that the difference in INR, which measures the effectiveness of anticoagulants, was significant in favor of non-smokers, which further confirms the health risks associated with smoking. In part 2, we use five machine learning algorithms, Naïve Bayes, MLP, logistic regression classifier, J48, and Decision Table, to predict the smoking status of patients. To compare the effectiveness of these algorithms we use precision, recall, F-measure, and accuracy. The results show that the logistic algorithm outperformed the four other algorithms with precision, recall, F-measure, and accuracy of 83%, 83.4%, 83.2%, and 83.44%, respectively.

  9. An Interactive Personalized Recommendation System Using the Hybrid Algorithm Model

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2017-10-01

    Full Text Available With the rapid development of e-commerce, the contradiction between the disorder of business information and customer demand is increasingly prominent. This study aims to make e-commerce shopping more convenient and to avoid information overload through an interactive personalized recommendation system using a hybrid algorithm model. The proposed model first uses various recommendation algorithms to get a list of original recommendation results. Combined with the customer’s feedback in an interactive manner, it then establishes the weights of the corresponding recommendation algorithms. Finally, the synthetic formula of evidence theory is used to fuse the original results and obtain the final recommended products. The recommendation performance of the proposed method is compared with that of traditional methods. The results of an experimental study of a Taobao online dress shop clearly show that the proposed method increases data mining efficiency in consumer coverage, consumer discovery accuracy, and recommendation recall. The hybrid recommendation algorithm complements the advantages of existing recommendation algorithms in data mining, and the interactive weight-assignment method meets consumer demand better and alleviates information overload. Meanwhile, our study offers important implications for e-commerce platform providers regarding the design of product recommendation systems.
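
    The "synthetic formula of evidence theory" is Dempster's rule of combination. Below is a generic sketch of fusing two recommenders' mass functions over candidate products; how the paper builds the masses from feedback weights is not specified in the record, so the inputs are illustrative.

        def dempster_combine(m1, m2):
            """m1, m2: dicts frozenset(candidates) -> mass, each summing to 1."""
            combined, conflict = {}, 0.0
            for a, w1 in m1.items():
                for b, w2 in m2.items():
                    inter = a & b
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + w1 * w2
                    else:
                        conflict += w1 * w2      # mass assigned to contradiction
            if conflict >= 1.0:
                raise ValueError("total conflict; rule undefined")
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Usage with two recommenders' beliefs over products 'a' and 'b':
        # m1 = {frozenset("a"): 0.7, frozenset("ab"): 0.3}
        # m2 = {frozenset("b"): 0.4, frozenset("ab"): 0.6}
        # fused = dempster_combine(m1, m2)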

  10. Meraculous: De Novo Genome Assembly with Short Paired-End Reads

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, Jarrod A.; Ho, Isaac; Sunkara, Sirisha; Luo, Shujun; Schroth, Gary P.; Rokhsar, Daniel S.; Salzberg, Steven L.

    2011-08-18

    We describe a new algorithm, meraculous, for whole genome assembly of deep paired-end short reads, and apply it to the assembly of a dataset of paired 75-bp Illumina reads derived from the 15.4 megabase genome of the haploid yeast Pichia stipitis. More than 95% of the genome is recovered, with no errors; half the assembled sequence is in contigs longer than 101 kilobases and in scaffolds longer than 269 kilobases. Incorporating fosmid ends recovers entire chromosomes. Meraculous relies on an efficient and conservative traversal of the subgraph of the k-mer (deBruijn) graph of oligonucleotides with unique high quality extensions in the dataset, avoiding an explicit error correction step as used in other short-read assemblers. A novel memory-efficient hashing scheme is introduced. The resulting contigs are ordered and oriented using paired reads separated by ~280 bp or ~3.2 kbp, and many gaps between contigs can be closed using paired-end placements. Practical issues with the dataset are described, and prospects for assembling larger genomes are discussed.

  11. ALGORITHM OF SAR SATELLITE ATTITUDE MEASUREMENT USING GPS AIDED BY KINEMATIC VECTOR

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, in order to improve the accuracy of Synthetic Aperture Radar (SAR) satellite attitude determination using the Global Positioning System (GPS) wide-band carrier phase, the SAR satellite attitude kinematic vector and a Kalman filter are introduced. The state-variable function of the GPS attitude determination algorithm is formulated by means of the kinematic vector, the observation function is described by the GPS wide-band carrier phase, and the Kalman filter algorithm is used to obtain the attitude variables of the SAR satellite. Comparing the simulation results of the Kalman filter algorithm with those of the least-squares algorithm and the explicit solution indicates that the Kalman filter algorithm performs best.
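
    A generic linear Kalman filter predict/update step of the kind applied here, assuming NumPy; the SAR-specific kinematic state model (F, Q) and carrier-phase observation model (H, R) are not reproduced from the paper.

        import numpy as np

        def kf_step(x, P, F, Q, H, R, z):
            # predict with the kinematic (state-transition) model
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # update with the wide-band carrier-phase observation z
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)    # innovation-weighted correction
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new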

  12. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case

  13. An efficient fractal image coding algorithm using unified feature and DCT

    International Nuclear Information System (INIS)

    Zhou Yiming; Zhang Chao; Zhang Zengke

    2009-01-01

    Fractal image compression is a promising technique for improving the efficiency of image storage and transmission with a high compression ratio; however, the huge time consumption of fractal image coding is a great obstacle to practical application. To improve fractal image coding, efficient algorithms using a special unified feature and a DCT coder are proposed in this paper. First, based on a necessary condition of the best-matching search rule during fractal image coding, a fast algorithm using a special unified feature (UFC) is presented; it reduces the search space considerably and excludes most inappropriate matching subblocks before the best-matching search. Second, building on the UFC algorithm, a DCT coder is added to construct a hybrid fractal image algorithm (DUFC) that improves the quality of the reconstructed image. Experimental results show that the proposed algorithms obtain good reconstructed image quality and need much less time than the baseline fractal coding algorithm.

  14. Use of the MULTINEST algorithm for gravitational wave data analysis

    International Nuclear Information System (INIS)

    Feroz, Farhan; Hobson, Michael P; Gair, Jonathan R; Porter, Edward K

    2009-01-01

    We describe an application of the MULTINEST algorithm to gravitational wave data analysis. MULTINEST is a multimodal nested sampling algorithm designed to efficiently evaluate the Bayesian evidence and return posterior probability densities for likelihood surfaces containing multiple secondary modes. The algorithm employs a set of 'live' points which are updated by partitioning the set into multiple overlapping ellipsoids and sampling uniformly from within them. This set of 'live' points climbs up the likelihood surface through nested iso-likelihood contours and the evidence and posterior distributions can be recovered from the point set evolution. The algorithm is model independent in the sense that the specific problem being tackled enters only through the likelihood computation, and does not change how the 'live' point set is updated. In this paper, we consider the use of the algorithm for gravitational wave data analysis by searching a simulated LISA data set containing two non-spinning supermassive black hole binary signals. The algorithm is able to rapidly identify all the modes of the solution and recover the true parameters of the sources to high precision.

  15. Improved quantum backtracking algorithms using effective resistance estimates

    Science.gov (United States)

    Jarret, Michael; Wan, Kianna

    2018-02-01

    We investigate quantum backtracking algorithms of the type introduced by Montanaro (Montanaro, arXiv:1509.02374). These algorithms explore trees of unknown structure and in certain settings exponentially outperform their classical counterparts. Some of the previous work focused on obtaining a quantum advantage for trees in which a unique marked vertex is promised to exist. We remove this restriction by recharacterizing the problem in terms of the effective resistance of the search space. In this paper, we present a generalization of one of Montanaro's algorithms to trees containing k marked vertices, where k is not necessarily known a priori. Our approach involves using amplitude estimation to determine a near-optimal weighting of a diffusion operator, which can then be applied to prepare a superposition state with support only on marked vertices and ancestors thereof. By repeatedly sampling this state and updating the input vertex, a marked vertex is reached in a logarithmic number of steps. The algorithm thereby achieves the conjectured bound of Õ(√(T·R_max)) for finding a single marked vertex and Õ(k·√(T·R_max)) for finding all k marked vertices, where T is an upper bound on the tree size and R_max is the maximum effective resistance encountered by the algorithm. This constitutes a speedup over Montanaro's original procedure in both the case of finding one and the case of finding multiple marked vertices in an arbitrary tree.

  16. Irregular Applications: Architectures & Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    2012-02-06

    Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists, and computer scientists that consider both the architecture and the software stack are likely to provide solutions to the challenges of modern irregular applications.

  17. Catheter Calibration Using Template Matching Line Interpolation Algorithm

    National Research Council Canada - National Science Library

    Nagy, L

    2001-01-01

    ..., such as image resolution, type of calibration, algorithm used for contour detection, size of the FOV, and other parameters of the image. The studied calibration method is the one using catheter size...

  18. Synthesis of Thinned Concentric Circular Antenna Arrays Using Modified TLBO Algorithm

    Directory of Open Access Journals (Sweden)

    Zailei Luo

    2015-01-01

    Full Text Available Teaching-learning-based optimization (TLBO) is a new kind of stochastic metaheuristic algorithm that has proven effective and powerful in many engineering optimization problems. This paper describes the application of a modified version of the TLBO algorithm, MTLBO, to the synthesis of thinned concentric circular antenna arrays (CCAAs). The MTLBO is adjusted for CCAA design according to the geometric arrangement of the antenna elements. CCAAs with uniform interelement spacing fixed at half a wavelength were considered for thinning using the MTLBO algorithm. For practical purposes, this paper demonstrates SLL reduction of thinned CCAAs over the whole regular and extended space rather than the phi = 0° plane alone. Uniformly and nonuniformly excited CCAAs are discussed in the simulations. The proposed MTLBO is easy to implement and requires few algorithm-specific parameters, which makes it suitable for concentric circular antenna array synthesis. Numerical results clearly show the superiority of the MTLBO algorithm in finding optimum solutions compared with the particle swarm optimization algorithm and the firefly algorithm.

  19. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low

  20. Using Genetic Algorithms for Navigation Planning in Dynamic Environments

    Directory of Open Access Journals (Sweden)

    Ferhat Uçan

    2012-01-01

    Full Text Available Navigation planning can be considered a combination of searching for and executing the most convenient flight path from an initial waypoint to a destination waypoint. Generally the aim is to follow the flight path that provides minimum fuel consumption for the air vehicle. In dynamic environments, constraints change during flight; this is a special case of dynamic path planning. As the main concern of this paper is flight planning, the conditions and objectives most likely to arise in the navigation problem are considered. In this paper, a genetic algorithm solution to the dynamic flight planning problem is explained. The evolutionary dynamic navigation planning algorithm is developed to compensate for the deficiencies of existing approaches. Existing fully dynamic algorithms process unit changes to the topology one modification at a time, but when several such operations occur in the environment simultaneously, these algorithms are quite inefficient. The proposed algorithm can respond to concurrent constraint updates in a shorter time in a dynamic environment. The most secure navigation of the air vehicle is planned and executed so that fuel consumption is minimized.

  1. Machine Learning in Production Systems Design Using Genetic Algorithms

    OpenAIRE

    Abu Qudeiri Jaber; Yamamoto Hidehiko; Rizauddin Ramli

    2008-01-01

    To create a solution for a specific problem in machine learning, the solution is constructed from the data or by using a search method. Genetic algorithms are a model of machine learning that can be used to find a near-optimal solution. While the great advantage of genetic algorithms is the fact that they find a solution through evolution, this is also their biggest disadvantage. Evolution is inductive: in nature, life does not evolve towards a good solution but evolves aw...

  2. Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm

    Science.gov (United States)

    Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali

    2013-04-01

    The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is known to be non-linear and high-dimensional, with a complex search space that may be riddled with many local minima, resulting in irregular objective functions. We investigate here the performance and application of a genetic algorithm to the inversion of seismic data. The proposed algorithm has the advantage of being easily implemented without getting stuck in local minima. The effects of population size, elitism strategy, uniform crossover, and low mutation are examined. The optimum solution parameters and performance were decided as a function of the testing error convergence with respect to the generation number. To calculate the fitness function, we used the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The crossover probability is 0.9-0.95, and mutation was tested at a probability of 0.01. The application of such a genetic algorithm to synthetic data shows that the inversion of the acoustic impedance section was effective. Keywords: seismic, inversion, acoustic impedance, genetic algorithm, fitness functions, crossover, mutation.

  3. A Collision-Free G2 Continuous Path-Smoothing Algorithm Using Quadratic Polynomial Interpolation

    Directory of Open Access Journals (Sweden)

    Seong-Ryong Chang

    2014-12-01

    Full Text Available Most path-planning algorithms are used to obtain a collision-free path without considering continuity. On the other hand, a continuous path is needed for stable movement. In this paper, the searched path was converted into a G2 continuous path using the modified quadratic polynomial and membership function interpolation algorithm. It is simple, unique and provides a good geometric interpretation. In addition, a collision-checking and improvement algorithm is proposed. The collision-checking algorithm can check the collisions of a smoothed path. If collisions are detected, the collision improvement algorithm modifies the collision path to a collision-free path. The collision improvement algorithm uses a geometric method. This method uses the perpendicular line between a collision position and the collision piecewise linear path. The sub-waypoint is added, and the QPMI algorithm is applied again. As a result, the collision-smoothed path is converted into a collision-free smooth path without changing the continuity.

  4. Algorithms and Software Architecture for the Production of DEM Data From LIDAR, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Diamond Data Systems (DDS) proposes the development of a new, advanced architecture, algorithms and software to support the end-to-end processing of LIDAR data to...

  5. Earthquake—explosion discrimination using genetic algorithm-based boosting approach

    Science.gov (United States)

    Orlic, Niksa; Loncaric, Sven

    2010-02-01

    An important and challenging problem in seismic data processing is to discriminate between natural seismic events such as earthquakes and artificial seismic events such as explosions. Many automatic techniques for seismogram classification have been proposed in the literature. Most of these methods have a similar approach to seismogram classification: a predefined set of features based on ad-hoc feature selection criteria is extracted from the seismogram waveform or spectral data and these features are used for signal classification. In this paper we propose a novel approach for seismogram classification. A specially formulated genetic algorithm has been employed to automatically search for a near-optimal seismogram feature set, instead of using ad-hoc feature selection criteria. A boosting method is added to the genetic algorithm when searching for multiple features in order to improve classification performance. A learning set of seismogram data is used by the genetic algorithm to discover a near-optimal feature set. The feature set identified by the genetic algorithm is then used for seismogram classification. The described method is developed to classify seismograms in two groups, whereas a brief overview of method extension for multiple group classification is given. For method verification, a learning set consisting of 40 local earthquake seismograms and 40 explosion seismograms was used. The method was validated on seismogram set consisting of 60 local earthquake seismograms and 60 explosion seismograms, with correct classification of 85%.

  6. Reliability Based Spare Parts Management Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Rahul Upadhyay

    2015-08-01

    Full Text Available Effective and efficient inventory management is the key to the economic sustainability of capital-intensive modern industries. Inventory grows exponentially with the complexity and size of the equipment fleet. A substantial amount of capital is required to maintain an inventory, and therefore its optimization is beneficial for smooth operation of the project at minimum inventory cost. The size, and hence the cost, of the inventory is influenced by a large number of factors, which makes the optimization problem complex. This work presents a model to solve the problem of optimizing spare parts inventory. The novelty of this study lies in the fact that the developed method can tackle not only artificial test cases but also a real-world industrial problem. Various investigators have developed methods and semi-analytical tools for obtaining optimum solutions to this problem. In this study a non-traditional optimization tool, namely genetic algorithms (GA), is utilized; in addition, Cox's regression analysis is used to incorporate the effect of some environmental factors on the demand for spares. This shows the efficacy and applicability of non-traditional optimization tools like GA for solving these problems. The research illustrates the proposed model with the analysis of data taken from a fleet of dumpers operated in a large surface coal mine. The optimum time schedules suggested by this GA-based model are found to be cost effective. A sensitivity analysis is also conducted for this industrial problem. An objective function is developed, and factors such as the effect of season and the production pressure of overloading towards the financial year-end are included in the equations. Statistical analysis of the collected operational and performance data was carried out with the help of Easy-Fit Ver-5.5. The analysis gives the shape and scale parameters of the theoretical Weibull distribution. The Cox's regression coefficient corresponding to excessive loading

  7. Using a vision cognitive algorithm to schedule virtual machines

    Directory of Open Access Journals (Sweden)

    Zhao Jiaqi

    2014-09-01

    Full Text Available Scheduling virtual machines is a major research topic in cloud computing, because it directly influences performance, operating cost, and quality of service. A large cloud center is normally equipped with several hundred thousand physical machines, and the mission of the scheduler is to select the best one to host a virtual machine. This is an NP-hard global optimization problem that poses grand challenges for researchers. This work studies the virtual machine (VM) scheduling problem in the cloud. Our primary concern in VM scheduling is energy consumption, because the largest part of a cloud center's operating cost goes to the kilowatts used. We designed a scheduling algorithm that allocates an incoming virtual machine instance to the host machine that results in the lowest energy consumption of the entire system. More specifically, we developed a new algorithm, called vision cognition, to solve the global optimization problem. This algorithm is inspired by the observation of how human eyes directly see the smallest/largest item without comparing items pairwise. We theoretically proved that the algorithm works correctly and converges fast. Practically, we validated the novel algorithm, together with the scheduling concept, using a simulation approach. The adopted cloud simulator models different cloud infrastructures with various properties and detailed runtime information that usually cannot be acquired from real clouds. The experimental results demonstrate the benefit of our approach in terms of reducing the cloud center's energy consumption.
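
    The published abstract does not detail the vision-cognition internals, but the scheduling objective itself can be shown with a minimal greedy baseline: place each incoming VM on the host whose projected power draw increases least. The linear power model and host parameters below are illustrative assumptions.

        # Sketch: energy-aware VM placement with a linear host power model.
        from dataclasses import dataclass

        @dataclass
        class Host:
            cores: int
            used: int = 0
            p_idle: float = 100.0    # watts when idle (assumed)
            p_peak: float = 250.0    # watts at full utilization (assumed)

            def power(self, extra=0):
                u = min(1.0, (self.used + extra) / self.cores)
                return self.p_idle + (self.p_peak - self.p_idle) * u

        def place(vm_cores, hosts):
            feasible = [h for h in hosts if h.used + vm_cores <= h.cores]
            best = min(feasible, key=lambda h: h.power(vm_cores) - h.power())
            best.used += vm_cores
            return best

        hosts = [Host(cores=16), Host(cores=32), Host(cores=8)]
        for vm in [4, 8, 2, 16]:
            h = place(vm, hosts)
            print(f"VM({vm} cores) -> host now at {h.used}/{h.cores} cores")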

  8. Disaggregating Within- and Between-Person Effects of Social Identification on Subjective and Endocrinological Stress Reactions in a Real-Life Stress Situation.

    Science.gov (United States)

    Ketturat, Charlene; Frisch, Johanna U; Ullrich, Johannes; Häusser, Jan A; van Dick, Rolf; Mojzisch, Andreas

    2016-02-01

    Several experimental and cross-sectional studies have established the stress-buffering effect of social identification, yet few longitudinal studies have been conducted within this area of research. This study is the first to make use of a multilevel approach to disaggregate between- and within-person effects of social identification on subjective and endocrinological stress reactions. Specifically, we conducted a study with 85 prospective students during their 1-day aptitude test for a university sports program. Ad hoc groups were formed, in which students completed several tests in various disciplines together. At four points in time, salivary cortisol, subjective strain, and identification with their group were measured. Results of multilevel analyses show a significant within-person effect of social identification: The more students identified with their group, the less stress they experienced and the lower their cortisol response was. Between-person effects were not significant. Advantages of using multilevel approaches within this field of research are discussed. © 2015 by the Society for Personality and Social Psychology, Inc.

  9. Detecting Hijacked Journals by Using Classification Algorithms.

    Science.gov (United States)

    Andoohgin Shahri, Mona; Jazi, Mohammad Davarpanah; Borchardt, Glenn; Dadkhah, Mehdi

    2018-04-01

    Invalid journals are recent challenges in the academic world and many researchers are unacquainted with the phenomenon. The number of victims appears to be accelerating. Researchers might be suspicious of predatory journals because they have unfamiliar names, but hijacked journals are imitations of well-known, reputable journals whose websites have been hijacked. Hijacked journals issue calls for papers via generally laudatory emails that delude researchers into paying exorbitant page charges for publication in a nonexistent journal. This paper presents a method for detecting hijacked journals by using a classification algorithm. The number of published articles exposing hijacked journals is limited and most of them use simple techniques that are limited to specific journals. Hence we needed to amass Internet addresses and pertinent data for analyzing this type of attack. We inspected the websites of 104 scientific journals by using a classification algorithm that used criteria common to reputable journals. We then prepared a decision tree that we used to test five journals we knew were authentic and five we knew were hijacked.
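
    A hedged sketch of the classification step is given below: a decision tree is trained on binary website features. The feature names and the tiny training table are hypothetical placeholders, not the criteria or data used in the paper.

        # Sketch: decision-tree screening of journal websites (assumed features).
        from sklearn.tree import DecisionTreeClassifier, export_text

        features = ["https_enabled", "doi_links_resolve", "domain_age_ok",
                    "editorial_board_verifiable", "upfront_fee_demanded"]
        X = [[1, 1, 1, 1, 0],   # authentic journals
             [1, 1, 1, 1, 0],
             [1, 1, 0, 1, 0],
             [0, 0, 0, 0, 1],   # hijacked journals
             [0, 0, 1, 0, 1],
             [1, 0, 0, 0, 1]]
        y = [0, 0, 0, 1, 1, 1]  # 0 = authentic, 1 = hijacked

        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(export_text(tree, feature_names=features))
        print(tree.predict([[1, 1, 1, 0, 1]]))  # screen a suspicious site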

  10. The Solving of Problems in Chemistry: the more open-ended problems

    Science.gov (United States)

    Reid, Norman; Yang, Mei-Jung

    2002-01-01

    Most problem solving in chemistry tends to be algorithmic in nature, while problems in life tend to be very open-ended. This paper offers a simple classification of problems and seeks to explore the many factors that may be important in successful problem solving. It considers the place of procedures and algorithms. It analyses the role of long-term memory, not only in terms of what is known, but how that knowledge was acquired. It notes the great importance of the limitations of working-memory space and the importance of confidence that comes from experience. Finally, various psychological factors are discussed. The paper argues that solving open-ended problems is extremely important in education and that offering learners experience of this in a group-work context is a helpful way forward.

  11. Optimal Grid Scheduling Using Improved Artificial Bee Colony Algorithm

    OpenAIRE

    T. Vigneswari; M. A. Maluk Mohamed

    2015-01-01

    Job scheduling plays an important role in the efficient utilization of grid resources available across different domains and geographical zones. Scheduling of jobs is challenging and NP-complete. Evolutionary/swarm intelligence algorithms have been used extensively to address NP problems in grid scheduling. Artificial Bee Colony (ABC) has been proposed for optimization problems based on the foraging behaviour of bees. This work proposes a modified ABC algorithm, Cluster Hete...

  12. Fixed node diffusion Monte Carlo using a genetic algorithm: a study of the CO-(4)He(N) complex, N = 1…10.

    Science.gov (United States)

    Ramilowski, Jordan A; Farrelly, David

    2012-06-14

    The diffusion Monte Carlo (DMC) method is a widely used algorithm for computing both ground and excited states of many-particle systems; for states without nodes the algorithm is numerically exact. In the presence of nodes approximations must be introduced, for example, the fixed-node approximation. Recently we have developed a genetic algorithm (GA) based approach which allows the computation of nodal surfaces on-the-fly [Ramilowski and Farrelly, Phys. Chem. Chem. Phys., 2010, 12, 12450]. Here GA-DMC is applied to the computation of rovibrational states of CO-(4)He(N) complexes with N≤ 10. These complexes have been the subject of recent high resolution microwave and millimeter-wave studies which traced the onset of microscopic superfluidity in a doped (4)He droplet, one atom at a time, up to N = 10 [Surin et al., Phys. Rev. Lett., 2008, 101, 233401; Raston et al., Phys. Chem. Chem. Phys., 2010, 12, 8260]. The frequencies of the a-type (microwave) series, which correlate with end-over-end rotation in the CO-(4)He dimer, decrease from N = 1 to 3 and then smoothly increase. This signifies the transition from a molecular complex to a quantum solvated system. The frequencies of the b-type (millimeter-wave) series, which evolves from free rotation of the rigid CO molecule, initially increase from N = 0 to N∼ 6 before starting to decrease with increasing N. An interesting feature of the b-type series, originally observed in the high resolution infra-red (IR) experiments of Tang and McKellar [J. Chem. Phys., 2003, 119, 754] is that, for N = 7, two lines are observed. The GA-DMC algorithm is found to be in good agreement with experimental results and possibly detects the small (∼0.7 cm(-1)) splitting in the b-series line at N = 7. Advantages and disadvantages of GA-DMC are discussed.

  13. Algorithmic test design using classical item parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.; Adema, Jos J.

    Two optimization models for the construction of tests with a maximal value of coefficient alpha are given. Both models have a linear form and can be solved using a branch-and-bound algorithm. The first model assumes an item bank calibrated under the Rasch model and can be used, for instance,

  14. A new collage steganographic algorithm using cartoon design

    Science.gov (United States)

    Yi, Shuang; Zhou, Yicong; Pun, Chi-Man; Chen, C. L. Philip

    2014-02-01

    Existing collage steganographic methods suffer from low payload of embedding messages. To improve the payload while providing a high level of security protection to messages, this paper introduces a new collage steganographic algorithm using cartoon design. It embeds messages into the least significant bits (LSBs) of color cartoon objects, applies different permutations to each object, and adds objects to a cartoon cover image to obtain the stego image. Computer simulations and comparisons demonstrate that the proposed algorithm shows significantly higher capacity of embedding messages compared with existing collage steganographic methods.
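
    The LSB step at the heart of the method can be illustrated with a few lines of NumPy; the object permutation and collage composition are omitted, and the random cover image is a stand-in for a cartoon object.

        # Sketch: embed message bits in least significant bits, then read back.
        import numpy as np

        def embed(cover, message):
            bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
            flat = cover.ravel().copy()
            flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
            return flat.reshape(cover.shape)

        def extract(stego, n_bytes):
            return np.packbits(stego.ravel()[:n_bytes * 8] & 1).tobytes()

        rng = np.random.default_rng(1)
        cover = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
        stego = embed(cover, b"hidden message")
        assert extract(stego, 14) == b"hidden message"
        print("max per-channel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))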

  15. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to a large number of variables carries the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research presents a comparative analysis of permafrost distribution models supported by FS variable-importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as permafrost training data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
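
    Of the three techniques, Information Gain is the simplest to state: a predictor is scored by how much it reduces the entropy of the permafrost presence/absence label. The sketch below computes it on synthetic stand-ins for a discretized predictor and the label.

        # Sketch: Information Gain scoring of discrete predictors (synthetic data).
        import numpy as np

        def entropy(labels):
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        def information_gain(feature, labels):
            gain = entropy(labels)
            for v, n in zip(*np.unique(feature, return_counts=True)):
                gain -= n / len(labels) * entropy(labels[feature == v])
            return gain

        rng = np.random.default_rng(2)
        permafrost = rng.integers(0, 2, 200)                       # presence/absence
        altitude_class = 2 * permafrost + rng.integers(0, 2, 200)  # informative predictor
        landcover = rng.integers(0, 4, 200)                        # irrelevant predictor

        for name, feat in [("altitude_class", altitude_class), ("landcover", landcover)]:
            print(name, round(information_gain(feat, permafrost), 3))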

  16. Phase Grouping Line Extraction Algorithm Using Overlapped Partition

    Directory of Open Access Journals (Sweden)

    WANG Jingxue

    2015-07-01

    Full Text Available To address line fracturing in discontinuity areas and the challenges of line fitting within each partition, an innovative line extraction algorithm based on phase grouping with overlapped partitioning is proposed. The algorithm adopts a dual partitioning step, which generates eight overlapping partitions; between the two steps, the middle axes of the first step coincide with the border lines of the other step. First, connected edge points that share the same phase gradient are merged into line candidates and fitted into line segments. Then, to remedy broken lines at the border areas, the broken segments from the second partitioning step are refitted. The proposed algorithm is robust and does not need any parameter tuning. Experiments with various datasets have confirmed that the method is not only capable of handling linear features, but also powerful enough to handle curved features.

  17. Decoding using back-project algorithm from coded image in ICF

    International Nuclear Information System (INIS)

    Jiang shaoen; Liu Zhongli; Zheng Zhijian; Tang Daoyuan

    1999-01-01

    The principle of coded imaging and its decoding in inertial confinement fusion is described briefly. The authors take the ring-aperture microscope as an example and use the back-projection (BP) algorithm to decode the coded image. The decoding program was applied in numerical simulation. Simulations of two models were made, and the results show that the accuracy of the BP algorithm is high and the reconstruction quality is good. This indicates that the BP algorithm is applicable to decoding coded images in ICF experiments.

  18. Energy prediction using spatiotemporal pattern networks

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Zhanhong; Liu, Chao; Akintayo, Adedotun; Henze, Gregor P.; Sarkar, Soumik

    2017-11-01

    This paper presents a novel data-driven technique based on the spatiotemporal pattern network (STPN) for energy/power prediction for complex dynamical systems. Built on symbolic dynamical filtering, the STPN framework is used to capture not only the individual system characteristics but also the pair-wise causal dependencies among different sub-systems. To quantify causal dependencies, a mutual information based metric is presented and an energy prediction approach is subsequently proposed based on the STPN framework. To validate the proposed scheme, two case studies are presented, one involving wind turbine power prediction (supply side energy) using the Western Wind Integration data set generated by the National Renewable Energy Laboratory (NREL) for identifying spatiotemporal characteristics, and the other, residential electric energy disaggregation (demand side energy) using the Building America 2010 data set from NREL for exploring temporal features. In the energy disaggregation context, convex programming techniques beyond the STPN framework are developed and applied to achieve improved disaggregation performance.
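
    The pair-wise causal metric can be illustrated independently of the full STPN machinery: symbolize two series by quantile binning and estimate mutual information from their joint histogram. The lagged sine series below are synthetic stand-ins for sub-system measurements.

        # Sketch: mutual information between symbolized time series.
        import numpy as np

        def symbolize(x, n_symbols=4):
            edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
            return np.digitize(x, edges)

        def mutual_information(a, b, n_symbols=4):
            joint, _, _ = np.histogram2d(a, b, bins=n_symbols)
            p = joint / joint.sum()
            px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
            nz = p > 0
            return (p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum()

        rng = np.random.default_rng(3)
        a = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)
        b = np.roll(a, 5) + 0.1 * rng.normal(size=500)      # causally related series
        noise = rng.normal(size=500)
        print("MI(a, lagged a):", round(mutual_information(symbolize(a), symbolize(b)), 3))
        print("MI(a, noise):   ", round(mutual_information(symbolize(a), symbolize(noise)), 3))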

  19. Optimization of wind farm turbines layout using an evolutive algorithm

    International Nuclear Information System (INIS)

    Gonzalez, Javier Serrano; Santos, Jesus Riquelme; Payan, Manuel Burgos; Gonzalez Rodriguez, Angel G.; Mora, Jose Castro

    2010-01-01

    The optimum wind farm configuration problem is discussed in this paper and an evolutive algorithm to optimize the wind farm layout is proposed. The algorithm's optimization process is based on a global wind farm cost model using the initial investment and the present value of the yearly net cash flow during the entire wind farm life span. The proposed algorithm calculates the yearly income due to the sale of the net generated energy, taking into account each wind turbine's loss of production due to wake decay effects. It can deal with areas or terrains with non-uniform load-bearing-capacity soil and a different roughness length for every wind direction, as well as restrictions such as forbidden areas or limits on the number of wind turbines or on the investment. The results are first compared favorably with those previously published, and a second collection of test cases is used to prove the performance and suitability of the proposed evolutive algorithm for finding the optimum wind farm configuration. (author)

  20. DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Srinivasan

    2010-11-01

    Full Text Available Monitoring the behavior and activities of people through video surveillance has gained many applications in computer vision. This paper proposes a new approach to model the human body in 2D view for activity analysis using a thinning algorithm. The first step of this work is background subtraction, which is achieved by a frame-differencing algorithm. A thinning algorithm is then used to find the skeleton of the human body. After thinning, thirteen feature points such as terminating points, intersecting points, and shoulder, elbow, and knee points are extracted. This work represents the body model in three different ways: a stick-figure model, a patch model and a rectangle body model. The activities of humans are analyzed with the help of the 2D model for pre-defined poses from monocular video data. Finally, the time consumption and efficiency of the proposed algorithm are evaluated.
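
    The skeletonization step can be reproduced with scikit-image's thinning-based skeletonize; the synthetic silhouette below stands in for a background-subtracted human mask, and feature points would then be found by counting skeleton neighbors (one neighbor marks an endpoint, more than two a junction).

        # Sketch: thinning a binary silhouette to a one-pixel-wide skeleton.
        import numpy as np
        from skimage.morphology import skeletonize

        mask = np.zeros((60, 40), dtype=bool)
        mask[5:55, 18:22] = True    # torso and legs as a thick vertical bar
        mask[15:19, 8:32] = True    # arms as a horizontal bar

        skeleton = skeletonize(mask)
        print("foreground pixels:", int(mask.sum()), "-> skeleton pixels:", int(skeleton.sum()))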

  1. An Enhanced TIMESAT Algorithm for Estimating Vegetation Phenology Metrics from MODIS Data

    Science.gov (United States)

    Tan, Bin; Morisette, Jeffrey T.; Wolfe, Robert E.; Gao, Feng; Ederer, Gregory A.; Nightingale, Joanne; Pedelty, Jeffrey A.

    2012-01-01

    An enhanced TIMESAT algorithm was developed for retrieving vegetation phenology metrics from 250 m and 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indexes (VI) over North America. MODIS VI data were pre-processed using snow-cover and land surface temperature data, and temporally smoothed with the enhanced TIMESAT algorithm. An objective third derivative test was applied to define key phenology dates and retrieve a set of phenology metrics. This algorithm has been applied to two MODIS VIs: Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). In this paper, we describe the algorithm and use EVI as an example to compare three sets of TIMESAT algorithm/MODIS VI combinations: a) original TIMESAT algorithm with original MODIS VI, b) original TIMESAT algorithm with pre-processed MODIS VI, and c) enhanced TIMESAT and pre-processed MODIS VI. All retrievals were compared with ground phenology observations, some made available through the National Phenology Network. Our results show that for MODIS data in middle to high latitude regions, snow and land surface temperature information is critical in retrieving phenology metrics from satellite observations. The results also show that the enhanced TIMESAT algorithm can better accommodate growing season start and end dates that vary significantly from year to year. The TIMESAT algorithm improvements contribute to more spatial coverage and more accurate retrievals of the phenology metrics. Among three sets of TIMESAT/MODIS VI combinations, the start of the growing season metric predicted by the enhanced TIMESAT algorithm using pre-processed MODIS VIs has the best associations with ground observed vegetation greenup dates.
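
    The third-derivative test can be demonstrated on a synthetic season: candidate transition dates are the zero crossings of the third derivative of the smoothed VI curve. The double-logistic shape below is an assumed stand-in for smoothed MODIS EVI, not actual TIMESAT output.

        # Sketch: locating phenology dates from third-derivative zero crossings.
        import numpy as np

        t = np.arange(365, dtype=float)
        evi = 0.2 + 0.5 / (1 + np.exp(-0.08 * (t - 120))) \
                  - 0.5 / (1 + np.exp(-0.08 * (t - 280)))   # one growing season

        d3 = np.gradient(np.gradient(np.gradient(evi, t), t), t)
        crossings = np.flatnonzero(np.diff(np.sign(d3)) != 0)
        print("candidate phenology dates (day of year):", crossings)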

  2. Random noise suppression of seismic data using non-local Bayes algorithm

    Science.gov (United States)

    Chang, De-Kuan; Yang, Wu-Yang; Wang, Yi-Hui; Yang, Qing; Wei, Xin-Jian; Feng, Xiao-Ying

    2018-02-01

    For random noise suppression of seismic data, we present a non-local Bayes (NL-Bayes) filtering algorithm. Instead of the weighted average over all similar patches used in the NL-means algorithm, the NL-Bayes algorithm uses a Gaussian model of the patches, reducing the blurring of structural details and thereby improving denoising performance. In the denoising of seismic data, the size and number of patches in the Gaussian model are calculated adaptively according to the standard deviation of the noise. The NL-Bayes algorithm requires two iterations to complete seismic data denoising; the second iteration uses the denoised data from the first iteration to compute better estimates of the mean and covariance of the patch Gaussian model, improving patch similarity and achieving the purpose of denoising. Tests with synthetic and real data sets demonstrate that the NL-Bayes algorithm can effectively improve the SNR and preserve the fidelity of seismic data.
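
    A compact sketch of the patch-Gaussian idea, reduced to a 1-D trace for brevity: for each patch, gather the most similar patches, fit a mean and covariance, and apply a Wiener-type shrinkage toward the group mean. The patch length, group size, and known noise level are illustrative assumptions, and real seismic sections are 2-D.

        # Sketch: simplified NL-Bayes-style patch denoising of a noisy trace.
        import numpy as np

        rng = np.random.default_rng(4)
        clean = np.sin(np.linspace(0, 12 * np.pi, 600))
        sigma = 0.3
        noisy = clean + sigma * rng.normal(size=600)

        P, K = 8, 30    # patch length, number of similar patches
        patches = np.lib.stride_tricks.sliding_window_view(noisy, P)

        def denoise_patch(i):
            d = ((patches - patches[i]) ** 2).sum(axis=1)        # patch distances
            group = patches[np.argsort(d)[:K]]                   # most similar patches
            mu = group.mean(axis=0)
            C = np.cov(group, rowvar=False)
            w = C @ np.linalg.inv(C + sigma ** 2 * np.eye(P))    # Wiener-type shrinkage
            return mu + w @ (patches[i] - mu)

        out = np.zeros_like(noisy)
        hits = np.zeros_like(noisy)
        for i in range(len(patches)):                            # aggregate overlaps
            out[i:i + P] += denoise_patch(i)
            hits[i:i + P] += 1
        out /= hits

        rmse = lambda x: np.sqrt(((x - clean) ** 2).mean())
        print("noisy RMSE:", round(rmse(noisy), 3), "denoised RMSE:", round(rmse(out), 3))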

  3. Automatic learning algorithm for the MD-logic artificial pancreas system.

    Science.gov (United States)

    Miller, Shahar; Nimri, Revital; Atlas, Eran; Grunberg, Eli A; Phillip, Moshe

    2011-10-01

    Applying real-time learning to an artificial pancreas system could effectively track the unpredictable behavior of glucose-insulin dynamics and adjust insulin treatment accordingly. We describe a novel learning algorithm and its performance when integrated into the MD-Logic Artificial Pancreas (MDLAP) system developed by the Diabetes Technology Center, Schneider Children's Medical Center of Israel, Petah Tikva, Israel. The algorithm was designed to establish an initial patient profile using open-loop data (Initial Learning Algorithm component) and then make periodic adjustments during closed-loop operation (Runtime Learning Algorithm component). The MDLAP system, integrated with the learning algorithm, was tested in seven different experiments using the University of Virginia/Padova simulator, comprising adults, adolescents, and children. The experiments included simulations using the open-loop and closed-loop control strategies under nominal and varying insulin-sensitivity conditions. The learning algorithm was automatically activated at the end of the open-loop segment and after every day of closed-loop operation. Metabolic control parameters achieved at selected time points were compared. The percentage of time glucose levels were maintained within 70-180 mg/dL for children and adolescents significantly improved from open-loop to day 6 of closed-loop control; time spent in hypoglycemia was significantly reduced by approximately sevenfold, with a significant reduction in the Low Blood Glucose Index (P<0.001). The new algorithm was effective in characterizing patient profiles from open-loop data and in adjusting treatment to provide better glycemic control during closed-loop control in both conditions. These findings warrant corroboratory clinical trials.

  4. An algorithm for 3D target scatterer feature estimation from sparse SAR apertures

    Science.gov (United States)

    Jackson, Julie Ann; Moses, Randolph L.

    2009-05-01

    We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.

  5. Using a Quadtree Algorithm To Assess Line of Sight

    Science.gov (United States)

    Gonzalez, Joseph; Chamberlain, Robert; Tailor, Eric; Gutt, Gary

    2006-01-01

    A matched pair of computer algorithms determines whether line of sight (LOS) is obstructed by terrain. These algorithms were originally designed for use in conjunction with combat-simulation software in military training exercises, but could also be used for such commercial purposes as evaluating lines of sight for antennas or determining what can be seen from a "room with a view." The quadtree preparation algorithm operates on an array of digital elevation data and only needs to be run once for a terrain region, which can be quite large. Relatively little computation time is needed, as each elevation value is considered only one and one-third times. The LOS assessment algorithm uses that quadtree to answer LOS queries. To determine whether LOS is obstructed, a piecewise-planar (or higher-order) terrain skin is computationally draped over the digital elevation data. Adjustments are made to compensate for curvature of the Earth and for refraction of the LOS by the atmosphere. Average computing time appears to be proportional to the number of queries times the logarithm of the number of elevation data points. Accuracy is as high as is possible for the available elevation data, and symmetric results are assured. In the simulation, the LOS query program runs as a separate process, thereby making more random-access memory available for other computations.

  6. Control of the lighting system using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Čongradac Velimir D.

    2012-01-01

    Full Text Available The generation, distribution and use of electricity are of fundamental importance for social life and have the greatest environmental impact of any human activity. The energy needed for building lighting makes up 20-40% of total consumption. This paper presents the development of a mathematical model and genetic algorithm for the control of dimmable lighting, applied to the problems of regulating the level of indoor lighting and increasing energy efficiency through the use of daylight. A series of experiments using the optimization algorithm on the realized model confirmed very high savings in electricity consumption.

  7. Density-independent algorithm for sensing moisture content of sawdust based on reflection measurements

    Science.gov (United States)

    A density-independent algorithm for moisture content determination in sawdust, based on a one-port reflection measurement technique, is proposed for the first time. Performance of this algorithm is demonstrated through measurement of the dielectric properties of sawdust with an open-ended half-mode s...

  8. Use of the MULTINEST algorithm for gravitational wave data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Feroz, Farhan; Hobson, Michael P [Astrophysics Group, Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Gair, Jonathan R [Institute of Astronomy, Madingley Road, Cambridge CB3 0HA (United Kingdom); Porter, Edward K [APC, UMR 7164, Universite Paris 7 Denis Diderot, 10, rue Alice Domon et Leonie Duquet, 75205 Paris Cedex 13 (France)

    2009-11-07

    We describe an application of the MULTINEST algorithm to gravitational wave data analysis. MULTINEST is a multimodal nested sampling algorithm designed to efficiently evaluate the Bayesian evidence and return posterior probability densities for likelihood surfaces containing multiple secondary modes. The algorithm employs a set of 'live' points which are updated by partitioning the set into multiple overlapping ellipsoids and sampling uniformly from within them. This set of 'live' points climbs up the likelihood surface through nested iso-likelihood contours and the evidence and posterior distributions can be recovered from the point set evolution. The algorithm is model independent in the sense that the specific problem being tackled enters only through the likelihood computation, and does not change how the 'live' point set is updated. In this paper, we consider the use of the algorithm for gravitational wave data analysis by searching a simulated LISA data set containing two non-spinning supermassive black hole binary signals. The algorithm is able to rapidly identify all the modes of the solution and recover the true parameters of the sources to high precision.

  9. TEACHING ALGORITHMIZATION AND PROGRAMMING USING PYTHON LANGUAGE

    Directory of Open Access Journals (Sweden)

    M. Lvov

    2014-07-01

    Full Text Available The article describes requirements for educational programming languages and considers the use of Python as a first programming language. The issues of introducing this language into teaching and of replacing Pascal with Python are examined, and the advantages of such an approach are considered. Popular programming languages are compared from the point of view of their convenience for teaching algorithmization and programming. Python supports many programming paradigms: structural, object-oriented, functional, imperative and aspect-oriented, and learning can be started without any preparation. A further advantage of the language is that all algorithms are written easily and structurally in Python. Therefore, it is possible to affirm that Python is a worthy candidate to replace Pascal as an educational programming language, both at schools and in the first years of higher education.

  10. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    Science.gov (United States)

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Unit (GPU). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. In AIDW, it needs to find several nearest neighboring data points for each interpolated point to adaptively determine the power parameter; and then the desired prediction value of the interpolated point is obtained by weighted interpolating using the power parameter. In this work, we develop a fast kNN search approach based on the space-partitioning data structure, even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolating. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
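
    The two stages of the algorithm, kNN search followed by adaptively weighted interpolation, can be sketched on the CPU as below. A KD-tree stands in for the paper's even-grid search, and the mapping from local point density to the power parameter is an illustrative assumption.

        # Sketch: adaptive IDW with kNN search (KD-tree instead of an even grid).
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(5)
        pts = rng.random((500, 2))
        vals = np.sin(3 * pts[:, 0]) + np.cos(3 * pts[:, 1])
        tree = cKDTree(pts)

        def aidw(queries, k=10):
            dist, idx = tree.query(queries, k=k)
            density = dist[:, -1]                    # radius enclosing k neighbors
            power = np.clip(1.0 + 4.0 * density / density.max(), 1.0, 5.0)
            w = 1.0 / np.maximum(dist, 1e-12) ** power[:, None]
            return (w * vals[idx]).sum(axis=1) / w.sum(axis=1)

        q = rng.random((5, 2))
        print("interpolated:", np.round(aidw(q), 3))
        print("true surface:", np.round(np.sin(3 * q[:, 0]) + np.cos(3 * q[:, 1]), 3))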

  11. A decoupled power flow algorithm using particle swarm optimization technique

    International Nuclear Information System (INIS)

    Acharjee, P.; Goswami, S.K.

    2009-01-01

    A robust, nondivergent power flow method has been developed using the particle swarm optimization (PSO) technique. The decoupling properties between the power system quantities have been exploited in developing the power flow algorithm, and its speed has been improved using a simple perturbation technique. The basic power flow algorithm and the improvement scheme have been designed to retain the simplicity of the evolutionary approach. The power flow method is rugged, can determine critical loading conditions, and can handle flexible alternating current transmission system (FACTS) devices efficiently. Test results on standard test systems show that the proposed method can find the solution when standard power flows fail.
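
    The basic idea can be sketched by letting a particle swarm minimize the power-flow mismatch directly. The two-bus test system and the PSO constants below are illustrative assumptions, and the decoupling and perturbation refinements of the paper are omitted.

        # Sketch: PSO minimizing the power mismatch of a two-bus system (p.u.).
        import numpy as np

        y = 1.0 / (0.02 + 0.08j)        # line admittance
        v1 = 1.0 + 0j                   # slack bus voltage
        s2_spec = -(0.8 + 0.4j)         # specified load at bus 2

        def mismatch(x):                # x = [|V2|, angle2 (rad)]
            v2 = x[0] * np.exp(1j * x[1])
            s2 = v2 * np.conj((v2 - v1) * y)
            return abs(s2 - s2_spec)

        rng = np.random.default_rng(6)
        pos = rng.uniform([0.8, -0.5], [1.1, 0.5], size=(30, 2))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pcost = np.array([mismatch(p) for p in pos])
        gbest = pbest[pcost.argmin()]

        for _ in range(200):
            r1, r2 = rng.random((2, 30, 1))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos += vel
            cost = np.array([mismatch(p) for p in pos])
            better = cost < pcost
            pbest[better], pcost[better] = pos[better], cost[better]
            gbest = pbest[pcost.argmin()]

        print(f"|V2| = {gbest[0]:.4f} p.u., angle = {np.degrees(gbest[1]):.3f} deg, "
              f"mismatch = {mismatch(gbest):.2e}")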

  12. Solving the SAT problem using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Arunava Bhattacharjee

    2017-08-01

    Full Text Available In this paper we propose a genetic algorithm for solving the SAT problem. We introduce various crossover and mutation techniques and then make a comparative analysis between them in order to find out which techniques are best suited to solving a SAT instance. Before the genetic algorithm is applied to an instance, it is better to search for unit and pure literals in the given formula and eliminate them. This can considerably reduce the search space, and to demonstrate this we tested our algorithm on some random SAT instances. To analyse the various crossover and mutation techniques and to evaluate the optimality of our algorithm, we also performed extensive experiments on benchmark instances of the SAT problem. We further estimated the ideal crossover length that maximises the chances of solving a given SAT instance.
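
    The recommended preprocessing is easy to sketch: repeatedly fix unit literals (single-literal clauses) and pure literals (appearing with only one polarity), shrinking the formula before the GA runs. Clauses are tuples of signed integers in the DIMACS style; conflict handling is omitted for brevity.

        # Sketch: unit- and pure-literal elimination on a CNF formula.
        def simplify(clauses):
            assignment = {}
            changed = True
            while changed:
                changed = False
                units = {c[0] for c in clauses if len(c) == 1}
                lits = {l for c in clauses for l in c}
                pures = {l for l in lits if -l not in lits}
                for lit in units | pures:
                    assignment[abs(lit)] = lit > 0
                    # drop satisfied clauses, strip falsified literals
                    clauses = [tuple(l for l in c if l != -lit)
                               for c in clauses if lit not in c]
                    changed = True
            return clauses, assignment

        cnf = [(1, 2), (-1, 3), (2, -3, 4), (-4,), (5, 2)]
        remaining, forced = simplify(cnf)
        print("forced assignments:", forced)
        print("clauses left for the GA:", remaining)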

  13. Preprocessing Algorithm for Deciphering Historical Inscriptions Using String Metric

    Directory of Open Access Journals (Sweden)

    Lorand Lehel Toth

    2016-07-01

    Full Text Available The article presents improvements to the preprocessing part of a deciphering method (in short, the preprocessing algorithm) for historical inscriptions of unknown origin. The glyphs used in historical inscriptions changed over time; therefore, various versions of the same script may contain different glyphs for each grapheme. The purpose of the preprocessing algorithm is to reduce the running time of the deciphering process by filtering out the less probable interpretations of the examined inscription. However, the first version of the preprocessing algorithm produced incorrect outcomes, or no result at all, in certain cases. Therefore, an improved version was developed to find the most similar words in the dictionary by formulating the search conditions more accurately, while remaining computationally efficient. Moreover, a sophisticated similarity metric used to determine the possible meaning of the unknown inscription is introduced. The results of the evaluations are also detailed.
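
    As an illustration of string-metric ranking, the sketch below scores dictionary words by normalized Levenshtein similarity to a transliterated reading. Plain Levenshtein stands in for the paper's more sophisticated metric, and the reading and word list are hypothetical.

        # Sketch: ranking dictionary words by normalized edit-distance similarity.
        def levenshtein(a, b):
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                 # deletion
                                   cur[j - 1] + 1,              # insertion
                                   prev[j - 1] + (ca != cb)))   # substitution
                prev = cur
            return prev[-1]

        def similarity(a, b):
            return 1 - levenshtein(a, b) / max(len(a), len(b), 1)

        reading = "szekel"                                       # hypothetical transliteration
        dictionary = ["szekely", "szeker", "szel", "kelet"]      # hypothetical candidates
        for word in sorted(dictionary, key=lambda w: -similarity(reading, w)):
            print(word, round(similarity(reading, word), 2))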

  14. Expeditious 3D Poisson-Vlasov algorithm applied to ion extraction from a plasma

    International Nuclear Information System (INIS)

    Whealton, J.H.; McGaffey, R.W.; Meszaros, P.S.

    1983-01-01

    A new 3D Poisson-Vlasov algorithm is under development which differs from a previous algorithm, referenced in this paper, in two respects: the mesh lines are Cartesian, and the Poisson equation is solved iteratively. The resulting algorithm has been used to examine the same boundary value problem as considered with the earlier algorithm, except that the number of nodes is 2 times greater. The same physical results were obtained, but the computational time was reduced by a factor of 60 and the memory requirement by a factor of 10. At present the algorithm restricts Neumann boundary conditions to orthogonal planes lying along mesh lines; no such restriction applies to Dirichlet boundaries. An emittance diagram is presented in which points lying on the y = 0 line start on the axis of symmetry and those near the y = 1 line start near the slot end.

  15. Estimating the chance of success in IVF treatment using a ranking algorithm.

    Science.gov (United States)

    Güvenir, H Altay; Misirli, Gizem; Dilbaz, Serdar; Ozdegirmenci, Ozlem; Demir, Berfu; Dilbaz, Berna

    2015-09-01

    In medicine, estimating the chance of success of a treatment is important in deciding whether to begin it. This paper focuses on the domain of in vitro fertilization (IVF), where estimating the outcome of treatment is crucial in the decision to proceed, for both the clinicians and the infertile couples. IVF treatment is a stressful and costly process, especially for couples who want to have a baby; if an initial evaluation indicates a low pregnancy rate, the couple may decide not to start IVF treatment at all. The aim of this study is twofold: first, to develop a technique that can be used to estimate the chance of success for a couple who wants to have a baby, and second, to determine the attributes, and their particular values, that affect the outcome of IVF treatment. We propose a new technique, called success estimation using a ranking algorithm (SERA), for estimating the success of a treatment using a ranking-based algorithm. The particular ranking algorithm used here is RIMARC. The performance of the new algorithm is compared with two well-known algorithms that assign class probabilities to query instances: the Naïve Bayes classifier and Random Forest. The comparison is done in terms of area under the ROC curve, accuracy and execution time, using tenfold stratified cross-validation. The results indicate that the proposed SERA algorithm has the potential to be used successfully to estimate the probability of success in medical treatment.

  16. End-Use Efficiency to Lower Carbon Emissions

    International Nuclear Information System (INIS)

    Marnay, Chris; Osborn, Julie; Webber, Carrie

    2001-01-01

    Compelling evidence demonstrating the warming trend in global temperatures and the mechanism behind it, namely the anthropogenic emissions of carbon dioxide and other greenhouse gases (GHG), has spurred an international effort to reduce emissions of these gases. Despite improving efficiency of the U.S. economy in terms of energy cost per dollar of GDP since the signing of the Kyoto Protocol, energy consumption and carbon emissions are continuing to rise as the economy expands. This growing gap further emphasizes the importance of improving energy use efficiency as a component in the U.S. climate change mitigation program. The end-use efficiency research activities at Berkeley Lab incorporate residential, commercial, industrial, and transportation sectors. This paper focuses on two successful U.S. programs that address end-use efficiency in residential and commercial demand: energy efficient performance standards established by the Department of Energy (DOE) and the Environmental Protection Agency's (EPA's) ENERGY STAR® program

  17. Algorithms for Zero-Dimensional Ideals Using Linear Recurrent Sequences

    DEFF Research Database (Denmark)

    Neiger, Vincent; Rahkooy, Hamid; Schost, Éric

    2017-01-01

    Inspired by Faugère and Mou's sparse FGLM algorithm, we show how using linear recurrent multi-dimensional sequences can allow one to perform operations such as the primary decomposition of an ideal, by computing the annihilator of one or several such sequences.

  18. A disaggregated analysis of the environmental Kuznets curve for industrial CO2 emissions in China

    International Nuclear Information System (INIS)

    Wang, Yuan; Zhang, Chen; Lu, Aitong; Li, Li; He, Yanmin; ToJo, Junji; Zhu, Xiaodong

    2017-01-01

    Highlights: • The existence of the EKC hypothesis for industrial carbon emissions is tested for China. • A semi-parametric panel regression is used along with the STIRPAT model. • The validity of the EKC hypothesis varies across industry sectors. • The EKC relation to income exists in the electricity and heat production sector. • The EKC relation to urbanization exists in the manufacturing sector. - Abstract: The present study concentrates on the Chinese context and attempts to explicitly examine the impacts of economic growth and urbanization on various industrial carbon emissions by investigating the existence of an environmental Kuznets curve. Within the Stochastic Impacts by Regression on Population, Affluence and Technology framework, this is the first attempt to simultaneously explore the nexus between income/urbanization and disaggregated industrial carbon dioxide emissions, using panel data together with semi-parametric panel fixed-effects regression. Our dataset is a provincial panel for China spanning the period 2000-2013. With this information, we find evidence in support of an inverted U-shaped relationship between economic growth and carbon dioxide emissions in the electricity and heat production sector, but a similar inference only for urbanization and those emissions in the manufacturing sector. The heterogeneity in the EKC relationship across industry sectors implies an urgent need to design more specific carbon-emissions-reduction policies for the various industry sectors. These findings also contribute to advancing the emerging literature on the development-pollution nexus.

  19. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    Science.gov (United States)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite, which was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), to be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward-modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  20. An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects.

    Science.gov (United States)

    Kim, Jinkwon; Min, Se Dong; Lee, Myoungho

    2011-06-27

    Numerous studies of heartbeat classification algorithms have been conducted over the past several decades. However, many algorithms have also been studied to achieve robust performance, as biosignals vary greatly among individuals. Various methods have been proposed to reduce the differences arising from personal characteristics, but these methods enlarge the differences caused by arrhythmia. In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation by using wavelets dedicated to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. Principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets, and an extreme learning machine was used as the classifier. A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed from physicians.