WorldWideScience

Sample records for pre-launch algorithm development

  1. Aquarius Salinity Retrieval Algorithm: Final Pre-Launch Version

    Science.gov (United States)

    Wentz, Frank J.; Le Vine, David M.

    2011-01-01

    This document provides the theoretical basis for the Aquarius salinity retrieval algorithm. The inputs to the algorithm are the Aquarius antenna temperature (T(sub A)) measurements along with a number of NCEP operational products and pre-computed tables of space radiation coming from the galaxy and sun. The output is sea-surface salinity and many intermediate variables required for the salinity calculation. This revision of the Algorithm Theoretical Basis Document (ATBD) is intended to be the final pre-launch version.
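
    The record above describes the inputs and outputs of the retrieval but not its internals. As a rough illustration of the general idea of inverting a forward model of brightness temperature for salinity, the sketch below uses a made-up, monotonic toy forward model and a Newton iteration; it is not the Aquarius algorithm or its geophysical model functions.

    ```python
    def forward_tb(sss, sst_k=290.0, wind_ms=5.0):
        """Toy forward model: L-band brightness temperature (K) versus
        sea-surface salinity (psu). A smooth, monotonic stand-in only;
        NOT the Aquarius geophysical model function."""
        return 115.0 - 0.25 * sss + 0.05 * (sst_k - 290.0) + 0.2 * wind_ms

    def retrieve_sss(tb_obs, sst_k=290.0, wind_ms=5.0, x0=35.0, tol=1e-9):
        """Invert the toy forward model for salinity with Newton iterations,
        using a numerical derivative of TB with respect to salinity."""
        x, eps = x0, 1e-3
        for _ in range(50):
            resid = forward_tb(x, sst_k, wind_ms) - tb_obs
            slope = (forward_tb(x + eps, sst_k, wind_ms)
                     - forward_tb(x - eps, sst_k, wind_ms)) / (2.0 * eps)
            step = resid / slope
            x -= step
            if abs(step) < tol:
                break
        return x

    tb_obs = forward_tb(34.2)      # simulate an "observation"
    print(retrieve_sss(tb_obs))    # recovers ~34.2 psu
    ```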

  2. The pre-launch status of TanSat Mission: Instrument, Retrieval algorithm, Flux inversion and Validation

    Science.gov (United States)

    Liu, Yi; Yin, Zengshan; Yang, Zhongdong; Zheng, Yuquan; Yan, Changxiang; Tian, Xiangjun; Yang, Dongxu

    2016-04-01

    After 5 years of development, the Chinese carbon dioxide observation satellite (TanSat), China's first scientific experimental CO2 satellite, has entered the pre-launch phase. The characteristics of the pre-launch carbon dioxide spectrometer were optimized during laboratory testing and calibration. Radiometric calibration shows average SNRs of 440 (O2A 0.76 μm band), 300 (CO2 1.61 μm band) and 180 (CO2 2.06 μm band) under typical radiance conditions. The instrument line shape was calibrated automatically using a well-designed test system with laser control and recording. After a series of laboratory tests and calibrations, the instrument performance meets the design requirements. TanSat will be launched in August 2016. The TanSat XCO2 retrieval algorithm applies optimal estimation theory in a full-physics approach, simulating radiative transfer in the atmosphere. Gas absorption, aerosol and cirrus scattering, and surface reflectance associated with wavelength dispersion are considered in the inversion to better correct the interference errors in XCO2. In order to simulate radiative transfer precisely and efficiently, we developed a fast vector radiative transfer simulation method. Application of the TanSat algorithm to GOSAT observations (ATANGO) is used to evaluate the performance of the algorithm. Validated against TCCON measurements, the ATANGO product achieves a precision of 1.5 ppm. A Chinese carbon cycle data-assimilation system, Tan-Tracker, has been developed based on the atmospheric chemical transport model GEOS-Chem. Tan-Tracker is a dual-pass data-assimilation system in which both CO2 concentrations and CO2 fluxes are simultaneously assimilated from atmospheric observations. A validation network has been established around China to support a series of Chinese CO2 satellites; it includes 3 IFS-125HR spectrometers and 4 optical spectrum analyzers, among other instruments.
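
    The record cites an optimal-estimation, full-physics retrieval. As a minimal sketch of the optimal-estimation (MAP) update such retrievals are built around, the following assumes a linear toy forward model y = Kx with hypothetical prior and noise covariances; it is not the TanSat retrieval code or its radiative transfer model.

    ```python
    import numpy as np

    # Toy dimensions: 3 state elements (e.g., a CO2 scaling, an aerosol and an
    # albedo term), 5 synthetic radiance samples. Purely illustrative.
    rng = np.random.default_rng(0)
    K = rng.normal(size=(5, 3))            # Jacobian of the forward model
    x_true = np.array([1.02, 0.30, 0.25])  # "true" state
    x_a = np.array([1.00, 0.20, 0.20])     # prior state
    S_a = np.diag([0.02, 0.05, 0.05]) ** 2 # prior covariance
    S_e = (0.01 ** 2) * np.eye(5)          # measurement noise covariance

    y = K @ x_true + rng.normal(scale=0.01, size=5)

    def oe_retrieval(y, K, x_a, S_a, S_e):
        """Linear optimal-estimation (MAP) retrieval."""
        S_a_inv = np.linalg.inv(S_a)
        S_e_inv = np.linalg.inv(S_e)
        S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)   # posterior covariance
        x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)  # posterior mean
        return x_hat, S_hat

    x_hat, S_hat = oe_retrieval(y, K, x_a, S_a, S_e)
    print(x_hat)  # pulled from the prior towards x_true
    ```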

  3. A Pre-launch Analysis of NASA's SMAP Mission Data

    Science.gov (United States)

    Escobar, V. M.; Brown, M. E.

    2012-12-01

    an email-based review of expert end-users and earth science researchers to elicit how pre-launch activities and research are being conducted in the thematic groups' organizations. Our focus through the SMAP Applications Program will be to (1) improve the mission's understanding of the SMAP user community requirements, (2) document and communicate the perceived challenges and advantages to the mission scientists, and (3) facilitate the movement of science into policy and decision-making arenas. We will analyze the data from this review to understand the perceived benefits of pre-launch efforts and user engagement, and to define areas where the connection between science development and user engagement can continue to improve and further benefit future mission pre-launch efforts. The research will facilitate collaborative opportunities between agencies, broadening the fields of science where soil moisture observation data can be applied.

  4. Pre-Launch Algorithm and Data Format for the Level 1 Calibration Products for the EOS AM-1 Moderate Resolution Imaging Spectroradiometer (MODIS)

    Science.gov (United States)

    Guenther, Bruce W.; Godden, Gerald D.; Xiong, Xiao-Xiong; Knight, Edward J.; Qiu, Shi-Yue; Montgomery, Harry; Hopkins, M. M.; Khayat, Mohammad G.; Hao, Zhi-Dong; Smith, David E. (Technical Monitor)

    2000-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) radiometric calibration product is described for the thermal emissive and the reflective solar bands. Specific sensor design characteristics are identified to assist in understanding how the calibration algorithm software product is designed. Both the radiance and reflectance factor software products for the reflective solar bands are described. The product file format is summarized, and the MODIS Characterization Support Team (MCST) Homepage location for the current file format is provided.
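
    A hedged sketch of the generic Level 1 conversions such a calibration product supports: scaling DN to radiance with a linear gain/offset, and forming a reflectance factor from radiance, solar irradiance, Earth-Sun distance and solar zenith angle. The coefficient names and values here are hypothetical, not the MODIS/MCST LUT quantities.

    ```python
    import numpy as np

    def dn_to_radiance(dn, scale, offset):
        """Generic linear calibration: L = scale * (DN - offset).
        'scale' and 'offset' stand in for band/detector-dependent
        coefficients; not the actual MODIS L1B LUT names."""
        return scale * (np.asarray(dn, dtype=float) - offset)

    def radiance_to_reflectance_factor(radiance, esun, d_au, cos_sza):
        """Reflectance factor rho = pi * L * d^2 / (Esun * cos(theta_s))."""
        return np.pi * radiance * d_au ** 2 / (esun * cos_sza)

    dn = np.array([812, 930, 1045])
    L = dn_to_radiance(dn, scale=0.02, offset=316.97)   # illustrative units
    rho = radiance_to_reflectance_factor(
        L, esun=1847.0, d_au=1.0, cos_sza=np.cos(np.radians(30)))
    print(L, rho)
    ```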

  5. Apollo Director Phillips Monitors Apollo 11 Pre-Launch Activities

    Science.gov (United States)

    1969-01-01

    From the Kennedy Space Center (KSC) control room, Apollo Program Director Lieutenant General Samuel C. Phillips monitors pre-launch activities for Apollo 11. The Apollo 11 mission, the first lunar landing mission, launched from KSC in Florida via the Marshall Space Flight Center (MSFC)-developed Saturn V launch vehicle on July 16, 1969, and safely returned to Earth on July 24, 1969. Aboard the spacecraft were astronauts Neil A. Armstrong, commander; Michael Collins, Command Module (CM) pilot; and Edwin E. (Buzz) Aldrin Jr., Lunar Module (LM) pilot. The CM, 'Columbia', piloted by Collins, remained in a parking orbit around the Moon while the LM, 'Eagle', carrying astronauts Armstrong and Aldrin, landed on the Moon. On July 20, 1969, Armstrong became the first human ever to stand on the lunar surface, followed by Aldrin. During 2½ hours of surface exploration, the crew collected 47 pounds of lunar surface material for analysis back on Earth. With the success of Apollo 11, the national objective to land men on the Moon and return them safely to Earth had been accomplished.

  6. JPSS-1 VIIRS pre-launch radiometric performance

    Science.gov (United States)

    Oudrari, Hassan; McIntire, Jeff; Xiong, Xiaoxiong; Butler, James; Efremova, Boryana; Ji, Qiang; Lee, Shihyan; Schwarting, Tom

    2015-09-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) on-board the first Joint Polar Satellite System (JPSS) completed its sensor-level testing in December 2014. The JPSS-1 (J1) mission is scheduled to launch in December 2016, and will be very similar to the Suomi-National Polar-orbiting Partnership (SNPP) mission. The VIIRS instrument was designed to provide measurements of the globe twice daily. It is a wide-swath (3,040 km) cross-track scanning radiometer with spatial resolutions of 370 and 740 m at nadir for imaging and moderate bands, respectively. It covers the wavelength spectrum from reflective to long-wave infrared through 22 spectral bands [0.412 μm to 12.01 μm]. VIIRS observations are used to generate 22 environmental data records (EDRs). This paper briefly describes J1 VIIRS characterization and calibration performance and the methodologies executed during the pre-launch testing phases by the independent government team to generate the at-launch baseline radiometric performance and the metrics needed to populate the sensor data record (SDR) Look-Up-Tables (LUTs). This paper also provides an assessment of the sensor pre-launch radiometric performance, such as the sensor signal-to-noise ratios (SNRs), dynamic range, reflective and emissive band calibration performance, polarization sensitivity, band spectral performance, response-versus-scan (RVS), and near-field and stray light responses. A set of performance metrics generated during the pre-launch testing program is compared to the SNPP VIIRS pre-launch performance.

  7. Evaluation of Anomaly Detection Capability for Ground-Based Pre-Launch Shuttle Operations. Chapter 8

    Science.gov (United States)

    Martin, Rodney Alexander

    2010-01-01

    This chapter will provide a thorough end-to-end description of the process for evaluating three different data-driven algorithms for anomaly detection to select the best candidate for deployment as part of a suite of IVHM (Integrated Vehicle Health Management) technologies. These algorithms were deemed sufficiently mature to be considered viable candidates for deployment in support of the maiden launch of Ares I-X, the successor to the Space Shuttle for NASA's Constellation program. Data-driven algorithms are just one of three different types being deployed. The other two types of algorithms being deployed include a "rule-based" expert system and a "model-based" system. Within these two categories, the deployable candidates have already been selected based upon qualitative factors such as flight heritage. For the rule-based system, SHINE (Spacecraft High-speed Inference Engine) has been selected for deployment; it is a component of BEAM (Beacon-based Exception Analysis for Multimissions), a patented technology developed at NASA's JPL (Jet Propulsion Laboratory), and serves to aid in the management and identification of operational modes. For the "model-based" system, a commercially available package developed by QSI (Qualtech Systems, Inc.), TEAMS (Testability Engineering and Maintenance System), has been selected for deployment to aid in diagnosis. In the context of this particular deployment, distinctions among the use of the terms "data-driven," "rule-based," and "model-based" can be found in. Although there are three different categories of algorithms that have been selected for deployment, our main focus in this chapter will be on the evaluation of the three candidates for data-driven anomaly detection. These algorithms will be evaluated on their capability for robustly detecting incipient faults or failures in the ground-based phase of pre-launch space shuttle operations, rather than based on heritage as in previous studies. Robust
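
    As a sketch of how candidate data-driven detectors can be scored against labeled nominal/anomalous ground data, the code below computes a ROC AUC for a simple z-score detector on synthetic data. The detector and metric are illustrative stand-ins, not the algorithms or evaluation criteria used in the chapter.

    ```python
    import numpy as np

    def roc_auc(scores, labels):
        """Area under the ROC curve via the rank-sum identity.
        'scores' are anomaly scores (higher = more anomalous); 'labels' are
        1 for anomalous samples, 0 for nominal. Assumes continuous scores."""
        scores, labels = np.asarray(scores, float), np.asarray(labels, int)
        order = np.argsort(scores)
        ranks = np.empty(len(scores))
        ranks[order] = np.arange(1, len(scores) + 1)
        n_pos = labels.sum()
        n_neg = len(labels) - n_pos
        return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

    def zscore_detector(train, test):
        """Score each test sample by its largest per-feature z-score."""
        mu, sd = train.mean(axis=0), train.std(axis=0) + 1e-9
        return np.abs((test - mu) / sd).max(axis=1)

    rng = np.random.default_rng(1)
    nominal = rng.normal(size=(200, 4))
    test = np.vstack([rng.normal(size=(50, 4)), rng.normal(loc=4.0, size=(10, 4))])
    labels = np.r_[np.zeros(50), np.ones(10)]
    print(roc_auc(zscore_detector(nominal, test), labels))
    ```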

  8. Design and Flight Performance of the Orion Pre-Launch Navigation System

    Science.gov (United States)

    Zanetti, Renato

    2016-01-01

    Launched in December 2014 atop a Delta IV Heavy from the Kennedy Space Center, the Orion vehicle's Exploration Flight Test-1 (EFT-1) successfully completed the objective to test the prelaunch and entry components of the system. Orion's pre-launch absolute navigation design is presented, together with its EFT-1 performance.

  9. JPSS-1 VIIRS Radiometric Characterization and Calibration Based on Pre-Launch Testing

    Directory of Open Access Journals (Sweden)

    Hassan Oudrari

    2016-01-01

    Full Text Available The Visible Infrared Imaging Radiometer Suite (VIIRS) on-board the first Joint Polar Satellite System (JPSS) completed its sensor-level testing in December 2014. The JPSS-1 (J1) mission is scheduled to launch in December 2016, and will be very similar to the Suomi-National Polar-orbiting Partnership (SNPP) mission. The VIIRS instrument has 22 spectral bands covering the spectrum between 0.4 and 12.6 μm. It is a cross-track scanning radiometer capable of providing global measurements twice daily, through observations at two spatial resolutions, 375 m and 750 m at nadir for the imaging and moderate bands, respectively. This paper briefly describes J1 VIIRS characterization and calibration performance and the methodologies executed during the pre-launch testing phases by the government independent team to generate the at-launch baseline radiometric performance and the metrics needed to populate the sensor data record (SDR) Look-Up-Tables (LUTs). This paper also provides an assessment of the sensor pre-launch radiometric performance, such as the sensor signal-to-noise ratios (SNRs), radiance dynamic range, reflective and emissive band calibration performance, polarization sensitivity, spectral performance, response-versus-scan (RVS), and scattered light response. A set of performance metrics generated during the pre-launch testing program is compared to both the VIIRS sensor specification and the SNPP VIIRS pre-launch performance.

  10. Pre-Launch Calibration and Performance Study of the PolarCube 3U Temperature Sounding Radiometer Mission

    Science.gov (United States)

    Periasamy, L.; Gasiewski, A. J.; Sanders, B. T.; Rouw, C.; Alvarenga, G.; Gallaher, D. W.

    2016-12-01

    The positive impact of passive microwave observations of tropospheric temperature, water vapor and surface variables on short-term weather forecasts has been clearly demonstrated in recent forecast anomaly growth studies. The development of a fleet of such passive microwave sensors, especially at V-band and higher frequencies, in low earth orbit using 3U and 6U CubeSats could help accomplish the aforementioned objectives at low system cost and risk as well as provide for regularly updated radiometer technology. The University of Colorado's 3U CubeSat, PolarCube, is intended to serve as a demonstrator for such a fleet of passive sounders and imagers. PolarCube supports MiniRad, an eight-channel, double-sideband 118.7503 GHz passive microwave sounder. The mission is focused primarily on sounding in Arctic and Antarctic regions with the following key remote sensing science and engineering objectives: (i) Collect coincident tropospheric temperature profiles above sea ice, open polar ocean, and partially open areas to develop joint sea ice concentration and lower tropospheric temperature mapping capabilities in clear and cloudy atmospheric conditions. This goal will be accomplished in conjunction with data from existing passive microwave sensors operating at complementary bands; and (ii) Assess the capabilities of small passive microwave satellite sensors for environmental monitoring in support of the future development of inexpensive Earth science missions. Performance data of the payload/spacecraft from pre-launch calibration will be presented. This will include: (i) characterization of the antenna sub-system, comprising an offset 3D-printed feedhorn and spinning parabolic reflector, and the impact of the antenna efficiencies on radiometer performance, (ii) characterization of MiniRad's RF front-end and IF back-end with respect to temperature fluctuations and their impact on atmospheric temperature weighting functions and receiver sensitivity, (iii) results from roof
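
    One of the quantities assessed in such a calibration is receiver sensitivity. The sketch below evaluates the standard total-power radiometer sensitivity relation with an optional gain-fluctuation term; the numbers are illustrative and are not MiniRad's actual system temperature, bandwidth or integration time.

    ```python
    import numpy as np

    def nedt(t_sys_k, bandwidth_hz, integration_s, gain_fluct=0.0):
        """Radiometer sensitivity (NEdT) for an ideal total-power receiver,
        with an optional fractional gain-fluctuation term:
            dT = T_sys * sqrt(1/(B*tau) + (dG/G)**2)
        Values used below are illustrative, not MiniRad specifications."""
        return t_sys_k * np.sqrt(1.0 / (bandwidth_hz * integration_s) + gain_fluct ** 2)

    print(nedt(t_sys_k=800.0, bandwidth_hz=400e6, integration_s=5e-3))  # ~0.57 K
    ```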

  11. Pre-Launch Assessment of User Needs for SWOT Mission Data Products

    Science.gov (United States)

    Srinivasan, M. M.; Peterson, C. A.; Doorn, B.

    2015-12-01

    In order to effectively address the applications requirements of future Surface Water and Ocean Topography (SWOT) mission data users, we must understand their needs with respect to latency, spatial scales, technical capabilities, and other practical considerations. We have developed the 1st SWOT User Survey for broad distribution to the SWOT applications community, to provide the SWOT Project with an understanding of, and improved ability to support, users' needs. Actionable knowledge for specific applications may be realized when we can determine the margins of user requirements for data products and access. The SWOT Applications team will be launching a SWOT Early Adopters program and is interested in identifying a broad community of users who will participate in pre-launch applications activities including meetings, briefings, and workshops. The SWOT applications program is designed to connect mission scientists to end users and to leverage scientific research and data management tools with operational decision-making for different thematic users and data requirements. SWOT is scheduled to launch in 2020, so simulated hydrology and ocean data sets have been and will continue to be developed by science team members and the SWOT Project in order to determine how the data will represent the physical Earth systems targeted by the mission. SWOT will produce the first global survey of Earth's surface water by measuring sea surface height and the heights, slopes, and inundated areas of rivers, lakes, and wetlands. These coastal, lake and river measurements will be used for monitoring the hydrologic cycle, flooding, and climate impacts of a changing environment. The oceanographic measurements will enhance understanding of submesoscale processes and extend the capabilities of ocean state and climate prediction models.

  12. The Thermal Infrared Sensor (TIRS) on Landsat 8: Design Overview and Pre-Launch Characterization

    Directory of Open Access Journals (Sweden)

    Dennis C. Reuter

    2015-01-01

    Full Text Available The Thermal Infrared Sensor (TIRS) on Landsat 8 is the latest thermal sensor in that series of missions. Unlike the previous single-channel sensors, TIRS uses two channels to cover the 10–12.5 micron band. It is also a pushbroom imager, a departure from the previous whiskbroom approach. Nevertheless, the instrument requirements are defined such that data continuity is maintained. This paper describes the design of the TIRS instrument and the results of pre-launch calibration measurements, and shows an example of initial on-orbit science performance compared to Landsat 7.

  13. Pre-Launch Absolute Calibration of CCD/CBERS-2B Sensor

    Science.gov (United States)

    Ponzoni, Flávio Jorge; Albuquerque, Bráulio Fonseca Carneiro

    2008-01-01

    Pre-launch absolute calibration coefficients for the CCD/CBERS-2B sensor have been calculated from radiometric measurements performed in a satellite integration and test hall at the Chinese Academy of Space Technology (CAST) headquarters, located in Beijing, China. An illuminated integrating sphere was positioned in the test hall facilities to allow CCD/CBERS-2B imaging of the entire sphere aperture. Calibration images were recorded, and a relative calibration procedure adopted exclusively in Brazil was applied to equalize the detector responses. Averages of digital numbers (DN) from these images were determined and correlated to their respective radiance levels in order to calculate the absolute calibration coefficients. This is the first time these pre-launch absolute calibration coefficients have been calculated considering the Brazilian image processing criteria. It will now be possible to compare them to those that will be calculated from vicarious calibration campaigns. This comparison will permit the monitoring of CCD/CBERS-2B and frequent data updates for the user community. PMID:27873886
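
    A minimal sketch of the final step described above: fitting mean DN of the equalized calibration images against the integrating-sphere radiance levels to obtain linear calibration coefficients. The radiance and DN values below are invented for illustration, not the CBERS-2B measurements.

    ```python
    import numpy as np

    # Mean DN of the equalized calibration images at several sphere radiance
    # levels (illustrative values only).
    radiance = np.array([20.0, 45.0, 80.0, 120.0, 160.0])  # W m-2 sr-1 um-1
    mean_dn = np.array([31.0, 62.0, 106.0, 157.0, 208.0])

    # Linear model DN = gain * L + offset; the absolute calibration
    # coefficient used to convert DN back to radiance is 1/gain.
    gain, offset = np.polyfit(radiance, mean_dn, 1)
    print("gain =", gain, "offset =", offset, "radiance per DN =", 1.0 / gain)
    ```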

  14. Pre-Launch Radiometric Characterization of JPSS-1 VIIRS Thermal Emissive Bands

    Directory of Open Access Journals (Sweden)

    Jeff McIntire

    2016-01-01

    Full Text Available Pre-launch characterization and calibration of the thermal emissive spectral bands on the Joint Polar Satellite System (JPSS-1) Visible Infrared Imaging Radiometer Suite (VIIRS) is critical to ensure high quality data products for environmental and climate data records post-launch. A comprehensive test program was conducted at the Raytheon El Segundo facility in 2013–2014, including extensive environmental testing. This work is focused on the thermal band radiometric performance and stability, including evaluation of a number of sensor performance metrics and estimation of uncertainties. Analysis has shown that JPSS-1 VIIRS thermal bands perform very well in relation to their design specifications, and comparisons to the Suomi National Polar-orbiting Partnership (SNPP) VIIRS instrument have shown their performance to be comparable.

  15. Pre-Launch Noise Characterization of the Landsat-7 Enhanced Thematic Mapper Plus (ETM Plus)

    Science.gov (United States)

    Pedelty, J. A.; Markham, B. L.; Barker, J. L.; Seiferth, J. C.

    1999-01-01

    A noise characterization of the Landsat-7 Enhanced Thematic Mapper Plus (ETM+) instrument was performed as part of a near-real-time performance assessment and health monitoring program. Performance data for the integrated Landsat-7 spacecraft and ETM+ were collected before, during, and after the spacecraft thermal vacuum testing program at the Lockheed Martin Missiles and Space (LMMS) facilities in Valley Forge, PA. The Landsat-7 spacecraft and ETM+ instrument were successfully launched on April 15, 1999. The spacecraft and ETM+ are now nearing the end of the on-orbit engineering checkout phase, and Landsat-7 is expected to be declared operational on or about July 15, 1999. A preliminary post-launch noise characterization was performed and compared with the pre-launch characterization. In general, the overall noise levels in the ETM+ are at or below the specification levels. Coherent noise is seen in most bands, but is only operationally significant when imaging in the panchromatic band (band 8). This coherent noise has an amplitude as high as approximately 3 DN (peak-to-peak, high gain) at the Nyquist rate of 104 kHz, and causes the noise levels in panchromatic band images at times to exceed the total noise specification by up to approximately 10%. However, this 104 kHz noise is now much weaker than it was prior to the successful repair of the ETM+ power supplies that was completed in May 1998. Weak and stable coherent noise at approximately 5 kHz is seen in all bands in the prime focal plane (bands 1-4 and 8) with the prime (side A) electronics. Very strong coherent noise at approximately 20 kHz is seen in a few detectors of bands 1 and 8, but this noise is almost entirely in the turn-around region between scans when the ETM+ is not imaging the Earth. Strong coherent noise was seen in 2 detectors of band 5 during some of the pre-launch testing; however, this noise seems to be temperature dependent, and has not been seen in the current on-orbit environment. Strong

  16. Planck pre-launch status: calibration of the Low Frequency Instrument flight model radiometers

    CERN Document Server

    Villa, F; Sandri, M; Meinhold, P; Poutanen, T; Battaglia, P; Franceschet, C; Hughes, N; Laaninen, M; Lapolla, P; Bersanelli, M; Butler, R C; Cuttaia, F; D'Arcangelo, O; Frailis, M; Franceschi, E; Galeotta, S; Gregorio, A; Leonardi, R; Lowe, S R; Mandolesi, N; Maris, M; Mendes, L; Mennella, A; Morgante, G; Stringhetti, L; Tomasi, M; Valenziano, L; Zacchei, A; Zonca, A; Aja, B; Artal, E; Balasini, M; Bernardino, T; Blackhurst, E; Boschini, L; Cappellini, B; Cavaliere, F; Colin, A; Colombo, F; Davis, R J; De La Fuente, L; Edgeley, J; Gaier, T; Galtress, A; Hoyland, R; Jukkala, P; Kettle, D; Kilpia, V-H; Lawrence, C R; Lawson, D; Leahy, J P; Leutenegger, P; Levin, S; Maino, D; Malaspina, M; Mediavilla, A; Miccolis, M; Pagan, L; Pascual, J P; Pasian, F; Pecora, M; Pospieszalski, M; Roddis, N; Salmon, M J; Seiffert, M; Silvestri, R; Simonetto, A; Sjoman, P; Sozzi, C; Tuovinen, J; Varis, J; Wilkinson, A; Winder, F

    2010-01-01

    The Low Frequency Instrument (LFI) on-board the ESA Planck satellite carries eleven radiometer subsystems, called Radiometer Chain Assemblies (RCAs), each composed of a pair of pseudo-correlation receivers. We describe the on-ground calibration campaign performed to qualify the flight model RCAs and to measure their pre-launch performances. Each RCA was calibrated in a dedicated flight-like cryogenic environment with the radiometer front-end cooled to 20K and the back-end at 300K, and with an external input load cooled to 4K. A matched load simulating a blackbody at different temperatures was placed in front of the sky horn to derive basic radiometer properties such as noise temperature, gain, and noise performance, e.g. 1/f noise. The spectral response of each detector was measured as was their susceptibility to thermal variation. All eleven LFI RCAs were calibrated. Instrumental parameters measured in these tests, such as noise temperature, bandwidth, radiometer isolation, and linearity, provide essential i...
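
    A minimal sketch of the two-temperature (Y-factor) reduction that underlies deriving a receiver's noise temperature and gain from measurements with the input load at different temperatures. The power and temperature values below are illustrative, not LFI test data.

    ```python
    def y_factor_noise_temperature(p_hot, p_cold, t_hot_k, t_cold_k):
        """Estimate receiver noise temperature and gain from total output
        power measured with the input load at two physical temperatures.
        P = G * (T_load + T_noise), so two measurements fix G and T_noise."""
        y = p_hot / p_cold
        t_noise = (t_hot_k - y * t_cold_k) / (y - 1.0)
        gain = (p_hot - p_cold) / (t_hot_k - t_cold_k)
        return t_noise, gain

    # Illustrative numbers only (arbitrary power units, not actual LFI data).
    print(y_factor_noise_temperature(p_hot=1.30, p_cold=1.00, t_hot_k=35.0, t_cold_k=4.0))
    ```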

  17. An Overview of JPSS-1 VIIRS Pre-Launch Testing and Performance

    Science.gov (United States)

    Xiong, X.; McIntire, J.; Oudrari, H.; Thome, K.; Butler, J. J.; Ji, Q.; Schwarting, T.

    2015-12-01

    The Visible-Infrared Imaging Radiometer Suite (VIIRS) is a key instrument for the Suomi National Polar-orbiting Partnership (S-NPP) satellite launched in 2011 and for future Joint Polar Satellite System (JPSS) satellites. The JPSS-1 (J1) spacecraft is scheduled to launch in January 2017. The VIIRS instrument was designed to provide measurements of the globe twice daily. It is a cross-track scanning radiometer using a rotating telescope, with spatial resolutions of 375 and 750 m at nadir for its imaging and moderate bands, respectively. It has 22 spectral bands covering wavelengths from 0.412 to 12.01 μm, including 14 reflective solar bands (RSB), 7 thermal emissive bands (TEB), and 1 day-night band (DNB). VIIRS observations are used to generate 22 environmental data records (EDRs), enabling a wide range of applications. This paper describes the J1 VIIRS pre-launch testing program, instrument calibration and characterization strategies, and its projected performance based on independent analyses made by the NASA VIIRS Characterization Support Team (VCST). It also discusses the effort made by the joint government team to produce sensor at-launch baseline performance parameters and the metrics needed to populate the Look-Up-Tables (LUTs) used for sensor data record (SDR) production. Sensor performance metrics illustrated in this paper include signal-to-noise ratios (SNRs), dynamic range, spatial and spectral performance, response versus scan-angle (RVS), and polarization sensitivity.

  18. Developing Scoring Algorithms

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  19. Non-Intrusive Techniques of Inspections During the Pre-Launch Phase of Space Vehicle

    Science.gov (United States)

    Thirumalainambi, Rejkumar; Bardina, Jorge E.

    2005-01-01

    This paper addresses a method of non-intrusive local inspection of surface and sub-surface conditions, interfaces, laminations, and seals in both space vehicle and ground operations, using an integrated suite of imaging sensors during pre-launch operations. It employs an advanced Raman spectrophotometer, with additional spectrophotometers and lidar, mounted on a flying robot to constantly monitor the space hardware as well as the inner surfaces of the vehicle and the ground operations hardware. The paper describes a team of micro flying robots with the necessary sensors and photometers to monitor the entire space vehicle internally and externally. The micro flying robots can reach, with a minimal amount of energy, locations where astronauts have difficulty reaching and monitoring the materials and subsurface faults. The micro flying robot has an embedded fault detection system which acts as an advisory system, and in many cases the micro flying robots act as a supervisor to fix the problems. As missions expand to a sustainable presence on the Moon, and extend for durations longer than one year at a lunar outpost, the effectiveness of the instrumentation and hardware has to be revolutionized if NASA is to meet high levels of mission safety, reliability, and overall success. The micro flying robot uses contra-rotating propellers powered by an ultra-thin ultrasonic motor with currently the world's highest power-to-weight ratio, and is balanced in mid-air by means of the world's first stabilizing mechanism using a linear actuator. The essence of micromechatronics has been brought together in high-density mounting technology to minimize size and weight. The robot can carry suitable payloads of photometers, embedded chips for image analysis, and micro pumps for sealing cracks or fixing other material problems. This paper also highlights the advantages that this type of non-intrusive technique offers over costly and monolithic traditional techniques.

  20. The Orbiting Carbon Observatory (OCO-2): spectrometer performance evaluation using pre-launch direct sun measurements

    Science.gov (United States)

    Frankenberg, C.; Pollock, R.; Lee, R. A. M.; Rosenberg, R.; Blavier, J.-F.; Crisp, D.; O'Dell, C. W.; Osterman, G. B.; Roehl, C.; Wennberg, P. O.; Wunch, D.

    2015-01-01

    The Orbiting Carbon Observatory-2 (OCO-2), launched on 2 July 2014, is a NASA mission designed to measure the column-averaged CO2 dry air mole fraction, XCO2. Towards that goal, it will collect spectra of reflected sunlight in narrow spectral ranges centered at 0.76, 1.6 and 2.0 μm with a resolving power (λ/Δ λ) of 20 000. These spectra will be used in an optimal estimation framework to retrieve XCO2. About 100 000 cloud free soundings of XCO2 each day will allow estimates of net CO2 fluxes on regional to continental scales to be determined. Here, we evaluate the OCO-2 spectrometer performance using pre-launch data acquired during instrument thermal vacuum tests in April 2012. A heliostat and a diffuser plate were used to feed direct sunlight into the OCO-2 instrument and spectra were recorded. These spectra were compared to those collected concurrently from a nearby high-resolution Fourier Transform Spectrometer that was part of the Total Carbon Column Observing Network (TCCON). Using the launch-ready OCO-2 calibration and spectroscopic parameters, we performed total column scaling fits to all spectral bands and compared these to TCCON results. On 20 April, we detected a CO2 plume from the Los Angeles basin at the JPL site with strongly enhanced short-term variability on the order of 1% (3-4 ppm). We also found good (< 0.5 ppm) inter-footprint consistency in retrieved XCO2. The variations in spectral fitting residuals are consistent with signal-to-noise estimates from instrument calibration, while average residuals are systematic and mostly attributable to remaining errors in our knowledge of the CO2 and O2 spectroscopic parameters. A few remaining inconsistencies observed during the tests may be attributable to the specific instrument setup on the ground and will be re-evaluated with in-orbit data.

  1. JPSS-1 VIIRS Pre-Launch Response Versus Scan Angle Testing and Performance

    Directory of Open Access Journals (Sweden)

    David Moyer

    2016-02-01

    Full Text Available The Visible Infrared Imaging Radiometer Suite (VIIRS) instruments on-board both the Suomi National Polar-orbiting Partnership (S-NPP) and the first Joint Polar Satellite System (JPSS-1) spacecraft, with launch dates of October 2011 and December 2016 respectively, are cross-track scanners with an angular swath of ±56.06°. A four-mirror Rotating Telescope Assembly (RTA) is used for scanning, combined with a Half Angle Mirror (HAM) that directs light exiting from the RTA into the aft-optics. It has 14 Reflective Solar Bands (RSBs), seven Thermal Emissive Bands (TEBs) and a panchromatic Day Night Band (DNB). There are three internal calibration targets, the Solar Diffuser, the BlackBody and the Space View, that have fixed scan angles within the internal cavity of VIIRS. VIIRS has calibration requirements of 2% on RSB reflectance and as tight as 0.4% on TEB radiance, which requires the sensor's gain change across the scan, or Response Versus Scan angle (RVS), to be well quantified. A flow-down of the top-level calibration requirements puts constraints of 0.2%–0.3% on the characterization of the RVS, but there are no specified limitations on the magnitude of response change across the scan. The RVS change across scan angle can vary significantly between bands, with the RSBs having smaller changes of ~2% and some TEBs having ~10% variation. Within a band, the RVS has both detector and HAM-side dependencies that vary across the scan. Errors in the RVS characterization will contribute to image banding and striping artifacts if their magnitudes are above the noise level of the detectors. The RVS was characterized pre-launch for both S-NPP and JPSS-1 VIIRS, and a comparison of the RVS curves between these two sensors will be discussed.
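
    As an illustration of how a characterized RVS is applied, the sketch below divides a measured radiance by an RVS polynomial in scan angle, normalized to a reference angle. The polynomial form and coefficients are hypothetical, not the VIIRS RVS LUT.

    ```python
    import numpy as np

    def rvs_correction(radiance, scan_angle_deg, coeffs, ref_angle_deg=0.0):
        """Divide out the response-versus-scan (RVS) variation.
        'coeffs' are quadratic polynomial coefficients in scan angle
        (highest order first), normalized so RVS(ref_angle) == 1.
        Coefficients here are hypothetical, not VIIRS LUT values."""
        rvs = np.polyval(coeffs, scan_angle_deg)
        rvs_ref = np.polyval(coeffs, ref_angle_deg)
        return radiance * rvs_ref / rvs

    angles = np.array([-55.0, 0.0, 55.0])
    print(rvs_correction(np.array([100.0, 100.0, 100.0]), angles,
                         coeffs=[1.5e-5, 2.0e-4, 1.0]))
    ```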

  2. The Orbiting Carbon Observatory (OCO-2): spectrometer performance evaluation using pre-launch direct sun measurements

    Directory of Open Access Journals (Sweden)

    C. Frankenberg

    2014-07-01

    Full Text Available The Orbiting Carbon Observatory-2 (OCO-2), launched on 2 July 2014, is a NASA mission designed to measure the column-averaged CO2 dry air mole fraction, XCO2. Towards that goal, it will collect spectra of reflected sunlight in narrow spectral ranges centered at 0.76, 1.6 and 2.0 μm with a resolving power (λ/Δλ) of 20 000. These spectra will be used in an optimal estimation framework to retrieve XCO2. About 100 000 cloud free soundings of XCO2 each day will allow estimates of net CO2 fluxes on regional to continental scales to be determined. Here, we evaluate the OCO-2 spectrometer performance using pre-launch data acquired during instrument thermal vacuum tests in April 2012. A heliostat and a diffuser plate were used to feed direct sunlight into the OCO-2 instrument and spectra were recorded. These spectra were compared to those collected concurrently from a nearby high-resolution Fourier Transform Spectrometer that was part of the Total Carbon Column Observing Network (TCCON). Using the launch-ready OCO-2 calibration and spectroscopic parameters, we performed total column scaling fits to all spectral bands and compared these to TCCON results. On 20 April, we detected a CO2 plume from the Los Angeles basin at the JPL site with strongly enhanced short-term variability on the order of 1% (3–4 ppm). We also found good (<0.5 ppm) inter-footprint consistency in retrieved XCO2. The variations in spectral fitting residuals are consistent with signal-to-noise estimates from instrument calibration, while average residuals are systematic and mostly attributable to remaining errors in our knowledge of the CO2 and O2 spectroscopic parameters. A few remaining inconsistencies observed during TVAC may be attributable to the specific instrument setup on the ground and will be re-evaluated with in-orbit data, when the instrument is expected to be in a much more stable environment.

  3. ESA ExoMars: Pre-launch PanCam Geometric Modeling and Accuracy Assessment

    Science.gov (United States)

    Li, D.; Li, R.; Yilmaz, A.

    2014-08-01

    ExoMars is the flagship mission of the European Space Agency (ESA) Aurora Programme. The mobile scientific platform, or rover, will carry a drill and a suite of instruments dedicated to exobiology and geochemistry research. As the ExoMars rover is designed to travel kilometres over the Martian surface, high-precision rover localization and topographic mapping will be critical for traverse path planning and safe planetary surface operations. For such purposes, the ExoMars rover Panoramic Camera system (PanCam) will acquire images that are processed into an imagery network providing vision information for photogrammetric algorithms to localize the rover and generate 3-D mapping products. Since the design of the ExoMars PanCam will influence localization and mapping accuracy, quantitative error analysis of the PanCam design will improve scientists' awareness of the achievable level of accuracy, and enable the PanCam design team to optimize its design to achieve the highest possible level of localization and mapping accuracy. Based on photogrammetric principles and uncertainty propagation theory, we have developed a method to theoretically analyze how mapping and localization accuracy would be affected by various factors, such as length of stereo hard-baseline, focal length, and pixel size, etc.
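
    A minimal sketch of the kind of first-order error propagation such an analysis builds on: converting disparity (pixel) uncertainty into range uncertainty for a stereo pair via Z = Bf/d. The baseline, focal length and disparity error below are illustrative, not the PanCam design values.

    ```python
    def stereo_range_error(range_m, baseline_m, focal_px, disparity_err_px):
        """First-order propagation of disparity uncertainty into range error:
        Z = B*f/d  =>  dZ = Z**2 * sigma_d / (B*f).
        Parameter values used below are illustrative, not the PanCam design."""
        return range_m ** 2 * disparity_err_px / (baseline_m * focal_px)

    for z in (5.0, 20.0, 50.0):
        print(z, "m ->", stereo_range_error(z, baseline_m=0.5, focal_px=1200.0,
                                            disparity_err_px=0.3), "m")
    ```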

  4. Computer simulation of Saturn 5 response to pre-launch wind loads

    Science.gov (United States)

    Coffin, T.

    1970-01-01

    A digital computer program is described which was developed to estimate Saturn 5 response to prelaunch wind conditions at Cape Kennedy. The program computes displacement and bending moment statistics as a function of parameters defining the atmospheric environment. A sample problem is provided to illustrate utilization of the program.

  5. Planetary Protection Concerns During Pre-Launch Radioisotope Power System Final Integration Activities

    Science.gov (United States)

    Chen, Fei; McKay, Terri; Spry, James A.; Colozza, Anthony J.; DiStefano, Salvador

    2012-01-01

    The Advanced Stirling Radioisotope Generator (ASRG) is a next-generation radioisotope-based power system that is currently being developed as an alternative to the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). Power sources such as these may be needed for proposed missions to solar system planets and bodies that have challenging Planetary Protection (PP) requirements (e.g. Mars, Europa, Enceladus) and that may support NASA's search for life, remnants of past life, and the precursors of life. One concern is that the heat from the ASRG could potentially create a region in which liquid water may occur. As advised by the NASA Planetary Protection Officer, when deploying an ASRG to Mars, the current COSPAR/NASA PP policy should be followed for a Category IVc mission. Thus, sterilization processing of the ASRG to achieve bioburden reduction would be essential to meet the Planetary Protection requirements. Due to thermal constraints and the associated low temperature limits of elements of the ASRG, vapor hydrogen peroxide (VHP) was suggested as a candidate alternative sterilization process to complement dry heat microbial reduction (DHMR) for the assembled ASRG. The following proposed sterilization plan for the ASRG anticipates a mission Category IVc level of cleanliness. This plan provides a scenario in which VHP is used as the final sterilization process. Keywords: Advanced Stirling Radioisotope Generator (ASRG), Planetary Protection (PP), Vapor hydrogen peroxide (VHP) sterilization.

  6. Planck pre-launch status: Design and description of the Low Frequency Instrument

    CERN Document Server

    Bersanelli, M; Butler, R C; Mennella, A; Villa, F; Aja, B; Artal, E; Artina, E; Baccigalupi, C; Balasini, M; Baldan, G; Banday, A; Bastia, P; Battaglia, P; Bernardino, T; Blackhurst, E; Boschini, L; Burigana, C; Cafagna, G; Cappellini, B; Cavaliere, F; Colombo, F; Crone, G; Cuttaia, F; D'Arcangelo, O; Danese, L; Davies, R D; Davis, R J; De Angelis, L; De Gasperis, G C; De La Fuente, L; De Rosa, A; De Zotti, G; Falvella, M C; Ferrari, F; Ferretti, R; Figini, L; Fogliani, S; Franceschet, C; Franceschi, E; Gaier, T; Garavaglia, S; Gomez, F; Gorski, K; Gregorio, A; Guzzi, P; Herreros, J M; Hildebrandt, S R; Hoyland, R; Hughes, N; Janssen, M; Jukkala, P; Kettle, D; Kilpia, V H; Laaninen, M; Lapolla, P M; Lawrence, C R; Leahy, J P; Leonardi, R; Leutenegger, P; Levin, S; Lilje, P B; Lowe, S R; Lubin, D Lawson P M; Maino, D; Malaspina, M; Maris, M; Marti-Canales, J; Martinez-Gonzalez, E; Mediavilla, A; Meinhold, P; Miccolis, M; Morgante, G; Natoli, P; Nesti, R; Pagan, L; Paine, C; Partridge, B; Pascual, J P; Pasian, F; Pearson, D; Pecora, M; Perrotta, F; Platania, P; Pospieszalski, M; Poutanen, T; Prina, M; Rebolo, R; Roddis, N; Rubino-Martin, J A; Salmon, n M J; Sandri, M; Seiffert, M; Silvestri, R; Simonetto, A; Sjoman, P; Smoot, G F; Sozzi, C; Stringhetti, L; Taddei, E; Tauber, J; Terenzi, L; Tomasi, M; Tuovinen, J; Valenziano, L; Varis, J; Vittorio, N; Wade, L A; Wilkinson, A; Winder, F; Zacchei, A; Zonca, A

    2010-01-01

    In this paper we present the Low Frequency Instrument (LFI), designed and developed as part of the Planck space mission, the ESA program dedicated to precision imaging of the cosmic microwave background (CMB). Planck-LFI will observe the full sky in intensity and polarisation in three frequency bands centred at 30, 44 and 70 GHz, while higher frequencies (100-850 GHz) will be covered by the HFI instrument. The LFI is an array of microwave radiometers based on state-of-the-art Indium Phosphide cryogenic HEMT amplifiers implemented in a differential system using blackbody loads as reference signals. The front-end is cooled to 20K for optimal sensitivity and the reference loads are cooled to 4K to minimise low frequency noise. We provide an overview of the LFI, discuss the leading scientific requirements and describe the design solutions adopted for the various hardware subsystems. The main drivers of the radiometric, optical and thermal design are discussed, including the stringent requirements on sensitivity, ...

  7. Pre-launch Estimates for GLAST Sensitivity to Dark Matter Annihilation Signals

    Energy Technology Data Exchange (ETDEWEB)

    Baltz, E.A.; Berenji, B.; /SLAC /KIPAC, Menlo Park; Bertone, G.; /Paris, Inst. Astrophys.; Bergstrom, L.; /Stockholm U.; Bloom, E.; /SLAC /KIPAC, Menlo Park; Bringmann, T.; /Stockholm U.; Chiang, J.; Cohen-Tanugi, J.; /SLAC /KIPAC, Menlo Park; Conrad, J.; /Stockholm U.; Edmonds, Y.; /SLAC /KIPAC, Menlo Park; Edsjo, J.; /Stockholm U.; Godfrey, G.; /SLAC /KIPAC, Menlo Park; Hughes, R.E.; /Ohio State U.; Johnson, R.P.; /UC, Santa Cruz; Lionetto, A.; /Rome U.,Tor Vergata /INFN, Rome2; Moiseev, A.A.; /CRESST; Morselli, A.; /Rome U.,Tor Vergata /INFN, Rome2; Moskalenko, I.V.; /Stanford U., HEPL /KIPAC, Menlo Park; Nuss, E.; /Montpellier U.; Ormes, J.F.; /Denver U.; Rando, R.; /INFN, Padua /Ohio State U. /Stockholm U. /Ohio State U. /Garching, Max Planck Inst., MPE /SLAC /KIPAC, Menlo Park /Ohio State U.

    2009-05-15

    We investigate the sensitivity of the Gamma-ray Large Area Space Telescope (GLAST) to indirectly detect weakly interacting massive particles (WIMPs) through the {gamma}-ray signal that their pair annihilation produces. WIMPs are among the favorite candidates to explain the compelling evidence that about 80% of the mass in the Universe is non-baryonic dark matter (DM). They are serendipitously motivated by various extensions of the standard model of particle physics such as Supersymmetry and Universal Extra Dimensions (UED). With its unprecedented sensitivity and its very large energy range (20 MeV to more than 300 GeV) the main instrument on board the GLAST satellite, the Large Area Telescope (LAT), will open a new window of discovery. As our estimates show, the LAT will be able to detect an indirect DM signature for a large class of WIMP models given a cuspy profile for the DM distribution. Using the current state of the art Monte Carlo and event reconstruction software developed within the LAT collaboration, we present preliminary sensitivity studies for several possible sources inside and outside the Galaxy. We also discuss the potential of the LAT to detect UED via the electron/positron channel. Diffuse background modeling and other background issues that will be important in setting limits or seeing a signal are presented.

  8. Crowd Behavior Algorithm Development for COMBAT XXI

    Science.gov (United States)

    2017-05-30

    ...time to scenario development for CXXI scenario integrators. The report is organized into literature review, analysis, results, and... (TRAC-M-TR-17-027, 30 May 2017; LTC Casey Connors, Dr. Steven Hall, Dr. Imre Balogh, Terry Norbraten; TRADOC Analysis Center, Monterey, California.)

  9. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed b

  10. Developing Scoring Algorithms (Earlier Methods)

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  11. Scheduling Algorithm for Complex Product Development

    Institute of Scientific and Technical Information of China (English)

    LIU Min; ZHANG Long; WU Cheng

    2004-01-01

    This paper describes the complex product development project scheduling problem (CPDPSP), which involves a great number of activities and complicated resource, precedence and calendar constraints. By converting the precedence constraint relations, the CPDPSP is simplified. Then, according to the predictive control principle, we propose a new scheduling algorithm based on prediction (the BoP-procedure). In order to capture the problem characteristics arising from the resource status and precedence constraints of the scheduling problem at the scheduling time, a sub-project is constructed on the basis of a sub-AoN (Activity on Node) graph of the project. We then use the modified GDH-procedure to solve the sub-project scheduling problem and to obtain the maximum feasible active subset, which determines the activity group that satisfies the resource, precedence and calendar constraints and has the highest scheduling priority at the scheduling time. Additionally, we perform a great number of numerical computations and compare the performance of the BoP-procedure algorithm with those of other scheduling algorithms. Computation results show that the BoP-procedure algorithm is more suitable for the CPDPSP. Finally, we briefly discuss future research work on the CPDPSP.

  12. Connected-Health Algorithm: Development and Evaluation.

    Science.gov (United States)

    Vlahu-Gjorgievska, Elena; Koceski, Saso; Kulev, Igor; Trajkovik, Vladimir

    2016-04-01

    Nowadays, there is a growing interest towards the adoption of novel ICT technologies in the field of medical monitoring and personal health care systems. This paper proposes the design of a connected-health algorithm inspired by the social computing paradigm. The purpose of the algorithm is to give a recommendation for performing a specific activity that will improve the user's health, based on the user's health condition and a set of knowledge derived from the history of the user and of users with similar attitudes. The algorithm could help users gain greater confidence in choosing the physical activities that will improve their health. The proposed algorithm has been experimentally validated using real data collected from a community of 1000 active users. The results showed that a recommended physical activity that contributed to a weight loss of at least 0.5 kg is found in the first half of the ordered list of recommendations generated by the algorithm, with probability > 0.6 at the 1% level of significance.
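
    As a rough illustration of recommending activities from the history of similar users, the sketch below uses a simple similarity-weighted average of observed benefits (a neighborhood collaborative-filtering stand-in). The data and weighting are invented; this is not the paper's algorithm.

    ```python
    import numpy as np

    # Rows: users, columns: activities. Entries: observed health benefit
    # (e.g., kg of weight lost) or np.nan if the user never did the activity.
    # Toy data, not the study's community data.
    benefit = np.array([
        [0.8, np.nan, 0.2],
        [0.7, 0.1,    np.nan],
        [np.nan, 0.3, 0.6],
    ])
    profile = np.array([   # simple user attribute vectors (illustrative)
        [0.2, 0.8, 0.3],
        [0.3, 0.7, 0.4],
        [0.9, 0.2, 0.8],
    ])

    def recommend(user, benefit, profile):
        """Rank activities for `user` by similarity-weighted benefit of others."""
        sims = profile @ profile[user] / (
            np.linalg.norm(profile, axis=1) * np.linalg.norm(profile[user]) + 1e-12)
        sims[user] = 0.0
        scores = []
        for a in range(benefit.shape[1]):
            mask = ~np.isnan(benefit[:, a])
            mask[user] = False
            w = sims[mask]
            scores.append(np.nan if w.sum() == 0 else float(w @ benefit[mask, a] / w.sum()))
        return np.argsort([-s if not np.isnan(s) else np.inf for s in scores])

    print(recommend(0, benefit, profile))  # activity indices, best first
    ```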

  13. The pre-launch Planck Sky Model: a model of sky emission at submillimetre to centimetre wavelengths

    Science.gov (United States)

    Delabrouille, J.; Betoule, M.; Melin, J.-B.; Miville-Deschênes, M.-A.; Gonzalez-Nuevo, J.; Le Jeune, M.; Castex, G.; de Zotti, G.; Basak, S.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Bernard, J.-P.; Bouchet, F. R.; Clements, D. L.; da Silva, A.; Dickinson, C.; Dodu, F.; Dolag, K.; Elsner, F.; Fauvet, L.; Faÿ, G.; Giardino, G.; Leach, S.; Lesgourgues, J.; Liguori, M.; Macías-Pérez, J. F.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Montier, L.; Mottet, S.; Paladini, R.; Partridge, B.; Piffaretti, R.; Prezeau, G.; Prunet, S.; Ricciardi, S.; Roman, M.; Schaefer, B.; Toffolatti, L.

    2013-05-01

    We present the Planck Sky Model (PSM), a parametric model for generating all-sky, few arcminute resolution maps of sky emission at submillimetre to centimetre wavelengths, in both intensity and polarisation. Several options are implemented to model the cosmic microwave background, Galactic diffuse emission (synchrotron, free-free, thermal and spinning dust, CO lines), Galactic H ii regions, extragalactic radio sources, dusty galaxies, and thermal and kinetic Sunyaev-Zeldovich signals from clusters of galaxies. Each component is simulated by means of educated interpolations/extrapolations of data sets available at the time of the launch of the Planck mission, complemented by state-of-the-art models of the emission. Distinctive features of the simulations are spatially varying spectral properties of synchrotron and dust; different spectral parameters for each point source; modelling of the clustering properties of extragalactic sources and of the power spectrum of fluctuations in the cosmic infrared background. The PSM enables the production of random realisations of the sky emission, constrained to match observational data within their uncertainties. It is implemented in a software package that is regularly updated with incoming information from observations. The model is expected to serve as a useful tool for optimising planned microwave and sub-millimetre surveys and testing data processing and analysis pipelines. It is, in particular, used to develop and validate data analysis pipelines within the Planck collaboration. A version of the software that can be used for simulating the observations for a variety of experiments is made available on a dedicated website.

  14. A Developed ESPRIT Algorithm for DOA Estimation

    Science.gov (United States)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for direction of arrival estimation (DOAE) for a target has been developed, with the aim of increasing estimation accuracy and decreasing computational cost. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity. The efficacy of the proposed algorithm is verified using Monte Carlo simulation, and the DOAE accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which shows that the proposed algorithm's estimates are better than those of the standard ESPRIT methods, enhancing estimator performance.
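
    For context, a minimal standard (single-resolution) ESPRIT estimator for a uniform linear array is sketched below on synthetic data; it is the textbook subspace method, not the TS-ESPRIT variant proposed in the record.

    ```python
    import numpy as np

    def esprit_doa(X, n_sources, d_over_lambda=0.5):
        """Standard ESPRIT DOA estimation for a uniform linear array.
        X: (n_sensors, n_snapshots) complex data matrix.
        Returns estimated angles in degrees."""
        R = X @ X.conj().T / X.shape[1]              # sample covariance
        eigval, eigvec = np.linalg.eigh(R)
        Es = eigvec[:, -n_sources:]                  # signal subspace
        # Rotational invariance between the two overlapping subarrays
        Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]
        phases = np.angle(np.linalg.eigvals(Phi))
        return np.degrees(np.arcsin(phases / (2 * np.pi * d_over_lambda)))

    # Synthetic test: two narrowband sources at -20 and 35 degrees, 8-element ULA.
    rng = np.random.default_rng(0)
    m, n, angles = 8, 500, np.radians([-20.0, 35.0])
    A = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(m), np.sin(angles)))
    S = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2)
    X = A @ S + 0.1 * (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n)))
    print(np.sort(esprit_doa(X, 2)))   # close to [-20, 35]
    ```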

  15. A Unified Approach for Developing Efficient Algorithmic Programs

    Institute of Scientific and Technical Information of China (English)

    薛锦云

    1997-01-01

    A unified approach called partition-and-recur for developing efficient and correct algorithmic programs is presented. An algorithm (represented by recurrence and initiation) is separated from the program, and special attention is paid to algorithm manipulation rather than program calculus. An algorithm is exactly a set of mathematical formulae, which makes formal derivation and proof easier. After obtaining an efficient and correct algorithm, a trivial transformation is used to get the final program. The approach covers several known algorithm design techniques, e.g. dynamic programming, greedy, divide-and-conquer and enumeration. The techniques of partition and recurrence are not new. Partition is a general approach for dealing with complicated objects and is typically used in the divide-and-conquer approach. Recurrence is used in algorithm analysis, in developing loop invariants and in the dynamic programming approach. The main contribution is combining these two techniques used in typical algorithm development into a unified and systematic approach for developing general, efficient algorithmic programs, and presenting a new representation of algorithms that makes it easier to understand and demonstrate the correctness and ingenuity of algorithmic programs.

  16. Probabilistic structural analysis algorithm development for computational efficiency

    Science.gov (United States)

    Wu, Y.-T.

    1991-01-01

    The PSAM (Probabilistic Structural Analysis Methods) program is developing a probabilistic structural risk assessment capability for the SSME components. An advanced probabilistic structural analysis software system, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), is being developed as part of the PSAM effort to accurately simulate stochastic structures operating under severe random loading conditions. One of the challenges in developing the NESSUS system is the development of the probabilistic algorithms that provide both efficiency and accuracy. The main probability algorithms developed and implemented in the NESSUS system are efficient, but approximate in nature. In the last six years, the algorithms have improved very significantly.

  17. A new algorithm for designing developable Bézier surfaces

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xing-wang; WANG Guo-jin

    2006-01-01

    A new algorithm is presented that generates developable Bézier surfaces through a Bézier curve called a directrix. The algorithm is based on differential geometry theory concerning the necessary and sufficient conditions for a surface to be developable, on the degree evaluation formula for parameter curves, and on the linear independence of the Bernstein basis. No nonlinear characteristic equations have to be solved. Moreover, the vertex of a cone and the edge of regression of a tangent surface can be obtained easily. Aumann's algorithm for developable surfaces is a special case of this approach.

  18. Development of Educational Support System for Algorithm using Flowchart

    Science.gov (United States)

    Ohchi, Masashi; Aoki, Noriyuki; Furukawa, Tatsuya; Takayama, Kanta

    Recently, information technology has become indispensable for business and industrial development. However, the insufficient number of software developers has become a social problem. To solve this problem, it is necessary to develop and implement an environment for learning algorithms and programming languages. In this paper, we describe an algorithm study support system for programmers that uses flowcharts. Since the proposed system uses a Graphical User Interface (GUI), it becomes easier for a programmer to understand the algorithms in programs.

  19. Development of a data assimilation algorithm

    DEFF Research Database (Denmark)

    Thomsen, Per Grove; Zlatev, Zahari

    2008-01-01

    assimilation technique is applied. Therefore, it is important to study the interplay between the three components of the variational data assimilation techniques as well as to apply powerful parallel computers in the computations. Some results obtained in the search for a good combination of numerical methods......, splitting techniques and optimization algorithms will be reported. Parallel techniques described in [V.N. Alexandrov, W. Owczarz, P.G. Thomsen, Z. Zlatev, Parallel runs of a large air pollution model on a grid of Sun computers, Mathematics and Computers in Simulation, 65 (2004) 557–577] are used in the runs....... Modules from a particular large-scale mathematical model, the Unified Danish Eulerian Model (UNI-DEM), are used in the experiments. The mathematical background of UNI-DEM is discussed in [V.N. Alexandrov,W. Owczarz, P.G. Thomsen, Z. Zlatev, Parallel runs of a large air pollution model on a grid of Sun...

  20. Developer Tools for Evaluating Multi-Objective Algorithms

    Science.gov (United States)

    Giuliano, Mark E.; Johnston, Mark D.

    2011-01-01

    Multi-objective algorithms for scheduling offer many advantages over the more conventional single objective approach. By keeping user objectives separate instead of combined, more information is available to the end user to make trade-offs between competing objectives. Unlike single objective algorithms, which produce a single solution, multi-objective algorithms produce a set of solutions, called a Pareto surface, where no solution is strictly dominated by another solution for all objectives. From the end-user perspective a Pareto-surface provides a tool for reasoning about trade-offs between competing objectives. From the perspective of a software developer multi-objective algorithms provide an additional challenge. How can you tell if one multi-objective algorithm is better than another? This paper presents formal and visual tools for evaluating multi-objective algorithms and shows how the developer process of selecting an algorithm parallels the end-user process of selecting a solution for execution out of the Pareto-Surface.
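
    A minimal sketch of the core primitive behind such tools: a Pareto-dominance test and a non-dominated filter over candidate solutions, assuming all objectives are minimized. The objectives and values below are invented for illustration.

    ```python
    import numpy as np

    def dominates(a, b):
        """True if solution a dominates b (all objectives <= and at least one <),
        assuming every objective is to be minimized."""
        a, b = np.asarray(a), np.asarray(b)
        return bool(np.all(a <= b) and np.any(a < b))

    def pareto_front(points):
        """Return the indices of the non-dominated solutions."""
        pts = np.asarray(points, dtype=float)
        return [i for i, p in enumerate(pts)
                if not any(dominates(q, p) for j, q in enumerate(pts) if j != i)]

    # Toy schedules scored on two objectives (e.g., makespan, unmet-priority count).
    solutions = [(10, 4), (8, 6), (12, 3), (9, 5), (11, 5)]
    print(pareto_front(solutions))  # indices of the Pareto surface
    ```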

  1. Algorithm development for Maxwell's equations for computational electromagnetism

    Science.gov (United States)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
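
    For contrast with the implicit, curvilinear finite-volume scheme described above, the sketch below shows the simplest explicit 1-D Yee-style FDTD update for Maxwell's curl equations with central differences in normalized units; it is illustrative only and does not reproduce the record's ADI algorithm.

    ```python
    import numpy as np

    # Minimal 1-D FDTD scheme (explicit, central differences) in normalized
    # units (c = 1, dx = 1); dt = 0.5 satisfies the Courant condition.
    nx, nt, dt = 200, 400, 0.5
    ez = np.zeros(nx)        # electric field at integer grid points
    hy = np.zeros(nx - 1)    # magnetic field at half grid points

    for n in range(nt):
        hy += dt * (ez[1:] - ez[:-1])                    # Faraday's law update
        ez[1:-1] += dt * (hy[1:] - hy[:-1])              # Ampere's law update
        ez[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source

    print(float(np.max(np.abs(ez))))  # pulse amplitude after propagation
    ```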

  2. JPSS Cryosphere Algorithms: Integration and Testing in Algorithm Development Library (ADL)

    Science.gov (United States)

    Tsidulko, M.; Mahoney, R. L.; Meade, P.; Baldwin, D.; Tschudi, M. A.; Das, B.; Mikles, V. J.; Chen, W.; Tang, Y.; Sprietzer, K.; Zhao, Y.; Wolf, W.; Key, J.

    2014-12-01

    JPSS is a next-generation satellite system that is planned to be launched in 2017. The satellites will carry a suite of sensors that are already on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The NOAA/NESDIS/STAR Algorithm Integration Team (AIT) works within the Algorithm Development Library (ADL) framework, which mimics the operational JPSS Interface Data Processing Segment (IDPS). The AIT contributes to the development, integration and testing of scientific algorithms employed in the IDPS. This presentation discusses cryosphere-related activities performed in ADL. The addition of a new ancillary data set, the NOAA Global Multisensor Automated Snow/Ice data (GMASI), together with the associated ADL code modifications, is described. The preliminary impact of GMASI on the gridded Snow/Ice product is estimated. Several modifications to the Ice Age algorithm, which mis-classifies ice type for certain areas and time periods, are tested in the ADL. Sensitivity runs for daytime, nighttime and the terminator zone are performed and presented. Comparisons between the original and modified versions of the Ice Age algorithm are also presented.

  3. Development and Testing of Data Mining Algorithms for Earth Observation

    Science.gov (United States)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classifying objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high-dimensional" problems require finding a minimal set of variables (called the Markov blanket) sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical-model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time-series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time-series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time-series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms for the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.

  4. Development & Performance Analysis of Korean WADGPS Positioning Algorithm

    Institute of Scientific and Technical Information of China (English)

    Kim Do-yoon; Kee Chang-don

    2003-01-01

    Today, many countries are developing their own WADGPS-type systems. The U.S. WAAS is already available for non-aviation users, and its full operation is expected by the end of 2003. The European EGNOS and the Japanese MSAS are also in progress. China is now pursuing the SNAS (Satellite Navigation Augmentation System) project, and India has made plans for its GAGAN (GPS And GEO Augmented Navigation) project. Recently, the Ministry of Maritime Affairs and Fisheries of Korea decided to develop a Korean WADGPS; this is a first step toward the implementation of a practical system. Until now, we have devised the algorithm for WADGPS in Korea and East Asia and evaluated its performance by simulations. In this paper, we complemented the positioning algorithm for actual data processing. We analyzed the performance of the proposed algorithm with actual data from the reference stations of the Korean NDGPS network, which covers almost the whole country.

  5. Robust Algorithm Development for Application of Pinch Analysis on HEN

    Directory of Open Access Journals (Sweden)

    Ritesh Sojitra

    2016-10-01

    Full Text Available Since its genesis, Pinch Analysis has been continuously evolving and its application is widening, reaching new horizons. The original concept of the pinch approach was quite clear and, because of the flexibility of this approach, innumerable applications have been developed in industry. Consequently, a designer can become thoroughly muddled among these flexibilities. Hence, there was a need for a rigorous and robust model that could guide the optimisation engineer in deciding the applicability of the pinch approach and direct a sequential procedure in a predefined workflow, so that the precision of the approach is ensured. Exploring the various options for a hands-on algorithm that can be coded and interfaced with a GUI, and keeping in mind the difficulties faced by designers, an effort was made to formulate a new algorithm for the optimisation activity. As such, the work aims at easing application hurdles and providing hands-on information for the developer to use when preparing new application tools. This paper presents a new algorithm whose application ensures the developer does not violate basic pinch rules. To achieve this, intermittent check gates are provided in the algorithm, which eliminate violation of predefined basic pinch rules, design philosophy, and engineering standards, and ensure that constraints are adequately considered. On the other side, its sequential instructions for developing the pinch analysis and reiteration promise Maximum Energy Recovery (MER).
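
    The record does not reproduce the algorithm itself, but the classic "problem table" heat cascade at the heart of pinch analysis is compact enough to sketch. In the Python sketch below the stream data and the minimum approach temperature are invented; the code cascades heat through shifted temperature intervals and reports the minimum hot/cold utility targets and the pinch temperature.

```python
# Problem-table (heat cascade) sketch for pinch analysis.
# Streams: (name, supply T [C], target T [C], CP = m*cp [kW/K]).
dt_min = 10.0
streams = [
    ("H1", 170.0, 60.0, 3.0),    # hot stream (cools down)
    ("H2", 150.0, 30.0, 1.5),    # hot stream
    ("C1", 20.0, 135.0, 2.0),    # cold stream (heats up)
    ("C2", 80.0, 140.0, 4.0),    # cold stream
]

def shifted(ts, tt):
    """Shift hot streams down and cold streams up by dt_min/2."""
    hot = ts > tt
    shift = -dt_min / 2 if hot else dt_min / 2
    return ts + shift, tt + shift, hot

# Shifted temperature interval boundaries, hottest first.
bounds = sorted({t for _, ts, tt, _ in streams for t in shifted(ts, tt)[:2]}, reverse=True)

cascade, heat = [0.0], 0.0
for hi, lo in zip(bounds, bounds[1:]):
    net_cp = 0.0
    for _, ts, tt, cp in streams:
        s_ts, s_tt, hot = shifted(ts, tt)
        if max(s_ts, s_tt) >= hi and min(s_ts, s_tt) <= lo:   # stream spans interval
            net_cp += cp if hot else -cp
    heat += net_cp * (hi - lo)        # heat surplus (+) or deficit (-) cascaded down
    cascade.append(heat)

q_hot = -min(min(cascade), 0.0)       # hot utility needed to keep the cascade non-negative
adjusted = [q + q_hot for q in cascade]
q_cold = adjusted[-1]
pinch = next((b for b, q in zip(bounds, adjusted) if abs(q) < 1e-9), None)

print(f"Q_hot,min = {q_hot} kW, Q_cold,min = {q_cold} kW, pinch (shifted) = {pinch} C")
```

    A check-gated workflow of the kind the record describes would wrap steps like this one and refuse to proceed if, for example, heat were transferred across the computed pinch.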

  6. Development of a robust algorithm to compute reactive azeotropes

    Directory of Open Access Journals (Sweden)

    M. H. M. Reis

    2006-09-01

    Full Text Available In this paper, a novel approach is presented for establishing the route for process intensification through the application of two software tools developed to characterize reactive mixtures. A robust algorithm was developed to build up reactive phase diagrams and to predict the existence and the location of reactive azeotropes. The proposed algorithm does not depend on initial estimates and is able to compute all reactive azeotropes present in the mixture. It also makes it possible to verify that no azeotropes exist, which is the major difficulty in this kind of computation. An additional program was developed to calculate reactive residue curve maps. Results obtained with the developed program were compared with those published in the literature for several mixtures, showing the efficiency and robustness of the developed software.

  7. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    Science.gov (United States)

    2015-09-30

    High-resolution simulations using nonhydrostatic models like SUNTANS are crucial for understanding multiscale processes that are unresolved… Oliver B. Fringer, Dept. of Civil and Environmental Engineering, Stanford University, 473 Via Ortega, Room 187. (DISTRIBUTION STATEMENT A: approved for public release; distribution is unlimited.)

  8. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    Science.gov (United States)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also fly on JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward-modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results of the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results by qualitative and quantitative evaluation. The results show that these algorithm updates improve science product quality.

  9. Genetic Algorithms for Development of New Financial Products

    OpenAIRE

    Eder de Oliveira Abensur

    2007-01-01

    New Product Development (NPD) is recognized as a fundamental activity that has a relevant impact on the performance of companies. Despite the relevance of the financial market there is a lack of work on new financial product development. The aim of this research is to propose the use of Genetic Algorithms (GA) as an alternative procedure for evaluating the most favorable combination of variables for the product launch. The paper focuses on: (i) determining the essential variables of the finan...

  10. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In fields that require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity, but more advanced optimization problems eventually required the use of more effective techniques. At this point, Computer Science took on an important role in providing software-related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study of classical optimization solutions and Artificial Intelligence solutions, to give readers an idea of the potential of intelligent optimization techniques. Two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus, 11th Edition, and the results obtained have been compared with classical optimization solutions.

  11. Datasets for radiation network algorithm development and testing

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S [ORNL; Sen, Satyabrata [ORNL; Berry, M. L.. [New Jersey Institute of Technology; Wu, Qishi [University of Memphis; Grieme, M. [New Jersey Institute of Technology; Brooks, Richard R [ORNL; Cordone, G. [Clemson University

    2016-01-01

    The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests were conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various reconstructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing current and next-generation radiation network algorithms, including those (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests that represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.

  12. Developing and Implementing the Data Mining Algorithms in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RAVEN code is becoming a comprehensive tool for performing probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data, and post-processing and analyzing such data might, in some cases, take longer than the initial software runtime. Data mining algorithms and methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is analyzing the large number of scenarios generated. Data mining techniques are typically used to better organize and understand the data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and on the application of these algorithms to different databases.

  13. Oscillation Detection Algorithm Development Summary Report and Test Plan

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    Small signal stability problems are one of the major threats to grid stability and reliability in California and the western U.S. power grid. An unstable oscillatory mode can cause large-amplitude oscillations and may result in system breakup and large-scale blackouts. There have been several incidents of system-wide oscillations; of these, the most notable is the August 10, 1996 western system breakup produced as a result of undamped system-wide oscillations. There is a great need for real-time monitoring of small-signal oscillations in the system. In power systems, a small-signal oscillation is the result of poor electromechanical damping. Considerable understanding and literature have been developed on the small-signal stability problem over the past 50+ years. These studies have been mainly based on a linearized system model and eigenvalue analysis of its characteristic matrix. However, their practical feasibility is greatly limited, as power system models have been found inadequate in describing real-time operating conditions. Significant efforts have been devoted to monitoring system oscillatory behaviors from real-time measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision time-synchronized data needed for estimating oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to estimate system oscillation modes and their damping. Low damping indicates potential system stability issues. Oscillation alarms can be issued when the power system is lightly damped. A good oscillation alarm tool can provide time for operators to take remedial action and reduce the probability of a system breakup as a result of a light damping condition. Real-time oscillation monitoring requires ModeMeter algorithms to have the capability to work with various kinds of measurements: disturbance data (ringdown signals), noise probing data, and ambient data. Several measurement…
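
    To make the measurement-based mode estimation idea concrete, the sketch below fits a single damped sinusoid to a synthetic ringdown signal with least squares (scipy's curve_fit). It is a toy stand-in for a ModeMeter-style estimator, not the algorithms evaluated in the report; the 0.3 Hz mode, 5% damping ratio and 30 Hz PMU rate are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic ringdown: one electromechanical mode at 0.3 Hz with 5% damping.
fs, T = 30.0, 40.0                       # PMU reporting rate [Hz], window length [s]
t = np.arange(0.0, T, 1.0 / fs)
f0, zeta = 0.3, 0.05
wn = 2 * np.pi * f0
sigma_true = -zeta * wn                  # real part of the mode (damping term)
signal = np.exp(sigma_true * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)
signal += 0.02 * np.random.randn(t.size)

def ringdown(t, amp, sigma, f, phi):
    return amp * np.exp(sigma * t) * np.cos(2 * np.pi * f * t + phi)

(amp, sigma, f_est, phi), _ = curve_fit(ringdown, t, signal, p0=[1.0, -0.1, 0.25, 0.0])
damping_ratio = -sigma / np.sqrt(sigma**2 + (2 * np.pi * f_est) ** 2)
print(f"estimated mode: {f_est:.3f} Hz, damping ratio {100 * damping_ratio:.1f}%")
```

    A low estimated damping ratio would be the condition on which an oscillation alarm, as described above, is raised.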

  14. Development of antibiotic regimens using graph based evolutionary algorithms.

    Science.gov (United States)

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems.

  15. Development of target-tracking algorithms using neural network

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dong Sun; Lee, Joon Whaoan; Yoon, Sook; Baek, Seong Hyun; Lee, Myung Jae [Chonbuk National University, Chonjoo (Korea)

    1998-04-01

    The utilization of remote-control robot systems in atomic power plants or nuclear-related facilities is growing rapidly, to protect workers from high-radiation environments. Such applications require complete stability of the robot system, so precisely tracking the robot is essential for the whole system. This research accomplishes that goal by developing appropriate algorithms for remote-control robot systems. A neural network tracking system is designed and tested to trace a robot endpoint. This model aims to utilize the excellent capabilities of neural networks: nonlinear mapping between inputs and outputs, learning capability, and generalization capability. The neural tracker consists of two networks, for position detection and prediction. Tracking algorithms are developed and tested for the two models. Results of the experiments show that both models are promising as real-time target-tracking systems for remote-control robot systems. (author). 10 refs., 47 figs.

  16. Computational Fluid Dynamics. [numerical methods and algorithm development

    Science.gov (United States)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  17. Development of hybrid artificial intelligent based handover decision algorithm

    Directory of Open Access Journals (Sweden)

    A.M. Aibinu

    2017-04-01

    Full Text Available The possibility of seamless handover remains a mirage despite the plethora of existing handover algorithms. The underlying factor responsible for this has been traced to the handover decision module in the handover process. Hence, in this paper, a novel hybrid artificial-intelligence handover decision algorithm has been developed. The developed model is a hybrid of an Artificial Neural Network (ANN)-based prediction model and fuzzy logic. On accessing the network, the Received Signal Strength (RSS) was acquired over a period of time to form a time series. The data was then fed to the newly proposed k-step-ahead ANN-based RSS prediction system for estimation of the prediction model coefficients. The synaptic weights and adaptive coefficients of the trained ANN were then used to compute the k-step-ahead ANN-based RSS prediction model coefficients. The predicted RSS value was later codified as fuzzy sets and, in conjunction with other measured network parameters, fed into the fuzzy logic controller in order to finalize the handover decision process. The performance of the newly developed k-step-ahead ANN-based RSS prediction algorithm was evaluated using simulated and real data acquired from available mobile communication networks. Results obtained in both cases show that the proposed algorithm is capable of predicting the RSS value ahead to within about ±0.0002 dB. The cascaded effect of the complete handover decision module was also evaluated, and the results show that the newly proposed hybrid approach was able to reduce the ping-pong effect associated with other handover techniques.
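
    A minimal sketch of the k-step-ahead RSS prediction idea is given below, using a simple least-squares autoregressive predictor in place of the paper's ANN and a single illustrative threshold in place of the fuzzy decision stage; the RSS trace, model order, horizon and threshold are all invented.

```python
import numpy as np

def fit_ar_predictor(rss, order=4):
    """Least-squares AR(order) model for one-step-ahead RSS prediction."""
    X = np.array([rss[i:i + order] for i in range(len(rss) - order)])
    y = rss[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_k_steps(rss, coeffs, k):
    """Iterate the one-step predictor k steps into the future."""
    window = list(rss[-len(coeffs):])
    for _ in range(k):
        window.append(float(np.dot(coeffs, window[-len(coeffs):])))
    return window[-1]

# Synthetic RSS trace [dBm] drifting downward as the terminal leaves the serving cell.
t = np.arange(100)
rss = -70.0 - 0.15 * t + 0.5 * np.random.randn(t.size)

coeffs = fit_ar_predictor(rss)
rss_future = predict_k_steps(rss, coeffs, k=5)

HANDOVER_THRESHOLD = -85.0   # illustrative threshold standing in for the fuzzy stage
print(f"predicted RSS in 5 steps: {rss_future:.1f} dBm")
print("trigger handover" if rss_future < HANDOVER_THRESHOLD else "stay on serving cell")
```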

  18. The Promise and Pitfalls of Algorithmic Governance for Developing Societies

    Directory of Open Access Journals (Sweden)

    Rick SEARLE

    2016-06-01

    Full Text Available Many democracies in an early stage of development, such as Nigeria, experience a period of endemic corruption and difficulty in providing needed public services. The careful use of algorithms may help new democracies transition to a more objective, equitable, and accountable form of governance, though technology should not be viewed as a panacea for structural problems, nor as free of challenges of its own.

  19. Development of algorithm for single axis sun tracking system

    Science.gov (United States)

    Yi, Lim Zi; Singh, Balbir Singh Mahinder; Ching, Dennis Ling Chuan; Jin, Calvin Low Eu

    2016-11-01

    The output power from a solar panel depends on the amount of sunlight intercepted by the photovoltaic (PV) solar panel. The value of solar irradiance varies with the changing position of the sun and with local meteorological conditions, which causes the output power of a PV-based solar electricity generating system (SEGS) to fluctuate as well. In this paper, the focus is on the integration of a solar tracking system with a performance analyzer system through the development of an algorithm for optimizing the performance of the SEGS. The proposed algorithm displays real-time processed data that enables users to understand the trend of the SEGS output for maintenance prediction and optimization purposes.
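
    Although the record does not give the tracking equations, a single-axis tracker ultimately follows the sun's computed position. The sketch below uses standard solar-geometry formulas (Cooper's declination approximation, hour angle, elevation and azimuth) rather than the paper's algorithm; the latitude, day of year and solar time are example values.

```python
import numpy as np

def solar_elevation_azimuth(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth [deg] from standard solar geometry."""
    lat = np.radians(lat_deg)
    # Declination (Cooper's approximation).
    decl = np.radians(23.45) * np.sin(np.radians(360.0 * (284 + day_of_year) / 365.0))
    hour_angle = np.radians(15.0 * (solar_hour - 12.0))   # 15 degrees per hour from solar noon
    sin_el = np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(hour_angle)
    elevation = np.arcsin(sin_el)
    cos_az = (np.sin(decl) - np.sin(elevation) * np.sin(lat)) / (np.cos(elevation) * np.cos(lat))
    azimuth = np.arccos(np.clip(cos_az, -1.0, 1.0))       # measured from north, morning side
    if hour_angle > 0:                                    # afternoon: mirror past south
        azimuth = 2 * np.pi - azimuth
    return np.degrees(elevation), np.degrees(azimuth)

# Example: latitude 4.4 deg N, day 315 of the year, 10:30 solar time.
el, az = solar_elevation_azimuth(4.4, 315, 10.5)
print(f"sun at elevation {el:.1f} deg, azimuth {az:.1f} deg -> command for the single-axis tracker")
```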

  20. Evaluation of the impact of convolution masks on algorithm to supervise scenery changes at space vehicle integration pads

    Directory of Open Access Journals (Sweden)

    Francisco Carlos P. Bizarria

    2009-06-01

    Full Text Available The Satellite Launch Vehicle developed in Brazil employs a specialized unit at the launch center known as the Movable Integration Tower. On that tower, fixed and movable work floors are installed for use by specialists, during predefined periods of time, to carry out tests mainly related to the pre-launch phase of the vehicle. Outside those periods it is necessary to detect unexpected movements of platforms and unauthorized people on the site. Within that context, this work evaluates how different convolution-mask resolutions and tolerances affect the efficiency of a proposed algorithm to supervise scenery changes on these work floors. The results obtained from this evaluation are satisfactory and show that the proposed algorithm is suitable for the purpose for which it is intended.
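
    The record evaluates convolution-mask resolutions and tolerances for a scenery-change supervision algorithm; the sketch below shows one generic way such a detector can work, smoothing a frame difference with an averaging convolution mask and counting pixels above a tolerance. The synthetic frames, mask size, tolerance and pixel-count threshold are invented, and this is not the algorithm from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def scenery_changed(reference, current, mask_size=5, tolerance=12.0, min_pixels=50):
    """Flag a scenery change: smooth |current - reference| with an averaging
    convolution mask and count pixels exceeding the tolerance."""
    diff = np.abs(current.astype(float) - reference.astype(float))
    mask = np.ones((mask_size, mask_size)) / mask_size**2
    smoothed = convolve2d(diff, mask, mode="same", boundary="symm")
    changed_pixels = int(np.count_nonzero(smoothed > tolerance))
    return changed_pixels > min_pixels, changed_pixels

# Synthetic frames: an object appears in a 20x20 region of the work floor.
rng = np.random.default_rng(0)
reference = rng.integers(90, 110, size=(120, 160)).astype(np.uint8)
current = reference.copy()
current[40:60, 70:90] += 60

changed, n = scenery_changed(reference, current)
print("change detected:", changed, "| pixels over tolerance:", n)
```

    Larger masks suppress camera noise but blur small intrusions, which is exactly the resolution/tolerance trade-off the record evaluates.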

  1. Algorithm for automatic forced spirometry quality assessment: technological developments.

    Science.gov (United States)

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.

  2. Algorithm for automatic forced spirometry quality assessment: technological developments.

    Directory of Open Access Journals (Sweden)

    Umberto Melia

    Full Text Available We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.

  3. Genetic Algorithms for Development of New Financial Products

    Directory of Open Access Journals (Sweden)

    Eder Oliveira Abensur

    2007-06-01

    Full Text Available New Product Development (NPD) is recognized as a fundamental activity that has a relevant impact on the performance of companies. Despite the relevance of the financial market, there is a lack of work on new financial product development. The aim of this research is to propose the use of Genetic Algorithms (GA) as an alternative procedure for evaluating the most favorable combination of variables for the product launch. The paper focuses on: (i) determining the essential variables of the financial product studied (an investment fund); (ii) determining how to evaluate the success of a new investment fund launch; and (iii) how GA can be applied to the financial product development problem. The proposed framework was tested using 4 years of real data from the Brazilian financial market, and the results suggest that this is an innovative development methodology and useful for designing complex financial products with many attributes.

  4. Development of a validation algorithm for 'present on admission' flagging

    Directory of Open Access Journals (Sweden)

    Cheng Diana

    2009-12-01

    Full Text Available Abstract. Background: The use of routine hospital data for understanding patterns of adverse outcomes has been limited in the past by the fact that pre-existing and post-admission conditions have been indistinguishable. The use of a 'Present on Admission' (or POA) indicator to distinguish pre-existing or co-morbid conditions from those arising during the episode of care has been advocated in the US for many years as a tool to support quality assurance activities and improve the accuracy of risk adjustment methodologies. The USA, Australia and Canada now all assign a flag to indicate the timing of onset of diagnoses. For quality improvement purposes, it is the 'not-POA' diagnoses (that is, those acquired in hospital) that are of interest. Methods: Our objective was to develop an algorithm for assessing the validity of assignment of 'not-POA' flags. We undertook expert review of the International Classification of Diseases, 10th Revision, Australian Modification (ICD-10-AM) to identify conditions that could not plausibly be hospital-acquired. The resulting computer algorithm was tested against all diagnoses flagged as complications in the Victorian (Australia) Admitted Episodes Dataset, 2005/06. Measures reported include rates of appropriate assignment of the new Australian 'Condition Onset' flag by ICD chapter, and patterns of invalid flagging. Results: Of 18,418 diagnosis codes reviewed, 93.4% (n = 17,195) reflected agreement on flagging status by at least 2 of 3 reviewers (including 64.4% unanimous agreement; Fleiss' Kappa: 0.61). In tests of the new algorithm, 96.14% of all hospital-acquired diagnosis codes flagged were found to be valid in the Victorian records analysed. A lower proportion of individual codes was judged to be acceptably flagged (76.2%), but this reflected a high proportion of codes used … Conclusion: An indicator variable about the timing of occurrence of diagnoses can greatly expand the use of routinely coded data for hospital quality…

  5. Development of a New Fractal Algorithm to Predict Quality Traits of MRI Loins

    DEFF Research Database (Denmark)

    Caballero, Daniel; Caro, Andrés; Amigo, José Manuel

    2017-01-01

    …to analyze MRI could be another possibility for this purpose. In this paper, a new fractal algorithm is developed to obtain features from MRI based on fractal characteristics. This algorithm is called OPFTA (One Point Fractal Texture Algorithm). Three fractal algorithms were tested in this study: CFA … (Classical Fractal Algorithm), FTA (Fractal Texture Algorithm) and OPFTA. The results obtained by means of these three fractal algorithms were correlated to the results obtained by means of physico-chemical methods. OPFTA and FTA achieved correlation coefficients higher than 0.75 and CFA reached low…

  6. Development of Navigation Control Algorithm for AGV Using D* search Algorithm

    Directory of Open Access Journals (Sweden)

    Jeong Geun Kim

    2013-06-01

    Full Text Available In this paper, we present a navigation control algorithm for Automatic Guided Vehicles (AGV) that move in industrial environments containing static and moving obstacles, using the D* algorithm. This algorithm can efficiently plan paths in unknown, partially known and changing environments. To apply the D* search algorithm, a grid map representing the known environment is generated. Using the laser scanner LMS-151 and the laser navigation sensor NAV-200, the grid map is updated according to changes in the environment and obstacles. When the AGV discovers new map information, such as previously unknown obstacles, it adds the information to its map and re-plans a new shortest path from its current coordinates to the given goal coordinates. It repeats this process until it reaches the goal coordinates. The algorithm is verified through simulation and experiment. The simulation and experimental results show that the algorithm can be used to move the AGV successfully to the goal position while avoiding unknown moving and static obstacles. [Keywords: navigation control algorithm; Automatic Guided Vehicles (AGV); D* search algorithm]
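
    D* plans a shortest path on a grid and repairs it incrementally as new obstacles are sensed. The Python sketch below is a simplified stand-in: it runs plain A* on a 4-connected occupancy grid and simply replans from scratch when the map changes, rather than performing D*'s incremental repair; the grid, start and goal are invented.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), start)]
    g_cost, came_from = {start: 0}, {start: None}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt], came_from[nxt] = ng, cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

grid = [[0] * 6 for _ in range(5)]
print("initial path:", astar(grid, (0, 0), (4, 5)))

# The laser scanner reports new obstacles: update the occupancy grid and replan.
grid[2][2] = grid[2][3] = 1
print("replanned path:", astar(grid, (0, 0), (4, 5)))
```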

  7. Development of wind turbine control algorithms for industrial use

    Energy Technology Data Exchange (ETDEWEB)

    Van Engelen, T.G.; Van der Hooft, E.L; Schaak, P. [ECN Wind, Petten (Netherlands)

    2001-09-01

    A tool has been developed for design of industry-ready control algorithms. These pertain to the prevailing wind turbine type: variable speed, active pitch to vane. Main control objectives are rotor speed regulation, energy yield optimisation and structural fatigue reduction. These objectives are satisfied through individually tunable control loops. The split-up in loops for power control and damping of tower and drive-train resonance is allowed by the use of dedicated filters. Time domain simulation results from the design tool show high-performance power regulation by feed forward of the estimated wind speed and enhanced damping in sideward tower bending by generator torque control. The tool for control design has been validated through extensive test runs with the authorised aerodynamic code PHATAS-IV. 7 refs.

  8. Further development of an improved altimeter wind speed algorithm

    Science.gov (United States)

    Chelton, Dudley B.; Wentz, Frank J.

    1986-01-01

    A previous altimeter wind speed retrieval algorithm was developed on the basis of wind speeds in the limited range from about 4 to 14 m/s. In this paper, a new approach which gives a wind speed model function applicable over the range 0 to 21 m/s is used. The method is based on comparing 50 km along-track averages of the altimeter normalized radar cross section measurements with neighboring off-nadir scatterometer wind speed measurements. The scatterometer winds are constructed from 100 km binned measurements of radar cross section and are located approximately 200 km from the satellite subtrack. The new model function agrees very well with earlier versions up to wind speeds of 14 m/s, but differs significantly at higher wind speeds. The relevance of these results to the Geosat altimeter launched in March 1985 is discussed.

  9. Development of computer algorithms for radiation treatment planning.

    Science.gov (United States)

    Cunningham, J R

    1989-06-01

    As a result of an analysis of data relating tissue response to radiation absorbed dose, the ICRU has recommended a target accuracy of +/- 5% for dose delivery in radiation therapy. This is a difficult overall objective to achieve because of the many steps that make up a course of radiotherapy. The calculation of absorbed dose is only one of these steps, so to achieve an overall accuracy of better than +/- 5% the accuracy in dose calculation must be better still. The physics behind the problem is sufficiently complicated that no exact method of calculation has been found, and consequently approximate solutions must be used. The development of computer algorithms for this task involves the search for better and better approximate solutions. To achieve the desired target of accuracy, a fairly sophisticated calculation procedure must be used. Only when this is done can we hope to further improve our knowledge of the way in which tissues respond to radiation treatments.

  10. QAP collaborates in development of the sick child algorithm.

    Science.gov (United States)

    1994-01-01

    Algorithms which specify procedures for proper diagnosis and treatment of common diseases have been available to primary health care services in less developed countries for the past decade. Whereas each algorithm has usually been limited to a single ailment, children often present with the need for more comprehensive assessment and treatment. Treating just one illness in these children leads to incomplete treatment or missed opportunities for preventive services. To address this problem, the World Health Organization has recently developed a Sick Child Algorithm (SCA) for children aged 2 months-5 years. In addition to specifying case management procedures for acute respiratory illness, diarrhea/dehydration, fever, otitis, and malnutrition, the SCA prompts a check of the child's immunization status. The specificity and sensitivity of this SCA were field-tested in Kenya and the Gambia. In Kenya, the Malaria Branch of the US Centers for Disease Control and Prevention tested the SCA under typical conditions in Siaya District. The Quality Assurance Project of the Center for Human Services carried out a parallel facility-based systems analysis at the request of the Malaria Branch. The assessment, which took place in September-October 1993, took the form of observations of provider/patient interactions, provider interviews, and verification of supplies and equipment in 19 rural health facilities to determine how current practices compare to actions prescribed by the SCA. This will reveal the type and amount of technical support needed to achieve conformity with the SCA's clinical practice recommendations. The data will allow officials to devise proper training programs and will predict the quality improvements likely to be achieved through adoption of the SCA, in terms of effective case treatment and fewer missed immunization opportunities. Preliminary analysis indicates that primary health care delivery in Siaya deviates in several significant respects from the performance…

  11. Developing a corpus to verify the performance of a tone labelling algorithm

    CSIR Research Space (South Africa)

    Raborife, M

    2011-11-01

    Full Text Available The authors report on a study that involved the development of a corpus used to verify the performance of two tone labelling algorithms, with one algorithm being an improvement on the other. These algorithms were developed for speech synthesis...

  12. Development of hybrid genetic algorithms for product line designs.

    Science.gov (United States)

    Balakrishnan, P V Sundar; Gupta, Rakesh; Jacob, Varghese S

    2004-02-01

    In this paper, we investigate the efficacy of artificial intelligence (AI) based meta-heuristic techniques, namely genetic algorithms (GAs), for the product line design problem. This work extends previously developed methods for the single product design problem. We conduct a large-scale simulation study to determine the effectiveness of such an AI-based technique for providing good solutions and benchmark its performance against the current dominant approach of beam search (BS). We investigate the potential advantages of developing hybrid models, and then implement and study such hybrid models using two very distinct approaches: seeding the initial GA population with the BS solution, and employing the BS solution as part of the GA operator's process. We go on to examine the impact of two alternate string representation formats on the quality of the solutions obtained by the proposed techniques. We also explicitly investigate a critical managerial factor, attribute importance, in terms of its impact on the solutions obtained by the alternate modeling procedures. The alternate techniques are then evaluated, using statistical analysis of variance on a fairly large number of data sets, as to the quality of the solutions obtained with respect to the state-of-the-art benchmark and in terms of their ability to provide multiple, unique product line options.

  13. Algorithm development for Prognostics and Health Management (PHM).

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Campbell, James E.; Doser, Adele Beatrice; Lowder, Kelly S.

    2003-10-01

    This report summarizes the results of a three-year LDRD project on prognostics and health management (PHM). 'Prognostics' refers to the capability to predict the probability of system failure over some future time interval (an alternative definition is the capability to predict the remaining useful life of a system). Prognostics are integrated with health monitoring (through inspections, sensors, etc.) to provide an overall PHM capability that optimizes maintenance actions and results in higher availability at a lower cost. Our goal in this research was to develop PHM tools that could be applied to a wide variety of equipment (repairable, non-repairable, manufacturing, weapons, battlefield equipment, etc.) and require minimal customization to move from one system to the next. Thus, our approach was to develop a toolkit of reusable software objects/components and an architecture for their use. We have developed two software tools: an Evidence Engine and a Consequence Engine. The Evidence Engine integrates information from a variety of sources in order to take into account all the evidence that impacts a prognosis for system health. The Evidence Engine has capabilities for feature extraction, trend detection, information fusion through Bayesian Belief Networks (BBN), and estimation of remaining useful life. The Consequence Engine involves algorithms to analyze the consequences of various maintenance actions. The Consequence Engine takes as input a maintenance and use schedule, spares information, and time-to-failure data on components, then generates maintenance and failure events, and evaluates performance measures such as equipment availability, mission capable rate, time to failure, and cost. This report summarizes the capabilities we have developed, describes the approach and architecture of the two engines, and provides examples of their use.

  14. Quantification of distention in CT colonography: development and validation of three computer algorithms.

    Science.gov (United States)

    Hung, Peter W; Paik, David S; Napel, Sandy; Yee, Judy; Jeffrey, R Brooke; Steinauer-Gebauer, Andreas; Min, Juno; Jathavedam, Ashwin; Beaulieu, Christopher F

    2002-02-01

    Three bowel distention-measuring algorithms for use at computed tomographic (CT) colonography were developed, validated in phantoms, and applied to a human CT colonographic data set. The three algorithms are the cross-sectional area method, the moving spheres method, and the segmental volume method. Each algorithm effectively quantified distention, but accuracy varied between methods. Clinical feasibility was demonstrated. Depending on the desired spatial resolution and accuracy, each algorithm can quantitatively depict colonic diameter in CT colonography.

  15. Outcomes analysis in epistaxis management: development of a therapeutic algorithm.

    Science.gov (United States)

    Shargorodsky, Josef; Bleier, Benjamin S; Holbrook, Eric H; Cohen, Jeffrey M; Busaba, Nicolas; Metson, Ralph; Gray, Stacey T

    2013-09-01

    This study explored the outcomes of epistaxis treatment modalities to optimize management and enable the development of a therapeutic algorithm. Case series with chart review. Tertiary care hospital. Adult patients presenting between 2005 and 2011 with epistaxis underwent cauterization, tamponade, and/or proximal vascular control. Outcomes of treatment modalities were compared. Multivariate logistic regression was used to calculate odds ratios (ORs) and 95% confidence intervals (CIs), adjusting for coagulopathy, hypertension, and bleeding site. The population included 147 patients (94 men, 53 women). For initial epistaxis, nondissolvable packing demonstrated the highest initial treatment failure rate of 57.4% (OR, 3.37; 95% CI, 1.33-8.59 compared with cautery). No significant differences were noted among initial posterior epistaxis treatment modalities. Length of nondissolvable pack placement for 3, 4, or 5 days had no significant impact on recurrence. Among patients who failed initial management, those who next underwent cautery or proximal vascular control required a significantly shorter inpatient stay of 5.3 vs 6.8 days compared with those who underwent packing (OR, 0.16; 95% CI, 0.04-0.68). There were no treatment failures following surgical arterial ligation. Initial management of anterior epistaxis with chemical cautery had a higher success rate and a lower number of total required interventions than did nondissolvable packing. Duration of packing did not affect recurrence. In patients who failed initially, progression to cautery or proximal vascular control led to significantly shorter inpatient stays than did packing.

  16. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    Energy Technology Data Exchange (ETDEWEB)

    2007-05-01

    Today’s society relies upon an array of complex national and international infrastructure networks such as transportation, telecommunication, financial and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission for providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin’s theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets. This includes identifying optimum assets to enforce or protect, testing the addition of or change to infrastructure before implementation, or finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. Then a test scenario illustrates the integration with CIMS© and the preliminary results.

  17. A DIFFERENTIAL EVOLUTION ALGORITHM DEVELOPED FOR A NURSE SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    Shahnazari-Shahrezaei, P.

    2012-11-01

    Full Text Available Nurse scheduling is a type of manpower allocation problem that tries to satisfy hospital managers' objectives and nurses' preferences as much as possible by generating fair shift schedules. This paper presents a nurse scheduling problem based on a real case study, and proposes two meta-heuristics, a differential evolution algorithm (DE) and a greedy randomised adaptive search procedure (GRASP), to solve it. To investigate the efficiency of the proposed algorithms, two problems are solved. Furthermore, some comparison metrics are applied to examine the reliability of the proposed algorithms. The computational results in this paper show that the proposed DE outperforms the GRASP.
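
    For readers unfamiliar with differential evolution, the sketch below implements the canonical DE/rand/1/bin scheme on a continuous toy objective. It is not the paper's scheduling encoding (a real nurse roster would be optimized through a penalty-based cost over shift assignments); the population size, F, CR and generation count are arbitrary.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=20, F=0.8, CR=0.9, generations=200, seed=1):
    """Canonical DE/rand/1/bin minimization over box-constrained variables."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([obj(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = obj(trial)
            if f_trial <= fitness[i]:                # greedy one-to-one selection
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

sphere = lambda x: float(np.sum(x**2))               # toy stand-in objective
x_best, f_best = differential_evolution(sphere, bounds=[(-5, 5)] * 4)
print("best solution:", x_best, "cost:", f_best)
```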

  18. DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Srinivasan

    2010-11-01

    Full Text Available Monitoring the behavior and activities of people in video surveillance has gained many applications in computer vision. This paper proposes a new approach to model the human body in 2D view for activity analysis using a thinning algorithm. The first step of this work is background subtraction, which is achieved by a frame-differencing algorithm. A thinning algorithm is then used to find the skeleton of the human body. After thinning, thirteen feature points, such as terminating points, intersecting points, shoulder, elbow, and knee points, are extracted. This research work represents the body model in three different ways: a stick figure model, a patch model, and a rectangle body model. The activities of humans are analyzed with the help of the 2D model for predefined poses from monocular video data. Finally, the time consumption and efficiency of the proposed algorithm are evaluated.
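
    To illustrate the thinning and feature-point steps described above, the sketch below skeletonizes a synthetic silhouette with scikit-image and finds terminating and intersecting points by counting skeleton neighbours. It is a generic stand-in, not the paper's thinning algorithm, and the rectangular "body" mask is invented.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

# Synthetic binary silhouette standing in for a background-subtracted person.
mask = np.zeros((60, 40), dtype=bool)
mask[10:50, 18:22] = True     # torso and legs
mask[20:24, 8:32] = True      # arms

# Thinning step: reduce the silhouette to a one-pixel-wide skeleton.
skeleton = skeletonize(mask)

# Feature points: count the 8-connected skeleton neighbours of each skeleton pixel.
kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
neighbours = convolve(skeleton.astype(int), kernel, mode="constant")
end_points = np.argwhere(skeleton & (neighbours == 1))      # terminating points
branch_points = np.argwhere(skeleton & (neighbours >= 3))   # intersecting points

print("skeleton pixels:", int(skeleton.sum()))
print("terminating points:", len(end_points), "| intersecting points:", len(branch_points))
```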

  19. Developing a Learning Algorithm-Generated Empirical Relaxer

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Wayne [Univ. of Colorado, Boulder, CO (United States). Dept. of Applied Math; Kallman, Josh [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Toreja, Allen [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gallagher, Brian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jiang, Ming [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Laney, Dan [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-03-30

    One of the main difficulties when running Arbitrary Lagrangian-Eulerian (ALE) simulations is determining how much to relax the mesh during the Eulerian step. This determination is currently made by the user on a simulation-by-simulation basis. We present a Learning Algorithm-Generated Empirical Relaxer (LAGER) which uses a regressive random forest algorithm to automate this decision process. We also demonstrate that LAGER successfully relaxes a variety of test problems, maintains simulation accuracy, and has the potential to significantly decrease both the person-hours and computational hours needed to run a successful ALE simulation.
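
    As an illustration of the regression-forest idea behind LAGER, the sketch below trains scikit-learn's RandomForestRegressor to map made-up mesh-quality features to a relaxation amount. The feature names, synthetic data and model settings are assumptions for the sketch, not the LAGER implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Invented training set: mesh-quality features -> relaxation amount chosen by an expert.
rng = np.random.default_rng(42)
n = 500
features = rng.uniform(0.0, 1.0, size=(n, 3))   # e.g. cell skewness, aspect ratio, compression
relax = 0.6 * features[:, 0] + 0.3 * features[:, 1] ** 2 + 0.05 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(features, relax, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
print("suggested relaxation for a new zone:", model.predict([[0.8, 0.4, 0.1]])[0])
```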

  20. Parallel and Distributed Genetic Algorithm with Multiple-Objectives to Improve and Develop of Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Khalil Ibrahim Mohammad Abuzanouneh

    2016-05-01

    Full Text Available In this paper, we argue that the timetabling problem reflects the problem of scheduling university courses: a set of time periods and a group of instructors must be assigned to a set of lectures so that a set of hard constraints is satisfied and the cost of violating other constraints is reduced. This is an NP-hard problem, meaning informally that the number of operations needed to solve it grows exponentially with problem size. Constructing a timetable is one of the most complicated problems facing many universities, and it becomes harder with the size of the university's data and the overlap of disciplines between colleges. When a traditional evolutionary algorithm (EA) is unable to provide satisfactory results, a distributed EA (dEA), which deploys the population on distributed systems, offers an opportunity to solve extremely high-dimensional problems through distributed coevolution using a divide-and-conquer mechanism. Further, the distributed environment allows a dEA to maintain population diversity, thereby avoiding local optima and also facilitating multi-objective search. By employing different distributed models to parallelize the processing of EAs, we designed a genetic algorithm suitable for the university environment and the constraints faced when building a lecture timetable.

  1. Algorithm Development for the Two-Fluid Plasma Model

    Science.gov (United States)

    2009-02-17

    Fragments only: "…of m=0 sausage instabilities in an axisymmetric Z-pinch", Physics of Plasmas 13, 082310 (2006); A. Hakim and U. Shumlak, "Two-fluid physics and …"; "…accurate as the solution variables. The high-order representation of the solution variables satisfies the accuracy requirement to preserve the … [2] It also illustrates the dispersive nature of the waves, which makes capturing the effect difficult in MHD algorithms. The electromagnetic …"

  2. Algorithm Development for the Multi-Fluid Plasma Model

    Science.gov (United States)

    2011-05-30

    Fragments only: "…ities of a Hall-MHD wave increase without bound with wave number. The large wave speeds increase the stiffness of the equation system, making accurate … illustrates the dispersive nature of the waves, which makes capturing the effect difficult in MHD algorithms. The electromagnetic plasma shock serves to …"; surviving references include "Nonlinear full two-fluid study of m = 0 sausage instabilities in an axisymmetric Z pinch," Physics of Plasmas, 13(8):082310, 2006, and [5] A. Hakim and U. Shumlak.

  3. Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm

    Science.gov (United States)

    Wang, Qimei; Yang, Zhihong; Wang, Yong

    In this paper, an improved approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied to execute alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.
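
    The hybridization in the record alternates a population-based metaheuristic with Nelder-Mead local refinement. The sketch below shows only that refinement step, polishing a rough candidate with scipy's Nelder-Mead implementation on the Rosenbrock benchmark; the candidate point and solver options are example values, and the imperialist competitive algorithm itself is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Widely used benchmark function with its minimum of 0 at (1, 1)."""
    return float((1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2)

# Candidate solution as it might come out of the population-based stage (a rough guess);
# the simplex method then refines it locally.
candidate = np.array([0.6, 0.2])
refined = minimize(rosenbrock, candidate, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})

print("cost before refinement:", rosenbrock(candidate))
print("cost after refinement :", refined.fun, "at", refined.x)
```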

  4. A Developed Algorithm of Apriori Based on Association Analysis

    Institute of Scientific and Technical Information of China (English)

    LI Pingxiang; CHEN Jiangping; BIAN Fuling

    2004-01-01

    A method for mining frequent itemsets by evaluating their probability of support based on association analysis is presented. The method obtains the probability of every 1-itemset by scanning the database, then evaluates the probability of every 2-itemset, 3-itemset, and k-itemset from the frequent 1-itemsets to obtain all candidate frequent itemsets. The database is then scanned again to verify the support of the candidate frequent itemsets, and finally the frequent itemsets are mined. The method reduces the time spent scanning the database and shortens the computation time of the algorithm.
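
    For context, the sketch below implements the textbook Apriori procedure (level-wise candidate generation, pruning and support counting) that the record's probability-estimation variant builds on; the transactions and the support threshold are invented.

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Textbook Apriori: level-wise candidate generation and support counting."""
    n = len(transactions)
    sets = [frozenset(t) for t in transactions]
    support = lambda items: sum(1 for t in sets if items <= t) / n

    # Frequent 1-itemsets from a pass over the database.
    items = {i for t in sets for i in t}
    frequent = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}
    all_frequent, k = {}, 1
    while frequent:
        all_frequent.update({f: support(f) for f in frequent})
        k += 1
        # Join step: size-k candidates from frequent (k-1)-itemsets.
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # Prune step: every (k-1)-subset of a candidate must itself be frequent.
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        frequent = {c for c in candidates if support(c) >= min_support}
    return all_frequent

transactions = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}, {"milk"}]
for itemset, s in sorted(apriori(transactions).items(), key=lambda kv: -kv[1]):
    print(set(itemset), round(s, 2))
```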

  5. Update on Development of Mesh Generation Algorithms in MeshKit

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Vanderzee, Evan [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-09-30

    MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA1 libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKit are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.

  6. Editorial Commentary: The Importance of Developing an Algorithm When Diagnosing Hip Pain.

    Science.gov (United States)

    Coleman, Struan H

    2016-08-01

    The differential diagnosis of groin pain is broad and complex. Therefore, it is essential to develop an algorithm when differentiating the hip as a cause of groin pain from other sources. Selective injections in and around the hip can be helpful when making the diagnosis but are only one part of the algorithm.

  7. Development of Online Cognitive and Algorithm Tests as Assessment Tools in Introductory Computer Science Courses

    Science.gov (United States)

    Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul

    2012-01-01

    This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…

  8. Design patterns for the development of electronic health record-driven phenotype extraction algorithms.

    Science.gov (United States)

    Rasmussen, Luke V; Thompson, Will K; Pacheco, Jennifer A; Kho, Abel N; Carrell, David S; Pathak, Jyotishman; Peissig, Peggy L; Tromp, Gerard; Denny, Joshua C; Starren, Justin B

    2014-10-01

    Design patterns, in the context of software development and ontologies, provide generalized approaches and guidance to solving commonly occurring problems, or addressing common situations typically informed by intuition, heuristics and experience. While the biomedical literature contains broad coverage of specific phenotype algorithm implementations, no work to date has attempted to generalize common approaches into design patterns, which may then be distributed to the informatics community to efficiently develop more accurate phenotype algorithms. Using phenotyping algorithms stored in the Phenotype KnowledgeBase (PheKB), we conducted an independent iterative review to identify recurrent elements within the algorithm definitions. We extracted and generalized recurrent elements in these algorithms into candidate patterns. The authors then assessed the candidate patterns for validity by group consensus, and annotated them with attributes. A total of 24 electronic Medical Records and Genomics (eMERGE) phenotypes available in PheKB as of 1/25/2013 were downloaded and reviewed. From these, a total of 21 phenotyping patterns were identified, which are available as an online data supplement. Repeatable patterns within phenotyping algorithms exist, and when codified and cataloged may help to educate both experienced and novice algorithm developers. The dissemination and application of these patterns has the potential to decrease the time to develop algorithms, while improving portability and accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Evaluating Knowledge Structure-Based Adaptive Testing Algorithms and System Development

    Science.gov (United States)

    Wu, Huey-Min; Kuo, Bor-Chen; Yang, Jinn-Min

    2012-01-01

    In recent years, many computerized test systems have been developed for diagnosing students' learning profiles. Nevertheless, it remains a challenging issue to find an adaptive testing algorithm to both shorten testing time and precisely diagnose the knowledge status of students. In order to find a suitable algorithm, four adaptive testing…

  11. Developing a paradigm of drug innovation: an evaluation algorithm.

    Science.gov (United States)

    Caprino, Luciano; Russo, Pierluigi

    2006-11-01

    Assessment of drug innovation is a burning issue because it involves so many different perspectives, mainly those of patients, decision- and policy-makers, regulatory authorities and pharmaceutical companies. Moreover, the innovative value of a new medicine is usually an intrinsic property of the compound, but it also depends on the specific context in which the medicine is introduced and the availability of other medicines for treating the same clinical condition. Thus, a model designed to assess drug innovation should be able to capture the intrinsic properties of a compound (which usually emerge during R&D) and/or modification of its innovative value with time. Here we describe the innovation assessment algorithm (IAA), a simulation model for assessing drug innovation. IAA provides a score of drug innovation by assessing information generated during both the pre-marketing and the post-marketing authorization phase.

  12. Ice classification algorithm development and verification for the Alaska SAR Facility using aircraft imagery

    Science.gov (United States)

    Holt, Benjamin; Kwok, Ronald; Rignot, Eric

    1989-01-01

    The Alaska SAR Facility (ASF) at the University of Alaska, Fairbanks is a NASA program designed to receive, process, and archive SAR data from ERS-1 and to support investigations that will use this regional data. As part of ASF, specialized subsystems and algorithms to produce certain geophysical products from the SAR data are under development. Of particular interest are ice motion, ice classification, and ice concentration. This work focuses on the algorithm under development for ice classification, and the verification of the algorithm using C-band aircraft SAR imagery recently acquired over the Alaskan arctic.

  13. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing both efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm are automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  14. Development of Automatic Cluster Algorithm for Microcalcification in Digital Mammography

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Seok Yoon [Dept. of Medical Engineering, Korea University, Seoul (Korea, Republic of); Kim, Chang Soo [Dept. of Radiological Science, College of Health Sciences, Catholic University of Pusan, Pusan (Korea, Republic of)

    2009-03-15

    Digital mammography is an efficient imaging technique for the detection and diagnosis of breast pathological disorders. Six mammographic criteria, including the number of clusters and the number, size, extent, and morphologic shape of microcalcifications, as well as the presence of a mass, were reviewed and their correlation with the pathologic diagnosis was evaluated. It is very important to find breast cancer early, when treatment can reduce deaths from breast cancer and the extent of breast surgery. In breast cancer screening, mammography is typically used to view the internal organization of the breast. Clustered microcalcifications on mammography represent an important feature of a breast mass, especially of intraductal carcinoma. Because microcalcification is highly correlated with breast cancer, a microcalcification cluster can be very helpful for the clinician in predicting breast cancer. For this study, three steps of quantitative evaluation are proposed: DoG filtering, adaptive thresholding, and expectation maximization. Through the proposed algorithm, the number of calcifications and the length of each cluster in the microcalcification distribution can be measured and used as indicators for the primary diagnosis, supporting automatic diagnosis of breast cancer.
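
    A minimal sketch of the three-step pipeline named above (DoG filtering, adaptive thresholding, EM clustering), assuming the mammogram is a 2-D NumPy array; the filter sizes, the threshold, and the number of clusters are illustrative choices, not the values used in the study.

    ```python
    import numpy as np
    from scipy import ndimage
    from sklearn.mixture import GaussianMixture

    def find_clusters(mammogram, n_clusters=2):
        # 1. Difference-of-Gaussians filter emphasizes small bright spots.
        dog = (ndimage.gaussian_filter(mammogram, 1.0)
               - ndimage.gaussian_filter(mammogram, 3.0))
        # 2. Adaptive threshold: keep pixels well above their local mean.
        local_mean = ndimage.uniform_filter(dog, size=25)
        candidates = dog > local_mean + 3.0 * dog.std()
        ys, xs = np.nonzero(candidates)
        coords = np.column_stack([ys, xs]).astype(float)
        if len(coords) < n_clusters:
            return []
        # 3. EM (Gaussian mixture) groups candidate pixels into clusters.
        gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(coords)
        labels = gmm.predict(coords)
        clusters = []
        for k in range(n_clusters):
            pts = coords[labels == k]
            extent = pts.max(axis=0) - pts.min(axis=0) if len(pts) else np.zeros(2)
            clusters.append({"count": int(len(pts)),
                             "extent_px": extent.tolist()})
        return clusters
    ```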

  15. Development of new flux splitting schemes. [computational fluid dynamics algorithms

    Science.gov (United States)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1992-01-01

    Maximizing both accuracy and efficiency has been the primary objective in designing a numerical algorithm for computational fluid dynamics (CFD). This is especially important for solutions of complex three dimensional systems of Navier-Stokes equations which often include turbulence modeling and chemistry effects. Recently, upwind schemes have been well received for their capability in resolving discontinuities. With this in mind, presented are two new flux splitting techniques for upwind differencing. The first method is based on High-Order Polynomial Expansions (HOPE) of the mass flux vector. The second new flux splitting is based on the Advection Upwind Splitting Method (AUSM). The calculation of the hypersonic conical flow demonstrates the accuracy of the splitting in resolving the flow in the presence of strong gradients. A second series of tests involving the two dimensional inviscid flow over a NACA 0012 airfoil demonstrates the ability of the AUSM to resolve the shock discontinuity at transonic speed. A third case calculates a series of supersonic flows over a circular cylinder. Finally, the fourth case deals with tests of a two dimensional shock wave/boundary layer interaction.
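
    For readers unfamiliar with AUSM, the sketch below computes a first-order AUSM interface flux for the 1-D Euler equations using the standard Mach-number and pressure splittings from the open literature; it is a generic textbook version, not the flux routines developed in this record.

    ```python
    import numpy as np

    GAMMA = 1.4

    def m_plus(M):   # split Mach number contributed by the left state
        return 0.25 * (M + 1.0) ** 2 if abs(M) <= 1.0 else 0.5 * (M + abs(M))

    def m_minus(M):  # split Mach number contributed by the right state
        return -0.25 * (M - 1.0) ** 2 if abs(M) <= 1.0 else 0.5 * (M - abs(M))

    def p_plus(M, p):
        if abs(M) <= 1.0:
            return 0.25 * p * (M + 1.0) ** 2 * (2.0 - M)
        return 0.5 * p * (1.0 + np.sign(M))

    def p_minus(M, p):
        if abs(M) <= 1.0:
            return 0.25 * p * (M - 1.0) ** 2 * (2.0 + M)
        return 0.5 * p * (1.0 - np.sign(M))

    def ausm_flux(rhoL, uL, pL, rhoR, uR, pR):
        """First-order AUSM interface flux for the 1-D Euler equations."""
        aL, aR = np.sqrt(GAMMA * pL / rhoL), np.sqrt(GAMMA * pR / rhoR)
        HL = aL ** 2 / (GAMMA - 1.0) + 0.5 * uL ** 2   # total enthalpy
        HR = aR ** 2 / (GAMMA - 1.0) + 0.5 * uR ** 2
        m_half = m_plus(uL / aL) + m_minus(uR / aR)    # interface Mach number
        p_half = p_plus(uL / aL, pL) + p_minus(uR / aR, pR)
        # The convective part is fully upwinded on the sign of m_half.
        if m_half >= 0.0:
            phi = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL])
        else:
            phi = np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])
        return m_half * phi + np.array([0.0, p_half, 0.0])

    # Sod-like interface states (left: rho=1, u=0, p=1; right: 0.125, 0, 0.1)
    print(ausm_flux(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))
    ```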

  16. Development and validation of an algorithm to identify planned readmissions from claims data

    Science.gov (United States)

    Horwitz, Leora I.; Grady, Jacqueline N.; Cohen, Dorothy; Lin, Zhenqiu; Volpe, Mark; Ngo, Chi; Masica, Andrew L.; Long, Theodore; Wang, Jessica; Keenan, Megan; Montague, Julia; Suter, Lisa G.; Ross, Joseph S.; Drye, Elizabeth E.; Krumholz, Harlan M.; Bernheim, Susannah M.

    2017-01-01

    Background It is desirable not to include planned readmissions in readmission measures because they represent deliberate, scheduled care. Objectives To develop an algorithm to identify planned readmissions, describe its performance characteristics and identify improvements. Design Consensus-driven algorithm development and chart review validation study at 7 acute care hospitals in 2 health systems. Patients For development, all discharges qualifying for the publicly-reported hospital-wide readmission measure. For validation, all qualifying same-hospital readmissions that were characterized by the algorithm as planned, and a random sampling of same-hospital readmissions that were characterized as unplanned. Measurements We calculated weighted sensitivity and specificity, and positive and negative predictive values of the algorithm (version 2.1), compared to gold standard chart review. Results In consultation with 27 experts, we developed an algorithm that characterizes 7.8% of readmissions as planned. For validation we reviewed 634 readmissions. The weighted sensitivity of the algorithm was 45.1% overall; 50.9% in large teaching centers and 40.2% in smaller community hospitals. The weighted specificity was 95.9%, positive predictive value was 51.6% and negative predictive value was 94.7%. We identified 4 minor changes to improve algorithm performance. The revised algorithm had a weighted sensitivity 49.8% (57.1% at large hospitals), weighted specificity 96.5%, positive predictive value 58.7%, and negative predictive value 94.5%. Positive predictive value was poor for the two most common potentially planned procedures: diagnostic cardiac catheterization (25%) and procedures involving cardiac devices (33%). Conclusions An administrative claims-based algorithm to identify planned readmissions is feasible and can facilitate public reporting of primarily unplanned readmissions. PMID:26149225

  17. An Algorithm to Identify the Development of Lymphedema After Breast Cancer Treatment

    Science.gov (United States)

    Yen, Tina W.F.; Laud, Purushuttom W.; Sparapani, Rodney A.; Li, Jianing; Nattinger, Ann B.

    2014-01-01

    Purpose Large, population-based studies are needed to better understand lymphedema, a major source of morbidity among breast cancer survivors. One challenge is identifying lymphedema in a consistent fashion. We sought to develop and validate an algorithm using Medicare claims to identify lymphedema after breast cancer surgery. Methods From a population-based cohort of 2,597 elderly (65+) women who underwent incident breast cancer surgery in 2003 and completed annual telephone surveys through 2008, two algorithms were developed using Medicare claims from half of the cohort and validated in the remaining half. A lymphedema-positive case was defined by patient report. Results A simple two ICD-9 code algorithm had 69% sensitivity, 96% specificity, positive predictive value >75% if prevalence of lymphedema is >16%, negative predictive value >90%, and area under receiver operating characteristic curve (AUC) of 0.82 (95% CI: 0.80 – 0.85). A more sophisticated, multi-step algorithm utilizing diagnostic and treatment codes, logistic regression methods, and a reclassification step performed similarly to the two-code algorithm. Conclusions Given the similar performance of the two validated algorithms, the ease of implementing the simple algorithm and the fact that the simple algorithm does not include treatment codes, we recommend that this two-code algorithm be validated in and applied to other population-based breast cancer cohorts. Implications for Cancer Survivors This validated lymphedema algorithm will facilitate the conduct of large, population-based studies in key areas (incidence rates, risk factors, prevention measures, treatment and cost/economic analyses) that are critical to advancing our understanding and management of this challenging and debilitating chronic disease. PMID:25187004

  18. Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.

    Science.gov (United States)

    Wang, Jiao; Deng, Zhiqiang

    2017-06-01

    A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the artificial neural network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ data from US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm can be used to map SST in both deep offshore and, in particular, shallow nearshore waters at a high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore to nearshore waters. Applications of the ANN algorithm require only the remotely sensed values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explain 82-90% of the variation in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters, where important coastal resources are located and existing algorithms are either not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful for coastal resource management.
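
    A rough sketch of the regression setup described above, using scikit-learn's multilayer perceptron as a stand-in for the ANN: two thermal-band values in, SST out. The file name matchups.csv and its column layout are hypothetical placeholders for a satellite/in situ matchup data set.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical matchup file: band-31 value, band-32 value, in situ SST.
    data = np.loadtxt("matchups.csv", delimiter=",", skiprows=1)
    X, y = data[:, :2], data[:, 2]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=1)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16),
                                       max_iter=2000, random_state=1))
    model.fit(X_tr, y_tr)
    print("R^2 on held-out matchups:", model.score(X_te, y_te))
    ```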

  19. Development and validation of algorithms for the detection of statin myopathy signals from electronic medical records.

    Science.gov (United States)

    Chan, S L; Tham, M Y; Tan, S H; Loke, C; Foo, Bpq; Fan, Y; Ang, P S; Brunham, L R; Sung, C

    2017-05-01

    The purpose of this study was to develop and validate sensitive algorithms to detect hospitalized statin-induced myopathy (SIM) cases from electronic medical records (EMRs). We developed four algorithms on a training set of 31,211 patient records from a large tertiary hospital. We determined the performance of these algorithms against manually curated records. The best algorithm used a combination of elevated creatine kinase (>4× the upper limit of normal (ULN)), discharge summary, diagnosis, and absence of statin in discharge medications. This algorithm achieved a positive predictive value of 52-71% and a sensitivity of 72-78% on two validation sets of >30,000 records each. Using this algorithm, the incidence of SIM was estimated at 0.18%. This algorithm captured three times more rhabdomyolysis cases than spontaneous reports (95% vs. 30% of manually curated gold standard cases). Our results show the potential power of utilizing data and text mining of EMRs to enhance pharmacovigilance activities. © 2016 American Society for Clinical Pharmacology and Therapeutics.
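
    The best-performing rule described above combines an elevated creatine kinase, a myopathy mention in the discharge documentation, and the absence of a statin among discharge medications; the toy function below encodes that combination. All field names, the keyword list, and the ULN value are illustrative assumptions, not the study's actual EMR schema.

    ```python
    def flag_possible_sim(admission, uln=200.0):
        """Flag a hospitalization as a possible statin-induced myopathy signal."""
        statins = {"atorvastatin", "simvastatin", "rosuvastatin", "pravastatin",
                   "lovastatin", "fluvastatin", "pitavastatin"}
        keywords = ("myopathy", "myositis", "rhabdomyolysis")

        ck_high = admission["peak_ck"] > 4 * uln          # CK > 4x ULN
        text = (admission["discharge_summary"] + " "
                + " ".join(admission["diagnoses"])).lower()
        mentioned = any(k in text for k in keywords)       # myopathy mention
        statin_absent = not (statins
                             & {m.lower() for m in admission["discharge_meds"]})
        return ck_high and mentioned and statin_absent

    case = {"peak_ck": 9000.0,
            "discharge_summary": "Admitted with rhabdomyolysis on simvastatin.",
            "diagnoses": ["Rhabdomyolysis"],
            "discharge_meds": ["amlodipine"]}
    print(flag_possible_sim(case))   # True
    ```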

  20. Using qualitative research to inform development of a diagnostic algorithm for UTI in children.

    Science.gov (United States)

    de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D

    2013-06-01

    Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. The aim was to investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Assessment funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in young children. Qualitative methods were applied using semi-structured interviews of 30 UK doctors and nurses working with young children in primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these for presence or absence and terminology with the DUTY CRF. Despite much agreement between clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF and some included items that could have been reworded to improve understanding and final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.

  1. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The sonar simulator integrated environment (SSIE) is a tool for developing high performance processing algorithms for single or sequences of sonar images. The tool is based on MATLAB providing a very short lead time from concept to executable code and thereby assessment of the algorithms tested...... of the algorithms is the availability of sonar images. To accommodate this problem the SSIE has been equipped with a simulator capable of generating high fidelity sonar images for a given scene of objects, sea-bed AUV path, etc. In the paper the main components of the SSIE is described and examples of different...... processing steps are given...

  2. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Vanek, P.; Mandel, J.; Brezina, M. [Univ. of Colorado, Denver, CO (United States)

    1996-12-31

    An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Inputs to the method are the coefficient matrix and the zero-energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real-world problems from solid elasticity, plate bending, and shells.
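
    A compact two-grid illustration of prolongation by smoothed aggregation, assuming a 1-D Poisson matrix as the SPD test problem and the constant vector as the single zero-energy mode; the aggregate size and Jacobi damping are arbitrary, and a production code would of course recurse to build a full multilevel hierarchy rather than stop at two grids.

    ```python
    import numpy as np

    def poisson1d(n):
        # 1-D Poisson matrix as a simple SPD test problem
        return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

    def smoothed_aggregation_prolongator(A, agg_size=3, omega=2.0 / 3.0):
        n = A.shape[0]
        n_agg = (n + agg_size - 1) // agg_size
        # Tentative prolongator: the zero-energy mode (here the constant
        # vector) restricted to each aggregate, one column per aggregate.
        P_tent = np.zeros((n, n_agg))
        for i in range(n):
            P_tent[i, i // agg_size] = 1.0
        P_tent /= np.linalg.norm(P_tent, axis=0)
        # Smooth with one damped-Jacobi step: P = (I - omega D^-1 A) P_tent
        Dinv = 1.0 / np.diag(A)
        return P_tent - omega * (Dinv[:, None] * (A @ P_tent))

    def two_grid_cycle(A, b, x, P, nu=2, omega=2.0 / 3.0):
        Dinv = 1.0 / np.diag(A)
        Ac = P.T @ A @ P                      # Galerkin coarse operator
        for _ in range(nu):                   # pre-smoothing (damped Jacobi)
            x = x + omega * Dinv * (b - A @ x)
        ec = np.linalg.solve(Ac, P.T @ (b - A @ x))
        x = x + P @ ec                        # coarse-grid correction
        for _ in range(nu):                   # post-smoothing
            x = x + omega * Dinv * (b - A @ x)
        return x

    n = 60
    A, b, x = poisson1d(n), np.ones(n), np.zeros(n)
    P = smoothed_aggregation_prolongator(A)
    for it in range(15):
        x = two_grid_cycle(A, b, x, P)
        print(it, np.linalg.norm(b - A @ x))
    ```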

  3. Development and Evaluation of High-Performance Decorrelation Algorithms for the Nonalternating 3D Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Quiles FJ

    2007-01-01

    We introduce and evaluate implementations of three parallel video-sequence decorrelation algorithms. The proposed algorithms are based on the nonalternating classic three-dimensional wavelet transform (3D-WT). The parallel implementations of the algorithms are developed and tested on a shared-memory system, an SGI Origin 3800 supercomputer, making use of a message-passing paradigm. We evaluate and analyze the performance of the implementations in terms of the response time and speed-up factor by varying the number of processors and various video coding parameters. The key points enabling the development of highly efficient implementations are the partitioning of the video sequences into groups of frames and a workload distribution strategy, supplemented by the use of parallel I/O primitives, to better exploit the inherent features of the application and computing platform. We also evaluate the effectiveness of our algorithms in terms of the first-order entropy.

  4. Development and Validation of an Automatic Segmentation Algorithm for Quantification of Intracerebral Hemorrhage.

    Science.gov (United States)

    Scherer, Moritz; Cordes, Jonas; Younsi, Alexander; Sahin, Yasemin-Aylin; Götz, Michael; Möhlenbruch, Markus; Stock, Christian; Bösel, Julian; Unterberg, Andreas; Maier-Hein, Klaus; Orakcioglu, Berk

    2016-11-01

    ABC/2 is still widely accepted for volume estimations in spontaneous intracerebral hemorrhage (ICH) despite known limitations, which potentially accounts for controversial outcome-study results. The aim of this study was to establish and validate an automatic segmentation algorithm, allowing for quick and accurate quantification of ICH. A segmentation algorithm implementing first- and second-order statistics, texture, and threshold features was trained on manual segmentations with a random-forest methodology. Quantitative data of the algorithm, manual segmentations, and ABC/2 were evaluated for agreement in a study sample (n=28) and validated in an independent sample not used for algorithm training (n=30). ABC/2 volumes were significantly larger compared with either manual or algorithm values, whereas no significant differences were found between the latter. Agreement of the algorithm with manual segmentation was strong (concordance correlation coefficient 0.95 [lower 95% confidence interval 0.91]) and superior to ABC/2 (concordance correlation coefficient 0.77 [95% confidence interval 0.64]). Validation confirmed agreement in an independent sample (algorithm concordance correlation coefficient 0.99 [95% confidence interval 0.98], ABC/2 concordance correlation coefficient 0.82 [95% confidence interval 0.72]). The algorithm was closer to the respective manual segmentations than ABC/2 in 52/58 cases (89.7%). An automatic segmentation algorithm for volumetric analysis of spontaneous ICH was developed and validated in this study. Algorithm measurements showed strong agreement with manual segmentations, whereas ABC/2 exhibited its limitations, yielding inaccurate overestimations of ICH volume. The refined, yet time-efficient, quantification of ICH by the algorithm may facilitate evaluation of clot volume as an outcome predictor and trigger for surgical interventions in the clinical setting. © 2016 American Heart Association, Inc.
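
    A sketch of the general recipe described above: per-voxel first-order, local second-order, smoothed, and threshold features fed to a random forest trained on manual segmentations. The feature set, the 60 HU cut-off, and the data layout are assumptions for illustration, not the study's trained model.

    ```python
    import numpy as np
    from scipy import ndimage
    from sklearn.ensemble import RandomForestClassifier

    def voxel_features(volume):
        """Per-voxel features: intensity, local mean/std, smoothed copy,
        and a simple threshold flag (hypothetical HU cut-off for blood)."""
        mean = ndimage.uniform_filter(volume, size=3)
        sq_mean = ndimage.uniform_filter(volume ** 2, size=3)
        std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
        smooth = ndimage.gaussian_filter(volume, sigma=1.0)
        thresh = (volume > 60).astype(float)
        return np.stack([volume, mean, std, smooth, thresh],
                        axis=-1).reshape(-1, 5)

    def train(volumes, masks):
        # volumes: list of CT arrays; masks: matching 0/1 manual segmentations
        X = np.vstack([voxel_features(v) for v in volumes])
        y = np.concatenate([m.ravel() for m in masks])
        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
        clf.fit(X, y)
        return clf

    def segment(clf, volume):
        return clf.predict(voxel_features(volume)).reshape(volume.shape)

    def volume_ml(mask, voxel_volume_mm3):
        return mask.sum() * voxel_volume_mm3 / 1000.0   # mm^3 -> mL
    ```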

  5. An Ontology Based Reuse Algorithm towards Process Planning in Software Development

    Directory of Open Access Journals (Sweden)

    Shilpa Sharma

    2011-09-01

    The process planning task for specified design provisions in software development can be significantly improved by referencing a knowledge reuse scheme. Reuse is considered to be one of the most promising techniques for improving software quality and productivity. Reuse during software development depends much on the existing design knowledge in the meta-model, a "read only" repository of information. We propose an ontology-based reuse algorithm for process planning in software development. Based on the common conceptual base provided by the ontology and the characteristics of the knowledge, the concepts and entities are represented in meta-model and endeavor prospects. The relations between these prospects and their linkage knowledge are used to construct the ontology-based reuse algorithm. In addition, our experiment illustrates the realization of process planning in software development by applying this algorithm. Subsequently, its benefits are outlined.

  6. Applications of feature selection. [development of classification algorithms for LANDSAT data

    Science.gov (United States)

    Guseman, L. F., Jr.

    1976-01-01

    The use of satellite-acquired (LANDSAT) multispectral scanner (MSS) data to conduct an inventory of some crop of economic interest such as wheat over a large geographical area is considered in relation to the development of accurate and efficient algorithms for data classification. The dimension of the measurement space and the computational load for a classification algorithm is increased by the use of multitemporal measurements. Feature selection/combination techniques used to reduce the dimensionality of the problem are described.

  7. The use of virtual acoustics in the evaluation and development of binaural hearing aid algorithms

    OpenAIRE

    Rychtarikova, Monika; Van den Bogaert, Tim; Vermeir, Gerrit; Eneman, Koen; Lauriks, Walter; Moonen, Marc; Wouters, Jan

    2008-01-01

    The development of noise reduction algorithms for hearing aids (HA) is no longer related only to the improvement of the signal-to-noise ratio, but also to the quality of hearing, e.g., binaural aspects of hearing. This is very important for the localization of sound sources, but also for improved speech intelligibility in noisy situations due to spatial release from masking effects. New design and signal processing algorithms for binaural HAs need to be tested...

  8. RStorm : Developing and testing streaming algorithms in R

    NARCIS (Netherlands)

    Kaptein, M.C.

    2014-01-01

    Streaming data, consisting of indefinitely evolving sequences, are becoming ubiquitous in many branches of science and in various applications. Computer scientists have developed streaming applications such as Storm and the S4 distributed stream computing platform to deal with data streams.

  12. Development of real-time plasma analysis and control algorithms for the TCV tokamak using SIMULINK

    Energy Technology Data Exchange (ETDEWEB)

    Felici, F., E-mail: f.felici@tue.nl [École Polytechnique Fédérale de Lausanne (EPFL), Centre de Recherches en Physique des Plasmas, Association EURATOM-Suisse, 1015 Lausanne (Switzerland); Eindhoven University of Technology, Department of Mechanical Engineering, Control Systems Technology Group, P.O. Box 513, 5600MB Eindhoven (Netherlands); Le, H.B.; Paley, J.I.; Duval, B.P.; Coda, S.; Moret, J.-M.; Bortolon, A.; Federspiel, L.; Goodman, T.P. [École Polytechnique Fédérale de Lausanne (EPFL), Centre de Recherches en Physique des Plasmas, Association EURATOM-Suisse, 1015 Lausanne (Switzerland); Hommen, G. [FOM-Institute DIFFER, Association EURATOM-FOM, Nieuwegein (Netherlands); Eindhoven University of Technology, Department of Mechanical Engineering, Control Systems Technology Group, P.O. Box 513, 5600MB Eindhoven (Netherlands); Karpushov, A.; Piras, F.; Pitzschke, A. [École Polytechnique Fédérale de Lausanne (EPFL), Centre de Recherches en Physique des Plasmas, Association EURATOM-Suisse, 1015 Lausanne (Switzerland); Romero, J. [National Laboratory of Fusion, EURATOM-CIEMAT, Madrid (Spain); Sevillano, G. [Department of Automatic Control and Systems Engineering, Bilbao University of the Basque Country, Bilbao (Spain); Sauter, O.; Vijvers, W. [École Polytechnique Fédérale de Lausanne (EPFL), Centre de Recherches en Physique des Plasmas, Association EURATOM-Suisse, 1015 Lausanne (Switzerland)

    2014-03-15

    Highlights: • A new digital control system for the TCV tokamak has been commissioned. • The system is entirely programmable by SIMULINK, allowing rapid algorithm development. • Different control system nodes can run different algorithms at varying sampling times. • The previous control system functions have been emulated and improved. • New capabilities include MHD control, profile control, equilibrium reconstruction. - Abstract: One of the key features of the new digital plasma control system installed on the TCV tokamak is the possibility to rapidly design, test and deploy real-time algorithms. With this flexibility the new control system has been used for a large number of new experiments which exploit TCV's powerful actuators consisting of 16 individually controllable poloidal field coils and 7 real-time steerable electron cyclotron (EC) launchers. The system has been used for various applications, ranging from event-based real-time MHD control to real-time current diffusion simulations. These advances have propelled real-time control to one of the cornerstones of the TCV experimental program. Use of the SIMULINK graphical programming language to directly program the control system has greatly facilitated algorithm development and allowed a multitude of different algorithms to be deployed in a short time. This paper will give an overview of the developed algorithms and their application in physics experiments.

  13. Development and application of unified algorithms for problems in computational science

    Science.gov (United States)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  14. Developing NASA's VIIRS LST and Emissivity EDRs using a physics based Temperature Emissivity Separation (TES) algorithm

    Science.gov (United States)

    Islam, T.; Hulley, G. C.; Malakar, N.; Hook, S. J.

    2015-12-01

    Land Surface Temperature and Emissivity (LST&E) data are acknowledged as critical Environmental Data Records (EDRs) by the NASA Earth Science Division. The current operational LST EDR for the recently launched Suomi National Polar-orbiting Partnership's (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) payload utilizes a split-window algorithm that relies on previously-generated fixed emissivity dependent coefficients and does not produce a dynamically varying and multi-spectral land surface emissivity product. Furthermore, this algorithm deviates from its MODIS counterpart (MOD11) resulting in a discontinuity in the MODIS/VIIRS LST time series. This study presents an alternative physics based algorithm for generation of the NASA VIIRS LST&E EDR in order to provide continuity with its MODIS counterpart algorithm (MOD21). The algorithm, known as temperature emissivity separation (TES) algorithm, uses a fast radiative transfer model - Radiative Transfer for (A)TOVS (RTTOV) in combination with an emissivity calibration model to isolate the surface radiance contribution retrieving temperature and emissivity. Further, a new water-vapor scaling (WVS) method is developed and implemented to improve the atmospheric correction process within the TES system. An independent assessment of the VIIRS LST&E outputs is performed against in situ LST measurements and laboratory measured emissivity spectra samples over dedicated validation sites in the Southwest USA. Emissivity retrievals are also validated with the latest ASTER Global Emissivity Database Version 4 (GEDv4). An overview and current status of the algorithm as well as the validation results will be discussed.

  15. Development of Fast Algorithms Using Recursion, Nesting and Iterations for Computational Electromagnetics

    Science.gov (United States)

    Chew, W. C.; Song, J. M.; Lu, C. C.; Weedon, W. H.

    1995-01-01

    In the first phase of our work, we have concentrated on laying the foundation to develop fast algorithms, including the use of recursive structure like the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We have also investigated the use of curvilinear patches to build a basic method of moments code where these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we have concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we have also studied the parallelization of 3D finite-difference time-domain (FDTD) code with PML material absorbing boundary condition (ABC). We found that simple algorithms like the FDTD with material ABC can be parallelized very well allowing us to solve within a minute a problem of over a million nodes. In addition, we have studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed as in the monostatic RCS calculations.

  16. Developing and Deploying Advanced Algorithms to Novel Supercomputing Hardware

    CERN Document Server

    Brunner, Robert J; Myers, Adam D

    2007-01-01

    The objective of our research is to demonstrate the practical usage and orders of magnitude speedup of real-world applications by using alternative technologies to support high performance computing. Currently, the main barrier to the widespread adoption of this technology is the lack of development tools and case studies that typically impede non-specialists that might otherwise develop applications that could leverage these technologies. By partnering with the Innovative Systems Laboratory at the National Center for Supercomputing, we have obtained access to several novel technologies, including several Field-Programmable Gate Array (FPGA) systems, NVidia Graphics Processing Units (GPUs), and the STI Cell BE platform. Our goal is to not only demonstrate the capabilities of these systems, but to also serve as guides for others to follow in our path. To date, we have explored the efficacy of the SRC-6 MAP-C and MAP-E and SGI RASC Athena and RC100 reconfigurable computing platforms in supporting a two-point co...

  17. A Novel Algorithm of Forecasting the Potential Development of Generation in the Distribution Grid

    Directory of Open Access Journals (Sweden)

    Michał Bajor

    2014-06-01

    The paper presents a novel method of forecasting the potential for the development of various types of generation, including renewables, connecting to the distribution grid. The proposed algorithm is based on the idea of identifying different factors influencing the possibility of developing various types of generation in different time horizons. Descriptions of the subsequent stages of the forecasting procedure, the terms used, and the software implementing the algorithm, developed by the authors, are also included in the paper. Finally, comments regarding the reliability of the results obtained using the method are given.

  18. Development and Evaluation of the National Cancer Institute's Dietary Screener Questionnaire Scoring Algorithms.

    Science.gov (United States)

    Thompson, Frances E; Midthune, Douglas; Kahle, Lisa; Dodd, Kevin W

    2017-06-01

    Background: Methods for improving the utility of short dietary assessment instruments are needed. Objective: We sought to describe the development of the NHANES Dietary Screener Questionnaire (DSQ) and its scoring algorithms and performance. Methods: The 19-item DSQ assesses intakes of fruits and vegetables, whole grains, added sugars, dairy, fiber, and calcium. Two nonconsecutive 24-h dietary recalls and the DSQ were administered in NHANES 2009-2010 to respondents aged 2-69 y (n = 7588). The DSQ frequency responses, coupled with sex- and age-specific portion size information, were regressed on intake from 24-h recalls by using the National Cancer Institute usual intake method to obtain scoring algorithms to estimate mean intake and the prevalences of reaching 2 a priori threshold levels. The resulting scoring algorithms were applied to the DSQ and compared with intakes estimated with the 24-h recall data only. The stability of the derived scoring algorithms was evaluated in repeated sampling. Finally, scoring algorithms were applied to screener data, and these estimates were compared with those from multiple 24-h recalls in 3 external studies. Results: The DSQ and its scoring algorithms produced estimates of mean intake and prevalence that agreed closely with those from multiple 24-h recalls. The scoring algorithms were stable in repeated sampling, and differences in the means were small. The development of these scoring algorithms is an advance in the use of screeners. However, because these algorithms may not be generalizable to all studies, a pilot study in the proposed study population is advisable. Although more precise instruments such as 24-h dietary recalls are recommended in most research, the NHANES DSQ provides a less burdensome alternative when time and resources are constrained and interest is in a limited set of dietary factors. © 2017 American Society for Nutrition.

  19. Development of a fire detection algorithm for the COMS (Communication Ocean and Meteorological Satellite)

    Science.gov (United States)

    Kim, Goo; Kim, Dae Sun; Lee, Yang-Won

    2013-10-01

    Forest fires cause serious ecological and economic damage. South Korea is particularly vulnerable because mountainous terrain covers more than half of the country. South Korea recently launched COMS (Communication Ocean and Meteorological Satellite), a geostationary satellite. In this paper, we developed a forest fire detection algorithm using COMS data. Forest fire detection algorithms generally use the characteristics of the 4 and 11 micrometer brightness temperatures; our algorithm additionally uses LST (Land Surface Temperature). We verified the results of our fire detection algorithm using statistics from the Korea Forest Service and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) images. We used data for South Korea on April 1 and 2, 2011, because both small and large forest fires occurred at that time. The detection rate was 80% in terms of the number of forest fires and 99% in terms of the damaged area. Considering the number of COMS channels and its low resolution, this is a remarkable outcome. To deliver the results of our algorithm to users, we developed a smartphone application based on JSP (Java Server Pages), which works regardless of the smartphone's operating system. Because we used data for only two days, the results may not carry over to other areas and periods; to improve the accuracy of our algorithm, analysis using long-term data is needed as future work.
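
    A toy version of the kind of threshold test implied above, combining the 4 and 11 micrometer brightness temperatures with LST; the three thresholds are placeholders, not the tuned COMS values.

    ```python
    import numpy as np

    def detect_fires(bt4, bt11, lst, t4_min=320.0, dt_min=15.0, dlst_min=10.0):
        """Flag a pixel as a candidate hotspot when the 4 um brightness
        temperature is high, the 4-11 um difference is large, and the pixel
        is much warmer than the LST background (all thresholds in kelvin
        are illustrative)."""
        return (bt4 > t4_min) & ((bt4 - bt11) > dt_min) & ((bt4 - lst) > dlst_min)

    bt4 = np.array([[300.0, 330.0], [325.0, 295.0]])
    bt11 = np.array([[295.0, 305.0], [300.0, 290.0]])
    lst = np.array([[298.0, 302.0], [299.0, 293.0]])
    print(detect_fires(bt4, bt11, lst))
    ```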

  20. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    Science.gov (United States)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assesment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).

  1. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    Energy Technology Data Exchange (ETDEWEB)

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.
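
    To make the closed-loop-cycling flavour concrete, the sketch below turns an ultimate gain and period into Ziegler-Nichols PI settings and runs the resulting controller on a simulated first-order process; all numbers are illustrative, and this is not the Bristol-Babcock STPI code.

    ```python
    def zn_pi_settings(ku, pu):
        """Ziegler-Nichols PI settings from the ultimate gain and period
        obtained by closed-loop cycling."""
        return 0.45 * ku, pu / 1.2          # controller gain, integral time

    class PI:
        def __init__(self, kc, ti, dt):
            self.kc, self.ti, self.dt = kc, ti, dt
            self.integral = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            return self.kc * (error + self.integral / self.ti)

    # First-order process simulated with Euler steps (gain 0.8, tau 5 s).
    kc, ti = zn_pi_settings(ku=4.0, pu=12.0)
    dt = 0.1
    ctrl, y = PI(kc, ti, dt), 0.0
    for step in range(300):
        u = ctrl.update(setpoint=1.0, measurement=y)
        y += dt * (-y + 0.8 * u) / 5.0
        if step % 60 == 0:
            print(round(step * dt, 1), round(y, 3))
    ```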

  3. Development of a two wheeled self balancing robot with speech recognition and navigation algorithm

    Science.gov (United States)

    Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh

    2016-07-01

    This paper discusses the modeling, construction, and development of the navigation algorithm of a two-wheeled self-balancing mobile robot in an enclosure. We discuss the design of two of the main controllers, namely PID controllers, on the robot model. Simulation is performed in the SIMULINK environment. The controllers are developed primarily for self-balancing of the robot and also for its positioning. For navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms found in the open literature can only trace the robot, but the proposed algorithm can also locate the position of other objects in the enclosure, such as furniture and tables. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, additional features such as speech recognition and object detection are added. For object detection, the single-board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
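
    A bare-bones normalized cross-correlation template matcher of the kind referred to above, in plain NumPy; a real system would use an FFT-based or OpenCV implementation, and the scene and template here are synthetic stand-ins for camera frames.

    ```python
    import numpy as np

    def match_template(image, template):
        """Brute-force normalized cross-correlation; returns the (row, col)
        of the best match and its score."""
        ih, iw = image.shape
        th, tw = template.shape
        t = template - template.mean()
        t_norm = np.sqrt((t ** 2).sum())
        best, best_pos = -2.0, (0, 0)
        for r in range(ih - th + 1):
            for c in range(iw - tw + 1):
                win = image[r:r + th, c:c + tw]
                w = win - win.mean()
                denom = np.sqrt((w ** 2).sum()) * t_norm
                score = (w * t).sum() / denom if denom > 0 else 0.0
                if score > best:
                    best, best_pos = score, (r, c)
        return best_pos, best

    rng = np.random.default_rng(0)
    scene = rng.random((60, 80))
    template = scene[20:30, 45:55].copy()      # pretend this is the robot
    print(match_template(scene, template))     # -> ((20, 45), ~1.0)
    ```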

  4. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new, algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  5. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Until recently, the algorithms of the numerical-analytical boundary elements method were implemented as programs written in the MATLAB environment language. Each program had a local character, i.e., it was used to solve a particular problem: calculation of a beam, frame, arch, etc. Constructing the matrices in these programs was carried out "manually" and was therefore time-consuming. The research was aimed at a reasoned choice of programming language for the development of a new CAD system that implements the algorithm of the numerical-analytical boundary elements method and provides visualization tools for the initial objects and the calculation results. The research shows that, among a wide variety of programming languages, the most efficient one for developing a CAD system employing the numerical-analytical boundary elements method algorithm is the Java language. This language provides tools not only for the development of the calculating part of the CAD system, but also for building the graphical interface for constructing geometrical models and interpreting the calculated results.

  6. Developing Fire Detection Algorithms by Geostationary Orbiting Platforms and Machine Learning Techniques

    Science.gov (United States)

    Salvador, Pablo; Sanz, Julia; Garcia, Miguel; Casanova, Jose Luis

    2016-08-01

    Fires in general, and forest fires in particular, are a major concern in terms of economic and biological losses. Remote sensing work has focused on developing hotspot detection algorithms adapted to a wide range of sensors, platforms and regions, in order to obtain hotspots as quickly as possible. The aim of this study is to establish an automatic methodology for developing hotspot detection algorithms for the Spinning Enhanced Visible and Infrared Imager (SEVIRI) sensor on board the Meteosat Second Generation (MSG) platform, based on machine learning techniques, that can be exported to other geostationary platforms and sensors and to any area of the Earth. The sensitivity (SE), specificity (SP) and accuracy (AC) parameters have been analyzed in order to develop the final machine learning algorithm, taking into account the preferences and final use of the predicted data.

  7. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
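
    Since this record introduces the basic concepts, a minimal generational GA is sketched below: tournament selection, one-point crossover, and bit-flip mutation on a toy one-max problem. The parameter values are arbitrary and the code is generic, not the tool developed in the project.

    ```python
    import random

    def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=60,
                          p_cross=0.9, p_mut=0.02, seed=1):
        """Plain generational GA maximizing `fitness` over bit strings."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        for _ in range(generations):
            scored = [(fitness(ind), ind) for ind in pop]

            def tournament():
                a, b = rng.sample(scored, 2)
                return (a if a[0] >= b[0] else b)[1]

            children = []
            while len(children) < pop_size:
                p1, p2 = tournament()[:], tournament()[:]
                if rng.random() < p_cross:          # one-point crossover
                    cut = rng.randrange(1, n_bits)
                    p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                for child in (p1, p2):              # bit-flip mutation
                    children.append([b ^ 1 if rng.random() < p_mut else b
                                     for b in child])
            pop = children[:pop_size]
        return max(pop, key=fitness)

    best = genetic_algorithm(fitness=sum)            # "one-max" toy problem
    print(sum(best), best)
    ```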

  8. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    Science.gov (United States)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling, and analysis. The algorithms developed thus far are adequate and have proven successful for several preliminary and fundamental applications such as software interfacing capabilities, probability distributions, grey level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.

  9. Development of moving target detection algorithm using ADSP TS201 DSP Processor

    Directory of Open Access Journals (Sweden)

    Babu rao Kodavati

    2010-08-01

    This paper presents a method to detect the presence of a target within a specified range (2 to 30 m). The present work relates generally to a radar system and, more particularly, to improving range resolution (3 m) and minimum detection time (2 msec). Speed and accuracy are two important evaluation indicators in a target detection system. The challenges in developing the algorithm are finding the Doppler frequency and giving a caution signal to the chief at the optimum instant of time to cause a target kill. Time management serves to maintain a priority queue of all the tasks. In this work we have taken up the issue of developing such an algorithm using the ADSP TS201 DSP processor.
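
    One simple way to "find the Doppler frequency" within a short dwell is an FFT over slow-time samples followed by a threshold above the noise floor, as sketched below; the PRF, threshold, and simulated echo are made-up values, and the real system runs on the ADSP TS201 rather than in Python.

    ```python
    import numpy as np

    def detect_doppler(samples, prf, threshold_db=12.0):
        """Locate a Doppler line with an FFT and a simple threshold above
        the median noise floor (numbers are illustrative only)."""
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples)))) ** 2
        spectrum[0] = 0.0                            # suppress DC clutter
        noise = np.median(spectrum[1:])
        peak = int(np.argmax(spectrum))
        snr_db = 10.0 * np.log10(spectrum[peak] / noise)
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / prf)
        return snr_db > threshold_db, freqs[peak], snr_db

    prf = 1000.0                                     # pulses per second
    t = np.arange(256) / prf
    echo = (np.cos(2 * np.pi * 180.0 * t)
            + 0.5 * np.random.default_rng(2).standard_normal(256))
    print(detect_doppler(echo, prf))                 # detects ~180 Hz line
    ```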

  10. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    Directory of Open Access Journals (Sweden)

    Zhipeng Gui

    Full Text Available Dust storm has serious disastrous impacts on environment, human health, and assets. The developments and applications of dust storm models have contributed significantly to better understand and predict the distribution, intensity and structure of dust storms. However, dust storm simulation is a data and computing intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain on different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from combinational optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compared the performance with the MPI default sequential allocation. The results demonstrate that K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric

  11. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    Science.gov (United States)

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storm has serious disastrous impacts on environment, human health, and assets. The developments and applications of dust storm models have contributed significantly to better understand and predict the distribution, intensity and structure of dust storms. However, dust storm simulation is a data and computing intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain on different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from combinational optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compared the performance with the MPI default sequential allocation. The results demonstrate that K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
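
    To make the partitioning idea concrete, the sketch below groups grid cells into compact subdomains by clustering their coordinates with a plain K-Means loop, roughly the geometric part of the K&K approach described above; the grid size, number of subdomains and convergence settings are illustrative assumptions, and the load balancing and Kernighan-Lin refinement of the actual method are omitted.

```python
import numpy as np

def kmeans_partition(cells, k, iters=50, seed=0):
    """Cluster 2-D grid-cell coordinates into k compact subdomains."""
    rng = np.random.default_rng(seed)
    centers = cells[rng.choice(len(cells), size=k, replace=False)]
    for _ in range(iters):
        # Assign each cell to the nearest subdomain center
        d = np.linalg.norm(cells[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the centroid of its cells (keep it if the cluster emptied)
        new_centers = np.array([
            cells[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels

# Hypothetical 40 x 40 study area split into 8 subdomains
ny, nx = 40, 40
cells = np.array([(i, j) for i in range(ny) for j in range(nx)], dtype=float)
labels = kmeans_partition(cells, k=8)
print(np.bincount(labels))   # number of cells allocated to each subdomain
```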

  12. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  13. Utilization of Ancillary Data Sets for SMAP Algorithm Development and Product Generation

    Science.gov (United States)

    ONeill, P.; Podest, E.; Njoku, E.

    2011-01-01

    Algorithms being developed for the Soil Moisture Active Passive (SMAP) mission require a variety of both static and ancillary data. The selection of the most appropriate source for each ancillary data parameter is driven by a number of considerations, including accuracy, latency, availability, and consistency across all SMAP products and with SMOS (Soil Moisture Ocean Salinity). It is anticipated that initial selection of all ancillary datasets, which are needed for ongoing algorithm development activities on the SMAP algorithm testbed at JPL, will be completed within the year. These datasets will be updated as new or improved sources become available, and all selections and changes will be documented for the benefit of the user community. Wise choices in ancillary data will help to enable SMAP to provide new global measurements of soil moisture and freeze/thaw state at the targeted accuracy necessary to tackle hydrologically-relevant societal issues.

  14. Development and validation of evolutionary algorithm software as an optimization tool for biological and environmental applications.

    Science.gov (United States)

    Sys, K; Boon, N; Verstraete, W

    2004-06-01

    A flexible, extendable tool for the optimization of (micro)biological processes and protocols using evolutionary algorithms was developed. It has been tested using three different theoretical optimization problems: two two-dimensional problems, one with three maxima and one with five maxima, and a river autopurification optimization problem with boundary conditions. For each problem, different evolutionary parameter settings were used for the optimization. For each combination of evolutionary parameters, 15 generations were run 20 times. It has been shown that in all cases, the evolutionary algorithm gave rise to valuable results. Generally, the algorithms were able to detect the more stable sub-maximum even when less stable maxima existed; from a practical point of view, this behaviour is generally the more desirable one. The most important factors influencing the convergence process were the parameter value randomization rate and distribution. The developed software, described in this work, is available for free.

  15. Applications and development of new algorithms for displacement analysis using InSAR time series

    Science.gov (United States)

    Osmanoglu, Batuhan

    -dimensional (3-D) phase unwrapping. Chapter 4 focuses on the unwrapping path. Unwrapping algorithms can be divided into two groups, path-dependent and path-independent algorithms. Path-dependent algorithms use local unwrapping functions applied pixel-by-pixel to the dataset. In contrast, path-independent algorithms use global optimization methods such as least squares, and return a unique solution. However, when aliasing and noise are present, path-independent algorithms can underestimate the signal in some areas due to global fitting criteria. Path-dependent algorithms do not underestimate the signal, but, as the name implies, the unwrapping path can affect the result. Comparison between existing path algorithms and a newly developed algorithm based on Fisher information theory was conducted. Results indicate that Fisher information theory does indeed produce lower misfit results for most tested cases. Chapter 5 presents a new time series analysis method based on 3-D unwrapping of SAR data using extended Kalman filters. Existing methods for time series generation using InSAR data employ special filters to combine two-dimensional (2-D) spatial unwrapping with one-dimensional (1-D) temporal unwrapping results. The new method, however, combines observations in azimuth, range and time for repeat pass interferometry. Due to the pixel-by-pixel characteristic of the filter, the unwrapping path is selected based on a quality map. This unwrapping algorithm is the first application of extended Kalman filters to the 3-D unwrapping problem. Time series analyses of InSAR data are used in a variety of applications with different characteristics. Consequently, it is difficult to develop a single algorithm that can provide optimal results in all cases, given that different algorithms possess a unique set of strengths and weaknesses. Nonetheless, filter-based unwrapping algorithms such as the one presented in this dissertation have the capability of joining multiple observations into a uniform
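
    As a minimal illustration of the path-dependent idea discussed above, the sketch below unwraps a one-dimensional phase signal by walking along it pixel by pixel and removing 2-pi jumps between neighbours; real InSAR unwrapping works in two or three dimensions with quality-guided paths, so this is only the simplest special case, and the example signal is synthetic.

```python
import numpy as np

def unwrap_1d(phase):
    """Path-dependent unwrapping: walk along the signal and remove 2*pi jumps."""
    unwrapped = np.array(phase, dtype=float)
    for i in range(1, len(unwrapped)):
        jump = unwrapped[i] - unwrapped[i - 1]
        # If the difference exceeds half a cycle, a wrap was crossed; correct the rest of the path
        unwrapped[i:] -= 2 * np.pi * np.round(jump / (2 * np.pi))
    return unwrapped

# Synthetic ramp wrapped into (-pi, pi]
true_phase = np.linspace(0, 12 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))
recovered = unwrap_1d(wrapped)
print(np.allclose(recovered, true_phase, atol=1e-6))   # True
```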

  16. Development of a Multi-Objective Evolutionary Algorithm for Strain-Enhanced Quantum Cascade Lasers

    Directory of Open Access Journals (Sweden)

    David Mueller

    2016-07-01

    Full Text Available An automated design approach using an evolutionary algorithm for the development of quantum cascade lasers (QCLs) is presented. Our algorithmic approach merges computational intelligence techniques with the physics of device structures, representing a design methodology that reduces experimental effort and costs. The algorithm was developed to produce QCLs with a three-well, diagonal-transition active region and a five-well injector region. Specifically, we applied this technique to AlxGa1-xAs/InyGa1-yAs strained active region designs. The algorithmic approach is a non-dominated sorting method using four aggregate objectives: target wavelength, population inversion via longitudinal-optical (LO) phonon extraction, injector level coupling, and an optical gain metric. Analysis indicates that the most plausible device candidates are a result of the optical gain metric and a total aggregate of all objectives. However, design limitations exist in many of the resulting candidates, indicating need for additional objective criteria and parameter limits to improve the application of this and other evolutionary algorithm methods.
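
    The non-dominated sorting step named above can be sketched as a Pareto-dominance test over candidate designs scored on several objectives (treated here as larger-is-better); the four objective values shown are made-up placeholders, not actual QCL design metrics.

```python
def dominates(a, b):
    """a dominates b if it is at least as good in every objective and better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def first_pareto_front(population):
    """Return the indices of candidates not dominated by any other candidate."""
    front = []
    for i, cand in enumerate(population):
        if not any(dominates(other, cand) for j, other in enumerate(population) if j != i):
            front.append(i)
    return front

# Hypothetical candidates: (wavelength match, inversion, coupling, gain metric)
designs = [
    (0.9, 0.7, 0.6, 0.8),
    (0.8, 0.9, 0.5, 0.7),
    (0.7, 0.6, 0.4, 0.6),   # dominated by the first design
    (0.9, 0.7, 0.7, 0.6),
]
print(first_pareto_front(designs))   # [0, 1, 3]
```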

  17. Development of a multi-objective optimization algorithm using surrogate models for coastal aquifer management

    Science.gov (United States)

    Kourakos, George; Mantoglou, Aristotelis

    2013-02-01

    The demand for fresh water in coastal areas and islands can be very high due to increased local needs and tourism. A multi-objective optimization methodology is developed, involving minimization of economic and environmental costs while satisfying water demand. The methodology considers desalinization of pumped water and injection of treated water into the aquifer. Variable density aquifer models are computationally intractable when integrated in optimization algorithms. In order to alleviate this problem, a multi-objective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNNs)]. The surrogate models are trained adaptively during optimization based on a genetic algorithm. In the crossover step, each pair of parents generates a pool of offspring which are evaluated using the fast surrogate model. Then, the most promising offspring are evaluated using the exact numerical model. This procedure eliminates errors in the Pareto solution due to imprecise predictions of the surrogate model. The method has important advancements compared to previous methods, such as precise evaluation of the Pareto set and alleviation of propagation of errors due to surrogate model approximations. The method is applied to an aquifer in the Greek island of Santorini. The results show that the new MOSA(MNN) algorithm offers significant reduction in computational time compared to previous methods (in the case study it requires only 5% of the time required by other methods). Further, the Pareto solution is better than the solution obtained by alternative algorithms.
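
    The crossover step described above, screening a pool of offspring with a cheap surrogate and re-evaluating only the most promising candidates with the exact model, can be sketched as follows; the one-dimensional objective, the noisy surrogate and the pool sizes are stand-ins invented for illustration, not the MNN surrogates or the variable-density flow model of the study.

```python
import random

def exact_model(x):
    """Stand-in for the expensive variable-density aquifer simulation (to be minimized)."""
    return (x - 0.3) ** 2

def surrogate(x):
    """Stand-in for the cheap trained surrogate: a noisy approximation of the exact model."""
    return (x - 0.3) ** 2 + random.uniform(-0.01, 0.01)

def screened_offspring(parents, pool_size=20, keep=3, seed=2):
    """Generate a pool of offspring, rank them with the surrogate,
    and evaluate only the best `keep` candidates with the exact model."""
    rng = random.Random(seed)
    pool = [rng.uniform(min(parents), max(parents)) + rng.gauss(0, 0.05)
            for _ in range(pool_size)]
    promising = sorted(pool, key=surrogate)[:keep]
    return [(x, exact_model(x)) for x in promising]

print(screened_offspring(parents=[0.1, 0.8]))
```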

  18. Development of a Dynamic Operational Scheduling Algorithm for an Independent Micro-Grid with Renewable Energy

    Science.gov (United States)

    Obara, Shin'ya

    A micro-grid with the capacity for sustainable energy is expected to be a distributed energy system with a very small environmental impact. In an independent micro-grid, “green energy,” which is typically thought of as unstable, can be utilized effectively by introducing a battery. In a previous study, a production-of-electricity prediction algorithm (PAS) for the solar cell was developed. In PAS, a layered neural network is trained on past weather data, and the operation plan of a compound system of a solar cell and other energy systems was examined using this prediction algorithm. In this paper, a dynamic operational scheduling algorithm is developed using a neural network (PAS) and a genetic algorithm (GA) to provide predictions for solar cell power output. We also present a case study in which we use this algorithm to plan the operation of a system that connects nine houses in Sapporo to a micro-grid composed of power equipment and a polycrystalline silicon solar cell. In this work, the relationship between the accuracy of output prediction of the solar cell and the operation plan of the micro-grid was clarified. Moreover, we found that operating the micro-grid according to the plan derived with PAS was far superior, in terms of equipment hours of operation, to operating it using past average weather data.

  19. Developments in the Aerosol Layer Height Retrieval Algorithm for the Copernicus Sentinel-4/UVN Instrument

    Science.gov (United States)

    Nanda, Swadhin; Sanders, Abram; Veefkind, Pepijn

    2016-04-01

    The Sentinel-4 mission is a part of the European Commission's Copernicus programme, the goal of which is to provide geo-information to manage environmental assets, and to observe, understand and mitigate the effects of the changing climate. The Sentinel-4/UVN instrument design is motivated by the need to monitor trace gas concentrations and aerosols in the atmosphere from a geostationary orbit. The on-board instrument is a high resolution UV-VIS-NIR (UVN) spectrometer system that provides hourly radiance measurements over Europe and northern Africa with a spatial sampling of 8 km. The main application area of Sentinel-4/UVN is air quality. One of the data products that is being developed for Sentinel-4/UVN is the Aerosol Layer Height (ALH). The goal is to determine the height of aerosol plumes with a resolution of better than 0.5 - 1 km. The ALH product thus targets aerosol layers in the free troposphere, such as desert dust, volcanic ash and biomass burning plumes. KNMI is assigned the development of the ALH algorithm. Its heritage is the ALH algorithm developed by Sanders and De Haan (ATBD, 2016) for the TROPOMI instrument on board the Sentinel-5 Precursor mission that is to be launched in June or July 2016 (tentative date). The retrieval algorithm designed so far for the aerosol height product is based on the absorption characteristics of the oxygen-A band (759-770 nm). New aspects for Sentinel-4/UVN include the higher spectral resolution (0.116 nm compared to 0.4 nm for TROPOMI) and hourly observation from the geostationary orbit. The algorithm uses optimal estimation to obtain a spectral fit of the reflectance across the absorption band, while assuming a single uniform layer with fixed width to represent the aerosol vertical distribution. The state vector includes amongst other elements the height of this layer and its aerosol optical
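
    The optimal estimation fit mentioned above can be illustrated with a one-parameter Gauss-Newton retrieval that balances a measurement term against a prior; the toy forward model, spectral grid, prior and noise values below are invented for illustration and are not the oxygen-A band radiative transfer used by the actual algorithm.

```python
import numpy as np

def forward(h, wavelengths):
    """Toy forward model: reflectance across a band that depends on layer height h (km)."""
    return 0.3 + 0.02 * np.exp(-wavelengths / 10.0) * h

def jacobian(h, wavelengths, eps=1e-3):
    # Finite-difference sensitivity of the measurements to the layer height
    return ((forward(h + eps, wavelengths) - forward(h, wavelengths)) / eps)[:, None]

def optimal_estimation(y, wavelengths, x_a=2.0, s_a=4.0, noise=1e-3, iters=10):
    """1-parameter Gauss-Newton optimal estimation with prior x_a (variance s_a)."""
    s_y_inv = np.eye(len(y)) / noise**2
    s_a_inv = np.array([[1.0 / s_a]])
    x = np.array([x_a])
    for _ in range(iters):
        k = jacobian(x[0], wavelengths)
        gain = np.linalg.solve(k.T @ s_y_inv @ k + s_a_inv, k.T @ s_y_inv)
        x = np.array([x_a]) + gain @ (y - forward(x[0], wavelengths) + k @ (x - x_a))
    return x[0]

wl = np.linspace(0.0, 11.0, 12)                          # arbitrary spectral grid
y_obs = forward(5.0, wl) + np.random.normal(0, 1e-3, wl.size)
print(optimal_estimation(y_obs, wl))                     # close to the true height of 5.0
```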

  20. Development of a thresholding algorithm for calcium classification at multiple CT energies

    Science.gov (United States)

    Ng, LY.; Alssabbagh, M.; Tajuddin, A. A.; Shuaib, I. L.; Zainon, R.

    2017-05-01

    The objective of this study was to develop a thresholding method for calcium classification at different concentrations using single-energy computed tomography. Five different concentrations of calcium chloride were filled in PMMA tubes and placed inside a water-filled PMMA phantom (diameter 10 cm). The phantom was scanned at 70, 80, 100, 120 and 140 kV using a SECT. CARE DOSE 4D was used and the slice thickness was set to 1 mm for all energies. ImageJ software, developed by the National Institutes of Health (NIH), was used to measure the CT numbers for each calcium concentration from the CT images. The results were compared with those of a developed algorithm for verification. The percentage differences between the CT numbers obtained from the developed algorithm and those measured with ImageJ show that the two methods give similar results. The multi-thresholding algorithm was found to be able to distinguish different concentrations of calcium chloride. However, it was unable to detect low concentrations of calcium chloride and iron (III) nitrate with CT numbers between 25 HU and 65 HU. The developed thresholding method used in this study may help to differentiate between calcium plaques and other types of plaques in blood vessels as it is proven to have a good ability to detect high concentrations of calcium chloride. However, the algorithm needs to be improved to solve the limitation of detecting calcium chloride solution which has a similar CT number to iron (III) nitrate solution.
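
    A multi-thresholding step of the kind described above can be sketched as a lookup of CT-number intervals; the HU boundaries and class labels below are invented placeholders rather than the calibrated thresholds of the study.

```python
import bisect

# Hypothetical HU thresholds separating increasing calcium chloride concentrations
THRESHOLDS = [65, 120, 200, 300]          # upper edges of the first four classes
LABELS = ["indeterminate (<= 65 HU)", "low", "medium", "high", "very high"]

def classify_ct_number(hu):
    """Map a measured CT number (HU) to a concentration class."""
    return LABELS[bisect.bisect_right(THRESHOLDS, hu)]

for hu in (40, 90, 150, 250, 400):
    print(hu, "->", classify_ct_number(hu))
```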

  1. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility leading to higher amount of traffic related activity on a global scale. ...

  2. Detecting Intermittent Steering Activity: Development of a Phase-detection Algorithm

    NARCIS (Netherlands)

    Silva Peixoto de Aboim Chaves, H.M. da; Pauwelussen, J.J.A.; Mulder, M.; Paassen, M.M. van; Happee, R.; Mulder, M.

    2012-01-01

    Drivers usually maintain an error-neglecting control strategy (passive phase) in keeping their vehicle on the road, only to change to an error-correcting approach (active phase) when the vehicle state becomes inadequate. We developed an algorithm that is capable of detecting whether the driver is cu

  3. DEVELOPMENT OF GENETIC ALGORITHM-BASED METHODOLOGY FOR SCHEDULING OF MOBILE ROBOTS

    DEFF Research Database (Denmark)

    Dang, Vinh Quang

    -time operations of production managers. Hence to deal with large-scale applications, each heuristic based on genetic algorithms is then developed to find near-optimal solutions within a reasonable computation time for each problem. The quality of these solutions is then compared and evaluated by using...

  5. Evaluation of nine HIV rapid test kits to develop a national HIV testing algorithm in Nigeria

    Directory of Open Access Journals (Sweden)

    Orji Bassey

    2015-05-01

    Full Text Available Background: Non-cold chain-dependent HIV rapid testing has been adopted in many resource-constrained nations as a strategy for reaching out to populations. HIV rapid test kits (RTKs) have the advantage of ease of use, low operational cost and short turnaround times. Before 2005, different RTKs had been used in Nigeria without formal evaluation. Between 2005 and 2007, a study was conducted to formally evaluate a number of RTKs and construct HIV testing algorithms. Objectives: The objectives of this study were to assess and select HIV RTKs and develop national testing algorithms. Method: Nine RTKs were evaluated using 528 well-characterised plasma samples. These comprised 198 HIV-positive specimens (37.5%) and 330 HIV-negative specimens (62.5%), collected nationally. Sensitivity and specificity were calculated with 95% confidence intervals for all nine RTKs singly and for serial and parallel combinations of six RTKs; and relative costs were estimated. Results: Six of the nine RTKs met the selection criteria, including minimum sensitivity and specificity (both ≥ 99.0%) requirements. There were no significant differences in sensitivities or specificities of RTKs in the serial and parallel algorithms, but the cost of RTKs in parallel algorithms was twice that in serial algorithms. Consequently, three serial algorithms, comprising four test kits (Bundi™, Determine™, Stat-Pak® and Uni-Gold™) with 100.0% sensitivity and 99.1% – 100.0% specificity, were recommended and adopted as national interim testing algorithms in 2007. Conclusion: This evaluation provides the first evidence for reliable combinations of RTKs for HIV testing in Nigeria. However, these RTKs need further evaluation in the field (Phase II) to re-validate their performance.
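
    A serial testing algorithm of the kind adopted above can be sketched as a short decision chain: a sample is screened with a first RTK, reactive samples are confirmed with a second, and discordant results are resolved with a tie-breaker; the exact ordering and resolution rules below are illustrative assumptions rather than the national algorithm itself.

```python
def serial_hiv_algorithm(result_screen, result_confirm, result_tiebreak=None):
    """Serial algorithm sketch: screen -> confirm -> tie-breaker for discordant results.
    Each argument is True (reactive), False (non-reactive) or None (not performed)."""
    if not result_screen:
        return "negative"                        # non-reactive on the screening kit
    if result_confirm:
        return "positive"                        # reactive on both kits
    if result_tiebreak is None:
        return "discordant - run tie-breaker"    # third kit needed
    return "positive" if result_tiebreak else "negative"

# Hypothetical run: screening kit reactive, confirmatory kit non-reactive, tie-breaker non-reactive
print(serial_hiv_algorithm(True, False, False))   # negative
```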

  6. Experiences on developing digital down conversion algorithms using Xilinx system generator

    Science.gov (United States)

    Xu, Chengfa; Yuan, Yuan; Zhao, Lizhi

    2013-07-01

    The Digital Down Conversion (DDC) algorithm is a classical signal processing method which is widely used in radar and communication systems. In this paper, the DDC function is implemented with the Xilinx System Generator tool on an FPGA. System Generator is an FPGA design tool provided by Xilinx Inc. and MathWorks Inc. It makes it very convenient for programmers to manipulate the design and debug the function, especially for complex algorithms. The development of the DDC function based on System Generator shows that System Generator is a very fast and efficient tool for FPGA design.
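
    The DDC operation itself, independent of the FPGA implementation described above, can be sketched in a few lines: mix the input with a numerically controlled oscillator, low-pass filter, and decimate. The sampling rate, carrier frequency and filter length below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def digital_down_convert(signal, fs, f_center, decimation):
    """Classical DDC: mix to baseband with an NCO, low-pass filter, then decimate."""
    n = np.arange(signal.size)
    nco = np.exp(-2j * np.pi * f_center * n / fs)          # numerically controlled oscillator
    baseband = signal * nco                                 # frequency shift to 0 Hz
    taps = firwin(numtaps=101, cutoff=fs / (2 * decimation), fs=fs)
    filtered = lfilter(taps, 1.0, baseband)                 # anti-alias low-pass filter
    return filtered[::decimation]                           # decimate

# Hypothetical narrowband signal near 2.1 MHz sampled at 10 MHz, decimated by 8
fs, fc = 10e6, 2.1e6
t = np.arange(20000) / fs
x = np.cos(2 * np.pi * (fc + 5e3) * t)                      # 5 kHz offset from the carrier
iq = digital_down_convert(x, fs, fc, decimation=8)
print(iq.shape)
```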

  7. Development and benefit analysis of a sector design algorithm for terminal dynamic airspace configuration

    Science.gov (United States)

    Sciandra, Vincent

    The National Airspace System (NAS) is the vast network of systems enabling safe and efficient air travel in the United States. It consists of a set of static sectors, each controlled by one or more air traffic controllers. Air traffic control is tasked with ensuring that all flights can depart and arrive on time and in a safe and efficient manner. However, skyrocketing demand will only increase the stress on an already inefficient system, causing massive delays. The current, static configuration of the NAS cannot possibly handle the future demand on the system safely and efficiently, especially since it is projected to triple by 2025. To overcome these issues, the Next Generation of Air Transportation System (NextGen) is being enacted to increase the flexibility of the NAS. A major objective of NextGen is to implement Adaptable Dynamic Airspace Configuration (ADAC), which will dynamically allocate the sectors to best fit the traffic in the area. Dynamically allocating sectors will allow resources such as controllers to be better distributed to meet traffic demands. Currently, most DAC research has involved the en route airspace. This leaves the terminal airspace, which accounts for a large amount of the overall NAS complexity, in need of work. Using a combination of methods used in en route sectorization, this thesis has developed an algorithm for the dynamic allocation of sectors in the terminal airspace. This algorithm will be evaluated using metrics common in the evaluation of dynamic density, which is adapted for the unique challenges of the terminal airspace, and used to measure workload on air traffic controllers. These metrics give a better view of the controller workload than the number of aircraft alone. By comparing the test results with sectors currently used in the NAS using real traffic data, the algorithm-generated sectors can be quantitatively evaluated for improvement over the current sectorizations. This will be accomplished by testing the

  8. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Aarle, Wim van, E-mail: wim.vanaarle@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, Willem Jan, E-mail: willemjan.palenstijn@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); De Beenhouwer, Jan, E-mail: jan.debeenhouwer@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Altantzis, Thomas, E-mail: thomas.altantzis@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Bals, Sara, E-mail: sara.bals@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Batenburg, K. Joost, E-mail: joost.batenburg@cwi.nl [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); Mathematical Institute, Leiden University, P.O. Box 9512, NL-2300 RA Leiden (Netherlands); Sijbers, Jan, E-mail: jan.sijbers@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-10-15

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series.

  9. Development of a Decision Making Algorithm for Traffic Jams Reduction Applied to Intelligent Transportation Systems

    Directory of Open Access Journals (Sweden)

    David Gómez

    2016-01-01

    Full Text Available This paper is aimed at developing a decision making algorithm for traffic jam reduction that can be applied to Intelligent Transportation Systems. To do so, these algorithms must address two main challenges that arise in this context. On one hand, there are uncertainties in the data received from sensor networks, produced by incomplete information or because the information loses some of its precision during information processing and display. On the other hand, there is the variability of the context in which these types of systems operate. More specifically, the Analytic Hierarchy Process (AHP) algorithm has been adapted to ITS, taking into account the mentioned challenges. After explaining the proposed decision making method, it is validated in a specific scenario: a smart traffic management system.
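
    The core AHP computation, deriving priority weights from a reciprocal pairwise comparison matrix (here via its principal eigenvector), can be sketched as below; the comparison values for three hypothetical congestion-mitigation actions are invented for illustration.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise comparison matrix
    (principal eigenvector, normalized to sum to 1)."""
    values, vectors = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vectors[:, np.argmax(np.real(values))])
    weights = np.abs(principal)
    return weights / weights.sum()

# Hypothetical comparison of three congestion-mitigation actions
# (rows/columns: reroute traffic, retime signals, do nothing)
matrix = [
    [1,   3,   5],
    [1/3, 1,   3],
    [1/5, 1/3, 1],
]
print(ahp_weights(matrix).round(3))   # roughly [0.64, 0.26, 0.10]
```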

  10. Development of a doubly weighted Gerchberg-Saxton algorithm for use in multibeam imaging applications.

    Science.gov (United States)

    Poland, Simon P; Krstajić, Nikola; Knight, Robert D; Henderson, Robert K; Ameer-Beg, Simon M

    2014-04-15

    We report on the development of a doubly weighted Gerchberg-Saxton algorithm (DWGS) to enable generation of uniform beamlet arrays with a spatial light modulator (SLM) for use in multiphoton multifocal imaging applications. The algorithm incorporates the WGS algorithm as well as feedback of fluorescence signals from the sample measured with a single-photon avalanche diode (SPAD) detector array. This technique compensates for issues associated with nonuniform illumination onto the SLM, the effects due to aberrations, and the variability in gain between detectors within the SPAD array to generate a uniformly illuminated multiphoton fluorescence image. We demonstrate the use of the DWGS with a number of beamlet array patterns to image muscle fibers of a 5-day-old fixed zebrafish larva.
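
    A minimal numerical sketch of the weighted Gerchberg-Saxton idea (without the second, detector-based weighting from the SPAD feedback) is shown below: the far field is modelled with an FFT, spot weights are nudged toward uniform amplitude, and the SLM constraint keeps phase only. The grid size, spot positions and iteration count are arbitrary choices for illustration.

```python
import numpy as np

def weighted_gs(target_spots, shape=(64, 64), iterations=30, seed=0):
    """Weighted Gerchberg-Saxton sketch: find a phase-only pattern whose far field
    (FFT) concentrates nearly equal amplitude on the requested spot positions."""
    rng = np.random.default_rng(seed)
    target = np.zeros(shape)
    for r, c in target_spots:
        target[r, c] = 1.0
    weights = target.copy()
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, shape))    # random initial phase
    for _ in range(iterations):
        far = np.fft.fft2(field)
        amp = np.abs(far)
        spot_amp = np.where(target > 0, amp, 1.0)
        # Boost the weights of weak spots toward the mean spot amplitude
        weights = np.where(target > 0, weights * (amp[target > 0].mean() / spot_amp), 0.0)
        far = weights * np.exp(1j * np.angle(far))            # impose weighted target amplitude
        field = np.exp(1j * np.angle(np.fft.ifft2(far)))      # keep phase only (SLM constraint)
    spots = np.abs(np.fft.fft2(field))[target > 0]
    return np.angle(field), spots / spots.mean()              # relative spot amplitudes, ideally ~1

phase, uniformity = weighted_gs([(10, 10), (10, 50), (50, 30)])
print(uniformity.round(3))
```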

  11. A collaborative approach to developing an electronic health record phenotyping algorithm for drug-induced liver injury.

    Science.gov (United States)

    Overby, Casey Lynnette; Pathak, Jyotishman; Gottesman, Omri; Haerian, Krystl; Perotte, Adler; Murphy, Sean; Bruce, Kevin; Johnson, Stephanie; Talwalkar, Jayant; Shen, Yufeng; Ellis, Steve; Kullo, Iftikhar; Chute, Christopher; Friedman, Carol; Bottinger, Erwin; Hripcsak, George; Weng, Chunhua

    2013-12-01

    To describe a collaborative approach for developing an electronic health record (EHR) phenotyping algorithm for drug-induced liver injury (DILI). We analyzed types and causes of differences in DILI case definitions provided by two institutions-Columbia University and Mayo Clinic; harmonized two EHR phenotyping algorithms; and assessed the performance, measured by sensitivity, specificity, positive predictive value, and negative predictive value, of the resulting algorithm at three institutions except that sensitivity was measured only at Columbia University. Although these sites had the same case definition, their phenotyping methods differed by selection of liver injury diagnoses, inclusion of drugs cited in DILI cases, laboratory tests assessed, laboratory thresholds for liver injury, exclusion criteria, and approaches to validating phenotypes. We reached consensus on a DILI phenotyping algorithm and implemented it at three institutions. The algorithm was adapted locally to account for differences in populations and data access. Implementations collectively yielded 117 algorithm-selected cases and 23 confirmed true positive cases. Phenotyping for rare conditions benefits significantly from pooling data across institutions. Despite the heterogeneity of EHRs and varied algorithm implementations, we demonstrated the portability of this algorithm across three institutions. The performance of this algorithm for identifying DILI was comparable with other computerized approaches to identify adverse drug events. Phenotyping algorithms developed for rare and complex conditions are likely to require adaptive implementation at multiple institutions. Better approaches are also needed to share algorithms. Early agreement on goals, data sources, and validation methods may improve the portability of the algorithms.

  12. Development and implementation of an algorithm for detection of protein complexes in large interaction networks

    Directory of Open Access Journals (Sweden)

    Kanaya Shigehiko

    2006-04-01

    Full Text Available Background: After complete sequencing of a number of genomes the focus has now turned to proteomics. Advanced proteomics technologies such as two-hybrid assay, mass spectrometry etc. are producing huge data sets of protein-protein interactions which can be portrayed as networks, and one of the burning issues is to find protein complexes in such networks. The enormous size of protein-protein interaction (PPI) networks warrants development of efficient computational methods for extraction of significant complexes. Results: This paper presents an algorithm for detection of protein complexes in large interaction networks. In a PPI network, a node represents a protein and an edge represents an interaction. The input to the algorithm is the associated matrix of an interaction network and the outputs are protein complexes. The complexes are determined by way of finding clusters, i.e. the densely connected regions in the network. We also show and analyze some protein complexes generated by the proposed algorithm from typical PPI networks of Escherichia coli and Saccharomyces cerevisiae. A comparison between a PPI and a random network is also performed in the context of the proposed algorithm. Conclusion: The proposed algorithm makes it possible to detect clusters of proteins in PPI networks which mostly represent molecular biological functional units. Therefore, protein complexes determined solely based on interaction data can help us to predict the functions of proteins, and they are also useful to understand and explain certain biological processes.

  13. Developing an atrial activity-based algorithm for detection of atrial fibrillation.

    Science.gov (United States)

    Ladavich, Steven; Ghoraani, Behnaz

    2014-01-01

    In this study we propose a novel atrial activity-based method for atrial fibrillation (AF) identification that detects the absence of normal sinus rhythm (SR) P-waves from the surface ECG. The proposed algorithm extracts nine features from P-waves during SR and develops a statistical model to describe the distribution of the features. The Expectation-Maximization algorithm is applied to a training set to create a multivariate Gaussian Mixture Model (GMM) of the feature space. This model is used to identify P-wave absence (PWA) and, in turn, AF. An optional post-processing stage, which takes a majority vote of successive outputs, is applied to improve classifier performance. The algorithm was tested on 20 records in the MIT-BIH Atrial Fibrillation Database. Classification combining seven beats showed a sensitivity of 99.28% and a specificity of 90.21%. The presented algorithm has a classification performance comparable to current heart-rate-based algorithms, yet is rate-independent and capable of making an AF determination in a few beats.
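
    The modelling step described above, fitting a Gaussian mixture to sinus-rhythm P-wave features and flagging beats whose features are unlikely under that model, can be sketched with scikit-learn as below; the three synthetic features, the likelihood threshold and the seven-beat majority vote are illustrative stand-ins for the nine features and tuning of the actual method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical P-wave feature vectors (e.g. amplitude, duration, area) from sinus-rhythm beats
rng = np.random.default_rng(0)
train_features = rng.normal(loc=[0.15, 0.10, 0.02], scale=0.02, size=(500, 3))

# Model the distribution of normal-SR P-wave features with a Gaussian mixture (EM under the hood)
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(train_features)
threshold = np.quantile(gmm.score_samples(train_features), 0.01)   # 1st percentile of training log-likelihood

def p_wave_absent(beat_features):
    """Flag a beat as 'P-wave absent' (AF-like) if its features are unlikely under the SR model."""
    return gmm.score_samples(np.atleast_2d(beat_features))[0] < threshold

def af_decision(beats, window=7):
    """Majority vote over a window of successive beats, as in the optional post-processing stage."""
    flags = [p_wave_absent(b) for b in beats[-window:]]
    return sum(flags) > window // 2

print(af_decision([rng.normal([0.15, 0.10, 0.02], 0.02) for _ in range(7)]))   # likely False (SR-like beats)
```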

  14. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    Science.gov (United States)

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, J.L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (CCA) (ACCA) algorithm used on Landsat 7 requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, which may not be available during all OLI collections. This illustrates a need for CCA for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10^9 pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data - one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.

  15. An Improved Greedy Search Algorithm for the Development of a Phonetically Rich Speech Corpus

    Science.gov (United States)

    Zhang, Jin-Song; Nakamura, Satoshi

    An efficient way to develop large scale speech corpora is to collect phonetically rich ones that have high coverage of phonetic contextual units. The sentence set, usually called the minimum set, should have a small text size in order to reduce the collection cost. It can be selected by a greedy search algorithm from a large mother text corpus. With the inclusion of more and more phonetic contextual effects, the number of different phonetic contextual units increased dramatically, making the search a non-trivial issue. In order to improve the search efficiency, we previously proposed a so-called least-to-most-ordered greedy search based on the conventional algorithms. This paper evaluated these algorithms in order to show their different characteristics. The experimental results showed that the least-to-most-ordered methods successfully achieved smaller objective sets at significantly less computation time, when compared with the conventional ones. This algorithm has already been applied to the development of a number of speech corpora, including a large scale phonetically rich Chinese speech corpus ATRPTH which played an important role in developing our multi-language translation system.
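
    The conventional greedy selection referred to above is essentially a set-cover heuristic: repeatedly pick the sentence that covers the most still-uncovered units. The sketch below uses character bigrams as a stand-in for phonetic contextual units; it illustrates the plain greedy search, not the least-to-most-ordered variant proposed in the paper.

```python
def greedy_corpus_selection(sentences, unit_extractor):
    """Greedy selection: repeatedly pick the sentence covering the most still-uncovered units."""
    remaining = list(sentences)
    covered, selected = set(), []
    while remaining:
        best = max(remaining, key=lambda s: len(unit_extractor(s) - covered))
        gain = unit_extractor(best) - covered
        if not gain:                      # nothing new would be covered; stop
            break
        selected.append(best)
        covered |= gain
        remaining.remove(best)
    return selected

# Hypothetical "phonetic units": character bigrams stand in for triphones and other context units
def bigrams(sentence):
    s = sentence.replace(" ", "")
    return {s[i:i + 2] for i in range(len(s) - 1)}

corpus = ["the cat sat", "a cat ran", "the dog ran", "dogs bark loudly"]
print(greedy_corpus_selection(corpus, bigrams))
```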

  16. An overview on recent radiation transport algorithm development for optical tomography imaging

    Energy Technology Data Exchange (ETDEWEB)

    Charette, Andre [Groupe de Recherche en Ingenierie des Procedes et Systemes, Universite du Quebec a Chicoutimi, Chicoutimi, QC, G7H 2B1 (Canada)], E-mail: Andre_Charette@uqac.ca; Boulanger, Joan [Laboratoire des Turbines a Gaz, Institut pour la Recherche Aerospatiale-Conseil National de Recherche du Canada, Ottawa, ON, K1A 0R6 (Canada); Kim, Hyun K [Department of Biomedical Engineering, Columbia University, New York, NY 10027 (United States)

    2008-11-15

    Optical tomography belongs to the promising set of non-invasive methods for probing applications of semi-transparent media. This covers a wide range of fields. Nowadays, it is mainly driven by medical imaging in search of new, less aggressive and affordable diagnostic means. This paper aims at presenting the most recent research accomplished in the authors' laboratories as well as that of collaborative institutions concerning the development of imaging algorithms. Light transport modelling is no longer the difficult question it used to be. Research is now focused on data treatment and reconstruction. Since the turn of the century, the rapid expansion of low cost computing has permitted the development of enhanced imaging algorithms with great potential. Some of these developments are already on the verge of clinical applications. This paper presents these developments and also provides some insights on still unresolved challenges. Intrinsic difficulties are identified and promising directions for solutions are discussed.

  17. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    Science.gov (United States)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to be able to access. The current code works in the Unity game engine which does have cross platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project was unable to be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.

  18. Development of potential methods for testing congestion control algorithm implemented in vehicle to vehicle communications.

    Science.gov (United States)

    Hsu, Chung-Jen; Fikentscher, Joshua; Kreeb, Robert

    2017-03-21

    Objective: A channel congestion problem might occur when the traffic density increases, since the number of basic safety messages carried on the communication channel also increases in vehicle-to-vehicle communications. A remedy algorithm proposed in SAE J2945/1 is designed to address the channel congestion issue by decreasing transmission frequency and radiated power. This study is to develop potential test procedures for evaluating or validating the congestion control algorithm. Methods: Simulations of a reference unit transmitting at a higher frequency are implemented to emulate a number of Onboard Equipment (OBE) transmitting at the normal interval of 100 milliseconds (10 Hz). When the transmitting interval is reduced to 1.25 milliseconds (800 Hz), the reference unit emulates 80 vehicles transmitting at 10 Hz. By increasing the number of reference units transmitting at 800 Hz in the simulations, the corresponding channel busy percentages are obtained. An algorithm for GPS data generation of virtual vehicles is developed for facilitating the validation of transmission intervals in the congestion control algorithm. Results: Channel busy percentage is the channel busy time over a specified period of time. Three or four reference units are needed to generate channel busy percentages between 50% and 80%, and five reference units can generate channel busy percentages above 80%. The proposed test procedures can verify the operation of the congestion control algorithm when channel busy percentages are between 50% and 80%, and above 80%. By using the GPS data generation algorithm, the test procedures can also verify the transmission intervals when traffic densities are 80 and 200 vehicles in the radius of 100 m. A suite of test tools with functional requirements is also proposed for facilitating the implementation of test procedures. Conclusions: The potential test procedures for the congestion control algorithm are developed based on the simulation results of channel busy
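
    The channel busy percentage used above is simply the fraction of a time window during which messages occupy the channel; the sketch below computes it for a hypothetical reference unit transmitting 800 messages per second, with a made-up per-message airtime.

```python
def channel_busy_percentage(message_log, window_s):
    """Channel busy percentage: share of a time window during which the channel carries messages.
    message_log is a list of (start_time_s, duration_s) tuples within the window."""
    busy = sum(duration for _, duration in message_log)
    return 100.0 * busy / window_s

# Hypothetical reference unit emulating 80 vehicles: 800 messages/s, ~0.4 ms airtime each
rate_hz, airtime_s, window_s = 800, 0.0004, 1.0
log = [(i / rate_hz, airtime_s) for i in range(int(rate_hz * window_s))]
print(channel_busy_percentage(log, window_s))   # 32.0 (% of the channel occupied)
```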

  19. Developing a synergy algorithm for land surface temperature: the SEN4LST project

    Science.gov (United States)

    Sobrino, Jose A.; Jimenez, Juan C.; Ghent, Darren J.

    2013-04-01

    Land surface Temperature (LST) is one of the key parameters in the physics of land-surface processes on regional and global scales, combining the results of all surface-atmosphere interactions and energy fluxes between the surface and the atmosphere. An adequate characterization of LST distribution and its temporal evolution requires measurements with detailed spatial and temporal frequencies. With the advent of the Sentinel 2 (S2) and 3 (S3) series of satellites a unique opportunity exists to go beyond the current state of the art of single instrument algorithms. The Synergistic Use of The Sentinel Missions For Estimating And Monitoring Land Surface Temperature (SEN4LST) project aims at developing techniques to fully utilize synergy between S2 and S3 instruments in order to improve LST retrievals. In the framework of the SEN4LST project, three LST retrieval algorithms were proposed using the thermal infrared bands of the Sea and Land Surface Temperature Radiometer (SLSTR) instrument on board the S3 platform: split-window (SW), dual-angle (DA) and a combined algorithm using both split-window and dual-angle techniques (SW-DA). One of the objectives of the project is to select the best algorithm to generate LST products from the synergy between S2/S3 instruments. In this sense, validation is a critical step in the selection process for the best performing candidate algorithm. A unique match-up database constructed at the University of Leicester (UoL), of in situ observations from over twenty ground stations and corresponding brightness temperature (BT) and LST match-ups from multi-sensor overpasses, is utilised for validating the candidate algorithms. Furthermore, their performance is also evaluated against the standard ESA LST product and the enhanced offline UoL LST product. In addition, a simulation dataset is constructed using 17 synthetic LST images and the radiative transfer model MODTRAN, run under 66 different atmospheric conditions. Each candidate LST
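
    A generic split-window formulation of the kind referred to above estimates LST from the two thermal brightness temperatures and the surface emissivity; the coefficients in the sketch below are arbitrary placeholders rather than the regression coefficients developed in SEN4LST.

```python
def split_window_lst(bt11, bt12, emissivity, coeffs=(0.2, 1.8, 0.5, 40.0)):
    """Simplified generic split-window form:
    LST = BT11 + c1*(BT11 - BT12) + c2*(BT11 - BT12)**2 + c0 + c3*(1 - emissivity)
    Coefficients here are placeholders; operational coefficients come from regression
    against radiative-transfer simulations (e.g. MODTRAN) over a range of atmospheres."""
    c0, c1, c2, c3 = coeffs
    dbt = bt11 - bt12
    return bt11 + c1 * dbt + c2 * dbt**2 + c0 + c3 * (1.0 - emissivity)

# Hypothetical brightness temperatures (K) in the 10.8 and 12.0 um channels, emissivity 0.98
print(round(split_window_lst(295.0, 293.5, 0.98), 2))   # ~299.8 K
```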

  20. Development of sensor-based nitrogen recommendation algorithms for cereal crops

    Science.gov (United States)

    Asebedo, Antonio Ray

    Nitrogen (N) management is one of the most recognizable components of farming both within and outside the world of agriculture. Interest over the past decade has greatly increased in improving N management systems in corn (Zea mays) and winter wheat (Triticum aestivum) to achieve high NUE and high yield and to be environmentally sustainable. Nine winter wheat experiments were conducted across seven locations from 2011 through 2013. The objectives of this study were to evaluate the impacts of fall-winter, Feekes 4, Feekes 7, and Feekes 9 N applications on winter wheat grain yield, grain protein, and total grain N uptake. Nitrogen treatments were applied as single or split applications in the fall-winter, and top-dressed in the spring at Feekes 4, Feekes 7, and Feekes 9 with applied N rates ranging from 0 to 134 kg ha^-1. Results indicate that Feekes 7 and 9 N applications provide more optimal combinations of grain yield, grain protein levels, and fertilizer N recovered in the grain when compared to comparable rates of N applied in the fall-winter or at Feekes 4. Winter wheat N management studies from 2006 through 2013 were utilized to develop sensor-based N recommendation algorithms for winter wheat in Kansas. Algorithm RosieKat v.2.6 was designed for multiple N application strategies and utilized N reference strips for establishing N response potential. Algorithm NRS v1.5 addresses single top-dress N applications and does not require an N reference strip. In 2013, field validations of both algorithms were conducted at eight locations across Kansas. Results show algorithm RK v2.6 consistently provided highly efficient N recommendations for improving NUE, while achieving high grain yield and grain protein. Without the use of the N reference strip, NRS v1.5 performed statistically equal to the KSU soil test N recommendation with regard to grain yield but with lower applied N rates. Six corn N fertigation experiments were conducted at KSU irrigated experiment fields from 2012

  1. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  2. Development of a noise reduction filter algorithm for pediatric body images in multidetector CT.

    Science.gov (United States)

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Tomoshige, Yukihiro; Kurokawa, Takehiro; Nakamura, Yuko; Suzuki, Masayuki

    2010-12-01

    Recently, several types of post-processing image filters designed to reduce noise, allowing a corresponding dose reduction in CT images, have been proposed, and these were reported to be useful for noise reduction in CT images of adult patients. However, their adaptation to pediatric patients has not been reported. Because they are not very effective with small (<20 cm) display fields of view, they could not be used for pediatric (e.g., premature babies and infants) body CT images. In order to remove this restriction, we have developed a new noise reduction filter algorithm which is applicable to pediatric body CT images. This algorithm is based on three-dimensional post-processing, in which output pixel values are calculated by multi-directional, one-dimensional median filters on the original volumetric datasets. The processing directions were selected to exclude the in-plane (axial plane) direction, and consequently the in-plane spatial resolution is not affected by the filter. Also, in the other directions, the spatial resolutions, including slice thickness, were almost maintained due to the non-linear filtering characteristics of the median filter. The results of phantom studies show that the proposed algorithm can reduce standard deviation values, as a noise index, by up to 30% without affecting the spatial resolution in any direction, and therefore the contrast-to-noise ratio was improved by up to 30%. This newly developed filter algorithm will be useful for diagnosis and radiation dose reduction in pediatric body CT imaging.
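
    The central idea described above, applying one-dimensional median filters along through-plane directions only and combining the results, can be sketched with SciPy as below; the filter length, the choice of three directions and the simple averaging of the filtered volumes are illustrative simplifications of the combination used in the actual algorithm, and the test volume is synthetic noise.

```python
import numpy as np
from scipy.ndimage import median_filter

def multidirectional_median(volume, length=5):
    """Apply 1-D median filters along several through-plane directions of a CT volume
    (axis order: z, y, x) and average the results, leaving axial in-plane detail untouched."""
    z_line  = np.ones((length, 1, 1), dtype=bool)       # along the slice (z) direction
    diag_zy = np.eye(length, dtype=bool)[:, :, None]    # diagonal in the z-y plane
    diag_zx = np.eye(length, dtype=bool)[:, None, :]    # diagonal in the z-x plane
    filtered = [median_filter(volume, footprint=fp) for fp in (z_line, diag_zy, diag_zx)]
    return np.mean(filtered, axis=0)

# Hypothetical noisy volume: 20 slices of 64 x 64 pixels around 100 HU
rng = np.random.default_rng(0)
vol = 100.0 + rng.normal(0.0, 30.0, size=(20, 64, 64))
out = multidirectional_median(vol)
print(round(vol.std(), 1), round(out.std(), 1))   # the noise index (SD) drops markedly
```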

  3. jClustering, an Open Framework for the Development of 4D Clustering Algorithms

    Science.gov (United States)

    Mateos-Pérez, José María; García-Villalba, Carmen; Pascau, Javier; Desco, Manuel; Vaquero, Juan J.

    2013-01-01

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary. PMID:23990913

  4. jClustering, an open framework for the development of 4D clustering algorithms.

    Directory of Open Access Journals (Sweden)

    José María Mateos-Pérez

    Full Text Available We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary.

  5. Development of the atmospheric correction algorithm for the next generation geostationary ocean color sensor data

    Science.gov (United States)

    Lee, Kwon-Ho; Kim, Wonkook

    2017-04-01

    The Geostationary Ocean Color Imager-II (GOCI-II) is designed to focus on ocean environmental monitoring with better spatial (250 m for the local area and 1 km for the full disk) and spectral (13 bands) resolution than the current operational GOCI-I mission. GOCI-II will be launched in 2018. This study presents the algorithm currently being developed for atmospheric correction and retrieval of surface reflectance over land, optimized for the sensor's characteristics. We first derived top-of-atmosphere radiances in the 13 GOCI-II bands as proxy data from a parameterized radiative transfer code. Based on the proxy data, the algorithm was built with cloud masking, gas absorption correction, aerosol inversion, and computation of the aerosol extinction correction. The retrieved surface reflectances are evaluated against the MODIS level 2 surface reflectance products (MOD09). For the initial test period, the algorithm gave errors within 0.05 compared to MOD09. Further work will progress to fully implement the algorithm in the GOCI-II Ground Segment system (G2GS) algorithm development environment. These atmospherically corrected surface reflectance products will be standard GOCI-II products after launch.

  6. Development of a polarimetric radar based hydrometeor classification algorithm for winter precipitation

    Science.gov (United States)

    Thompson, Elizabeth Jennifer

    The nation-wide WSR-88D radar network is currently being upgraded to dual-polarization technology. While many convective, warm-season fuzzy-logic hydrometeor classification algorithms based on this new suite of radar variables and temperature have been refined, less progress has been made thus far in developing hydrometeor classification algorithms for winter precipitation. Unlike previous studies, the focus of this work is to exploit the discriminatory power of polarimetric variables to distinguish the most common precipitation types found in winter storms without the use of temperature as an additional variable. For the first time, detailed electromagnetic scattering calculations for plates, dendrites, dry aggregated snowflakes, rain, freezing rain, and sleet are conducted at X-, C-, and S-band wavelengths. These physics-based results are used to determine the characteristic radar variable ranges associated with each precipitation type. A variable weighting system was also implemented in the algorithm's decision process to capitalize on the strengths of specific dual-polarimetric variables to discriminate between certain classes of hydrometeors, such as wet snow to indicate the melting layer. This algorithm was tested on observations during three different winter storms in Colorado and Oklahoma with the dual-wavelength X- and S-band CSU-CHILL, C-band OU-PRIME, and X-band CASA IP1 polarimetric radars. The algorithm showed success at all three frequencies, but was slightly more reliable at X-band because of the algorithm's strong dependence on KDP. While plates were rarely distinguished from dendrites, the latter were satisfactorily differentiated from dry aggregated snowflakes and wet snow. Sleet and freezing rain could not be distinguished from rain or light rain based on polarimetric variables alone. However, high-resolution radar observations illustrated the refreezing process of raindrops into ice pellets, which has been documented before but not yet explained. Persistent
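
    The weighted fuzzy-logic aggregation described above can be illustrated with a minimal sketch; the membership intervals, weights, and class list here are placeholders, not the thresholds derived in the study.

      # Minimal weighted fuzzy-logic classifier with simple interval memberships.
      RANGES = {                      # placeholder (min, max) intervals per class and variable
          "rain":      {"Zh": (20, 55), "Zdr": (0.5, 4.0), "Kdp": (0.0, 3.0)},
          "dry_snow":  {"Zh": (5, 35),  "Zdr": (0.0, 1.0), "Kdp": (0.0, 0.3)},
          "wet_snow":  {"Zh": (25, 45), "Zdr": (0.5, 3.0), "Kdp": (0.0, 1.0)},
      }
      WEIGHTS = {"Zh": 0.5, "Zdr": 1.0, "Kdp": 1.5}   # emphasize the most discriminating variables

      def membership(value, lo, hi):
          """1 inside the interval, decaying linearly to 0 one interval-width outside."""
          width = hi - lo
          if lo <= value <= hi:
              return 1.0
          return max(0.0, 1.0 - min(abs(value - lo), abs(value - hi)) / width)

      def classify(obs):
          scores = {}
          for cls, ranges in RANGES.items():
              num = sum(WEIGHTS[v] * membership(obs[v], *ranges[v]) for v in ranges)
              scores[cls] = num / sum(WEIGHTS[v] for v in ranges)
          return max(scores, key=scores.get)

      print(classify({"Zh": 30.0, "Zdr": 0.3, "Kdp": 0.1}))   # -> dry_snow for these toy values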

  7. Development of a novel algorithm to determine adherence to chronic pain treatment guidelines using administrative claims

    Science.gov (United States)

    Margolis, Jay M; Princic, Nicole; Smith, David M; Abraham, Lucy; Cappelleri, Joseph C; Shah, Sonali N; Park, Peter W

    2017-01-01

    Objective To develop a claims-based algorithm for identifying patients who are adherent versus nonadherent to published guidelines for chronic pain management. Methods Using medical and pharmacy health care claims from the MarketScan® Commercial and Medicare Supplemental Databases, patients were selected during July 1, 2010, to June 30, 2012, with the following chronic pain conditions: osteoarthritis (OA), gout (GT), painful diabetic peripheral neuropathy (pDPN), post-herpetic neuralgia (PHN), and fibromyalgia (FM). Patients newly diagnosed with 12 months of continuous medical and pharmacy benefits both before and after initial diagnosis (index date) were categorized as adherent, nonadherent, or unsure according to the guidelines-based algorithm using disease-specific pain medication classes grouped as first-line, later-line, or not recommended. Descriptive and multivariate analyses compared patient outcomes with algorithm-derived categorization endpoints. Results A total of 441,465 OA patients, 76,361 GT patients, 10,645 pDPN, 4,010 PHN patients, and 150,321 FM patients were included in the development of the algorithm. Patients found adherent to guidelines included 51.1% for OA, 25% for GT, 59.5% for pDPN, 54.9% for PHN, and 33.5% for FM. The majority (~90%) of patients adherent to the guidelines initiated therapy with prescriptions for first-line pain medications written for a minimum of 30 days. Patients found nonadherent to guidelines included 30.7% for OA, 6.8% for GT, 34.9% for pDPN, 23.1% for PHN, and 34.7% for FM. Conclusion This novel algorithm used real-world pharmacotherapy treatment patterns to evaluate adherence to pain management guidelines in five chronic pain conditions. Findings suggest that one-third to one-half of patients are managed according to guidelines. This method may have valuable applications for health care payers and providers analyzing treatment guideline adherence. PMID:28223842
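
    To make the categorization logic above concrete, a heavily simplified sketch follows; the medication-class lists, the 30-day first-line rule, and the "unsure" fallback are illustrative assumptions, not the published algorithm.

      # Hedged sketch of guideline-adherence categorization from claims-like records.
      FIRST_LINE = {"OA": {"nsaid", "acetaminophen"}, "FM": {"duloxetine", "pregabalin"}}
      NOT_RECOMMENDED = {"OA": {"long_term_opioid"}, "FM": {"long_term_opioid"}}

      def categorize(condition, first_rx_class, first_rx_days_supply):
          """Return 'adherent', 'nonadherent', or 'unsure' for the first post-index prescription."""
          if first_rx_class in FIRST_LINE.get(condition, set()) and first_rx_days_supply >= 30:
              return "adherent"
          if first_rx_class in NOT_RECOMMENDED.get(condition, set()):
              return "nonadherent"
          return "unsure"

      print(categorize("OA", "nsaid", 30))            # adherent
      print(categorize("FM", "long_term_opioid", 90)) # nonadherent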

  8. Development of an algorithm for analysing the electronic measurement of medication adherence in routine HIV care.

    Science.gov (United States)

    Rotzinger, Aurélie; Cavassini, Matthias; Bugnon, Olivier; Schneider, Marie Paule

    2016-10-01

    Background Medication adherence is crucial for successful treatment. Various methods exist for measuring adherence, including electronic drug monitoring, pharmacy refills, pill count, and interviews. These methods are not equivalent, and no method can be considered as the gold standard. A combination of methods is therefore recommended. Objective To develop an algorithm for the management of routinely collected adherence data and to compare persistence and implementation curves using post-algorithm data (reconciled data) versus raw electronic drug monitoring data. Setting A community pharmacy located within a university medical outpatient clinic in Lausanne, Switzerland. Methods The algorithm was developed to take advantage of the strengths of each available adherence measurement method, with electronic drug monitoring as a cornerstone to capture the dynamics of patient behaviour, pill count as a complementary objective method to detect any discrepancy between the number of openings measured by electronic monitoring and the number of pills ingested per opening, and annotated interviews to interpret the discrepancy. The algorithm was tested using data from patients taking lopinavir/r and having participated in an adherence-enhancing programme for more than 3 months. Main outcome measure Adherence was calculated as the percentage of persistent patients (persistence) and the proportion of days with correct dosing over time (implementation) from inclusion to the end of the median follow-up period. Results A 10-step algorithm was established. Among 2041 analysed inter-visit periods, 496 (24 %) were classified as inaccurate, among which 372 (75 %) could be reconciled. The average implementation values were 85 % (raw data) and 91 % (reconciled data) (p setting of a medication adherence clinic. Electronic drug monitoring underestimates medication adherence, affecting subsequent analysis of routinely collected adherence data. To ensure a set of reliable electronic

  9. Modelling Kara Sea phytoplankton primary production: Development and skill assessment of regional algorithms

    Science.gov (United States)

    Demidov, Andrey B.; Kopelevich, Oleg V.; Mosharov, Sergey A.; Sheberstov, Sergey V.; Vazyulya, Svetlana V.

    2017-07-01

    Empirical region-specific (RSM), depth-integrated (DIM) and depth-resolved (DRM) primary production models are developed based on data from the Kara Sea during the autumn (September-October 1993, 2007, 2011). The models are validated using field and satellite (MODIS-Aqua) observations. Our findings suggest that RSM algorithms perform better than non-region-specific algorithms (NRSM) in terms of regression analysis, root-mean-square difference (RMSD) and model efficiency. In general, the RSM and NRSM underestimate or overestimate the in situ water column integrated primary production (IPP) by a factor of 2 and 2.8, respectively. Additionally, our results suggest that the model skill of the RSM increases when the chlorophyll-specific carbon fixation rate, efficiency of photosynthesis and photosynthetically available radiation (PAR) are used as input variables. The parameterization of chlorophyll (chl a) vertical profiles is performed in Kara Sea waters with different trophic statuses. Model validation with field data suggests that the DIM and DRM algorithms perform equally (RMSD of 0.29 and 0.31, respectively). No changes in the performance of the DIM and DRM algorithms are observed (RMSD of 0.30 and 0.31, respectively) when satellite-derived chl a, PAR and the diffuse attenuation coefficient (Kd) are applied as input variables.

  10. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    Full Text Available The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that host filamentous cyanobacteria strains in different environments. The data sets were used to estimate the abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA (“ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning”, Gandola et al., 2016 [1]). This strategy was used to assess the algorithm's performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to remove background signals from the input images, and spline curves with the least-squares method were then used to parameterize detected filaments and to recombine crossing and interrupted sections, enabling precise abundance estimations and morphometric measurements.
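
    A minimal sketch of the Sobel-based denoising step mentioned above; the kernel, edge padding, and threshold are generic image-processing choices, not the exact ACQUA pipeline.

      import numpy as np

      SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

      def sobel_magnitude(img):
          """Gradient magnitude via 2-D convolution with the Sobel kernels (edge padding)."""
          padded = np.pad(img, 1, mode="edge")
          gx = np.zeros_like(img, dtype=float)
          gy = np.zeros_like(img, dtype=float)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  patch = padded[i:i + 3, j:j + 3]
                  gx[i, j] = np.sum(patch * SOBEL_X)
                  gy[i, j] = np.sum(patch * SOBEL_X.T)
          return np.hypot(gx, gy)

      img = np.random.rand(64, 64)
      edges = sobel_magnitude(img)
      foreground = edges > edges.mean() + 2 * edges.std()   # crude mask separating filaments from background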

  11. MEMS-based sensing and algorithm development for fall detection and gait analysis

    Science.gov (United States)

    Gupta, Piyush; Ramirez, Gabriel; Lie, Donald Y. C.; Dallas, Tim; Banister, Ron E.; Dentino, Andrew

    2010-02-01

    Falls by the elderly are highly detrimental to health, frequently resulting in injury, high medical costs, and even death. Using a MEMS-based sensing system, algorithms are being developed for detecting falls and monitoring the gait of elderly and disabled persons. In this study, wireless sensors utilizing Zigbee protocols were incorporated into planar shoe insoles and a waist-mounted device. The insole contains four sensors to measure pressure applied by the foot. A MEMS-based tri-axial accelerometer is embedded in the insert and a second one is utilized by the waist-mounted device. The primary fall detection algorithm is derived from the waist accelerometer. The differential acceleration is calculated from samples received in 1.5 s time intervals. This differential acceleration is quantified via an energy index. From this index one may characterize gait and identify fall events. Once a pre-determined index threshold is exceeded, the algorithm will classify an event as a fall or a stumble. The secondary algorithm is derived from frequency analysis techniques. The analysis consists of wavelet transforms conducted on the waist accelerometer data. The insole pressure data are then used to highlight discrepancies in the transforms, providing more accurate data for classifying gait and/or detecting falls. The range of the transform amplitude in the fourth iteration of a Daubechies-6 transform was found sufficient to detect and classify fall events.
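
    The waist-accelerometer energy-index idea described above might look roughly like this; the window length matches the 1.5 s interval in the abstract, but the sampling rate, threshold, and exact index definition are assumptions.

      import numpy as np

      FS = 50                      # assumed sampling rate (Hz)
      WINDOW = int(1.5 * FS)       # 1.5 s windows, as in the abstract

      def energy_index(accel_xyz):
          """Sum of squared differential acceleration magnitude over one window."""
          mag = np.linalg.norm(accel_xyz, axis=1)          # |a| per sample
          diff = np.diff(mag)                              # differential acceleration
          return float(np.sum(diff ** 2))

      def detect_fall(accel_xyz, threshold=20.0):
          """Slide a 1.5 s window over the signal and flag windows exceeding the threshold."""
          flags = []
          for start in range(0, len(accel_xyz) - WINDOW + 1, WINDOW):
              flags.append(energy_index(accel_xyz[start:start + WINDOW]) > threshold)
          return flags

      quiet = np.tile([0.0, 0.0, 9.8], (3 * FS, 1))        # 3 s of standing still
      print(detect_fall(quiet))                            # -> [False, False]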

  12. Digital Technologies in Providing Development of Algorithms Surgical Treatment of Supraventricular Arrhythmias

    Directory of Open Access Journals (Sweden)

    Evtushenko Vladimir

    2016-01-01

    Full Text Available The aim of the study was the development and clinical application of a patient selection algorithm for the surgical treatment of long-standing persistent atrial fibrillation. The study included 235 patients with acquired heart disease and coronary artery disease who, in the period from 1999 to 2015, underwent surgical treatment of long-term persistent atrial fibrillation (the RF “MAZE III” procedure) in conjunction with correction of the underlying heart disease. The patients were divided into two groups according to the method of operation: group 1 comprised 135 patients (76 women and 59 men) treated with an integrated approach to surgery for atrial fibrillation, including a penetrating method of RF action on the atrial myocardium and assessment of sinus node function before and after the operation (these patients were operated on from 2008 to 2015); group 2 comprised 100 patients (62 women and 38 men) treated with the “classical” monopolar RF “MAZE III” method, in whom sinus node function was not studied. We used the combined (epi- and endocardial) method of RF «MAZE». The algorithm aims to decrease the need for permanent pacemaker implantation postoperatively. The initial sinus node function of these patients, measured using an original method, was taken as the baseline of the algorithm. The results showed that using this algorithm for patient selection significantly reduces the likelihood of pacemaker implantation in the postoperative period.

  13. Forecasting of the development of professional medical equipment engineering based on neuro-fuzzy algorithms

    Science.gov (United States)

    Vaganova, E. V.; Syryamkin, M. V.

    2015-11-01

    The purpose of the research is the development of evolutionary algorithms for assessing promising scientific directions. The present study focuses on evaluating foresight possibilities for identifying technological peaks and emerging technologies in professional medical equipment engineering in Russia and worldwide, on the basis of intellectual property items and neural network modeling. An automated information system has been developed, consisting of modules that implement various classification methods to improve forecast accuracy and an algorithm for constructing a neuro-fuzzy decision tree. According to the study results, modern trends in this field will focus on personalized smart devices, telemedicine, biomonitoring, and «e-Health» and «m-Health» technologies.

  14. Development of Cloud and Precipitation Property Retrieval Algorithms and Measurement Simulators from ASR Data

    Energy Technology Data Exchange (ETDEWEB)

    Mace, Gerald G. [Univ. of Utah, Salt Lake City, UT (United States). Dept. of Atmospheric Sciences

    2016-02-10

    What has made the ASR program unique is the amount of information that is available. The suite of recently deployed instruments significantly expands the scope of the program (Mather and Voyles, 2013). The breadth of this information allows us to pose sophisticated process-level questions. Our ASR project, now entering its third year, has been about developing algorithms that use this information in ways that fully exploit the new capacity of the ARM data streams. Using optimal estimation (OE) and Markov Chain Monte Carlo (MCMC) inversion techniques, we have developed methodologies that allow us to use multiple radar frequency Doppler spectra along with lidar and passive constraints where data streams can be added or subtracted efficiently and algorithms can be reformulated for various combinations of hydrometeors by exchanging sets of empirical coefficients. These methodologies have been applied to boundary layer clouds, mixed phase snow cloud systems, and cirrus.
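
    Retrievals of this kind typically minimize the standard optimal-estimation cost function (observation misfit plus prior misfit, each weighted by its covariance). The sketch below is the generic textbook form written in Python, not this project's implementation; the forward model and all matrices are toy values.

      import numpy as np

      def oe_cost(x, y, forward_model, x_a, S_e, S_a):
          """Standard optimal-estimation cost: data misfit plus prior misfit, covariance-weighted."""
          r_obs = y - forward_model(x)
          r_prior = x - x_a
          return float(r_obs @ np.linalg.solve(S_e, r_obs) + r_prior @ np.linalg.solve(S_a, r_prior))

      # Toy example: a linear "forward model" mapping two state elements to three observations
      K = np.array([[1.0, 0.5], [0.2, 1.0], [0.0, 0.8]])
      cost = oe_cost(x=np.array([1.0, 2.0]), y=np.array([2.1, 2.1, 1.5]),
                     forward_model=lambda x: K @ x, x_a=np.zeros(2),
                     S_e=0.1 * np.eye(3), S_a=np.eye(2))
      print(round(cost, 2))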

  15. THE DEVELOPMENT OF AN EBook WITH DYNAMIC CONTENT FOR THE INTRODUCTION OF ALGORITHMS and PROGRAMMING

    Directory of Open Access Journals (Sweden)

    Gürcan Çetin

    2016-12-01

    Full Text Available It is very important that the content of the Algorithms and Programming course is understood by Computer Engineering students. The eBook designed in this study provides a better explanation of the flow diagrams and programming logic of the algorithms used in introductory programming; the abstract processing steps that take place in computer memory and the CPU during program execution are animated and visualized by means of computer animations and simulations. The EPUB 3.0-based training content, developed using animation and interactive elements, is expected to create new opportunities for students through anytime, anywhere access. This work also describes the development process of an EPUB 3.0-based eBook for use on computers or mobile devices.

  16. Development of Regional TSS Algorithm over Penang using Modis Terra (250 M) Surface Reflectance Product

    Directory of Open Access Journals (Sweden)

    Amin Abd Rahman Mat

    2016-09-01

    Full Text Available Total suspended sediment (TSS) plays a significant role in the environment. Many researchers show that TSS has a high correlation with the red portion of the visible light spectrum. The correlation is highly dependent on the geography of the study area. The aim of this study was to develop specific algorithms utilizing the corrected MODIS Terra 250-m surface reflectance (Rrs) product (MOD09) to map TSS over the Penang coastal area. Field measurements of TSS were performed during two cruise trips conducted on 8 December 2008 and 29 January 2010 over the Penang coastal area. The relationship between TSS and the surface reflectance of MOD09 was analysed using regression analysis. The developed algorithm showed that Rrs is highly correlated with the in-situ TSS, with R2 = 0.838. The result shows that the Rrs product can be used to estimate TSS over the Penang area.
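
    The kind of single-band regression algorithm described above can be sketched in a few lines; the match-up values and fitted coefficients below are synthetic, not the Penang cruise data.

      import numpy as np

      # Synthetic stand-ins for matched-up red-band reflectance and in-situ TSS (mg/L)
      rrs_red = np.array([0.010, 0.015, 0.022, 0.030, 0.041, 0.055])
      tss_insitu = np.array([5.0, 9.0, 14.0, 21.0, 30.0, 42.0])

      slope, intercept = np.polyfit(rrs_red, tss_insitu, 1)   # linear model TSS = a*Rrs + b
      tss_pred = slope * rrs_red + intercept
      ss_res = np.sum((tss_insitu - tss_pred) ** 2)
      ss_tot = np.sum((tss_insitu - tss_insitu.mean()) ** 2)
      r2 = 1 - ss_res / ss_tot
      print(f"TSS = {slope:.1f} * Rrs + {intercept:.1f}, R2 = {r2:.3f}")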

  17. Development of Regional TSS Algorithm over Penang using Modis Terra (250 M) Surface Reflectance Product

    OpenAIRE

    Amin Abd Rahman Mat; Abdullah Khiruddin; Lim Hwee San; Embong Muhd Fauzi; Ahmad Fadhli; Yaacob Rosnan

    2016-01-01

    Total suspended sediment (TSS) plays a significant role in the environment. Many researchers show that TSS has a high correlation with the red portion of the visible light spectrum. The correlation is highly dependent on geography of the study area. The aim of this study was to develop specific algorithms utilizing corrected MODIS Terra 250-m surface reflectance (Rrs) product (MOD09) to map TSS over the Penang coastal area. Field measurements of TSS were performed during two cruise trips that...

  18. DEVELOPMENT OF ALGORITHMS OF NUMERICAL PROJECT OPTIMIZATION FOR THE CONSTRUCTION AND RECONSTRUCTION OF ENGINEERING STRUCTURES

    Directory of Open Access Journals (Sweden)

    MENEJLJUK О. І.

    2016-08-01

    Full Text Available Problem statement. The paper analyzes numerical optimization methods for projects of construction and reconstruction of engineering structures. Purpose. Possible ways of modeling organizational and technological solutions in construction are presented. Based on the analysis, the most effective optimization method, experimental and statistical modeling using modern computer programs for project management and mathematical statistics, is selected. Conclusion. An algorithm for solving optimization problems by means of experimental and statistical modeling is developed.

  19. The development of controller and navigation algorithm for underwater wall crawler

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Hyung Suck; Kim, Kyung Hoon; Kim, Min Young [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-01-01

    In this project, the control system of an underwater robotic vehicle (URV) for underwater wall inspection in a nuclear reactor pool or related facilities has been developed. The following four sub-projects were studied: (1) development of the controller and motor driver for the URV; (2) development of the control algorithm for tracking control of the URV; (3) development of the localization system; (4) underwater experiments with the developed system. First, the dynamic characteristics of the thruster with its DC servo-motor were analyzed experimentally. Second, the controller board using the INTEL 80C196 was designed and constructed, and the software for communication and motor control was developed. Third, the PWM motor driver was developed. Fourth, the localization system using a laser scanner and inclinometer was developed and tested in the pool. Fifth, the dynamics of the URV were studied and proper control algorithms for the URV were proposed. Lastly, validation of the integrated system was performed experimentally. (author). 27 refs., 51 figs., 8 tabs.

  20. Development of an algorithm for identifying rheumatoid arthritis in the Korean National Health Insurance claims database.

    Science.gov (United States)

    Cho, Soo-Kyung; Sung, Yoon-Kyoung; Choi, Chan-Bum; Kwon, Jeong-Mi; Lee, Eui-Kyung; Bae, Sang-Cheol

    2013-12-01

    This study aimed to develop an identification algorithm for validating the International Classification of Diseases-Tenth diagnostic codes for rheumatoid arthritis (RA) in the Korean National Health Insurance (NHI) claims database. An individual copayment beneficiaries program for rare and intractable diseases, including seropositive RA (M05), began in South Korea in July 2009. Patients registered in this system pay only 10 % of their total medical costs, but registration requires an official report from a doctor documenting that the patient fulfills the 1987 ACR criteria. We regarded patients registered in this system as gold standard RA and examined the validity of several algorithms to define RA diagnosis using diagnostic codes and prescription data. We constructed nine algorithms using two highly specific prescriptions (positive predictive value >90 % and specificity >90 %) and one prescription with high sensitivity (>80 %) and accuracy (>75 %). A total of 59,823 RA patients were included in this validation study. Among them, 50,082 (83.7 %) were registered in the individual copayment beneficiaries program and considered true RA. We tested nine algorithms that incorporated two specific regimens [biologics and leflunomide alone, methotrexate plus leflunomide, or more than 3 disease-modifying anti-rheumatic drugs (DMARDs)] and one sensitive drug (any non-steroidal anti-inflammatory drug (NSAID), any DMARD, or any NSAID plus any DMARD). The algorithm that included biologics, more than 3 DMARDs, and any DMARD yielded the highest accuracy (91.4 %). Patients with RA diagnostic codes with prescription of biologics or any DMARD can be considered as accurate cases of RA in Korean NHI claims database.
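
    The best-performing rule reported above (an RA diagnostic code plus a prescription of biologics, more than three DMARDs, or any DMARD) can be paraphrased as a simple predicate; the field names, and the diagnostic-code prefixes used here, are assumptions about the claims record layout rather than the published specification.

      def is_probable_ra(claim):
          """Hedged paraphrase of the reported rule: RA code + (biologic, >3 DMARDs, or any DMARD)."""
          has_ra_code = claim.get("icd10", "").startswith(("M05", "M06"))   # assumed code prefixes
          biologic = claim.get("biologic_rx", False)
          n_dmards = claim.get("n_distinct_dmards", 0)
          return has_ra_code and (biologic or n_dmards > 3 or n_dmards >= 1)

      print(is_probable_ra({"icd10": "M05.9", "n_distinct_dmards": 1}))   # True
      print(is_probable_ra({"icd10": "M17.0", "biologic_rx": True}))      # False (not an RA code)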

  1. Development and evaluation of an articulated registration algorithm for human skeleton registration

    Science.gov (United States)

    Yip, Stephen; Perk, Timothy; Jeraj, Robert

    2014-03-01

    Accurate registration over multiple scans is necessary to assess treatment response of bone diseases (e.g. metastatic bone lesions). This study aimed to develop and evaluate an articulated registration algorithm for whole-body skeleton registration in human patients. In articulated registration, whole-body skeletons are registered by auto-segmenting into individual bones using atlas-based segmentation, and then rigidly aligning them. Sixteen patients (weight = 80-117 kg, height = 168-191 cm) with advanced prostate cancer underwent pre- and mid-treatment PET/CT scans over a course of cancer therapy. Skeletons were extracted from the CT images by thresholding (HU>150). Skeletons were registered using the articulated, rigid, and deformable registration algorithms to account for position and postural variability between scans. The inter-observer agreement in the atlas creation, the agreement between the manually and atlas-based segmented bones, and the registration performances of all three registration algorithms were all assessed using the Dice similarity index (DSIobserver, DSIatlas, and DSIregister). The Hausdorff distance (dHausdorff) of the registered skeletons was also used for registration evaluation. Nearly negligible inter-observer variability was found in the bone atlas creation as the DSIobserver was 96 ± 2%. Atlas-based and manually segmented bones were in excellent agreement with DSIatlas of 90 ± 3%. Articulated (DSIregister = 75 ± 2%, dHausdorff = 0.37 ± 0.08 cm) and deformable registration algorithms (DSIregister = 77 ± 3%, dHausdorff = 0.34 ± 0.08 cm) considerably outperformed the rigid registration algorithm (DSIregister = 59 ± 9%, dHausdorff = 0.69 ± 0.20 cm) in the skeleton registration as the rigid registration algorithm failed to capture the skeleton flexibility in the joints. Despite superior skeleton registration performance, the deformable registration algorithm failed to preserve the local rigidity of bones as over 60% of the
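
    The Dice similarity index used for the evaluations above is straightforward to compute from two binary masks; this sketch is generic, not the authors' code, and the toy volumes are arbitrary.

      import numpy as np

      def dice_similarity(mask_a, mask_b):
          """DSI = 2|A ∩ B| / (|A| + |B|) for two boolean volumes of equal shape."""
          a = mask_a.astype(bool)
          b = mask_b.astype(bool)
          denom = a.sum() + b.sum()
          return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

      # Two overlapping toy "bones" on a 3-D grid
      vol_a = np.zeros((20, 20, 20), dtype=bool); vol_a[5:15, 5:15, 5:15] = True
      vol_b = np.zeros((20, 20, 20), dtype=bool); vol_b[7:17, 5:15, 5:15] = True
      print(f"DSI = {dice_similarity(vol_a, vol_b):.2f}")   # 0.80 for this example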

  2. Development of algorithms for building inventory compilation through remote sensing and statistical inferencing

    Science.gov (United States)

    Sarabandi, Pooya

    Building inventories are one of the core components of disaster vulnerability and loss estimation models, and as such, play a key role in providing decision support for risk assessment, disaster management and emergency response efforts. In many parts of the world, comprehensive building inventories suitable for use in catastrophe models cannot be found. Furthermore, there are serious shortcomings in the existing building inventories that include incomplete or out-dated information on critical attributes as well as missing or erroneous values for attributes. In this dissertation, a set of methodologies for updating spatial and geometric information of buildings from single and multiple high-resolution optical satellite images is presented. Basic concepts, terminologies and fundamentals of 3-D terrain modeling from satellite images are first introduced. Different sensor projection models are then presented and sources of optical noise such as lens distortions are discussed. An algorithm for extracting height and creating 3-D building models from a single high-resolution satellite image is formulated. The proposed algorithm is a semi-automated supervised method capable of extracting attributes such as longitude, latitude, height, square footage, perimeter, irregularity index, etc. The associated errors due to the interactive nature of the algorithm are quantified and solutions for minimizing the human-induced errors are proposed. The height extraction algorithm is validated against independent survey data and results are presented. The validation results show that an average height modeling accuracy of 1.5% can be achieved using this algorithm. Furthermore, the concept of cross-sensor data fusion for the purpose of 3-D scene reconstruction using quasi-stereo images is developed in this dissertation. The developed algorithm utilizes two or more single satellite images acquired from different sensors and provides the means to construct 3-D building models in a more

  3. Development and validation of an arterial blood gas analysis interpretation algorithm for application in clinical laboratory services.

    Science.gov (United States)

    Park, Sang Hyuk; An, Dongheui; Chang, You Jin; Kim, Hyun Jung; Kim, Kyung Min; Koo, Tai Yeon; Kim, Sollip; Lee, Woochang; Yang, Won Seok; Hong, Sang-Bum; Chun, Sail; Min, Won-Ki

    2011-03-01

    Arterial blood gas analysis (ABGA) is a useful test that estimates the acid-base status of patients. However, numerically reported test results make rapid interpretation difficult. To overcome this problem, we have developed an algorithm that automatically interprets ABGA results, and assessed the validity of this algorithm for applications in clinical laboratory services. The algorithm was developed based on well-established guidelines using three test results (pH, PaCO₂ and [HCO₃⁻]) as variables. Ninety-nine ABGA test results were analysed by the algorithm. The algorithm's interpretations and the interpretations of two representative web-based ABGA interpretation programs were compared with those of two experienced clinicians. The concordance rates between the interpretations of each of the two clinicians and the algorithm were 91.9% and 97.0%, respectively. The web-based programs could not issue definitive interpretations in 15.2% and 25.3% of cases, respectively, but the algorithm issued definitive interpretations in all cases. Of the 10 cases that invoked disagreement among interpretations by the algorithm and the two clinicians, half were interpreted as compensated acid-base disorders by the algorithm but were assessed as normal by at least one of the two clinicians. In no case did the algorithm indicate a normal condition that the clinicians assessed as an abnormal condition. The interpretations of the algorithm showed a higher concordance rate with those of experienced clinicians than did two web-based programs. The algorithm sensitively detected acid-base disorders. The algorithm may be adopted by the clinical laboratory services to provide rapid and definitive interpretations of test results.
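
    The core of such a rule-based interpretation can be sketched as below; the reference ranges and the very coarse handling of mixed and compensated disorders are textbook simplifications, not the validated algorithm from this study.

      def interpret_abg(ph, paco2, hco3):
          """Very simplified primary acid-base classification from the three inputs (textbook thresholds)."""
          if 7.35 <= ph <= 7.45 and 35 <= paco2 <= 45 and 22 <= hco3 <= 26:
              return "normal"
          if ph < 7.35:                                   # acidemia
              if paco2 > 45:
                  return "respiratory acidosis"
              if hco3 < 22:
                  return "metabolic acidosis"
              return "acidemia, mixed/indeterminate"
          if ph > 7.45:                                   # alkalemia
              if paco2 < 35:
                  return "respiratory alkalosis"
              if hco3 > 26:
                  return "metabolic alkalosis"
              return "alkalemia, mixed/indeterminate"
          return "compensated disorder, inspect PaCO2 and HCO3 trends"

      print(interpret_abg(7.31, 30, 14))   # -> metabolic acidosis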

  4. Development of the Algorithm for Energy Efficiency Improvement of Bulk Material Transport System

    Directory of Open Access Journals (Sweden)

    Milan Bebic

    2013-06-01

    Full Text Available The paper presents a control strategy for a system of belt conveyors with adjustable speed drives based on the principle of optimum energy consumption. Different algorithms are developed for generating the reference speed of the system of belt conveyors in order to achieve maximum material cross section on the belts and thus a reduction of the required electrical drive power. The control structures presented in the paper are developed and tested on a detailed mathematical model of the drive system with the rubber belt. The performed analyses indicate that the application of the algorithm based on fuzzy logic control (FLC), which incorporates drive torque as an input variable, is the proper solution. Therefore, this solution is implemented on a new variable-speed belt conveyor system with remote control on an open pit mine. Results of measurements on the system prove that the applied algorithm based on fuzzy logic control provides minimum electrical energy consumption of the drive under given constraints. The paper also presents an additional analytical verification of the achieved results through a method based on sequential quadratic programming for finding a minimum of a nonlinear function of multiple variables under given constraints.

  5. Performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2014-01-01

    The performance of the ATLAS Inner Detector (ID) Trigger algorithms being developed for running on the ATLAS High Level Trigger (HLT) processor farm during Run 2 of the LHC is presented. During the 2013-14 LHC long shutdown, modifications are being carried out to the LHC accelerator to increase both the beam energy and luminosity. These modifications will pose significant challenges for the ID Trigger algorithms, both in terms of execution time and physics performance. To meet these challenges, the ATLAS HLT software is being restructured to run as a more flexible single-stage HLT, instead of two separate stages (Level2 and Event Filter) as in Run 1. This will reduce the overall data volume that needs to be requested by the HLT system, since data will no longer need to be requested for each of the two separate processing stages. Development of the ID Trigger algorithms for Run 2, currently expected to be ready for detector commissioning near the end of 2014, is progressing well and the current efforts towards op...

  6. Development of a 3D modeling algorithm for tunnel deformation monitoring based on terrestrial laser scanning

    Directory of Open Access Journals (Sweden)

    Xiongyao Xie

    2017-03-01

    Full Text Available Deformation monitoring is vital for tunnel engineering. Traditional monitoring techniques measure only a few data points, which is insufficient to understand the deformation of the entire tunnel. Terrestrial Laser Scanning (TLS) is a newly developed technique that can collect thousands of data points in a few minutes, with promising applications to tunnel deformation monitoring. The raw point cloud collected from TLS cannot display tunnel deformation; therefore, a new 3D modeling algorithm was developed for this purpose. The 3D modeling algorithm includes modules for preprocessing the point cloud, extracting the tunnel axis, performing coordinate transformations, performing noise reduction and generating the 3D model. Measurement results from TLS were compared to the results of a total station and numerical simulation, confirming the reliability of TLS for tunnel deformation monitoring. Finally, a case study of the Shanghai West Changjiang Road tunnel is introduced, where TLS was applied to measure shield tunnel deformation over multiple sections. Settlement, segment dislocation and cross-section convergence were measured and visualized using the proposed 3D modeling algorithm.

  7. Algorithm development for corticosteroid management in systemic juvenile idiopathic arthritis trial using consensus methodology

    Directory of Open Access Journals (Sweden)

    Ilowite Norman T

    2012-08-01

    Full Text Available Background The management of background corticosteroid therapy in rheumatology clinical trials poses a major challenge. We describe the consensus methodology used to design an algorithm to standardize changes in corticosteroid dosing during the Randomized Placebo Phase Study of Rilonacept in Systemic Juvenile Idiopathic Arthritis Trial (RAPPORT). Methods The 20 RAPPORT site principal investigators (PIs) and 4 topic specialists constituted an expert panel that participated in the consensus process. The panel used a modified Delphi Method consisting of an on-line questionnaire, followed by a one-day face-to-face consensus conference. Consensus was defined as ≥ 75% agreement. For items deemed essential but for which consensus on critical values was not achieved, a simple majority vote drove the final decision. Results The panel identified criteria for initiating or increasing corticosteroids. These included the presence or development of anemia, myocarditis, pericarditis, pleuritis, peritonitis, and either complete or incomplete macrophage activation syndrome (MAS). The panel also identified criteria for tapering corticosteroids, which included absence of fever for ≥ 3 days in the previous week, absence of poor physical functioning, and seven laboratory criteria. A tapering schedule was also defined. Conclusion The expert panel established consensus regarding corticosteroid management and an algorithm for steroid dosing that was well accepted and used by RAPPORT investigators. The algorithm was developed specifically for the RAPPORT trial, and further study is needed before it can be recommended for more general clinical use.

  8. DEVELOPMENT AND TESTING OF ERRORS CORRECTION ALGORITHM IN ELECTRONIC DESIGN AUTOMATION

    Directory of Open Access Journals (Sweden)

    E. B. Romanova

    2016-03-01

    Full Text Available Subject of Research. We have developed and present a method for correcting design errors in printed circuit boards (PCB) within electronic design automation (EDA). Control of PCB process parameters in EDA is carried out by means of the Design Rule Check (DRC) program. The DRC program monitors compliance with the design rules (minimum width of conductors and gaps, parameters of pads and via-holes, parameters of polygons, etc.) and also checks route tracing, short circuits, the presence of objects outside the PCB edge, and other design errors. The result of running the DRC program is a generated error report. For quality production of circuit boards, DRC errors should be corrected, which is confirmed by the creation of an error-free DRC report. Method. A problem of repeatability in DRC error correction was identified during trial operation of the P-CAD, Altium Designer and KiCAD programs. To solve it, DRC errors were analyzed and methods for their correction were studied. It was proposed to cluster the DRC errors. Error groups comprise the types of errors whose correction sequence has no impact on the correction time. An algorithm for the correction of DRC errors is proposed. Main Results. The best correction sequence for DRC errors has been determined. The algorithm has been tested in the following EDA tools: P-CAD, Altium Designer and KiCAD. Testing has been carried out on two- and four-layer test PCBs (digital and analog). The DRC error correction time with the algorithm applied was compared to the time without it. It has been shown that the time saved in DRC error correction increases with the number of error types, by up to a factor of 3.7. Practical Relevance. The application of the proposed algorithm will reduce PCB design time and improve the quality of the PCB design. We recommend using the developed algorithm when the number of error types is equal to four or more. The proposed algorithm can be used in different

  9. Development of a novel algorithm to determine adherence to chronic pain treatment guidelines using administrative claims

    Directory of Open Access Journals (Sweden)

    Margolis JM

    2017-02-01

    Full Text Available Jay M Margolis,1 Nicole Princic,2 David M Smith,2 Lucy Abraham,3 Joseph C Cappelleri,4 Sonali N Shah,5 Peter W Park5 1Truven Health Analytics, Bethesda, MD, 2Truven Health Analytics, Cambridge, MA, USA; 3Pfizer Ltd, Tadworth, UK; 4Pfizer Inc, Groton, CT, 5Pfizer Inc, New York, NY, USA Objective: To develop a claims-based algorithm for identifying patients who are adherent versus nonadherent to published guidelines for chronic pain management. Methods: Using medical and pharmacy health care claims from the MarketScan® Commercial and Medicare Supplemental Databases, patients were selected during July 1, 2010, to June 30, 2012, with the following chronic pain conditions: osteoarthritis (OA), gout (GT), painful diabetic peripheral neuropathy (pDPN), post-herpetic neuralgia (PHN), and fibromyalgia (FM). Patients newly diagnosed with 12 months of continuous medical and pharmacy benefits both before and after initial diagnosis (index date) were categorized as adherent, nonadherent, or unsure according to the guidelines-based algorithm using disease-specific pain medication classes grouped as first-line, later-line, or not recommended. Descriptive and multivariate analyses compared patient outcomes with algorithm-derived categorization endpoints. Results: A total of 441,465 OA patients, 76,361 GT patients, 10,645 pDPN, 4,010 PHN patients, and 150,321 FM patients were included in the development of the algorithm. Patients found adherent to guidelines included 51.1% for OA, 25% for GT, 59.5% for pDPN, 54.9% for PHN, and 33.5% for FM. The majority (~90%) of patients adherent to the guidelines initiated therapy with prescriptions for first-line pain medications written for a minimum of 30 days. Patients found nonadherent to guidelines included 30.7% for OA, 6.8% for GT, 34.9% for pDPN, 23.1% for PHN, and 34.7% for FM. Conclusion: This novel algorithm used real-world pharmacotherapy treatment patterns to evaluate adherence to pain management guidelines in five

  10. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.

  11. An Effort to Develop an Algorithm to Target Abdominal CT Scans for Patients After Gastric Bypass.

    Science.gov (United States)

    Pernar, Luise I M; Lockridge, Ryan; McCormack, Colleen; Chen, Judy; Shikora, Scott A; Spector, David; Tavakkoli, Ali; Vernon, Ashley H; Robinson, Malcolm K

    2016-10-01

    Abdominal CT (abdCT) scans are frequently ordered for Roux-en-Y gastric bypass (RYGB) patients presenting to the emergency department (ED) with abdominal pain, but often do not reveal intra-abdominal pathology. We aimed to develop an algorithm for rational ordering of abdCTs. We retrospectively reviewed our institution's RYGB patients presenting acutely with abdominal pain, documenting clinical and laboratory data, and scan results. Associations of clinical parameters to abdCT results were examined for outcome predictors. Of 1643 RYGB patients who had surgery between 2005 and 2015, 355 underwent 387 abdCT scans. Based on abdCT, 48 (12 %) patients required surgery and 86 (22 %) another intervention. No clinical or laboratory parameter predicted imaging results. Imaging decisions for RYGB patients do not appear to be amenable to a simple algorithm, and patient work-up should be based on astute clinical judgment.

  12. An Algorithm to Develop Lumped Model for Gunn-Diode Dynamics

    OpenAIRE

    Umesh Kumar

    1998-01-01

    A nonlinear lumped model can be developed for Gunn-Diodes to describe the diffusion effects as the domain travels from cathode to anode of a Gunn-Diode. The model describes the domain extinction and nucleation phenomena. It allows the user to specify arbitrary nonlinear drift velocity V(E) and nonlinear diffusion D(E). The model simulates arbitrary Gunn-Diode circuits operating in any matured high field domain or in the LSA mode. Here we have constructed an algorithm to lead to development of t...

  13. A simulation environment for modeling and development of algorithms for ensembles of mobile microsystems

    Science.gov (United States)

    Fink, Jonathan; Collins, Tom; Kumar, Vijay; Mostofi, Yasamin; Baras, John; Sadler, Brian

    2009-05-01

    The vision for the Micro Autonomous Systems Technologies (MAST) program is to develop autonomous, multifunctional, collaborative ensembles of agile, mobile microsystems to enhance tactical situational awareness in urban and complex terrain for small unit operations. Central to this vision is the ability to have multiple, heterogeneous autonomous assets function as a single cohesive unit that is adaptable, responsive to human commands, and resilient to adversarial conditions. This paper represents an effort to develop a simulation environment for studying control, sensing, communication, perception, and planning methodologies and algorithms.

  14. Development of Serum Marker Models to Increase Diagnostic Accuracy of Advanced Fibrosis in Nonalcoholic Fatty Liver Disease: The New LINKI Algorithm Compared with Established Algorithms

    Science.gov (United States)

    Lykiardopoulos, Byron; Hagström, Hannes; Fredrikson, Mats; Ignatova, Simone; Stål, Per; Hultcrantz, Rolf; Ekstedt, Mattias

    2016-01-01

    Background and Aim Detection of advanced fibrosis (F3-F4) in nonalcoholic fatty liver disease (NAFLD) is important for ascertaining prognosis. Serum markers have been proposed as alternatives to biopsy. We attempted to develop a novel algorithm for detection of advanced fibrosis based on a more efficient combination of serological markers and to compare this with established algorithms. Methods We included 158 patients with biopsy-proven NAFLD. Of these, 38 had advanced fibrosis. The following fibrosis algorithms were calculated: NAFLD fibrosis score, BARD, NIKEI, NASH-CRN regression score, APRI, FIB-4, King's score, GUCI, Lok index, Forns score, and ELF. The study population was randomly divided into a training and a validation group. A multiple logistic regression analysis using bootstrapping methods was applied to the training group. Among the many variables analyzed, age, fasting glucose, hyaluronic acid and AST were included, and a model (LINKI-1) for predicting advanced fibrosis was created. Moreover, these variables were combined with platelet count in a mathematical way exaggerating the opposing effects, and alternative models (LINKI-2) were also created. Models were compared using the area under the receiver operator characteristic curve (AUROC). Results Of the established algorithms, FIB-4 and King's score had the best diagnostic accuracy with AUROCs of 0.84 and 0.83, respectively. Higher accuracy was achieved with the novel LINKI algorithms. The AUROC in the total cohort was 0.91 for LINKI-1 and 0.89 for the LINKI-2 models. Conclusion The LINKI algorithms for detection of advanced fibrosis in NAFLD showed better accuracy than established algorithms and should be validated in further studies including larger cohorts. PMID:27936091

  15. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques are considered incapable of handling complex situations in NPPs. Because of such issues, automation has not been actively adopted, although human error probability increases drastically during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques known as ‘deep learning’ have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of them. In 2016, ‘AlphaGo’, developed by Google DeepMind on the basis of deep learning to play the game of Go (i.e. Baduk), defeated Se-dol Lee, the World Go champion, with a score of 4:1. With the aim of reducing human error in NPPs, the ultimate goal of this study is the development of an automation algorithm that can cover various situations in NPPs. As a first step, a quantitative, real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. To that end, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. In practice, the application of full automation (i.e. fully replacing human operators) may require much more time for validation and investigation of side effects after the automation algorithm is developed, so adoption in the form of full automation will take a long time.

  16. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  17. Microphysical particle properties derived from inversion algorithms developed in the framework of EARLINET

    Science.gov (United States)

    Müller, Detlef; Böckmann, Christine; Kolgotin, Alexei; Schneidenbach, Lars; Chemyakin, Eduard; Rosemann, Julia; Znak, Pavel; Romanov, Anton

    2016-10-01

    We present a summary on the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with accuracy better than 0.005-0.1 or ±50 %. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used with the purpose of testing how well microphysical parameters can be derived if the real part of the complex refractive index is known to at least 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied. The optical data used in our study cover a range of Ångström exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested

  18. Development of a general learning algorithm with applications in nuclear reactor systems

    Energy Technology Data Exchange (ETDEWEB)

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was the development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs.

  19. Rate control system algorithm developed in state space for models with parameter uncertainties

    Directory of Open Access Journals (Sweden)

    Adilson Jesus Teixeira

    2011-09-01

    Full Text Available Research in weightlessness above the atmosphere requires a payload to carry the experiments. To achieve weightlessness, the payload uses a rate control system (RCS) in order to reduce the centripetal acceleration within the payload. The rate control system normally has actuators that supply a constant force when they are turned on. The development of a control algorithm for this rate control system is based on the minimum-time problem method in state space, in order to overcome parameter uncertainties in the payload and actuator dynamics. This control algorithm uses the initial conditions of optimal trajectories to create intermediate points, or to adjust existing points, of a switching function. Combined with an inequality constraint, this forms a decision function to turn the actuators on or off. For linear time-invariant systems in state space, this decision function needs only to test the payload state variables, rather than spending effort on solving differential equations, and it is tuned in real time to the payload dynamics. Simulation results for several cases of parameter uncertainty show that the rate control system algorithm reduced the payload centripetal acceleration below the μg level and kept it there with no limit cycle.

  20. Development of an integrated engine-hydro-mechanical transmission control algorithm for a tractor

    Directory of Open Access Journals (Sweden)

    Sunghyun Ahn

    2015-07-01

    Full Text Available This article presents an integrated engine-hydro-mechanical transmission control algorithm for a tractor considering the engine-hydro-mechanical transmission efficiency. First, the hydro-mechanical transmission efficiency was obtained by network analysis based on the hydrostatic unit efficiency obtained from testing. Using the hydro-mechanical transmission efficiency map and the thermal efficiency of the engine, an engine-hydro-mechanical transmission optimal operating line was obtained, which provides higher total system efficiency. Based on the optimal operating line, an integrated engine-hydro-mechanical transmission control algorithm was proposed, which provides higher total powertrain system efficiency. To evaluate the performance of the proposed control algorithm, an AMESim-MATLAB/Simulink-based co-simulator was developed. From the simulation results for plow work, it was found that the integrated engine-hydro-mechanical transmission control improves fuel economy by 7.5% compared with the existing engine optimal operating line control. The performance of the integrated engine-hydro-mechanical transmission control was also validated on a test bench.

  1. GLASS Daytime All-Wave Net Radiation Product: Algorithm Development and Preliminary Validation

    Directory of Open Access Journals (Sweden)

    Bo Jiang

    2016-03-01

    Full Text Available Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. The validation of the GLASS Rn product based on high-quality in situ measurements in the United States shows a coefficient of determination value of 0.879, an average root mean square error value of 31.61 Wm−2, and an average bias of −17.59 Wm−2. We also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.
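
    The validation statistics quoted above (coefficient of determination, RMSE, bias) are easy to reproduce for any product-versus-tower match-up set; the numbers in the sketch below are synthetic, not GLASS or tower data.

      import numpy as np

      def validation_stats(estimated, observed):
          """R^2, RMSE, and mean bias between product estimates and in situ measurements (W m-2)."""
          est, obs = np.asarray(estimated, float), np.asarray(observed, float)
          bias = float(np.mean(est - obs))
          rmse = float(np.sqrt(np.mean((est - obs) ** 2)))
          ss_res = np.sum((obs - est) ** 2)
          ss_tot = np.sum((obs - obs.mean()) ** 2)
          return 1 - ss_res / ss_tot, rmse, bias

      r2, rmse, bias = validation_stats([410, 355, 280, 150], [425, 370, 300, 160])
      print(f"R2={r2:.3f}, RMSE={rmse:.1f} W/m2, bias={bias:.1f} W/m2")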

  2. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    Gkaitatzis, Stamatios; The ATLAS collaboration

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate with a very small latency, in the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal one, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...
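
    A generic 4-connected pixel clustering with centroid calculation, in the spirit of the algorithm described above (illustrative Python, not the FPGA/Input Mezzanine implementation):

      from collections import deque

      def cluster_pixels(hits):
          """Group 4-connected (col, row) hits into clusters and return their centroids."""
          remaining = set(hits)
          centroids = []
          while remaining:
              seed = remaining.pop()
              cluster, frontier = [seed], deque([seed])
              while frontier:                       # breadth-first flood fill over neighbours
                  c, r = frontier.popleft()
                  for n in ((c + 1, r), (c - 1, r), (c, r + 1), (c, r - 1)):
                      if n in remaining:
                          remaining.remove(n)
                          cluster.append(n)
                          frontier.append(n)
              cx = sum(c for c, _ in cluster) / len(cluster)
              cy = sum(r for _, r in cluster) / len(cluster)
              centroids.append((cx, cy))
          return centroids

      print(cluster_pixels([(0, 0), (0, 1), (1, 1), (10, 10)]))   # two clusters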

  3. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    Gkaitatzis, Stamatios; The ATLAS collaboration; Annovi, Alberto; Kordas, Kostantinos

    2016-01-01

In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate and with a very small latency, on the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...

  4. Dual Ka-band radar field campaign for GPM/DPR algorithm development

    Science.gov (United States)

    Nakagawa, K.; Nishikawa, M.; Nakamura, K.; Komachi, K.; Hanado, H.; Kawamura, S.; Sugitani, S.; Minda, H.; Shimizu, S.; Oki, R.

    2012-04-01

The Global Precipitation Measurement (GPM) mission is an expanded follow-on mission to TRMM (Tropical Rainfall Measuring Mission), and the GPM core satellite will carry a dual-frequency precipitation radar (DPR) and a GPM Microwave Imager on board. The DPR, which is being developed by the National Institute of Information and Communications Technology (NICT) and the Japan Aerospace Exploration Agency (JAXA), consists of two radars: a Ku-band precipitation radar (KuPR) and a Ka-band radar (KaPR). The DPR is expected to advance precipitation science by expanding the coverage of observations to higher latitudes than those of the TRMM/PR, measuring snow and light rain with the KaPR, and providing drop size distribution information based on the differential attenuation of echoes at the two frequencies. In order to secure the quality of precipitation estimates, ground validation (GV) of satellite data and retrieval algorithms is essential. End-to-end comparisons between instantaneous precipitation data observed by the satellite and by ground-based instruments are not enough to improve the algorithms. The errors of the various physical parameters in the precipitation retrieval algorithms (e.g., attenuation factor, drop size distribution, terminal velocity, density of the snow particles, etc.) will be estimated by comparison with ground-based observation data. A dual Ka-band radar system was developed by JAXA for GPM/DPR algorithm development. The dual Ka-radar system, which consists of two identical Ka-band radars, can measure both the specific attenuation and the equivalent radar reflectivity at Ka-band. Those parameters are particularly important for snow measurement. Using the dual Ka-radar system along with other instruments, such as a polarimetric precipitation radar, a wind-profiler radar and ground-based precipitation measurement systems, the uncertainties of the parameters in the DPR algorithm can be reduced. The verification of improvement of rain retrieval with the DPR algorithm is

  5. Development and Validation of a Diabetic Retinopathy Referral Algorithm Based on Single-Field Fundus Photography

    Science.gov (United States)

    Srinivasan, Sangeetha; Shetty, Sharan; Natarajan, Viswanathan; Sharma, Tarun; Raman, Rajiv

    2016-01-01

Purpose (i) To develop a simplified algorithm to identify and refer diabetic retinopathy (DR) from single-field retinal images, specifically sight-threatening diabetic retinopathy, for appropriate care; (ii) to determine the agreement and diagnostic accuracy of the algorithm as a pilot study among optometrists versus the “gold standard” (retinal specialist grading). Methods The severity of DR was scored based on colour photographs using a colour-coded algorithm, which included the lesions of DR and the number of quadrants involved. A total of 99 participants underwent training followed by evaluation, and their data were analyzed. Fifty posterior pole 45 degree retinal images with all stages of DR were presented. Kappa scores (κ), areas under the receiver operating characteristic curves (AUCs), sensitivity and specificity were determined, with further comparison between working optometrists and optometry students. Results Mean age of the participants was 22 years (range: 19–43 years), 87% being women. Participants correctly identified 91.5% of images that required immediate referral (κ = 0.696), 62.5% of images requiring review after 6 months (κ = 0.462), and 51.2% of those requiring review after 1 year (κ = 0.532). The sensitivity and specificity of the optometrists were 91% and 78% for immediate referral, 62% and 84% for review after 6 months, and 51% and 95% for review after 1 year, respectively. The AUC was highest (0.855) for immediate referral, second highest (0.824) for review after 1 year, and 0.727 for review after 6 months. Optometry students performed better than the working optometrists for all grades of referral. Conclusions The diabetic retinopathy algorithm assessed in this work is a simple and fairly accurate method for appropriate referral based on single-field 45 degree posterior pole retinal images. PMID:27661981

  6. Development of a new time domain-based algorithm for train detection and axle counting

    Science.gov (United States)

    Allotta, B.; D'Adamio, P.; Meli, E.; Pugi, L.

    2015-12-01

This paper presents an innovative train detection algorithm that is able to perform train localisation and, at the same time, to estimate its speed, the crossing times at a fixed point of the track and the axle number. The proposed solution uses the same approach to evaluate all these quantities, starting from the knowledge of generic track inputs directly measured on the track (for example, the vertical forces on the sleepers, the rail deformation and the rail stress). More particularly, all the inputs are processed through cross-correlation operations to extract the required information in terms of speed, crossing time instants and axle count. This approach has the advantage of being simple and less invasive than the standard ones (it requires less equipment) and represents a more reliable and robust solution against numerical noise because it exploits the whole shape of the input signal and not only the peak values. A suitable and accurate multibody model of a railway vehicle and flexible track has also been developed by the authors to test the algorithm when experimental data are not available and, in general, under any operating conditions (fundamental to verify the algorithm accuracy and robustness). The railway vehicle chosen as benchmark is the Manchester Wagon, modelled in the Adams VI-Rail environment. The physical model of the flexible track has been implemented in the Matlab and Comsol Multiphysics environments. A simulation campaign has been performed to verify the performance and the robustness of the proposed algorithm, and the results are quite promising. The research has been carried out in cooperation with Ansaldo STS and ECM Spa.
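    The cross-correlation idea behind the algorithm can be illustrated with two track signals recorded at measurement points a known distance apart: the lag that maximizes their cross-correlation gives the travel time, and hence the speed. The sketch below is a generic illustration of that principle with a synthetic axle pulse, not the authors' algorithm; the sensor spacing and sampling rate are made up.

```python
import numpy as np

def estimate_speed(sig_a, sig_b, sensor_spacing_m, fs_hz):
    """Estimate train speed from two track signals separated by a known distance."""
    a = np.asarray(sig_a, float) - np.mean(sig_a)
    b = np.asarray(sig_b, float) - np.mean(sig_b)
    xcorr = np.correlate(b, a, mode="full")
    lag_samples = np.argmax(xcorr) - (len(a) - 1)   # positive if b lags a
    travel_time_s = lag_samples / fs_hz
    return sensor_spacing_m / travel_time_s if travel_time_s > 0 else float("nan")

# Synthetic example: the same axle pulse seen 0.5 s later at the second sensor
fs = 1000.0
t = np.arange(0, 3, 1 / fs)
pulse = np.exp(-((t - 1.0) ** 2) / 0.001)
delayed = np.exp(-((t - 1.5) ** 2) / 0.001)
print(estimate_speed(pulse, delayed, sensor_spacing_m=10.0, fs_hz=fs))  # ~20 m/s
```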

  7. Development of Bio-Optical Algorithms for Geostationary Ocean Color Imager

    Science.gov (United States)

    Ryu, J.; Moon, J.; Min, J.; Palanisamy, S.; Han, H.; Ahn, Y.

    2007-12-01

GOCI, the first Geostationary Ocean Color Imager, shall be operated in a staring-frame capture mode onboard the Communication, Ocean and Meteorological Satellite (COMS) and is tentatively scheduled for launch in 2008. The mission concept includes eight visible-to-near-infrared bands, 0.5 km pixel resolution, and a coverage region of 2,500 × 2,500 km centered at Korea. The GOCI is expected to provide SeaWiFS-quality observations for a single study area with an imaging interval of 1 hour from 10 am to 5 pm. In the GOCI swath area, the optical properties of the East Sea (typical of Case-I water) and of the Yellow Sea and East China Sea (typical of Case-II water) are investigated. For developing the GOCI bio-optical algorithms in optically more complex waters, it is necessary to study and understand the optical properties around the Korean Sea. Radiometric measurements were made using a WETLabs AC-S, a TriOS RAMSES ACC/ARC, and an ASD FieldSpec Pro Dual VNIR Spectroradiometer. Seawater samples were collected concurrently with the radiometric measurements at about 300 points around the Korean Sea from 1998 to 2007. The absorption coefficients were determined using a Perkin-Elmer Lambda 19 dual-beam spectrophotometer. We analyzed the absorption coefficients of seawater constituents such as phytoplankton, Suspended Sediment (SS) and Dissolved Organic Matter (DOM). Two kinds of chlorophyll algorithms are developed using statistical regression and a fluorescence-based technique considering the bio-optical properties in Case-II waters. Fluorescence measurements were related to in situ Chl-a concentrations to obtain the Flu(681), Flu(688) and Flu(area) algorithms, which were compared with those from standard spectral ratios of the remote sensing reflectance. A single-band algorithm is derived from the relationship between Rrs(555) and the in situ concentration. The CDOM is estimated from the absorption spectra and their slope centered at the 440 nm wavelength. These standard algorithms will be
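    A common functional form for the empirical chlorophyll algorithms mentioned here is a polynomial in the log of a blue-to-green remote-sensing reflectance ratio. The sketch below shows that form only; the band choice and coefficients are placeholders for illustration, not the GOCI coefficients derived from the in situ data set described in the abstract.

```python
import math

# Placeholder coefficients; operational coefficients would be fitted to the
# in situ radiometric and Chl-a data set described in the abstract.
A = [0.3, -2.5, 1.5, -0.5]

def chl_band_ratio(rrs_blue, rrs_green):
    """Empirical band-ratio chlorophyll estimate (mg m^-3), illustrative only."""
    r = math.log10(rrs_blue / rrs_green)
    log_chl = sum(a * r ** i for i, a in enumerate(A))
    return 10.0 ** log_chl

print(chl_band_ratio(rrs_blue=0.004, rrs_green=0.006))
```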

  8. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    Directory of Open Access Journals (Sweden)

    Christley Scott

    2010-08-01

Full Text Available Abstract Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU, including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a

  9. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    Science.gov (United States)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.
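    The release logic described above, monitoring the mated vehicle's attitude and commanding separation when desired conditions are met, with a contingency release before undesirable orientations develop, can be summarized as a state check evaluated on each avionics cycle. The sketch below is a generic illustration of that structure; the threshold names and values are invented for illustration and this is not the CPAS flight software.

```python
from dataclasses import dataclass

@dataclass
class Attitude:
    pitch_deg: float
    pitch_rate_dps: float

# Hypothetical limits, for illustration only
TARGET_PITCH_RANGE = (-10.0, 5.0)      # desired pitch window at release
MAX_PITCH_RATE = 20.0                  # deg/s, avoid releasing during fast rotation
CONTINGENCY_PITCH = 45.0               # force release before this orientation

def release_command(att: Attitude, extracted: bool) -> bool:
    """Return True when separation should be commanded."""
    if not extracted:
        return False
    # Contingency: release before an undesirable attitude is reached
    if abs(att.pitch_deg) >= CONTINGENCY_PITCH:
        return True
    in_window = TARGET_PITCH_RANGE[0] <= att.pitch_deg <= TARGET_PITCH_RANGE[1]
    return in_window and abs(att.pitch_rate_dps) <= MAX_PITCH_RATE

print(release_command(Attitude(pitch_deg=-3.0, pitch_rate_dps=8.0), extracted=True))
```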

  10. Probabilistic models, learning algorithms, and response variability: sampling in cognitive development.

    Science.gov (United States)

    Bonawitz, Elizabeth; Denison, Stephanie; Griffiths, Thomas L; Gopnik, Alison

    2014-10-01

    Although probabilistic models of cognitive development have become increasingly prevalent, one challenge is to account for how children might cope with a potentially vast number of possible hypotheses. We propose that children might address this problem by 'sampling' hypotheses from a probability distribution. We discuss empirical results demonstrating signatures of sampling, which offer an explanation for the variability of children's responses. The sampling hypothesis provides an algorithmic account of how children might address computationally intractable problems and suggests a way to make sense of their 'noisy' behavior.

  11. Developing a Direct Search Algorithm for Solving the Capacitated Open Vehicle Routing Problem

    Science.gov (United States)

    Simbolon, Hotman

    2011-06-01

In open vehicle routing problems, the vehicles are not required to return to the depot after completing service. In this paper, we present the first exact optimization algorithm for the open version of the well-known capacitated vehicle routing problem (CVRP). The strategy of releasing nonbasic variables from their bounds, combined with the "active constraint" method and the notion of superbasics, has been developed to meet efficiency requirements; this strategy is used to force the appropriate non-integer basic variables to move to their neighboring integer points. A study of the criteria for choosing a nonbasic variable to work with in the integerizing strategy has also been made.

  12. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world applications. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.
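    As a reminder of the basic machinery behind a GA-driven design optimizer, the sketch below minimizes a toy objective with selection, crossover and mutation. The toy objective stands in for the structural and aerodynamic analyses that such a tool wraps; this is a generic illustration, not the Dryden code.

```python
import random

def toy_objective(x):
    """Stand-in for an expensive multidisciplinary analysis (to be minimized)."""
    return sum((xi - 1.0) ** 2 for xi in x)

def genetic_minimize(dim=4, pop_size=30, generations=60, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=toy_objective)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mutation_rate:      # Gaussian mutation of one gene
                i = rng.randrange(dim)
                child[i] += rng.gauss(0.0, 0.5)
            children.append(child)
        pop = parents + children
    return min(pop, key=toy_objective)

print(genetic_minimize())   # approaches [1, 1, 1, 1]
```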

  13. PCTFPeval: a web tool for benchmarking newly developed algorithms for predicting cooperative transcription factor pairs in yeast.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng

    2015-01-01

    Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because of lacking sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers have to put a lot of effort to construct it first. To save researchers time and effort, here we develop a web tool to implement our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results of each compared algorithm and each selected performance index can be downloaded as text files for further analyses. Allowing users to select eight existing performance indices and 15

  14. Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement (ADVANCE) Technology Development for Resilient Flight Control Project

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI proposes to develop and test a framework referred to as the ADVANCE (Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement), within which...

  15. Development of a Medical-text Parsing Algorithm Based on Character Adjacent Probability Distribution for Japanese Radiology Reports

    National Research Council Canada - National Science Library

    N. Nishimoto; S. Terae; M. Uesugi; K. Ogasawara; T. Sakurai

    2008-01-01

    Objectives: The objectives of this study were to investigate the transitional probability distribution of medical term boundaries between characters and to develop a parsing algorithm specifically for medical texts. Methods...

  16. China's experimental pragmatics of "Scientific development" in wind power: Algorithmic struggles over software in wind turbines

    DEFF Research Database (Denmark)

    Kirkegaard, Julia

    2016-01-01

    This article presents a case study on the development of China's wind power market. As China's wind industry has experienced a quality crisis, the Chinese government has intervened to steer the industry towards a turn to quality, indicating a pragmatist and experimental mode of market development....... This increased focus on quality, to ensure the sustainable and scientific development of China's wind energy market, requires improved indigenous Chinese innovation capabilities in wind turbine technology. To shed light on how the turn to quality impacts upon the industry and global competition, this study...... unfold over issues associated with intellectual property rights (IPRs), certification and standardisation of software algorithms. The article concludes that the use of this STS lens makes a fresh contribution to the often path-dependent, structuralist and hierarchical China literature, offering instead...

  17. A fusion algorithm for joins based on collections in Odra (Object Database for Rapid Application development)

    CERN Document Server

    Satish, Laika

    2011-01-01

In this paper we present the functionality of a database programming methodology currently under development called ODRA (Object Database for Rapid Application development), which works fully on object-oriented principles. The database programming language is called SBQL (Stack-Based Query Language). We discuss some concepts in ODRA, e.g., how ODRA works, how the ODRA runtime environment operates, the interoperability of ODRA with .NET and Java, and a view of ODRA working with web services and XML. Query optimization is currently one of the stages under development in ODRA. We therefore present the prior work done in ODRA related to query optimization, and we also present a new fusion algorithm for how ODRA can deal with joins based on collections like sets, lists and arrays for query optimization.

  18. The Possibility to Use Genetic Algorithms and Fuzzy Systems in the Development of Tutorial Systems

    Directory of Open Access Journals (Sweden)

    Anca Ioana ANDREESCU

    2006-01-01

Full Text Available In this paper we present state-of-the-art information methods and techniques that can be applied in the development of efficient tutorial systems, as well as the possibility of using genetic algorithms and fuzzy systems in the construction of such systems. All these topics have been studied during the development of the research project INFOSOC entitled "Tutorial System based on Eduknowledge for Work Security and Health in SMEs According to the European Union Directives", accomplished by teaching staff from the Academy of Economic Studies, Bucharest, in collaboration with the National Institute for Research and Development in Work Security, the National Institute for Small and Middle Enterprises and SC Q'NET International srl.

  19. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    Science.gov (United States)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-07-01

This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of the aerosol optical properties, in which the major principal components (PCs) for surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of the four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of the Jacobians are validated against finite-difference calculations with relative errors of less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the
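    Both the power-law spectral dependence assumed for the refractive index and the finite-difference check used to validate analytic Jacobians can be illustrated compactly. The parameterization below (m_r(λ) = a_r (λ/λ0)^(-b_r), and analogously for the imaginary part) is one plausible reading of the four-parameter power law described in the abstract, not the exact UNL-VRTM formulation; the reference wavelength and parameter values are assumptions.

```python
import numpy as np

LAM0 = 0.55  # reference wavelength in micrometres (assumed)

def refractive_real(lam, a_r, b_r):
    """Power-law real refractive index: m_r(lam) = a_r * (lam/LAM0)**(-b_r)."""
    return a_r * (lam / LAM0) ** (-b_r)

def jacobian_analytic(lam, a_r, b_r):
    """Analytic partial derivatives of m_r with respect to (a_r, b_r)."""
    base = (lam / LAM0) ** (-b_r)
    return np.array([base, -a_r * base * np.log(lam / LAM0)])

def jacobian_fd(lam, a_r, b_r, eps=1e-6):
    """Central finite-difference Jacobian, used here as the reference."""
    d_a = (refractive_real(lam, a_r + eps, b_r) - refractive_real(lam, a_r - eps, b_r)) / (2 * eps)
    d_b = (refractive_real(lam, a_r, b_r + eps) - refractive_real(lam, a_r, b_r - eps)) / (2 * eps)
    return np.array([d_a, d_b])

lam = 0.47  # one of the visible wavelengths
ja, jf = jacobian_analytic(lam, 1.45, 0.05), jacobian_fd(lam, 1.45, 0.05)
print(np.max(np.abs((ja - jf) / jf)))  # relative error, far below the 0.2% criterion
```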

  20. Developing a Search Algorithm and a Visualization Tool for SNOMED CT

    Directory of Open Access Journals (Sweden)

    Anthony Masi

    2015-04-01

    Full Text Available With electronic health records rising in popularity among hospitals and physicians, the SNOMED CT medical terminology has served as a valuable standard for those looking to exchange a variety of information linked to clinical knowledge bases, information retrieval, and data aggregation. However, SNOMED CT is distributed as a flat file database by the International Health Terminology Standards Development Organization and visualization of data can be a problem. This study describes an algorithm that allows a user to easily search SNOMED CT for identical or partial matches utilizing indexing and wildcard matching through a graphical user interface developed in the cross-platform programming language Java. In addition to this, the algorithm displays corresponding relationships and other relevant information pertaining to the search term. The outcome of this study can serve as a useful visualization tool for those looking to delve into the increasingly standardized world of electronic health records as well as a tool for healthcare providers who may be seeking specific clinical information contained in the SNOMED CT database.
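    The search behaviour described (exact and partial matches with wildcard support over a flat term file) can be approximated with a small in-memory index plus Unix-style pattern matching. The sketch below uses Python's fnmatch for illustration; it mirrors the idea rather than the Java implementation, and the sample terms are placeholders rather than real SNOMED CT content.

```python
import fnmatch
from collections import defaultdict

class TermIndex:
    """Tiny in-memory index: exact word lookups via a dict, wildcard scans via fnmatch."""

    def __init__(self, terms):
        self.terms = [t.lower() for t in terms]
        self.by_word = defaultdict(list)          # word -> terms containing it
        for term in self.terms:
            for word in term.split():
                self.by_word[word].append(term)

    def search(self, query):
        q = query.lower()
        if "*" in q or "?" in q:                  # wildcard / partial match
            return [t for t in self.terms if fnmatch.fnmatch(t, q)]
        return self.by_word.get(q, []) or [t for t in self.terms if q in t]

# Placeholder descriptions, not actual SNOMED CT records
idx = TermIndex(["myocardial infarction", "cerebral infarction", "fracture of femur"])
print(idx.search("infarction"))      # word-index hit
print(idx.search("*femur*"))         # wildcard match
```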

  1. Developing algorithms for predicting protein-protein interactions of homology modeled proteins.

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Shawn Bryan; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Roe, Diana C.

    2006-01-01

The goal of this project was to examine the protein-protein docking problem, especially as it relates to homology-based structures, identify the key bottlenecks in current software tools, and evaluate and prototype new algorithms that may be developed to improve these bottlenecks. This report describes the current challenges in the protein-protein docking problem: correctly predicting the binding site for the protein-protein interaction and correctly placing the sidechains. Two different and complementary approaches are taken that can help with the protein-protein docking problem. The first approach is to predict interaction sites prior to docking, and uses bioinformatics studies of protein-protein interactions to predict these interaction sites. The second approach is to improve validation of predicted complexes after docking, and uses an improved scoring function for evaluating proposed docked poses, incorporating a solvation term. This scoring function demonstrates significant improvement over current state-of-the-art functions. Initial studies on both these approaches are promising, and argue for full development of these algorithms.

  2. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    Science.gov (United States)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached up to its intended level, the experiments have to be repeated until the desirable quality has been met. This research is proposing Gravitational Search Algorithm (GSA) as the optimization model to reduce the time and cost to be spent in the thin film fabrication. The optimization model's engine has been developed using Java. The model is developed based on GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters which are RF power, deposition time, oxygen flow rate and substrate temperature. The results have turned out to be promising and it could be concluded that the performance of the model is satisfying in this parameter optimization problem. Future work could compare GSA with other nature based algorithms and test them with various set of data.

  3. Remote Sensing of Ocean Color in the Arctic: Algorithm Development and Comparative Validation. Chapter 9

    Science.gov (United States)

    Cota, Glenn F.

    2001-01-01

The overall goal of this effort is to acquire a large bio-optical database, encompassing most environmental variability in the Arctic, to develop algorithms for phytoplankton biomass and production and other optically active constituents. A large suite of bio-optical and biogeochemical observations has been collected in a variety of high latitude ecosystems at different seasons. The Ocean Research Consortium of the Arctic (ORCA) is a collaborative effort between G.F. Cota of Old Dominion University (ODU), W.G. Harrison and T. Platt of the Bedford Institute of Oceanography (BIO), S. Sathyendranath of Dalhousie University and S. Saitoh of Hokkaido University. ORCA has now conducted 12 cruises and collected over 500 in-water optical profiles plus a variety of ancillary data. Observational suites typically include apparent optical properties (AOPs), inherent optical properties (IOPs), and a variety of ancillary observations including sun photometry, biogeochemical profiles, and productivity measurements. All quality-assured data have been submitted to NASA's SeaWiFS Bio-Optical Archive and Storage System (SeaBASS) data archive. Our algorithm development efforts address most of the potential bio-optical data products for the Sea-Viewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS), and GLI, and provide validation for specific areas of concern, i.e., high latitudes and coastal waters.

  4. THE ALGORITHM OF THE CASE FORMATION DURING THE DEVELOPMENT OF CLINICAL DISCIPLINES IN MEDICAL SCHOOL

    Directory of Open Access Journals (Sweden)

    Andrey A. Garanin

    2016-01-01

Full Text Available The aim of the study is to develop an algorithm for case formation in the discipline «Clinical Medicine». Methods. The methods involve the effectiveness analysis of the self-diagnosed levels of professional and personal abilities of students in the process of self-study. Results. The article deals with the organization of students' independent work using the case method, which is one of the most important and complex active learning methods. When the case analysis method is implemented in the educational process, the main job of the teacher is focused on the development of individual cases. While developing a case study of a medical character, the teacher needs to pay special attention to questions of pathogenesis and pathological anatomy so that students form the fundamental clinical thinking that allows them to assess the patient's condition as a whole organism, taking into account all its features, to understand the cause-and-effect relationships arising in the development of a concrete disease, and to master new techniques and improve available techniques for making a differential diagnosis. Scientific novelty and practical significance. The structure of a medical case study to be followed in the development of a case in the discipline «Clinical Medicine» is proposed. Unification of the case formation algorithm is necessary for the full introduction into the educational process of higher medical schools of one of the most effective active learning methods – the case analysis method – in accordance with the requirements of modern reforms of higher professional education and, in particular, the introduction of the new Federal State Educational Standards.

  5. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    Science.gov (United States)

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
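    The restricted communication model described above (one-bit broadcasts, and receivers that can only distinguish silence from the arrival of one or more messages) lends itself to a simple probabilistic elimination dynamic. The sketch below simulates one plausible reading of that dynamic for a complete network; it is an illustration only, not the authors' algorithms, and in particular the termination detection that a real distributed algorithm needs is handled here by the simulator rather than by the nodes.

```python
import random

def simulate_election(n_nodes=16, p=0.5, seed=1, max_rounds=1000):
    """Simplified simulation of probabilistic elimination with one-bit broadcasts.

    Each round, every remaining candidate broadcasts a one-bit message with
    probability p.  A candidate that stayed silent while at least one other
    candidate broadcast withdraws; if nobody broadcasts, the round has no effect.
    """
    rng = random.Random(seed)
    candidates = set(range(n_nodes))
    rounds = 0
    while len(candidates) > 1 and rounds < max_rounds:
        rounds += 1
        beeped = {c for c in candidates if rng.random() < p}
        if beeped:
            # Silent candidates heard at least one beep and withdraw
            candidates = beeped
    return candidates, rounds

winners, rounds = simulate_election()
print(f"remaining candidates: {winners} after {rounds} rounds")
```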

  6. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    Science.gov (United States)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes and such that the sum of the weights of all its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the most commonly known rudimentary algorithm, Prim's algorithm, to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve the complex network MST problem easily, efficiently and effectively. The selection of an appropriate algorithm is essential; otherwise it will be very hard to get an optimal result. In the case of a road transportation network, it is essential to find the optimal result by considering all the necessary points based on a cost factor (time or distance). This paper is based on solving the minimum spanning tree (MST) problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, location-allocation problems, etc. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly, and it allows access to varied information adapted to the needs of the users. This GIS tool for MST can be applied to a nationwide plan called Prime Minister Gram Sadak Yojana in India to provide optimal all-weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several
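    Prim's algorithm on a weight (adjacency) matrix, the core of the tool, can be stated in a few lines. The sketch below is a plain-Python illustration of the algorithm itself, not the ArcGIS/Python script described in the paper; INF marks node pairs with no direct edge, and the example weights are arbitrary travel costs.

```python
INF = float("inf")

def prim_mst(weights):
    """Return MST edges (u, v, w) for a connected graph given as a weight matrix."""
    n = len(weights)
    in_tree = [False] * n
    best_cost = [INF] * n     # cheapest edge connecting each node to the tree so far
    best_from = [-1] * n
    best_cost[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best_cost[i])
        in_tree[u] = True
        if best_from[u] != -1:
            edges.append((best_from[u], u, weights[best_from[u]][u]))
        for v in range(n):
            if not in_tree[v] and weights[u][v] < best_cost[v]:
                best_cost[v], best_from[v] = weights[u][v], u
    return edges

# Small road-network example: weights are travel times or distances
W = [[INF, 4, 1, INF],
     [4, INF, 2, 5],
     [1, 2, INF, 8],
     [INF, 5, 8, INF]]
print(prim_mst(W))   # [(0, 2, 1), (2, 1, 2), (1, 3, 5)]
```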

  7. Algorithm Development for Measurement and Prediction the Traffic in Broad Band Integrated Service Networks

    Directory of Open Access Journals (Sweden)

    Mohammad A. Rawajbeh

    2010-01-01

Full Text Available Problem statement: In this study an effort has been made to develop an algorithm for traffic prediction and measurement for widely used networks such as B-ISDN. Such an algorithm should be based on valuable parameters collected during the various stages of data transmission. The proposed algorithm is expected to address the main sources of the congestion problem in the network. With this technique the admission decision is made according to the prediction of a few quality-of-service parameters expected for the new connections. Approach: This research aimed to find a suitable method for improving the performance and the quality of service during real-time operation in today's widely used networks. The improvement of quality of service in B-ISDN can be achieved by a significant estimation of the network state and by discovering the most critical situations during operation. The most frequent problems in such networks are loss of connection, time delay and saturation of communication lines. These problems are known, and some solutions can be sufficient to deal with them, but not for a long time. Results: The proposed solution was based on the need for a traffic prediction method that determines the state of the network in real time and decides how to deal with it. Such a method should be based on finding the most significant parameters in the network during real-time operation and using them as prediction variables to predict the situation or the state of the network at a future time, and then take the appropriate action. Networks suffer from the permanent problem of finding the most significant parameters that can be estimated in real time to help solve the problems encountered during operation, such as saturation, bottlenecks, disconnection and time delay. Conclusion: The results achieved by this research were based on the developed algorithm, which can be used to predict the traffic in B-ISDN, optimizing the bandwidth and making the bandwidth available to the behaving

  8. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    Science.gov (United States)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.

  9. The development of Advanced robotic technology - Development of target-tracking algorithm for remote-control robot system

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dong Sun; Lee, Joon Whan; Kim, Hyong Suk; Yoon, Sook; Lee, Jin Ho; Han, Jeong Soo; Baek, Seong Hyun; Choi, Gap Chu [Chonbuk National University, Chonju (Korea, Republic of)

    1996-07-01

The utilization of remote-control robot systems in atomic power plants or nuclear-related facilities is growing rapidly in order to protect workers from high-radiation environments. Such applications require complete stability of the robot system, so precise tracking of the robot is essential for the whole system. This research accomplishes that goal by developing appropriate algorithms for remote-control robot systems. The research consists of two different approaches: target-tracking systems using Kalman filters and neural networks. The tracking system under study uses vision sensors to obtain features of targets. A Kalman filter model using the moving-position estimation technique is designed and tested for tracking an object moving in a circle. Attributes of the tracked object are investigated and the best features are extracted from the input imagery for the Kalman filter model. A neural network tracking system is designed and tested to trace a robot end-effector. This model is aimed at utilizing the excellent capabilities of neural networks: nonlinear mapping between inputs and outputs, learning capability, and generalization capability. The neural tracker consists of two networks for position detection and prediction. Tracking algorithms are developed and tested for the two models. The experimental results show that both models are promising as real-time target-tracking systems for remote-control robot systems. 20 refs., 34 figs. (author)
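    The Kalman-filter tracker with moving-position estimation rests on the standard predict/update cycle for a constant-velocity motion model. The sketch below is a generic textbook formulation in NumPy (state = position and velocity in the image plane), offered as an illustration rather than the authors' implementation; the frame interval and noise levels are assumed.

```python
import numpy as np

dt = 1.0                                   # frame interval (assumed)
F = np.array([[1, 0, dt, 0],               # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],                # only the image position is measured
              [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)                       # process noise (assumed)
R = 4.0 * np.eye(2)                        # measurement noise (assumed, pixels^2)

def kalman_step(x, P, z):
    """One predict/update cycle; x is the state [px, py, vx, vy], z the measurement."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4) * 100.0
for z in [np.array([10.0, 5.0]), np.array([12.1, 5.9]), np.array([14.0, 7.1])]:
    x, P = kalman_step(x, P, z)
print(x)   # estimated position and velocity of the tracked object
```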

  10. A Development of Self-Organization Algorithm for Fuzzy Logic Controller

    Energy Technology Data Exchange (ETDEWEB)

    Park, Y.M.; Moon, U.C. [Seoul National Univ. (Korea, Republic of). Coll. of Engineering; Lee, K.Y. [Pennsylvania State Univ., University Park, PA (United States). Dept. of Electrical Engineering

    1994-09-01

This paper proposes a complete design method for an on-line self-organizing fuzzy logic controller that does not use any plant model. By mimicking the human learning process, the control algorithm finds the control rules of a system for which little knowledge is available. To realize this, a concept of the Fuzzy Auto-Regressive Moving Average (FARMA) rule is introduced. In conventional fuzzy logic control, knowledge of the system supplied by an expert is required for developing control rules. However, the proposed new fuzzy logic controller needs no expert in making control rules. Instead, rules are generated using the history of input-output pairs, and new inference and defuzzification methods are developed. The generated rules are stored in the fuzzy rule space and updated on-line by a self-organizing procedure. The validity of the proposed fuzzy logic control method has been demonstrated numerically in controlling an inverted pendulum. (author). 28 refs., 16 figs.

  11. Analysis and Development of Walking Algorithm Kinematic Model for 5-Degree of Freedom Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Gerald Wahyudi Setiono

    2012-12-01

Full Text Available A walking diagram design and the calculation for a bipedal robot have been developed. The bipedal robot was designed and constructed with several kinds of servo brackets for the legs, two feet and a hip. Each of the bipedal robot's legs has 5 degrees of freedom: three pitches (hip joint, knee joint and ankle joint) and two rolls (hip joint and ankle joint). The walking algorithm of this bipedal robot is based on the triangle formulation of the cosine law to obtain the angle value at each joint. The hip height, the height of the swinging leg and the step distance are derived based on linear equations. This paper discusses the kinematic model analysis and the development of the walking diagram of the bipedal robot. Kinematic equations were derived, and the joint angles were simulated and coded into an Arduino board to be executed on the robot.
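    The "triangle formulation of the cosine law" amounts to treating the thigh, the shank and the hip-to-ankle distance as the three sides of a triangle. The sketch below computes the knee and hip pitch angles for a planar (sagittal-plane) leg under that assumption; the link lengths, target position and sign conventions are illustrative, and this is not the authors' Arduino code.

```python
import math

def leg_ik(hip_to_ankle_x, hip_to_ankle_z, thigh, shank):
    """Planar leg inverse kinematics via the law of cosines.

    Returns (hip_pitch, knee_pitch) in radians for a target ankle position
    expressed in the hip frame (x forward, z downward).
    """
    d = math.hypot(hip_to_ankle_x, hip_to_ankle_z)
    if d > thigh + shank:
        raise ValueError("target out of reach")
    # Law of cosines: d^2 = thigh^2 + shank^2 - 2*thigh*shank*cos(interior knee angle)
    cos_interior = (thigh**2 + shank**2 - d**2) / (2 * thigh * shank)
    knee = math.pi - math.acos(max(-1.0, min(1.0, cos_interior)))
    # Hip pitch: direction to the ankle plus the triangle's internal angle at the hip
    alpha = math.atan2(hip_to_ankle_x, hip_to_ankle_z)
    cos_beta = (thigh**2 + d**2 - shank**2) / (2 * thigh * d)
    hip = alpha + math.acos(max(-1.0, min(1.0, cos_beta)))
    return hip, knee

print([math.degrees(a) for a in leg_ik(0.05, 0.18, thigh=0.10, shank=0.10)])
```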

  12. Development and Validation of a Spike Detection and Classification Algorithm Aimed at Implementation on Hardware Devices

    Directory of Open Access Journals (Sweden)

    E. Biffi

    2010-01-01

Full Text Available Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long-term plasticity effects, long-period recordings and spike-sorting methods are needed. Therefore, on-line and real-time analysis, optimization of memory use and improvement of the data transmission rate become necessary. We developed an algorithm for amplitude-threshold spike detection, whose performance was verified with (a) statistical analysis on both simulated and real signals and (b) Big O notation. Moreover, we developed a PCA-hierarchical classifier, evaluated on simulated and real signals. Finally, we proposed a spike detection hardware design on FPGA, whose feasibility was verified in terms of the number of CLBs, memory occupation and timing requirements; once realized, it will be able to execute on-line detection and real-time waveform analysis, reducing data storage problems.
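    Amplitude-threshold spike detection of the kind validated here reduces to comparing the signal against a noise-derived threshold and enforcing a refractory gap between detections. The sketch below is a minimal offline illustration; the "multiple of a robust noise estimate" threshold and the parameter values are assumptions, and this is not the FPGA design.

```python
import numpy as np

def detect_spikes(signal, fs_hz, k=5.0, refractory_ms=1.0):
    """Return sample indices of amplitude-threshold crossings.

    The threshold is k times a robust noise estimate (median absolute deviation
    scaled by 0.6745); crossings within the refractory period of the previous
    detection are ignored.
    """
    x = np.asarray(signal, float)
    noise_sigma = np.median(np.abs(x - np.median(x))) / 0.6745
    thr = k * noise_sigma
    refractory = int(refractory_ms * 1e-3 * fs_hz)
    spikes, last = [], -refractory
    for i, v in enumerate(np.abs(x)):
        if v > thr and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes

rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 20000)
trace[[5000, 12000]] += 40.0               # two artificial spikes
print(detect_spikes(trace, fs_hz=20000))   # detections at/near samples 5000 and 12000
```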

  13. Developing the surveillance algorithm for detection of failure to recognize and treat severe sepsis.

    Science.gov (United States)

    Harrison, Andrew M; Thongprayoon, Charat; Kashyap, Rahul; Chute, Christopher G; Gajic, Ognjen; Pickering, Brian W; Herasevich, Vitaly

    2015-02-01

    To develop and test an automated surveillance algorithm (sepsis "sniffer") for the detection of severe sepsis and monitoring failure to recognize and treat severe sepsis in a timely manner. We conducted an observational diagnostic performance study using independent derivation and validation cohorts from an electronic medical record database of the medical intensive care unit (ICU) of a tertiary referral center. All patients aged 18 years and older who were admitted to the medical ICU from January 1 through March 31, 2013 (N=587), were included. The criterion standard for severe sepsis/septic shock was manual review by 2 trained reviewers with a third superreviewer for cases of interobserver disagreement. Critical appraisal of false-positive and false-negative alerts, along with recursive data partitioning, was performed for algorithm optimization. An algorithm based on criteria for suspicion of infection, systemic inflammatory response syndrome, organ hypoperfusion and dysfunction, and shock had a sensitivity of 80% and a specificity of 96% when applied to the validation cohort. In order, low systolic blood pressure, systemic inflammatory response syndrome positivity, and suspicion of infection were determined through recursive data partitioning to be of greatest predictive value. Lastly, 117 alert-positive patients (68% of the 171 patients with severe sepsis) had a delay in recognition and treatment, defined as no lactate and central venous pressure measurement within 2 hours of the alert. The optimized sniffer accurately identified patients with severe sepsis that bedside clinicians failed to recognize and treat in a timely manner. Copyright © 2015 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
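    The alert logic described (suspicion of infection plus SIRS plus hypoperfusion/shock criteria) is essentially a conjunction of rules over routinely charted variables. The sketch below is a generic illustration of that structure using simplified, commonly cited cut-offs; it is not the validated algorithm from the study and the thresholds should not be used clinically.

```python
def sirs_count(temp_c, heart_rate, resp_rate, wbc_k_per_ul):
    """Count systemic inflammatory response syndrome criteria (simplified)."""
    return sum([
        temp_c > 38.0 or temp_c < 36.0,
        heart_rate > 90,
        resp_rate > 20,
        wbc_k_per_ul > 12.0 or wbc_k_per_ul < 4.0,
    ])

def severe_sepsis_alert(suspected_infection, temp_c, heart_rate, resp_rate,
                        wbc_k_per_ul, systolic_bp, lactate_mmol_l):
    """Fire an alert when infection is suspected, SIRS >= 2 and there is
    evidence of hypoperfusion (low systolic blood pressure or raised lactate)."""
    sirs = sirs_count(temp_c, heart_rate, resp_rate, wbc_k_per_ul) >= 2
    hypoperfusion = systolic_bp < 90 or lactate_mmol_l > 2.0
    return suspected_infection and sirs and hypoperfusion

print(severe_sepsis_alert(True, 38.6, 112, 24, 15.2, 84, 3.1))   # True
```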

  14. A Prototype Hail Detection Algorithm and Hail Climatology Developed with the Advanced Microwave Sounding Unit (AMSU)

    Science.gov (United States)

    Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald

    2015-01-01

In previous studies published in the open literature, a strong relationship between the occurrence of hail and microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which reduce an accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting, new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19; MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near-global coverage every 4 hours, thus offering much greater temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz and additionally three at the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology based on all available AMSU observations during 2000-11 that is stratified in several ways, including total hail occurrence by month (March through September), total annual, and over the diurnal cycle. Independent comparisons are made compared to similar data sets derived from other

  15. DEVELOPMENT AND TESTING OF FAULT-DIAGNOSIS ALGORITHMS FOR REACTOR PLANT SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Grelle, Austin L.; Park, Young S.; Vilim, Richard B.

    2016-06-26

    Argonne National Laboratory is further developing fault diagnosis algorithms for use by the operator of a nuclear plant to aid in improved monitoring of overall plant condition and performance. The objective is better management of plant upsets through more timely, informed decisions on control actions with the ultimate goal of improved plant safety, production, and cost management. Integration of these algorithms with visual aids for operators is taking place through a collaboration under the concept of an operator advisory system. This is a software entity whose purpose is to manage and distill the enormous amount of information an operator must process to understand the plant state, particularly in off-normal situations, and how the state trajectory will unfold in time. The fault diagnosis algorithms were exhaustively tested using computer simulations of twenty different faults introduced into the chemical and volume control system (CVCS) of a pressurized water reactor (PWR). The algorithms are unique in that each new application to a facility requires providing only the piping and instrumentation diagram (PID) and no other plant-specific information; a subject-matter expert is not needed to install and maintain each instance of an application. The testing approach followed accepted procedures for verifying and validating software. It was shown that the code satisfies its functional requirement which is to accept sensor information, identify process variable trends based on this sensor information, and then to return an accurate diagnosis based on chains of rules related to these trends. The validation and verification exercise made use of GPASS, a one-dimensional systems code, for simulating CVCS operation. Plant components were failed and the code generated the resulting plant response. Parametric studies with respect to the severity of the fault, the richness of the plant sensor set, and the accuracy of sensors were performed as part of the validation

  16. Identifying patients likely to have atopic dermatitis: development of a pilot algorithm.

    Science.gov (United States)

    Farage, Miranda A; Bowtell, Philip; Katsarou, Alexandra

    2010-01-01

    A quick method to distinguish people who are predisposed to skin complaints would be useful in a variety of fields. Certain subgroups, such as people with atopic dermatitis, might be more susceptible to skin irritation than the typical consumer and may be more likely to report product-related complaints. To develop a rapid, questionnaire-based algorithm to predict whether or not individuals who report skin complaints have atopic dermatitis. A 9-item questionnaire on self-perceived skin sensitivity and product categories reportedly associated with skin reactions was administered to two groups of patients from a dermatology clinic: one with clinically diagnosed, active atopic dermatitis (n = 25) and a control group of patients with dermatologic complaints unrelated to atopic dermatitis (n = 25). Questionnaire responses were correlated with the patients' clinical diagnoses in order to derive the minimum number of questions needed to best predict the patients' original diagnoses. We demonstrated that responses to a sequence of three targeted questions related to self-perceived skin sensitivity, preference for hypoallergenic products, and reactions to or avoidance of alpha-hydroxy acids were highly predictive of atopic dermatitis among a population of dermatology clinic patients. The predictive algorithm concept may be useful in postmarketing surveillance programs to rapidly assess the possible status of consumers who report frequent or persistent product-related complaints. Further refinement and validation of this concept is planned with samples drawn from the general population and from consumers who report skin complaints associated with personal products.

  17. Development of an algorithm for production of inactivated arbovirus antigens in cell culture.

    Science.gov (United States)

    Goodman, C H; Russell, B J; Velez, J O; Laven, J J; Nicholson, W L; Bagarozzi, D A; Moon, J L; Bedi, K; Johnson, B W

    2014-11-01

    Arboviruses are medically important pathogens that cause human disease ranging from a mild fever to encephalitis. Laboratory diagnosis is essential to differentiate arbovirus infections from other pathogens with similar clinical manifestations. The Arboviral Diseases Branch (ADB) reference laboratory at the CDC Division of Vector-Borne Diseases (DVBD) produces reference antigens used in serological assays such as the virus-specific immunoglobulin M antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA). Antigen production in cell culture has largely replaced the use of suckling mice; however, the methods are not directly transferable. The development of a cell culture antigen production algorithm for nine arboviruses from the three main arbovirus families, Flaviviridae, Togaviridae, and Bunyaviridae, is described here. Virus cell culture growth and harvest conditions were optimized, inactivation methods were evaluated, and concentration procedures were compared for each virus. Antigen performance was evaluated by the MAC-ELISA at each step of the procedure. The antigen production algorithm is a framework for standardization of methodology and quality control; however, a single antigen production protocol was not applicable to all arboviruses and needed to be optimized for each virus.

  18. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms.

    Science.gov (United States)

    Birkholz, Adam B; Schlegel, H Bernhard

    2015-12-28

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.
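    In symbols, and with notation chosen here only for illustration, the quantity being minimized is the line integral of the gradient norm along a parameterized path with fixed endpoints, with the path expanded in continuous basis functions:

$$\mathrm{VRE}[\mathbf{x}] \;=\; \int_{0}^{1} \bigl\lVert \nabla E\bigl(\mathbf{x}(t)\bigr)\bigr\rVert \,\bigl\lVert \mathbf{x}'(t)\bigr\rVert \, dt, \qquad \mathbf{x}(t) \;=\; \mathbf{x}_R + t\,(\mathbf{x}_P-\mathbf{x}_R) + \sum_{k}\mathbf{c}_k\,\phi_k(t), \quad \phi_k(0)=\phi_k(1)=0,$$

    so that the optimization runs over the expansion coefficients $\mathbf{c}_k$, subject to the basis-function spacing constraints and the coupled minimizations at intermediates and transition states mentioned above. The straight-line reference term and the boundary conditions on $\phi_k$ are assumptions of this sketch, not necessarily the authors' exact parameterization.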

  19. Further development of image processing algorithms to improve detectability of defects in Sonic IR NDE

    Science.gov (United States)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2017-02-01

Sonic Infrared imaging (SIR) technology is a relatively new NDE technique that has received significant acceptance in the NDE community. SIR NDE is a super-fast, wide-range NDE method. The technology uses short pulses of ultrasonic excitation together with infrared imaging to detect defects in the structures under inspection. Defects become visible to the IR camera when the temperature in the crack vicinity increases due to various heating mechanisms in the specimen. Defect detection is highly affected by noise levels as well as by mode patterns in the image. Mode patterns result from the superposition of sonic waves interfering within the specimen during the application of the sound pulse. Mode patterns can be a serious concern, especially in composite structures. They can either mimic real defects in the specimen or, alternatively, hide defects if they overlap. At last year's QNDE, we presented algorithms to improve defect detectability in severe noise. In this paper, we present our development of defect extraction algorithms targeting specifically the mode patterns in SIR images.

  20. Development of Genetic Algorithm Based Macro Mechanical Model for Steel Fibre Reinforced Concrete

    Directory of Open Access Journals (Sweden)

    Gopala Krishna Sastry, K. V. S.

    2014-01-01

    Full Text Available This paper presents the applicability of hybrid networks that combine an Artificial Neural Network (ANN) and a Genetic Algorithm (GA) for predicting the strength properties of Steel Fibre Reinforced Concrete (SFRC) with different water-cement ratios (0.4, 0.45, 0.5, 0.55), aggregate-cement ratios (3, 4, 5), fibre percentages (0.75, 1.0, 1.5) and fibre aspect ratios (40, 50, 60) as input vectors. Strength properties of SFRC such as compressive strength, flexural strength, split tensile strength and compaction factor are considered as the output vector. The network has been trained with data obtained from experimental work. The hybrid neural network model learned the relation between input and output vectors in 1900 iterations. After successful learning, the GA based BPN model predicted the strength characteristics satisfying all the constraints with an accuracy of about 95%. The various stages involved in the development of the genetic algorithm based neural network model are addressed at length in this paper.
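
    One common way to realize such an ANN/GA hybrid is to encode the network weights as a real-valued chromosome and let the GA minimize the prediction error; whether this matches the exact hybridization used in the paper is not stated in the abstract, so the Python/NumPy sketch below (with a made-up architecture, GA settings and synthetic data in place of the experimental SFRC measurements) is only indicative.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-in for the experimental data: 4 mix-design inputs -> 1 strength output.
        X = rng.uniform(0.0, 1.0, size=(60, 4))
        y = (X @ np.array([0.4, -0.2, 0.3, 0.1]) + 0.05 * rng.standard_normal(60))[:, None]

        N_IN, N_HID, N_OUT = 4, 6, 1
        N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT   # total weights and biases

        def forward(w, X):
            """Decode a flat chromosome into a one-hidden-layer network and evaluate it."""
            i = 0
            W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
            b1 = w[i:i + N_HID]; i += N_HID
            W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
            return np.tanh(X @ W1 + b1) @ W2 + w[i:i + N_OUT]

        def fitness(w):
            return -np.mean((forward(w, X) - y) ** 2)        # GA maximizes fitness = -MSE

        pop = rng.standard_normal((40, N_W))                 # initial population of chromosomes
        for gen in range(200):
            fit = np.array([fitness(ind) for ind in pop])
            # tournament selection of parents
            parents = np.array([pop[max(rng.choice(len(pop), 3), key=lambda k: fit[k])]
                                for _ in range(len(pop))])
            # uniform crossover with a shifted copy of the parent pool, then Gaussian mutation
            mask = rng.random((len(pop), N_W)) < 0.5
            children = np.where(mask, parents, np.roll(parents, 1, axis=0))
            children += 0.05 * rng.standard_normal(children.shape)
            children[0] = pop[np.argmax(fit)]                # elitism: keep the best individual
            pop = children

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("training MSE:", float(np.mean((forward(best, X) - y) ** 2)))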

  1. Development of a method of ICP algorithm accuracy improvement during shaped profiles and surfaces control

    Directory of Open Access Journals (Sweden)

    V.A. Pechenin

    2014-10-01

    Full Text Available In this paper we propose a method for improving the operating accuracy of the iterative closest point (ICP) algorithm used to solve metrology problems when determining a location deviation. Compressor blade profiles of a gas turbine engine (GTE) were used as the object for applying the deviation-determination method. In the developed method, the best-alignment problem is formulated as a multiobjective problem with criteria of minimal squared distances, differences of normal vectors and differences of depth of camber at corresponding points of the aligned profiles. Variants of solving the task using an integral criterion that combines the above were considered. The optimization problems were solved using a quasi-Newton method of sequential quadratic programming. The proposed improvement of the registration algorithm, based on geometric features, showed greater accuracy in comparison with the discussed methods that optimize only the distance between fitted points, especially when a small number of measurement points on the profiles was used.
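
    For orientation, a minimal single-criterion (point-to-point) ICP loop is sketched below in Python/NumPy; the method proposed in the paper extends this baseline with normal-vector and depth-of-camber terms in a multiobjective criterion, which is not reproduced here.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(P, Q):
            """Least-squares rotation R and translation t mapping point set P onto Q (Kabsch/SVD)."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)
            U, _, Vt = np.linalg.svd(H)
            D = np.eye(P.shape[1])
            D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
            R = Vt.T @ D @ U.T
            return R, cQ - R @ cP

        def icp(measured, nominal, iters=50, tol=1e-8):
            """Align measured profile points to the nominal (CAD) profile by iterating
            nearest-neighbour matching and rigid-transform estimation."""
            tree = cKDTree(nominal)
            src, prev_err = measured.copy(), np.inf
            for _ in range(iters):
                dist, idx = tree.query(src)                   # closest nominal point per measured point
                R, t = best_rigid_transform(src, nominal[idx])
                src = src @ R.T + t                           # apply the incremental transform
                err = float(np.mean(dist ** 2))
                if abs(prev_err - err) < tol:
                    break
                prev_err = err
            return src, err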

  2. Structural visualization of expert nursing: Development of an assessment and intervention algorithm for delirium following abdominal and thoracic surgeries.

    Science.gov (United States)

    Watanuki, Shigeaki; Takeuchi, Tomiko; Matsuda, Yoshimi; Terauchi, Hidemasa; Takahashi, Yukiko; Goshima, Mitsuko; Nishimoto, Yutaka; Tsuru, Satoko

    2006-01-01

    An assessment and intervention algorithm for delirium following abdominal and thoracic surgeries was developed based upon the current knowledge-base. The sources of information included literature and clinical expertise. The assessment and intervention algorithm was structured and visualized so that patient-tailored and risk-stratified prediction/prevention, assessment, and intervention could be carried out. Accumulation of clinical outcome data is necessary in the future validation study to identify the relative weight of risk factors and clinical utility of the algorithm.

  3. Development and validation of a computerized algorithm for International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI)

    DEFF Research Database (Denmark)

    Walden, K; Bélanger, L M; Biering-Sørensen, F

    2016-01-01

    STUDY DESIGN: Validation study. OBJECTIVES: To describe the development and validation of a computerized application of the international standards for neurological classification of spinal cord injury (ISNCSCI). SETTING: Data from acute and rehabilitation care. METHODS: The Rick Hansen Institute......-validation of the algorithm in phase five using 108 new RHSCIR cases did not identify the need for any further changes, as all discrepancies were due to clinician errors. The web-based application and the algorithm code are freely available at www.isncscialgorithm.com. CONCLUSION: The RHI-ISNCSCI Algorithm provides...... by funding from Health Canada and Western Economic Diversification Canada....

  4. Development of adaptive IIR filtered-e LMS algorithm for active noise control

    Institute of Scientific and Technical Information of China (English)

    SUN Xu; MENG Guang; TENG Pengxiao; CHEN Duanshi

    2003-01-01

    Compared to finite impulse response (FIR) filters, infinite impulse response (IIR) filters can match the system better with far fewer coefficients; hence the computation load is reduced and the performance improves. It is therefore attractive to use IIR filters instead of FIR filters in active noise control (ANC). However, the filtered-U LMS (FULMS) algorithm, the IIR filter-based algorithm commonly used so far, cannot ensure global convergence. A new IIR filter-based adaptive algorithm, which ensures global convergence with only a slight increase in computation load, is proposed in this paper. The new algorithm is called the filtered-e LMS (FELMS) algorithm because its error signal needs to be filtered. Simulation results show that the FELMS algorithm performs better than the FULMS algorithm.

  5. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    Science.gov (United States)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp-Logan phantom to create metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. After the proposed algorithm had been applied, the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifact. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
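
    To make the overall flow concrete, the Python sketch below implements the baseline sinogram-interpolation MAR that the authors compare against (reconstruct, segment metal by a threshold, forward-project the metal mask, linearly interpolate the corrupted bins, reconstruct again), using scikit-image's radon/iradon as stand-ins for the scanner geometry. The threshold value is illustrative, and the edge-preserving smoothing step of the proposed algorithm is deliberately not reproduced.

        import numpy as np
        from skimage.transform import radon, iradon

        def linear_interp_mar(sinogram, theta, metal_threshold=2000.0):
            """Baseline linear-interpolation MAR on a (detector_bins x angles) sinogram."""
            recon = iradon(sinogram, theta=theta)              # 1) initial reconstruction
            metal_mask = recon > metal_threshold               # 2) metal segmentation (illustrative threshold)
            metal_trace = radon(metal_mask.astype(float), theta=theta) > 0   # 3) corrupted bins
            corrected = sinogram.copy()
            for j in range(sinogram.shape[1]):                 # 4) interpolate per projection angle
                bad = metal_trace[:, j]
                if bad.any() and not bad.all():
                    rows = np.arange(sinogram.shape[0])
                    corrected[bad, j] = np.interp(rows[bad], rows[~bad], sinogram[~bad, j])
            return iradon(corrected, theta=theta)              # 5) final reconstruction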

  6. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3

    Science.gov (United States)

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K.

    2016-01-01

    STUDY QUESTION Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? SUMMARY ANSWER The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. WHAT IS KNOWN ALREADY Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. STUDY DESIGN, SIZE, DURATION Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. PARTICIPANTS/MATERIALS, SETTING, METHODS The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the

  7. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3.

    Science.gov (United States)

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K

    2016-10-01

    Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. PARTICIPANTS/MATERIALS, SETTING, METHODS The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the curve (AUC) to establish the predictive strength of the algorithm. By applying the here

  8. Development of Prediction Model for Endocrine Disorders in the Korean Elderly Using CART Algorithm

    Directory of Open Access Journals (Sweden)

    Haewon Byeon

    2015-09-01

    Full Text Available The aim of the present cross-sectional study was to analyze the factors that affect endocrine disorders in the Korean elderly. The data were taken from the Seoul Welfare Panel Study 2010. The subjects were 2,111 people (879 males, 1,232 females) aged 60 and older living in the community. The dependent variable was defined as the prevalence of endocrine disorders. The explanatory variables were gender, level of education, household income, employment status, marital status, drinking, smoking, BMI, subjective health status, physical activity, experience of stress, and depression. In the Classification and Regression Tree (CART) algorithm analysis, subjective health status, BMI, education level, and household income were significantly associated with endocrine disorders in the Korean elderly. The most preferentially involved predictor was subjective health status. The development of guidelines and health education to prevent endocrine disorders is required, taking multiple risk factors into account.
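
    A minimal sketch of this type of CART analysis (Python, scikit-learn) is shown below; the file name and column names are placeholders standing in for the Seoul Welfare Panel variables, not the actual data set.

        import pandas as pd
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Hypothetical file and column names standing in for the survey variables.
        df = pd.read_csv("seoul_welfare_panel_2010.csv")
        predictors = ["gender", "education", "income", "employment", "marital_status",
                      "drinking", "smoking", "bmi", "subjective_health", "physical_activity",
                      "stress", "depression"]

        cart = DecisionTreeClassifier(criterion="gini", max_depth=4,
                                      min_samples_leaf=30, random_state=0)
        cart.fit(df[predictors], df["endocrine_disorder"])

        # The printed tree shows which predictor splits first; the paper reports subjective
        # health status as the most preferential split.
        print(export_text(cart, feature_names=predictors))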

  9. Development of a Collins-type cryocooler floating piston control algorithm

    Science.gov (United States)

    Hogan, Jake; Hannon, Charles L.; Brisson, John

    2012-06-01

    The Collins-type cryocooler uses a floating piston design for the working fluid expansion. The piston floats between a cold volume, where the working fluid is expanded, and a warm volume. The piston is shuttled between opposite ends of the closed cylinder by opening and closing valves connecting several reservoirs at various pressures to the warm volume. Ideally, these pressures should be distributed between the high and low system pressure to gain good control of the piston motion. In this work, a numerical quasi-steady thermodynamic model is developed for the piston cycle. The model determines the steady state pressure distribution of the reservoirs for a given control algorithm. The results are then extended to show how valve timing modifications can be used to overcome helium leakage past the piston during operation.

  10. Develop algorithms to improve detectability of defects in Sonic IR imaging NDE

    Science.gov (United States)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2016-02-01

    Sonic Infrared (IR) technology is relatively new in the NDE family. It is a fast, wide-area imaging method that combines ultrasound excitation and infrared imaging: the former applies ultrasound energy to induce frictional heating at defects, while the latter captures the IR emission from the target. This technology can detect both surface and subsurface defects such as cracks and disbonds/delaminations in various materials, metals/metal alloys or composites. However, certain defects may produce only a very small IR signature that is buried in noise or heating patterns. In such cases, effectively extracting the defect signals becomes critical for identifying the defects. In this paper, we will present algorithms which have been developed to improve the detectability of defects in Sonic IR.

  11. Game-based programming towards developing algorithmic thinking skills in primary education

    Directory of Open Access Journals (Sweden)

    Hariklia Tsalapatas

    2012-06-01

    Full Text Available This paper presents cMinds, a learning intervention that deploys game-based visual programming towards building analytical, computational, and critical thinking skills in primary education. The proposed learning method exploits the structured nature of programming, which is inherently logical and transcends cultural barriers, towards inclusive learning that exposes learners to algorithmic thinking. A visual programming environment, entitled ‘cMinds Learning Suite’, has been developed aimed for classroom use. Feedback from the deployment of the learning methods and tools in classrooms in several European countries demonstrates elevated learner motivation for engaging in logical learning activities, fostering of creativity and an entrepreneurial spirit, and promotion of problem-solving capacity

  12. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    CERN Document Server

    Gagnon, Louis-Guillaume; The ATLAS collaboration

    2017-01-01

    ATLAS track reconstruction software is continuously evolving to match the demands from the increasing instantaneous luminosity of the LHC, as well as the increased center-of-mass energy. These conditions result in a higher abundance of events with dense track environments, such as the cores of jets or boosted tau leptons undergoing three-prong decays. These environments are characterised by charged particle separations on the order of the ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction software to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes led to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented and physics performance studies are shown, including a measurement of the fraction of lost tracks in jets with high transverse momentum.

  13. Developing image processing meta-algorithms with data mining of multiple metrics.

    Science.gov (United States)

    Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.

  14. Development of algorithms for detection of mechanical injury on white mushrooms (Agaricus bisporus) using hyperspectral imaging

    Science.gov (United States)

    Gowen, A. A.; O'Donnell, C. P.

    2009-05-01

    White mushrooms were subjected to mechanical injury by controlled shaking in a plastic box at 400 rpm for different times (0, 60, 120, 300 and 600 s). Immediately after shaking, hyperspectral images were obtained using two pushbroom line-scanning hyperspectral imaging instruments, one operating in the wavelength range of 400 - 1000 nm with spectroscopic resolution of 5 nm, the other operating in the wavelength range of 950 - 1700 nm with spectroscopic resolution of 7 nm. Different spectral and spatial pretreatments were investigated to reduce the effect of sample curvature on hyperspectral data. Algorithms based on Chemometric techniques (Principal Component Analysis and Partial Least Squares Discriminant Analysis) and image processing methods (masking, thresholding, morphological operations) were developed for pixel classification in hyperspectral images. In addition, correlation analysis, spectral angle mapping and scaled difference of sample spectra were investigated and compared with the chemometric approaches.
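
    The chemometric route can be illustrated with a short Python/scikit-learn sketch: flatten each hypercube to a pixel-by-wavelength matrix, apply a spectral pretreatment, and classify pixels with PLS-DA (a PLS regression against a damaged/undamaged mask). The pretreatment choice, component count and threshold are illustrative assumptions rather than the settings used in the study.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def snv(spectra):
            """Standard normal variate pretreatment: centre and scale each pixel spectrum,
            which helps reduce curvature/illumination effects."""
            return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

        def train_plsda(cubes, masks, n_components=8):
            """cubes: list of (rows, cols, bands) hypercubes; masks: per-pixel 0/1 damage labels."""
            X = np.vstack([c.reshape(-1, c.shape[-1]) for c in cubes])
            y = np.concatenate([m.reshape(-1) for m in masks]).astype(float)
            return PLSRegression(n_components=n_components).fit(snv(X), y)

        def classify_pixels(model, cube, threshold=0.5):
            """Return a per-pixel damage map for a new hypercube."""
            scores = model.predict(snv(cube.reshape(-1, cube.shape[-1]))).ravel()
            return (scores > threshold).reshape(cube.shape[:2])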

  15. Models and Algorithms for Production Planning and Scheduling in Foundries – Current State and Development Perspectives

    Directory of Open Access Journals (Sweden)

    A. Stawowy

    2012-04-01

    Full Text Available Mathematical programming, constraint programming and computational intelligence techniques, presented in the literature in the field of operations research and production management, are generally inadequate for planning real-life production processes. These methods are in fact dedicated to solving standard problems such as shop floor scheduling or lot-sizing, or their simple combinations such as scheduling with batching, whereas many real-world production planning problems require the simultaneous solution of several problems (in addition to task scheduling and lot-sizing, also cutting, workforce scheduling, packing and transport issues), including problems that are difficult to structure. The article presents examples and a classification of production planning and scheduling systems in the foundry industry described in the literature, and also outlines possible development directions for the models and algorithms used in such systems.

  16. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    CERN Document Server

    Gagnon, Louis-Guillaume; The ATLAS collaboration

    2016-01-01

    ATLAS track reconstruction code is continuously evolving to match the demands from the increasing instantaneous luminosity of LHC, as well as the increased centre-of-mass energy. With the increase in energy, events with dense environments, e.g. the cores of jets or boosted tau leptons, become much more abundant. These environments are characterised by charged particle separations on the order of ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction code to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes lead to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented. In addition, physics performance studies are shown, e.g. a measurement of the fraction of lost tracks in jets with high transverse momentum.

  17. Development of a Low-Lift Chiller Controller and Simplified Precooling Control Algorithm - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Gayeski, N.; Armstrong, Peter; Alvira, M.; Gagne, J.; Katipamula, Srinivas

    2011-11-30

    KGS Buildings LLC (KGS) and Pacific Northwest National Laboratory (PNNL) have developed a simplified control algorithm and prototype low-lift chiller controller suitable for model-predictive control in a demonstration project of low-lift cooling. Low-lift cooling is a highly efficient cooling strategy conceived to enable low or net-zero energy buildings. A low-lift cooling system consists of a high efficiency low-lift chiller, radiant cooling, thermal storage, and model-predictive control to pre-cool thermal storage overnight on an optimal cooling rate trajectory. We call the properly integrated and controlled combination of these elements a low-lift cooling system (LLCS). This document is the final report for that project.

  18. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    Science.gov (United States)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key quality parameter (Mooney viscosity), which is used to evaluate the property of the product, can only be obtained offline with a 4-6 h delay. It would be quite helpful for the industry if the parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing. However, they often do not function well due to the multiphase and nonlinear properties of the process. The purpose of this paper is to develop an efficient soft sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, the phase information is extracted in the local modeling. Using the Gaussian process local modeling method within a just-in-time (JIT) learning framework, the nonlinearity of the process is well handled. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.
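
    The just-in-time idea can be sketched in a few lines of Python (scikit-learn): for each query, select the most similar historical samples and fit a local Gaussian process on them. The plain Euclidean selection below is only a stand-in for the GMMD criterion of the paper, and the kernel and neighbourhood size are illustrative.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def jit_gpr_predict(X_hist, y_hist, x_query, n_local=50):
            """Just-in-time soft sensor: build a local GP model around each query point."""
            # Local sample selection -- Euclidean distance as a stand-in for the GMMD criterion.
            d = np.linalg.norm(X_hist - x_query, axis=1)
            idx = np.argsort(d)[:n_local]
            kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
            gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
            gp.fit(X_hist[idx], y_hist[idx])
            mean, std = gp.predict(x_query.reshape(1, -1), return_std=True)
            return float(mean[0]), float(std[0])   # predicted Mooney viscosity and its uncertainty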

  19. Development of Algorithms for Control of Motor Boat as Multidimensional Nonlinear Object

    Directory of Open Access Journals (Sweden)

    Gaiduk Anatoliy

    2015-01-01

    Full Text Available In this paper the authors develop and study a control system for a motor boat that allows it to move along stated paths with a given speed. It is assumed that the boat is equipped with a measuring system that provides its current coordinates and its linear and angular velocities. The control system is based upon the mathematical model presented earlier (see references). In order to find the necessary controls analytically, all equations were transformed to the Jordan controllable form. Besides enabling this solution, the transformation also allows the model nonlinearities to be handled and the required quality of movement along the stated paths to be obtained. The control system includes algorithms for control of the longitudinal velocity and the boat course. The proposed control system was studied by simulation in MATLAB, subject to the boat design limitations on the values of the control variables. Results of two experiments, differing in the value of the required velocity, are discussed.

  20. Developing Multiple Diverse Potential Designs for Heat Transfer Utilizing Graph Based Evolutionary Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    David J. Muth Jr.

    2006-09-01

    This paper examines the use of graph based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population. Altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph can also affect the convergence rate for the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass fueled cookstove used in developing nations for household cooking. In this cookstove, wood is combusted in a small combustion chamber and the resulting hot gases are utilized to heat the stove’s cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design. Because of this, a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion in this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.
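
    The core GBEA mechanism (mating restricted to graph neighbours so that good genes spread slowly and diversity is preserved) can be sketched in plain Python as below; the ring topology, toy objective and operators are illustrative placeholders, not the cookstove baffle model optimized in the paper.

        import random

        def gbea(fitness, make_individual, crossover, mutate, pop_size=64, generations=2000):
            """Graph based EA on a ring topology: individual i may only mate with its neighbours."""
            pop = [make_individual() for _ in range(pop_size)]
            neighbours = {i: [(i - 1) % pop_size, (i + 1) % pop_size] for i in range(pop_size)}
            for _ in range(generations):
                i = random.randrange(pop_size)                    # site of a local mating event
                j = random.choice(neighbours[i])
                child = mutate(crossover(pop[i], pop[j]))
                # local selection: replace the worse parent if the child is at least as fit
                worse = i if fitness(pop[i]) < fitness(pop[j]) else j
                if fitness(child) >= fitness(pop[worse]):
                    pop[worse] = child
            return max(pop, key=fitness)

        # Toy usage: maximize -sum(x^2) over a 4-dimensional real vector.
        best = gbea(fitness=lambda x: -sum(v * v for v in x),
                    make_individual=lambda: [random.uniform(-5, 5) for _ in range(4)],
                    crossover=lambda a, b: [(u + v) / 2 for u, v in zip(a, b)],
                    mutate=lambda x: [v + random.gauss(0, 0.1) for v in x])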

  1. Development and evaluation of an algorithm for computer analysis of maternal heart rate during labor.

    Science.gov (United States)

    Pinto, Paula; Bernardes, João; Costa-Santos, Cristina; Amorim-Costa, Célia; Silva, Maria; Ayres-de-Campos, Diogo

    2014-06-01

    Maternal heart rate (MHR) recordings are morphologically similar and sometimes coincident with fetal heart rate (FHR) recordings and may be useful for maternal-fetal monitoring if appropriately interpreted. However, similarly to FHR, visual interpretation of MHR features may be poorly reproducible. A computer algorithm for on-line MHR analysis was developed based on a previously existing version for FHR analysis. Inter-observer and computer-observer agreement and reliability were assessed in 40 one-hour recordings obtained from 20 women during the last 2h of labor. Agreement and reliability were evaluated for the detection of basal MHR, long-term variability (LTV), accelerations and decelerations, using proportions of agreement (PA) and Kappa statistic (K), with 95% confidence intervals (95% CI). Changes in MHR characteristics between the first and the second hour of the tracings were also evaluated. There was a statistically significant inter-observer and computer-observer agreement and reliability in estimation of basal MHR, accelerations, decelerations and LTV, with PA values ranging from 0.72 (95% CI: 0.62-0.79) to 1.00 (95% CI: 0.99-1.00), and K values ranging from 0.44 (95% CI: 0.28-0.60) to 0.89 (95% CI: 0.82-0.96). Moreover, basal MHR, number of accelerations and LTV were significantly higher in the last hour of labor, when compared to the initial hour. The developed algorithm for on-line computer analysis of MHR recordings provided good to excellent computer-observer agreement and reliability. Moreover, it allowed an objective detection of MHR changes associated with labor progression, providing more information about the interpretation of maternal-fetal monitoring during labor. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Development of a Near-Real Time Hail Damage Swath Identification Algorithm for Vegetation

    Science.gov (United States)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Lori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    The Midwest is home to one of the world's largest agricultural growing regions. Between late May and early September, with irrigation and seasonal rainfall, these crops are able to reach their full maturity. Using moderate- to high-resolution remote sensors, the vegetation can be monitored using the red and near-infrared wavelengths. These wavelengths allow for the calculation of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI). The vegetation growth and greenness in this region evolve fairly uniformly as the growing season progresses. However, one of the biggest threats to Midwest vegetation during this period is thunderstorms that bring large hail and damaging winds. Hail and wind damage to crops can be very expensive to crop growers, and damage can be spread over long swaths associated with the tracks of the damaging storms. Damage to the vegetation is apparent in remotely sensed imagery: changes may appear slowly over time as slightly damaged crops wilt, or be more readily apparent if the storms strip material from the crops or destroy them completely. Previous work on identifying these hail damage swaths used manual interpretation of moderate- and higher-resolution satellite imagery. With the development of an automated and near-real-time hail damage swath identification algorithm, detection can be improved and more damage indicators can be created in a faster and more efficient way. The automated detection of hail damage swaths will examine short-term, large changes in the vegetation by differencing near-real-time eight-day NDVI composites and comparing them to post-storm imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Terra and Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi NPP. In addition, land surface temperatures from these instruments will be examined as
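
    A minimal numerical sketch of the differencing step is given below (Python/NumPy): compute NDVI from red and near-infrared reflectance arrays, difference the pre-storm composite against post-storm imagery, and flag large short-term drops. The threshold is a placeholder; the operational algorithm works on MODIS/VIIRS eight-day composites rather than bare arrays.

        import numpy as np

        def ndvi(red, nir):
            """Normalized Difference Vegetation Index from red and near-infrared reflectance."""
            return (nir - red) / (nir + red + 1e-9)          # small epsilon avoids division by zero

        def hail_damage_mask(red_pre, nir_pre, red_post, nir_post, drop_threshold=0.2):
            """Flag pixels whose NDVI dropped sharply between the pre-storm composite and the
            post-storm image; an illustrative stand-in for the near-real-time swath detection."""
            delta = ndvi(red_pre, nir_pre) - ndvi(red_post, nir_post)
            return delta > drop_threshold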

  3. Innovative approach in the development of computer assisted algorithm for spine pedicle screw placement.

    Science.gov (United States)

    Solitro, Giovanni F; Amirouche, Farid

    2016-04-01

    Pedicle screws are typically used for fusion, percutaneous fixation, and as a means of gripping a spinal segment. The screws act as rigid and stable anchor points to bridge and connect with a rod as part of a construct. The foundation of the fusion is directly related to the placement of these screws. Malposition of pedicle screws causes intraoperative complications such as pedicle fractures and dural lesions and is a contributing factor to fusion failure. Computer assisted spine surgery (CASS) and patient-specific drill templates were developed to reduce this failure rate, but the trajectory of the screws remains a decision driven by anatomical landmarks often not easily defined. Current data show the need for a robust and reliable technique that prevents screw misplacement. Furthermore, there is a need to enhance screw insertion guides to overcome the distortion of anatomical landmarks, which is viewed as a limiting factor by current techniques. The objective of this study is to develop a method and mathematical lemmas that are fundamental to the development of computer algorithms for pedicle screw placement. Using the proposed methodology, we show how we can generate automated optimal safe screw insertion trajectories based on the identification of a set of intrinsic parameters. The results, obtained from the validation of the proposed method on two full thoracic segments, are similar to previous morphological studies. The simplicity of the method, being pedicle arch based, is applicable to vertebrae where landmarks are either not well defined, altered or distorted.

  4. Prediction system of hydroponic plant growth and development using algorithm Fuzzy Mamdani method

    Science.gov (United States)

    Sudana, I. Made; Purnawirawan, Okta; Arief, Ulfa Mediaty

    2017-03-01

    Hydroponics is a method of farming without soil. One hydroponic plant is watercress (Nasturtium officinale). The development and growth of hydroponic watercress are influenced by nutrient levels, acidity and temperature. These independent variables can be used as the system's input variables to predict the level of plant growth and development. The prediction system uses the Mamdani fuzzy inference method. The system was built to implement a Fuzzy Inference System (FIS) as part of the Fuzzy Logic Toolbox (FLT) using MATLAB R2007b. An FIS is a computing system that works on the principle of fuzzy reasoning, which is similar to human reasoning. Basically, an FIS consists of four units: a fuzzification unit, a fuzzy logic reasoning unit, a knowledge base unit and a defuzzification unit. In addition, the effect of the independent variables on plant growth and development is visualized with the three-dimensional FIS output surface diagram, and statistical tests of the prediction system's data were performed with multiple linear regression, including multiple linear regression analysis, the T test, the F test, the coefficient of determination and predictor contributions, calculated using the SPSS (Statistical Product and Service Solutions) software.
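
    For illustration, a tiny Mamdani-style inference step is sketched below in plain Python/NumPy (triangular membership functions, min for rule firing, max aggregation, centroid defuzzification). The membership breakpoints and the two example rules are invented for the sketch; they are not the rule base of the paper, which was built in the MATLAB Fuzzy Logic Toolbox.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with feet a, c and peak b."""
            return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

        def mamdani_growth(nutrient_ppm, ph, temp_c):
            """Two illustrative rules predicting a 0-10 'growth' score for watercress."""
            y = np.linspace(0.0, 10.0, 201)                  # output universe of discourse
            # Rule 1: IF nutrients adequate AND pH near-neutral AND temperature mild THEN growth high
            w1 = min(tri(nutrient_ppm, 800, 1200, 1600), tri(ph, 6.0, 6.8, 7.6), tri(temp_c, 15, 22, 28))
            # Rule 2: IF nutrients low OR temperature hot THEN growth low
            w2 = max(tri(nutrient_ppm, 0, 300, 700), tri(temp_c, 27, 35, 45))
            # Mamdani implication (min) and aggregation (max) of the clipped output sets
            agg = np.maximum(np.minimum(w1, tri(y, 5, 8, 10)), np.minimum(w2, tri(y, 0, 2, 5)))
            return float((y * agg).sum() / (agg.sum() + 1e-9))   # centroid defuzzification

        print(mamdani_growth(nutrient_ppm=1100, ph=6.9, temp_c=21))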

  5. Requesting wrist radiographs in emergency department triage: developing a training program and diagnostic algorithm.

    Science.gov (United States)

    Streppa, Joanna; Schneidman, Valerie; Biron, Alain D

    2014-01-01

    Crowding is extremely problematic in Canada, as emergency department (ED) utilization is considerably higher than in any other country. Consequently, an increase has been noted in waiting times for patients who present with injuries of lesser acuity such as wrist injuries. Wrist fractures are the most common type of broken bone in patients younger than 65 years. Many nurses employed within EDs request wrist radiographs for patients who present with wrist complaints as a norm within their working practice. Significant potential advantages can ensue if EDs adopt a triage nurse-requested radiographic protocol; patients benefited from a significant time saving of 36% in ED length of stay when nurses initiated radiographs in triage (M. Lindley-Jones & B. J. Finlayson, 2000). In addition, the literature suggests that increased rates of patient and staff satisfaction may be achieved without compromising the quality of the radiographic request or the quality of service (W. Parris, S. McCarthy, A. M. Kelly, & S. Richardson, 1997). Studies have shown that nurses are capable of requesting appropriate radiographs on the basis of a preset protocol. As there is no standardized set of rules for assessing patients presenting with suspected wrist fractures, a training program as well as a diagnostic algorithm was developed to prepare emergency nurses to appropriately request wrist radiographs. The triage nurse-specific training program includes the following topics: wrist anatomy and physiology, commonly occurring wrist injuries, mechanisms of injury, physical assessment techniques, and types of radiographic images required. The triage nurse algorithm includes the clinical decision-making process. Providing triage nurses with up-to-date evidence-based educational material not only allowed triage nurses to independently assess and request wrist radiographs for patients with potential wrist fractures but also strengthened the link between competent nursing care and better patient

  6. Development of the Tardivo Algorithm to Predict Amputation Risk of Diabetic Foot.

    Directory of Open Access Journals (Sweden)

    João Paulo Tardivo

    Full Text Available Diabetes is a chronic disease that affects almost 19% of the elderly population in Brazil and similar percentages around the world. Amputation of lower limbs in diabetic patients who present foot complications is a common occurrence, with a significant reduction in quality of life and heavy costs on the health system. Unfortunately, there is no easy protocol to define the conditions that should be considered before proceeding to amputation. The main objective of the present study is to create a simple prognostic score to evaluate the diabetic foot, which is called the Tardivo Algorithm. Calculation of the score is based on three main factors: Wagner classification, signs of peripheral arterial disease (PAD), which is evaluated using the Peripheral Arterial Disease Classification, and the location of ulcers. The final score is obtained by multiplying the values of the individual factors. Patients with good peripheral vascularization received a value of 1, while clinical signs of ischemia received a value of 2 (PAD 2). Ulcer location was defined as forefoot, midfoot and hindfoot. The conservative treatment used in patients with scores below 12 was based on a recently developed Photodynamic Therapy (PDT) protocol; 85.5% of these patients presented a good outcome and avoided amputation. The results showed that scores of 12 or higher represented a significantly higher probability of amputation (odds ratio and logistic regression, 95% CI 12.2-1886.5). The Tardivo Algorithm is a simple prognostic score for the diabetic foot, easily accessible by physicians. It helps to determine the amputation risk and the best treatment, whether conservative or surgical management.
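
    The multiplicative rule described above translates directly into code; in the Python sketch below the Wagner grade and PAD factor follow the abstract, while the numeric weights for ulcer location are hypothetical placeholders because the abstract does not give them.

        def tardivo_score(wagner_grade, ischemia_present, ulcer_location):
            """Tardivo Algorithm score = Wagner grade x PAD factor x location factor.
            PAD factor: 1 = good peripheral vascularization, 2 = clinical signs of ischemia.
            Location weights are hypothetical placeholders (not given in the abstract)."""
            location_factor = {"forefoot": 1, "midfoot": 2, "hindfoot": 3}[ulcer_location]
            pad_factor = 2 if ischemia_present else 1
            score = wagner_grade * pad_factor * location_factor
            plan = "conservative (e.g. PDT protocol)" if score < 12 else "high amputation risk"
            return score, plan

        print(tardivo_score(wagner_grade=3, ischemia_present=True, ulcer_location="midfoot"))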

  7. An algorithm to improve diagnostic accuracy in diabetes in computerised problem orientated medical records (POMR compared with an established algorithm developed in episode orientated records (EOMR

    Directory of Open Access Journals (Sweden)

    Simon de Lusignan

    2015-06-01

    Full Text Available An algorithm that detects errors in the diagnosis, classification or coding of diabetes in primary care computerised medical record (CMR) systems is currently available. However, this was developed on CMR systems that are “episode orientated” medical records (EOMR) and don't force the user to always code a problem or link data to an existing one. More strictly problem orientated medical record (POMR) systems mandate recording a problem and linking consultation data to it.

  8. Calibration and Algorithm Development for Estimation of Nitrogen in Wheat Crop Using Tractor Mounted N-Sensor

    Directory of Open Access Journals (Sweden)

    Manjeet Singh

    2015-01-01

    Full Text Available The experiment was planned to investigate a tractor-mounted N-sensor (make: Yara International) to predict nitrogen (N) for a wheat crop under different nitrogen levels. It was observed that, for the tractor-mounted N-sensor, the spectrometers can scan about 32% of the total area of the crop under consideration. An algorithm was developed using a linear relationship between the sensor sufficiency index (SI_sensor) and SI_SPAD to calculate the N application rate (N_app) as a function of SI_SPAD. There was a strong correlation among the sensor attributes (sensor value, sensor biomass, and sensor NDVI) and the different N-levels. It was concluded that the tillering stage is the most prominent stage for predicting crop yield from the sensor attributes, as compared to the other stages. The algorithms developed for the tillering and booting stages are useful for the prediction of N-application rates for the wheat crop. N-application rates predicted by the developed algorithm and by the sensor value were almost the same for plots with different levels of N applied.

  9. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records.

    Science.gov (United States)

    MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A

    2015-08-21

    To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate clinical narrative entered as free text, diagnostic (Read) codes created and medications prescribed on the day of the consultation. Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008-31 December 2013 for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three 'gold standard' sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate judgements of expert clinicians within the 1200 record gold standard validation set. The algorithm was able to identify respiratory consultations in the 1200 record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1

  10. Planck pre-launch status: The optical system

    DEFF Research Database (Denmark)

    Tauber, J. A.; Nørgaard-Nielsen, Hans Ulrik; Ade, P. A. R.

    2010-01-01

    Planck is a scientific satellite that represents the next milestone in space-based research related to the cosmic microwave background, and in many other astrophysical fields. Planck was launched on 14 May of 2009 and is now operational. The uncertainty in the optical response of its detectors...

  11. Planck pre-launch status: The Planck mission

    DEFF Research Database (Denmark)

    Tauber, J. A.; Mandoles, N.; Puget, J.-L.

    2010-01-01

    The European Space Agency's Planck satellite, launched on 14 May 2009, is the third-generation space experiment in the field of cosmic microwave background (CMB) research. It will image the anisotropies of the CMB over the whole sky, with unprecedented sensitivity (~2 × 10⁻⁶) and angular...

  12. Planck pre-launch status: High Frequency Instrument polarization calibration

    CERN Document Server

    Rosset, C; Ponthieu, N; Ade, P; Catalano, A; Conversi, L; Couchot, F; Crill, B P; Désert, F -X; Ganga, K; Giard, M; Giraud-Héraud, Y; Haïssinski, J; Henrot-Versillé, S; Holmes, W; Jones, W C; Lamarre, J -M; Lange, A; Leroy, C; Macías-Pérez, J; Maffei, B; de Marcillac, P; Miville-Deschênes, M -A; Montier, L; Noviello, F; Pajot, F; Perdereau, O; Piacentini, F; Piat, M; Plaszczynski, S; Pointecouteau, E; Puget, J -L; Ristorcelli, I; Savini, G; Sudiwala, R; Veneziani, M; Yvon, D

    2010-01-01

    The High Frequency Instrument of Planck will map the entire sky in the millimeter and sub-millimeter domain from 100 to 857 GHz with unprecedented sensitivity to polarization ($\Delta P/T_{\mathrm{CMB}} \sim 4\cdot 10^{-6}$) at 100, 143, 217 and 353 GHz. It will lead to major improvements in our understanding of the Cosmic Microwave Background anisotropies and polarized foreground signals. Planck will make high resolution measurements of the $E$-mode spectrum (up to $\ell \sim 1500$) and will also play a prominent role in the search for the faint imprint of primordial gravitational waves on the CMB polarization. This paper addresses the effects of calibration of both temperature (gain) and polarization (polarization efficiency and detector orientation) on polarization measurements. The specific requirements on the polarization parameters of the instrument are set and we report on their pre-flight measurement on HFI bolometers. We present a semi-analytical method that exactly accounts for the scanning strategy of...

  13. Algorithmic Adventures

    CERN Document Server

    Hromkovic, Juraj

    2009-01-01

    Explores the science of computing. This book starts with the development of computer science, algorithms and programming, and then explains and shows how to exploit the concepts of infinity, computability, computational complexity, nondeterminism and randomness.

  14. Design and development of guidance navigation and control algorithms for spacecraft rendezvous and docking experimentation

    Science.gov (United States)

    Guglieri, Giorgio; Maroglio, Franco; Pellegrino, Pasquale; Torre, Liliana

    2014-01-01

    This paper presents the design of the GNC system of a ground test-bed for spacecraft rendezvous and docking experiments. The test-bed is developed within the STEPS project (Systems and Technologies for Space Exploration). The facility consists of a flat floor and two scaled vehicles, one active chaser and one “semi-active” target. Rendezvous and docking maneuvers are performed floating on the plane with pierced plates as lifting systems. The system is designed to work with both inertial and non-inertial reference frames, receiving signals from navigation sensors such as accelerometers, gyroscopes, a laser meter, a radio finder and a video camera, and combining them with a digital filter. A Proportional-Integral-Derivative control law and pulse width modulators are used to command the cold gas thrusters of the chaser and to follow an assigned trajectory with its specified velocity profile. The design and development of the guidance, navigation and control system and its architecture, including the software algorithms, are detailed in the paper, presenting a performance analysis based on a simulated environment. A complete description of the integrated subsystems is also presented.
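
    As a sketch of the control idea only (not the STEPS flight software), the Python fragment below shows a PID law whose continuous force request is converted into on/off cold-gas thruster commands by pulse-width modulation; the gains, PWM period, thrust level and the double-integrator plant are illustrative assumptions.

        class PID:
            """Discrete PID controller with rectangular integration."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral, self.prev_error = 0.0, 0.0

            def update(self, error):
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        def pwm_command(u, max_thrust, period, t):
            """Turn the thruster on for a fraction |u|/max_thrust of each PWM period;
            the sign selects which of the two opposing thrusters fires."""
            duty = min(abs(u) / max_thrust, 1.0)
            on = (t % period) < duty * period
            return (1 if u > 0 else -1) if on else 0

        # Illustrative closed loop: track a 1 m position setpoint along one axis of the flat floor.
        pid = PID(kp=2.0, ki=0.05, kd=4.0, dt=0.01)
        pos, vel, setpoint, mass, max_thrust = 0.0, 0.0, 1.0, 2.0, 1.0
        for step in range(2000):
            t = step * pid.dt
            u = pid.update(setpoint - pos)
            thrust = pwm_command(u, max_thrust, period=0.5, t=t) * max_thrust
            vel += (thrust / mass) * pid.dt              # double-integrator model of the chaser
            pos += vel * pid.dt
        print(round(pos, 3))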

  15. Development of a Robotic Colonoscopic Manipulation System, Using Haptic Feedback Algorithm.

    Science.gov (United States)

    Woo, Jaehong; Choi, Jae Hyuk; Seo, Jong Tae; Kim, Tae Il; Yi, Byung Ju

    2017-01-01

    Colonoscopy is one of the most effective diagnostic and therapeutic tools for colorectal diseases. We aim to propose a master-slave robotic colonoscopy that is controllable in remote site using conventional colonoscopy. The master and slave robot were developed to use conventional flexible colonoscopy. The robotic colonoscopic procedure was performed using a colonoscope training model by one expert endoscopist and two unexperienced engineers. To provide the haptic sensation, the insertion force and the rotating torque were measured and sent to the master robot. A slave robot was developed to hold the colonoscopy and its knob, and perform insertion, rotation, and two tilting motions of colonoscope. A master robot was designed to teach motions of the slave robot. These measured force and torque were scaled down by one tenth to provide the operator with some reflection force and torque at the haptic device. The haptic sensation and feedback system was successful and helpful to feel the constrained force or torque in colon. The insertion time using robotic system decreased with repeated procedures. This work proposed a robotic approach for colonoscopy using haptic feedback algorithm, and this robotic device would effectively perform colonoscopy with reduced burden and comparable safety for patients in remote site.

  16. Development of a Robotic Colonoscopic Manipulation System, Using Haptic Feedback Algorithm

    Science.gov (United States)

    Woo, Jaehong; Choi, Jae Hyuk; Seo, Jong Tae

    2017-01-01

    Purpose Colonoscopy is one of the most effective diagnostic and therapeutic tools for colorectal diseases. We aim to propose a master-slave robotic colonoscopy that is controllable in remote site using conventional colonoscopy. Materials and Methods The master and slave robot were developed to use conventional flexible colonoscopy. The robotic colonoscopic procedure was performed using a colonoscope training model by one expert endoscopist and two unexperienced engineers. To provide the haptic sensation, the insertion force and the rotating torque were measured and sent to the master robot. Results A slave robot was developed to hold the colonoscopy and its knob, and perform insertion, rotation, and two tilting motions of colonoscope. A master robot was designed to teach motions of the slave robot. These measured force and torque were scaled down by one tenth to provide the operator with some reflection force and torque at the haptic device. The haptic sensation and feedback system was successful and helpful to feel the constrained force or torque in colon. The insertion time using robotic system decreased with repeated procedures. Conclusion This work proposed a robotic approach for colonoscopy using haptic feedback algorithm, and this robotic device would effectively perform colonoscopy with reduced burden and comparable safety for patients in remote site. PMID:27873506

  17. Development and Implementation of an Advanced Power Management Algorithm for Electronic Load Sensing on a Telehandler

    DEFF Research Database (Denmark)

    Hansen, Rico Hjerm; Andersen, Torben Ole; Pedersen, Henrik C.

    2010-01-01

    , flow-sharing, prioritization of steering, anti-stall and high pressure protection into electronics. In order to implement these features, the paper presents and tests a general power management algorithm for a telehandler. The algorithm is capable of implementing the above features, while also handling...

  18. Development of an Innovative Algorithm for Aerodynamics-Structure Interaction Using Lattice Boltzmann Method

    Science.gov (United States)

    Mei, Ren-Wei; Shyy, Wei; Yu, Da-Zhi; Luo, Li-Shi; Rudy, David (Technical Monitor)

    2001-01-01

    The lattice Boltzmann equation (LBE) is a kinetic formulation which offers an alternative computational method capable of solving fluid dynamics for various systems. Major advantages of the method are owing to the fact that the solution for the particle distribution functions is explicit, easy to implement, and the algorithm is natural to parallelize. In this final report, we summarize the works accomplished in the past three years. Since most works have been published, the technical details can be found in the literature. Brief summary will be provided in this report. In this project, a second-order accurate treatment of boundary condition in the LBE method is developed for a curved boundary and tested successfully in various 2-D and 3-D configurations. To evaluate the aerodynamic force on a body in the context of LBE method, several force evaluation schemes have been investigated. A simple momentum exchange method is shown to give reliable and accurate values for the force on a body in both 2-D and 3-D cases. Various 3-D LBE models have been assessed in terms of efficiency, accuracy, and robustness. In general, accurate 3-D results can be obtained using LBE methods. The 3-D 19-bit model is found to be the best one among the 15-bit, 19-bit, and 27-bit LBE models. To achieve desired grid resolution and to accommodate the far field boundary conditions in aerodynamics computations, a multi-block LBE method is developed by dividing the flow field into various blocks each having constant lattice spacing. Substantial contribution to the LBE method is also made through the development of a new, generalized lattice Boltzmann equation constructed in the moment space in order to improve the computational stability, detailed theoretical analysis on the stability, dispersion, and dissipation characteristics of the LBE method, and computational studies of high Reynolds number flows with singular gradients. Finally, a finite difference-based lattice Boltzmann method is

  19. Research on the development of the particle swarm optimization algorithm

    Institute of Scientific and Technical Information of China (English)

    黄文秀

    2014-01-01

    Particle swarm optimization (PSO) is an emerging heuristic global search algorithm based on swarm intelligence. The algorithm is conceptually simple, easy to implement and program, converges quickly and has few parameters to set; in recent years it has received extensive academic research and application. This paper first introduces the basic principle and working mechanism of PSO, then focuses on improvements to the algorithm and its applications, and finally discusses the prospects for its future development.
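
    Since the record above is a survey, a minimal reference implementation of the basic global-best PSO update is included here for orientation; the inertia weight and acceleration coefficients are typical textbook values, and the sphere function is only a placeholder objective.

        import numpy as np

        def pso(objective, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Basic global-best particle swarm optimization (minimization)."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, size=(n_particles, dim))      # positions
            v = np.zeros_like(x)                                  # velocities
            pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([objective(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, float(pbest_val.min())

        print(pso(lambda p: float(np.sum(p ** 2)), dim=5, bounds=(-10.0, 10.0)))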

  20. Development and Performance Analysis of a Lossless Data Reduction Algorithm for VoIP

    Directory of Open Access Journals (Sweden)

    Syed Misbahuddin

    2014-01-01

    Full Text Available VoIP (Voice over IP) is becoming an alternative way of carrying voice communications over the Internet. To better utilize voice call bandwidth, some standard compression algorithms are applied in VoIP systems. However, these algorithms affect the voice quality at high compression ratios. This paper presents a lossless data reduction technique to improve the VoIP data transfer rate over the IP network. The proposed algorithm exploits the data redundancies in digitized VFs (Voice Frames) generated by VoIP systems. The performance of the proposed data reduction algorithm is presented in terms of compression ratio. The proposed algorithm helps retain the voice quality along with the improvement in VoIP data transfer rates.
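
    The abstract does not spell out the reduction scheme, so the Python sketch below shows one generic lossless way to exploit redundancy between successive voice frames (byte-wise XOR delta against the previous frame, then run-length encoding of the zero bytes); it illustrates the idea and is not the authors' algorithm. Frames are assumed to have a fixed, equal size.

        def delta_rle_encode(frame: bytes, prev_frame: bytes) -> bytes:
            """Lossless reduction of a digitized voice frame: XOR against the previous frame,
            then run-length encode the zero runs that appear when consecutive frames are similar."""
            delta = bytes(a ^ b for a, b in zip(frame, prev_frame))
            out, i = bytearray(), 0
            while i < len(delta):
                if delta[i] == 0:                       # a run of zero (unchanged) bytes
                    run = 1
                    while i + run < len(delta) and delta[i + run] == 0 and run < 255:
                        run += 1
                    out += bytes([0, run])              # marker byte 0 followed by the run length
                    i += run
                else:                                   # literal non-zero byte (never 0, so unambiguous)
                    out.append(delta[i])
                    i += 1
            return bytes(out)

        def compression_ratio(frame: bytes, prev_frame: bytes) -> float:
            return len(frame) / max(len(delta_rle_encode(frame, prev_frame)), 1)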

  1. Utilization of Ancillary Data Sets for Conceptual SMAP Mission Algorithm Development and Product Generation

    Science.gov (United States)

    O'Neill, P.; Podest, E.

    2011-01-01

    The planned Soil Moisture Active Passive (SMAP) mission is one of the first Earth observation satellites being developed by NASA in response to the National Research Council's Decadal Survey, Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond [1]. Scheduled to launch late in 2014, the proposed SMAP mission would provide high resolution and frequent revisit global mapping of soil moisture and freeze/thaw state, utilizing enhanced Radio Frequency Interference (RFI) mitigation approaches to collect new measurements of the hydrological condition of the Earth's surface. The SMAP instrument design incorporates an L-band radar (3 km) and an L band radiometer (40 km) sharing a single 6-meter rotating mesh antenna to provide measurements of soil moisture and landscape freeze/thaw state [2]. These observations would (1) improve our understanding of linkages between the Earth's water, energy, and carbon cycles, (2) benefit many application areas including numerical weather and climate prediction, flood and drought monitoring, agricultural productivity, human health, and national security, (3) help to address priority questions on climate change, and (4) potentially provide continuity with brightness temperature and soil moisture measurements from ESA's SMOS (Soil Moisture Ocean Salinity) and NASA's Aquarius missions. In the planned SMAP mission prelaunch time frame, baseline algorithms are being developed for generating (1) soil moisture products both from radiometer measurements on a 36 km grid and from combined radar/radiometer measurements on a 9 km grid, and (2) freeze/thaw products from radar measurements on a 3 km grid. These retrieval algorithms need a variety of global ancillary data, both static and dynamic, to run the retrieval models, constrain the retrievals, and provide flags for indicating retrieval quality. The choice of which ancillary dataset to use for a particular SMAP product would be based on a number of factors

  2. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy

    Directory of Open Access Journals (Sweden)

    Hualiang Zhong

    2016-01-01

    Full Text Available Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms by 1.8% at the center of the tumor volume and by 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well to the TLD measurements with their differences <2.5% at four measured locations. When the VelocityAI DVF was used, their difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements which are slightly over the rate of 5% for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung

  3. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy.

    Science.gov (United States)

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass-and energy congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms by 1.8% at the center of the tumor volume and by 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well to the TLD measurements with their differences <2.5% at four measured locations. When the VelocityAI DVF was used, their difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements which are slightly over the rate of 5% for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting

  5. Development of an algorithm for heartbeats detection and classification in Holter records based on temporal and morphological features

    Science.gov (United States)

    García, A.; Romano, H.; Laciar, E.; Correa, R.

    2011-12-01

    In this work a detection and classification algorithm for heartbeat analysis in Holter records was developed. First, a QRS complex detector was implemented and the temporal and morphological characteristics of the detected complexes were extracted. A vector was built with these features; this vector is the input of the classification module, based on discriminant analysis. The beats were classified into three groups: Premature Ventricular Contraction beat (PVC), Atrial Premature Contraction beat (APC) and Normal Beat (NB). These beat categories represent the most important groups of commercial Holter systems. The developed algorithms were evaluated on 76 ECG records from two validated open-access databases, the "MIT BIH arrhythmia database" and the "MIT BIH supraventricular arrhythmias database". A total of 166,343 beats were detected and analyzed; the QRS detection algorithm provides a sensitivity of 99.69 % and a positive predictive value of 99.84 %. The classification stage gives sensitivities of 97.17% for NB, 97.67% for PVC and 92.78% for APC.
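
    A minimal sketch of the classification stage described above: per-beat feature vectors (synthetic here, standing in for the temporal and morphological features) are classified into NB, PVC and APC with linear discriminant analysis, and per-class sensitivity and positive predictive value are computed from the confusion matrix. The feature definitions and class statistics are assumptions for illustration, not the authors'.

```python
# Discriminant-analysis classification of per-beat feature vectors into NB/PVC/APC,
# with per-class sensitivity and positive predictive value. Synthetic features only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
classes = ["NB", "PVC", "APC"]
# Hypothetical 4-D features per beat (e.g. RR interval, QRS width, R amplitude, QRS area).
means = {"NB": [0.80, 0.09, 1.0, 0.20], "PVC": [0.60, 0.16, 1.4, 0.35], "APC": [0.55, 0.09, 0.9, 0.18]}
X = np.vstack([rng.normal(means[c], 0.05, size=(300, 4)) for c in classes])
y = np.repeat(classes, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
cm = confusion_matrix(y_te, clf.predict(X_te), labels=classes)

for i, c in enumerate(classes):
    sensitivity = cm[i, i] / cm[i].sum()     # recall for class c
    ppv = cm[i, i] / cm[:, i].sum()          # positive predictive value for class c
    print(f"{c}: sensitivity={sensitivity:.3f}  PPV={ppv:.3f}")
```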

  6. Development of tight-binding based GW algorithm and its computational implementation for graphene

    Energy Technology Data Exchange (ETDEWEB)

    Majidi, Muhammad Aziz [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore); Naradipa, Muhammad Avicenna, E-mail: muhammad.avicenna11@ui.ac.id; Phan, Wileam Yonatan; Syahroni, Ahmad [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); Rusydi, Andrivo [NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore)

    2016-04-19

    Graphene has been a hot subject of research in the last decade as it holds promise for various applications. One interesting issue is whether or not graphene should be classified as a strongly or weakly correlated system, as the optical properties may change depending on several factors, such as the substrate, voltage bias, adatoms, etc. As the Coulomb repulsive interactions among electrons can generate the correlation effects that may modify the single-particle spectra (density of states) and the two-particle spectra (optical conductivity) of graphene, we aim to explore such interactions in this study. The understanding of such correlation effects is important because eventually they play an important role in inducing the effective attractive interactions between electrons and holes that bind them into excitons. We do this study theoretically by developing a GW method implemented on the basis of the tight-binding (TB) model Hamiltonian. Unlike the well-known GW method developed within the density functional theory (DFT) framework, our TB-based GW implementation may serve as an alternative technique suitable for systems whose Hamiltonian is to be constructed through a tight-binding or similar model. This study includes theoretical formulation of the Green’s function G, the renormalized interaction function W from the random phase approximation (RPA), and the corresponding self-energy derived from Feynman diagrams, as well as the development of the algorithm to compute those quantities. As an evaluation of the method, we perform calculations of the density of states and the optical conductivity of graphene, and analyze the results.
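
    For reference, the quantities named in the abstract are tied together by the standard GW/RPA relations, written here schematically (frequency and momentum convolutions and prefactor conventions omitted); this is the generic textbook form, not the authors' tight-binding-specific derivation.

```latex
% Schematic GW/RPA relations (standard Hedin-type form; convolutions and
% convention-dependent prefactors omitted).
\begin{align}
  G      &= G_0 + G_0\,\Sigma\,G                                   && \text{(Dyson equation for the Green's function)}\\
  \chi_0 &= -\,i\,G\,G                                             && \text{(RPA irreducible polarizability)}\\
  W      &= v + v\,\chi_0\,W \;=\; \bigl(1 - v\,\chi_0\bigr)^{-1}v && \text{(screened Coulomb interaction)}\\
  \Sigma &= i\,G\,W                                                && \text{(GW self-energy)}
\end{align}
```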

  7. Development and Validation of a Portable Platform for Deploying Decision-Support Algorithms in Prehospital Settings

    Science.gov (United States)

    Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.

    2013-01-01

    Summary Background Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods We describe the hardware selected and the software implemented, and the procedures used for laboratory and field testing. Results The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction) and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791

  8. X-band Observations of Waves, Algorithm Development, and Validation High Resolution Wave-Air-Sea Interaction DRI

    Science.gov (United States)

    2012-09-30

    measure wind speed and direction (Jochen Horstman, NURC), identify ocean surface fronts, develop wave breaking detection software, develop ocean...5. Provided X-Band radar data, both FLIP and Sproul, to Jochen Horstman at NURC for use in wind retrieval algorithm development. 6. Completed...processing of SIO MET buoy data for sea surface atmospheric conditions. Provided data to Jochen Horstman at NURC. 7. Helped define “grand

  9. Cognitive Development Optimization Algorithm Based Support Vector Machines for Determining Diabetes

    Directory of Open Access Journals (Sweden)

    Utku Kose

    2016-03-01

    Full Text Available The definition, diagnosis and classification of Diabetes Mellitus and its complications are very important. First of all, the World Health Organization (WHO) and other societies, as well as scientists have done lots of studies regarding this subject. One of the most important research interests of this subject is the computer supported decision systems for diagnosing diabetes. In such systems, Artificial Intelligence techniques are often used for several disease diagnostics to streamline the diagnostic process in daily routine and avoid misdiagnosis. In this study, a diabetes diagnosis system, which is formed via both Support Vector Machines (SVM) and Cognitive Development Optimization Algorithm (CoDOA) has been proposed. Along the training of SVM, CoDOA was used for determining the sigma parameter of the Gauss (RBF) kernel function, and eventually, a classification process was made over the diabetes data set, which is related to Pima Indians. The proposed approach offers an alternative solution to the field of Artificial Intelligence-based diabetes diagnosis, and contributes to the related literature on diagnosis processes.
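
    The details of CoDOA are not given in the abstract; as a stand-in, the sketch below shows the surrounding pipeline, an RBF-kernel SVM whose kernel-width parameter is chosen by a plain search over candidate values and scored by cross-validation, on synthetic data shaped like the Pima Indians set. Any optimizer, including CoDOA, could replace the simple search.

```python
# RBF-kernel SVM with the kernel-width parameter chosen by a plain candidate search.
# The paper optimizes this parameter with CoDOA; the grid search below is only a stand-in.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 8-feature Pima Indians diabetes data (768 samples).
X, y = make_classification(n_samples=768, n_features=8, n_informative=5, random_state=0)

best_gamma, best_acc = None, -np.inf
for gamma in np.logspace(-3, 1, 20):               # candidate kernel widths
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=gamma, C=1.0))
    acc = cross_val_score(model, X, y, cv=5).mean()
    if acc > best_acc:
        best_gamma, best_acc = gamma, acc

print(f"selected gamma={best_gamma:.4g}, cross-validated accuracy={best_acc:.3f}")
```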

  10. Inversion model validation of ground emissivity. Contribution to the development of SMOS algorithm

    CERN Document Server

    Demontoux, François; Ruffié, Gilles; Wigneron, Jean Pierre; Grant, Jennifer; Hernandez, Daniel Medina

    2007-01-01

    SMOS (Soil Moisture and Ocean Salinity), is the second mission of 'Earth Explorer' to be developed within the program 'Living Planet' of the European Space Agency (ESA). This satellite, containing the very first 1.4GHz interferometric radiometer 2D, will carry out the first cartography on a planetary scale of the moisture of the grounds and the salinity of the oceans. The forests are relatively opaque, and the knowledge of moisture remains problematic. The effect of the vegetation can be corrected thanks a simple radiative model. Nevertheless simulations show that the effect of the litter on the emissivity of a system litter + ground is not negligible. Our objective is to highlight the effects of this layer on the total multi layer system. This will make it possible to lead to a simple analytical formulation of a model of litter which can be integrated into the calculation algorithm of SMOS. Radiometer measurements, coupled to dielectric characterizations of samples in laboratory can enable us to characterize...

  11. Development of Ray Tracing Algorithms for Scanning Plane and Transverse Plane Analysis for Satellite Multibeam Application

    Directory of Open Access Journals (Sweden)

    N. H. Abd Rahman

    2014-01-01

    Full Text Available Reflector antennas have been widely used in many areas. In the implementation of parabolic reflector antenna for broadcasting satellite applications, it is essential for the spacecraft antenna to provide precise contoured beam to effectively serve the required region. For this purpose, combinations of more than one beam are required. Therefore, a tool utilizing ray tracing method is developed to calculate precise off-axis beams for multibeam antenna system. In the multibeam system, each beam will be fed from different feed positions to allow the main beam to be radiated at the exact direction on the coverage area. Thus, detailed study on caustics of a parabolic reflector antenna is performed and presented in this paper, which is to investigate the behaviour of the rays and its relation to various antenna parameters. In order to produce accurate data for the analysis, the caustic behaviours are investigated in two distinctive modes: scanning plane and transverse plane. This paper presents the detailed discussions on the derivation of the ray tracing algorithms, the establishment of the equations of caustic loci, and the verification of the method through calculation of radiation pattern.

  13. Development of Efficient Resource Allocation Algorithm in Chunk Based OFDMA System

    Directory of Open Access Journals (Sweden)

    Yadav Mukesh Kumar

    2016-01-01

    Full Text Available The emerging demand for diverse data applications in next-generation wireless networks entails both high-data-rate wireless connections and intelligent multiuser scheduling designs. An orthogonal frequency division multiple access based system is capable of delivering high data rates and can operate in a multipath environment. An OFDMA-based system divides the entire channel into many orthogonal narrowband subcarriers, which helps eliminate the inter-symbol interference that limits the total available data rate. This paper investigates the resource allocation problem for chunk-based Orthogonal Frequency Division Multiple Access (OFDMA) wireless multicast systems. It is assumed that the Base Station (BS) has multiple antennas in a Distributed Antenna System (DAS). The allocation unit is a group of contiguous subcarriers (a chunk) in conventional OFDMA systems. The aim of this investigation is to develop an efficient resource allocation algorithm that maximizes the total throughput and minimizes the average outage probability over a chunk with respect to the average Bit Error Rate (BER) and the total available power.
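
    As one concrete, much-simplified illustration of chunk-based allocation (not the algorithm proposed in the paper), the sketch below assigns each chunk of contiguous subcarriers to the user with the best average channel gain on that chunk and reports the resulting Shannon-capacity sum rate; equal power per subcarrier and the Rayleigh-fading gains are assumptions.

```python
# Simplified chunk-based OFDMA allocation: each chunk (group of contiguous subcarriers)
# goes to the user with the highest mean channel gain on it; equal power per subcarrier.
# Illustrative only -- not the optimization algorithm proposed in the paper.
import numpy as np

def allocate_chunks(gains, chunk_size, total_power, noise=1.0):
    """gains: array (n_users, n_subcarriers) of channel power gains."""
    n_users, n_sub = gains.shape
    n_chunks = n_sub // chunk_size
    p = total_power / n_sub                                 # equal power per subcarrier
    assignment = np.empty(n_chunks, dtype=int)
    throughput = 0.0
    for c in range(n_chunks):
        sl = slice(c * chunk_size, (c + 1) * chunk_size)
        chunk_gain = gains[:, sl].mean(axis=1)              # per-user average gain on this chunk
        u = int(chunk_gain.argmax())
        assignment[c] = u
        throughput += np.sum(np.log2(1.0 + p * gains[u, sl] / noise))  # bit/s/Hz
    return assignment, throughput

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gains = rng.exponential(1.0, size=(4, 64))              # Rayleigh-fading power gains
    assignment, rate = allocate_chunks(gains, chunk_size=8, total_power=64.0)
    print(assignment, f"sum rate = {rate:.1f} bit/s/Hz")
```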

  14. Development of Variational Guiding Center Algorithms for Parallel Calculations in Experimental Magnetic Equilibria

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, C. Leland [PPPL; Finn, J. M. [LANL; Qin, H. [PPPL; Tang, William M. [PPPL

    2014-10-01

    Structure-preserving algorithms obtained via discrete variational principles exhibit strong promise for the calculation of guiding center test particle trajectories. The non-canonical Hamiltonian structure of the guiding center equations forms a novel and challenging context for geometric integration. To demonstrate the practical relevance of these methods, a prototypical variational midpoint algorithm is applied to an experimental magnetic equilibrium. The stability characteristics, conservation properties, and implementation requirements associated with the variational algorithms are addressed. Furthermore, computational run time is reduced for large numbers of particles by parallelizing the calculation on GPU hardware.

  15. Development of Terrain Contour Matching Algorithm for the Aided Inertial Navigation Using Radial Basis Functions

    Science.gov (United States)

    Gong, Hyeon Cheol

    1998-06-01

    We study a terrain contour matching algorithm using Radial Basis Functions (RBFs) for an aided inertial navigation system for position fixing of aircraft, cruise missiles, or re-entry vehicles. A parameter optimization technique is used to update the parameters describing the characteristics of an area, with a modified Gaussian least-squares differential correction algorithm and a step-size limitation filter applied according to the amount of the updates. We have applied the algorithm to matching a sampled area with a target area, assuming that the area data are available from a Radar Terrain Sensor (RTS) and a Reference Altitude Sensor (RAS).
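
    A rough illustration of the two ingredients named above, a Gaussian-RBF representation of reference terrain and a matching step that locates a sampled profile on it, is sketched below. The brute-force offset search stands in for the paper's Gaussian least-squares differential correction, and all terrain data are synthetic.

```python
# Gaussian-RBF representation of reference terrain plus a crude contour-matching step:
# the sampled profile is located by brute-force search over candidate position offsets.
# Illustrative only; the paper uses a Gaussian least-squares differential correction instead.
import numpy as np

def gaussian_rbf_fit(centers, points, values, width):
    """Solve for weights w so that sum_j w_j * exp(-||x - c_j||^2 / width^2) fits values."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / width ** 2)
    w, *_ = np.linalg.lstsq(A, values, rcond=None)
    return w

def rbf_eval(centers, w, points, width):
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / width ** 2) @ w

# Reference terrain on a grid (synthetic analytic surface).
gx, gy = np.meshgrid(np.linspace(0, 10, 30), np.linspace(0, 10, 30))
grid = np.column_stack([gx.ravel(), gy.ravel()])
terrain = np.sin(grid[:, 0]) * np.cos(0.5 * grid[:, 1])
centers = grid[::7]                                    # a subset of grid points as RBF centers
w = gaussian_rbf_fit(centers, grid, terrain, width=1.5)

# "Measured" profile sampled along a short track at an unknown position offset.
true_offset = np.array([2.3, 1.1])
track = np.column_stack([np.linspace(3, 5, 15), np.full(15, 4.0)])
measured = np.sin(track[:, 0] + true_offset[0]) * np.cos(0.5 * (track[:, 1] + true_offset[1]))

# Match: pick the offset whose RBF-predicted profile best fits the measurement.
cands = [(dx, dy) for dx in np.linspace(0, 4, 41) for dy in np.linspace(0, 4, 41)]
errs = [np.sum((rbf_eval(centers, w, track + np.array(c), 1.5) - measured) ** 2) for c in cands]
print("estimated offset:", cands[int(np.argmin(errs))])
```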

  16. Informing radar retrieval algorithm development using an alternative soil moisture validation technique

    Science.gov (United States)

    Crow, W. T.; Wagner, W.

    2009-12-01

    . Results imply the need for a significant interaction term in vegetation backscatter models in order to match the observed relationship between incidence angle and retrieval skill. Implications for the development of radar retrieval algorithms for the NASA Soil Moisture Active/Passive (SMAP) mission will be discussed.

  17. Developing a laddered algorithm for the management of intractable epistaxis: a risk analysis.

    Science.gov (United States)

    Leung, Randy M; Smith, Timothy L; Rudmik, Luke

    2015-05-01

    For patients with epistaxis in whom initial interventions, such as anterior packing and cauterization, had failed, options including prolonged posterior packing, transnasal endoscopic sphenopalatine artery ligation (TESPAL), and embolization are available. However, it is unclear which interventions should be attempted and in which order. While cost-effectiveness analyses have suggested that TESPAL is the most responsible use of health care resources, physicians must also consider patient risk to maintain a patient-centered decision-making process. To quantify the risk associated with the management of intractable epistaxis. A risk analysis was performed using literature-reported probabilities of treatment failure and adverse event likelihoods in an emergency department and otolaryngology hospital admissions setting. The literature search included articles from 1980 to May 2014. The analysis was modeled for a 50-year-old man with no other medical comorbidities. Severities of complications were modeled based on Environmental Protection Agency recommendations, and health state utilities were monetized based on a willingness to pay $22 500 per quality-adjusted life-year. Six management strategies were developed using posterior packing, TESPAL, and embolization in various sequences (P, T, and E, respectively). Total risk associated with each algorithm quantified in US dollars. Algorithms involving posterior packing and TESPAL as first-line interventions were found to be similarly low risk. The lowest-risk approaches were P-T-E ($2437.99 [range, $1482.83-$6976.40]), T-P-E ($2840.65 [range, $1136.89-$8604.97]), and T-E-P ($2867.82 [range, $1141.05-$9833.96]). Embolization as a first-line treatment raised the total risk significantly owing to the risk of cerebrovascular events (E-T-P, $11 945.42 [range, $3911.43-$31 847.00]; and E-P-T, $11 945.71 [range, $3919.91-$31 767.66]). Laddered approaches using TESPAL and posterior packing appear to provide the lowest

  18. Successive smoothing algorithm for constructing the semiempirical model developed at ONERA to predict unsteady aerodynamic forces. [aeroelasticity in helicopters

    Science.gov (United States)

    Petot, D.; Loiseau, H.

    1982-01-01

    Unsteady aerodynamic methods adopted for the study of aeroelasticity in helicopters are considered with focus on the development of a semiempirical model of unsteady aerodynamic forces acting on an oscillating profile at high incidence. The successive smoothing algorithm described leads to the model's coefficients in a very satisfactory manner.

  19. Developing the science product algorithm testbed for Chinese next-generation geostationary meteorological satellites: Fengyun-4 series

    Science.gov (United States)

    Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin

    2017-08-01

    Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over the FY-2: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing the prototypes of FY-4 science algorithms, two science product algorithm testbeds for imagers and sounders have been developed by the scientists in the FY-4 Algorithm Working Group (AWG). Both testbeds, written in FORTRAN and C programming languages for Linux or UNIX systems, have been tested successfully by using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully using a proxy imager, Himawari-8/Advanced Himawari Imager (AHI), and sounder data, obtained from the Atmospheric InfraRed Sounder, thus demonstrating their robustness. In addition, in early 2016 the FY-4 AWG developed, based on the imager testbed, a near real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.

  20. Performance and development for the Inner Detector Trigger Algorithms at ATLAS

    CERN Document Server

    Penc, Ondrej; The ATLAS collaboration

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for Run 2 starting in spring 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  1. Developing a supervised training algorithm for limited precision feed-forward spiking neural networks

    CERN Document Server

    Stromatias, Evangelos

    2011-01-01

    Spiking neural networks have been referred to as the third generation of artificial neural networks where the information is coded as time of the spikes. There are a number of different spiking neuron models available and they are categorized based on their level of abstraction. In addition, there are two known learning methods, unsupervised and supervised learning. This thesis focuses on supervised learning where a new algorithm is proposed, based on genetic algorithms. The proposed algorithm is able to train both synaptic weights and delays and also allow each neuron to emit multiple spikes thus taking full advantage of the spatial-temporal coding power of the spiking neurons. In addition, limited synaptic precision is applied; only six bits are used to describe and train a synapse, three bits for the weights and three bits for the delays. Two limited precision schemes are investigated. The proposed algorithm is tested on the XOR classification problem where it produces better results for even smaller netwo...

  2. The development of an interactive game-based tool for learning surgical management algorithms via computer.

    Science.gov (United States)

    Mann, Barry D; Eidelson, Benjamin M; Fukuchi, Steven G; Nissman, Steven A; Robertson, Scott; Jardines, Lori

    2002-03-01

    We have previously demonstrated the potential efficacy of a computer-assisted board game as a tool for medical education. The next logical step was to transfer the entire game on to the computer, thus increasing accessibility to students and allowing for a richer and more accurate simulation of patient scenarios. First, a general game model was developed using Microsoft Visual Basic. A breast module was then created using 3-D models, radiographs, and pathology and cytology images. The game was further improved by the addition of an animated facilitator, who directs the players via gestures and speech. Thirty-three students played the breast module in a variety of team configurations. After playing the game, the students completed surveys regarding its value as both an educational tool and as a form of entertainment. 10-question tests were also administered before and after playing the game, as a preliminary investigation into its impact on student learning. After playing the game, mean test scores increased from 6.43 (SEM +/- 0.30) to 7.14 (SEM +/- 0.30; P = 0.006). The results of the five-question survey were extremely positive. Students generally agreed that the game concept has value in increasing general knowledge regarding the subject matter of breast disease and that the idea of following simultaneously the work-up of numerous patients with similar problems is a helpful way to learn a work-up algorithm. Postgame surveys demonstrate the efficacy of our computer game model as a tool for surgical education. The game is an example of problem based learning because it provides students with an initial set of problems and requires them to collect information and reason on their own in order to solve the problems. Individual game modules can be developed to cover material from different diagnostic areas.

  3. Development of Turbulent Diffusion Transfer Algorithms to Estimate Lake Tahoe Water Budget

    Science.gov (United States)

    Sahoo, G. B.; Schladow, S. G.; Reuter, J. E.

    2012-12-01

    The evaporative loss is a dominant component in the Lake Tahoe hydrologic budget because watershed area (813km2) is very small compared to the lake surface area (501 km2). The 5.5 m high dam built at the lake's only outlet, the Truckee River at Tahoe City can increase the lake's capacity by approximately 0.9185 km3. The lake serves as a flood protection for downstream areas and source of water supply for downstream cities, irrigation, hydropower, and instream environmental requirements. When the lake water level falls below the natural rim, cessation of flows from the lake cause problems for water supply, irrigation, and fishing. Therefore, it is important to develop algorithms to correctly estimate the lake hydrologic budget. We developed a turbulent diffusion transfer model and coupled to the dynamic lake model (DLM-WQ). We generated the stream flows and pollutants loadings of the streams using the US Environmental Protection Agency (USEPA) supported watershed model, Loading Simulation Program in C++ (LSPC). The bulk transfer coefficients were calibrated using correlation coefficient (R2) as the objective function. Sensitivity analysis was conducted for the meteorological inputs and model parameters. The DLM-WQ estimated lake water level and water temperatures were in agreement to those of measured records with R2 equal to 0.96 and 0.99, respectively for the period 1994 to 2008. The estimated average evaporation from the lake, stream inflow, precipitation over the lake, groundwater fluxes, and outflow from the lake during 1994 to 2008 were found to be 32.0%, 25.0%, 19.0%, 0.3%, and 11.7%, respectively.

  4. The development of algorithms for parallel knowledge discovery using graphics accelerators

    Science.gov (United States)

    Zieliński, Paweł; Mulawka, Jan

    2011-10-01

    The paper broaches topics of selected knowledge discovery algorithms. Different implementations have been verified on parallel platforms, including graphics accelerators using CUDA technology, multi-core microprocessors using OpenMP and many graphics accelerators. Results of investigations have been compared in terms of performance and scalability. Different types of data representation were also tested. The possibilities of both platforms, using the classification algorithms: the k-nearest neighbors, support vector machines and logistic regression are discussed.

  5. A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2004-07-01

    We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp{_}tensor and tucker{_}tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.
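
    The classes themselves are MATLAB-specific, but the core operation they provide, matricization (mode-n unfolding) of an N-way array, is easy to make concrete in any array language; the NumPy sketch below, including the n-mode product built on top of it, is only an illustration of the concept, not a port of the toolbox.

```python
# Mode-n matricization (unfolding) of an N-way tensor: move mode n to the front,
# then flatten the remaining modes into columns. Conceptual illustration only.
import numpy as np

def unfold(tensor: np.ndarray, mode: int) -> np.ndarray:
    """Return the mode-`mode` unfolding, shape (I_mode, product of the other dims)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix: np.ndarray, mode: int, shape: tuple) -> np.ndarray:
    """Inverse of `unfold` for a tensor with the given target shape."""
    moved = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(moved), 0, mode)

def mode_n_product(tensor: np.ndarray, matrix: np.ndarray, mode: int) -> np.ndarray:
    """Tensor-times-matrix along `mode` (the n-mode product used in Tucker/CP algorithms)."""
    new_shape = list(tensor.shape)
    new_shape[mode] = matrix.shape[0]
    return fold(matrix @ unfold(tensor, mode), mode, tuple(new_shape))

if __name__ == "__main__":
    T = np.arange(24).reshape(2, 3, 4)
    U = np.ones((5, 3))
    print(unfold(T, 1).shape)               # (3, 8)
    print(mode_n_product(T, U, 1).shape)    # (2, 5, 4)
```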

  6. Development of a Fingerprint Gender Classification Algorithm Using Fingerprint Global Features

    OpenAIRE

    S. F. Abdullah; A.F.N.A. Rahman; Z.A.Abas; W.H.M Saad

    2016-01-01

    In the forensic world, the process of identifying and calculating fingerprint features is complex and time-consuming when it is done manually using a fingerprint laboratory magnifying glass. This study is meant to enhance the manual forensic method by proposing a new algorithm for fingerprint global feature extraction for gender classification. The result shows that the new algorithm gives acceptable readings, with a classification rate above 70%, when it is compared to the manual method...

  7. Development of numerical algorithms for practical computation of nonlinear normal modes

    OpenAIRE

    2008-01-01

    When resorting to numerical algorithms, we show that nonlinear normal mode (NNM) computation is possible with limited implementation effort, which paves the way to a practical method for determining the NNMs of nonlinear mechanical systems. The proposed method relies on two main techniques, namely a shooting procedure and a method for the continuation of NNM motions. In addition, sensitivity analysis is used to reduce the computational burden of the algorithm. A simplified discrete model of a...

  8. Characterizing the Preturbulence Environment for Sensor Development, New Hazard Algorithms and NASA Experimental Flight Planning

    Science.gov (United States)

    Kaplan, Michael L.; Lin, Yuh-Lang

    2004-01-01

    During the grant period, several tasks were performed in support of the NASA Turbulence Prediction and Warning Systems (TPAWS) program. The primary focus of the research was on characterizing the preturbulence environment by developing predictive tools and simulating atmospheric conditions that preceded severe turbulence. The goal of the research was to provide both a dynamical understanding of conditions that preceded turbulence and predictive tools in support of operational NASA B-757 turbulence research flights. The advancements in characterizing the preturbulence environment will be applied by NASA to sensor development for predicting turbulence onboard commercial aircraft. Numerical simulations with atmospheric models as well as multi-scale observational analyses provided insights into the environment organizing turbulence in a total of forty-eight specific case studies of severe, accident-producing turbulence on commercial aircraft. These accidents exclusively affected commercial aircraft. A paradigm was developed which diagnosed specific atmospheric circulation systems from the synoptic scale down to the meso-γ scale that preceded turbulence in both clear air and in proximity to convection. The emphasis was primarily on convective turbulence as that is what the TPAWS program is most focused on in terms of developing improved sensors for turbulence warning and avoidance. However, the dynamical paradigm also has applicability to clear air and mountain turbulence. This dynamical sequence of events was then employed to formulate and test new hazard prediction indices that were first tested in research simulation studies and then ultimately were further tested in support of the NASA B-757 turbulence research flights. The new hazard characterization algorithms were utilized in a Real Time Turbulence Model (RTTM) that was operationally employed to support the NASA B-757 turbulence research flights. Improvements in the RTTM were implemented in an

  9. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    Science.gov (United States)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant with considerations given for the descent Mach/airspeed schedule, gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  10. Development of a meta-algorithm for guiding primary care encounters for patients with multimorbidity using evidence-based and case-based guideline development methodology.

    Science.gov (United States)

    Muche-Borowski, Cathleen; Lühmann, Dagmar; Schäfer, Ingmar; Mundt, Rebekka; Wagner, Hans-Otto; Scherer, Martin

    2017-06-22

    The study aimed to develop a comprehensive algorithm (meta-algorithm) for primary care encounters of patients with multimorbidity. We used a novel, case-based and evidence-based procedure to overcome methodological difficulties in guideline development for patients with complex care needs. Systematic guideline development methodology including systematic evidence retrieval (guideline synopses), expert opinions and informal and formal consensus procedures. Primary care. The meta-algorithm was developed in six steps:1. Designing 10 case vignettes of patients with multimorbidity (common, epidemiologically confirmed disease patterns and/or particularly challenging health care needs) in a multidisciplinary workshop.2. Based on the main diagnoses, a systematic guideline synopsis of evidence-based and consensus-based clinical practice guidelines was prepared. The recommendations were prioritised according to the clinical and psychosocial characteristics of the case vignettes.3. Case vignettes along with the respective guideline recommendations were validated and specifically commented on by an external panel of practicing general practitioners (GPs).4. Guideline recommendations and experts' opinions were summarised as case specific management recommendations (N-of-one guidelines).5. Healthcare preferences of patients with multimorbidity were elicited from a systematic literature review and supplemented with information from qualitative interviews.6. All N-of-one guidelines were analysed using pattern recognition to identify common decision nodes and care elements. These elements were put together to form a generic meta-algorithm. The resulting meta-algorithm reflects the logic of a GP's encounter of a patient with multimorbidity regarding decision-making situations, communication needs and priorities. It can be filled with the complex problems of individual patients and hereby offer guidance to the practitioner. Contrary to simple, symptom-oriented algorithms, the meta-algorithm

  11. Developing algorithms for healthcare insurers to systematically monitor surgical site infection rates

    Directory of Open Access Journals (Sweden)

    Livingston James M

    2007-06-01

    Full Text Available Abstract Background Claims data provide rapid indicators of SSIs for coronary artery bypass surgery and have been shown to successfully rank hospitals by SSI rates. We now operationalize this method for use by payers without transfer of protected health information, or any insurer data, to external analytic centers. Results We performed a descriptive study testing the operationalization of software for payers to routinely assess surgical infection rates among hospitals where enrollees receive cardiac procedures. We developed five SAS programs and a user manual for direct use by health plans and payers. The manual and programs were refined following provision to two national insurers who applied the programs to claims databases, following instructions on data preparation, data validation, analysis, and verification and interpretation of program output. A final set of programs and user manual successfully guided health plan programmer analysts to apply SSI algorithms to claims databases. Validation steps identified common problems such as incomplete preparation of data, missing data, insufficient sample size, and other issues that might result in program failure. Several user prompts enabled health plans to select time windows, strata such as insurance type, and the threshold number of procedures performed by a hospital before inclusion in regression models assessing relative SSI rates among hospitals. No health plan data was transferred to outside entities. Programs, on default settings, provided descriptive tables of SSI indicators stratified by hospital, insurer type, SSI indicator (inpatient, outpatient, antibiotic, and six-month period. Regression models provided rankings of hospital SSI indicator rates by quartiles, adjusted for comorbidities. Programs are publicly available without charge. Conclusion We describe a free, user-friendly software package that enables payers to routinely assess and identify hospitals with potentially high SSI

  12. Adjusting for COPD severity in database research: developing and validating an algorithm

    Directory of Open Access Journals (Sweden)

    Goossens LMA

    2011-12-01

    Full Text Available Lucas MA Goossens,1 Christine L Baker,2 Brigitta U Monz,3 Kelly H Zou,2 Maureen PMH Rutten-van Mölken1; 1Institute for Medical Technology Assessment, Erasmus University, Rotterdam, The Netherlands; 2Pfizer Inc, New York City, NY, USA; 3Boehringer Ingelheim International GmbH, Ingelheim am Rhein, Germany. Purpose: When comparing chronic obstructive lung disease (COPD) interventions in database research, it is important to adjust for severity. Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines grade severity according to lung function. Most databases lack data on lung function. Previous database research has approximated COPD severity using demographics and healthcare utilization. This study aims to derive an algorithm for COPD severity using baseline data from a large respiratory trial (UPLIFT). Methods: Partial proportional odds logit models were developed for probabilities of being in GOLD stages II, III and IV. Concordance between predicted and observed stage was assessed using kappa-statistics. Models were estimated in a random selection of 2/3 of patients and validated in the remainder. The analysis was repeated in a subsample with a balanced distribution across severity stages. Univariate associations of COPD severity with the covariates were tested as well. Results: More severe COPD was associated with being male and younger, having quit smoking, lower BMI, osteoporosis, hospitalizations, using certain medications, and oxygen. After adjusting for these variables, co-morbidities, previous healthcare resource use (eg, emergency room, hospitalizations) and inhaled corticosteroids, xanthines, or mucolytics were no longer independently associated with COPD severity, although they were in univariate tests. The concordance was poor (kappa = 0.151) and only slightly better in the balanced sample (kappa = 0.215). Conclusion: COPD severity cannot be reliably predicted from demographics and healthcare use. This limitation should be

  13. Prepatellar and olecranon bursitis: literature review and development of a treatment algorithm.

    Science.gov (United States)

    Baumbach, Sebastian F; Lobo, Christopher M; Badyine, Ilias; Mutschler, Wolf; Kanz, Karl-Georg

    2014-03-01

    Olecranon bursitis and prepatellar bursitis are common entities, with a minimum annual incidence of 10/100,000, predominantly affecting male patients (80 %) aged 40-60 years. Approximately 1/3 of cases are septic (SB) and 2/3 of cases are non-septic (NSB), with substantial variations in treatment regimens internationally. The aim of the study was the development of a literature review-based treatment algorithm for prepatellar and olecranon bursitis. Following a systematic review of Pubmed, the Cochrane Library, textbooks of emergency medicine and surgery, and a manual reference search, 52 relevant papers were identified. The initial differentiation between SB and NSB was based on clinical presentation, bursal aspirate, and blood sampling analysis. Physical findings suggesting SB were fever >37.8 °C, a prebursal temperature difference greater than 2.2 °C, and skin lesions. Relevant findings for the bursal aspirate were purulent aspirate, a reduced fluid-to-serum glucose ratio, a cell count >3,000 cells/μl, polymorphonuclear cells >50 %, positive Gram staining, and positive culture. General treatment measures for SB and NSB consist of bursal aspiration, NSAIDs, and PRICE. For patients with confirmed NSB and high athletic or occupational demands, intrabursal steroid injection may be performed. In the case of SB, antibiotic therapy should be initiated. Surgical treatment, i.e., incision, drainage, or bursectomy, should be restricted to severe, refractory, or chronic/recurrent cases. The available evidence did not support the central European concept of immediate bursectomy in cases of SB. A conservative treatment regimen should be pursued, following bursal aspirate-based differentiation between SB and NSB.

  14. Development of hybrid fog detection algorithm (FDA) using satellite and ground observation data for nighttime

    Science.gov (United States)

    Kim, So-Hyeong; Han, Ji-Hae; Suh, Myoung-Seok

    2017-04-01

    In this study, we developed a hybrid fog detection algorithm (FDA) using AHI/Himawari-8 satellite and ground observation data for nighttime. To detect fog at nighttime, the Dual Channel Difference (DCD) method, based on the emissivity difference between SWIR and IR1, is most widely used. DCD is good at discriminating fog from other things (middle/high clouds, clear sea and land). However, it is difficult to distinguish fog from low clouds. In order to separate the low clouds from the pixels that satisfy the fog thresholds in the DCD test, we conducted supplementary tests such as the normalized local standard deviation (NLSD) of BT11 and the difference between fog top temperature (BT11) and air temperature (Ta) from NWP data (SST from OSTIA data). These tests are based on the larger homogeneity of fog tops compared with low cloud tops and the similarity of fog top temperature and Ta (SST). Threshold values for the three tests were optimized through ROC analysis for the selected fog cases. In addition, considering the spatial continuity of fog, post-processing was performed to detect the missed pixels, in particular at the edge of fog or for sub-pixel-size fog. The final fog detection results are presented as fog probability (0-100 %). Validation was conducted by comparing fog detection probability with the ground-observed visibility data from KMA. The validation results showed that POD and FAR ranged from 0.70 to 0.94 and from 0.45 to 0.72, respectively. The quantitative validation and visual inspection indicate that the current FDA has a tendency to over-detect fog, so more work on reducing the FAR is needed. In the future, we will also validate sea fog using CALIPSO data.
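
    To make the test logic concrete, the sketch below combines the three per-pixel checks named above (DCD, normalized local standard deviation of BT11, and the BT11 minus air temperature difference) and scores the result with POD and FAR. The threshold values and the synthetic scene are placeholders, not the ROC-optimized thresholds of the algorithm.

```python
# Illustrative nighttime fog test combining the three per-pixel checks described above.
# Threshold values here are placeholders, not the tuned thresholds of the algorithm.
import numpy as np
from scipy.ndimage import generic_filter

def fog_mask(bt_swir, bt_ir1, t_air, dcd_thresh=-2.0, nlsd_thresh=0.02, dt_thresh=3.0):
    dcd = bt_swir - bt_ir1                               # dual-channel difference (SWIR - IR1)
    lsd = generic_filter(bt_ir1, np.std, size=3)         # local standard deviation of BT11
    nlsd = lsd / np.maximum(bt_ir1, 1e-6)                # normalized LSD (fog tops are homogeneous)
    dt = np.abs(bt_ir1 - t_air)                          # fog-top temperature vs. air temperature
    return (dcd < dcd_thresh) & (nlsd < nlsd_thresh) & (dt < dt_thresh)

def pod_far(detected, truth):
    hits = np.sum(detected & truth)
    misses = np.sum(~detected & truth)
    false_alarms = np.sum(detected & ~truth)
    return hits / (hits + misses), false_alarms / (hits + false_alarms)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.zeros((50, 50), dtype=bool)
    truth[10:30, 10:30] = True                           # a synthetic fog patch
    t_air = np.full((50, 50), 278.0)
    bt_ir1 = np.where(truth, 276.0, 282.0) + rng.normal(0, 0.2, (50, 50))
    bt_swir = bt_ir1 + np.where(truth, -4.0, 0.5)        # emissivity-driven BT drop over fog
    pod, far = pod_far(fog_mask(bt_swir, bt_ir1, t_air), truth)
    print(f"POD={pod:.2f}  FAR={far:.2f}")
```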

  15. Using a multi-objective genetic algorithm for developing aerial sensor team search strategies

    Science.gov (United States)

    Ridder, Jeffrey P.; Herweg, Jared A.; Sciortino, John C., Jr.

    2008-04-01

    Finding certain associated signals in the modern electromagnetic environment can prove a difficult task due to signal characteristics and associated platform tactics as well as the systems used to find these signals. One approach to finding such signal sets is to employ multiple small unmanned aerial systems (UASs) equipped with RF sensors in a team to search an area. The search environment may be partially known, but with a significant level of uncertainty as to the locations and emissions behavior of the individual signals and their associated platforms. The team is likely to benefit from a combination of using uncertain a priori information for planning and online search algorithms for dynamic tasking of the team. Two search algorithms are examined for effectiveness: Archimedean spirals, in which the UASs comprising the team do not respond to the environment, and artificial potential fields, in which they use environmental perception and interactions to dynamically guide the search. A multi-objective genetic algorithm (MOGA) is used to explore the desirable characteristics of search algorithms for this problem using two performance objectives. The results indicate that the MOGA can successfully use uncertain a priori information to set the parameters of the search algorithms. Also, we find that artificial potential fields may result in good performance, but that each of the fields has a different contribution that may be appropriate only in certain states.

  16. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S

  17. Algorithms Development in Detection of the Gelatinization Process during Enzymatic ‘Dodol’ Processing

    Directory of Open Access Journals (Sweden)

    Azman Hamzah

    2013-09-01

    Full Text Available Computer vision systems have found wide application in foods processing industry to perform quality evaluation. The systems enable to replace human inspectors for the evaluation of a variety of quality attributes. This paper describes the implementation of the Fast Fourier Transform and Kalman filtering algorithms to detect the glutinous rice flour slurry (GRFS) gelatinization in an enzymatic ‘dodol’ processing. The onset of the GRFS gelatinization is critical in determining the quality of an enzymatic ‘dodol’. Combinations of these two algorithms were able to detect the gelatinization of the GRFS. The result shows that the gelatinization of the GRFS was at the time range of 11.75 minutes to 14.75 minutes for 24 batches of processing. This paper will highlight the capability of computer vision using our proposed algorithms in monitoring and controlling of an enzymatic ‘dodol’ processing via image processing technology.
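
    A toy version of the FFT-plus-Kalman-filter combination on a one-dimensional, image-derived feature trace is sketched below: a scalar Kalman filter smooths the noisy trace and a sliding-window FFT tracks its dominant frequency, which shifts at the (synthetic) gelatinization onset. The feature, sampling rate and all constants are assumptions for illustration, not the paper's processing chain.

```python
# Toy combination of a scalar Kalman filter (to smooth an image-derived feature trace)
# and an FFT (to monitor its frequency content) as a change detector. Synthetic data only.
import numpy as np

def kalman_1d(z, q=1e-4, r=0.05):
    """Constant-level Kalman filter: smoothed estimate of a noisy scalar trace."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                        # predict
        kgain = p / (p + r)              # update
        x = x + kgain * (zk - x)
        p = (1.0 - kgain) * p
        out[k] = x
    return out

def dominant_freq(window, fs):
    spec = np.abs(np.fft.rfft(window - window.mean()))
    return np.fft.rfftfreq(window.size, 1.0 / fs)[spec.argmax()]

if __name__ == "__main__":
    fs = 4.0                                             # frames per minute (assumed)
    t = np.arange(0, 30, 1.0 / fs)                       # 30-minute run
    onset = 12.0                                         # synthetic gelatinization onset (minutes)
    feature = np.where(t < onset, np.sin(2 * np.pi * 0.2 * t), np.sin(2 * np.pi * 0.8 * t))
    noisy = feature + np.random.default_rng(0).normal(0, 0.3, t.size)
    smooth = kalman_1d(noisy)
    for start in range(0, t.size - 32, 8):               # sliding 8-minute FFT windows
        f = dominant_freq(smooth[start:start + 32], fs)
        print(f"t={t[start]:5.1f} min  dominant frequency={f:.2f} cycles/min")
```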

  18. Algorithms Development in Detection of the Gelatinization Process during Enzymatic ‘Dodol’ Processing

    Directory of Open Access Journals (Sweden)

    Azman Hamzah

    2007-11-01

    Full Text Available Computer vision systems have found wide application in foods processing industry to perform the quality evaluation. The systems enable to replace human inspectors for the evaluation of a variety of quality attributes. This paper describes the implementation of the Fast Fourier Transform and Kalman filtering algorithms to detect the glutinous rice flour slurry (GRFS) gelatinization in an enzymatic ‘dodol’ processing. The onset of the GRFS gelatinization is critical in determining the quality of an enzymatic ‘dodol’. Combinations of these two algorithms were able to detect the gelatinization of the GRFS. The result shows that the gelatinization of the GRFS was at the time range of 11.75 minutes to 15.33 minutes for 20 batches of processing. This paper will highlight the capability of computer vision using our proposed algorithms in monitoring and controlling of an enzymatic ‘dodol’ processing via image processing technology.

  19. Developing Aerosol Algorithm over Ocean and a Littoral Zone Using the Next Generation Geo-Stationary Observations

    Science.gov (United States)

    Oo, M. M.; Holz, R.; Levy, R. C.; Miller, S. D.; Walther, A.; Heidinger, A.

    2016-12-01

    The Advanced Himawari Imager (AHI) and the upcoming GOES-R are the next generation geostationary sensors with the capability of multi-spectral, high-spatial-resolution and geostationary observation over southeast Asia (AHI) and the United States (GOES-R). The long-term goal of this project is to develop an aerosol algorithm for the AHI and GOES-R that can be applied to the littoral regions where the surface reflectance can vary significantly and cannot be assumed dark. This new algorithm will be integrated into NOAA's Clouds from AVHRR Extended (CLAVR-x) framework, providing near real time processing capability. The foundation for the algorithm is the dark target approach, developed for NASA's Earth Observing System Moderate Resolution Imaging Spectroradiometer (MODIS) to retrieve aerosol properties. In this paper we will present our preliminary AOD retrievals from geostationary (AHI) data over ocean and inter-compare them with collocated MODIS and Visible Infrared Imaging Radiometer Suite (VIIRS) (Dark Target) retrievals and the Japanese Aerospace Exploration Agency (JAXA) AHI beta AOD retrieval. We will then present the design of the littoral aerosol algorithm with a focus on methods to separate the surface reflectance from the aerosol signal using the combined multispectral capability of AHI with the ability to characterize the temporal variability of a given FOV. Finally, we will demonstrate a case study of aerosol retrieval using this approach.

  20. Development of a Near Real-Time Hail Damage Swath Identification Algorithm for Vegetation

    Science.gov (United States)

    Bell, Jordan R.; Molthan, Andrew L.; Schultz, Kori A.; McGrath, Kevin M.; Burks, Jason E.

    2015-01-01

    Every year in the Midwest and Great Plains, widespread greenness forms in conjunction with the latter part of the spring-summer growing season. This prevalent greenness forms as a result of the high concentration of agricultural areas having their crops reach maturity before the fall harvest. This time of year also coincides with an enhanced hail frequency for the Great Plains (Cintineo et al. 2012). These severe thunderstorms can bring damaging winds and large hail that can result in damage to the surface vegetation. The spatial extent of the damage can be a relatively small concentrated area or a vast swath of damage that is visible from space. These large areas of damage have been well documented over the years. In the late 1960s aerial photography was used to evaluate crop damage caused by hail. As satellite remote sensing technology has evolved, the identification of these hail damage streaks has increased. Satellites have made it possible to view these streaks in additional regions of the spectrum. Parker et al. (2005) documented two streaks that occurred in South Dakota using the Moderate Resolution Imaging Spectroradiometer (MODIS), noting the potential impact that these streaks had on the surface temperature and the associated surface fluxes that are affected by a change in temperature. Gallo et al. (2012) examined the correlation between radar signatures and ground observations from storms that produced a hail damage swath in central Iowa, also using MODIS. Finally, Molthan et al. (2013) identified hail damage streaks through MODIS, Landsat-7, and SPOT observations of different resolutions for the development of potential near-real-time applications. The manual analysis of hail damage streaks in satellite imagery is both tedious and time consuming, and may be inconsistent from event to event. This study focuses on the development of an objective and automatic algorithm to detect these areas of damage in a more efficient and timely manner. This study utilizes the
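
    As an illustration of the kind of objective detection the record describes, the sketch below flags a hail damage swath as a spatially coherent drop in NDVI between pre- and post-storm composites; the NDVI-drop threshold and minimum patch size are assumptions, not the study's settings.

```python
"""Illustrative sketch (not the operational algorithm): hail damage swath
detection as a coherent NDVI drop between pre- and post-storm imagery."""
import numpy as np
from scipy import ndimage

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def hail_damage_mask(pre_nir, pre_red, post_nir, post_red,
                     drop_threshold=0.15, min_pixels=50):
    """Boolean mask of pixels in coherent patches where NDVI dropped sharply."""
    delta = ndvi(pre_nir, pre_red) - ndvi(post_nir, post_red)
    candidate = delta > drop_threshold              # strong vegetation loss
    labels, n = ndimage.label(candidate)            # connected components
    sizes = ndimage.sum(candidate, labels, index=np.arange(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    shape = (200, 200)
    pre_nir = rng.uniform(0.4, 0.5, shape)
    pre_red = rng.uniform(0.05, 0.10, shape)        # healthy crops: high NDVI
    post_nir, post_red = pre_nir.copy(), pre_red.copy()
    rr, cc = np.ogrid[:200, :200]
    swath = np.abs(rr - cc) < 8                     # synthetic diagonal damage swath
    post_nir[swath] *= 0.5                          # damaged vegetation
    post_red[swath] *= 2.0
    mask = hail_damage_mask(pre_nir, pre_red, post_nir, post_red)
    print("damaged pixels detected:", int(mask.sum()))
```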

  1. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using
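
    The sketch below illustrates the PSRA idea in one dimension: parameterize a simulated point scatter function with a Gaussian fit, then iteratively subtract the predicted scatter from a measured profile. The kernel parameters, iteration count and synthetic profile are assumptions, and the geometry is greatly simplified relative to the NMIS.

```python
"""Illustrative 1-D sketch of the PSRA idea (assumptions, not the NMIS code)."""
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, sigma):
    return amplitude * np.exp(-0.5 * (x / sigma) ** 2)

def fit_pscf(offsets, simulated_pscf):
    """Parameterize a simulated point scatter function with a two-parameter Gaussian."""
    popt, _ = curve_fit(gaussian, offsets, simulated_pscf, p0=(simulated_pscf.max(), 2.0))
    return popt  # (amplitude, sigma)

def remove_scatter(measured, offsets, amplitude, sigma):
    """Subtract predicted in-scatter: measured = direct + direct convolved with the PScF."""
    kernel = gaussian(offsets, amplitude, sigma)
    direct = measured.copy()
    for _ in range(5):  # fixed-point iteration on direct = measured - direct (*) PScF
        direct = np.clip(measured - np.convolve(direct, kernel, mode="same"), 0.0, None)
    return direct

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    offsets = np.arange(-20, 21, dtype=float)        # detector-pixel offsets
    sim = gaussian(offsets, 0.02, 3.0) + rng.normal(0, 5e-4, offsets.size)  # "Monte Carlo" PScF
    amp, sig = fit_pscf(offsets, sim)
    x = np.arange(200, dtype=float)
    direct_true = 1.0 - 0.6 * np.exp(-0.5 * ((x - 100) / 15) ** 2)  # attenuation dip
    measured = direct_true + np.convolve(direct_true, gaussian(offsets, amp, sig), "same")
    corrected = remove_scatter(measured, offsets, amp, sig)
    print("max error before/after correction: %.3f / %.3f"
          % (np.abs(measured - direct_true).max(), np.abs(corrected - direct_true).max()))
```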

  2. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon R [ORNL

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the

  3. The development of a bearing spectral analyzer and algorithms to detect turbopump bearing wear from deflectometer and strain gage data

    Science.gov (United States)

    Martinez, Carol L.

    1992-07-01

    Over the last several years, Rocketdyne has actively developed condition and health monitoring techniques and their elements for rocket engine components, specifically high pressure turbopumps. Of key interest is the development of bearing signature analysis systems for real-time monitoring of the cryogen-cooled turbopump shaft bearings, which spin at speeds up to 36,000 RPM. These system elements include advanced bearing vibration sensors, signal processing techniques, wear mode algorithms, and integrated control software. Results of development efforts in the areas of signal processing and wear mode identification and quantification algorithms based on strain gage and deflectometer data are presented. Wear modes investigated include: inner race wear, cage pocket wear, outer race wear, differential ball wear, cracked inner race, and nominal wear.

  4. Development and validation of an algorithm to recalibrate mental models and reduce diagnostic errors associated with catheter-associated bacteriuria

    Science.gov (United States)

    2013-01-01

    Background Overtreatment of catheter-associated bacteriuria is a quality and safety problem, despite the availability of evidence-based guidelines. Little is known about how guidelines-based knowledge is integrated into clinicians’ mental models for diagnosing catheter-associated urinary tract infection (CA-UTI). The objectives of this research were to better understand clinicians’ mental models for CA-UTI, and to develop and validate an algorithm to improve diagnostic accuracy for CA-UTI. Methods We conducted two phases of this research project. In phase one, 10 clinicians assessed and diagnosed four patient cases of catheter associated bacteriuria (n= 40 total cases). We assessed the clinical cues used when diagnosing these cases to determine if the mental models were IDSA guideline compliant. In phase two, we developed a diagnostic algorithm derived from the IDSA guidelines. IDSA guideline authors and non-expert clinicians evaluated the algorithm for content and face validity. In order to determine if diagnostic accuracy improved using the algorithm, we had experts and non-experts diagnose 71 cases of bacteriuria. Results Only 21 (53%) diagnoses made by clinicians without the algorithm were guidelines-concordant with fair inter-rater reliability between clinicians (Fleiss’ kappa = 0.35, 95% Confidence Intervals (CIs) = 0.21 and 0.50). Evidence suggests that clinicians’ mental models are inappropriately constructed in that clinicians endorsed guidelines-discordant cues as influential in their decision-making: pyuria, systemic leukocytosis, organism type and number, weakness, and elderly or frail patient. Using the algorithm, inter-rater reliability between the expert and each non-expert was substantial (Cohen’s kappa = 0.72, 95% CIs = 0.52 and 0.93 between the expert and non-expert #1 and 0.80, 95% CIs = 0.61 and 0.99 between the expert and non-expert #2). Conclusions Diagnostic errors occur when clinicians’ mental models for catheter

  5. Development and Verification of the Tire/Road Friction Estimation Algorithm for Antilock Braking System

    Directory of Open Access Journals (Sweden)

    Jian Zhao

    2014-01-01

    Full Text Available Road friction information is very important for vehicle active braking control systems such as ABS, ASR, or ESP. It is not easy to estimate the tire/road friction forces and coefficient accurately because of the nonlinear system, parameters uncertainties, and signal noises. In this paper, a robust and effective tire/road friction estimation algorithm for ABS is proposed, and its performance is further discussed by simulation and experiment. The tire forces were observed by the discrete Kalman filter, and the road friction coefficient was estimated by the recursive least square method consequently. Then, the proposed algorithm was analysed and verified by simulation and road test. A sliding mode based ABS with smooth wheel slip ratio control and a threshold based ABS by pulse pressure control with significant fluctuations were used for the simulation. Finally, road tests were carried out in both winter and summer by the car equipped with the same threshold based ABS, and the algorithm was evaluated on different road surfaces. The results show that the proposed algorithm can identify the variation of road conditions with considerable accuracy and response speed.
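
    The sketch below illustrates the recursive-least-squares part of such an estimator: mu is tracked from noisy longitudinal force and wheel load signals under the simplified assumption Fx ≈ mu·Fz near the adhesion limit; the forgetting factor, load and noise level are invented for the example.

```python
"""Illustrative sketch (not the paper's full observer): RLS with a forgetting
factor tracking the tire/road friction coefficient mu, assuming Fx ~= mu * Fz."""
import numpy as np

class FrictionRLS:
    def __init__(self, forgetting=0.98, mu0=1.0, p0=10.0):
        self.lam = forgetting   # forgetting factor to track road changes
        self.mu = mu0           # current friction estimate
        self.p = p0             # estimate covariance

    def update(self, fx: float, fz: float) -> float:
        """One RLS step with regressor fz and measurement fx."""
        k = self.p * fz / (self.lam + fz * self.p * fz)   # gain
        self.mu += k * (fx - fz * self.mu)                # innovation update
        self.p = (self.p - k * fz * self.p) / self.lam
        return self.mu

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    rls = FrictionRLS()
    fz = 4000.0                                           # N, assumed wheel load
    for step in range(400):
        mu_true = 0.9 if step < 200 else 0.25             # dry asphalt -> ice
        fx = mu_true * fz + rng.normal(0, 150.0)          # noisy force "measurement"
        mu_hat = rls.update(fx, fz)
        if step % 100 == 99:
            print(f"step {step + 1:3d}: true mu = {mu_true:.2f}, estimate = {mu_hat:.2f}")
```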

  6. Development of real-time plasma analysis and control algorithms for the TCV tokamak using Simulink

    NARCIS (Netherlands)

    Felici, F.; Le, H. B.; J. I. Paley,; Duval, B. P.; Coda, S.; Moret, J. M.; Bortolon, A.; L. Federspiel,; Goodman, T. P.; Hommen, G.; A. Karpushov,; Piras, F.; A. Pitzschke,; J. Romero,; G. Sevillano,; Sauter, O.; Vijvers, W.; TCV team,

    2014-01-01

    One of the key features of the new digital plasma control system installed on the TCV tokamak is the possibility to rapidly design, test and deploy real-time algorithms. With this flexibility the new control system has been used for a large number of new experiments which exploit TCV's powerful

  7. Development and Evaluation of Algorithms to Improve Small- and Medium-Size Commercial Building Operations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Woohyun [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lutes, Robert G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Underhill, Ronald M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    Small- and medium-sized (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability and no monitoring or failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically utilize packaged rooftop units (RTUs) that are controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of the existing commercial building stock in the U.S. for many reasons, chief among them mitigating climate change impacts. Studies have shown that managing set points and schedules of the RTUs can result in up to 20% energy and cost savings. Another problem associated with RTUs is short-cycling, where an RTU goes through ON and OFF cycles too frequently. Excessive cycling can cause excessive wear and lead to premature failure of the compressor or its components. Short cycling can result in a significantly decreased average efficiency (up to 10%), even if there are no physical failures in the equipment. Also, SMBs use time-of-day scheduling to start the RTUs before the building is occupied and shut them off when it is unoccupied. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs, thereby leading to persistent building operations, can significantly increase the operational efficiency of SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. The work described in this report covers three algorithms, for zone set point temperature detection, RTU cycling rate detection, and occupancy schedule detection, that can be deployed on this low-cost infrastructure. These algorithms require only zone temperature data for detection. The algorithms have been tested and validated using
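
    As a minimal illustration of detection from zone temperature alone, the sketch below estimates an RTU cycling rate by counting sawtooth turning points and approximates the set point as the midpoint of the control band; real data would additionally need smoothing and a deadband, and all values here are assumptions, not the report's algorithms.

```python
"""Illustrative sketch: RTU cycling rate and approximate set point from zone temperature."""
import numpy as np

def cycling_rate(zone_temp, minutes_per_sample=1.0):
    """Cycles per hour estimated from slope sign changes of the zone temperature.
    Real data would first need smoothing and a small deadband to reject noise."""
    d = np.sign(np.diff(zone_temp))
    turning = np.flatnonzero(np.diff(d) != 0)        # sawtooth turning points
    hours = len(zone_temp) * minutes_per_sample / 60.0
    return (turning.size / 2.0) / hours              # one cycle = one up + one down leg

def estimated_setpoint(zone_temp):
    """Midpoint of the control band, assuming a steady sawtooth around the set point."""
    return 0.5 * (np.percentile(zone_temp, 5) + np.percentile(zone_temp, 95))

if __name__ == "__main__":
    t = np.arange(240)                               # four hours of 1-minute samples
    tri = 2.0 * np.abs((t / 10.0) % 1.0 - 0.5)       # 10-minute sawtooth, amplitude 1
    zone_temp = 22.5 + tri                           # oscillates around ~23 C
    print("cycles per hour: %.1f" % cycling_rate(zone_temp))
    print("approximate cooling set point: %.1f C" % estimated_setpoint(zone_temp))
```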

  8. Modeling in the State Flow Environment to Support Launch Vehicle Verification Testing for Mission and Fault Management Algorithms in the NASA Space Launch System

    Science.gov (United States)

    Trevino, Luis; Berg, Peter; England, Dwight; Johnson, Stephen B.

    2016-01-01

    Analysis methods and testing processes are essential activities in the engineering development and verification of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS). Central to mission success is reliable verification of the Mission and Fault Management (M&FM) algorithms for the SLS launch vehicle (LV) flight software. This is particularly difficult because M&FM algorithms integrate and operate LV subsystems, which consist of diverse forms of hardware and software themselves, with equally diverse integration from the engineering disciplines of LV subsystems. M&FM operation of SLS requires a changing mix of LV automation. During pre-launch the LV is primarily operated by the Kennedy Space Center (KSC) Ground Systems Development and Operations (GSDO) organization with some LV automation of time-critical functions, and much more autonomous LV operations during ascent that have crucial interactions with the Orion crew capsule, its astronauts, and with mission controllers at the Johnson Space Center. M&FM algorithms must perform all nominal mission commanding via the flight computer to control LV states from pre-launch through disposal and also address failure conditions by initiating autonomous or commanded aborts (crew capsule escape from the failing LV), redundancy management of failing subsystems and components, and safing actions to reduce or prevent threats to ground systems and crew. To address the criticality of the verification testing of these algorithms, the NASA M&FM team has utilized the State Flow environment (SFE) with its existing Vehicle Management End-to-End Testbed (VMET) platform which also hosts vendor-supplied physics-based LV subsystem models. The human-derived M&FM algorithms are designed and vetted in Integrated Development Teams composed of design and development disciplines such as Systems Engineering, Flight Software (FSW), Safety and Mission Assurance (S&MA) and major subsystems and vehicle elements

  9. Algorithm and simulation development in support of response strategies for contamination events in air and water systems.

    Energy Technology Data Exchange (ETDEWEB)

    Waanders, Bart Van Bloemen

    2006-01-01

    Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be efficiently used to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be sufficiently improved to achieve real-time performance? This report presents the results of a three-year algorithmic and application development to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) a process for identifying contamination events using sparse observations, (3) characterization of uncertainty through the development of accurate demand forecasts and through investigating uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.

  10. TrackNTrace: A simple and extendable open-source framework for developing single-molecule localization and tracking algorithms.

    Science.gov (United States)

    Stein, Simon Christoph; Thiart, Jan

    2016-11-25

    Super-resolution localization microscopy and single particle tracking are important tools for fluorescence microscopy. Both rely on detecting, and tracking, a large number of fluorescent markers using increasingly sophisticated computer algorithms. However, this rise in complexity makes it difficult to fine-tune parameters and detect inconsistencies, improve existing routines, or develop new approaches founded on established principles. We present an open-source MATLAB framework for single molecule localization, tracking and super-resolution applications. The purpose of this software is to facilitate the development, distribution, and comparison of methods in the community by providing a unique, easily extendable plugin-based system and combining it with a novel visualization system. This graphical interface incorporates possibilities for quick inspection of localization and tracking results, giving direct feedback of the quality achieved with the chosen algorithms and parameter values, as well as possible sources for errors. This is of great importance in practical applications and even more so when developing new techniques. The plugin system greatly simplifies the development of new methods as well as adapting and tailoring routines towards any research problem's individual requirements. We demonstrate its high speed and accuracy with plugins implementing state-of-the-art algorithms and show two biological applications.

  11. A novel hybrid classification model of genetic algorithms, modified k-Nearest Neighbor and developed backpropagation neural network.

    Science.gov (United States)

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies for the development of classification models. Such a combination creates an opportunity to exploit the strength of each algorithm and to compensate for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, feature ranking techniques such as Fisher's discriminant ratio and class separability criteria were first used to prioritize features. Second, the obtained results, which included arrays of the top-ranked features, were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved backpropagation neural network, the classification process was carried out on the optimum arrays of features selected by the genetic algorithm. The performance of the proposed model was compared with thirteen well-known classification models on seven datasets. Furthermore, statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the
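
    The sketch below illustrates the general pattern of GA-driven feature selection scored by a k-NN classifier, using synthetic data and invented GA settings; it is not the published hybrid model, which additionally seeds the population with ranked features and uses a modified k-NN and an improved backpropagation network.

```python
"""Illustrative sketch: a tiny genetic algorithm evolving feature subsets
scored by k-NN cross-validation accuracy (settings and data are assumptions)."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_redundant=2, random_state=0)

def fitness(mask: np.ndarray) -> float:
    """5-fold CV accuracy of k-NN on the selected features (0 if none selected)."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(n_neighbors=5), X[:, mask], y, cv=5).mean()

def evolve(pop_size=20, n_generations=15, p_mut=0.05):
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat)) < 0.5               # random initial subsets
    for _ in range(n_generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cross = rng.random(n_feat) < 0.5                  # uniform crossover
            child = np.where(cross, a, b)
            child ^= rng.random(n_feat) < p_mut               # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

if __name__ == "__main__":
    mask, acc = evolve()
    print("selected features:", np.flatnonzero(mask), "CV accuracy: %.3f" % acc)
```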

  12. Algorithm Development and Validation of CDOM Properties for Estuarine and Continental Shelf Waters Along the Northeastern U.S. Coast

    Science.gov (United States)

    Mannino, Antonio; Novak, Michael G.; Hooker, Stanford B.; Hyde, Kimberly; Aurin, Dick

    2014-01-01

    An extensive set of field measurements has been collected throughout the continental margin of the northeastern U.S. from 2004 to 2011 to develop and validate ocean color satellite algorithms for the retrieval of the absorption coefficient of chromophoric dissolved organic matter (aCDOM) and CDOM spectral slopes for the 275:295 nm and 300:600 nm spectral ranges (S275:295 and S300:600). Remote sensing reflectance (Rrs) measurements computed from in-water radiometry profiles along with aCDOM(λ) data are applied to develop several types of algorithms for the SeaWiFS and MODIS-Aqua ocean color satellite sensors, which involve least squares linear regression of aCDOM(λ) with (1) Rrs band ratios, (2) quasi-analytical algorithm-based (QAA-based) products of total absorption coefficients, (3) multiple Rrs bands within a multiple linear regression (MLR) analysis, and (4) the diffuse attenuation coefficient (Kd). The relative error (mean absolute percent difference; MAPD) for the MLR retrievals of aCDOM(275), aCDOM(355), aCDOM(380), aCDOM(412) and aCDOM(443) for our study region ranges from 20.4-23.9% for MODIS-Aqua and 27.3-30% for SeaWiFS. Because of the narrower range of CDOM spectral slope values, the MAPD for the MLR S275:295 and QAA-based S300:600 algorithms are much lower, at 9.9% and 8.3% for SeaWiFS, respectively, and 8.7% and 6.3% for MODIS, respectively. Seasonal and spatial MODIS-Aqua and SeaWiFS distributions of aCDOM, S275:295 and S300:600 processed with these algorithms are consistent with field measurements and the processes that impact CDOM levels along the continental shelf of the northeastern U.S. Several satellite data processing factors correlate with higher uncertainty in satellite retrievals of aCDOM, S275:295 and S300:600 within the coastal ocean, including solar zenith angle, sensor viewing angle, and atmospheric products applied for atmospheric corrections. Algorithms that include ultraviolet Rrs bands provide a better fit to field measurements than
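
    The sketch below illustrates the MLR flavour of such algorithms: a least-squares regression of aCDOM on several log-transformed Rrs bands, validated with MAPD. The band set, the log transform and the synthetic data are assumptions, not the published coefficients.

```python
"""Illustrative sketch: multiple linear regression of aCDOM(412) on log-transformed
Rrs bands, evaluated with the mean absolute percent difference (MAPD)."""
import numpy as np

rng = np.random.default_rng(5)
n = 120
# Synthetic stand-ins for field measurements: Rrs at four visible bands and aCDOM(412).
rrs = rng.uniform(0.001, 0.012, size=(n, 4))            # sr^-1, e.g. 412/443/490/555 nm
true_coef = np.array([-0.8, -0.3, 0.2, 0.9])
acdom_412 = np.exp(np.log(rrs) @ true_coef - 2.0 + rng.normal(0, 0.05, n))

def fit_mlr(rrs_bands, acdom):
    """Least-squares fit of ln(aCDOM) = b0 + sum_i b_i * ln(Rrs_i)."""
    design = np.column_stack([np.ones(len(acdom)), np.log(rrs_bands)])
    coef, *_ = np.linalg.lstsq(design, np.log(acdom), rcond=None)
    return coef

def predict(coef, rrs_bands):
    design = np.column_stack([np.ones(len(rrs_bands)), np.log(rrs_bands)])
    return np.exp(design @ coef)

if __name__ == "__main__":
    train, test = slice(0, 80), slice(80, None)
    coef = fit_mlr(rrs[train], acdom_412[train])
    pred = predict(coef, rrs[test])
    mapd = 100.0 * np.mean(np.abs(pred - acdom_412[test]) / acdom_412[test])
    print("MAPD on held-out samples: %.1f%%" % mapd)
```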

  13. Algorithm-based method for detection of blood vessels in breast MRI for development of computer-aided diagnosis.

    Science.gov (United States)

    Lin, Muqing; Chen, Jeon-Hor; Nie, Ke; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2009-10-01

    To develop a computer-based algorithm for detecting blood vessels that appear in breast dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI), and to evaluate the improvement in reducing the number of vascular pixels that are labeled by computer-aided diagnosis (CAD) systems as being suspicious of malignancy. The analysis was performed in 34 cases. The algorithm applied a filter bank based on wavelet transform and the Hessian matrix to detect linear structures as blood vessels on a two-dimensional maximum intensity projection (MIP). The vessels running perpendicular to the MIP plane were then detected based on the connectivity of enhanced pixels above a threshold. The nonvessel enhancements were determined and excluded based on their morphological properties, including those showing scattered small segment enhancements or nodular or planar clusters. The detected vessels were first converted to a vasculature skeleton by thinning and subsequently compared to the vascular track manually drawn by a radiologist. When evaluating the performance of the algorithm in identifying vascular tissue, the correct-detection rate refers to pixels identified by both the algorithm and radiologist, while the incorrect-detection rate refers to pixels identified by only the algorithm, and the missed-detection rate refers to pixels identified only by the radiologist. From 34 analyzed cases the median correct-detection rate was 85.6% (mean 84.9% +/- 7.8%), the incorrect-detection rate was 13.1% (mean 15.1% +/- 7.8%), and the missed-detection rate was 19.2% (mean 21.3% +/- 12.8%). When detected vessels were excluded in the hot-spot color-coding of the CAD system, they could reduce the labeling of vascular vessels in 2.6%-68.6% of hot-spot pixels (mean 16.6% +/- 15.9%). The computer algorithm-based method can detect most large vessels and provide an effective means in reducing the labeling of vascular pixels as suspicious on a DCE-MRI CAD system. This algorithm may improve the
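
    As an illustration of Hessian-based detection of linear structures on a 2-D MIP, the sketch below computes a single-scale, Frangi-style vesselness measure from Gaussian second derivatives; the scale and response parameters are assumptions, and the published method additionally uses a wavelet filter bank and morphological post-processing.

```python
"""Illustrative sketch: single-scale Frangi-style vesselness on a 2-D projection."""
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness(image: np.ndarray, sigma: float = 2.0,
               beta: float = 0.5, c: float = 15.0) -> np.ndarray:
    """Bright-tubular-structure response based on Hessian eigenvalues."""
    hxx = gaussian_filter(image, sigma, order=(0, 2))
    hyy = gaussian_filter(image, sigma, order=(2, 0))
    hxy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 symmetric Hessian at every pixel.
    tmp = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy ** 2)
    l1 = 0.5 * (hxx + hyy + tmp)
    l2 = 0.5 * (hxx + hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)                  # sort so |l1| <= |l2|
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)          # blob vs. line ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)                  # second-order structure strength
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1.0 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                                 # keep bright ridges only
    return v

if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[60:64, :] = 100.0                           # synthetic bright "vessel"
    img += np.random.default_rng(6).normal(0, 2.0, img.shape)
    v = vesselness(img)
    print("mean vesselness on/off the vessel: %.3f / %.3f"
          % (v[61, :].mean(), v[20, :].mean()))
```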

  14. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.

    Science.gov (United States)

    Gulshan, Varun; Peng, Lily; Coram, Marc; Stumpe, Martin C; Wu, Derek; Narayanaswamy, Arunachalam; Venugopalan, Subhashini; Widner, Kasumi; Madams, Tom; Cuadros, Jorge; Kim, Ramasamy; Raman, Rajiv; Nelson, Philip C; Mega, Jessica L; Webster, Dale R

    2016-12-13

    Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Deep learning-trained algorithm. The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0

  15. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    Science.gov (United States)

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour or microscopic fluctuations of the node of Ranvier's voltage-dependent sodium channels of which the intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: it is independent of membrane voltage, and it is unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining its low computational cost and ease of implementation compared to other conductance and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter and a decrease in latency with an increase in stimulus intensity.

  16. Comparative Analysis of LEON 3 and NIOS II Processor Development Environment: Case Study Rijndael’s Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Meghana Hasamnis

    2012-06-01

    Full Text Available Embedded system design is becoming more complex day by day, combined with reduced time-to-market deadlines. Due to the constraints and complexity in the design of embedded systems, it incorporates a hardware/software co-design methodology. An embedded system is a combination of hardware and software parts integrated together on a common platform. A soft-core processor, which is a hardware description language (HDL) model of a specific processor (CPU), can be customized for any application and synthesized for an FPGA target. This paper gives a comparative analysis of the development environments for embedded systems using the LEON3 and NIOS II processors, both soft-core processors. LEON3 is an open-source processor and NIOS II is a commercial processor. The case study under consideration is Rijndael’s encryption algorithm (AES), a standard encryption algorithm used to encrypt large volumes of data for security. Using the co-design methodology, the algorithm is implemented on the two different platforms, one using the open-source and the other using the commercial processor, and the comparative results of the two platforms are stated in terms of performance parameters. The algorithm is partitioned into hardware and software parts and integrated on a common platform.

  17. Performance of MODIS Thermal Emissive Bands On-orbit Calibration Algorithms

    Science.gov (United States)

    Xiong, Xiaoxiong; Chang, T.

    2009-01-01

    The on-board blackbody (BB) serves as the thermal calibration source and the space view (SV) provides measurements for the sensor's background and offsets. The MODIS on-board BB is a v-grooved plate with its temperature measured using 12 platinum resistive thermistors (PRTs) uniformly embedded in the BB substrate. All the BB thermistors were characterized pre-launch with reference to NIST temperature standards. Unlike typical BB operations in many heritage sensors, which have no temperature control capability, the MODIS on-board BB can be operated at any temperature between instrument ambient (about 270 K) and 315 K and can also be varied continuously within this range. This feature has significantly enhanced MODIS' capability of tracking and updating the TEB nonlinear calibration coefficients over its entire mission. Following a brief description of the MODIS TEB on-orbit calibration methodologies and the on-board BB operational activities, this paper provides a comprehensive performance assessment of the MODIS TEB quadratic calibration algorithm. It examines the scan-by-scan, orbit-by-orbit, daily, and seasonal variations of detector responses and the associated impact due to changes in the CFPA and instrument temperatures. Specifically, this paper analyzes the contribution of each individual thermal emissive source term (BB, scan cavity, and scan mirror) and the impact on Level 1B data product quality due to pre-launch and on-orbit calibration uncertainties. A comparison of Terra and Aqua TEB on-orbit performance, lessons learned, and suggestions for future improvements are also presented.

  18. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that

  19. Development and Beam Tests of an Automatic Algorithm for Alignment of LHC Collimators with Embedded BPMs

    CERN Document Server

    Valentino, G; Gasior, M; Mirarchi, D; Nosych, A A; Redaelli, S; Salvachua, B; Assmann, R W; Sammut, N

    2013-01-01

    Collimators with embedded Beam Position Monitor (BPM) buttons will be installed in the LHC during the upcoming long shutdown period. During the subsequent operation, the BPMs will allow the collimator jaws to be kept centered around the beam trajectory. In this manner, the best possible beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation, as the BPM measurements are affected by non-linearities, which vary with the distance between opposite buttons, as well as the difference between the beam and the jaw centers. The successful test results, as well as some considerations for eventual operation in the LHC are also presented.
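
    The sketch below illustrates the successive-approximation idea with an invented, mildly non-linear BPM response model: the jaws are repeatedly moved by the linearly estimated offset until the difference signal vanishes. The response model, gains and tolerance are assumptions, not the LHC implementation.

```python
"""Illustrative sketch: successive-approximation centring of collimator jaws
around the beam using an assumed non-linear BPM difference signal."""
import numpy as np

BEAM_CENTRE = 0.37   # mm, unknown to the algorithm

def bpm_reading(jaw_left: float, jaw_right: float) -> float:
    """Hypothetical BPM difference signal: zero when the jaws are centred on the
    beam, with a gap-dependent, non-linear scale factor."""
    gap = jaw_left - jaw_right
    offset = 0.5 * (jaw_left + jaw_right) - BEAM_CENTRE
    return np.tanh(offset / (0.2 * gap)) * gap       # non-linear in offset and gap

def centre_jaws(gap: float = 4.0, tol: float = 1e-3, max_iter: int = 20):
    """Move both jaws by the linearly estimated offset until the signal vanishes."""
    left, right = gap / 2.0, -gap / 2.0              # start centred on the nominal axis
    for i in range(max_iter):
        estimated_offset = bpm_reading(left, right) / gap    # first-order inversion
        if abs(estimated_offset) < tol:
            return left, right, i
        left -= estimated_offset                     # successive approximation step
        right -= estimated_offset
    return left, right, max_iter

if __name__ == "__main__":
    left, right, iterations = centre_jaws()
    print("jaw centre after %d iterations: %.4f mm (beam at %.4f mm)"
          % (iterations, 0.5 * (left + right), BEAM_CENTRE))
```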

  20. Development of a scatter search optimization algorithm for BWR fuel lattice design

    Energy Technology Data Exchange (ETDEWEB)

    Francois, J.L.; Martin-del-Campo, C. [Mexico Univ. Nacional Autonoma, Facultad de Ingenieria (Mexico); Morales, L.B.; Palomera, M.A. [Mexico Univ. Nacional Autonoma, Instituto de Investigaciones en Matematicas Aplicadas y Sistemas, D.F. (Mexico)

    2005-07-01

    A basic Scatter Search (SS) method, applied to the optimization of radial enrichment and gadolinia distributions for BWR fuel lattices, is presented in this paper. Scatter search is considered as an evolutionary algorithm that constructs solutions by combining others. The goal of this methodology is to enable the implementation of solution procedures that can derive new solutions from combined elements. The main mechanism for combining solutions is such that a new solution is created from the strategic combination of two other solutions to explore the solutions' space. Results show that the Scatter Search method is an efficient optimization algorithm applied to the BWR design and optimization problem. Its main features are based on the use of heuristic rules since the beginning of the process, which allows directing the optimization process to the solution, and to use the diversity mechanism in the combination operator, which allows covering the search space in an efficient way. (authors)

  1. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    Science.gov (United States)

    Gordon, Howard R.; Conboy, B. (Technical Monitor)

    1999-01-01

    Significant accomplishments made during the present reporting period include: 1) Installed spectral optimization algorithm in the SeaDas image processing environment and successfully processed SeaWiFS imagery. The results were superior to the standard SeaWiFS algorithm (the MODIS prototype) in a turbid atmosphere off the US East Coast, but similar in a clear (typical) oceanic atmosphere; 2) Inverted ACE-2 LIDAR measurements coupled with sun photometer-derived aerosol optical thickness to obtain the vertical profile of aerosol optical thickness. The profile was validated with simultaneous aircraft measurements; and 3) Obtained LIDAR and CIMEL measurements of typical maritime and mineral dust-dominated marine atmosphere in the U.S. Virgin Islands. Contemporaneous SeaWiFS imagery were also acquired.

  2. Proposing the new Algorithm and Technique Development for Integrating Web Table Extraction and Building a Mashup

    Directory of Open Access Journals (Sweden)

    Rudy A.G. Gultom

    2011-01-01

    Full Text Available Problem statement: Nowadays, various types of data in web table can be easily extracted from the Internet, although not all of web tables are relevant to the users. As we may know, most web pages are in unstructured HTML format, making web table extraction process very time consuming and costly. HTML format only focuses on the presentation, not based on the database system. Therefore, users need a tool in dealing with that process. Approach: This research proposed an approach for implementing web table extraction and making a Mashup from HTML web pages using Xtractorz application. It is also discussed on how to collaborate and integrate a web table extraction process in the stage of building a Mashup, i.e., Data Retrieval, Data Source Modeling, Data Cleaning/ Filtering, Data Integration and Data Visualization. The main issue lies in stage of data modeling creation, in which Xtractorz must be able to automatically render Document Object Model (DOM tree in accordance to HTML tag or code of the web page from which the table is extracted. To overcome that, the Xtractorz is equipped with algorithm and rules so it can enable to specifically analyze the HTML tags and to extract the data into a new table format. The algorithm is created by using recursive technique within a user-friendly GUI of Xtractorz. Results: The approach was evaluated by conducting experiment using Xtractorz and other similar applications, such as RoboMaker and Karma. The result of experiment showed that Xtractorz is more efficient in completing the experiment tasks, since Xtractorz has fewer steps to complete the whole tasks. Conclusion: Xtractorz can give a positive contribution in terms of algorithm technique and a new approach method to web table extraction process and making a Mashup, where the core algorithm can extracts web data tables using recursive technique while rendering the DOM tree model automatically.

  3. Clinical guidelines development and usage: a critical insight and literature review: thyroid disease diagnostic algorithms.

    Science.gov (United States)

    Murgić, Jure; Salopek, Daniela; Prpić, Marin; Jukić, Tomislav; Kusić, Zvonko

    2008-12-01

    Clinical guidelines have been increasingly used in medicine. They represent a system of recommendations for the conduction of specific procedures used in fields from public health to different diagnostic and therapeutic procedures in clinical medicine. Guidelines are designed to facilitate to medical practitioners the adoption, evaluation and application of an increasing body of evidence and arising number of expert opinions regarding the presently best treatment and to help in delivering proper decision for the management of a patient or condition. Clinical guidelines represent a part of complementary activity by which research is implemented into praxis, standards are defined and clinical excellence is promoted in all health care fields. There are specific conditions which quality guidelines should meet. First of all, they need to be founded on comprehensive literature review, apart from clinical studies and trials in the target field. Also, there are more systems for analyzing and grading the strength of clinical evidence and the level of recommendation emerging from it. Algorithms are used to organize and summarize guidelines. The algorithm itself has a form of an informatic record and a logical flow. Algorithms, especially in case of clinical uncertainty, must be used for the improvement of health care, increasing it's availability and integration of the newest scientific knowledge. They should have an important role in the health care rationalisation, fight against non-rational diagnostics manifested as diagnostic procedures with no clinical indications, it's unnecessary repetition and wrong sequence. Several diagnostic algorithms used in the field of thyroid diseases are presented, since they have been proved to be of great use.

  4. Development of a Response Planner using the UCT Algorithm for Cyber Defense

    Science.gov (United States)

    2013-03-01

    ... network state based on the network operator's preference is the second important area to building a cyber defense planner. Networks vary in composition and
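
    Since the surviving abstract text above is fragmentary, the sketch below is only a generic, textbook illustration of the UCB1 child-selection rule at the heart of UCT, with invented cyber-defense action names and rollout rewards; it is not the thesis planner.

```python
"""Generic UCT illustration: the UCB1 rule used to pick which child action
of a search-tree node to explore next (action names and rewards are invented)."""
import math
import random

class Node:
    def __init__(self, action):
        self.action = action
        self.visits = 0
        self.total_reward = 0.0

def uct_select(children, parent_visits, exploration=1.4):
    """Pick the child maximising average reward + exploration bonus (UCB1)."""
    def score(node):
        if node.visits == 0:
            return float("inf")                       # always try unvisited actions first
        mean = node.total_reward / node.visits
        bonus = exploration * math.sqrt(math.log(parent_visits) / node.visits)
        return mean + bonus
    return max(children, key=score)

if __name__ == "__main__":
    random.seed(7)
    children = [Node(a) for a in ("block_ip", "patch_host", "isolate_subnet")]
    hidden_mean = {"block_ip": 0.3, "patch_host": 0.7, "isolate_subnet": 0.5}
    for step in range(1, 301):
        child = uct_select(children, parent_visits=step)
        reward = random.random() < hidden_mean[child.action]   # Bernoulli rollout
        child.visits += 1
        child.total_reward += float(reward)
    for c in children:
        print(f"{c.action:15s} visits={c.visits:3d} mean reward={c.total_reward / c.visits:.2f}")
```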

  5. Development and evaluation of a data-adaptive alerting algorithm for univariate temporal biosurveillance data.

    Science.gov (United States)

    Elbert, Yevgeniy; Burkom, Howard S

    2009-11-20

    This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
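
    The sketch below illustrates the overall pattern: additive Holt-Winters smoothing of a daily count series with a 7-day season, plus a control-chart style alert when an observation exceeds its one-step-ahead forecast by k residual standard deviations. The smoothing constants, threshold and synthetic outbreak are assumptions, not the paper's tuned values.

```python
"""Illustrative sketch: Holt-Winters forecasts driving a control-chart alert."""
import numpy as np

def holt_winters_alerts(y, alpha=0.4, beta=0.05, gamma=0.2, period=7, k=3.0):
    y = np.asarray(y, dtype=float)
    level, trend = y[:period].mean(), 0.0
    season = y[:period] - level                      # initial seasonal offsets
    residuals, alerts = [], []
    for t in range(period, len(y)):
        forecast = level + trend + season[t % period]
        resid = y[t] - forecast
        sigma = np.std(residuals) if len(residuals) >= period else None
        if sigma and resid > k * sigma:
            alerts.append(t)                         # control-chart style alarm
        residuals.append(resid)
        # Additive Holt-Winters updates.
        new_level = alpha * (y[t] - season[t % period]) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % period] = gamma * (y[t] - new_level) + (1 - gamma) * season[t % period]
        level = new_level
    return alerts

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    days = np.arange(140)
    counts = 50 + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 3, days.size)
    counts[120:124] += np.array([10, 20, 25, 20])    # injected outbreak signal
    print("alert days:", holt_winters_alerts(counts))
```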

  6. Selection and collection of multi parameter physiological data for cardiac rhythm diagnostic algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Bostock, J.; Weller, P. [School of Informatics, City University London, London EC1V 0HB (United Kingdom); Cooklin, M., E-mail: jbostock1@msn.co [Cardiovascular Directorate, Guy' s and St. Thomas' NHS Foundation Trust, London, SE1 7EH (United Kingdom)

    2010-07-01

    Automated diagnostic algorithms are used in implantable cardioverter-defibrillators (ICDs) to detect abnormal heart rhythms. Algorithms can misdiagnose, and improved specificity is needed to prevent inappropriate therapy. Knowledge engineering (KE) and artificial intelligence (AI) could improve this. A pilot study of KE was performed with an artificial neural network (ANN) as the AI system. A case note review analysed arrhythmic events stored in patients' ICD memory. 13.2% of patients received inappropriate therapy. The best ICD algorithm had sensitivity 1.00 and specificity 0.69 (p<0.001 versus gold standard). A subset of data was used to train and test an ANN. A feed-forward, back-propagation network with 7 inputs, a 4-node hidden layer and 1 output had sensitivity 1.00 and specificity 0.71 (p<0.001). A prospective study was performed using KE to list arrhythmias, factors and indicators for which measurable parameters were evaluated, with results reviewed by a domain expert. Waveforms from electrodes in the heart and thoracic bio-impedance, temperature and motion data were collected from 65 patients during cardiac electrophysiological studies. Five incomplete datasets were due to technical failures. We concluded that KE successfully guided selection of parameters, that the ANN produced a usable system, and that complex data collection carries a greater risk of technical failure, leading to data loss.

  7. Algorithm Development for Land Surface Temperature Retrieval: Application to Chinese Gaofen-5 Data

    Directory of Open Access Journals (Sweden)

    Yuanyuan Chen

    2017-02-01

    Full Text Available Land surface temperature (LST) is a key variable in the study of the energy exchange between the land surface and the atmosphere. Among the different methods proposed to estimate LST, the quadratic split-window (SW) method has achieved considerable popularity. This method works well when the emissivities are high in both channels. Unfortunately, it performs poorly for low land surface emissivities (LSEs). To solve this problem, assuming that the LSE is known, the constant in the quadratic SW method was calculated by keeping the other coefficients the same as those obtained for the black body condition. This procedure transfers the emissivity effect to the constant. The result demonstrated that the constant was influenced by both the atmospheric water vapour content (W) and the atmospheric temperature in the bottom layer (T0). To parameterize the constant, an exponential approximation between W and T0 was used. An LST retrieval algorithm was proposed. The error of the proposed algorithm was RMSE = 0.70 K. Sensitivity analysis showed that, considering NEΔT = 0.2 K, 20% uncertainty in W and 1% uncertainties in the channel mean emissivity and the channel emissivity difference, the RMSE was 1.29 K. Compared with the AST 08 product, the proposed algorithm underestimated LST by about 0.8 K for both study areas when ASTER L1B data were used as a proxy for Gaofen-5 (GF-5) satellite data. The GF-5 satellite is scheduled to be launched in 2017.
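
    The sketch below shows only the structure of a quadratic split-window retrieval with an emissivity-dependent constant parameterized by W and T0; all coefficients are placeholders, not the published GF-5 values.

```python
"""Structural sketch of a quadratic split-window LST retrieval (all coefficients
below are illustrative placeholders, not the published algorithm's values)."""

def split_window_lst(t11: float, t12: float, w: float, t0: float) -> float:
    """LST = c(W, T0) + a1*T11 + a2*(T11 - T12) + a3*(T11 - T12)**2, where the
    constant c absorbs the emissivity effect and depends on W and T0."""
    a1, a2, a3 = 1.02, 1.80, 0.35           # placeholder band coefficients
    # Placeholder emissivity-dependent constant, parameterised by W (g/cm^2) and T0 (K):
    c = -4.0 + 0.8 * w - 0.01 * (t0 - 290.0)
    dt = t11 - t12
    return c + a1 * t11 + a2 * dt + a3 * dt ** 2

if __name__ == "__main__":
    # Example brightness temperatures (K) for the two split-window channels.
    print("LST estimate: %.2f K" % split_window_lst(t11=295.0, t12=293.5, w=2.0, t0=294.0))
```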

  8. Development and validation of an automated operational modal analysis algorithm for vibration-based monitoring and tensile load estimation

    Science.gov (United States)

    Rainieri, Carlo; Fabbrocino, Giovanni

    2015-08-01

    In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure lifespan. However, the lack of automated modal identification and tracking procedures has been for long a relevant drawback to the extensive application of the above-mentioned techniques in the engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational efforts and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous

  9. Development of an algorithm to predict serum vitamin D levels using a simple questionnaire based on sunlight exposure.

    Science.gov (United States)

    Vignali, Edda; Macchia, Enrico; Cetani, Filomena; Reggiardo, Giorgio; Cianferotti, Luisella; Saponaro, Federica; Marcocci, Claudio

    2017-01-01

    Sun exposure is the main determinant of vitamin D production. The aim of this study was to develop an algorithm to assess individual vitamin D status, independently of serum 25(OH)D measurement, using a simple questionnaire, mostly relying upon sunlight exposure, which might help select subjects requiring serum 25(OH)D measurement. Six hundred and twenty adult subjects living in a mountain village in Southern Italy, located at 954 m above sea level and at a latitude of 40°50'11.76″N, were asked to fill in the questionnaire in two different periods of the year: August 2010 and March 2011. Seven predictors were considered: month of investigation, age, sex, BMI, average daily sunlight exposure, beach holidays in the past 12 months, and frequency of going outdoors. The statistical model assumes four classes of serum 25(OH)D concentrations: ≤10, 10-19.9, 20-29.9, and ≥30 ng/ml. The algorithm was developed using a two-step procedure. In Step 1, the linear regression equation was defined in 385 randomly selected subjects. In Step 2, the predictive ability of the regression model was tested in the remaining 235 subjects. Seasonality, daily sunlight exposure and beach holidays in the past 12 months accounted for 27.9%, 13.5%, and 6.4% of the explained variance in predicting vitamin D status, respectively. The algorithm performed extremely well: 212 of 235 (90.2%) subjects were assigned to the correct vitamin D status. In conclusion, our pilot study demonstrates that an algorithm to estimate vitamin D status can be developed using a simple questionnaire based on sunlight exposure.

  10. Development and validation of a segmentation-free polyenergetic algorithm for dynamic perfusion computed tomography.

    Science.gov (United States)

    Lin, Yuan; Samei, Ehsan

    2016-07-01

    Dynamic perfusion imaging can provide the morphologic details of the scanned organs as well as the dynamic information of blood perfusion. However, due to the polyenergetic property of the x-ray spectra, beam hardening effect results in undesirable artifacts and inaccurate CT values. To address this problem, this study proposes a segmentation-free polyenergetic dynamic perfusion imaging algorithm (pDP) to provide superior perfusion imaging. Dynamic perfusion usually is composed of two phases, i.e., a precontrast phase and a postcontrast phase. In the precontrast phase, the attenuation properties of diverse base materials (e.g., in a thorax perfusion exam, base materials can include lung, fat, breast, soft tissue, bone, and metal implants) can be incorporated to reconstruct artifact-free precontrast images. If patient motions are negligible or can be corrected by registration, the precontrast images can then be employed as a priori information to derive linearized iodine projections from the postcontrast images. With the linearized iodine projections, iodine perfusion maps can be reconstructed directly without the influence of various influential factors, such as iodine location, patient size, x-ray spectrum, and background tissue type. A series of simulations were conducted on a dynamic iodine calibration phantom and a dynamic anthropomorphic thorax phantom to validate the proposed algorithm. The simulations with the dynamic iodine calibration phantom showed that the proposed algorithm could effectively eliminate the beam hardening effect and enable quantitative iodine map reconstruction across various influential factors. The error range of the iodine concentration factors ([Formula: see text]) was reduced from [Formula: see text] for filtered back-projection (FBP) to [Formula: see text] for pDP. The quantitative results of the simulations with the dynamic anthropomorphic thorax phantom indicated that the maximum error of iodine concentrations can be reduced from

  11. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    How does awareness of algorithms affect people's use of these platforms, if at all? To help answer this question, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself.

  12. Technical note: Boundary layer height determination from lidar for improving air pollution episode modeling: development of new algorithm and evaluation

    Science.gov (United States)

    Yang, Ting; Wang, Zifa; Zhang, Wei; Gbaguidi, Alex; Sugimoto, Nobuo; Wang, Xiquan; Matsui, Ichiro; Sun, Yele

    2017-05-01

    Predicting air pollution events in the low atmosphere over megacities requires a thorough understanding of the tropospheric dynamics and chemical processes, involving, notably, continuous and accurate determination of the boundary layer height (BLH). Through intensive observation experiments over Beijing (China) and an exhaustive evaluation of existing algorithms applied to BLH determination, persistent critical limitations are noticed, in particular during polluted episodes. Basically, under weak thermal convection with high aerosol loading, none of the retrieval algorithms is able to fully capture the diurnal cycle of the BLH due to insufficient vertical mixing of pollutants in the boundary layer associated with the impact of gravity waves on the tropospheric structure. Consequently, a new approach based on gravity wave theory (the cubic root gradient method: CRGM) is developed to overcome such weakness and accurately reproduce the fluctuations of the BLH under various atmospheric pollution conditions. Comprehensive evaluation of CRGM highlights its high performance in determining BLH from lidar. In comparison with the existing retrieval algorithms, CRGM potentially reduces related computational uncertainties and errors from BLH determination (strong increase of the correlation coefficient from 0.44 to 0.91 and significant decrease of the root mean square error from 643 to 142 m). This newly developed technique is expected to contribute to improving the accuracy of air quality modeling and forecasting systems.
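
    The record above names the technique (a gradient method applied to the cube root of the lidar signal) without giving its exact formulation. A minimal sketch of that general idea, on synthetic data and with no smoothing or quality control, might look like this:

```python
import numpy as np

def blh_cubic_root_gradient(height_m, rcs):
    """Estimate boundary layer height as the altitude of the most negative
    vertical gradient of the cube root of the range-corrected lidar signal.
    A simplified reading of the CRGM idea; smoothing and quality checks omitted."""
    signal = np.cbrt(np.asarray(rcs, dtype=float))
    grad = np.gradient(signal, height_m)
    return height_m[np.argmin(grad)]

# Synthetic profile: strong aerosol load below ~1200 m, cleaner air above.
z = np.arange(100.0, 3000.0, 10.0)
profile = np.where(z < 1200.0, 1.0, 0.2) + 0.01 * np.random.rand(z.size)
print(blh_cubic_root_gradient(z, profile))  # roughly 1200 m
```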

  13. Development and comparative assessment of Raman spectroscopic classification algorithms for lesion discrimination in stereotactic breast biopsies with microcalcifications.

    Science.gov (United States)

    Dingari, Narahara Chari; Barman, Ishan; Saha, Anushree; McGee, Sasha; Galindo, Luis H; Liu, Wendy; Plecha, Donna; Klein, Nina; Dasari, Ramachandra Rao; Fitzmaurice, Maryann

    2013-04-01

    Microcalcifications are an early mammographic sign of breast cancer and a target for stereotactic breast needle biopsy. Here, we develop and compare different approaches for developing Raman classification algorithms to diagnose invasive and in situ breast cancer, fibrocystic change and fibroadenoma that can be associated with microcalcifications. In this study, Raman spectra were acquired from tissue cores obtained from fresh breast biopsies and analyzed using a constituent-based breast model. Diagnostic algorithms based on the breast model fit coefficients were devised using logistic regression, C4.5 decision tree classification, k-nearest neighbor (k-NN) and support vector machine (SVM) analysis, and subjected to leave-one-out cross validation. The best performing algorithm was based on SVM analysis (with radial basis function), which yielded a positive predictive value of 100% and negative predictive value of 96% for cancer diagnosis. Importantly, these results demonstrate that Raman spectroscopy provides adequate diagnostic information for lesion discrimination even in the presence of microcalcifications, which to the best of our knowledge has not been previously reported.
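
    For readers unfamiliar with the best-performing approach named above (an RBF-kernel SVM evaluated with leave-one-out cross validation), a minimal scikit-learn sketch on synthetic stand-in features is shown below; it is not the authors' code or data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Rows: breast-model fit coefficients per spectrum (synthetic stand-ins here);
# y: lesion label (e.g., 0 = benign, 1 = cancer).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())
```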

  14. Drowsiness/alertness algorithm development and validation using synchronized EEG and cognitive performance to individualize a generalized model.

    Science.gov (United States)

    Johnson, Robin R; Popovic, Djordje P; Olmstead, Richard E; Stikic, Maja; Levendowski, Daniel J; Berka, Chris

    2011-05-01

    A great deal of research over the last century has focused on drowsiness/alertness detection, as fatigue-related physical and cognitive impairments pose a serious risk to public health and safety. Available drowsiness/alertness detection solutions are unsatisfactory for a number of reasons: (1) lack of generalizability, (2) failure to address individual variability in generalized models, and/or (3) lack of a portable, un-tethered application. The current study aimed to address these issues, and determine if an individualized electroencephalography (EEG) based algorithm could be defined to track performance decrements associated with sleep loss, as this is the first step in developing a field deployable drowsiness/alertness detection system. The results indicated that an EEG-based algorithm, individualized using a series of brief "identification" tasks, was able to effectively track performance decrements associated with sleep deprivation. Future development will address the need for the algorithm to predict performance decrements due to sleep loss, and provide field applicability.

  15. Watershed model calibration framework developed using an influence coefficient algorithm and a genetic algorithm and analysis of pollutant discharge characteristics and load reduction in a TMDL planning area.

    Science.gov (United States)

    Cho, Jae Heon; Lee, Jong Ho

    2015-11-01

    Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate the distributed models. The optimization problem used to minimize the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to a Gomakwoncheon watershed located in an area that presents a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality in this area during the frequent heavy rainfall that occurs during the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction

  16. Development of computational algorithms for quantification of pulmonary structures; Desenvolvimento de algoritmos computacionais para quantificacao de estruturas pulmonares

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A., E-mail: marceladeoliveira@ig.com.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Hospital das Clinicas. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2012-12-15

    High-resolution computed tomography (HRCT) has become the imaging diagnostic exam most commonly used for the evaluation of the sequelae of paracoccidioidomycosis. Subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification. Computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. Thus, this research proposes the development of algorithms in the MATLAB® computing environment that can semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists in selecting a region of interest (ROI) and, through the use of masks, density filters and morphological operators, obtaining a quantification of the injured area relative to the area of a healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching an agreement of 80% for emphysema and 58% for fibrosis. (author)

  17. Analysis and Classification of Stride Patterns Associated with Children Development Using Gait Signal Dynamics Parameters and Ensemble Learning Algorithms.

    Science.gov (United States)

    Wu, Meihong; Liao, Lifang; Luo, Xin; Ye, Xiaoquan; Yao, Yuchen; Chen, Pinnan; Shi, Lei; Huang, Hui; Wu, Yunfeng

    2016-01-01

    Measuring stride variability and dynamics in children is useful for the quantitative study of gait maturation and neuromotor development in childhood and adolescence. In this paper, we computed the sample entropy (SampEn) and average stride interval (ASI) parameters to quantify the stride series of 50 gender-matched children participants in three age groups. We also normalized the SampEn and ASI values by leg length and body mass for each participant, respectively. Results show that the original and normalized SampEn values consistently decrease across the three age groups (Mann-Whitney U test). Ensemble learning algorithms were then used to effectively distinguish the children's gait patterns. These ensemble learning algorithms both provided excellent gait classification results in terms of overall accuracy (≥90%), recall (≥0.8), and precision (≥0.8077).
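
    A compact, illustrative implementation of the two reported stride measures, sample entropy (SampEn) and average stride interval (ASI), is sketched below on synthetic stride data; the parameter choices (m = 2, r = 0.2·SD) are common defaults, not necessarily those used in the paper.

```python
import numpy as np

def sample_entropy(series, m=2, r=0.2):
    """Compact SampEn(m, r) of a 1-D series; r is a fraction of the series SD."""
    x = np.asarray(series, dtype=float)
    tol = r * x.std()

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= tol) - 1  # exclude the self-match
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

strides = np.random.default_rng(1).normal(1.05, 0.03, 200)  # stride intervals (s)
print("SampEn:", sample_entropy(strides), "ASI:", strides.mean())
```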

  18. JOURNAL CLUB: Plagiarism in Manuscripts Submitted to the AJR: Development of an Optimal Screening Algorithm and Management Pathways.

    Science.gov (United States)

    Taylor, Donna B

    2017-04-01

    The objective of this study was to investigate the incidence of plagiarism in a sample of manuscripts submitted to the AJR using CrossCheck, develop an algorithm to identify significant plagiarism, and formulate management pathways. A sample of 110 of 1610 (6.8%) manuscripts submitted to AJR in 2014 in the categories of Original Research or Review were analyzed using CrossCheck and manual assessment. The overall similarity index (OSI), highest similarity score from a single source, whether duplication was from single or multiple origins, journal section, and presence or absence of referencing the source were recorded. The criteria outlined by the International Committee of Medical Journal Editors were the reference standard for identifying manuscripts containing plagiarism. Statistical analysis was used to develop a screening algorithm to maximize sensitivity and specificity for the detection of plagiarism. Criteria for defining the severity of plagiarism and management pathways based on the severity of the plagiarism were determined. Twelve manuscripts (10.9%) contained plagiarism. Nine had an OSI excluding quotations and references of less than 20%. In seven, the highest similarity score from a single source was less than 10%. The highest similarity score from a single source was the work of the same author or authors in nine. Common sections for duplication were the Materials and Methods, Discussion, and abstract. Referencing the original source was lacking in 11. Plagiarism was undetected at submission in five of these 12 articles; two had been accepted for publication. The most effective screening algorithm was to average the OSI including quotations and references and the highest similarity score from a single source and to submit manuscripts with an average value of more than 12% for further review. The current methods for detecting plagiarism are suboptimal. A new screening algorithm is proposed.
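
    The proposed screening rule is simple enough to state in a few lines: average the OSI (including quotations and references) with the highest single-source similarity score, and refer the manuscript for further review if the average exceeds 12%. An illustrative sketch follows; the example values are made up.

```python
def flag_for_review(osi_incl_quotes_refs, highest_single_source, threshold=12.0):
    """Screening rule described in the study: average the overall similarity index
    (including quotations and references) with the highest single-source score,
    and refer the manuscript for further review if the average exceeds 12%."""
    return (osi_incl_quotes_refs + highest_single_source) / 2.0 > threshold

print(flag_for_review(18.0, 9.0))   # True  -> send for manual review
print(flag_for_review(10.0, 6.0))   # False -> no further action
```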

  19. Robust integration schemes for generalized viscoplasticity with internal-state variables. Part 2: Algorithmic developments and implementation

    Science.gov (United States)

    Li, Wei; Saleeb, Atef F.

    1995-01-01

    This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state-of-the-art in metal viscoplasticity, are considered in applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present second part of

  20. SeaWiFS Technical Report Series. Volume 42; Satellite Primary Productivity Data and Algorithm Development: A Science Plan for Mission to Planet Earth

    Science.gov (United States)

    Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor)

    1998-01-01

    Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE) are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group, of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.

  1. Development of an apnea detection algorithm based on temporal analysis of thoracic respiratory effort signal

    Science.gov (United States)

    Dell'Aquila, C. R.; Cañadas, G. E.; Correa, L. S.; Laciar, E.

    2016-04-01

    This work describes the design of an algorithm for detecting apnea episodes based on analysis of the thoracic respiratory effort signal. Inspiration and expiration times, and the amplitude range of the respiratory cycle, were evaluated. For the range analysis, the standard deviation was computed over temporal windows of the respiratory signal. Performance was validated on 8 records of the Apnea-ECG database, which includes annotations of apnea episodes. The results are: sensitivity (Se) 73%, specificity (Sp) 83%. These values could be improved by eliminating artifacts from the signal records.
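
    A minimal sketch of the windowed standard-deviation idea described above is given below; the window length, threshold and synthetic signal are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def detect_apnea(effort, fs, win_s=10.0, rel_thresh=0.25):
    """Mark windows whose amplitude variability (standard deviation) drops below
    a fraction of the whole-record variability; a simplified reading of the method."""
    x = np.asarray(effort, dtype=float)
    win = int(win_s * fs)
    global_sd = x.std()
    flags = []
    for start in range(0, len(x) - win, win):
        segment = x[start:start + win]
        flags.append(segment.std() < rel_thresh * global_sd)
    return np.array(flags)

fs = 10  # Hz
t = np.arange(0, 120, 1 / fs)
breathing = np.sin(2 * np.pi * 0.25 * t)   # normal thoracic effort at 15 breaths/min
breathing[600:800] *= 0.05                 # simulated 20 s apnea episode
print(np.where(detect_apnea(breathing, fs))[0])  # indices of windows flagged as apneic
```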

  2. A multi-channel feedback algorithm for the development of active liners to reduce noise in flow duct applications

    Science.gov (United States)

    Mazeaud, B.; Galland, M.-A.

    2007-10-01

    The present paper deals with the design and development of the active part of a hybrid acoustic treatment combining porous material properties and active control techniques. Such an acoustic system was developed to reduce evolutionary tones in flow duct applications. Attention was particularly focused on the optimization process of the controller part of the hybrid cell. A piezo-electric transducer combining efficiency and compactness was selected as a secondary source. A digital adaptive feedback control algorithm was specially developed in order to operate independently cell by cell, and to facilitate a subsequent increase in the liner surface. An adaptive bandpass filter was used to prevent the development of instabilities due to the coupling occurring between cells. Special care was taken in the development of such systems for time-varying primary signals. An automatic frequency detection loop was therefore introduced in the control algorithm, enabling the continuous adaptation of the bandpass filtering. The multi-cell structure was experimentally validated for a four-cell system located on a duct wall in the presence of flow. Substantial noise reduction was obtained throughout the 0.7-2.5 kHz frequency range, with flow velocities up to 50 m/s.

  3. Development of a deterministic downscaling algorithm for remote sensing soil moisture footprint using soil and vegetation classifications

    Science.gov (United States)

    Shin, Yongchul; Mohanty, Binayak P.

    2013-10-01

    Soil moisture (SM) at the local scale is required to account for small-scale spatial heterogeneity of land surface because many hydrological processes manifest at scales ranging from cm to km. Although remote sensing (RS) platforms provide large-scale soil moisture dynamics, scale discrepancy between observation scale (e.g., approximately several kilometers) and modeling scale (e.g., few hundred meters) leads to uncertainties in the performance of land surface hydrologic models. To overcome this drawback, we developed a new deterministic downscaling algorithm (DDA) for estimating fine-scale soil moisture with pixel-based RS soil moisture and evapotranspiration (ET) products using a genetic algorithm. This approach was evaluated under various synthetic and field experiments (Little Washita-LW 13 and 21, Oklahoma) conditions including homogeneous and heterogeneous land surface conditions composed of different soil textures and vegetations. Our algorithm is based on determining effective soil hydraulic properties for different subpixels within a RS pixel and estimating the long-term soil moisture dynamics of individual subpixels using the hydrological model with the extracted soil hydraulic parameters. The soil moisture dynamics of subpixels from synthetic experiments matched well with the observations under heterogeneous land surface condition, although uncertainties (Mean Bias Error, MBE: -0.073 to -0.049) exist. Field experiments have typically more variations due to weather conditions, measurement errors, unknown bottom boundary conditions, and scale discrepancy between remote sensing pixel and model grid resolution. However, the soil moisture estimates of individual subpixels (from the airborne Electronically Scanned Thinned Array Radiometer (ESTAR) footprints of 800 m × 800 m) downscaled by this approach matched well (R: 0.724 to -0.914, MBE: -0.203 to -0.169 for the LW 13; R: 0.343-0.865, MBE: -0.165 to -0.122 for the LW 21) with the in situ local scale soil

  4. Development of a B-flavor tagging algorithm for the Belle II experiment

    Energy Technology Data Exchange (ETDEWEB)

    Abudinen, Fernando; Li Gioi, Luigi [Max-Planck-Institut fuer Physik Muenchen (Germany); Gelb, Moritz [Karlsruher Institut fuer Technologie (Germany)

    2015-07-01

    The high-luminosity super-B factory SuperKEKB will allow a precision measurement of the time-dependent CP violation parameters in the B-meson system. The analysis requires the reconstruction of one of the two exclusively produced neutral B mesons to a CP eigenstate and the determination of the flavor of the other one. Because of the large number of possible decay modes, full reconstruction of the tagging B is not feasible. Consequently, inclusive methods that utilize flavor-specific signatures of B decays are employed. The algorithm is based on multivariate methods and follows the approach adopted by BaBar. It proceeds in three steps: the track level, where the most probable target track is selected for each decay category; the event level, where the flavor-specific signatures of the selected targets are analyzed; and the combiner, where the results of all categories are combined into the final output. The framework has been completed, reaching a tagging efficiency of ca. 25%. A comprehensive optimization is being launched in order to increase the efficiency. This includes studies on the categories, the method-specific parameters and the kinematic variables. An overview of the algorithm is presented together with the results at the current status.

  5. Development of visual peak selection system based on multi-ISs normalization algorithm to apply to methamphetamine impurity profiling.

    Science.gov (United States)

    Lee, Hun Joo; Han, Eunyoung; Lee, Jaesin; Chung, Heesun; Min, Sung-Gi

    2016-11-01

    The aim of this study is to improve the resolution of impurity peaks using a newly devised normalization algorithm for multi-internal standards (ISs) and to describe a visual peak selection system (VPSS) for efficient support of impurity profiling. Drug trafficking routes, location of manufacture, or synthetic route can be identified from impurities in seized drugs. In the analysis of impurities, different chromatogram profiles are obtained from gas chromatography and used to examine similarities between drug samples. The data processing method using relative retention time (RRT) calculated from a single internal standard is not preferred when many internal standards are used and many chromatographic peaks are present, because of the risk of overlapping between peaks and the difficulty in classifying impurities. In this study, impurities in methamphetamine (MA) were extracted by a liquid-liquid extraction (LLE) method using ethyl acetate containing 4 internal standards and analyzed by gas chromatography-flame ionization detection (GC-FID). The newly developed VPSS consists of an input module, a conversion module, and a detection module. The input module imports chromatograms collected from GC and performs preprocessing, which is converted with a normalization algorithm in the conversion module, and finally the detection module detects the impurities in MA samples using a visualized zoning user interface. The normalization algorithm in the conversion module was used to convert the raw data from GC-FID. The VPSS with the built-in normalization algorithm can effectively detect different impurities in samples even in complex matrices and has high resolution, keeping the time sequence of chromatographic peaks the same as that of the RRT method. The system can widen the full range of chromatograms so that the peaks of impurities are better aligned for easy separation and classification. The resolution, accuracy, and speed of impurity profiling showed remarkable improvement.

  6. Javascript Library for Developing Interactive Micro-Level Animations for Teaching and Learning Algorithms on One-Dimensional Arrays

    Science.gov (United States)

    Végh, Ladislav

    2016-01-01

    The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…

  7. Development of Interpretation Algorithm for Optical Fiber Bragg Grating Sensors for Composite Structures

    Science.gov (United States)

    Peters, Kara

    2002-12-01

    Increasingly, optical fiber sensors, and in particular Bragg grating sensors, are being used in aerospace structures due to their immunity to electrical noise and the ability to multiplex hundreds of sensors into a single optical fiber. This significantly reduces the cost per sensor as the number of fiber connections and demodulation systems required is also reduced. The primary objective of this project is to study the effects of mounting issues such as adhesion, surface roughness, and high strain gradients on the interpretation of the measured strain. This is performed through comparison with electrical strain gage benchmark data. The long-term goal is to integrate such optical fiber Bragg grating sensors into a structural integrity monitoring system for the 2nd Generation Reusable Launch Vehicle. Previously, researchers at NASA Langley instrumented a composite wingbox with both optical fiber Bragg grating sensors and electrical strain gages during laboratory load-to-failure testing. A considerable amount of data was collected during these tests. For this project, data from two of the sensing optical fibers (each containing 800 Bragg grating sensors) were analyzed in detail. The first fiber studied was mounted in a straight line on the upper surface of the wingbox far from any structural irregularities. The results from these sensors showed a relatively large amount of noise compared to the electrical strain gages, but measured the same averaged strain curve. It was shown that the noise could be varied through the choice of input parameters in the data interpretation algorithm. Based upon the assumption that the strain remains constant along the gage length (a valid assumption for this fiber as confirmed by the measured grating spectra) this noise was significantly reduced. The second fiber was mounted on the lower surface of the wingbox in a pattern that circled surface cutouts and ran close to sites of impact damage, induced before the loading tests. As

  8. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. Recently developed digital tools, digital core (dCORE) and image microarray maker (iMAM), enable the capture of uniformly sized and resolution-matched images, with these representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for carrying out a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  9. SU-E-T-252: Developing a Pencil Beam Dose Calculation Algorithm for CyberKnife System

    Energy Technology Data Exchange (ETDEWEB)

    Liang, B [Image processing center, Beihang University, Beijing (China); Duke University Medical Center, Durham, NC (United States); Liu, B; Zhou, F [Image processing center, Beihang University, Beijing (China); Xu, S [Chinese PLA General Hospital, Beijing (China); Wu, Q [Duke University Medical Center, Durham, NC (United States)

    2015-06-15

    Purpose: Currently there are two dose calculation algorithms available in the CyberKnife planning system, ray-tracing and Monte Carlo, which are either not accurate enough or too time-consuming for irregular fields shaped by the recently introduced MLC. The purpose of this study is to develop a fast and accurate pencil beam dose calculation algorithm that can handle irregular fields. Methods: A pencil beam dose calculation algorithm widely used in linac systems is modified. The algorithm models both primary (short range) and scatter (long range) components with a single input parameter: TPR{sub 20}/{sub 10}. The TPR{sub 20}/{sub 10} value was first estimated to derive an initial set of pencil beam model parameters (PBMP). The agreement between predicted and measured TPRs for all cones was evaluated using the root mean square of the difference (RMSTPR), which was then minimized by adjusting the PBMPs. The PBMPs are further tuned to minimize the OCR RMS (RMSocr), focusing on the out-of-field region. Finally, an arbitrary intensity profile is optimized by minimizing the RMSocr difference in the in-field region. To test model validity, the PBMPs were obtained by fitting to only a subset of cones (4) and applied to all cones (12) for evaluation. Results: With RMS values normalized to dmax and all cones combined, the average RMSTPR in the build-up and descending regions is 2.3% and 0.4%, respectively. The RMSocr in the in-field, penumbra and out-of-field regions is 1.5%, 7.8% and 0.6%, respectively. The average DTA in the penumbra region is 0.5 mm. No trend was found in TPR or OCR agreement among cones or depths. Conclusion: We have developed a pencil beam algorithm for the CyberKnife system. The prediction agrees well with commissioning data. Only a subset of measurements is needed to derive the model. Further improvements are needed for the TPR build-up region and the OCR penumbra. Experimental validation on MLC-shaped irregular fields needs to be performed.
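
    The fitting step described above (adjusting pencil beam model parameters to minimize the RMS difference between predicted and measured TPRs) can be illustrated with a toy example. The exponential model form, data and starting values below are placeholders, not the parameterization used in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

depths = np.array([1.5, 5.0, 10.0, 15.0, 20.0])      # depth (cm)
tpr_meas = np.array([1.00, 0.92, 0.78, 0.66, 0.56])  # measured TPR (illustrative values)

def tpr_model(depth, params):
    # Placeholder two-parameter model: exponential fall-off beyond dmax.
    mu, dmax = params
    return np.exp(-mu * np.clip(depth - dmax, 0.0, None))

def rms_tpr(params):
    # Root mean square difference between predicted and measured TPRs.
    return np.sqrt(np.mean((tpr_model(depths, params) - tpr_meas) ** 2))

fit = minimize(rms_tpr, x0=[0.04, 1.5], method="Nelder-Mead")
print("fitted parameters:", fit.x, "residual RMS:", rms_tpr(fit.x))
```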

  10. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development.

    Science.gov (United States)

    Hipp, Jennifer A; Hipp, Jason D; Lim, Megan; Sharma, Gaurav; Smith, Lauren B; Hewitt, Stephen M; Balis, Ulysses G J

    2012-01-01

    Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. Recently developed digital tools, digital core (dCORE) and image microarray maker (iMAM), enable the capture of uniformly sized and resolution-matched images, with these representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for carrying out a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic bodies, was subsequently carried out on the

  11. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  12. Developing Algorithms to Improve Defect Extraction and Suppressing Undesired Heat Patterns in Sonic IR Images

    Science.gov (United States)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2016-12-01

    Sonic IR imaging is an emerging NDE technology. This technology uses short pulses of ultrasonic excitation together with infrared imaging to detect defects in materials and structures. Sonic energy is coupled to the specimen under inspection by means of direct contact between the transducer tip and the specimen at some convenient point. This region, which is normally in the field of view of the camera, appears as an intensity peak in the image, which might be misinterpreted as a defect or obscure the detection and/or extraction of defect signals in the proximity of the contact region. Moreover, certain defects may have a very small heat signature or be buried in noise. In this paper, we present algorithms to improve defect extraction and suppress undesired heat patterns in sonic IR images. Two approaches are presented, each suited to a specific category of sonic IR images.

  13. Development of an Interval Management Algorithm Using Ground Speed Feedback for Delayed Traffic

    Science.gov (United States)

    Barmore, Bryan E.; Swieringa, Kurt A.; Underwood, Matthew C.; Abbott, Terence; Leonard, Robert D.

    2016-01-01

    One of the goals of NextGen is to enable frequent use of Optimized Profile Descents (OPD) for aircraft, even during periods of peak traffic demand. NASA is currently testing three new technologies that enable air traffic controllers to use speed adjustments to space aircraft during arrival and approach operations. This will allow an aircraft to remain close to its OPD. During the integration of these technologies, it was discovered that, due to a lack of accurate trajectory information for the leading aircraft, Interval Management aircraft were exhibiting poor behavior. NASA's Interval Management algorithm was modified to address the impact of inaccurate trajectory information and a series of studies was performed to assess the impact of this modification. These studies show that the modification provided some improvement when the Interval Management system lacked accurate trajectory information for the leading aircraft.

  14. Development of a prototype algorithm for the operational retrieval of height-resolved products from GOME

    Science.gov (United States)

    Spurr, Robert J. D.

    1997-01-01

    Global Ozone Monitoring Experiment (GOME) level 2 products of total ozone column amounts have been generated on a routine operational basis since July 1996. These products and the level 1 radiance products are the major outputs from the ERS-2 ground segment GOME data processor (GDP) at DLR in Germany. Off-line scientific work has already shown the feasibility of ozone profile retrieval from GOME. It is demonstrated how the retrievals can be performed in an operational context. Height-resolved retrieval is based on the optimal estimation technique, and cloud-contaminated scenes are treated in an equivalent reflecting surface approximation. The prototype must be able to handle GOME measurements routinely on a global basis. Requirements for the major components of the algorithm are described: this incorporates an overall strategy for operational height-resolved retrieval from GOME.
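
    The abstract names optimal estimation as the retrieval technique without detailing it. A minimal linear optimal-estimation update on a synthetic toy problem, assuming the usual maximum a posteriori form, is sketched below:

```python
import numpy as np

def oe_update(xa, Sa, y, K, Se, forward_at_xa):
    """One linear optimal-estimation step:
    x = xa + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - F(xa))."""
    Se_inv = np.linalg.inv(Se)
    Sa_inv = np.linalg.inv(Sa)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # retrieval error covariance
    x_hat = xa + S_hat @ K.T @ Se_inv @ (y - forward_at_xa)
    return x_hat, S_hat

# Toy problem: a 3-level ozone profile retrieved from 5 radiance channels.
rng = np.random.default_rng(2)
K = rng.normal(size=(5, 3))           # Jacobian (weighting functions)
xa = np.array([300.0, 280.0, 250.0])  # a priori profile
x_true = xa + np.array([20.0, -10.0, 5.0])
Sa = np.diag([50.0**2] * 3)           # a priori covariance
Se = np.diag([1.0] * 5)               # measurement noise covariance
y = K @ x_true + rng.normal(scale=1.0, size=5)
x_hat, _ = oe_update(xa, Sa, y, K, Se, forward_at_xa=K @ xa)
print(x_hat)
```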

  15. Development of a Multiview Time Domain Imaging Algorithm (MTDI) with a Fermat Correction

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, K A; Lehman, S K; Chambers, D H

    2004-09-22

    An imaging algorithm is presented based on the standard assumption that the total scattered field can be separated into an elastic component with monopole like dependence and an inertial component with a dipole like dependence. The resulting inversion generates two separate image maps corresponding to the monopole and dipole terms of the forward model. The complexity of imaging flaws and defects in layered elastic media is further compounded by the existence of high contrast gradients in either sound speed and/or density from layer to layer. To compensate for these gradients, we have incorporated Fermat's method of least time into our forward model to determine the appropriate delays between individual source-receiver pairs. Preliminary numerical and experimental results are in good agreement with each other.

  16. Development of a Sequential Restoration Strategy Based on the Enhanced Dijkstra Algorithm for Korean Power Systems

    Directory of Open Access Journals (Sweden)

    Bokyung Goo

    2016-12-01

    When a blackout occurs, it is important to reduce the power system restoration time in order to minimize damage. For fast restoration, it is important to reduce the time taken to select generators, transmission lines and transformers. In addition, it is essential to determine a generator start-up sequence (GSS) to restore the power system. In this paper, we propose the optimal selection of black-start units through the generator start-up sequence (GSS) to minimize the restoration time, using generator characteristic data and the enhanced Dijkstra algorithm. For each restoration step, the sequence selected for the next start unit is recalculated to reflect the system conditions. The proposed method is verified using empirical data from the Korean power system.
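
    The enhanced Dijkstra weighting used in the paper is not reproduced here, but the underlying shortest-path step can be sketched with a plain Dijkstra search over a toy network whose edge weights stand in for energization cost:

```python
import heapq

def dijkstra(graph, source):
    """Shortest path lengths from `source` over a dict-of-dicts weighted graph."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Toy network: edge weights stand in for energization cost/time between buses.
grid = {
    "blackstart": {"bus1": 2.0, "bus2": 5.0},
    "bus1": {"bus2": 1.0, "gen_A": 4.0},
    "bus2": {"gen_A": 1.5, "gen_B": 3.0},
    "gen_A": {}, "gen_B": {},
}
print(dijkstra(grid, "blackstart"))  # cheapest paths, used to order the GSS
```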

  17. A Novel Sensing Method and Sensing Algorithm Development for a Ubiquitous Network

    Directory of Open Access Journals (Sweden)

    Sungwook Yu

    2010-08-01

    This paper proposes a novel technique that provides an energy-efficient circuit design for sensor networks. The overall system requires a minimum number of independently communicating sensors and sub-circuits, which enables it to reduce power consumption by setting unused sensors to idle. This technique reduces hardware requirements, time and interconnection problems with a supervisory control. Our proposed algorithm, which hands over control to two software managers for the sensing and moving subsystems, can greatly improve the overall system performance. Based on the experimental results, we observed that in our system, which uses the sensing and moving managers, the four sensors consumed only 3.4 mW when the robot arm was moved a total distance of 17 cm. This system is designed for robot applications but could be implemented in many other human environments such as "ubiquitous cities", "smart homes", etc.

  18. Development of a Genetic Algorithm to Automate Clustering of a Dependency Structure Matrix

    Science.gov (United States)

    Rogers, James L.; Korte, John J.; Bilardo, Vincent J.

    2006-01-01

    Much technology assessment and organization design data exists in Microsoft Excel spreadsheets. Tools are needed to put this data into a form that can be used by design managers to make design decisions. One need is to cluster data that is highly coupled. Tools such as the Dependency Structure Matrix (DSM) and a Genetic Algorithm (GA) can be of great benefit. However, no tool currently combines the DSM and a GA to solve the clustering problem. This paper describes a new software tool that interfaces a GA written as an Excel macro with a DSM in spreadsheet format. The results of several test cases are included to demonstrate how well this new tool works.
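
    The record does not give the GA's fitness function, so the sketch below pairs a generic DSM clustering cost (cheap intra-cluster interactions, penalized inter-cluster interactions) with a naive mutation search as a stand-in for the Excel-macro GA:

```python
import numpy as np

def clustering_cost(dsm, labels, out_penalty=10.0):
    """Cheap DSM clustering objective: interactions inside a cluster are cheap,
    interactions across clusters are penalized (a stand-in for the GA fitness)."""
    cost = 0.0
    n = len(labels)
    for i in range(n):
        for j in range(n):
            if i != j and dsm[i, j]:
                cost += 1.0 if labels[i] == labels[j] else out_penalty
    return cost

rng = np.random.default_rng(3)
dsm = (rng.random((8, 8)) < 0.3).astype(int)   # random 8x8 dependency structure matrix
labels = rng.integers(0, 2, size=8)            # initial assignment to 2 clusters

# Naive mutation search standing in for the genetic algorithm.
best = clustering_cost(dsm, labels)
for _ in range(2000):
    trial = labels.copy()
    trial[rng.integers(0, 8)] = rng.integers(0, 2)  # mutate one element's cluster
    c = clustering_cost(dsm, trial)
    if c < best:
        labels, best = trial, c
print(labels, best)
```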

  19. Developing an Algorithm for Finding Deep-Sea Corals on Seamounts Using Bathymetry and Photographic Data

    Science.gov (United States)

    Fernandez, D. P.; Adkins, J. F.; Scheirer, D. P.

    2006-12-01

    Over the last three years we have conducted several cruises on seamounts in the North Atlantic to sample and characterize the distribution of deep-sea corals in space and time. Using the deep submergence vehicle Alvin and the ROV Hercules we have spent over 80 hours on the seafloor. With the autonomous vehicle ABE and a towed camera sled, we collected over 10,000 bottom photographs and over 60 hours of micro-bathymetry over 120 km of seafloor. While there are very few living scleractinia (Desmophyllum dianthus, Solenosmilia sp., and Lophelia sp.), we recovered over 5,000 fossil D. dianthus and over 60 kg of fossil Solenosmilia sp. The large numbers of fossil corals mean that a perceived lack of material does not have to limit the use of this new archive of the deep ocean. However, we need a better strategy for finding and returning samples to the lab. Corals clearly prefer to grow on steep slopes and at the tops of scarps of all scales. They are preferentially found along ridges and on small knolls flanking a larger edifice. There is also a clear preference for D. dianthus to recruit onto carbonate substrate. Overall, our sample collection, bathymetry and bottom photographs allow us to create an algorithm for finding corals based only on knowledge of the seafloor topography. We can test this algorithm against known sampling locations and visual surveys of the seafloor. Similar to the way seismic data are used to locate ideal coring locations, we propose that high-resolution bathymetry can be used to predict the most likely locations for finding fossil deep-sea corals.
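
    A simple way to start encoding the observation that corals favour steep slopes and scarp tops is to threshold the bathymetric slope, as sketched below on a synthetic seamount flank; the threshold and grid are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def likely_coral_cells(depth_grid, cell_size_m, slope_thresh_deg=20.0):
    """Flag grid cells whose bathymetric slope exceeds a threshold, a rough proxy
    for the steep scarps and ridge crests where fossil corals were recovered."""
    dzdy, dzdx = np.gradient(depth_grid, cell_size_m)
    slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    return slope_deg > slope_thresh_deg

# Synthetic seamount flank: depth shoals from 2000 m to 1000 m across a scarp.
x = np.linspace(0, 5000, 100)
depth = 2000 - 1000 / (1 + np.exp(-(x - 2500) / 200))   # sigmoidal scarp profile
grid = np.tile(depth, (50, 1))
mask = likely_coral_cells(grid, cell_size_m=50.0)
print(mask.sum(), "of", mask.size, "cells flagged as likely coral habitat")
```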

  20. Development of an algorithm for tip-related artifacts identification in AFM biological film imaging

    Directory of Open Access Journals (Sweden)

    Rubens Bernardes-Filho

    2005-07-01

    One major drawback identified in atomic force microscopy imaging is the dependence of the image's precision on the shape of the probe tip. In this paper, a simple algorithm is proposed to identify tip-related artifacts in atomic force microscopy images by signaling suspect features in situ. The identification is based on detecting when the angle formed between two scanned points remains the same as the tip sweeps a certain length of the sample. The potential of the described method is illustrated on a chitosan polysaccharide film. The images produced were compared to evaluate tip-artifact regions. This algorithm shows promise as a tool in the measurement and characterization fields to separate true images from artificial images in probe microscopy.
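
    The constant-angle criterion described above can be sketched as follows; the run length and angular tolerance are illustrative assumptions, not values from the paper.

```python
import numpy as np

def flag_tip_artifacts(heights, dx, min_run=5, tol_deg=0.5):
    """Flag indices inside runs where the angle between consecutive scan points
    stays (nearly) constant, the tip-imaging signature described in the paper."""
    angles = np.degrees(np.arctan2(np.diff(heights), dx))  # local surface angle per step
    same = np.abs(np.diff(angles)) < tol_deg               # consecutive angles equal?
    flags = np.zeros(len(heights), dtype=bool)
    run_start = 0
    for i, s in enumerate(same):
        if not s:
            run_start = i + 1
        elif i - run_start + 1 >= min_run:
            flags[run_start:i + 3] = True                   # flag the whole run so far
    return flags

line = np.concatenate([np.random.rand(30) * 2,      # rough film surface
                       np.linspace(0, 20, 25),      # straight flank: tip artifact
                       np.random.rand(30) * 2])
print(np.where(flag_tip_artifacts(line, dx=10.0))[0])
```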

  1. Development of a sensorimotor algorithm able to deal with unforeseen pushes and its implementation based on VHDL

    OpenAIRE

    Lezcano Giménez, Pablo Gabriel

    2015-01-01

    Development of a Sensorimotor Algorithm Able to Deal with Unforeseen Pushes and Its Implementation Based on VHDL is the title of my thesis which concludes my Bachelor Degree in the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación of the Universidad Politécnica de Madrid. It encloses the overall work I did in the Neurorobotics Research Laboratory from the Beuth Hochschule für Technik Berlin during my ERASMUS year in 2015. This thesis is focused on the field of robotics, sp...

  2. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    Science.gov (United States)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed system is a self-contained dead reckoning navigation, meaning that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro-electro-mechanical systems (MEMS). There are two types of IMU modules, handheld and waist-mounted. Low-cost MEMS sensors suffer from various errors resulting from manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated step count and strength per step are then fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since the walking length and direction are both required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead reckoning navigation, which runs in a self-developed app. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied to the app of the proposed navigation system to extend its
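
    The dead reckoning core described above (heading from the integrated gyroscope rate, position advanced by one step length per detected step) can be sketched as below; the fixed step length stands in for the paper's fuzzy-logic estimate.

```python
import math

def dead_reckon(steps, gyro_rate_dps, dt, step_length_m=0.7, heading0_deg=0.0):
    """Propagate 2-D position from step events and gyroscope yaw rate.
    The paper estimates the step length per step with a fuzzy-logic model;
    here it is a constant for illustration."""
    x = y = 0.0
    heading = math.radians(heading0_deg)
    track = [(x, y)]
    for stepped, rate in zip(steps, gyro_rate_dps):
        heading += math.radians(rate) * dt       # integrate yaw rate into heading
        if stepped:                              # step detected by the fusion stage
            x += step_length_m * math.cos(heading)
            y += step_length_m * math.sin(heading)
        track.append((x, y))
    return track

# Walk straight for 5 samples, then turn 90 degrees over the next 5.
steps = [1] * 10
rates = [0, 0, 0, 0, 0, 18, 18, 18, 18, 18]      # yaw rate in deg/s
print(dead_reckon(steps, rates, dt=1.0)[-1])     # final (x, y) position in metres
```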

  3. DEVELOPMENT OF A PEDESTRIAN INDOOR NAVIGATION SYSTEM BASED ON MULTI-SENSOR FUSION AND FUZZY LOGIC ESTIMATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Y. C. Lai

    2015-05-01

    Full Text Available This paper presents a pedestrian indoor navigation system based on the multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation that means no other outside signal is demanded. In order to achieve the self-contained capability, a portable and wearable inertial measure unit (IMU has been developed. Its adopted sensors are the low-cost inertial sensors, accelerometer and gyroscope, based on the micro electro-mechanical system (MEMS. There are two types of the IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to the results of manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and the least squares methods has been induced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the amount of the walking steps and the strength of each steps in real-time. Consequently, the estimated walking amount and strength per step are taken into the proposed fuzzy logic estimation algorithm to estimates the step lengths of the user. Since the walking length and direction are both the required information of the dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmit to a smartphone with Bluetooth to perform the dead reckoning navigation which is run on a self-developed APP. Due to the error accumulating of dead reckoning navigation, a particle filter and a pre-loaded map of indoor environment have been applied to the APP of the proposed navigation system

  4. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    Science.gov (United States)

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and the tearing approach as well as an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find the optimal decoupling schemes. In this paper, firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to solve the problem. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness.

  5. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  7. Development and laboratory verification of control algorithms for formation flying configuration with a single-input control

    Science.gov (United States)

    Ovchinnikov, M.; Bindel, D.; Ivanov, D.; Smirnov, G.; Theil, S.; Zaramenskikh, I.

    2010-11-01

    Once orbited, the technological nanosatellite TNS-0 no. 1 is supposed to be used in one of the next missions to demonstrate the orbital maneuvering capability needed to eliminate the secular relative motion of two satellites due to the J2 harmonic of the Earth's gravitational field. It is assumed that the longitudinal axis of the satellite is stabilized along the induction vector of the geomagnetic field and that a thruster engine is installed along this axis. Continuous and impulsive thruster control algorithms eliminating the secular relative motion have been developed. Special equipment was developed at ZARM for demonstration and laboratory testing of the satellite motion identification and control algorithms. The facility consists of a horizontal smooth table and a mobile mock-up that glides over the table surface on compressed air stored in on-board pressure tanks. Compressed air is also used to control the translational and attitude motion of the mock-up, which is equipped with a number of pulse thrusters. In this work, a dynamic model for the controlled motion of the mock-up over the table is developed. This allows us to simulate the relative motion of a pair of TNS-0 type nanosatellites in the plane of the orbit.

  8. Development of a phantom to validate high-dose-rate brachytherapy treatment planning systems with heterogeneous algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Moura, Eduardo S., E-mail: emoura@wisc.edu [Department of Medical Physics, University of Wisconsin–Madison, Madison, Wisconsin 53705 and Instituto de Pesquisas Energéticas e Nucleares—IPEN-CNEN/SP, São Paulo 05508-000 (Brazil); Micka, John A.; Hammer, Cliff G.; Culberson, Wesley S.; DeWerd, Larry A. [Department of Medical Physics, University of Wisconsin–Madison, Madison, Wisconsin 53705 (United States); Rostelato, Maria Elisa C. M.; Zeituni, Carlos A. [Instituto de Pesquisas Energéticas e Nucleares—IPEN-CNEN/SP, São Paulo 05508-000 (Brazil)

    2015-04-15

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR ¹⁹²Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into BrachyVision™ TPS software, which includes the grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup, and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements had been performed. Results: Differences in the relative response as high as 11.5% from the homogeneous setup were found when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters.

  9. A fusion algorithm for joins based on collections in Odra-Object Database for Rapid Application development

    Directory of Open Access Journals (Sweden)

    Laika Satish

    2011-07-01

    Full Text Available In this paper we present the functionality of a database programming methodology currently under development called ODRA (Object Database for Rapid Application development), which works fully on object-oriented principles. Its database programming language is called SBQL (Stack-Based Query Language). We discuss several concepts in ODRA, for example how ODRA works, how the ODRA runtime environment operates, the interoperability of ODRA with .NET and Java, and ODRA's use of web services and XML. The stage currently under development in ODRA is query optimization. We therefore present the prior work done in ODRA related to query optimization, and we also present a new fusion algorithm for how ODRA can deal with joins based on collections such as sets, lists, and arrays for query optimization.

  10. Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Nagarajan, Adarsh; Nelson, Austin; Prabakar, Kumaraguru; Hoke, Andy; Asano, Marc; Ueda, Reid; Nepal, Shaili

    2017-06-15

    As advanced grid-support functions (AGFs) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time systems and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a Monte Carlo method that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced-order models in OpenDSS, and subsequently implemented in the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the selected feeders could be analyzed.

  11. Development and applications of various optimization algorithms for diesel engine combustion and emissions optimization

    Science.gov (United States)

    Ogren, Ryan M.

    For this work, hybrid PSO-GA and Artificial Bee Colony (ABC) optimization algorithms are applied to the optimization of experimental diesel engine performance, to meet Environmental Protection Agency off-road diesel engine standards. This work is the first to apply ABC optimization to experimental engine testing. All trials were conducted at partial load on a four-cylinder, turbocharged, John Deere engine using neat biodiesel for PSO-GA and regular pump diesel for ABC. Key variables were altered throughout the experiments, including fuel pressure, intake gas temperature, exhaust gas recirculation flow, fuel injection quantity for two injections, pilot injection timing, and main injection timing. Both forms of optimization proved effective for optimizing engine operation. The PSO-GA hybrid was able to find a superior solution to that of ABC within fewer engine runs. Both solutions call for high exhaust gas recirculation to reduce oxides of nitrogen (NOx) emissions while also moving pilot and main fuel injections to near top dead center for improved tradeoffs between NOx and particulate matter.
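
    As a rough sketch of how a particle-swarm search of this kind operates, the snippet below minimizes a cheap stand-in objective over a bounded two-parameter space; in the actual study the objective is measured engine performance and emissions, and the hybrid PSO-GA and ABC variants are more elaborate. The objective, bounds, and tuning constants are assumptions for illustration only.

    ```python
    import random

    # Minimal particle swarm optimization sketch with a surrogate objective standing in
    # for an actual engine test.
    def pso(objective, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
        dim = len(bounds)
        pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [objective(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest_pos, gbest_val = pbest[g][:], pbest_val[g]

        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest_pos, gbest_val = pos[i][:], val
        return gbest_pos, gbest_val

    # Hypothetical two-parameter example (e.g. normalized injection timing settings).
    best, val = pso(lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2, [(0, 1), (0, 1)])
    print(best, val)
    ```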

  12. Development of a Reduction Algorithm of GEO Satellite Optical Observation Data for Optical Wide Field Patrol (OWL)

    Science.gov (United States)

    Park, Sun-youp; Choi, Jin; Jo, Jung Hyun; Son, Ju Young; Park, Yung-Sik; Yim, Hong-Suh; Moon, Hong-Kyu; Bae, Young-Ho; Choi, Young-Jun; Park, Jang-Hyun

    2015-09-01

    An algorithm to automatically extract coordinate and time information from optical observation data of geostationary orbit satellites (GEO satellites) or geosynchronous orbit satellites (GOS satellites) is developed. The optical wide-field patrol system is capable of automatic observation using a pre-arranged schedule. Therefore, if this type of automatic analysis algorithm is available, daily unmanned monitoring of GEO satellites becomes possible. For data acquisition for development, the COMS1 satellite was observed with a 1-s exposure time and a 1-min interval. The images were grouped and processed in units of an "action", with each action composed of six or nine successive images. First, a reference image with the best quality in one action was selected. Next, the rest of the images in the action were geometrically transformed to fit the horizontal coordinate system (expressed in azimuth and elevation) of the reference image. Then, these images were median-combined to retain only the possible non-moving GEO candidates. By reverting the coordinate transformation of the positions of these GEO satellite candidates, the final coordinates could be calculated.
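
    The core image-combination step described above can be sketched as follows, assuming the frames of one action have already been registered to the reference frame; the simple threshold detection and background estimate below are simplifications of the published pipeline.

    ```python
    import numpy as np

    # After registration to a reference frame, a pixel-wise median combine suppresses moving
    # sources (stars trailing across the field) and retains stationary GEO candidates.
    def median_combine(aligned_frames):
        """aligned_frames: list of registered 2-D images with identical shape."""
        return np.median(np.stack(aligned_frames, axis=0), axis=0)

    def detect_candidates(combined, k_sigma=5.0):
        """Very simple detection: pixels brighter than background + k*sigma."""
        bkg, sigma = np.median(combined), np.std(combined)
        ys, xs = np.where(combined > bkg + k_sigma * sigma)
        return list(zip(xs.tolist(), ys.tolist()))   # (x, y) pixel coordinates of candidates
    ```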

  13. Developing an Upper Bound and Heuristic Solution Algorithm for Order Scheduling Problem with Machines Idle Time Minimization

    Directory of Open Access Journals (Sweden)

    Hadi Mokhtari

    2013-01-01

    Full Text Available In this paper, the problem of scheduling orders received by a manufacturer, with the maximum completion time of orders as the performance measure, is formulated and an analytical approach is devised for its solution. At the beginning of a planning period, the manufacturer receives a number of orders from customers, each of which requires two different stages of processing. In order to minimize work-in-process inventories, a no-wait condition between the two operations of each order is imposed. It is then proved that the schedules obtained with machine idle time minimization as the objective are identical to those obtained by minimizing the maximum completion time. A concept entitled “order pairing” is defined, and an algorithm for obtaining the optimal order pairs, based on the symmetric assignment problem, is presented. Using the established order pairs, an upper bound is developed based on the contribution of every order pair to the total machine idle time. Twelve potential situations of order-pair sequencing that can improve this upper bound are also evaluated, and the upper bound improvement is proved separately for each situation. Finally, a heuristic algorithm is developed based on the pair-improvement results, and a case study in the printing industry is investigated and analyzed to demonstrate its applicability.
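
    The order-pairing step can be illustrated with a small brute-force search over all ways to partition the orders into pairs, keeping the pairing with the smallest total idle time; the paper obtains the optimal pairs efficiently via a symmetric assignment problem, and the cost matrix below is purely hypothetical.

    ```python
    # Sketch of the "order pairing" idea: cost[i][j] is assumed to be the machine idle time
    # incurred when orders i and j are paired (symmetric, illustrative values).
    def best_order_pairs(cost):
        orders = list(range(len(cost)))

        def pairings(items):
            # Enumerate all perfect pairings of an even-length list of orders.
            if not items:
                yield []
                return
            first, rest = items[0], items[1:]
            for k, partner in enumerate(rest):
                for tail in pairings(rest[:k] + rest[k + 1:]):
                    yield [(first, partner)] + tail

        return min(pairings(orders), key=lambda ps: sum(cost[i][j] for i, j in ps))

    cost = [[0, 3, 8, 5],
            [3, 0, 4, 9],
            [8, 4, 0, 2],
            [5, 9, 2, 0]]
    print(best_order_pairs(cost))   # [(0, 1), (2, 3)] with total idle time 5
    ```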

  14. Analysis and Classification of Stride Patterns Associated with Children Development Using Gait Signal Dynamics Parameters and Ensemble Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Meihong Wu

    2016-01-01

    Full Text Available Measuring stride variability and dynamics in children is useful for the quantitative study of gait maturation and neuromotor development in childhood and adolescence. In this paper, we computed the sample entropy (SampEn) and average stride interval (ASI) parameters to quantify the stride series of 50 gender-matched children participants in three age groups. We also normalized the SampEn and ASI values by leg length and body mass for each participant, respectively. Results show that the original and normalized SampEn values consistently decrease with age (Mann–Whitney U test, p<0.01) in children of 3–14 years old, which indicates that stride irregularity is significantly ameliorated with body growth. The original and normalized ASI values also change significantly when comparing any two of the young (aged 3–5 years), middle (aged 6–8 years), and elder (aged 10–14 years) groups of children. Such results suggest that healthy children may better modulate their gait cadence rhythm with the development of their musculoskeletal and neurological systems. In addition, the AdaBoost.M2 and Bagging algorithms were used to effectively distinguish the children’s gait patterns. These ensemble learning algorithms both provided excellent gait classification results in terms of overall accuracy (≥90%), recall (≥0.8), and precision (≥0.8077).
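
    For reference, a straightforward implementation of the sample entropy statistic used above (the standard SampEn(m, r) definition; the tolerance r is commonly taken as a fraction of the series standard deviation) might look like the following; the stride series shown is hypothetical.

    ```python
    import numpy as np

    # SampEn(m, r): count template matches of length m and m+1 under the Chebyshev distance
    # and take the negative log of their ratio.
    def sample_entropy(x, m=2, r=None):
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * np.std(x)   # common choice: 20% of the series standard deviation
        n = len(x)

        def match_count(length):
            templates = [x[i:i + length] for i in range(n - m)]  # same template count for m and m+1
            count = 0
            for i in range(len(templates)):
                for j in range(i + 1, len(templates)):
                    if np.max(np.abs(templates[i] - templates[j])) <= r:
                        count += 1
            return count

        b, a = match_count(m), match_count(m + 1)
        return float('inf') if a == 0 or b == 0 else -np.log(a / b)

    # Example with a hypothetical stride-interval series (seconds).
    strides = [1.02, 0.98, 1.05, 1.01, 0.97, 1.03, 1.00, 0.99, 1.04, 1.02, 0.98, 1.01]
    print(sample_entropy(strides, m=2))
    ```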

  15. In situ investigation on rapid microstructure evolution in extreme complex environment by developing a new AFBP-TVM sparse tomography algorithm from original CS-XPCMT

    Science.gov (United States)

    Xu, Feng; Dong, Bo; Hu, Xiaofang; Xiao, Yu; Wang, Yang

    2017-09-01

    A new sparse tomography method for observing the rapid internal microstructure evolution of a material, called the Algebraic Filtered-Back-Projection and Total Variation Minimization (AFBP-TVM) iterative sparse reconstruction algorithm, is proposed in this paper. The new algorithm was developed by combining the Algebraic Reconstruction Technique (ART) and the Filtered-Back-Projection (FBP) technique on the basis of an analysis in linear space. A series of numerical reconstruction experiments was conducted to validate the new algorithm. The results indicate that the new algorithm can obtain satisfactory reconstructed images from 1/6 of the projections used in traditional algorithms, so the time spent on the projection acquisition process can be reduced to 1/6 of that of the traditional tomography method. The quality of the images reconstructed by the new algorithm was better than that of other algorithms, as evaluated by three quantitative parameters. The normalized average absolute distance criterion and the normalized mean square criterion, which evaluate the relative error of the reconstruction results (a smaller value means better reconstruction quality), decreased from 0.3758 to 0.1272 and from 0.1832 to 0.0894, respectively. The standardized covariance criterion, which evaluates the similarity level (a greater value means higher reconstruction accuracy), increased from 92.72% to 99.30%. Finally, the new algorithm was validated under actual experimental conditions. The results indicate that the AFBP-TVM algorithm obtains better reconstruction quality than other algorithms and may therefore be a suitable method for in situ investigation of a material's rapid internal microstructure evolution in extremely complex environments.

  16. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    Science.gov (United States)

    Gordon, Howard R.

    1996-01-01

    Several significant accomplishments were made during the present reporting period. We have completed our basic study of using the 1.38 micron MODIS band for removal of the effects of thin cirrus clouds and stratospheric aerosol. The results suggest that it should be possible to correct imagery for thin cirrus clouds with optical thicknesses as large as 0.5 to 1.0. We have also acquired reflectance data for oceanic whitecaps during a cruise on the RV Malcolm Baldrige in the Gulf of Mexico. The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling and Deschamps. We installed a CIMEL sun photometer at Fort Jefferson on the Dry Tortugas off Key West in the Gulf of Mexico. The instrument has yielded a continuous stream of data since February. It shows that the aerosol optical thickness at 669 nm is often less than 0.1 in winter. This suggests that the Southern Gulf of Mexico will be an excellent winter site for vicarious calibration. In addition, we completed a study of the effect of vicarious calibration, i.e., the accuracy with which the radiance at the top of the atmosphere (TOA) can be predicted from measurement of the sky radiance at the bottom of the atmosphere (BOA). The results suggest that the neglect of polarization in the aerosol optical property inversion algorithm and in the prediction code for the TOA radiances is the largest error associated with the radiative transfer process. Overall, the study showed that the accuracy of the TOA radiance prediction is now limited by the radiometric calibration error in the sky radiometer. Finally, considerable coccolith light scattering data were obtained in the Gulf of Maine with a flow-through instrument, along with data relating to calcite concentration and the rate of calcite production.

  17. Genetic algorithm guided population pharmacokinetic model development for simvastatin, concurrently or non-concurrently co-administered with amlodipine.

    Science.gov (United States)

    Chaturvedula, Ayyappa; Sale, Mark E; Lee, Howard

    2014-02-01

    Automated model development was performed for simvastatin, co-administered with amlodipine either concurrently or non-concurrently (i.e., 4 hours later) in 17 patients with coexisting hyperlipidemia and hypertension. The single objective hybrid genetic algorithm (SOHGA) was implemented in the NONMEM software by defining the search space for the structural, statistical, and covariate models. Candidate models obtained from the SOHGA runs were further assessed for biological plausibility and the precision of parameter estimates, followed by a traditional backward elimination process for model refinement. The final population pharmacokinetic model shows that the elimination rate constant for simvastatin acid, the active form produced by hydrolysis of its lactone prodrug (i.e., simvastatin), in the concurrent amlodipine administration group is only 44% of that in the non-concurrent group. The application of SOHGA for automated model selection, combined with traditional model selection strategies, appears to save time for model development and can also generate new hypotheses that are biologically more plausible.

  18. Progressive geometric algorithms

    Directory of Open Access Journals (Sweden)

    Sander P.A. Alewijnse

    2015-01-01

    Full Text Available Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms for two geometric problems: computing the convex hull of a planar point set, and finding popular places in a set of trajectories.
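
    A minimal sketch of the progressive idea for the convex hull problem: points are consumed in batches and an intermediate hull is emitted after each batch, converging to the exact hull once all input has been seen. The paper's algorithms are more refined; this only illustrates the notion of intermediate solutions.

    ```python
    # Progressive convex hull sketch: Andrew's monotone chain applied to growing prefixes.
    def convex_hull(points):
        """Return hull vertices of a 2-D point set in counter-clockwise order."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts

        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]

    def progressive_hull(points, batch_size=100):
        seen = []
        for start in range(0, len(points), batch_size):
            seen.extend(points[start:start + batch_size])
            yield convex_hull(seen)          # intermediate solution after each batch

    pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2), (3, 2)]
    for hull in progressive_hull(pts, batch_size=3):
        print(hull)                          # the last printout is the exact hull
    ```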

  19. Algorithms for Quantum Computers

    CERN Document Server

    Smith, Jamie

    2010-01-01

    This paper surveys the field of quantum computer algorithms. It gives a taste of both the breadth and the depth of the known algorithms for quantum computers, focusing on some of the more recent results. It begins with a brief review of quantum Fourier transform based algorithms, followed by quantum searching and some of its early generalizations. It continues with a more in-depth description of two more recent developments: algorithms developed in the quantum walk paradigm, followed by tensor network evaluation algorithms (which include approximating the Tutte polynomial).

  20. High-resolution computational algorithms for simulating offshore wind turbines and farms: Model development and validation

    Energy Technology Data Exchange (ETDEWEB)

    Calderer, Antoni [Univ. of Minnesota, Minneapolis, MN (United States); Yang, Xiaolei [Stony Brook Univ., NY (United States); Angelidis, Dionysios [Univ. of Minnesota, Minneapolis, MN (United States); Feist, Chris [Univ. of Minnesota, Minneapolis, MN (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Ruehl, Kelley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guo, Xin [Univ. of Minnesota, Minneapolis, MN (United States); Boomsma, Aaron [Univ. of Minnesota, Minneapolis, MN (United States); Shen, Lian [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Stony Brook Univ., NY (United States)

    2015-10-30

    The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.

  1. Developing Backward Chaining Algorithm of Inference Engine in Ternary Grid Expert System

    Directory of Open Access Journals (Sweden)

    Yuliadi Erdani

    2012-09-01

    Full Text Available The inference engine is one of the main components of an expert system and strongly influences the performance of the expert system. The task of the inference engine is to give answers and reasons to users by inferring over the knowledge of the expert system. Since the idea of the ternary grid was introduced in 2004, only a few methods, techniques, or engines working on the ternary grid knowledge model have been developed. The inference engine developed in 2010 is less efficient because it works on the basis of an iterative process, and the inference engine developed in 2011 works statically and is quite expensive to compute. In order to improve on these previous inference methods, a new inference engine has been developed that works on a backward chaining process in a ternary grid expert system. This paper describes the development of this inference engine, which can work on the ternary grid knowledge model. The inference strategy uses backward chaining with a recursive process. The design has been implemented in the form of software. The experimental results show that the inference process works properly and dynamically and is more efficient to compute than the previously developed methods.
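
    As a generic illustration of backward chaining with recursion (over simple Horn-style rules rather than the ternary grid representation used by the engine described above):

    ```python
    # Minimal recursive backward-chaining sketch: rules are (goal, [premises]) pairs and
    # facts is a mutable set (proved sub-goals are cached into it).
    def backward_chain(goal, rules, facts, _in_progress=None):
        """Return True if `goal` can be proved from `facts` using `rules`."""
        _in_progress = set() if _in_progress is None else _in_progress
        if goal in facts:
            return True
        if goal in _in_progress:          # avoid infinite recursion on cyclic rules
            return False
        _in_progress.add(goal)
        try:
            for head, premises in rules:
                if head == goal and all(backward_chain(p, rules, facts, _in_progress)
                                        for p in premises):
                    facts.add(goal)       # cache the proved sub-goal
                    return True
            return False
        finally:
            _in_progress.discard(goal)

    rules = [("d", ["b", "c"]), ("b", ["a"]), ("c", ["b"])]
    print(backward_chain("d", rules, {"a"}))   # True: d <- b, c; c <- b; b <- a
    ```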

  2. Inversion methods for satellite studies of the Earth Radiation Budget - Development of algorithms for the ERBE mission

    Science.gov (United States)

    Smith, G. L.; Green, R. N.; Avis, L. M.; Suttles, J. T.; Wielicki, B. A.; Raschke, E.; Davies, R.

    1986-01-01

    The Earth Radiation Budget Experiment carries a three-channel scanning radiometer and a set of nadir-looking wide and medium field-of-view instruments for measuring the radiation emitted from earth and the solar radiation reflected from earth. This paper describes the algorithms which are used to compute the radiant exitances at a reference level ('top of the atmosphere') from these measurements. Methods used to analyze data from previous radiation budget experiments are reviewed, and the rationale for the present algorithms is developed. The scanner data are converted to radiances by use of spectral factors, which account for imperfect spectral response of the optics. These radiances are converted to radiant exitances at the reference level by use of directional models, which account for anisotropy of the radiation as it leaves the earth. The spectral factors and directional models are selected on the basis of the scene, which is identified on the basis of the location and the long-wave and shortwave radiances. These individual results are averaged over 2.5 x 2.5 deg regions. Data from the wide and medium field-of-view instruments are analyzed by use of the traditional shape factor method and also by use of a numerical filter, which permits resolution enhancement along the orbit track.

  3. Preliminary results of real-time PPP-RTK positioning algorithm development for moving platforms and its performance validation

    Science.gov (United States)

    Won, Jihye; Park, Kwan-Dong

    2015-04-01

    Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbits and satellite clocks were taken from the IGS-RTS products, while the ionospheric delay was removed through the ionosphere-free combination and the tropospheric delay was either handled using the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used the IGS final products for satellite orbits and clock offsets. The horizontal positioning error dropped below 10 cm within 5 minutes, and the error stayed below 10 cm even after the vehicle started moving. When the IGS-RTS products and the GPT model were used instead of the IGS precise products, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved, at around 6 minutes.
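
    The ionosphere-free combination mentioned above removes the first-order ionospheric delay by combining measurements on two frequencies; a minimal sketch for GPS L1/L2 pseudoranges (the same form applies to carrier phase), with hypothetical measurement values, is shown below.

    ```python
    # Ionosphere-free (IF) combination: IF = (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2).
    F1, F2 = 1575.42e6, 1227.60e6   # GPS L1 and L2 carrier frequencies, Hz

    def ionosphere_free(p1, p2, f1=F1, f2=F2):
        """Combine dual-frequency pseudoranges so the first-order ionospheric delay cancels."""
        return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

    # Example with hypothetical pseudoranges in metres.
    print(ionosphere_free(22_000_005.2, 22_000_008.4))
    ```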

  4. Mathematical algorithm development and parametric studies with the GEOFRAC three-dimensional stochastic model of natural rock fracture systems

    Science.gov (United States)

    Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.

    2014-06-01

    This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.

  5. An approach to the development of numerical algorithms for first order linear hyperbolic systems in multiple space dimensions: The constant coefficient case

    Science.gov (United States)

    Goodrich, John W.

    1995-01-01

    Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowaleskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy, and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
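
    A low-order example of the single-step Taylor-expansion idea: for the scalar advection equation u_t + a u_x = 0, replacing the time derivatives in a second-order Taylor expansion with space derivatives (u_tt = a^2 u_xx) yields the classical Lax-Wendroff update on a symmetric stencil. This is only a second-order illustration of the approach, not the high-order schemes developed in the paper.

    ```python
    import numpy as np

    # Single-step Lax-Wendroff update for u_t + a u_x = 0 on a periodic grid.
    def lax_wendroff_step(u, a, dt, dx):
        c = a * dt / dx                     # Courant number (|c| <= 1 for stability)
        up = np.roll(u, -1)                 # u_{i+1}
        um = np.roll(u, 1)                  # u_{i-1}
        return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3) ** 2)     # initial pulse
    dx = x[1] - x[0]
    for _ in range(100):
        u = lax_wendroff_step(u, a=1.0, dt=0.8 * dx, dx=dx)
    ```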

  6. A filtered backprojection algorithm with characteristics of the iterative Landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.
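
    For reference, the Landweber iteration whose noise characteristics the proposed window function is meant to mimic is the simple gradient scheme x_{k+1} = x_k + alpha * A^T (b - A x_k); a minimal sketch follows (the matrix in the usage example is an arbitrary illustration, not a tomographic system matrix).

    ```python
    import numpy as np

    # Landweber iteration for A x = b.
    def landweber(A, b, iters=2000, alpha=None):
        A, b = np.asarray(A, float), np.asarray(b, float)
        if alpha is None:
            alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # 0 < alpha < 2/sigma_max^2 guarantees convergence
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x + alpha * A.T @ (b - A @ x)
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])
    x_hat = landweber(A, b=A @ np.array([1.0, -2.0]))   # recovers approximately [1, -2]
    print(x_hat)
    ```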

  7. Continued Research into Characterizing the Preturbulence Environment for Sensor Development, New Hazard Algorithms and Experimental Flight Planning

    Science.gov (United States)

    Kaplan, Michael L.; Lin, Yuh-Lang

    2005-01-01

    The purpose of the research was to develop and test improved hazard algorithms that could result in the development of sensors that are better able to anticipate potentially severe atmospheric turbulence, which affects aircraft safety. The research focused on employing numerical simulation models to develop improved algorithms for the prediction of aviation turbulence. This involved producing both research simulations and real-time simulations of environments predisposed to moderate and severe aviation turbulence. The research resulted in the following fundamental advancements toward the aforementioned goal: 1) very high resolution simulations of turbulent environments indicated how predictive hazard indices could be improved, resulting in a candidate hazard index that indicated the potential for improvement over existing operational indices, 2) a real-time turbulence hazard numerical modeling system was improved by correcting deficiencies in its simulation of moist convection, and 3) the same real-time predictive system was tested by running the code twice daily and the hazard prediction indices updated and improved. Additionally, a simple validation study was undertaken to determine how well a real-time hazard predictive index performed when compared to commercial pilot observations of aviation turbulence. Simple statistical analyses were performed in this validation study, indicating potential skill in employing the hazard prediction index to predict regions of varying intensities of aviation turbulence. Data sets from a research numerical model were provided to NASA for use in a large eddy simulation numerical model. A NASA contractor report and several refereed journal articles were prepared and submitted for publication during the course of this research.

  8. Development of an algorithm for quantifying extremity biological tissue; Desenvolvimento de um algoritmo quantificador de tecido biologico de extremidade

    Energy Technology Data Exchange (ETDEWEB)

    Pavan, Ana L.M.; Miranda, Jose R.A., E-mail: analuiza@ibb.unesp.br, E-mail: jmiranda@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (IBB/UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Dept. de Fisica e Biofisica; Pina, Diana R. de, E-mail: drpina@frnb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (FMB/UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Dept. de Doencas Tropicas e Diagnostico por Imagem

    2013-07-01

    Computed radiography (CR) has become the most widely used device for image acquisition and production since its introduction in the 1980s. Detection and early diagnosis obtained via CR are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections, and fractures. However, the standards used for optimization of these images are based on international protocols. Therefore, it is necessary to compose radiographic techniques for the CR system that provide a secure medical diagnosis with doses as low as reasonably achievable. To this end, the aim of this work is to develop a tissue-quantifying algorithm that allows the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hand and wrist of adult patients was developed. Using the Matlab® software, a computational algorithm was developed that is able to quantify the average thickness of the soft tissue and bone present in the anatomical region under study, as well as the corresponding thicknesses of the simulator materials (aluminum and Lucite). This was possible through the application of masks and a Gaussian histogram removal technique. As a result, an average soft-tissue thickness of 18.97 mm and an average bone thickness of 6.15 mm were obtained, with equivalents in simulator materials of 23.87 mm of acrylic and 1.07 mm of aluminum. The results agree with the average thickness of the biological tissues of a standard patient's hand, enabling the construction of a homogeneous phantom.

  9. The Algorithm of Development the World Ocean Mining of the Industry During the Global Crisis

    Science.gov (United States)

    Nyrkov, Anatoliy; Budnik, Vladislav; Sokolov, Sergei; Chernyi, Sergei

    2016-08-01

    The article reviews the effect of hydrocarbon extraction on the general development of a country under the influence of economic, demographic, and technological factors, as well as its future role in the world energy balance. It also adduces facts that designate offshore and deep-water production of unconventional and conventional hydrocarbons, including the mining of marine mineral resources, as a promising area of future development, despite all the difficulties of this sector. The article further considers the state and prospects of the Russian continental shelf, taking into account its geographical location and its existing problems.

  10. Development of a Water Treatment Plant Operation Manual Using an Algorithmic Approach.

    Science.gov (United States)

    Counts, Cary A.

    This document describes the steps to be followed in the development of a prescription manual for training of water treatment plant operators. Suggestions on how to prepare both flow and narrative prescriptions are provided for a variety of water treatment systems, including: raw water, flocculation, rapid sand filter, caustic soda feed, alum feed,…

  11. HEAVY DUTY DIESEL VEHICLE LOAD ESTIMATION: DEVELOPMENT OF VEHICLE ACTIVITY OPTIMIZATION ALGORITHM

    Science.gov (United States)

    The Heavy-Duty Vehicle Modal Emission Model (HDDV-MEM) developed by the Georgia Institute of Technology (Georgia Tech) has the capability to model link-specific second-by-second emissions using speed/acceleration matrices. To estimate emissions, engine power demand calculated usin...

  13. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  14. Development of Flexible Active Power Control Strategies for Grid-Connected Photovoltaic Inverters by Modifying MPPT Algorithms

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    As the penetration level of grid-connected PV systems increases, more advanced control functionality is demanded. In order to ensure smooth and friendly grid integration as well as enable more PV installations, the power generated by PV systems needs to be flexible and capable of: 1) limiting...... strategies for grid-connected PV inverters by modifying maximum power point tracking algorithms, where the PV power is regulated by changing the operating point of the PV system. In this way, no extra equipment is needed, being a cost-effective solution. Experiments on a 3-kW grid-connected PV system have...... been performed, where the developed flexible active power control functionalities are achieved per demands....
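
    A hedged sketch of the general idea of regulating PV power by moving the operating point through a modified maximum power point tracking routine: below the power limit the tracker behaves as a conventional perturb-and-observe scheme, and above the limit it perturbs the voltage away from the maximum power point. The toy PV curve, step size, and limit are assumptions, not the authors' implementation.

    ```python
    # Power-limiting perturb-and-observe sketch.
    def constant_power_po(v, p, v_prev, p_prev, p_limit, step=1.0):
        """Return the next voltage reference for the PV operating point."""
        if p > p_limit:
            return v + step            # move toward open-circuit voltage to curtail power
        # conventional P&O: keep moving in the direction that increased power
        if p >= p_prev:
            return v + step if v >= v_prev else v - step
        return v - step if v >= v_prev else v + step

    # Toy PV curve with its maximum power point at 30 V (illustrative only).
    p_of_v = lambda v: max(0.0, 3000.0 - 3.0 * (v - 30.0) ** 2)

    v_prev, p_prev, v = 29.0, p_of_v(29.0), 30.0
    for _ in range(50):
        p = p_of_v(v)
        v, v_prev, p_prev = constant_power_po(v, p, v_prev, p_prev, p_limit=2000.0), v, p
    print(v, p_of_v(v))   # settles near the 2000 W constant-power operating point
    ```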

  15. Scaling to 150K cores: Recent algorithm and performance engineering developments enabling XGC1 to run at scale

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark F [Department of Applied Physics and Applied Mathematics, Columbia University (United States); Ku, Seung-Hoe; Chang, C-S [Courant Institute of Mathematical Sciences, New York University (United States); Worley, Patrick; D' Azevedo, Ed [Computer Science and Mathematics Division, Oak Ridge National Laboratory (United States); Cummings, Julian C, E-mail: mark.adams@columbia.ed, E-mail: sku@cims.nyu.ed, E-mail: worleyph@ornl.go, E-mail: dazevedoef@ornl.go, E-mail: cummings@cacr.caltech.ed, E-mail: cschang@cims.nyu.ed [Center for Advanced Computing Research, California Institute of Technology (United States)

    2009-07-01

    Particle-in-cell (PIC) methods have proven to be effective in discretizing the Vlasov-Maxwell system of equations describing the core of toroidal burning plasmas for many decades. Recent physical understanding of the importance of edge physics for stability and transport in tokamaks has led to the development of the first fully toroidal edge PIC code, XGC1. The edge region poses special problems in meshing for PIC methods due to the lack of closed flux surfaces, which makes field-line following meshes and coordinate systems problematic. We present a solution to this problem with a semi-field line following mesh method in a cylindrical coordinate system. Additionally, modern supercomputers require highly concurrent algorithms and implementations, with all levels of the memory hierarchy being efficiently utilized to realize optimal code performance. This paper presents a mesh and particle partitioning method, suitable to our meshing strategy, for use on highly concurrent cache-based computing platforms.

  16. Graph 500 on OpenSHMEM: Using a Practical Survey of Past Work to Motivate Novel Algorithmic Developments

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, Max [Rice Univ., Houston, TX (United States); Pritchard Jr., Howard Porter [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Budimlic, Zoran [Rice Univ., Houston, TX (United States); Sarkar, Vivek [Rice Univ., Houston, TX (United States)

    2016-12-22

    Graph500 [14] is an effort to offer a standardized benchmark across large-scale distributed platforms which captures the behavior of common communication-bound graph algorithms. Graph500 differs from other large-scale benchmarking efforts (such as HPL [6] or HPGMG [7]) primarily in the irregularity of its computation and data access patterns. The core computational kernel of Graph500 is a breadth-first search (BFS) implemented on an undirected graph. The output of Graph500 is a spanning tree of the input graph, usually represented by a predecessor mapping for every node in the graph. The Graph500 benchmark defines several pre-defined input sizes for implementers to test against. This report summarizes an investigation into implementing the Graph500 benchmark on OpenSHMEM, and focuses on first building a strong and practical understanding of the strengths and limitations of past work before proposing and developing novel extensions.
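
    The benchmark's core kernel, a breadth-first search returning a predecessor map, can be sketched in a few lines for the single-node case; the OpenSHMEM work discussed in the report distributes the graph across processing elements and exchanges frontier vertices between them.

    ```python
    from collections import deque

    # Graph500-style BFS kernel: return the predecessor map that defines the BFS spanning tree.
    def bfs_predecessors(adj, root):
        """adj: dict mapping vertex -> iterable of neighbours (undirected graph)."""
        pred = {root: root}
        frontier = deque([root])
        while frontier:
            u = frontier.popleft()
            for v in adj[u]:
                if v not in pred:
                    pred[v] = u
                    frontier.append(v)
        return pred   # vertices unreachable from root are absent from the map

    adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
    print(bfs_predecessors(adj, 0))   # {0: 0, 1: 0, 2: 0, 3: 1}
    ```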

  17. The development of flux-split algorithms for flows with non-equilibrium thermodynamics and chemical reactions

    Science.gov (United States)

    Grossman, B.; Cinella, P.

    1988-01-01

    A finite-volume method for the numerical computation of flows with nonequilibrium thermodynamics and chemistry is presented. A thermodynamic model is described which simplifies the coupling between the chemistry and thermodynamics and also results in the retention of the homogeneity property of the Euler equations (including all the species continuity and vibrational energy conservation equations). Flux-splitting procedures are developed for the fully coupled equations involving fluid dynamics, chemical production and thermodynamic relaxation processes. New forms of flux-vector split and flux-difference split algorithms are embodied in a fully coupled, implicit, large-block structure, including all the species conservation and energy production equations. Several numerical examples are presented, including high-temperature shock tube and nozzle flows. The methodology is compared to other existing techniques, including spectral and central-differenced procedures, and favorable comparisons are shown regarding accuracy, shock-capturing and convergence rates.

  19. Development of real-time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A [Consorzio RFX-Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, I-35127, Padua (Italy); Joffrin, E [Association EURATOM-CEA, CEA Cadarache, 13108 Saint-Paul-lez-Durance (France); Felton, R [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Mazon, D [Association EURATOM-CEA, CEA Cadarache, 13108 Saint-Paul-lez-Durance (France); Zabeo, L [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Albanese, R [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC, Loc. Feo di Vito, I-89060, RC (Italy); Arena, P [Assoc. Euratom-ENEA-CREATE, Univ. di Catania (Italy); Ambrosino, G [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico II, Via Claudio 21, I-80125 Naples (Italy); Ariola, M [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico II, Via Claudio 21, I-80125 Napoli (Italy); Barana, O [Consorzio RFX-Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, I-35127, Padua (Italy); Bruno, M [Assoc. Euratom-ENEA-CREATE, Univ. di Catania (Italy); Laborde, L [Association EURATOM-CEA, CEA Cadarache, 13108 Saint-Paul-lez-Durance (France); Moreau, D [Association EURATOM-CEA, CEA Cadarache, 13108 Saint-Paul-lez-Durance (France); Piccolo, F [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Sartori, F [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Crisanti, F [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E de la [Associacion EURATOM CIEMAT para Fusion, Avenida Complutense 22, E-28040 Madrid (Spain); Sanchez, J [Associacion EURATOM CIEMAT para Fusion, Avenida Complutense 22, E-28040 Madrid (Spain)

    2005-03-01

    Real-time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of next step tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real-time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. Some of the signals now routinely provided in real time at JET are: (i) the internal inductance and the main confinement quantities obtained by calculating the Shafranov integrals from the pick-up coils with 2 ms time resolution; (ii) the electron temperature profile, from electron cyclotron emission every 10 ms; (iii) the ion temperature and plasma toroidal velocity profiles, from charge exchange recombination spectroscopy, provided every 50 ms; and (iv) the safety factor profile, derived from the inversion of the polarimetric line integrals every 2 ms. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. With these new tools, several real-time schemes were implemented, among which the most significant is the simultaneous control of the safety factor and the plasma pressure profiles using the additional heating systems (LH, NBI, ICRH) as actuators. The control strategy adopted in this case consists of a multi-variable model-based technique, which was implemented as a truncated singular value decomposition of an integral operator. This approach is considered essential for systems like tokamak machines, characterized by a strong mutual dependence of the various parameters and the distributed nature of the quantities, the plasma profiles, to be controlled. First encouraging results were also obtained using non-algorithmic
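
    The truncated singular value decomposition step at the heart of the profile controller can be illustrated as follows, assuming a hypothetical linearized response matrix that maps actuator changes (e.g. LH, NBI, ICRH powers) to changes in the controlled profiles; the real controller operates on an identified plasma model rather than the random matrix used here.

    ```python
    import numpy as np

    # Truncated-SVD pseudo-inverse of an ill-conditioned response matrix K, used to turn a
    # measured profile error into actuator increments while discarding poorly observed modes.
    def truncated_svd_control(K, profile_error, rank):
        U, s, Vt = np.linalg.svd(np.asarray(K, float), full_matrices=False)
        s_inv = np.zeros_like(s)
        s_inv[:rank] = 1.0 / s[:rank]          # keep only the `rank` largest singular values
        K_pinv = Vt.T @ np.diag(s_inv) @ U.T
        return K_pinv @ profile_error          # actuator increments

    # Hypothetical 3-actuator, 4-point-profile example.
    K = np.random.default_rng(0).normal(size=(4, 3))
    du = truncated_svd_control(K, profile_error=np.array([0.1, -0.05, 0.0, 0.02]), rank=2)
    print(du)
    ```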

  20. Development of a novel algorithm for detecting glucocorticoid-induced diabetes mellitus using a medical information database.

    Science.gov (United States)

    Imatoh, T; Sai, K; Hori, K; Segawa, K; Kawakami, J; Kimura, M; Saito, Y

    2017-04-01

    Glucocorticoid-induced diabetes mellitus (GIDM) increases the risk of diabetes mellitus (DM)-related complications but is generally difficult to detect in clinical settings, and criteria for diagnosing GIDM have not been established. Recently, medical information databases (MIDs) have been used in post-marketing surveillance (PMS) studies. We conducted a pharmacoepidemiological study to develop an algorithm for detecting GIDM using an MID. We selected 1214 inpatients who were newly prescribed a typical glucocorticoid, prednisolone, during hospitalization from 2008 to 2014 from an MID of Hamamatsu University Hospital in Japan. GIDM was screened based on fasting blood glucose (FBG) and haemoglobin A1c (HbA1c) levels according to the current Japan Diabetes Society (JDS) DM criteria, and its predictability was evaluated by an expert's review of medical records. We investigated further candidate screening factors using receiver operating characteristic analysis. Sixty-three inpatients were identified by the JDS DM criteria. Of these, 33 patients were definitely diagnosed as having GIDM by the expert's review (positive predictive value = 52.4%). To develop a highly predictive algorithm, we compared the characteristics of inpatients diagnosed with definite GIDM and those diagnosed as non-GIDM. The maximum HbA1c levels in patients with GIDM were significantly higher than those in patients with non-GIDM (66.9 mmol/mol vs. 58.7 mmol/mol), and patients with GIDM also showed a significantly greater relative increase in the maximum HbA1c level (RIM-HbA1c) than those with non-GIDM (0.3 vs. 0.03). We therefore applied the RIM-HbA1c as a second screening factor to improve the detection of GIDM, and a 13% increase in RIM-HbA1c separated patients with GIDM from patients without GIDM. Our detection algorithm for GIDM using an MID achieved high sensitivity and specificity.

  1. Development of an Outdoor Temperature Based Control Algorithm for Residential Mechanical Ventilation Control

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tang, Yihuan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-08-01

    The Incremental Ventilation Energy (IVE) model developed in this study combines the output of simple air exchange models with a limited set of housing characteristics to estimate the associated change in energy demand of homes. The IVE model was designed specifically to enable modellers to use existing databases of housing characteristics to determine the impact of ventilation policy change on a population scale. The IVE model estimates of energy change when applied to US homes with limited parameterisation are shown to be comparable to the estimates of a well-validated, complex residential energy model.

  2. Development of Decision Making Algorithm for Control of Sea Cargo Containers by ``TAGGED'' Neutron Method

    Science.gov (United States)

    Anan'ev, A. A.; Belichenko, S. G.; Bogolyubov, E. P.; Bochkarev, O. V.; Petrov, E. V.; Polishchuk, A. M.; Udaltsov, A. Yu.

    2009-12-01

    Nowadays in Russia and abroad there are several groups of scientists engaged in the development of systems based on the "tagged" neutron method (API method) and intended for the detection of dangerous materials, including high explosives (HE). Particular attention is paid to the possibility of detecting dangerous objects inside a sea cargo container. The energy gamma-spectrum registered from the object under inspection is used to determine the oxygen/carbon and nitrogen/carbon chemical ratios, according to which a dangerous object is distinguished from a non-dangerous one. The material filling the container, however, gives rise to additional effects of rescattering and moderation of the 14 MeV primary neutrons of the generator and attenuation of the secondary gamma-radiation from reactions of inelastic neutron scattering on the objects under inspection. These effects distort the energy gamma-response from the examined object and therefore prevent correct determination of the chemical ratios. These difficulties are taken into account in the analytical method presented in the paper. The method has been validated against experimental data obtained with the system for HE detection in sea cargo, based on the API method and developed at VNIIA. The influence of shielding materials (wood and iron) on the results of HE detection and identification is considered. Results of applying the method to the analysis of experimental data on HE simulator measurements (tetryl, trotyl, hexogen) are presented.

  3. The development and utility of a clinical algorithm to predict early HIV-1 infection.

    Science.gov (United States)

    Sharghi, Neda; Bosch, Ronald J; Mayer, Kenneth; Essex, Max; Seage, George R

    2005-12-01

    The association between self-reported clinical factors and recent HIV-1 seroconversion was evaluated in a prospective cohort of 4652 high-risk participants in the HIV Network for Prevention Trials (HIVNET) Vaccine Preparedness Study. Eighty-six individuals seroconverted, with an overall annual seroconversion rate of 1.3 per 100 person-years. Four self-reported clinical factors were significantly associated with HIV-1 seroconversion in multivariate analyses: recent history of chlamydia infection or gonorrhea, recent fever or night sweats, belief of recent HIV exposure, and recent illness lasting ≥3 days. Two scoring systems, based on the presence of either 4 or 11 clinical factors, were developed. Sensitivity ranged from 2.3% (with a positive predictive value of 12.5%) to 72.1% (with a positive predictive value of 1%). Seroconversion rates were directly associated with the number of these clinical factors. The use of scoring systems comprised of clinical factors may aid in detecting early and acute HIV-1 infection in vaccine and microbicide trials. Organizers can educate high-risk trial participants to return for testing during interim visits if they develop these clinical factors. Studying individuals during early and acute HIV-1 infection would allow scientists to investigate the impact of the intervention being studied on early transmission or pathogenesis of HIV-1 infection.

  4. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Khalik, Hany S. [North Carolina State Univ., Raleigh, NC (United States); Zhang, Qiong [North Carolina State Univ., Raleigh, NC (United States)

    2014-05-20

    The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10³–10⁵ times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  5. Development of an Experimental Phased Array Feed System and Algorithms for Radio Astronomy

    Science.gov (United States)

    Landon, Jonathan C.

    Results are given for simulated and experimental data, demonstrating beampattern nulls deeper by 6 to 30 dB. To increase the system bandwidth toward the hundreds of MHz bandwidth required by astronomers for a fully science-ready instrument, an FPGA digital backend is introduced using a 64-input analog-to-digital converter running at 50 Msamp/sec and the ROACH processing board developed at the University of California, Berkeley. International efforts to develop digital back ends for large antenna arrays are considered, and a road map is proposed for development of a hardware correlator/beamformer at BYU using three ROACH boards communicating over 10 gigabit Ethernet.

  6. Algorithm design

    CERN Document Server

    Kleinberg, Jon

    2006-01-01

    Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.

  7. Design and development of a new micro-beam treatment planning system: effectiveness of algorithms of optimization and dose calculations and potential of micro-beam treatment.

    Science.gov (United States)

    Tachibana, Hidenobu; Kojima, Hiroyuki; Yusa, Noritaka; Miyajima, Satoshi; Tsuda, Akihisa; Yamashita, Takashi

    2012-07-01

    A new treatment planning system (TPS) was designed and developed for a new treatment system consisting of a micro-beam-enabled linac with robotics and a real-time tracking system. We also evaluated the effectiveness of the optimization and dose calculation algorithms implemented in the TPS for the new treatment system. In the TPS, the optimization procedure consists of the pseudo Beam's-Eye-View method for finding the optimized beam directions and the steepest-descent method for determining the beam intensities. We used a superposition-/convolution-based (SC-based) algorithm and a Monte Carlo-based (MC-based) algorithm to calculate dose distributions using CT image data sets. In the SC-based algorithm, dose density scaling was applied for the calculation of inhomogeneity corrections. The MC-based algorithm was implemented with the Geant4 toolkit and a phase-based approach using network-parallel computing. The evaluation showed that the TPS can optimize the direction and intensity of the individual beams. The average error of the dose calculated by the SC-based algorithm was less than 1%, with a calculation time of 15 s per beam. The MC-based algorithm, however, needed 72 min per beam with the phase-based approach, even though parallel computing reduced the cost of multiple-beam calculations and provided an 18.4-times-faster calculation speed. The SC-based algorithm is therefore practically acceptable for dose calculation in terms of accuracy and computation time. Additionally, we found a dosimetric advantage of the proton-Bragg-peak-like dose distribution in micro-beam treatment.
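
    A minimal sketch of a steepest-descent beam-intensity optimization of the kind described above: given a dose-influence matrix D (dose per unit beam weight at each calculation point) and a prescribed dose d, non-negative beam weights are found by projected gradient descent on ||D w - d||^2. The matrix, prescription, and step size are assumptions; the TPS obtains D from its SC- or MC-based dose engine.

    ```python
    import numpy as np

    # Projected steepest descent for min ||D w - d||^2 subject to w >= 0.
    def optimize_beam_weights(D, d, iters=500, step=None):
        D, d = np.asarray(D, float), np.asarray(d, float)
        if step is None:
            step = 1.0 / np.linalg.norm(D, 2) ** 2
        w = np.zeros(D.shape[1])
        for _ in range(iters):
            grad = D.T @ (D @ w - d)              # gradient of 0.5*||D w - d||^2
            w = np.maximum(w - step * grad, 0.0)  # project onto non-negative weights
        return w

    # Hypothetical 2-beam, 3-point example.
    D = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])
    print(optimize_beam_weights(D, d=np.array([1.0, 1.0, 0.8])))
    ```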

  8. Development and validation of case-finding algorithms for the identification of patients with anti-neutrophil cytoplasmic antibody-associated vasculitis in large healthcare administrative databases.

    Science.gov (United States)

    Sreih, Antoine G; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A

    2016-12-01

    The aim of this study was to develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener's, GPA), microscopic polyangiitis (MPA), and eosinophilic GPA (Churg-Strauss, EGPA). Two hundred fifty patients per disease were randomly selected from two large healthcare systems using the International Classification of Diseases version 9 (ICD9) codes for GPA/EGPA (446.4) and MPA (446.0). Sixteen case-finding algorithms were constructed using a combination of ICD9 code, encounter type (inpatient or outpatient), physician specialty, use of immunosuppressive medications, and the anti-neutrophil cytoplasmic antibody type. Algorithms with the highest average positive predictive value (PPV) were validated in a third healthcare system. An algorithm excluding patients with eosinophilia or asthma and including the encounter type and physician specialty had the highest PPV for GPA (92.4%). An algorithm including patients with eosinophilia and asthma and the physician specialty had the highest PPV for EGPA (100%). An algorithm including patients with one of the diagnoses (alveolar hemorrhage, interstitial lung disease, glomerulonephritis, and acute or chronic kidney disease), encounter type, physician specialty, and immunosuppressive medications had the highest PPV for MPA (76.2%). When validated in a third healthcare system, these algorithms had high PPV (85.9% for GPA, 85.7% for EGPA, and 61.5% for MPA). Adding the anti-neutrophil cytoplasmic antibody type increased the PPV to 94.4%, 100%, and 81.2% for GPA, EGPA, and MPA, respectively. Case-finding algorithms accurately identify patients with GPA, EGPA, and MPA in administrative databases. These algorithms can be used to assemble population-based cohorts and facilitate future research in epidemiology, drug safety, and comparative effectiveness. Copyright © 2016 John Wiley & Sons, Ltd.

  9. An approach to the development and analysis of wind turbine control algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Wu, K.C.

    1998-03-01

    The objective of this project is to develop the capability of symbolically generating an analytical model of a wind turbine for studies of control systems. This report focuses on a theoretical formulation of the symbolic equations of motion (EOMs) modeler for horizontal axis wind turbines. In addition to the power train dynamics, a generic 7-axis rotor assembly is used as the base model from which the EOMs of various turbine configurations can be derived. A systematic approach to generate the EOMs is presented using d'Alembert's principle and Lagrangian dynamics. A Matlab M file was implemented to generate the EOMs of a two-bladed, free yaw wind turbine. The EOMs will be compared in the future to those of a similar wind turbine modeled with the YawDyn code for verification. This project was sponsored by Sandia National Laboratories as part of the Adaptive Structures and Control Task. This is the final report of Sandia Contract AS-0985.

  10. A fuzzy hill-climbing algorithm for the development of a compact associative classifier

    Science.gov (United States)

    Mitra, Soumyaroop; Lam, Sarah S.

    2012-02-01

    Classification, a data mining technique, has widespread applications including medical diagnosis, targeted marketing, and others. Knowledge discovery from databases in the form of association rules is one of the important data mining tasks. An integrated approach, classification based on association rules, has drawn the attention of the data mining community over the last decade. While attention has been mainly focused on increasing classifier accuracies, not much effort has been devoted to building interpretable and less complex models. This paper discusses the development of a compact associative classification model using a hill-climbing approach and fuzzy sets. The proposed methodology builds the rule base by selecting rules that contribute towards increasing training accuracy, thus balancing classification accuracy against the number of classification association rules. The results indicated that the proposed associative classification model can achieve competitive accuracies on benchmark datasets with continuous attributes and lends better interpretability than other rule-based systems.
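    The greedy rule-selection idea described above can be sketched in a few lines: candidate rules are added to the rule base only when they raise training accuracy, which keeps the classifier compact. The rule representation, data, and absence of the fuzzy-set membership machinery are simplifying assumptions of this sketch, not details from the paper.

```python
# Minimal sketch of hill-climbing rule selection for an associative classifier.
# A "rule" is (antecedent_test, predicted_class); data are (features, label) pairs.

def accuracy(rules, default_class, data):
    """Fraction of samples classified correctly by the first matching rule."""
    correct = 0
    for x, y in data:
        pred = default_class
        for test, cls in rules:
            if test(x):
                pred = cls
                break
        correct += (pred == y)
    return correct / len(data)

def hill_climb_select(candidate_rules, default_class, train_data):
    """Greedily add rules that improve training accuracy; stop when none helps."""
    selected = []
    best = accuracy(selected, default_class, train_data)
    improved = True
    while improved:
        improved = False
        for rule in candidate_rules:
            if rule in selected:
                continue
            trial = accuracy(selected + [rule], default_class, train_data)
            if trial > best:            # keep only rules that raise accuracy
                selected.append(rule)
                best = trial
                improved = True
    return selected, best

# Tiny illustrative usage with two numeric features.
train = [((1.0, 0.2), "A"), ((0.9, 0.1), "A"), ((0.2, 0.8), "B"), ((0.1, 0.9), "B")]
candidates = [
    (lambda x: x[0] > 0.5, "A"),
    (lambda x: x[1] > 0.5, "B"),
]
rules, acc = hill_climb_select(candidates, default_class="A", train_data=train)
print(len(rules), acc)
```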

  11. Neural network and fuzzy logic based secondary cells charging algorithm development and the controller architecture for implementation

    Science.gov (United States)

    Ullah, Muhammed Zafar

    Neural networks and fuzzy logic are two key technologies that have recently received growing attention for solving real-world, nonlinear, time-variant problems. Because of their learning and/or reasoning capabilities, these techniques do not need a mathematical model of the system, which may be difficult, if not impossible, to obtain for complex systems. One of the major problems in the portable-electronics and electric-vehicle world is secondary-cell charging, which shows nonlinear characteristics. Portable electronic equipment, such as notebook computers, cordless and cellular telephones, and cordless electric lawn tools, uses batteries in increasing numbers. These consumers demand fast charging times, increased battery lifetime, and fuel-gauge capabilities. All of these demands require that the state of charge within a battery be known. Charging secondary cells fast is a problem that is difficult to solve using conventional techniques. Charge control is important in fast charging, preventing overcharging and improving battery life. This research work provides a quick and reliable approach to charger design using neural-fuzzy technology, which learns the exact battery charging characteristics. Neural-fuzzy technology is an intelligent combination of a neural net with fuzzy logic that learns system behavior from system input-output data rather than from mathematical modeling. The primary objective of this research is to improve the secondary-cell charging algorithm and to achieve faster charging times based on neural network and fuzzy logic techniques. A new controller architecture is also developed for implementing the charging algorithm for the secondary battery.

  12. Development of visible/infrared/microwave agriculture classification and biomass estimation algorithms

    Science.gov (United States)

    Rosenthal, W. D.; Blanchard, B. J.; Blanchard, A. J.

    1983-01-01

    This paper describes the results of a study to determine whether crop acreage and biomass estimates could be improved by using visible, infrared, and microwave data. The objectives were to (1) develop and test agricultural crop classification models using two or more spectral regions (visible through microwave), and (2) estimate biomass by including microwave with visible and infrared data. Aircraft multispectral data collected during the study included visible and infrared data (multiband data from 0.5 to 12 micrometers) and active microwave data at K band (2 cm), C band (6 cm), L band (20 cm), and P band (75 cm) in HH and HV polarizations. Ground truth data from each field consisted of soil moisture and biomass measurements. Results indicated that C-, L-, and P-band active microwave data combined with visible and infrared data improved crop discrimination and biomass estimates compared with results using only visible and infrared data. The active microwave frequencies were sensitive to different biomass levels: K and C bands were sensitive to differences at low biomass levels, while P band was sensitive to differences at high biomass levels.

  13. Developing an algorithm for enhancement of a digital terrain model for a densely vegetated floodplain wetland

    Science.gov (United States)

    Mirosław-Świątek, Dorota; Szporak-Wasilewska, Sylwia; Michałowski, Robert; Kardel, Ignacy; Grygoruk, Mateusz

    2016-07-01

    An airborne laser scanning survey was conducted with a scanning density of 4 points/m2 to accurately map the surface of a unique central European complex of wetlands: the lower Biebrza River valley (Poland). A method to correct the degrading effect of vegetation (the so-called "vegetation effect") on digital terrain models (DTMs) was applied utilizing remotely sensed images, real-time kinematic global positioning system elevation measurements, topographical surveys, and vegetation height measurements. Geographic object-based image analysis (GEOBIA) was performed to map vegetation within the study area, providing the categories from which vegetation height information was derived for the DTM correction. The final DTM was compared with a model in which the additional correction of the "vegetation effect" was neglected. The comparison between corrected and uncorrected DTMs demonstrated the importance of accurate topography through a simple presentation of the discrepancies arising in flood features derived using the various DTM products. An overall map classification accuracy of 80% was attained with the use of GEOBIA. Correction factors developed for the various vegetation types reached values from 0.08 up to 0.92 m and depended on the vegetation type.

  14. Development of a screening algorithm for Alzheimer's disease using categorical verbal fluency.

    Science.gov (United States)

    Chi, Yeon Kyung; Han, Ji Won; Jeong, Hyeon; Park, Jae Young; Kim, Tae Hui; Lee, Jung Jae; Lee, Seok Bum; Park, Joon Hyuk; Yoon, Jong Chul; Kim, Jeong Lan; Ryu, Seung-Ho; Jhoo, Jin Hyeong; Lee, Dong Young; Kim, Ki Woong

    2014-01-01

    We developed a weighted composite score of the categorical verbal fluency test (CVFT) that can more easily and widely screen Alzheimer's disease (AD) than the mini-mental status examination (MMSE). We administered the CVFT using animal category and MMSE to 423 community-dwelling mild probable AD patients and their age- and gender-matched cognitively normal controls. To enhance the diagnostic accuracy for AD of the CVFT, we obtained a weighted composite score from subindex scores of the CVFT using a logistic regression model: logit(case) = 1.160 + 0.474 × gender + 0.003 × age + 0.226 × education level − 0.089 × first-half score − 0.516 × switching score − 0.303 × clustering score + 0.534 × perseveration score. The area under the receiver operating characteristic curve (AUC) of this composite score for AD was 0.903 (95% CI = 0.883-0.923), and was larger than that of the age-, gender- and education-adjusted total score of the CVFT (p<0.001). In 100 bootstrapped re-samples, the composite score consistently showed better diagnostic accuracy, sensitivity and specificity for AD than the total score. Although the AUC for AD of the CVFT composite score was slightly smaller than that of the MMSE (0.930, p = 0.006), the CVFT composite score may be a good alternative to the MMSE for screening AD since it is much briefer, cheaper, and more easily applicable over phone or internet than the MMSE.
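    The weighted composite score can be computed directly from the coefficients quoted in the abstract. The sketch below assumes particular codings for gender and education level (the abstract does not state them), so the numerical inputs are illustrative only.

```python
import math

def cvft_composite_logit(gender, age, education_level,
                         first_half, switching, clustering, perseveration):
    """Weighted CVFT composite score (logit) from the abstract's coefficients.

    The coding of gender/education is an assumption; the abstract does not specify it.
    """
    return (1.160
            + 0.474 * gender
            + 0.003 * age
            + 0.226 * education_level
            - 0.089 * first_half
            - 0.516 * switching
            - 0.303 * clustering
            + 0.534 * perseveration)

def probability_of_ad(logit):
    """Convert the logit into a probability via the logistic function."""
    return 1.0 / (1.0 + math.exp(-logit))

# Example: hypothetical subject (all inputs illustrative).
score = cvft_composite_logit(gender=1, age=75, education_level=2,
                             first_half=8, switching=3, clustering=2, perseveration=1)
print(round(score, 3), round(probability_of_ad(score), 3))
```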

  15. Development of Effective Algorithm for Coupled Thermal-Hydraulics - Neutron-Kinetics Analysis of Reactivity Transient

    Energy Technology Data Exchange (ETDEWEB)

    Peltonen, Joanna

    2009-09-15

    Analyses of nuclear reactor safety have increasingly required coupling of full three-dimensional neutron kinetics (NK) core models with system transient thermal-hydraulics (TH) codes. To produce results within a 'reasonable' computing time, the coupled codes use different spatial descriptions of the reactor core. The TH code uses a few TH channels, typically 5 to 20, to represent the core, whereas the NK code uses an explicit node for each fuel assembly. Therefore, a spatial mapping between the coarse-grid TH and fine-grid NK domains is necessary. However, improper mappings may result in the loss of valuable information, causing inaccurate prediction of safety parameters. The purpose of this thesis is to study the sensitivity of the spatial coupling (channel refinement and spatial mapping) and to develop recommendations for NK-TH mapping in the simulation of safety transients - control rod drop, turbine trip, and a feedwater transient combined with stability performance (minimum pump speed of the recirculation pumps). The research methodology consists of a spatial-coupling convergence study in which the number of TH channels is increased and different mapping approaches are compared against the reference case. The reference case consists of one TH channel per fuel assembly. The comparison of results has been carried out under steady-state and transient conditions.

  16. Development of an algorithm for the analysis of surface defects in mechanical elements

    Science.gov (United States)

    Fargione, Giovanna A.; Geraci, Alberto L.; Pennisi, Luigi; Risitano, Antonino

    1998-10-01

    Non-destructive tests make it possible to establish the physical and structural condition of a mechanical part, to verify its state and superficial wear, and then to evaluate its 'remaining' efficiency. Non-destructive tests are applied in all fields of engineering in which the mechanical and structural characteristics of elements in service must be determined without subjecting them to destructive or damaging tests. In the present work an application program has been developed which, by examining the surface of mechanical parts with an optical microscope and a video acquisition board, is able to characterize the material and to recognize and identify the possible presence of a surface crack. The program constitutes the first step towards an industrial prototype in which a stage driven by stepper motors allows the whole surface of a part to be scanned and cracks to be recognized automatically, i.e. without the presence of an operator, and, when a crack is identified, characterized through the determination of geometric parameters useful for assessing the structural integrity of the element under examination. Different image-analysis techniques were applied in the program, and the use of an artificial neural network trained for crack recognition was necessary. The program was written in C and runs under Linux.

  17. A Robust Damage Detection Method Developed for Offshore Jacket Platforms Using Modified Artificial Immune System Algorithm

    Institute of Scientific and Technical Information of China (English)

    Mojtahedi, A.; Lotfollahi Yaghin, M. A.; Hassanzadeh, Y.; Abbasidoust, F.; Ettefagh, M. M.; Aminfar, M. H.

    2012-01-01

    Steel jacket-type platforms are a common kind of offshore structure, and health monitoring is an important issue in their safety assessment. In the present study, a new damage detection method is adopted for this kind of structure and inspected experimentally using a laboratory model. The method is investigated with the aim of developing a robust damage detection technique that is less sensitive to both measurement and analytical-model uncertainties. For this purpose, incorporation of the artificial immune system with weighted attributes (AISWA) method into finite element (FE) model updating is proposed and compared with other methods to explore its effectiveness in damage identification. Based on mimicking immune recognition, noise simulation, and attribute weighting, the method offers important advantages and has high success rates. It is therefore proposed as a suitable method for detecting failures in large civil engineering structures with complicated structural geometry, such as the considered case study.

  18. Development of a screening algorithm for Alzheimer's disease using categorical verbal fluency.

    Directory of Open Access Journals (Sweden)

    Yeon Kyung Chi

    Full Text Available We developed a weighted composite score of the categorical verbal fluency test (CVFT) that can more easily and widely screen Alzheimer's disease (AD) than the mini-mental status examination (MMSE). We administered the CVFT using animal category and MMSE to 423 community-dwelling mild probable AD patients and their age- and gender-matched cognitively normal controls. To enhance the diagnostic accuracy for AD of the CVFT, we obtained a weighted composite score from subindex scores of the CVFT using a logistic regression model: logit(case) = 1.160 + 0.474 × gender + 0.003 × age + 0.226 × education level − 0.089 × first-half score − 0.516 × switching score − 0.303 × clustering score + 0.534 × perseveration score. The area under the receiver operating curve (AUC) of this composite score for AD was 0.903 (95% CI = 0.883-0.923), and was larger than that of the age-, gender- and education-adjusted total score of the CVFT (p<0.001). In 100 bootstrapped re-samples, the composite score consistently showed better diagnostic accuracy, sensitivity and specificity for AD than the total score. Although the AUC for AD of the CVFT composite score was slightly smaller than that of the MMSE (0.930, p = 0.006), the CVFT composite score may be a good alternative to the MMSE for screening AD since it is much briefer, cheaper, and more easily applicable over phone or internet than the MMSE.

  19. Evaluation of carbapenemase screening and confirmation tests with Enterobacteriaceae and development of a practical diagnostic algorithm.

    Science.gov (United States)

    Maurer, Florian P; Castelberg, Claudio; Quiblier, Chantal; Bloemberg, Guido V; Hombach, Michael

    2015-01-01

    Reliable identification of carbapenemase-producing members of the family Enterobacteriaceae is necessary to limit their spread. This study aimed to develop a diagnostic flow chart using phenotypic screening and confirmation tests that is suitable for implementation in different types of clinical laboratories. A total of 334 clinical Enterobacteriaceae isolates genetically characterized with respect to carbapenemase, extended-spectrum β-lactamase (ESBL), and AmpC genes were analyzed. A total of 142/334 isolates (42.2%) were suspected of carbapenemase production, i.e., intermediate or resistant to ertapenem (ETP) and/or meropenem (MEM) and/or imipenem (IPM) according to EUCAST clinical breakpoints (CBPs). A group of 193/334 isolates (57.8%) showing susceptibility to ETP, MEM, and IPM was considered the negative-control group in this study. CLSI and EUCAST carbapenem CBPs and the new EUCAST MEM screening cutoff were evaluated as screening parameters. ETP, MEM, and IPM with or without aminophenylboronic acid (APBA) or EDTA combined-disk tests (CDTs) and the Carba NP-II test were evaluated as confirmation assays. EUCAST temocillin cutoffs were evaluated for OXA-48 detection. The EUCAST MEM screening cutoff (carbapenemase confirmation. ETP and MEM EDTA CDTs showed 100% sensitivity and specificity for class B carbapenemases. Temocillin zone diameters/MIC testing on MH-CLX was highly specific for OXA-48 producers. The overall sensitivity, specificity, positive predictive value, and negative predictive value of the Carba NP-II test were 78.9, 100, 100, and 98.7%, respectively. Combining the EUCAST MEM carbapenemase screening cutoff (carbapenemase detection. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  20. 3D–2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation.

    Science.gov (United States)

    Otake, Yoshito; Wang, Adam S; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L; Wolinsky, Jean-Paul; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2015-03-07

    An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely 'LevelCheck') to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of the surgical product
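    The gradient correlation similarity used as the registration metric in this family of methods is commonly computed as the mean of the normalized cross-correlations of the horizontal and vertical image gradients of the DRR and the radiograph. A plain NumPy sketch of that metric (not the paper's GPU implementation, and without the 9-DOF optimizer) is shown below.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def gradient_correlation(image_a, image_b):
    """Similarity between two projection images: mean NCC of their x- and y-gradients."""
    ga0, ga1 = np.gradient(image_a.astype(float))
    gb0, gb1 = np.gradient(image_b.astype(float))
    return 0.5 * (normalized_cross_correlation(ga0, gb0) +
                  normalized_cross_correlation(ga1, gb1))

# Identical images give a score of 1.0; structure against pure noise scores near 0.
rng = np.random.default_rng(4)
img = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)   # smooth synthetic structure
print(round(gradient_correlation(img, img), 3))
print(round(gradient_correlation(img, rng.normal(size=(64, 64))), 3))
```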

  1. 3D-2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation

    Science.gov (United States)

    Otake, Yoshito; Wang, Adam S.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L.; Wolinsky, Jean-Paul; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2015-03-01

    An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely ‘LevelCheck’) to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of

  2. Entropy Message Passing Algorithm

    CERN Document Server

    Ilic, Velimir M; Branimir, Todorovic T

    2009-01-01

    Message passing over a factor graph can be considered a generalization of many well-known algorithms for efficient marginalization of a multivariate function. A specific instance of the algorithm is obtained by choosing an appropriate commutative semiring for the range of the function to be marginalized. Examples include the Viterbi algorithm, obtained on the max-product semiring, and the forward-backward algorithm, obtained on the sum-product semiring. In this paper, the Entropy Message Passing algorithm (EMP) is developed. It operates over the entropy semiring, previously introduced in automata theory. It is shown how EMP extends the use of message passing over factor graphs to probabilistic-model algorithms such as the Expectation-Maximization algorithm, gradient methods, and the computation of model entropy, unifying the work of different authors.
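    The semiring viewpoint is easy to demonstrate on a chain-structured model: the same forward message-passing code computes the total observation probability on the sum-product semiring and the best-path (Viterbi) score on the max-product semiring. The entropy semiring used by EMP plugs into the same slot with a different (plus, times) pair; the sketch below, with illustrative model numbers, implements only the two classical semirings.

```python
# Chain (HMM-style) marginalization parameterized by a commutative semiring.
# semiring = (plus, times, zero, one); swapping it changes what is computed.

sum_product = (lambda a, b: a + b, lambda a, b: a * b, 0.0, 1.0)   # total probability
max_product = (max,               lambda a, b: a * b, 0.0, 1.0)   # best-path (Viterbi) score

def chain_marginalize(init, trans, emit, obs, semiring):
    plus, times, zero, one = semiring
    n = len(init)
    # forward messages alpha[s] along the chain
    alpha = [times(init[s], emit[s][obs[0]]) for s in range(n)]
    for o in obs[1:]:
        new_alpha = []
        for s2 in range(n):
            acc = zero
            for s1 in range(n):
                acc = plus(acc, times(alpha[s1], trans[s1][s2]))
            new_alpha.append(times(acc, emit[s2][o]))
        alpha = new_alpha
    total = zero
    for a in alpha:
        total = plus(total, a)
    return total

# Two-state toy model (illustrative numbers).
init  = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit  = [[0.9, 0.1], [0.2, 0.8]]   # emit[state][symbol]
obs   = [0, 1, 0]

print(chain_marginalize(init, trans, emit, obs, sum_product))  # P(obs)
print(chain_marginalize(init, trans, emit, obs, max_product))  # best single-path probability
```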

  3. Development of an Experimental Model for a Magnetorheological Damper Using Artificial Neural Networks (Levenberg-Marquardt Algorithm)

    Directory of Open Access Journals (Sweden)

    Ayush Raizada

    2016-01-01

    Full Text Available This paper is based on an experimental study of the design and control of vibrations in automotive vehicles. The objective is to develop a model for the highly nonlinear magnetorheological (MR) damper to maximize passenger comfort in an automotive vehicle. The behavior of the MR damper is studied under different loading conditions and current values in the system. The input and output parameters of the system are used as training data to develop a suitable model using artificial neural networks. To generate the training data, a test rig similar to a quarter-car model was fabricated to load the MR damper, with a mechanical shaker exciting it externally. Using the test rig, the input and output data points were acquired by measuring the acceleration and force of the system at different points with an impedance head and accelerometers. The model is validated by measuring the error on the testing and validation data points. The output of the model is the optimum current that is supplied to the MR damper, using a controller, to increase passenger comfort by minimizing the amplitude of vibrations transmitted to the passenger. Besides its use in cars, bikes, and other automotive vehicles, the model can also be retrained and applied to civil structures to make them earthquake resistant.
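    The core modelling step, fitting a small feedforward network to damper data with the Levenberg-Marquardt algorithm, can be sketched with SciPy by flattening the network weights and handing the residuals to least_squares(method='lm'). The data, network size, and input variables below are synthetic stand-ins, not the test-rig measurements used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic stand-in for measured MR-damper data: inputs are (velocity, current),
# output is damping force.  Real training data would come from the test rig.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = 800.0 * np.tanh(3.0 * X[:, 0]) * (0.5 + X[:, 1]) + 20.0 * rng.normal(size=200)

N_HIDDEN = 6
N_IN = X.shape[1]

def unpack(p):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = p[i:i + N_IN * N_HIDDEN].reshape(N_IN, N_HIDDEN); i += N_IN * N_HIDDEN
    b1 = p[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = p[i:i + N_HIDDEN]; i += N_HIDDEN
    b2 = p[i]
    return W1, b1, W2, b2

def mlp(p, X):
    W1, b1, W2, b2 = unpack(p)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def residuals(p):
    return mlp(p, X) - y

n_params = N_IN * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1
p0 = 0.1 * rng.normal(size=n_params)

# Levenberg-Marquardt minimization of the sum of squared residuals.
fit = least_squares(residuals, p0, method="lm")
print("RMS error:", np.sqrt(np.mean(fit.fun ** 2)))
```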

  4. Development of a control algorithm for teleoperation of DFDF(IMEF/M6 hot cell) maintenance equipment

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Chae Youn; Kwon, Hyuk Jo; Kim, Hak Duck; Jun, Ji Myung; Oh, Hee Geun [Chonbuk National University, Chonju (Korea)

    2002-03-01

    Teleoperation has been used for separating operators from the working environment. Thus, it is usually used to perform a work in an inaccessible place such as space, deep sea, etc. Also, it is used to perform a work in an accessible but a very poor working environment such as explosive, poison gas, radioactive area, etc. It is one of the advanced technology-intensive research areas. It has potentially big economical and industrial value. There is a tendency to avoid working in a difficult, dirty or dangerous place, particularly, in a high radioactive area since there always exist a possibility to be in a very dangerous situation. Thus, developing and utilizing of a teleoperation system will minimize the possibility to be exposed in such a extreme situation directly. Recently, there has been many researches on reflecting force information occurring in teleoperation to the operator in addition to visual information. The reflected force information is used to control the teleoperation system bilaterally. It will contribute a lot to improve teleoperation's safety and working efficiency. This study developed a bilateral force reflecting control algorithm. It may be used as a key technology of a teleoperation system for maintaining, repairing and dismantling facilities exposed in a high radioactive. 42 refs., 71 figs., 12 tabs. (Author)

  5. Control Algorithms and Simulated Environment Developed and Tested for Multiagent Robotics for Autonomous Inspection of Propulsion Systems

    Science.gov (United States)

    Wong, Edmond

    2005-01-01

    The NASA Glenn Research Center and academic partners are developing advanced multiagent robotic control algorithms that will enable the autonomous inspection and repair of future propulsion systems. In this application, on-wing engine inspections will be performed autonomously by large groups of cooperative miniature robots that will traverse the surfaces of engine components to search for damage. The eventual goal is to replace manual engine inspections that require expensive and time-consuming full engine teardowns and allow the early detection of problems that would otherwise result in catastrophic component failures. As a preliminary step toward the long-term realization of a practical working system, researchers are developing the technology to implement a proof-of-concept testbed demonstration. In a multiagent system, the individual agents are generally programmed with relatively simple controllers that define a limited set of behaviors. However, these behaviors are designed in such a way that, through the localized interaction among individual agents and between the agents and the environment, they result in self-organized, emergent group behavior that can solve a given complex problem, such as cooperative inspection. One advantage to the multiagent approach is that it allows for robustness and fault tolerance through redundancy in task handling. In addition, the relatively simple agent controllers demand minimal computational capability, which in turn allows for greater miniaturization of the robotic agents.

  6. Development of a stereolithography (STL) input and computer numerical control (CNC) output algorithm for an entry-level 3-D printer

    Directory of Open Access Journals (Sweden)

    Brown, Andrew

    2014-08-01

    Full Text Available This paper presents a prototype Stereolithography (STL) file format slicing and tool-path generation algorithm, which serves as a data front-end for a Rapid Prototyping (RP) entry-level three-dimensional (3-D) printer. Used mainly in Additive Manufacturing (AM), 3-D printers are devices that apply plastic, ceramic, and metal, layer by layer, in all three dimensions on a flat surface (X, Y, and Z axes). 3-D printers, unfortunately, cannot print an object without a special algorithm that is required to create the Computer Numerical Control (CNC) instructions for printing. An STL algorithm therefore forms a critical component for Layered Manufacturing (LM), also referred to as RP. The purpose of this study was to develop an algorithm that is capable of processing and slicing an STL file or multiple files, resulting in a tool-path, and finally compiling a CNC file for an entry-level 3-D printer. The prototype algorithm was implemented for an entry-level 3-D printer that utilises the Fused Deposition Modelling (FDM) process or Solid Freeform Fabrication (SFF) process, an AM technology. Following an experimental method, the full data flow path for the prototype algorithm was developed, starting with STL data files, and then processing the STL data file into a G-code file format by slicing the model and creating a tool-path. This layering method is used by most 3-D printers to turn a 2-D object into a 3-D object. The STL algorithm developed in this study presents innovative opportunities for LM, since it allows engineers and architects to transform their ideas easily into a solid model in a fast, simple, and cheap way. This is accomplished by allowing STL models to be sliced rapidly, effectively, and without error, and finally to be processed and prepared into a G-code print file.
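    The heart of such an algorithm is the slicing step: each STL facet that straddles a horizontal plane contributes one line segment to that layer's tool-path. The sketch below shows only this triangle-plane intersection and layer grouping; facet parsing, contour ordering, infill, and G-code emission are omitted, and the tetrahedron stands in for parsed STL data.

```python
# Slice a triangle mesh with horizontal planes: each triangle that straddles a
# plane z = const contributes one line segment to that layer's tool-path.

def interpolate(p, q, z):
    """Point where the edge p-q crosses the plane at height z."""
    t = (z - p[2]) / (q[2] - p[2])
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]), z)

def slice_triangle(tri, z):
    """Return the segment (pair of points) where the plane cuts the triangle, or None."""
    points = []
    for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        if (a[2] - z) * (b[2] - z) < 0:          # edge crosses the plane
            points.append(interpolate(a, b, z))
    return tuple(points) if len(points) == 2 else None

def slice_mesh(triangles, layer_height, z_min, z_max):
    """Group intersection segments by layer height."""
    layers = {}
    z = z_min + layer_height / 2.0               # slice mid-layer to avoid flat faces
    while z < z_max:
        segs = [s for s in (slice_triangle(t, z) for t in triangles) if s]
        layers[round(z, 6)] = segs
        z += layer_height
    return layers

# A single tetrahedron as a stand-in for parsed STL facets (vertices only).
tetra = [
    ((0, 0, 0), (10, 0, 0), (0, 10, 0)),
    ((0, 0, 0), (10, 0, 0), (0, 0, 10)),
    ((10, 0, 0), (0, 10, 0), (0, 0, 10)),
    ((0, 10, 0), (0, 0, 0), (0, 0, 10)),
]
for z, segments in slice_mesh(tetra, layer_height=2.5, z_min=0.0, z_max=10.0).items():
    print(z, len(segments), "segments")
```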

  7. Development of a multi-objective scheduling system for offshore projects based on hybrid non-dominated sorting genetic algorithm

    Directory of Open Access Journals (Sweden)

    Jinghua Li

    2015-03-01

    Full Text Available In order to enhance the efficiency of offshore companies, a multi-objective scheduling system based on a hybrid non-dominated sorting genetic algorithm was proposed. An optimization model for multiple objectives and multiple execution modes was constructed taking time, cost, and resources into account, and the corresponding mathematical model was established. Moreover, the key techniques of the proposed system were elaborated and its flowchart designed. To address the weaknesses of the non-dominated sorting genetic algorithm II (NSGA-II) in local search and computational efficiency, a Pareto-dominated simulated annealing algorithm was applied to search for the global solution. Finally, simulation examples and an industrial application verified the robustness and superior performance of the improved algorithm.
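    The non-dominated sorting step that NSGA-II-style schedulers rely on can be sketched compactly for minimization objectives such as makespan, cost, and peak resource usage. The scores below are illustrative, and the paper's hybridization with Pareto-dominated simulated annealing is not reproduced.

```python
def dominates(a, b):
    """True if solution a dominates b (all objectives <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objectives):
    """Return a list of fronts (lists of indices), front 0 being the Pareto front."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]                      # drop the trailing empty front

# Schedules scored as (makespan, cost, peak_resource) -- illustrative values.
scores = [(30, 120, 8), (28, 150, 9), (35, 100, 7), (30, 120, 8), (40, 160, 10)]
print(fast_non_dominated_sort(scores))
```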

  8. Continuous measurements of water surface height and width along a 6.5km river reach for discharge algorithm development

    Science.gov (United States)

    Tuozzolo, S.; Durand, M. T.; Pavelsky, T.; Pentecost, J.

    2015-12-01

    The upcoming Surface Water and Ocean Topography (SWOT) satellite will provide measurements of river width and water surface elevation and slope along continuous swaths of world rivers. Understanding water surface slope and width dynamics in river reaches is important both for developing and for validating discharge algorithms to be used on future SWOT data. We collected water surface elevation and river width data along a 6.5 km stretch of the Olentangy River in Columbus, Ohio from October to December 2014. Continuous measurements of water surface height were supplemented with periodic river width measurements at twenty sites along the study reach. The water surface slope of the entire reach ranged from 41.58 cm/km at baseflow to 45.31 cm/km after a storm event. The study reach was also broken into sub-reaches roughly 1 km in length to study smaller-scale slope dynamics. The furthest upstream sub-reaches are characterized by free-flowing riffle-pool sequences, while the furthest downstream sub-reaches are directly affected by two low-head dams. In the sub-reaches immediately upstream of each dam, the baseflow slope is as low as 2 cm/km, while the furthest upstream free-flowing sub-reach has a baseflow slope of 100 cm/km. During high-flow events the backwater effect of the dams was observed to propagate upstream: sub-reaches impounded by the dams had increased water surface slopes, while free-flowing sub-reaches had decreased water surface slopes. During the largest observed flow event, a stage change of 0.40 m changed sub-reach slopes by as much as 30 cm/km. Further analysis will examine height-width relationships within the study reach and relate cross-sectional flow area to river stage. These relationships can be used in conjunction with slope data to estimate discharge using a modified Manning's equation, and are a core component of the discharge algorithms being developed for the SWOT mission.
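    Reach-averaged discharge estimation from the kind of data described above ultimately reduces to a Manning-type relation. A minimal sketch is shown below, treating roughness and cross-sectional area as known inputs (in practice these are quantities the SWOT discharge algorithms must infer); the numbers are illustrative, with only the slope taken from the range reported in the abstract.

```python
def manning_discharge(area_m2, width_m, slope_m_per_m, n_roughness):
    """Discharge Q = (1/n) * A * R^(2/3) * S^(1/2); wide-channel hydraulic radius R ~ A / width."""
    hydraulic_radius = area_m2 / width_m          # wetted perimeter ~ width for wide rivers
    return (1.0 / n_roughness) * area_m2 * hydraulic_radius ** (2.0 / 3.0) * slope_m_per_m ** 0.5

# Illustrative inputs: slope 45.31 cm/km = 4.531e-4 m/m (from the abstract);
# width, area, and roughness are hypothetical values.
q = manning_discharge(area_m2=40.0, width_m=30.0, slope_m_per_m=4.531e-4, n_roughness=0.035)
print(round(q, 1), "m^3/s")
```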

  9. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  10. Optimization of Vertical Well Placement for Oil Field Development Based on Basic Reservoir Rock Properties using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Tutuka Ariadji

    2012-07-01

    Full Text Available Comparing the quality of basic reservoir rock properties is a common practice for locating new infill or development wells when optimizing an oil field development using a reservoir simulation. The conventional technique employs a manual trial-and-error process to find new well locations, which proves to be time-consuming, especially for a large field. Concerning this practical matter, an alternative, more robust technique was introduced so that the time and effort required to find the best new well locations, capable of producing the highest oil recovery, could be reduced. The objective of the research was to apply a Genetic Algorithm (GA) to determine well locations using reservoir simulation and thereby avoid the conventional manual trial-and-error method. The GA used the basic rock properties, i.e., porosity, permeability, and oil saturation, of each grid block obtained from a reservoir simulation model, applied to a newly generated fitness function formulated by translating the common engineering practice in reservoir simulation into a mathematical equation and then into a computer program. The maximum fitness value indicates the best grid location for a new well. In order to evaluate the performance of the generated GA program, two fields with different production-profile characteristics, namely the X and Y fields, were used to validate the proposed method. The proposed GA method proved to be a robust and accurate method for finding the best new well locations for field development. The key to the success of this proposed GA method is the formulation of the objective function.
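    The abstract does not give the exact fitness formulation, so the sketch below uses a simple product of normalized porosity, log-permeability, and oil saturation as a stand-in fitness and runs a minimal mutation-only GA over candidate grid locations; the grid properties are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-grid-block reservoir properties from a simulation model (synthetic here).
NX, NY = 20, 20
porosity = rng.uniform(0.05, 0.30, size=(NX, NY))
permeability_md = rng.lognormal(mean=3.0, sigma=1.0, size=(NX, NY))
oil_saturation = rng.uniform(0.2, 0.8, size=(NX, NY))

def normalize(a):
    return (a - a.min()) / (a.max() - a.min())

# Stand-in fitness: product of normalized properties (the paper's actual
# formulation is not reproduced in the abstract).
fitness = normalize(porosity) * normalize(np.log(permeability_md)) * normalize(oil_saturation)

def evaluate(individual):
    """An individual encodes a candidate well location as (i, j)."""
    i, j = individual
    return fitness[i, j]

# Minimal GA loop: random initialization, truncation selection, mutation only.
pop = [(rng.integers(NX), rng.integers(NY)) for _ in range(30)]
for _ in range(50):
    pop.sort(key=evaluate, reverse=True)
    parents = pop[:10]
    children = []
    for i, j in parents:                      # mutate each parent into two children
        for _ in range(2):
            ni = int(np.clip(i + rng.integers(-2, 3), 0, NX - 1))
            nj = int(np.clip(j + rng.integers(-2, 3), 0, NY - 1))
            children.append((ni, nj))
    pop = parents + children
best = max(pop, key=evaluate)
print("best block:", best, "fitness:", round(float(evaluate(best)), 3))
```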

  11. Development and validation of a new algorithm for the reclassification of genetic variants identified in the BRCA1 and BRCA2 genes.

    Science.gov (United States)

    Pruss, Dmitry; Morris, Brian; Hughes, Elisha; Eggington, Julie M; Esterling, Lisa; Robinson, Brandon S; van Kan, Aric; Fernandes, Priscilla H; Roa, Benjamin B; Gutin, Alexander; Wenstrup, Richard J; Bowles, Karla R

    2014-08-01

    BRCA1 and BRCA2 sequencing analysis detects variants of uncertain clinical significance in approximately 2 % of patients undergoing clinical diagnostic testing in our laboratory. The reclassification of these variants into either a pathogenic or benign clinical interpretation is critical for improved patient management. We developed a statistical variant reclassification tool based on the premise that probands with disease-causing mutations are expected to have more severe personal and family histories than those having benign variants. The algorithm was validated using simulated variants based on approximately 145,000 probands, as well as 286 BRCA1 and 303 BRCA2 true variants. Positive and negative predictive values of ≥99 % were obtained for each gene. Although the history weighting algorithm was not designed to detect alleles of lower penetrance, analysis of the hypomorphic mutations c.5096G>A (p.Arg1699Gln; BRCA1) and c.7878G>C (p.Trp2626Cys; BRCA2) indicated that the history weighting algorithm is able to identify some lower penetrance alleles. The history weighting algorithm is a powerful tool that accurately assigns actionable clinical classifications to variants of uncertain clinical significance. While being developed for reclassification of BRCA1 and BRCA2 variants, the history weighting algorithm is expected to be applicable to other cancer- and non-cancer-related genes.

  12. Development of Smart Ventilation Control Algorithms for Humidity Control in High-Performance Homes in Humid U.S. Climates

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan; Ticci, Sara

    2017-04-11

    Past field research and simulation studies have shown that high performance homes experience elevated indoor humidity levels for substantial portions of the year in humid climates. This is largely the result of lower sensible cooling loads, which reduces the moisture removed by the cooling system. These elevated humidity levels lead to concerns about occupant comfort, health and building durability. Use of mechanical ventilation at rates specified in ASHRAE Standard 62.2-2013 are often cited as an additional contributor to humidity problems in these homes. Past research has explored solutions, including supplemental dehumidification, cooling system operational enhancements and ventilation system design (e.g., ERV, supply, exhaust, etc.). This project’s goal is to develop and demonstrate (through simulations) smart ventilation strategies that can contribute to humidity control in high performance homes. These strategies must maintain IAQ via equivalence with ASHRAE Standard 62.2-2013. To be acceptable they must not result in excessive energy use. Smart controls will be compared with dehumidifier energy and moisture performance. This work explores the development and performance of smart algorithms for control of mechanical ventilation systems, with the objective of reducing high humidity in modern high performance residences. Simulations of DOE Zero-Energy Ready homes were performed using the REGCAP simulation tool. Control strategies were developed and tested using the Residential Integrated Ventilation (RIVEC) controller, which tracks pollutant exposure in real-time and controls ventilation to provide an equivalent exposure on an annual basis to homes meeting ASHRAE 62.2-2013. RIVEC is used to increase or decrease the real-time ventilation rate to reduce moisture transport into the home or increase moisture removal. This approach was implemented for no-, one- and two-sensor strategies, paired with a variety of control approaches in six humid climates (Miami

  13. PRGPred: A platform for prediction of domains of resistance gene analogues (RGA) in Arecaceae developed using machine learning algorithms

    Directory of Open Access Journals (Sweden)

    MATHODIYIL S. MANJULA

    2015-12-01

    Full Text Available Plant disease resistance genes (R-genes) are responsible for initiation of the defense mechanism against various phytopathogens. The majority of plant R-genes are members of very large multi-gene families, which encode structurally related proteins containing nucleotide binding site domains (NBS) and C-terminal leucine-rich repeats (LRR). Other classes possess an extracellular LRR domain, a transmembrane domain and, sometimes, an intracellular serine/threonine kinase domain. R-proteins work in pathogen perception and/or the activation of conserved defense signaling networks. In the present study, sequences representing resistance gene analogues (RGAs) of coconut, arecanut, oil palm and date palm were collected from NCBI, sorted based on domains and assembled into a database. The sequences were analyzed against the PRINTS database to find the conserved domains and motifs present in the RGAs. Based on these domains, we have also developed a tool to predict the domains of palm R-genes using various machine learning algorithms. The model files were selected based on the performance of the best classifier in training and testing. All this information is stored and made available in the online 'PRGpred' database and prediction tool.

  14. Development of the knowledge-based and empirical combined scoring algorithm (KECSA) to score protein-ligand interactions.

    Science.gov (United States)

    Zheng, Zheng; Merz, Kenneth M

    2013-05-24

    We describe a novel knowledge-based protein-ligand scoring function that employs a new definition for the reference state, allowing us to relate a statistical potential to a Lennard-Jones (LJ) potential. In this way, the LJ potential parameters were generated from protein-ligand complex structural data contained in the Protein Databank (PDB). Forty-nine (49) types of atomic pairwise interactions were derived using this method, which we call the knowledge-based and empirical combined scoring algorithm (KECSA). Two validation benchmarks were introduced to test the performance of KECSA. The first validation benchmark included two test sets that address the training set and enthalpy/entropy of KECSA. The second validation benchmark suite included two large-scale and five small-scale test sets, to compare the reproducibility of KECSA with respect to two empirical score functions previously developed in our laboratory (LISA and LISA+), as well as to other well-known scoring methods. Validation results illustrate that KECSA shows improved performance in all test sets when compared with other scoring methods, especially in its ability to minimize the root mean square error (RMSE). LISA and LISA+ displayed similar performance using the correlation coefficient and Kendall τ as the metric of quality for some of the small test sets. Further pathways for improvement are discussed which would allow KECSA to be more sensitive to subtle changes in ligand structure.
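    The general relationship between a pair-distance distribution and a Lennard-Jones potential can be sketched as inverse-Boltzmann conversion followed by an LJ fit. The reference state below is a trivial uniform-density one, which is precisely the part that KECSA replaces with its own definition, and the histogram is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

KT = 0.593  # kcal/mol at ~298 K

def statistical_potential(observed_counts, reference_counts):
    """Inverse Boltzmann: E(r) = -kT ln( g_obs(r) / g_ref(r) )."""
    g_obs = observed_counts / observed_counts.sum()
    g_ref = reference_counts / reference_counts.sum()
    return -KT * np.log(g_obs / g_ref)

def lennard_jones(r, epsilon, sigma):
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Synthetic distance histogram for one atom-type pair (illustrative only):
# counts follow the Boltzmann factor of a known LJ potential times an r^2 shell volume.
r = np.linspace(2.5, 8.0, 40)
observed = np.exp(-lennard_jones(r, epsilon=0.3, sigma=3.5) / KT) * r ** 2
reference = r ** 2                       # trivial uniform-density reference state

energies = statistical_potential(observed, reference)
energies -= energies[-1]                 # anchor the potential to zero at the cutoff
params, _ = curve_fit(lennard_jones, r, energies, p0=(0.2, 3.0))
print("fitted epsilon, sigma:", np.round(params, 3))
```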

  15. Development and Evaluation of Vectorised and Multi-Core Event Reconstruction Algorithms within the CMS Software Framework

    Science.gov (United States)

    Hauth, T.; Innocente, V.; Piparo, D.

    2012-12-01

    The processing of data acquired by the CMS detector at LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest microprocessor architectures. Modern CPUs present several vector units, the capacity of which is growing steadily with the introduction of new processor generations. Moreover, an increasing number of cores per die is offered by the main vendors, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of such innovations, either by explicit statements in the programs sources or automatically adapting the generated machine instructions to the available hardware, without the need of modifying the existing code base. Programming techniques to implement reconstruction algorithms and optimised data structures are presented, that aim to scalable vectorization and parallelization of the calculations. One of their features is the usage of new language features of the C++11 standard. Portions of the CMSSW framework are illustrated which have been found to be especially profitable for the application of vectorization and multi-threading techniques. Specific utility components have been developed to help vectorization and parallelization. They can easily become part of a larger common library. To conclude, careful measurements are described, which show the execution speedups achieved via vectorised and multi-threaded code in the context of CMSSW.

  16. Development of a Forward/Backward Power Flow Algorithm in Distribution Systems Based on Probabilistic Technique Using Normal Distribution

    Directory of Open Access Journals (Sweden)

    Shahrokh Shojaeian

    2014-01-01

    Full Text Available There are always some uncertainties in the prediction and estimation of distribution system loads. These uncertainties impose undesirable impacts and deviations on the power flow of the system, which may reduce the accuracy of the results obtained by system analysis. Thus, probabilistic analysis of distribution systems is very important. This paper proposes a probabilistic power-flow technique by applying a normal probability distribution, spanning seven standard deviations, to the forward-backward sweep algorithm. The losses and voltages of the IEEE 33-bus test distribution network are investigated with the new algorithm, and the results are compared with those of the conventional algorithm, i.e., one based on deterministic methods.
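    A sketch of the combination described above: loads drawn from a normal distribution (truncated at seven standard deviations) are propagated through a textbook backward/forward sweep on a small radial feeder, yielding distributions of bus voltage and losses. The three-branch feeder below is illustrative and is not the IEEE 33-bus network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simple radial feeder: node 0 is the slack bus; branch k connects node k to node k+1.
V_SLACK = 1.0 + 0j                                                  # per-unit
Z_BRANCH = np.array([0.02 + 0.04j, 0.03 + 0.05j, 0.02 + 0.03j])     # p.u. impedances
S_MEAN = np.array([0.10 + 0.04j, 0.08 + 0.03j, 0.12 + 0.05j])       # mean loads, p.u.
S_STD = 0.10                                                        # 10% relative std-dev

def backward_forward_sweep(s_load, tol=1e-8, max_iter=50):
    """Solve node voltages and real losses for one load realization."""
    n = len(s_load)
    v = np.full(n, V_SLACK, dtype=complex)             # voltages at nodes 1..n
    for _ in range(max_iter):
        i_load = np.conj(s_load / v)                   # backward: load currents
        i_branch = np.cumsum(i_load[::-1])[::-1]       # branch k carries all downstream loads
        v_new = np.empty(n, dtype=complex)             # forward: update voltages
        upstream = V_SLACK
        for k in range(n):
            v_new[k] = upstream - Z_BRANCH[k] * i_branch[k]
            upstream = v_new[k]
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    losses = np.sum(np.abs(i_branch) ** 2 * Z_BRANCH.real)
    return v, losses

# Monte Carlo over normally distributed loads (both P and Q scaled by one factor per node).
voltages, losses = [], []
for _ in range(2000):
    eps = np.clip(rng.normal(0.0, S_STD, size=len(S_MEAN)), -7 * S_STD, 7 * S_STD)
    v, p_loss = backward_forward_sweep(S_MEAN * (1.0 + eps))
    voltages.append(np.abs(v[-1]))
    losses.append(p_loss)

print("end-node |V|: mean %.4f, std %.4f p.u." % (np.mean(voltages), np.std(voltages)))
print("losses: mean %.5f p.u." % np.mean(losses))
```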

  17. SIMAS ADM XBT Algorithm

    Science.gov (United States)

    2016-06-07

    Technical memorandum, Naval Underwater Systems Center, New London Laboratory, New London, Connecticut 06320 (1984). An algorithm has been developed for the detection and correction of surface ship launched expendable bathythermograph

  18. Static Analysis Numerical Algorithms

    Science.gov (United States)

    2016-04-01

    Final technical report, Kestrel Technology, LLC, April 2016; approved for public release. Static analysis of numerical algorithms, linear digital filters and integrating accumulators, modifying existing versions of Honeywell's HiLiTE model-based development system and

  19. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  20. Quantum Algorithms

    Science.gov (United States)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  1. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    1993-01-01

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distri

  2. Development of a Treatment Algorithm for Streptococci and Enterococci from Positive Blood Cultures Identified with the Verigene Gram-Positive Blood Culture Assay

    OpenAIRE

    Alby, Kevin; Daniels, Lindsay M.; Weber, David J; Miller, Melissa B.

    2013-01-01

    Seventy-eight blood cultures with a Gram stain result of Gram-positive cocci in pairs and/or chains were evaluated with the Nanosphere Verigene Gram-positive blood culture (BC-GP) assay. The overall concordance of the assay with culture was 89.7% (70/78 cultures), allowing for the development of a targeted treatment algorithm.

  3. Development of novel algorithm and real-time monitoring ambulatory system using Bluetooth module for fall detection in the elderly.

    Science.gov (United States)

    Hwang, J Y; Kang, J M; Jang, Y W; Kim, H

    2004-01-01

    A novel algorithm and a real-time ambulatory monitoring system for fall detection in elderly people are described. Our system comprises an accelerometer, a tilt sensor, and a gyroscope; Bluetooth is used for real-time monitoring. The accelerometer measures kinetic force, while the tilt sensor and gyroscope estimate body posture. We also propose an algorithm for fall detection using signals obtained from the system attached to the chest. To evaluate the system and algorithm, we experimented on three people aged over 26 years. Four cases (forward fall, backward fall, side fall, and sit-to-stand) were each repeated ten times, and a daily-life-activity experiment was performed once per subject. These experiments showed that our system and algorithm can distinguish between falls and daily-life activity, with a fall-detection accuracy of 96.7%. Our system is especially suited for long-term, real-time ambulatory monitoring of elderly people in emergency situations.
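    The kind of rule implied by the abstract, an acceleration-magnitude spike followed by a sustained change in trunk tilt, can be sketched as follows. The sampling rate and thresholds are assumptions for illustration, not the values used in the study.

```python
import math

FS_HZ = 100                 # assumed sampling rate
ACC_THRESHOLD_G = 2.5       # impact spike threshold (assumed)
TILT_THRESHOLD_DEG = 60.0   # sustained trunk tilt indicating a lying posture (assumed)
POSTURE_DELAY_S = 2.0       # check posture this long after the impact

def detect_fall(acc_xyz, tilt_deg):
    """acc_xyz: list of (ax, ay, az) in g; tilt_deg: trunk tilt per sample, in degrees."""
    for i, (ax, ay, az) in enumerate(acc_xyz):
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > ACC_THRESHOLD_G:                       # candidate impact
            j = min(i + int(POSTURE_DELAY_S * FS_HZ), len(tilt_deg) - 1)
            if tilt_deg[j] > TILT_THRESHOLD_DEG:              # subject ended up lying down
                return True, i / FS_HZ
    return False, None

# Synthetic trace: quiet standing, an impact at t = 1 s, then lying down.
n = 5 * FS_HZ
acc = [(0.0, 0.0, 1.0)] * n
acc[1 * FS_HZ] = (2.0, 1.5, 2.0)                 # |a| ~ 3.2 g spike
tilt = [5.0] * (3 * FS_HZ) + [80.0] * (2 * FS_HZ)
fell, t = detect_fall(acc, tilt)
print("fall detected:", fell, "at t =", t, "s")
```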

  4. The Development of Geo-KOMPSAT-2A (GK-2A) Convective Initiation Algorithm over the Korea peninsular

    Science.gov (United States)

    Kim, H. S.; Chung, S. R.; Lee, B. I.; Baek, S.; Jeon, E.

    2016-12-01

    The rapid development of convection can bring heavy rainfall that causes a great deal of damage to society and threatens human life. Highly accurate forecasts of strong convection are therefore essential to mitigate such severe-weather disasters. Since a geostationary satellite is the most suitable instrument for monitoring a single cloud's lifecycle from formation to extinction, attempts have been made to capture the precursor signals of convective clouds by satellite. In step with the launch of Geo-KOMPSAT-2A (GK-2A) in 2018, we planned to produce a convective initiation (CI) product, defined as an indicator of cloud objects with the potential to bring heavy precipitation within two hours. The CI algorithm for GK-2A is composed of four stages. The first stage removes mature cloud pixels, a form of convective cloud mask, using visible (VIS) albedo and infrared (IR) brightness-temperature thresholds. The remaining immature cloud pixels are then clustered into cloud objects by watershed techniques. Each clustered object undergoes 'interest field' tests on IR data that reflect current cloud microphysical properties and their temporal changes: cloud depth, updraft strength, and the onset of glaciation. All interest-field thresholds were optimized for Korean-type convective clouds. Based on the scores from these tests, it is decided whether the cloud object will develop into a convective cell or not. Here we show the results of a case study over the Korean peninsula in summer using Himawari-8 VIS and IR data, with radar echo data used for validation. This study suggests that the CI product of GK-2A will contribute to enhancing the accuracy of very-short-range forecasts over the Korean peninsula.

  5. Developing Benthic Class Specific, Chlorophyll-a Retrieving Algorithms for Optically-Shallow Water Using SeaWiFS

    Directory of Open Access Journals (Sweden)

    Tara Blakey

    2016-10-01

    Full Text Available This study evaluated the ability to improve Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) chl-a retrieval from optically shallow coastal waters by applying algorithms specific to the pixels' benthic class. The form of the Ocean Color (OC) algorithm was assumed for this study. The operational atmospheric correction producing Level 2 SeaWiFS data was retained since the focus of this study was on establishing the benefit from the alternative specification of the bio-optical algorithm. Benthic class was determined through satellite image-based classification methods. Accuracy of the chl-a algorithms evaluated was determined through comparison with coincident in situ measurements of chl-a. The regionally-tuned models that were allowed to vary by benthic class produced more accurate estimates of chl-a than the single, unified regionally-tuned model. Mean absolute percent difference was approximately 70% for the regionally-tuned, benthic class-specific algorithms. Evaluation of the residuals indicated the potential for further improvement to chl-a estimation through finer characterization of benthic environments. Atmospheric correction procedures specialized to coastal environments were recognized as areas for future improvement as these procedures would improve both classification and algorithm tuning.
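    The OC family of algorithms is a polynomial in the log of a blue-to-green remote-sensing reflectance ratio, so benthic-class-specific tuning amounts to swapping coefficient sets. In the sketch below the coefficient values and example reflectances are placeholders, not the regionally tuned values from the study.

```python
import math

# OC-style chlorophyll retrieval: log10(chl) = a0 + a1*R + a2*R^2 + a3*R^3 + a4*R^4,
# with R = log10(max blue Rrs / green Rrs).  Coefficients per benthic class are
# placeholders, not the regionally tuned values from the study.
CLASS_COEFFS = {
    "seagrass": (0.33, -3.00, 3.00, -1.40, -0.50),
    "sand":     (0.35, -2.80, 2.50, -1.20, -0.40),
    "default":  (0.32, -3.07, 3.05, -1.40, -0.55),
}

def oc_chl(rrs_blue_bands, rrs_green, benthic_class="default"):
    """Chlorophyll-a (mg m^-3) from a maximum-band-ratio OC-style polynomial."""
    ratio = max(rrs_blue_bands) / rrs_green
    r = math.log10(ratio)
    a = CLASS_COEFFS.get(benthic_class, CLASS_COEFFS["default"])
    log_chl = sum(coef * r ** power for power, coef in enumerate(a))
    return 10.0 ** log_chl

# Example pixel: SeaWiFS Rrs at 443, 490, 510 nm (blue) and 555 nm (green), in sr^-1.
print(round(oc_chl([0.0062, 0.0070, 0.0060], 0.0045, benthic_class="seagrass"), 3))
```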

  6. Development of a decision tree to classify the most accurate tissue-specific tissue to plasma partition coefficient algorithm for a given compound.

    Science.gov (United States)

    Yun, Yejin Esther; Cotton, Cecilia A; Edginton, Andrea N

    2014-02-01

    Physiologically based pharmacokinetic (PBPK) modeling is a tool used in drug discovery and human health risk assessment. PBPK models are mathematical representations of the anatomy, physiology and biochemistry of an organism and are used to predict a drug's pharmacokinetics in various situations. Tissue to plasma partition coefficients (Kp), key PBPK model parameters, define the steady-state concentration differential between tissue and plasma and are used to predict the volume of distribution. The experimental determination of these parameters once limited the development of PBPK models; however, in silico prediction methods were introduced to overcome this issue. The developed algorithms vary in input parameters and prediction accuracy, and none are considered standard, warranting further research. In this study, a novel decision-tree-based Kp prediction method was developed using six previously published algorithms. The aim of the developed classifier was to identify the most accurate tissue-specific Kp prediction algorithm for a new drug. A dataset consisting of 122 drugs was used to train the classifier and identify the most accurate Kp prediction algorithm for a certain physicochemical space. Three versions of tissue-specific classifiers were developed and were dependent on the necessary inputs. The use of the classifier resulted in a better prediction accuracy than that of any single Kp prediction algorithm for all tissues, the current mode of use in PBPK model building. Because built-in estimation equations for those input parameters are not necessarily available, this Kp prediction tool will provide Kp prediction when only limited input parameters are available. The presented innovative method will improve tissue distribution prediction accuracy, thus enhancing the confidence in PBPK modeling outputs.
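    The idea of a classifier that recommends the most accurate Kp algorithm for a new drug from its physicochemical descriptors can be sketched with a standard decision tree. The drug descriptors, labels, and the rule generating them below are synthetic placeholders rather than the 122-drug dataset used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Synthetic stand-in for the training set: descriptors are logP, fraction unbound
# in plasma (fu), and pKa; the label is the index of the Kp algorithm that predicted
# that drug's tissue Kp most accurately (here generated by a toy rule).
n_drugs = 122
logp = rng.uniform(-2, 6, n_drugs)
fu = rng.uniform(0.01, 1.0, n_drugs)
pka = rng.uniform(2, 11, n_drugs)
X = np.column_stack([logp, fu, pka])
best_algorithm = np.where(logp > 3, 0, np.where(fu < 0.2, 1, 2))   # toy ground truth

X_train, X_test, y_train, y_test = train_test_split(
    X, best_algorithm, test_size=0.25, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 2))

# For a new drug, the tree recommends which published Kp algorithm to apply.
new_drug = np.array([[2.1, 0.10, 7.4]])
print("recommended Kp algorithm index:", int(clf.predict(new_drug)[0]))
```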

  7. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  8. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    Science.gov (United States)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.
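    The planning geometry such an algorithm solves can be illustrated with a back-of-the-envelope calculation: given an idle descent approximated by a fixed flight-path angle and a known along-track wind, compute the top-of-descent distance and the time to the metering fix. The angle, speeds, and wind below are illustrative; the actual algorithm uses airplane performance data with weight, wind, and temperature corrections.

```python
import math

def descent_plan(cruise_alt_ft, fix_alt_ft, descent_angle_deg,
                 true_airspeed_kt, wind_component_kt):
    """Distance (NM) and time (min) from top of descent to the metering fix.

    Positive wind_component_kt is a tailwind along track.
    """
    alt_to_lose_ft = cruise_alt_ft - fix_alt_ft
    # Along-track distance for a straight descent at a constant flight-path angle.
    distance_nm = alt_to_lose_ft / (math.tan(math.radians(descent_angle_deg)) * 6076.12)
    groundspeed_kt = true_airspeed_kt + wind_component_kt
    time_min = distance_nm / groundspeed_kt * 60.0
    return distance_nm, time_min

# Illustrative idle descent: FL350 to a 10,000 ft metering fix on a 3-degree path.
dist, minutes = descent_plan(cruise_alt_ft=35000, fix_alt_ft=10000,
                             descent_angle_deg=3.0, true_airspeed_kt=350,
                             wind_component_kt=-20)
print(f"start descent {dist:.1f} NM before the fix, about {minutes:.1f} min out")
```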

  9. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    Science.gov (United States)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  10. Development of Haptic Interaction Algorithms for Virtual Objects

    Institute of Scientific and Technical Information of China (English)

    张小瑞; 孙伟; 宋爱国; 崔桐; 胡小科

    2011-01-01

    Haptic interaction algorithms for virtual objects under applied force are a key issue in haptic human-computer interaction in virtual environments. The research challenges of these algorithms were analyzed, eleven representative haptic interaction algorithms were introduced, and their merits and drawbacks were compared on that basis. Finally, future development trends for haptic interaction algorithms were outlined.
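
    As a concrete, deliberately simplified example of the kind of algorithm surveyed (not one of the eleven specific algorithms from the paper), the sketch below computes a penalty-based spring-damper contact force when a haptic probe penetrates a virtual sphere; the stiffness and damping values are illustrative.

      # Sketch of a basic penalty-based haptic rendering step (one of the classic
      # approaches in this area, not a specific algorithm from the paper).
      import numpy as np

      def penalty_force(probe_pos, probe_vel, center, radius, k=600.0, b=2.0):
          """Spring-damper contact force on the probe from a virtual sphere.

          k (N/m) and b (N*s/m) are illustrative stiffness/damping values.
          Returns a zero vector when the probe is outside the sphere.
          """
          offset = probe_pos - center
          dist = np.linalg.norm(offset)
          if dist >= radius or dist == 0.0:
              return np.zeros(3)
          normal = offset / dist                      # outward surface normal
          penetration = radius - dist                 # penetration depth (m)
          v_normal = np.dot(probe_vel, normal)        # penetrating velocity component
          return (k * penetration - b * v_normal) * normal

      print(penalty_force(np.array([0.0, 0.0, 0.09]), np.array([0.0, 0.0, -0.05]),
                          np.array([0.0, 0.0, 0.0]), radius=0.1))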

  11. The NASA Soil Moisture Active Passive (SMAP) Mission - Science and Data Product Development Status

    Science.gov (United States)

    Njoku, E.; Entekhabi, D.; O'Neill, P.

    2012-01-01

    of the SMAP data products. The Testbed simulations are designed to capture various sources of errors in the products, including environmental effects, instrument effects (nonideal aspects of the measurement system), and retrieval algorithm errors. The SMAP project has developed a Calibration and Validation (Cal/Val) Plan designed to support algorithm development (pre-launch) and data product validation (post-launch). A key component of the Cal/Val Plan is the identification, characterization, and instrumentation of sites that can be used to calibrate and validate the sensor data (Level 1) and derived geophysical products (Level 2 and higher).

  12. Shape formation algorithm

    OpenAIRE

    2016-01-01

    This project concerns the implementation of a decentralized algorithm for shape formation. The initial goal was to test the algorithm with a swarm of autonomous drones, but due to time constraints and the complexity of the project, the work was limited to 2D simulation.
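
    A toy sketch of the general idea, not the project's actual algorithm, is shown below: each simulated agent greedily claims the nearest unclaimed target point of the desired shape and steps toward it using only its own position.

      # Toy sketch of decentralized 2D shape formation (not the project's algorithm):
      # each agent repeatedly claims the nearest free target point of the desired
      # shape and takes a small step toward it.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 8
      agents = rng.uniform(-5, 5, size=(n, 2))                  # random start positions
      angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
      targets = np.c_[np.cos(angles), np.sin(angles)]           # desired shape: unit circle

      for _ in range(200):
          claimed = set()
          for i in range(n):
              # Each agent picks the nearest target not yet claimed this round.
              order = np.argsort(np.linalg.norm(targets - agents[i], axis=1))
              goal = next(j for j in order if j not in claimed)
              claimed.add(goal)
              step = targets[goal] - agents[i]
              agents[i] += 0.1 * step                            # move 10% of the way

      print(np.round(agents, 2))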

  13. Development of an algorithm for TLD badge system for dosimetry in the field of X and gamma radiation in terms of Hp(10).

    Science.gov (United States)

    Bakshi, A K; Srivastava, K; Varadharajan, G; Pradhan, A S; Kher, R K

    2007-01-01

    In view of the introduction of the International Commission on Radiation Units and Measurements operational quantities Hp(10) and Hp(0.07), defined for individual monitoring, it became necessary to develop an algorithm that gives the response of the dosemeter directly in terms of the operational quantities. Hence, for this purpose, and also to improve the accuracy of dose estimation, especially in mixed fields of X and gamma radiation, an algorithm was developed based on a higher-order polynomial fit of data points generated from the dose response of the discs under the different filter regions of the present TL dosemeter system for known delivered doses. A study of the response of the BARC TL dosemeter system, based on CaSO(4):Dy Teflon thermoluminescence dosemeter discs, in mixed fields of X and gamma radiation was carried out to ensure that the accuracies are within the limits recommended by international organisations. The prevalent algorithm, based on the ratios of the disc responses under the various filter regions of the dosemeter for pure photons, was tested for different proportions of the two radiations in mixed-field dosimetry, and its accuracy for a few fields was found to be beyond the acceptable limit. The newly proposed algorithm was also tested in mixed photon fields and in pure photon fields at varied angles, and both the mixed-field response and the angular response of the dosemeter were found to be satisfactory. The new algorithm can be used with the present TL dosemeter to record and report personal dose in terms of Hp(10) as per the international recommendations.
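
    The record describes a higher-order polynomial fit from disc responses under different filters to Hp(10); a minimal sketch of that idea, with invented calibration data and a hypothetical two-disc response ratio, is given below.

      # Sketch of the general idea behind a ratio-based TLD algorithm (illustrative only,
      # with invented calibration data): fit a higher-order polynomial that maps the
      # ratio of disc readings under two filters to a correction factor, then apply it
      # to convert the open-disc reading to Hp(10).
      import numpy as np

      # Hypothetical calibration data for known delivered Hp(10) values.
      ratio = np.array([0.45, 0.60, 0.75, 0.85, 0.95, 1.00])     # disc1/disc2 response ratio
      correction = np.array([1.9, 1.5, 1.25, 1.1, 1.02, 1.0])    # Hp(10) / open-disc reading

      coeffs = np.polyfit(ratio, correction, deg=3)               # higher-order polynomial fit

      def hp10(open_disc_reading, disc_ratio):
          """Estimate Hp(10) in mSv from an open-disc reading and a filter-response ratio."""
          return open_disc_reading * np.polyval(coeffs, disc_ratio)

      print(hp10(open_disc_reading=2.0, disc_ratio=0.7))          # mixed X/gamma field example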

  14. Computerized Ultrasound Risk Evaluation (CURE) System: Development of Combined Transmission and Reflection Ultrasound with New Reconstruction Algorithms for Breast Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Littrup, P J; Duric, N; Azevedo, S; Chambers, D; Candy, J V; Johnson, S; Auner, G; Rather, J; Holsapple, E T

    2001-09-07

    processing, but the operator-dependent nature of using a moveable transducer head remains a significant problem for thorough coverage of the entire breast. We have therefore undertaken the development of a whole-breast (i.e., including the axillary tail) system with improved resolution and tissue characterization abilities. The extensive ultrasound physics considerations, engineering, materials process development, and subsequent reconstruction algorithms are beyond the scope of this initial paper; details of these proprietary processes will be published once the intellectual property is fully secured. We focus here on the imaging outcomes as they apply to eventual expansion into clinical use.

  15. SEBAL-A: A Remote Sensing ET Algorithm that Accounts for Advection with Limited Data. Part I: Development and Validation

    Directory of Open Access Journals (Sweden)

    Mcebisi Mkhwanazi

    2015-11-01

    Full Text Available The Surface Energy Balance Algorithm for Land (SEBAL) is one of the remote sensing (RS) models that are increasingly being used to determine evapotranspiration (ET). SEBAL is a widely used model, mainly because it requires minimal weather data and no prior knowledge of surface characteristics. However, it has been observed to underestimate ET under advective conditions because it disregards advection as another source of energy available for evaporation. A modified SEBAL model was therefore developed in this study. An advection component, which is absent in the original SEBAL, was introduced such that the energy available for evapotranspiration was the sum of net radiation and advected heat energy. The improved SEBAL model was termed SEBAL-Advection, or SEBAL-A. An important aspect of the improved model is the estimation of advected energy using minimal weather data. While other RS models require hourly weather data to account for advection (e.g., METRIC), SEBAL-A only requires daily averages of limited weather data, making it appropriate even in areas where weather data at short time steps may not be available. In this study, the original SEBAL model was first evaluated under advective and non-advective conditions near Rocky Ford in southeastern Colorado, a semi-arid area where afternoon advection is a common occurrence. The SEBAL model was found to incur large errors when there was advection (indicated by higher wind speed and warm, dry air). SEBAL-A was then developed and validated in the same area under standard surface conditions, described as healthy alfalfa 40–60 cm tall and free of water stress. ET values estimated using the original and modified SEBAL were compared to ET values measured with a large weighing lysimeter. When SEBAL ET was compared to SEBAL-A ET values, the latter showed improved performance, with the ET Mean Bias Error (MBE) reduced from −17
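
    The core modification can be summarized as LE = (Rn + A) − G − H, where A is the advected energy. The sketch below illustrates this numerically; the advection parameterization driven by daily wind speed and vapour pressure deficit is a hypothetical placeholder, not the published SEBAL-A formulation.

      # Sketch of the SEBAL-A idea: add an advected-energy term to the surface energy
      # balance so that LE = (Rn + A) - G - H. The advection parameterization below,
      # driven by daily mean wind speed and vapour pressure deficit, is a hypothetical
      # placeholder, not the published SEBAL-A formulation.
      LAMBDA = 2.45e6          # latent heat of vaporization (J/kg)

      def daily_et_mm(rn, g, h, wind_ms, vpd_kpa, adv_coeff=15.0):
          """Daily ET (mm/day) from daily-mean energy-balance terms in W/m2 plus advection."""
          advection = adv_coeff * wind_ms * vpd_kpa          # hypothetical A (W/m2)
          le = (rn + advection) - g - h                      # latent heat flux (W/m2)
          return le * 86400.0 / LAMBDA                       # kg/m2/day, i.e. mm/day of water

      # Example: advective afternoon in a semi-arid area (illustrative numbers).
      print(daily_et_mm(rn=160.0, g=20.0, h=40.0, wind_ms=4.0, vpd_kpa=2.5))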

  16. Evaluation of the Quality of the Cloud Dataset from the Goddard Multi-Scale Modeling Framework for Supporting GPM Algorithm Development

    Science.gov (United States)

    Chern, J.; Tao, W.; Mohr, K. I.; Matsui, T.; Lang, S. E.

    2013-12-01

    With recent rapid advancements in computational technology, a multi-scale modeling framework (MMF) that replaces conventional cloud parameterizations with a cloud-resolving model (CRM) in each grid column of a GCM has been developed and improved at NASA Goddard. The Goddard MMF is based on the coupling of the Goddard Cumulus Ensemble (GCE), a CRM, and the Goddard GEOS global model. In recent years, several new and improved microphysical schemes have been developed and implemented in the GCE based on observations from field campaigns, and these schemes have been incorporated into the MMF. The MMF has global coverage and can provide detailed cloud properties such as cloud amount, hydrometeor types, and vertical profiles of water content at the high spatial and temporal resolution of a cloud-resolving model. When coupled with the Goddard Satellite Data Simulation Unit (GSDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators, the MMF system can provide radiances and backscattering similar to what satellites directly observe. In this study, a one-year (2007) MMF simulation was performed with the new 4-ice (cloud ice, snow, graupel, and hail) microphysical scheme. The GEOS global model is run at 2° x 2.5° resolution, and the embedded two-dimensional GCEs each have 64 columns at 4 km horizontal resolution. The large-scale forcing from the GCM is nudged to the EC-Interim analysis to reduce the influence of MMF model biases on the cloud-resolving model results. The simulation provides more than 300 million vertical profiles of cloud data in different seasons, geographic locations, and climate regimes. This cloud dataset is used to supplement observations over data-sparse areas to support GPM algorithm development. The model-simulated mean and variability of surface rainfall and snowfall, cloud and precipitation types, cloud properties, radiances, and backscattering are evaluated against satellite observations. We will assess the strengths

  17. Sensor placement algorithm development to maximize the efficiency of acid gas removal unit for integrated gasification combined cycle (IGCC) power plant with CO{sub 2} capture

    Energy Technology Data Exchange (ETDEWEB)

    Paul, P.; Bhattacharyya, D.; Turton, R.; Zitney, S.

    2012-01-01

    Future integrated gasification combined cycle (IGCC) power plants with CO{sub 2} capture will face stricter operational and environmental constraints. Accurate values of relevant states/outputs/disturbances are needed to satisfy these constraints and to maximize operational efficiency. Unfortunately, a number of these process variables cannot be measured, while others can be measured but have low precision, reliability, or signal-to-noise ratio. In this work, a sensor placement (SP) algorithm is developed for optimal selection of sensor location, number, and type that can maximize the plant efficiency and result in a desired precision of the relevant measured/unmeasured states. The SP algorithm is developed for a selective, dual-stage Selexol-based acid gas removal (AGR) unit for an IGCC plant with pre-combustion CO{sub 2} capture. A comprehensive nonlinear dynamic model of the AGR unit is developed in Aspen Plus Dynamics® (APD) and used to generate a linear state-space model that is used in the SP algorithm. The SP algorithm assumes that an optimal Kalman filter will be implemented in the plant for state and disturbance estimation, and it further assumes steady-state Kalman filtering and steady-state operation of the plant. The control system is considered to operate based on the estimated states, and the algorithm thereby captures the effects of sensor placement on the overall plant efficiency. The optimization problem is solved by a Genetic Algorithm (GA) considering both linear and nonlinear equality and inequality constraints. Because of the very large number of candidate sensor sets and the long time needed to solve the constrained optimization problem, which includes more than 1000 states, the solution of this problem is computationally expensive. To reduce the computation time, parallel computing is performed using the Distributed Computing Server (DCS®) and the Parallel
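
    A toy sketch of the underlying idea is given below: each candidate sensor subset of a small linear model is scored by the trace of the steady-state Kalman-filter error covariance (from the discrete algebraic Riccati equation), and the best subset is kept. A genetic algorithm would replace the exhaustive loop for realistic problem sizes; the system matrices are invented.

      # Toy sketch of the sensor-placement idea: score each candidate sensor subset by
      # the steady-state Kalman-filter error covariance of a small linear model and
      # keep the best one. A genetic algorithm would replace the exhaustive loop for
      # problems with >1000 states; the system matrices here are invented.
      from itertools import combinations
      import numpy as np
      from scipy.linalg import solve_discrete_are

      A = np.array([[0.9, 0.1, 0.0],
                    [0.0, 0.8, 0.1],
                    [0.0, 0.0, 0.95]])          # hypothetical 3-state process model
      C_all = np.eye(3)                          # one candidate sensor per state
      Q = 0.01 * np.eye(3)                       # process noise covariance
      r_sensor = np.array([0.1, 0.05, 0.2])      # measurement noise of each candidate sensor

      best = None
      for subset in combinations(range(3), 2):   # choose 2 of the 3 candidate sensors
          C = C_all[list(subset), :]
          R = np.diag(r_sensor[list(subset)])
          # Steady-state a priori error covariance from the filter Riccati equation.
          P = solve_discrete_are(A.T, C.T, Q, R)
          score = np.trace(P)                    # lower trace = better state precision
          if best is None or score < best[1]:
              best = (subset, score)

      print("Best sensor subset:", best[0], "trace(P) =", round(best[1], 4))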

  19. IMPLEMENTATION OF THE DEVELOPMENT OF A FILTERING ALGORITHM TO IMPROVE THE SYSTEM OF HEARING IN HEARING IMPAIRED WITH COCHLEAR IMPLANT

    Directory of Open Access Journals (Sweden)

    Salaheddine Derouiche

    2013-11-01

    Full Text Available In this paper, we present the denoising section implemented in the coding strategy of cochlear implants; the technique used is the Bionic Wavelet Transform (BWT). We have implemented the algorithm for denoising and enhancing the speech signal with the hybrid BWT method on an FPGA (Field Programmable Gate Array), the Xilinx Virtex5 XC5VLX110T. We first present the main features of this technique, then the proposed algorithm implementation, and finally simulation results and the performance of the technique in terms of SNR (Signal-to-Noise Ratio) improvement. The proposed implementations are realized in VHDL (Very High Speed Integrated Circuit Hardware Description Language). Different speech-processing algorithms, including the CIS (Continuous Interleaved Sampling) strategy, have been implemented in this processor and tested successfully.
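
    A Bionic Wavelet Transform is not available in standard Python libraries, so the sketch below illustrates the denoising chain with an ordinary discrete wavelet transform and soft thresholding (PyWavelets) on a synthetic signal; it shows the shape of the processing, not the paper's BWT/FPGA implementation.

      # Sketch of wavelet-based speech denoising with soft thresholding. This uses a
      # standard DWT (PyWavelets) as a stand-in for the Bionic Wavelet Transform and a
      # synthetic noisy signal; it is not the paper's BWT/FPGA implementation.
      import numpy as np
      import pywt

      fs = 16000
      t = np.arange(0, 0.5, 1.0 / fs)
      clean = np.sin(2 * np.pi * 440 * t)                       # toy "speech" signal
      noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)

      coeffs = pywt.wavedec(noisy, "db8", level=5)              # multi-level DWT
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest level
      thr = sigma * np.sqrt(2 * np.log(noisy.size))             # universal threshold
      denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
      denoised = pywt.waverec(denoised_coeffs, "db8")[: noisy.size]

      def snr_db(reference, signal):
          noise = reference - signal
          return 10 * np.log10(np.sum(reference**2) / np.sum(noise**2))

      print(f"SNR before: {snr_db(clean, noisy):.1f} dB, after: {snr_db(clean, denoised):.1f} dB")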

  20. Research on New Developments in Image Encryption Algorithms

    Institute of Scientific and Technical Information of China (English)

    张晓强; 王蒙蒙; 朱贵良

    2012-01-01

    With the rapid development of information networking, image exchange over the Internet is widely used in many fields, and the security of such image exchange has attracted much attention. We describe the encryption principles, characteristics, and recent developments of the major image encryption algorithms, such as those based on matrix transformation, chaos, image secret sharing, frequency domains, SCAN languages, and DNA computing. Finally, the development trends of image encryption algorithms are discussed. This study is significant in algorithm improvement, new a
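
    As a minimal example of one family mentioned above (chaos-based encryption), the sketch below XORs pixel values with a keystream generated by a logistic map; the key parameters are illustrative and the scheme is a teaching toy, not one of the reviewed algorithms.

      # Toy sketch of chaos-based image encryption (one of the families discussed):
      # a logistic-map keystream is XORed with the pixel values. The key values are
      # illustrative; this is a teaching example, not a vetted cipher.
      import numpy as np

      def logistic_keystream(length, x0=0.3456, r=3.99):
          """Generate `length` pseudo-random bytes from the logistic map x <- r*x*(1-x)."""
          x = x0
          stream = np.empty(length, dtype=np.uint8)
          for i in range(length):
              x = r * x * (1.0 - x)
              stream[i] = int(x * 256) % 256
          return stream

      def encrypt(image, x0=0.3456, r=3.99):
          flat = image.ravel()
          key = logistic_keystream(flat.size, x0, r)
          return (flat ^ key).reshape(image.shape)

      decrypt = encrypt                          # XOR with the same keystream inverts itself

      img = np.random.default_rng(2).integers(0, 256, size=(4, 4), dtype=np.uint8)
      enc = encrypt(img)
      assert np.array_equal(decrypt(enc), img)   # round-trip check
      print(enc)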