WorldWideScience

Sample records for performance map applied

  1. A Three-Dimensional Foil Bearing Performance Map Applied to Oil-Free Turbomachinery

    Science.gov (United States)

    2009-04-01

    ...stress brought on by excessive viscous power loss; therefore a map that graphically relates component and system-level parameters (bearing size, applied... Introduction: Foil bearings are self-acting, hydrodynamic gas bearings that use air as their working fluid. Their use in rotating systems eliminates the... weight, maintenance requirements, speed, and temperature limitations associated with conventional oil-lubricated rotor supports (i.e., bearings, dampers...

  2. Mapping Intermediality in Performance

    NARCIS (Netherlands)

    2010-01-01

    Mapping Intermediality in Performance approaches the question of intermediality in relation to performance (especially theatre) from five different angles: performativity and the body; time and space; digital culture and posthumanism; networks; pedagogy and praxis. In this engaging

  3. Applied cartographic communication: map symbolization for atlases.

    Science.gov (United States)

    Morrison, J.L.

    1984-01-01

    A detailed investigation of the symbolization used on general-purpose atlas reference maps. It indicates how theories of cartographic communication can be put into practice. Two major points emerge. First, a logical scheme can be constructed from existing cartographic research and applied to an analysis of the choice of symbolization on a map. Second, the same structure appears to allow the cartographer to specify symbolization as part of map design. An introductory review of cartographic communication is followed by an analysis of selected maps' use of point, area and line symbols, boundaries, text and colour. -after Author

  4. Nigeria Journal of Pure and Applied Physics: Site Map

    African Journals Online (AJOL)


  5. Satellite SAR interferometric techniques applied to emergency mapping

    Science.gov (United States)

    Stefanova Vassileva, Magdalena; Riccardi, Paolo; Lecci, Daniele; Giulio Tonolo, Fabio; Boccardo Boccardo, Piero; Chiesa, Giuliana; Angeluccetti, Irene

    2017-04-01

    This paper aims to investigate the capabilities of the currently available SAR interferometric algorithms in the field of emergency mapping. Several tests have been performed exploiting Copernicus Sentinel-1 data using the COTS software ENVI/SARscape 5.3. Emergency mapping can be defined as the "creation of maps, geo-information products and spatial analyses dedicated to providing situational awareness emergency management and immediate crisis information for response by means of extraction of reference (pre-event) and crisis (post-event) geographic information/data from satellite or aerial imagery". The conventional differential SAR interferometric technique (DInSAR) and the two currently available multi-temporal SAR interferometric approaches, i.e. Permanent Scatterer Interferometry (PSI) and Small BAseline Subset (SBAS), have been applied to provide crisis information useful for emergency management activities. Depending on the Emergency Management phase considered, a distinction may be drawn between rapid mapping, i.e. the fast provision of geospatial data on the affected area for immediate emergency response, and monitoring mapping, i.e. the detection of phenomena for risk prevention and mitigation activities. In order to evaluate the potential and limitations of the aforementioned SAR interferometric approaches for rapid and monitoring mapping applications, the following main factors have been taken into account: the crisis information extracted, the input data required, the processing time and the expected accuracy. The results highlight that DInSAR has the capacity to delineate areas affected by large and sudden deformations and fulfils most of the immediate response requirements. The main limiting factor of interferometry is the availability of a suitable SAR acquisition immediately after the event (e.g. the Sentinel-1 mission, characterized by a 6-day revisit time, may not always satisfy the immediate emergency request). PSI and SBAS techniques are suitable to produce

  6. Applying field mapping refractive beam shapers to improve holographic techniques

    Science.gov (United States)

    Laskin, Alexander; Williams, Gavin; McWilliam, Richard; Laskin, Vadim

    2012-03-01

    The performance of various holographic techniques can be substantially improved by homogenizing the intensity profile of the laser beam using beam shaping optics, for example achromatic field mapping refractive beam shapers such as the πShaper. The operating principle of these devices is the transformation of the laser beam intensity profile from Gaussian to flattop with high flatness of the output wavefront, preservation of beam consistency, a collimated output beam of low divergence, high transmittance, extended depth of field and negligible residual wave aberration; the achromatic design provides the capability to work with several laser sources of different wavelengths simultaneously. Applying these beam shapers brings serious benefits to Spatial Light Modulator (SLM) based techniques such as Computer-Generated Holography or Dot-Matrix mastering of security holograms, since uniform illumination of an SLM simplifies the mathematical calculations and increases the predictability and reliability of the imaging results. Another example is multicolour Denisyuk holography, where the achromatic πShaper provides uniform illumination of a field at various wavelengths simultaneously. This paper describes some design basics of field mapping refractive beam shapers and optical layouts for applying them in holographic systems. Examples of real implementations and experimental results are presented as well.

  7. Hardware Transactional Memory Optimization Guidelines, Applied to Ordered Maps

    DEFF Research Database (Denmark)

    Bonnichsen, Lars Frydendal; Probst, Christian W.; Karlsson, Sven

    2015-01-01

    ...efficiently requires reasoning about those differences. In this paper we present 5 guidelines for applying hardware transactional memory efficiently, and apply the guidelines to BT-trees, a concurrent ordered map. Evaluating BT-trees on standard benchmarks shows that they are up to 5.3 times faster than...

  8. Flood Hazard Mapping by Applying Fuzzy TOPSIS Method

    Science.gov (United States)

    Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.

    2017-12-01

    There are many technical methods for integrating various factors for flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to assessing flood risk, maximum flood depth, maximum velocity, and maximum travel time are considered as criteria, and each element unit is considered as an alternative. The scheme of finding the efficient alternative closest to an ideal value is an appropriate way to assess the flood risk of many element units (alternatives) based on various flood indices. Therefore TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty in the simulation results, since their values vary with the flood scenario and topographical conditions. This kind of ambiguity in the indices can cause uncertainty in the flood hazard map. To account for the ambiguity and uncertainty of the criteria, fuzzy logic, which is able to handle ambiguous expressions, is introduced. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas where the highest grade of hazard was recorded through the drawn-up integrated flood hazard map, and the produced flood hazard map can then be compared with the existing flood risk maps. We also expect that if the flood hazard mapping methodology suggested in this paper is applied to producing the current flood risk maps, it will be possible to make a new flood hazard map that also considers the priorities of hazard areas, including more varied and important information than ever before. Keywords: Flood hazard map; levee break analysis; 2D analysis; MCDM; Fuzzy TOPSIS
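
    The TOPSIS scheme described above ranks alternatives by their closeness to an ideal point. A minimal sketch in Python of the crisp (non-fuzzy) variant; the decision matrix, equal weights, and criterion directions below are illustrative assumptions, not data from the study:

```python
import numpy as np

# Hypothetical decision matrix: rows = grid cells (alternatives),
# columns = criteria (max flood depth [m], max velocity [m/s],
# max travel time [h]). Values are illustrative only.
X = np.array([
    [2.1, 1.5, 0.5],
    [0.4, 0.3, 3.0],
    [1.2, 0.9, 1.2],
])

# Depth and velocity increase hazard; for travel time, a SHORTER
# time to inundation means higher hazard (cost-type criterion).
benefit = np.array([True, True, False])

# 1. Vector-normalize each criterion column.
R = X / np.linalg.norm(X, axis=0)

# 2. Apply criterion weights (equal weights assumed here).
w = np.array([1 / 3, 1 / 3, 1 / 3])
V = R * w

# 3. Ideal (most hazardous) and anti-ideal points per criterion.
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 4. Closeness coefficient: higher = closer to the hazard ideal.
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print(closeness)  # hazard grade per cell, in [0, 1]
```

    The fuzzy variant in the paper replaces the crisp matrix entries with fuzzy numbers to carry the scenario uncertainty through the same ranking steps.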

  9. Radar Mapping of Building Structures Applying Sparse Reconstruction

    NARCIS (Netherlands)

    Tan, R.G.; Wit, J.J.M. de; Rossum, W.L. van

    2012-01-01

    The ability to map building structures at a certain stand-off distance allows intelligence, reconnaissance, and clearance tasks to be performed in a covert way by driving around a building. This will greatly improve security, response time, and reliability of aforementioned tasks. Therefore,

  10. Methodical Aspects of Applying Strategy Map in an Organization

    Directory of Open Access Journals (Sweden)

    Piotr Markiewicz

    2013-06-01

    Full Text Available One important aspect of strategic management is the instrumental aspect, embodied in a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on, and the implementation of, strategy as an element of strategic management, and the instruments in the form of methods and techniques. A commonly used method for strategy implementation and measuring progress is the Balanced Scorecard (BSC). The method was created as a result of the project "Measuring Performance in the Organization of the Future", started in 1990 and completed by a team under the supervision of David Norton (Kaplan, Norton 2002). The developed method was first used to evaluate performance by decomposing a strategy into four perspectives and identifying measures of achievement. In the mid-1990s the method was improved by enriching it, above all, with a strategy map, which reflects the process of transforming intangible assets into tangible financial effects (Kaplan, Norton 2001). A strategy map enables the illustration of cause-and-effect relationships between processes in all four perspectives and performance indicators at the level of the organization. The purpose of this study is to present the methodical conditions of using strategy maps in the strategy implementation process in organizations of different natures.

  11. Interdisciplinary strata in applied performance and activism

    OpenAIRE

    Johansson, Ola

    2014-01-01

    The paper addresses the challenges and advantages of collaborative stratification in the continuum of theatre and fine arts, with examples from applied performance projects in international contexts. With different approaches to performance/media, acting/agency, devising/curatorship and participation/social engagement, collaborative processes have proved to be quite incongruent in method and motif, although inclusive and versatile in media tactics and political outreach. In post-Brechtian the...

  12. Performance Monitoring Applied to System Supervision

    Directory of Open Access Journals (Sweden)

    Bertille Somon

    2017-07-01

    Full Text Available Nowadays, automation is present in every aspect of our daily life and has some benefits. Nonetheless, empirical data suggest that traditional automation has many negative performance and safety consequences, as it has changed task performers into task supervisors. In this context, we propose to use recent insights into the anatomical and neurophysiological substrates of action monitoring in humans to help further characterize performance monitoring during system supervision. Error monitoring is critical for humans to learn from the consequences of their actions. A wide variety of studies have shown that the error monitoring system is involved not only in our own errors, but also in the errors of others. We hypothesize that the neurobiological correlates of self-performance monitoring activity can be applied to system supervision. At a larger scale, a better understanding of system supervision may allow its negative effects to be anticipated or even countered. This review is divided into three main parts. First, we assess the neurophysiological correlates of self-performance monitoring and their characteristics during error execution. Then, we extend these results to include performance monitoring and error observation of others or of systems. Finally, we provide further directions in the study of system supervision and assess the limits preventing us from studying a well-known phenomenon: the Out-Of-the-Loop (OOL) performance problem.

  13. A comparative analysis of three metaheuristic methods applied to fuzzy cognitive maps learning

    Directory of Open Access Journals (Sweden)

    Bruno A. Angélico

    2013-12-01

    Full Text Available This work analyses the performance of three different population-based metaheuristic approaches applied to fuzzy cognitive map (FCM) learning in the qualitative control of processes. Fuzzy cognitive maps permit the inclusion of prior specialist knowledge in the control rule. In particular, Particle Swarm Optimization (PSO), a Genetic Algorithm (GA) and an Ant Colony Optimization (ACO) are considered for obtaining appropriate weight matrices when learning the FCM. A statistical convergence analysis over 10000 simulations of each algorithm is presented. In order to validate the proposed approach, two industrial control process problems previously described in the literature are considered in this work.
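
    The metaheuristics in this record search for an FCM weight matrix; what they must evaluate at each candidate is the map's iterated inference. A minimal sketch of FCM inference under a common sigmoid activation (the three-concept map and its weights below are hypothetical, not from the paper):

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(A, W, lam=1.0):
    # One synchronous update of concept activations A under weight
    # matrix W: A_i(t+1) = f(A_i(t) + sum_j A_j(t) * w_ji).
    return sigmoid(A + A @ W, lam)

# Hypothetical 3-concept map; entry W[j, i] is the causal weight
# of concept j on concept i (values illustrative only).
W = np.array([[0.0, 0.6, -0.4],
              [0.0, 0.0,  0.7],
              [0.3, 0.0,  0.0]])
A = np.array([0.5, 0.2, 0.8])  # initial concept activations

# Iterate until the activations settle to a fixed point.
for _ in range(50):
    A_next = fcm_step(A, W)
    if np.max(np.abs(A_next - A)) < 1e-6:
        break
    A = A_next
print(A)  # steady-state concept values, each in (0, 1)
```

    PSO, GA, or ACO then search over the entries of W so that these steady states match the desired concept values for the controlled process.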

  14. Semantic Data And Visualization Techniques Applied To Geologic Field Mapping

    Science.gov (United States)

    Houser, P. I. Q.; Royo-Leon, M.; Munoz, R.; Estrada, E.; Villanueva-Rosales, N.; Pennington, D. D.

    2015-12-01

    Geologic field mapping involves the use of technology before, during, and after visiting a site. Geologists utilize hardware such as Global Positioning Systems (GPS) connected to mobile computing platforms such as tablets, which include software such as ESRI's ArcPad, to produce maps and figures for a final analysis and report. Hand-written field notes contain important information and drawings or sketches of specific areas within the field study. Our goal is to collect and geo-tag final and raw field data in a cyber-infrastructure environment with an ontology that allows for large-scale data processing, visualization, sharing, and searching, aiding in connecting field research with prior research in the same area and/or aiding experiment replication. Online searches of a specific field area return results such as weather data from NOAA and QuakeML seismic data from USGS. These results can then be saved to a mobile field device and searched while in the field where there is no Internet connection. To accomplish this we created the GeoField ontology service using the Web Ontology Language (OWL) and Protégé software. Advanced queries on the dataset can be made using reasoning capabilities that go beyond those of a standard database service. These improvements include the automated discovery of data relevant to a specific field site and visualization techniques aimed at enhancing analysis and collaboration while in the field, by draping data over mobile views of the site using augmented reality. A case study is being performed at the University of Texas at El Paso's Indio Mountains Research Station located near Van Horn, Texas, an active multi-disciplinary field study site. The user can interactively move the camera around the study site and view their data digitally. Geologists can check their data against the site in real time and improve collaboration with another person, as both parties have the same interactive view of the data.

  15. Geo-environmental mapping tool applied to pipeline design

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Karina de S.; Calle, Jose A.; Gil, Euzebio J. [Geomecanica S/A Tecnologia de Solo Rochas e Materiais, Rio de Janeiro, RJ (Brazil); Sare, Alexandre R. [Geomechanics International Inc., Houston, TX (United States); Soares, Ana Cecilia [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    The Geo-Environmental Mapping is an improvement of the Geological-Geotechnical Mapping used for basic pipeline designs. The main purpose is to assemble the environmental, geotechnical and geological concepts into a methodological tool capable of predicting constraints and reducing the pipeline's impact on the environment. The Geo-Environmental Mapping was built to stress the influence of soil/structure interaction, related to the physical effect that comes from the contact between structures and soil or rock. A Geological-Geotechnical-Environmental strip (chart) is presented to emphasize the pipeline's operational constraints and its influence on the environment. The mapping was developed to clearly show the occurrence and properties of geological materials divided into geotechnical domain units (zones). The strips present natural construction properties, such as excavability, stability of the excavation and soil re-use capability. The environmental constraints were also added to the geological-geotechnical mapping. The Geo-Environmental Mapping model helps the planning of the geotechnical and environmental surveys to be carried out during executive design, the discussion of the types of equipment to be employed during construction, and the analysis of the geological risks and environmental impacts to be faced during the construction of the pipeline. (author)

  16. Comparing the performance of various digital soil mapping approaches to map physical soil properties

    Science.gov (United States)

    Laborczi, Annamária; Takács, Katalin; Pásztor, László

    2015-04-01

    ...digital soil mapping methods and sets of ancillary variables for producing the most accurate spatial prediction of texture classes in a given area of interest. Both legacy and recently collected data on PSD were used as reference information. The predictor variable data set consisted of a digital elevation model and its derivatives, lithology and land use maps, as well as various bands and indices of satellite images. Two conceptually different approaches can be applied in the mapping process. Textural classification can be carried out after the particle size data have been spatially extended by a proper geostatistical method. Alternatively, the textural classification is carried out first, followed by spatial extension through a suitable data mining method. Following the first approach, maps of sand, silt and clay percentage were computed through regression kriging (RK). Since the three maps are compositional (their sum must be 100%), we applied the Additive Log-Ratio (alr) transformation instead of kriging them independently. Finally, the texture class map was compiled according to the USDA categories from the three maps. Different combinations of reference and training soil data and auxiliary covariables resulted in several different maps. Following the second approach, the PSD data were first classified into the USDA categories, and the texture class maps were then compiled directly by data mining methods (classification trees and random forests). The various results were compared to each other as well as to the RK maps. The performance of the different methods and data sets was examined by testing the accuracy of the geostatistically computed and the directly classified results, to identify the most predictive and accurate method. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
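
    The compositional constraint mentioned above (sand + silt + clay = 100%) is the reason for the alr transformation: the two log-ratios can be interpolated freely in real space, and the back-transform always returns a valid composition. A minimal sketch (the example composition is illustrative, not from the study's soil data):

```python
import numpy as np

# Hypothetical particle-size composition (sand, silt, clay in %).
psd = np.array([62.0, 25.0, 13.0])
psd = psd / psd.sum()  # close the composition onto the unit simplex

# alr transform with clay as the reference component: the two
# log-ratios live in R^2 and can be kriged independently.
alr = np.log(psd[:2] / psd[2])

# Inverse alr maps (interpolated) log-ratios back to a composition
# that is positive and sums to 1 by construction.
expd = np.append(np.exp(alr), 1.0)
psd_back = expd / expd.sum()
print(psd_back * 100)  # recovers the sand/silt/clay percentages
```

    In the mapping workflow, the kriged alr values at each grid cell would be back-transformed this way before assigning USDA texture classes.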

  17. Proficient brain for optimal performance: the MAP model perspective.

    Science.gov (United States)

    Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the "neural efficiency hypothesis." We also observed more ERD as related to optimal-controlled performance in conditions of "neural adaptability" and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.

  18. Proficient brain for optimal performance: the MAP model perspective

    Directory of Open Access Journals (Sweden)

    Maurizio Bertollo

    2016-05-01

    Full Text Available Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.

  19. CRESST Human Performance Knowledge Mapping System

    National Research Council Canada - National Science Library

    Chung, Gregory K; Michiuye, Joanne K; Brill, David G; Sinha, Ravi; Saadat, Farzad; de Vries, Linda F; Delacruz, Girlie C; Bewley, William L; Baker, Eva L

    2002-01-01

    .... While several tools exist that are available to construct knowledge maps, CRESST's knowledge mapping tool is one of the only systems designed specifically for assessment purposes, the only system...

  20. CRESST Human Performance Knowledge Mapping System

    National Research Council Canada - National Science Library

    Chung, Gregory K; Michiuye, Joanne K; Brill, David G; Sinha, Ravi; Saadat, Farzad; de Vries, Linda F; Delacruz, Girlie C; Bewley, William L; Baker, Eva L

    2002-01-01

    .... This report presents a review of knowledge mapping scoring methods and current online mapping systems, and the overall design, functionality, scoring, usability testing, and authoring capabilities of the CRESST system...

  1. Applying the metro map to software development management

    Science.gov (United States)

    Aguirregoitia, Amaia; Dolado, J. Javier; Presedo, Concepción

    2010-01-01

    This paper presents MetroMap, a new graphical representation model for controlling and managing the software development process. MetroMap uses metaphors and visual representation techniques to explore several key indicators in order to support problem detection and resolution. The resulting visualization addresses diverse management tasks, such as tracking of deviations from the plan, analysis of patterns of failure detection and correction, overall assessment of change management policies, and estimation of product quality. The proposed visualization uses a metro map metaphor along with various interactive techniques to represent information concerning the software development process and to deal efficiently with multivariate visual queries. Finally, the paper shows the implementation of the tool in JavaFX with data from a real project, and the results of testing the tool with the aforementioned data and users attempting several information retrieval tasks. The conclusion shows the results of analyzing user response time and efficiency using the MetroMap visualization system. The utility of the tool was positively evaluated.

  2. ESA web mapping activities applied to Earth observation

    Science.gov (United States)

    Caspar, C.; Petiteville, I.; Kohlhammer, G.; Tandurella, G.

    2002-05-01

    Thousands of Earth Observation satellite instrument products are generated daily, in a multitude of formats, using a variety of projection coordinate systems. This diversity is a barrier to the development of EO multi-mission-based applications and prevents the merging of EO data with GIS data, which is requested by the user community (value-added companies, service providers, scientists, institutions, commercial users, and academic users). The web mapping technologies introduced in this article represent an elegant and low-cost solution. The extraordinary added value that is achieved may be considered a revolution in the use of EO data products.

  3. Theoretical and applied aspects of the self-organizing maps

    OpenAIRE

    Cottrell , Marie; Olteanu , Madalina; Rossi , Fabrice; Villa-Vialaneix , Nathalie

    2016-01-01

    International audience; The Self-Organizing Map (SOM) is widely used, easy to implement, and has nice properties for data mining by providing both clustering and visual representation. It acts as an extension of the k-means algorithm that preserves as much as possible the topological structure of the data. However, since its conception, the mathematical study of the SOM has remained difficult and has been done only in very special cases. In WSOM 2005, Jean-Claude Fort presented the state of the art, th...
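
    A toy illustration of the SOM update described above, showing how it extends k-means with a neighbourhood kernel that preserves topology. The 1-D chain of 10 units, the random data, and the learning schedule are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 2-D; map: a 1-D chain of 10 units.
X = rng.random((200, 2))
n_units = 10
W = rng.random((n_units, 2))  # codebook (prototype) vectors
pos = np.arange(n_units)      # unit positions along the chain

for t in range(2000):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(np.sum((W - x) ** 2, axis=1))    # best-matching unit
    lr = 0.5 * (1 - t / 2000)                        # decaying learning rate
    sigma = max(1e-3, 3.0 * (1 - t / 2000))          # shrinking neighbourhood
    h = np.exp(-((pos - bmu) ** 2) / (2 * sigma**2)) # neighbourhood kernel
    # Unlike plain k-means, the winner's neighbours on the chain also
    # move toward x; this is what preserves topological ordering.
    W += lr * h[:, None] * (x - W)

# Quantization error: mean distance from each point to its nearest unit.
qe = np.mean(np.min(np.linalg.norm(X[:, None] - W[None], axis=2), axis=1))
print(qe)
```

    As the neighbourhood width shrinks toward zero, the update degenerates to the k-means online rule, which is the sense in which the SOM extends k-means.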

  4. MULTI-CRITERIA ANALYSIS APPLIED TO LANDSLIDE SUSCEPTIBILITY MAPPING

    Directory of Open Access Journals (Sweden)

    Mariana Madruga de Brito

    2017-10-01

    Full Text Available This paper presents the application of a multi-criteria analysis (MCA) tool for landslide susceptibility assessment in Porto Alegre municipality, southern Brazil. A knowledge-driven approach was used, aiming to ensure an optimal use of the available information. The landslide conditioning factors considered were slope, lithology, flow accumulation and distance from lineaments. Standardization of these factors was done through fuzzy membership functions, and the evaluation of their relative importance for landslide predisposition was supported by the analytic hierarchy process (AHP), based on local expert knowledge. Finally, the factors were integrated in a GIS environment using the weighted linear combination (WLC) method. For validation, an inventory including 107 landslide points recorded between 2007 and 2013 was used. Results indicated that 8.2% (39.40 km²) of the study area is highly or very highly susceptible to landslides. An overall accuracy of 95% was found, with an area under the receiver operating characteristic (ROC) curve of 0.960. Therefore, the resulting map can be regarded as useful for monitoring landslide-prone areas. Based on the findings, it is concluded that the proposed method is effective for susceptibility assessment since it yielded meaningful results and does not require extensive input data.
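
    The fuzzy standardization and WLC integration steps can be sketched compactly. A minimal example for two of the conditioning factors; the fuzzy breakpoints and AHP-derived weights below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Hypothetical raster cells with two conditioning factors:
# slope [degrees] and distance from lineaments [m].
slope = np.array([5.0, 20.0, 35.0])
dist = np.array([900.0, 400.0, 50.0])

def fuzzy_linear(x, lo, hi, increasing=True):
    # Linear fuzzy membership function clamped to [0, 1].
    m = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return m if increasing else 1.0 - m

# Standardize: steeper slopes and shorter distances from lineaments
# both raise susceptibility (membership closer to 1).
m_slope = fuzzy_linear(slope, 0.0, 30.0, increasing=True)
m_dist = fuzzy_linear(dist, 0.0, 1000.0, increasing=False)

# AHP-derived factor weights (assumed here; must sum to 1).
w_slope, w_dist = 0.7, 0.3

# Weighted linear combination: susceptibility index per cell.
susceptibility = w_slope * m_slope + w_dist * m_dist
print(susceptibility)  # in [0, 1]; higher = more susceptible
```

    In a GIS these operations run per raster cell over all factors; the result is then sliced into susceptibility classes for the final map.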

  5. Performance analysis of different database in new internet mapping system

    Science.gov (United States)

    Yao, Xing; Su, Wei; Gao, Shuai

    2017-03-01

    In the Mapping System of the New Internet, massive numbers of mapping entries between AID and RID need to be stored, added, updated, and deleted. In order to deal better with large numbers of mapping entry update and query requests, the Mapping System of the New Internet must use a high-performance database. In this paper, we focus on the performance of three typical databases, Redis, SQLite, and MySQL, and the results show that a Mapping System based on different databases can adapt to different needs according to the actual situation.
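
    The store/add/update/delete/query workload described above maps naturally onto a small key-value table. A minimal sketch using SQLite; the AID/RID column names follow the record's terminology, but the schema itself is hypothetical, not the paper's actual system:

```python
import sqlite3

# In-memory identifier-to-locator mapping table (hypothetical schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mapping (aid TEXT PRIMARY KEY, rid TEXT NOT NULL)")

# Add / update: an upsert keeps the latest RID for a given AID.
upsert = ("INSERT INTO mapping VALUES (?, ?) "
          "ON CONFLICT(aid) DO UPDATE SET rid = excluded.rid")
con.execute(upsert, ("aid-001", "rid-A"))
con.execute(upsert, ("aid-001", "rid-B"))  # update to a new locator

# Query, then delete: the other two operations the system needs.
rid = con.execute("SELECT rid FROM mapping WHERE aid = ?",
                  ("aid-001",)).fetchone()[0]
print(rid)  # "rid-B" after the update
con.execute("DELETE FROM mapping WHERE aid = ?", ("aid-001",))
```

    The same four operations expressed against Redis (hash commands) or MySQL would form the basis of the kind of cross-database comparison the record describes.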

  6. Performing Mimetic Mapping: A Non-Visualisable Map of the Suzhou River Area of Shanghai

    Directory of Open Access Journals (Sweden)

    Anastasia Karandinou

    2014-07-01

    Full Text Available This paper questions issues concerning the mapping of experience through the concept of mimesis, the creative re-performance of the site experience onto the map. The place mapped is the Suzhou River area, a significant part of Shanghai, the former boundary between the British and American Settlements, and an ever-changing and transforming territory. Through a detailed description of the mapping processes, we analyse the position of this particular map within contemporary discourse about mapping. Here, we question the purpose of the process, the desired outcome, the consciousness of the significance of each step/event, and the possible significance of the final traces that the mapping leaves behind. Although, after the mapping had been carried out, the procedure was analysed, post-rationalised, and justified through its partial documentation (as part of an educational process), this paper questions the way and the reason for these practices (the post-rationalising of the mapping activity, justifying the strategy, etc.), and their possible meaning, purpose, demand or context. Thus we conclude that the subject matter is not the final outcome of an object or ‘map’; there is no final map to be exhibited. What this paper brings forth is the mapping as an event, an action performed by the embodied experience of the actual place and by the trans-local materiality of the tools and elements involved in the process of its making.

  7. Crime clocks and target performance maps

    CSIR Research Space (South Africa)

    Cooper, Antony K

    1999-12-01

    Full Text Available the period of analysis. Each segment of a pie chart represents a selected part of the day (eg: a two- or three-hour period) or a day of the week. The first and last segments in the day or week are then adjacent, ensuring that there is no artificial break... clocks We have also used crime clocks to map the proportion of crimes that occur during normal police working hours (07:00 to 16:00, Monday to Friday, in the case of the Johannesburg Area), against those that occur outside these hours. 3. Target...

  8. The CMS Magnetic Field Map Performance

    CERN Document Server

    Klyukhin, V.I.; Andreev, V.; Ball, A.; Cure, B.; Herve, A.; Gaddi, A.; Gerwig, H.; Karimaki, V.; Loveless, R.; Mulders, M.; Popescu, S.; Sarycheva, L.I.; Virdee, T.

    2010-04-05

    The Compact Muon Solenoid (CMS) is a general-purpose detector designed to run at the highest luminosity at the CERN Large Hadron Collider (LHC). Its distinctive features include a 4 T superconducting solenoid with 6 m diameter by 12.5 m long free bore, enclosed inside a 10000-ton return yoke made of construction steel. Accurate characterization of the magnetic field everywhere in the CMS detector is required. During two major tests of the CMS magnet the magnetic flux density was measured inside the coil in a cylinder of 3.448 m diameter and 7 m length with a specially designed field-mapping pneumatic machine as well as in 140 discrete regions of the CMS yoke with NMR probes, 3-D Hall sensors and flux-loops. A TOSCA 3-D model of the CMS magnet has been developed to describe the magnetic field everywhere outside the tracking volume measured with the field-mapping machine. A volume based representation of the magnetic field is used to provide the CMS simulation and reconstruction software with the magnetic field ...

  9. Performance maps for the control of thermal energy storage

    DEFF Research Database (Denmark)

    Finck, Christian; Li, Rongling; Zeiler, Wim

    2017-01-01

    Predictive control in building energy systems requires the integration of the building, building system, and component dynamics. The prediction accuracy of these dynamics is crucial for practical applications. This paper introduces performance maps for the control of water tanks, phase change material tanks, and thermochemical material tanks. The results show that these performance maps can fully account for the dynamics of thermal energy storage tanks.

  10. Insights into earthquake hazard map performance from shaking history simulations

    Science.gov (United States)

    Stein, S.; Vanneste, K.; Camelbeeck, T.; Vleminckx, B.

    2017-12-01

    Why recent large earthquakes caused shaking stronger than predicted by earthquake hazard maps is under debate. This issue has two parts. Verification involves how well maps implement probabilistic seismic hazard analysis (PSHA) ("have we built the map right?"). Validation asks how well maps forecast shaking ("have we built the right map?"). We explore how well a map can ideally perform by simulating an area's shaking history and comparing "observed" shaking to that predicted by a map generated for the same parameters. The simulations yield shaking distributions whose mean is consistent with the map, but individual shaking histories show large scatter. Infrequent large earthquakes cause shaking much stronger than mapped, as observed. Hence, PSHA seems internally consistent and can be regarded as verified. Validation is harder because an earthquake history can yield shaking higher or lower than that predicted while being consistent with the hazard map. The scatter decreases for longer observation times because the largest earthquakes and resulting shaking are increasingly likely to have occurred. For the same reason, scatter is much less for the more active plate boundary than for a continental interior. For a continental interior, where the mapped hazard is low, even an M4 event produces exceedances at some sites. Larger earthquakes produce exceedances at more sites. Thus many exceedances result from small earthquakes, but infrequent large ones may cause very large exceedances. However, for a plate boundary, an M6 event produces exceedance at only a few sites, and an M7 produces them in a larger, but still relatively small, portion of the study area. As reality gives only one history, and a real map involves assumptions about more complicated source geometries and occurrence rates, which are unlikely to be exactly correct and thus will contribute additional scatter, it is hard to assess whether misfit between actual shaking and a map — notably higher-than-mapped
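The verification loop described above (simulate many shaking histories, then compare them to the map's predicted exceedance rate) can be sketched in a toy form; the shaking distribution, window length, and quantile below are illustrative assumptions, not the study's parameters:

```python
import random
random.seed(42)

# Toy verification experiment: simulate many T-year shaking histories at one
# site, let the "map" quote the level with a 10% chance of exceedance in T
# years, and check that histories exceed it at roughly that rate even though
# individual histories scatter widely.
def peak_shaking():
    # Heavy-tailed yearly peak shaking: rare events far above typical years.
    return random.paretovariate(2.0)

T = 50            # observation window (years)
histories = 5000  # number of simulated shaking histories

maxima = sorted(max(peak_shaking() for _ in range(T)) for _ in range(histories))
mapped = maxima[int(0.90 * histories)]            # map's 10%-in-T-years level
exceeding = sum(m > mapped for m in maxima) / histories
print(f"mapped level {mapped:.2f}, fraction of histories exceeding: {exceeding:.3f}")
```

By construction the exceedance fraction matches the map (verification), while the wide spread of `maxima` shows why any single observed history can sit well above or below the mapped level, which is the validation problem the abstract describes.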

  11. Mapping strategy, structure, ownership and performance in European corporations : Introduction

    NARCIS (Netherlands)

    Colli, A.; Iversen, M.J.; de Jong, A.

    2011-01-01

    This paper is the introduction to the Business History special issue on European Business Models. The volume presents results of the international project about mapping European corporations, within the strategy, structure, ownership and performance (SSOP) framework. The paper describes the

  12. Applying Value Stream Mapping to reduce food losses and wastes in supply chains: A systematic review.

    Science.gov (United States)

    De Steur, Hans; Wesana, Joshua; Dora, Manoj K; Pearce, Darian; Gellynck, Xavier

    2016-12-01

    Interest in reducing food losses and waste has grown considerably in order to guarantee adequate food for the fast-growing population. A systematic review was used to show the potential of Value Stream Mapping (VSM) not only to identify and reduce food losses and wastes, but also as a way to establish links with nutrient retention in supply chains. The review compiled literature from 24 studies that applied VSM in the agri-food industry. Primary production, processing, storage, food service and/or consumption were identified as hotspots susceptible to losses and wastes. Results further revealed discarding and nutrient loss, especially at the processing level, as the main forms of food loss/waste, which were mapped to four out of seven lean manufacturing wastes (i.e. defects, unnecessary inventory, overproduction and inappropriate processing). This paper presents the state of the art of applying lean manufacturing practices in the agri-food industry, identifying lead time as the most applicable performance indicator. VSM was also found to be compatible with other lean tools such as Just-In-Time and 5S, which are continuous improvement strategies, as well as with simulation modelling, which enhances adoption. In order to ensure successful application of lean practices aimed at minimizing food or nutrient losses and wastes, multi-stakeholder collaboration along the entire food supply chain is indispensable. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Stirling convertor performance mapping test results

    Science.gov (United States)

    Qiu, Songgang; Peterson, Allen A.; White, Maurice A.; Faultersack, Franklyn; Redinger, Darin L.; Petersen, Stephen L.

    2002-01-01

    The Department of Energy (DOE) has selected Free-Piston Stirling Convertors as a technology for future advanced radioisotope space power systems. In August 2000, DOE awarded competitive Phase I, Stirling Radioisotope Generator (SRG) power system integration contracts to three major aerospace contractors, resulting in SRG conceptual designs in February 2001. All three contractors based their designs on the Technology Demonstration Convertor (TDC) developed by Stirling Technology Company (STC) for DOE. The contract award to a single system integration contractor for Phases II and III of the SRG program is anticipated in late 2001. The first potential SRG mission is targeted for a Mars rover. Recent TDC performance data are provided in this paper, together with predictions from Stirling simulation models.

  14. Spatiotemporal mapping of interictal spike propagation: a novel methodology applied to pediatric intracranial EEG recordings.

    Directory of Open Access Journals (Sweden)

    Samuel Tomlinson

    2016-12-01

    Full Text Available Synchronized cortical activity is implicated in both normative cognitive functioning and many neurological disorders. For epilepsy patients with intractable seizures, irregular patterns of synchronization within the epileptogenic zone (EZ) are believed to provide the network substrate through which seizures initiate and propagate. Mapping the EZ prior to epilepsy surgery is critical for detecting seizure networks in order to achieve post-surgical seizure control. However, automated techniques for characterizing epileptic networks have yet to gain traction in the clinical setting. Recent advances in signal processing and spike detection have made it possible to examine the spatiotemporal propagation of interictal spike discharges across the epileptic cortex. In this study, we present a novel methodology for detecting, extracting, and visualizing spike propagation and demonstrate its potential utility as a biomarker for the epileptogenic zone. Eighteen pre-surgical intracranial EEG recordings were obtained from pediatric patients ultimately experiencing favorable (i.e., seizure-free, n = 9) or unfavorable (i.e., seizure-persistent, n = 9) surgical outcomes. Novel algorithms were applied to extract multi-channel spike discharges and visualize their spatiotemporal propagation. Quantitative analysis of spike propagation was performed using trajectory clustering and spatial autocorrelation techniques. Comparison of interictal propagation patterns revealed an increase in trajectory organization (i.e., spatial autocorrelation) among Sz-Free patients compared to Sz-Persist patients. The pathophysiological basis and clinical implications of these findings are considered.
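The spatial-autocorrelation idea used to quantify trajectory organization can be illustrated with Moran's I, a standard such statistic (the abstract does not name the exact measure, and the electrode layout and values below are invented for illustration):

```python
# Illustrative sketch of spatial autocorrelation via Moran's I: values on a
# set of channels, with a 0/1 adjacency weight matrix linking neighbours.
def morans_i(values, weights):
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Four electrodes in a line; values rise smoothly along it, so neighbouring
# channels agree and Moran's I comes out positive (organized propagation).
values = [1.0, 2.0, 3.0, 4.0]
weights = [[0, 1, 0, 0],
           [1, 0, 1, 0],
           [0, 1, 0, 1],
           [0, 0, 1, 0]]
print(round(morans_i(values, weights), 3))
```

An alternating pattern on the same layout yields a strongly negative value, the disorganized extreme.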

  15. MAPS evaluation report and procedures governing interviews and performance appraisals

    CERN Multimedia

    HR Department

    2006-01-01

    Following various improvements to the MAPS report and to the procedures governing interviews and performance appraisals (announced in the CERN Bulletin 48-49/2005), a third information session has been organized for all staff members on Tuesday, 31 January at 10 a.m.: AB Auditorium P (864-1-D02), Human Resources Department Tel. 73566

  16. Systematic mapping review on student's performance analysis using ...

    African Journals Online (AJOL)

    This paper classifies the various existing prediction models that are used for monitoring and improving students' performance at schools and higher learning institutions. It analyses all the areas within the educational data mining methodology. Two databases were chosen for this study, and a systematic mapping study was ...

  17. How Concept-Mapping Perception Navigates Student Knowledge Transfer Performance

    Science.gov (United States)

    Tseng, Kuo-Hung; Chang, Chi-Cheng; Lou, Shi-Jer; Tan, Yue; Chiu, Chien-Jung

    2012-01-01

    The purpose of this paper is to investigate students' perception of concept maps as a learning tool where knowledge transfer is the goal. This article includes an evaluation of the learning performance of 42 undergraduate students enrolled in a nanotech course at a university in Taiwan. Canonical correlation and MANOVA analyses were employed to…

  18. 3D laser scanning techniques applying to tunnel documentation and geological mapping at Aespoe hard rock laboratory, Sweden

    International Nuclear Information System (INIS)

    Feng, Q.; Wang, G.; Roeshoff, K.

    2008-01-01

    3D terrestrial laser scanning is nowadays one of the most attractive methods for 3D mapping and documentation of rock faces and tunnels, and shows great potential to improve data quality and provide good solutions in rock engineering projects. In this paper, state-of-the-art methods are described for different possibilities for tunnel documentation and geological mapping based on 3D laser scanning data. Some results are presented from the case study performed at the Aespoe Hard Rock Laboratory, run by SKB, the Swedish Nuclear Fuel and Waste Management Co. Compared to traditional methods, 3D laser scanning techniques not only provide a rapid, fully 3D digital way of documenting tunnels, but also create the potential to achieve high-quality data, which can benefit different rock engineering project procedures, including field data acquisition, data processing, data retrieval and management, and also modelling and design. (authors)

  19. Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas

    Science.gov (United States)

    Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.

    In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas beside rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters, and calculates the flood extent as well as the spatial distribution of flood depths. Of course, the simulated model results are affected by errors and uncertainties. Possible sources of uncertainty are the model structure, model parameters and input data. Thus, after the model validation (comparison of the simulated extent to the observed extent, taken from airborne pictures), the uncertainty of the essential input data set (the digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties concerning the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning the minimum DEM resolution required for flood simulation and the best aggregation procedure for a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high-resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainty. Especially socio-economic information and monetary transfer functions required for a damage risk analysis show high uncertainty. Therefore this study helps to analyse the weak points of the flood risk and damage risk assessment procedure.
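The Monte Carlo step can be sketched per DEM cell: perturb each elevation with noise matching the stated DEM error, flood each realisation against a fixed water level, and tally per-cell flooding probabilities. The elevations, water level and error magnitude below are invented:

```python
import random
random.seed(1)

# Illustrative Monte Carlo sketch: perturb DEM cell elevations with Gaussian
# noise representing DEM vertical error, flood each realisation against a
# fixed water level, and derive per-cell flooding probabilities.
dem = [1.8, 2.1, 2.4, 3.0, 3.6]   # cell elevations in metres (toy transect)
water_level = 2.5                  # simulated water level plain (m)
sigma = 0.3                        # assumed DEM vertical error, std dev (m)
runs = 2000

counts = [0] * len(dem)
for _ in range(runs):
    for i, z in enumerate(dem):
        if z + random.gauss(0, sigma) < water_level:
            counts[i] += 1

probs = [c / runs for c in counts]
print([round(p, 2) for p in probs])
```

Cells well below the water level flood in nearly every realisation; cells near it get intermediate probabilities, which is exactly the uncertainty band the DEM error induces.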

  20. Global Appearance Applied to Visual Map Building and Path Estimation Using Multiscale Analysis

    Directory of Open Access Journals (Sweden)

    Francisco Amorós

    2014-01-01

    Full Text Available In this work we present a topological map building and localization system for mobile robots based on the global appearance of visual information. We include a comparison and analysis of global-appearance techniques applied to wide-angle scenes in retrieval tasks. Next, we define multiscale analysis, which permits improving the association between images and extracting topological distances. Then, a topological map-building algorithm is proposed. At first, the algorithm has information only of some isolated positions of the navigation area in the form of nodes. Each node is composed of a collection of images that covers the complete field of view from a certain position. The algorithm solves the node retrieval and estimates their spatial arrangement. With these aims, it uses the visual information captured along some routes that cover the navigation area. As a result, the algorithm builds a graph that reflects the distribution and adjacency relations between nodes (the map). After the map building, we also propose a route path estimation system. This algorithm takes advantage of the multiscale analysis. The accuracy of the pose estimation is not limited to the node locations but extends to intermediate positions between them. The algorithms have been tested using two different databases captured in real indoor environments under dynamic conditions.

  1. Swimming and other activities: applied aspects of fish swimming performance

    Science.gov (United States)

    Castro-Santos, Theodore R.; Farrell, A.P.

    2011-01-01

    Human activities such as hydropower development, water withdrawals, and commercial fisheries often put fish species at risk. Engineered solutions designed to protect species or their life stages are frequently based on assumptions about swimming performance and behaviors. In many cases, however, the appropriate data to support these designs are either unavailable or misapplied. This article provides an overview of the state of knowledge of fish swimming performance – where the data come from and how they are applied – identifying both gaps in knowledge and common errors in application, with guidance on how to avoid repeating mistakes, as well as suggestions for further study.

  2. New modified map for digital image encryption and its performance

    Science.gov (United States)

    Suryadi, MT; Yus Trinity Irsan, Maria; Satria, Yudi

    2017-10-01

    Protection of classified digital data has become very important for avoiding data manipulation and alteration. The focus of this paper is the protection of data and information in digital image form. Protection is provided in the form of an encrypted digital image. The encryption process uses a new map, x_{n+1} = rλx_n / (1 + λ(1 − x_n)^2) (mod 1), which is called the MS map. This paper shows: the results of digital image encryption using the MS map and its performance regarding the average time needed for the encryption/decryption process; the randomness of the key stream sequence under the NIST tests, histogram analysis and a goodness-of-fit test; the quality of the decrypted image by PSNR; the initial-value sensitivity level; and the key space. The results show that the average time of the encryption process is roughly the same as that of the decryption process, and depends on the type and size of the image. The cipherimage (encrypted image) is uniformly distributed, since it passes the goodness-of-fit test and its histogram is flat; the key stream generated by the MS map passes the frequency (monobit) test and the runs test, which means the key stream is a random sequence; the decrypted image has the same quality as the original image; the initial-value sensitivity reaches 10^-17; and the key space reaches 3.24 × 10^634. Thus, the encryption algorithm generated by the MS map is more resistant to brute-force and known-plaintext attacks.
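A minimal sketch of using the MS map as a keystream generator, assuming only the iteration given in the abstract; the parameter values, byte quantisation and XOR cipher below are illustrative choices, not the paper's full scheme:

```python
# Sketch of the MS map, x_{n+1} = r*lam*x_n / (1 + lam*(1 - x_n)^2) mod 1,
# used to generate a byte keystream for a simple XOR stream cipher.
# Parameters r, lam and the seed x are illustrative, not the paper's.
def ms_map_stream(x, r, lam, n):
    out = []
    for _ in range(n):
        x = (r * lam * x / (1 + lam * (1 - x) ** 2)) % 1.0
        out.append(int(x * 256) % 256)   # quantise the state to a key byte
    return out

def xor_cipher(data: bytes, key: list) -> bytes:
    # XOR is an involution: applying the same keystream twice decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

plain = b"pixel data"
key = ms_map_stream(x=0.7, r=3.99, lam=2.5, n=len(plain))
cipher = xor_cipher(plain, key)
assert xor_cipher(cipher, key) == plain
print(cipher.hex())
```

The initial-value sensitivity the abstract measures corresponds here to the fact that even a tiny change in the seed `x` yields a diverging keystream after a few iterations.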

  3. Urban local climate zone mapping and apply in urban environment study

    Science.gov (United States)

    He, Shan; Zhang, Yunwei; Zhang, Jili

    2018-02-01

    The local climate zone (LCZ) is considered a powerful tool for urban climate mapping. But for cities in different countries and regions, LCZ division methods and results differ, so targeted research should be performed. In the current work, an LCZ mapping method is proposed that is convenient to operate and oriented toward city planning. In this proposed method, the local climate zoning types were first adjusted according to the characteristics of Chinese cities, which have more tall buildings and higher density. Then the classification method proposed by WUDAPT, based on remote sensing data, was applied to the city of Xi’an as an example of LCZ mapping. Combined with the city road network, a reasonable expression of the zoning results is provided, to suit the convention in city planning that land parcels are usually treated as the basic unit. The proposed method was validated against actual land use and construction data surveyed in Xi’an, with results indicating its feasibility for urban LCZ mapping in China.

  4. Performance Evaluation of Java Based Object Relational Mapping Tools

    Directory of Open Access Journals (Sweden)

    Shoaib Mahmood Bhatti

    2013-04-01

    Full Text Available Object persistence is a hot issue in industry in the form of ORM (Object Relational Mapping) tools, which developers use during software development. This paper presents a performance evaluation of Java-based ORM tools. For this purpose, Hibernate, Ebean and TopLink have been selected as ORM tools that are popular and open source. Their performance has been measured from the execution point of view. The results show that ORM tools are a good option for developers considering system throughput, and that they can be used efficiently and effectively for mapping objects into the relationally dominated world of databases, thus creating hope for a better and well-established future for this technology.

  5. The Performance Analysis of AN Indoor Mobile Mapping System with Rgb-D Sensor

    Science.gov (United States)

    Tsai, G. J.; Chiang, K. W.; Chu, C. H.; Chen, Y. L.; El-Sheimy, N.; Habib, A.

    2015-08-01

    Over the years, Mobile Mapping Systems (MMSs) have been widely applied to urban mapping, path management and monitoring, cyber cities, etc. The key concept of mobile mapping is based on positioning technology and photogrammetry. To achieve this integration, multi-sensor integrated mapping technology has been clearly established. In recent years, robotic technology has developed rapidly. Another mapping technology, based on low-cost sensors and generally used in robotic systems, is known as Simultaneous Localization and Mapping (SLAM). The objective of this study is to develop a prototype indoor MMS for mobile mapping applications, especially to reduce costs, enhance the efficiency of data collection, and validate direct georeferencing (DG) performance. The proposed indoor MMS is composed of a tactical-grade Inertial Measurement Unit (IMU), a Kinect RGB-D sensor, a light detection and ranging (LIDAR) sensor, and a robot. In summary, this paper designs the payload for an indoor MMS to generate the floor plan. The first section concentrates on comparing different positioning algorithms in the indoor environment. Next, the indoor plans are generated by two sensors, the Kinect RGB-D sensor and the LIDAR on the robot. Moreover, the generated floor plan is compared with the known plan for both validation and verification.

  6. THE PERFORMANCE ANALYSIS OF AN INDOOR MOBILE MAPPING SYSTEM WITH RGB-D SENSOR

    Directory of Open Access Journals (Sweden)

    G. J. Tsai

    2015-08-01

    Full Text Available Over the years, Mobile Mapping Systems (MMSs) have been widely applied to urban mapping, path management and monitoring, cyber cities, etc. The key concept of mobile mapping is based on positioning technology and photogrammetry. To achieve this integration, multi-sensor integrated mapping technology has been clearly established. In recent years, robotic technology has developed rapidly. Another mapping technology, based on low-cost sensors and generally used in robotic systems, is known as Simultaneous Localization and Mapping (SLAM). The objective of this study is to develop a prototype indoor MMS for mobile mapping applications, especially to reduce costs, enhance the efficiency of data collection, and validate direct georeferencing (DG) performance. The proposed indoor MMS is composed of a tactical-grade Inertial Measurement Unit (IMU), a Kinect RGB-D sensor, a light detection and ranging (LIDAR) sensor, and a robot. In summary, this paper designs the payload for an indoor MMS to generate the floor plan. The first section concentrates on comparing different positioning algorithms in the indoor environment. Next, the indoor plans are generated by two sensors, the Kinect RGB-D sensor and the LIDAR on the robot. Moreover, the generated floor plan is compared with the known plan for both validation and verification.

  7. Image Fusion Applied to Satellite Imagery for the Improved Mapping and Monitoring of Coral Reefs: a Proposal

    Science.gov (United States)

    Gholoum, M.; Bruce, D.; Hazeam, S. Al

    2012-07-01

    A coral reef ecosystem, one of the most complex marine environmental systems on the planet, is defined as biologically diverse and immense. It plays an important role in maintaining a vast biological diversity for future generations and functions as an essential spawning, nursery, breeding and feeding ground for many kinds of marine species. In addition, coral reef ecosystems provide valuable benefits such as fisheries, ecological goods and services and recreational activities to many communities. However, this valuable resource is highly threatened by a number of environmental changes and anthropogenic impacts that can lead to reduced coral growth and production, mass coral mortality and loss of coral diversity. With the growth of these threats to coral reef ecosystems, there is a strong management need for mapping and monitoring of coral reef ecosystems. Remote sensing technology can be a valuable tool for mapping and monitoring of these ecosystems. However, the diversity and complexity of coral reef ecosystems, the resolution capabilities of satellite sensors and the low reflectivity of shallow water increase the difficulty of identifying and classifying their features. This paper reviews the methods used in mapping and monitoring coral reef ecosystems. In addition, this paper proposes improved methods for mapping and monitoring coral reef ecosystems based on image fusion techniques. These image fusion techniques will be applied to fuse satellite images exhibiting high spatial and low-to-medium spectral resolution with images exhibiting low spatial and high spectral resolution. Furthermore, a new method will be developed to fuse hyperspectral imagery with multispectral imagery. The fused image will have a large number of spectral bands and all pairs of corresponding spatial objects. This will potentially help to accurately classify the image data. Accuracy assessment using ground truth will be performed for the selected methods to determine the quality of the

  8. IMAGE FUSION APPLIED TO SATELLITE IMAGERY FOR THE IMPROVED MAPPING AND MONITORING OF CORAL REEFS: A PROPOSAL

    Directory of Open Access Journals (Sweden)

    M. Gholoum

    2012-07-01

    Full Text Available A coral reef ecosystem, one of the most complex marine environmental systems on the planet, is defined as biologically diverse and immense. It plays an important role in maintaining a vast biological diversity for future generations and functions as an essential spawning, nursery, breeding and feeding ground for many kinds of marine species. In addition, coral reef ecosystems provide valuable benefits such as fisheries, ecological goods and services and recreational activities to many communities. However, this valuable resource is highly threatened by a number of environmental changes and anthropogenic impacts that can lead to reduced coral growth and production, mass coral mortality and loss of coral diversity. With the growth of these threats to coral reef ecosystems, there is a strong management need for mapping and monitoring of coral reef ecosystems. Remote sensing technology can be a valuable tool for mapping and monitoring of these ecosystems. However, the diversity and complexity of coral reef ecosystems, the resolution capabilities of satellite sensors and the low reflectivity of shallow water increase the difficulty of identifying and classifying their features. This paper reviews the methods used in mapping and monitoring coral reef ecosystems. In addition, this paper proposes improved methods for mapping and monitoring coral reef ecosystems based on image fusion techniques. These image fusion techniques will be applied to fuse satellite images exhibiting high spatial and low-to-medium spectral resolution with images exhibiting low spatial and high spectral resolution. Furthermore, a new method will be developed to fuse hyperspectral imagery with multispectral imagery. The fused image will have a large number of spectral bands and all pairs of corresponding spatial objects. This will potentially help to accurately classify the image data. Accuracy assessment using ground truth will be performed for the selected methods to determine

  9. MAPS evaluation report and procedures governing interviews and performance appraisals

    CERN Multimedia

    HR Department

    2006-01-01

    Following various improvements made to the MAPS report and to the procedures governing interviews and performance appraisals (announced in the CERN Weekly Bulletin 48-49/2005), three information sessions have been organized for all staff members: 24 January 10:00: AB Auditorium P (864-1-D02), 26 January 14:00: Main Amphitheatre, 31 January 10:00: AB Auditorium P (864-1-D02). Human Resources Department Tel. 73566

  10. Applying Nitrogen Site-Specifically Using Soil Electrical Conductivity Maps and Precision Agriculture Technology

    Directory of Open Access Journals (Sweden)

    E.D. Lund

    2001-01-01

    Full Text Available Soil texture varies significantly within many agricultural fields. The physical properties of soil, such as soil texture, have a direct effect on water holding capacity, cation exchange capacity, crop yield, production capability, and nitrogen (N) loss variations within a field. In short, mobile nutrients are used, lost, and stored differently as soil textures vary. A uniform application of N to varying soils results in a wide range of N availability to the crop. N applied in excess of crop usage results in a waste of the grower’s input expense, a potential negative effect on the environment, and in some crops a reduction of crop quality, yield, and harvestability. Inadequate N levels represent a lost opportunity for crop yield and profit. The global positioning system (GPS)-referenced mapping of bulk soil electrical conductivity (EC) has been shown to serve as an effective proxy for soil texture and other soil properties. Soils with a high clay content conduct more electricity than coarser textured soils, which results in higher EC values. This paper will describe the EC mapping process and provide case studies of site-specific N applications based on EC maps. Results of these case studies suggest that N can be managed site-specifically using a variety of management practices, including soil sampling, variable yield goals, and cropping history.
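At its core, a site-specific prescription of this kind reduces to a zone lookup: each GPS-referenced EC reading is binned into a texture zone, and the N rate for that zone is applied instead of one uniform rate. The EC break-points and N rates below are invented for illustration, not agronomic recommendations:

```python
# Hypothetical variable-rate sketch: bin each cell's soil EC reading into a
# texture zone and look up the nitrogen rate prescribed for that zone.
zones = [                        # (max EC, zone name, N rate) - illustrative
    (20.0, "sandy", 60),         # EC in mS/m, rate in kg N/ha
    (40.0, "loam", 90),
    (float("inf"), "clay", 110),
]

def n_rate(ec):
    for max_ec, _zone, rate in zones:
        if ec <= max_ec:
            return rate

field_ec = [12.5, 33.0, 55.2, 18.9]        # EC readings along a transect
prescription = [n_rate(ec) for ec in field_ec]
print(prescription)   # -> [60, 90, 110, 60]
```

Higher-EC (clay-rich) cells receive higher rates here purely for illustration; in practice the zone-to-rate mapping comes from soil sampling and yield goals, as the abstract notes.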

  11. Dose mapping sensitivity to deformable registration uncertainties in fractionated radiotherapy – applied to prostate proton treatments

    International Nuclear Information System (INIS)

    Tilly, David; Tilly, Nina; Ahnesjö, Anders

    2013-01-01

    Calculation of accumulated dose in fractionated radiotherapy based on spatial mapping of the dose points generally requires deformable image registration (DIR). The accuracy of the accumulated dose thus depends heavily on the DIR quality. This motivates investigations of how the registration uncertainty influences dose planning objectives and treatment outcome predictions. A framework was developed where the dose mapping can be associated with a variable known uncertainty to simulate the DIR uncertainties in a clinical workflow. The framework enabled us to study the dependence of dose planning metrics, and the predicted treatment outcome, on the DIR uncertainty. The additional planning margin needed to compensate for the dose mapping uncertainties can also be determined. We applied the simulation framework to a hypofractionated proton treatment of the prostate using two different scanning beam spot sizes to also study the dose mapping sensitivity to penumbra widths. The planning parameter most sensitive to the DIR uncertainty was found to be the target D95. We found that the registration mean absolute error needs to be ≤0.20 cm to obtain an uncertainty better than 3% of the calculated D95 for intermediate-sized penumbras. Use of larger margins in constructing PTV from CTV relaxed the registration uncertainty requirements at the cost of increased dose burdens to the surrounding organs at risk. The DIR uncertainty requirements should be considered in an adaptive radiotherapy workflow since this uncertainty can have significant impact on the accumulated dose. The simulation framework enabled quantification of the accuracy requirement for DIR algorithms to provide satisfactory clinical accuracy in the accumulated dose.

  12. GIS-based landslide susceptibility mapping models applied to natural and urban planning in Trikala, Central Greece

    Energy Technology Data Exchange (ETDEWEB)

    Bathrellos, G. D.; Kalivas, D. P.; Skilodimou, H. D.

    2009-07-01

Landslide susceptibility mapping is a practical tool in natural and urban planning; it can be applied for determining land use zones, in construction design and in the planning of a variety of projects. In this study, two different GIS-based landslide susceptibility maps were generated for the mountainous part of the Trikala Prefecture in Thessaly, Central Greece. This was accomplished by using different methods for correlating factors that have an effect on landslide occurrences. The instability factors taken into account were: lithology, tectonic features, slope gradients, road network, drainage network, land use and rainfall. A frequency distribution of half of the landslide events of the study area across the classes of each instability factor was computed in order to rate the classes. Two models were used to combine the instability factors and assess the overall landslide susceptibility, namely the Weight Factor Model (WeF), which is a statistical method, and the Multiple Factor Model (MuF), which is a logical method. The produced maps were classified into four zones, Low, Moderate, High and Very High susceptibility, and validated using the other half of the landslide events of the area. Evaluation of the results is optimized through a Landslide Models Indicator (La.M.I.). (Author) 36 refs.

  13. A soil map of a large watershed in China: applying digital soil mapping in a data sparse region

    Science.gov (United States)

    Barthold, F.; Blank, B.; Wiesmeier, M.; Breuer, L.; Frede, H.-G.

    2009-04-01

Prediction of soil classes in data sparse regions is a major research challenge. With the advent of machine learning the possibilities to spatially predict soil classes have increased tremendously and given birth to new possibilities in soil mapping. We now need to develop tools to reduce the uncertainty in soil predictions. This is especially challenging in data sparse regions. One approach is to implement soil taxonomic distance as a classification error criterion in classification and regression trees (CART), as suggested by Minasny et al. (Geoderma 142 (2007) 285-293). This approach assumes that the classification error should be larger between soils that are more dissimilar, i.e. differ in a larger number of soil properties, and smaller between more similar soils. Our study area is the Xilin River Basin, located in central Inner Mongolia in China. It is characterized by semi-arid climate conditions and is representative of the naturally occurring steppe ecosystem. The study area comprises 3600 km2. We applied a random, stratified sampling design after McKenzie and Ryan (Geoderma 89 (1999) 67-94), with land use and topography as stratifying variables. We defined 10 sampling classes; from each class 14 replicates were randomly drawn and sampled. The dataset was split into 100 soil profiles for training and 40 soil profiles for validation. We then applied classification and regression trees (CART) to quantify the relationships between soil classes and environmental covariates. The classification tree explained 75.5% of the variance, with land use and geology as the most important predictor variables. Among the 8 soil classes that we predicted, the Kastanozems cover most of the area. They are predominantly found in steppe areas. 
However, even some of the soils at sand dune sites, which were thought to show only little soil formation
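The taxonomic-distance idea cited above (Minasny et al.) can be sketched in a few lines: the penalty for confusing two soil classes is weighted by how dissimilar the classes are, so mistaking a Kastanozem for a similar class costs less than mistaking it for a very different one. The class names come from the abstract, but the diagnostic property profiles below are purely hypothetical, for illustration only.

```python
# Sketch of taxonomic distance as a misclassification cost (after Minasny et al.).
# Each soil class gets a hypothetical profile of diagnostic properties; the
# distance between two classes is the fraction of properties on which they differ.

PROPERTIES = {  # hypothetical diagnostic property profiles, not real taxonomy
    "Kastanozem": {"mollic": 1, "calcic": 1, "gleyic": 0, "arenic": 0},
    "Chernozem":  {"mollic": 1, "calcic": 0, "gleyic": 0, "arenic": 0},
    "Gleysol":    {"mollic": 0, "calcic": 0, "gleyic": 1, "arenic": 0},
    "Arenosol":   {"mollic": 0, "calcic": 0, "gleyic": 0, "arenic": 1},
}

def taxonomic_distance(a, b):
    """Fraction of diagnostic properties on which classes a and b differ."""
    keys = PROPERTIES[a].keys()
    return sum(PROPERTIES[a][k] != PROPERTIES[b][k] for k in keys) / len(keys)

def weighted_error(true_labels, predicted_labels):
    """Mean taxonomic distance between truth and prediction (0 = perfect)."""
    return sum(taxonomic_distance(t, p)
               for t, p in zip(true_labels, predicted_labels)) / len(true_labels)
```

Plugged into a tree learner as the split-evaluation cost, this makes the classifier prefer errors between similar soils over errors between dissimilar ones.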

  14. The mapping approach in the path integral formalism applied to curve-crossing systems

    International Nuclear Information System (INIS)

    Novikov, Alexey; Kleinekathoefer, Ulrich; Schreiber, Michael

    2004-01-01

    The path integral formalism in a combined phase-space and coherent-state representation is applied to the problem of curve-crossing dynamics. The system of interest is described by two coupled one-dimensional harmonic potential energy surfaces interacting with a heat bath consisting of harmonic oscillators. The mapping approach is used to rewrite the Lagrangian function of the electronic part of the system. Using the Feynman-Vernon influence-functional method the bath is eliminated whereas the non-Gaussian part of the path integral is treated using the generating functional for the electronic trajectories. The dynamics of a Gaussian wave packet is analyzed along a one-dimensional reaction coordinate within a perturbative treatment for a small coordinate shift between the potential energy surfaces

  15. Applying Clustering Methods in Drawing Maps of Science: Case Study of the Map For Urban Management Science

    Directory of Open Access Journals (Sweden)

    Mohammad Abuei Ardakan

    2010-04-01

Full Text Available The present paper offers a basic introduction to data clustering and demonstrates the application of clustering methods to drawing maps of science. Approaches to the classification and clustering of information are briefly discussed, and their application to the visualization of conceptual information and the drawing of science maps is illustrated by reviewing similar research in this field. By implementing an agglomerative hierarchical clustering algorithm based on the complete-link method, the map for urban management science as an emerging, interdisciplinary scientific field is analyzed and reviewed.
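Complete-link agglomerative clustering, the method named above, repeatedly merges the pair of clusters whose two farthest members are closest. A minimal sketch (hypothetical 1-D document coordinates stand in for the term co-occurrence vectors a real science map would use):

```python
# Minimal complete-link agglomerative clustering in pure Python.
# Items are hypothetical 1-D similarity coordinates of documents.

def complete_link(points, n_clusters):
    clusters = [[p] for p in points]

    def dist(c1, c2):
        # complete link: distance between the two farthest members
        return max(abs(a - b) for a in c1 for b in c2)

    while len(clusters) > n_clusters:
        # find and merge the pair with the smallest complete-link distance
        pairs = [(dist(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        _, i, j = min(pairs)
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return [sorted(c) for c in clusters]
```

Recording the merge order and distances instead of stopping at a fixed cluster count yields the dendrogram that a science map is typically drawn from.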

  16. What Happens Inside a Fuel Cell? Developing an Experimental Functional Map of Fuel Cell Performance

    KAUST Repository

    Brett, Daniel J. L.

    2010-08-20

Fuel cell performance is determined by the complex interplay of mass transport, energy transfer and electrochemical processes. The convolution of these processes leads to spatial heterogeneity in the way that fuel cells perform, particularly due to reactant consumption, water management and the design of fluid-flow plates. It is therefore unlikely that any bulk measurement made on a fuel cell will accurately represent performance at all parts of the cell. The ability to make spatially resolved measurements in a fuel cell provides one of the most useful ways in which to monitor and optimise performance. This Minireview explores a range of in situ techniques being used to study fuel cells and describes the use of novel experimental techniques that the authors have used to develop an 'experimental functional map' of fuel cell performance. These techniques include the mapping of current density, electrochemical impedance, electrolyte conductivity, contact resistance and CO poisoning distribution within working PEFCs, as well as mapping the flow of reactant in gas channels using laser Doppler anemometry (LDA). For the high-temperature solid oxide fuel cell (SOFC), temperature mapping, reference electrode placement and the use of Raman spectroscopy are described along with methods to map the microstructural features of electrodes. The combination of these techniques, applied across a range of fuel cell operating conditions, allows a unique picture of the internal workings of fuel cells to be obtained and has been used to validate both numerical and analytical models. © 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Microscale and nanoscale strain mapping techniques applied to creep of rocks

    Science.gov (United States)

    Quintanilla-Terminel, Alejandra; Zimmerman, Mark E.; Evans, Brian; Kohlstedt, David L.

    2017-07-01

    Usually several deformation mechanisms interact to accommodate plastic deformation. Quantifying the contribution of each to the total strain is necessary to bridge the gaps from observations of microstructures, to geomechanical descriptions, to extrapolating from laboratory data to field observations. Here, we describe the experimental and computational techniques involved in microscale strain mapping (MSSM), which allows strain produced during high-pressure, high-temperature deformation experiments to be tracked with high resolution. MSSM relies on the analysis of the relative displacement of initially regularly spaced markers after deformation. We present two lithography techniques used to pattern rock substrates at different scales: photolithography and electron-beam lithography. Further, we discuss the challenges of applying the MSSM technique to samples used in high-temperature and high-pressure experiments. We applied the MSSM technique to a study of strain partitioning during creep of Carrara marble and grain boundary sliding in San Carlos olivine, synthetic forsterite, and Solnhofen limestone at a confining pressure, Pc, of 300 MPa and homologous temperatures, T/Tm, of 0.3 to 0.6. The MSSM technique works very well up to temperatures of 700 °C. The experimental developments described here show promising results for higher-temperature applications.

  18. Applied learning-based color tone mapping for face recognition in video surveillance system

    Science.gov (United States)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. The technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences among commercial surveillance camera models and the signal processing chipsets used by different manufacturers cause the color and intensity of images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using Multi-Class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.

  19. Fundamental Research Applied To Enable Hardware Performance in Microgravity

    Science.gov (United States)

    Sheredy, William A.

    2005-01-01

NASA sponsors microgravity research to generate knowledge in physical sciences. In some cases, that knowledge must be applied to enable future research. This article describes one such example. The Dust and Aerosol measurement Feasibility Test (DAFT) is a risk-mitigation experiment developed at the NASA Glenn Research Center by NASA and ZIN Technologies, Inc., in support of the Smoke Aerosol Measurement Experiment (SAME). SAME is an investigation that is being designed for operation in the Microgravity Science Glovebox aboard the International Space Station (ISS). The purpose of DAFT is to evaluate the performance of the P-Trak (TSI Incorporated, Shoreview, MN)--a commercially available condensation nuclei counter and a key SAME diagnostic--in long-duration microgravity, because of concerns about its ability to operate properly in that environment. If its microgravity performance is proven, this device will advance the state of the art in particle measurement capabilities for space vehicles and facilities, such as aboard the ISS. The P-Trak, a hand-held instrument, can count individual particles as small as 20 nm in diameter in an aerosol stream. Particles are drawn into the device by a built-in suction pump. Upon entering the instrument, these particles pass through a saturator tube where they mix with an alcohol vapor (see the following figure). This mixture then flows through a cooled condenser tube where some of the alcohol condenses onto the sample particles, and the droplets grow in a controlled fashion until they are large enough to be counted. These larger droplets pass through an internal nozzle and past a focused laser beam, producing flashes of light that are sensed by a photodetector and then counted to determine particle number concentration. The operation of the instrument depends on the proper internal flow and recycling of isopropyl alcohol in both the vapor and liquid phases.

  20. Applying importance-performance analysis to patient safety culture.

    Science.gov (United States)

    Lee, Yii-Ching; Wu, Hsin-Hung; Hsieh, Wan-Lin; Weng, Shao-Jen; Hsieh, Liang-Po; Huang, Chih-Hsuan

    2015-01-01

Sexton et al.'s (2006) safety attitudes questionnaire (SAQ) has been widely used to assess staff attitudes towards patient safety in healthcare organizations. However, to date there have been few studies that discuss the perceptions of patient safety of both hospital staff and upper management. The purpose of this paper is to improve and to develop better strategies regarding patient safety in healthcare organizations. The Chinese version of the SAQ, based on the Taiwan Joint Commission on Hospital Accreditation, is used to evaluate the perceptions of hospital staff. The current study applies the importance-performance analysis technique to identify the major strengths and weaknesses of the safety culture. The results show that teamwork climate, safety climate, job satisfaction, stress recognition and working conditions are major strengths and should be maintained in order to provide a better patient safety culture. On the contrary, perceptions of management and hospital handoffs and transitions are important weaknesses and should be improved immediately. Research limitations/implications - The research is restricted in generalizability. The assessment of patient safety culture covers only physicians and registered nurses. It would be interesting to further evaluate other staff's (e.g. technicians, pharmacists and others) opinions regarding patient safety culture in the hospital. Few studies have clearly evaluated the perceptions of healthcare organization management regarding patient safety culture. Healthcare managers are thus able to take more effective actions to improve the level of patient safety by investigating the key characteristics (either strengths or weaknesses) that healthcare organizations should focus on.
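The importance-performance analysis step described above amounts to placing each safety-culture dimension in a quadrant relative to the grand means of importance and performance. A minimal sketch with hypothetical scores (not the paper's data):

```python
# Sketch of importance-performance analysis (IPA): each dimension is placed in
# a quadrant by comparing its importance and performance scores with the grand
# means across all dimensions.

def ipa_quadrants(scores):
    """scores: {dimension: (importance, performance)} -> {dimension: quadrant}."""
    mean_imp = sum(i for i, _ in scores.values()) / len(scores)
    mean_perf = sum(p for _, p in scores.values()) / len(scores)
    quadrants = {}
    for dim, (imp, perf) in scores.items():
        if imp >= mean_imp and perf >= mean_perf:
            quadrants[dim] = "keep up the good work"   # major strength
        elif imp >= mean_imp:
            quadrants[dim] = "concentrate here"        # important weakness
        elif perf >= mean_perf:
            quadrants[dim] = "possible overkill"
        else:
            quadrants[dim] = "low priority"
    return quadrants

# Hypothetical Likert-scale means, chosen only to mirror the abstract's findings.
scores = {
    "teamwork climate": (4.5, 4.2),
    "handoffs and transitions": (4.4, 3.1),
    "perceptions of management": (4.3, 2.8),
    "stress recognition": (3.0, 4.0),
}
```

Dimensions landing in "concentrate here" (high importance, low performance) are the ones the abstract flags for immediate improvement.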

  1. Anodal tDCS applied during multitasking training leads to transferable performance gains.

    Science.gov (United States)

    Filmer, Hannah L; Lyons, Maxwell; Mattingley, Jason B; Dux, Paul E

    2017-10-11

    Cognitive training can lead to performance improvements that are specific to the tasks trained. Recent research has suggested that transcranial direct current stimulation (tDCS) applied during training of a simple response-selection paradigm can broaden performance benefits to an untrained task. Here we assessed the impact of combined tDCS and training on multitasking, stimulus-response mapping specificity, response-inhibition, and spatial attention performance in a cohort of healthy adults. Participants trained over four days with concurrent tDCS - anodal, cathodal, or sham - applied to the left prefrontal cortex. Immediately prior to, 1 day after, and 2 weeks after training, performance was assessed on the trained multitasking paradigm, an untrained multitasking paradigm, a go/no-go inhibition task, and a visual search task. Training combined with anodal tDCS, compared with training plus cathodal or sham stimulation, enhanced performance for the untrained multitasking paradigm and visual search tasks. By contrast, there were no training benefits for the go/no-go task. Our findings demonstrate that anodal tDCS combined with multitasking training can extend to untrained multitasking paradigms as well as spatial attention, but with no extension to the domain of response inhibition.

  2. Applying the AcciMap methodology to investigate the tragic Sewol Ferry accident in South Korea.

    Science.gov (United States)

    Lee, Samuel; Moh, Young Bo; Tabibzadeh, Maryam; Meshkati, Najmedin

    2017-03-01

This study applies the AcciMap methodology, which was originally proposed by Professor Jens Rasmussen (1997), to the analysis of the tragic Sewol Ferry accident in South Korea on April 16, 2014, which killed 304 mostly young people and is considered a national disaster in that country. This graphical representation, by incorporating associated socio-technical factors into an integrated framework, provides a big picture illustrating the context in which an accident occurred as well as the interactions between different levels of the studied system that resulted in that event. In general, analysis of past accidents within the stated framework can define the patterns of hazards within an industrial sector. Such analysis can lead to the definition of preconditions for safe operations, which is a main focus of proactive risk management systems. In the case of the Sewol Ferry accident, much of the blame has been placed on the Sewol's captain and its crewmembers. However, according to this study, which relied on analyzing all available sources published in English and Korean, the disaster is the result of a series of lapses and disregards for safety across different levels of government and regulatory bodies, Chonghaejin Company, and the Sewol's crewmembers. The primary layers of the AcciMap framework, which include the political environment and a non-proactive governmental body; inadequate regulations and their lax oversight and enforcement; poor safety culture; inattention to human factors issues; and a lack of and/or outdated standard operating and emergency procedures, were not limited to the maritime industry in South Korea and the Sewol Ferry accident; they could also afflict any safety-sensitive industry anywhere in the world. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. How can mental maps, applied to the coast environment, help in collecting and analyzing spatial representations?

    Directory of Open Access Journals (Sweden)

    Servane Gueben-Venière

    2011-09-01

Full Text Available After having been mainly used in urban geography, then somewhat cast aside by geographers, mental maps are now the object of renewed interest, particularly in the field of environmental geography. Applied to the coastal environment, and used as a supplement to interviews, they prove to be not only a good tool for collecting spatial representations but also a precious aid in analyzing them. This article uses the example of the integration of mental maps in the scientific poster "Des ingénieurs de plus en plus « verts ». Évolution du regard des ingénieurs en charge de la gestion du littoral néerlandais" (Engineers are greener and greener: the evolving outlook of the engineers in charge of Dutch coastal management), prize-winner of the competition organized by the Paris Doctoral School of Geography Forum in 2011.

  4. Mapping the Conjugate Gradient Algorithm onto High Performance Heterogeneous Computers

    Science.gov (United States)

    2014-05-01

Solution of sparse indefinite systems of linear equations. Society for Industrial and Applied Mathematics 12(4), 617-629. Parker, M. (2009). Taking advantage... FPGA designer; thus, final implementations were nearly always performed using fixed-point or integer arithmetic (Parker 2009). With the recent
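The conjugate gradient algorithm itself is short, which is why the interesting work in the report lies in mapping its matrix-vector product and dot products onto heterogeneous hardware (and, per the excerpt above, choosing fixed- versus floating-point arithmetic). A minimal serial sketch for a symmetric positive definite system, not the report's implementation:

```python
# Minimal conjugate gradient solver for a symmetric positive definite
# system A x = b, in pure Python. The mat-vec and the dot products are the
# kernels one would offload to an FPGA or GPU in a heterogeneous mapping.

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:          # converged: squared residual is tiny
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations for an n-by-n system, which is why it is attractive for the large sparse systems the report targets.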

  5. Self-organizing maps applied to two-phase flow on natural circulation loop study

    International Nuclear Information System (INIS)

    Castro, Leonardo Ferreira

    2016-01-01

Two-phase flow of liquid and gas is found in many closed circuits that use natural circulation for cooling purposes. The natural circulation phenomenon is important in recent nuclear power plant projects for decay heat removal. The Natural Circulation Facility (Circuito de Circulacao Natural, CCN) installed at Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN, is an experimental circuit designed to provide thermal hydraulic data related to single- and two-phase flow under natural circulation conditions. The periodic flow oscillation behavior can be observed thoroughly in this facility thanks to the transparency of its glass-made tubes. Heat transfer estimation has been improved based on models that require precise prediction of flow pattern transitions. This work presents experiments performed at the CCN to visualize natural circulation cycles in order to classify two-phase flow patterns associated with phase transients and static instabilities of the flow. Images are compared and clustered using Kohonen Self-Organizing Maps (SOMs) applied to different digital image features. The Full Frame Discrete Cosine Transform (FFDCT) coefficients were used as input for the classification task, enabling good results. The FFDCT prototypes obtained can be associated with each flow pattern, enabling a better comprehension of each observed instability. A systematic test methodology was used to verify classifier robustness.
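A Kohonen SOM of the kind applied above can be sketched with a tiny 1-D map: each input is assigned to its best-matching unit, and that unit and its neighbours are pulled toward the input with a decaying learning rate and neighbourhood width. The 2-D feature vectors below are hypothetical stand-ins for FFDCT image features; the paper's classifier would use far larger feature vectors and maps.

```python
# Minimal self-organizing map in pure Python: a 1-D grid of units trained on
# 2-D feature vectors. Illustrates only the SOM update rule.
import math
import random

def train_som(data, n_units=4, epochs=50, lr0=0.5, sigma0=1.0, seed=0):
    rng = random.Random(seed)
    weights = [[rng.random(), rng.random()] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                 # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.1)  # shrinking neighbourhood
        for x in data:
            # best-matching unit = unit with the closest weight vector
            bmu = min(range(n_units),
                      key=lambda u: sum((weights[u][k] - x[k]) ** 2 for k in range(2)))
            for u in range(n_units):                    # neighbourhood update
                h = math.exp(-((u - bmu) ** 2) / (2 * sigma ** 2))
                for k in range(2):
                    weights[u][k] += lr * h * (x[k] - weights[u][k])
    return weights

def classify(weights, x):
    """Return the index of the best-matching unit for input x."""
    return min(range(len(weights)),
               key=lambda u: sum((weights[u][k] - x[k]) ** 2 for k in range(2)))
```

After training on two well-separated groups of feature vectors, inputs from different groups map to different units, which is the clustering behavior the flow-pattern study relies on.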

  6. Effects of supervised Self Organising Maps parameters on classification performance.

    Science.gov (United States)

    Ballabio, Davide; Vasighi, Mahdi; Filzmoser, Peter

    2013-02-26

Self Organising Maps (SOMs) are one of the most powerful learning strategies among neural network algorithms. SOMs have several adaptable parameters, and the selection of appropriate network architectures is required in order to make accurate predictions. The major disadvantage of SOMs is probably the network optimisation, since this procedure can often be time-consuming. The effects of network size, training epochs and learning rate on the classification performance of SOMs are known, whereas the effects of other parameters (type of SOM, weight initialisation, training algorithm, topology and boundary conditions) are not so obvious. This study analysed the effect of SOM parameters on the network classification performance, as well as on the computational time, taking into consideration a significant number of real datasets in order to achieve a comprehensive statistical comparison. Parameters were evaluated simultaneously by means of an approach based on the design of experiments, which enabled the investigation of their interaction effects. Results highlighted the most important parameters influencing the classification performance and enabled the identification of the optimal settings, as well as the optimal architectures to reduce the computational time of SOMs. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Performance assurance of the re-applying project documentation

    Science.gov (United States)

    Kozlova, Olga

    2017-10-01

Re-use of project documentation is a cost-effective measure: it saves budgetary funds that would otherwise be spent procuring newly developed project documentation. It also makes it possible to carry over proven decisions and to prevent the repetition of mistakes. Nowadays, state construction-management authorities are forming a separate institute for re-applying project documentation. The article sets out the main tasks of such efforts and the issues to be solved to achieve a highly positive result.

  8. Process mapping as a framework for performance improvement in emergency general surgery.

    Science.gov (United States)

    DeGirolamo, Kristin; D'Souza, Karan; Hall, William; Joos, Emilie; Garraway, Naisan; Sing, Chad Kim; McLaughlin, Patrick; Hameed, Morad

    2018-02-01

    Emergency general surgery conditions are often thought of as being too acute for the development of standardized approaches to quality improvement. However, process mapping, a concept that has been applied extensively in manufacturing quality improvement, is now being used in health care. The objective of this study was to create process maps for small bowel obstruction in an effort to identify potential areas for quality improvement. We used the American College of Surgeons Emergency General Surgery Quality Improvement Program pilot database to identify patients who received nonoperative or operative management of small bowel obstruction between March 2015 and March 2016. This database, patient charts and electronic health records were used to create process maps from the time of presentation to discharge. Eighty-eight patients with small bowel obstruction (33 operative; 55 nonoperative) were identified. Patients who received surgery had a complication rate of 32%. The processes of care from the time of presentation to the time of follow-up were highly elaborate and variable in terms of duration; however, the sequences of care were found to be consistent. We used data visualization strategies to identify bottlenecks in care, and they showed substantial variability in terms of operating room access. Variability in the operative care of small bowel obstruction is high and represents an important improvement opportunity in general surgery. Process mapping can identify common themes, even in acute care, and suggest specific performance improvement measures.

  9. Geoelectrical mapping for improved performance of SUDS in clay tills

    DEFF Research Database (Denmark)

    Bockhorn, Britta; Møller, Ingelise; Klint, Knud Erik S.

    2015-01-01

    geological methods, including borehole soil sample descriptions, one excavation description and a near-surface spear auger-mapping project. The experiments returned a significant correlation of geoelectrical and spear auger-mapped surface sediments. Furthermore, a highly permeable oxidized fracture zone...

  10. Visual simultaneous localization and mapping (VSLAM) methods applied to indoor 3D topographical and radiological mapping in real-time

    International Nuclear Information System (INIS)

    Hautot, F.; Dubart, P.; Chagneau, B.; Bacri, C.O.; Abou-Khalil, R.

    2017-01-01

New developments in the fields of robotics and computer vision make it possible to merge sensors, allowing fast real-time localization of radiological measurements in the space/volume together with near real-time identification and characterization of radioactive sources. These capabilities make nuclear investigations more efficient for operators' dosimetry evaluation, intervention scenario planning, and risk mitigation and simulation, for example after accidents in unknown, potentially contaminated areas or during dismantling operations. This paper will present new progress in merging RGB-D camera based SLAM (Simultaneous Localization and Mapping) systems with nuclear measurement-in-motion methods in order to detect, locate, and evaluate the activity of radioactive sources in three dimensions.

  11. The concept of value stream mapping to reduce of work-time waste as applied the smart construction management

    Science.gov (United States)

    Elizar, Suripin, Wibowo, Mochamad Agung

    2017-11-01

Delays on construction sites occur due to the systematic accumulation of time waste across the various activities that make up the construction process. Work-time waste is non-value-adding activity; the term distinguishes it from the physical construction waste found on site and from other waste that occurs during the construction process. The aim of this study is to apply the concept of Value Stream Mapping (VSM) to reduce work-time waste in smart construction management. VSM analysis is a method of business process improvement whose application began in the manufacturing community. The research method is based on a theoretically informed case study and a literature review. The data were collected by questionnaire through personal interviews with 383 respondents on construction projects in Indonesia. The results show that the concept of VSM can identify causes of work-time waste. Based on the questionnaire results and a quantitative analysis, 29 variables that influence work-time waste, i.e. non-value-adding activities, were obtained. In three cases of construction projects, on average 14.88% of working time was classified as waste. Finally, the concept of VSM offers a systematic way to reveal current practices and opportunities for improvement in the face of global challenges. Value stream mapping can help reduce work-time waste and improve the quality standard of construction management, and it can help managers make decisions that reduce work-time waste so as to obtain more efficient performance and a sustainable construction project.
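At its core, the work-time waste figure quoted above (14.88% on average) is the non-value-adding share of total activity time in a value-stream map. A minimal sketch with hypothetical activity names and durations, not the study's data:

```python
# Sketch of the waste-share calculation behind a value-stream map: each
# activity is reduced to a duration and a value-adding flag, and waste is the
# non-value-adding fraction of total working time. Numbers are illustrative.

activities = [  # (name, hours, adds_value)
    ("formwork installation",   34.0, True),
    ("waiting for material",     4.0, False),
    ("rework after inspection",  2.0, False),
    ("concrete pouring",        20.0, True),
]

def waste_share(acts):
    """Fraction of total working time spent on non-value-adding activities."""
    total = sum(hours for _, hours, _ in acts)
    waste = sum(hours for _, hours, adds_value in acts if not adds_value)
    return waste / total
```

In a real VSM study, the same breakdown is drawn along the process timeline so that the waste can be traced to specific hand-offs and queues.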

  12. Performance analysis of numeric solutions applied to biokinetics of radionuclides

    International Nuclear Information System (INIS)

    Mingatos, Danielle dos Santos; Bevilacqua, Joyce da Silva

    2013-01-01

Biokinetics models for radionuclides applied to dosimetry problems are constantly reviewed by the ICRP. The radionuclide trajectory can be represented by compartmental models, assuming constant transfer rates between compartments. A better understanding of physiological and biochemical phenomena improves the comprehension of radionuclide behavior in the human body, and, in general, more complex compartmental models are proposed, increasing the difficulty of obtaining an analytical solution for the system of first-order differential equations. Even with constant transfer rates, numerical solutions must be carefully implemented because of the near-singular character of the coefficient matrix. In this work we compare numerical methods with different strategies for the ICRP-78 models for Thorium-228 and Uranium-234. The impact of uncertainty in the parameters of the equations is also estimated for local and global truncation errors. (author)
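A compartmental biokinetic model with constant transfer rates is a linear first-order ODE system, and the numerical issues the abstract mentions can be illustrated even with two compartments. The rates below are illustrative, not ICRP-78 values; the explicit Euler result is checked against the analytic solution of the first compartment:

```python
# Two-compartment biokinetic model with constant transfer rates, solved with
# explicit Euler and checked against the analytic solution of compartment 1.
import math

K12, K10 = 0.5, 0.2  # transfer and excretion rates (1/day), hypothetical values

def euler(q0, t_end, dt):
    """Integrate dq1/dt = -(K12+K10) q1, dq2/dt = K12 q1 from q1(0)=q0, q2(0)=0."""
    q1, q2 = q0, 0.0
    for _ in range(int(t_end / dt)):
        dq1 = -(K12 + K10) * q1   # compartment 1 loses to 2 and to excretion
        dq2 = K12 * q1            # compartment 2 gains from compartment 1
        q1 += dt * dq1
        q2 += dt * dq2
    return q1, q2

def q1_exact(q0, t):
    """Analytic content of compartment 1: simple exponential decay."""
    return q0 * math.exp(-(K12 + K10) * t)
```

With widely separated rate constants (a stiff system, common in real biokinetics), the explicit step size would have to shrink drastically, which is one reason the choice of numerical method matters.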

  13. Performance Enhancement of the Patch Antennas Applying Micromachining Technology

    Directory of Open Access Journals (Sweden)

    Mohamed N. Azermanesh

    2007-09-01

Full Text Available This paper reports on the application of micromachining technology for performance enhancement of two types of compact antennas that are becoming common practice in microsystems. Shorted patch antennas (SPA) and folded shorted patch antennas operating in the 5-6 GHz ISM band, with intended application in short-range wireless communications, are considered. The electrical length of the antennas is modified by etching the substrate, thus providing a new degree of freedom to control the antenna operating properties, which is the main novelty of our work. The gain and bandwidth of the antennas increase with the etching depth. However, etching the substrate affects the operating frequency as well: to keep the operating frequency at a pre-specified value, the dimensions of the antennas must be increased as the etching is deepened. Therefore, a trade-off between the performance enhancement of the antennas and the dimensional enlargement is required.

  14. Measurement uncertainty analysis techniques applied to PV performance measurements

    International Nuclear Information System (INIS)

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; defines valid data as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results
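
The interval described above can be sketched in a few lines. This is a minimal illustration assuming independent uncertainty sources combined by root-sum-square with a coverage factor k, in the style of standard uncertainty-analysis practice; the function names and the PV numbers are hypothetical, not taken from the record.

```python
import math

def combined_uncertainty(components):
    """Root-sum-square combination of independent standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in components))

def coverage_interval(measured, components, k=2.0):
    """Interval measured +/- k * u_c (k = 2 approximates 95 % coverage)."""
    U = k * combined_uncertainty(components)
    return (measured - U, measured + U)

# Hypothetical PV module power measurement: 250 W with three independent
# standard uncertainties (irradiance, temperature, data acquisition), in W.
low, high = coverage_interval(250.0, [1.5, 0.8, 0.5], k=2.0)
```

The interval (low, high) is the statement "we believe the true value lies within these bounds" that the abstract refers to.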

  15. PERFORMANCE INDICATORS APPLIED TO BRAZILIAN PRIVATE EDUCATIONAL INSTITUTIONS

    Directory of Open Access Journals (Sweden)

    Átila de Melo Lira

    2015-06-01

    Full Text Available Initially focused on for-profit companies, the Balanced Scorecard (BSC) has been adopted by many organizations with different objectives, including higher education institutions (HEIs). However, it is not clear whether the adoption of the BSC model is appropriate, and it is hard to perceive how HEIs, public or private, have modified and implemented this tool for evaluating educational institutions in Brazil. This study aims to fill this gap in the context of using the BSC in organizations. We intend to demonstrate how these organizations use performance indicators to measure their primary activities. A quantitative and exploratory study was developed from the analysis of performance indicators found on the websites of Brazilian universities. The evaluation processes of 91 Brazilian private universities were reviewed. Even with a considerable number of private HEIs, few have performance indicators guided by numerical and statistical data covering their main activities, which is a concern for their managers in terms of managerial control.

  16. Measurement uncertainty analysis techniques applied to PV performance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; defines valid data as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.

  17. Measurement uncertainty analysis techniques applied to PV performance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wells, C

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; defines valid data as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.

  18. Developments in life cycle assessment applied to evaluate the environmental performance of construction and demolition wastes.

    Science.gov (United States)

    Bovea, M D; Powell, J C

    2016-04-01

    This paper provides a review of the literature that applies the life cycle assessment (LCA) methodology to the assessment of the environmental performance of the life cycle of construction and demolition waste (CDW) management systems. This article is focused on generating a general mapping of the literature, identifying the best practices in compliance with the LCA framework, and proposing directions for future LCA studies in this field. The temporal evolution of the research in this field and the aims of the studies have grown in parallel with the legal framework related to waste and the energy efficiency of buildings. Most studies have been published in Europe, followed by the USA, Asia and Australia, with application in the rest of the world at an incipient stage. Topics related to "LCA of buildings, including their EoL" and "LCA of general CDW management strategies" are the most frequently analysed, followed by "LCA of EoL of construction elements" and "LCA of natural material vs recycled material". Regarding the strategies, recycling off-site and incineration, both combined with landfill for the rejected fractions, are the most commonly applied; re-use or recycling on-site is the strategy least applied. The key aspects when LCA is applied to evaluate CDW management systems are the need to normalise which processes to include in the system boundary and the functional unit, the use of inventory data adapted to the context of the case study, and the definition of a common set of appropriate impact assessment categories. It is also important to obtain results disaggregated by unit process, as this allows comparison between case studies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Applying Value Stream Mapping Technique for Production Improvement in a Manufacturing Company: A Case Study

    Science.gov (United States)

    Jeyaraj, K. L.; Muralidharan, C.; Mahalingam, R.; Deshmukh, S. G.

    2013-01-01

    The purpose of this paper is to explain how value stream mapping (VSM) is helpful in lean implementation and to develop the road map to tackle improvement areas to bridge the gap between the existing state and the proposed state of a manufacturing firm. Through this case study, the existing state of manufacturing is mapped with the help of VSM process symbols and the biggest improvement areas, such as excessive TAKT time, production, and lead time, are identified. Some modifications to the current state map are suggested, and with these modifications the future state map is prepared. Further, TAKT time is calculated to set the pace of the production processes. This paper compares the current state and future state of a manufacturing firm and reports a 20 % reduction in TAKT time, a 22.5 % reduction in processing time, a 4.8 % reduction in lead time, a 20 % improvement in production, a 9 % improvement in machine utilization, a 7 % improvement in manpower utilization, an objective improvement in workers' skill level, and no change in the product and semi-finished product inventory levels. The findings are limited due to the focused nature of the case study. This case study shows that VSM is a powerful tool for lean implementation and allows the industry to understand and continuously improve towards lean manufacturing.
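
The TAKT-time calculation mentioned above is simply available production time divided by customer demand. A minimal sketch with hypothetical shift numbers (not taken from the case study):

```python
def takt_time(available_minutes_per_day, daily_demand_units):
    """TAKT time: the pace at which one unit must be completed
    to exactly meet customer demand."""
    return available_minutes_per_day / daily_demand_units

# Hypothetical example: one 480-minute shift per day.
before = takt_time(480, 120)  # demand of 120 units/day -> 4.0 min per unit
after = takt_time(480, 150)   # demand rises to 150 units/day -> 3.2 min per unit
```

If a process step's cycle time exceeds the TAKT time, that step cannot keep up with demand and becomes an improvement target on the value stream map.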

  20. Levitation performance of YBCO bulk in different applied magnetic fields

    International Nuclear Information System (INIS)

    Liu, W.; Wang, S.Y.; Jing, H.; Zheng, J.; Jiang, M.; Wang, J.S.

    2008-01-01

    The maglev performance of bulk high-Tc superconductor (HTS) is investigated above three different types of permanent magnet guideways (PMGs). The main difference among these PMGs is the method used to concentrate the magnetic flux. The experimental results indicate that the levitation force depends only in part on the peak value of the magnetic field. The variation of the vertical component of the magnetic field (Bz) and the structure of the magnetic field are also responsible for the levitation force. These results imply that a permanent magnet with high coercive force is better at concentrating flux than iron. The conclusions contribute in a very helpful way to the design and optimization of PMGs for HTS maglev systems.

  1. Levitation performance of YBCO bulk in different applied magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Liu, W. [Applied Superconductivity Laboratory, Southwest Jiaotong University, Chengdu 610031 (China)], E-mail: asclab@asclab.cn; Wang, S.Y.; Jing, H.; Zheng, J.; Jiang, M.; Wang, J.S. [Applied Superconductivity Laboratory, Southwest Jiaotong University, Chengdu 610031 (China)

    2008-07-01

    The maglev performance of bulk high-T{sub c} superconductor (HTS) is investigated above three different types of permanent magnet guideways (PMGs). The main difference among these PMGs is the method used to concentrate the magnetic flux. The experimental results indicate that the levitation force depends only in part on the peak value of the magnetic field. The variation of the vertical component of the magnetic field (B{sub z}) and the structure of the magnetic field are also responsible for the levitation force. These results imply that a permanent magnet with high coercive force is better at concentrating flux than iron. The conclusions contribute in a very helpful way to the design and optimization of PMGs for HTS maglev systems.

  2. Applying importance-performance analysis to evaluate banking service quality

    Directory of Open Access Journals (Sweden)

    André Luís Policani Freitas

    2012-11-01

    Full Text Available In an increasingly competitive market, the identification of the most important aspects of service and the measurement of service quality as perceived by customers are important actions taken by organizations seeking competitive advantage. In particular, this scenario is typical of the Brazilian banking sector. In this context, this article presents an exploratory case study in which Importance-Performance Analysis (IPA) was used to identify the strong and weak points of the services provided by a bank. In order to check the reliability of the questionnaire, Cronbach's alpha and correlation analyses were used. The results are presented, and some actions have been defined in order to improve the quality of services.
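
The core of IPA is plotting each attribute's mean importance against its mean performance and reading off the resulting quadrant. A minimal sketch, using the classic quadrant labels; the banking attributes and 1-5 ratings below are hypothetical, not data from the study.

```python
def ipa_quadrant(importance, performance, imp_cut, perf_cut):
    """Classify one service attribute into the classic IPA grid."""
    if importance >= imp_cut:
        return "Concentrate here" if performance < perf_cut else "Keep up the good work"
    return "Low priority" if performance < perf_cut else "Possible overkill"

# Hypothetical mean (importance, performance) ratings on a 1-5 scale;
# the cut points are the grand means across attributes.
ratings = {"queue time": (4.6, 2.8), "ATM uptime": (4.8, 4.4),
           "decor": (2.1, 2.5), "brochures": (2.0, 4.1)}
imp_cut = sum(i for i, _ in ratings.values()) / len(ratings)
perf_cut = sum(p for _, p in ratings.values()) / len(ratings)
labels = {k: ipa_quadrant(i, p, imp_cut, perf_cut)
          for k, (i, p) in ratings.items()}
```

High-importance, low-performance attributes ("Concentrate here") are the weak points the abstract refers to; high-importance, high-performance ones are the strong points.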

  3. Performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Giger, M.L.; Chen, C.T.; Sullivan, B.J.

    1990-01-01

    In this paper the authors evaluate the expectation maximization (EM) algorithm, both qualitatively and quantitatively, as a technique for enhancing radiographic images. Previous studies have qualitatively shown the usefulness of the EM algorithm but have failed to quantify and compare its performance with those of other image processing techniques. Recent studies by Loo et al., Ishida et al., and Giger et al. have explained improvements in image quality quantitatively in terms of a signal-to-noise ratio (SNR) derived from signal detection theory. In this study, we take a similar approach in quantifying the effect of the EM algorithm on the detection of simulated low-contrast square objects superimposed on radiographic mottle. The SNRs of the original and processed images are calculated taking into account both the human visual system response and the screen-film transfer function, as well as a noise component internal to the eye-brain system. The EM algorithm was also implemented on digital screen-film images of test patterns and clinical mammograms.
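
The record does not give the authors' image-restoration formulation, but the alternating E-step/M-step structure of any EM algorithm can be illustrated with a generic toy example: fitting the means of a two-component 1-D Gaussian mixture. Everything here (fixed equal variances, equal priors, the data) is an illustrative assumption, not the paper's method.

```python
import math

def em_gmm_1d(xs, mu1, mu2, iters=50, sigma=1.0):
    """Toy EM for a two-component 1-D Gaussian mixture with fixed, equal
    variances and equal priors -- illustrates only the E and M steps."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each data point.
        r1 = []
        for x in xs:
            p1 = math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            p2 = math.exp(-0.5 * ((x - mu2) / sigma) ** 2)
            r1.append(p1 / (p1 + p2))
        # M-step: re-estimate each mean as a responsibility-weighted average.
        w1 = sum(r1)
        w2 = len(xs) - w1
        mu1 = sum(r * x for r, x in zip(r1, xs)) / w1
        mu2 = sum((1 - r) * x for r, x in zip(r1, xs)) / w2
    return mu1, mu2

# Two well-separated clusters near 0 and 10; deliberately poor initial guesses.
m1, m2 = em_gmm_1d([0.1, -0.2, 0.0, 9.9, 10.2, 10.0], mu1=1.0, mu2=8.0)
```

Each iteration is guaranteed not to decrease the data likelihood, which is the property that makes EM attractive for restoration problems like the one in the abstract.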

  4. Academic performance in terms of the applied assessment system

    Directory of Open Access Journals (Sweden)

    Arribas, José María

    2012-05-01

    Full Text Available This paper examines academic performance as a function of the assessment and grading system used in the university setting. The sample under study consists of 30 subjects, taught by 35 professors to 2192 students from 7 different degree programmes at 14 universities across Spain. The results confirm that continuous assessment yields the best results, not only in terms of performance rate and success rate but also in terms of the grades obtained.

  5. Tangential derivative mapping of axial MEG applied to event-related desynchronization research

    NARCIS (Netherlands)

    Bastiaansen, M.C.M.; Knösche, T.R.

    2000-01-01

    Objectives: A problem with the topographic mapping of MEG data recorded with axial gradiometers is that field extrema are measured at sensors located at either side of a neuronal generator instead of at sensors directly above the source. This is problematic for the computation of event-related

  6. Sensitivity study of a semiautomatic supervised classifier applied to minerals from x-ray mapping images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Flesche, Harald

    2000-01-01

    This paper addresses the problem of assessing the robustness with respect to change in parameters of an integrated training and classification routine for minerals commonly encountered in siliciclastic or carbonate rocks. Twelve chemical elements are mapped from thin sections by energy dispersive...

  7. Space-mapping techniques applied to the optimization of a safety isolating transformer

    NARCIS (Netherlands)

    T.V. Tran; S. Brisset; D. Echeverria (David); D.J.P. Lahaye (Domenico); P. Brochet

    2007-01-01

    Space-mapping optimization techniques allow low-fidelity and high-fidelity models to be aligned in order to reduce the computational time and increase the accuracy of the solution. The main idea is to build an approximate model from the difference in response between both models. Therefore

  8. Reducing Communication Overhead by Scheduling TCP Transfers on Mobile Devices using Wireless Network Performance Maps

    DEFF Research Database (Denmark)

    Højgaard-Hansen, Kim; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2012-01-01

    The performance of wireless communication networks has been shown to have a strong location dependence. Measuring the performance while having accurate location information available makes it possible to generate performance maps. In this paper we propose a framework for the generation and use of such performance maps. We demonstrate how the framework can be used to reduce retransmissions and to better utilise network resources when performing TCP-based file downloads in vehicular M2M communication scenarios. The approach works on top of a standard TCP stack and hence has to map identified transmission...

  9. The Effectiveness of Concept Maps in Teaching Physics Concepts Applied to Engineering Education: Experimental Comparison of the Amount of Learning Achieved With and Without Concept Maps

    Science.gov (United States)

    Martínez, Guadalupe; Pérez, Ángel Luis; Suero, María Isabel; Pardo, Pedro J.

    2013-04-01

    A study was conducted to quantify the effectiveness of concept maps in learning physics in engineering degrees. The following research question was posed: What was the difference in learning results from the use of concept maps to study a particular topic in an engineering course? The study design was quasi-experimental and used a post-test as the measuring instrument. The sample included 114 university students from the School of Industrial Engineering, who were divided into two equivalent homogeneous groups of 57 students each. The amount of learning attained by the students in each group was compared, with the independent variable being the teaching method; the experimental group (E.G.) used concept maps, while the control group (C.G.) did not. We performed a crossover study with the two groups of students, with one group acting as the E.G. for the topic of optical fibers and as the C.G. for the topic of the fundamental particles of matter, and vice versa for the other group. For each of the two topics studied, the evaluation instrument was a test of 100 dichotomous items. The resulting data were subjected to a comparative statistical analysis, which revealed a significant difference in the amount of learning attained by the E.G. students as compared with the C.G. students. The results allow us to state that with the use of concept maps, the average increment in the E.G. students' learning was greater than 19 percentage points.

  10. Applying multibeam sonar and mathematical modeling for mapping seabed substrate and biota of offshore shallows

    Science.gov (United States)

    Herkül, Kristjan; Peterson, Anneliis; Paekivi, Sander

    2017-06-01

    Both basic science and marine spatial planning are in need of high-resolution, spatially continuous data on seabed habitats and biota. As conventional point-wise sampling is unable to cover large spatial extents in high detail, it must be supplemented with remote sensing and modeling in order to fulfill scientific and management needs. The combined use of in situ sampling, sonar scanning, and mathematical modeling is becoming the main method for mapping both abiotic and biotic seabed features. Further development and testing of the methods in varying locations and environmental settings is essential for moving towards a unified and generally accepted methodology. To fill the relevant research gap in the Baltic Sea, we used multibeam sonar and mathematical modeling methods - generalized additive models (GAM) and random forest (RF) - together with underwater video to map the seabed substrate and epibenthos of offshore shallows. In addition to testing the general applicability of the proposed complex of techniques, the predictive power of different sonar-based variables and modeling algorithms was tested. Mean depth, followed by mean backscatter, were the most influential variables in most of the models. Generally, mean values of sonar-based variables had higher predictive power than their standard deviations. The predictive accuracy of RF was higher than that of GAM. To conclude, we found the method to be feasible and with predictive accuracy similar to previous studies of sonar-based mapping.

  11. An integrated approach of analytical network process and fuzzy based spatial decision making systems applied to landslide risk mapping

    Science.gov (United States)

    Abedi Gheshlaghi, Hassan; Feizizadeh, Bakhtiar

    2017-09-01

    Landslides in mountainous areas cause major damage to residential areas, roads, and farmlands. Hence, one of the basic measures to reduce the possible damage is to identify landslide-prone areas through landslide mapping by different models and methods. The purpose of this study is to evaluate the efficacy of a combination of two models, the analytical network process (ANP) and fuzzy logic, in landslide risk mapping in the Azarshahr Chay basin in northwest Iran. After field investigations and a review of the research literature, factors affecting the occurrence of landslides, including slope, slope aspect, altitude, lithology, land use, vegetation density, rainfall, distance to faults, distance to roads, and distance to rivers, along with a map of the distribution of past landslides, were prepared in a GIS environment. Then, fuzzy logic was used for weighting the sub-criteria, and the ANP was applied to weight the criteria. Next, they were integrated based on GIS spatial analysis methods and the landslide risk map was produced. Evaluating the results of this study using receiver operating characteristic curves shows that the hybrid model, with an area under the curve of 0.815, has good accuracy. According to the prepared map, a total of 23.22% of the area, amounting to 105.38 km2, falls in the high and very high risk classes. The results of this research are of great importance for regional planning, and the landslide prediction map can be used for spatial planning tasks and for the mitigation of future hazards in the study area.
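
The area under the ROC curve reported above (0.815) can be computed without plotting, via the Mann-Whitney formulation: AUC is the probability that a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen non-landslide cell. A minimal sketch with made-up scores:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a positive (landslide) location scores
    higher than a negative one; ties count as 0.5 (Mann-Whitney U / (m*n))."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical susceptibility scores for validation cells:
auc_perfect = roc_auc([0.9, 0.8], [0.1, 0.2])   # complete separation
auc_partial = roc_auc([0.8, 0.3], [0.3, 0.1])   # one tie, some overlap
```

An AUC of 0.5 means the map is no better than chance; values around 0.8, as in the abstract, are conventionally read as good discrimination.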

  12. APPLYING PRINCIPAL COMPONENT ANALYSIS, MULTILAYER PERCEPTRON AND SELF-ORGANIZING MAPS FOR OPTICAL CHARACTER RECOGNITION

    Directory of Open Access Journals (Sweden)

    Khuat Thanh Tung

    2016-11-01

    Full Text Available Optical character recognition plays an important role in data storage and data mining as the number of documents stored as images increases. Effective ways of converting images of typewritten or printed text into machine-encoded text are needed to support information handling. In this paper, therefore, techniques used to convert images into editable text, such as principal component analysis, multilayer perceptron networks, self-organizing maps, and an improved multilayer neural network using principal component analysis, are evaluated experimentally. The results obtained indicate the effectiveness and feasibility of the proposed methods.

  13. GIS-based landslide susceptibility mapping models applied to natural and urban planning in Trikala, Central Greece

    Directory of Open Access Journals (Sweden)

    Skilodimou, H. D.

    2009-06-01

    Full Text Available Landslide susceptibility mapping is a practical tool in natural and urban planning; it can be applied to determining land use zones, and in construction design and the planning of a variety of projects. In this study, two different GIS-based landslide susceptibility maps were generated for the mountainous part of the Trikala Prefecture in Thessaly, Central Greece. This was accomplished by using different methods for correlating the factors which have an effect on landslide occurrences. The instability factors taken into account were: lithology, tectonic features, slope gradients, road network, drainage network, land use and rainfall. A frequency distribution of half of the landslide events of the study area in each class of the instability factors was performed in order to rate the classes. Two models were used to combine the instability factors and assess the overall landslide susceptibility, namely the Weight Factor Model (WeF), which is a statistical method, and the Multiple Factor Model (MuF), which is a logical method. The produced maps were classified into four zones, Low, Moderate, High and Very High susceptibility, and validated using the other half of the landslide events of the area. Evaluation of the results is optimized through a Landslide Models Indicator (La.M.I.).

  14. Improving performances of the knee replacement surgery process by applying DMAIC principles.

    Science.gov (United States)

    Improta, Giovanni; Balato, Giovanni; Romano, Maria; Ponsiglione, Alfonso Maria; Raiola, Eliana; Russo, Mario Alessandro; Cuccaro, Patrizia; Santillo, Liberatina Carmela; Cesarelli, Mario

    2017-12-01

    The work is part of a project on the application of Lean Six Sigma to improve health care processes. A previously published work regarding hip replacement surgery has shown promising results. Here, we propose an application of the DMAIC (Define, Measure, Analyse, Improve, and Control) cycle to improve quality and reduce costs related to prosthetic knee replacement surgery by decreasing patients' length of hospital stay (LOS). METHODS: The DMAIC cycle has been adopted to decrease the patients' LOS. The University Hospital "Federico II" of Naples, one of the most important university hospitals in Southern Italy, participated in this study. Data on 148 patients who underwent prosthetic knee replacement between 2010 and 2013 were used. Process mapping, statistical measures, brainstorming activities, and comparative analysis were performed to identify factors influencing LOS and improvement strategies. The study allowed the identification of variables influencing the prolongation of the LOS and the implementation of corrective actions to improve the process of care. The adopted actions reduced the LOS by 42%, from a mean value of 14.2 to 8.3 days (the standard deviation also decreased, from 5.2 to 2.3 days). The DMAIC approach has proven to be a helpful strategy ensuring a significant decrease in the LOS. Furthermore, through its implementation, a significant reduction in the average cost of hospital stay can be achieved. Such a versatile approach could be applied to improve a wide range of health care processes. © 2017 John Wiley & Sons, Ltd.
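
The headline figure above is plain arithmetic: the relative reduction between the before and after means. A one-liner confirms that 14.2 to 8.3 days is indeed the ~42% quoted (the function name is ours, not the paper's):

```python
def percent_reduction(before, after):
    """Relative reduction, in percent, from a baseline value."""
    return 100.0 * (before - after) / before

los_mean_reduction = percent_reduction(14.2, 8.3)  # ~41.5 %, quoted as 42 %
los_sd_reduction = percent_reduction(5.2, 2.3)     # spread also narrowed
```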

  15. Applying parcel-specific land-use data to map conflicts and convergences between agriculture and biodiversity in Denmark

    DEFF Research Database (Denmark)

    Levin, Gregor

    2010-01-01

    Provision of semi-natural habitats is an important service of agricultural ecosystems. The quality and extent of semi-natural habitats are closely linked with the intensity of agricultural management. Grass-dominated habitat types often depend on extensive management in terms of grazing or mowing. Lack ... compositions of semi-natural habitats disappear. For Denmark, we apply parcel-specific data on agricultural land use to map convergences and conflicts between agriculture and biodiversity. We group land uses into intensive and extensive and overlay these with a map of grass-dominated semi-natural habitats. 61 % of habitats overlap with extensively managed land, indicating convergence between agriculture and biodiversity. In contrast, 13 % of habitats overlap with intensively managed land, pointing at severe conflicts between agriculture and biodiversity. 27 % of habitats are located outside any agricultural land...

  16. Concept mapping applied to the intersection between older adults' outdoor walking and the built and social environments.

    Science.gov (United States)

    Hanson, Heather M; Schiller, Claire; Winters, Meghan; Sims-Gould, Joanie; Clarke, Philippa; Curran, Eileen; Donaldson, Meghan G; Pitman, Beverley; Scott, Vicky; McKay, Heather A; Ashe, Maureen C

    2013-12-01

    For older adults, the ability to navigate walking routes in the outdoor environment allows them to remain active and socially engaged, facilitating community participation and independence. In order to enhance outdoor walking, it is important to understand the interaction of older adults within their local environments and the influence of broader stakeholder priorities that impact these environments. Thus, we aimed to synthesize perspectives from stakeholders to identify elements of the built and social environments that influence older adults' ability to walk outdoors. We applied a concept mapping approach with the input of diverse stakeholders (N=75) from British Columbia, Canada in 2012. A seven-cluster map best represented areas that influence older adults' outdoor walking. Priority areas identified included sidewalks, crosswalks, and neighborhood features. Individual perceptions and elements of the built and social environments intersect to influence walking behaviors, although targeted studies that address this area are needed. © 2013.

  17. Condition monitoring and signature analysis techniques as applied to Madras Atomic Power Station (MAPS) [Paper No.: VIA - 1

    International Nuclear Information System (INIS)

    Rangarajan, V.; Suryanarayana, L.

    1981-01-01

    The technique of vibration signature analysis for identifying machine troubles in their early stages is explained. The advantage is that timely corrective action can be planned to avoid breakdowns and unplanned shutdowns. At the Madras Atomic Power Station (MAPS), this technique is applied to regularly monitor vibrations of equipment and thus serves as a tool for the corrective maintenance of equipment. Case studies of the application of this technique to main boiler feed pumps, moderator pump motors, a centrifugal chiller, ventilation system fans, thermal shield ventilation fans, filtered water pumps, emergency process sea water pumps, and antifriction bearings of MAPS are presented. Condition monitoring during commissioning and subsequent operation could indicate defects. The corrective actions which were taken are described. (M.G.B.)

  18. Fuzzy cognitive maps for applied sciences and engineering from fundamentals to extensions and learning algorithms

    CERN Document Server

    2014-01-01

    Fuzzy Cognitive Maps (FCM) constitute cognitive models in the form of fuzzy directed graphs consisting of two basic elements: the nodes, which basically correspond to “concepts” bearing different states of activation depending on the knowledge they represent, and the “edges” denoting the causal effects that each source node exercises on the receiving concept expressed through weights. Weights take values in the interval [-1,1], which denotes the positive, negative or neutral causal relationship between two concepts. An FCM can be typically obtained through linguistic terms, inherent to fuzzy systems, but with a structure similar to the neural networks, which facilitates data processing, and has capabilities for training and adaptation. During the last 10 years, an exponential growth of published papers in FCMs was followed showing great impact potential. Different FCM structures and learning schemes have been developed, while numerous studies report their use in many contexts with highly successful m...
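A minimal sketch of FCM inference as described above: activations propagate through the weighted causal edges and are squashed back into a bounded range. A sigmoid transfer function and a hypothetical three-concept map are assumed, using the common variant in which each concept also retains its own previous activation:

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(state, W, lam=1.0):
    # Column j of W holds the causal weights arriving at concept j;
    # each concept also keeps a memory of its own previous activation.
    return sigmoid(state @ W + state, lam)

# Hypothetical three-concept map: C0 promotes C1 (+0.7), C1 promotes C2 (+0.5),
# and C2 inhibits C0 (-0.6). Weights live in [-1, 1] as the abstract describes.
W = np.array([[ 0.0, 0.7, 0.0],
              [ 0.0, 0.0, 0.5],
              [-0.6, 0.0, 0.0]])

state = np.array([0.5, 0.5, 0.5])
for _ in range(50):
    state = fcm_step(state, W)   # iterate until activations settle
```

With these weights the map settles to a fixed point; richer maps can also exhibit limit cycles, which is part of what FCM learning algorithms must contend with.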

  19. Crystal orientation mapping applied to the Y-TZP/WC composite

    CERN Document Server

    Faryna, M; Sztwiertnia, K

    2002-01-01

    Crystal orientation measurements made by electron backscattered diffraction (EBSD) in the scanning electron microscope (SEM) and microscopic observations provided the basis for a quantitative investigation of microstructure in an yttria stabilized, tetragonal zirconia-based (Y-TZP) composite. Automatic crystal orientation mapping (ACOM) in a SEM can be preferable to transmission electron microscopy (TEM) for microstructural characterization, since no sample thinning is required, extensive crystal data is already available, and the analysis area is greatly increased. A composite with a 20 vol.% tungsten carbide (WC) content was chosen since it revealed crystal relationships between the matrix and carbide phase already established by TEM analysis. However, this composite was difficult to investigate in the EBSD/ SEM since it is non-conductive, the Y-TZP grain size is of the order of the system resolution, and the sample surface, though carefully prepared, reveals a distinctive microtopography. In this paper, so...

  20. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    Science.gov (United States)

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.
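The premise of the paper, that the bit positions of a higher-order modulation form bit channels of unequal reliability which an optimized mapping can exploit, can be illustrated with a toy Monte Carlo for Gray-mapped 4-PAM. This is a deliberately simplified stand-in, not the paper's polarization-multiplexed LDPC setup:

```python
import numpy as np

# Gray-labelled 4-PAM: adjacent levels differ in exactly one bit.
LEVELS = np.array([-3.0, -1.0, 1.0, 3.0])
BITS = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # rows: (MSB, LSB) per level

rng = np.random.default_rng(1)
n = 50_000
sym = rng.integers(0, 4, n)                    # uniform random symbols
rx = LEVELS[sym] + rng.standard_normal(n)      # unit-variance AWGN channel

det = np.abs(rx[:, None] - LEVELS[None, :]).argmin(axis=1)  # nearest level

# Error rate of each bit position: the two "bit channels" are NOT equally
# reliable, which is what an optimized coded-bit mapping exploits.
ber = (BITS[sym] != BITS[det]).mean(axis=0)
```

At this noise level the LSB channel sees roughly twice the error rate of the MSB channel; assigning the code's more protected bits to the weaker bit channels is the degree of freedom the paper optimizes.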

  1. Performance analysis of a compact and low-cost mapping-grade mobile laser scanning system

    Science.gov (United States)

    Julge, Kalev; Vajakas, Toivo; Ellmann, Artu

    2017-10-01

The performance of a low-cost, self-contained, compact, and easy-to-deploy mapping-grade mobile laser scanning (MLS) system, composed of a Velodyne VLP-16 light detection and ranging sensor and an SBG Systems Ellipse-D dual-antenna global navigation satellite system/inertial navigation system, is analyzed. The field tests were carried out in car-mounted and backpack modes for surveying road engineering structures (such as roads, parking lots, underpasses, and tunnels) and coastal erosion zones, respectively. The impact of the applied calculation principles on trajectory postprocessing, direct georeferencing, and the theoretical accuracy of the system is analyzed. A calibration method, based on Bound Optimization BY Quadratic Approximation (BOBYQA), for finding the boresight angles of an MLS system is proposed. The resulting MLS point clouds are compared with high-accuracy static terrestrial laser scanning data and survey-grade MLS data from a commercially manufactured MLS system. The vertical, horizontal, and relative accuracy were assessed; the root-mean-square error (RMSE) values were determined to be 8, 15, and 3 cm, respectively. Thus, the achieved mapping-grade accuracy demonstrates that this relatively compact and inexpensive self-assembled MLS can be successfully used for surveying the geometry and deformations of terrain, buildings, roads, and other engineering structures.
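The accuracy figures above come from comparing MLS points against reference data; the underlying RMSE computation is easy to reproduce. The check-point values below are made up purely for illustration:

```python
import numpy as np

def rmse(errors):
    """Root-mean-square error of a vector of residuals."""
    return float(np.sqrt(np.mean(np.square(errors))))

# Hypothetical check points: MLS-derived heights vs. TLS reference heights (m).
mls_z = np.array([10.02, 9.95, 10.11, 9.87, 10.06])
tls_z = np.array([10.00, 10.00, 10.00, 10.00, 10.00])

vertical_rmse = rmse(mls_z - tls_z)   # ~0.084 m for these made-up residuals
```

The horizontal and relative accuracies in the abstract are obtained the same way, just with planimetric and point-to-point residuals in place of height differences.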

  2. Effects of Concept Mapping Strategy on Learning Performance in Business and Economics Statistics

    Science.gov (United States)

    Chiou, Chei-Chang

    2009-01-01

    A concept map (CM) is a hierarchically arranged, graphic representation of the relationships among concepts. Concept mapping (CMING) is the process of constructing a CM. This paper examines whether a CMING strategy can be useful in helping students to improve their learning performance in a business and economics statistics course. A single…

  3. The Use of Causal Mapping in the Design of Sustainability Performance Measurement Systems

    DEFF Research Database (Denmark)

    Parisi, Cristiana

    2013-01-01

… organisations’ strategic performance measurement systems (SPMSs). This study’s main contribution is the triangulation of multiple qualitative methods to enhance the reliability of causal maps. This innovative approach supports the use of causal mapping to extract managerial tacit knowledge in order to identify …

  4. Addressing the Influence of Hidden State on Wireless Network Optimizations using Performance Maps

    DEFF Research Database (Denmark)

    Højgaard-Hansen, Kim; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2015-01-01

Performance of wireless connectivity for network client devices is location dependent. It has been shown that it can be beneficial to collect network performance metrics along with location information to generate maps of the location dependent network performance. These performance maps can be used to optimize the use of the wireless network by predicting future network performance and scheduling the network communication for certain applications on mobile devices. However, other important factors influence the performance of the wireless communication, such as changes in the propagation environment and resource sharing. In this work we extend the framework of performance maps for wireless networks by introducing network state as an abstraction for all factors other than location that influence the performance. Since network state might not always be directly observable, the framework …

  5. Self-organizing maps applied to two-phase flow on natural circulation loop studies

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Leonardo F.; Cunha, Kelly de P.; Andrade, Delvonei A.; Sabundjian, Gaiane; Torres, Walmir M.; Macedo, Luiz A.; Rocha, Marcelo da S.; Masotti, Paulo H.F.; Mesquita, Roberto N. de, E-mail: rnavarro@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

Two-phase flow of liquid and gas is found in many closed circuits that use natural circulation for cooling purposes. Natural circulation is important in recent nuclear power plant designs for heat removal during 'loss of pump power' or 'plant shutdown' accidents. The accuracy of heat transfer estimation has been improved by models that require precise prediction of flow pattern transitions. The Natural Circulation Facility (Circuito de Circulacao Natural - CCN), installed at Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN), is an experimental circuit designed to provide thermal hydraulic data related to one- and two-phase flow under natural circulation conditions; its periodic flow oscillation behavior can be observed thoroughly thanks to the transparency of its glass-made tubes. Self-Organizing Maps are trained on digital images acquired during natural circulation flow instabilities. This technique allows the selection of the most important characteristics associated with each flow pattern, enabling a better comprehension of each observed instability. (author)

  6. Automatic and efficient methods applied to the binarization of a subway map

    Science.gov (United States)

    Durand, Philippe; Ghorbanzadeh, Dariush; Jaupi, Luan

    2015-12-01

The purpose of this paper is the study of efficient methods for image binarization, applied to metro maps. The goal is to binarize the maps while preventing noise from disturbing the reading of subway stations. Different methods were tested; among them, Otsu's method gives particularly interesting results. The difficulty of binarization lies in choosing the threshold so that the reconstructed image stays as close as possible to reality. Vectorization is a step subsequent to binarization: it retrieves the coordinates of the points containing information and stores them in two matrices, X and Y. Subsequently, these matrices can be exported to a 'CSV' (Comma Separated Values) file, which can be processed in a variety of software tools, including Excel. The algorithm is computationally expensive in Matlab because it is composed of two nested 'for' loops, which Matlab handles poorly, especially when nested; this penalizes the computation time, but appears to be the only way to perform this step.
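Otsu's method, referenced in the abstract, picks the threshold that maximizes the between-class variance of the gray-level histogram. A self-contained sketch, followed by the vectorization step that collects foreground coordinates into X and Y (the toy "map" image is synthetic):

```python
import numpy as np

def otsu_threshold(image):
    """Exhaustive Otsu: the gray level maximizing between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Noise-free toy "map": dark line work (~40) on a light background (~200).
img = np.full((64, 64), 200, dtype=np.uint8)
img[30:34, :] = 40                      # a horizontal "metro line"
binary = img < otsu_threshold(img)      # True where line work is present

# Vectorization step from the abstract: foreground coordinates into X and Y.
Y, X = np.nonzero(binary)
```

From here the two coordinate arrays could be written out with `np.savetxt` in CSV form, matching the export step the abstract describes.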

  7. Proportional odds model applied to mapping of disease resistance genes in plants

    Directory of Open Access Journals (Sweden)

    Maria Helena Spyrides-Cunha

    2000-03-01

Full Text Available Molecular markers have been used extensively to map quantitative trait loci (QTL) controlling disease resistance in plants. Mapping is usually done by establishing a statistical association between molecular marker genotypes and quantitative variations in disease resistance. However, most statistical approaches require a continuous distribution of the response variable, a requirement not always met, since evaluation of disease resistance is often done using visual ratings based on an ordinal scale of disease severity. This paper discusses the application of the proportional odds model to the mapping of disease resistance genes in plants amenable to expression as ordinal data. The model was used to map two resistance QTL of maize to Puccinia sorghi. The microsatellite markers bngl166 and bngl669, located on chromosomes 2 and 8, respectively, were used to genotype F2 individuals from a segregating population. Genotypes at each marker locus were then compared by assessing disease severity, on an ordinal severity scale, in F3 plants derived from the selfing of each genotyped F2 plant. The residual deviance and the chi-square score statistic indicated a good fit of the model to the data, and the odds had a constant proportionality at each threshold. Single-marker analyses detected significant differences among marker genotypes at both marker loci, indicating that these markers were linked to disease resistance QTL. The inclusion of the interaction term after single-marker analysis provided strong evidence of an epistatic interaction between the two QTL. These results indicate that the proportional odds model can be used as an alternative to traditional methods in cases where the response variable consists of an ordinal scale, thus eliminating the problems of heteroscedasticity, non-linearity, and non-normality of residuals often associated with this type of data.
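The defining property of the proportional odds model, a single genotype effect shifting every cumulative logit by the same amount, can be shown numerically. The thresholds and effect size below are illustrative, not the paper's estimates:

```python
import numpy as np

def cumulative_probs(thresholds, effect):
    """P(Y <= k) under a cumulative-logit (proportional odds) model."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(thresholds) - effect)))

# Hypothetical 4-category disease-severity scale: three thresholds,
# and a marker-genotype effect beta (illustrative numbers only).
thresholds = np.array([-1.0, 0.5, 2.0])
beta = 1.2

p_aa = cumulative_probs(thresholds, 0.0)    # genotype coded 0
p_bb = cumulative_probs(thresholds, beta)   # genotype coded 1

# Proportionality: the cumulative odds ratio equals exp(beta) at EVERY
# threshold; this is exactly the "constant proportionality" the paper tests.
odds = lambda p: p / (1 - p)
ratios = odds(p_aa) / odds(p_bb)

# Per-category probabilities recovered by differencing the cumulative curve.
cat_probs = np.diff(np.concatenate(([0.0], p_aa, [1.0])))
```

Fitting the thresholds and beta to real severity ratings would be done by maximum likelihood (e.g. via an ordinal regression routine); the snippet only demonstrates the model's structure.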

  8. MaMR: High-performance MapReduce programming model for material cloud applications

    Science.gov (United States)

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

With the increasing data size in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently on a hybrid shared-memory BSP model. An optimized data sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework yield effective performance improvements compared to previous work.
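The idea of running separate Map/Reduce pairs whose outputs are then combined in an extra merge phase can be sketched in plain Python. This is a toy stand-in for the MaMR framework, using word counts in place of material data:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Sequential toy MapReduce: map, shuffle by key, then reduce."""
    groups = defaultdict(list)
    for record in records:                     # map + shuffle
        for key, value in mapper(record):
            groups[key].append(value)
    return {k: reducer(k, v) for k, v in groups.items()}

tokenize = lambda line: [(w, 1) for w in line.split()]
count = lambda key, values: sum(values)

# Two related datasets handled by two concurrent Map/Reduce jobs in MaMR;
# here they simply run one after the other.
counts_a = map_reduce(["fe c fe", "c si"], tokenize, count)
counts_b = map_reduce(["si si fe"], tokenize, count)

def merge(*partials):
    """The extra merge phase: combine the outputs of several reduce jobs."""
    merged = defaultdict(int)
    for partial in partials:
        for key, value in partial.items():
            merged[key] += value
    return dict(merged)

total = merge(counts_a, counts_b)
```

In the real framework the two jobs would run concurrently with shared data; the snippet only shows the data flow through the added merge stage.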

  9. Constructing a strategy map for banking institutions with key performance indicators of the balanced scorecard.

    Science.gov (United States)

    Wu, Hung-Yi

    2012-08-01

    This study presents a structural evaluation methodology to link key performance indicators (KPIs) into a strategy map of the balanced scorecard (BSC) for banking institutions. Corresponding with the four BSC perspectives (finance, customer, internal business process, and learning and growth), the most important evaluation indicators of banking performance are synthesized from the relevant literature and screened by a committee of experts. The Decision Making Trial and Evaluation Laboratory (DEMATEL) method, a multiple criteria analysis tool, is then employed to determine the causal relationships between the KPIs, to identify the critical central and influential factors, and to establish a visualized strategy map with logical links to improve banking performance. An empirical application is provided as an example. According to the expert evaluations, the three most essential KPIs for banking performance are customer satisfaction, sales performance, and customer retention rate. The DEMATEL results demonstrate a clear road map to assist management in prioritizing the performance indicators and in focusing attention on the strategy-related activities of the crucial indicators. According to the constructed strategy map, management could better invest limited resources in the areas that need improvement most. Although these strategy maps of the BSC are not universal, the research results show that the presented approach is an objective and feasible way to construct strategy maps more justifiably. The proposed framework can be applicable to institutions in other industries as well. Copyright © 2011 Elsevier Ltd. All rights reserved.
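The DEMATEL computation itself is compact: normalize the direct-influence matrix, form the total-relation matrix T = X(I - X)^(-1), then read prominence (D + R) and relation (D - R) off its row and column sums. A sketch with a hypothetical 3-KPI influence matrix (the ratings are invented, not the paper's survey data):

```python
import numpy as np

def dematel(direct):
    """Total-relation matrix plus prominence/relation scores (DEMATEL)."""
    direct = np.asarray(direct, dtype=float)
    # Normalize so the series X + X^2 + ... converges.
    s = max(direct.sum(axis=1).max(), direct.sum(axis=0).max())
    X = direct / s
    T = X @ np.linalg.inv(np.eye(len(X)) - X)   # closed form of X + X^2 + ...
    D, R = T.sum(axis=1), T.sum(axis=0)         # dispatched / received influence
    return T, D + R, D - R                      # prominence, relation

# Hypothetical 0-4 expert ratings of mutual influence among three KPIs,
# e.g. [customer satisfaction, sales performance, customer retention rate].
A = np.array([[0, 3, 4],
              [1, 0, 2],
              [2, 1, 0]])

T, prominence, relation = dematel(A)
# relation > 0 marks a net cause, relation < 0 a net effect; high prominence
# marks the critical central factors that anchor the strategy map.
```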

  10. Machine-learning classifiers applied to habitat and geological substrate mapping offshore South Carolina

    Science.gov (United States)

    White, S. M.; Maschmeyer, C.; Anderson, E.; Knapp, C. C.; Brantley, D.

    2017-12-01

… classification, as the classifier confused flat parts with relatively flat sand data. 100% of the testing data representing rocky portions of the seafloor were correctly classified. The use of machine-learning classifiers to determine seafloor type provides a new solution to habitat mapping and offshore engineering problems.

  11. A network application for modeling a centrifugal compressor performance map

    Science.gov (United States)

    Nikiforov, A.; Popova, D.; Soldatova, K.

    2017-08-01

    The approximation of aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with mean squared error 1.5 %. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with mean squared error 2.45 %.
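A miniature version of the approach: a one-hidden-layer network fitted by plain gradient descent to a synthetic efficiency-vs-flow-coefficient curve. The data, architecture, and training setup are illustrative assumptions; the paper's 39 stages and reported errors are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stage "performance map": efficiency vs. flow coefficient,
# a smooth peaked curve standing in for real stage data.
phi = np.linspace(0.01, 0.08, 80)[:, None]
eta = 0.85 - 60.0 * (phi - 0.05) ** 2          # peak efficiency near phi = 0.05

x = (phi - phi.mean()) / phi.std()             # standardize the input
W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
mse0 = float(np.mean((pred0 - eta) ** 2))      # error before training

lr = 0.05
for _ in range(3000):                          # plain batch gradient descent
    h, pred = forward(x)
    err = pred - eta
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)           # backprop through tanh
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
mse = float(np.mean((pred - eta) ** 2))        # error after training
```

Once trained on many stages, such a network interpolates the performance map between measured operating points, which is the use case the abstract describes.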

  12. Factors Affecting Students' Performance and Practice on Map ...

    African Journals Online (AJOL)

Percentages were used to show the level of the students' performance and achievement. The findings suggest that possible interventions to help the students score high academic achievement should focus on teacher training, enabling students to work hard and persevere to succeed, and identifying effective study ...

  13. Mapping the Developmental Constraints on Working Memory Span Performance

    Science.gov (United States)

    Bayliss, Donna M.; Jarrold, Christopher; Baddeley, Alan D.; Gunn, Deborah M.; Leigh, Eleanor

    2004-01-01

    This study investigated the constraints underlying developmental improvements in complex working memory span performance among 120 children of between 6 and 10 years of age. Independent measures of processing efficiency, storage capacity, rehearsal speed, and basic speed of processing were assessed to determine their contribution to age-related…

  14. Mapping the developmental constraints on working memory span performance.

    Science.gov (United States)

    Bayliss, Donna M; Jarrold, Christopher; Baddeley, Alan D; Gunn, Deborah M; Leigh, Eleanor

    2005-07-01

    This study investigated the constraints underlying developmental improvements in complex working memory span performance among 120 children of between 6 and 10 years of age. Independent measures of processing efficiency, storage capacity, rehearsal speed, and basic speed of processing were assessed to determine their contribution to age-related variance in complex span. Results showed that developmental improvements in complex span were driven by 2 age-related but separable factors: 1 associated with general speed of processing and 1 associated with storage ability. In addition, there was an age-related contribution shared between working memory, processing speed, and storage ability that was important for higher level cognition. These results pose a challenge for models of complex span performance that emphasize the importance of processing speed alone.

  15. Situational Awareness Applied to Geology Field Mapping using Integration of Semantic Data and Visualization Techniques

    Science.gov (United States)

    Houser, P. I. Q.

    2017-12-01

    21st century earth science is data-intensive, characterized by heterogeneous, sometimes voluminous collections representing phenomena at different scales collected for different purposes and managed in disparate ways. However, much of the earth's surface still requires boots-on-the-ground, in-person fieldwork in order to detect the subtle variations from which humans can infer complex structures and patterns. Nevertheless, field experiences can and should be enabled and enhanced by a variety of emerging technologies. The goal of the proposed research project is to pilot test emerging data integration, semantic and visualization technologies for evaluation of their potential usefulness in the field sciences, particularly in the context of field geology. The proposed project will investigate new techniques for data management and integration enabled by semantic web technologies, along with new techniques for augmented reality that can operate on such integrated data to enable in situ visualization in the field. The research objectives include: Develop new technical infrastructure that applies target technologies to field geology; Test, evaluate, and assess the technical infrastructure in a pilot field site; Evaluate the capabilities of the systems for supporting and augmenting field science; and Assess the generality of the system for implementation in new and different types of field sites. Our hypothesis is that these technologies will enable what we call "field science situational awareness" - a cognitive state formerly attained only through long experience in the field - that is highly desirable but difficult to achieve in time- and resource-limited settings. 
Expected outcomes include elucidation of how, and in what ways, these technologies are beneficial in the field; enumeration of the steps and requirements to implement these systems; and cost/benefit analyses that evaluate under what conditions the investments of time and resources are advisable to construct …

  16. Applying persistent scatterer interferometry for surface displacement mapping in the Azul open pit manganese mine (Amazon region) with TerraSAR-X StripMap data

    Science.gov (United States)

    Athayde Pinto, Carolina de; Paradella, Waldir Renato; Mura, José Claudio; Gama, Fabio Furlan; Ribeiro dos Santos, Athos; Silva, Guilherme Gregório; Hartwig, Marcos Eduardo

    2015-01-01

    The Azul mining complex, located in the Carajás Mineral Province, Amazon region, encompasses the most important manganese mine in Brazil. Vale S.A. company operates three simultaneous open pit excavations (mines 1, 2, and 3) in the area, which are conducted on rock alteration products of low geomechanical quality related to sandstones, siltstones, and a lateritic cover. In order to monitor ground deformation, 33 TerraSAR-X (TSX-1) StripMap images covering the period of March 2012-April 2013 were used in the investigation. An advanced differential interferometric synthetic aperture radar (A-DInSAR) approach based on persistent scatterer interferometry (PSI) using an interferometric point target analysis algorithm was applied, and the results showed that most of the area was considered stable during the time span of the synthetic aperture radar acquisitions. However, persistent scatterers (PS) with high deformation rates were mapped over a waste pile, probably related to settlements, and also along the north flank of mine 1, indicative of cut slope movements toward the center of the pit. A spatial relationship of geological structures with PS was observed for this sector of the mine, given by PS showing deformation rates concentrated along a structural corridor with faults, fractures, and folds related to the Carajás fault system. Though only ground-based radar measurements for wall benches of mine 1 were available for a short time period of the TSX-1 coverage, the PS movement patterns showed concordance with geotechnical field measurements. The investigation emphasized the important role that satellite-based A-DInSAR can play for deformation monitoring and risk assessment in this kind of mining area.
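The quantity underlying all PSI deformation rates is the conversion from unwrapped interferometric phase to line-of-sight displacement, d = -(λ/4π)·Δφ. For TerraSAR-X's X-band wavelength this makes one fringe worth about 15.5 mm of motion (the sign convention varies between processors, so it is stated explicitly below):

```python
import numpy as np

C = 299_792_458.0           # speed of light, m/s
F_X_BAND = 9.65e9           # TerraSAR-X centre frequency, Hz
WAVELENGTH = C / F_X_BAND   # ~0.031 m

def los_displacement(delta_phase_rad):
    """Line-of-sight displacement from unwrapped interferometric phase.
    Convention assumed here: positive phase = motion away from the sensor."""
    return -WAVELENGTH / (4 * np.pi) * delta_phase_rad

# One full fringe (2*pi of phase) corresponds to half a wavelength of motion.
one_fringe_mm = abs(los_displacement(2 * np.pi)) * 1000.0
```

A PSI chain such as the interferometric point target analysis used in the paper estimates this phase per persistent scatterer over the image stack, then fits a deformation rate to the time series.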

  17. Volume perfusion CT imaging of cerebral vasospasm: diagnostic performance of different perfusion maps

    Energy Technology Data Exchange (ETDEWEB)

    Othman, Ahmed E. [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); Eberhard Karls University Tuebingen, University Hospital Tuebingen, Department for Diagnostic and Interventional Radiology, Tuebingen (Germany); Afat, Saif; Nikoubashman, Omid; Mueller, Marguerite; Wiesmann, Martin; Brockmann, Carolin [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); Schubert, Gerrit Alexander [RWTH Aachen University, Department of Neurosurgery, Aachen (Germany); Bier, Georg [Eberhard Karls University Tuebingen, University Hospital Tuebingen, Department for Diagnostic and Interventional Neuroradiology, Tuebingen (Germany); Brockmann, Marc A. [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); University Hospital Mainz, Department of Neuroradiology, Mainz (Germany)

    2016-08-15

    In this study, we aimed to evaluate the diagnostic performance of different volume perfusion CT (VPCT) maps regarding the detection of cerebral vasospasm compared to angiographic findings. Forty-one datasets of 26 patients (57.5 ± 10.8 years, 18 F) with subarachnoid hemorrhage and suspected cerebral vasospasm, who underwent VPCT and angiography within 6 h, were included. Two neuroradiologists independently evaluated the presence and severity of vasospasm on perfusion maps on a 3-point Likert scale (0 - no vasospasm, 1 - vasospasm affecting <50 %, 2 - vasospasm affecting >50 % of vascular territory). A third neuroradiologist independently assessed angiography for the presence and severity of vasospasm on a 3-point Likert scale (0 - no vasospasm, 1 - vasospasm affecting < 50 %, 2 - vasospasm affecting > 50 % of vessel diameter). Perfusion maps of cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), and time to drain (TTD) were evaluated regarding diagnostic accuracy for cerebral vasospasm with angiography as reference standard. Correlation analysis of vasospasm severity on perfusion maps and angiographic images was performed. Furthermore, inter-reader agreement was assessed regarding findings on perfusion maps. Diagnostic accuracy for TTD and MTT was significantly higher than for all other perfusion maps (TTD, AUC = 0.832; MTT, AUC = 0.791; p < 0.001). TTD revealed higher sensitivity than MTT (p = 0.007). The severity of vasospasm on TTD maps showed significantly higher correlation levels with angiography than all other perfusion maps (p ≤ 0.048). Inter-reader agreement was (almost) perfect for all perfusion maps (kappa ≥ 0.927). The results of this study indicate that TTD maps have the highest sensitivity for the detection of cerebral vasospasm and highest correlation with angiography regarding the severity of vasospasm. (orig.)
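The inter-reader agreement reported above is Cohen's kappa, which discounts the observed agreement by the agreement expected from the raters' marginal distributions alone. A sketch with hypothetical 3-point Likert ratings (not the study's data):

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected inter-reader agreement for categorical ratings."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                        # observed
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)   # by chance
    return (po - pe) / (1 - pe)

# Hypothetical vasospasm scores (0/1/2) from two independent neuroradiologists.
r1 = [0, 0, 1, 1, 2, 2, 1, 0]
r2 = [0, 0, 1, 2, 2, 2, 1, 0]
kappa = cohens_kappa(r1, r2)
```

Values above roughly 0.8 are conventionally read as "almost perfect" agreement, which is the band the study reports for all perfusion maps.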

  18. Using Mind Maps to Improve Medical Student Performance in a Pharmacology Course at Kunming Medical University.

    Science.gov (United States)

    Ying, Guo; Jianping, Xie; Haiyun, Luo; Xia, Li; Jianyu, Yang; Qun, Xuan; Jianyun, Yu

    2017-07-01

To determine whether students using mind maps would improve their performance in a final examination at the end of a lecture-based pharmacology course. A quasi-experimental study. Kunming Medical University, from September 2014 to January 2015. One hundred and twenty-two (122) third-year undergraduate medical students, starting a 48-hour lecture-based pharmacology course, volunteered to use mind maps as one of their study strategies (intervention group), while the remaining 100 students in the class continued to use their usual study strategies (control group) over the duration of the course. The performance of both groups in the final course examination was compared. Students in the intervention group also completed a questionnaire on the usefulness of mind maps during the course and in preparation for the final examination. The performance of the intervention group was superior to that of the control group in all parts of a multi-modal final examination. For the multiple choice questions and comprehensive scores, average marks of 45.97 ±7.22 and 68.07 ±12.77, respectively, were acquired by the control group, and 51.77 ±4.95 (pcontrol group, and were all significantly higher at 8.00 (4.00) (p=0.024), 10.00 (2.00) (pmind maps helped them to prepare more efficiently for the final exam; 90.91% believed that mind maps helped them to better understand all of pharmacology. Ninety-one percent also thought that mind maps would help them to better understand other disciplines, and 86.36% of students would like the lecturers to utilize mind mapping as an alternative to conventional teaching formats, such as the use of PowerPoint. The addition of mind maps to students' study of pharmacology at Kunming Medical University improved their performance in all aspects of a multi-modal final examination.

  19. Self-organizing maps of Kohonen (SOM) applied to multidimensional monitoring data of the IEA-R1 nuclear research reactor

    International Nuclear Information System (INIS)

    Affonso, Gustavo S.; Pereira, Iraci M.; Mesquita, Roberto N. de; Bueno, Elaine I.

    2011-01-01

Multivariate statistics comprise a set of statistical methods used in situations where many variables form subsets of a database space. Initially applied to the human, social and biological sciences, these methods are now being applied to many other areas, such as education, geology, chemistry, physics, and engineering. This expansion was made possible by recent technological development of computation hardware and software that allows large and complex databases to be treated iteratively, enabling further analysis. Following this trend, the neural networks called Self-Organizing Maps (SOM) are becoming a powerful tool for the visualization of implicit and unknown correlations in large database sets. Originally created by Kohonen in 1981, the SOM was first applied to speech recognition tasks and is now used as a comparative parameter to evaluate the performance of new multidimensional analysis methodologies. Most of these methods require good variable input selection criteria, and SOM has contributed to the clustering, classification and prediction of multidimensional engineering process variables. This work proposes a method of applying SOM to a set of 58 IEA-R1 operational variables at the IPEN research reactor, which are monitored by a Data Acquisition System (DAS). This data set includes variables such as temperature, mass flow rate, coolant level, nuclear radiation, nuclear power and control bar position. The DAS enables the creation and storage of historical data, which are used to contribute to the development of a Failure Detection and Monitoring System. Results show good agreement with previous studies using other methods, such as GMDH and other predictive methods. (author)
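A minimal Kohonen SOM of the kind described above: a grid of weight vectors competitively pulled toward the data, with a learning rate and neighborhood that shrink over time. It is applied here to hypothetical standardized sensor readings from two plant states, not the reactor's 58 monitored variables:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Kohonen SOM trained by per-sample competitive updates."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.normal(size=(rows * cols, data.shape[1]))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.3     # decaying neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best match
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance
            h = np.exp(-d2 / (2 * sigma ** 2))                 # neighborhood
            weights += lr * h[:, None] * (x - weights)
    return weights

# Hypothetical standardized sensor readings clustered around two plant states.
rng = np.random.default_rng(1)
state_a = rng.normal([0, 0, 0], 0.1, (50, 3))
state_b = rng.normal([3, 3, 3], 0.1, (50, 3))
W = train_som(np.vstack([state_a, state_b]))

bmu = lambda x: int(np.argmin(((W - x) ** 2).sum(axis=1)))
```

After training, distinct operating states map to distinct regions of the grid, which is what makes the SOM usable for the monitoring and failure-detection task the abstract describes.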

  20. Self-organizing maps of Kohonen (SOM) applied to multidimensional monitoring data of the IEA-R1 nuclear research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Affonso, Gustavo S.; Pereira, Iraci M.; Mesquita, Roberto N. de, E-mail: rnavarro@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Bueno, Elaine I., E-mail: ebueno@ifsp.gov.b [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), SP (Brazil)

    2011-07-01

Multivariate statistics comprise a set of statistical methods used in situations where many variables form subsets of a database space. Initially applied to the human, social and biological sciences, these methods are now being applied to many other areas, such as education, geology, chemistry, physics, and engineering. This expansion was made possible by recent technological development of computation hardware and software that allows large and complex databases to be treated iteratively, enabling further analysis. Following this trend, the neural networks called Self-Organizing Maps (SOM) are becoming a powerful tool for the visualization of implicit and unknown correlations in large database sets. Originally created by Kohonen in 1981, the SOM was first applied to speech recognition tasks and is now used as a comparative parameter to evaluate the performance of new multidimensional analysis methodologies. Most of these methods require good variable input selection criteria, and SOM has contributed to the clustering, classification and prediction of multidimensional engineering process variables. This work proposes a method of applying SOM to a set of 58 IEA-R1 operational variables at the IPEN research reactor, which are monitored by a Data Acquisition System (DAS). This data set includes variables such as temperature, mass flow rate, coolant level, nuclear radiation, nuclear power and control bar position. The DAS enables the creation and storage of historical data, which are used to contribute to the development of a Failure Detection and Monitoring System. Results show good agreement with previous studies using other methods, such as GMDH and other predictive methods. (author)

  1. Effects of Different Forms of Concept-Map Representation on Nursing Students’ Critical Thinking Performances

    Directory of Open Access Journals (Sweden)

    Chin-Yuan Lai

    2015-06-01

    Full Text Available Representation is important for problem solving. This study examined the effects of different forms of concept maps on nursing students’ performance in conceptualizing psychiatric patients’ problems. A quasi-experimental research design was adopted. The participants were two classes of fourth-year students enrolled in a psychiatric nursing course at a nursing college. One class of 48 students served as the experimental group and used web-based concept maps to represent patients’ problems; the other class of 50 students served as the control group and used the traditional hierarchical concept-mapping method. The results indicated that the concept maps drawn by the experimental group showed more nursing problems, supporting evidence, and relationships between nursing problems than those drawn by the control group. The web-based concept maps helped expand students’ thinking and promoted their causal reasoning. Different concept-map representation tools affected the students’ problem-solving process. The experimental learning activities promoted students’ understanding of concepts and of approaches to psychiatric patient care. To understand the effects of other types of concept maps, future research may guide students in using different forms of concept maps throughout the stages of the nursing process.

  2. Optical high-performance computing: introduction to the JOSA A and Applied Optics feature.

    Science.gov (United States)

    Caulfield, H John; Dolev, Shlomi; Green, William M J

    2009-08-01

    The feature issues in both Applied Optics and the Journal of the Optical Society of America A focus on topics of immediate relevance to the community working in the area of optical high-performance computing.

  3. Mapping Plant Functional Types over Broad Mountainous Regions: A Hierarchical Soft Time-Space Classification Applied to the Tibetan Plateau

    Directory of Open Access Journals (Sweden)

    Danlu Cai

    2014-04-01

    Full Text Available Research on global climate change requires plant functional type (PFT) products. Although several PFT mapping procedures for remote sensing imagery are in use, none of them appears to be specifically designed to map and evaluate PFTs over broad mountainous areas, which are highly relevant regions for identifying and analyzing the response of natural ecosystems. We present a methodology for generating soft classifications of PFTs from remotely sensed time series, based on a hierarchical strategy that integrates time-varying integrated NDVI and phenological information with topography: (i) temporal variability: a Fourier transform of a vegetation index (MODIS NDVI, 2006 to 2010); (ii) spatial partitioning: a primary image segmentation based on a small number of thresholds applied to the Fourier amplitude; (iii) classification: a supervised soft classification step based on a normalized distance metric constructed from a subset of Fourier coefficients and complementary altitude data from a digital elevation model. Applicability and effectiveness are tested for the eastern Tibetan Plateau. A classification nomenclature is determined from temporally stable pixels in the MCD12Q1 time series. Overall accuracy statistics of the resulting classification reveal a gain of about 7 percentage points (64.4% compared to 57.7% for the MODIS PFT products).
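
    The first step, reducing each pixel's NDVI time series to Fourier amplitudes that later drive segmentation and classification, can be sketched on a synthetic series; the series, harmonic count, and threshold below are invented for illustration, not the study's values:

```python
import numpy as np

# Sketch of harmonic analysis of a per-pixel NDVI time series.
# Synthetic data: an annual cycle plus noise, not actual MODIS NDVI.
t = np.arange(46)                      # ~46 MODIS composites per year
ndvi = (0.4 + 0.25 * np.sin(2 * np.pi * t / 46)
        + 0.02 * np.random.default_rng(1).normal(size=46))

coeffs = np.fft.rfft(ndvi)
amplitude = np.abs(coeffs) / len(ndvi)

mean_ndvi = amplitude[0]               # 0th harmonic: time-integrated greenness
annual_amp = 2 * amplitude[1]          # 1st harmonic: strength of the annual cycle

# Step (ii) would threshold such amplitudes to partition the image, e.g.:
label = "seasonal vegetation" if annual_amp > 0.1 else "stable cover"
print(round(mean_ndvi, 2), label)
```

    In the full method these per-pixel amplitudes (and a subset of coefficients, plus elevation) feed the distance metric used for the soft classification.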

  4. From theory to practice: how to apply van Deth’s conceptual map in empirical political participation research

    DEFF Research Database (Denmark)

    Ohme, Jakob; de Vreese, Claes Holger; Albæk, Erik

    2018-01-01

    In a time when digitally networked and unconventional activities challenge our understanding of political participation, van Deth (Acta Polit 49(3):349–367, 2014) has developed a map to consolidate previous attempts at conceptualizing political participation. He suggests a framework operating… However, it remains a theoretical endeavor that needs to prove its utility when applied to the diverse set of participatory activities. Our study empirically tests how recently emerging participatory activities, such as crowdfunding or urban gardening, can conceptually be combined with more traditional forms… Our model furthermore indicates that the distinction between online and offline activities has decreased in relevance and that new and unconventional participation activities can be subsumed under van Deth’s four types of political participation.

  5. On the spatial and temporal resolution of land cover products for applied use in wind resource mapping

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Badger, Merete; Dellwik, Ebba

    CORINE land cover data were used as input for modelling the wind conditions over a Danish near-coastal region. The flow model results were compared to the alternative use of USGS land cover. Significant differences in wind speed were found between the two atmospheric flow model results. Furthermore, the modelled wind speed was compared to meteorological observations taken on a tall mast and from ground-based remote-sensing wind-profiling lidars. It is shown that, at the investigated site, simulations using CORINE provide better wind flow results close to the surface than those using USGS. The next step towards improving flow model inputs is to investigate in further detail the applied use of satellite maps in forested areas: 75% of new land-based wind farms in Europe are planned in or near forests, and in forested areas the near-surface atmospheric flow is more challenging to calculate than in regions with low…

  6. Book Review of “Human Behaviour and the Social Environment: Models, Metaphors, and Maps for Applying Theoretical Perspectives to Practice”. 640 pages, Thomson Brooks/Cole, 2007, by James A. Forte

    OpenAIRE

    Moula, Alireza

    2008-01-01

    This voluminous book, which draws on almost 1000 references, provides an important theoretical base for practice. After an informative introduction to models, maps and metaphors, Forte provides an impressive presentation of several perspectives for use in practice: applied ecological theory, applied system theory, applied biology, applied cognitive science, applied psychodynamic theory, applied behaviourism, applied symbolic interactionism, applied social role theory, applied economic theory…

  7. Potential Performance Theory (PPT): A General Theory of Task Performance Applied to Morality

    Science.gov (United States)

    Trafimow, David; Rice, Stephen

    2008-01-01

    People can use a variety of different strategies to perform tasks and these strategies all have two characteristics in common. First, they can be evaluated in comparison with either an absolute or a relative standard. Second, they can be used at varying levels of consistency. In the present article, the authors develop a general theory of task…

  8. Heterophobia: Subverting Heterosexual Hegemony through Intermedial Applied Performance for Young People

    Science.gov (United States)

    Phillips, Hannah

    2016-01-01

    This article responds to intermediality through a case study of an intermedial applied performance for young people. "Heterophobia," a hybrid fusion of live performance, digital technology, social media and urban street art, aimed to challenge homophobia in schools and online. Intermediality was used as a tool to enhance young people's…

  9. Applying value stream mapping techniques to eliminate non-value-added waste for the procurement of endovascular stents

    International Nuclear Information System (INIS)

    Teichgräber, Ulf K.; Bucourt, Maximilian de

    2012-01-01

    Objectives: To eliminate non-value-adding (NVA) waste for the procurement of endovascular stents in interventional radiology services by applying value stream mapping (VSM). Materials and methods: The Lean manufacturing technique was used to analyze the process of material and information flow currently required to direct endovascular stents from external suppliers to patients. Based on a decision point analysis for the procurement of stents in the hospital, a present state VSM was drawn. After assessment of the current state VSM and progressive elimination of unnecessary NVA waste, a future state VSM was drawn. Results: The current state VSM demonstrated that out of 13 processes for the procurement of stents, only 2 were value-adding. Of the NVA processes, 5 were unnecessary NVA activities, which could be eliminated. The decision point analysis demonstrated that the procurement of stents was mainly a forecast-driven push system. The future state VSM applies a pull inventory control system to trigger the movement of a unit after withdrawal by using a consignment stock. Conclusion: VSM is a visualization tool for the supply chain and value stream, based on the Toyota Production System, and greatly assists in successfully implementing a Lean system.
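
    The arithmetic behind such a map is simple: sum the value-adding time against the total lead time. A sketch with hypothetical process steps (the names and durations are invented, not the paper's measured values):

```python
# Hypothetical VSM bookkeeping: each step is (name, minutes, value-adding?).
steps = [
    ("order request", 10, False), ("approval", 30, False),
    ("supplier call", 15, False), ("delivery wait", 2880, False),
    ("stent placement", 45, True), ("documentation", 20, False),
]
lead_time = sum(t for _, t, _ in steps)            # total elapsed time
va_time = sum(t for _, t, va in steps if va)       # value-adding time only
print(f"VA ratio: {va_time / lead_time:.1%}")
```

    The future-state map improves this ratio by deleting unnecessary NVA steps and shrinking waits, e.g. via the consignment-stock pull system described above.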

  10. Applying value stream mapping techniques to eliminate non-value-added waste for the procurement of endovascular stents.

    Science.gov (United States)

    Teichgräber, Ulf K; de Bucourt, Maximilian

    2012-01-01

    OBJECTIVES: To eliminate non-value-adding (NVA) waste for the procurement of endovascular stents in interventional radiology services by applying value stream mapping (VSM). The Lean manufacturing technique was used to analyze the process of material and information flow currently required to direct endovascular stents from external suppliers to patients. Based on a decision point analysis for the procurement of stents in the hospital, a present state VSM was drawn. After assessment of the current state VSM and progressive elimination of unnecessary NVA waste, a future state VSM was drawn. The current state VSM demonstrated that out of 13 processes for the procurement of stents, only 2 were value-adding. Of the NVA processes, 5 were unnecessary NVA activities, which could be eliminated. The decision point analysis demonstrated that the procurement of stents was mainly a forecast-driven push system. The future state VSM applies a pull inventory control system to trigger the movement of a unit after withdrawal by using a consignment stock. VSM is a visualization tool for the supply chain and value stream, based on the Toyota Production System, and greatly assists in successfully implementing a Lean system. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  11. Capability Assessment and Performance Metrics for the Titan Multispectral Mapping Lidar

    Directory of Open Access Journals (Sweden)

    Juan Carlos Fernandez-Diaz

    2016-11-01

    Full Text Available In this paper we present a description of a new multispectral airborne mapping light detection and ranging (lidar) system, along with performance results obtained from two years of data collection and test campaigns. The Titan multiwave lidar is manufactured by Teledyne Optech Inc. (Toronto, ON, Canada) and emits laser pulses at the 1550, 1064 and 532 nm wavelengths simultaneously through a single oscillating-mirror scanner at pulse repetition frequencies (PRF) that range from 50 to 300 kHz per wavelength (max combined PRF of 900 kHz). The Titan system can perform simultaneous mapping in terrestrial and very shallow water environments, and its multispectral capability enables new applications, such as the production of false-color active imagery derived from the lidar return intensities and the automated classification of targets and land covers. Field tests and mapping projects performed over the past two years demonstrate capabilities to classify five land covers in urban environments with an accuracy of 90%, map bathymetry under more than 15 m of water, and map thick vegetation canopies at sub-meter vertical resolutions. In addition to its multispectral and performance characteristics, the Titan system is designed with several redundancy and diversity schemes that have proven beneficial for both operations and the improvement of data quality.

  12. High-Performance Signal Detection for Adverse Drug Events using MapReduce Paradigm.

    Science.gov (United States)

    Fan, Kai; Sun, Xingzhi; Tao, Ying; Xu, Linhao; Wang, Chen; Mao, Xianling; Peng, Bo; Pan, Yue

    2010-11-13

    Post-marketing pharmacovigilance is important for public health, as many Adverse Drug Events (ADEs) are unknown when drugs are approved for marketing. However, due to the large number of reported drugs and drug combinations, detecting ADE signals by mining these reports is becoming a challenging task in terms of computational complexity. Recently, a parallel programming model, MapReduce, was introduced by Google to support large-scale, data-intensive applications. In this study, we propose a MapReduce-based algorithm for a common ADE detection approach, the Proportional Reporting Ratio (PRR), and test it by mining spontaneous ADE reports from the FDA. The purpose is to investigate the possibility of using the MapReduce principle to speed up biomedical data mining tasks, using this pharmacovigilance case as one specific example. The results demonstrate that the MapReduce programming model can improve the performance of a common signal detection algorithm for pharmacovigilance in a distributed computation environment at approximately linear speedup rates.
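
    A minimal sketch of the idea, assuming the standard 2×2 contingency definition of PRR, i.e. PRR = (a/(a+b)) / (c/(c+d)); the drug/event data are invented, and a real deployment would distribute the map and reduce phases across worker nodes:

```python
from collections import Counter
from functools import reduce

# Toy spontaneous reports: (drug, event) pairs.
reports = ([("drugA", "nausea")] * 8 + [("drugA", "rash")] * 2
           + [("drugB", "nausea")] * 2 + [("drugB", "rash")] * 8)

# Map phase: each report emits a count for its (drug, event) key.
mapped = (Counter({pair: 1}) for pair in reports)
# Reduce phase: merge the partial counts (what MapReduce does across workers).
counts = reduce(lambda a, b: a + b, mapped, Counter())

def prr(drug, event):
    a = counts[(drug, event)]                                        # drug & event
    b = sum(v for (d, e), v in counts.items() if d == drug and e != event)
    c = sum(v for (d, e), v in counts.items() if d != drug and e == event)
    d_ = sum(v for (d, e), v in counts.items() if d != drug and e != event)
    return (a / (a + b)) / (c / (c + d_))

print(prr("drugA", "nausea"))  # values well above 1 suggest a disproportionate signal
```

    Because the reduce step is just an associative merge of counters, it parallelizes in exactly the way the record describes.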

  13. Achieving high signal-to-noise performance for a velocity-map imaging experiment

    International Nuclear Information System (INIS)

    Roberts, E.H.; Cavanagh, S.J.; Gibson, S.T.; Lewis, B.R.; Dedman, C.J.; Picker, G.J.

    2005-01-01

    Since the publication of the pioneering paper on velocity-map imaging by Eppink and Parker in 1997 [A.T.J.B. Eppink, D.H. Parker, Rev. Sci. Instrum. 68 (1997) 3477], numerous groups have applied this method in a variety of ways and to various targets. Despite this interest, however, little attention has been given to the inherent difficulties and problems associated with the method. In implementing a velocity-map imaging system for photoelectron spectroscopy of photo-detached anion radicals, we have developed a coaxial velocity-map imaging spectrometer. The advantages and disadvantages of such a system are examined, in particular the sources of noise and the methods used to reduce it.

  14. Phase Transition Mapping by Means of Neutron Imaging in SOFC Anode Supports During Reduction Under Applied Stress

    DEFF Research Database (Denmark)

    Makowska, Malgorzata; Strobl, M.; Lauridsen, E. M.

    2015-01-01

    The mechanical and electrochemical performance of Ni-YSZ cermet layers in solid oxide fuel and electrolysis cells (SOC) depends on their microstructure and initial internal stresses. After sintering, the manufacturing conditions, i.e. temperature, atmosphere and loads, can influence the microstructure, and in particular the internal stresses, in the Ni-YSZ layer and thereby the cell performance. Spatially resolved observation of the phase transition during reduction can provide information on how parameters such as temperature and external load influence the reaction progress. This information is crucial for optimization of SOC performance. In this work, measurements with energy-resolved neutron imaging of the phase transition during NiO-YSZ reduction, performed at different temperatures with and without applied load, are presented. The results indicate a link between reduction rate…

  15. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    Science.gov (United States)

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
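
    As an illustration of what such a simulation involves, here is a sketch of one common MPPT method (perturb and observe) running against a crude explicit PV model; the panel parameters below are invented, not taken from any manufacturer's datasheet, and the paper's own model is more detailed:

```python
import math

# Illustrative single-diode-style PV approximation (parameters are made up).
ISC, VOC, A = 8.0, 37.0, 1.6   # short-circuit current, open-circuit voltage, shape

def pv_current(v):
    # Explicit I-V approximation: I = Isc * (1 - exp((V - Voc) / A))
    return max(0.0, ISC * (1.0 - math.exp((v - VOC) / A)))

def mppt_po(v=20.0, step=0.2, iters=500):
    """Perturb & observe: keep stepping V; reverse direction when power drops."""
    p_prev, direction = 0.0, +1
    for _ in range(iters):
        p = v * pv_current(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
        v += direction * step
    return v, p_prev

v_mpp, p_mpp = mppt_po()
print(round(v_mpp, 1), round(p_mpp, 1))
```

    Comparing MPPT methods, as the record describes, amounts to running several such trackers against the same panel model under time-varying irradiance and temperature.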

  16. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Javier Cubas

    2015-01-01

    Full Text Available A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers’ datasheet, to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.

  17. Scaffolding EFL Oral Performance through Story Maps and Podcasts and Students’ Attitudes toward it

    Directory of Open Access Journals (Sweden)

    Mohammed Pazhouhesh

    2014-11-01

    Full Text Available The present study sought to explore the impact of story maps and audio podcasts, used as scaffolds, on the oral proficiency of Iranian EFL learners. The quasi-experimental study was conducted with 36 EFL undergraduates in three groups, adopting a counterbalanced 3 × 3 Latin square design. All participants were exposed, in a specified order, to the three treatment conditions of story retelling, story retelling plus story map, and story retelling plus podcast, and were post-tested sequentially. The Latin square analysis of the oral assessment scale showed statistically significant differences between the treatment conditions for the groups. The post-hoc test showed that participants performed better under the story retelling plus story map and story retelling plus podcast conditions, and performance under the podcast condition was significantly better than under the story map and short story conditions. A post-experiment opinion survey showed the learners’ preferences for, and positive attitudes towards, podcasts and story maps as scaffolds for developing EFL oral proficiency. The participants welcomed the integration of these scaffolds into EFL speaking courses.

  18. Transparency about multidimensional activities and performance: What can U-map and U-multirank contribute?

    NARCIS (Netherlands)

    Westerheijden, Donald F.; Rosa, Maria Joao; Amaral, Alberto

    2014-01-01

    Two new, user-driven and web-based transparency tools for higher education are presented: U-Map, a classification of higher education institutions according to their actual activities, and U-Multirank, a multidimensional ranking of higher education institutions’ and study fields’ performances.

  19. Viewing or Visualising Which Concept Map Strategy Works Best on Problem-Solving Performance?

    Science.gov (United States)

    Lee, Youngmin; Nelson, David W.

    2005-01-01

    The purpose of this study was to investigate the effects of two types of maps (generative vs. completed) and the amount of prior knowledge (high vs. low) on well-structured and ill-structured problem-solving performance. Forty-four undergraduates who were registered in an introductory instructional technology course participated in the study.…

  20. Designs of Concept Maps and Their Impacts on Readers' Performance in Memory and Reasoning while Reading

    Science.gov (United States)

    Tzeng, Jeng-Yi

    2010-01-01

    From the perspective of the Fuzzy Trace Theory, this study investigated the impacts of concept maps with two strategic orientations (comprehensive and thematic representations) on readers' performance of cognitive operations (such as perception, verbatim memory, gist reasoning and syntheses) while the readers were reading two history articles that…

  1. Globally Consistent Indoor Mapping via a Decoupling Rotation and Translation Algorithm Applied to RGB-D Camera Output

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2017-10-01

    Full Text Available This paper presents a novel RGB-D 3D reconstruction algorithm for the indoor environment. The method can produce globally-consistent 3D maps for potential GIS applications. As the consumer RGB-D camera provides a noisy depth image, the proposed algorithm decouples the rotation and translation for a more robust camera pose estimation, which makes full use of the information, but also prevents inaccuracies caused by noisy depth measurements. The uncertainty in the image depth is not only related to the camera device, but also the environment; hence, a novel uncertainty model for depth measurements was developed using Gaussian mixture applied to multi-windows. The plane features in the indoor environment contain valuable information about the global structure, which can guide the convergence of camera pose solutions, and plane and feature point constraints are incorporated in the proposed optimization framework. The proposed method was validated using publicly-available RGB-D benchmarks and obtained good quality trajectory and 3D models, which are difficult for traditional 3D reconstruction algorithms.

  2. Applying a social network analysis (SNA) approach to understanding radiologists' performance in reading mammograms

    Science.gov (United States)

    Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah

    2017-03-01

    Rationale and objectives: Observer performance has been widely studied by examining the characteristics of individuals. Applying a systems perspective to understanding the system's output, however, requires studying the interactions between observers. This research describes a mixed-methods approach that applies social network analysis (SNA), together with the more traditional approach of examining personal/individual characteristics, to understanding observer performance in mammography. Materials and methods: Using social network theories and measures, we designed a social networks survey instrument for collecting personal and network data about observers involved in mammography performance studies. We present the results of a study by our group in which 31 Australian breast radiologists reviewed 60 mammographic cases (20 abnormal and 40 normal) and then completed an online questionnaire about their social networks and personal characteristics. A jackknife free-response receiver operating characteristic (JAFROC) method was used to measure the radiologists' performance. The JAFROC scores were tested against various personal and network measures to verify the theoretical model. Results: The results suggest a strong association between social networks and observer performance for Australian radiologists. Network factors accounted for 48% of the variance in observer performance, compared to 15.5% for personal characteristics in this study group. Conclusion: This study suggests a strong new direction for research into improving observer performance. Future studies of observer performance should consider the influence of social networks as part of their research paradigm, with equal or greater vigour than the traditional constructs of personal characteristics.

  3. Applying Best Business Practices from Corporate Performance Management to DoD

    Science.gov (United States)

    2013-01-01

    Defense Business Board members have experience leading or governing large, complex corporations and in creating reliable solutions to complex management issues guided by best business practices; the Board provides recommendations and effective solutions aimed at improving DoD. Defense Business Board, Corporate Performance Management, Report FY13-03, Task Group 1: Applying Best Business Practices from Corporate Performance Management to DoD. TASK: The Deputy Secretary of Defense (DEPSECDEF

  4. A method to evaluate performance reliability of individual subjects in laboratory research applied to work settings.

    Science.gov (United States)

    1978-10-01

    This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...

  5. Students as Employees: Applying Performance Management Principles in the Management Classroom

    Science.gov (United States)

    Gillespie, Treena L.; Parry, Richard O.

    2009-01-01

    The student-as-employee metaphor emphasizes student accountability and participation in learning and provides instructors with work-oriented methods for creating a productive class environment. The authors propose that the tenets of performance management in work organizations can be applied to the classroom. In particular, they focus on three…

  6. Applying self-organizing map and modified radial based neural network for clustering and routing optimal path in wireless network

    Science.gov (United States)

    Hoomod, Haider K.; Kareem Jebur, Tuka

    2018-05-01

    Mobile ad hoc networks (MANETs) play a critical role in today’s wireless ad hoc network research and consist of active nodes that can move freely. Because routing is a very important problem in such networks, we propose a method based on modified radial basis function networks (RBFN) and the Self-Organizing Map (SOM). Performance can be improved by clustering, which relieves congestion across the whole network: the MANET is split into clusters using the SOM, and clustering performance is further improved through cluster-head selection and the choice of the number of clusters. The modified radial basis neural network is a simple, adaptable and efficient method to increase node lifetime; the packet delivery ratio and throughput of the network increase, and connections become more useful because the optimal path has the best parameters among candidate paths, including the best bit rate and the longest link lifetime with minimum delay. The proposed routing algorithm depends on a group of factors and parameters to select the path between two points in the wireless network. The SOM clustering time averaged 1-10 msec for stationary nodes and 8-75 msec for mobile nodes, while the routing time ranged from 92 to 510 msec. The proposed system is 150-300% faster than Dijkstra's algorithm and 145-180% faster than the unmodified RBFNN.

  7. The pro children intervention: applying the intervention mapping protocol to develop a school-based fruit and vegetable promotion programme.

    Science.gov (United States)

    Pérez-Rodrigo, Carmen; Wind, Marianne; Hildonen, Christina; Bjelland, Mona; Aranceta, Javier; Klepp, Knut-Inge; Brug, Johannes

    2005-01-01

    The importance of careful theory-based intervention planning is recognized for fruit and vegetable promotion. This paper describes the application of the Intervention Mapping (IM) protocol to develop the Pro Children intervention to promote the consumption of fruit and vegetables among 10- to 13-year-old schoolchildren. Based on a needs assessment, the promotion of fruit and vegetable intake was split into performance objectives and related personal, social and environmental determinants. Crossing the performance objectives with related important and changeable determinants resulted in a matrix of learning and change objectives, for which appropriate educational strategies were identified. Theoretically similar but culturally relevant interventions were designed, implemented and evaluated in Norway, the Netherlands and Spain during 2 school years. Programme activities included the provision of fruits and vegetables in the schools, guided classroom activities, computer-tailored feedback and advice for children, and activities to be completed at home with the family. Additionally, optional intervention components for community reinforcement included the incorporation of mass media, school health services or grocery stores. School project committees were supported. The Pro Children intervention was carefully developed based on the IM protocol, resulting in a comprehensive school-based fruit and vegetable promotion programme that is culturally sensitive and locally relevant. (c) 2005 S. Karger AG, Basel
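
    The "crossing" step described above, i.e. performance objectives against determinants with one change objective per cell, is essentially a Cartesian product. A sketch with invented labels (these are not the Pro Children matrices):

```python
# Hypothetical illustration of the IM crossing step: performance objectives
# x determinants -> a matrix of change objectives (all labels invented).
objectives = ["ask for fruit at home", "choose vegetables at lunch"]
determinants = ["knowledge", "self-efficacy", "availability"]
matrix = {(o, d): f"change objective: address {d} so pupils {o}"
          for o in objectives for d in determinants}
print(len(matrix))  # one cell per objective-determinant pair
```

    Each cell then gets an educational strategy in the later IM steps.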

  8. Preliminary Evaluation of MapReduce for High-Performance Climate Data Analysis

    Science.gov (United States)

    Duffy, Daniel Q.; Schnase, John L.; Thompson, John H.; Freeman, Shawn M.; Clune, Thomas L.

    2012-01-01

    MapReduce is an approach to high-performance analytics that may be useful to data intensive problems in climate research. It offers an analysis paradigm that uses clusters of computers and combines distributed storage of large data sets with parallel computation. We are particularly interested in the potential of MapReduce to speed up basic operations common to a wide range of analyses. In order to evaluate this potential, we are prototyping a series of canonical MapReduce operations over a test suite of observational and climate simulation datasets. Our initial focus has been on averaging operations over arbitrary spatial and temporal extents within Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. Preliminary results suggest this approach can improve efficiencies within data intensive analytic workflows.
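
    Averaging maps cleanly onto the paradigm because a mean can be carried as an associative (sum, count) pair. A minimal single-process sketch, with toy numbers standing in for distributed MERRA grid blocks:

```python
from functools import reduce

# Toy blocks of a gridded variable (e.g. temperature in K), one per "worker".
chunks = [[280.1, 281.3, 279.8], [282.0, 280.5], [279.9, 281.1, 280.7, 280.4]]

# Map: each worker reduces its local block to a partial (sum, count) pair.
partials = [(sum(block), len(block)) for block in chunks]

# Reduce: combine partials associatively -- order and grouping do not matter,
# which is what lets a framework merge results arriving from many nodes.
total, count = reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]), partials)
print(round(total / count, 2))
```

    Restricting the average to a spatial or temporal extent just means each worker filters its block before emitting its partial pair.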

  9. A voting-based statistical cylinder detection framework applied to fallen tree mapping in terrestrial laser scanning point clouds

    Science.gov (United States)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2017-07-01

    This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, as well as the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform, however two of its significant drawbacks are improved upon. First, the need to generate samples on the shape's surface is eliminated. Instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by means of constructing a kernel density estimator and obtaining its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on 3 plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. It was found that relative to sample consensus (SAC) cylinder fitting, the proposed voting framework can improve the detection completeness by up to 10 percentage points while maintaining the correctness rate.
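
    The continuous-voting idea, where each vote is a point in parameter space and a kernel density estimator replaces the discretized Hough accumulator, can be sketched in one dimension (a cylinder radius) with synthetic votes; the vote values and bandwidth below are invented for illustration:

```python
import numpy as np

# Synthetic votes for a cylinder radius (in metres): 300 votes clustered near
# the "true" trunk radius, plus 100 scattered clutter votes.
rng = np.random.default_rng(2)
votes = np.concatenate([rng.normal(0.25, 0.01, 300),
                        rng.uniform(0.05, 0.60, 100)])

def kde(grid, samples, bw):
    # Gaussian kernel density estimate evaluated on a grid of radii.
    z = (grid[:, None] - samples[None, :]) / bw
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(samples) * bw * np.sqrt(2 * np.pi))

grid = np.linspace(0.05, 0.60, 551)
density = kde(grid, votes, bw=0.01)
radius = grid[np.argmax(density)]     # local maximum of the vote density
print(round(radius, 2))
```

    The paper's framework does this in the cylinder's full parameter space with automatic, data-driven bandwidth selection rather than a fixed bandwidth.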

  10. Geographic Information Systems and geomorphological mapping applied to landslide inventory and susceptibility mapping in El Estado river, Pico de Orizaba, Mexico

    Directory of Open Access Journals (Sweden)

    José Fernando Aceves Quesada

    2016-11-01

    Full Text Available With the aim of raising awareness on the prevention of landslide disasters, this work develops a methodology that incorporates geomorphological mapping into the mapping of landslide susceptibility using Geographic Information Systems (GIS) and Multiple Logistic Regression (MLR). In Mexico, some studies have evaluated the stability of hillsides using GIS. However, these studies set a general framework and guidance (including basic concepts and explanations of landslide classification, triggering mechanisms, criteria, considerations, analysis for landslide hazard reconnaissance, etc.) for preparing a landslide atlas at state and city levels. So far, these have not developed a practical and standardized approach incorporating geomorphological maps into the landslide inventory using GIS. This paper describes the analysis conducted to develop an analytical technique and morphometric analysis for a multi-temporal landslide inventory. Three data management levels are used to create GIS thematic layers. For the first level, analogue topographic, geological, land-use, and climate paper maps are converted to raster format, georeferenced, and incorporated as GIS thematic layers. For the second level, five layers are derived from topographic elevation data: slope angles, slope curvature, contributing area, flow direction, and saturation. For the third level, thematic maps are derived from the previous two levels of data: a hypsometric map (heuristically classified to highlight altimetric levels), a reclassified slope map (highlighting differences in relief), and a morphographic map (derived from a heuristic reclassification of the slope map to highlight volcanic landforms). The theoretical aspects of geomorphological mapping help set the conceptual basis to support landslide mapping. The GIS thematic layers provide context and establish an overall characterization of landslide processes within the watershed. Through the retrieval and on

  11. A suggested approach toward measuring sorption and applying sorption data to repository performance assessment

    International Nuclear Information System (INIS)

    Rundberg, R.S.

    1992-01-01

    The prediction of radionuclide migration for the purpose of assessing the safety of a nuclear waste repository will be based on a collective knowledge of hydrologic and geochemical properties of the surrounding rock and groundwater. This knowledge, along with assumptions about the interactions of radionuclides with groundwater and minerals, forms the scientific basis for a model capable of accurately predicting the repository's performance. Because the interaction of radionuclides in geochemical systems is known to be complicated, several fundamental and empirical approaches to measuring the interaction between radionuclides and the geologic barrier have been developed. The approaches applied to the measurement of sorption involve the use of pure minerals, intact or crushed rock in dynamic and static experiments. Each approach has its advantages and disadvantages. There is no single best method for providing sorption data for performance assessment models which can be applied without invoking information derived from multiple experiments. 53 refs., 12 figs

  12. Analytical Performance Verification of FCS-MPC Applied to Power Electronic Converters

    DEFF Research Database (Denmark)

    Novak, Mateja; Dragicevic, Tomislav; Blaabjerg, Frede

    2017-01-01

    Since the introduction of finite control set model predictive control (FCS-MPC) in power electronics, the algorithm has been missing an important aspect that would speed up its implementation in industry: a simple method to verify the algorithm performance. This paper proposes to use a statistical model checking (SMC) method for performance evaluation of the algorithm applied to power electronics converters. SMC is simple to implement, intuitive and it requires only an operational model of the system that can be simulated and checked against properties. Device under test for control algorithm...

  13. Performance estimation of Tesla turbine applied in small scale Organic Rankine Cycle (ORC) system

    International Nuclear Information System (INIS)

    Song, Jian; Gu, Chun-wei; Li, Xue-song

    2017-01-01

    Highlights: • One-dimensional model of the Tesla turbine is improved and applied in ORC system. • Working fluid properties and system operating conditions impact efficiency. • The influence of turbine efficiency on ORC system performance is evaluated. • Potential of using Tesla turbine in ORC systems is estimated. - Abstract: Organic Rankine Cycle (ORC) system has been proven to be an effective method for low grade energy utilization. In small scale applications, the Tesla turbine offers an attractive option for the organic expander if an efficient design can be achieved. The Tesla turbine is simple in structure and easy to manufacture. This paper improves the one-dimensional model for the Tesla turbine, which adopts a non-dimensional formulation that identifies the dimensionless parameters that dictate the performance features of the turbine. The model is used to predict the efficiency of a Tesla turbine that is applied in a small scale ORC system. The influence of the working fluid properties and the operating conditions on the turbine performance is evaluated. Thermodynamic analysis of the ORC system with different organic working fluids and under various operating conditions is conducted. The simulation results reveal that the ORC system can generate a considerable net power output. Therefore, the Tesla turbine can be regarded as a potential choice to be applied in small scale ORC systems.

  14. Transfer map approach to the optical effects of energy degraders on the performance of fragment separators

    International Nuclear Information System (INIS)

    Erdelyi, B.; Bandura, L.; Nolen, J.

    2009-01-01

    A second-order analytical procedure and an arbitrary-order numerical procedure are developed for the computation of transfer maps of energy degraders. The incorporation of the wedges into the optics of fragment separators for next-generation exotic beam facilities, their optical effects, and the optimization of their performance are studied in detail. It is shown how to place and shape the degraders in the system such that aberrations are minimized and resolving powers are maximized

  15. Uranium exploration data and techniques applied to the preparation of radioelement maps. Proceedings of a technical committee meeting

    International Nuclear Information System (INIS)

    1997-11-01

    The report reviews the advantages and pitfalls of using uranium exploration data and techniques as well as other methods for the preparation of radioelement and radon maps for baseline information in environmental studies and monitoring

  16. Uranium exploration data and techniques applied to the preparation of radioelement maps. Proceedings of a technical committee meeting

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

    The report reviews the advantages and pitfalls of using uranium exploration data and techniques as well as other methods for the preparation of radioelement and radon maps for baseline information in environmental studies and monitoring. Refs, figs, tabs.

  17. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    Science.gov (United States)

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for the cancer tissues to be expertly identified and classified in a rapid and timely manner, to assure both a fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of the combination of these. It was found from Phase One that the Particle Swarm Optimization (PSO) algorithm performed best with the colon dataset as a feature selection method (29 genes selected) and from Phase Two that the Support Vector Machine (SVM) algorithm outperformed the other classifiers, with an accuracy of almost 86%. It was also found from Phase Three that the combined use of PSO and SVM surpassed other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.
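
The PSO-then-classify pipeline can be sketched in numpy. Everything here is an assumption for illustration: the data are synthetic (the real colon dataset has thousands of genes), a nearest-class-centroid classifier stands in for the paper's SVM, and the PSO hyperparameters are generic textbook values.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in for a gene-expression matrix: 2 informative
# features out of 20, binary class labels.
n, d = 120, 20
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, d))
X[:, 0] += 2.0 * y     # informative feature
X[:, 1] -= 2.0 * y     # informative feature

def fitness(mask):
    """Accuracy of a nearest-class-centroid classifier on the selected
    features (a lightweight stand-in for the SVM used in the paper)."""
    if not mask.any():
        return 0.0
    Z = X[:, mask]
    c_neg, c_pos = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    pred = np.linalg.norm(Z - c_pos, axis=1) < np.linalg.norm(Z - c_neg, axis=1)
    return float((pred == y).mean())

# PSO over [0, 1]^d; a feature is "selected" when its coordinate > 0.5.
n_particles, iters = 20, 30
pos = rng.random((n_particles, d))
vel = np.zeros((n_particles, d))
pbest, pbest_fit = pos.copy(), np.array([fitness(p > 0.5) for p in pos])
g = int(np.argmax(pbest_fit))
gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

w, phi_p, phi_g = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, d))
    vel = w * vel + phi_p * r1 * (pbest - pos) + phi_g * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([fitness(p > 0.5) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    if fit.max() > gbest_fit:
        gbest, gbest_fit = pos[int(np.argmax(fit))].copy(), float(fit.max())
```

The swarm converges toward masks that keep the informative features, mirroring the paper's finding that selecting features before classification improves accuracy.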

  18. A novel multispectral glacier mapping method and its performance in Greenland

    Science.gov (United States)

    Citterio, M.; Fausto, R. S.; Ahlstrom, A. P.; Andersen, S. B.

    2014-12-01

    Multispectral land surface classification methods are widely used for mapping glacier outlines. Significant post-classification manual editing is typically required, and mapping glacier outlines over larger regions remains a rather labour intensive task. In this contribution we introduce a novel method for mapping glacier outlines from multispectral satellite imagery, requiring only minor manual editing. Over the last decade GLIMS (Global Land Ice Measurements from Space) improved the availability of glacier outlines, and in 2012 the Randolph Glacier Inventory (RGI) attained global coverage by compiling existing and new data sources in the wake of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5). With the launch of Landsat 8 in 2013 and the upcoming ESA (European Space Agency) Sentinel 2 missions, the availability of multispectral imagery may grow faster than our ability to process it into timely and reliable glacier outline products. Improved automatic classification methods would enable a full exploitation of these new data sources. We outline the theoretical basis of the proposed classification algorithm, provide a step-by-step walk-through from raw imagery to finished ice cover grids and vector glacier outlines, and evaluate the performance of the new method in mapping the outlines of glaciers, ice caps and the Greenland Ice Sheet from Landsat 8 OLI imagery. The classification output is compared against manually digitized ice margin positions, the RGI vectors, and the PROMICE (Programme for Monitoring of the Greenland Ice Sheet) aerophotogrammetric map of Greenland ice masses over a sector of the Disko Island surge cluster in West Greenland, the Qassimiut ice sheet lobe in South Greenland, and the A.P. Olsen ice cap in NE Greenland.

  19. An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.

    Science.gov (United States)

    Liu, Zhiyuan; Wang, Changhui

    2015-10-23

    In this paper, a new method for mass air flow (MAF) sensor error compensation and online updating of the error map (or lookup table), addressing errors due to installation and aging in a diesel engine, is developed. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against the simulation data from engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method.
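
The piecewise bilinear map model — the map output written as a dot product between a membership-function regression vector and a parameter vector — can be sketched in numpy. The grid values and the query point below are made up for illustration; only the dot-product structure reflects the abstract.

```python
import numpy as np

# Hypothetical grid: fuel injection quantity (mg) x engine speed (rpm).
fuel_grid = np.array([10.0, 20.0, 30.0])
speed_grid = np.array([1000.0, 2000.0, 3000.0])
# theta holds the map values at the grid nodes, flattened row-major.
theta = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [3.0, 6.0, 9.0]]).ravel()

def regressor(fuel, speed):
    """Membership (hat-function) weights: nonzero only at the 4 nodes
    surrounding the operating point, so the weights sum to 1."""
    def hat(grid, x):
        w = np.zeros(len(grid))
        i = np.clip(np.searchsorted(grid, x) - 1, 0, len(grid) - 2)
        t = (x - grid[i]) / (grid[i + 1] - grid[i])
        w[i], w[i + 1] = 1 - t, t
        return w
    # Outer product of the two 1-D memberships gives the bilinear weights.
    return np.outer(hat(fuel_grid, fuel), hat(speed_grid, speed)).ravel()

phi = regressor(15.0, 1500.0)
value = phi @ theta   # bilinear interpolation expressed as a dot product
```

Because the map value is linear in `theta`, the node values play the role of the parameter vector that an adaptive observer can estimate online.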

  20. Self-Organizing Maps Neural Networks Applied to the Classification of Ethanol Samples According to the Region of Commercialization

    Directory of Open Access Journals (Sweden)

    Aline Regina Walkoff

    2017-10-01

    Full Text Available Physical-chemical analysis data were collected from 998 samples of automotive ethanol commercialized in the northern, midwestern and eastern regions of the state of Paraná. The data were presented to self-organizing map (SOM) neural networks, which classified them according to those regions. The best self-organizing map configuration had a 45 x 45 topology and 5000 training epochs, with a final learning rate of 6.7×10-4, a final neighborhood relationship of 3×10-2 and a mean quantization error of 2×10-2. This neural network provided a topological map depicting three separated groups, each corresponding to samples from the same region of commercialization. Four maps of weights, one for each parameter, were presented. The network established that pH was the most important variable for classification and electrical conductivity the least important. The self-organizing map application allowed the segmentation of ethanol samples, thereby identifying them according to the region of commercialization. DOI: http://dx.doi.org/10.17807/orbital.v9i4.982
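
A minimal SOM of the kind the record describes can be written in a few lines of numpy. This is a sketch under stated assumptions: the data are synthetic two-dimensional clusters standing in for the normalised physico-chemical measurements, and a 6 x 6 map with 40 epochs replaces the paper's 45 x 45 map and 5000 epochs to keep it fast.

```python
import numpy as np

rng = np.random.default_rng(1)
# Three synthetic clusters playing the role of the three regions.
X = np.vstack([rng.normal(m, 0.05, (50, 2)) for m in (0.2, 0.5, 0.8)])

side, dim = 6, X.shape[1]
W = rng.random((side * side, dim))   # one weight vector per map unit
coords = np.array([(i, j) for i in range(side) for j in range(side)], float)

n_epochs = 40
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs) + 1e-3           # decaying learning rate
    sigma = (side / 2) * (1 - epoch / n_epochs) + 0.5  # shrinking neighbourhood
    for x in rng.permutation(X):
        bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))   # best-matching unit
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)   # pull the neighbourhood toward x

# Mean quantization error: average distance from samples to their BMUs,
# the convergence measure quoted in the abstract.
mqe = float(np.mean([np.min(np.linalg.norm(W - x, axis=1)) for x in X]))
```

After training, samples from different clusters activate different regions of the map, which is how the topological map separates the commercialization regions.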

  1. Performance Analysis of the Microsoft Kinect Sensor for 2D Simultaneous Localization and Mapping (SLAM) Techniques

    Directory of Open Access Journals (Sweden)

    Kamarulzaman Kamarudin

    2014-12-01

    Full Text Available This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of a Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect’s depth sensor often cause the map to be inaccurate, especially in featureless areas; therefore, the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.

  2. How do task characteristics affect learning and performance? The roles of variably mapped and dynamic tasks.

    Science.gov (United States)

    Macnamara, Brooke N; Frank, David J

    2018-05-01

    For well over a century, scientists have investigated individual differences in performance. The majority of studies have focused on either differences in practice, or differences in cognitive resources. However, the predictive ability of either practice or cognitive resources varies considerably across tasks. We are the first to examine task characteristics' impact on learning and performance in a complex task while controlling for other task characteristics. In 2 experiments we test key theoretical task characteristics thought to moderate the relationship between practice, cognitive resources, and performance. We devised a task where each of several key task characteristics can be manipulated independently. Participants played 5 rounds of a game similar to the popular tower defense videogame Plants vs. Zombies where both cognitive load and game characteristics were manipulated. In Experiment 1, participants either played a consistently mapped version, in which the stimuli and the associated meaning of their properties were constant across the 5 rounds, or a variably mapped version, in which the stimuli and the associated meaning of their properties changed every few minutes. In Experiment 2, participants either played a static version (turn taking with no time pressure) or a dynamic version (the stimuli moved regardless of participants' response rates). In Experiment 1, participants' accuracy and efficiency were substantially hindered in the variably mapped conditions. In Experiment 2, learning and performance accuracy were hindered in the dynamic conditions, especially when under cognitive load. Our results suggest that task characteristics impact the relative importance of cognitive resources and practice on predicting learning and performance. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Performance of T2 Maps in the Detection of Prostate Cancer.

    Science.gov (United States)

    Chatterjee, Aritrick; Devaraj, Ajit; Mathew, Melvy; Szasz, Teodora; Antic, Tatjana; Karczmar, Gregory S; Oto, Aytekin

    2018-05-03

    This study compares the performance of T2 maps in the detection of prostate cancer (PCa) in comparison to T2-weighted (T2W) magnetic resonance images. The prospective study was institutional review board approved. Consenting patients (n = 45) with histologically confirmed PCa underwent preoperative 3-T magnetic resonance imaging with or without an endorectal coil. Two radiologists, working independently, marked regions of interest (ROIs) on PCa lesions separately on T2W images and T2 maps. Each ROI was assigned a score of 1-5 based on the confidence in accurately detecting cancer, with 5 being the highest confidence. Subsequently, the histologically confirmed PCa lesions (n = 112) on whole-mount sections were matched with ROIs to calculate sensitivity, positive predictive value (PPV), and radiologist confidence score. Quantitative T2 values of PCa and benign tissue ROIs were measured. Sensitivity and confidence score for PCa detection were similar for T2W images (51%, 4.5 ± 0.8) and T2 maps (52%, 4.5 ± 0.6). However, PPV was significantly higher (P = .001) for T2 maps (88%) compared to T2W (72%) images. The use of endorectal coils nominally improved sensitivity (T2W: 55% vs 47%, T2 map: 54% vs 48%) compared to the use of no endorectal coils, but not the PPV and the confidence score. Quantitative T2 values for PCa (105 ± 28 milliseconds) were significantly (P = 9.3 × 10-14) lower than benign peripheral zone tissue (211 ± 71 milliseconds), with a moderate significant correlation with Gleason score (ρ = -0.284). Our study shows that review of T2 maps by radiologists has similar sensitivity but higher PPV compared to T2W images. Additional quantitative information obtained from T2 maps is helpful in differentiating cancer from normal prostate tissue and determining its aggressiveness. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
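
Sensitivity and PPV follow directly from confusion counts. The toy counts below are hypothetical, chosen only so the results land near the reported ~52% sensitivity and ~88% PPV for T2 maps; they are not the study's actual tabulated data.

```python
# Sensitivity = TP / (TP + FN); PPV = TP / (TP + FP).
def sensitivity(tp, fn):
    return tp / (tp + fn)

def ppv(tp, fp):
    return tp / (tp + fp)

# Hypothetical T2-map reading: 58 of 112 lesions detected, 8 false positives.
sens_t2map = sensitivity(tp=58, fn=54)
ppv_t2map = ppv(tp=58, fp=8)
```

The distinction matters here: two readings can share the same sensitivity while differing sharply in PPV, which is exactly the pattern the study reports between T2W images and T2 maps.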

  4. Geometrical Model of Solar Radiation Pressure Based on High-Performing Galileo Clocks - First Geometrical Mapping of the Yarkowsky effect

    Science.gov (United States)

    Svehla, Drazen; Rothacher, Markus; Hugentobler, Urs; Steigenberger, Peter; Ziebart, Marek

    2014-05-01

    Solar radiation pressure is the main source of errors in the precise orbit determination of GNSS satellites. All deficiencies in the modeling of Solar radiation pressure map into estimated terrestrial reference frame parameters as well as into derived gravity field coefficients and altimetry results when LEO orbits are determined using GPS. Here we introduce a new approach to geometrically map radial orbit perturbations of GNSS satellites using highly-performing clocks on board the first Galileo satellites. Only a linear model (time bias and time drift) needs to be removed from the estimated clock parameters and the remaining clock residuals map all radial orbit perturbations along the orbit. With the independent SLR measurements, we show that a Galileo clock is stable enough to map radial orbit perturbations continuously along the orbit with a negative sign in comparison to SLR residuals. Agreement between the SLR residuals and the clock residuals is at the 1 cm RMS for an orbit arc of 24 h. Looking at the clock parameters determined along one orbit revolution over a period of one year, we show that the so-called SLR bias in Galileo and GPS orbits can be explained by the translation of the determined orbit in the orbital plane towards the Sun. This orbit translation is due to thermal re-radiation and not accounting for the Sun elevation in the parameterization of the estimated Solar radiation pressure parameters. SLR ranging to GNSS satellites takes place typically at night, e.g. between 6 pm and 6 am local time when the Sun is in opposition to the satellite. Therefore, SLR observes only one part of the GNSS orbit with a negative radial orbit error that is mapped as an artificial bias in SLR observables. The Galileo clocks clearly show orbit translation for all Sun elevations: the radial orbit error is positive when the Sun is in conjunction (orbit noon) and negative when the Sun is in opposition (orbit midnight). 
The magnitude of this artificial negative SLR bias

  5. 20 CFR 666.420 - Under what circumstances may a sanction be applied to local areas for poor performance?

    Science.gov (United States)

    2010-04-01

    ... applied to local areas for poor performance? 666.420 Section 666.420 Employees' Benefits EMPLOYMENT AND... sanction be applied to local areas for poor performance? (a) If a local area fails to meet the levels of... achieving poor levels of performance; or (3) Requires other appropriate measures designed to improve the...

  6. Applying probabilistic well-performance parameters to assessments of shale-gas resources

    Science.gov (United States)

    Charpentier, Ronald R.; Cook, Troy

    2010-01-01

    In assessing continuous oil and gas resources, such as shale gas, it is important to describe not only the ultimately producible volumes, but also the expected well performance. This description is critical to any cost analysis or production scheduling. A probabilistic approach facilitates (1) the inclusion of variability in well performance within a continuous accumulation, and (2) the use of data from developed accumulations as analogs for the assessment of undeveloped accumulations. In assessing continuous oil and gas resources of the United States, the U.S. Geological Survey analyzed production data from many shale-gas accumulations. Analyses of four of these accumulations (the Barnett, Woodford, Fayetteville, and Haynesville shales) are presented here as examples of the variability of well performance. For example, the distribution of initial monthly production rates for Barnett vertical wells shows a noticeable change with time, first increasing because of improved completion practices, then decreasing from a combination of decreased reservoir pressure (in infill wells) and drilling in less productive areas. Within a partially developed accumulation, historical production data from that accumulation can be used to estimate production characteristics of undrilled areas. An understanding of the probabilistic relations between variables, such as between initial production and decline rates, can improve estimates of ultimate production. Time trends or spatial trends in production data can be clarified by plots and maps. The data can also be divided into subsets depending on well-drilling or well-completion techniques, such as vertical in relation to horizontal wells. For hypothetical or lightly developed accumulations, one can either make comparisons to a specific well-developed accumulation or to the entire range of available developed accumulations. 
Comparison of the distributions of initial monthly production rates of the four shale-gas accumulations that were
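
The probabilistic treatment of well performance the abstract describes can be sketched in numpy: draw initial production rates from a distribution, apply a decline model per well, and summarize cumulative production as percentiles. The lognormal rate distribution and the hyperbolic-decline parameters below are assumptions for illustration, not USGS assessment values.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical lognormal distribution of initial monthly production rates
# (arbitrary units), reflecting variability among wells in one accumulation.
ip = rng.lognormal(mean=np.log(50.0), sigma=0.8, size=5000)

# Assumed hyperbolic decline q(t) = q_i / (1 + b*D*t)**(1/b),
# summed monthly over 20 years as a proxy for cumulative production.
b, D = 1.2, 0.08
t = np.arange(240)                                    # months
q = ip[:, None] / (1.0 + b * D * t[None, :]) ** (1.0 / b)
cum = q.sum(axis=1)                                   # per-well cumulative

# Industry-style percentiles: P10 is the optimistic (high) case.
p10, p50, p90 = np.percentile(cum, [90, 50, 10])
```

Coupling the initial-rate distribution with decline behavior, rather than using a single type well, is what lets a developed accumulation serve as a probabilistic analog for an undeveloped one.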

  7. Performance Analysis of a Neuro-PID Controller Applied to a Robot Manipulator

    Directory of Open Access Journals (Sweden)

    Saeed Pezeshki

    2012-11-01

    Full Text Available The performance of robot manipulators with nonadaptive controllers might degrade significantly due to the open loop unstable system and the effect of some uncertainties on the robot model or environment. A novel Neural Network PID controller (NNP) is proposed in order to improve the system performance and its robustness. The Neural Network (NN) technique is applied to compensate for the effect of the uncertainties of the robot model. With the NN compensator introduced, the system errors and the NN weights with large dispersion are guaranteed to be bounded in the Lyapunov sense. The weights of the NN compensator are adaptively tuned. The simulation results show the effectiveness of the model validation approach and its efficiency in guaranteeing a stable and accurate trajectory tracking process in the presence of uncertainties.

  8. Projective mapping

    DEFF Research Database (Denmark)

    Dehlholm, Christian; Brockhoff, Per B.; Bredie, Wender Laurentius Petrus

    2012-01-01

    Projective Mapping (Risvik et.al., 1994) and its Napping (Pagès, 2003) variations have become increasingly popular in the sensory field for rapid collection of spontaneous product perceptions. It has been applied in variations which sometimes are caused by the purpose of the analysis and sometimes by the practical testing environment. As a result of the changes, a reasonable assumption would be to question the consequences caused by the variations in method procedures. Here, the aim is to highlight the proven or hypothetic consequences of variations of Projective Mapping. Presented variations will include... instructions and influence heavily the product placements and the descriptive vocabulary (Dehlholm et.al., 2012b). The type of assessors performing the method influences results with an extra aspect in Projective Mapping compared to more analytical tests, as the given spontaneous perceptions are much dependent...

  9. Improvements in Off Design Aeroengine Performance Prediction Using Analytic Compressor Map Interpolation

    Science.gov (United States)

    Mist'e, Gianluigi Alberto; Benini, Ernesto

    2012-06-01

    Compressor map interpolation is usually performed through the introduction of auxiliary coordinates (β). In this paper, a new analytical bivariate β function definition to be used in compressor map interpolation is studied. The function has user-defined parameters that must be adjusted to properly fit to a single map. The analytical nature of β allows rapid calculation of an interpolation error estimate, which can be used as a quantitative measure of interpolation accuracy and also as a valid tool to compare traditional β function interpolation with new approaches (artificial neural networks, genetic algorithms, etc.). The quality of the method is analyzed by comparing the error output to that of a well-known state-of-the-art methodology. This comparison is carried out for two different types of compressor and, in both cases, the error output using the method presented in this paper is found to be consistently lower. Moreover, an optimization routine able to locally minimize the interpolation error by shape variation of the β function is implemented. Further optimization introducing other important criteria is discussed.
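
One way to see how an analytic formulation enables a quantitative interpolation-error estimate is to tabulate a known analytic map on a coarse (speed, β) grid, interpolate it, and measure the deviation from the true function. The numpy sketch below uses a made-up analytic stand-in for a compressor speed-line family and plain bilinear lookup; it is not the paper's β function.

```python
import numpy as np

# Hypothetical analytic stand-in: corrected mass flow as a function of
# relative speed N and the auxiliary coordinate beta along a speed line.
def true_map(N, beta):
    return N * (0.5 + 0.5 * beta**2)

N_grid = np.linspace(0.6, 1.0, 5)
beta_grid = np.linspace(0.0, 1.0, 6)
table = true_map(N_grid[:, None], beta_grid[None, :])

def interp(N, beta):
    """Bilinear table lookup in (N, beta) coordinates."""
    i = np.clip(np.searchsorted(N_grid, N) - 1, 0, len(N_grid) - 2)
    j = np.clip(np.searchsorted(beta_grid, beta) - 1, 0, len(beta_grid) - 2)
    tN = (N - N_grid[i]) / (N_grid[i + 1] - N_grid[i])
    tb = (beta - beta_grid[j]) / (beta_grid[j + 1] - beta_grid[j])
    return ((1 - tN) * (1 - tb) * table[i, j] + tN * (1 - tb) * table[i + 1, j]
            + (1 - tN) * tb * table[i, j + 1] + tN * tb * table[i + 1, j + 1])

# Because the analytic form is known, the interpolation error can be
# evaluated exactly on a fine sampling -- the kind of quantitative
# accuracy measure the abstract refers to.
samples = [(n, b) for n in np.linspace(0.6, 1.0, 21)
           for b in np.linspace(0.0, 1.0, 21)]
max_err = max(abs(interp(n, b) - true_map(n, b)) for n, b in samples)
```

Refining the β grid (or reshaping the β function, as the paper's optimization routine does) shrinks `max_err`, which is what makes the error estimate usable as an objective.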

  10. Applying the balanced scorecard to local public health performance measurement: deliberations and decisions.

    Science.gov (United States)

    Weir, Erica; d'Entremont, Nadine; Stalker, Shelley; Kurji, Karim; Robinson, Victoria

    2009-05-08

    All aspects of the health care sector are being asked to account for their performance. This poses unique challenges for local public health units with their traditional focus on population health and their emphasis on disease prevention, health promotion and protection. Reliance on measures of health status provides an imprecise and partial picture of the performance of a health unit. In 2004 the provincial Institute for Clinical Evaluative Sciences based in Ontario, Canada, introduced a public-health specific balanced scorecard framework. We present the conceptual deliberations and decisions undertaken by a health unit while adopting the framework. Posing, pondering and answering key questions assisted in applying the framework and developing indicators. Questions such as: Who should be involved in developing performance indicators? What level of performance should be measured? Who is the primary intended audience? Where and how do we begin? What types of indicators should populate the health status and determinants quadrant? What types of indicators should populate the resources and services quadrant? What type of indicators should populate the community engagement quadrant? What types of indicators should populate the integration and responsiveness quadrant? Should we try to link the quadrants? What comparators do we use? How do we move from a baseline report card to a continuous quality improvement management tool? An inclusive, participatory process was chosen for defining and creating indicators to populate the four quadrants. Examples of indicators that populate the four quadrants of the scorecard are presented and key decisions are highlighted that facilitated the process.

  11. Performance and Stability Enhancement of Perovskite-Type Nanomaterials Applied for Carbon Capture Utilizing Oxyfuel Combustion

    Directory of Open Access Journals (Sweden)

    Qiuwan Shen

    2017-02-01

    Full Text Available A new series of Ba-Co-O perovskite-type oxygen carriers has been successfully synthesized by the microwave-assisted sol-gel method and further applied for producing an O2/CO2 mixture gas. The oxygen adsorption/desorption performance of the synthesized samples was studied in a fixed-bed reactor system. Effects of A/B-site substitution on the oxygen desorption performance of Ba-Co-O–based perovskites are also included. Furthermore, the effects of operating conditions, including the adsorption time and temperature as well as the desorption temperature, on oxygen production performance were investigated in detail. The results indicated that BaCoO3-δ exhibited the best oxygen desorption performance among the synthesized A/B-site–substituted ACoO3-δ and BaBO3-δ samples, and that the optimal adsorption time, adsorption temperature and desorption temperature for BaCoO3-δ were determined to be 20 min, 850 °C and 850 °C, respectively, in this study.

  12. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  13. Thermal buffering performance of composite phase change materials applied in low-temperature protective garments

    Science.gov (United States)

    Yang, Kai; Jiao, Mingli; Yu, Yuanyuan; Zhu, Xueying; Liu, Rangtong; Cao, Jian

    2017-07-01

    Phase change material (PCM) is increasingly being applied in the manufacturing of functional thermo-regulated textiles and garments. This paper investigated the thermal buffering performance of different composite PCMs which are suitable for application in functional low-temperature protective garments. First, according to the criteria for selecting PCMs for functional textiles/garments, three kinds of pure PCM were selected as samples: n-hexadecane, n-octadecane and n-eicosane. To obtain an adjustable phase change temperature range and a higher phase change enthalpy, three kinds of composite PCM were prepared using the above pure PCMs. To evaluate the thermal buffering performance of the different composite PCM samples, simulated low-temperature experiments were performed in a climate chamber, and the skin temperature variation curves under three different low-temperature conditions were obtained. Finally, the composite PCM samples' thermal buffering time, thermal buffering capacity and thermal buffering efficiency were calculated. Results show that the comprehensive thermal buffering performance of the n-octadecane and n-eicosane composite PCM is the best.
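
    The buffering effect described above comes from the latent heat absorbed during the phase change. A minimal back-of-the-envelope sketch of a buffering-time estimate is given below; the mass, enthalpy and heat-loss values are hypothetical stand-ins, not figures from the paper.

    ```python
    # Illustrative estimate of PCM thermal buffering time: during the phase
    # change the PCM absorbs its latent heat before the skin-side temperature
    # can continue to fall. All numeric values are hypothetical.

    def buffering_time_minutes(mass_g: float, enthalpy_j_per_g: float,
                               heat_loss_w: float) -> float:
        """Time (minutes) the phase change can absorb a steady heat loss."""
        if heat_loss_w <= 0:
            raise ValueError("heat loss must be positive")
        return mass_g * enthalpy_j_per_g / heat_loss_w / 60.0

    # e.g. 50 g of an n-octadecane composite (~200 J/g) against a 10 W loss
    t = buffering_time_minutes(50.0, 200.0, 10.0)
    print(f"{t:.1f} min")
    ```

    A larger latent heat or PCM mass extends the buffering time linearly, which is why the composites with higher enthalpy performed best in the experiments.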

  14. A Performance Comparison of Feature Detectors for Planetary Rover Mapping and Localization

    Science.gov (United States)

    Wan, W.; Peng, M.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Teng, B.; Mao, X.; Zhao, Q.; Xin, X.; Jia, M.

    2017-07-01

    Feature detection and matching are key techniques in computer vision and robotics, and have been successfully implemented in many fields. So far there has been no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT and SURF, aiming for optimal implementation of feature-based matching in a planetary surface environment. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, are developed in the research. In order to perform an exhaustive evaluation, stereo images, simulated under different baselines, pitch angles, and intervals between adjacent rover locations, are taken as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is less sensitive to changes between images taken at adjacent locations.

  15. Neural signatures of Trail Making Test performance: Evidence from lesion-mapping and neuroimaging studies.

    Science.gov (United States)

    Varjacic, Andreja; Mantini, Dante; Demeyere, Nele; Gillebert, Celine R

    2018-03-27

    The Trail Making Test (TMT) is an extensively used neuropsychological instrument for the assessment of set-switching ability across a wide range of neurological conditions. However, the exact nature of the cognitive processes and associated brain regions contributing to the performance on the TMT remains unclear. In this review, we first introduce the TMT by discussing its administration and scoring approaches. We then examine converging evidence and divergent findings concerning the brain regions related to TMT performance, as identified by lesion-symptom mapping studies conducted in brain-injured patients and functional magnetic resonance imaging studies conducted in healthy participants. After addressing factors that may account for the heterogeneity in the brain regions reported by these studies, we identify future research endeavours that may permit disentangling the different processes contributing to TMT performance and relating them to specific brain circuits. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. A PERFORMANCE COMPARISON OF FEATURE DETECTORS FOR PLANETARY ROVER MAPPING AND LOCALIZATION

    Directory of Open Access Journals (Sweden)

    W. Wan

    2017-07-01

    Full Text Available Feature detection and matching are key techniques in computer vision and robotics, and have been successfully implemented in many fields. So far there has been no performance comparison of feature detectors and matching methods for planetary mapping and rover localization using rover stereo images. In this research, we present a comprehensive evaluation and comparison of six feature detectors, including Moravec, Förstner, Harris, FAST, SIFT and SURF, aiming for optimal implementation of feature-based matching in a planetary surface environment. To facilitate quantitative analysis, a series of evaluation criteria, including distribution evenness of matched points, coverage of detected points, and feature matching accuracy, are developed in the research. In order to perform an exhaustive evaluation, stereo images, simulated under different baselines, pitch angles, and intervals between adjacent rover locations, are taken as the experimental data source. The comparison results show that SIFT offers the best overall performance; in particular, it is less sensitive to changes between images taken at adjacent locations.

  17. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    Science.gov (United States)

    Kuehl, C. Stephen

    1996-06-01

    Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring and technical involvement in video standards groups provides the knowledge-base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. Broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (former CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. Reductions in signal

  18. Applying Machine Learning and High Performance Computing to Water Quality Assessment and Prediction

    Directory of Open Access Journals (Sweden)

    Ruijian Zhang

    2017-12-01

    Full Text Available Water quality assessment and prediction is an increasingly important issue. Traditional approaches are either time-consuming or limited to assessment alone. In this research, by applying a machine learning algorithm to a long time series of water-attribute data, we can generate a decision tree that predicts the next day's water quality in an easy and efficient way. The idea is to combine traditional approaches and computer algorithms. Using machine learning algorithms, the assessment of water quality becomes far more efficient, and by generating the decision tree, the prediction is quite accurate. The drawback of machine learning modeling is that execution takes quite a long time, especially when we employ a more accurate but more time-consuming clustering algorithm. Therefore, we applied a high performance computing (HPC) system to deal with this problem. Up to now, the pilot experiments have achieved very promising preliminary results. The visualized water quality assessment and prediction obtained from this project will be published on an interactive website so that the public and environmental managers can use the information for their decision making.
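
    The core idea (learning a tree-shaped rule from historical attribute readings and using it to predict the next day's quality class) can be sketched with a one-level decision tree, i.e. a decision stump. The attribute, readings and labels below are made up for illustration; the paper's actual model and data differ.

    ```python
    # Minimal decision-stump sketch: learn the single threshold on one water
    # attribute that best separates "good" from "poor" historical days, then
    # use it to classify a new reading. Data here is hypothetical.

    def learn_stump(samples):
        """samples: list of (value, label). Returns (threshold, low_label, high_label)."""
        best = None
        values = sorted(v for v, _ in samples)
        for i in range(len(values) - 1):
            t = (values[i] + values[i + 1]) / 2
            for low, high in (("good", "poor"), ("poor", "good")):
                errs = sum(1 for v, y in samples
                           if (low if v <= t else high) != y)
                if best is None or errs < best[0]:
                    best = (errs, t, low, high)
        _, t, low, high = best
        return t, low, high

    def predict(stump, value):
        t, low, high = stump
        return low if value <= t else high

    # dissolved-oxygen readings (mg/L) with observed quality labels
    history = [(8.1, "good"), (7.5, "good"), (4.2, "poor"),
               (3.8, "poor"), (6.9, "good")]
    stump = learn_stump(history)
    print(predict(stump, 5.0))
    ```

    A full decision tree simply applies this threshold search recursively over several attributes, which is where the execution time the abstract mentions comes from.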

  19. Cameco engineered tailings program: linking applied research with industrial processes for improved tailings performance

    International Nuclear Information System (INIS)

    Kotzer, T.G.

    2010-01-01

    Mine tailings at Cameco's operations are by-products of milling uranium ore having variable concentrations of uranium, metals, oxyanions and trace elements or elements of concern (EOC). Cameco has undertaken an Engineered Tailings (ET) program to optimize tailings performance and minimize environmental EOC impacts, regardless of the milled ore source. Applied geochemical and geotechnical tailings research is key within the ET program. In-situ drilling and experimental programs are used to understand long-term tailings behaviour and help validate source term predictions. Within this, the ET program proactively aids in the development of mill-based processes for production of tailings having improved long-term stability. (author)

  20. Performance Comparison of OpenMP, MPI, and MapReduce in Practical Problems

    Directory of Open Access Journals (Sweden)

    Sol Ji Kang

    2015-01-01

    Full Text Available With problem size and complexity increasing, several parallel and distributed programming models and frameworks have been developed to efficiently handle such problems. This paper briefly reviews parallel computing models and describes three widely recognized parallel programming frameworks: OpenMP, MPI, and MapReduce. OpenMP is the de facto standard for parallel programming on shared memory systems. MPI is the de facto industry standard for distributed memory systems. The MapReduce framework has become the de facto standard for large scale data-intensive applications. The qualitative pros and cons of each framework are known, but quantitative performance indexes help give a good picture of which framework to use for a given application. As benchmark problems to compare the frameworks, two problems are chosen: the all-pairs-shortest-path problem and the data join problem. This paper presents the parallel programs for these problems implemented on the three frameworks, respectively. It shows the experimental results on a cluster of computers, and discusses which is the right tool for the job by analyzing the characteristics and performance of the paradigms.
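
    One of the two benchmarks above, the data join, illustrates the MapReduce pattern well. A toy single-process sketch is given below: mappers tag records with their source table, the shuffle groups by key, and the reducer joins tagged values. Real MapReduce distributes these phases across a cluster; the tables here are invented for illustration.

    ```python
    # Toy MapReduce-style equi-join: map tags each record, shuffle groups by
    # join key, reduce pairs values from the two sources sharing a key.
    from collections import defaultdict

    orders = [(1, "apples"), (2, "pears"), (1, "plums")]   # (customer_id, item)
    customers = [(1, "Ada"), (2, "Bob")]                   # (customer_id, name)

    # map phase: emit (key, tagged value)
    mapped = [(cid, ("O", item)) for cid, item in orders] + \
             [(cid, ("C", name)) for cid, name in customers]

    # shuffle phase: group values by key
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)

    # reduce phase: join tagged values sharing a key
    joined = []
    for key, values in groups.items():
        names = [v for tag, v in values if tag == "C"]
        items = [v for tag, v in values if tag == "O"]
        joined.extend((name, item) for name in names for item in items)

    print(sorted(joined))
    ```

    The same three-phase structure is what the paper implements on Hadoop, while the OpenMP and MPI versions express the join as explicit shared-memory or message-passing loops.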

  1. THE PERFORMANCE ANALYSIS OF A UAV BASED MOBILE MAPPING SYSTEM PLATFORM

    Directory of Open Access Journals (Sweden)

    M. L. Tsai

    2013-08-01

    Full Text Available To facilitate applications such as environment detection or disaster monitoring, the development of rapid, low-cost systems for collecting near real-time spatial information is very critical. Rapid spatial information collection has become an emerging trend for remote sensing and mapping applications. This study develops a Direct Georeferencing (DG) based fixed-wing Unmanned Aerial Vehicle (UAV) photogrammetric platform in which an Inertial Navigation System (INS)/Global Positioning System (GPS) integrated Positioning and Orientation System (POS) is implemented to provide the DG capability of the platform. The performance verification indicates that the proposed platform can capture aerial images successfully. A flight test was performed to verify the positioning accuracy in DG mode without using Ground Control Points (GCP). The preliminary results illustrate that horizontal DG positioning accuracies in the x and y axes are around 5 m at a 300 m flight height. The positioning accuracy in the z axis is less than 10 m. Such accuracy is good for near real-time disaster relief. The DG-ready function of the proposed platform guarantees mapping and positioning capability even in GCP-free environments, which is very important for rapid urgent response in disaster relief. Generally speaking, the data processing time for the DG module, including POS solution generation, interpolation, Exterior Orientation Parameters (EOP) generation, and feature point measurements, is less than one hour.

  2. The Performance Analysis of a Uav Based Mobile Mapping System Platform

    Science.gov (United States)

    Tsai, M. L.; Chiang, K. W.; Lo, C. F.; Ch, C. H.

    2013-08-01

    To facilitate applications such as environment detection or disaster monitoring, the development of rapid, low-cost systems for collecting near real-time spatial information is very critical. Rapid spatial information collection has become an emerging trend for remote sensing and mapping applications. This study develops a Direct Georeferencing (DG) based fixed-wing Unmanned Aerial Vehicle (UAV) photogrammetric platform in which an Inertial Navigation System (INS)/Global Positioning System (GPS) integrated Positioning and Orientation System (POS) is implemented to provide the DG capability of the platform. The performance verification indicates that the proposed platform can capture aerial images successfully. A flight test was performed to verify the positioning accuracy in DG mode without using Ground Control Points (GCP). The preliminary results illustrate that horizontal DG positioning accuracies in the x and y axes are around 5 m at a 300 m flight height. The positioning accuracy in the z axis is less than 10 m. Such accuracy is good for near real-time disaster relief. The DG-ready function of the proposed platform guarantees mapping and positioning capability even in GCP-free environments, which is very important for rapid urgent response in disaster relief. Generally speaking, the data processing time for the DG module, including POS solution generation, interpolation, Exterior Orientation Parameters (EOP) generation, and feature point measurements, is less than one hour.

  3. Stirling Convertor Performance Mapping Test Results for Future Radioisotope Power Systems

    Science.gov (United States)

    Qiu, Songgang; Peterson, Allen A.; Faultersack, Franklyn D.; Redinger, Darin L.; Augenblick, John E.

    2004-02-01

    Long-life radioisotope-fueled generators based on free-piston Stirling convertors are an energy-conversion solution for future space applications. The high efficiency of Stirling machines makes them more attractive than the thermoelectric generators currently used in space. Stirling Technology Company (STC) has been performance-testing its Stirling generators to provide data for potential system integration contractors. This paper describes the most recent test results from the STC RemoteGen™ 55 W-class Stirling generators (RG-55). Comparisons are made between the new data and previous Stirling thermodynamic simulation models. Performance-mapping tests are presented including variations in: internal charge pressure, cold end temperature, hot end temperature, alternator temperature, input power, and variation of control voltage.

  4. Performance of Multilevel Coding Schemes with Different Decoding Methods and Mapping Strategies in Mobile Fading Channels

    Institute of Scientific and Technical Information of China (English)

    YUAN Dongfeng; WANG Chengxiang; YAO Qi; CAO Zhigang

    2001-01-01

    Based on the "capacity rule", the performance of multilevel coding (MLC) schemes with different set partitioning strategies and decoding methods in AWGN and Rayleigh fading channels is investigated, in which BCH codes are chosen as component codes and 8ASK modulation is used. Numerical results indicate that the MLC scheme with the UP strategy can obtain optimal performance in AWGN channels, and BP is the best mapping strategy for Rayleigh fading channels. The BP strategy is robust in both kinds of channels for realizing an optimum MLC system. Multistage decoding (MSD) is a sub-optimal decoding method of MLC for both channels. For the Ungerboeck partitioning (UP) and mixed partitioning (MP) strategies, MSD is strongly recommended for the MLC system, while for the BP strategy, PDL is suggested as a simple decoding method compared with MSD.

  5. Applying the balanced scorecard to local public health performance measurement: deliberations and decisions

    Directory of Open Access Journals (Sweden)

    Kurji Karim

    2009-05-01

    Full Text Available Abstract Background All aspects of the heath care sector are being asked to account for their performance. This poses unique challenges for local public health units with their traditional focus on population health and their emphasis on disease prevention, health promotion and protection. Reliance on measures of health status provides an imprecise and partial picture of the performance of a health unit. In 2004 the provincial Institute for Clinical Evaluative Sciences based in Ontario, Canada introduced a public-health specific balanced scorecard framework. We present the conceptual deliberations and decisions undertaken by a health unit while adopting the framework. Discussion Posing, pondering and answering key questions assisted in applying the framework and developing indicators. Questions such as: Who should be involved in developing performance indicators? What level of performance should be measured? Who is the primary intended audience? Where and how do we begin? What types of indicators should populate the health status and determinants quadrant? What types of indicators should populate the resources and services quadrant? What type of indicators should populate the community engagement quadrant? What types of indicators should populate the integration and responsiveness quadrants? Should we try to link the quadrants? What comparators do we use? How do we move from a baseline report card to a continuous quality improvement management tool? Summary An inclusive, participatory process was chosen for defining and creating indicators to populate the four quadrants. Examples of indicators that populate the four quadrants of the scorecard are presented and key decisions are highlighted that facilitated the process.

  6. Applying Required Navigation Performance Concept for Traffic Management of Small Unmanned Aircraft Systems

    Science.gov (United States)

    Jung, Jaewoo; D'Souza, Sarah N.; Johnson, Marcus A.; Ishihara, Abraham K.; Modi, Hemil C.; Nikaido, Ben; Hasseeb, Hashmatullah

    2016-01-01

    In anticipation of a rapid increase in the number of civil Unmanned Aircraft System (UAS) operations, NASA is researching prototype technologies for a UAS Traffic Management (UTM) system that will investigate airspace integration requirements for enabling safe, efficient low-altitude operations. One aspect a UTM system must consider is the correlation between UAS operations (such as vehicles, operation areas and durations), UAS performance requirements, and the risk to people and property in the operational area. This paper investigates the potential application of the International Civil Aviation Organization's (ICAO) Required Navigation Performance (RNP) concept to relate operational risk to trajectory conformance requirements. The approach is to first define a method to quantify operational risk and then define the RNP level requirement as a function of that risk. Greater operational risk corresponds to a more accurate RNP level, i.e., a smaller tolerable Total System Error (TSE). Data from 19 small UAS flights are used to develop and validate a formula that defines this relationship. An approach to assessing UAS-RNP conformance capability using vehicle modeling and wind field simulation is developed to investigate how this formula may be applied in a future UTM system. The results indicate the modeled vehicle's flight path is robust to the simulated wind variation, and it can meet RNP level requirements calculated by the formula. The results also indicate how vehicle-modeling fidelity may be improved to adequately verify the assessed RNP level.
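
    The conformance check implied above can be sketched simply: a flight meets an RNP level if its total system error (distance from the intended trajectory) stays within the containment limit for a required fraction of samples. The 95% containment fraction follows the usual RNP convention; the TSE samples and limit below are hypothetical, not the paper's data or formula.

    ```python
    # Hedged sketch of an RNP conformance test: the vehicle conforms if its
    # TSE magnitude stays within the RNP containment limit for at least 95%
    # of samples. All numbers are illustrative.

    def meets_rnp(tse_samples_m, rnp_limit_m, containment=0.95):
        """True if |TSE| <= limit for at least the containment fraction."""
        within = sum(1 for e in tse_samples_m if abs(e) <= rnp_limit_m)
        return within / len(tse_samples_m) >= containment

    tse = [1.2, 0.8, 2.5, 1.9, 0.4, 3.1, 1.1, 0.9, 2.2, 1.5]  # metres
    print(meets_rnp(tse, rnp_limit_m=3.0))   # one sample exceeds 3.0 m
    print(meets_rnp(tse, rnp_limit_m=3.5))   # all samples within 3.5 m
    ```

    Tightening the limit as operational risk grows is the relationship the paper's formula captures: higher risk demands a smaller tolerable TSE.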

  7. Applying Importance-Performance Analysis as a Service Quality Measure in Food Service Industry

    Directory of Open Access Journals (Sweden)

    Gwo-Hshiung Tzeng

    2011-09-01

    Full Text Available As the global economy becomes a service-oriented economy, food service accounts for over 20% of service revenue, with an annual growth rate of more than 3%. Compared to physical products, service features are invisible, and production and sale occur simultaneously. It is not easy to measure the performance of a service. Therefore, the service quality of catering services is considered an important topic of service management. The Market Intelligence & Consulting Institute (MIC) applied blog text analysis to identify the top 10 restaurants discussed on blogs in Taiwan, i.e., the popular restaurants in the food service industry. This paper attempts to identify both the importance and performance of restaurant service quality in the Taiwan food service industry using the SERVQUAL and IPA models. We can conclude with certainty that the three methods (SERVQUAL, IF and IPA) are able to explain a significant amount of service quality. At the same time, the service quality factors of the IPA model offered more comprehensive consideration in comparison to those of SERVQUAL and IF.

  8. NEURO-FUZZY MODELING APPLIED IN PROGRAM MANAGEMENT TO INCREASE LOCAL PUBLIC ADMINISTRATION PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Adrian-Mihai Zaharia-Radulescu

    2016-07-01

    Full Text Available One of the challenges in local public administration is dealing with an increasing number of competing requests coming from the communities they serve. The traditional approach would be to handle each request as a standalone project, prioritized according to benefits and the available budget. Nowadays, program management is increasingly becoming a standard approach to managing the initiatives of local public administration. The program management approach is itself an enabler of performance in public sector organizations, allowing an organization to better coordinate its efforts and resources in managing a portfolio of projects. This paper aims to present how neuro-fuzzy modeling applied in program management can help an organization increase its performance. Neuro-fuzzy modeling would take organizations one step further by allowing them to simulate different scenarios and better manage the risks accompanying their initiatives. The research done by the authors is theoretical and combines knowledge from different areas; a neuro-fuzzy model is proposed and discussed.

  9. High resolution spectroscopic mapping imaging applied in situ to multilayer structures for stratigraphic identification of painted art objects

    Science.gov (United States)

    Karagiannis, Georgios Th.

    2016-04-01

    The development of non-destructive techniques is a reality in the field of conservation science. These techniques are usually not as accurate as analytical micro-sampling techniques; however, the proper development of soft-computing techniques can improve their accuracy. In this work, we propose a real-time, fast-acquisition spectroscopic mapping imaging system that operates from the ultraviolet to the mid-infrared (UV/Vis/nIR/mIR) region of the electromagnetic spectrum and is supported by a set of soft-computing methods to identify the materials that exist in a stratigraphic structure of paint layers. In particular, the system acquires spectra in diffuse-reflectance mode, scanning a Region-Of-Interest (ROI) over a wavelength range from 200 up to 5000 nm. A fuzzy c-means clustering algorithm (the soft-computing algorithm used) produces the mapping images. The evaluation of the method was tested on a Byzantine painted icon.
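
    Fuzzy c-means, the soft-clustering step named above, assigns each spectrum a graded membership in every cluster rather than a hard label. A minimal sketch follows, with each "spectrum" reduced to a single reflectance value for brevity; the data and cluster count are invented, and the real system clusters full multi-band spectra.

    ```python
    # Minimal fuzzy c-means (FCM) on 1-D "reflectance" values. Membership
    # u_ik is proportional to 1/d(x_k, v_i)^(2/(m-1)); centers are the
    # u^m-weighted means. Data is hypothetical.

    def fuzzy_c_means(xs, c=2, m=2.0, iters=50):
        centers = [min(xs), max(xs)][:c]          # simple initialization
        u = []
        for _ in range(iters):
            # membership update
            u = []
            for x in xs:
                d = [max(abs(x - v), 1e-12) ** (2 / (m - 1)) for v in centers]
                s = sum(1 / di for di in d)
                u.append([(1 / di) / s for di in d])
            # center update: weighted mean with weights u^m
            centers = [
                sum(u[k][i] ** m * xs[k] for k in range(len(xs))) /
                sum(u[k][i] ** m for k in range(len(xs)))
                for i in range(c)
            ]
        return centers, u

    xs = [0.10, 0.12, 0.11, 0.80, 0.78, 0.82]     # two material groups
    centers, u = fuzzy_c_means(xs)
    print(sorted(centers))
    ```

    The per-pixel membership vectors `u` are what get rendered as material maps: each pixel's color can encode its degree of membership in each cluster, which suits the gradual material mixtures found in paint layers.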

  10. Applying a new unequally weighted feature fusion method to improve CAD performance of classifying breast lesions

    Science.gov (United States)

    Zargari Khuzani, Abolfazl; Danala, Gopichandh; Heidari, Morteza; Du, Yue; Mashhadi, Najmeh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

    Higher recall rates are a major challenge in mammography screening. Thus, developing a computer-aided diagnosis (CAD) scheme to classify between malignant and benign breast lesions can play an important role in improving the efficacy of mammography screening. The objective of this study is to develop and test a unique image feature fusion framework to improve performance in classifying suspicious mass-like breast lesions depicted on mammograms. The image dataset consists of 302 suspicious masses detected on both craniocaudal and mediolateral-oblique view images. Amongst them, 151 were malignant and 151 were benign. The study consists of the following three image processing and feature analysis steps. First, an adaptive region growing segmentation algorithm was used to automatically segment mass regions. Second, a set of 70 image features related to spatial and frequency characteristics of mass regions was initially computed. Third, a generalized linear regression model (GLM) based machine learning classifier combined with a bat optimization algorithm was used to optimally fuse the selected image features based on a predefined assessment performance index. The area under the ROC curve (AUC) was used as the performance assessment index. Applying the CAD scheme to the testing dataset, the AUC was 0.75+/-0.04, which was significantly higher than using a single best feature (AUC=0.69+/-0.05) or the classifier with equally weighted features (AUC=0.73+/-0.05). This study demonstrated that, compared to the conventional equal-weighted approach, an unequal-weighted feature fusion approach has the potential to significantly improve accuracy in classifying between malignant and benign breast masses.
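
    The contrast between equal and unequal weighting can be sketched directly: each feature contributes to the lesion score through its own weight, and the fused scores are judged by AUC. The weights, feature values and labels below are hypothetical; in the paper the weights are learned by the GLM with a bat optimization algorithm.

    ```python
    # Sketch of unequal-weighted feature fusion scored by AUC. The AUC is
    # computed via the rank (Mann-Whitney) formulation: the fraction of
    # positive/negative pairs the score orders correctly.

    def fused_score(features, weights):
        return sum(w * f for w, f in zip(weights, features))

    def auc(scores, labels):
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # hypothetical two-feature cases: (features, label) with 1 = malignant
    cases = [([0.9, 0.2], 1), ([0.8, 0.6], 1), ([0.3, 0.7], 0), ([0.2, 0.1], 0)]
    weights = [0.7, 0.3]                      # unequal feature weights
    scores = [fused_score(f, weights) for f, _ in cases]
    print(auc(scores, [y for _, y in cases]))
    ```

    An optimizer searches the weight vector for the setting that maximizes this AUC on training data, which is the role the bat algorithm plays in the paper.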

  11. The improvement of environmental performances by applying ISO 14001 standard: A case study

    Directory of Open Access Journals (Sweden)

    Živković Snežana

    2013-01-01

    Full Text Available This paper presents an analysis of the advantages of applying the ISO 14001 system in an environmental protection management system. An environmental protection management system which is not certified, i.e., not compatible with the principles and preconditions of the standard, considerably increases the likelihood of ecological risk. There are some issues that remain to be solved in areas which are not expressed by financial values only but also have a non-financial character, with the aim of expanding markets, improving company image and improving environmental performance indicators. By improving a company's environmental management system efficiency we expect to achieve the minimization and elimination of damaging influences on the environment which are the consequence of the company's activities. A case study in the Oil Refinery Belgrade (RNB) analyses the implementation of the standard ISO 14001:2004 into its environment protection management system, particularly emphasizing the company's own way of evaluating environmental aspects with the aim of establishing improvements in its ecological performance indicators. The average values of the first ecological indicator of the plant, the total amount of waste water in m3 per ton of product, clearly show a downward trend, which is confirmed by the proportional reduction of the second ecological plant indicator, that is, by the consumption of flocculants (Al2(SO4)3, Na2CO3) in kg per m3 of waste water of the Oil Refinery of Belgrade for the given period 2008-2010. The RNB case study confirms the improvement of environmental performance using the ISO 14001 standard.

  12. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    Science.gov (United States)

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.
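
    The spatial partitioning idea underlying the system above can be illustrated with a toy grid index: points are bucketed into fixed-size tiles, and a window query scans only the tiles intersecting the query rectangle. The tile size and points are invented; Hadoop-GIS does this at scale, distributing tiles across MapReduce tasks and handling objects that cross tile boundaries.

    ```python
    # Toy grid-based spatial partition index with a rectangular window query.
    from collections import defaultdict

    TILE = 10.0  # tile edge length (illustrative)

    def tile_of(x, y):
        return (int(x // TILE), int(y // TILE))

    def build_index(points):
        index = defaultdict(list)
        for p in points:
            index[tile_of(*p)].append(p)
        return index

    def window_query(index, xmin, ymin, xmax, ymax):
        hits = []
        # scan only tiles intersecting the query window
        for tx in range(int(xmin // TILE), int(xmax // TILE) + 1):
            for ty in range(int(ymin // TILE), int(ymax // TILE) + 1):
                for x, y in index.get((tx, ty), ()):
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append((x, y))
        return sorted(hits)

    points = [(1, 1), (5, 12), (15, 3), (25, 25), (8, 9)]
    index = build_index(points)
    print(window_query(index, 0, 0, 10, 10))
    ```

    Because only candidate tiles are touched, query cost scales with the window size rather than the full dataset, which is the same effect the global partition index provides across a cluster.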

  13. Water erosion susceptibility mapping by applying Stochastic Gradient Treeboost to the Imera Meridionale River Basin (Sicily, Italy)

    Science.gov (United States)

    Angileri, Silvia Eleonora; Conoscenti, Christian; Hochschild, Volker; Märker, Michael; Rotigliano, Edoardo; Agnesi, Valerio

    2016-06-01

    Soil erosion by water constitutes a serious problem affecting various countries. In the last few years, a number of studies have adopted statistical approaches for erosion susceptibility zonation. In this study, the Stochastic Gradient Treeboost (SGT) was tested as a multivariate statistical tool for exploring, analyzing and predicting the spatial occurrence of rill-interrill erosion and gully erosion. This technique implements the stochastic gradient boosting algorithm with a tree-based method. The study area is a 9.5 km2 river catchment located in central-northern Sicily (Italy), where water erosion processes are prevalent, and affect the agricultural productivity of local communities. In order to model soil erosion by water, the spatial distribution of landforms due to rill-interrill and gully erosion was mapped and 12 environmental variables were selected as predictors. Four calibration and four validation subsets were obtained by randomly extracting sets of negative cases, both for rill-interrill erosion and gully erosion models. The results of validation, based on receiving operating characteristic (ROC) curves, showed excellent to outstanding accuracies of the models, and thus a high prediction skill. Moreover, SGT allowed us to explore the relationships between erosion landforms and predictors. A different suite of predictor variables was found to be important for the two models. Elevation, aspect, landform classification and land-use are the main controlling factors for rill-interrill erosion, whilst the stream power index, plan curvature and the topographic wetness index were the most important independent variables for gullies. Finally, an ROC plot analysis made it possible to define a threshold value to classify cells according to the presence/absence of the two erosion processes. Hence, by heuristically combining the resulting rill-interrill erosion and gully erosion susceptibility maps, an integrated water erosion susceptibility map was created. 

  14. A computational approach for functional mapping of quantitative trait loci that regulate thermal performance curves.

    Directory of Open Access Journals (Sweden)

    John Stephen Yap

    2007-06-01

    Full Text Available Whether and how the thermal reaction norm is under genetic control is fundamental to understanding the mechanistic basis of adaptation to novel thermal environments. However, the genetic study of the thermal reaction norm is difficult because it is often expressed as a continuous function or curve. Here we derive a statistical model for dissecting thermal performance curves into individual quantitative trait loci (QTL) with the aid of a genetic linkage map. The model is constructed within the maximum likelihood context and implemented with the EM algorithm. It integrates the biological principle of responses to temperature into a framework for genetic mapping through rigorous mathematical functions established to describe the pattern and shape of thermal reaction norms. The biological advantages of the model lie in the decomposition of the genetic causes for the thermal reaction norm into its biologically interpretable modes, such as hotter-colder, faster-slower and generalist-specialist, as well as the formulation of a series of hypotheses at the interface between genetic actions/interactions and temperature-dependent sensitivity. The model is also meritorious in statistics because the precision of parameter estimation and power of QTL detection can be increased by modeling the mean-covariance structure with a small set of parameters. The results from simulation studies suggest that the model displays favorable statistical properties and can be robust in practical genetic applications. The model provides a conceptual platform for testing many ecologically relevant hypotheses regarding organismic adaptation within the Eco-Devo paradigm.
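
    As a toy illustration of fitting a parametric thermal performance curve (not the paper's QTL model), a Gaussian-shaped reaction norm can be recovered from noisy measurements, since its logarithm is quadratic in temperature. All numbers are synthetic.

```python
import numpy as np

# Synthetic Gaussian-shaped thermal performance curve: peak performance
# w_max at T_opt with breadth sigma. Because log w is quadratic in T,
# a degree-2 polynomial fit recovers the curve parameters.
T = np.linspace(10.0, 40.0, 31)
w_true = 2.0 * np.exp(-(T - 28.0) ** 2 / (2 * 4.0 ** 2))
rng = np.random.default_rng(0)
w = w_true * np.exp(rng.normal(0.0, 0.01, T.size))   # multiplicative noise

c2, c1, c0 = np.polyfit(T, np.log(w), 2)
T_opt = -c1 / (2 * c2)                     # temperature of peak performance
sigma = np.sqrt(-1 / (2 * c2))             # curve breadth
w_max = np.exp(c0 - c1 ** 2 / (4 * c2))    # peak performance
```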

  15. A new methodology for strategic planning using technological maps and detection of emerging research fronts applied to radiopharmacy

    International Nuclear Information System (INIS)

    Didio, Robert Joseph

    2011-01-01

    This research aims at the development of a new methodology to support strategic planning, using the process of elaborating technological roadmaps (TRM), combined with the detection of emerging research fronts in databases of scientific publications and patents. The innovation introduced in this research is the customization of the TRM process to radiopharmacy and, specifically, its association with the technique of detecting emerging research fronts, in order to validate results and to establish a new and very useful methodology for the strategic planning of this business area. The business unit DIRF - Diretoria de Radiofarmacia - of IPEN CNEN/SP was used as the base for the study and implementation of the methodology presented in this work. (author)

  16. PERFORMANCE IMPROVEMENT OF IDMA SCHEME USING CHAOTIC MAP INTERLEAVERS FOR FUTURE RADIO COMMUNICATION

    Directory of Open Access Journals (Sweden)

    Aasheesh Shukla

    2017-06-01

    Full Text Available In this paper, chaos based interleavers are proposed for the performance improvement of Interleave Division Multiple Access (henceforth IDMA) for future radio communication (FRC) requirements. IDMA can be viewed as a modified form of direct sequence code division multiple access (DS-CDMA), with the same spreading sequences and user-specific interleavers to distinguish the users in a multi-user environment. In IDMA systems, the role of the interleaver is pre-eminent, and an efficient interleaver contributes to optimizing system performance. The random interleaver is the popular baseline in IDMA. The performance of chaos based interleavers is compared to the random interleaver. Simulation results authenticate the performance of chaos based IDMA. Furthermore, the proposed chaotic map interleavers have lower computational complexity and are more bandwidth-efficient than the prevailing interleaver algorithms in the domain. The IDMA system model uses BPSK modulation and a repetition coder with a code rate of ½. The system is simulated in MATLAB, and results show the BER superiority of chaotic interleaver based IDMA without the need for extra storage resources and with less computational complexity.
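
    A user-specific chaotic interleaver can be sketched by iterating the logistic map from a user-dependent seed and ranking the resulting sequence; the map parameters below are illustrative, not taken from the paper.

```python
def chaotic_interleaver(n, x0=0.7, r=3.99):
    """Length-n permutation from logistic-map iterates; x0 and r act as
    the user-specific seed (illustrative values)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)                       # logistic map: x' = r x (1 - x)
        xs.append(x)
    return sorted(range(n), key=xs.__getitem__)   # rank the chaotic sequence

perm = chaotic_interleaver(16)
chips = list(range(16))                     # chip sequence to be interleaved
interleaved = [chips[i] for i in perm]

# The receiver de-interleaves by inverting the permutation.
recovered = [0] * len(perm)
for pos, src in enumerate(perm):
    recovered[src] = interleaved[pos]
```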

  17. Sustainable-value stream mapping to evaluate sustainability performance: case study in an Indonesian furniture company

    Directory of Open Access Journals (Sweden)

    Hartini Sri

    2018-01-01

    Full Text Available Lean manufacturing tools do not consider environmental and societal benefits. The conventional value stream mapping (VSM) methodology examines the economics of a manufacturing line, most of which are expressed in terms of time (cycle time, lead time, change-out time, etc.). Incorporating the capability to capture environmental and societal performance visually through VSMs will increase its usefulness as a tool that can be used to assess manufacturing operations from a sustainability perspective. A number of studies have addressed the extension of VSM to incorporate additional criteria. A vast majority of these efforts have focused on adding energy-related metrics to VSMs, while several other studies refer to ‘sustainable’ VSM by including environmental performance in conventional VSMs. This research has developed a VSM method integrated with environmental and social metrics to support sustainable manufacturing. The proposed technique is capable of visualizing and evaluating manufacturing process performance from a sustainability viewpoint. The capability of the proposed technique has been tested in an application study at a furniture company. The study provides insights that help practitioners visualize process performance in economic, environmental and social terms.

  18. Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data

    Science.gov (United States)

    Chierici, F.; Embriaco, D.; Morucci, S.

    2017-12-01

    Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event generated by the Tohoku earthquake on March 11th 2011, using data recorded by several tide gauges scattered all over the Pacific area.
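
    The detrend-then-threshold idea can be sketched on synthetic tide-gauge data (a simplified stand-in for the TDA's recursive real-time filters): fit the previous hour of sea level with a line, extrapolate it one step ahead, and flag residuals exceeding a threshold. Amplitudes, windows and the threshold are invented.

```python
import numpy as np

# Synthetic tide-gauge record: semidiurnal tide plus a short tsunami-like
# pulse centred at t = 10 h.
dt = 60.0                                        # sampling interval [s]
t = np.arange(0.0, 24 * 3600, dt)
rng = np.random.default_rng(1)
tide = 0.8 * np.sin(2 * np.pi * t / (12.42 * 3600))
pulse = 0.3 * np.exp(-(((t - 10 * 3600) / 600.0) ** 2))
level = tide + pulse + rng.normal(0.0, 0.005, t.size)

# Real-time tide removal: fit a line to the previous hour and extrapolate
# one step ahead; the residual isolates short-period (tsunami-band) energy.
win = int(3600 / dt)
resid = np.zeros_like(level)
for i in range(win, t.size):
    a, b = np.polyfit(t[i - win:i], level[i - win:i], 1)
    resid[i] = level[i] - (a * t[i] + b)

threshold = 0.1                                  # detection threshold [m]
alarms = t[np.abs(resid) > threshold]            # first alarm just before 10 h
```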

  19. DOES GENDER EQUALITY LEAD TO BETTER-PERFORMING ECONOMIES? A BAYESIAN CAUSAL MAP APPROACH

    Directory of Open Access Journals (Sweden)

    Yelda YÜCEL

    2017-01-01

    Full Text Available This study explores the existence of relationships between gender inequalities –represented by the components of the World Economic Forum (WEF Global Gender Gap Index– and the major macroeconomic indicators. The relationships within gender inequalities in education, the labour market, health and the political arena, and between gender inequalities and gross macroeconomic aggregates were modelled with the Bayesian Causal Map, an effective tool that is used to analyze cause-effect relations and conditional dependencies between variables. A data set of 128 countries during the period 2007–2011 is used. Findings reveal that some inequalities have high levels of interaction with each other. In addition, eradicating gender inequalities is found to be associated with better economic performance, mainly in the form of higher gross domestic product growth, investment, and competitiveness.

  20. Multivariate regression applied to the performance optimization of a countercurrent ultracentrifuge - a preliminary study

    International Nuclear Information System (INIS)

    Migliavacca, Elder; Andrade, Delvonei Alves de

    2004-01-01

    In this work, the least-squares methodology with covariance matrix is applied to determine a data curve fitting in order to obtain a performance function for the separative power δU of an ultracentrifuge as a function of variables that are experimentally controlled. The experimental data refer to 173 experiments on the ultracentrifugation process for uranium isotope separation. The experimental uncertainties related to these independent variables are considered in the calculation of the experimental separative power values, determining an experimental data input covariance matrix. The process control variables, which significantly influence the δU values, are chosen in order to give information on the ultracentrifuge behaviour when submitted to several levels of feed flow F and cut θ. After the goodness-of-fit validation of the model, a residual analysis is carried out to verify the assumptions of randomness and independence and, in particular, the existence of residual heteroscedasticity with respect to any regression model variable. Response curves relating the separative power to the control variables F and θ are drawn to compare the fitted model with the experimental data and, finally, to calculate their optimized values. (author)
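
    Covariance-weighted least squares of this kind can be sketched in a few lines; the quadratic response in feed flow F, the uncertainty model and all numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
F = np.linspace(0.5, 1.5, 12)                    # feed flow (arbitrary units)
X = np.column_stack([np.ones_like(F), F, F ** 2])
beta_true = np.array([0.2, 1.1, -0.4])           # invented response surface
sigma = 0.02 * (1 + F)                           # heteroscedastic uncertainties
V = np.diag(sigma ** 2)                          # input data covariance matrix
y = X @ beta_true + rng.normal(0.0, sigma)

# Generalised least squares: solve (X' V^-1 X) beta = X' V^-1 y.
Vi = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
cov_beta = np.linalg.inv(X.T @ Vi @ X)           # parameter covariance matrix
```

    With a diagonal covariance matrix this is equivalent to ordinary least squares on the whitened system (each row divided by its uncertainty).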

  1. Plume Tracker: Interactive mapping of volcanic sulfur dioxide emissions with high-performance radiative transfer modeling

    Science.gov (United States)

    Realmuto, Vincent J.; Berk, Alexander

    2016-11-01

    We describe the development of Plume Tracker, an interactive toolkit for the analysis of multispectral thermal infrared observations of volcanic plumes and clouds. Plume Tracker is the successor to MAP_SO2, and together these flexible and comprehensive tools have enabled investigators to map sulfur dioxide (SO2) emissions from a number of volcanoes with TIR data from a variety of airborne and satellite instruments. Our objective for the development of Plume Tracker was to improve the computational performance of the retrieval procedures while retaining the accuracy of the retrievals. We have achieved a 300 × improvement in the benchmark performance of the retrieval procedures through the introduction of innovative data binning and signal reconstruction strategies, and improved the accuracy of the retrievals with a new method for evaluating the misfit between model and observed radiance spectra. We evaluated the accuracy of Plume Tracker retrievals with case studies based on MODIS and AIRS data acquired over Sarychev Peak Volcano, and ASTER data acquired over Kilauea and Turrialba Volcanoes. In the Sarychev Peak study, the AIRS-based estimate of total SO2 mass was 40% lower than the MODIS-based estimate. This result was consistent with a 45% reduction in the AIRS-based estimate of plume area relative to the corresponding MODIS-based estimate. In addition, we found that our AIRS-based estimate agreed with an independent estimate, based on a competing retrieval technique, within a margin of ± 20%. In the Kilauea study, the ASTER-based concentration estimates from 21 May 2012 were within ± 50% of concurrent ground-level concentration measurements. In the Turrialba study, the ASTER-based concentration estimates on 21 January 2012 were in exact agreement with SO2 concentrations measured at plume altitude on 1 February 2012.

  2. Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions

    Energy Technology Data Exchange (ETDEWEB)

    Grootjans, Willem [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology; Meeuwis, Antoi P.W.; Gotthardt, Martin; Visser, Eric P. [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Slump, Cornelis H. [Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Geus-Oei, Lioe-Fee de [Radboud Univ. Medical Center, Nijmegen (Netherlands). Dept. of Radiology and Nuclear Medicine; Univ. Twente, Enschede (Netherlands). MIRA Inst. for Biomedical Technology and Technical Medicine; Leiden Univ. Medical Center (Netherlands). Dept. of Radiology

    2016-07-01

    Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion is characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms to reconstruct clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered-back-projection (FBP), OSEM and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7 respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4
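
    The image-quality metrics used above follow directly from ROI statistics. A NEMA-style sketch with made-up counts and activity ratio:

```python
import numpy as np

sphere_mean = 430.0                                       # hot-sphere ROI mean
bkg_means = np.array([98.0, 102.0, 95.0, 105.0, 100.0])   # background ROI means
true_ratio = 8.0                                          # known activity ratio

crc = 100 * (sphere_mean / bkg_means.mean() - 1) / (true_ratio - 1)  # CRC [%]
n_pct = 100 * bkg_means.std(ddof=1) / bkg_means.mean()               # N% [%]
cnr = crc / n_pct                                         # CNR = CRC / N%
```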

  3. 40 CFR 60.1025 - Do subpart E new source performance standards also apply to my municipal waste combustion unit?

    Science.gov (United States)

    2010-07-01

    ... standards also apply to my municipal waste combustion unit? 60.1025 Section 60.1025 Protection of... NEW STATIONARY SOURCES Standards of Performance for Small Municipal Waste Combustion Units for Which... municipal waste combustion unit? If this subpart AAAA applies to your municipal waste combustion unit, then...

  4. Long-Term Annual Mapping of Four Cities on Different Continents by Applying a Deep Information Learning Method to Landsat Data

    Directory of Open Access Journals (Sweden)

    Haobo Lyu

    2018-03-01

    Full Text Available Urbanization is a substantial contributor to anthropogenic environmental change, and often occurs at a rapid pace that demands frequent and accurate monitoring. Time series of satellite imagery collected at fine spatial resolution using stable spectral bands over decades are most desirable for this purpose. In practice, however, temporal spectral variance arising from variations in atmospheric conditions, sensor calibration, cloud cover, and other factors complicates extraction of consistent information on changes in urban land cover. Moreover, the construction and application of effective training samples is time-consuming, especially at continental and global scales. Here, we propose a new framework for satellite-based mapping of urban areas based on transfer learning and deep learning techniques. We apply this method to Landsat observations collected during 1984–2016 and extract annual records of urban areas in four cities in the temperate zone (Beijing, New York, Melbourne, and Munich. The method is trained using observations of Beijing collected in 1999, and then used to map urban areas in all target cities for the entire 1984–2016 period. The method addresses two central challenges in long term detection of urban change: temporal spectral variance and a scarcity of training samples. First, we use a recurrent neural network to minimize seasonal urban spectral variance. Second, we introduce an automated transfer strategy to maximize information gain from limited training samples when applied to new target cities in similar climate zones. Compared with other state-of-the-art methods, our method achieved comparable or even better accuracy: the average change detection accuracy during 1984–2016 is 89% for Beijing, 94% for New York, 93% for Melbourne, and 89% for Munich, and the overall accuracy of single-year urban maps is approximately 96 ± 3% among the four target cities. The results demonstrate the practical potential and suitability

  5. Numerical investigation and performance characteristic mapping of an Archimedean screw hydroturbine

    Science.gov (United States)

    Schleicher, W. Chris

    Computational Fluid Dynamics (CFD) is a crucial tool in the design and analysis of hydraulic machinery, especially in the design of a micro hydro turbine. The micro hydro turbine in question is for a low head (less than 60 meters), low volumetric flow rate (0.005 m3/s to 0.5 m3/s) application with rotation rates varying from 200 RPM to 1500 RPM. The design of the runner geometry is discussed, specifically a non-uniform Archimedean spiral with an outer diameter of 6 inches and length of 19.5 inches. The transient simulation method, making use of a frame of reference change and a rotating mesh between time-steps, is explained, as well as the corresponding boundary conditions. Both simulation methods are compared and are determined to produce similar results. The rotating frame of reference method was determined to be the most suitable method for the mapping of performance characteristics such as required head, torque, power, and efficiency. Results of simulations for a non-uniform Archimedean spiral are then presented. First, a spatial and temporal convergence study is conducted to make sure that the results are independent of time-step and mesh selection. Performance characteristics of a non-uniform pitched blade turbine are determined for a wide range of volumetric flow rates and rotation rates. The maximum efficiency of the turbine is calculated to be around 72% for the design of the turbine blade considered in the present study.
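
    Hydraulic efficiency follows from the simulated torque, rotation rate, flow rate and head as η = Tω/(ρgQH); the numbers below are illustrative, not from the study.

```python
import math

rho, g = 998.0, 9.81          # water density [kg/m^3], gravity [m/s^2]
Q = 0.05                      # volumetric flow rate [m^3/s]
H = 3.0                       # net head [m]
torque = 10.5                 # shaft torque [N*m]
rpm = 900.0
omega = rpm * 2 * math.pi / 60.0          # shaft speed [rad/s]

p_hydraulic = rho * g * Q * H             # available water power [W]
p_shaft = torque * omega                  # mechanical output power [W]
eta = p_shaft / p_hydraulic               # about 0.67 for these numbers
```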

  6. Concept mapping improves academic performance in problem solving questions in biochemistry subject.

    Science.gov (United States)

    Baig, Mukhtiar; Tariq, Saba; Rehman, Rehana; Ali, Sobia; Gazzaz, Zohair J

    2016-01-01

    To assess the effectiveness of concept mapping (CM) on the academic performance of medical students in problem-solving as well as in declarative knowledge questions, and their perception regarding CM. The present analytical and questionnaire-based study was carried out at Bahria University Medical and Dental College (BUMDC), Karachi, Pakistan. In this analytical study, students were assessed with problem-solving questions (A-type MCQs) and declarative knowledge questions (short essay questions), and 50% of the questions were from the topics learned by CM. Students also filled in a 10-item, 3-point Likert scale questionnaire about their perception regarding the effectiveness of the CM approach, and two open-ended questions were also asked. There was a significant difference in the marks obtained in those problem-solving questions which were learned by CM as compared to those topics which were taught by traditional lectures. CM improved academic performance in problem solving but not in declarative knowledge questions. Students' perception about the effectiveness of CM was overwhelmingly positive.

  7. Mapping performance of the fishery industries innovation: A survey in the North Coast of Java

    Science.gov (United States)

    Yusuf, M.; Legowo, A. M.; Albaarri, A. N.; Darmanto, Y. S.; Agustini, T. W.; Setyastuti, A. I.

    2018-01-01

    This study aimed to map the performance indicators of fishery industry innovation, to be used as inputs for innovation strategies to win market competition, especially in the USA. A survey and in-depth interviews were conducted at 10 industries with shrimp, tuna and crab commodities, representing the Indonesian fishery industry exporting to the USA. Mapping the innovation performance indicators of the Indonesian fishery industry yielded ten alternative strategies to win the market. The survey results indicate that the 'regulation of catch and/or harvest of cultivation' factor is considered the weakest factor in developing innovation, with a score of 3.3, while the international trade factor is considered the strongest factor in innovation development, with a score of 5.0. An aggressive strategy builds on the industry's internal strengths while constantly watching for opportunities, so that firms can seize the opportunity to win market competition at the right time.

  8. A new framework for performance evaluation system using strategy map: A case study of Islamic Azad University of Semnan

    Directory of Open Access Journals (Sweden)

    Afsaneh Mozaffari

    2013-04-01

    Full Text Available During the past few years, there have been extensive developments at Islamic Azad University, which have led to a reduction of managerial flexibility. Therefore, these organizations concentrate on their strategic management via balanced models such as the Balanced Scorecard (BSC) to consider different organizational perspectives, and it is important to have a good description of organizational strategies and goals. The strategy map is a primary tool for assessing performance across different organizational activities. In this paper, the performance evaluation system of the Islamic Azad University of Semnan is designed using a strategy map as a prominent part of the BSC.

  9. An Applied Study on the Decontamination and Decommissioning of the Map Tube Facility 317 Area Argonne National Laboratory, Chicago

    Energy Technology Data Exchange (ETDEWEB)

    Varley, Geoff; Rusch, Chris [NAC International, Henley-on-Thames (United Kingdom)

    2005-01-01

    The Map Tube Facility (MTF) was a large concrete block structure constructed in 1952 at the Argonne National Laboratory site in the United States, for the purpose of storing radioactive waste. The block contained 129 storage tubes that were positioned vertically in the block during construction. From 1952 through the early 1980s, the MTF was used to store containers of highly radioactive materials. The items stored included: - Nuclear fuel elements, - Nuclear reactor components, - Materials samples, - Irradiated metal objects (bolts, wire, rods, etc), - Concrete-encased objects. After MTF operations were discontinued in the early 1980s, most of the materials were removed from most of the tubes. Decontamination and decommissioning of the MTF took place in 1994. The objective was to eliminate the radiological and chemical materials within the MTF tubes to prevent ground water and soil contamination. Once these materials were removed, the block would no longer be a source of contamination (chemical or radioactive) and could then remain in place without risk to the environment. The decontamination scope included the following actions. 1. Mechanically clean each tube (wire brush), 2. Dewater each tube, 3. Remove the debris and sludge from the bottom of each tube, 4. Fill each tube with concrete, 5. Remove the tubes using a core drilling technique. Project constraints precluded the use of excavation around the facility and sectioning of the MTF block or simple demolition, which led to the use of the core drilling technique. The cost of decommissioning the MTF was approximately $2.6 million (1994 money values). Escalating this at 2.5 percent per year to January 2005 and converting to Swedish currency at the current exchange rate (July 2005 approximately 7.6 SEK/$) gives an equivalent cost today of MSEK 25.
The AT facility in Studsvik is considerably larger than the MTF facility in Argonne, between six and seven times in terms of volume but with storage tube depth somewhat
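
    The cost escalation quoted above can be reproduced: 2.6 M$ (1994 money values) escalated at 2.5 percent per year and converted at 7.6 SEK/$. The number of escalation years is an assumption (10 full years, 1995 through 2004), chosen because it matches the report's rounded figure of MSEK 25.

```python
cost_musd_1994 = 2.6
escalated_musd = cost_musd_1994 * 1.025 ** 10   # ~3.33 M$ in Jan 2005 money
msek = escalated_musd * 7.6                     # ~25 MSEK, as in the report
```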

  10. An Applied Study on the Decontamination and Decommissioning of the Map Tube Facility 317 Area Argonne National Laboratory, Chicago

    International Nuclear Information System (INIS)

    Varley, Geoff; Rusch, Chris

    2005-01-01

    The Map Tube Facility (MTF) was a large concrete block structure constructed in 1952 at the Argonne National Laboratory site in the United States, for the purpose of storing radioactive waste. The block contained 129 storage tubes that were positioned vertically in the block during construction. From 1952 through the early 1980s, the MTF was used to store containers of highly radioactive materials. The items stored included: - Nuclear fuel elements, - Nuclear reactor components, - Materials samples, - Irradiated metal objects (bolts, wire, rods, etc), - Concrete-encased objects. After MTF operations were discontinued in the early 1980s, most of the materials were removed from most of the tubes. Decontamination and decommissioning of the MTF took place in 1994. The objective was to eliminate the radiological and chemical materials within the MTF tubes to prevent ground water and soil contamination. Once these materials were removed, the block would no longer be a source of contamination (chemical or radioactive) and could then remain in place without risk to the environment. The decontamination scope included the following actions. 1. Mechanically clean each tube (wire brush), 2. Dewater each tube, 3. Remove the debris and sludge from the bottom of each tube, 4. Fill each tube with concrete, 5. Remove the tubes using a core drilling technique. Project constraints precluded the use of excavation around the facility and sectioning of the MTF block or simple demolition, which led to the use of the core drilling technique. The cost of decommissioning the MTF was approximately $2.6 million (1994 money values). Escalating this at 2.5 percent per year to January 2005 and converting to Swedish currency at the current exchange rate (July 2005 approximately 7.6 SEK/$) gives an equivalent cost today of MSEK 25.
The AT facility in Studsvik is considerably larger than the MTF facility in Argonne, between six and seven times in terms of volume but with storage tube depth somewhat

  11. Developing and applying mobility performance measures for freight transportation in urban areas.

    Science.gov (United States)

    2010-12-01

    This report summarizes the activities performed in a one-year study with the objective to develop an understanding of the interrelationships of urban goods movement and congestion and identify performance measures that will help evaluate the impa...

  12. An Examination of the Effects of Argument Mapping on Students' Memory and Comprehension Performance

    Science.gov (United States)

    Dwyer, Christopher P.; Hogan, Michael J.; Stewart, Ian

    2013-01-01

    Argument mapping (AM) is a method of visually diagramming arguments to allow for easy comprehension of core statements and relations. A series of three experiments compared argument map reading and construction with hierarchical outlining, text summarisation, and text reading as learning methods by examining subsequent memory and comprehension…

  13. Machine vision-based high-resolution weed mapping and patch-sprayer performance simulation

    NARCIS (Netherlands)

    Tang, L.; Tian, L.F.; Steward, B.L.

    1999-01-01

    An experimental machine vision-based patch-sprayer was developed. This sprayer was primarily designed to do real-time weed density estimation and variable herbicide application rate control. However, the sprayer also had the capability to do high-resolution weed mapping if proper mapping techniques

  14. Supporting Problem-Solving Performance Through the Construction of Knowledge Maps

    Science.gov (United States)

    Lee, Youngmin; Baylor, Amy L.; Nelson, David W.

    2005-01-01

    The purpose of this article is to provide five empirically-derived guidelines for knowledge map construction tools that facilitate problem solving. First, the combinational representation principle proposes that conceptual and corresponding procedural knowledge should be represented together (rather than separately) within the knowledge map.…

  15. Automatic reduction of large X-ray fluorescence data-sets applied to XAS and mapping experiments

    International Nuclear Information System (INIS)

    Martin Montoya, Ligia Andrea

    2017-02-01

    In this thesis, two automatic methods for the reduction of large fluorescence data sets are presented. The first method is proposed in the framework of BioXAS experiments. The challenge of this experiment is to deal with samples in ultra-dilute concentrations, where the signal-to-background ratio is low. The experiment is performed in fluorescence-mode X-ray absorption spectroscopy with a 100-pixel high-purity Ge detector. The first step consists of reducing 100 fluorescence spectra into one. In this step, outliers are identified by means of the shot noise. Furthermore, a fitting routine whose model includes Gaussian functions for the fluorescence lines and exponentially modified Gaussian (EMG) functions for the scattering lines (with long tails at lower energies) is proposed to extract the line of interest from the fluorescence spectrum. Additionally, the fitting model has an EMG function for each scattering line (elastic and inelastic) at incident energies where they start to be discerned. At these energies, the data reduction is done per detector column to include the angular dependence of scattering. In the second part of this thesis, an automatic method for text separation on palimpsests is presented. Scanning X-ray fluorescence is performed on the parchment, where a spectrum per scanned point is collected. Within this method, each spectrum is treated as a vector forming a basis, which is to be transformed so that the basis vectors are the spectra of each ink. Principal Component Analysis is employed as an initial guess of the sought basis. This basis is further transformed by means of an optimization routine that maximizes the contrast and minimizes the non-negative entries in the spectra. The method is tested on original and self-made palimpsests.
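
    The PCA initial guess can be sketched on synthetic data: per-pixel mixtures of two invented "ink" spectra are decomposed with an SVD, and the number of significant spectral components is recovered from the explained variance.

```python
import numpy as np

# Synthetic scanning-XRF data: each scanned point yields a 64-channel
# spectrum that mixes two ink spectra in varying proportions.
rng = np.random.default_rng(3)
channels = np.arange(64)
ink_a = np.exp(-(channels - 20) ** 2 / 18.0)   # fluorescence line of ink A
ink_b = np.exp(-(channels - 45) ** 2 / 30.0)   # fluorescence line of ink B
mix = rng.uniform(0, 1, size=(200, 2))         # per-pixel ink fractions
spectra = mix @ np.vstack([ink_a, ink_b])
spectra = spectra + rng.normal(0.0, 0.01, spectra.shape)

# PCA: centre the data and take the leading right singular vectors;
# vt[:2] spans the ink spectra (up to sign and mixing).
centred = spectra - spectra.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()
n_components = int((explained > 0.01).sum())   # two significant components
```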

  16. Performance Comparison of Reputation Assessment Techniques Based on Self-Organizing Maps in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sabrina Sicari

    2017-01-01

    Full Text Available Many solutions based on machine learning techniques have been proposed in the literature aimed at detecting and promptly counteracting various kinds of malicious attacks (data violation, clone, sybil, neglect, greed, and DoS attacks), which frequently affect Wireless Sensor Networks (WSNs). Besides recognizing the corrupted or violated information, the attackers should also be identified, in order to activate the proper countermeasures for preserving the network’s resources and to mitigate their malicious effects. To this end, techniques adopting Self-Organizing Maps (SOM) for intrusion detection in WSNs were revealed to represent a valuable and effective solution to the problem. In this paper, the mechanism, namely Good Network (GoNe), which is based on SOM and is able to assess the reliability of the sensor nodes, is compared with another relevant and similar work existing in the literature. Extensive performance simulations, in terms of nodes’ classification, attacks’ identification, data accuracy, energy consumption, and signalling overhead, have been carried out in order to demonstrate the better feasibility and efficiency of the proposed solution in the WSN field.
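
    A minimal SOM in the spirit of such reputation assessment (the two-dimensional feature set here is hypothetical, not GoNe's actual input): node behaviour vectors are clustered on a small one-dimensional map so that misbehaving nodes map to a different best-matching unit than well-behaved ones.

```python
import numpy as np

rng = np.random.default_rng(4)
good = rng.normal([0.9, 0.9], 0.03, size=(40, 2))   # well-behaved nodes
bad = rng.normal([0.3, 0.2], 0.05, size=(8, 2))     # misbehaving nodes
data = np.vstack([good, bad])

grid = rng.uniform(0.0, 1.0, size=(4, 2))           # 1-D map of 4 neurons
n_epochs = 50
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)               # decaying learning rate
    for x in data[rng.permutation(len(data))]:
        bmu = int(np.argmin(((grid - x) ** 2).sum(axis=1)))  # best matching unit
        for j in range(len(grid)):                  # Gaussian neighbourhood update
            h = np.exp(-((j - bmu) ** 2) / 2.0)
            grid[j] += lr * h * (x - grid[j])

# After training, the two behaviour clusters win different neurons.
bmu_good = int(np.argmin(((grid - np.array([0.9, 0.9])) ** 2).sum(axis=1)))
bmu_bad = int(np.argmin(((grid - np.array([0.3, 0.2])) ** 2).sum(axis=1)))
```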

  17. Use of geological mapping tools to improve the hydraulic performance of SuDS.

    Science.gov (United States)

    Bockhorn, Britta; Klint, Knud Erik Strøyberg; Jensen, Marina Bergen; Møller, Ingelise

    2015-01-01

    Most cities in Denmark are situated on low-permeability, clay-rich deposits. These sediments are of glacial origin and rank among the most heterogeneous, with hydraulic conductivities spanning several orders of magnitude. This heterogeneity has obvious consequences for the sizing of sustainable urban drainage systems (SuDS). We have tested methods to reveal geological heterogeneity at field scale, to identify the most suitable sites for the placement of infiltration elements and to minimize their required size. We assessed the geological heterogeneity of a clay till plain in Eastern Jutland, Denmark, by measuring the shallow subsurface resistivity with a geoelectrical multi-electrode system. To confirm the resistivity data we conducted a spear-auger mapping. The exposed sediments ranged from clay tills through sandy clay tills to sandy tills and corresponded well with the geoelectrical data. To verify the value of geological information for the placement of infiltration elements, we carried out a number of infiltration tests on geologically different areas across the field and observed infiltration rates two times higher in the sandy till area than in the clay till area, demonstrating that the hydraulic performance of SuDS can be increased considerably, and oversizing avoided, if field-scale geological heterogeneity is revealed before placing SuDS.

  18. elPrep: High-Performance Preparation of Sequence Alignment/Map Files for Variant Calling.

    Directory of Open Access Journals (Sweden)

    Charlotte Herzeel

    Full Text Available elPrep is a high-performance tool for preparing sequence alignment/map files for variant calling in sequencing pipelines. It can be used as a replacement for SAMtools and Picard for preparation steps such as filtering, sorting, marking duplicates, reordering contigs, and so on, while producing identical results. What sets elPrep apart is its software architecture, which allows executing preparation pipelines in a single pass through the data, no matter how many preparation steps are used in the pipeline. elPrep is designed as a multithreaded application that runs entirely in memory, avoids repeated file I/O, and merges the computation of several preparation steps to significantly speed up the execution time. For example, for a preparation pipeline of five steps on a whole-exome BAM file (NA12878), we reduce the execution time from about 1:40 hours, when using a combination of SAMtools and Picard, to about 15 minutes when using elPrep, while utilising the same server resources, here 48 threads and 23GB of RAM. For the same pipeline on whole-genome data (NA12878), elPrep reduces the runtime from 24 hours to less than 5 hours. As a typical clinical study may contain sequencing data for hundreds of patients, elPrep can remove several hundreds of hours of computing time, and thus substantially reduce analysis time and cost.
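    elPrep's key idea, executing all preparation steps in a single pass, can be illustrated with chained lazy generators; a hedged toy sketch (the record layout and step names are invented for illustration, not elPrep's actual API):

    ```python
    def filter_unmapped(reads):
        # drop reads that have no mapping position
        for r in reads:
            if r["pos"] is not None:
                yield r

    def mark_duplicates(reads):
        # flag reads sharing a mapping position with an earlier read
        seen = set()
        for r in reads:
            r = dict(r, duplicate=r["pos"] in seen)
            seen.add(r["pos"])
            yield r

    def run_pipeline(reads, *stages):
        stream = iter(reads)
        for stage in stages:
            stream = stage(stream)   # stages are chained lazily
        return list(stream)          # data flows through once, when consumed

    reads = [{"pos": 100}, {"pos": None}, {"pos": 100}, {"pos": 200}]
    out = run_pipeline(reads, filter_unmapped, mark_duplicates)
    ```

    Note that a sorting step cannot stream and must buffer, which is one reason elPrep keeps the whole data set in memory.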

  19. Pressure mapping and performance of the compression bandage/garment for venous leg ulcer treatment.

    Science.gov (United States)

    Ghosh, S; Mukhopadhyay, A; Sikka, M; Nagla, K S

    2008-08-01

    A study has been conducted on commercially available compression bandages with regard to their performance over time. Pressure mapping of these bandages was done using a fabricated pressure-measuring device on a mannequin leg to observe the effect on pressure of creep, fabric friction and the angle of bandaging. The results show that creep behavior, frictional behavior and the angle of bandaging have a significant effect on the pressure profile generated by the bandages during application. The regression analysis shows that surface friction restricts slippage in a multilayer system. The diameter of the limb and the amount of stretch given to the bandage during application also have a definite impact on the bandage pressure. In the case of compression garments, washing improves the pressure generated, but not to the level of a virgin garment. Comparing the two compression materials, i.e. bandage and garment, it is found that the higher percentage of elastomeric material and the much closer construction of the garment provide better holding power and a more homogeneous pressure distribution.

  20. Regular Routes: Deep Mapping a Performative Counterpractice for the Daily Commute

    Directory of Open Access Journals (Sweden)

    Laura Bissell

    2015-09-01

    Full Text Available This article offers a textual “deep map” of a series of experimental commutes undertaken in the west of Scotland in 2014. Recent developments in the field of transport studies have reconceived travel time as a far richer cultural experience than previous utilitarian and economic approaches to the “problem” of commuting allowed. Understanding their own commutes in these terms—as spaces of creativity, productivity and transformation—the authors trace the development of a performative “counterpractice” for their daily journeys between home and work. Deep mapping—as a form of “theory-informed story-telling”—is employed as a productive strategy to document this reimagination of ostensibly quotidian and functional travel. Importantly, this particular stage of the project is not presented as an end-point. Striving to develop an ongoing creative engagement with landscape, the authors continue this exploratory mobile research by connecting to other commuters’ journeys, and proposing a series of “strategies” for reimagining the daily commute; a list of prompts for future action within the routines and spaces of commuting. A range of alternative approaches to commuting are offered here to anyone who regularly travels to and from work to employ or develop as they wish, extending the mapping process to other routes and contexts.

  1. Residual recovery and yield performance of nitrogen fertilizer applied at sugarcane planting

    Directory of Open Access Journals (Sweden)

    Henrique Coutinho Junqueira Franco

    2015-12-01

    Full Text Available ABSTRACT: The low effectiveness of nitrogen (N) fertilizer is a substantial concern that threatens global sugarcane production. The aim of the research reported in this paper was to assess the residual effect of N fertilizer applied at sugarcane planting over four crop seasons in relation to sugarcane crop yield. Toward this end, three field experiments were established in the state of São Paulo, Brazil, between February 2005 and July 2009, in a randomized block design with four treatments: 0, 40, 80 and 120 kg ha−1 of N applied as urea during sugarcane planting. Within each plot, a microplot was established to which 15N-labeled urea was applied. The application of N at planting increased plant-cane yield at two of the three sites and sucrose content at the other, whereas the only residual effect was higher sucrose content in one of the following ratoons. The combined effect was an increase in sugar yield for three of the 11 crop seasons evaluated. Over the crop cycle of a plant cane and three ratoon crops, only 35 % of the applied N was recovered, split 75, 13, 7 and 5 % among the plant cane and the first, second and third ratoons, respectively. These findings document the low efficiency of N recovery by sugarcane, which increases the risk that excessive N fertilization will reduce profitability and have an adverse effect on the environment.

  2. Towards Sustainable Performance Measurement Frameworks for Applied Research in Canadian Community Colleges and Institutes

    Science.gov (United States)

    Williams, Keith

    2014-01-01

    Applied Research (AR) in Canadian community colleges is driven by a mandate, via the collective voice of Colleges and Institutes Canada--a national voluntary membership association of publicly supported colleges and related institutions--to address issues of interest to industry, government, and/or community. AR is supported through significant…

  3. Perceptual-cognitive expertise in sport: some considerations when applying the expert performance approach.

    Science.gov (United States)

    Williams, A Mark; Ericsson, K Anders

    2005-06-01

    The number of researchers studying perceptual-cognitive expertise in sport is increasing. The intention in this paper is to review the currently accepted framework for studying expert performance and to consider implications for undertaking research work in the area of perceptual-cognitive expertise in sport. The expert performance approach presents a descriptive and inductive approach for the systematic study of expert performance. The nature of expert performance is initially captured in the laboratory using representative tasks that identify reliably superior performance. Process-tracing measures are employed to determine the mechanisms that mediate expert performance on the task. Finally, the specific types of activities that lead to the acquisition and development of these mediating mechanisms are identified. General principles and mechanisms may be discovered and then validated by more traditional experimental designs. The relevance of this approach to the study of perceptual-cognitive expertise in sport is discussed and suggestions for future work highlighted.

  4. Applying the roofline performance model to the intel xeon phi knights landing processor

    OpenAIRE

    Doerfler, D; Deslippe, J; Williams, S; Oliker, L; Cook, B; Kurth, T; Lobet, M; Malas, T; Vay, JL; Vincenti, H

    2016-01-01

    © Springer International Publishing AG 2016. The Roofline Performance Model is a visually intuitive method used to bound the sustained peak floating-point performance of any given arithmetic kernel on any given processor architecture. In the Roofline, performance is nominally measured in floating-point operations per second as a function of arithmetic intensity (operations per byte of data). In this study we determine the Roofline for the Intel Knights Landing (KNL) processor, determining t...
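    The Roofline bound itself is a one-line formula: attainable performance is the lesser of the machine's peak compute rate and the product of memory bandwidth with the kernel's arithmetic intensity. A sketch with assumed, illustrative KNL-like figures (not the paper's measured values):

    ```python
    def roofline(peak_gflops, bandwidth_gbs, intensity):
        """Attainable GFLOP/s for a kernel of the given arithmetic intensity
        (flops per byte): capped either by compute peak or by memory traffic."""
        return min(peak_gflops, bandwidth_gbs * intensity)

    # Assumed numbers: ~2600 GFLOP/s double-precision peak, ~450 GB/s
    # MCDRAM bandwidth.
    PEAK, BW = 2600.0, 450.0
    ridge_point = PEAK / BW  # intensity above which a kernel is compute-bound
    ```

    A kernel at 1 flop/byte would be memory-bound (450 GFLOP/s with these numbers), while one at 10 flops/byte hits the compute ceiling.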

  5. Development of Design Concept and Applied Technology for RCP Performance Test Facility

    International Nuclear Information System (INIS)

    Park, Sang Jin; Lee, Jung Ho; Yoon, Seok Ho

    2010-02-01

    A performance test facility for the RCP (reactor coolant pump) is essential to verify the performance and reliability of an RCP before installation in a nuclear power plant. The development of RCPs for new reactor types and the performance verification of the hydraulic revolving body also require an RCP test facility. The design concept of the test loop and the technology of flow-rate measurement are investigated in this research.

  6. An approach for establishing the performance maps of the sc-CO_2 compressor: Development and qualification by means of CFD simulations

    International Nuclear Information System (INIS)

    Pham, H.S.; Alpy, N.; Ferrasse, J.H.; Boutin, O.; Tothill, M.; Quenaut, J.; Gastaldi, O.; Cadiou, T.; Saez, M.

    2016-01-01

    Highlights: • Ability of CFD to predict the performance of a sc-CO_2 test compressor is shown. • Risk of vapor pocket occurrence inside a scale 1:1 compressor is highlighted. • Limitation of previous performance-map approaches in modeling the real gas behavior is shown. • A performance-map approach for the sc-CO_2 compressor is proposed and validated. - Abstract: One of the challenges in the performance prediction of the supercritical CO_2 (sc-CO_2) compressor is the real gas behavior of the working fluid near the critical point. This study deals with the establishment of an approach that copes with this particularity by expressing compressor performance maps in adequate reduced coordinates (i.e., suitable dimensionless speed and flow parameters as inputs, and pressure ratio and enthalpy rise as outputs), while using CFD for validation. Two centrifugal compressor designs have been considered in this work. The first corresponds to a 6 kW small-scale component implemented in a test loop at the Tokyo Institute of Technology. The second corresponds to a 38 MW scale 1:1 design considered at an early stage of a project that investigates the sc-CO_2 cycle for a Small Modular Reactor application. Numerical results on the former have been successfully compared against the experimental data to qualify the ability of CFD to provide a performance database. Results on the latter have revealed a significant decrease in static temperature and pressure during flow acceleration along the leading edge of the impeller blades. Accordingly, the increased risk of vapor pocket appearance inside a sc-CO_2 compressor has been highlighted, and recommendations regarding the choice of on-design inlet conditions and the compressor design have been given to overcome this concern. CFD results on the scale 1:1 compressor have then been used to evaluate the relevance of previous performance-map approaches for a sc-CO_2 compressor application. These include the conventional
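    For reference, the conventional ideal-gas reduced coordinates that such maps generalize are the corrected speed and corrected mass flow; the paper's contribution is choosing coordinates that remain valid for real-gas sc-CO_2 near the critical point. A sketch of the conventional ideal-gas forms only (the reference values are arbitrary illustrative choices):

    ```python
    import math

    def corrected_speed(shaft_speed_rpm, inlet_temp_K, t_ref_K=300.0):
        """Conventional ideal-gas reduced speed: N / sqrt(T / Tref)."""
        return shaft_speed_rpm / math.sqrt(inlet_temp_K / t_ref_K)

    def corrected_flow(mass_flow_kgs, inlet_temp_K, inlet_press_Pa,
                       t_ref_K=300.0, p_ref_Pa=101325.0):
        """Conventional ideal-gas reduced flow: m * sqrt(T / Tref) / (p / pref)."""
        return (mass_flow_kgs * math.sqrt(inlet_temp_K / t_ref_K)
                / (inlet_press_Pa / p_ref_Pa))
    ```

    Near the CO_2 critical point these ideal-gas groupings break down, which is why the paper replaces them with dimensionless parameters suited to real-gas behavior.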

  7. Performance of Differential-Phase-Shift Keying Protocol Applying 1310 nm Up-Conversion Single-Photon Detector

    International Nuclear Information System (INIS)

    Chen-Xu, Feng; Rong-Zhen, Jiao; Wen-Han, Zhang

    2008-01-01

    The performance of the differential-phase-shift keying (DPSK) protocol applying a 1310 nm up-conversion single-photon detector is analysed. The error rate and the communication rate as functions of distance are presented for three quantum key distribution protocols: the Bennett–Brassard 1984, the Bennett–Brassard–Mermin 1992, and the DPSK. We then compare the performance of these three protocols using the 1310 nm up-conversion detector and conclude that the DPSK protocol applying the detector has a significant advantage over the other two: a longer transmission distance and a lower error rate can be achieved. (general)

  8. 20 CFR 669.500 - What performance measures and standards apply to the NFJP?

    Science.gov (United States)

    2010-04-01

    ... account the characteristics of the population to be served and the economic conditions in the service area... must be included in the approved plan. (b) We may develop additional performance indicators with... service area economy and local demographics of eligible MSFW's. The levels of performance for these...

  9. Rutting Performance of Cold-Applied Asphalt Repair Materials for Airfield Pavements

    Science.gov (United States)

    2017-06-23

    this study. Cold mix asphalt materials, further denoted cold mixes, were selected to reasonably represent available products on the market and were... pavement repair, primarily because of the small quantities involved and/or the unavailability of hot-mixed asphalt. These cold-applied mixtures have... poorer rutting resistance than hot mix asphalt because additives, often solvent, are required to provide adequate workability for them to be placed

  10. Performance comparison of two efficient genomic selection methods (gsbay & MixP) applied in aquacultural organisms

    Science.gov (United States)

    Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin

    2017-02-01

    Genomic selection is increasingly popular in animal and plant breeding industries around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two different genomic selection tools, MixP and gsbay, were applied to the genomic evaluation of simulated data and Zhikong scallop (Chlamys farreri) field data. The results were compared with the genomic best linear unbiased prediction (GBLUP) method, which has been applied widely. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be applied for the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of the GEBV ranged from 0.20 to 0.78 with MixP, from 0.21 to 0.67 with gsbay, and from 0.21 to 0.61 with GBLUP. Estimations made by MixP and gsbay are expected to be more reliable than those made by GBLUP. Predictions made by gsbay were more robust, while MixP computed much faster, especially on large-scale data. These results suggest that the algorithms implemented by MixP and gsbay are both feasible for genomic selection in scallop breeding, and that more genotype data will be necessary to produce genomic estimated breeding values with higher accuracy for the industry.
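    Once SNP marker effects have been estimated (by MixP, gsbay, or GBLUP), a candidate's genomic estimated breeding value is simply the genotype-weighted sum of those effects. A minimal sketch with invented marker effects and genotypes:

    ```python
    def gebv(genotypes, snp_effects):
        """Genomic estimated breeding value: allele dosages (0/1/2 copies of
        the reference allele) weighted by the estimated SNP marker effects."""
        return sum(g * e for g, e in zip(genotypes, snp_effects))

    # Invented marker effects and candidate genotypes, for illustration only:
    effects = [0.4, -0.1, 0.25]
    candidates = {"A": [2, 0, 1], "B": [0, 2, 2]}
    ranked = sorted(candidates, key=lambda k: gebv(candidates[k], effects),
                    reverse=True)  # best breeding candidate first
    ```

    Selection then keeps the top-ranked candidates as parents of the next generation, which is what lets genomic selection act early in life.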

  11. Appraising and applying evidence about a diagnostic test during a performance-based assessment

    Directory of Open Access Journals (Sweden)

    Franklin Ellen

    2004-10-01

    Full Text Available Abstract Background The practice of Evidence-based Medicine requires that clinicians assess the validity of published research and then apply the results to patient care. We wanted to assess whether our soon-to-graduate medical students could appraise and apply research about a diagnostic test within a clinical context, and to compare our students with peers trained at other institutions. Methods Fourth-year medical students who had previously demonstrated competency at probability revision, and first-year Internal Medicine residents who were just starting, took part in this research. Following an encounter with a simulated patient, subjects critically appraised a paper about an applicable diagnostic test and revised the patient's pretest probability given the test result. Results The medical students and residents demonstrated similar skills at critical appraisal, correctly answering 4.7 and 4.9, respectively, of 6 questions (p = 0.67). Only one of the 28 medical students (3%) and none of the 15 residents were able to correctly complete the probability revision task (p = 1.00). Conclusions This study found that most students completing medical school are able to appraise an article about a diagnostic test, but few are able to apply the information from the article to a patient. These findings raise questions about the clinical usefulness of the EBM skills possessed by graduating medical students within the area of diagnostic testing.

  12. Applying Service Performance Guarantees to Reduce Risk Perception in the Purchase and Consumption of Higher Education

    Directory of Open Access Journals (Sweden)

    Nooraini Mohamad Sheriff

    2006-12-01

    Full Text Available The intangible nature of education is one contributor to consumers' perception of risk prior to purchase and consumption. These risks include functional, financial, temporal, physical, psychological and social risk, and their presence often makes consumer evaluation prior to purchase and consumption difficult. Invoking a service guarantee is one platform that enables higher educational institutions to minimize such risk perceptions and so induce purchase. Specifically, a service guarantee for higher education entails the application of a teaching performance guarantee. This form of guarantee addresses two important customer groups of higher educational institutions, namely students and faculty members, and covers only a specific performance aspect, such as the instructor's performance. Thus, if students are dissatisfied with an instructor's performance, they are entitled to receive their money back. The imposition of such a teaching performance guarantee would make instructors accountable for certain aspects of their performance. It also establishes a mechanism to solicit feedback, to better understand why and how instructors fail. Consequently, a service performance guarantee creates a high level of customer focus and signals instructors' care towards students.

  13. A fully automatic tool to perform accurate flood mapping by merging remote sensing imagery and ancillary data

    Science.gov (United States)

    D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido

    2016-04-01

    Flooding is one of the most frequent and damaging natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events and for the practical purpose of precisely assessing inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, which moreover offer a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisitions, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of the radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide an accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each with a particular signature in the presence of flood, requires modelling the behavior of different objects in the scene in order to associate them with flood or no-flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, seems to have a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information, or some combination thereof can add useful information to remote sensing data. Suitable methods, able to manage and merge different kinds of data, are therefore particularly needed. In this work, a fully automatic tool based on Bayesian Networks (BNs) [3] and able to perform data fusion is presented. It supplies flood maps
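    One simple way a Bayesian network can fuse SAR observations with ancillary layers is naive-Bayes style: multiply per-layer likelihoods of the observed evidence under the "flood" and "dry" hypotheses and normalize. A hedged sketch (the likelihood values are invented; the paper's actual network structure is richer than this conditional-independence assumption):

    ```python
    def posterior_flood(prior, likelihoods_flood, likelihoods_dry):
        """Naive-Bayes fusion of independent evidence layers (e.g. SAR
        backscatter class, elevation band, distance to the river)."""
        p_f, p_d = prior, 1.0 - prior
        for lf, ld in zip(likelihoods_flood, likelihoods_dry):
            p_f *= lf   # evidence likelihood given flooding
            p_d *= ld   # evidence likelihood given dry ground
        return p_f / (p_f + p_d)

    # Hypothetical per-layer likelihoods for one pixel: low backscatter,
    # low elevation, close to the river.
    p = posterior_flood(0.2, [0.9, 0.8, 0.7], [0.2, 0.3, 0.4])
    ```

    Even with a modest prior, three consistent evidence layers push the posterior flood probability well above one half.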

  14. Performance of non-parametric algorithms for spatial mapping of tropical forest structure

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2016-08-01

    Full Text Available Abstract Background Mapping tropical forest structure is a critical requirement for accurate estimation of emissions and removals from land use activities. With the availability of a wide range of remote sensing imagery of vegetation characteristics from space, development of finer-resolution and more accurate maps has advanced in recent years. However, the mapping accuracy relies heavily on the quality of input layers, the algorithm chosen, and the size and quality of inventory samples for calibration and validation. Results By using airborne lidar data as the “truth” and focusing on the mean canopy height (MCH) as a key structural parameter, we test two commonly used non-parametric techniques, maximum entropy (ME) and random forest (RF), for developing maps over a study site in Central Gabon. Mapping results show that both approaches gain accuracy with more input layers when mapping canopy height at 100 m (1-ha) pixels. The bias-corrected spatial models further improve estimates for small and large trees across the tails of the height distributions, with a trade-off of increasing the overall mean squared error that can be readily compensated by increasing the sample size. Conclusions A significant improvement in tropical forest mapping can be achieved by weighting the number of inventory samples against the choice of image layers and non-parametric algorithm. Without future satellite observations with better sensitivity to forest biomass, maps based on existing data will remain slightly biased towards the mean of the distribution, underestimating the upper tail and overestimating the lower tail.

  15. JACoW Automatic PID performance monitoring applied to LHC cryogenics

    CERN Document Server

    Bradu, Benjamin; Marti, Ruben; Tilaro, Filippo

    2018-01-01

    At CERN, the LHC (Large Hadron Collider) cryogenic system employs about 5000 PID (Proportional Integral Derivative) regulation loops distributed over the 27 km of the accelerator. Tuning all these regulation loops is a complex task, and their systematic monitoring should be done in an automated way to ensure that the overall plant performance is improved by identifying the poorest-performing PID controllers. It is nearly impossible to check the performance of a regulation loop with a classical threshold technique, as the controlled variables can evolve over large operating ranges and the amount of data cannot be checked manually every day. This paper presents the adaptation and application of an existing regulation performance indicator algorithm to the LHC cryogenic system and the different results obtained in the past year of operation. The technique is generic for any PID feedback control loop; it does not use any process model and needs only a few tuning parameters. The publication also describes th...
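    The flavour of such model-free monitoring can be sketched as a normalized error statistic computed per loop and ranked across the plant. This is an illustrative indicator only, not the algorithm used at CERN:

    ```python
    def loop_health(errors, span):
        """Model-free indicator: mean absolute control error normalized by the
        variable's operating span, so loops with different ranges compare fairly."""
        return sum(abs(e) for e in errors) / (len(errors) * span)

    def worst_loops(loops, k=2):
        """loops: {name: (error_samples, operating_span)} -> k poorest performers."""
        scored = {name: loop_health(err, span) for name, (err, span) in loops.items()}
        return sorted(scored, key=scored.get, reverse=True)[:k]

    # Hypothetical loop tags and error windows:
    loops = {"TT101": ([0.1, 0.2, 0.1], 10.0),
             "PT202": ([5.0, 4.0, 6.0], 10.0)}
    flagged = worst_loops(loops, k=1)
    ```

    Ranking rather than thresholding sidesteps the problem, noted above, that a fixed threshold cannot cover variables with very different operating ranges.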

  16. ANALYTIC NETWORK PROCESS AND BALANCED SCORECARD APPLIED TO THE PERFORMANCE EVALUATION OF PUBLIC HEALTH SYSTEMS

    Directory of Open Access Journals (Sweden)

    Marco Aurélio Reis dos Santos

    2015-08-01

    Full Text Available The performance of public health systems is an issue of great concern. After all, to assure people's quality of life, public health systems need different kinds of resources. Balanced Scorecard provides a multi-dimensional evaluation framework. This paper presents the application of the Analytic Network Process and Balanced Scorecard in the performance evaluation of a public health system in a typical medium-sized Southeastern town in Brazil.

  17. Simulating the Sky as Seen by the Square Kilometer Array using the MIT Array Performance Simulator (MAPS)

    Science.gov (United States)

    Matthews, Lynn D.; Cappallo, R. J.; Doeleman, S. S.; Fish, V. L.; Lonsdale, C. J.; Oberoi, D.; Wayth, R. B.

    2009-05-01

    The Square Kilometer Array (SKA) is a proposed next-generation radio telescope that will operate at frequencies of 0.1-30 GHz and be 50-100 times more sensitive than existing radio arrays. Meeting the performance goals of this instrument will require innovative new hardware and software developments, a variety of which are now under consideration. Key to evaluating the performance characteristics of proposed SKA designs and testing the feasibility of new data calibration and processing algorithms is the ability to carry out realistic simulations of radio wavelength arrays under a variety of observing conditions. The MIT Array Performance Simulator (MAPS) (http://www.haystack.mit.edu/ast/arrays/maps/index.html) is an observation simulation package designed to achieve this goal. MAPS accepts an input source list or sky model and generates a model visibility set for a user-defined "virtual observatory", incorporating such factors as array geometry, primary beam shape, field-of-view, and time and frequency resolution. Optionally, effects such as thermal noise, out-of-beam sources, variable station beams, and time/location-dependent ionospheric effects can be included. We will showcase current capabilities of MAPS for SKA applications by presenting results from an analysis of the effects of realistic sky backgrounds on the achievable image fidelity and dynamic range of SKA-like arrays comprising large numbers of small-diameter antennas.

  18. Applying Machine Learning and High Performance Computing to Water Quality Assessment and Prediction

    OpenAIRE

    Ruijian Zhang; Deren Li

    2017-01-01

    Water quality assessment and prediction is an increasingly important issue. Traditional approaches are either time-consuming or limited to assessment alone. In this research, by applying a machine learning algorithm to a long time series of water-attribute data, we generate a decision tree that can predict the following day's water quality in an easy and efficient way. The idea is to combine the traditional ways and computer algorithms. Using machine learning algorithms, the ass...

  19. Relation of project managers' personality and project performance: An approach based on value stream mapping

    Directory of Open Access Journals (Sweden)

    Maurizio Bevilacqua

    2014-09-01

    Full Text Available Purpose: This work investigates the influence of project managers' personality on the success of projects in a multinational corporation. The methodology proposed for analyzing the project managers' personality is based on the Myers-Briggs Type Indicator. Design/methodology/approach: Forty projects concerning new product development (NPD), carried out in 2012 by the multinational corporation, have been analyzed, comparing the profiles of project managers with the results obtained in terms of traditional performance indexes (time delay and budget overrun) and performance indexes usually used in the “Lean Production” sector (waste time and type of “wastes”). A detailed analysis of the most important “wastes” during project development is carried out using the Value Stream Mapping (VSM) technique. Findings and Originality/value: Relying on the Myers–Briggs personality instrument, results show that extroverted managers (as opposed to introverted managers) carry out projects with lower delay and lower waste time. Introverted managers often produce “over-processing” and “defect” types of waste. Moreover, lower delay and budget overruns have been shown by perceiving managers. Research limitations: Regarding the limitations of this work, it is necessary to highlight that we collected data from project managers retrospectively. While we believe that several aspects of our data collection effort helped enhance the accuracy of the results, future research could conduct real-time case studies to get more detailed insights into the proposed relationships and avoid retrospective bias. Moreover, we focused on a single respondent, the project manager. This helped us ensure that their interpretations played an important role in product development, but we could not examine the opinions of team members, which could differ from the project managers' opinions on some questions. Originality/value: This research provides insight useful

  20. Employee perceptions of line management performance: applying the AMO theory to explain the effectiveness of line managers' HRM implementation

    NARCIS (Netherlands)

    Bos-Nehles, Anna Christina; van Riemsdijk, Maarten; Looise, Jan C.

    2013-01-01

    Line managers are today seen as increasingly important in effectively implementing HRM practices. Based on the Ability-Motivation-Opportunity (AMO) theory, we predict that line managers' performance in this regard will depend on their ability to apply HRM practices, and that their motivation and the

  1. The learning theories’ knowledge applied in the performance of distance tutor

    Directory of Open Access Journals (Sweden)

    Fernanda Abreu de Moraes Figueiredo

    2016-07-01

    Full Text Available Abstract: This study aimed to identify, among behaviorism, cognitivism, humanism, sociocultural theory and connectivism, the learning theories most influential on the practice of tutoring, and to apply the most appropriate of them to solving common problems in distance education. For this purpose, a literature review was used. Each of the theories proved to have some influence on the tutor's role, and learning tends to be richer and more effective when different theories are applied together. However, the theories that best substantiate the tutor's role are humanism, sociocultural theory and connectivism. The problems often experienced by students in distance education stem from failures in tutor interaction and rapport; resolving them requires bringing the tutor closer to the student, with more responsibility in the exchange of information, in meeting deadlines, and in clarity when releasing assessment grades. Insights drawn mainly from humanism and sociocultural theory not only justify the tutor's existence but also serve to improve the quality of tutor-student interaction. Keywords: learning theories; distance learning (DL); distance tutor.

  2. Investigation of Performance Task Studies Applied by Turkish Teachers for the Purpose of Consolidation and Evaluation

    Directory of Open Access Journals (Sweden)

    Ali GÖÇER

    2014-04-01

Full Text Available The aim of this study is to provide an overview of how performance tasks are applied to track students' in-process development and their acquisition of higher-order mental, linguistic, and social skills in secondary school Turkish lessons. In this study, which adopted an action research design within a qualitative research approach, the data were obtained through interviews and document analysis. The study was conducted on a group of 13 Turkish teachers, with an interview form as the main data collection instrument. Additionally, the products students created through performance tasks were collected and examined as documents. The data were analyzed with the content analysis method. According to the results, almost all of the teachers in the study group implemented performance tasks in accordance with branch decisions and the course curriculum.

  3. Applying mathematical models to predict resident physician performance and alertness on traditional and novel work schedules.

    Science.gov (United States)

    Klerman, Elizabeth B; Beckett, Scott A; Landrigan, Christopher P

    2016-09-13

In 2011 the U.S. Accreditation Council for Graduate Medical Education began limiting first year resident physicians (interns) to shifts of ≤16 consecutive hours. Controversy persists regarding the effectiveness of this policy for reducing errors and accidents while promoting education and patient care. Using a mathematical model of the effects of circadian rhythms and length of time awake on objective performance and subjective alertness, we quantitatively compared predictions for traditional intern schedules to those that limit work to ≤16 consecutive hours. We simulated two traditional schedules and three novel schedules using the mathematical model. The traditional schedules had extended-duration work shifts (≥24 h) with overnight work shifts every second shift (including every third night, Q3) or every third shift (including every fourth night, Q4); the novel schedules had two different cross-cover (XC) night team schedules (XC-V1 and XC-V2) and a Rapid Cycle Rotation (RCR) schedule. Predicted objective performance and subjective alertness for each work shift were computed for each individual's schedule within a team and then combined for the team as a whole. Our primary outcome was the amount of time within a work shift during which a team's model-predicted objective performance and subjective alertness were lower than that expected after 16 or 24 h of continuous wake in an otherwise rested individual. The model predicted fewer hours with poor performance and alertness, especially during night-time work hours, for all three novel schedules than for either the traditional Q3 or Q4 schedules. Three proposed schedules that eliminate extended shifts may improve performance and alertness compared with traditional Q3 or Q4 schedules. Predicted times of worse performance and alertness were at night, which is also a time when supervision of trainees is lower. Mathematical modeling provides a quantitative comparison approach with potential to aid
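The model described combines a circadian rhythm with homeostatic sleep pressure driven by time awake. A toy sketch of that idea in Python — the functional forms, weights, and time constants here are invented for illustration, not taken from the authors' model:

```python
import math

def circadian(clock_hour):
    """Circadian drive: 24-h sinusoid, peaking in the early evening (phase assumed)."""
    return math.cos(2 * math.pi * (clock_hour - 18.0) / 24.0)

def homeostatic(hours_awake, tau=18.2):
    """Homeostatic process: alertness decays exponentially with time awake."""
    return math.exp(-hours_awake / tau)

def predicted_alertness(clock_hour, hours_awake):
    """Weighted blend of the two processes, scaled to roughly 0..1."""
    return 0.7 * homeostatic(hours_awake) + 0.3 * 0.5 * (1.0 + circadian(clock_hour))

# End of a 16-h day shift (awake 16 h at 23:00) vs. end of a 24-h overnight
# shift (awake 24 h at 07:00, near the circadian trough)
a16 = predicted_alertness(clock_hour=23, hours_awake=16)
a24 = predicted_alertness(clock_hour=7, hours_awake=24)
```

Even this crude blend reproduces the qualitative prediction above: the extended overnight shift ends with both processes near their worst, so `a24 < a16`.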

  4. A performance improvement program applied to the Perry Nuclear Power Plant instrumentation and control section

    International Nuclear Information System (INIS)

    Anderson, G.R.

    1987-01-01

The management at Cleveland Electric Illuminating Company sought to avoid problems typically encountered in the start-up of new nuclear generating units. In response to early indications that such problems may have been developing at their Perry Nuclear Power Plant, several performance improvement initiatives were undertaken. One of these initiatives was a performance improvement evaluation (PIE) for the instrumentation and control (I&C) section at Perry. The I&C PIE, which used a method designed to be adaptable to other disciplines as well, had important results that are applicable to other nuclear power plants

  5. Site Characterization and Preliminary Performance Assessment Calculation Applied To JAEA-Horonobe URL Site of Japan

    International Nuclear Information System (INIS)

    Lim, Doo Hyun; Hatanaka, Koichiro; Ishii, Eiichi

    2010-01-01

    JAEA-Horonobe Underground Research Laboratory (URL) is designed for research and development on high-level radioactive waste (HLW) repository in sedimentary rock. For a potential HLW repository, understanding and implementing fracturing and faulting system, with data from the site characterization, into the performance assessment is essential because fracture and fault will be the major conductors or barriers for the groundwater flow and radionuclide release. The objectives are i) quantitative derivation of characteristics and correlation of fracturing/faulting system with geologic and geophysics data obtained from the site characterization, and ii) preliminary performance assessment calculation with characterized site information

  6. Site Characterization and Preliminary Performance Assessment Calculation Applied To JAEA-Horonobe URL Site of Japan

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Doo Hyun [NE Union Hill Road, Suite 200, WA 98052 (United States); Hatanaka, Koichiro; Ishii, Eiichi [Japan Atomic Energy Agency, Hokkaido (Japan)

    2010-10-15

    JAEA-Horonobe Underground Research Laboratory (URL) is designed for research and development on high-level radioactive waste (HLW) repository in sedimentary rock. For a potential HLW repository, understanding and implementing fracturing and faulting system, with data from the site characterization, into the performance assessment is essential because fracture and fault will be the major conductors or barriers for the groundwater flow and radionuclide release. The objectives are i) quantitative derivation of characteristics and correlation of fracturing/faulting system with geologic and geophysics data obtained from the site characterization, and ii) preliminary performance assessment calculation with characterized site information

  7. Performance of Ruecking's Word-compression Method When Applied to Machine Retrieval from a Library Catalog

    Directory of Open Access Journals (Sweden)

    Ben-Ami Lipetz

    1969-12-01

Full Text Available F. H. Ruecking's word-compression algorithm for retrieval of bibliographic data from computer stores was tested for performance in matching user-supplied, unedited bibliographic data to the bibliographic data contained in a library catalog. The algorithm was tested by manual simulation, using data derived from 126 case studies of successful manual searches of the card catalog at Sterling Memorial Library, Yale University. The algorithm achieved 70% recall in comparison to conventional searching. Its acceptability as a substitute for conventional catalog searching methods is questioned unless recall performance can be improved, either by use of the algorithm alone or in combination with other algorithms.
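The abstract does not reproduce Ruecking's compression rules, so the key below (keep the first letter, drop vowels and repeated letters, truncate to four characters) is purely illustrative of how compression-based matching tolerates spelling variants in unedited user input, and how recall is measured against known-relevant catalog entries:

```python
def compress(word):
    """Toy compression key (NOT Ruecking's actual rules): keep the first letter,
    drop vowels and repeated letters, truncate to four characters."""
    w = word.lower()
    key = w[0]
    for ch in w[1:]:
        if ch in "aeiou" or ch == key[-1]:
            continue
        key += ch
    return key[:4]

# Catalog entries indexed by compressed key; unedited user queries matched against them
catalog = {compress(w) for w in ["performance", "retrieval", "library", "catalog"]}
queries = ["performence", "retreival", "libary", "katalog", "bibliography"]
hits = sum(1 for q in queries if compress(q) in catalog)
recall = hits / len(queries)   # misspellings match, but not all variants do
```

Three of the five misspelled or unrelated queries still hit, giving 60% recall on this toy set — the same kind of partial-recall outcome the study reports for the real algorithm.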

  8. Photocatalysis applied to concrete products - part 2 : influencing factors and product performance

    NARCIS (Netherlands)

    Hunger, M.; Hüsken, G.; Brouwers, H.J.H.

    2008-01-01

    The second part of this three-part article series addresses the influence of physicochemical parameters on the degradation performance of concrete products containing photocatalytic active TiO2. The influence of process conditions like irradiance, relative humidity, pollutant concentration and flow

  9. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

    Science.gov (United States)

    Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.

    2018-01-01

    Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…

  10. Applying Learning Analytics for the Early Prediction of Students' Academic Performance in Blended Learning

    Science.gov (United States)

    Lu, Owen H. T.; Huang, Anna Y. Q.; Huang, Jeff C. H.; Lin, Albert J. Q.; Ogata, Hiroaki; Yang, Stephen J. H.

    2018-01-01

Blended learning combines online digital resources with traditional classroom activities and enables students to attain higher learning performance through well-defined interactive strategies involving online and traditional learning activities. Learning analytics is a conceptual framework and, as part of our precision education approach, is used to analyze…

  11. High-Performance Classrooms for Women? Applying a Relational Frame to Management/Organizational Behavior Courses.

    Science.gov (United States)

    Buttner, E. Holly

    2002-01-01

    Attributes of relational theory, based on women's development, include preventive connecting, mutual empowering, achieving, and team building. These attributes are compatible with the practices of high performance work organizations. Relational practices should be integrated into management and organizational behavior courses. (Contains 53…

  12. Study on the coupling performance of a turboexpander compressor applied in cryogenic reverse Brayton air refrigerator

    International Nuclear Information System (INIS)

    Yang, Shanju; Chen, Shuangtao; Chen, Xingya; Zhang, Xingqun; Hou, Yu

    2016-01-01

Highlights: • Numerical simulations of the expansion and compression processes were carried out. • A coupling model was built based on analysis and simulation and verified by test. • Relations and interactions among coupling parameters were quantitatively described. • When T_0_C = 0.39 MPa, the cooling capacity of the refrigerator reached 221 W at 129.6 K. - Abstract: A small cryogenic reverse Brayton air refrigerator with a turboexpander compressor (TEC) is presented in this study. Because of its stable process, simple matching between expander and brake blower, and easy regulation, a turboexpander with a brake blower is usually used in small reverse Brayton refrigerators. However, such a turboexpander simply dissipates the output energy of the enthalpy drop. In contrast, the output energy of a TEC is absorbed by its coupled compressor for recycling. Thus, when employing a TEC, the reverse Brayton refrigerator achieves a lower refrigeration temperature, larger cooling capacity and more effective energy use. The TEC's overall performance, which has an important impact on the refrigerator's thermal performance, is mainly determined by the coupling between expander and compressor. In a TEC, the compressor and expander must seek balance among energy, rotating speed, mass flow rate and pressure, though restricted by their individual working characteristics. The coupling relations among compressor efficiency, expander efficiency, compressor pressure ratio and expander expansion ratio are quite complex. In this study, a theoretical coupling analysis between expander and compressor was conducted. The aerodynamic performance of the compressor and expander was calculated using CFX simulation with the SST model. Performance curves of the compressor and expander were obtained from the simulation results and validated against experimental data. Based on the coupling analysis and numerical simulations, the automatic coupling model between compression process and expansion process

  13. Applying Importance-Performance Analysis as a Service Quality Measure in Food Service Industry

    OpenAIRE

    Tzeng, Gwo-Hshiung; Chang, Hung-Fan

    2011-01-01

As the global economy becomes a service-oriented economy, food service accounts for over 20% of service revenue, with an annual growth rate of more than 3%. Compared to physical products, service features are invisible and production and sale occur simultaneously, so service performance is not easy to measure. Therefore, the service quality of catering services is considered an important topic of service management. According to Market Intelligence & Consulting Institute (M...

  14. Web communication of CSR and financial performance: Study applied to catalan meat companies

    OpenAIRE

    Aramayo García, Alejandra; Arimany-Serrat, Nuria; Uribe Salazar, Clara; Sabata Aliberch, Anna

    2016-01-01

Purpose: Understanding the relationship between CSR communication on corporate websites and the financial performance of Catalan meat companies. Design/methodology/approach: Qualitative and quantitative analysis of the CSR communication variables of corporate websites, identifying the companies with the best CSR web communication practices, together with a comparative economic and financial analysis. The study also modelled financial returns to determine whether CSR communication, as an independent vari...

  15. Performance enhancement of microbial fuel cell by applying transient-state regulation

    International Nuclear Information System (INIS)

    Liang, Peng; Zhang, Changyong; Jiang, Yong; Bian, Yanhong; Zhang, Helan; Sun, Xueliang; Yang, Xufei; Zhang, Xiaoyuan; Huang, Xia

    2017-01-01

Highlights: • MFC was operated with transient-state regulation to enhance its performance. • Effects of the TSR parameters on MFC performance were thoroughly investigated. • Long-term operation of the MFC in TSR mode yielded 32.7% higher power production. • Anode capacitance helped reduce the MFC’s internal impedance in the TSR mode. - Abstract: A binder-free, pseudocapacitive anode was fabricated by coating reduced graphene oxide (rGO) and manganese oxide (MnO₂) nanoparticles on stainless steel fibre felt (SS). A microbial fuel cell (MFC) equipped with this novel anode yielded a maximum power density of 1045 mW/m², 20 times higher than that of a similar MFC with a bare SS anode (46 mW/m²). Transient-state regulation (TSR) was implemented to further improve the MFC’s power generation. The optimal TSR duty cycle ranged from 67% to 95%, and the MFC’s power density increased with TSR frequency. A maximum power density of 1238 mW/m² was achieved at a TSR duty cycle of 75% and a frequency of 1 Hz, 18.4% greater than that obtained in steady-state operation. The TSR mode delivered better MFC performance especially when the external resistance was small. Long-term operation tests revealed that the current density and power density yielded in the TSR mode were on average 15.0% and 32.7% greater, respectively, than those in the steady-state mode. The TSR mode is believed to reduce the internal resistance of the MFC while enhancing substrate mass transfer and electron transfer within the anode matrix, thereby improving MFC performance.

  16. Ideal diet versus athletic performance: a study about nutritional stereotypes applied by triathletes

    Directory of Open Access Journals (Sweden)

    Patrícia Kanno

    2009-09-01

Full Text Available Stereotype is a term for generalized perceptions that people attribute to other individuals, groups, objects and/or events. Objective: The purpose of this study was to evaluate the priorities that nutritionists and triathletes attribute to foods for enhancing sport performance. Method: The sample comprised 56 subjects, split into two groups: nutritionists (G1, n = 29) and triathletes (G2, n = 27). The Intake Food Priority Questionnaire (QPA), organized by food groups, was used to evaluate the importance of each food to athletic performance. Results: For the analysis, each food was grouped into one of the eight categories of the nutrition pyramid. Arithmetic means were computed for the categories, and t-tests for independent samples were performed to compare nutritionists' and triathletes' priorities for each category. There were no differences between groups in the Vegetables, Legumes, Fruits and Sugar/Candies categories, nor for the black coffee item. However, the triathletes overestimated the Cereals/Breads/Tubers and Meat/Eggs categories and underestimated the Milk/Dairy Products and Oils/Fats categories. Conclusion: The triathletes overestimated some macronutrients (carbohydrates and proteins) to the detriment of others, such as lipids.

  17. An applied artificial intelligence approach towards assessing building performance simulation tools

    Energy Technology Data Exchange (ETDEWEB)

    Yezioro, Abraham [Faculty of Architecture and Town Planning, Technion IIT (Israel); Dong, Bing [Center for Building Performance and Diagnostics, School of Architecture, Carnegie Mellon University (United States); Leite, Fernanda [Department of Civil and Environmental Engineering, Carnegie Mellon University (United States)

    2008-07-01

With the development of modern computer technology, a large number of building energy simulation tools are available in the market. When choosing which simulation tool to use in a project, the user must consider the tool's accuracy and reliability in light of the building information they have at hand, which will serve as input for the tool. This paper presents an approach towards comparing building performance simulation results with actual measurements, using artificial neural networks (ANN) for predicting building energy performance. Training and testing of the ANN were carried out with energy consumption data acquired for 1 week in the case building, called the Solar House. The predicted results show a good fit with the mathematical model, with a mean absolute error of 0.9%. Moreover, four building simulation tools were selected in this study in order to compare their results with the ANN-predicted energy consumption: Energy-10, the Green Building Studio web tool, eQuest and EnergyPlus. The results showed that the more detailed simulation tools have the best simulation performance in terms of heating and cooling electricity consumption, with a mean absolute error within 3%. (author)
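The error figures quoted are relative (percentage) errors between measured and predicted consumption. A minimal sketch of such a metric, with made-up consumption numbers rather than the Solar House data:

```python
def mape(measured, predicted):
    """Mean absolute percentage error, the style of fit metric quoted above."""
    return sum(abs(m - p) / abs(m) for m, p in zip(measured, predicted)) / len(measured) * 100

# Hourly kWh measured in a building vs. a model's predictions (made-up numbers)
measured = [12.0, 15.0, 9.0, 11.0]
predicted = [11.9, 15.2, 8.9, 11.1]
err = mape(measured, predicted)   # ~1% for this invented data
```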

  18. Control Grouped Pedagogical Experiment to Test the Performance of Second-generation Web Maps and the Traditional Maps at the University of Debrecen

    Directory of Open Access Journals (Sweden)

    Dániel Balla

    2015-01-01

Full Text Available Almost every component of the information society is influenced by elements built on communication technology, and learning increasingly involves the dynamic use of computers. A number of applications (online or offline) are now available that engage large groups of potential users and provide a virtual environment to facilitate learning. This study introduces the self-developed interactive blind-map teaching and examination e-learning system of the University of Debrecen, together with the results of testing the system against a control group. Both the experimental and control groups of students were required to sit a test of topographic knowledge following a semester of study, with a pass mark of 80%. The experimental group used the new digital environment to study, while the control group prepared for the exam using paper maps in the traditional way. The key research questions were whether the exam results of the group using the ‘digital’ method were better than those of the control group, and, if there was a difference between the two groups' exam performances, whether it was statistically significant and therefore likely to occur in other similar scenarios.

  19. Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process

    Science.gov (United States)

    Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh

    2018-06-01

Layered manufacturing machines use the stereolithography (STL) file format to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result is geometric distortion and chordal error, so parts manufactured from such a file might not satisfy geometric dimensioning and tolerancing requirements. Current CAD packages have export options to reduce this distortion globally, which increases file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software: the modified Butterfly subdivision technique, the Loop subdivision technique, and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time for these techniques, and the triangular midpoint subdivision algorithm is found to be the most suitable for the geometry under consideration. The wheel cap part is then manufactured on a Stratasys MOJO FDM machine and its surface roughness measured on a Talysurf surface roughness tester.
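Midpoint subdivision splits every triangle into four by inserting a new vertex at each edge midpoint. A minimal sketch of the idea in plain Python (no MeshLab dependency), with midpoints cached per edge so neighbouring facets of an STL mesh share them:

```python
def midpoint_subdivide(vertices, triangles):
    """Split every triangle into four by inserting a vertex at each edge midpoint.
    Midpoints are cached per edge so neighbouring triangles share them."""
    verts = list(vertices)
    cache = {}

    def mid(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            xi, yi, zi = verts[i]
            xj, yj, zj = verts[j]
            verts.append(((xi + xj) / 2.0, (yi + yj) / 2.0, (zi + zj) / 2.0))
            cache[key] = len(verts) - 1
        return cache[key]

    refined = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        refined += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, refined

# One facet becomes four; the vertex count grows by the three midpoints
v2, t2 = midpoint_subdivide([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
                            [(0, 1, 2)])
```

Unlike Loop or Butterfly subdivision, this scheme does not move or smooth existing vertices, which is why it refines facet density without changing the enclosed volume of a planar region.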

  20. Predicting biopharmaceutical performance of oral drug candidates - Extending the volume to dissolve applied dose concept.

    Science.gov (United States)

    Muenster, Uwe; Mueck, Wolfgang; van der Mey, Dorina; Schlemmer, Karl-Heinz; Greschat-Schade, Susanne; Haerter, Michael; Pelzetter, Christian; Pruemper, Christian; Verlage, Joerg; Göller, Andreas H; Ohm, Andreas

    2016-05-01

The purpose of the study was to experimentally deduce pH-dependent critical volumes to dissolve applied dose (VDAD) that determine whether a drug candidate can be developed as an immediate release (IR) tablet containing crystalline API, or whether solubilization technology is needed to allow for sufficient oral bioavailability. pH-dependent VDADs of 22 and 83 compounds were plotted vs. the relative oral bioavailability (AUC of solid vs. AUC of solution formulation, Frel) in humans and rats, respectively. Furthermore, to investigate to what extent Frel in rat may predict issues with solubility-limited absorption in human, Frel rat was plotted vs. Frel human. Additionally, the impact of bile salts and lecithin on the in vitro dissolution of poorly soluble compounds was tested and the data compared to Frel in rat and human. The resulting in vitro - in vivo and in vivo - in vivo correlations were used to build developability criteria. As a result, based on pH-dependent VDAD, Frel rat and in vitro dissolution in simulated intestinal fluid, the IR formulation strategy within pharmaceutical research and development organizations can already be set at a late stage of drug discovery. Copyright © 2016 Elsevier B.V. All rights reserved.
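The VDAD itself is a simple ratio of dose to solubility at a given pH. A sketch with hypothetical numbers — the 250 mL cutoff below is an assumed illustrative threshold, not the critical VDAD derived in the study:

```python
def vdad_ml(dose_mg, solubility_mg_per_ml):
    """Volume to dissolve applied dose: mL of medium needed to dissolve the full dose."""
    return dose_mg / solubility_mg_per_ml

# Hypothetical candidate: 100 mg dose with 0.05 mg/mL solubility at pH 6.5
v = vdad_ml(100.0, 0.05)            # 2000 mL
# Assumed illustrative cutoff (not the paper's derived critical VDAD): if the
# volume exceeds what the GI tract plausibly offers, flag solubilization work.
CRITICAL_VDAD_ML = 250.0
needs_solubilization = v > CRITICAL_VDAD_ML
```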

  1. A high performance Time-of-Flight detector applied to isochronous mass measurement at CSRe

    International Nuclear Information System (INIS)

    Mei Bo; Tu Xiaolin; Wang Meng; Xu Hushan; Mao Ruishi; Hu Zhengguo; Ma Xinwen; Yuan Youjin; Zhang Xueying; Geng Peng; Shuai Peng; Zang Yongdong; Tang Shuwen; Ma Peng; Lu Wan; Yan Xinshuai; Xia Jiawen; Xiao Guoqing; Guo Zhongyan; Zhang Hongbin

    2010-01-01

    A high performance Time-of-Flight detector has been designed and constructed for isochronous mass spectrometry at the experimental Cooler Storage Ring (CSRe). The detector has been successfully used in an experiment to measure the masses of the N∼Z∼33 nuclides near the proton drip-line. Of particular interest is the mass of 65 As. A maximum detection efficiency of 70% and a time resolution of 118±8 ps (FWHM) have been achieved in the experiment. The dependence of detection efficiency and signal average pulse height (APH) on atomic number Z has been studied. The potential of APH for Z identification has been discussed.

  2. Evaluation of performance indicators applied to a material recovery facility fed by mixed packaging waste.

    Science.gov (United States)

    Mastellone, Maria Laura; Cremiato, Raffaele; Zaccariello, Lucio; Lotito, Roberta

    2017-06-01

Most integrated systems for municipal solid waste management aim to increase the recycling of secondary materials by means of physical processes including sorting, shredding and reprocessing. Several restrictions prevent very high material recycling efficiency from being reached: the variability of the composition of newly marketed packaging materials, and their shape and complexity, are critical issues. Packaging goods are made of different materials (aluminium, polymers, paper, etc.), possibly assembled, with different shapes (flat, cylindrical, one-dimensional, etc.), densities, colours, optical properties and so on. These aspects limit the effectiveness and efficiency of sorting and reprocessing plants. The scope of this study was to evaluate the performance of a large-scale Material Recovery Facility (MRF) using data collected over a long monitoring period. The database of measured data was organized in four sections: (1) data related to the amount and type of inlet waste; (2) amount and composition of output products and waste; (3) operating data (such as worked hours per shift, planned and unscheduled maintenance time, equipment setting parameters, and energy consumption per shift); (4) economic data (value of each product, disposal price for the produced waste, penalties for non-compliance of products and waste, etc.). Part of this database was used to build an executive dashboard composed of a set of performance indicators suitable for measuring the effectiveness and efficiency of the MRF operations. The dashboard proved to be a powerful tool to support managers and engineers in their decisions in response to market demand or regulatory changes, as well as in designing layout improvements.
The results indicated that 40% of the input waste was recovered as valuable products and that a large part of these (88%) complied with the standards of
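Indicators of the kind described can be computed directly from the four data sections. The indicator names and input figures below are illustrative, chosen so that two of them reproduce the 40% recovery and 88% compliance figures reported:

```python
def mrf_kpis(input_t, products_t, compliant_t, worked_h, planned_h, energy_kwh):
    """Dashboard-style indicators; names and inputs are illustrative, not the plant's."""
    return {
        "recovery_rate": products_t / input_t,          # share of input recovered as products
        "product_compliance": compliant_t / products_t, # share of products meeting standards
        "availability": worked_h / planned_h,           # worked vs. planned shift hours
        "specific_energy_kwh_t": energy_kwh / input_t,  # energy per tonne processed
    }

k = mrf_kpis(input_t=1000.0, products_t=400.0, compliant_t=352.0,
             worked_h=152.0, planned_h=160.0, energy_kwh=38000.0)
```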

  3. A quantitative performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Sullivan, B.J.; Giger, M.L.; Chen, C.T.

    1991-01-01

In this paper, the authors quantitatively evaluate the performance of the Expectation Maximization (EM) algorithm as a restoration technique for radiographic images. The perceived signal-to-noise ratios (SNRs) of simple radiographic patterns processed by the EM algorithm are calculated on the basis of a statistical decision theory model that includes both the observer's visual response function and a noise component internal to the eye-brain system. The relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm with two popular image enhancement techniques: contrast enhancement (windowing) and unsharp mask filtering
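For imaging under Poisson statistics, the EM algorithm takes the form of the Richardson-Lucy iteration. A minimal 1-D sketch — not the authors' implementation, and their study concerns the perceived-SNR evaluation of such restorations rather than the code itself:

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=200):
    """EM (Richardson-Lucy) restoration of a 1-D signal under Poisson statistics."""
    est = np.full_like(blurred, blurred.mean(), dtype=float)   # flat initial guess
    psf_flip = psf[::-1]
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")              # forward model
        ratio = blurred / np.maximum(conv, 1e-12)              # data / prediction
        est = est * np.convolve(ratio, psf_flip, mode="same")  # multiplicative EM update
    return est

# A point feature blurred by a 5-sample box "focal spot"
truth = np.zeros(31)
truth[15] = 10.0
psf = np.ones(5) / 5.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

The iteration re-concentrates the blurred plateau back toward the point feature, which is the kind of restoration whose perceptual benefit the study quantifies via the relative SNR.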

  4. Assessment of learning powered mobility use--applying grounded theory to occupational performance.

    Science.gov (United States)

    Nilsson, Lisbeth; Durkin, Josephine

    2014-01-01

    Collaboration by two grounded theory researchers, who each had developed a learning continuum instrument, led to the emergence of a new tool for assessment of learning powered mobility use. We undertook a rigorous process of comparative reanalysis that included merging, modifying, and expanding our previous research findings. A new instrument together with its facilitating strategies emerged in the course of revisits to our existing rich account of data taken from real environment powered mobility practice over an extensive time period. Instrument descriptors, categories, phases, and stages allow a facilitator to assess actual phase and plot actual occupational performance and provide a learner with the just right challenge through the learning process. Facilitating strategies are described for each of the phases and provide directions for involvement during learner performance. The learning approach is led by a belief system that the intervention is user-led, working in partnership and empowering the learner. The new assessment tool is inclusive of every potential powered mobility user because it focuses on the whole continuum of the learning process of powered mobility use from novice to expert. The new tool was appraised by clinicians and has been used successfully in clinical practice in the United Kingdom and Sweden.

  5. A Study of Performance Output of a Multivane Air Engine Applying Optimal Injection and Vane Angles

    Directory of Open Access Journals (Sweden)

    Bharat Raj Singh

    2012-01-01

Full Text Available This paper presents a new concept for an air engine using compressed air as the potential power source for motorbikes, in place of an internal combustion engine. The motorbike is proposed to be equipped with an air engine that transforms the energy of the compressed air into mechanical motion energy. A mathematical model is presented, and a performance evaluation is carried out on an air-powered novel air turbine engine. The maximum power output obtained is 3.977 kW (5.50 HP) across the rotor-to-casing diameter ratios examined, at an optimal injection angle of 60° and vane angle of 45° for linear expansion (i.e., at minimum air consumption), when the casing diameter is kept at 100 mm, the injection pressure at 6 bar (90 psi) and the rotational speed at 2500 rpm. A prototype air engine was built and tested in the laboratory. The experimental results agree closely with the analytical values, and performance efficiencies of around 70% to 95% were recorded at rotational speeds of 2500–3000 rpm.

  6. Performance Modeling Applied to the Treatment and Disposal of a Mixed Waste at the SRS

    International Nuclear Information System (INIS)

    Pickett, J.B.; Jantzen, C.M.; Cook, J.R.; Whited, A.R.; Field, R.A.

    1997-05-01

    Performance modeling for Low Level Mixed Waste disposal was conducted using the measured leach rates from a number of vitrified waste formulations. The objective of the study was to determine if the improved durability of a vitrified mixed waste would allow trench disposal at the Savannah River Site (SRS). Leaching data were compiled from twenty-nine diverse reference glasses, encompassing a wide range of exposed glass surface area to leachant volume ratios (SA/V), and various leachant solutions; all of which had been leached at 90 degrees Celsius, using the MCC-1 or PCT procedures (ASTM Procedures C1220-92 and C1285-94, respectively). The normalized leach rates were scaled to the ambient disposal temperature of 25 degrees Celsius, and compared to the allowable leach rate of uranium - which would meet the performance assessment requirements. The results indicated that a glass of above average durability (vs. the reference glasses) would meet the uranium leaching concentration for direct SRS trench disposal

  7. Performance of an online translation tool when applied to patient educational material.

    Science.gov (United States)

    Khanna, Raman R; Karliner, Leah S; Eck, Matthias; Vittinghoff, Eric; Koenig, Christopher J; Fang, Margaret C

    2011-11-01

Language barriers may prevent clinicians from tailoring patient educational material to the needs of individuals with limited English proficiency. Online translation tools could fill this gap, but their accuracy is unknown. We evaluated the accuracy of an online translation tool for patient educational material. We selected 45 sentences from a pamphlet available in both English and Spanish and translated them into Spanish using GoogleTranslate™ (GT). Three bilingual Spanish speakers then performed a blinded evaluation of these 45 sentences, comparing GT-translated sentences to those translated professionally along four domains: fluency (grammatical correctness), adequacy (information preservation), meaning (connotation maintenance), and severity (perceived dangerousness of an error if present). In addition, evaluators indicated whether they preferred the GT-translated or professionally translated sentences. The GT-translated sentences had significantly lower fluency scores than the professional translation (3.4 vs. 4.7). For patient educational material, GT performed comparably to professional human translation in terms of preserving information and meaning, though it was slightly worse at preserving grammar. In situations where professional human translations are unavailable or impractical, online translation may someday fill an important niche. Copyright © 2011 Society of Hospital Medicine.

  8. ANALYSIS OF THE PREDICTIVE DMC CONTROLLER PERFORMANCE APPLIED TO A FEED-BATCH BIOREACTOR

    Directory of Open Access Journals (Sweden)

    J. A. D. RODRIGUES

    1997-12-01

    Full Text Available Two control algorithms were implemented to stabilize the dissolved oxygen concentration during the production phase of a penicillin process. A deterministic, nonstructured mathematical model was used that considered the balances of cells, substrate, dissolved oxygen and product formation, as well as the kinetics of growth, respiration, product inhibition due to excess substrate, penicillin hydrolysis, and the yield factors relating cell growth, substrate consumption and dissolved oxygen consumption. The bioreactor was operated in fed-batch mode using an optimal strategy for the operational policy. The agitation speed was used as the manipulated variable for dissolved oxygen control because it was found to be the most sensitive. Two control configurations were implemented: first, PID feedback control with parameters estimated through the Modified Simplex optimization method using the IAE index; and second, DMC predictive control, whose tuning parameters were the model, prediction and control horizons, the move-suppression factor and the reference-trajectory parameter. A sensitivity analysis of the two control algorithms was performed using the sample time and dead time as indices for stability evaluation. Both configurations showed stable performance; however, the predictive controller was found to be more robust to variations in sample time and dead time. This is a very important characteristic to consider when implementing a control scheme in a real fermentation process.
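    The DMC algorithm referred to above computes future control moves from a step-response model of the plant. The sketch below shows the standard unconstrained DMC law, du = (G'G + lam*I)^-1 G'(w - f), where G is the dynamic matrix built from step-response coefficients, w the setpoint trajectory and f the free-response prediction. The first-order plant, horizons and suppression factor here are illustrative, not the paper's actual model or tuning:

```python
import numpy as np

def dmc_gain(step_response, p, m, lam):
    """Build the dynamic matrix G from a step-response sequence and
    return the unconstrained DMC gain K such that du = K @ (w - f).

    p: prediction horizon, m: control horizon, lam: move-suppression factor.
    """
    G = np.zeros((p, m))
    for i in range(p):
        for j in range(m):
            if i - j >= 0:
                G[i, j] = step_response[i - j]
    # Least-squares solution of min ||w - f - G du||^2 + lam ||du||^2
    K = np.linalg.solve(G.T @ G + lam * np.eye(m), G.T)
    return G, K

# Illustrative first-order step response g_k = 1 - exp(-(k+1)/5)
g = [1 - np.exp(-(k + 1) / 5.0) for k in range(20)]
G, K = dmc_gain(g, p=10, m=3, lam=0.1)
w = np.ones(10)   # setpoint trajectory
f = np.zeros(10)  # free-response prediction (plant at rest here)
du = K @ (w - f)  # future control moves; only du[0] is applied each step
```

    In a receding-horizon loop, only the first move du[0] is applied, the prediction f is updated with new measurements, and the computation repeats.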

  9. Use of geological mapping tools to improve the hydraulic performance of SuDS

    DEFF Research Database (Denmark)

    Bockhorn, Britta; Klint, Knud Erik; Jensen, Marina Bergen

    2015-01-01

    measuring the shallow subsurface resistivity with a geoelectrical multi-electrode system. To confirm the resistivity data we conducted a spear-auger mapping. The exposed sediments ranged from clay tills through sandy clay tills to sandy tills and corresponded well with the geoelectrical data. To verify the value...

  10. Interactive anatomical and surgical live stream lectures improve students' academic performance in applied clinical anatomy.

    Science.gov (United States)

    Shiozawa, Thomas; Butz, Benjamin; Herlan, Stephan; Kramer, Andreas; Hirt, Bernhard

    2017-01-01

    Tuebingen's Sectio Chirurgica (TSC) is an innovative, interactive, multimedia, and transdisciplinary teaching method designed to complement dissection courses. The TSC allows clinical anatomy to be taught via interactive live stream surgeries moderated by an anatomist. This method aims to provide an application-oriented approach to teaching anatomy that offers students a deeper learning experience. A cohort study was devised to determine whether students who participated in the TSC were better able to solve clinical application questions than students who did not participate. A total of 365 students participated in the dissection course during the winter term of the 2012/2013 academic year. The final examination contained 40 standard multiple-choice (S-MC) and 20 clinically-applied multiple-choice (CA-MC) items. The CA-MC items referred to clinical cases but could be answered solely using anatomical knowledge. Students who regularly participated in the TSC answered the CA-MC questions significantly better than the control group (75% and 65%, respectively; P < 0.05). The CA-MC questions had a slightly higher level of difficulty than the S-MC questions (0.725 and 0.801, respectively; P = 0.083). The discriminatory power of the items was comparable (S-MC median Pearson correlation: 0.321; CA-MC: 0.283). The TSC successfully teaches the clinical application of anatomical knowledge. Students who attended the TSC in addition to the dissection course were able to answer CA-MC questions significantly better than students who did not attend the TSC. Thus, attending the TSC in addition to the dissection course supported students' clinical learning goals. Anat Sci Educ 10: 46-52. © 2016 American Association of Anatomists.

  11. Plasma Treatment of Carbon Nanotubes Applied to Improve the High Performance of Carbon Nanofiber Supercapacitors

    International Nuclear Information System (INIS)

    Chang, Wei-Min; Wang, Cheng-Chien; Chen, Chuh-Yung

    2015-01-01

    Graphical abstract: This article focused on improving the conductivity of carbon nanofibers (CNFs) by adding plasma-treated carbon nanotubes (CNTs). The plasma modification method avoided destroying the length and structure of the CNTs and maintained their good electrical properties. Through this method, the relation between the conductivity and the surface activity sites of the CNFs was investigated. The results showed that the CNTs-MA added to the CNFs successfully maintained the activity sites on the surface of the CNFs and provided a good electrical network to enhance the supercapacitor performance of the CNFs. - Highlights: • The plasma modification method avoided destroying the length and structure of the CNTs and maintained their good electrical properties. • The highest conductivity of the CNTs-MA/CNF was 5.2 S/cm at 2.5 wt.% CNTs-MA addition, an 8.7-fold increase. • The CNTs-MA added to the CNFs successfully maintained the activity sites on the surface of the CNFs and provided a good electrical network to enhance the supercapacitor performance of the CNFs. The highest capacitance was 382 F/g. - Abstract: Plasma-treated carbon nanotubes (CNTs) grafted with maleic anhydride (MA) were embedded in polyacrylonitrile nanofibers via electrospinning and subsequently carbonized at 800 °C to fabricate carbon nanofibers (CNFs). The degree of grafting of MA on the CNTs (CNTs-MA) was determined via Fourier transform infrared spectroscopy and X-ray photoelectron spectroscopy. The morphology, surface composition and conductivity of the CNTs-MA/CNF were characterized using electron microscopy, X-ray photoelectron spectroscopy and electrochemical impedance spectroscopy, respectively. The CNTs-MA affected not only the conductivity of the CNFs but also the types of nitrogen functional groups that act as active sites on the CNFs to enhance the performance of the supercapacitors. When 2.5 wt.% CNTs-MA was embedded in the CNFs, the highest conductivity (5.2 S/cm) was obtained.

  12. Thermodynamic Simulation on the Performance of Twin Screw Expander Applied in Geothermal Power Generation

    Directory of Open Access Journals (Sweden)

    Yuanqu Qi

    2016-08-01

    Full Text Available A three-dimensional (3D) geometry model of a twin screw expander has been developed in this paper to measure and analyze geometric parameters such as groove volume, suction port area, and leakage area, which can be described as functions of the rotation angle of the male rotor. Taking the suction loss, leakage loss, and real gas effect into consideration, a thermodynamic model is developed using the continuity and energy conservation equations. The developed model is verified by comparing the predicted power output and internal efficiency with experimental data. Based on the model, the variation of the mass flow rate through the inlet port and the leakage paths with the rotation angle of the male rotor, as well as the effects of the inlet and operating parameters on the performance of the expander, are analyzed.

  13. Performance Characterization of an xy-Stage Applied to Micrometric Laser Direct Writing Lithography

    Directory of Open Access Journals (Sweden)

    Juan Jaramillo

    2017-01-01

    Full Text Available This article concerns the characterization of the stability and performance of a motorized stage used in laser direct writing lithography. The system was built from commercial components and commanded by G-code. Measurements use a pseudo-periodic pattern (PPP) observed by a camera, and image processing is based on Fourier transform and phase measurement methods. The results show that the built system has a stability against vibrations characterized by peak-valley deviations of 65 nm and 26 nm in the x and y directions, respectively, with a standard deviation of 10 nm in both directions. When the xy-stage is in movement, it works with a resolution of 0.36 μm, which is an acceptable value for most research and development (R&D) microtechnology applications, in which the typical feature size is in the micrometer range.

  14. APPLIED BEHAVIORAL FINANCE: INVESTOR BIASES, PERFORMANCE REVERSION TO THE MEAN and TREND FORMATION

    Directory of Open Access Journals (Sweden)

    ADRIAN MITROI

    2014-02-01

    Full Text Available In the pursuit of understanding the behavior of the market player, the basic argument relies on the supposition that risk appetite increases at exactly the worst moment, when the capacity to assume additional risk decreases significantly. People view a sample randomly drawn from a population as highly representative and quasi-similar to the population in all its essential characteristics. They expect any two samples drawn from a particular population to be more similar to one another and to the population than is statistically justifiable. This behavior departs from the tenets of classical finance theory. The paper aims to demonstrate that investors' psychological biases cause investment performance to revert to the mean in the long run, and that by following the trend the financial market population does not enjoy significant sustainable benefits. As a reflection of these behavioral biases and influences, the statistical demonstration supports the conclusion that markets do not follow a random walk.

  15. Performance Characterization of an xy-Stage Applied to Micrometric Laser Direct Writing Lithography.

    Science.gov (United States)

    Jaramillo, Juan; Zarzycki, Artur; Galeano, July; Sandoz, Patrick

    2017-01-31

    This article concerns the characterization of the stability and performance of a motorized stage used in laser direct writing lithography. The system was built from commercial components and commanded by G-code. Measurements use a pseudo-periodic-pattern (PPP) observed by a camera, and image processing is based on Fourier transform and phase measurement methods. The results show that the built system has a stability against vibrations characterized by peak-valley deviations of 65 nm and 26 nm in the x and y directions, respectively, with a standard deviation of 10 nm in both directions. When the xy-stage is in movement, it works with a resolution of 0.36 μm, which is an acceptable value for most research and development (R&D) microtechnology applications, in which the typical feature size is in the micrometer range.
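    The phase-measurement principle behind the pseudo-periodic pattern can be illustrated in one dimension: a lateral shift of a periodic signal appears as a phase change of its fundamental Fourier component, which yields sub-sample (and, with a camera, sub-pixel) resolution. The signal and shift below are synthetic stand-ins for the actual PPP images, not the authors' processing pipeline:

```python
import numpy as np

def phase_shift_1d(sig_ref, sig_moved):
    """Estimate the lateral shift (in samples) between two views of a
    periodic pattern from the phase of its fundamental Fourier peak."""
    F_ref = np.fft.rfft(sig_ref)
    F_mov = np.fft.rfft(sig_moved)
    k = np.argmax(np.abs(F_ref[1:])) + 1         # fundamental bin (skip DC)
    dphi = np.angle(F_mov[k]) - np.angle(F_ref[k])
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    # A shift d maps to a phase change of -2*pi*k*d/n at bin k
    return -dphi * len(sig_ref) / (2 * np.pi * k)

n = 512
x = np.arange(n)
period = 32.0
ref = np.cos(2 * np.pi * x / period)
moved = np.cos(2 * np.pi * (x - 2.25) / period)  # shifted by 2.25 samples
est = phase_shift_1d(ref, moved)                 # recovers ~2.25
```

    The recoverable range is limited to half a pattern period by phase wrapping, which is why PPP designs add a coarse code on top of the fine periodic carrier.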

  16. Web communication of CSR and financial performance: Study applied to catalan meat companies

    Directory of Open Access Journals (Sweden)

    Alejandra Aramayo García

    2016-02-01

    Full Text Available Purpose: Understanding the relationship between CSR communication on corporate websites and the financial performance of Catalan meat companies. Design/methodology/approach: Qualitative and quantitative analysis of the CSR communication variables of corporate websites, identifying the companies with the best CSR web communication practices, together with a comparative economic and financial analysis. The financial returns were also modelled to determine whether CSR communication, as an independent variable, affects the net profit generated in relation to the investment of the stakeholders. The analysis covered a sample of 130 Catalan meat companies. Findings: The report provides a diagnosis of CSR web communication and of the financial health of the companies in the period analyzed. The study contributes to the discussion on the relationship between CSR and financial performance. Research limitations/implications: It would be desirable to extend the periods of economic and financial analysis and to study the online communication strategy in more depth, incorporating the views of those responsible for the strategy and of stakeholders. Practical implications: The analysis provides a better understanding of current corporate web communication and of the economic and financial situation of the companies analyzed. It has practical benefits for making strategic decisions to improve the relationship with stakeholders, and allows the forecast made for this sector in Catalonia in the period analyzed to be assessed. Social implications: The results of the study allow the industry to see the future prospects of this sector and to make the necessary changes. The results lead to improved transparency and responsible behavior. Originality/value: The analysis allows the stakeholders of the meat industry to evaluate companies' social behavior, to assess their financial health and to take appropriate future actions.

  17. Life cycle performances of log wood applied for soil bioengineering constructions

    Science.gov (United States)

    Kalny, Gerda; Strauss-Sieberth, Alexandra; Strauss, Alfred; Rauch, Hans Peter

    2016-04-01

    Nowadays there is a high demand for engineering solutions that consider not only technical aspects but also ecological and aesthetic values. Soil bioengineering is a construction technique that uses biological components for hydraulic and civil engineering solutions. Soil bioengineering solutions are based on the application of living plants and other auxiliary materials, including, among others, log wood. This kind of construction material supports the soil bioengineering system until the plants, as the living construction material, take over the stability function. It is therefore important to understand the durability and degradation process of the wooden logs in order to retain the integral performance of a soil bioengineering system. These aspects are considered within the framework of the interdisciplinary research project "ELWIRA: Plants, wood, steel and concrete - life cycle performances as construction materials". Field investigations were conducted on soil bioengineering construction material, specifically European larch logs, from different soil bioengineering structures at the river Wien. The drilling resistance, as a parameter for particular material characteristics of selected logs, was measured and analysed with a Rinntech Resistograph instrument at different positions on the wooden logs, covering three different exposures: fully surrounded by air, with earth contact on one side, and near the water surface in wet-dry conditions. The age of the logs ranges from one year up to 20 years. Results show the progression of the drilling resistance throughout the whole cross section as an indicator for assessing soil bioengineering construction material. Logs surrounded by air showed a higher drilling resistance than logs with earth contact and logs exposed to wet-dry conditions. Hence the functional capability of the wooden logs was analysed and discussed in terms of different levels of degradation

  18. Water spray cooling technique applied on a photovoltaic panel: The performance response

    International Nuclear Information System (INIS)

    Nižetić, S.; Čoko, D.; Yadav, A.; Grubišić-Čabo, F.

    2016-01-01

    Highlights: • An experimental study was conducted on a monocrystalline photovoltaic (PV) panel. • A water spray cooling technique was implemented to determine the PV panel response. • The experimental results showed a favorable cooling effect on the panel performance. • The feasibility of the water spray cooling technique was also proven. - Abstract: This paper presents an alternative cooling technique for photovoltaic (PV) panels in which a water spray is applied over the panel surfaces. The technique is alternative in the sense that both sides of the PV panel were cooled simultaneously, to investigate the total water spray cooling effect on the PV panel performance under peak solar irradiation levels. The experimental setup is elaborated in detail, and the developed cooling system for the PV panel was tested in a geographical location with a typical Mediterranean climate. The experimental results show that it is possible to achieve a maximal total increase of 16.3% (effective 7.7%) in electric power output and a total increase of 14.1% (effective 5.9%) in PV panel electrical efficiency by using the proposed cooling technique under peak solar irradiation. Furthermore, it was possible to decrease the panel temperature from an average of 54 °C (non-cooled PV panel) to 24 °C in the case of simultaneous front- and backside cooling. The economic feasibility of the proposed water spray cooling technique was also determined; a further advantage of the analyzed technique is the self-cleaning effect on the PV panel's surface, which additionally boosts the average delivered electricity.

  19. Effects of body-mapping-designed clothing on heat stress and running performance in a hot environment.

    Science.gov (United States)

    Jiao, Jiao; Li, Yi; Yao, Lei; Chen, Yajun; Guo, Yueping; Wong, Stephen H S; Ng, Frency S F; Hu, Junyan

    2017-10-01

    To investigate clothing-induced differences in human thermal response and running performance, eight male athletes participated in a repeated-measures study by wearing three sets of clothing (CloA, CloB, and CloC). CloA and CloB were body-mapping-designed, with 11% and 7% greater heat dissipation capacity, respectively, than CloC, the commonly used running clothing. The experiments consisted of steady-state running followed by an all-out performance run in a controlled hot environment. Participants' thermal responses, such as core temperature (Tc), mean skin temperature (Tsk), and heat storage (S), as well as the performance running time, were measured. CloA resulted in a shorter performance time than CloC (323.1 ± 10.4 s vs. 353.6 ± 13.2 s, p = 0.01), and induced the lowest Tsk, smallest ΔTc, and smallest S in the resting and running phases. This study indicated that clothing made with different heat dissipation capacities affects athletes' thermal responses and running performance in a hot environment. Practitioner Summary: A protocol that simulated the real situation in running competitions was used to investigate the effects of body-mapping-designed clothing on athletes' thermal responses and running performance. The findings confirmed the effects of optimised clothing with body-mapping design and advanced fabrics, and demonstrated the practical advantage of the developed clothing for exercise performance.

  20. Saving Salmon Through Advances in Fluvial Remote Sensing: Applying the Optimal Band Ratio Analysis (OBRA) for Bathymetric Mapping of Over 250 km of River Channel and Habitat Classification

    Science.gov (United States)

    Richardson, R.; Legleiter, C. J.; Harrison, L.

    2015-12-01

    Salmonids are threatened with extinction across the world by the fragmentation of riverine ecosystems through dams and diversions. In California, the effort to expand the range of spawnable habitat for native salmon by transporting fish around reservoirs is a potentially species-saving idea, but strong scientific evidence of the amount of high-quality habitat is required to make these difficult management decisions. Remote sensing has long been used in fluvial settings to identify the physical parameters that drive the quality of aquatic habitat; however, the true strength of remote sensing, its ability to cover large spatial extents, has not been applied at resolutions relevant to salmonids. This project utilizes hyperspectral data from over 250 km of the Tuolumne and Merced Rivers to extract depth and bed slope from the wetted channel, and NIR LiDAR for the surrounding topography. Optimal Band Ratio Analysis (OBRA) has proven to be an effective tool for creating bathymetric maps of river channels in ideal settings with clear water, high bottom reflectance, and depths of less than 3 meters, over short distances. Results from this study show that OBRA can be applied over larger riverscapes at high resolution (0.5 m). The depth and bed slope estimates are used to classify habitat units, which is crucial for quantifying the quality and amount of habitat in these rivers, which once produced large populations of native salmonids. As more managers look to expand habitat for these threatened species, the tools developed here will be cost-effective over the large extents that salmon migrate to spawn.
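    At its core, OBRA is an exhaustive regression: for every pair of spectral bands, the log of the band ratio is regressed against field-measured depths, and the band pair with the highest R² is retained as the depth predictor. A minimal sketch on synthetic reflectances (the band model, attenuation coefficients and noise are invented for illustration, not the Tuolumne/Merced data):

```python
import numpy as np

def obra(reflectance, depths):
    """Optimal Band Ratio Analysis: for every band pair (i, j), regress
    depth on X = ln(R_i / R_j) and keep the pair with the highest R^2.

    reflectance: (n_pixels, n_bands) array; depths: (n_pixels,) field depths.
    """
    n_bands = reflectance.shape[1]
    best = (None, -np.inf, None)
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            X = np.log(reflectance[:, i] / reflectance[:, j])
            slope, intercept = np.polyfit(X, depths, 1)
            pred = slope * X + intercept
            r2 = 1 - np.sum((depths - pred) ** 2) / np.sum((depths - depths.mean()) ** 2)
            if r2 > best[1]:
                best = ((i, j), r2, (slope, intercept))
    return best

# Synthetic example: band 0 attenuates strongly with depth, band 2 is depth-neutral
rng = np.random.default_rng(0)
d = rng.uniform(0.2, 3.0, 200)
R = np.column_stack([
    0.3 * np.exp(-0.8 * d),             # strongly depth-dependent band
    0.2 * np.exp(-0.3 * d),             # weakly depth-dependent band
    np.full_like(d, 0.25),              # depth-neutral reference band
]) * rng.uniform(0.95, 1.05, (200, 3))  # multiplicative sensor noise
pair, r2, coef = obra(R, d)
```

    The log-ratio form cancels multiplicative effects (illumination, bottom albedo) that hit both bands equally, which is why the strongly and weakly attenuating band pair carries the depth signal.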

  1. Diagnostic performance of qualitative shear-wave elastography according to different color map opacities for breast masses.

    Science.gov (United States)

    Kim, Hana; Youk, Ji Hyun; Gweon, Hye Mi; Kim, Jeong-Ah; Son, Eun Ju

    2013-08-01

    To compare the diagnostic performance of qualitative shear-wave elastography (SWE) according to three different color map opacities for breast masses, 101 patients aged 21-77 years with 113 breast masses underwent B-mode US and SWE under three different color map opacities (50%, 19% and 100%) before biopsy or surgery. The following SWE features were reviewed: visual pattern classification (patterns 1-4), color homogeneity (Ehomo) and a six-point color score of maximum elasticity (Ecol). Combining B-mode US and SWE, the likelihood of malignancy (LOM) was also scored. The area under the curve (AUC) was obtained by ROC curve analysis to assess the diagnostic performance under each color opacity. Visual color pattern, Ehomo, Ecol and LOM scoring were significantly different between benign and malignant lesions under all color opacities (P < 0.05), and the AUCs for differentiating breast lesions were comparable under all color opacities. The difference in color map opacity did not significantly influence the diagnostic performance of SWE. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
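    The AUC comparison underlying this study can be illustrated with the Mann-Whitney formulation: AUC equals the probability that a randomly chosen malignant lesion scores higher than a randomly chosen benign one, with ties counting one half. The scores below are hypothetical, not the study's data:

```python
def auc_mann_whitney(scores_benign, scores_malignant):
    """AUC as the probability that a malignant lesion scores higher
    than a benign one (ties count one half)."""
    wins = 0.0
    for b in scores_benign:
        for m in scores_malignant:
            if m > b:
                wins += 1.0
            elif m == b:
                wins += 0.5
    return wins / (len(scores_benign) * len(scores_malignant))

# Hypothetical six-point color scores (Ecol) for benign vs malignant masses
benign = [1, 2, 2, 3, 1, 2]
malignant = [4, 5, 3, 6, 5, 4]
auc = auc_mann_whitney(benign, malignant)  # close to 1 for well-separated groups
```

    Because this formulation depends only on score rankings, it is well suited to ordinal readings such as a six-point elasticity score.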

  2. Improved performance of the microbial electrolysis desalination and chemical-production cell with enlarged anode and high applied voltages.

    Science.gov (United States)

    Ye, Bo; Luo, Haiping; Lu, Yaobin; Liu, Guangli; Zhang, Renduo; Li, Xiao

    2017-11-01

    The aim of this study was to improve the performance of the microbial electrolysis desalination and chemical-production cell (MEDCC) using an enlarged anode and high applied voltages. MEDCCs with anode lengths of 9 and 48 cm (i.e., the 9 cm-anode and 48 cm-anode MEDCCs, respectively) were tested under different voltages (1.2-3.0 V). Our results demonstrated for the first time that the MEDCC could maintain high performance even under an applied voltage higher than that for water dissociation (i.e., 1.8 V). Under an applied voltage of 2.5 V, the maximum current density in the 48 cm-anode MEDCC reached 32.8 ± 2.6 A/m2, which is one of the highest current densities reported so far in a bioelectrochemical system (BES). The relative abundance of Geobacter changed along the anode length. Our results show the great potential of the BES with an enlarged anode and high applied voltages. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Self-organizing maps applied to two-phase flow on natural circulation loop study; Aplicacao de mapas auto-organizaveis na classificacao de padroes de escoamento bifasico

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Leonardo Ferreira

    2016-11-01

    Two-phase flow of liquid and gas is found in many closed circuits that use natural circulation for cooling purposes. The natural circulation phenomenon is important in recent nuclear power plant projects for decay heat removal. The Natural Circulation Facility (Circuito de Circulacao Natural, CCN) installed at Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN, is an experimental circuit designed to provide thermal hydraulic data related to single- and two-phase flow under natural circulation conditions. The periodic flow oscillation behavior can be observed thoroughly in this facility thanks to its transparent glass tubes. Heat transfer estimation has been improved based on models that require precise prediction of flow pattern transitions. This work presents experiments performed at the CCN to visualize natural circulation cycles in order to classify two-phase flow patterns associated with phase transients and static flow instabilities. Images are compared and clustered using Kohonen self-organizing maps (SOMs) applied to different digital image features. The Full-Frame Discrete Cosine Transform (FFDCT) coefficients were used as input for the classification task, yielding good results. The FFDCT prototypes obtained can be associated with each flow pattern, enabling a better comprehension of each observed instability. A systematic test methodology was used to verify classifier robustness.
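    A Kohonen SOM of the kind used here can be sketched in a few lines: prototype vectors on a 2-D grid are pulled toward each training sample, with a neighborhood kernel and learning rate that shrink over time, so that similar inputs end up mapping to nearby grid units. The feature vectors below are synthetic stand-ins for the FFDCT coefficients of the flow images; grid size and schedules are illustrative:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=300, lr0=0.5, sigma0=2.0, seed=1):
    """Train a tiny Kohonen self-organizing map on feature vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighborhood
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best matching unit
        dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        nbh = np.exp(-dist2 / (2 * sigma ** 2))            # neighborhood kernel
        weights += lr * nbh[:, None] * (x - weights)
    return weights

def bmu_index(weights, x):
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))

# Two synthetic "flow pattern" clusters in an 8-dimensional feature space
rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=0.3, size=(50, 8))
b = rng.normal(loc=3.0, scale=0.3, size=(50, 8))
som = train_som(np.vstack([a, b]))
# Samples from different clusters map to different SOM prototypes
ia, ib = bmu_index(som, a[0]), bmu_index(som, b[0])
```

    After training, each flow image is assigned the grid unit of its best matching prototype, and units can be labeled with flow patterns by inspecting the images that land on them.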

  4. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr x Holstein F2 population

    Directory of Open Access Journals (Sweden)

    Fabyano Fonseca Silva

    2011-01-01

    Full Text Available Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for posterior use in breeding programs. The number of ticks per animal is a discrete counting trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, the Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr x Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the generalized or simple ZIP model for analysis. On the other hand, when working with data containing zeros but not zero-inflated, the Poisson model or a data-transformation approach, such as the square-root or Box-Cox transformation, is applicable.
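    The zero-inflated Poisson model compared above mixes a point mass at zero (the never-infected animals) with an ordinary Poisson count for the infected ones. A minimal sketch of its probability mass function, with illustrative parameter values:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson probability mass: a point mass pi at zero
    mixed with a Poisson(lam) component of weight 1 - pi."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

# With 40% "structural" zeros (never-infected animals) on top of a
# Poisson(3.5) tick count, zeros are far more frequent than a plain
# Poisson(3.5) model would predict:
p0_zip = zip_pmf(0, lam=3.5, pi=0.4)
p0_poisson = math.exp(-3.5)
```

    Fitting pi and lam by maximum likelihood (as done, in a regression setting, by the models in the study) then separates "cannot be infected" from "infected but happened to carry zero ticks".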

  5. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr × Holstein F2 population

    Science.gov (United States)

    Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto

    2011-01-01

    Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for posterior use in breeding programs. The number of ticks per animal is a discrete counting trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, the Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the generalized or simple ZIP model for analysis. On the other hand, when working with data containing zeros but not zero-inflated, the Poisson model or a data-transformation approach, such as the square-root or Box-Cox transformation, is applicable. PMID:22215960

  6. EMPIRICAL MODELS FOR PERFORMANCE OF DRIPPERS APPLYING CASHEW NUT PROCESSING WASTEWATER

    Directory of Open Access Journals (Sweden)

    KETSON BRUNO DA SILVA

    2016-01-01

    Full Text Available The objective of this work was to develop empirical models for the hydraulic performance of drippers operating with cashew nut processing wastewater as a function of operating time, operating pressure and effluent quality. The experiment consisted of two factors, type of dripper (D1 = 1.65 L h-1, D2 = 2.00 L h-1 and D3 = 4.00 L h-1) and operating pressure (70, 140, 210 and 280 kPa), with three replications. The flow variation coefficient (FVC), distribution uniformity coefficient (DUC) and the physicochemical and biological characteristics of the effluent were evaluated every 20 hours until completing 160 hours of operation. Data were interpreted through simple and multiple linear stepwise regression models. The regression models fitted to the FVC and DUC as a function of operating time were square-root, linear and quadratic, with 17%, 17% and 8%, and 17%, 17% and 0%, respectively. The regression models fitted to the FVC and DUC as a function of operating pressure were square-root, linear and quadratic, with 11%, 22% and 0%, and 0%, 22% and 11%, respectively. Multiple linear regressions showed that the dissolved solids content is the main wastewater characteristic that interferes with the FVC and DUC values of the drip units D1 (1.65 L h-1) and D3 (4.00 L h-1) operating at a working pressure of 70 kPa (P1).
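    The square-root, linear and quadratic fits reported above are ordinary least-squares regressions on transformed operating time. A sketch of selecting among the three model forms by R² follows; the uniformity readings are hypothetical, and note that the quadratic model nests the linear one, so its training R² is never lower:

```python
import numpy as np

def best_model(t, y):
    """Fit square-root, linear and quadratic models of y against
    operating time t and return the name of the highest-R^2 fit."""
    designs = {
        "sqrt":      np.column_stack([np.ones_like(t), np.sqrt(t)]),
        "linear":    np.column_stack([np.ones_like(t), t]),
        "quadratic": np.column_stack([np.ones_like(t), t, t ** 2]),
    }
    results = {}
    for name, X in designs.items():
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        results[name] = (r2, beta)
    return max(results, key=lambda n: results[n][0]), results

# Hypothetical uniformity readings (%) every 20 h of operation:
# a mild linear decline plus small measurement noise
t = np.array([20., 40., 60., 80., 100., 120., 140., 160.])
duc = 98.0 - 0.05 * t + np.array([0.1, -0.1, 0.05, -0.05, 0.1, -0.1, 0.05, -0.05])
name, results = best_model(t, duc)
```

    In practice a stepwise procedure, as used in the study, would also penalize the extra quadratic term rather than compare raw training R² alone.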

  7. Applying integrated software to optimize corporate production performance: a case study at Suncor

    International Nuclear Information System (INIS)

    Masse, L.P.; Rhynes, P.

    1997-01-01

    The feasibility of and need for a central database of basic well data for use in the petroleum industry to enhance production performance were discussed. Suncor developed a central database of well data as the foundation for a future systems architecture for its own use. The perceived current and future benefits of such a system were described. Suncor identified the need for a corporate repository that is accessible to multiple applications and provides the opportunity to upgrade the system to new technology that will benefit from integration. The objective was to document existing data sets, identify what additional data would be useful, and document existing processes around this well data. The integrated set of data is supplied by multiple vendors and includes public land data, production budget, public well data, forecasting, economics, drilling, procurement system, fixed assets, maintenance, land administration, field data capture, production accounting and financial accounting. In addition to access to current well data, significant added value is expected from the proactive communication between departments and the additional time available for analysis and decisions, as opposed to searching for data and comparing sources. 4 figs

  8. Performance of bioaugmentation-assisted phytoextraction applied to metal contaminated soils: A review

    International Nuclear Information System (INIS)

    Lebeau, Thierry; Braud, Armelle; Jezequel, Karine

    2008-01-01

    Bioaugmentation-assisted phytoextraction is a promising method for the clean-up of soils contaminated by metals. Bacteria, mainly plant growth-promoting rhizobacteria (PGPR), and fungi, mainly arbuscular mycorrhizal fungi (AMF), associated with hyperaccumulating or non-hyperaccumulating plants were analyzed on the basis of a bioprocess engineering approach (concentration and amount of metals extracted by plants, translocation and bioconcentration factors, and plant biomass). On average, bioaugmentation increased the metals accumulated in shoots by a factor of about 2 (metal concentration) and 5 (amount), without any obvious differences between bacteria and fungi. To optimize this process, new relevant microorganism-plant associations and field-scale experiments are needed, along with a common methodology for comparing all experiments on the same basis. Recommendations were suggested concerning both the microbial-plant selection and the implementation of bioaugmentation to enhance microbial survival. The use of microbial consortia associated with plants was discussed, notably for multi-contaminated soils. - Bioaugmentation improves the phytoextraction performance of plants in soils contaminated by metals

  9. Performance of bioaugmentation-assisted phytoextraction applied to metal contaminated soils: A review

    Energy Technology Data Exchange (ETDEWEB)

    Lebeau, Thierry [Equipe Depollution Biologique des Sols (EDBS), University of Haute-Alsace, 28, rue de Herrlisheim, BP 50 568, 68 008 Colmar Cedex (France)], E-mail: thierry.lebeau@uha.fr; Braud, Armelle; Jezequel, Karine [Equipe Depollution Biologique des Sols (EDBS), University of Haute-Alsace, 28, rue de Herrlisheim, BP 50 568, 68 008 Colmar Cedex (France)

    2008-06-15

    Bioaugmentation-assisted phytoextraction is a promising method for the clean-up of soils contaminated by metals. Bacteria (mainly Plant Growth Promoting Rhizobacteria, PGPR) and fungi (mainly Arbuscular Mycorrhizal Fungi, AMF) associated with hyperaccumulating or non-hyperaccumulating plants were analyzed on the basis of a bioprocess engineering approach (concentration and amount of metals extracted by plants, translocation and bioconcentration factors, and plant biomass). On average, bioaugmentation increased the metals accumulated by shoots by a factor of about 2 (metal concentration) and 5 (amount), without any obvious differences between bacteria and fungi. To optimize this process, new relevant microorganism-plant associations and field-scale experiments are needed, along with a common methodology for comparing all experiments on the same basis. Recommendations were suggested concerning both the microbe-plant selection and the implementation of bioaugmentation to enhance microbial survival. The use of microbial consortia associated with plants was discussed, notably for multi-contaminated soils. - Bioaugmentation assistance improves the phytoextraction performance of plants in soils contaminated by metals.

  10. Performance comparison of digital microRNA profiling technologies applied on human breast cancer cell lines.

    Directory of Open Access Journals (Sweden)

    Erik Knutsen

    Full Text Available MicroRNA profiling represents an important first step in deducing individual RNA-based regulatory function in a cell, tissue, or at a specific developmental stage. Currently there are several different platforms to choose from when generating the initial miRNA profiles. In this study we investigate recently developed digital microRNA high-throughput technologies. Four different platforms were compared, including next-generation SOLiD ligation sequencing and Illumina HiSeq sequencing, hybridization-based NanoString nCounter, and miRCURY locked nucleic acid RT-qPCR. For all four technologies, full microRNA profiles were generated from human cell lines that represent noninvasive and invasive tumorigenic breast cancer. This study reports the correlation between platforms, as well as a more extensive analysis of the accuracy and sensitivity of the data generated on each platform, and important considerations when verifying results by the use of additional technologies. We found all the platforms to be highly capable of microRNA analysis. Furthermore, the two NGS platforms and RT-qPCR all have equally high sensitivity, and the fold-change accuracy is independent of individual miRNA concentration for NGS and RT-qPCR. Based on these findings we propose new guidelines and considerations for performing microRNA profiling.

  11. Evaluation of icing drag coefficient correlations applied to iced propeller performance prediction

    Science.gov (United States)

    Miller, Thomas L.; Shaw, R. J.; Korkan, K. D.

    1987-01-01

    Evaluation of three empirical icing drag coefficient correlations is accomplished through application to a set of propeller icing data. The various correlations represent the best means currently available for relating drag rise to various flight and atmospheric conditions for both fixed-wing and rotating airfoils, and the work presented here illustrates and evaluates one such application to the latter case. The origins of each of the correlations are discussed, and their apparent capabilities and limitations are summarized. These correlations form an integral part of a computer code, ICEPERF, which has been designed to calculate iced propeller performance. Comparison with experimental propeller icing data shows generally good agreement, with the quality of the predicted results seen to be directly related to the radial icing extent of each case. The code's capability to properly predict thrust coefficient, power coefficient, and propeller efficiency is shown to be strongly dependent on the choice of correlation selected, as well as upon proper specification of radial icing extent.

  12. Performance Analysis of Long-Reach Coherent Detection OFDM-PON Downstream Transmission Using m-QAM-Mapped OFDM Signal

    Science.gov (United States)

    Pandey, Gaurav; Goel, Aditya

    2017-12-01

    In this paper, orthogonal frequency division multiplexing (OFDM)-passive optical network (PON) downstream transmission is demonstrated over different lengths of fiber at the remote node (RN) for different m-QAM (quadrature amplitude modulation)-mapped OFDM signals (m = 4, 16, 32 and 64) transmitted from the central office (CO) at different data rates (10, 20, 30 and 40 Gbps), using coherent detection at the user end or optical network unit (ONU). Investigation is performed with different numbers of subcarriers (32, 64, 128, 512 and 1,024) and back-to-back optical signal-to-noise ratio (OSNR), along with transmitted and received constellation diagrams, for m-QAM-mapped coherent OFDM downstream transmission at different speeds over different transmission distances. Received optical power is calculated for different bit error rates (BERs) at different speeds using m-QAM-mapped coherent detection OFDM downstream transmission. No dispersion compensation is utilized within the fiber span. Simulation results suggest the lengths and data rates that can be used for each m-QAM-mapped coherent detection OFDM downstream transmission, and the proposed system may be implemented in next-generation high-speed PONs (NG-PONs).
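    The m-QAM mapping step referred to above can be sketched in a few lines. This is an illustrative example only, not the authors' simulation setup: it Gray-maps a bit stream onto a unit-average-power 16-QAM constellation of the kind modulated onto each OFDM subcarrier.

```python
import numpy as np

def qam16_map(bits):
    """Map groups of 4 bits to Gray-coded 16-QAM symbols with unit average power."""
    assert len(bits) % 4 == 0
    # Gray code per I/Q axis: 00 -> -3, 01 -> -1, 11 -> +1, 10 -> +3
    gray = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
    symbols = []
    for i in range(0, len(bits), 4):
        re = gray[(bits[i], bits[i + 1])]
        im = gray[(bits[i + 2], bits[i + 3])]
        symbols.append(complex(re, im))
    # The +/-1, +/-3 grid has mean symbol power 10, so normalise by sqrt(10).
    return np.array(symbols) / np.sqrt(10)

rng = np.random.default_rng(0)
bits = [int(b) for b in rng.integers(0, 2, 4000)]
syms = qam16_map(bits)
print(len(syms))
```

    In an OFDM transmitter these symbols would then be placed on subcarriers and passed through an inverse FFT; higher m packs more bits per symbol at the cost of OSNR sensitivity, which is the trade-off the paper quantifies.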

  13. Predictive performance for population models using stochastic differential equations applied on data from an oral glucose tolerance test

    DEFF Research Database (Denmark)

    Møller, Jonas Bech; Overgaard, R.V.; Madsen, Henrik

    2010-01-01

    Several articles have investigated stochastic differential equations (SDEs) in PK/PD models, but few have quantitatively investigated the benefits to predictive performance of models based on real data. Estimation of first-phase insulin secretion, which reflects beta-cell function, using models of ...... obtained from the glucose tolerance tests. Since the estimation time of the extended models was not heavily increased compared to the basic models, the applied method is concluded to have high relevance not only in theory but also in practice....

  14. Early detection of poor adherers to statins: applying individualized surveillance to pay for performance.

    Directory of Open Access Journals (Sweden)

    Andrew J Zimolzak

    Full Text Available Medication nonadherence costs $300 billion annually in the US. Medicare Advantage plans have a financial incentive to increase medication adherence among members because the Centers for Medicare and Medicaid Services (CMS) now awards substantive bonus payments to such plans, based in part on population adherence to chronic medications. We sought to build an individualized surveillance model that detects early which beneficiaries will fall below the CMS adherence threshold. This was a retrospective study of over 210,000 beneficiaries initiating statins, in a database of private insurance claims, from 2008-2011. A logistic regression model was constructed to use statin adherence from initiation to day 90 to predict beneficiaries who would not meet the CMS measure of a proportion of days covered (PDC) of 0.8 or above, from day 91 to 365. The model controlled for 15 additional characteristics. In a sensitivity analysis, we varied the number of days of adherence data used for prediction. Lower adherence in the first 90 days was the strongest predictor of one-year nonadherence, with an odds ratio of 25.0 (95% confidence interval 23.7-26.5) for poor adherence at one year. The model had an area under the receiver operating characteristic curve of 0.80. Sensitivity analysis revealed that predictions of comparable accuracy could be made only 40 days after statin initiation. When members with 30-day supplies for their first statin fill had predictions made at 40 days, and members with 90-day supplies for their first fill had predictions made at 100 days, poor adherence could be predicted with 86% positive predictive value. To preserve their Medicare Star ratings, plan managers should identify or develop effective programs to improve adherence. An individualized surveillance approach can be used to target members who would most benefit, recognizing the tradeoff between improved model performance over time and the advantage of earlier detection.
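    The CMS adherence measure referenced above, proportion of days covered (PDC), is computed from pharmacy fill records. A minimal sketch with hypothetical fill data (not drawn from the study's claims database): PDC is the fraction of days in a window on which the member had supply on hand.

```python
from datetime import date, timedelta

def pdc(fills, start, end):
    """Proportion of days covered over the inclusive window [start, end].

    fills: list of (fill_date, days_supply) tuples; overlapping supply on the
    same day is counted once.
    """
    covered = set()
    for fill_date, days in fills:
        for k in range(days):
            d = fill_date + timedelta(days=k)
            if start <= d <= end:
                covered.add(d)
    window = (end - start).days + 1
    return len(covered) / window

# Two 30-day fills with a 15-day gap inside a 91-day window.
fills = [(date(2024, 1, 1), 30), (date(2024, 2, 15), 30)]
print(round(pdc(fills, date(2024, 1, 1), date(2024, 3, 31)), 2))
```

    Under the CMS rule described in the abstract, a member is adherent only if this value is 0.8 or above over the measurement year.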

  15. Geographically weighted negative binomial regression applied to zonal level safety performance models.

    Science.gov (United States)

    Gomes, Marcos José Timbó Lima; Cunto, Flávio; da Silva, Alan Ricardo

    2017-09-01

    Generalized Linear Models (GLM) with a negative binomial distribution for errors have been widely used to estimate safety at the transportation planning level. The limited ability of this technique to take spatial effects into account can be overcome through the use of local models from spatial regression techniques, such as Geographically Weighted Poisson Regression (GWPR). Although GWPR deals with spatial dependency and heterogeneity and has already been used in some road safety studies at the planning level, it fails to account for the overdispersion that can be found in observations of road-traffic crashes. Two approaches were adopted for the Geographically Weighted Negative Binomial Regression (GWNBR) model to allow discrete data to be modeled in a non-stationary form and to account for the overdispersion of the data: the first assumes a constant overdispersion for all the traffic zones and the second allows it to vary for each spatial unit. This research conducts a comparative analysis between non-spatial global crash prediction models and local spatial GWPR and GWNBR models at the traffic zone level in Fortaleza, Brazil. A geographic database of 126 traffic zones was compiled from the available data on exposure, network characteristics, socioeconomic factors and land use. The models were calibrated using the frequency of injury crashes as the dependent variable, and the results showed that GWPR and GWNBR achieved better performance than GLM for the average residuals and likelihood, as well as reducing the spatial autocorrelation of the residuals; the GWNBR model was better able to capture the spatial heterogeneity of the crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
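    The "geographically weighted" part of GWPR/GWNBR amounts to fitting a separate local model at each traffic zone, with every observation weighted by a distance-decay kernel centred on that zone. A minimal sketch of one common kernel choice (the distances and bandwidth below are illustrative, not values from the paper):

```python
import math

def gaussian_weights(dists, bandwidth):
    """Gaussian distance-decay kernel: w_ij = exp(-(d_ij / h)^2 / 2).

    dists: distances from the focal zone to each observation; bandwidth h
    controls how quickly influence fades with distance.
    """
    return [math.exp(-0.5 * (d / bandwidth) ** 2) for d in dists]

# Hypothetical distances (km) from a focal zone to four neighbouring zones.
w = gaussian_weights([0.0, 1.0, 2.0, 5.0], bandwidth=2.0)
print([round(x, 3) for x in w])
```

    Each local (negative binomial) regression is then fit with these weights, so nearby zones dominate the estimate and the coefficients, and in GWNBR the overdispersion parameter, can vary over space.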

  16. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
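    The detection principle described above relies on chaos amplifying any computational error. A minimal sketch with hypothetical parameters (a logistic map with r = 3.99), not the patented implementation: every node iterates the same map from a shared seed, so a healthy node reproduces the reference trajectory exactly, while a fault that perturbs even one bit makes its trajectory diverge detectably.

```python
def logistic_trajectory(x0, r=3.99, steps=60):
    """Iterate the chaotic logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

ref = logistic_trajectory(0.3)          # reference trajectory
ok = logistic_trajectory(0.3)           # healthy node: bit-identical
faulty = logistic_trajectory(0.3 + 1e-12)  # node with a tiny computation error

# Chaos amplifies the 1e-12 perturbation until the trajectories disagree.
diverged = any(abs(a - b) > 0.1 for a, b in zip(ref, faulty))
print(ref == ok, diverged)
```

    Comparing trajectories is cheap relative to re-running application code, which is why the patent targets exascale (10^18 flop/s) machines where component faults are frequent.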

  17. An evaluation of the performance of tag SNPs derived from HapMap in a Caucasian population.

    Directory of Open Access Journals (Sweden)

    Alexandre Montpetit

    2006-03-01

    Full Text Available The Haplotype Map (HapMap) project recently generated genotype data for more than 1 million single-nucleotide polymorphisms (SNPs) in four population samples. The main application of the data is in the selection of tag single-nucleotide polymorphisms (tSNPs) to use in association studies. The usefulness of this selection process needs to be verified in populations outside those used for the HapMap project. In addition, it is not known how well the data represent the general population, as only 90-120 chromosomes were used for each population and the genotyped SNPs were selected so as to have high frequencies. In this study, we analyzed more than 1,000 individuals from Estonia. The population of this northern European country has been influenced by many different waves of migrations from Europe and Russia. We genotyped 1,536 randomly selected SNPs from two 500-kbp ENCODE regions on Chromosome 2. We observed that the tSNPs selected from the CEPH (Centre d'Etude du Polymorphisme Humain) from Utah (CEU) HapMap samples (derived from US residents with northern and western European ancestry) captured most of the variation in the Estonia sample. (Between 90% and 95% of the SNPs with a minor allele frequency of more than 5% have an r² of at least 0.8 with one of the CEU tSNPs.) Using the reverse approach, tags selected from the Estonia sample could almost equally well describe the CEU sample. Finally, we observed that the sample size, the allelic frequency, and the SNP density in the dataset used to select the tags each have important effects on the tagging performance. Overall, our study supports the use of HapMap data in other Caucasian populations, but the SNP density and the bias towards high-frequency SNPs have to be taken into account when designing association studies.
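    The r² figure quoted above is the standard linkage-disequilibrium measure of how well one SNP "captures" another. A minimal sketch with hypothetical haplotype frequencies (not values from the study):

```python
def ld_r2(p_ab, p_a, p_b):
    """r^2 between two biallelic SNPs.

    p_ab: frequency of the haplotype carrying allele A at SNP 1 and allele B
    at SNP 2; p_a, p_b: the marginal allele frequencies. D = p_AB - p_A * p_B
    is the disequilibrium coefficient.
    """
    d = p_ab - p_a * p_b
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Perfectly correlated SNPs (the alleles always co-occur) give r^2 = 1,
# so genotyping the tag SNP recovers the other SNP exactly.
print(ld_r2(0.3, 0.3, 0.3))
```

    A tagging threshold of r² ≥ 0.8, as used in the abstract, means an association detectable at the untyped SNP loses little power when tested via the tag.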

  18. Performance Values for Non-Destructive Assay (NDA) Technique Applied to Wastes: Evaluation by the ESARDA NDA Working Group

    International Nuclear Information System (INIS)

    Rackham, Jamie; Weber, Anne-Laure; Chard, Patrick

    2012-01-01

    The first evaluation of NDA performance values was undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques and was published in 1993. Almost ten years later, in 2002, the Working Group reviewed those values and reported on improvements in performance values and new measurement techniques that had emerged since the original assessment. The 2002 evaluation of NDA performance values did not include waste measurements (although these had been incorporated into the 1993 exercise) because, although the same measurement techniques are generally applied, the performance is significantly different compared to the assay of conventional Safeguarded special nuclear material. It was therefore considered more appropriate to perform a separate evaluation of performance values for waste assay. Waste assay is becoming increasingly important within the Safeguards community, particularly since the implementation of the Additional Protocol, which calls for declaration of plutonium and HEU bearing waste in addition to information on existing declared material or facilities. Improvements in measurement performance in recent years, in particular in accuracy, mean that special nuclear materials can now be accounted for in wastes with greater certainty. This paper presents an evaluation of performance values for the NDA techniques in common usage for the assay of waste containing special nuclear material. The main topics covered by the document are: (1) techniques for plutonium-bearing solid wastes; (2) techniques for uranium-bearing solid wastes; (3) techniques for assay of fissile material in spent fuel wastes. Originally it was intended to include performance values for measurements of uranium and plutonium in liquid wastes; however, as no performance data for liquid waste measurements were obtained, it was decided to exclude liquid wastes from this report. This issue of the performance values for waste assay has been evaluated and discussed by the ESARDA NDA Working Group.

  19. Performance Values for Non-Destructive Assay (NDA) Technique Applied to Wastes: Evaluation by the ESARDA NDA Working Group

    Energy Technology Data Exchange (ETDEWEB)

    Rackham, Jamie [Babcock International Group, Sellafield, Seascale, Cumbria, (United Kingdom); Weber, Anne-Laure [Institut de Radioprotection et de Surete Nucleaire Fontenay-Aux-Roses (France); Chard, Patrick [Canberra, Forss Business and Technology park, Thurso, Caithness (United Kingdom)

    2012-12-15

    The first evaluation of NDA performance values was undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques and was published in 1993. Almost ten years later, in 2002, the Working Group reviewed those values and reported on improvements in performance values and new measurement techniques that had emerged since the original assessment. The 2002 evaluation of NDA performance values did not include waste measurements (although these had been incorporated into the 1993 exercise) because, although the same measurement techniques are generally applied, the performance is significantly different compared to the assay of conventional Safeguarded special nuclear material. It was therefore considered more appropriate to perform a separate evaluation of performance values for waste assay. Waste assay is becoming increasingly important within the Safeguards community, particularly since the implementation of the Additional Protocol, which calls for declaration of plutonium and HEU bearing waste in addition to information on existing declared material or facilities. Improvements in measurement performance in recent years, in particular in accuracy, mean that special nuclear materials can now be accounted for in wastes with greater certainty. This paper presents an evaluation of performance values for the NDA techniques in common usage for the assay of waste containing special nuclear material. The main topics covered by the document are: (1) techniques for plutonium-bearing solid wastes; (2) techniques for uranium-bearing solid wastes; (3) techniques for assay of fissile material in spent fuel wastes. Originally it was intended to include performance values for measurements of uranium and plutonium in liquid wastes; however, as no performance data for liquid waste measurements were obtained, it was decided to exclude liquid wastes from this report. This issue of the performance values for waste assay has been evaluated and discussed by the ESARDA NDA Working Group.

  20. The MAPS-based vertex detector for the STAR experiment: Lessons learned and performance

    Energy Technology Data Exchange (ETDEWEB)

    Contin, Giacomo, E-mail: gcontin@lbl.gov

    2016-09-21

    The PiXeL detector (PXL) of the STAR experiment at RHIC is the first application of the state-of-the-art thin Monolithic Active Pixel Sensors (MAPS) technology in a collider environment. The PXL, together with the Intermediate Silicon Tracker (IST) and the Silicon Strip Detector (SSD), form the Heavy Flavor Tracker (HFT), which has been designed to improve the vertex resolution and extend the STAR measurement capabilities in the heavy flavor domain, providing a clean probe for studying the Quark–Gluon Plasma. The two PXL layers are placed at radii of 2.8 and 8 cm from the beam line, respectively, and are based on ultra-thin high-resolution MAPS sensors. The sensor features 20.7 μm pixel pitch, 185.6 μs readout time and 170 mW/cm² power dissipation. The detector is air-cooled, allowing a global material budget of 0.4% radiation length on the innermost layer. A novel mechanical approach to detector insertion allows for fast installation and integration of the pixel sub-detector. The HFT took data in Au+Au collisions at 200 GeV during the 2014 RHIC run. Modified during the RHIC shutdown to improve its reliability, material budget, and tracking capabilities, the HFT took data in p+p and p+Au collisions at √s_NN = 200 GeV in the 2015 RHIC run. In this paper we present detector specifications, experience from the construction and operations, and lessons learned. We also show preliminary results from 2014 Au+Au data analyses, demonstrating the capabilities of charm reconstruction with the HFT. - Highlights: • First MAPS-based vertex detector in a collider experiment. • Achieved low material budget of 0.39% of radiation length per detector layer. • Track pointing resolution to the primary vertex better than 10 ⊕ 24 GeV/(p×c) μm. • Gain in significance for the topological reconstruction of the D⁰ → Kπ decay in STAR. • Observed latch-up induced damage of MAPS sensors.

  1. A Manual for the Performance of Protective Equipment Fit-Mapping

    Science.gov (United States)

    2009-10-01

    [Fragmentary excerpt] Fit-map-derived accommodation envelopes relating stature ranges to chest-circumference ranges; CAESAR landmark definition THELION/BUSTPOINT, right and left (most anterior protrusion of the bra cup on women); reference: Robinette, K.M. (1996), Flight suit sizes for women, Armstrong Laboratory, Brooks AFB, MIPR Number 96MM6646.

  2. Performance values for non destructive assay (NDA) techniques applied to safeguards: the 2002 evaluation by the ESARDA NDA Working Group

    International Nuclear Information System (INIS)

    Guardini, S.

    2003-01-01

    The first evaluation of NDA performance values undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques (WGNDA) was published in 1993. Almost 10 years later the Working Group decided to review those values, to report about improvements and to issue new performance values for techniques which were not applied in the early nineties, or were at that time only emerging. Non-Destructive Assay techniques have become more and more important in recent years, and they are used to a large extent in nuclear material accountancy and control both by operators and control authorities. As a consequence, the performance evaluation for NDA techniques is of particular relevance to safeguards authorities in optimising Safeguards operations and reducing costs. Performance values are important also for NMAC regulators, to define detection levels, limits for anomalies, goal quantities and to negotiate basic audit rules. This paper presents the latest evaluation of ESARDA Performance Values (EPVs) for the most common NDA techniques currently used for the assay of nuclear materials for Safeguards purposes. The main topics covered by the document are: techniques for plutonium-bearing materials: PuO2 and MOX; techniques for U-bearing materials; techniques for U and Pu in liquid form; techniques for spent fuel assay. This issue of the performance values is the result of specific international round robin exercises, field measurements and ad hoc experiments, evaluated and discussed in the ESARDA NDA Working Group. (author)

  3. Impact of Voltage Mapping to Guide Whether to Perform Ablation of the Posterior Wall in Patients With Persistent Atrial Fibrillation.

    Science.gov (United States)

    Cutler, Michael J; Johnson, Jeremy; Abozguia, Khalid; Rowan, Shane; Lewis, William; Costantini, Otto; Natale, Andrea; Ziv, Ohad

    2016-01-01

    Fibrosis as a substrate for atrial fibrillation (AF) has been shown in numerous preclinical models. Voltage mapping enables in vivo assessment of scar in the left atrium (LA), which can be targeted with catheter ablation. We hypothesized that using the presence or absence of low voltage to guide ablation beyond pulmonary vein antral isolation (PVAI) would improve atrial arrhythmia (AF/AT)-free survival in persistent AF. This was a single-center retrospective analysis of 2 AF ablation strategies: (1) standard ablation (SA) versus (2) voltage-guided ablation (VGA). PVAI was performed in both groups. With SA, additional lesions beyond PVAI were performed at the discretion of the operator. With VGA, additional lesions to isolate the LA posterior wall were performed if voltage mapping of this region in sinus rhythm showed scar (low LA voltage); the two groups were otherwise similar except for atrial size. Posterior wall ablation was performed in 57% of patients with SA compared to 42% with VGA. VGA increased 1-year AF-/AT-free survival when compared to SA (80% vs. 57%; P = 0.005). In a multivariate analysis, VGA was the only independent predictor of AF-/AT-free survival (hazard ratio of 0.30; P = 0.002). The presence of LA posterior wall scar may be an important ablation target in persistent AF. A prospective randomized trial is needed to confirm these data. © 2015 Wiley Periodicals, Inc.

  4. Performance of Global-Appearance Descriptors in Map Building and Localization Using Omnidirectional Vision

    Directory of Open Access Journals (Sweden)

    Luis Payá

    2014-02-01

    Full Text Available Map building and localization are two crucial abilities that autonomous robots must develop. Vision sensors have become a widespread option to solve these problems. When using this kind of sensor, the robot must extract the necessary information from the scenes to build a representation of the environment where it has to move and to estimate its position and orientation robustly. Techniques based on the global appearance of the scenes constitute one possible approach to extracting this information. They consist of representing each scene with a single descriptor that gathers global information from the scene. These techniques present some advantages compared to classical descriptors based on the extraction of local features; however, a good configuration of the parameters is important to reach a compromise between computational cost and accuracy. In this paper we make an exhaustive comparison among several global-appearance descriptors to solve the mapping and localization problem. With this aim, we make use of several image sets captured in indoor environments under realistic working conditions. The datasets have been collected using an omnidirectional vision sensor mounted on the robot.
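    A global-appearance descriptor of the general kind compared in the paper can be as simple as a normalized grid of block-mean intensities, with localization performed by nearest-neighbour matching against the stored map. This is a hypothetical toy descriptor on random images, not one of the descriptors the authors actually evaluate:

```python
import numpy as np

def global_descriptor(image, grid=(4, 4)):
    """Reduce an H x W intensity image to one flattened grid of block means."""
    h, w = image.shape
    gh, gw = grid
    blocks = image[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    d = blocks.mean(axis=(1, 3)).ravel()
    return d / (np.linalg.norm(d) + 1e-12)  # normalise against brightness changes

def localize(query, map_descriptors):
    """Return the index of the stored scene whose descriptor is closest."""
    dists = [np.linalg.norm(query - m) for m in map_descriptors]
    return int(np.argmin(dists))

rng = np.random.default_rng(42)
scenes = [rng.random((64, 64)) for _ in range(5)]       # the "map"
map_desc = [global_descriptor(s) for s in scenes]
noisy_view = scenes[2] + 0.05 * rng.random((64, 64))    # revisiting scene 2
print(localize(global_descriptor(noisy_view), map_desc))
```

    The compromise the abstract mentions shows up directly here: a coarser grid is cheaper to compute and match but discriminates scenes less well, while a finer grid costs more and is more sensitive to viewpoint changes.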

  5. Mapping the performance of wood-burning stoves by installations worldwide

    DEFF Research Database (Denmark)

    Luis Teles de Carvalho, Ricardo; Jensen, Ole Michael; Tarelho, Luis A. C.

    2016-01-01

    environmental health risk. Research stressed the need to increase the performance of conventional interplays between users, stoves and buildings. This scientific review aims to characterize the performance and environmental effects of 9 wood-burning stove categories by installations worldwide...

  6. What Happens Inside a Fuel Cell? Developing an Experimental Functional Map of Fuel Cell Performance

    KAUST Repository

    Brett, Daniel J. L.; Kucernak, Anthony R.; Aguiar, Patricia; Atkins, Stephen C.; Brandon, Nigel P.; Clague, Ralph; Cohen, Lesley F.; Hinds, Gareth; Kalyvas, Christos; Offer, Gregory J.; Ladewig, Bradley; Maher, Robert; Marquis, Andrew; Shearing, Paul; Vasileiadis, Nikos; Vesovic, Velisa

    2010-01-01

    Fuel cell performance is determined by the complex interplay of mass transport, energy transfer and electrochemical processes. The convolution of these processes leads to spatial heterogeneity in the way that fuel cells perform, particularly due

  7. Applying digital particle image velocimetry to animal-generated flows : Traps, hurdles and cures in mapping steady and unsteady flows in Re regimes between 10(-2) and 10(5)

    NARCIS (Netherlands)

    Stamhuis, EJ; Videler, JJ; van Duren, LA; Muller, UK

    2002-01-01

    Digital particle image velocimetry (DPIV) has been applied to animal-generated flows since 1993 to map the flow patterns and vortex wakes produced by a range of feeding and swimming aquatic animals, covering a Re range of 10(-2)-10(5). In this paper, the special circumstances, problems and some

  8. Screen media usage, sleep time and academic performance in adolescents: clustering a self-organizing maps analysis.

    Science.gov (United States)

    Peiró-Velert, Carmen; Valencia-Peris, Alexandra; González, Luis M; García-Massó, Xavier; Serra-Añó, Pilar; Devís-Devís, José

    2014-01-01

    Screen media usage, sleep time and socio-demographic features are related to adolescents' academic performance, but interrelations are little explored. This paper describes these interrelations and behavioral profiles clustered in low and high academic performance. A nationally representative sample of 3,095 Spanish adolescents, aged 12 to 18, was surveyed on 15 variables linked to the purpose of the study. A Self-Organizing Maps analysis established non-linear interrelationships among these variables and identified behavior patterns in subsequent cluster analyses. Topological interrelationships established from the 15 emerging maps indicated that boys used more passive videogames and computers for playing than girls, who tended to use mobile phones to communicate with others. Adolescents with the highest academic performance were the youngest. They slept more and spent less time using sedentary screen media when compared to those with the lowest performance, and they also showed topological relationships with higher socioeconomic status adolescents. Cluster 1 grouped boys who spent more than 5.5 hours daily using sedentary screen media. Their academic performance was low and they slept an average of 8 hours daily. Cluster 2 gathered girls with an excellent academic performance, who slept nearly 9 hours per day, and devoted less time daily to sedentary screen media. Academic performance was directly related to sleep time and socioeconomic status, but inversely related to overall sedentary screen media usage. Profiles from the two clusters were strongly differentiated by gender, age, sedentary screen media usage, sleep time and academic achievement. Girls with the highest academic results had a medium socioeconomic status in Cluster 2. Findings may contribute to establishing recommendations about the timing and duration of screen media usage in adolescents and appropriate sleep time needed to successfully meet the demands of school academics and to improve
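    The Self-Organizing Maps analysis mentioned above rests on a simple training rule: find each sample's best-matching unit (BMU) on the grid and pull the BMU and its neighbours towards the sample. A minimal sketch of one training step (illustrative learning rate and neighbourhood radius, not the study's configuration):

```python
import math, random

def train_step(grid, sample, lr=0.5, radius=1.0):
    """One SOM update: grid maps (row, col) -> weight vector (list of floats)."""
    # Best-matching unit: the node whose weights are closest to the sample.
    bmu = min(grid, key=lambda k: sum((w - s) ** 2 for w, s in zip(grid[k], sample)))
    for k, w in grid.items():
        dist2 = (k[0] - bmu[0]) ** 2 + (k[1] - bmu[1]) ** 2
        h = math.exp(-dist2 / (2 * radius ** 2))  # Gaussian neighbourhood function
        grid[k] = [wi + lr * h * (si - wi) for wi, si in zip(w, sample)]
    return bmu

random.seed(1)
grid = {(r, c): [random.random(), random.random()] for r in range(3) for c in range(3)}
bmu = train_step(grid, [1.0, 0.0])
print(bmu)
```

    Repeating this over many samples while shrinking lr and radius makes nearby grid nodes respond to similar input profiles, which is what lets the study read off clusters (e.g. high vs. low academic performance) from the trained map's topology.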

  9. Screen media usage, sleep time and academic performance in adolescents: clustering a self-organizing maps analysis.

    Directory of Open Access Journals (Sweden)

    Carmen Peiró-Velert

    Full Text Available Screen media usage, sleep time and socio-demographic features are related to adolescents' academic performance, but interrelations are little explored. This paper describes these interrelations and behavioral profiles clustered in low and high academic performance. A nationally representative sample of 3,095 Spanish adolescents, aged 12 to 18, was surveyed on 15 variables linked to the purpose of the study. A Self-Organizing Maps analysis established non-linear interrelationships among these variables and identified behavior patterns in subsequent cluster analyses. Topological interrelationships established from the 15 emerging maps indicated that boys used more passive videogames and computers for playing than girls, who tended to use mobile phones to communicate with others. Adolescents with the highest academic performance were the youngest. They slept more and spent less time using sedentary screen media when compared to those with the lowest performance, and they also showed topological relationships with higher socioeconomic status adolescents. Cluster 1 grouped boys who spent more than 5.5 hours daily using sedentary screen media. Their academic performance was low and they slept an average of 8 hours daily. Cluster 2 gathered girls with an excellent academic performance, who slept nearly 9 hours per day, and devoted less time daily to sedentary screen media. Academic performance was directly related to sleep time and socioeconomic status, but inversely related to overall sedentary screen media usage. Profiles from the two clusters were strongly differentiated by gender, age, sedentary screen media usage, sleep time and academic achievement. Girls with the highest academic results had a medium socioeconomic status in Cluster 2. 
Findings may contribute to establishing recommendations about the timing and duration of screen media usage in adolescents and appropriate sleep time needed to successfully meet the demands of school academics and

  10. Overview of the Quality Assurance Program Applied to the Performance Assessment of the Waste Isolation Pilot Plant

    International Nuclear Information System (INIS)

    Pickering, S.Y.

    1999-01-01

The Waste Isolation Pilot Plant (WIPP) is the first deep geologic repository for radioactive waste disposal in the world to be certified by a regulator. Rigorous, nuclear-industry quality assurance (QA) requirements were imposed by the US Environmental Protection Agency. As the Scientific Advisor to the US Department of Energy, Sandia National Laboratories (SNL) applied these standards to the experimental studies and performance assessment used in the certification process. The QA program ensured that activities conducted by SNL were traceable, transparent, reviewed, reproducible, and retrievable. As a result, regulators and stakeholders were able to evaluate and ultimately certify and accept the WIPP

  11. The performance of a surface-applied corrosion inhibitor for the carbon steel in saturated Ca(OH)2 solutions

    International Nuclear Information System (INIS)

    Zheng, Haibing; Li, Weihua; Ma, Fubin; Kong, Qinglin

    2014-01-01

In the present work, the performance of an amino alcohol based surface-applied inhibitor was studied by electrochemical techniques in saturated Ca(OH)2 solutions. The surface morphology of the carbon steel was observed by scanning electron microscopy, and the energy-dispersive spectrum was also measured. Results showed that the inhibitor used in this work demonstrated a clear inhibition efficiency on the carbon steel in saturated Ca(OH)2 solutions. The inhibition mechanism of the inhibitor lies in the quick adsorption of the active component on the carbon steel surface

  12. The impact of noisy and misaligned attenuation maps on human-observer performance at lesion detection in SPECT

    Science.gov (United States)

Wells, R. G.; Gifford, H. C.; Pretorius, P. H.; Farncombe, T. H.; Narayanan, M. V.; King, M. A.

    2002-06-01

We have demonstrated an improvement due to attenuation correction (AC) at the task of lesion detection in thoracic SPECT images. However, increased noise in the transmission data due to aging sources or very large patients, and misregistration of the emission and transmission maps, can reduce the accuracy of the AC and may result in a loss of lesion detectability. We investigated the impact of noise in, and misregistration of, the transmission data on the detection of simulated Ga-67 thoracic lesions. Human-observer localization-receiver-operating-characteristic (LROC) methodology was used to assess performance. Both emission and transmission data were simulated using the MCAT computer phantom. Emission data were reconstructed using OSEM incorporating AC and detector resolution compensation. Clinical noise levels were used in the emission data. The transmission-data noise levels ranged from zero (noise-free) to 32 times the measured clinical levels. Transaxial misregistrations of 0.32, 0.63, and 1.27 cm between emission and transmission data were also examined. Three different algorithms were considered for creating the attenuation maps: filtered backprojection (FBP), unbounded maximum-likelihood (ML), and block-iterative transmission AB (BITAB). Results indicate that a 16-fold increase in the noise was required to eliminate the benefit afforded by AC when using FBP or ML to reconstruct the attenuation maps. When using BITAB, no significant loss in performance was observed for a 32-fold increase in noise. Misregistration errors are also a concern, as even small errors here reduce the performance gains of AC.

  13. Diagnostic performance of qualitative shear-wave elastography according to different color map opacities for breast masses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hana; Youk, Ji Hyun, E-mail: jhyouk@yuhs.ac; Gweon, Hye Mi; Kim, Jeong-Ah; Son, Eun Ju

    2013-08-15

Purpose: To compare the diagnostic performance of qualitative shear-wave elastography (SWE) according to three different color map opacities for breast masses. Materials and methods: 101 patients aged 21–77 years with 113 breast masses underwent B-mode US and SWE under three different color map opacities (50%, 19% and 100%) before biopsy or surgery. The following SWE features were reviewed: visual pattern classification (pattern 1–4), color homogeneity (E{sub homo}) and six-point color score of maximum elasticity (E{sub col}). Combining B-mode US and SWE, the likelihood of malignancy (LOM) was also scored. The area under the curve (AUC) was obtained by ROC curve analysis to assess the diagnostic performance under each color opacity. Results: Visual color pattern, E{sub homo}, E{sub col} and LOM scoring were significantly different between benign and malignant lesions under all color opacities (P < 0.001). For 50% opacity, the AUCs of visual color pattern, E{sub col}, E{sub homo} and LOM scoring were 0.902, 0.951, 0.835 and 0.975, respectively. However, for each SWE feature, there was no significant difference in the AUC among the three color opacities. For all color opacities, visual color pattern and E{sub col} showed significantly higher AUCs than E{sub homo}. In addition, a combined set of B-mode US and SWE showed significantly higher AUCs than SWE alone for color pattern and E{sub homo}, but no significant difference was found for E{sub col}. Conclusion: Qualitative SWE was useful to differentiate benign from malignant breast lesions under all color opacities. The difference in color map opacity did not significantly influence the diagnostic performance of SWE.
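The AUC reported in studies like this one is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen malignant mass receives a higher score than a randomly chosen benign one, with ties counted as one half. A minimal sketch of that computation (the six-point color scores below are hypothetical, not the study's data):

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC = P(score of a random positive case > score of a random negative case),
    with ties counted as 1/2 (Mann-Whitney U divided by n_pos * n_neg)."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical six-point color scores for benign and malignant masses
benign    = [1, 1, 2, 2, 3, 3, 4]
malignant = [3, 4, 4, 5, 5, 6, 6]
print(round(auc_mann_whitney(malignant, benign), 3))  # → 0.939
```

An AUC of 0.5 would mean the score carries no diagnostic information; values near 1, as reported for E{sub col} here, indicate near-perfect ranking of malignant above benign cases.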

  14. Diagnostic performance of qualitative shear-wave elastography according to different color map opacities for breast masses

    International Nuclear Information System (INIS)

    Kim, Hana; Youk, Ji Hyun; Gweon, Hye Mi; Kim, Jeong-Ah; Son, Eun Ju

    2013-01-01

Purpose: To compare the diagnostic performance of qualitative shear-wave elastography (SWE) according to three different color map opacities for breast masses. Materials and methods: 101 patients aged 21–77 years with 113 breast masses underwent B-mode US and SWE under three different color map opacities (50%, 19% and 100%) before biopsy or surgery. The following SWE features were reviewed: visual pattern classification (pattern 1–4), color homogeneity (Ehomo) and six-point color score of maximum elasticity (Ecol). Combining B-mode US and SWE, the likelihood of malignancy (LOM) was also scored. The area under the curve (AUC) was obtained by ROC curve analysis to assess the diagnostic performance under each color opacity. Results: Visual color pattern, Ehomo, Ecol and LOM scoring were significantly different between benign and malignant lesions under all color opacities (P < 0.001). For 50% opacity, the AUCs of visual color pattern, Ecol, Ehomo and LOM scoring were 0.902, 0.951, 0.835 and 0.975, respectively. However, for each SWE feature, there was no significant difference in the AUC among the three color opacities. For all color opacities, visual color pattern and Ecol showed significantly higher AUCs than Ehomo. In addition, a combined set of B-mode US and SWE showed significantly higher AUCs than SWE alone for color pattern and Ehomo, but no significant difference was found for Ecol. Conclusion: Qualitative SWE was useful to differentiate benign from malignant breast lesions under all color opacities. The difference in color map opacity did not significantly influence the diagnostic performance of SWE

  15. Adjustable ETHD lubrication applied to the improvement of dynamic performance of flexible rotors supported by active TPJB

    DEFF Research Database (Denmark)

    Salazar, Jorge Andrés González; Cerda Varela, Alejandro Javier; Santos, Ilmar

    2013-01-01

This paper reports the dynamic study of a flexible rotor-bearing test rig which resembles a large overhung centrifugal compressor. The rotor is supported by an active tilting pad journal bearing (TPJB) able to operate under an adjustable lubrication regime. Such a regime is obtained by injecting...... pressurized oil directly into the bearing clearance through a nozzle placed in a radial bore at the middle of the pad and connected to a high pressure supply unit by servovalves. The theoretical model is based on a finite element model, where the active TPJB with adjustable lubrication is included using...... and the experimental results are obtained. The improvements are obtained when the system response amplitudes in a bounded speed range are reduced by applying the adjustable lubrication. Results are in agreement with the established fact that a significant improvement of the rotor-bearing system dynamic performance can...

  16. Applying the min-projection strategy to improve the transient performance of the three-phase grid-connected inverter.

    Science.gov (United States)

    Baygi, Mahdi Oloumi; Ghazi, Reza; Monfared, Mohammad

    2014-07-01

Applying the min-projection strategy (MPS) to a three-phase grid-connected inverter to improve its transient performance is the main objective of this paper. For this purpose, the inverter is first modeled as a switched linear system. Then, the feasibility of the MPS technique is investigated and the stability criterion is derived. Hereafter, the fundamental equations of the MPS for the control of the inverter are obtained. The proposed scheme is simulated in the PSCAD/EMTDC environment. The validity of the MPS approach is confirmed by comparing the obtained results with those of the voltage-oriented control (VOC) method. The results demonstrate that the proposed method, despite its simplicity, provides an excellent transient performance, fully decoupled control of active and reactive powers, an acceptable THD level and a reasonable switching frequency. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
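For intuition, the min-projection idea can be sketched on a toy switched linear system (an illustrative construction, not the paper's inverter model): given a Lyapunov candidate V(x) = x'Px, the active mode at each instant is the one minimizing x'PA_i x, so V decreases as fast as the available modes allow. In the example below each mode is unstable on its own, yet the switching law stabilizes the system.

```python
import numpy as np

# Toy switched linear system (illustrative, not the paper's inverter model).
# Each mode alone is unstable (one eigenvalue at +0.9), but their average
# is -0.1*I, so the min-projection law sigma(x) = argmin_i x'P A_i x
# (with P = I) keeps V(x) = x'x strictly decreasing along trajectories.
A = [np.array([[-1.1, 0.0], [0.0, 0.9]]),
     np.array([[ 0.9, 0.0], [0.0, -1.1]])]

def min_projection_mode(x):
    # activate the subsystem that decreases the Lyapunov function fastest
    return min(range(len(A)), key=lambda i: x @ A[i] @ x)

x = np.array([1.0, 1.0])
dt = 0.01
for _ in range(3000):              # explicit-Euler simulation of 30 s
    x = x + dt * (A[min_projection_mode(x)] @ x)

print(np.linalg.norm(x))           # the switching law drives x toward 0
```

The chattering between the two modes plays the role that fast switching of the inverter legs plays in the real system; the stability criterion mentioned in the abstract amounts to requiring that min_i x'PA_i x be negative for all nonzero x.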

  17. Participatory Maps

    DEFF Research Database (Denmark)

    Salovaara-Moring, Inka

    2016-01-01

    practice. In particular, mapping environmental damage, endangered species, and human-made disasters has become one focal point for environmental knowledge production. This type of digital map has been highlighted as a processual turn in critical cartography, whereas in related computational journalism...... of a geo-visualization within information mapping that enhances embodiment in the experience of the information. InfoAmazonia is defined as a digitally created map-space within which journalistic practice can be seen as dynamic, performative interactions between journalists, ecosystems, space, and species...

  18. Performance of engineering undergraduate students in mathematics: A case study in UniMAP

    Science.gov (United States)

    Saad, Syafawati Ab.; Azziz, Nor Hizamiyani Abdul; Zakaria, Siti Aisyah; Yazid, Nornadia Mohd

    2015-12-01

The purpose of this paper is to study the performance trend of first-year engineering students at a public university in the mathematics course Engineering Mathematics I. We analyze how ethnicity influenced students' performance in the mathematics course over a three-year period. The performance of the undergraduate students in this study is measured by their cumulative grade point average (CGPA) in the first semester. Analysis of variance (ANOVA) is used to test for significant differences among the three groups (Malay, Chinese and Indian). Simple linear regression (SLR) is used to model the relationship between the performances and to predict future performance in this course. The findings of the study show that Chinese students perform better than Malay and Indian students.
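The two methods named in the abstract, one-way ANOVA across groups and simple linear regression for the trend, can be sketched with NumPy. The CGPA figures below are invented for illustration and are not the UniMAP data:

```python
import numpy as np

def one_way_anova_F(*groups):
    """One-way ANOVA: F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, all_x.size - k
    return (ss_between / df_b) / (ss_within / df_w), (df_b, df_w)

# Hypothetical first-semester CGPA samples for the three groups
malay   = np.array([2.8, 3.0, 2.9, 3.1, 2.7])
chinese = np.array([3.4, 3.6, 3.5, 3.3, 3.7])
indian  = np.array([2.9, 3.1, 3.0, 2.8, 3.2])
F, (df_b, df_w) = one_way_anova_F(malay, chinese, indian)
print(F, df_b, df_w)        # a large F suggests the group means differ

# Simple linear regression of mean course score on semester index
semesters = np.array([1, 2, 3])
means = np.array([3.05, 3.10, 3.18])
slope, intercept = np.polyfit(semesters, means, 1)
print(round(slope, 3))      # → 0.065, i.e. a rising trend per semester
```

The computed F would then be compared against the F(df_b, df_w) critical value at the chosen significance level, exactly as the paper's ANOVA does.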

  19. Mapping Disciplinary Values and Rhetorical Concerns through Language: Writing Instruction in the Performing and Visual Arts

    Science.gov (United States)

    Cox, Anicca

    2015-01-01

    Via interview data focused on instructor practices and values, this study sought to describe some of what performing and visual arts instructors do at the university level to effectively teach disciplinary values through writing. The study's research goals explored how relationships to writing process in visual and performing arts support…

  20. Comparison of graduate-entry and direct school leaver student performance on an applied dental knowledge test.

    Science.gov (United States)

    Ali, K; Zahra, D; Tredwin, C

    2017-11-01

To compare the academic performance of graduate-entry and direct school leaver students in an undergraduate dental programme. This study examined the results of students in applied dental knowledge (ADK) progress tests conducted during two academic years. A mixed model analysis of variance (ANOVA) was conducted to compare the performance of graduate-entry and direct school leavers. ADK was treated as a repeated-measures variable, and the outcome variable of interest was the percentage score on the ADK. The results show statistically significant main effects for ADK [F(1,113) = 61.58, P < 0.001, ηp² = 0.35], Cohort [F(1,113) = 88.57, P < 0.001, ηp² = 0.44] and Entry [F(1,113) = 11.31, P = 0.001, ηp² = 0.09]. That is, students do better on each subsequent test (main effect of ADK), students in later years of the programme perform better than those in earlier years (main effect of cohort), and graduate-entry students outperform direct school leavers. This is the first study to explore the differences in the academic performance of graduate-entry and direct school leavers in an undergraduate dental programme. The results show that the academic performance of graduate students was better than that of the direct school leavers in years 2 and 3. Further research is required to compare the performance of students longitudinally across the entire duration of undergraduate dental programmes and to evaluate whether this difference persists throughout. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
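The reported partial eta squared effect sizes can be recovered directly from each F statistic and its degrees of freedom via ηp² = F·df1 / (F·df1 + df2), as a quick check of the abstract's figures:

```python
# Partial eta squared from an F statistic: eta_p^2 = F*df1 / (F*df1 + df2).
# Plugging in the three reported F(1,113) effects reproduces the
# abstract's values of 0.35, 0.44 and 0.09.
def partial_eta_sq(F, df1, df2):
    return F * df1 / (F * df1 + df2)

for name, F in [("ADK", 61.58), ("Cohort", 88.57), ("Entry", 11.31)]:
    print(name, round(partial_eta_sq(F, 1, 113), 2))
```

This identity holds for any F-test, which makes it a convenient sanity check when an abstract reports both statistics.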

  1. The effect of using digital mind mapping on cognitive achievement and performance level of some basic skills in handball

    Directory of Open Access Journals (Sweden)

    Khaled Thabet Awad

    2016-07-01

Full Text Available This study aims to identify the effect of using digital mind maps on cognitive achievement and the performance level of some basic skills in handball. The research population comprised the 200 first-year students at the Faculty of Physical Education in Port Said. The researchers randomly selected the sample from the first-year students. The total sample size reached 180 students (90.00%) after excluding failed students, re-registered students, students at other curriculum levels, students with previous experience and irregular students; the excluded group numbered 20 students (10.00%). The sample was divided as follows. Basic sample: 80 students (44.44%), divided into two equal groups of 40 students. First exploratory sample: 60 students (33.33%) from the same research population, outside the basic sample, used to establish test validity. Second exploratory sample: 40 students (22.22%) from the same research population, outside the basic sample, used to establish test reliability and the appropriateness of the pilot program for the sample under discussion. First-year students were selected according to the study plan, which contains a handball curriculum for students at this educational level. Statistical treatments: the researchers processed the data with the Statistical Package for the Social Sciences (SPSS ver. 20.0) to obtain the arithmetic mean, standard deviation, median, skewness coefficient, correlation coefficient, discriminant validity coefficient, and "t" tests for one group and for two groups. The use of mind maps had a positive effect, better than the explanation-and-model method, on cognitive achievement and the performance level of some basic skills in handball. 
Active learning techniques, such as the method of digital mind maps in teaching

  2. Use of Self-Organizing Maps for Balanced Scorecard analysis to monitor the performance of dialysis clinic chains.

    Science.gov (United States)

    Cattinelli, Isabella; Bolzoni, Elena; Barbieri, Carlo; Mari, Flavio; Martin-Guerrero, José David; Soria-Olivas, Emilio; Martinez-Martinez, José Maria; Gomez-Sanchis, Juan; Amato, Claudia; Stopper, Andrea; Gatti, Emanuele

    2012-03-01

The Balanced Scorecard (BSC) is a validated tool to monitor enterprise performance against specific objectives. Through the choice and evaluation of strategic Key Performance Indicators (KPIs), it provides a measure of the company's past outcomes and allows planning future managerial strategies. The Fresenius Medical Care (FME) BSC makes use of 30 KPIs for a continuous quality improvement strategy within its dialysis clinics. Each KPI is associated monthly with a score that summarizes the clinic's efficiency for that month. Standard statistical methods are currently used to analyze the BSC data and to give a comprehensive view of the corporate improvements to the top management. We herein propose Self-Organizing Maps (SOMs) as an innovative approach to extract information from the FME BSC data and to present it in an easily readable, informative form. A SOM is a computational technique that projects high-dimensional datasets onto a two-dimensional space (map), thus providing a compressed representation. The SOM unsupervised (self-organizing) training procedure results in a map that preserves the similarity relations existing in the original dataset; in this way, the information contained in the high-dimensional space can be more easily visualized and understood. The present work demonstrates the effectiveness of the SOM approach in extracting useful information from the 30-dimensional BSC dataset: SOMs both highlighted expected relationships between the KPIs and uncovered results not predictable with traditional analyses. Hence we suggest SOMs as a reliable complementary approach to the standard methods for BSC interpretation.
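A SOM of the kind described can be sketched from scratch in a few dozen lines of NumPy. The 30-dimensional "KPI" vectors below are synthetic, and the grid size and decay schedule are illustrative choices, not those of the FME analysis:

```python
import numpy as np

def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map: fit a 2-D grid of weight vectors to the
    d-dimensional rows of `data`, preserving neighbourhood similarity."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # grid coordinates of every unit, used by the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit = unit whose weight vector is closest to x
        bmu = np.unravel_index(((weights - x) ** 2).sum(axis=-1).argmin(), (h, w))
        # learning rate and neighbourhood radius shrink over time
        frac = t / iters
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        g = np.exp(-((coords - bmu) ** 2).sum(axis=-1) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

def project(weights, x):
    """Map one sample to its best-matching grid cell."""
    return np.unravel_index(((weights - x) ** 2).sum(axis=-1).argmin(),
                            weights.shape[:2])

# Synthetic 30-dimensional KPI scores for two kinds of clinics
rng = np.random.default_rng(1)
good = rng.normal(0.8, 0.05, (50, 30))
poor = rng.normal(0.2, 0.05, (50, 30))
som = train_som(np.vstack([good, poor]))
# well- and poorly-performing clinics land in different map regions
print(project(som, good[0]), project(som, poor[0]))
```

Counting how many samples land in each cell, and coloring cells by a KPI of interest, gives the easy-to-read maps the authors describe.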

  3. An airborne interferometric SAR system for high-performance 3D mapping

    Science.gov (United States)

    Lange, Martin; Gill, Paul

    2009-05-01

With a vertical accuracy better than 1 m and collection rates up to 7000 km2/h, airborne interferometric synthetic aperture radars (InSAR) bridge the gap between spaceborne radar sensors and airborne optical LIDARs. This paper presents the latest generation of X-band InSAR sensors, developed by Intermap Technologies™, which are operated on our four aircraft. The sensors collect data for the NEXTMap® program - a digital elevation model (DEM) with 1 m vertical accuracy for the contiguous U.S., Hawaii, and most of Western Europe. For a successful operation, challenges like the reduction of multipath reflections, very high interferometric phase stability, and precise system calibration had to be mastered. Recent advances in sensor design and comprehensive system automation and diagnostics have increased the sensor reliability to a level where no radar operator is required onboard. Advanced flight planning significantly improved aircraft utilization and acquisition throughput, while reducing operational costs. Highly efficient data acquisition with straight flight lines up to 1200 km long is meanwhile daily routine. The collected data pass through our automated processing cluster and finally are edited into our terrain model products. Extensive and rigorous quality control at every step of the workflow is key to maintaining stable vertical accuracies of 1 m and horizontal accuracies of 2 m for our 3D maps. The combination of technical and operational advances presented in this paper enabled Intermap to survey two continents, producing 11 million km2 of uniform and accurate 3D terrain data.

  4. T1 mapping cardiovascular magnetic resonance imaging to detect myocarditis—Impact of slice orientation on the diagnostic performance

    Energy Technology Data Exchange (ETDEWEB)

    Bohnen, Sebastian, E-mail: s.bohnen@uke.de [University Medical Center Hamburg-Eppendorf, University Heart Center, General and Interventional Cardiology, Hamburg (Germany); Radunski, Ulf K., E-mail: u.radunski@uke.de [University Medical Center Hamburg-Eppendorf, University Heart Center, General and Interventional Cardiology, Hamburg (Germany); Lund, Gunnar K., E-mail: glund@uke.de [University Medical Center Hamburg-Eppendorf, Department of Diagnostic and Interventional Radiology, Hamburg (Germany); Tahir, Enver, E-mail: e.tahir@uke.de [University Medical Center Hamburg-Eppendorf, Department of Diagnostic and Interventional Radiology, Hamburg (Germany); Avanesov, Maxim, E-mail: m.avanesov@uke.de [University Medical Center Hamburg-Eppendorf, Department of Diagnostic and Interventional Radiology, Hamburg (Germany); Stehning, Christian, E-mail: christian.stehning@philips.com [Philips Research, Hamburg (Germany); Schnackenburg, Bernhard, E-mail: bernhard.schnackenburg@philips.com [Philips Healthcare Germany, Hamburg (Germany); Adam, Gerhard, E-mail: g.adam@uke.de [University Medical Center Hamburg-Eppendorf, Department of Diagnostic and Interventional Radiology, Hamburg (Germany); Blankenberg, Stefan, E-mail: s.blankenberg@uke.de [University Medical Center Hamburg-Eppendorf, University Heart Center, General and Interventional Cardiology, Hamburg (Germany); Muellerleile, Kai, E-mail: kamuellerleile@uke.de [University Medical Center Hamburg-Eppendorf, University Heart Center, General and Interventional Cardiology, Hamburg (Germany)

    2017-01-15

Background: T1 mapping is a promising diagnostic tool to improve the diagnostic accuracy of cardiovascular magnetic resonance (CMR) in patients with suspected myocarditis. However, there are currently no data on the potential influence of slice orientation on the diagnostic performance of CMR. Thus, we compared the diagnostic performance of global myocardial T1 and extracellular volume (ECV) values to differentiate patients with myocarditis from healthy individuals between different slice orientations. Methods: This study included 48 patients with clinically defined myocarditis and 13 healthy controls who underwent CMR at 1.5 T. A modified Look-Locker inversion-recovery (MOLLI) sequence was used for T1 mapping before and 15 min after administration of 0.075 mmol/kg Gadolinium-BOPTA. T1 mapping was performed on three short-axis and three long-axis slices. Native T1, post-contrast T1 and extracellular volume (ECV-BOPTA) maps were calculated using a dedicated plug-in written for the OsiriX software and compared between the mean value of three short-axis slices (3SAX), the central short-axis (1SAX), the mean value of three long-axis slices (3LAX), the four-chamber view (4CH), the three-chamber view (3CH) and the two-chamber view (2CH). Results: There were significantly lower native T1 values on 3LAX (1081 ms (1037–1131 ms)) compared to 3SAX (1107 ms (1069–1143 ms), p = 0.0022) in patients with myocarditis, but not in controls (1026 ms (1009–1059 ms) vs. 1039 ms (1023–1055 ms), p = 0.2719). The areas under the curve (AUC) to discriminate between myocarditis and healthy controls by native myocardial T1 were 0.85 (p < 0.0001) on 3SAX, 0.85 (p < 0.0001) on 1SAX, 0.76 (p = 0.0002) on 3LAX, 0.70 (p = 0.0075) on 4CH, 0.72 (p = 0.0020) on 3CH and 0.75 (p = 0.0003) on 2CH. The AUCs for ECV-BOPTA were 0.83 (p < 0.0001) on 3SAX, 0.82 (p < 0.0001) on 1SAX, 0.77 (p = 0.0005) on 3LAX, 0.71 (p = 0.0079) on 4CH, 0.69 (p = 0.0371) on 3CH and 0.75 (p = 0.0006) on

  5. T1 mapping cardiovascular magnetic resonance imaging to detect myocarditis—Impact of slice orientation on the diagnostic performance

    International Nuclear Information System (INIS)

    Bohnen, Sebastian; Radunski, Ulf K.; Lund, Gunnar K.; Tahir, Enver; Avanesov, Maxim; Stehning, Christian; Schnackenburg, Bernhard; Adam, Gerhard; Blankenberg, Stefan; Muellerleile, Kai

    2017-01-01

Background: T1 mapping is a promising diagnostic tool to improve the diagnostic accuracy of cardiovascular magnetic resonance (CMR) in patients with suspected myocarditis. However, there are currently no data on the potential influence of slice orientation on the diagnostic performance of CMR. Thus, we compared the diagnostic performance of global myocardial T1 and extracellular volume (ECV) values to differentiate patients with myocarditis from healthy individuals between different slice orientations. Methods: This study included 48 patients with clinically defined myocarditis and 13 healthy controls who underwent CMR at 1.5 T. A modified Look-Locker inversion-recovery (MOLLI) sequence was used for T1 mapping before and 15 min after administration of 0.075 mmol/kg Gadolinium-BOPTA. T1 mapping was performed on three short-axis and three long-axis slices. Native T1, post-contrast T1 and extracellular volume (ECV-BOPTA) maps were calculated using a dedicated plug-in written for the OsiriX software and compared between the mean value of three short-axis slices (3SAX), the central short-axis (1SAX), the mean value of three long-axis slices (3LAX), the four-chamber view (4CH), the three-chamber view (3CH) and the two-chamber view (2CH). Results: There were significantly lower native T1 values on 3LAX (1081 ms (1037–1131 ms)) compared to 3SAX (1107 ms (1069–1143 ms), p = 0.0022) in patients with myocarditis, but not in controls (1026 ms (1009–1059 ms) vs. 1039 ms (1023–1055 ms), p = 0.2719). The areas under the curve (AUC) to discriminate between myocarditis and healthy controls by native myocardial T1 were 0.85 (p < 0.0001) on 3SAX, 0.85 (p < 0.0001) on 1SAX, 0.76 (p = 0.0002) on 3LAX, 0.70 (p = 0.0075) on 4CH, 0.72 (p = 0.0020) on 3CH and 0.75 (p = 0.0003) on 2CH. The AUCs for ECV-BOPTA were 0.83 (p < 0.0001) on 3SAX, 0.82 (p < 0.0001) on 1SAX, 0.77 (p = 0.0005) on 3LAX, 0.71 (p = 0.0079) on 4CH, 0.69 (p = 0.0371) on 3CH and 0.75 (p = 0.0006) on

  6. A comparison of multicopter and fixed-wing unmanned aerial systems (UAS) applied to mapping debris flows in small alpine catchments

    Science.gov (United States)

    Sotier, Bernadette; Lechner, Veronika

    2016-04-01

multicopter deployment. This plays an important role especially for the monitoring of events where the access roads were destroyed or non-existent. On the other hand, the fixed-wing UAS requires more space for starting and landing. Both campaigns were performed over a full day; therefore, the lighting conditions changed from flight to flight, affecting the quality of the recorded images. Although the Sony ILCE-7R offers much higher image quality and sensor resolution, the image-processing results for the Seigesbach and the Plojergraben are comparable in terms of processing time, GSD and accuracy for this application. One important difference between the campaigns, for example, is that in the Plojergraben the torrent is partly hidden by bank-side trees and many trees are lying in the riverbed, which causes large errors in the calculated volumes. From our experience, external conditions like lighting, visibility and accessibility are the determining factors for getting high-quality results in alpine environments, and good results are possible with low-cost equipment. Notwithstanding the operational constraints, the choice of platform is therefore of secondary importance for debris flow volume mapping.

  7. Performance of computer-aided detection applied to full-field digital mammography in detection of breast cancers

    International Nuclear Information System (INIS)

    Sadaf, Arifa; Crystal, Pavel; Scaranelo, Anabel; Helbich, Thomas

    2011-01-01

Objective: The aim of this retrospective study was to evaluate the performance of computer-aided detection (CAD) with full-field digital mammography (FFDM) in the detection of breast cancers. Materials and Methods: CAD was retrospectively applied to standard mammographic views of 127 cases with biopsy-proven breast cancers detected with FFDM (Senographe 2000, GE Medical Systems). CAD sensitivity was assessed in the total group of 127 cases and for subgroups based on breast density, mammographic lesion type, mammographic lesion size, histopathology and mode of presentation. Results: Overall CAD sensitivity was 91% (115 of 127 cases). There were no statistical differences (p > 0.1) in CAD detection of cancers in dense breasts, 90% (53/59), versus non-dense breasts, 91% (62/68). There was a statistically significant difference in sensitivity by lesion size, reaching 97% (22/23) for lesions > 20 mm. Conclusion: CAD applied to FFDM showed 100% sensitivity in identifying cancers manifesting as microcalcifications only and a high sensitivity of 86% (71/83) for other mammographic appearances of cancer. Sensitivity is influenced by lesion size. CAD in FFDM is an adjunct helping radiologists in the early detection of breast cancers.

  8. Evaluating performance of multivariable vibration isolators : a frequency domain identification approach applied to an industrial AVIS : A frequency domain identification approach applied to an industrial AVIS

    NARCIS (Netherlands)

    Beijen, M.A.; Heertjes, M.A.; Voorhoeve, R.J.; Oomen, T.A.E.

    2017-01-01

    Vibration isolation is essential for industrial high-precision systems in suppressing the influence of external disturbances. The aim of this paper is to develop an identification method to estimate the transmissibility matrix for such systems. The transmissibility matrix is a key performance
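The abstract is truncated, but the core idea of frequency-domain transmissibility estimation can be sketched for the scalar case: divide the segment-averaged cross-spectrum between base motion and payload response by the averaged auto-spectrum of the base motion (the classical H1 estimator). The signals and the 2-tap filter below are synthetic stand-ins, not the AVIS data or the paper's multivariable method:

```python
import numpy as np

def h1_estimate(x, y, nseg=64, seglen=256):
    """H1 frequency-response estimate T(f) = S_yx(f) / S_xx(f),
    averaged over windowed segments to suppress measurement noise."""
    Sxx = np.zeros(seglen // 2 + 1)
    Syx = np.zeros(seglen // 2 + 1, dtype=complex)
    win = np.hanning(seglen)
    for k in range(nseg):
        xs = x[k * seglen:(k + 1) * seglen] * win
        ys = y[k * seglen:(k + 1) * seglen] * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Sxx += (X * np.conj(X)).real
        Syx += Y * np.conj(X)
    return Syx / Sxx

# Synthetic "floor vibration" x and "payload response" y:
# y is x passed through a 2-sample moving average (a known FIR filter)
rng = np.random.default_rng(0)
x = rng.standard_normal(64 * 256)
y = 0.5 * (x + np.concatenate(([0.0], x[:-1])))   # y[n] = (x[n] + x[n-1]) / 2
T = h1_estimate(x, y)
# the moving average has near-unit gain at the lowest nonzero frequency bin
print(abs(T[1]))
```

For a true multivariable system, the scalar division generalizes to multiplying the cross-spectral matrix by the inverse of the input auto-spectral matrix, which requires the inputs to be sufficiently uncorrelated.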

  9. Mapping the Social Side of Pre-Service Teachers: Connecting Closeness, Trust, and Efficacy with Performance

    Science.gov (United States)

    Liou, Yi-Hwa; Daly, Alan J.; Canrinus, Esther T.; Forbes, Cheryl A.; Moolenaar, Nienke M.; Cornelissen, Frank; Van Lare, Michelle; Hsiao, Joyce

    2017-01-01

    This exploratory study foregrounds the important, but often understudied social side of pre-service teacher development and its relation to teaching performance in one university-based teacher preparation program in the US. We examine the extent to which pre-service elementary teachers' social relationships and perceptions of peer trust and…

  10. Mapping the social side of pre-service teachers : connecting closeness, trust, and efficacy with performance

    NARCIS (Netherlands)

    Liou, Yi Hwa; Daly, Alan J.; Canrinus, Esther T.; Forbes, Cheryl A.; Moolenaar, Nienke M.|info:eu-repo/dai/nl/304352802; Cornelissen, Frank; Van Lare, Michelle; Hsiao, Joyce

    2017-01-01

    This exploratory study foregrounds the important, but often understudied social side of pre-service teacher development and its relation to teaching performance in one university-based teacher preparation program in the US. We examine the extent to which pre-service elementary teachers’ social

  11. Functional MRI mapping of visual function and selective attention for performance assessment and presurgical planning using conjunctive visual search.

    Science.gov (United States)

    Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil

    2014-03-01

    Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI.

  12. Performance metrics for state-of-the-art airborne magnetic and electromagnetic systems for mapping and detection of unexploded ordnance

    Science.gov (United States)

    Doll, William E.; Bell, David T.; Gamey, T. Jeffrey; Beard, Les P.; Sheehan, Jacob R.; Norton, Jeannemarie

    2010-04-01

    Over the past decade, notable progress has been made in the performance of airborne geophysical systems for mapping and detection of unexploded ordnance in terrestrial and shallow marine environments. For magnetometer systems, the most significant improvements include development of denser magnetometer arrays and vertical gradiometer configurations. In prototype analyses and recent Environmental Security Technology Certification Program (ESTCP) assessments using new production systems the greatest sensitivity has been achieved with a vertical gradiometer configuration, despite model-based survey design results which suggest that dense total-field arrays would be superior. As effective as magnetometer systems have proven to be at many sites, they are inadequate at sites where basalts and other ferrous geologic formations or soils produce anomalies that approach or exceed those of target ordnance items. Additionally, magnetometer systems are ineffective where detection of non-ferrous ordnance items is of primary concern. Recent completion of the Battelle TEM-8 airborne time-domain electromagnetic system represents the culmination of nearly nine years of assessment and development of airborne electromagnetic systems for UXO mapping and detection. A recent ESTCP demonstration of this system in New Mexico showed that it was able to detect 99% of blind-seeded ordnance items, 81mm and larger, and that it could be used to map in detail a bombing target on a basalt flow where previous airborne magnetometer surveys had failed. The probability of detection for the TEM-8 in the blind-seeded study area was better than that reported for a dense-array total-field magnetometer demonstration of the same blind-seeded site, and the TEM-8 system successfully detected these items with less than half as many anomaly picks as the dense-array total-field magnetometer system.
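
    The vertical gradiometer configuration discussed above can be illustrated with a minimal sketch: differencing two stacked total-field sensors suppresses broad geologic fields relative to compact near-surface anomalies. The sensor values and separation below are hypothetical:

    ```python
    def vertical_gradient(b_upper, b_lower, separation_m):
        """Vertical magnetic gradient (nT/m) from two stacked total-field
        magnetometer readings (nT); a deep regional field contributes nearly
        equally to both sensors and largely cancels in the difference."""
        return [(u - l) / separation_m for u, l in zip(b_upper, b_lower)]

    grad = vertical_gradient([50001.0], [50000.0], 2.0)  # single sample pair
    ```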

  13. Expected Performance of Ozone Climate Data Records from Ozone Mapping and Profiler Suite Limb Profiler

    Science.gov (United States)

    Xu, P. Q.; Rault, D. F.; Pawson, S.; Wargan, K.; Bhartia, P. K.

    2012-01-01

    The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) was launched on board the Suomi NPP space platform in late October 2011. It provides ozone-profiling capability with high vertical resolution from 60 km down to cloud top. In this study, an end-to-end Observing System Simulation Experiment (OSSE) of OMPS/LP ozone is discussed. The OSSE was developed at NASA's Global Modeling and Assimilation Office (GMAO) using the Goddard Earth Observing System (GEOS-5) data assimilation system. The "truth" for this OSSE is built by assimilating MLS profiles and OMI ozone columns, which is known to produce realistic three-dimensional ozone fields in the stratosphere and upper troposphere. OMPS/LP radiances were computed at tangent points given by an appropriate orbital model. The OMPS/LP forward RT model, Instrument Models (IMs), and EDR retrieval model were introduced and pseudo-observations derived. The resultant synthetic OMPS/LP observations were evaluated against the "truth" and subsequently assimilated into GEOS-5. Comparison of this assimilated dataset with the "truth" enables assessment of the likely uncertainties in 3-D analyses of OMPS/LP data. This study demonstrated the assimilation capabilities of OMPS/LP ozone in GEOS-5, with the monthly zonal mean (O-A) smaller than 0.02 ppmv at all levels, the rms(O-A) close to 0.1 ppmv from 100 hPa to 0.2 hPa, and the mean (O-B) around 0.02 ppmv at all levels. The monthly zonal mean analysis generally agrees to within 2% of the truth, with larger differences of 2-4% (0.1-0.2 ppmv) around 10 hPa close to the North Pole and in the tropical tropopause region, where the difference is above 20% due to the very low ozone concentrations. These OSSEs demonstrated that, within a single data assimilation system and under the assumption that assimilated MLS observations provide a true rendition of the stratosphere, the OMPS/LP ozone data are likely to produce accurate analyses through much of the stratosphere
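
    The (O-A) and rms(O-A) figures quoted above are standard innovation diagnostics: statistics of observation-minus-analysis differences over a level or zonal band. A minimal sketch with illustrative inputs:

    ```python
    import math

    def innovation_stats(obs, analysis):
        """Mean and RMS of observation-minus-analysis (O-A) differences;
        the same code applies to (O-B) with background values instead."""
        d = [o - a for o, a in zip(obs, analysis)]
        mean = sum(d) / len(d)
        rms = math.sqrt(sum(x * x for x in d) / len(d))
        return mean, rms

    # toy ozone values (ppmv) at three points of one level
    m, r = innovation_stats([1.0, 1.1, 0.9], [1.0, 1.0, 1.0])
    ```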

  14. A Mapping Method Applied to the IT Solutions Procurement Guide of IN/SLTI/ MPOG 04/2014 Normative Instruction and Models Constellation CMMI-ACQ, CMMI-DEV and CMMI-SVC

    Directory of Open Access Journals (Sweden)

    PLÁCIDO DA SILVA, L. S.

    2016-12-01

    Full Text Available Several initiatives have emerged in recent years in the search for improvement of software processes. These initiatives are usually guided by standards, models, and quality norms, aiming to establish best practices to guide the definition of processes and to support the assessment of the maturity and capability of organizations in developing software products and providing IT services. Despite these initiatives, when the topic is the contracting of information technology (IT) solutions by the Brazilian Federal Public Administration (APF), their application in organizations faces obstacles, such as the complexity of the processes and the oversight of federal government agencies. To overcome these obstacles, the Court of Audit of the Union (TCU) recommended the preparation of the SLTI/MPOG 04/2014 Normative Instruction, containing guidelines for the process of contracting IT solutions. This work defines a Mapping Method between IN/SLTI/MPOG 04/2014, as realized in the APF IT Solutions Procurement Guide (GCSTI), and the CMMI-ACQ, CMMI-DEV, and CMMI-SVC models, with the objective of identifying the maturity and adherence of the GCSTI to those models and of offering a systematized, structured methodology for applying and mapping models, norms, and standards of any nature. As a result of this research, the mapping method created allowed the mapping between the CMMI models and the GCSTI processes of IN/SLTI/MPOG 04/2014, and the method can be applied to any other mapping in which the processes are oriented to the same structure and have similar objectives.

  15. Polymorphism screening and mapping of nine meat performance-related genes in the pig

    Czech Academy of Sciences Publication Activity Database

    Horák, Pavel; Stratil, Antonín; Svatoňová, Martina; Maštálková, Lucie; Patáková, Jitka; Van Poucke, M.; Bartenschlager, H.; Peelman, L. J.; Geldermann, H.

    2010-01-01

    Roč. 41, č. 3 (2010), s. 334-335 ISSN 0268-9146 R&D Projects: GA AV ČR KJB500450801; GA ČR GA523/09/0844; GA ČR(CZ) GA523/06/1302 Institutional research plan: CEZ:AV0Z50450515 Keywords : genomics * meat performance-related genes * pig Subject RIV: GI - Animal Husbandry ; Breeding Impact factor: 2.203, year: 2010

  17. Map-Based Power-Split Strategy Design with Predictive Performance Optimization for Parallel Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jixiang Fan

    2015-09-01

    Full Text Available In this paper, a map-based optimal energy management strategy is proposed to improve the consumption economy of a plug-in parallel hybrid electric vehicle. In the design of the maps, which provide both the torque split between engine and motor and the gear shift, not only the current vehicle speed and power demand but also the optimality based on the predicted trajectory of the vehicle dynamics are considered. To seek this optimality, the equivalent consumption, which trades off fuel and electricity usage, is chosen as the cost function. Moreover, in order to decrease model errors in the optimization conducted in the discrete time domain, a variational integrator is employed to compute the evolution of the vehicle dynamics. To evaluate the proposed energy management strategy, simulation results obtained with a professional GT-Suite simulator are presented, and a comparison with a real-time optimization method is given to show the advantage of the proposed off-line optimization approach.
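
    The equivalent-consumption cost function described above can be sketched as a search over candidate engine/motor splits. The fuel and electricity cost models and the equivalence factor below are toy placeholders, not the paper's maps:

    ```python
    def ecms_split(p_demand, s=2.5, grid=11):
        """Pick the engine/motor power split (kW) minimizing equivalent
        consumption: fuel cost plus s-weighted electricity cost."""
        best = None
        for i in range(grid):
            u = i / (grid - 1)                      # fraction met by engine
            p_eng, p_mot = u * p_demand, (1 - u) * p_demand
            fuel = 0.25 * p_eng + 0.02 * p_eng ** 2 / 1000  # toy fuel map
            elec = s * 0.2 * p_mot                  # s = equivalence factor
            cost = fuel + elec
            if best is None or cost < best[0]:
                best = (cost, p_eng, p_mot)
        return best

    cost, p_eng, p_mot = ecms_split(10.0)  # 10 kW demand
    ```

    In a real strategy the same minimization would be tabulated offline over speed and power demand to form the maps the controller looks up.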

  18. An internal reference model-based PRF temperature mapping method with Cramer-Rao lower bound noise performance analysis.

    Science.gov (United States)

    Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng

    2009-11-01

    The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented for a multi-echo gradient echo (GRE) sequence that uses the fat signal as an internal reference to overcome these problems. The internal reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat, which contain the temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of the various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map and thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of the sample water:fat signal ratio on the accuracy of the temperature estimate was evaluated in a water-fat mixed phantom experiment, with an optimal ratio of approximately 0.66:1. (c) 2009 Wiley-Liss, Inc.
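
    The core of internally referenced PRF thermometry is that the water resonance shifts with temperature while fat is nearly temperature-insensitive, so changes in the water-fat chemical-shift difference give the temperature change and field drift cancels. A minimal sketch; the -0.01 ppm/°C coefficient is the commonly cited PRF value, and the shift numbers are illustrative:

    ```python
    PRF_COEFF_PPM_PER_C = -0.01  # water proton shift; fat is ~insensitive

    def temp_change(delta_wf_ppm_now, delta_wf_ppm_base):
        """Temperature change (deg C) from the change in the water-fat
        chemical-shift difference, using fat as the internal reference."""
        return (delta_wf_ppm_now - delta_wf_ppm_base) / PRF_COEFF_PPM_PER_C

    # baseline water-fat shift ~3.50 ppm; heating lowers the difference
    dT = temp_change(3.40, 3.50)
    ```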

  19. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    Science.gov (United States)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the last decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, with supervised image classification techniques playing a central role. Using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods (bagged CART, stochastic gradient boosting, and a neural network with feature extraction) were tested and compared with two prevalent methods: random forest and a support vector machine with a linear kernel. Each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross validation, independent validation, and validation with the full training set. Moreover, the statistical significance of differences between the classification methods was assessed using ANOVA and Tukey tests. In general, the results showed that random forest, by a marginal difference over bagged CART and stochastic gradient boosting, was the best-performing method, although based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.
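
    Of the validation schemes mentioned above, cross validation rests on partitioning the samples into folds. A minimal sketch of building the fold indices; the fold count and seed are illustrative:

    ```python
    import random

    def kfold_indices(n, k, seed=0):
        """Shuffle n sample indices and split them into k disjoint folds;
        each fold serves once as the held-out validation set."""
        idx = list(range(n))
        random.Random(seed).shuffle(idx)
        return [idx[i::k] for i in range(k)]

    folds = kfold_indices(10, 5)  # 10 samples, 5 folds of 2
    ```

    Repeating this with different seeds gives the per-run accuracy samples that an ANOVA/Tukey comparison of classifiers needs.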

  20. In Preparation of the Nationwide Dissemination of the School-Based Obesity Prevention Program DOiT: Stepwise Development Applying the Intervention Mapping Protocol

    Science.gov (United States)

    van Nassau, Femke; Singh, Amika S.; van Mechelen, Willem; Brug, Johannes; Chin A. Paw, Mai J. M.

    2014-01-01

    Background: The school-based Dutch Obesity Intervention in Teenagers (DOiT) program is an evidence-based obesity prevention program. In preparation for dissemination throughout the Netherlands, this study aimed to adapt the initial program and to develop an implementation strategy and materials. Methods: We revisited the Intervention Mapping (IM)…

  1. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline for accounting for it. Here we compared the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling bias corresponding to potential types of empirical bias. We applied the five correction methods to the biased samples and compared the outputs of the distribution models to the unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of the methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity, and species. However, simple systematic sampling of records consistently ranked among the best-performing methods across the range of conditions tested, whereas the other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline for accounting for sampling bias. In the meantime, systematic sampling appears to be the most efficient correction method and can be advised in most cases.
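
    One common reading of the systematic sampling correction favored above is spatial thinning: keep at most one occurrence record per grid cell so oversampled areas no longer dominate training. A minimal sketch; the cell size and coordinates are illustrative:

    ```python
    def systematic_sample(records, cell=1.0):
        """Thin (lon, lat) occurrence records to one per grid cell of the
        given size (degrees), evening out unequal sampling effort."""
        seen, kept = set(), []
        for lon, lat in records:
            key = (int(lon // cell), int(lat // cell))
            if key not in seen:
                seen.add(key)
                kept.append((lon, lat))
        return kept

    # two records fall in the same 1-degree cell; one is dropped
    kept = systematic_sample([(0.1, 0.1), (0.2, 0.3), (1.5, 0.2)])
    ```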

  2. Beethoven recordings reviewed: a systematic method for mapping the content of music performance criticism.

    Science.gov (United States)

    Alessandri, Elena; Williamson, Victoria J; Eiholzer, Hubert; Williamon, Aaron

    2015-01-01

    Critical reviews offer rich data that can be used to investigate how musical experiences are conceptualized by expert listeners. However, these data also present significant challenges in terms of organization, analysis, and interpretation. This study presents a new systematic method for examining written responses to music, tested on a substantial corpus of music criticism. One hundred critical reviews of Beethoven's piano sonata recordings, published in the Gramophone between August 1934 and July 2010, were selected using in-depth data reduction (qualitative/quantitative approach). The texts were then examined using thematic analysis in order to generate a visual descriptive model of expert critical review. This model reveals how the concept of evaluation permeates critical review. It also distinguishes between two types of descriptors. The first characterizes the performance in terms of specific actions or features of the musical sound (musical parameters, technique, and energy); the second appeals to higher-order properties (artistic style, character and emotion, musical structure, communicativeness) or assumed performer qualities (understanding, intentionality, spontaneity, sensibility, control, and care). The new model provides a methodological guide and conceptual basis for future studies of critical review in any genre.

  3. Performance analysis of the FDTD method applied to holographic volume gratings: Multi-core CPU versus GPU computing

    Science.gov (United States)

    Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.

    2013-03-01

    The finite-difference time-domain (FDTD) method allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies greater memory use and processing time. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement offered by the new NVIDIA Fermi GPU architecture versus a highly tuned multi-core CPU as a function of simulation size. In particular, the optimized CPU implementation takes advantage of the arithmetic and data-transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both the FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to that of the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wide range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
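
    The leapfrog update structure that such implementations parallelize can be shown in one dimension. This is a purely illustrative free-space sketch in normalized units (Courant number 0.5, soft Gaussian source, reflecting left wall), far simpler than the optimized grating simulations benchmarked in the paper:

    ```python
    import math

    def fdtd_1d(steps=200, n=400, src=50):
        """Minimal 1-D FDTD (Yee) leapfrog: H is updated from the spatial
        difference of E, then E from the difference of H, each time step."""
        ez = [0.0] * n  # electric field samples
        hy = [0.0] * n  # magnetic field samples (staggered half-cell)
        for t in range(steps):
            for k in range(n - 1):
                hy[k] += 0.5 * (ez[k + 1] - ez[k])
            for k in range(1, n):
                ez[k] += 0.5 * (hy[k] - hy[k - 1])
            ez[src] += math.exp(-((t - 30) ** 2) / 100.0)  # soft source
        return ez

    ez = fdtd_1d()
    ```

    The two inner loops are exactly the data-parallel sweeps that map onto SSE lanes, OpenMP threads, or CUDA blocks in the optimized versions.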

  4. A Simple and High Performing Rate Control Initialization Method for H.264 AVC Coding Based on Motion Vector Map and Spatial Complexity at Low Bitrate

    Directory of Open Access Journals (Sweden)

    Yalin Wu

    2014-01-01

    Full Text Available The temporal complexity of a video sequence can be characterized by its motion vector map, which consists of the motion vectors of each macroblock (MB). In order to obtain the optimal initial QP (quantization parameter) for video sequences with different spatial and temporal complexities, this paper proposes a simple and high-performance method for determining the initial QP at a given target bit rate, based on the motion vector map and temporal complexity. The proposed algorithm produces reconstructed video sequences with outstanding and stable quality. For any video sequence, the initial QP can be determined easily from matrices indexed by target bit rate and mapped spatial complexity using the proposed mapping method. Experimental results show that the proposed algorithm achieves better objective and subjective performance than other conventional determining methods.
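
    The idea of mapping motion-vector statistics and target bitrate to an initial QP can be sketched as a table lookup. The thresholds and offsets below are placeholders, not the paper's matrices; only the clamping to the H.264 QP range 0-51 is standard:

    ```python
    def initial_qp(mv_map, target_kbps):
        """Choose an initial QP from mean motion-vector magnitude
        (temporal complexity) and target bitrate, then clamp to 0..51."""
        mean_mv = sum((dx * dx + dy * dy) ** 0.5 for dx, dy in mv_map) / len(mv_map)
        qp = 30 + (5 if mean_mv > 8 else 0) - (4 if target_kbps > 512 else 0)
        return max(0, min(51, qp))  # valid H.264 QP range

    qp = initial_qp([(0, 0), (3, 4)], 256)  # low motion, low bitrate
    ```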

  5. Hippocampal development and the dissociation of cognitive-spatial mapping from motor performance [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Bryan D. Devan

    2015-09-01

    Full Text Available The publication of a recent article in F1000Research has led to discussion of, and correspondence on a broader issue that has a long history in the fields of neuroscience and psychology.  Namely, is it possible to separate the cognitive components of performance, in this case spatial behavior, from the motoric demands of a task?  Early psychological experiments attempted such a dissociation by studying a form of spatial maze learning where initially rats were allowed to explore a complex maze, termed “latent learning,” before reinforcement was introduced.  Those rats afforded the latent learning experience solved the task faster than those that were not, implying that cognitive map learning during exploration aided in the performance of the task once a motivational component was introduced.  This form of latent learning was interpreted as successfully demonstrating that an exploratory cognitive map component was acquired irrespective of performing a learned spatial response under deprivation/motivational conditions.  The neural substrate for cognitive learning was hypothesized to depend on place cells within the hippocampus.  Subsequent behavioral studies attempted to directly eliminate the motor component of spatial learning by allowing rats to passively view the distal environment before performing any motor response using a task that is widely considered to be hippocampal-dependent.  Latent learning in the water maze, using a passive placement procedure has met with mixed results.  One constraint on viewing cues before performing a learned swimming response to a hidden goal has been the act of dynamically viewing distal cues while moving through a part of the environment where an optimal learned spatial escape response would be observed.  We briefly review these past findings obtained with adult animals to the recent efforts of establishing a “behavioral topology” separating cognitive-spatial learning from tasks differing in

  6. Hippocampal development and the dissociation of cognitive-spatial mapping from motor performance [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Bryan D. Devan

    2015-08-01

    Full Text Available The publication of a recent article in F1000Research has led to discussion of, and correspondence on a broader issue that has a long history in the fields of neuroscience and psychology.  Namely, is it possible to separate the cognitive components of performance, in this case spatial behavior, from the motoric demands of a task?  Early psychological experiments attempted such a dissociation by studying a form of spatial maze learning where initially rats were allowed to explore a complex maze, termed “latent learning,” before reinforcement was introduced.  Those rats afforded the latent learning experience solved the task faster than those that were not, implying that cognitive map learning during exploration aided in the performance of the task once a motivational component was introduced.  This form of latent learning was interpreted as successfully demonstrating that an exploratory cognitive map component was acquired irrespective of performing a learned spatial response under deprivation/motivational conditions.  The neural substrate for cognitive learning was hypothesized to depend on place cells within the hippocampus.  Subsequent behavioral studies attempted to directly eliminate the motor component of spatial learning by allowing rats to passively view the distal environment before performing any motor response using a task that is widely considered to be hippocampal-dependent.  Latent learning in the water maze, using a passive placement procedure has met with mixed results.  One constraint on viewing cues before performing a learned swimming response to a hidden goal has been the act of dynamically viewing distal cues while moving through a part of the environment where an optimal learned spatial escape response would be observed.  We briefly review these past findings obtained with adult animals to the recent efforts of establishing a “behavioral topology” separating cognitive-spatial learning from tasks differing in

  7. Mapping the knowledge base for maritime health: 4 safety and performance at sea.

    Science.gov (United States)

    Carter, Tim

    2011-01-01

    There is very little recent investigative work on the contribution of health related impairment and disability to either accident risks or to reduced performance at sea, the only exception being studies on fatigue and parallel data on sleep related incidents. Incidents where health related impairment, other than fatigue, has contributed are very rarely found in reports of maritime accident investigations. This may either indicate the irrelevance of these forms of impairment to accidents or alternatively point to the effectiveness of existing control measures. The main approach to risk reduction is by the application of fitness criteria to seafarers during medical examinations. Where there is a knowledge base it is either, as in the case of vision, a very old one that relates to patterns of visual task that differ markedly from those in modern shipping or, as with hearing, is based on untested assumptions about the levels of impairment that will prevent effective communications at sea. There are practical limitations to the assessment of cognitive functions as these encompass such a wide range of impairments from those associated with fatigue, medication, or substance abuse to those relating to age or to the risks of sudden incapacitation from a pre-existing illness. Physical capability can be assessed but only in limited ways in the course of a medical examination. In the absence of clear evidence of accident risks associated with health-related impairments or disabilities it is unlikely that there will be pressure to update criteria that appear to be providing satisfactory protection. As capability is related to the tasks performed, investigations need to integrate information on ergonomic and organizational aspects with that on health and impairment. 
Criteria that may select seafarers with health-related impairment need to be reviewed wherever the task demands in modern shipping have changed, so that they can be relaxed or modified where indicated in order to reduce

  8. The Influence of a Mind Map-Assisted Learning Cycle Model on Physics Achievement in Terms of the Laboratory Performance of Grade VIII Students at SMPN 1 Rejoso, Pasuruan

    OpenAIRE

    Ary Analisa Rahma

    2014-01-01

    The Effect of a Mind Map-Assisted Learning Cycle Model on Physics Learning Achievement in Terms of the Laboratory Performance of Grade VIII Students of SMPN 1 Rejoso, Pasuruan Regency. Abstract: This study aimed to examine the effect of the learning cycle model, aided by mind maps, on learning achievement in terms of laboratory performance for grade VIII physics students studying light at SMP Negeri 1 Rejoso, Pasuruan. This study is a quasi-experimental study. The research design used is a 2 x 2...

  9. The performance of the new enhanced-resolution satellite passive microwave dataset applied for snow water equivalent estimation

    Science.gov (United States)

    Pan, J.; Durand, M. T.; Jiang, L.; Liu, D.

    2017-12-01

    The newly processed NASA MEaSUREs Calibrated Enhanced-Resolution Brightness Temperature (CETB) dataset, reconstructed using the antenna measurement response function (MRF), offers significantly improved fine-resolution measurements, with better georegistration for time-series observations and an equivalent field of view (FOV) for frequencies sharing the same nominal spatial resolution. We aim to explore its potential for global snow observation, and therefore test its performance for characterizing snow properties, especially snow water equivalent (SWE), over large areas. In this research, two candidate SWE algorithms are tested in China for the years 2005 to 2010 using the reprocessed TB from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E), with the results evaluated against daily snow depth measurements at over 700 national synoptic stations. One of the algorithms is the SWE retrieval algorithm used for the FengYun (FY)-3 Microwave Radiation Imager. This algorithm uses multi-channel TB to calculate SWE for three major snow regions in China, with coefficients adapted for different land cover types. The second algorithm is the newly established Bayesian Algorithm for SWE Estimation with Passive Microwave measurements (BASE-PM). This algorithm uses a physically based snow radiative transfer model to find the histogram of the most likely snow properties that match the multi-frequency TB from 10.65 to 90 GHz. It provides a rough estimate of snow depth and grain size at the same time and showed a 30 mm SWE RMS error using the ground radiometer measurements at Sodankyla. This study is the first attempt to test it spatially from satellite. The use of this algorithm benefits from the high resolution and the spatial consistency between frequencies embedded in the new dataset. This research will answer three questions. First, to what extent can CETB increase the heterogeneity in the mapped SWE?
Second, will
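
    Multi-channel SWE retrievals of the kind tested above build on the classic spectral-difference idea: dry snow scatters 37 GHz emission more strongly than 19 GHz, so the brightness-temperature difference scales with SWE. A minimal sketch using the classic Chang-type coefficient; the operational FY-3 and BASE-PM algorithms are considerably more elaborate:

    ```python
    def swe_tb_diff(tb19h, tb37h, coeff=4.77):
        """Spectral-difference SWE estimate (mm) from 19H and 37H GHz
        brightness temperatures (K); negative differences map to zero."""
        return max(0.0, coeff * (tb19h - tb37h))

    swe = swe_tb_diff(250.0, 230.0)  # 20 K difference -> ~95 mm SWE
    ```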

  10. Does cognitive performance map to categorical diagnoses of schizophrenia, schizoaffective disorder and bipolar disorder? A discriminant functions analysis.

    Science.gov (United States)

    Van Rheenen, Tamsyn E; Bryce, Shayden; Tan, Eric J; Neill, Erica; Gurvich, Caroline; Louise, Stephanie; Rossell, Susan L

    2016-03-01

    Despite known overlaps in the pattern of cognitive impairments in individuals with bipolar disorder (BD), schizophrenia (SZ) and schizoaffective disorder (SZA), few studies have examined the extent to which cognitive performance validates traditional diagnostic boundaries in these groups. Individuals with SZ (n=49), schizoaffective disorder (n=33) and BD (n=35) completed a battery of cognitive tests measuring the domains of processing speed, immediate memory, semantic memory, learning, working memory, executive function and sustained attention. A discriminant functions analysis revealed a significant function comprising semantic memory, immediate memory and processing speed that maximally separated patients with SZ from those with BD. Initial classification scores on the basis of this function showed modest diagnostic accuracy, owing in part to the misclassification of SZA patients as having SZ. When SZA patients were removed from the model, a second cross-validated classifier yielded slightly improved diagnostic accuracy and a single function solution, of which semantic memory loaded most heavily. A cluster of non-executive cognitive processes appears to have some validity in mapping onto traditional nosological boundaries. However, since semantic memory performance was the primary driver of the discrimination between BD and SZ, it is possible that performance differences between the disorders in this cognitive domain in particular, index separate underlying aetiologies. Copyright © 2015 Elsevier B.V. All rights reserved.
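
    The classification step of such an analysis can be illustrated with a nearest-centroid sketch over cognitive domain scores: a simplified stand-in for the discriminant functions analysis itself, with hypothetical diagnosis labels and score vectors:

    ```python
    def centroids(data):
        """Per-diagnosis mean profile over cognitive domain scores."""
        out = {}
        for label, rows in data.items():
            n = len(rows)
            out[label] = [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]
        return out

    def classify(x, cents):
        """Assign a score profile to the nearest centroid (squared
        Euclidean distance); LDA additionally weights directions by
        within-group covariance."""
        return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))

    data = {"SZ": [[0, 0], [0, 2]], "BD": [[4, 4], [4, 6]]}
    label = classify([0.5, 1.0], centroids(data))
    ```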

  11. Heat Maps Applied to Environmental Management: An Analysis of Hot Spots in Acaraú River Basin, Ceará, 2010-2015

    Directory of Open Access Journals (Sweden)

    Ulisses Costa Oliveira

    2017-08-01

    Full Text Available Using maps generated with the kernel density estimator, this work analyzes the density of fire points located in the Acaraú River Basin during the years 2010-2015. Data were processed using QGIS Wien software, version 2.8. Maps were generated and heat densities were classified using a color scale divided into five classes, represented by the colors white (very low), green (low), yellow (average), orange (high), and red (very high). The results show that over the years the hot spots were concentrated in the portion of the basin covering the lower and middle reaches of the Acaraú River, as well as in areas near the Ibiapaba Plateau in its southwestern part. The year 2015 saw the most significant number of fire outbreaks, totaling 3,813 hot spots, more than double that of the previous four years.
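
    The kernel density estimation behind such heat maps can be sketched directly: each fire point contributes a Gaussian bump, and the summed density over a grid is then cut into the five color classes. Function names, bandwidth, and coordinates below are illustrative:

    ```python
    import math

    def kernel_density(points, grid, bandwidth=1.0):
        """Gaussian kernel density estimate of point intensity at each
        grid location; higher values would be drawn in hotter colors."""
        dens = []
        for gx, gy in grid:
            s = sum(math.exp(-((gx - px) ** 2 + (gy - py) ** 2)
                             / (2 * bandwidth ** 2))
                    for px, py in points)
            dens.append(s / (2 * math.pi * bandwidth ** 2 * len(points)))
        return dens

    # one fire point; density falls off away from it
    dens = kernel_density([(0.0, 0.0)], [(0.0, 0.0), (3.0, 0.0)])
    ```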

  12. Applying process mapping and analysis as a quality improvement strategy to increase the adoption of fruit, vegetable, and water breaks in Australian primary schools.

    Science.gov (United States)

    Biggs, Janice S; Farrell, Louise; Lawrence, Glenda; Johnson, Julie K

    2014-03-01

    Over the past decade, public health policy in Australia has prioritized the prevention and control of obesity and invested in programs that promote healthy eating-related behaviors, which includes increasing fruit and vegetable consumption in children. This article reports on a study that used process mapping and analysis as a quality improvement strategy to improve the delivery of a nutrition primary prevention program delivered in primary schools in New South Wales, Australia. Crunch&Sip® has been delivered since 2008. To date, adoption is low, with only 25% of schools implementing the program. We investigated the cause of low adoption and propose actions to increase school participation. We conducted semistructured interviews with key stakeholders and analyzed the process of delivering Crunch&Sip to schools. Interviews and process mapping and analysis identified a number of barriers to schools adopting the program. The analyses identified the need to simplify and streamline the process of delivering the program to schools and to introduce monitoring and feedback loops to track ongoing participation. The combination of stakeholder interviews and process mapping and analysis provided important practical solutions for improving program delivery and also contributed to building an understanding of factors that help and hinder program adoption. The insight provided by this analysis helped identify usable routine measures of adoption, which were an improvement over those used in the existing program plan. This study contributed toward improving the quality and efficiency of delivering a health-promoting program to work toward achieving healthy eating behaviors in children.

  13. [Visualizing Research Lines in Public Health: An Analysis Based on Bibliometric Maps Applied to the Revista Española de Salud Pública (2006-2015)].

    Science.gov (United States)

    Gálvez, Carmen

    2016-12-01

    Identifying research lines is essential to understand the knowledge structure of a scientific domain. The aim of this study was to identify the main research topics within the domain of public health in the Revista Española de Salud Pública during 2006-2015. Original articles included in the Social Sciences Citation Index (SSCI) database, available online through the Web of Science (WoS), were selected. The analysis units used were the keywords, KeyWords Plus (KW+), extracted automatically by SSCI. With the KW+ obtained, bibliometric maps were created using a methodology based on the combination of co-word analysis, clustering techniques and visualization techniques. We analyzed 512 documents, from which 176 KW+ were obtained with a frequency greater than or equal to 3. The results were two-dimensional bibliometric maps with thematic groupings of KW+, representing the main research fronts: i) epidemiology, disease risk control programs and, in general, service organization and health policies; ii) infectious diseases, principally HIV; iii) a progressive increase in several interrelated lines on cardiovascular diseases (CVD); iv) a multidimensional line dedicated to different aspects associated with health-related quality of life (HRQoL); and v) an emerging line linked to binge drinking. Given the multidisciplinary and multidimensional nature of public health, the construction of bibliometric maps is an appropriate methodology to understand the knowledge structure of this scientific domain.

  14. Applying the Intervention Mapping protocol to develop a kindergarten-based, family-involved intervention to increase European preschool children's physical activity levels: the ToyBox-study.

    Science.gov (United States)

    De Craemer, M; De Decker, E; De Bourdeaudhuij, I; Verloigne, M; Duvinage, K; Koletzko, B; Ibrügger, S; Kreichauf, S; Grammatikaki, E; Moreno, L; Iotova, V; Socha, P; Szott, K; Manios, Y; Cardon, G

    2014-08-01

    Although sufficient physical activity is beneficial for preschoolers' health, activity levels in most preschoolers are low. As preschoolers spend a considerable amount of time at home and at kindergarten, interventions should target both environments to increase their activity levels. The aim of the current paper was to describe the six different steps of the Intervention Mapping protocol towards the systematic development and implementation of the physical activity component of the ToyBox-intervention. This intervention is a kindergarten-based, family-involved intervention implemented across six European countries. Based on the results of literature reviews and focus groups with parents/caregivers and kindergarten teachers, matrices of change objectives were created. Then, theory-based methods and practical strategies were selected to develop intervention materials at three different levels: (i) individual level (preschoolers); (ii) interpersonal level (parents/caregivers) and (iii) organizational level (teachers). This resulted in a standardized intervention with room for local and cultural adaptations in each participating country. Although the Intervention Mapping protocol is a time-consuming process, using this systematic approach may lead to an increase in intervention effectiveness. The presented matrices of change objectives are useful for future programme planners to develop and implement an intervention based on the Intervention Mapping protocol to increase physical activity levels in preschoolers. © 2014 World Obesity.

  15. Applying GIS and high performance agent-based simulation for managing an Old World Screwworm fly invasion of Australia.

    Science.gov (United States)

    Welch, M C; Kwan, P W; Sajeev, A S M

    2014-10-01

    Agent-based modelling has proven to be a promising approach for developing rich simulations for complex phenomena that provide decision support functions across a broad range of areas including biological, social and agricultural sciences. This paper demonstrates how high performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national scale, agent-based simulation of an incursion of Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the combination of massively data-parallel processing capabilities supported by NVidia's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and the dispersal of gravid females. Through this model, a highly efficient computational platform has been developed for studying the effectiveness of control and mitigation strategies and their associated economic impact on livestock industries. Copyright © 2014 International Atomic Energy Agency. Published by Elsevier B.V. All rights reserved.

  16. The design and performance of a velocity map imaging spectrometer for the study of molecular photoionisation dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Holland, D.M.P., E-mail: david.holland@stfc.ac.uk [Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom); Shaw, D.A. [Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom)

    2012-12-10

    Highlights: • Velocity map imaging spectrometer optimised for molecular photoionisation dynamics. • Kinetic energy distribution of O⁺ fragments measured. • Effect of autoionisation on photoelectron vibrational populations studied. -- Abstract: The design, construction and performance of a velocity map imaging spectrometer for the study of molecular photoionisation dynamics is described. The spectrometer has been optimised for the efficient collection and detection of particles (electrons or positively charged ions) generated through the interaction of gas phase molecules with synchrotron radiation. A double Einzel lens, incorporated into the flight tube, enhances the collection efficiency of energetic particles. Computer modelling has been used to trace the trajectories of charged particles through the spectrometer and to assess the image quality. A time and position sensitive delay-line detector is used to record the images. Results from two experimental studies are presented to illustrate the capabilities of the spectrometer. In the first, the effect of electronic autoionisation on the vibrationally resolved photoelectron branching ratios of the N₂⁺ X ²Σg⁺ state has been investigated in an excitation range where prominent structure due to Rydberg states occurs in the ion yield curve. The results show that autoionisation leads to rotational branch populations that differ from those observed in direct, non-resonant, photoionisation. In the second, the kinetic energy distribution and the angular distribution of O⁺ fragments formed in the dissociative photoionisation of molecular oxygen have been measured. The timing properties of the detector have allowed O⁺ fragments to be separated from O₂⁺ parent ions using time-of-flight techniques.

  17. Prebiotic Low Sugar Chocolate Dairy Desserts: Physical and Optical Characteristics and Performance of PARAFAC and PCA Preference Map.

    Science.gov (United States)

    Morais, E C; Esmerino, E A; Monteiro, R A; Pinheiro, C M; Nunes, C A; Cruz, A G; Bolini, Helena M A

    2016-01-01

    The addition of prebiotics and sweeteners to chocolate dairy desserts opens up new opportunities to develop desserts that, besides having a lower calorie content, still have functional properties. In this study, prebiotic low sugar dairy desserts were evaluated by 120 consumers using a 9-point hedonic scale, in relation to the attributes of appearance, aroma, flavor, texture, and overall liking. Internal preference mapping using parallel factor analysis (PARAFAC) and principal component analysis (PCA) was performed on the consumer data. In addition, physical (texture profile) and optical (instrumental color) analyses were also performed. Prebiotic dairy desserts containing sucrose and sucralose were equally liked by the consumers. These samples were characterized by firmness and gumminess, which can be considered drivers of liking for the consumers. Optimization of the prebiotic low sugar dessert formulation should take into account the choice of ingredients that contribute in a positive manner to these parameters. PARAFAC allowed the extraction of more relevant information relative to PCA, demonstrating that consumer acceptance can be evaluated by simultaneously considering several attributes. Multiple factor analysis reported an RV value of 0.964, suggesting excellent concordance between the two methods. © 2015 Institute of Food Technologists®
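
    An internal preference map of the PCA kind described above can be sketched as a singular value decomposition of the centred consumer-by-sample score matrix; the hedonic scores below are synthetic, not the study's data:

    ```python
    import numpy as np

    # Synthetic (consumers x samples) matrix of 9-point hedonic scores.
    # In the study this would be 120 consumers rating each dessert.
    rng = np.random.default_rng(1)
    scores = rng.integers(1, 10, size=(12, 5)).astype(float)

    X = scores - scores.mean(axis=0)            # centre each sample (column)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)             # variance share per component
    sample_map = Vt[:2].T * s[:2]               # sample loadings on PC1/PC2
    consumer_map = U[:, :2]                     # consumer coordinates
    print("PC1+PC2 explain", round(float(explained[:2].sum()) * 100, 1), "% of variance")
    ```

    Plotting `sample_map` with `consumer_map` overlaid gives the biplot-style preference map; PARAFAC generalises this to a three-way (consumer x sample x attribute) array.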

  18. Generalized surficial geologic map of the Fort Irwin area, San Bernardino: Chapter B in Geology and geophysics applied to groundwater hydrology at Fort Irwin, California

    Science.gov (United States)

    Miller, David M.; Menges, Christopher M.; Lidke, David J.; Buesch, David C.

    2014-01-01

    The geology and landscape of the Fort Irwin area, typical of many parts of the Mojave Desert, consist of rugged mountains separated by broad alluviated valleys that form the main coarse-resolution features of the geologic map. Crystalline and sedimentary rocks, Mesozoic and older in age, form most of the mountains with lesser accumulations of Miocene sedimentary and volcanic rocks. In detail, the area exhibits a fairly complex distribution of surficial deposits resulting from diverse rock sources and geomorphology that has been driven by topographic changes caused by recent and active faulting. Depositional environments span those typical of the Mojave Desert: alluvial fans on broad piedmonts, major intermittent streams along valley floors, eolian sand dunes and sheets, and playas in closed valleys that lack through-going washes. Erosional environments include rocky mountains, smooth gently sloping pediments, and badlands in readily eroded sediment. All parts of the landscape, from regional distribution of mountains, valleys, and faults to details of degree of soil development in surface materials, are portrayed by the surficial geologic map. Many of these attributes govern infiltration and recharge, and the surface distribution of permeable rock units such as Miocene sedimentary and volcanic rocks provides a basis for evaluating potential groundwater storage. Quaternary faults are widespread in the Fort Irwin area and include sinistral, east-striking faults that characterize the central swath of the area and the contrasting dextral, northwest-striking faults that border the east and west margins. Bedrock distribution and thickness of valley-fill deposits are controlled by modern and past faulting, and faults on the map help to identify targets for groundwater exploration.

  19. GPR applied to mapping utilities along the route of the Line 4 (yellow) subway tunnel construction in São Paulo City, Brazil

    Science.gov (United States)

    Porsani, Jorge Luís; Ruy, Yang Boo; Ramos, Fernanda Pereira; Yamanouth, Gisele R. B.

    2012-05-01

    The rapid industrial development and disorganized population growth in huge cities bring about various urban problems due to intense use of physical space on and below the surface. Subsurface problems in metropolitan areas are caused by subway line construction, which often follows the routes of utility networks, such as electric and telephone cables, water and gas pipes, storm sewers, etc. Usually, the main problems are related to damage or destruction of preexisting utilities, often putting human lives at risk. With the purpose of minimizing risks, GPR-profiling with 200 MHz antennae was done at two sites, both located in downtown São Paulo, Brazil. The objectives of this work were to map utilities or existing infrastructure in the subsurface in order to orient the construction of the Line 4 (yellow) subway tunnel in São Paulo. GPR profiles can detect water pipes, utility networks in the subsurface, and concrete foundation columns or pilings in subsoil up to 2 m depth. In addition, the GPR profiles also provided details of the target shapes in the subsurface. GPR interpretations combined with lithological information from boreholes and trenches opened in the study areas were extremely important in mapping of the correct spatial distribution of buried utilities at these two sites in São Paulo. This information improves and updates maps of utility placement, serves as a basis for planning of the geotechnical excavation of the Line 4 (yellow) subway tunnel in São Paulo, helps minimize problems related to destruction of preexisting utilities in the subsoil, and avoids risk of dangerous accidents.

  20. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.
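
    The adjustment logic described above can be sketched roughly as follows; the table names, gain and threshold values are invented for illustration, not taken from the patent:

    ```python
    # Adjust whichever engine-map look-up table the performance variable is
    # most sensitive to, nudging the stored value so the performance
    # variable approaches its target. All names and numbers are hypothetical.
    SENSITIVITY_THRESHOLD = 0.1
    GAIN = 0.5

    def adjust_maps(map1, map2, cell, perf, target, sens1, sens2):
        """Update look-up table cells in place based on sensitivities."""
        error = target - perf
        if abs(sens1) > SENSITIVITY_THRESHOLD:
            map1[cell] += GAIN * error / sens1
        if abs(sens2) > SENSITIVITY_THRESHOLD:
            map2[cell] += GAIN * error / sens2
        return map1, map2

    # Two 1-D look-up tables indexed by an operating-condition bin
    spark_map = {0: 10.0, 1: 12.0}
    egr_map = {0: 5.0, 1: 6.0}
    adjust_maps(spark_map, egr_map, cell=1, perf=0.9, target=1.0,
                sens1=0.4, sens2=0.05)  # only spark_map crosses the threshold
    print(spark_map[1], egr_map[1])
    ```

    Dividing the error by the sensitivity is a Newton-style step: a highly sensitive table needs only a small correction, while an insensitive one is left untouched.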

  1. High performance liquid chromatography-charged aerosol detection applying an inverse gradient for quantification of rhamnolipid biosurfactants.

    Science.gov (United States)

    Behrens, Beate; Baune, Matthias; Jungkeit, Janek; Tiso, Till; Blank, Lars M; Hayen, Heiko

    2016-07-15

    A method using high performance liquid chromatography coupled to charged-aerosol detection (HPLC-CAD) was developed for the quantification of rhamnolipid biosurfactants. Qualitative sample composition was determined by liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS). The relative quantification of different derivatives of rhamnolipids, including di-rhamnolipids, mono-rhamnolipids, and their precursors 3-(3-hydroxyalkanoyloxy)alkanoic acids (HAAs), differed for two compared LC-MS instruments and revealed instrument-dependent responses. The HPLC-CAD method reported here provides a uniform response. An inverse gradient was applied for the absolute quantification of rhamnolipid congeners to account for the detector's dependency on the solvent composition. The CAD produces a uniform response not only for the analytes but also for structurally different (nonvolatile) compounds. It was demonstrated that n-dodecyl-β-d-maltoside or deoxycholic acid can be used as alternative standards. The method of HPLC-ultraviolet (UV) detection after derivatization of rhamnolipids and HAAs to their corresponding phenacyl esters confirmed the obtained results but required additional, laborious sample preparation steps. Sensitivity, determined as limit of detection and limit of quantification for four mono-rhamnolipids, was in the range of 0.3-1.0 and 1.2-2.0 μg/mL, respectively, for HPLC-CAD and 0.4 and 1.5 μg/mL, respectively, for HPLC-UV. Linearity for HPLC-CAD was at least 0.996 (R²) in the calibrated range of about 1-200 μg/mL. Hence, the presented HPLC-CAD method allows absolute quantification of rhamnolipids and derivatives. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Predictive performance for population models using stochastic differential equations applied on data from an oral glucose tolerance test.

    Science.gov (United States)

    Møller, Jonas B; Overgaard, Rune V; Madsen, Henrik; Hansen, Torben; Pedersen, Oluf; Ingwersen, Steen H

    2010-02-01

    Several articles have investigated stochastic differential equations (SDEs) in PK/PD models, but few have quantitatively investigated the benefits to predictive performance of models based on real data. Estimation of first phase insulin secretion, which reflects beta-cell function, using models of the OGTT is a difficult problem in need of further investigation. The present work aimed at investigating the power of SDEs to predict the first phase insulin secretion (AIR 0-8) in the IVGTT based on parameters obtained from the minimal model of the OGTT, published by Breda et al. (Diabetes 50(1):150-158, 2001). In total 174 subjects underwent both an OGTT and a tolbutamide modified IVGTT. Estimation of parameters in the oral minimal model (OMM) was performed using the FOCE method in NONMEM VI on insulin and C-peptide measurements. The suggested SDE models were based on a continuous AR(1) process, i.e. the Ornstein-Uhlenbeck process, and the extended Kalman filter was implemented in order to estimate the parameters of the models. Inclusion of the Ornstein-Uhlenbeck (OU) process improved the description of the variation in the data as measured by the autocorrelation function (ACF) of one-step prediction errors. A main result was that application of SDE models improved the correlation between the individual first phase indexes obtained from the OGTT and AIR 0-8 (r = 0.36 to r = 0.49 and r = 0.32 to r = 0.47 with C-peptide and insulin measurements, respectively). In addition to the increased correlation, the indexes obtained using the SDE models also more correctly reflected the properties of the first phase indexes obtained from the IVGTT. In general it is concluded that the presented SDE approach not only caused autocorrelation of errors to decrease but also improved estimation of clinical measures obtained from the glucose tolerance tests. Since the estimation time of extended models was not heavily increased compared to basic models, the applied method
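
    The Ornstein-Uhlenbeck (continuous AR(1)) building block used above can be simulated with its exact discretisation; the parameter values here are illustrative only, not the study's estimates:

    ```python
    import math
    import random

    # Exact one-step discretisation of dX = -theta*X dt + sigma dW:
    # X[t+dt] = phi*X[t] + noise, with phi = exp(-theta*dt) and step-noise
    # variance sigma^2 * (1 - phi^2) / (2*theta).
    theta, sigma, dt = 1.0, 0.3, 0.1   # mean-reversion rate, noise scale, step
    phi = math.exp(-theta * dt)        # AR(1) coefficient over one step
    sd = sigma * math.sqrt((1 - phi**2) / (2 * theta))

    random.seed(42)
    x, path = 0.0, []
    for _ in range(1000):
        x = phi * x + sd * random.gauss(0.0, 1.0)
        path.append(x)

    # Empirical lag-1 autocorrelation of the path should be near phi
    mean = sum(path) / len(path)
    num = sum((a - mean) * (b - mean) for a, b in zip(path, path[1:]))
    den = sum((a - mean) ** 2 for a in path)
    print("lag-1 autocorrelation:", round(num / den, 2), "(theory:", round(phi, 2), ")")
    ```

    It is exactly this correlated residual structure that the extended Kalman filter accounts for when the OU process is embedded in the model, which is why the ACF of one-step prediction errors improves.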

  3. Applying Theory to Understand and Modify Nurse Intention to Adhere to Recommendations regarding the Use of Filter Needles: An Intervention Mapping Approach.

    Science.gov (United States)

    Cassista, Julianne; Payne-Gagnon, Julie; Martel, Brigitte; Gagnon, Marie-Pierre

    2014-01-01

    The manipulation of glass ampoules involves risk of particle contamination of parenteral medication, and the use of filter needles has often been recommended in order to reduce the number of particles in these solutions. This study aims to develop a theory-based intervention to increase nurse intention to use filter needles according to clinical guideline recommendations produced by a large university medical centre in Quebec (Canada). Using the Intervention Mapping framework, we first identified the psychosocial determinants of nurse intention to use filter needles according to these recommendations. Second, we developed and implemented an intervention targeting nurses from five care units in order to increase their intention to adhere to recommendations on the use of filter needles. We also assessed nurse satisfaction with the intervention. In total, 270 nurses received the intervention and 169 completed the posttest questionnaire. The two determinants of intention, that is, attitude and perceived behavioral control, were significantly higher after the intervention, but only perceived behavioral control remained a predictor of intention. In general, nurses were highly satisfied with the intervention. This study provides support for the use of Intervention Mapping to develop, implement, and evaluate theory-based interventions in order to improve healthcare professional adherence to clinical recommendations.

  4. Applying Theory to Understand and Modify Nurse Intention to Adhere to Recommendations regarding the Use of Filter Needles: An Intervention Mapping Approach

    Directory of Open Access Journals (Sweden)

    Julianne Cassista

    2014-01-01

    Full Text Available The manipulation of glass ampoules involves risk of particle contamination of parenteral medication, and the use of filter needles has often been recommended in order to reduce the number of particles in these solutions. This study aims to develop a theory-based intervention to increase nurse intention to use filter needles according to clinical guideline recommendations produced by a large university medical centre in Quebec (Canada). Using the Intervention Mapping framework, we first identified the psychosocial determinants of nurse intention to use filter needles according to these recommendations. Second, we developed and implemented an intervention targeting nurses from five care units in order to increase their intention to adhere to recommendations on the use of filter needles. We also assessed nurse satisfaction with the intervention. In total, 270 nurses received the intervention and 169 completed the posttest questionnaire. The two determinants of intention, that is, attitude and perceived behavioral control, were significantly higher after the intervention, but only perceived behavioral control remained a predictor of intention. In general, nurses were highly satisfied with the intervention. This study provides support for the use of Intervention Mapping to develop, implement, and evaluate theory-based interventions in order to improve healthcare professional adherence to clinical recommendations.

  5. Mapping out Map Libraries

    Directory of Open Access Journals (Sweden)

    Ferjan Ormeling

    2008-09-01

    Full Text Available Discussing the requirements for map data quality, map users and their library/archives environment, the paper focuses on the metadata the user would need for a correct and efficient interpretation of the map data. For such a correct interpretation, knowledge of the rules and guidelines according to which the topographers/cartographers work (such as the kind of data categories to be collected), and the degree to which these rules and guidelines were indeed followed, are essential. This is not only valid for the old maps stored in our libraries and archives, but perhaps even more so for the new digital files, as this is the format in which we now have to access our geospatial data. As this would be too much to ask from map librarians/curators, some sort of web 2.0 environment is sought where comments about data quality, completeness and up-to-dateness from knowledgeable map users regarding the specific maps or map series studied can be collected and tagged to scanned versions of these maps on the web. In order not to be subject to the same disadvantages as Wikipedia, where the 'communis opinio', rather than scholarship, seems to be decisive, some checking by map curators of this tagged map-use information would still be needed. Cooperation between map curators and the International Cartographic Association (ICA) map and spatial data use commission to this end is suggested.

  6. Raman Microspectroscopic Mapping with Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) Applied to the High-Pressure Polymorph of Titanium Dioxide, TiO2-II.

    Science.gov (United States)

    Smith, Joseph P; Smith, Frank C; Ottaway, Joshua; Krull-Davatzes, Alexandra E; Simonson, Bruce M; Glass, Billy P; Booksh, Karl S

    2017-08-01

    The high-pressure, α-PbO2-structured polymorph of titanium dioxide (TiO2-II) was recently identified in micrometer-sized grains recovered from four Neoarchean spherule layers deposited between ∼2.65 and ∼2.54 billion years ago. Several lines of evidence support the interpretation that these layers represent distal impact ejecta layers. The presence of shock-induced TiO2-II provides physical evidence to further support an impact origin for these spherule layers. Detailed characterization of the distribution of TiO2-II in these grains may be useful for correlating the layers, estimating the paleodistances of the layers from their source craters, and providing insight into the formation of the TiO2-II. Here we report the investigation of TiO2-II-bearing grains from these four spherule layers using multivariate curve resolution-alternating least squares (MCR-ALS) applied to Raman microspectroscopic mapping. Raman spectra provide evidence of grains consisting primarily of rutile (TiO2) and TiO2-II, as shown by Raman bands at 174 cm⁻¹ (TiO2-II), 426 cm⁻¹ (TiO2-II), 443 cm⁻¹ (rutile), and 610 cm⁻¹ (rutile). Principal component analysis (PCA) yielded a predominantly three-phase system comprised of rutile, TiO2-II, and substrate-adhesive epoxy. Scanning electron microscopy (SEM) suggests heterogeneous grains containing polydispersed micrometer- and submicrometer-sized particles. Multivariate curve resolution-alternating least squares applied to the Raman microspectroscopic mapping yielded up to five distinct chemical components: three phases of TiO2 (rutile, TiO2-II, and anatase), quartz (SiO2), and substrate-adhesive epoxy. Spectral profiles and spatially resolved chemical maps of the pure chemical components were generated using MCR-ALS applied to the Raman microspectroscopic maps. The spatial resolution of the Raman microspectroscopic maps was enhanced in comparable, cost-effective analysis times by limiting spectral resolution
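
    The MCR-ALS factorisation itself can be sketched on synthetic data: a mixture matrix D = C·Sᵀ (pixels x wavenumbers) is factored by alternating non-negative least squares. The two Gaussian "spectra" below merely stand in for Raman bands and are not the study's measurements:

    ```python
    import numpy as np

    # Synthetic two-component Raman-like data: D = C_true @ S_true + noise
    rng = np.random.default_rng(3)
    axis = np.linspace(0, 1, 80)
    s_true = np.stack([np.exp(-((axis - 0.3) / 0.05) ** 2),
                       np.exp(-((axis - 0.7) / 0.05) ** 2)])  # (2, 80) spectra
    c_true = rng.random((50, 2))                               # pixel concentrations
    D = c_true @ s_true + 0.01 * rng.standard_normal((50, 80))

    # Alternating least squares with non-negativity enforced by clipping
    S = rng.random((2, 80))  # random initial spectral estimates
    for _ in range(100):
        C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)

    residual = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
    print(f"relative residual after ALS: {residual:.3f}")
    ```

    In practice MCR-ALS toolkits add further constraints (e.g. closure, unimodality) and initialise from purest-variable estimates rather than random spectra; the clipping step above is the simplest non-negativity scheme.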

  7. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP)—each with multiple Total Maximum Daily Load (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CP were met with the lowest possible BMP implementation cost. Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that the NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million—marginally higher, but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.

  8. 42 CFR 137.379 - Do Davis-Bacon wage rates apply to construction projects performed by Self-Governance Tribes...

    Science.gov (United States)

    2010-10-01

    ... projects performed by Self-Governance Tribes using Federal funds? 137.379 Section 137.379 Public Health... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Other § 137.379 Do Davis-Bacon wage rates apply to construction projects performed by Self-Governance Tribes using Federal funds? Davis-Bacon Act...

  9. Applying Earth Observation Data to agriculture risk management: a public-private collaboration to develop drought maps in North-East China

    Science.gov (United States)

    Surminski, S.; Holt Andersen, B.; Hohl, R.; Andersen, S.

    2012-04-01

    Earth Observation (EO) data can improve climate risk assessment, particularly in developing countries where densities of weather stations are low. Access to data that reflects exposure to weather and climate risks is a key condition for any successful risk management approach. This is of particular importance in the context of agriculture and drought risk, where historical data sets, accurate current data about crop growth and weather conditions, as well as information about potential future changes based on climate projections and socio-economic factors, are all relevant but often not available to stakeholders. Efforts to overcome these challenges in using EO data have so far been predominantly focused on developed countries, where satellite-derived Normalized Difference Vegetation Indexes (NDVI) and the MERIS Global Vegetation Indexes (MGVI) are already used within the agricultural sector for assessing and managing crop risks and to parameterize crop yields. This paper assesses how public-private collaboration can foster the application of these data techniques. The findings are based on a pilot project in North-East China, where severe droughts frequently impact the country's largest corn and soybean areas. With support from the European Space Agency (ESA), a consortium of meteorological experts, mapping firms and (re)insurance experts has worked to explore the potential use and value of EO data for managing crop risk and assessing exposure to drought for four provinces in North-East China (Heilongjiang, Jilin, Inner Mongolia and Liaoning). Combining NDVI and MGVI data with meteorological observations, to help alleviate shortcomings of NDVI specific to crop types and region, has resulted in the development of new drought maps for the period 2000-2011 in digital format at a high resolution (1x1 km). The observed benefits of this data application range from improved risk management to cost-effective drought monitoring and claims verification for insurance purposes.
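
    The NDVI used in the pilot is a simple band ratio; the reflectance values below are illustrative, not taken from the project data:

    ```python
    # NDVI contrasts near-infrared (strongly reflected by healthy leaf
    # structure) with red (absorbed by chlorophyll). Values lie in [-1, 1];
    # higher means denser, healthier vegetation.
    def ndvi(red, nir):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red)

    healthy = ndvi(red=0.05, nir=0.50)   # dense green canopy (illustrative)
    stressed = ndvi(red=0.30, nir=0.40)  # sparse / water-stressed cover
    print(round(healthy, 2), round(stressed, 2))
    ```

    A drought map of the kind described above compares such per-pixel index values against their long-term seasonal baseline, with station weather data used to correct crop- and region-specific biases.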

  10. We Must Invest in Applied Knowledge of Computational Neurosciences and Neuroinformatics as an Important Future in Malaysia: The Malaysian Brain Mapping Project.

    Science.gov (United States)

    Sumari, Putra; Idris, Zamzuri; Abdullah, Jafri Malin

    2017-03-01

    The Academy of Sciences Malaysia and the Malaysian Industry-Government Group for High Technology have been working hard to project the future of big data and neurotechnology usage up to the year 2050. On 19 September 2016, the International Brain Initiative was announced by US Under Secretary of State Thomas Shannon at a meeting that accompanied the United Nations General Assembly in New York City. This initiative was seen as an important effort but deemed costly for developing countries. At a concurrent meeting hosted by the US National Science Foundation at Rockefeller University, numerous countries discussed this massive project, which would require genuine collaboration between investigators in the realm of neuroethics. Malaysia's readiness to embark on using big data in the field of brain, mind and neurosciences, in preparation for the 4th Industrial Revolution, is an important investment in the country's future. The development of new strategies has also been encouraged by the involvement of the Society of Brain Mapping and Therapeutics, USA, and the International Neuroinformatics Coordinating Facility.

  11. The Relationship of Deep and Surface Study Approaches on Factual and Applied Test-Bank Multiple-Choice Question Performance

    Science.gov (United States)

    Yonker, Julie E.

    2011-01-01

    With the advent of online test banks and large introductory classes, instructors have often turned to textbook publisher-generated multiple-choice question (MCQ) exams in their courses. Multiple-choice questions are often divided into categories of factual or applied, thereby implicating levels of cognitive processing. This investigation examined…

  12. Comparing the performance of cluster random sampling and integrated threshold mapping for targeting trachoma control, using computer simulation.

    Directory of Open Access Journals (Sweden)

    Jennifer L Smith

    Full Text Available Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold-standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocols for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to be dependent on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a prevalence-dependent bias as the prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds.
In addition to strengthening the evidence base around choice of trachoma survey methodologies, this study illustrates
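
    The core mechanism the simulations probe (a school-based platform only sees enrolled children, whose TF risk differs from non-enrolled children) can be sketched as below; the prevalence, relative risk and enrollment values are illustrative assumptions, not the study's calibrated parameters.

```python
import random

def simulate_school_survey(true_prev, rel_risk_enrolled, enrollment_rate,
                           n_children=600, seed=1):
    """Crude sketch of an ITM-style school-based estimate.
    Per-group prevalences are solved so that the population prevalence
    equals true_prev, then only enrolled children are sampled."""
    rng = random.Random(seed)
    p_non = true_prev / (enrollment_rate * rel_risk_enrolled
                         + (1 - enrollment_rate))
    p_enr = rel_risk_enrolled * p_non
    sample = [rng.random() < p_enr for _ in range(n_children)]
    return sum(sample) / n_children

# Enrolled children at lower TF risk -> school-based estimate under-shoots 20%
est = simulate_school_survey(true_prev=0.20, rel_risk_enrolled=0.7,
                             enrollment_rate=0.8)
```

    Repeating such draws across many simulated districts, and classifying each estimate against a treatment threshold, is how the misclassification rates of the two designs can be compared.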

  13. SU-F-T-238: Analyzing the Performance of MapCHECK2 and Delta4 Quality Assurance Phantoms in IMRT and VMAT Plans

    Energy Technology Data Exchange (ETDEWEB)

    Lu, SH; Tsai, YC; Lan, HT; Wen, SY; Chen, LH; Kuo, SH; Wang, CW [National Taiwan University Hospital, Taipei City, Taiwan (China)

    2016-06-15

    Purpose: Intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) have been widely investigated for use in radiotherapy and found to produce highly conformal dose distributions. Delta4 is a novel cylindrical phantom consisting of 1069 p-type diodes that measures true treatments in the 3D target volume. The goal of this study was to compare the performance of the Delta4 diode array for IMRT and VMAT planning against an ion chamber and MapCHECK2. Methods: Fifty-four IMRT (n=9) and VMAT (n=45) plans were imported into the Philips Pinnacle Planning System 9.2 for recalculation with a solid water phantom, MapCHECK2, and the Delta4 phantom. To evaluate the difference between the measured and calculated dose, we used MapCHECK2 and Delta4 for a dose-map comparison and an ion chamber (PTW 31010 Semiflex 0.125 cc) for a point-dose comparison. Results: All 54 plans met the criterion of <3% difference for the point dose (at least two points) by ion chamber. The mean difference was 0.784% with a standard deviation of 1.962%. With criteria of 3 mm/3% in a gamma analysis, the average passing rates were 96.86%±2.19% and 98.42%±1.97% for MapCHECK2 and Delta4, respectively. The Student's t-test p-values for MapCHECK2/Delta4, ion chamber/Delta4, and ion chamber/MapCHECK2 were 0.0008, 0.2944, and 0.0002, respectively. There was no significant difference in passing rates between MapCHECK2 and Delta4 for the IMRT plans (p = 0.25). However, a higher passing rate was observed for Delta4 (98.36%) as compared to MapCHECK2 (96.64%, p < 0.0001) for the VMAT plans. Conclusion: The Pinnacle planning system can accurately calculate doses for VMAT and IMRT plans. The Delta4 shows results similar to the ion chamber and MapCHECK2, and is an efficient tool for patient-specific quality assurance, especially for rotational therapy.
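
    The 3 mm/3% gamma analysis behind the passing rates above can be sketched in one dimension as follows; the dose profile and the exhaustive search over measured points are simplifications of clinical gamma software, not the study's implementation.

```python
import numpy as np

def gamma_index(ref, meas, positions, dta=3.0, dd=0.03):
    """Toy 1-D gamma analysis with 3 mm distance-to-agreement and 3%
    local dose-difference criteria. For each reference point, take the
    minimum combined metric over all measured points; gamma <= 1 passes."""
    gammas = []
    for x_r, d_r in zip(positions, ref):
        vals = []
        for x_m, d_m in zip(positions, meas):
            dist2 = ((x_m - x_r) / dta) ** 2
            dose2 = ((d_m - d_r) / (dd * d_r)) ** 2
            vals.append(np.sqrt(dist2 + dose2))
        gammas.append(min(vals))
    return np.array(gammas)

pos = np.arange(0.0, 10.0, 1.0)            # detector positions, mm
reference = 100.0 + 5.0 * np.sin(pos / 3)  # arbitrary dose profile, cGy
measured = reference * 1.01                # a uniform +1% dose error
g = gamma_index(reference, measured, pos)
passing_rate = 100.0 * np.mean(g <= 1.0)   # percent of points passing
```

    A uniform 1% error stays well inside the 3% dose criterion, so every point passes; larger or spatially varying errors drive individual gamma values above 1 and lower the passing rate.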

  14. MCDA APPLIED TO PERFORMANCE APPRAISAL OF SHORT-HAUL TRUCK DRIVERS: A CASE STUDY IN A PORTUGUESE TRUCKING COMPANY

    Directory of Open Access Journals (Sweden)

    Raquel Morte

    2015-03-01

    Full Text Available Performance appraisal increasingly assumes a more important role in any organizational environment. In the trucking industry, drivers are the company's image, and for this reason it is important to develop and increase their performance and commitment to the company's goals. This paper aims to create a performance appraisal model for truck drivers, based on a multi-criteria decision aid methodology. The PROMETHEE and MMASSI methodologies were adapted using the criteria applied for performance appraisal by the trucking company studied. The appraisal involved all the truck drivers, their supervisors and the company's Managing Director. The final output is a ranking of the drivers, based on their performance, for each of the scenarios used. The results are to be used as a decision-making tool to allocate drivers to the domestic haul service.
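
    A minimal sketch of a PROMETHEE-style ranking (the "usual" preference function and net outranking flows) is shown below; the driver scores, criteria names and weights are hypothetical, not the company's appraisal data.

```python
import numpy as np

def promethee_net_flows(scores, weights):
    """Net outranking flows phi with the 'usual' preference function
    (preference = 1 if alternative a beats b on a criterion, else 0).
    scores: alternatives x criteria matrix; weights sum to 1."""
    n = scores.shape[0]
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref_ab = np.dot(weights, (scores[a] > scores[b]).astype(float))
            pref_ba = np.dot(weights, (scores[b] > scores[a]).astype(float))
            phi[a] += (pref_ab - pref_ba) / (n - 1)
    return phi

# Hypothetical appraisal of three drivers on three criteria
# (punctuality, fuel economy, safety record)
scores = np.array([[8, 6, 9],
                   [7, 8, 6],
                   [5, 5, 7]])
weights = np.array([0.5, 0.2, 0.3])
phi = promethee_net_flows(scores, weights)
ranking = np.argsort(-phi)  # best driver first
```

    Changing the weights produces the different scenario rankings the abstract mentions; net flows always sum to zero, so the scores are purely relative.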

  15. The Balanced Scorecard and Beyond – Applying Theories of Performance Measurement, Employment and Rewards in Management Accounting Education

    OpenAIRE

    Eisenberg, Paul

    2016-01-01

    This study applies the prevailing scholarly theories of strategic management, employment decisions, cost accounting and share reward schemes to a panel of questions raised by Colin Drury (2012) in the case study of the fictitious company Integrated Technology Services (UK) Ltd., ITS (UK). The paper provides model answers which can be used when working with the case study at institutions of higher education. The merit of the work lies in three areas. First, it provides an overview of theories ...

  16. Soil-applied zinc and copper suppress cadmium uptake and improve the performance of cereals and legumes.

    Science.gov (United States)

    Murtaza, Ghulam; Javed, Wasim; Hussain, Amir; Qadir, Manzoor; Aslam, Muhammad

    2017-02-01

    The present study aimed to evaluate the effect of soil-applied Zn and Cu on absorption and accumulation of Cd applied through irrigation water in legume (chickpea and mung bean) and cereal (wheat and maize) crops. The results revealed that Cd in irrigation water at higher levels (2 and 5 mg L-1) significantly (p < 0.05) reduced plant growth, while soil application of Zn and Cu, singly or combined, favored biomass production. Plant tissue Cd concentration increased linearly with the increasing application of Cd via irrigation water. Cd application caused a redistribution of metals in grains, straw, and roots, with the highest concentrations of Cd, Zn, and Cu occurring in roots, followed by straw and grains. Zinc addition to soil alleviated Cd toxicity by decreasing Cd concentration in plant tissues due to a possible antagonistic effect. The addition of Cu to the soil had no consistent effects on Zn and Cd contents across all crops. Inhibitory effects of Cd on the uptake and accumulation of Zn and Cu were also observed at higher Cd loads. Thus, soil-applied Zn and Cu antagonized Cd, helping the plant to cope with its toxicity and suppressing the toxic effects of Cd in plant tissues, thus favoring plant growth.

  17. Development of a cost efficient methodology to perform allocation of flammable and toxic gas detectors applying CFD tools

    Energy Technology Data Exchange (ETDEWEB)

    Storch, Rafael Brod; Rocha, Gean Felipe Almeida [Det Norske Veritas (DNV), Rio de Janeiro, RJ (Brazil); Nalvarte, Gladys Augusta Zevallos [Det Norske Veritas (DNV), Novik (Norway)

    2012-07-01

    This paper presents a computational procedure, developed by DNV, for the allocation and quantification of flammable and toxic gas detectors. The proposed methodology applies Computational Fluid Dynamics (CFD) simulations as well as operational and safety characteristics of the analyzed region to assess the optimal number of toxic and flammable gas detectors and their optimal location. A probabilistic approach is also used when applying the DNV software ThorEXPRESSLite, following NORSOK Z013 Annex G and presented in HUSER et al. 2000 and HUSER et al. 2001, when the flammable gas detectors are assessed. A DNV-developed program, DetLoc, runs the procedure described above iteratively, leading to an automatic calculation of the location and number of gas detectors. The main advantage of this methodology is its independence from human interaction in the gas detector allocation, leading to a more precise allocation free of human judgment. Thus, a reproducible allocation is generated when comparing several different analyses, and globally consistent criteria are guaranteed across different regions in the same project. A case study applying the proposed methodology is presented. (author)

  18. Growth anisotropy effect of bulk high temperature superconductors on the levitation performance in the applied magnetic field

    Science.gov (United States)

    Zheng, J.; Liao, X. L.; Jing, H. L.; Deng, Z. G.; Yen, F.; Wang, S. Y.; Wang, J. S.

    2013-10-01

    Growth anisotropies of bulk high temperature superconductors (HTSCs) fabricated by a top-seeded melt texture growth process, that is, the different pinning effects in the growth sectors (GSs) and growth sector boundaries (GSBs), affect the macroscopic flux trapping and levitation performance of bulk HTSCs. Previous work (Physics Procedia, 36 (2012) 1043) found that a bulk HTSC array with an aligned GSB pattern (AGSBP) exhibits better capability for levitation and suppression of levitation force decay above a permanent magnet guideway (PMG) compared with a misaligned GSB pattern (MGSBP). In this paper, we further examine this growth anisotropy effect on the maglev performance of a double-layer bulk HTSC. In contrast to reported trapped flux cases (Supercond. Sci. Technol. 19 (2006) S466), two superposed bulk HTSCs with the same AGSBP relative to the PMG are found to show better maglev performance. These results are helpful and support a new way toward the performance optimization of present HTS maglev systems.

  19. The Knowledge management and organizational performance, a study applied; La gestion del conocimiento y el desempeno organizacional. Un estudio aplicado

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Cabarcos, M. A.; Gottling Oliveira Monteiro, S.

    2010-07-01

    The main purpose of this work is to examine, in different ways, the relation between knowledge management and the performance of a group of companies situated in the metropolitan area of Porto (AMP). To reach this objective, different groups of companies were explored and characterized according to the knowledge management styles they adopted, in order to relate these groups to various non-financial performance measures and identify differences. (Author) 56 refs.

  20. Growth anisotropy effect of bulk high temperature superconductors on the levitation performance in the applied magnetic field

    International Nuclear Information System (INIS)

    Zheng, J.; Liao, X.L.; Jing, H.L.; Deng, Z.G.; Yen, F.; Wang, S.Y.; Wang, J.S.

    2013-01-01

    Highlights: • The single-layer bulk HTSC with AGSBP obtains better levitation performance than that with MGSBP. • The double-layer bulk with AGSBP also obtains better levitation performance than that with MGSBP. • The double-layer result contrasts with the trapped-field case when pursuing a high trapped field. • The optimization is highlighted by simple and easy operation, and is thus economical in practice. -- Abstract: Growth anisotropies of bulk high temperature superconductors (HTSCs) fabricated by a top-seeded melt texture growth process, that is, the different pinning effects in the growth sectors (GSs) and growth sector boundaries (GSBs), affect the macroscopic flux trapping and levitation performance of bulk HTSCs. Previous work (Physics Procedia, 36 (2012) 1043) found that a bulk HTSC array with an aligned GSB pattern (AGSBP) exhibits better capability for levitation and suppression of levitation force decay above a permanent magnet guideway (PMG) compared with a misaligned GSB pattern (MGSBP). In this paper, we further examine this growth anisotropy effect on the maglev performance of a double-layer bulk HTSC. In contrast to reported trapped flux cases (Supercond. Sci. Technol. 19 (2006) S466), two superposed bulk HTSCs with the same AGSBP relative to the PMG are found to show better maglev performance. These results are helpful and support a new way toward the performance optimization of present HTS maglev systems

  1. Growth anisotropy effect of bulk high temperature superconductors on the levitation performance in the applied magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, J., E-mail: jzheng@swjtu.edu.cn; Liao, X.L.; Jing, H.L.; Deng, Z.G.; Yen, F.; Wang, S.Y.; Wang, J.S.

    2013-10-15

    Highlights: • The single-layer bulk HTSC with AGSBP obtains better levitation performance than that with MGSBP. • The double-layer bulk with AGSBP also obtains better levitation performance than that with MGSBP. • The double-layer result contrasts with the trapped-field case when pursuing a high trapped field. • The optimization is highlighted by simple and easy operation, and is thus economical in practice. -- Abstract: Growth anisotropies of bulk high temperature superconductors (HTSCs) fabricated by a top-seeded melt texture growth process, that is, the different pinning effects in the growth sectors (GSs) and growth sector boundaries (GSBs), affect the macroscopic flux trapping and levitation performance of bulk HTSCs. Previous work (Physics Procedia, 36 (2012) 1043) found that a bulk HTSC array with an aligned GSB pattern (AGSBP) exhibits better capability for levitation and suppression of levitation force decay above a permanent magnet guideway (PMG) compared with a misaligned GSB pattern (MGSBP). In this paper, we further examine this growth anisotropy effect on the maglev performance of a double-layer bulk HTSC. In contrast to reported trapped flux cases (Supercond. Sci. Technol. 19 (2006) S466), two superposed bulk HTSCs with the same AGSBP relative to the PMG are found to show better maglev performance. These results are helpful and support a new way toward the performance optimization of present HTS maglev systems.

  2. Physical protection: threat response and performance goals as applied at the nuclear material inspection and storage (NMIS) building

    International Nuclear Information System (INIS)

    Sanford, T.H.

    1982-01-01

    Only one aspect of nuclear security has been discussed here, a disciplined approach to physical protection systems (PPS) design. The best security against a multitude of threats to the nuclear industry is a dynamic and multifaceted safeguards program. It is one that combines PPS design with employee screening, reliability or behavioral observation programs, procedural control, assessment techniques, response capabilities, and security hardware. To be effective, such a program must be supported by management and applied uniformly to all personnel, including the safeguards and security staff

  3. Proposal of requirements for performance in Brazil for systems of external individual monitoring for neutrons applying the TLD-albedo technique

    International Nuclear Information System (INIS)

    Martins, Marcelo M.; Mauricio, Claudia L.P.; Pereira, Walsan W.; Fonseca, Evaldo S. da; Silva, Ademir X.

    2009-01-01

    This work presents a proposal of criteria and conditions for the regulation in Brazil of individual neutron monitoring systems applying the albedo technique with thermoluminescent detectors. Tests are proposed for characterizing the performance of such systems, based on the ISO 21909 standard and on the experience of the authors.

  4. PERFORMANCE EVALUATION OF AN INNOVATIVE FIBER REINFORCED GEOPOLYMER SPRAY-APPLIED MORTAR FOR LARGE DIAMETER WASTEWATER MAIN REHABILITATION IN HOUSTON, TX

    Science.gov (United States)

    This report describes the performance evaluation of a fiber reinforced geopolymer spray-applied mortar, which has potential as a structural alternative to traditional open cut techniques used in large-diameter sewer pipes. Geopolymer is a sustainable green material that incorpor...

  5. MULTICRITERIA ANALYSIS OF FOOTBALL MATCH PERFORMANCES: COMPOSITION OF PROBABILISTIC PREFERENCES APPLIED TO THE ENGLISH PREMIER LEAGUE 2015/2016

    Directory of Open Access Journals (Sweden)

    Vitor Principe

    Full Text Available ABSTRACT This article aims to analyze the technical performance of football teams in the FA Premier League during the 2015/2016 season. Data on twenty clubs over 38 matches per club are considered, using 23 variables. These variables have been explored in the football literature and address different features of technical performance. The different configurations of the data for teams in separate segments motivated the multi-criteria approach, which enables identification of strong and weak sectors in each segment. The uncertainty as to the outcome of football matches and the imprecision of the measures indicated the use of Composition of Probabilistic Preferences (CPP) to model the problem. "R" software was used in the modeling and computation. The CPP global scores obtained were more consistent with the final classification than those of other methods. CPP scores revealed different performances for particular groups of variables, indicating aspects to be improved and explored.

  6. Examination of the gamma equilibrium point hypothesis when applied to single degree of freedom movements performed with different inertial loads.

    Science.gov (United States)

    Bellomo, A; Inbar, G

    1997-01-01

    One of the theories of human motor control is the gamma Equilibrium Point Hypothesis. It is an attractive theory since it offers a simple control scheme in which the planned trajectory shifts monotonically from an initial to a final equilibrium state. The feasibility of this model was tested by reconstructing the virtual trajectory and the stiffness profiles for movements performed with different inertial loads and examining them. Three types of movements were tested: passive movements, targeted movements, and repetitive movements. Each of the movements was performed with five different inertial loads. Plausible virtual trajectories and stiffness profiles were reconstructed based on the gamma Equilibrium Point Hypothesis for the three different types of movements performed with different inertial loads. However, the simple control strategy supported by the model, in which the planned trajectory shifts monotonically from an initial to a final equilibrium state, could not be supported for targeted movements performed with an added inertial load. To test the feasibility of the model further, we must examine the probability that the human motor control system would choose a planned trajectory more complicated than the actual trajectory.

  7. 75 FR 47592 - Final Test Guideline; Product Performance of Skin-applied Insect Repellents of Insect and Other...

    Science.gov (United States)

    2010-08-06

    .... Crystal Dr., Arlington, VA. The hours of operation of this Docket Facility are from 8:30 a.m. to 4 p.m... has been working to revise this Product Performance Test Guideline since it was published as a... further strengthening the scientific and ethical conduct of this kind of research; these have been...

  8. The Effects of Financial Incentives on Women’s Performance: The Tournament Theory Applied to Female Tennis Players

    Directory of Open Access Journals (Sweden)

    Matthieu LLORCA

    2017-06-01

    Full Text Available The purpose of this paper is to analyze the management model of women's professional tennis using the theoretical framework of tournament theory. Indeed, this sport is particularly appropriate for studying the effects of financial incentives on women's performance in the context of competitive elimination tournaments. Moreover, we take into account the direct opposition between players by building two relative performance indicators. Empirical tests are conducted, using the Ordinary Least Squares method, on all tournaments played by the 30 best women's tennis players over the 2011 season. An interesting implication is that one tournament theory principle, the incentive effect, is confirmed. In other words, an undistributed prize structure between tournament rounds increases player performance. However, the other consequence of tournament theory, the participative effect, is rejected, because the monetary gains distributed by the tournament organizer (either the premium earned or the total dollar endowment) do not induce better player performance.

  9. Applying dynamic data collection to improve dry electrode system performance for a P300-based brain-computer interface

    Science.gov (United States)

    Clements, J. M.; Sellers, E. W.; Ryan, D. B.; Caves, K.; Collins, L. M.; Throckmorton, C. S.

    2016-12-01

    Objective. Dry electrodes have an advantage over gel-based ‘wet’ electrodes by providing quicker set-up time for electroencephalography recording; however, the potentially poorer contact can result in noisier recordings. We examine the impact that this may have on brain-computer interface communication and potential approaches for mitigation. Approach. We present a performance comparison of wet and dry electrodes for use with the P300 speller system in both healthy participants and participants with communication disabilities (ALS and PLS), and investigate the potential for a data-driven dynamic data collection algorithm to compensate for the lower signal-to-noise ratio (SNR) in dry systems. Main results. Performance results from sixteen healthy participants obtained in the standard static data collection environment demonstrate a substantial loss in accuracy with the dry system. Using a dynamic stopping algorithm, performance may have been improved by collecting more data in the dry system for ten healthy participants and eight participants with communication disabilities; however, the algorithm did not fully compensate for the lower SNR of the dry system. An analysis of the wet and dry system recordings revealed that delta and theta frequency band power (0.1-4 Hz and 4-8 Hz, respectively) are consistently higher in dry system recordings across participants, indicating that transient and drift artifacts may be an issue for dry systems. Significance. Using dry electrodes is desirable for reduced set-up time; however, this study demonstrates that online performance is significantly poorer than for wet electrodes for users with and without disabilities. We test a new application of dynamic stopping algorithms to compensate for poorer SNR. Dynamic stopping improved dry system performance; however, further signal processing efforts are likely necessary for full mitigation.
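
    The delta and theta band-power comparison reported above can be sketched with a simple periodogram; the synthetic signal, sampling rate and band edges below are illustrative assumptions, not the authors' EEG pipeline.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi) Hz from a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()

fs = 256                                 # Hz, typical EEG sampling rate
t = np.arange(0, 4, 1.0 / fs)            # 4 s epoch
# Synthetic trace: strong 2 Hz drift (delta) plus weaker 10 Hz alpha rhythm,
# mimicking the drift-dominated recordings described for dry electrodes
eeg = 3.0 * np.sin(2 * np.pi * 2 * t) + 1.0 * np.sin(2 * np.pi * 10 * t)
delta = band_power(eeg, fs, 0.1, 4.0)
theta = band_power(eeg, fs, 4.0, 8.0)
alpha = band_power(eeg, fs, 8.0, 13.0)
```

    Comparing such band powers between wet and dry recordings is one way to quantify the transient and drift artifacts the abstract attributes to dry systems.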

  10. Study of weather and thermal comfort influence on sport performance: prognostic analysis applied to Rio de Janeiro's city marathon

    Science.gov (United States)

    Pallotta, M.; Herdies, D. L.; Gonçalves, L. G.

    2013-05-01

    There is nowadays a growing interest in the influence and impacts of weather and climate on human life. The analysis of weather conditions shows the utility of this type of tool when applied to sports. These conditions act as a differential in strategy and training, especially for outdoor sports. This study aimed to develop weather forecasts and thermal comfort evaluations targeted at sports, with the hope that the results can be used in the development of products and weather services for the 2016 Olympic Games in Rio de Janeiro City. The use of weather forecasting applied to sport proved efficient for the case of the Rio de Janeiro City Marathon, especially due to the high spatial resolution. The WRF simulations for the three marathons studied showed good results for temperature, atmospheric pressure, and relative humidity. On the other hand, the wind forecast showed a pattern of overestimating the real situation in all cases. It was concluded that the WRF model provides, in general, more representative simulations from 36 hours in advance, and with 18 hours of integration they were even better, describing efficiently the synoptic situation that would be found. A review of weather conditions and thermal comfort at specific points of the marathon route showed that there are significant differences between the stages of the marathon, which makes it possible to plan the competition strategy according to thermal comfort. It was concluded that there is a relationship between more thermally comfortable (uncomfortable) conditions and better (worse) times in the Rio de Janeiro City Marathon.

  11. Applying distance-to-target weighing methodology to evaluate the environmental performance of bio-based energy, fuels, and materials

    International Nuclear Information System (INIS)

    Weiss, Martin; Patel, Martin; Heilmeier, Hermann; Bringezu, Stefan

    2007-01-01

    The enhanced use of biomass for the production of energy, fuels, and materials is one of the key strategies towards sustainable production and consumption. Various life cycle assessment (LCA) studies demonstrate the great potential of bio-based products to reduce both the consumption of non-renewable energy resources and greenhouse gas emissions. However, the production of biomass requires agricultural land and is often associated with adverse environmental effects such as eutrophication of surface and ground water. Decision making in favor of or against bio-based and conventional fossil product alternatives therefore often requires weighing of environmental impacts. In this article, we apply distance-to-target weighing methodology to aggregate LCA results obtained in four different environmental impact categories (i.e., non-renewable energy consumption, global warming potential, eutrophication potential, and acidification potential) to one environmental index. We include 45 bio- and fossil-based product pairs in our analysis, which we conduct for Germany. The resulting environmental indices for all product pairs analyzed range from -19.7 to +0.2 with negative values indicating overall environmental benefits of bio-based products. Except for three options of packaging materials made from wheat and cornstarch, all bio-based products (including energy, fuels, and materials) score better than their fossil counterparts. Comparing the median values for the three options of biomass utilization reveals that bio-energy (-1.2) and bio-materials (-1.0) offer significantly higher environmental benefits than bio-fuels (-0.3). The results of this study reflect, however, subjective value judgments due to the weighing methodology applied. Given the uncertainties and controversies associated not only with distance-to-target methodologies in particular but also with weighing approaches in general, the authors strongly recommend using weighing for decision finding only as a
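
    The distance-to-target aggregation described above can be sketched as follows; the impact differences, target values and equal weights are invented for illustration and do not reproduce the study's German reference data.

```python
def environmental_index(impacts, targets, weights=None):
    """Distance-to-target weighing: each impact category result is
    normalized by a policy target and the weighted ratios are summed
    into a single index. With differences defined as bio-based minus
    fossil, a negative index means a net benefit for the bio-based
    product."""
    if weights is None:
        weights = {k: 1.0 for k in impacts}
    return sum(weights[k] * impacts[k] / targets[k] for k in impacts)

# Hypothetical per-kg differences (bio-based minus fossil product)
impacts = {"energy_MJ": -30.0, "gwp_kgCO2e": -2.0,
           "eutroph_kgPO4e": 0.004, "acid_kgSO2e": 0.006}
# Hypothetical distance-to-target normalization values
targets = {"energy_MJ": 100.0, "gwp_kgCO2e": 10.0,
           "eutroph_kgPO4e": 0.05, "acid_kgSO2e": 0.1}
index = environmental_index(impacts, targets)
```

    Here the bio-based product saves energy and greenhouse gases but worsens eutrophication and acidification; the single index trades these off, which is exactly the value judgment the authors caution about.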

  12. The Effect of Appropriately and Inappropriately Applied Automation for the Control of Unmanned Systems on Operator Performance

    Science.gov (United States)

    2009-09-01

    2.1 Participants: Twelve civilians (7 men and 5 women) with no prior experience with the Robotic NCO simulation participated in this study. The mean...operators in a multitasking environment. Subject terms: design guidelines, robotics, simulation, unmanned systems, automation.

  13. Evaluation of Teaching Performance of English Courses by Applying Data Envelopment Analysis and Two-phase Segmentation

    Directory of Open Access Journals (Sweden)

    Bernard Montoneri

    2013-05-01

    Full Text Available Effective teaching performance is a crucial factor contributing to students' learning improvement. Students' ratings of teachers at the end of each semester can indirectly provide valuable information about teachers' performance. This paper selects classes of freshman students taking an English course at a university in Taiwan from the academic years 2004 to 2006 as the research object. We adopt data envelopment analysis, a reliable and robust evaluation method, to identify the relative efficiency of each class. The calculation is performed in two phases. In phase 1, all the classes are in the same pool. The results of the numerical analysis in phase 1 are used to clarify whether the existing teaching methods can achieve the desired results and what the improved methods are. Based on the calculation of phase 1, we segment all the classes into two groups according to the contribution of their output indicators in calculating efficiency values. The empirical results identify the more objectively efficient classes and reveal that the evaluated classes refer to different efficient classes in different phases, with their ranking order changing accordingly. This method can help to provide some concrete and practical teaching strategies for the inefficient classes.

  14. High resolution and high sensitivity methods for oligosaccharide mapping and characterization by normal phase high performance liquid chromatography following derivatization with highly fluorescent anthranilic acid.

    Science.gov (United States)

    Anumula, K R; Dhume, S T

    1998-07-01

    Facile labeling of oligosaccharides (acidic and neutral) in a nonselective manner was achieved with highly fluorescent anthranilic acid (AA, 2-aminobenzoic acid) (more than twice the intensity of 2-aminobenzamide, AB) for specific detection at very high sensitivity. Quantitative labeling in acetate-borate buffered methanol (approximately pH 5.0) at 80 degrees C for 60 min resulted in negligible or no desialylation of the oligosaccharides. A high-resolution high performance liquid chromatographic method was developed for quantitative oligosaccharide mapping on a polymeric-NH2 bonded (Astec) column operating under normal phase and anion exchange (NP-HPAEC) conditions. For isolation of oligosaccharides from the map by simple evaporation, the chromatographic conditions were developed to use volatile acetic acid-triethylamine buffer (approximately pH 4.0) systems. The mapping and characterization technology was developed using well-characterized standard glycoproteins. The fluorescent oligosaccharide maps were similar to those obtained by high pH anion-exchange chromatography with pulsed amperometric detection (HPAEC-PAD), except that the fluorescent maps contained more defined peaks. In the map, the oligosaccharides separated into groups based on charge, size, linkage, and overall structure in a manner similar to HPAEC-PAD, with a contribution of the -COOH function from the label, anthranilic acid. However, the selectivity of the column for sialic acid linkages was different. A second-dimension normal phase HPLC (NP-HPLC) method was developed on an amide column (TSK Gel amide-80) for separation of AA-labeled neutral complex type and isomeric structures of high mannose type oligosaccharides. The oligosaccharides labeled with AA are compatible with biochemical and biophysical techniques, and the use of matrix-assisted laser desorption mass spectrometry for rapid determination of the oligosaccharide mass map of glycoproteins is demonstrated. High resolution of NP-HPAEC and NP-HPLC methods

  15. Analysis of applied forces and electromyography of back and shoulders muscles when performing a simulated hand scaling task.

    Science.gov (United States)

    Porter, William; Gallagher, Sean; Torma-Krajewski, Janet

    2010-05-01

    Hand scaling is a physically demanding task responsible for numerous overexertion injuries in underground mining. Scaling requires the miner to use a long pry bar to remove loose rock, reducing the likelihood of rock fall injuries. The experiments described in this article simulated "rib" scaling (scaling a mine wall) from an elevated bucket to examine force generation and electromyographic responses using two types of scaling bars (steel and fiberglass-reinforced aluminum) at five target heights ranging from floor level to 176 cm. Ten male and six female subjects were tested in separate experiments. Peak and average force applied at the scaling bar tip and normalized electromyography (EMG) of the left and right pairs of the deltoid and erectores spinae muscles were obtained. Work height significantly affected peak prying force during scaling activities with highest force capacity at the lower levels. Bar type did not affect force generation. However, use of the lighter fiberglass bar required significantly more muscle activity to achieve the same force. Results of these studies suggest that miners scale points on the rock face that are below their knees, and reposition the bucket as often as necessary to do so. Published by Elsevier Ltd.

  16. Performance of a shallow-focus applied-magnetic-field diode for ion-beam-transport experiments

    Energy Technology Data Exchange (ETDEWEB)

    Young, F.C.; Neri, J.M.; Ottinger, P.F. [Naval Research Lab., Washington, DC (United States)]; Rose, D.V. [JAYCOR, Vienna, VA (United States)]; Jones, T.G.; Oliver, B.V.

    1997-12-31

    An applied-magnetic-field ion diode to study the transport of intense ion beams for light-ion inertial confinement fusion is being operated on the Gamble II generator at NRL. A large-area (145-cm²), shallow-focusing diode is used to provide the ion beam required for self-pinched transport (SPT) experiments. Experiments have demonstrated focusing at 70 cm for 1.2-MV, 40-kA protons. Beyond the focus, the beam hollows out, consistent with 20–30 mrad microdivergence. The effect of the counter-pulse B-field on altering the ion-beam trajectories and improving the focus has been diagnosed with a multiple-pinhole camera using radiochromic film. This diagnostic is also used to determine the radial and azimuthal uniformity of ion emission at the anode for different B-field conditions. Increasing the diode voltage to 1.5 MV and optimizing the ion current are planned before initiating SPT experiments. Experiments to measure the spatial beam profile at focus, i.e., the SPT channel entrance, are in progress. Results are presented.

  17. High Performance Reduced Order Models for Wind Turbines with Full-Scale Converters Applied on Grid Interconnection Studies

    DEFF Research Database (Denmark)

    Pereira, Heverton A.; F. Cupertino, Allan; Teodorescu, Remus

    2014-01-01

    Wind power has achieved technological evolution, and Grid Code (GC) requirements forced wind industry consolidation in the last three decades. However, more studies are necessary to understand how the dynamics inherent in this energy source interact with the power system. Traditional energy...... of Absolute Error (NIAE). Models are analyzed during wind speed variations and balanced voltage dip. During faults, WPPs must be able to supply reactive power to the grid, and this characteristic is analyzed. Using the proposed performance index, it is possible to conclude if a reduced order model is suitable...
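The NIAE index mentioned in the abstract can be illustrated with a short sketch. One common definition is assumed here (the paper may normalize differently): the reduced-order model's output is scored against the detailed model's, with 1.0 meaning a perfect match.

```python
# Hedged sketch of a Normalized Integral of Absolute Error (NIAE) index.
# Definition assumed: 1 - (integral of |error|) / (integral of |reference|),
# discretized as sums over equally spaced samples.
def niae(reference, reduced, dt=1.0):
    err = sum(abs(a - b) for a, b in zip(reference, reduced)) * dt
    norm = sum(abs(a) for a in reference) * dt
    return 1.0 - err / norm  # 1.0 means the reduced model matches exactly

# invented sample traces, e.g. per-unit voltage during a dip and recovery
ref = [1.0, 0.9, 0.8, 0.85, 0.9]
red = [1.0, 0.88, 0.79, 0.86, 0.9]
score = niae(ref, red)  # close to 1.0 for a good reduced model
```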

  18. The impact of extracellular matrix coatings on the performance of human renal cells applied in bioartificial kidneys.

    Science.gov (United States)

    Zhang, Huishi; Tasnim, Farah; Ying, Jackie Y; Zink, Daniele

    2009-05-01

    Extracellular matrix (ECM) coatings have been used to improve cell performance in bioartificial kidneys (BAKs). However, their effects on primary human renal proximal tubule cells (HPTCs), which is the most important cell type with regard to clinical applications, have not been tested systematically. Also, the effects of ECM coatings on cell performance during extended time periods have not been addressed. Studying such effects is important for the development of long-term applications. Herein we analyzed for the first time systematically the effects of ECM coatings on proliferation and differentiation of human renal cells and we addressed, in particular, formation and long-term maintenance of differentiated epithelia. Our study focused on HPTCs. ECM coatings were tested alone or in combination with the growth factor bone morphogenetic protein-7 and other additives. The best results were obtained with ECMs consisting of the basal lamina components, laminin or collagen IV, and differentiated epithelia could be maintained up to three weeks on these ECMs. These results provide for the first time clear evidence which kinds of ECM coatings are most appropriate for BAKs. The results also showed that alpha-SMA-expressing myofibroblasts played a key role in the final disruption of differentiated epithelia. This suggests that epithelial-to-mesenchymal transition-related processes might be the major obstacle in long-term applications and such processes should be carefully addressed in future BAK-related research.

  19. Applying Human-performance Models to Designing and Evaluating Nuclear Power Plants: Review Guidance and Technical Basis

    Energy Technology Data Exchange (ETDEWEB)

    O'Hara, J.M.

    2009-11-30

    Human performance models (HPMs) are simulations of human behavior with which we can predict human performance. Designers use them to support their human factors engineering (HFE) programs for a wide range of complex systems, including commercial nuclear power plants. Applicants to U.S. Nuclear Regulatory Commission (NRC) can use HPMs for design certifications, operating licenses, and license amendments. In the context of nuclear-plant safety, it is important to assure that HPMs are verified and validated, and their usage is consistent with their intended purpose. Using HPMs improperly may generate misleading or incorrect information, entailing safety concerns. The objective of this research was to develop guidance to support the NRC staff's reviews of an applicant's use of HPMs in an HFE program. The guidance is divided into three topical areas: (1) HPM Verification, (2) HPM Validation, and (3) User Interface Verification. Following this guidance will help ensure the benefits of HPMs are achieved in a technically sound, defensible manner. During the course of developing this guidance, I identified several issues that could not be addressed; they also are discussed.

  20. Applying Human-performance Models to Designing and Evaluating Nuclear Power Plants: Review Guidance and Technical Basis

    International Nuclear Information System (INIS)

    O'Hara, J.M.

    2009-01-01

    Human performance models (HPMs) are simulations of human behavior with which we can predict human performance. Designers use them to support their human factors engineering (HFE) programs for a wide range of complex systems, including commercial nuclear power plants. Applicants to U.S. Nuclear Regulatory Commission (NRC) can use HPMs for design certifications, operating licenses, and license amendments. In the context of nuclear-plant safety, it is important to assure that HPMs are verified and validated, and their usage is consistent with their intended purpose. Using HPMs improperly may generate misleading or incorrect information, entailing safety concerns. The objective of this research was to develop guidance to support the NRC staff's reviews of an applicant's use of HPMs in an HFE program. The guidance is divided into three topical areas: (1) HPM Verification, (2) HPM Validation, and (3) User Interface Verification. Following this guidance will help ensure the benefits of HPMs are achieved in a technically sound, defensible manner. During the course of developing this guidance, I identified several issues that could not be addressed; they also are discussed.

  1. Integrating views on support for mid-level health worker performance: a concept mapping study with regional health system actors in rural Guatemala.

    Science.gov (United States)

    Hernández, Alison R; Hurtig, Anna-Karin; Dahlblom, Kjerstin; San Sebastián, Miguel

    2015-10-08

    Mid-level health workers are on the front lines in underserved areas in many LMICs, and their performance is critical for improving the health of vulnerable populations. However, improving performance in low-resource settings is complex and highly dependent on the organizational context of local health systems. This study aims to examine the views of actors from different levels of a regional health system in Guatemala on actions to support the performance of auxiliary nurses, a cadre of mid-level health workers with a prominent role in public sector service delivery. A concept mapping study was carried out to develop an integrated view on organizational support and identify locally relevant strategies for strengthening performance. A total of 93 regional and district managers, and primary and secondary care health workers participated in generating ideas on actions needed to support auxiliary nurses' performance. Ideas were consolidated into 30 action items, which were structured through sorting and rating exercises involving a total of 135 managers and health workers. Maps depicting participants' integrated views on domains of action and dynamics in sub-groups' interests were generated using a sequence of multivariate statistical analyses, and interpreted by regional managers. The combined input of health system actors provided a multi-faceted view of actions needed to support performance, organized in six domains: Communication and coordination, Tools to orient work, Organizational climate of support, Motivation through recognition, Professional development and Skills development. The nature of relationships across hierarchical levels was identified as a cross-cutting theme. Pattern matching and go-zone maps indicated directions for action based on areas of consensus and difference across sub-groups of actors. This study indicates that auxiliary nurses' performance is interconnected with the performance of other health system actors who

  2. q-Deformed nonlinear maps

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, Volume 64, Issue 3. Keywords: nonlinear dynamics; logistic map; q-deformation; Tsallis statistics. As a specific example, a q-deformation procedure is applied to the logistic map. Compared ...
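A minimal numerical sketch of a q-deformed logistic map, assuming the Tsallis-style deformation [x]_q = (1 - q**x)/(1 - q); the paper's exact deformation scheme may differ.

```python
# Hedged sketch: q-deformed logistic map with the Tsallis-style deformation
# [x]_q = (1 - q**x) / (1 - q); as q -> 1, [x]_q -> x and the ordinary
# logistic map x -> r*x*(1-x) is recovered.
def q_deform(x, q):
    if q == 1.0:
        return x
    return (1.0 - q**x) / (1.0 - q)

def q_logistic(x, r=3.9, q=0.9):
    xq = q_deform(x, q)
    return r * xq * (1.0 - xq)

# iterate the deformed map from an arbitrary seed in (0, 1)
x = 0.3
orbit = []
for _ in range(100):
    x = q_logistic(x)
    orbit.append(x)
```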

  3. Mapping earthworm communities in Europe

    NARCIS (Netherlands)

    Rutgers, M.; Orgiazzi, A.; Gardi, C.; Römbke, J.; Jansch, S.; Keith, A.; Neilson, R.; Boag, B.; Schmidt, O.; Murchie, A.K.; Blackshaw, R.P.; Pérès, G.; Cluzeau, D.; Guernion, M.; Briones, M.J.I.; Rodeiro, J.; Pineiro, R.; Diaz Cosin, D.J.; Sousa, J.P.; Suhadolc, M.; Kos, I.; Krogh, P.H.; Faber, J.H.; Mulder, C.; Bogte, J.J.; Wijnen, van H.J.; Schouten, A.J.; Zwart, de D.

    2016-01-01

    Existing data sets on earthworm communities in Europe were collected, harmonized, collated, modelled and depicted on a soil biodiversity map. Digital Soil Mapping was applied using multiple regressions relating relatively low density earthworm community data to soil characteristics, land use,

  4. Innovative Nuclear Energy Systems: State-of-the Art Survey on Evaluation and Aggregation Judgment Measures Applied to Performance Comparison

    Directory of Open Access Journals (Sweden)

    Vladimir Kuznetsov

    2015-04-01

    Full Text Available This paper summarizes the experience gained in the application of multi-criteria decision making and uncertainty treatment methods to a comparative assessment of nuclear energy systems and related nuclear fuel cycles. These judgment measures provide a means for comprehensive evaluation according to different conflicting criteria, such as costs, benefits and risks, which are inevitably associated with the deployment of advanced technologies. Major findings and recommendations elaborated in international and national projects and studies are reviewed and discussed. A careful analysis is performed for multi-criteria comparative assessment of nuclear energy systems and nuclear fuel cycles on the basis of various evaluation and screening results. The purpose of this paper is to discuss the lessons learned, to share the identified solutions, and indicate promising future directions.
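As a toy illustration of the multi-criteria aggregation the survey discusses, a simple weighted-sum scoring over normalized criteria can be sketched. All criteria names, weights, and values below are invented; the reviewed studies use more elaborate aggregation and uncertainty-treatment methods.

```python
# Illustrative weighted-sum multi-criteria scoring (all data invented).
def normalize(values, benefit=True):
    """Min-max normalize; invert cost-type criteria where lower is better."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if benefit else [1.0 - s for s in scaled]

def weighted_scores(alternatives, criteria):
    """criteria: list of (weight, is_benefit, raw_values) per criterion."""
    totals = [0.0] * len(alternatives)
    for weight, benefit, raw in criteria:
        for i, s in enumerate(normalize(raw, benefit)):
            totals[i] += weight * s
    return dict(zip(alternatives, totals))

systems = ["once-through", "closed-cycle"]
scores = weighted_scores(systems, [
    (0.40, False, [40.0, 55.0]),  # cost index (lower is better)
    (0.35, True,  [0.6, 0.9]),    # resource utilization (higher is better)
    (0.25, False, [0.7, 0.4]),    # waste/risk index (lower is better)
])
```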

  5. Performance allocation traceable to regulatory criteria as applied to site characterization work at the Basalt Waste Isolation Project

    International Nuclear Information System (INIS)

    Deju, R.A.; Babad, H.; Bensky, M.S.; Jacobs, G.K.

    1983-01-01

    The Basalt Waste Isolation Project has developed a method for defining in detail the work required to demonstrate the feasibility of emplacing and providing for the safe isolation of nuclear wastes in a repository in the deep basalts at the Hanford Site near Richland, Washington. Criteria analysis allows the identification of areas of significant technical uncertainty or controversy that can be highlighted as issues. A preliminary analysis has been conducted, which, by identifying key radionuclides and allocating performance among the multiple barriers in a repository constructed in a basalt, allows the design and development testing activities at the Basalt Waste Isolation Project to be put into perspective. Application of sophisticated uncertainty analysis techniques will allow refinements in the analysis to be made and to further guide characterization and testing activities. Preliminary results suggest that a repository constructed in basalt will provide for the safe isolation of nuclear wastes in a cost-effective and reliable manner with a high degree of confidence

  6. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    Science.gov (United States)

    Hernandez, Wilmar

    2007-01-01

    In this paper a survey on recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. Here, a comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results that show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are some open research issues that have to be solved. This paper draws attention to one of the open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
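The contrast the survey draws between classical and optimal filters can be sketched with two minimal smoothers: a moving average (classical FIR) and a scalar Kalman filter (optimal under Gaussian-noise assumptions). All parameters and data below are illustrative, not taken from the paper.

```python
def moving_average(xs, k=5):
    """Classical FIR smoothing: mean of the last k samples."""
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - k + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def kalman_1d(xs, q=1e-4, r=0.04):
    """Scalar Kalman filter tracking a slowly varying level.
    q: process-noise variance, r: measurement-noise variance (assumed known)."""
    x_est, p = xs[0], 1.0
    out = []
    for z in xs:
        p += q                    # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain
        x_est += k * (z - x_est)  # update with the new measurement
        p *= 1.0 - k
        out.append(x_est)
    return out

# invented noisy readings around a level of 1.0
noisy = [1.02, 0.95, 1.10, 0.98, 1.05, 0.99, 1.01, 0.97]
smoothed = moving_average(noisy, k=3)
tracked = kalman_1d(noisy)
```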

  7. A road map for implementing systems engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dean, F.F. [Sandia National Labs., Albuquerque, NM (United States). New Mexico Weapons Systems Engineering Center; Bentz, B.; Bahill, A.T. [Univ. of Arizona, Tucson, AZ (United States)

    1997-02-01

    Studies by academia, industry, and government indicate that applying a sound systems engineering process to development programs is an important tool for preventing cost and schedule overruns and performance deficiencies. There is an enormous body of systems engineering knowledge. Where does one start? How can the principles of systems engineering be applied in the Sandia environment? This road map is intended to be an aid to answering these questions.

  8. Performance-based financing as a health system reform: mapping the key dimensions for monitoring and evaluation

    Science.gov (United States)

    2013-01-01

    Background: Performance-based financing is increasingly being applied in a variety of contexts, with the expectation that it can improve the performance of health systems. However, while there is a growing literature on implementation issues and effects on outputs, there has been relatively little focus on interactions between PBF and health systems and how these should be studied. This paper aims to contribute to filling that gap by developing a framework for assessing the interactions between PBF and health systems, focusing on low and middle income countries. In doing so, it elaborates a general framework for monitoring and evaluating health system reforms in general. Methods: This paper is based on an exploratory literature review and on the work of a group of academics and PBF practitioners. The group developed ideas for the monitoring and evaluation framework through exchange of emails and working documents. Ideas were further refined through discussion at the Health Systems Research symposium in Beijing in October 2012, through comments from members of the online PBF Community of Practice and Beijing participants, and through discussion with PBF experts in Bergen in June 2013. Results: The paper starts with a discussion of definitions, to clarify the core concept of PBF and how the different terms are used. It then develops a framework for monitoring its interactions with the health system, structured around five domains of context, the development process, design, implementation and effects. Some of the key questions for monitoring and evaluation are highlighted, and a systematic approach to monitoring effects proposed, structured according to the health system pillars, but also according to inputs, processes and outputs. Conclusions: The paper lays out a broad framework within which indicators can be prioritised for monitoring and evaluation of PBF or other health system reforms. It highlights the dynamic linkages between the domains and the different pillars

  9. Performance-based financing as a health system reform: mapping the key dimensions for monitoring and evaluation.

    Science.gov (United States)

    Witter, Sophie; Toonen, Jurrien; Meessen, Bruno; Kagubare, Jean; Fritsche, György; Vaughan, Kelsey

    2013-09-29

    Performance-based financing is increasingly being applied in a variety of contexts, with the expectation that it can improve the performance of health systems. However, while there is a growing literature on implementation issues and effects on outputs, there has been relatively little focus on interactions between PBF and health systems and how these should be studied. This paper aims to contribute to filling that gap by developing a framework for assessing the interactions between PBF and health systems, focusing on low and middle income countries. In doing so, it elaborates a general framework for monitoring and evaluating health system reforms in general. This paper is based on an exploratory literature review and on the work of a group of academics and PBF practitioners. The group developed ideas for the monitoring and evaluation framework through exchange of emails and working documents. Ideas were further refined through discussion at the Health Systems Research symposium in Beijing in October 2012, through comments from members of the online PBF Community of Practice and Beijing participants, and through discussion with PBF experts in Bergen in June 2013. The paper starts with a discussion of definitions, to clarify the core concept of PBF and how the different terms are used. It then develops a framework for monitoring its interactions with the health system, structured around five domains of context, the development process, design, implementation and effects. Some of the key questions for monitoring and evaluation are highlighted, and a systematic approach to monitoring effects proposed, structured according to the health system pillars, but also according to inputs, processes and outputs. The paper lays out a broad framework within which indicators can be prioritised for monitoring and evaluation of PBF or other health system reforms. It highlights the dynamic linkages between the domains and the different pillars. All of these are also framed within

  10. Sentinel nodes are identifiable in formalin-fixed specimens after surgeon-performed ex vivo sentinel lymph node mapping in colorectal cancer.

    LENUS (Irish Health Repository)

    Smith, Fraser McLean

    2012-02-03

    BACKGROUND: In recent years, the technique of sentinel lymph node (SLN) mapping has been applied to colorectal cancer. One aim was to ultrastage patients who were deemed node negative by routine pathologic processing but who went on to develop systemic disease. Such a group may benefit from adjuvant chemotherapy. METHODS: With fully informed consent and ethical approval, 37 patients with primary colorectal cancer and 3 patients with large adenomas were prospectively mapped. Isosulfan blue dye (1 to 2 mL) was injected around tumors within 5 to 10 minutes of resection. After gentle massage to recreate in vivo lymph flow, specimens were placed directly into formalin. During routine pathologic analysis, all nodes were bivalved, and blue-staining nodes were noted. These later underwent multilevel step sectioning with hematoxylin and eosin and cytokeratin staining. RESULTS: SLNs were found in 39 of 40 patients (98% sensitivity), with an average of 4.1 SLNs per patient (range, 1-8). In 14 of 16 (88% specificity) patients with nodal metastases on routine reporting, SLN status was in accordance. Focused examination of SLNs identified occult tumor deposits in 6 (29%) of 21 node-negative patients. No metastatic cells were found in SLNs draining the three adenomas. CONCLUSIONS: The ability to identify SLNs after formalin fixation increases the ease and applicability of SLN mapping in colorectal cancer. Furthermore, the sensitivity and specificity of this simple ex vivo method for establishing regional lymph node status were directly comparable to those in previously published reports.
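The headline percentages in this abstract follow directly from the reported counts; a quick re-derivation (counts taken from the text above, labels following the authors' usage):

```python
# Re-deriving the quoted figures from the counts reported in the abstract.
sln_found, total_patients = 39, 40   # SLNs found in 39 of 40 patients
concordant, node_positive = 14, 16   # SLN status agreed in 14 of 16 node-positive
occult_found, node_negative = 6, 21  # occult deposits in 6 of 21 node-negative

identification_rate = 100 * sln_found / total_patients  # quoted as 98% sensitivity
concordance_rate = 100 * concordant / node_positive     # quoted as 88% specificity
upstaging_rate = 100 * occult_found / node_negative     # quoted as 29%
```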

  11. Differential maps, difference maps, interpolated maps, and long term prediction

    International Nuclear Information System (INIS)

    Talman, R.

    1988-06-01

    Mapping techniques may be thought to be attractive for the long term prediction of motion in accelerators, especially because a simple map can approximately represent an arbitrarily complicated lattice. The intention of this paper is to develop prejudices as to the validity of such methods by applying them to a simple, exactly solvable, example. It is shown that a numerical interpolation map, such as can be generated in the accelerator tracking program TEAPOT, predicts the evolution more accurately than an analytically derived differential map of the same order. Even so, in the presence of "appreciable" nonlinearity, it is shown to be impractical to achieve "accurate" prediction beyond some hundreds of cycles of oscillation. This suggests that the value of nonlinear maps is restricted to the parameterization of only the "leading" deviation from linearity. 41 refs., 6 figs.
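The paper's point, that an approximate map drifts away from the exact dynamics over many cycles, can be illustrated with a toy kicked map. This is not the paper's lattice or its TEAPOT interpolation; the Taylor truncation below is an invented stand-in for a low-order differential map.

```python
import math

def wrap(x):
    """Keep the phase coordinate in [-pi, pi)."""
    return (x + math.pi) % (2 * math.pi) - math.pi

def exact_turn(x, p, k=0.2):
    """One turn of an exact nonlinear kicked map."""
    x = wrap(x + p)
    return x, p - k * math.sin(x)

def truncated_turn(x, p, k=0.2):
    """Same map with sin(x) replaced by its 3rd-order Taylor truncation."""
    x = wrap(x + p)
    return x, p - k * (x - x**3 / 6.0)

# track the same initial condition through both maps for many turns
x1 = x2 = 0.2
p1 = p2 = 0.0
for _ in range(500):
    x1, p1 = exact_turn(x1, p1)
    x2, p2 = truncated_turn(x2, p2)
drift = abs(x1 - x2) + abs(p1 - p2)  # truncation error accumulated over 500 turns
```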

  12. Performance and retention of lightweight satellite radio tags applied to the ears of polar bears (Ursus maritimus)

    Science.gov (United States)

    Wiig, Øystein; Born, Erik W.; Laidre, Kristin L.; Dietz, Rune; Jensen, Mikkel Villum; Durner, George M.; Pagano, Anthony M.; Regehr, Eric V.; St. Martin, Michelle; Atkinson, Stephen N.; Dyck, Markus

    2017-01-01

    Background: Satellite telemetry studies provide information that is critical to the conservation and management of species affected by ecological change. Here we report on the performance and retention of two types (SPOT-227 and SPOT-305A) of ear-mounted Argos-linked satellite transmitters (i.e., platform transmitter terminal, or PTT) deployed on free-ranging polar bears in Eastern Greenland, Baffin Bay, Kane Basin, the southern Beaufort Sea, and the Chukchi Sea during 2007–2013. Results: Transmissions from 142 out of 145 PTTs deployed on polar bears were received for an average of 69.3 days. The average functional longevity, defined as the number of days they transmitted while still attached to polar bears, for SPOT-227 was 56.8 days and for SPOT-305A was 48.6 days. Thirty-four of the 142 (24%) PTTs showed signs of being detached before they stopped transmitting, indicating that tag loss was an important aspect of tag failure. Furthermore, 10 of 26 (38%) bears that were re-observed following application of a PTT had a split ear pinna, suggesting that some transmitters were detached by force. All six PTTs that were still on bears upon recapture had lost the antenna, which indicates that antenna breakage was a significant contributor to PTT failure. Finally, only nine of the 142 (6%) PTTs, three of which were still attached to bears, had a final voltage reading close to the value indicating battery exhaustion. This suggests that battery exhaustion was not a major factor in tag performance. Conclusions: The average functional longevity of approximately 2 months for ear-mounted PTTs (this study) is poor compared to PTT collars fitted to adult female polar bears, which can last for several years. Early failure of the ear-mounted PTTs appeared to be caused primarily by detachment from the ear or antenna breakage. We suggest that much smaller and lighter ear-mounted transmitters are necessary to reduce the risk of tissue irritation, tissue damage, and tag detachment, and

  13. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    Directory of Open Access Journals (Sweden)

    Wilmar Hernandez

    2007-01-01

    Full Text Available In this paper a survey on recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. Here, a comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today’s cars need is presented through several experimental results that show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are some open research issues that have to be solved. This paper draws attention to one of the open research issues and tries to arouse researchers’ interest in the fusion of intelligent sensors and optimal signal processing techniques.

  14. Kinect Who’s Coming - Applying Kinect to Human Body Height Measurement to Improve Character Recognition Performance

    Directory of Open Access Journals (Sweden)

    Hau-Wei Lee

    2015-05-01

    Full Text Available A great deal of relevant research on character recognition has been carried out, but a certain amount of time is needed to compare faces from a large database. The Kinect is able to obtain three-dimensional coordinates for an object (x & y axes and depth, and in recent years research on its applications has expanded from use in gaming to that of image measurement. This study uses Kinect skeleton information to conduct body height measurements with the aim of improving character recognition performance. Time spent searching and comparing characters is reduced by creating height categories. The margin of error for height used in this investigation was ±5 cm; therefore, face comparisons were only executed for people in the database within ±5 cm of the body height measured, reducing the search time needed. In addition, using height and facial features simultaneously to conduct character recognition can also reduce the frequency of mistaken recognition. The Kinect was placed on a rotary stage and the position of the head on the body frame was used to conduct body tracking. Body tracking can be used to reduce image distortion caused by the lens of the Kinect. EmguCV was used for image processing and character recognition. The methods proposed in this study can be used in public safety, student attendance registration, commercial VIP recognition and many others.
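The height-bucketing idea is straightforward to sketch: filter the enrolment database to candidates within the ±5 cm tolerance before running the much more expensive face comparison. The record layout and data below are invented for illustration.

```python
# Hypothetical pre-filter: restrict face matching to people whose enrolled
# height is within +/- 5 cm of the Kinect-measured height.
def candidates_by_height(db, measured_cm, tol_cm=5.0):
    """db: list of (name, height_cm, face_features) records."""
    return [rec for rec in db if abs(rec[1] - measured_cm) <= tol_cm]

db = [("ann", 158.0, None), ("bob", 175.0, None), ("cho", 181.0, None)]
hits = candidates_by_height(db, 177.5)  # only bob and cho need face matching
```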

  15. Making a difference? Applying Vitellone's Social Science of the Syringe to performance and image enhancing drug injecting.

    Science.gov (United States)

    Hart, Aaron

    2018-04-18

    Vitellone's Social Science of the Syringe investigates epistemologies of injecting drug use. She argues for a methodology that can be simultaneously sensitive to biopolitical power regimes; the trajectories of social stratification; and the resistance, creativity and dignity of human agency. She proposes a methodological focus on the syringe-in-use as an active participant in these dynamics. Harm reduction policy and service provision frameworks have paid little attention to the phenomena of performance and image enhancing drug (PIEDs) injection. One way of assessing the merit of Vitellone's proposal is to use it to investigate these phenomena. I argue that Vitellone's method can be used to articulate a range of significant differences between people who inject PIEDs and other people who inject drugs, and that these differences can inform harm reduction initiatives. When compared to the heroin syringe, the PIED syringe participates in different socio-economic and material contexts, gendered identities, and biopolitical governance regimes. These differences materialise in different rates of syringe sharing and blood-borne virus transmission; and different experiences of needle exchange services. I offer a thought experiment demonstrating how a different syringe might alter the structural dynamics, biopolitical governance, and the agentic choices of people who inject PIEDs. Judging by the productive effects of diffracting Vitellone's analysis through an empirical concern with PIED injecting, I concur with Vitellone's proposition that 'something objective may be gained from an empirical investigation of the syringe-in-use' (p. 33). Copyright © 2018 Elsevier B.V. All rights reserved.

  16. The self-determination theory applied in the analysis of motivation and academic performance of accounting students in a Brazilian public university

    OpenAIRE

    Marina Salgado Borges; Gilberto José Miranda; Sheizi Calheira Freitas

    2017-01-01

    The aim of this study was to analyze the relations between academic performance and motivation of Accounting students in a Brazilian public university, based on Self-Determination Theory. Methodologically, structured questionnaires with the Brazilian version of the Academic Motivation Scale (AMS) were applied in classrooms to a sample of 316 students enrolled from the second to tenth periods of the course, equivalent to 37.2% of the total number of students. Data were anal...

  17. Survey and analysis of environmental performance indicators applied to thermoelectric generation styles; Levantamento e analise de indicadores de desempenho ambiental aplicados a empreendimentos termeletricos

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, Bruno Moreno Rodrigo de; Cardoso Junior, Ricardo Abranches Felix [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil)

    2010-07-01

    A country's economic growth is tied to increases in its electricity generation capacity. Brazil, whose power matrix is predominantly renewable, is currently expanding generation from thermoelectricity, mainly because of easier environmental licensing. The operation of these new thermal power plants, together with those already installed, will result in environmental impacts that must be properly controlled. Such control should follow the Environmental Performance Evaluation guidance standardized by ABNT NBR ISO 14031/04. Accordingly, this work presents the main Environmental Performance Indicators applied to several thermoelectric generation types (oil, natural gas, diesel, mineral coal, biomass, waste, solar and nuclear). (author)

  18. Judging complex movement performances for excellence: a principal components analysis-based technique applied to competitive diving.

    Science.gov (United States)

    Young, Cole; Reinkensmeyer, David J

    2014-08-01

    Athletes rely on subjective assessment of complex movements from coaches and judges to improve their motor skills. In some sports, such as diving, snowboard half-pipe, gymnastics, and figure skating, subjective scoring forms the basis for competition. It is currently unclear whether this scoring process can be mathematically modeled; doing so could provide insight into what motor skill is. Principal components analysis (PCA) has been proposed as a motion analysis method for identifying fundamental units of coordination. We used PCA to analyze the movement quality of dives taken from USA Diving's 2009 World Team Selection Camp, first identifying eigenpostures associated with dives, and then using the eigenpostures and their temporal weighting coefficients, as well as elements commonly assumed to affect scoring (gross body path, splash area, and board tip motion), to identify eigendives. Within this eigendive space we predicted actual judges' scores using linear regression. This technique rated dives with accuracy comparable to the human judges. The temporal weighting of the eigenpostures, body center path, splash area, and board tip motion affected the score, but not the eigenpostures themselves. These results illustrate that (1) subjective scoring in a competitive diving event can be mathematically modeled; (2) the elements commonly assumed to affect dive scoring actually do affect scoring; and (3) skill in elite diving is more associated with the gross body path and the effect of the movement on the board and water than with the units of coordination that PCA extracts, which might reflect the high level of technique these divers had achieved. We also illustrate how eigendives can be used to produce dive animations that an observer can distort continuously from poor to excellent, which is a novel approach to performance visualization. Copyright © 2014 Elsevier B.V. All rights reserved.
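
A minimal sketch of the eigenposture pipeline (PCA via SVD, then least-squares prediction of scores), using synthetic stand-in data rather than the USA Diving motion captures:

```python
import numpy as np

# Synthetic placeholders: 40 "dives", each summarized by 12 joint angles.
rng = np.random.default_rng(0)
postures = rng.normal(size=(40, 12))
true_w = rng.normal(size=3)

# PCA via SVD on mean-centered data; rows of Vt are the "eigenpostures".
centered = postures - postures.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenpostures = Vt[:3]                 # keep the 3 leading components
features = centered @ eigenpostures.T  # weighting coefficients per dive

# Synthetic judges' scores, exactly linear in the reduced features here.
scores = features @ true_w + 7.0

# Least-squares regression from eigenposture weights to scores.
X = np.column_stack([features, np.ones(len(scores))])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
predicted = X @ coef
```

In the study the feature vector also included body path, splash area, and board tip motion; the regression step is the same.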

  19. Cognitive Load and Self-Determination Theories Applied to E-Learning: Impact on Students' Participation and Academic Performance.

    Science.gov (United States)

    de Araujo Guerra Grangeia, Tiago; de Jorge, Bruno; Franci, Daniel; Martins Santos, Thiago; Vellutini Setubal, Maria Silvia; Schweller, Marcelo; de Carvalho-Filho, Marco Antonio

    2016-01-01

    Emergency clerkships expose students to a stressful environment that requires multiple tasks, which may have a direct impact on cognitive load and motivation for learning. To address this challenge, Cognitive Load Theory and Self-Determination Theory provided the conceptual frameworks for the development of a Moodle-based online Emergency Medicine course, inspired by real clinical cases. Three consecutive classes (2013-2015) of sixth-year medical students (n = 304) participated in the course, during a curricular and essentially practical emergency rotation. "Virtual Rounds" provided weekly virtual patients in narrative format and meaningful schemata for chief complaints, in order to simulate real rounds at the Emergency Unit. Additional activities such as Extreme Decisions, Emergency Quiz and Electrocardiographic Challenge offered different views of emergency care. The authors assessed students' participation and its correlation with their academic performance. A survey evaluated students' opinions. Students graduating in 2015 answered an online questionnaire to investigate cognitive load and motivation. Each student produced 1965 pageviews and spent 72 hours logged on. Although the Clinical Emergency rotation is two months long, students accessed the online course for an average of 5.3 months. Virtual Rounds was the most accessed activity, and there were positive correlations between the number of hours logged on the platform and final grades in Emergency Medicine. Over 90% of students felt an improvement in their clinical reasoning and considered themselves better prepared for rendering emergency care. Considering a Likert scale from 1 (minimum load) to 7 (maximum load), the scores for total cognitive load were 4.79±2.2 for Virtual Rounds and 5.56±1.96 for real medical rounds (p<0.01). A real-world inspired online course, based on cognitive and motivational conceptual frameworks, seems to be a strong tool to engage students in learning. It may support them to manage the cognitive challenges involved in clinical care and

  20. Performance enhancement of high-field asymmetric waveform ion mobility spectrometry by applying differential-RF-driven operation mode.

    Science.gov (United States)

    Zeng, Yue; Tang, Fei; Zhai, Yadong; Wang, Xiaohao

    2017-09-01

    The traditional operation mode of high-field asymmetric waveform ion mobility spectrometry (FAIMS) uses a one-way radio frequency (RF) voltage input as the dispersion voltage. This requires a high voltage input and limits power consumption reduction and miniaturization of instruments. With higher dispersion voltages or larger compensation voltages, there are also problems such as low signal intensity, or the dispersion voltage no longer being much larger than the compensation voltage. In this paper, a differential-RF-driven operation mode of FAIMS is proposed. A two-way RF is used to generate the dispersion field, and a phase difference is added between the two RFs to generate a single-step waveform field. Theoretical analysis and experimental results from an ethanol sample showed that the peak positions of the ion spectra changed linearly (R² = 0.9992) with the phase difference of the two RFs in the differential-RF-driven mode, and that the peak intensity of the ion spectrum could be enhanced by more than eight times for ethanol ions. In this way, it is possible to convert ion spectrum peaks outside the separation or compensation voltage range into the detectable range by changing the phase difference. To produce the same separation electric field, the high-voltage direct-current input voltage can be reduced to as little as half of that in the traditional operation mode. Without changing the drift region size or drift conditions, the differential-RF-driven operation mode can reduce power consumption, increase the signal-to-noise ratio, extend the application range of the dispersion voltage and compensation voltage, and improve FAIMS detection performance.

  1. Cognitive Load and Self-Determination Theories Applied to E-Learning: Impact on Students' Participation and Academic Performance.

    Directory of Open Access Journals (Sweden)

    Tiago de Araujo Guerra Grangeia

    Full Text Available Emergency clerkships expose students to a stressful environment that requires multiple tasks, which may have a direct impact on cognitive load and motivation for learning. To address this challenge, Cognitive Load Theory and Self-Determination Theory provided the conceptual frameworks for the development of a Moodle-based online Emergency Medicine course, inspired by real clinical cases. Three consecutive classes (2013-2015) of sixth-year medical students (n = 304) participated in the course, during a curricular and essentially practical emergency rotation. "Virtual Rounds" provided weekly virtual patients in narrative format and meaningful schemata for chief complaints, in order to simulate real rounds at the Emergency Unit. Additional activities such as Extreme Decisions, Emergency Quiz and Electrocardiographic Challenge offered different views of emergency care. The authors assessed students' participation and its correlation with their academic performance. A survey evaluated students' opinions. Students graduating in 2015 answered an online questionnaire to investigate cognitive load and motivation. Each student produced 1965 pageviews and spent 72 hours logged on. Although the Clinical Emergency rotation is two months long, students accessed the online course for an average of 5.3 months. Virtual Rounds was the most accessed activity, and there were positive correlations between the number of hours logged on the platform and final grades in Emergency Medicine. Over 90% of students felt an improvement in their clinical reasoning and considered themselves better prepared for rendering emergency care. Considering a Likert scale from 1 (minimum load) to 7 (maximum load), the scores for total cognitive load were 4.79±2.2 for Virtual Rounds and 5.56±1.96 for real medical rounds (p<0.01). A real-world inspired online course, based on cognitive and motivational conceptual frameworks, seems to be a strong tool to engage students in learning. It may support them to

  2. Applying Aspects of the Expert Performance Approach to Better Understand the Structure of Skill and Mechanisms of Skill Acquisition in Video Games.

    Science.gov (United States)

    Boot, Walter R; Sumner, Anna; Towne, Tyler J; Rodriguez, Paola; Anders Ericsson, K

    2017-04-01

    Video games are ideal platforms for the study of skill acquisition for a variety of reasons. However, our understanding of the development of skill and the cognitive representations that support skilled performance can be limited by a focus on game scores. We present an alternative approach to the study of skill acquisition in video games based on the tools of the Expert Performance Approach. Our investigation was motivated by a detailed analysis of the behaviors responsible for the superior performance of one of the highest scoring players of the video game Space Fortress (Towne, Boot, & Ericsson, ). This analysis revealed how certain behaviors contributed to his exceptional performance. In this study, we recruited a participant for a similar training regimen, but we collected concurrent and retrospective verbal protocol data throughout training. Protocol analysis revealed insights into strategies, errors, mental representations, and shifting game priorities. We argue that these insights into the developing representations that guided skilled performance could only easily have been derived from the tools of the Expert Performance Approach. We propose that the described approach could be applied to understand performance and skill acquisition in many different video games (and other short- to medium-term skill acquisition paradigms) and help reveal mechanisms of transfer from gameplay to other measures of laboratory and real-world performance. Copyright © 2016 Cognitive Science Society, Inc.

  3. SU-E-T-608: Performance Comparison of Four Commercial Treatment Planning Systems Applied to Intensity-Modulated Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Y; Li, R; Chi, Z [The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, CN, Shijiazhuang, Hebei (China)

    2014-06-01

    Purpose: To compare the performances of four commercial treatment planning systems (TPSs) used for intensity-modulated radiotherapy (IMRT). Methods: Ten patients with nasopharyngeal (4 cases), esophageal (3 cases) and cervical (3 cases) cancer were randomly selected from a 3-month IMRT plan pool at one radiotherapy center. For each patient, four IMRT plans were newly generated using four commercial TPSs (Corvus, Monaco, Pinnacle and Xio), and then verified with MatriXX (a two-dimensional array, IBA) on a Varian 23EX accelerator. A pass rate (PR) calculated from the Gamma index by OmniPro IMRT 1.5 software was evaluated at five plan verification standards (1%/1mm, 2%/2mm, 3%/3mm, 4%/4mm and 5%/5mm) for each treatment plan. Overall and multiple pairwise comparisons of PRs were conducted statistically with analysis of variance (ANOVA) F tests and LSD tests among the four TPSs. Results: Overall significant (p<0.05) differences of PRs were found among the four TPSs, with F test values of 3.8 (p=0.02), 21.1 (p<0.01), 14.0 (p<0.01) and 8.3 (p<0.01) at the 1%/1mm to 4%/4mm standards respectively, except at the 5%/5mm standard with 2.6 (p=0.06). All means (standard deviations) of PRs at 3%/3mm, namely 94.3 ± 3.3 (Corvus), 98.8 ± 0.8 (Monaco), 97.5 ± 1.7 (Pinnacle) and 98.4 ± 1.0 (Xio), were above 90% and met the clinical requirement. Multiple pairwise comparisons did not demonstrate a consistently low or high pattern for any TPS. Conclusion: MatriXX dose verification results show that the validation pass rates of Monaco and Xio plans are relatively higher than those of the other two; the Pinnacle plan shows a slightly higher pass rate than the Corvus plan; the lowest pass rate among the four TPSs was achieved by the Corvus plan.
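
The pass rates above come from the Gamma index. A brute-force one-dimensional sketch of that metric follows, assuming the common global-dose convention; real QA software such as OmniPro evaluates 2D arrays with interpolation, so this is illustrative only:

```python
import numpy as np

def gamma_pass_rate(reference, measured, spacing_mm, dose_pct=3.0, dta_mm=3.0):
    """Percentage of measured points with gamma <= 1 (e.g. 3%/3mm criteria)."""
    ref = np.asarray(reference, float)
    meas = np.asarray(measured, float)
    dose_tol = dose_pct / 100.0 * ref.max()   # global dose criterion
    x = np.arange(len(ref)) * spacing_mm      # shared spatial grid
    gammas = []
    for i, d_m in enumerate(meas):
        # gamma = minimum over reference points of the combined
        # distance-to-agreement / dose-difference metric
        g2 = ((x - x[i]) / dta_mm) ** 2 + ((ref - d_m) / dose_tol) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

ref = np.array([0.0, 10.0, 50.0, 100.0, 50.0, 10.0, 0.0])
# A uniform 1% dose error stays well inside 3%/3mm, so every point passes.
pass_rate = gamma_pass_rate(ref, ref * 1.01, spacing_mm=1.0)
```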

  4. Performance of USGS one-year earthquake hazard map for natural and induced seismicity in the central and eastern United States

    Science.gov (United States)

    Brooks, E. M.; Stein, S.; Spencer, B. D.; Salditch, L.; Petersen, M. D.; McNamara, D. E.

    2017-12-01

    Seismicity in the central United States has dramatically increased since 2008 due to the injection of wastewater produced by oil and gas extraction. In response, the USGS created a one-year probabilistic hazard model and map for 2016 to describe the increased hazard posed to the central and eastern United States. Using the intensity of shaking reported to the "Did You Feel It?" system during 2016, we assess the performance of this model. Assessing the performance of earthquake hazard maps for natural and induced seismicity is conceptually similar but has practical differences. Maps that have return periods of hundreds or thousands of years, as commonly used for natural seismicity, can be assessed using historical intensity data that also span hundreds or thousands of years. Several different features stand out when assessing the USGS 2016 seismic hazard model for the central and eastern United States from induced and natural earthquakes. First, the model can be assessed as a forecast in one year, because event rates are sufficiently high to permit evaluation with one year of data. Second, because these models are projections from the previous year, implicitly assuming that fluid injection rates remain the same, misfit may reflect changes in human activity. Our results suggest that the model was very successful by the metric implicit in probabilistic seismic hazard assessment: namely, that the fraction of sites at which the maximum shaking exceeded the mapped value is comparable to that expected. The model also did well by a misfit metric that compares the spatial patterns of predicted and maximum observed shaking. This was true both for the central and eastern United States as a whole and for the region within it with the highest amount of seismicity, Oklahoma and its surrounding area. The model performed least well in northern Texas, overstating hazard, presumably because lower oil and gas prices and regulatory action reduced the water injection volume.
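
The exceedance metric implicit in probabilistic hazard assessment can be sketched as follows; the shaking values are synthetic placeholders, not USGS or "Did You Feel It?" data:

```python
import numpy as np

def fractional_exceedance(mapped, observed_max):
    """Fraction of sites where observed maximum shaking exceeded the map."""
    mapped = np.asarray(mapped, float)
    observed_max = np.asarray(observed_max, float)
    return float(np.mean(observed_max > mapped))

mapped = np.array([0.2, 0.3, 0.1, 0.4, 0.25])      # mapped shaking levels
observed = np.array([0.15, 0.35, 0.05, 0.1, 0.2])  # observed maxima per site

# A map is "successful" when this fraction is close to the fraction
# expected for the map's exceedance probability over the observation window.
f = fractional_exceedance(mapped, observed)  # 1 of 5 sites exceeded
```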

  5. Geographic information system for mapping net income applied in irrigated agriculture planning

    Directory of Open Access Journals (Sweden)

    Wilson A. Silva

    2008-03-01

    Full Text Available The objective of this work was to develop an algorithm in the MATLAB computational language, for use in geographic information systems, to map the maximized net income of irrigated crops. The study covered passion fruit, sugarcane, pineapple and papaya crops in an area of approximately 2,500 ha in the municipality of Campos dos Goytacazes, in the north of the State of Rio de Janeiro, Brazil. The algorithm's input data were soil and climate information, crop water-response functions, the geographical location of the area, and economic indexes for the cost of the production process. The results showed that the developed algorithm was efficient for mapping the net income of irrigated crops, being able to locate the areas with the highest economic returns.
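
A hypothetical sketch of the per-cell optimization such a mapping performs: compute each crop's net income from a yield, a price, and a cost, and keep the maximum. The yields, prices, and costs below are assumptions, not the paper's calibrated water-response functions:

```python
def net_income(yield_t_ha, price_per_t, cost_per_ha):
    """Net income per hectare = revenue minus production cost."""
    return yield_t_ha * price_per_t - cost_per_ha

def best_crop(cell_yields, prices, costs):
    """Pick the crop maximizing net income for one grid cell."""
    incomes = {crop: net_income(cell_yields[crop], prices[crop], costs[crop])
               for crop in cell_yields}
    crop = max(incomes, key=incomes.get)
    return crop, incomes[crop]

prices = {"sugarcane": 30.0, "papaya": 250.0}     # $/t (assumed)
costs = {"sugarcane": 1500.0, "papaya": 8000.0}   # $/ha (assumed)
cell = {"sugarcane": 80.0, "papaya": 40.0}        # t/ha in this cell (assumed)
crop, income = best_crop(cell, prices, costs)
```

Running this for every raster cell, with yields driven by the water-response functions, produces the net-income map described in the abstract.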

  6. Towards a high performance vertex detector based on 3D integration of deep N-well MAPS

    International Nuclear Information System (INIS)

    Re, V

    2010-01-01

    The development of deep N-Well (DNW) CMOS active pixel sensors was driven by the ambitious goal of designing a monolithic device with similar functionalities as in hybrid pixel readout chips, such as pixel-level sparsification and time stamping. The implementation of the DNW MAPS concept in a 3D vertical integration process naturally leads the designer towards putting more intelligence in the chip and in the pixels themselves, achieving novel device structures based on the interconnection of two or more layers fabricated in the same technology. These devices are read out with a data-push scheme that makes it possible to use pixel data for the generation of a flexible level 1 track trigger, based on associative memories, with short latency and high efficiency. This paper gives an update of the present status of DNW MAPS design in both 2D and 3D versions, and presents a discussion of the architectures that are being devised for the Layer 0 of the SuperB Silicon Vertex Tracker.

  8. Enhanced performance of CdS/CdTe thin-film devices through temperature profiling techniques applied to close-spaced sublimation deposition

    Energy Technology Data Exchange (ETDEWEB)

    Xiaonan Li; Sheldon, P.; Moutinho, H.; Matson, R. [National Renewable Energy Lab., Golden, CO (United States)

    1996-05-01

    The authors describe a methodology developed and applied to the close-spaced sublimation technique for thin-film CdTe deposition. The developed temperature profiles consisted of three discrete temperature segments, which the authors called the nucleation, plugging, and annealing temperatures. They have demonstrated that these temperature profiles can be used to grow large-grain material, plug pinholes, and improve CdS/CdTe photovoltaic device performance by about 15%. The improved material and device properties have been obtained while maintaining deposition temperatures compatible with commercially available substrates. This temperature profiling technique can be easily applied to a manufacturing environment by adjusting the temperature as a function of substrate position instead of time.
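
The three-segment profile idea can be sketched as a simple piecewise setpoint schedule; the temperatures and durations below are placeholders, not the reported process values:

```python
def profile_temperature(t_min, segments):
    """Return the setpoint at elapsed time t_min given (duration, temp) segments.

    Segments are held in order (nucleation, plugging, annealing); after the
    last segment the final (annealing) temperature is held.
    """
    elapsed = 0.0
    for duration, temp in segments:
        elapsed += duration
        if t_min < elapsed:
            return temp
    return segments[-1][1]

# (duration in min, setpoint in deg C) -- illustrative values only
segments = [(2.0, 560.0), (5.0, 620.0), (3.0, 580.0)]
setpoints = [profile_temperature(t, segments) for t in (1.0, 4.0, 8.0, 12.0)]
```

In a manufacturing line, the same schedule would be indexed by substrate position rather than time, as the abstract notes.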

  9. Discrete vs. Continuous Mapping of Facial Electromyography for Human-Machine-Interface Control: Performance and Training Effects

    Science.gov (United States)

    Cler, Meredith J.; Stepp, Cara E.

    2015-01-01

    Individuals with high spinal cord injuries are unable to operate a keyboard and mouse with their hands. In this experiment, we compared two systems using surface electromyography (sEMG) recorded from facial muscles to control an onscreen keyboard to type five-letter words. Both systems used five sEMG sensors to capture muscle activity during five distinct facial gestures that were mapped to five cursor commands: move left, move right, move up, move down, and “click”. One system used a discrete movement and feedback algorithm in which the user produced one quick facial gesture, causing a corresponding discrete movement to an adjacent letter. The other system was continuously updated and allowed the user to control the cursor’s velocity by relative activation between different sEMG channels. Participants were trained on one system for four sessions on consecutive days, followed by one crossover session on the untrained system. Information transfer rates (ITRs) were high for both systems compared to other potential input modalities, both initially and with training (Session 1: 62.1 bits/min, Session 4: 105.1 bits/min). Users of the continuous system showed significantly higher ITRs than the discrete users. Future development will focus on improvements to both systems, which may offer differential advantages for users with various motor impairments. PMID:25616053
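
The bits/min figures quoted are information transfer rates; assuming the Wolpaw convention commonly used for such interfaces (N selectable commands, accuracy P, selections per minute), the computation is:

```python
import math

def itr_bits_per_min(n_targets, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits per minute."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)  # perfect accuracy: log2(N) bits per selection
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# e.g. 5 commands (4 moves + click) at 95% accuracy, 30 selections/min.
# The accuracy and rate here are assumed values, not the study's data.
rate = itr_bits_per_min(5, 0.95, 30)
```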

  10. The Performance of the Smart Cities in China—A Comparative Study by Means of Self-Organizing Maps and Social Networks Analysis

    Directory of Open Access Journals (Sweden)

    Dong Lu

    2015-06-01

    Full Text Available Smart cities link city services, citizens, resources and infrastructure together and form the heart of modern society. As "smart" ecosystems, smart cities focus on sustainable growth, efficiency, productivity and environmentally friendly development. Compared with the European Union, North America and other regions, smart cities in China are still at a preliminary stage. This study offers a comparative analysis of ten smart cities in China on the basis of an extensive database covering two time periods: 2005-2007 and 2008-2010. The unsupervised self-organizing map (SOM) neural network analysis is adopted to map out the various cities based on their performance. The demonstration effect and mutual influences among these ten smart cities are also discussed using social network analysis. Based on the smart city performance and cluster network, current problems for smart city development in China are pointed out. Future research directions for smart city research are discussed at the end of this paper.
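
A minimal self-organizing map training loop of the kind used to cluster the cities; grid size, data, and hyperparameters are toy assumptions, not the study's setup:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=30, lr=0.5, sigma=1.0, seed=0):
    """Train a small SOM; returns node weights of shape grid + (n_features,)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for _ in range(epochs):
        for x in data:
            # best-matching unit = node whose weights are closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), (h, w))
            # Gaussian neighborhood pulls the BMU and nearby nodes toward x
            grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            influence = np.exp(-grid_d2 / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

# Two well-separated "city indicator" clusters should land on distinct nodes.
data = np.vstack([np.full((5, 4), 0.0), np.full((5, 4), 10.0)])
som = train_som(data)
```

Cities mapping to the same or neighboring nodes form the performance clusters discussed in the study.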

  11. Applied physics

    International Nuclear Information System (INIS)

    Anon.

    1980-01-01

    The Physics Division research program that is dedicated primarily to applied research goals involves the interaction of energetic particles with solids. This applied research is carried out in conjunction with the basic research studies from which it evolved.

  12. An Efficient Model for NPD Performance Evaluation Using DEMATEL and Fuzzy ANP—Applied to the TFT-LCD Touch Panel Industry in Taiwan

    Directory of Open Access Journals (Sweden)

    Wen-Chin Chen

    2015-10-01

    Full Text Available As today's competitive market shortens the life cycle of products, new products should be designed to meet customer demand in a dynamic marketing environment, so as to efficiently enhance the product strength of new products and maximize profit. Hence, the key to success for enterprises, particularly those trying to survive in the intensely competitive market in Taiwan, is the precise evaluation of new product development (NPD) performance. This study examines the thin film transistor-liquid crystal display (TFT-LCD) touch panel industry and establishes an integrated model of NPD performance evaluation for enterprises. First, a literature review and interviews with experts are conducted to select the four aspects and 15 criteria that are the main factors affecting NPD performance evaluation. Second, the Decision Making Trial and Evaluation Laboratory (DEMATEL) method is employed to identify the interrelationships among those factors. Finally, fuzzy theory is applied to resolve the linguistic hedges and an Analytic Network Process (ANP) is adopted to obtain the weights of all factors. A case study is performed to validate the proposed model in a Taiwanese TFT-LCD company. It not only provides the decision maker with a guidance system but also increases the competitive advantages of the TFT-LCD industry in designing new products in the future.
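
The DEMATEL step can be sketched from its standard definition: normalize the direct-influence matrix and form the total-relation matrix T = N (I - N)^-1, whose row and column sums give each factor's prominence and net cause/effect role. The 4x4 influence scores below are illustrative, not the study's expert ratings:

```python
import numpy as np

def dematel_total_relation(direct):
    """Total-relation matrix T = N (I - N)^-1 from a direct-influence matrix."""
    D = np.asarray(direct, float)
    # normalize so the largest row/column sum becomes 1
    s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
    N = D / s
    I = np.eye(len(D))
    return N @ np.linalg.inv(I - N)

# Illustrative expert ratings (0 = no influence, 3 = strong influence)
direct = np.array([
    [0, 3, 2, 1],
    [1, 0, 2, 1],
    [1, 2, 0, 3],
    [2, 1, 1, 0],
], float)

T = dematel_total_relation(direct)
prominence = T.sum(axis=1) + T.sum(axis=0)  # D + R: overall importance
relation = T.sum(axis=1) - T.sum(axis=0)    # D - R: cause (+) vs effect (-)
```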

  13. The self-determination theory applied in the analysis of motivation and academic performance of accounting students in a Brazilian public university

    Directory of Open Access Journals (Sweden)

    Marina Salgado Borges

    2017-08-01

    Full Text Available The aim of this study was to analyze the relations between academic performance and motivation of Accounting students in a Brazilian public university, based on Self-Determination Theory. Methodologically, structured questionnaires containing the Brazilian version of the Academic Motivation Scale (AMS) were applied in classrooms to a sample of 316 students enrolled from the second to tenth periods of the course, equivalent to 37.2% of the total number of students. Data were analyzed using descriptive statistics, exploratory factor analysis (EFA) and multiple linear regression analysis with Ordinary Least Squares (OLS). The regression analyses indicated a significant relationship between motivation and academic performance. The factors related to intrinsic motivation and to extrinsic motivation by identified regulation are positively correlated with the academic performance of students. On the other hand, the extrinsic motivation by introjected regulation factor is negatively correlated with the student's academic performance coefficient (CRA). Based on the sample analyzed, the results showed that the relationship between motivation and the student's CRA depends on the type of motivation present in each one.

  14. Applied Neuroscience Laboratory Complex

    Data.gov (United States)

    Federal Laboratory Consortium — Located at WPAFB, Ohio, the Applied Neuroscience lab researches and develops technologies to optimize Airmen individual and team performance across all AF domains....

  15. Column-Oriented Storage Techniques for MapReduce

    OpenAIRE

    Floratou, Avrilia; Patel, Jignesh; Shekita, Eugene; Tata, Sandeep

    2011-01-01

    Users of MapReduce often run into performance problems when they scale up their workloads. Many of the problems they encounter can be overcome by applying techniques learned from over three decades of research on parallel DBMSs. However, translating these techniques to a MapReduce implementation such as Hadoop presents unique challenges that can lead to new design choices. This paper describes how column-oriented storage techniques can be incorporated in Hadoop in a way that preserves its pop...
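
The core benefit of column-oriented storage can be illustrated with a toy row-to-column conversion, independent of Hadoop's actual file formats (the record layout here is an assumption for illustration):

```python
# Row-major records, as a MapReduce job would normally scan them.
rows = [
    {"id": 1, "name": "a", "value": 10},
    {"id": 2, "name": "b", "value": 20},
    {"id": 3, "name": "c", "value": 30},
]

def to_columns(rows):
    """Convert row-major records into a column-major layout."""
    return {key: [r[key] for r in rows] for key in rows[0]}

columns = to_columns(rows)

# A scan that only needs "value" touches one contiguous list instead of
# deserializing every full row -- the I/O saving columnar storage offers.
total = sum(columns["value"])
```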

  16. PERFORMANCE

    Directory of Open Access Journals (Sweden)

    M Cilli

    2014-10-01

    Full Text Available This study aimed to investigate the kinematic and kinetic changes when resistance, produced using different percentages of body weight, is applied in horizontal and vertical directions during jumping movements in a dynamic warm-up. The subjects were 35 voluntary male athletes (19 basketball and 16 volleyball players; age: 23.4 ± 1.4 years, training experience: 9.6 ± 2.7 years; height: 177.2 ± 5.7 cm, body weight: 69.9 ± 6.9 kg) studying Physical Education, who had a jump training background and who were training for 2 hours on 4 days a week. A dynamic warm-up protocol containing seven specific resistance movements, with resistance corresponding to different percentages of body weight (2%, 4%, 6%, 8%, 10%), was applied randomly on non-consecutive days. Effects of the different warm-up protocols were assessed by pre-/post-exercise changes in jump height in the countermovement jump (CMJ) and the squat jump (SJ), measured using a force platform, and by changes in hip and knee joint angles at the end of the eccentric phase, measured using a video camera. A significant increase in jump height was observed after the dynamic resistance warm-up conducted with different percentages of body weight (p<0.05). In jump movements before and after the warm-up, no significant difference was observed between the vertical ground reaction forces applied by the athletes (p>0.05), while in some resistance conditions a significant reduction was observed in hip and knee joint angles (p<0.05). The dynamic resistance warm-up method was found to change the kinematics of jumping movements as well as to increase jump height values. As a result, dynamic warm-up exercises with resistance corresponding to 6-10% of body weight, applied in horizontal and vertical directions, could be used to acutely increase jump performance.

  17. Analysis on Bilateral Hindlimb Mapping in Motor Cortex of the Rat by an Intracortical Microstimulation Method

    OpenAIRE

    Seong, Han Yu; Cho, Ji Young; Choi, Byeong Sam; Min, Joong Kee; Kim, Yong Hwan; Roh, Sung Woo; Kim, Jeong Hoon; Jeon, Sang Ryong

    2014-01-01

    Intracortical microstimulation (ICMS) is a technique that was developed to derive movement representation of the motor cortex. Although rats are now commonly used in motor mapping studies, the precise characteristics of rat motor map, including symmetry and consistency across animals, and the possibility of repeated stimulation have not yet been established. We performed bilateral hindlimb mapping of motor cortex in six Sprague-Dawley rats using ICMS. ICMS was applied to the left and the righ...

  18. Map Usage in Virtual Environments

    National Research Council Canada - National Science Library

    Cevik, Helsin

    1998-01-01

    ... of map representation as an aid in performing navigation tasks. The approach taken was first to determine and then investigate the parameters that affect virtual map representation through an experiment designed specifically for this thesis...

  19. Tracing QTLs for Leaf Blast Resistance and Agronomic Performance of Finger Millet (Eleusine coracana (L.) Gaertn.) Genotypes through Association Mapping and in silico Comparative Genomics Analyses.

    Directory of Open Access Journals (Sweden)

    M Ramakrishnan

    Finger millet is one of the small millets with high nutritive value. This crop is vulnerable to blast disease caused by Pyricularia grisea, which occurs annually during the rainy and winter seasons. Leaf blast occurs at an early crop stage and is highly damaging. Mapping of resistance genes and other quantitative trait loci (QTLs) for agronomic performance can be of great use for improving finger millet genotypes. Evaluation of one hundred and twenty-eight finger millet genotypes in natural field conditions revealed that leaf blast caused a severe setback in agronomic performance for susceptible genotypes, the most significant traits being plant height and root length. Plant height was reduced under disease severity while root length was increased. Among the genotypes, IE4795 showed a superior response in terms of both disease resistance and agronomic performance. A total of seven unambiguous QTLs were found to be associated with various agronomic traits, including leaf blast resistance, by association mapping analysis. The markers UGEP101 and UGEP95 were strongly associated with blast resistance. UGEP98 was associated with tiller number, and UGEP9 was associated with root length and seed yield. Cross-species validation of markers revealed that 12 candidate genes were associated with 8 QTLs in the genomes of grass species such as rice, foxtail millet, maize, Brachypodium stacei, B. distachyon, Panicum hallii and switchgrass. Several candidate genes were found proximal to orthologous sequences of the identified QTLs, such as 1,4-β-glucanase for leaf blast resistance, cytokinin dehydrogenase (CKX) for tiller production, calmodulin (CaM) binding protein for seed yield, and pectin methylesterase inhibitor (PMEI) for root growth and development. Most of these QTLs and their putatively associated candidate genes are reported for the first time in finger millet. On validation, these novel QTLs may be utilized in future for marker-assisted breeding for the development of

  20. Tracing QTLs for Leaf Blast Resistance and Agronomic Performance of Finger Millet (Eleusine coracana (L.) Gaertn.) Genotypes through Association Mapping and in silico Comparative Genomics Analyses.

    Science.gov (United States)

    Ramakrishnan, M; Antony Ceasar, S; Duraipandiyan, V; Vinod, K K; Kalpana, Krishnan; Al-Dhabi, N A; Ignacimuthu, S

    2016-01-01

    Finger millet is one of the small millets with high nutritive value. This crop is vulnerable to blast disease caused by Pyricularia grisea, which occurs annually during the rainy and winter seasons. Leaf blast occurs at an early crop stage and is highly damaging. Mapping of resistance genes and other quantitative trait loci (QTLs) for agronomic performance can be of great use for improving finger millet genotypes. Evaluation of one hundred and twenty-eight finger millet genotypes in natural field conditions revealed that leaf blast caused a severe setback in agronomic performance for susceptible genotypes, the most significant traits being plant height and root length. Plant height was reduced under disease severity while root length was increased. Among the genotypes, IE4795 showed a superior response in terms of both disease resistance and agronomic performance. A total of seven unambiguous QTLs were found to be associated with various agronomic traits, including leaf blast resistance, by association mapping analysis. The markers UGEP101 and UGEP95 were strongly associated with blast resistance. UGEP98 was associated with tiller number, and UGEP9 was associated with root length and seed yield. Cross-species validation of markers revealed that 12 candidate genes were associated with 8 QTLs in the genomes of grass species such as rice, foxtail millet, maize, Brachypodium stacei, B. distachyon, Panicum hallii and switchgrass. Several candidate genes were found proximal to orthologous sequences of the identified QTLs, such as 1,4-β-glucanase for leaf blast resistance, cytokinin dehydrogenase (CKX) for tiller production, calmodulin (CaM) binding protein for seed yield, and pectin methylesterase inhibitor (PMEI) for root growth and development. Most of these QTLs and their putatively associated candidate genes are reported for the first time in finger millet. On validation, these novel QTLs may be utilized in future for marker-assisted breeding for the development of fungal

  1. Performance of a fast and high-resolution multi-echo spin-echo sequence for prostate T2 mapping across multiple systems.

    Science.gov (United States)

    van Houdt, Petra J; Agarwal, Harsh K; van Buuren, Laurens D; Heijmink, Stijn W T P J; Haack, Søren; van der Poel, Henk G; Ghobadi, Ghazaleh; Pos, Floris J; Peeters, Johannes M; Choyke, Peter L; van der Heide, Uulke A

    2018-03-01

    To evaluate the performance of a multi-echo spin-echo sequence with k-t undersampling scheme (k-t T2) in prostate cancer. Phantom experiments were performed at five systems to estimate the bias, short-term repeatability, and reproducibility across all systems expressed with the within-subject coefficient of variation (wCV). Monthly measurements were performed on two systems for long-term repeatability estimation. To evaluate clinical repeatability, two T2 maps (voxel size 0.8 × 0.8 × 3 mm³; 5 min) were acquired at separate visits on one system for 13 prostate cancer patients. Repeatability was assessed per patient in relation to spatial resolution. T2 values were compared for tumor, peripheral zone, and transition zone. Phantom measurements showed a small bias (median = -0.9 ms) and good short-term repeatability (median wCV = 0.5%). Long-term repeatability was 0.9 and 1.1% and reproducibility between systems was 1.7%. The median bias observed in patients was -1.1 ms. At voxel level, the median wCV was 15%, dropping to 4% for structures of 0.5 cm³. The median tumor T2 values (79 ms) were significantly lower (P < 0.001) than in the peripheral zone (149 ms), but overlapped with the transition zone (91 ms). Reproducible T2 mapping of the prostate is feasible with good spatial resolution in a clinically reasonable scan time, allowing reliable measurement of T2 in structures as small as 0.5 cm³. Magn Reson Med 79:1586-1594, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
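    The within-subject coefficient of variation (wCV) used throughout this abstract can be computed from paired repeat measurements. A minimal numpy sketch with made-up T2 values (the study's raw data are not reproduced here):

```python
import numpy as np

# Hypothetical paired repeat T2 measurements (ms) for five subjects at two
# visits; the values are illustrative, not data from the study.
visit1 = np.array([148.0, 152.0, 150.0, 145.0, 155.0])
visit2 = np.array([150.0, 149.0, 153.0, 147.0, 152.0])

# Within-subject SD from paired repeats: wSD = sqrt(sum(d_i^2) / (2n))
d = visit1 - visit2
wsd = np.sqrt(np.sum(d**2) / (2 * d.size))

# Within-subject coefficient of variation: wSD divided by the overall mean
wcv = wsd / np.mean(np.concatenate([visit1, visit2]))
```

Reported as a percentage (100 × wCV), this is the repeatability metric quoted for the phantom and patient measurements.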

  2. Site-condition map for Portugal, Western Iberia: methodology and constraints on the performance of Vs30 proxies for stable continental regions in Europe.

    Science.gov (United States)

    Vilanova, S. P.; Narciso, J.; Carvalho, J. P.; Cancela, C.; Lopes, I.; Nemser, E. S.; Borges, J.

    2014-12-01

    Information on the amplification characteristics of near-surface formations in a regional sense is essential to adequately represent both seismic hazard maps and ground shaking maps. Due to the scarcity of shear-wave velocity data in most regions, several methods have been proposed to obtain first-order representations of Vs30. These include the surface-geology method and the topographic-slope method. The latter has become the standard way of incorporating site effects into regional studies worldwide, given the convenience provided by the global Vs30 Internet server. In the framework of project SCENE we developed a shear-wave velocity database for Portugal. The database consists of 87 shear-wave velocity depth profiles from a variety of lithological and geological formations. We used an iterative three-step procedure to develop the Vs30-based site-condition map: 1) define a preliminary set of geologically defined units based on the literature; 2) calculate the distribution of Vs30 for each unit; and 3) perform statistical tests to estimate the significance of the difference in Vs30 distribution characteristics between the units. The units were merged according to the results of the statistical tests and the procedure was repeated. We started by classifying the sites into six generalized geological units. The final set consists of three units only: F1 (igneous, metamorphic and old sedimentary rocks); F2 (Neogene and Pleistocene formations); and F3 (Holocene deposits). We used the database to evaluate the performance of Vs30 proxies. The use of proxies based either on geological units or on correlations with the topographic slope shows relatively unbiased total residual distributions of the logarithm of Vs30. However, the performance of the methods varies significantly with the generalized geological unit analyzed. Both methods are biased towards lower values of Vs30 for rock formations. The topographic-slope method is
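    The statistical step of the iterative procedure — testing whether the Vs30 distributions of candidate units differ — can be sketched with a non-parametric test such as Kruskal-Wallis. The sample values below are invented for illustration, and the abstract does not state which test the authors used, so the choice of test is an assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical Vs30 samples (m/s) for the three final generalized units;
# magnitudes loosely follow the rock/sediment contrast described above.
f1_rock = rng.normal(760, 120, 30)      # igneous/metamorphic/old sedimentary
f2_neogene = rng.normal(400, 60, 30)    # Neogene/Pleistocene formations
f3_holocene = rng.normal(220, 40, 27)   # Holocene deposits

# Non-parametric test: do the Vs30 distributions of the units differ?
h, p = stats.kruskal(f1_rock, f2_neogene, f3_holocene)
# Units whose distributions do NOT differ significantly would be merged
# and the procedure repeated, as in the iterative scheme above.
```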

  3. A pioneering healthcare model applying large-scale production concepts: Principles and performance after more than 11,000 transplants at Hospital do Rim

    Directory of Open Access Journals (Sweden)

    José Medina Pestana

    The kidney transplant program at Hospital do Rim (hrim) is a unique healthcare model that applies the same principles of repetition of processes used in industrial production. This model, devised by Frederick Taylor, is founded on principles of scientific management that involve planning, rational execution of work, and distribution of responsibilities. The expected result is increased efficiency, improvement of results and optimization of resources. This model, almost completely subsidized by the Unified Health System (SUS, in the Portuguese acronym), has been used at the hrim in more than 11,000 transplants over the last 18 years. The hrim model consists of eight interconnected modules: organ procurement organization, preparation for the transplant, admission for transplant, surgical procedure, post-operative period, outpatient clinic, support units, and coordination and quality control. The flow of medical activities enables organized and systematic care of all patients. The improvement of the activities in each module is constant, with full monitoring of various administrative, health care, and performance indicators. The continuous improvement in clinical results confirms the efficiency of the program. Between 1998 and 2015, an increase was noted in graft survival (77.4 vs. 90.4%, p<0.001) and patient survival (90.5 vs. 95.1%, p=0.001). The high productivity, efficiency, and progressive improvement of the results obtained with this model suggest that it could be applied to other therapeutic areas that require large-scale care, preserving the humanistic characteristic of providing health care activity.

  4. A pioneering healthcare model applying large-scale production concepts: Principles and performance after more than 11,000 transplants at Hospital do Rim.

    Science.gov (United States)

    Pestana, José Medina

    2016-10-01

    The kidney transplant program at Hospital do Rim (hrim) is a unique healthcare model that applies the same principles of repetition of processes used in industrial production. This model, devised by Frederick Taylor, is founded on principles of scientific management that involve planning, rational execution of work, and distribution of responsibilities. The expected result is increased efficiency, improvement of results and optimization of resources. This model, almost completely subsidized by the Unified Health System (SUS, in the Portuguese acronym), has been used at the hrim in more than 11,000 transplants over the last 18 years. The hrim model consists of eight interconnected modules: organ procurement organization, preparation for the transplant, admission for transplant, surgical procedure, post-operative period, outpatient clinic, support units, and coordination and quality control. The flow of medical activities enables organized and systematic care of all patients. The improvement of the activities in each module is constant, with full monitoring of various administrative, health care, and performance indicators. The continuous improvement in clinical results confirms the efficiency of the program. Between 1998 and 2015, an increase was noted in graft survival (77.4 vs. 90.4%, p<0.001) and patient survival (90.5 vs. 95.1%, p=0.001). The high productivity, efficiency, and progressive improvement of the results obtained with this model suggest that it could be applied to other therapeutic areas that require large-scale care, preserving the humanistic characteristic of providing health care activity.

  5. System of automated map design

    International Nuclear Information System (INIS)

    Ponomarjov, S.Yu.; Rybalko, S.I.; Proskura, N.I.

    1992-01-01

    Preprint 'System of automated map design' describes a program shell for constructing territory maps by drawing contour lines of an arbitrary two-dimensional field (in particular, a radionuclide concentration field). The work schedule and data structures are described, as well as data on system performance. The preprint may be useful for experts in radioecology and for anyone involved in mapping territorial pollution or multi-purpose geochemical mapping. (author)

  6. Spatial mapping of cadmium zinc telluride materials properties and electrical response to improve device yield and performance

    CERN Document Server

    Van Scyoc, J M; Yoon, H; Gilbert, T S; Hilton, N R; Lund, J C; James, R B

    1999-01-01

    Cadmium zinc telluride has experienced tremendous growth in its application to various radiation sensing problems over the last five years. However, there are still issues with yield, particularly of the large volume devices needed for imaging and sensitivity-critical applications. Inhomogeneities of various types and on various length scales currently prevent the fabrication of large devices of high spectral performance. This paper discusses the development of a set of characterization tools for quantifying these inhomogeneities, in order to develop improvement strategies to achieve the desired cadmium zinc telluride crystals for detector fabrication.

  7. Applied Electromagnetics

    Energy Technology Data Exchange (ETDEWEB)

    Yamashita, H; Marinova, I; Cingoski, V [eds.]

    2002-07-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics.

  8. Applied Electromagnetics

    International Nuclear Information System (INIS)

    Yamashita, H.; Marinova, I.; Cingoski, V.

    2002-01-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics

  9. On the performance of an artificial bee colony optimization algorithm applied to the accident diagnosis in a PWR nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Iona Maghali S. de; Schirru, Roberto; Medeiros, Jose A.C.C., E-mail: maghali@lmp.ufrj.b, E-mail: schirru@lmp.ufrj.b, E-mail: canedo@lmp.ufrj.b [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

    2009-07-01

    The swarm-based algorithm described in this paper is a new search algorithm capable of locating good solutions efficiently and within a reasonable running time. The work presents a population-based search algorithm that mimics the food-foraging behavior of honey bee swarms and can be regarded as belonging to the category of intelligent optimization tools. In its basic version, the algorithm performs a kind of random search combined with neighborhood search and can be used for solving multi-dimensional numeric problems. Following a description of the algorithm, this paper presents a new event classification system based exclusively on the ability of the algorithm to find the best centroid positions that correctly identify an accident in a PWR nuclear power plant, thus maximizing the number of correctly classified transients. The simulation results show that the performance of the proposed algorithm is comparable to that of other population-based algorithms when applied to the same problem, with the advantage of employing fewer control parameters. (author)
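    A condensed sketch of the basic artificial bee colony (ABC) loop described above — random search combined with neighborhood search — applied to a toy objective. The paper's actual objective (finding centroid positions that maximize correct transient classification) is not reproducible from the abstract, so a simple sphere function stands in, and the employed and onlooker phases are merged for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Toy objective standing in for the classification cost; the paper's
    # objective is not reproducible from the abstract.
    return float(np.sum(x**2))

def abc_minimize(f, dim=2, n_sources=10, limit=20, iters=200, bound=5.0):
    """Condensed ABC: neighbourhood search around food sources plus
    scout-driven random restarts of exhausted sources."""
    X = rng.uniform(-bound, bound, (n_sources, dim))
    cost = np.array([f(x) for x in X])
    trials = np.zeros(n_sources)
    best_x, best_c = X[np.argmin(cost)].copy(), float(cost.min())
    for _ in range(iters):
        for i in range(n_sources):
            # Pick a random partner source k != i and take a scaled step
            k = rng.integers(n_sources - 1)
            k = k + (k >= i)
            phi = rng.uniform(-1, 1, dim)
            cand = X[i] + phi * (X[i] - X[k])
            c = f(cand)
            if c < cost[i]:                 # greedy selection
                X[i], cost[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1
        # Memorize the best source found so far (it may later be abandoned)
        j = int(np.argmin(cost))
        if cost[j] < best_c:
            best_x, best_c = X[j].copy(), float(cost[j])
        # Scout phase: re-initialize sources that stopped improving
        worn = trials > limit
        if worn.any():
            X[worn] = rng.uniform(-bound, bound, (int(worn.sum()), dim))
            cost[worn] = [f(x) for x in X[worn]]
            trials[worn] = 0
    return best_x, best_c

x_best, c_best = abc_minimize(sphere)
```

The trial counter and scout restart are the "fewer control parameters" the abstract alludes to: besides population size and iteration count, only the abandonment limit needs tuning.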

  10. On the performance of an artificial bee colony optimization algorithm applied to the accident diagnosis in a PWR nuclear power plant

    International Nuclear Information System (INIS)

    Oliveira, Iona Maghali S. de; Schirru, Roberto; Medeiros, Jose A.C.C.

    2009-01-01

    The swarm-based algorithm described in this paper is a new search algorithm capable of locating good solutions efficiently and within a reasonable running time. The work presents a population-based search algorithm that mimics the food-foraging behavior of honey bee swarms and can be regarded as belonging to the category of intelligent optimization tools. In its basic version, the algorithm performs a kind of random search combined with neighborhood search and can be used for solving multi-dimensional numeric problems. Following a description of the algorithm, this paper presents a new event classification system based exclusively on the ability of the algorithm to find the best centroid positions that correctly identify an accident in a PWR nuclear power plant, thus maximizing the number of correctly classified transients. The simulation results show that the performance of the proposed algorithm is comparable to that of other population-based algorithms when applied to the same problem, with the advantage of employing fewer control parameters. (author)

  11. The force applied to successfully turn a foetus during reattempts of external cephalic version is substantially reduced when performed under spinal analgesia.

    Science.gov (United States)

    Suen, Stephen Sik Hung; Khaw, Kim S; Law, Lai Wa; Sahota, Daljit Singh; Lee, Shara Wee Yee; Lau, Tze Kin; Leung, Tak Yeung

    2012-06-01

    To compare the forces exerted on the maternal abdomen during external cephalic version (ECV) between (1) primary attempts performed without spinal analgesia (SA), which failed, and (2) the subsequent reattempts performed under SA. Patients with an uncomplicated singleton breech-presenting pregnancy suitable for ECV were recruited. During ECV, the operator wore a pair of gloves fitted with thin piezo-resistive pressure sensors measuring the contact pressure between the operator's hands and the maternal abdomen. For patients who had a failed ECV, reattempts by the same operator were made with the patients under SA, and the applied force was measured in the same manner. The profile of the exerted forces over time during each attempt was analyzed and denoted by the pressure-time integral (PTI, mmHg·sec). Pain was also graded by patients using a visual analogue scale. Both PTI and pain score before and after the use of SA were then compared. Overall, eight patients who had a failed ECV without SA underwent a reattempt with SA. All of them had successful version, and the median PTI of the successful attempts under SA was lower than that of the previous failed attempts performed without SA (127,386 mmHg·sec vs. 298,424 mmHg·sec; p = 0.017). All of them also reported a pain score of 0, significantly lower than before (median 7.5; p = 0.016). SA improves the success rate of ECV and reduces the force required for successful version.
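    The pressure-time integral (PTI, mmHg·sec) reported above is simply the area under the measured contact-pressure curve over an attempt. A minimal sketch with a made-up pressure trace (the real traces come from the glove-mounted sensors and are not reproduced here):

```python
import numpy as np

# Hypothetical contact-pressure trace (mmHg) sampled at 10 Hz over a 60 s
# ECV attempt; the profile is illustrative only.
t = np.arange(0.0, 60.0, 0.1)
pressure = 80.0 + 40.0 * np.sin(np.pi * t / 60.0)

# Pressure-time integral (PTI, mmHg*sec): trapezoidal area under the curve
pti = float(np.sum(0.5 * (pressure[1:] + pressure[:-1]) * np.diff(t)))
```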

  12. Influence of the Applied Working Fluid and the Arrangement of the Steering Edges on Multi-Vane Expander Performance in Micro ORC System

    Directory of Open Access Journals (Sweden)

    Józef Rak

    2018-04-01

    Micro-power domestic organic Rankine cycle (ORC) systems are nowadays of great interest. These systems are considered for combined heat and power (CHP) generation in domestic and distributed applications. The main issues in ORC system design are the selection of the expander and the working fluid. Thanks to their positive features, multi-vane expanders are especially promising for application in micro-power ORC systems. These expanders are very simple in design, small in dimensions, inexpensive, and feature low gas flow capacity and expansion ratio. The application of multi-vane expanders in ORC systems is innovative and currently limited to prototype applications. However, a literature review indicates growing interest in these machines and their potential for practical implementation. For this reason, it is necessary to conduct detailed studies of multi-vane expander operation in ORC systems. In this paper the results of experimental and numerical investigations on the influence of the applied working fluid and the arrangement of the steering edges on multi-vane expander performance in a micro ORC system are reported. The experiments were performed using a specially designed lab test stand, i.e. the domestic ORC system. Numerical simulations were performed in ANSYS CFX software (ANSYS, Inc., Canonsburg, PA, USA) and were focused on determining the expander performance under various flow conditions with different working fluids. Detailed numerical analysis of the arrangement of the machine's steering edges showed the existence of an optimal mutual position of the inlet and outlet ports for which the multi-vane expander achieves maximum internal work and internal efficiency.

  13. Dual-source dual-energy CT angiography with virtual non-enhanced images and iodine map for active gastrointestinal bleeding: Image quality, radiation dose and diagnostic performance

    International Nuclear Information System (INIS)

    Sun, Hao; Hou, Xin-Yi; Xue, Hua-Dan; Li, Xiao-Guang; Jin, Zheng-Yu; Qian, Jia-Ming; Yu, Jian-Chun; Zhu, Hua-Dong

    2015-01-01

    in 84 patients, 83 (83/84, 98.8%) of which were confirmed by one or more reference standard. The AUC was 0.935 ± 0.027 and 0.947 ± 0.026 for protocols 1 and 2, respectively. There was no significant difference between protocols 1 and 2 for diagnostic performance (Z = 1.672, P > 0.05). The radiation dose reduction achieved by omitting the TNE acquisition was (30.11 ± 6.32)%. Conclusion: DSDECTA with arterial phase with single-source mode, portal-venous phase with dual-energy mode and post-processing VNE image sets and iodine map could act as an accurate screening method for detection and localization of active GIB with lower radiation dose

  14. Dual-source dual-energy CT angiography with virtual non-enhanced images and iodine map for active gastrointestinal bleeding: Image quality, radiation dose and diagnostic performance

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Hao, E-mail: sunhao_robert@126.com [Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No. 1, Wangfujing Street, Dongcheng District, Beijing 100730 (China); Hou, Xin-Yi, E-mail: hxy_pumc@126.com [Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing (China); Xue, Hua-Dan, E-mail: bjdanna95@hotmail.com [Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No. 1, Wangfujing Street, Dongcheng District, Beijing 100730 (China); Li, Xiao-Guang, E-mail: xglee88@126.com [Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No. 1, Wangfujing Street, Dongcheng District, Beijing 100730 (China); Jin, Zheng-Yu, E-mail: zhengyu_jin@126.com [Department of Radiology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No. 1, Wangfujing Street, Dongcheng District, Beijing 100730 (China); Qian, Jia-Ming, E-mail: qjiaming57@gmail.com [Department of Gastroenterology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing (China); Yu, Jian-Chun, E-mail: yu-jch@163.com [Department of General Surgery, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing (China); Zhu, Hua-Dong, E-mail: huadongzhu@hotmail.com [Department of Emergency, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing (China)

    2015-05-15

    in 84 patients, 83 (83/84, 98.8%) of which were confirmed by one or more reference standard. The AUC was 0.935 ± 0.027 and 0.947 ± 0.026 for protocols 1 and 2, respectively. There was no significant difference between protocols 1 and 2 for diagnostic performance (Z = 1.672, P > 0.05). The radiation dose reduction achieved by omitting the TNE acquisition was (30.11 ± 6.32)%. Conclusion: DSDECTA with arterial phase with single-source mode, portal-venous phase with dual-energy mode and post-processing VNE image sets and iodine map could act as an accurate screening method for detection and localization of active GIB with lower radiation dose.

  15. Current trends in geomorphological mapping

    Science.gov (United States)

    Seijmonsbergen, A. C.

    2012-04-01

    Geomorphological mapping is a field currently in motion, driven by technological advances and the availability of new high-resolution data. As a consequence, classic (paper) geomorphological maps, which were the standard for more than 50 years, are rapidly being replaced by digital geomorphological information layers. This is witnessed by the following developments: 1. the conversion of classic paper maps into digital information layers, mainly performed in a digital mapping environment such as a Geographical Information System; 2. updating the locational precision and the content of the converted maps by adding more geomorphological detail taken from high-resolution elevation and/or image data; 3. (semi-)automated extraction and classification of geomorphological features from digital elevation models, broadly separated into unsupervised and supervised classification techniques; and 4. new digital visualization/cartographic techniques and reading interfaces. New digital geomorphological information layers can be based on manual digitization of polygons using DEMs and/or aerial photographs, or prepared through (semi-)automated extraction and delineation of geomorphological features. DEMs are often used as a basis to derive Land Surface Parameter information, which is used as input for (un)supervised classification techniques. Especially when using high-resolution data, object-based classification is used as an alternative to traditional pixel-based classification to cluster grid cells into homogeneous objects, which can be classified as geomorphological features. Classic map content can also be used as training material for the supervised classification of geomorphological features. In the classification process, rule-based protocols, including expert-knowledge input, are used to map specific geomorphological features or entire landscapes.
    Current (semi-)automated classification techniques are increasingly able to extract morphometric, hydrological

  16. Map of Nasca Geoglyphs

    Science.gov (United States)

    Hanzalová, K.; Pavelka, K.

    2013-07-01

    The Czech Technical University in Prague, in cooperation with the University of Applied Sciences in Dresden (Germany), works on the Nasca Project. The cooperation started in 2004 and much work has been done since then. All work is connected with the Nasca lines in southern Peru. The Nasca Project started in 1995 and its main target is the documentation and conservation of the Nasca lines. Most of the project results are presented as a WebGIS application via the Internet. In the face of the impending destruction of the soil drawings, it is possible to preserve this world cultural heritage for posterity, at least in digital form. Creating a map of the Nasca lines is therefore very useful. The map exists in digital form and is also available as a paper map. It contains the planimetric component, map lettering, and altimetry. The thematic content is a vector layer of the geoglyphs in Nasca, Peru. The planimetric basis consists of georeferenced satellite images; the altimetry is derived from a digital elevation model. The map was created in ArcGIS software.

  17. MAP OF NASCA GEOGLYPHS

    Directory of Open Access Journals (Sweden)

    K. Hanzalová

    2013-07-01

    The Czech Technical University in Prague, in cooperation with the University of Applied Sciences in Dresden (Germany), works on the Nasca Project. The cooperation started in 2004 and much work has been done since then. All work is connected with the Nasca lines in southern Peru. The Nasca Project started in 1995 and its main target is the documentation and conservation of the Nasca lines. Most of the project results are presented as a WebGIS application via the Internet. In the face of the impending destruction of the soil drawings, it is possible to preserve this world cultural heritage for posterity, at least in digital form. Creating a map of the Nasca lines is therefore very useful. The map exists in digital form and is also available as a paper map. It contains the planimetric component, map lettering, and altimetry. The thematic content is a vector layer of the geoglyphs in Nasca, Peru. The planimetric basis consists of georeferenced satellite images; the altimetry is derived from a digital elevation model. The map was created in ArcGIS software.

  18. A fast image encryption algorithm based on chaotic map

    Science.gov (United States)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine-ICMIC modulation map (2D-SIMM) is proposed based on a closed-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagrams, the Lyapunov exponent spectrum and complexity measures. The analysis shows that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed. In this algorithm, the confusion and diffusion processes are combined into one stage. A chaotic shift transform (CST) is proposed to efficiently change image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical, differential, brute-force, known-plaintext and chosen-plaintext attacks.
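    The confusion (position scrambling) and diffusion (value substitution) stages described above can be sketched with any chaotic sequence generator. The 2D-SIMM equations are not given in the abstract, so the classic logistic map stands in below purely to illustrate the permute-then-XOR pipeline, not the paper's actual scheme:

```python
import numpy as np

def logistic_seq(x0, r, n):
    """Stand-in chaotic generator (logistic map x -> r*x*(1-x))."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.3141, r=3.99):
    flat = img.flatten()
    n = flat.size
    seq = logistic_seq(x0, r, 2 * n)
    # Confusion: permute pixel positions by arg-sorting a chaotic sequence
    perm = np.argsort(seq[:n])
    scrambled = flat[perm]
    # Diffusion: XOR pixel values with a chaotic keystream
    key = (seq[n:] * 256).astype(np.uint8)
    return (scrambled ^ key).reshape(img.shape), perm, key

def decrypt(cipher, perm, key):
    flat = cipher.flatten() ^ key       # undo diffusion
    out = np.empty_like(flat)
    out[perm] = flat                    # undo confusion (inverse permutation)
    return out.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher, perm, key = encrypt(img)
restored = decrypt(cipher, perm, key)
```

In the paper's algorithm the two stages run in a single pass (the CST and the row/column substitutions operate simultaneously); here they are shown sequentially for clarity.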

  19. Digital soil mapping: strategy for data pre-processing

    Directory of Open Access Journals (Sweden)

    Alexandre ten Caten

    2012-08-01

    The region of greatest variability on soil maps is along the edges of their polygons, causing disagreement among pedologists about the appropriate description of soil classes at these locations. The objective of this work was to propose a strategy for data pre-processing applied to digital soil mapping (DSM). Soil polygons on a training map were shrunk by 100 and 160 m. This strategy prevented covariates located near the edges of the soil classes from being used by the Decision Tree (DT) models. Three DT models, derived from eight predictive covariates related to relief and organism factors sampled on the original polygons of a soil map and on polygons shrunk by 100 and 160 m, were used to predict soil classes. The DT model derived from observations 160 m away from the edges of the polygons on the original map is less complex and has better predictive performance.
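    The polygon-shrinking idea — excluding training samples near class edges — can be sketched on a raster soil map by iterated erosion. The toy map, cell size and two-cell shrink below are hypothetical stand-ins for the paper's vector polygons and 100/160 m buffers:

```python
import numpy as np

# Toy raster "soil map": class 0 on the left, class 1 on the right (20 x 20)
soil = np.zeros((20, 20), dtype=int)
soil[:, 10:] = 1

def interior_mask(classes, n_erosions):
    """Mark cells at least n_erosions cells away from any class edge,
    mimicking the polygon shrinking applied before sampling covariates."""
    mask = np.ones_like(classes, dtype=bool)
    for _ in range(n_erosions):
        interior = np.ones_like(mask)
        # a cell stays interior only if its 4-neighbours share its class
        # and were themselves still interior after the previous erosion
        interior[1:, :] &= (classes[1:, :] == classes[:-1, :]) & mask[:-1, :]
        interior[:-1, :] &= (classes[:-1, :] == classes[1:, :]) & mask[1:, :]
        interior[:, 1:] &= (classes[:, 1:] == classes[:, :-1]) & mask[:, :-1]
        interior[:, :-1] &= (classes[:, :-1] == classes[:, 1:]) & mask[:, 1:]
        mask = interior
    return mask

# With a 2-cell shrink, training samples come from polygon cores, not edges
keep = interior_mask(soil, 2)
```

Only cells where `keep` is True would feed the DT model, which is the mechanism by which the edge ambiguity described above is excluded from training.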

  20. Using Google Earth Engine To Apply Spectral Mixture Analysis Over Landsat 5TM Imagery To Map Fire Scars In The Alto Teles Pires River Basin, Mato Grosso State, Brazil.

    Science.gov (United States)

    Antunes Daldegan, G.; Ribeiro, F.; Roberts, D. A.

    2016-12-01

The two most extensive biomes in Brazil, the Amazon Forest and the Cerrado (the Brazilian savanna), are subject to many fire events every dry season. Both biomes are well known for their ecological and environmental importance but, due to intensive human occupation over the last decades, they have experienced high deforestation rates, with much of their natural landscape converted to agriculture and pasture. The Cerrado, as a savanna, has naturally evolved adapted to fire. According to some researchers, this biome has been exposed to fire for the last 25 million years, forging the diversification of many C4 grass species, for example. The Amazon forest does not share these characteristics, and studies have shown that forest areas that have already burned become more prone to recurrent burns. Forest patches close to open areas have their edges exposed to higher insolation and greater turbulence, drying the understory vegetation and litter and making those areas more susceptible to fire. Where grass species become established in the understory, they can be a renewable source of fuel for recurrent burns. This study aimed to identify and map fire scars in the region of the Alto Teles Pires river basin, State of Mato Grosso, Brazil, over 10 years (2002-2011). This region is located in the transition zone between the two biomes and is known for its high deforestation rates. By taking advantage of the Landsat 5 TM imagery collection available in the Google Earth Engine platform and applying Spectral Mixture Analysis (SMA) techniques, it was possible to estimate fractions of Green Vegetation (GV), Non-Photosynthetic Vegetation (NPV), and Soil, the surfaces that compose the vast majority of the landscape in the study region. Iteratively running SMA over the imagery using burned-vegetation endmembers allowed us to further identify fire scars present in the region, returning excellent results.
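The fraction-estimation step of linear SMA can be sketched as solving a small linear system: each pixel spectrum is modelled as a fraction-weighted sum of endmember spectra. The endmember values below are invented for illustration, not taken from the study:

```python
# Linear spectral mixture analysis (SMA) sketch: recover endmember
# fractions (GV, NPV, Soil) from a pixel's reflectance by solving
# pixel = E @ fractions for a square band-by-endmember system.

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Hypothetical endmember spectra over three bands (columns: GV, NPV, Soil).
E = [[0.05, 0.30, 0.20],
     [0.45, 0.35, 0.25],
     [0.20, 0.40, 0.35]]
true_fractions = [0.5, 0.3, 0.2]
pixel = [sum(E[b][k] * true_fractions[k] for k in range(3)) for b in range(3)]

fractions = solve(E, pixel)  # recovers the mixing fractions
```

With more bands than endmembers, the same model is fit by least squares instead of an exact solve, which is how SMA is usually run on Landsat imagery.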

  1. Applied superconductivity

    CERN Document Server

    Newhouse, Vernon L

    1975-01-01

Applied Superconductivity, Volume II, is part of a two-volume series on applied superconductivity. The first volume dealt with electronic applications and radiation detection, and contains a chapter on liquid helium refrigeration. The present volume discusses magnets, electromechanical applications, accelerators, and microwave and rf devices. The book opens with a chapter on high-field superconducting magnets, covering applications and magnet design. Subsequent chapters discuss superconductive machinery such as superconductive bearings and motors; rf superconducting devices; and future prospects.

  2. Topographic mapping

    Science.gov (United States)

    ,

    2008-01-01

    The U.S. Geological Survey (USGS) produced its first topographic map in 1879, the same year it was established. Today, more than 100 years and millions of map copies later, topographic mapping is still a central activity for the USGS. The topographic map remains an indispensable tool for government, science, industry, and leisure. Much has changed since early topographers traveled the unsettled West and carefully plotted the first USGS maps by hand. Advances in survey techniques, instrumentation, and design and printing technologies, as well as the use of aerial photography and satellite data, have dramatically improved mapping coverage, accuracy, and efficiency. Yet cartography, the art and science of mapping, may never before have undergone change more profound than today.

  3. Mapping shape to visuomotor mapping: learning and generalisation of sensorimotor behaviour based on contextual information.

    Directory of Open Access Journals (Sweden)

    Loes C J van Dam

    2015-03-01

Humans can learn and store multiple visuomotor mappings (dual-adaptation) when feedback for each is provided alternately. Moreover, learned context cues associated with each mapping can be used to switch between the stored mappings. However, little is known about the associative learning between cue and required visuomotor mapping, and how learning generalises to novel but similar conditions. To investigate these questions, participants performed a rapid target-pointing task while we manipulated the offset between visual feedback and movement end-points. The visual feedback was presented with horizontal offsets of different amounts, dependent on the target's shape. Participants thus needed to use different visuomotor mappings between target location and required motor response, depending on the target shape, in order to "hit" it. The target shapes were taken from a continuous set of shapes, morphed between spiky and circular. After training, we tested participants' performance, without feedback, on target shapes that had not been learned previously. We compared two hypotheses. First, we hypothesised that participants could (explicitly) extract the linear relationship between target shape and visuomotor mapping and generalise accordingly. Second, using previous findings of visuomotor learning, we developed an (implicit) Bayesian learning model that predicts generalisation more consistent with categorisation (i.e., use one mapping or the other). The experimental results show that, although learning the associations requires explicit awareness of the cues' role, participants apply the mapping corresponding to the trained shape that is most similar to the current one, consistent with the Bayesian learning model. Furthermore, the Bayesian learning model predicts that learning should slow down with increased numbers of training pairs, which was confirmed by the present results. In short, we found a good correspondence between the participants' behaviour and the predictions of the Bayesian learning model.

  4. Timed bisimulation and open maps

    DEFF Research Database (Denmark)

    Hune, Thomas; Nielsen, Mogens

    1998-01-01

Open maps have been used for defining bisimulations for a range of models, but none of these have modelled real-time. We define a category of timed transition systems, and use the general framework of open maps to obtain a notion of bisimulation. We show this to be equivalent to the standard notion of timed bisimulation. Thus the abstract results from the theory of open maps apply, e.g. the existence of canonical models and characteristic logics. Here, we provide an alternative proof of decidability of bisimulation for finite timed transition systems in terms of open maps, and illustrate the use of open maps in presenting bisimulations.

  5. Detrimental Effects of Helium Ion Irradiation on Cognitive Performance and Cortical Levels of MAP-2 in B6D2F1 Mice.

    Science.gov (United States)

    Raber, Jacob; Torres, Eileen Ruth S; Akinyeke, Tunde; Lee, Joanne; Weber Boutros, Sydney J; Turker, Mitchell S; Kronenberg, Amy

    2018-04-20

The space radiation environment includes helium (⁴He) ions that may impact brain function. As little is known about the effects of exposures to ⁴He ions on the brain, we assessed the behavioral and cognitive performance of C57BL/6J × DBA2/J F1 (B6D2F1) mice three months following irradiation with ⁴He ions (250 MeV/n; linear energy transfer (LET) = 1.6 keV/μm; 0, 21, 42 or 168 cGy). Sham-irradiated mice and mice irradiated with 21 or 168 cGy showed novel object recognition, but mice irradiated with 42 cGy did not. In the passive avoidance test, mice received a slight foot shock in a dark compartment, and latency to re-enter that compartment was assessed 24 h later. Sham-irradiated mice and mice irradiated with 21 or 42 cGy showed a higher latency on Day 2 than Day 1, but the latency to enter the dark compartment in mice irradiated with 168 cGy was comparable on both days. ⁴He ion irradiation, at 42 and 168 cGy, reduced the levels of the dendritic marker microtubule-associated protein-2 (MAP-2) in the cortex. There was an effect of radiation on apolipoprotein E (apoE) levels in the hippocampus and cortex, with higher apoE levels in mice irradiated at 42 cGy than 168 cGy and a trend towards higher apoE levels in mice irradiated at 21 than 168 cGy. In addition, in the hippocampus, there was a trend towards a negative correlation between MAP-2 and apoE levels. While reduced levels of MAP-2 in the cortex might have contributed to the altered performance in the passive avoidance test, it does not seem sufficient to do so. The higher hippocampal and cortical apoE levels in mice irradiated at 42 than 168 cGy might have served as a compensatory protective response preserving their passive avoidance memory. Thus, there were no alterations in behavioral performance in the open field or depressive-like behavior in the forced swim test, while cognitive impairments were seen in the object recognition and passive avoidance tests, but not in the contextual or cued fear conditioning tests.

  7. Self-report measures of Executive Functioning are a determinant of academic performance in first-year students at a university of applied sciences

    Directory of Open Access Journals (Sweden)

    Maria A.E. Baars

    2015-08-01

Recent studies in late adolescents (age 17+) show that brain development may proceed until around the 25th year of age. This implies that study performance in higher education could depend on the stage of brain maturation and neuropsychological development. Individual differences in the development of neuropsychological skills may thus have a substantial influence on the outcome of the educational process. This hypothesis was evaluated in a large survey of 1760 first-year students at a University of Applied Sciences, of whom 1332 were included in the current analyses because they fit the pre-set age range (17-20 years old at the start of their studies). Student characteristics and three behavioural ratings of executive functioning (EF) were evaluated with regard to their influence on academic performance. Self-report measures were used: self-reported attention, planning, and self-control & self-monitoring. Results showed that students with better self-reported EF at the start of the first year of their studies obtained more study credits at the end of that year than students with a lower EF self-rating. The correlation between self-control & self-monitoring on the one hand, and study progress on the other, appeared to differ for male and female students and to be influenced by the level of prior education. The results of this large-scale study could have practical relevance. The profound individual differences between students may at least partly be a consequence of their stage of development as adolescents. Students who show lower levels of attention control, planning and self-control/self-monitoring can be expected to have problems with study planning and study progress monitoring, and hence with study progress. The findings imply that interventions directed at the training of these (executive) functions should be developed and used in higher education in order to improve academic achievement, learning attitude and motivation.

  8. Low-Temperature Catalytic Performance of Ni-Cu/Al2O3 Catalysts for Gasoline Reforming to Produce Hydrogen Applied in Spark Ignition Engines

    Directory of Open Access Journals (Sweden)

    Le Anh Tuan

    2016-03-01

The performance of Ni-Cu/Al2O3 catalysts for steam reforming (SR) of gasoline to produce a hydrogen-rich gas mixture applied in a spark ignition (SI) engine was investigated at relatively low temperature. The structural and morphological features and catalytic activity were examined by X-ray diffractometry (XRD), scanning electron microscopy (SEM), and temperature-programmed reduction (TPR). The results showed that the addition of copper improved the dispersion of nickel and therefore facilitated the reduction of Ni at low temperature. The highest hydrogen selectivity of 70.6% was observed over the Ni-Cu/Al2O3 catalysts at a steam/carbon ratio of 0.9. With Cu promotion, a gasoline conversion of 42.6% can be achieved at 550 °C, while with Mo and Ce promotion the gasoline conversions were 31.7% and 28.3%, respectively; all higher than with the conventional Ni catalyst. On the other hand, initial durability testing showed that the conversion of gasoline over Ni-Cu/Al2O3 catalysts slightly decreased after 30 h reaction time.

  9. Optical and Electrical Performance of MOS-Structure Silicon Solar Cells with Antireflective Transparent ITO and Plasmonic Indium Nanoparticles under Applied Bias Voltage.

    Science.gov (United States)

    Ho, Wen-Jeng; Sue, Ruei-Siang; Lin, Jian-Cheng; Syu, Hong-Jang; Lin, Ching-Fuh

    2016-08-10

    This paper reports impressive improvements in the optical and electrical performance of metal-oxide-semiconductor (MOS)-structure silicon solar cells through the incorporation of plasmonic indium nanoparticles (In-NPs) and an indium-tin-oxide (ITO) electrode with periodic holes (perforations) under applied bias voltage. Samples were prepared using a plain ITO electrode or perforated ITO electrode with and without In-NPs. The samples were characterized according to optical reflectance, dark current voltage, induced capacitance voltage, external quantum efficiency, and photovoltaic current voltage. Our results indicate that induced capacitance voltage and photovoltaic current voltage both depend on bias voltage, regardless of the type of ITO electrode. Under a bias voltage of 4.0 V, MOS cells with perforated ITO and plain ITO, respectively, presented conversion efficiencies of 17.53% and 15.80%. Under a bias voltage of 4.0 V, the inclusion of In-NPs increased the efficiency of cells with perforated ITO and plain ITO to 17.80% and 16.87%, respectively.

  10. Data and performance profiles applying an adaptive truncation criterion, within linesearch-based truncated Newton methods, in large scale nonconvex optimization

    Directory of Open Access Journals (Sweden)

    Andrea Caliciotti

    2018-04-01

In this paper, we report data and experiments related to the research article entitled "An adaptive truncation criterion, for linesearch-based truncated Newton methods in large scale nonconvex optimization" by Caliciotti et al. [1]. In particular, in Caliciotti et al. [1], large scale unconstrained optimization problems are considered by applying linesearch-based truncated Newton methods. In this framework, a key point is the reduction of the number of inner iterations needed, at each outer iteration, to approximately solve the Newton equation. A novel adaptive truncation criterion is introduced in Caliciotti et al. [1] to this aim. Here, we report the details concerning numerical experiences over a commonly used test set, namely CUTEst (Gould et al., 2015 [2]). Moreover, comparisons are reported in terms of performance profiles (Dolan and Moré, 2002 [3]), adopting different parameter settings. Finally, our linesearch-based scheme is compared with a renowned trust region method, namely TRON (Lin and Moré, 1999 [4]).
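The Dolan and Moré performance profiles mentioned above can be computed directly from a solver-by-problem cost table; a minimal sketch with invented timings:

```python
# Dolan-Moré performance profiles: for solver s on problem p, the ratio
# r[p][s] = t[p][s] / min_s t[p][s]; rho_s(tau) is the fraction of problems
# solved within a factor tau of the best solver. Timings below are invented.

def performance_profile(times, taus):
    """times[p][s] = cost of solver s on problem p (float('inf') = failure)."""
    n_prob, n_solv = len(times), len(times[0])
    ratios = [[t / min(row) for t in row] for row in times]
    return {
        s: [sum(ratios[p][s] <= tau for p in range(n_prob)) / n_prob for tau in taus]
        for s in range(n_solv)
    }

times = [[1.0, 2.0],           # problem 0: solver 0 wins
         [4.0, 2.0],           # problem 1: solver 1 wins by 2x
         [3.0, float('inf')]]  # problem 2: solver 1 fails
profile = performance_profile(times, taus=[1.0, 2.0, 4.0])
# profile[0] -> [2/3, 1.0, 1.0]; profile[1] -> [1/3, 2/3, 2/3]
```

Reading the profile: rho_s(1) is the fraction of problems on which solver s was best, while the curve's height at large tau gives its overall robustness (failures never count as solved).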

  11. Graphene-coated hollow fiber membrane as the cathode in anaerobic electrochemical membrane bioreactors – Effect of configuration and applied voltage on performance and membrane fouling

    KAUST Repository

    Werner, Craig M.; Katuri, Krishna; Rao, Hari Ananda; Chen, Wei; Lai, Zhiping; Logan, Bruce E.; Amy, Gary L.; Saikaly, Pascal

    2015-01-01

    Electrically conductive, graphene-coated hollow-fiber porous membranes were used as cathodes in anaerobic electrochemical membrane bioreactors (AnEMBRs) operated at different applied voltages (0.7 V and 0.9 V) using a new rectangular reactor configuration, compared to a previous tubular design (0.7 V). The onset of biofouling was delayed and minimized in rectangular reactors operated at 0.9 V, compared to those at 0.7 V due to higher rates of hydrogen production. Maximum transmembrane pressures for the rectangular reactor were only 0.10 bar (0.7 V) or 0.05 bar (0.9 V) after 56 days of operation, compared to 0.46 bar (0.7 V) for the tubular reactor after 52 days. The thickness of the membrane biofouling layer was approximately 0.4 µm for rectangular reactors and 4 µm for the tubular reactor. Higher permeate quality (TSS = 0.05 mg/L) was achieved in the rectangular AnEMBR than the tubular AnEMBR (TSS = 17 mg/L), likely due to higher current densities that minimized the accumulation of cells in suspension. These results show that the new rectangular reactor design, which had increased rates of hydrogen production, successfully delayed the onset of cathode biofouling and improved reactor performance.

  12. Ultra-high performance liquid chromatography coupled to mass spectrometry applied to the identification of valuable phenolic compounds from Eucalyptus wood.

    Science.gov (United States)

    Santos, Sónia A O; Vilela, Carla; Freire, Carmen S R; Neto, Carlos Pascoal; Silvestre, Armando J D

    2013-11-01

    Ultra-high performance liquid chromatography (UHPLC) was applied for the first time in the analysis of wood extracts. The potential of this technique coupled to ion trap mass spectrometry in the rapid and effective detection and identification of bioactive components in complex vegetal samples was demonstrated. Several dozens of compounds were detected in less than 30min of analysis time, corresponding to more than 3-fold reduction in time, when compared to conventional HPLC analysis of similar extracts. The phenolic chemical composition of Eucalyptus grandis, Eucalyptus urograndis (E. grandis×E. urophylla) and Eucalyptus maidenii wood extracts was assessed for the first time, with the identification of 51 phenolic compounds in the three wood extracts. Twenty of these compounds are reported for the first time as Eucalyptus genus components. Ellagic acid and ellagic acid-pentoside are the major components in all extracts, followed by gallic and quinic acids in E. grandis and E. urograndis and ellagic acid-pentoside isomer, isorhamnetin-hexoside and gallic acid in E. maidenii. The antioxidant scavenging activity of the extracts was evaluated, with E. grandis wood extract showing the lowest IC50 value. Moreover, the antioxidant activity of these extracts was higher than that of the commercial antioxidant BHT and of those of the corresponding bark extracts. These results, together with the phenolic content values, open good perspectives for the exploitation of these renewable resources as a source of valuable phenolic compounds. Copyright © 2013 Elsevier B.V. All rights reserved.

13. Performance and Fouling Study of Asymmetric PVDF Membrane Applied in the Concentration of Organic Fertilizer by Direct Contact Membrane Distillation (DCMD)

    Directory of Open Access Journals (Sweden)

    Yanfei Liu

    2018-02-01

This study proposes using membrane distillation (MD) as an alternative to the conventional multi-stage flash (MSF) process to concentrate a semi-product of organic fertilizer. By applying a unique asymmetric polyvinylidene fluoride (PVDF) membrane, specifically designed for MD applications using a nonsolvent thermally induced phase separation (NTIPS) method, the direct contact membrane distillation (DCMD) performance was investigated in terms of sustainability of permeation flux, fouling resistance, and anti-wetting properties. It was found that the permeation flux increased with increasing flow rate, and that the top-surface-facing-feed mode was the preferred orientation, achieving 25% higher flux than the bottom-surface-facing-feed mode. Compared to a commercial polytetrafluoroethylene (PTFE) membrane, the asymmetric PVDF membrane exhibited excellent anti-fouling behaviour and sustainable flux, with less than 8% flux decline over 15 h of continuous operation; the flux decreased only slightly and was maintained as high as 74 kg·m−2·h−1 at 70 °C. Meanwhile, the lost flux was easily recovered by rinsing with clean water. Overall, a 2.6-fold concentration factor was achieved in the 15 h MD operation, with 63.4% of the water being removed from the fertilizer sample. Further concentration could be pursued to reach the desired industrial standard of a 5x concentration factor.

  15. Determination of Fusarium toxins in functional vegetable milks applying salting-out-assisted liquid-liquid extraction combined with ultra-high-performance liquid chromatography tandem mass spectrometry.

    Science.gov (United States)

    Hamed, Ahmed M; Arroyo-Manzanares, Natalia; García-Campaña, Ana M; Gámiz-Gracia, Laura

    2017-11-01

Vegetable milks are considered functional foods due to their physiological benefits. Although the consumption of these products has increased significantly, they have received little attention in legislation with regard to contaminants. However, they may contain mycotoxins resulting from the use of contaminated raw materials. In this work, ultra-high-performance liquid chromatography tandem mass spectrometry has been proposed for the determination of the most relevant Fusarium toxins (fumonisins B1 and B2, HT-2 and T-2 toxins, zearalenone, deoxynivalenol and fusarenon-X) in different functional beverages based on cereals, legumes and seeds. Sample treatment consisted of a simple salting-out-assisted liquid-liquid extraction with no further clean-up. The method provided limits of quantification between 3.2 and 57.7 µg L⁻¹, recoveries above 80% and precision with RSD lower than 12%. The method was also applied to study the occurrence of these mycotoxins in market samples of vegetable functional beverages, and deoxynivalenol was found in three oat-based commercial drinks.

  17. Applied mathematics

    CERN Document Server

    Logan, J David

    2013-01-01

Praise for the Third Edition: "Future mathematicians, scientists, and engineers should find the book to be an excellent introductory text for coursework or self-study as well as worth its shelf space for reference." -MAA Reviews. Applied Mathematics, Fourth Edition is a thoroughly updated and revised edition on the applications of modeling and analyzing natural, social, and technological processes. The book covers a wide range of key topics in mathematical methods and modeling and highlights the connections between mathematics and the applied and natural sciences.

  18. Applied Enzymology.

    Science.gov (United States)

    Manoharan, Asha; Dreisbach, Joseph H.

    1988-01-01

    Describes some examples of chemical and industrial applications of enzymes. Includes a background, a discussion of structure and reactivity, enzymes as therapeutic agents, enzyme replacement, enzymes used in diagnosis, industrial applications of enzymes, and immobilizing enzymes. Concludes that applied enzymology is an important factor in…

  19. Interest rates mapping

    Science.gov (United States)

    Kanevski, M.; Maignan, M.; Pozdnoukhov, A.; Timonin, V.

    2008-06-01

The present study deals with the analysis and mapping of Swiss franc interest rates. Interest rates depend on time and maturity, defining the term structure of the interest rate curves (IRC). In the present study, IRC are considered in a two-dimensional feature space: time and maturity. Exploratory data analysis includes a variety of tools widely used in econophysics and geostatistics. Geostatistical models and machine learning algorithms (multilayer perceptron and Support Vector Machines) were applied to produce interest rate maps. IR maps can be used for visualisation and pattern perception, to develop and explore economic hypotheses, to produce dynamic asset-liability simulations and for financial risk assessments. The feasibility of applying the interest rate mapping approach to IRC forecasting is considered as well.
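As a toy stand-in for the geostatistical and machine-learning models used in the study, the sketch below interpolates a rate surface over the (time, maturity) plane with inverse-distance weighting; the observations are invented for illustration:

```python
# Inverse-distance-weighted (IDW) interpolation of interest rates over the
# (time, maturity) plane. A simple stand-in for the geostatistical / MLP /
# SVM models mentioned in the abstract; data points are invented.

def idw(points, query, power=2.0):
    """points: list of ((time, maturity), rate); returns the rate at query."""
    num = den = 0.0
    for (t, m), rate in points:
        d2 = (t - query[0]) ** 2 + (m - query[1]) ** 2
        if d2 == 0.0:
            return rate  # exact hit on an observation
        w = 1.0 / d2 ** (power / 2.0)
        num += w * rate
        den += w
    return num / den

# (time in years, maturity in years) -> rate in percent (hypothetical values)
obs = [((0.0, 1.0), 1.2), ((0.0, 10.0), 2.5), ((1.0, 1.0), 1.0), ((1.0, 10.0), 2.3)]
grid = [[idw(obs, (t, m)) for m in (1.0, 5.0, 10.0)] for t in (0.0, 0.5, 1.0)]
```

Evaluating `idw` over a dense grid yields exactly the kind of interest-rate map described above; geostatistical kriging would additionally provide a variance surface for risk assessment.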

  20. Cognitive maps and attention.

    Science.gov (United States)

    Hardt, Oliver; Nadel, Lynn

    2009-01-01

    Cognitive map theory suggested that exploring an environment and attending to a stimulus should lead to its integration into an allocentric environmental representation. We here report that directed attention in the form of exploration serves to gather information needed to determine an optimal spatial strategy, given task demands and characteristics of the environment. Attended environmental features may integrate into spatial representations if they meet the requirements of the optimal spatial strategy: when learning involves a cognitive mapping strategy, cues with high codability (e.g., concrete objects) will be incorporated into a map, but cues with low codability (e.g., abstract paintings) will not. However, instructions encouraging map learning can lead to the incorporation of cues with low codability. On the other hand, if spatial learning is not map-based, abstract cues can and will be used to encode locations. Since exploration appears to determine what strategy to apply and whether or not to encode a cue, recognition memory for environmental features is independent of whether or not a cue is part of a spatial representation. In fact, when abstract cues were used in a way that was not map-based, or when they were not used for spatial navigation at all, they were nevertheless recognized as familiar. Thus, the relation between exploratory activity on the one hand and spatial strategy and memory on the other appears more complex than initially suggested by cognitive map theory.

  1. A Study on Remote Probing Method for Drawing Ecology/Nature Map and the Application (III) - Drawing the Swamp Classification Map around River

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Seong Woo; Cho, Jeong Keon; Jeong, Hwi Chol [Korea Environment Institute, Seoul (Korea)

    2000-12-01

The map of ecology/nature specified in the amended Natural Environment Conservation Act is essential data, drawn by assessing the national land with ecological factors, for executing Korea's environmental policy. Such an important ecology/nature map should be continuously revised, and its reliability improved, by adding new factors. From this point of view, this study is significant in presenting an improvement scheme for the ecology/nature map. 'A Study on Remote Probing Method for Drawing Ecology/Nature Map and the Application', performed for 3 years since 1998, has researched drawing methods for subject maps that can be built in a short time - a land-covering classification map, a vegetation classification map, and a swamp classification map around rivers - and the promoting principles hereafter. This study also presented the possibilities and limits of classification using several kinds of satellite image data, so it should be a big help in building the subject maps at the Government level. The land-covering classification map, a result of the first year, is already being built by the Ministry of Environment as a national project, and the improvement scheme for the vegetation map presented in the second year has been used in building the basic ecology/nature map. We hope that the results of this study will be applied as basic data to draw an ecology/nature map and contribute to expanding understanding of the usefulness of several ecosystem analysis methods applying an ecology/nature map and remote probing. 55 refs., 38 figs., 24 tabs.

  2. Application of mapping crossover genetic algorithm in nuclear power equipment optimization design

    International Nuclear Information System (INIS)

    Li Guijiang; Yan Changqi; Wang Jianjun; Liu Chengyang

    2013-01-01

    Genetic algorithms (GAs) have been widely applied in nuclear engineering. An improved method, named the mapping crossover genetic algorithm (MCGA), was developed to address shortcomings of the traditional genetic algorithm (TGA). Results on benchmark problems show that MCGA has better optimizing performance than TGA. MCGA was then applied to the optimization design of a reactor coolant pump. (authors)
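The record does not describe the mapping crossover operator itself, so the following is only a minimal generic GA sketch on a standard benchmark (the sphere function, one-point crossover, truncation selection) to illustrate the kind of benchmark comparison the abstract refers to; every operator choice and parameter here is an illustrative assumption, not the MCGA of the paper.

```python
import random

def sphere(x):
    """Benchmark objective: minimum 0 at the origin."""
    return sum(v * v for v in x)

def genetic_algorithm(obj, dim=5, pop_size=40, gens=300, seed=1):
    """Generic minimizing GA: truncation selection, one-point crossover,
    Gaussian mutation. Parameters are illustrative, not tuned."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=obj)                       # best first (minimization)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.5:              # Gaussian mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.3)
            children.append(child)
        pop = parents + children                # elitist replacement
    return min(pop, key=obj)

best = genetic_algorithm(sphere)
```

A benchmark study like the one in the record would compare the final objective value and convergence speed of such a baseline against the modified crossover.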

  3. A q-deformed nonlinear map

    International Nuclear Information System (INIS)

    Jaganathan, Ramaswamy; Sinha, Sudeshna

    2005-01-01

    A scheme of q-deformation of nonlinear maps is introduced. As a specific example, a q-deformation procedure related to the Tsallis q-exponential function is applied to the logistic map. Compared to the canonical logistic map, the resulting family of q-logistic maps is shown to have a wider spectrum of interesting behaviours, including the co-existence of attractors, a phenomenon rare in one-dimensional maps.
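The abstract does not reproduce the exact deformation used in the paper, so the sketch below uses one common q-deformation of a number, [x]_q = (1 - q^x)/(1 - q), which recovers x in the limit q → 1; the idea is simply to deform the state before feeding it to the logistic update.

```python
def q_deform(x, q):
    """One common q-deformation of a number: [x]_q = (1 - q**x) / (1 - q).
    This is an illustrative choice, not necessarily the paper's; it
    recovers the identity map as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return x
    return (1.0 - q ** x) / (1.0 - q)

def q_logistic_orbit(x0, a, q, steps):
    """Iterate the deformed logistic map x_{n+1} = a * [x_n]_q * (1 - [x_n]_q)."""
    orbit = [x0]
    x = x0
    for _ in range(steps):
        y = q_deform(x, q)
        x = a * y * (1.0 - y)
        orbit.append(x)
    return orbit

# for 0 < q < 1 the deformation maps [0, 1] onto [0, 1], so with a <= 4
# the orbit stays bounded just like the canonical logistic map
orbit = q_logistic_orbit(0.3, a=3.5, q=0.9, steps=200)
```

Sweeping `a` and `q` over a grid and plotting the long-run orbit values would reproduce the kind of bifurcation-diagram comparison against the canonical (q = 1) map that the abstract describes.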

  4. General Galilei Covariant Gaussian Maps

    Science.gov (United States)

    Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo

    2017-09-01

    We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].

  5. Mapping functional connectivity

    Science.gov (United States)

    Peter Vogt; Joseph R. Ferrari; Todd R. Lookingbill; Robert H. Gardner; Kurt H. Riitters; Katarzyna Ostapowicz

    2009-01-01

    An objective and reliable assessment of wildlife movement is important in theoretical and applied ecology. The identification and mapping of landscape elements that may enhance functional connectivity is usually a subjective process based on visual interpretations of species movement patterns. New methods based on mathematical morphology provide a generic, flexible,...
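The truncated abstract names mathematical morphology as the basis of the mapping method. One elementary building block of any such analysis is labelling the connected habitat patches in a binary land-cover grid; the sketch below shows only that step (4-connected flood fill), not the full morphological classification of the record.

```python
def connected_components(grid):
    """Label 4-connected habitat patches (1 = habitat, 0 = matrix) in a
    binary grid via iterative flood fill. This is only the elementary
    patch-identification step; the morphology-based method in the record
    goes on to classify structural elements such as cores and corridors."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                current += 1                  # start a new patch label
                stack = [(r, c)]
                labels[r][c] = current
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            stack.append((ny, nx))
    return labels, current

habitat = [[1, 1, 0, 0],
           [0, 1, 0, 1],
           [0, 0, 0, 1],
           [1, 0, 0, 1]]
labels, n_patches = connected_components(habitat)  # three separate patches
```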

  6. Operation characteristic of a heat pump of mechanical vapor recompression propelled by fans and its performance analysis applied to waste-water treatment

    Science.gov (United States)

    Weike, Pang; Wenju, Lin; Qilin, Pan; Wenye, Lin; Qunte, Dai; Luwei, Yang; Zhentao, Zhang

    2014-01-01

    In this paper, a heat pump of the mechanical vapor recompression (MVR) type, driven by a centrifugal fan, is tested and shows some special characteristics when operated together with a falling-film evaporator. First, an analysis of the fan's suction and discharge parameters at steady state, such as pressure and temperature, indicates that wet compression probably occurs during vapor compression. As a result, the superheat produced when saturated vapor is compressed is eliminated, which lowers the discharge temperature of the system; the entrained droplets boil away during compression, absorbing the superheat as latent heat. At the same time, the droplets carried in the suction vapor add to the compressed vapor, which increases the heat delivered by the MVR heat pump. Second, auxiliary electric heating can adjust and stabilize the operating pressure and temperature of an MVR heat pump; as the evaporation temperature rises, the heat balance is broken and the supplementary heat must increase. Third, the performance of an MVR heat pump is affected by the balance between falling film and evaporation, which influences heat transfer. Two parameters representing the performance, electric power consumption and productive water capacity, are then measured under practical operating conditions. From the theoretical work required under ideal conditions and the fan's measured input power during operation, the adiabatic efficiency (ηad) of a centrifugal fan applied in an MVR heat pump is calculated, and the practical SMER and COP of the heat pump are found to correlate with it. Finally, from the productive water in theory and in practice, the displacement efficiency (ηv) of centrifugal fans compressing vapor is obtained, providing a reference for matching a fan to an MVR heat pump. On the other hand, it is helpful to research and develop MVR heat pumps, and also to check
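The performance figures named in the abstract (COP, SMER, adiabatic efficiency) follow standard definitions, which can be written down directly; the numbers in the usage example are purely illustrative, not measurements from the paper.

```python
def cop(heat_delivered_kwh, electricity_kwh):
    """Coefficient of performance: useful heat out per unit of electrical work in."""
    return heat_delivered_kwh / electricity_kwh

def smer(water_kg, electricity_kwh):
    """Specific moisture extraction rate: product water per kWh of electricity."""
    return water_kg / electricity_kwh

def adiabatic_efficiency(ideal_work_kwh, measured_input_kwh):
    """Ratio of ideal (isentropic) compression work to the fan's measured input."""
    return ideal_work_kwh / measured_input_kwh

# illustrative run: 180 kg of product water and 60 kWh of heat delivered
# for 12 kWh of electricity, with 9 kWh of ideal compression work
run_smer = smer(180, 12)                 # 15 kg/kWh
run_cop = cop(60, 12)                    # 5
run_eta_ad = adiabatic_efficiency(9, 12)  # 0.75
```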

  7. Applied dynamics

    CERN Document Server

    Schiehlen, Werner

    2014-01-01

    Applied Dynamics is an important branch of engineering mechanics widely applied to mechanical and automotive engineering, aerospace and biomechanics as well as control engineering and mechatronics. The computational methods presented are based on common fundamentals. For this purpose analytical mechanics turns out to be very useful where D’Alembert’s principle in the Lagrangian formulation proves to be most efficient. The method of multibody systems, finite element systems and continuous systems are treated consistently. Thus, students get a much better understanding of dynamical phenomena, and engineers in design and development departments using computer codes may check the results more easily by choosing models of different complexity for vibration and stress analysis.

  8. Applied optics

    International Nuclear Information System (INIS)

    Orszag, A.; Antonetti, A.

    1988-01-01

    The 1988 progress report of the Applied Optics laboratory of the Polytechnic School (France) is presented. The optical fiber activities are focused on the development of an optical gyrometer containing a resonance cavity. The research program covers the following domains: infrared laser physics, laser sources, semiconductor physics, multiple-photon ionization, and nonlinear optics. Investigations in the biomedical, biological and biophysical domains are also carried out. The published papers and conference communications are listed [fr

  9. Conceptual maps as evaluation strategy

    Directory of Open Access Journals (Sweden)

    Dionísio Borsato

    2007-03-01

    This work presents the conceptual map as an evaluation tool. An opening text on a theme related to the students' daily lives was used as an advance organizer. The activity consisted of elaborating conceptual maps before and after the experimental work. The evaluation was applied to 21 students of the 1st grade and 22 students of the 3rd grade of high school. The maps were scored according to hierarchy, propositions, linking words, cross links and examples. The maps elaborated before and after the experimental activity were classified against a reference conceptual map. This classification revealed many differences between the first and second maps in both grades and among the groups. The elaboration of conceptual maps showed great potential as an evaluation resource.
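The abstract lists the scoring criteria (propositions, hierarchy, cross links, examples) but not the weights; the sketch below assumes the classic Novak & Gowin weighting (1 point per valid proposition, 5 per hierarchy level, 10 per cross link, 1 per example), which is a common convention but only an assumption here.

```python
def concept_map_score(propositions, hierarchy_levels, cross_links, examples):
    """Novak & Gowin-style concept map score.
    Weights (1, 5, 10, 1) are an assumed convention, not the paper's."""
    return propositions * 1 + hierarchy_levels * 5 + cross_links * 10 + examples * 1

def relative_score(score, reference_score):
    """Score normalized against a reference (criterion) concept map,
    as in the classification against a referential map described above."""
    return score / reference_score

student = concept_map_score(propositions=12, hierarchy_levels=3,
                            cross_links=2, examples=4)     # 12 + 15 + 20 + 4 = 51
ratio = relative_score(student, reference_score=100)
```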

  10. Concept Mapping

    Science.gov (United States)

    Technology & Learning, 2005

    2005-01-01

    Concept maps are graphical ways of working with ideas and presenting information. They reveal patterns and relationships and help students to clarify their thinking, and to process, organize and prioritize. Displaying information visually--in concept maps, word webs, or diagrams--stimulates creativity. Being able to think logically teaches…

  11. Mapping earthworm communities in Europe

    DEFF Research Database (Denmark)

    Rutgers, Michiel; Orgiazzi, Alberto; Gardi, Ciro

    Existing data sets on earthworm communities in Europe were collected, harmonized, modelled and depicted on a soil biodiversity map of Europe. Digital Soil Mapping was applied using multiple regressions relating relatively low density earthworm community data to soil characteristics, land use...
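The regression step described in the record (relating earthworm community data to soil characteristics and land use) can be sketched as an ordinary least-squares fit; the covariates, coefficients, and data below are entirely synthetic and hypothetical, standing in for the harmonized European data sets.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# hypothetical covariates: intercept, soil pH, clay fraction, land-use class
X = np.column_stack([np.ones(n),
                     rng.uniform(4, 8, n),      # soil pH
                     rng.uniform(0, 0.6, n),    # clay fraction
                     rng.integers(0, 3, n)])    # land-use class (coded 0-2)
true_beta = np.array([10.0, 8.0, -15.0, 3.0])   # invented coefficients
abundance = X @ true_beta + rng.normal(0, 2.0, n)  # synthetic earthworm counts

# multiple regression of abundance on the soil covariates
beta, *_ = np.linalg.lstsq(X, abundance, rcond=None)

# in Digital Soil Mapping, the fitted model is then evaluated on gridded
# covariate maps to predict abundance at unsampled locations
predicted = X @ beta
```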

  12. Enhancing importance-performance analysis

    DEFF Research Database (Denmark)

    Eskildsen, Jacob Kjær; Kristensen, Kai

    2006-01-01

    Purpose: The interpretation of the importance/performance map is based on an assumption of independence between importance and performance, but many studies question the validity of this assumption. The aim of this research is to develop a new typology for job satisfaction attributes, as well as a new importance/performance map that can aid organizations when they prioritize their improvement actions based on a job satisfaction study. Design/methodology/approach: A typology for possible relationships between importance and performance in job satisfaction studies is developed based on theoretical considerations. This typology is then applied and validated on approximately 10,000 responses from the European Employee Index 2002. Ultimately, a new importance/performance map for priority setting in job satisfaction studies is developed based on the new typology for possible relationships...
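The baseline the record sets out to improve is the classic importance-performance grid, which places each attribute in one of four quadrants relative to mean importance and mean performance. A minimal sketch of that baseline (attribute names and scores invented for illustration):

```python
def ipa_quadrant(importance, performance, mean_imp, mean_perf):
    """Classic importance-performance grid: the baseline whose independence
    assumption the article questions. Quadrant names follow the usual IPA labels."""
    if importance >= mean_imp and performance < mean_perf:
        return "concentrate here"
    if importance >= mean_imp and performance >= mean_perf:
        return "keep up the good work"
    if importance < mean_imp and performance < mean_perf:
        return "low priority"
    return "possible overkill"

# hypothetical job satisfaction attributes: (importance, performance) in [0, 1]
attributes = {"pay": (0.8, 0.4), "colleagues": (0.7, 0.8),
              "canteen": (0.3, 0.2), "parking": (0.2, 0.9)}
mean_imp = sum(i for i, _ in attributes.values()) / len(attributes)
mean_perf = sum(p for _, p in attributes.values()) / len(attributes)
priorities = {name: ipa_quadrant(i, p, mean_imp, mean_perf)
              for name, (i, p) in attributes.items()}
```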

  13. Local adaptive tone mapping for video enhancement

    Science.gov (United States)

    Lachine, Vladimir; Dai, Min

    2015-03-01

    As new technologies such as high-dynamic-range cameras, AMOLED panels and high-resolution displays emerge on the consumer electronics market, it becomes very important to deliver the best picture quality on mobile devices. Tone mapping (TM) is a popular technique for enhancing visual quality. However, the traditional implementation of tone mapping is limited to a pixel-wise value-to-value mapping, and its performance is restricted in terms of local sharpness and colorfulness. To overcome these drawbacks, we propose a spatial-frequency-based framework in this paper. In the proposed solution, the intensity component of an input video/image signal is split into low-pass filtered (LPF) and high-pass filtered (HPF) bands. A tone mapping function is applied to the LPF band to improve global contrast/brightness, and the HPF band is added back afterwards to keep the local contrast. The HPF band may be adjusted by a coring function to avoid noise boosting and signal overshooting. The colorfulness of the original image may be preserved or enhanced by correcting the chroma components with a saturation function. Localized content adaptation is further improved by dividing the image into a set of non-overlapping regions and modifying each region individually. The suggested framework allows users to implement a wide range of tone mapping applications with perceptual local sharpness and colorfulness preserved or enhanced. The corresponding hardware circuit may be integrated in a camera, video or display pipeline with a minimal hardware budget.
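The base/detail decomposition described above can be sketched in a few lines: low-pass the intensity, tone-map only the base band, and add the detail band back. This simplified sketch uses a 3x3 box blur and a gamma curve as stand-ins for the paper's filter and tone mapping function, and omits the coring and chroma-saturation stages.

```python
import numpy as np

def box_blur3(img):
    """3x3 box blur with edge padding (a simple stand-in for the LPF stage)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def tone_map(intensity, gamma=0.5):
    """Split intensity into LPF base + HPF detail, apply a global tone curve
    (here a gamma, an assumed stand-in) to the base only, then add the
    detail back so local contrast is preserved."""
    base = box_blur3(intensity)
    detail = intensity - base                 # high-pass band
    mapped = np.clip(base, 0.0, 1.0) ** gamma  # global brightening of the base
    return np.clip(mapped + detail, 0.0, 1.0)

rng = np.random.default_rng(0)
frame = rng.uniform(0.0, 1.0, (32, 32))   # synthetic intensity channel
out = tone_map(frame)
```

With gamma < 1 the base band is brightened while `detail` keeps edges intact, which is the local-contrast-preserving behaviour the framework is after.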

  14. Classifying the Diversity of Bus Mapping Systems

    Science.gov (United States)

    Said, Mohd Shahmy Mohd; Forrest, David

    2018-05-01

    This study represents the first stage of an investigation into the nature of different approaches to mapping bus routes and bus networks, and how they may best be applied in different public transport situations. In many cities, bus services are an important means of easing traffic congestion and reducing pollution. However, given the entrenched car culture in many countries, persuading people to change their mode of transport is a major challenge. To promote this modal shift, people need to know what services are available and where (and when) they go. Bus service maps are an invaluable element of suitable public transport information, but they are often overlooked by transport planners and under-researched by cartographers. The method consisted of creating a map evaluation form and assessing published bus network maps. The analyses combined quantitative and qualitative examination of various aspects of cartographic design and classification. This paper focuses on the resulting classification, which is illustrated by a series of examples. The classification will facilitate more in-depth investigation of the details of cartographic design for such maps and help direct areas for user evaluation.

  15. Mapping racism.

    Science.gov (United States)

    Moss, Donald B

    2006-01-01

    The author uses the metaphor of mapping to illuminate a structural feature of racist thought, locating the degraded object along vertical and horizontal axes. These axes establish coordinates of hierarchy and of distance. With the coordinates in place, racist thought begins to seem grounded in natural processes. The other's identity becomes consolidated, and parochialism results. The use of this kind of mapping is illustrated via two patient vignettes. The author presents Freud's (1905, 1927) views in relation to such a "mapping" process, as well as Adorno's (1951) and Baldwin's (1965). Finally, the author conceptualizes the crucial status of primitivity in the workings of racist thought.

  16. Applied geodesy

    International Nuclear Information System (INIS)

    Turner, S.

    1987-01-01

    This volume is based on the proceedings of the CERN Accelerator School's course on Applied Geodesy for Particle Accelerators held in April 1986. The purpose was to record and disseminate the knowledge gained in recent years on the geodesy of accelerators and other large systems. The latest methods for positioning equipment to sub-millimetric accuracy in deep underground tunnels several tens of kilometers long are described, as well as such sophisticated techniques as the Navstar Global Positioning System and the Terrameter. Automation of better known instruments such as the gyroscope and Distinvar is also treated along with the highly evolved treatment of components in a modern accelerator. Use of the methods described can be of great benefit in many areas of research and industrial geodesy such as surveying, nautical and aeronautical engineering, astronomical radio-interferometry, metrology of large components, deformation studies, etc

  17. Applied mathematics

    International Nuclear Information System (INIS)

    Nedelec, J.C.

    1988-01-01

    The 1988 progress report of the Applied Mathematics center (Polytechnic School, France) is presented. The research fields of the center are scientific computing, probability and statistics, and video image synthesis. The research topics developed are: the analysis of numerical methods; the mathematical analysis of fundamental models in physics and mechanics; the numerical solution of complex models related to industrial problems; stochastic calculus and Brownian motion; stochastic partial differential equations; the identification of adaptive filtering parameters; discrete element systems; statistics; stochastic control and its development; and image synthesis techniques for education and research programs. The published papers, conference communications and theses are listed [fr

  18. PAU/GNSS-R: Implementation, Performance and First Results of a Real-Time Delay-Doppler Map Reflectometer Using Global Navigation Satellite System Signals

    Directory of Open Access Journals (Sweden)

    Enric Valencia

    2008-05-01

    Signals from Global Navigation Satellite Systems (GNSS) were originally conceived for position and speed determination, but they can also be used as signals of opportunity. The reflection process over a given surface modifies the properties of the scattered signal, so by processing the reflected signal, relevant geophysical data about the surface under study (land, sea, ice…) can be retrieved. In essence, a GNSS-R receiver is a multi-channel GNSS receiver that computes the received power from a given satellite at a number of different delay and Doppler bins of the incoming signal. The first approaches to building such a receiver consisted of sampling and storing the scattered signal for later post-processing. However, a real-time approach is desirable, both to obtain useful geophysical variables immediately and to reduce the amount of data. FPGA technology makes this possible while keeping the system easily reconfigurable. The signal tracking and processing constraints made it necessary to design several new blocks from scratch. The uniqueness of the implemented system described in this work is its capability to compute Delay-Doppler Maps (DDMs) in real time, either for four simultaneous satellites or for just one with a larger number of bins. The first tests were conducted from a cliff over the sea and demonstrate the successful performance of the instrument in computing DDMs in real time from measured reflected GNSS-R signals. The processing of these measurements shall yield quantitative relationships between the sea state (mainly driven by the surface wind and the swell) and the overall DDM shape. The ultimate goal is to use the DDM shape to correct the sea-state influence on the L-band brightness temperature, to improve the retrieval of the sea surface salinity (SSS).
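A Delay-Doppler Map is, at its core, the squared magnitude of the correlation between the received signal and a code replica evaluated over a grid of delay and Doppler bins. The brute-force sketch below (synthetic random code, invented sampling rate and bin grid) illustrates that computation; a real-time FPGA receiver like the one in the record pipelines and parallelizes these correlations rather than looping over bins.

```python
import numpy as np

def delay_doppler_map(rx, code, delays, dopplers, fs):
    """Brute-force DDM: |correlation|^2 of the received signal against a
    Doppler-shifted code replica at each (delay, Doppler) bin."""
    n = np.arange(len(code))
    ddm = np.empty((len(delays), len(dopplers)))
    for i, d in enumerate(delays):
        seg = rx[d:d + len(code)]
        for j, fd in enumerate(dopplers):
            replica = code * np.exp(2j * np.pi * fd * n / fs)
            ddm[i, j] = np.abs(np.vdot(replica, seg)) ** 2
    return ddm

# synthetic scenario: a PRN-like +/-1 code received with a known
# delay (in samples) and Doppler shift (in Hz) -- all values invented
rng = np.random.default_rng(0)
fs = 1000.0
code = rng.choice([-1.0, 1.0], size=256)
true_delay, true_doppler = 5, 50.0
n = np.arange(256)
rx = np.zeros(300, dtype=complex)
rx[true_delay:true_delay + 256] = code * np.exp(2j * np.pi * true_doppler * n / fs)

delays = np.arange(0, 20)                  # delay bins in samples
dopplers = np.arange(-100.0, 101.0, 25.0)  # Doppler bins in Hz
ddm = delay_doppler_map(rx, code, delays, dopplers, fs)
i, j = np.unravel_index(ddm.argmax(), ddm.shape)  # peak at the true bin
```

Over a rough sea surface the single sharp peak of this specular case spreads into the characteristic horseshoe shape whose geometry the abstract proposes to relate to sea state.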

  19. Genetic Mapping

    Science.gov (United States)

    ... greatly advanced genetics research. The improved quality of genetic data has reduced the time required to identify a ... cases, a matter of months or even weeks. Genetic mapping data generated by the HGP's laboratories is freely accessible ...

  20. Application de la théorie des graphes au traitement de la carte géologique Applying the Theory of Graphs to the Treatment of Geological Maps

    Directory of Open Access Journals (Sweden)

    Bouillé F.

    2006-11-01

    Obtaining the data of a geological map by conventional methods (grids or random curve plotting) does not yield an operational data base. By contrast, treating geological boundaries as a directed graph meets the criteria of optimality (very small bulk, minimum time, reliability) and makes it possible to digitize the map rationally, to structure the file properly, and to build useful applications: selective graphical restitution at any scale, computation of dips, areas and volumes, and correlation studies. We therefore worked out a processing sequence for the geological map in which each element (data acquisition, checking, updating, consulting, applications) operates on one or several graphs.