WorldWideScience

Sample records for proposed baseline text

  1. A Proposed Arabic Handwritten Text Normalization Method

    Directory of Open Access Journals (Sweden)

    Tarik Abu-Ain

    2014-11-01

    Full Text Available Text normalization is an important technique in document image analysis and recognition. It consists of several preprocessing stages, including slope correction, text padding, skew correction, and straightening of the writing line. As such, text normalization plays an important role in many procedures such as text segmentation, feature extraction, and character recognition. In the present article, a new method for baseline detection, straightening, and slant correction of Arabic handwritten text is proposed. The method comprises a set of sequential steps: first, the components are segmented and then thinned; next, direction features of the skeletons are extracted and the candidate baseline regions are determined; after that, the correct baseline region is selected; and finally, the baselines of all components are aligned with the writing line. The experiments are conducted on the IFN/ENIT benchmark Arabic dataset. The results show that the proposed method achieves promising and encouraging performance.
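
    The sequential steps above can be sketched compactly. The snippet below uses a horizontal projection profile as a simplified stand-in for the skeleton-direction analysis the authors describe; the function names and the toy image are illustrative, not the paper's implementation.

```python
import numpy as np

def detect_baseline(binary_img):
    """Estimate the writing line of a binarized text component as the row
    with the highest ink density (horizontal projection profile). This is
    a simplification of the skeleton-direction method in the abstract."""
    profile = binary_img.sum(axis=1)      # ink pixels per row
    return int(np.argmax(profile))        # candidate baseline row

def align_component(component, local_baseline, writing_line):
    """Shift one segmented component vertically so its local baseline
    coincides with the global writing line (the final alignment step)."""
    return np.roll(component, writing_line - local_baseline, axis=0)

# toy component: ink concentrated on row 3
img = np.zeros((6, 8), dtype=int)
img[3, 1:7] = 1
img[2, 2] = 1
print(detect_baseline(img))               # -> 3
```

    In the full method, this per-component estimate would be refined by the direction features of the skeleton before alignment.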

  2. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    Science.gov (United States)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. Recently, however, recognition systems have had to deal with text blocks and their constituent text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated; segmentation of text lines into subparts is thus avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach substantially improves performance.
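
    A minimal sliding-window estimator might look like the following. The window size, the bottom-ink heuristic, and the linear interpolation are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def sliding_baseline(binary_img, win=4):
    """Estimate a per-column baseline: within each horizontal window the
    local baseline is taken as the bottom-most ink row, then the window
    estimates are linearly interpolated across all columns."""
    h, w = binary_img.shape
    centers, estimates = [], []
    for x0 in range(0, w, win):
        block = binary_img[:, x0:x0 + win]
        rows = np.where(block.any(axis=1))[0]
        if rows.size:
            centers.append(x0 + win / 2.0)
            estimates.append(rows.max())
    return np.interp(np.arange(w), centers, estimates)

def flatten_baseline(binary_img, baseline):
    """Shift each column so the fluctuating baseline becomes horizontal,
    without segmenting the text line into subparts."""
    target = int(round(float(np.mean(baseline))))
    out = np.zeros_like(binary_img)
    for x in range(binary_img.shape[1]):
        out[:, x] = np.roll(binary_img[:, x], target - int(round(baseline[x])))
    return out
```

    Overlapping windows and a robust fit of the ink distribution would smooth the estimate further; the column-wise shift is what feeds the BLSTM recognizer a normalized line.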

  3. Spectrum from the Proposed BNL Very Long Baseline Neutrino Facility

    CERN Document Server

    Kahn, S A

    2005-01-01

    This paper calculates the neutrino flux that would be seen at the far detector location from the proposed BNL Very Long Baseline Neutrino Facility. The far detector is assumed to be located at an underground facility in South Dakota 2540 km from BNL. The neutrino beam facility uses a 1 MW upgraded AGS to provide an intense proton beam on the target and a magnetic horn to focus the secondary pion beam. The paper will examine the sensitivity of the neutrino flux at the far detector to the positioning of the horn and target so as to establish alignment tolerances for the neutrino system.

  4. Sandia National Laboratories, California proposed CREATE facility environmental baseline survey.

    Energy Technology Data Exchange (ETDEWEB)

    Catechis, Christopher Spyros

    2013-10-01

    Sandia National Laboratories, Environmental Programs completed an environmental baseline survey (EBS) of 12.6 acres located at Sandia National Laboratories/California (SNL/CA) in support of the proposed Collaboration in Research and Engineering for Advanced Technology and Education (CREATE) Facility. The survey area comprises several parcels of land within SNL/CA, County of Alameda, California, located within T 3S, R 2E, Section 13. The purpose of this EBS is to document the nature, magnitude, and extent of any environmental contamination of the property; identify potential environmental contamination liabilities associated with the property; develop sufficient information to assess the health and safety risks; and ensure adequate protection of human health and the environment with respect to the property.

  5. Baseline Motivation Type as a Predictor of Dropout in a Healthy Eating Text Messaging Program.

    Science.gov (United States)

    Coa, Kisha; Patrick, Heather

    2016-09-29

    Growing evidence suggests that text messaging programs are effective in facilitating health behavior change. However, high dropout rates limit the potential effectiveness of these programs. This paper describes patterns of early dropout in the HealthyYou text (HYTxt) program, with a focus on the impact of baseline motivation quality on dropout, as characterized by Self-Determination Theory (SDT). This analysis included 193 users of HYTxt, a diet and physical activity text messaging intervention developed by the US National Cancer Institute. Descriptive statistics were computed, and logistic regression models were run to examine the association between baseline motivation type and early program dropout. Overall, 43.0% (83/193) of users dropped out of the program; of these, 65.1% (54/83; 28.0% of all users) did so within the first 2 weeks. Users with higher autonomous motivation had significantly lower odds of dropping out within the first 2 weeks. A one unit increase in autonomous motivation was associated with lower odds (odds ratio 0.44, 95% CI 0.24-0.81) of early dropout, which persisted after adjusting for level of controlled motivation. Applying SDT-based strategies to enhance autonomous motivation might reduce early dropout rates, which can improve program exposure and effectiveness.
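
    The reported effect can be read as a multiplicative change in odds. The toy logistic model below reproduces that interpretation; only the coefficient exp(β) = 0.44 comes from the reported adjusted odds ratio, and the intercept is an invented placeholder.

```python
import math

def dropout_prob(autonomous, beta0=0.5, beta1=math.log(0.44)):
    """Logistic model of early dropout: P = 1 / (1 + exp(-(b0 + b1*x))).
    beta1 = ln(0.44) encodes the reported adjusted odds ratio; beta0 is
    an illustrative intercept, not a fitted value."""
    z = beta0 + beta1 * autonomous
    return 1.0 / (1.0 + math.exp(-z))

def odds(p):
    return p / (1.0 - p)

# a one-unit increase in autonomous motivation multiplies the odds by 0.44
p1, p2 = dropout_prob(1.0), dropout_prob(2.0)
print(round(odds(p2) / odds(p1), 2))  # -> 0.44
```

    Because odds(sigmoid(z)) = e^z, the ratio of odds across a one-unit step is exactly e^beta1, independent of the intercept.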

  6. PROPOSAL OF A TABLE TO CLASSIFY THE RELIABILITY OF BASELINES OBTAINED BY GNSS TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Paulo Cesar Lima Segantine

    Full Text Available Correct processing of GNSS measurements, as well as correct interpretation of the results, is fundamental to assessing the quality of land surveying work. In that sense, it is important to keep in mind that, although the statistics reported by most commercial GNSS processing software describe the credibility of the work, they do not provide consistent information about the reliability of the processed coordinates. Based on that assumption, this paper proposes a table for classifying the reliability of baselines obtained through GNSS data processing. As input data, GNSS measurements were performed during the years 2006 and 2008, considering different seasons of the year, geometric configurations of RBMC stations, and baseline lengths. As demonstrated in this paper, parameters such as baseline length, ambiguity solution, PDOP value, and the precision of the horizontal and vertical coordinates can be used as reliability parameters. The proposed classification meets the requirements of Brazilian Law No. 10.267/2001 of the National Institute of Colonization and Agrarian Reform (INCRA)
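
    A reliability table of this kind can be applied mechanically once thresholds are fixed. The grading function below is only a sketch combining the parameters the abstract names; the thresholds and grade labels are invented for illustration and are not the table the paper derives.

```python
def classify_baseline(length_km, ambiguities_fixed, pdop, h_prec_m, v_prec_m):
    """Toy reliability grade for one processed GNSS baseline.
    All numeric thresholds below are illustrative assumptions."""
    if not ambiguities_fixed:
        return "D"                      # float solution: low reliability
    score = 0
    score += pdop <= 4.0                # acceptable satellite geometry
    score += h_prec_m <= 0.01 + 1e-6 * length_km * 1000.0   # 1 cm + 1 ppm
    score += v_prec_m <= 0.02 + 2e-6 * length_km * 1000.0   # 2 cm + 2 ppm
    return {3: "A", 2: "B", 1: "C", 0: "D"}[score]

print(classify_baseline(12.5, True, 2.1, 0.008, 0.015))  # -> A
```

    Length-dependent tolerances (the ppm terms) reflect the fact that longer baselines legitimately carry larger coordinate uncertainty.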

  7. Baseline radiological monitoring at proposed uranium prospecting site at Rohil Sikar, Rajasthan

    International Nuclear Information System (INIS)

    Kumar, Rajesh; Jha, V.N.; Sahoo, N.K.; Jha, S.K.; Tripathi, R.M.

    2018-01-01

    Once economically viable grades of uranium deposits are proposed for mining and processing by the industry, radiological baseline studies are required for future comparison during the operational phases. The information collected during such studies serves as a connecting feature between regulatory compliance and technical information. The present paper summarizes the results of baseline monitoring of atmospheric ²²²Rn and gamma levels at the prospective mining, milling and waste disposal sites of Rohil, Rajasthan

  8. A proposal to create an extension to the European baseline series.

    Science.gov (United States)

    Wilkinson, Mark; Gallo, Rosella; Goossens, An; Johansen, Jeanne D; Rustemeyer, Thomas; Sánchez-Pérez, Javier; Schuttelaar, Marie L; Uter, Wolfgang

    2018-02-01

    The current European baseline series consists of 30 allergens, and was last updated in 2015. The aim of this study was to use data from the European Surveillance System on Contact Allergies (ESSCA) to propose an extension to the European baseline series in response to changes in environmental exposures. Data from departmental and national extensions to the baseline series, together with some temporary additions from departments contributing to the ESSCA, were collated during 2013-2014. In total, 31689 patients were patch tested in 46 European departments. Many departments and national groups already consider the current European baseline series to be a suboptimal screen, and use their own extensions to it. The haptens tested are heterogeneous, although there are some consistent themes. Potential haptens to include in an extension to the European baseline series comprise sodium metabisulfite, formaldehyde-releasing preservatives, additional markers of fragrance allergy, propolis, Compositae mix, and 2-hydroxyethyl methacrylate. In combination with other published work from the ESSCA, changes to the current European baseline series are proposed for discussion. As well as the addition of the allergens listed above, it is suggested that primin and clioquinol be deleted from the series, owing to reduced environmental exposure. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Baseline environmental survey of proposed uranium mining projects of Domiasiat, Meghalaya

    International Nuclear Information System (INIS)

    Khathing, D.T.; Myrboh, B.; Nongkynrih, P.; War, S.A.; Marbaniang, D.G.; Iongwai, P.S.

    2005-01-01

    West Khasi Hills District of Meghalaya is identified as having large and rich deposits of uranium. However, actual extraction on a commercial scale, which could contribute to the socio-economic development of the state in particular and the country in general, is yet to be undertaken. This is due to the lack of any baseline environmental survey, which has given rise to speculative information and fear among the local populace about the negative effects of uranium mining. A preoperational survey and environmental monitoring of the proposed mining sites and their adjacent areas would establish the baseline status of natural radioactivity and some chemical constituents in different environmental matrices, viz. air, water, soil, biota and aquatic ecosystems. The North Eastern Hill University, Shillong, Meghalaya has undertaken the project, funded by DST and BRNS, Department of Atomic Energy, Govt. of India, which aims to provide baseline environmental data on ambient air, water and soil quality in and around the proposed uranium mining site of Domiasiat, West Khasi Hills, in the state of Meghalaya. Trace elements (Mg, Zn, Ca, K, Na, Se, As, Fe, Cu, Co, Cr, Ni, Pb, Cd, Mn, etc.) and the activity status of the samples are determined. (author)

  10. Ecological baseline study of the Yakima Firing Center proposed land acquisition: A status report

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, L.E.; Beedlow, P.A.; Eberhardt, L.E.; Dauble, D.D.; Fitzner, R.E.

    1989-01-01

    This report provides baseline environmental information for the property identified for possible expansion of the Yakima Firing Center. Results from this work provide general descriptions of the animals and major plant communities present. A vegetation map derived from a combination of on-site surveillance and remotely sensed imagery is provided as part of this report. Twenty-seven wildlife species of special interest (protected, sensitive, furbearer, game animal, etc.), and waterfowl, were observed on the proposed expansion area. Bird censuses revealed 13 raptorial species (including four of special interest: bald eagle, golden eagle, osprey, and prairie falcon); five upland game bird species (sage grouse, California quail, chukar, gray partridge, and ring-necked pheasant); common loons (a species proposed for state listing as threatened); and five other species of special interest (sage thrasher, loggerhead shrike, mourning dove, sage sparrow, and long-billed curlew). Estimates of waterfowl abundance are included for the Priest Rapids Pool of the Columbia River. Six small mammal species were captured during this study; one, the sagebrush vole, is a species of special interest. Two large animal species, mule deer and elk, were noted on the site. Five species of furbearing animals were observed (coyote, beaver, raccoon, mink, and striped skunk). Four species of reptiles and one amphibian were noted. Fisheries surveys were conducted to document the presence of gamefish, and sensitive-classified fish and aquatic invertebrates. Rainbow trout were the only fish collected within the boundaries of the proposed northern expansion area. 22 refs., 10 figs., 4 tabs.

  11. 77 FR 22247 - Veterinary Feed Directive; Draft Text for Proposed Regulation

    Science.gov (United States)

    2012-04-13

    .... FDA-2010-N-0155] Veterinary Feed Directive; Draft Text for Proposed Regulation AGENCY: Food and Drug... the efficiency of FDA's Veterinary Feed Directive (VFD) program. The Agency is making this draft text..., rm. 1061, Rockville, MD 20852. FOR FURTHER INFORMATION CONTACT: Sharon Benz, Center for Veterinary...

  12. A proposal to create an extension to the European baseline series

    DEFF Research Database (Denmark)

    Wilkinson, Mark; Gallo, Rosella; Goossens, An

    2018-01-01

    exposures. METHODS: Data from departmental and national extensions to the baseline series, together with some temporary additions from departments contributing to the ESSCA, were collated during 2013-2014. RESULTS: In total, 31689 patients were patch tested in 46 European departments. Many departments...

  13. A comparison of video modeling, text-based instruction, and no instruction for creating multiple baseline graphs in Microsoft Excel.

    Science.gov (United States)

    Tyner, Bryan C; Fienup, Daniel M

    2015-09-01

    Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance. Participants who used VM constructed graphs significantly faster and with fewer errors than those who used text-based instruction or no instruction. Implications for instruction are discussed. © Society for the Experimental Analysis of Behavior.

  14. Constraining proposed combinations of ice history and Earth rheology using VLBI determined baseline length rates in North America

    Science.gov (United States)

    Mitrovica, J. X.; Davis, J. L.; Shapiro, I. I.

    1993-01-01

    We predict the present-day rates of change of the lengths of 19 North American baselines due to the glacial isostatic adjustment process. Contrary to previously published research, we find that the three-dimensional motion of each of the sites defining a baseline, rather than only the radial motions of these sites, must be considered to obtain an accurate estimate of the rate of change of the baseline length. Predictions are generated using a suite of Earth models and late Pleistocene ice histories; these include specific combinations of the two that have been proposed in the literature as satisfying a variety of rebound-related geophysical observations from the North American region. A number of these published models are shown to predict rates that differ significantly from the VLBI observations.
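
    The geometric point can be stated in one formula: for sites at positions x1, x2 with velocities v1, v2, dL/dt = (x2 - x1) . (v2 - v1) / |x2 - x1|. The sketch below (illustrative coordinates, not real VLBI sites) shows that purely tangential motion, which a radial-only treatment ignores entirely, still changes the baseline length.

```python
import math
import numpy as np

def baseline_length_rate(x1, v1, x2, v2):
    """dL/dt = (x2 - x1) . (v2 - v1) / |x2 - x1| for full 3-D site motions."""
    d = np.asarray(x2, float) - np.asarray(x1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    return float(d @ dv / np.linalg.norm(d))

# two sites at Earth radius; each moves 1 mm/yr tangentially, so the
# radial components of motion are exactly zero
x1, x2 = [6371e3, 0.0, 0.0], [0.0, 6371e3, 0.0]
v1, v2 = [0.0, 1e-3, 0.0], [1e-3, 0.0, 0.0]
rate = baseline_length_rate(x1, v1, x2, v2)
print(round(rate * 1e3, 3))  # -> -1.414 (mm/yr), despite zero radial motion
```

    A radial-only estimate would predict zero change here, which is the kind of discrepancy the abstract warns about.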

  15. A Comparison of Video Modeling, Text-Based Instruction, and No Instruction for Creating Multiple Baseline Graphs in Microsoft Excel

    Science.gov (United States)

    Tyner, Bryan C.; Fienup, Daniel M.

    2015-01-01

    Graphing is socially significant for behavior analysts; however, graphing can be difficult to learn. Video modeling (VM) may be a useful instructional method but lacks evidence for effective teaching of computer skills. A between-groups design compared the effects of VM, text-based instruction, and no instruction on graphing performance.…

  16. Baseline requirements of the proposed action for the Transportation Management Division routing models

    International Nuclear Information System (INIS)

    Johnson, P.E.; Joy, D.S.

    1995-02-01

    The potential impacts associated with the transportation of hazardous materials are important to shippers, carriers, and the general public. This is particularly true for shipments of radioactive material. The shippers are primarily concerned with safety, security, efficiency, and equipment requirements. The carriers are concerned with the potential impact that radioactive shipments may have on their operations, particularly if such materials are involved in an accident. The general public has also expressed concerns regarding the safety of transporting radioactive and other hazardous materials through their communities. Because transportation routes are a central concern in hazardous material transport, the prediction of likely routes is the first step toward resolution of these issues. In response to these routing needs, several models have been developed over the past fifteen years at Oak Ridge National Laboratory (ORNL). The HIGHWAY routing model is used to predict routes for truck transportation, the INTERLINE routing model is used to predict both rail and barge routes, and the AIRPORT locator model is used to find airports meeting specified criteria near a given location. As part of the ongoing improvement of the US Department of Energy's (DOE) Environmental Management Transportation Management Division's (EM-261) computer systems and development efforts, a Baseline Requirements Assessment Session on the HIGHWAY, INTERLINE, and AIRPORT models was held at ORNL on April 27, 1994. The purpose of this meeting was to discuss the existing capabilities of the models and databases and to review enhancements that would expand their usefulness. The results of the Baseline Requirements Assessment Session are discussed in this report; the discussions pertaining to the different models are contained in separate sections.

  17. Understanding Kendal aquifer system: a baseline analysis for sustainable water management proposal

    Science.gov (United States)

    Lukman, A.; Aryanto, M. D.; Pramudito, A.; Andhika, A.; Irawan, D. E.

    2017-07-01

    The north coast of Java has grown into a center of economic activity and a major connectivity hub between Sumatra and Bali. Sustainable water management must support this role, and one basis for it is understanding the baseline of groundwater occurrence and potential. However, the complex alluvium aquifer system has not been well understood. Geoelectric measurements were performed to determine which rock layers have good potential as groundwater aquifers on the northern coast of Kaliwungu Regency, Kendal District, Central Java Province. A total of 10 vertical electrical sounding (VES) points were measured using a Schlumberger configuration, with the current electrode spacing (AB/2) varying between 200 - 300 m and the potential electrode spacing (MN/2) varying between 0.5 and 20 m, for target depths ranging between 150 - 200 m. Geoelectrical data processing was done using the IP2Win software, which generates the resistivity value, thickness, and depth of subsurface rock layers. Based on the correlation of resistivity values with regional geology, hydrogeology and local well data, we identify three layers. The first layer is silty clay with resistivity values between 0 - 10 ohm.m, and the second layer is tuffaceous claystone with resistivity values between 10 - 60 ohm.m; both serve as impermeable layers. The third layer is sandy tuff with resistivity values between 60 - 100 ohm.m, which serves as a confined aquifer located 70 - 100 m below the surface. Its thickness varies between 70 and 110 m. The aquifer layer is a mix of volcanic and alluvial sediment, a member of the Damar Formation. The stratification of the aquifer system may change over short distances and depths; this natural setting prevents a long, continuous correlation between layers. Aquifer discharge is estimated between 5 - 71 L/s, with potential deep well locations in the west and southeast parts of the study area. These hydrogeological settings should be used
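
    The resistivity-to-lithology mapping reported above is simple enough to encode directly. The boundary handling below is an assumption, since real logs grade continuously between units.

```python
def classify_layer(rho_ohm_m):
    """Map a resistivity value (ohm.m) to the three units identified in
    the Kendal VES survey. Ranges follow the abstract; values above
    100 ohm.m fall outside the reported model."""
    if rho_ohm_m < 10:
        return "silty clay (impermeable)"
    if rho_ohm_m < 60:
        return "tuffaceous claystone (impermeable)"
    if rho_ohm_m <= 100:
        return "sandy tuff (confined aquifer)"
    return "unclassified"

print(classify_layer(75))  # -> sandy tuff (confined aquifer)
```

    Applied to an inverted IP2Win layer model, such a rule flags which depth intervals are candidate aquifers before well siting.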

  18. A Proposal for a Three Detector Short-Baseline Neutrino Oscillation Program in the Fermilab Booster Neutrino Beam

    CERN Document Server

    Antonello, M.; Bellini, V.; Benetti, P.; Bertolucci, S.; Bilokon, H.; Boffelli, F.; Bonesini, M.; Bremer, J.; Calligarich, E.; Centro, S.; Cocco, A.G.; Dermenev, A.; Falcone, A.; Farnese, C.; Fava, A.; Ferrari, A.; Gibin, D.; Gninenko, S.; Golubev, N.; Guglielmi, A.; Ivashkin, A.; Kirsanov, M.; Kisiel, J.; Kose, U.; Mammoliti, F.; Mannocchi, G.; Menegolli, A.; Meng, G.; Mladenov, D.; Montanari, C.; Nessi, M.; Nicoletto, M.; Noto, F.; Picchi, P.; Pietropaolo, F.; Plonski, P.; Potenza, R.; Rappoldi, A.; Raselli, G.L.; Rossella, M.; Rubbia, C.; Sala, P.; Scaramelli, A.; Sobczyk, J.; Spanu, M.; Stefan, D.; Sulej, R.; Sutera, C.M.; Torti, M.; Tortorici, F.; Varanini, F.; Ventura, S.; Vignoli, C.; Wachala, T.; Zani, A.; Adams, C.; Andreopoulos, C.; Ankowski, A.M.; Asaadi, J.; Bagby, L.; Baller, B.; Barros, N.; Bass, M.; Bishai, M.; Bitadze, A.; Bugel, L.; Camilleri, L.; Cavanna, F.; Chen, H.; Chi, C.; Church, E.; Cianci, D.; Collin, G.H.; Conrad, J.M.; De Geronimo, G.; Dharmapalan, R.; Djurcic, Z.; Ereditato, A.; Esquivel, J.; Evans, J.; Fleming, B.T.; Foreman, W.M.; Freestone, J.; Gamble, T.; Garvey, G.; Genty, V.; Goldi, D.; Gramellini, E.; Greenlee, H.; Guenette, R.; Hackenburg, A.; Hanni, R.; Ho, J.; Howell, J.; James, C.; Jen, C.M.; Jones, B.J.P.; Kalousis, L.N.; Karagiorgi, G.; Ketchum, W.; Klein, J.; Klinger, J.; Kreslo, I.; Kudryavtsev, V.A.; Lissauer, D.; Livesly, P.; Louis, W.C.; Luthi, M.; Mariani, C.; Mavrokoridis, K.; McCauley, N.; McConkey, N.; Mercer, I.; Miao, T.; Mills, G.B.; Montanari, D.; Moon, J.; Moss, Z.; Mufson, S.; Norris, B.; Nowak, J.; Pal, S.; Palamara, O.; Pater, J.; Pavlovic, Z.; Perkin, J.; Pulliam, G.; Qian, X.; Qiuguang, L.; Radeka, V.; Rameika, R.; Ratoff, P.N.; Richardson, M.; von Rohr, C.Rudolf; Russell, B.; Schmitz, D.W.; Shaevitz, M.H.; Sippach, B.; Soderberg, M.; Soldner-Rembold, S.; Spitz, J.; Spooner, N.; Strauss, T.; Szelc, A.M.; Taylor, C.E.; Terao, K.; Thiesse, M.; Thompson, L.; Thomson, M.; Thorn, C.; Toups, M.; Touramanis, C.; Van de Water, R.G.; Weber, M.; Whittington, D.; Wongjirad, T.; Yu, B.; Zeller, G.P.; Zennamo, J.; Acciarri, R.; An, R.; Barr, G.; Blake, A.; Bolton, T.; Bromberg, C.; Caratelli, D.; Carls, B.; Convery, M.; Dytmam, S.; Eberly, B.; Gollapinni, S.; Graham, M.; Grosso, R.; Hen, O.; Hewes, J.; Horton-Smith, G.; Johnson, R.A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Li, Y.; Littlejohn, B.; Lockwitz, S.; Lundberg, B.; Marchionni, A.; Marshall, J.; McDonald, K.; Meddage, V.; Miceli, T.; Mooney, M.; Moulai, M.H.; Murrells, R.; Naples, D.; Nienaber, P.; Paolone, V.; Papavassiliou, V.; Pate, S.; Pordes, S.; Raaf, J.L.; Rebel, B.; Rochester, L.; Schukraft, A.; Seligman, W.; St. John, J.; Tagg, N.; Tsai, Y.; Usher, T.; Wolbers, S.; Woodruff, K.; Xu, M.; Yang, T.; Zhang, C.; Badgett, W.; Biery, K.; Brice, S.J.; Dixon, S.; Geynisman, M.; Moore, C.; Snider, E.; Wilson, P.

    2015-01-01

    A Short-Baseline Neutrino (SBN) physics program of three LAr-TPC detectors located along the Booster Neutrino Beam (BNB) at Fermilab is presented. This new SBN Program will deliver a rich and compelling physics opportunity, including the ability to resolve a class of experimental anomalies in neutrino physics and to perform the most sensitive search to date for sterile neutrinos at the eV mass-scale through both appearance and disappearance oscillation channels. Using data sets of 6.6e20 protons on target (P.O.T.) in the LAr1-ND and ICARUS T600 detectors plus 13.2e20 P.O.T. in the MicroBooNE detector, we estimate that a search for muon neutrino to electron neutrino appearance can be performed with ~5 sigma sensitivity for the LSND allowed (99% C.L.) parameter region. In this proposal for the SBN Program, we describe the physics analysis, the conceptual design of the LAr1-ND detector, the design and refurbishment of the T600 detector, the necessary infrastructure required to execute the program, and a possible...

  19. A proposal for a drug information database and text templates for generating package inserts

    Directory of Open Access Journals (Sweden)

    Okuya R

    2013-07-01

    Full Text Available Ryo Okuya,¹ Masaomi Kimura,² Michiko Ohkura,² Fumito Tsuchiya³ ¹Graduate School of Engineering and Science, ²Faculty of Engineering, Shibaura Institute of Technology, Tokyo, ³School of Pharmacy, International University of Health and Welfare, Tokyo, Japan Abstract: To prevent prescription errors caused by information systems, a database that stores complete and accurate drug information in a user-friendly format is needed. In previous studies, the primary method for obtaining the data stored in such a database was to extract drug information from package inserts by employing pattern matching or more sophisticated methods such as text mining. However, it is difficult to obtain a complete database in this way because there is no strict rule governing the expressions used to describe drug information in package inserts. The authors' strategy is instead to first build a database and then automatically generate the package inserts by embedding data from the database in templates. Creating this database requires the support of pharmaceutical companies to input accurate data; the companies benefit in turn, because the effort of creating package inserts for newly developed drugs from scratch is reduced. This study designed the table schemata for the database and the text templates used to generate the package inserts. To handle the variety of drug-specific information in the package inserts, such information in drug composition descriptions was replaced with labels, and the resulting descriptions were analyzed using cluster analysis. To better store frequently repeated ingredient information and/or supplementary information, the method was refined by introducing repeat tags in the templates to indicate repetition and by improving the insertion of data into the database. The validity of the method was confirmed by inputting the drug information described in existing package inserts and checking that the method could
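
    The repeat-tag idea can be prototyped in a few lines. The tag syntax <repeat:KEY>...</repeat>, the field names, and the sample data below are invented for illustration; the paper's actual template schema is more elaborate.

```python
import re

REPEAT = re.compile(r"<repeat:(\w+)>(.*?)</repeat>", re.S)

def render(template, data):
    """Expand each <repeat:KEY>...</repeat> block once per item in
    data[KEY], then fill the remaining {field} placeholders from the
    top-level data. A toy renderer for the proposed repeat tags."""
    def expand(match):
        key, block = match.group(1), match.group(2)
        return "".join(block.format(**item) for item in data[key])
    return REPEAT.sub(expand, template).format(**data)

tpl = ("Drug: {name}\n"
       "<repeat:ingredients>- {ingredient}: {amount}\n</repeat>")
data = {"name": "ExampleDrug",            # hypothetical product
        "ingredients": [{"ingredient": "compound A", "amount": "5 mg"},
                        {"ingredient": "compound B", "amount": "10 mg"}]}
print(render(tpl, data))
```

    Storing structured rows and generating the insert text from them, rather than parsing free-form inserts, is exactly the inversion the abstract proposes.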

  20. Proposals of geological sites for L/ILW and HLW repositories. Geological background. Text volume

    International Nuclear Information System (INIS)

    2008-01-01

    In April 2008, the Swiss Federal Council approved the conceptual part of the Sectoral Plan for Deep Geological Repositories. The Plan sets out the details of the site selection procedure for geological repositories for low- and intermediate-level waste (L/ILW) and high-level waste (HLW). It specifies that selection of geological siting regions and sites for repositories in Switzerland will be conducted in three stages, the first one (the subject of this report) being the definition of geological siting regions within which the repository projects will be elaborated in more detail in the later stages of the Sectoral Plan. The geoscientific background is based on the one hand on an evaluation of the geological investigations previously carried out by Nagra on deep geological disposal of HLW and L/ILW in Switzerland (investigation programmes in the crystalline basement and Opalinus Clay in Northern Switzerland, investigations of L/ILW sites in the Alps, research in rock laboratories in crystalline rock and clay); on the other hand, new geoscientific studies have also been carried out in connection with the site selection process. Formulation of the siting proposals is conducted in five steps: A) In a first step, the waste inventory is allocated to the L/ILW and HLW repositories; B) The second step involves defining the barrier and safety concepts for the two repositories. With a view to evaluating the geological siting possibilities, quantitative and qualitative guidelines and requirements on the geology are derived on the basis of these concepts. These relate to the time period to be considered, the space requirements for the repository, the properties of the host rock (depth, thickness, lateral extent, hydraulic conductivity), long-term stability, reliability of geological findings and engineering suitability; C) In the third step, the large-scale geological-tectonic situation is assessed and large-scale areas that remain under consideration are defined. 
For the L

  1. The 1993 baseline biological studies and proposed monitoring plan for the Device Assembly Facility at the Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    Woodward, B.D.; Hunter, R.B.; Greger, P.D.; Saethre, M.B.

    1995-02-01

    This report contains baseline data and recommendations for future monitoring of plants and animals near the new Device Assembly Facility (DAF) on the Nevada Test Site (NTS). The facility is a large structure designed for safely assembling nuclear weapons. Baseline data was collected in 1993, prior to the scheduled beginning of DAF operations in early 1995. Studies were not performed prior to construction and part of the task of monitoring operational effects will be to distinguish those effects from the extensive disturbance effects resulting from construction. Baseline information on species abundances and distributions was collected on ephemeral and perennial plants, mammals, reptiles, and birds in the desert ecosystems within three kilometers (km) of the DAF. Particular attention was paid to effects of selected disturbances, such as the paved road, sewage pond, and the flood-control dike, associated with the facility. Radiological monitoring of areas surrounding the DAF is not included in this report.

  2. A TEACHING PROPOSAL OF PRODUCTION OF DISSERTATIVE-ARGUMENTATIVE TEXTS BASED ON THE THEORY OF SEMANTIC BLOCKS

    Directory of Open Access Journals (Sweden)

    Cláudio Primo Delanoy

    2015-12-01

    Full Text Available This paper presents a teaching proposal for the production of dissertative-argumentative texts, based on concepts and principles of the Theory of Argumentation within Language (ADL) of Ducrot (1990, 2009), and above all on tools made available by the Theory of Semantic Blocks (TBS) of Carel (1995, 2005) and Carel and Ducrot (2005). To do so, the text production prompt of Enem 2012 is first analyzed in order to identify the basic semantic units of its motivational texts; these units, associated with the argumentative aspects of the semantic blocks from which they originate, can guide effective argumentative routes to be realized in the dissertative-argumentative text through semantic relations within the same block. It is also examined to what extent transgressive argumentative chainings presented in argumentative essays are more convincing than normative ones. As a result, this work may provide theoretical and methodological support for teachers who work directly with the teaching of reading and writing, at basic or higher education levels.

  3. A teaching proposal on electrostatics based on the history of science through the reading of historical texts and argumentative discussions

    International Nuclear Information System (INIS)

    Castells, Marina; Konstantinidou, Aikaterini; Cerveró, Josep M.

    2015-01-01

    Research on students' conceptions of electrostatics has found ideas that disagree with the scientific models and may explain students' learning difficulties. To favour change in students' ideas and conceptions, a teaching sequence based on a historical study of electrostatics is proposed. It begins with an exploration of electrostatic phenomena that students carry out with everyday materials. For these phenomena they must draw up their own explanations, which are shared and discussed in class. The teacher collects and summarizes the ideas and explanations that are closest to the history of science. A brief history of electrostatics is then introduced, and texts from scientists are used in a role-play debate in which the 'supporters of a single fluid' and the 'supporters of two fluids' have to present arguments for their model and/or against the other model to explain the phenomena observed in the exploration phase. Next, students read texts related to applications of science; the main aim of this activity is to relate electrostatic phenomena to current electricity. The first text explains how Franklin understood the nature of lightning and the lightning rod, and the second is a chapter of a novel about a historical episode set in the Barcelona of the 18th century. Students use the historical one-fluid and two-fluid models to explain these two phenomena, and compare them with the currently accepted scientific explanation introduced by the teacher. With this type of teaching proposal, conceptual aspects of electrostatics are learnt, but students also learn about the nature and history of science and culture, as well as about the practice of argumentation.

  4. A teaching proposal on electrostatics based on the history of science through the reading of historical texts and argumentative discussions

    Science.gov (United States)

    Castells, Marina; Konstantinidou, Aikaterini; Cerveró, Josep M.

    2016-05-01

    Research on students' conceptions of electrostatics has found that students hold ideas that disagree with the scientific models and that may explain their learning difficulties. To encourage a change in these ideas and conceptions, a teaching sequence based on a historical study of electrostatics is proposed. It begins with an exploration of electrostatic phenomena that students carry out with everyday materials. They must produce their own explanations of these phenomena, which are then shared and discussed in class. The teacher collects and summarizes the ideas and explanations that are closest to those found in the history of science. A brief history of electrostatics is then introduced, and texts by scientists are used in a role-play debate in which the "supporters of a single fluid" and the "supporters of two fluids" must present arguments for their model and/or against the other model to explain the phenomena observed in the exploration phase. Students then read texts on applications of science, the main aim being to relate electrostatic phenomena to current electricity. The first text explains how Franklin understood the nature of lightning and the lightning rod, and the second is a chapter from a novel about a historical episode set in 18th-century Barcelona. Students use the historical one-fluid and two-fluid models to explain these two phenomena and compare them with the explanation of the currently "accepted" science introduced by the teacher. With this type of teaching proposal, students learn conceptual aspects of electrostatics, but they also learn about the nature and history of science and culture, as well as about the practice of argumentation.

  5. A proposal of texts for political ideological work and the value formation of the future physical culture professional

    Directory of Open Access Journals (Sweden)

    Ana Isel Rodríguez-Cruz

    2013-08-01

    Full Text Available Values, as complex formations of the personality, are closely related to a person's own existence and have much to do with each individual's political-ideological formation. For this reason, by means of a selection of texts related to this theme and drawn from the subject Communicative Spanish, the authors propose to deepen the political-ideological work and the main values in correspondence with the expectations, interests and needs of present-day Cuban society. The application of theoretical, empirical and statistical methods confirmed the need to reinforce the political-ideological work and the formation of values in our students. In this way, the subject team puts into practice the educational approaches and modes of teacher performance so that, from the classes themselves, the teacher generates changes in the students and contributes to forming the kind of professional that modern society demands. To this end, the teacher works with texts on diverse related themes such as history, sport, personalities, and important events. (The texts place emphasis on the current struggle for the freedom of the Cuban Five, among others. The analysis of the texts includes the search for key words, the relationships between signifier and meaning, their translation, interpretation and extrapolation, the qualities of the paragraph, and the rhetorical patterns or methods of its development, among other aspects. All of this helps to reinforce the values the students already hold, and at the same time the teacher's work helps the students express their feelings and thoughts in correspondence with their personality.

  6. Synthesis and Comparison of Baseline Avian and Bat Use, Raptor Nesting and Mortality Information from Proposed and Existing Wind Developments: Final Report.

    Energy Technology Data Exchange (ETDEWEB)

    Erickson, Wallace P.

    2002-12-01

    Primarily due to concerns generated by observed raptor mortality at the Altamont Pass (CA) wind plant, one of the first commercial electricity-generating wind plants in the U.S., newly proposed wind projects both within and outside of California have received a great deal of scrutiny and environmental review. A large amount of baseline and operational monitoring data has been collected at proposed and existing U.S. wind plants. The primary use of the avian baseline data collected at wind developments has been to estimate the overall project impacts (e.g., very low, low, moderate, and high relative mortality) on birds, especially raptors and sensitive species (e.g., state- and federally listed species). In a few cases, these data have also been used to guide the placement of turbines within a project boundary. This new information has strengthened our ability to accurately predict and mitigate impacts from new projects. This report should assist various stakeholders in interpreting and using this large information source when evaluating new projects. This report also suggests that the level of baseline data (e.g., avian use data) required to adequately assess the expected impacts of some projects may be reduced. It provides an evaluation of the ability to predict direct impacts on avian resources (primarily raptors and waterfowl/waterbirds) using less than an entire year of baseline avian use data (one season, two seasons, etc.). This evaluation is important because pre-construction wildlife surveys can be one of the most time-consuming aspects of permitting wind power projects. For baseline data, this study focuses primarily on standardized avian use data, usually collected with point-count survey methodology, and on raptor nest survey data. In addition to avian use and raptor nest survey data, other baseline data are usually collected at a proposed project to further quantify potential impacts. These surveys often include vegetation mapping and state or

  7. Baseline radionuclide concentrations in soils and vegetation around the proposed Weapons Engineering Tritium Facility and the Weapons Subsystems Laboratory at TA-16

    International Nuclear Information System (INIS)

    Fresquez, P.R.; Ennis, M.

    1995-09-01

    A preoperational environmental survey is required by the Department of Energy (DOE) for all federally funded research facilities that have the potential to cause adverse impacts on the environment. Therefore, in accordance with DOE Order 5400.1, an environmental survey was conducted over the proposed sites of the Weapons Engineering Tritium Facility (WETF) and the Weapons Subsystems Laboratory (WSL) at Los Alamos National Laboratory (LANL) at TA-16. Baseline concentrations of tritium (³H), plutonium (²³⁸Pu and ²³⁹Pu), and total uranium were measured in soils, vegetation (pine needles and oak leaves), and ground litter. Tritium was also measured in air samples, while cesium (¹³⁷Cs) was measured in soils. The mean concentration of airborne tritiated water during 1987 was 3.9 pCi/m³. Although the mean annual concentration of ³H in soil moisture at the 0--5 cm (2 in) soil depth was measured at 0.6 pCi/mL, a better background level, based on long-term regional data, was considered to be 2.6 pCi/mL. Mean values for ¹³⁷Cs, ²³⁸Pu, ²³⁹Pu, and total uranium in soils collected from the 0--5 cm depth were 1.08 pCi/g, 0.0014 pCi/g, 0.0325 pCi/g, and 4.01 µg/g, respectively. Ponderosa pine (Pinus ponderosa) needles contained higher values of ²³⁸Pu, ²³⁹Pu, and total uranium than did leaves collected from Gambel's oak (Quercus gambelii). In contrast, leaves from Gambel's oak contained higher levels of ¹³⁷Cs than did pine needles.

  8. Supplemental Environmental Baseline Survey for Proposed Land Use Permit Modification for Expansion of the Dynamic Explosive Test Site (DETS) 9940 Main Complex Parking Lot

    Energy Technology Data Exchange (ETDEWEB)

    Peek, Dennis W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    The “subject property” comprises a parcel of land within the Kirtland Military Reservation, Bernalillo County, New Mexico, as shown on the map in Appendix B of this document. The land requirement for the parking lot addition to the 9940 Main Complex is approximately 2.7 acres. The scope of this Supplemental Environmental Baseline Survey (SEBS) covers the parking lot addition land transfer only. For details on the original 9940 Main Complex, see Environmental Baseline Survey, Land Use Permit Request for the 9940 Complex PERM/0-KI-00-0001, August 21, 2003; for details on the 9940 Complex Expansion, see Environmental Baseline Survey, Proposed Land Use Permit Expansion for 9940 DETS Complex, June 24, 2009. The 2.7-acre parcel of land for the new parking lot, which is the subject of this EBS (also referred to as the “subject property”), is adjacent to the southwest boundary of the original 12.3-acre 9940 Main Complex. No testing is known to have taken place on the subject property site. The only activity known to have taken place was the burial of overhead utility lines in 2014. Adjacent to the subject property, the 9940 Main Complex was originally a 12.3-acre site used by the Department of Energy (DOE) under a land use permit from the United States Air Force (USAF). Historical use of the site, dating from 1964, included arming, fusing, and firing of explosives and testing of explosive systems components. In the late 1970s and early 1980s, experiments at the 9940 Main Complex shifted toward reactor safety issues. From 1983 to 1988, fuel coolant interaction (FCI) experiments were conducted, as were experiments with conventional high explosives (HE). Today, the land is used for training of the Nuclear Emergency Response community and for research on energetic materials. In 2009, the original complex was expanded to include four additional 20-acre areas: 9940 Training South, 9940 Training East, T-Range 6, and Training West Landing Zone. The proposed use of

  9. Baseline rationing

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Moreno-Ternero, Juan D.; Østerdal, Lars Peter Raahave

    The standard problem of adjudicating conflicting claims describes a situation in which a given amount of a divisible good has to be allocated among agents who hold claims against it exceeding the available amount. This paper considers more general rationing problems in which, in addition to claims...... to international protocols for the reduction of greenhouse emissions, or water distribution in drought periods. We define a family of allocation methods for such general rationing problems - called baseline rationing rules - and provide an axiomatic characterization for it. Any baseline rationing rule within...... the family is associated with a standard rule and we show that if the latter obeys some properties reflecting principles of impartiality, priority and solidarity, the former obeys them too....
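    The standard problem of adjudicating conflicting claims can be made concrete with the simplest standard rule, the proportional rule. The sketch below is illustrative only: the function name and interface are not from the paper, whose baseline rationing rules generalize standard rules of this kind.

    ```python
    def proportional_rule(endowment, claims):
        """Divide an endowment among claimants in proportion to their claims.

        This is one 'standard rule' of the kind a baseline rationing rule is
        associated with; the name and interface are illustrative, not from
        the paper.
        """
        total = sum(claims)
        if total <= endowment:
            return list(claims)  # no conflict: everyone receives their claim
        return [endowment * c / total for c in claims]

    # Claims sum to 120 but only 60 units are available:
    print(proportional_rule(60, [20, 40, 60]))  # -> [10.0, 20.0, 30.0]
    ```

    Each claimant receives the same fraction (here one half) of their claim, and the awards exhaust the endowment exactly.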

  10. Ask and Ye Shall Receive? Automated Text Mining of Michigan Capital Facility Finance Bond Election Proposals to Identify Which Topics Are Associated with Bond Passage and Voter Turnout

    Science.gov (United States)

    Bowers, Alex J.; Chen, Jingjing

    2015-01-01

    The purpose of this study is to bring together recent innovations in the research literature around school district capital facility finance, municipal bond elections, statistical models of conditional time-varying outcomes, and data mining algorithms for automated text mining of election ballot proposals to examine the factors that influence the…

  11. Proposal to generate 10 TW level femtosecond X-ray pulses from a baseline undulator in conventional SASE regime at the European XFEL

    International Nuclear Information System (INIS)

    Serkez, Svitozar; Kocharyan, Vitali; Saldin, Evgeni; Zagorodnov, Igor; Geloni, Gianluca

    2013-08-01

    Output characteristics of the European XFEL have been previously studied assuming an operation point at 5 kA peak current. In this paper we explore the possibility of going well beyond this nominal peak current level. In order to illustrate the potential of the European XFEL accelerator complex, we consider a bunch with 0.25 nC charge compressed up to a peak current of 45 kA. An advantage of operating at such a high peak current is the increase of the X-ray output peak power without any modification to the baseline design. Based on start-to-end simulations, we demonstrate that such a high peak current, combined with undulator tapering, allows one to achieve up to a 100-fold increase in peak power in the conventional SASE regime, compared to the nominal mode of operation. In particular, we find that 10 TW-level femtosecond X-ray pulses can be generated in the photon energy range between 3 keV and 5 keV, which is optimal for single-biomolecule imaging. Our simulations are based on the exploitation of all 21 cells foreseen for the SASE3 undulator beamline, and indicate that one can achieve diffraction to the desired resolution with 15 mJ (corresponding to about 3×10¹³ photons) in pulses of about 3 fs, in the case of a 100 nm focus at the photon energy of 3.5 keV.
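    The quoted photon number can be cross-checked from the pulse energy and the photon energy alone; the back-of-the-envelope sketch below uses only the CODATA electron-volt value, nothing from the paper's simulations.

    ```python
    # Cross-check: 15 mJ at a photon energy of 3.5 keV should correspond
    # to roughly 3e13 photons, as the abstract states.
    E_PHOTON_EV = 3.5e3          # photon energy, eV
    EV_TO_J = 1.602176634e-19    # CODATA value, joules per eV
    pulse_energy_J = 15e-3       # 15 mJ pulse energy

    n_photons = pulse_energy_J / (E_PHOTON_EV * EV_TO_J)
    print(f"{n_photons:.2e}")    # about 2.7e13, i.e. ~3e13 photons
    ```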

  12. Proposal to generate 10 TW level femtosecond X-ray pulses from a baseline undulator in conventional SASE regime at the European XFEL

    Energy Technology Data Exchange (ETDEWEB)

    Serkez, Svitozar; Kocharyan, Vitali; Saldin, Evgeni; Zagorodnov, Igor [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany)

    2013-08-15

    Output characteristics of the European XFEL have been previously studied assuming an operation point at 5 kA peak current. In this paper we explore the possibility of going well beyond this nominal peak current level. In order to illustrate the potential of the European XFEL accelerator complex, we consider a bunch with 0.25 nC charge compressed up to a peak current of 45 kA. An advantage of operating at such a high peak current is the increase of the X-ray output peak power without any modification to the baseline design. Based on start-to-end simulations, we demonstrate that such a high peak current, combined with undulator tapering, allows one to achieve up to a 100-fold increase in peak power in the conventional SASE regime, compared to the nominal mode of operation. In particular, we find that 10 TW-level femtosecond X-ray pulses can be generated in the photon energy range between 3 keV and 5 keV, which is optimal for single-biomolecule imaging. Our simulations are based on the exploitation of all 21 cells foreseen for the SASE3 undulator beamline, and indicate that one can achieve diffraction to the desired resolution with 15 mJ (corresponding to about 3×10¹³ photons) in pulses of about 3 fs, in the case of a 100 nm focus at the photon energy of 3.5 keV.

  13. “By his wind, he put Yam into his net” – R. H. (Chaim) Cohen's proposed correction of the BHS text of Job 26:13

    Directory of Open Access Journals (Sweden)

    Osvaldo Luiz Ribeiro

    2015-07-01

    Full Text Available This article formulates, as a textual-criticism proposal, a suggested correction to the text of Job 26:13 in the Biblia Hebraica Stuttgartensia (BHS), drawn from the dissertation of Harold R. (Chaim) Cohen (1975, published in 1978) under the title Biblical Hapax Legomena in the Light of Akkadian and Ugaritic. Cohen makes two claims. First, he recovers the proposal of Tur-Sinai (1941) that the word šiprâ in Job 26:13 should be translated on the basis of its Akkadian cognate saparru, rendering it as 'net', so that it would be a case of hapax legomenon. Second, Cohen argues that a copyist's error occurred in the transmission of the Hebrew verse: two originally independent words, śām ('he put') and yām ('Yam'), were mistakenly joined by the scribe and transmitted as šāmayim ('heavens'), the reading now found in the standard text of the BHS. Cohen's suggestions restore the synonymous parallelism of the four cola in Job 26:12-13, since Yam, appearing in the thus-corrected v. 13a, parallels the other dragons mentioned in vv. 12a, 12b and 13b. Job 26:13 should then be read as follows: 'By his wind, he put Yam into his net'. No version or commentary has been identified that has adopted Cohen's suggestion.

  14. ICARUS+NESSiE: A proposal for short baseline neutrino anomalies with innovative LAr imaging detectors coupled with large muon spectrometers

    Energy Technology Data Exchange (ETDEWEB)

    Gibin, D., E-mail: daniele.gibin@pd.infn.it

    2013-04-15

    The proposal for an experimental search for sterile neutrinos beyond the Standard Model with a new CERN-SPS neutrino beam is presented. The experiment is based on two identical LAr-TPCs followed by magnetized spectrometers, observing the electron and muon neutrino events at 1600 m and 300 m from the proton target. This project will exploit the ICARUS T600, moved from LNGS to the CERN “Far” position. An additional detector, 1/4 the size of the T600, will be constructed and located in the “Near” position. Two spectrometers will be placed downstream of the two LAr-TPC detectors to greatly complement the physics capabilities. Comparing the two detectors, in the absence of oscillations, all cross sections and experimental biases cancel out. Any difference in the event distributions at the locations of the two detectors might be attributed to the possible existence of ν oscillations, presumably due to additional neutrinos with a mixing angle sin²(2θ_new) and a larger mass difference Δm²_new. The superior quality of the LAr imaging TPC, in particular its unique electron-π⁰ discrimination, allows full rejection of backgrounds and offers a lossless ν_e detection capability. The determination of the muon charge with the spectrometers allows the full separation of ν_μ from anti-ν_μ, thereby controlling systematics from muon mis-identification, largely at high momenta.
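    The near/far comparison rests on the standard two-flavor oscillation probability, P = sin²(2θ)·sin²(1.27·Δm²[eV²]·L[km]/E[GeV]). The sketch below uses this textbook formula with purely hypothetical parameter values; neither the numbers nor the function come from the proposal itself.

    ```python
    import math

    def p_oscillation(sin2_2theta, dm2_eV2, L_km, E_GeV):
        """Two-flavor neutrino oscillation probability:
        P = sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV]).
        Standard textbook formula; parameter values below are illustrative.
        """
        return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

    # Near (300 m) vs far (1600 m) detector for a hypothetical sterile
    # neutrino with dm^2 = 1 eV^2, sin^2(2theta) = 0.01, at E = 2 GeV:
    for L_km in (0.3, 1.6):
        print(f"L = {L_km} km: P = {p_oscillation(0.01, 1.0, L_km, 2.0):.2e}")
    ```

    With a large Δm², the oscillation has barely developed at the near detector but is sizeable at the far one, which is why a near/far difference signals new physics.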

  15. ICARUS+NESSiE: A proposal for short baseline neutrino anomalies with innovative LAr imaging detectors coupled with large muon spectrometers

    Science.gov (United States)

    Gibin, D.

    2013-04-01

    The proposal for an experimental search for sterile neutrinos beyond the Standard Model with a new CERN-SPS neutrino beam is presented. The experiment is based on two identical LAr-TPCs followed by magnetized spectrometers, observing the electron and muon neutrino events at 1600 m and 300 m from the proton target. This project will exploit the ICARUS T600, moved from LNGS to the CERN "Far" position. An additional detector, 1/4 the size of the T600, will be constructed and located in the "Near" position. Two spectrometers will be placed downstream of the two LAr-TPC detectors to greatly complement the physics capabilities. Comparing the two detectors, in the absence of oscillations, all cross sections and experimental biases cancel out. Any difference in the event distributions at the locations of the two detectors might be attributed to the possible existence of ν oscillations, presumably due to additional neutrinos with a mixing angle sin²(2θ_new) and a larger mass difference Δm²_new. The superior quality of the LAr imaging TPC, in particular its unique electron-π⁰ discrimination, allows full rejection of backgrounds and offers a lossless ν_e detection capability. The determination of the muon charge with the spectrometers allows the full separation of ν_μ from anti-ν_μ, thereby controlling systematics from muon mis-identification, largely at high momenta.

  16. Report made on behalf of the parity mixed commission in charge of proposing a text about the dispositions of the project of energy orientation law remaining to be discussed; Rapport fait au nom de la commission mixte paritaire (1) chargee de proposer un texte sur les dispositions restant en discussion du projet de loi d'orientation sur l'energie

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    The project of energy orientation law aims at fixing the main principles of French energy policy for the coming decades. It foresees: the re-launching of the French nuclear program (construction of an experimental European Pressurized Reactor (EPR)), the reinforcement of energy demand management (3% per year, creation of energy-saving certificates, and stronger building energy-efficiency rules), and support for the development of renewable energies. This document first presents a direct, article-by-article comparison of the text adopted on second reading by the National Assembly with the text adopted on second reading by the Senate. A text is then proposed for the remaining provisions still under discussion, presented in the second part of the report. (J.S.)

  17. Textos argumentativos em materiais didáticos: que proposta seguir? Argumentative texts in course materials: which proposal should one follow?

    Directory of Open Access Journals (Sweden)

    Maria Inês Batista Campos

    2011-01-01

    Full Text Available This paper aims at describing in detail some teaching proposals for the production of argumentative texts in two Portuguese-language course books for high school, written from different linguistic-discursive perspectives. The goal is to analyze these activities in light of the theoretical-methodological assumptions defined by the authors in the "teacher's book" and to compare the two approaches. Finally, we seek to identify some linguistic concepts, such as language, text, and argumentation, which underpin the activities for the writing of argumentative texts, recognizing that some of these activities contribute little or nothing to the production of an argumentative text that clearly defends a point of view.

  18. Text Mining.

    Science.gov (United States)

    Trybula, Walter J.

    1999-01-01

    Reviews the state of research in text mining, focusing on newer developments. The intent is to describe the disparate investigations currently included under the term text mining and provide a cohesive structure for these efforts. A summary of research identifies key organizations responsible for pushing the development of text mining. A section…

  19. Program Baseline Change Control Procedure

    International Nuclear Information System (INIS)

    1993-02-01

    This procedure establishes the responsibilities and process for approving initial issues of, and changes to, the technical, cost, and schedule baselines and selected management documents developed by the Office of Civilian Radioactive Waste Management (OCRWM) for the Civilian Radioactive Waste Management System. This procedure implements the OCRWM Baseline Management Plan and DOE Order 4700.1, Chg 1. It streamlines the change control process to enhance integration, accountability, and traceability of Level 0 and Level 1 decisions through standardized Baseline Change Proposal (BCP) forms to be used by the Level 0, 1, 2, and 3 Baseline Change Control Boards (BCCBs) and to be tracked in the OCRWM-wide Configuration Information System (CIS) database. This procedure applies to all technical, cost, and schedule baselines controlled by the Energy System Acquisition Advisory Board (ESAAB) BCCB (Level 0) and the OCRWM Program Baseline Change Control Board (PBCCB) (Level 1). All baseline BCPs initiated by Level 2 or lower BCCBs that require approval from the ESAAB or PBCCB shall be processed in accordance with this procedure. This procedure also applies to all Program-level management documents controlled by the OCRWM PBCCB

  20. XML and Free Text.

    Science.gov (United States)

    Riggs, Ken Roger

    2002-01-01

    Discusses problems with marking free text, text that is either natural language or semigrammatical but unstructured, that prevent well-formed XML from marking text for readily available meaning. Proposes a solution to mark meaning in free text that is consistent with the intended simplicity of XML versus SGML. (Author/LRW)

  1. Communication dated 16 June 2008 received from the Permanent Mission of the Islamic Republic of Iran to the Agency concerning the text of the 'Islamic Republic of Iran's proposed package for constructive negotiation'

    International Nuclear Information System (INIS)

    2008-01-01

    The Secretariat has received a Note Verbale dated 16 June 2008 from the Permanent Mission of the Islamic Republic of Iran attaching the text of the 'Islamic Republic of Iran's proposed package for constructive negotiation'. The Note Verbale and, as requested therein, its attachment, are circulated herewith for the information of the Member States

  2. Program reference schedule baseline

    International Nuclear Information System (INIS)

    1986-07-01

    This Program Reference Schedule Baseline (PRSB) provides the baseline Program-level milestones and associated schedules for the Civilian Radioactive Waste Management Program. It integrates all Program-level schedule-related activities. This schedule baseline will be used by the Director, Office of Civilian Radioactive Waste Management (OCRWM), and his staff to monitor compliance with Program objectives. Chapter 1 includes brief discussions concerning the relationship of the PRSB to the Program Reference Cost Baseline (PRCB), the Mission Plan, the Project Decision Schedule, the Total System Life Cycle Cost report, the Program Management Information System report, the Program Milestone Review, annual budget preparation, and system element plans. Chapter 2 includes the identification of all Level 0, or Program-level, milestones, while Chapter 3 presents and discusses the critical path schedules that correspond to those Level 0 milestones

  3. Long Baseline Observatory (LBO)

    Data.gov (United States)

    Federal Laboratory Consortium — The Long Baseline Observatory (LBO) comprises ten radio telescopes spanning 5,351 miles. It's the world's largest, sharpest, dedicated telescope array. With an eye...

  4. Report made on behalf of the parity mixed commission in charge of proposing a text about the dispositions of the project of energy orientation law remaining to be discussed

    International Nuclear Information System (INIS)

    2005-01-01

    The project of energy orientation law aims at fixing the main principles of French energy policy for the coming decades. It foresees: the re-launching of the French nuclear program (construction of an experimental European Pressurized Reactor (EPR)), the reinforcement of energy demand management (3% per year, creation of energy-saving certificates, and stronger building energy-efficiency rules), and support for the development of renewable energies. This document first presents a direct, article-by-article comparison of the text adopted on second reading by the National Assembly with the text adopted on second reading by the Senate. A text is then proposed for the remaining provisions still under discussion, presented in the second part of the report. (J.S.)

  5. A long baseline global stereo matching based upon short baseline estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Li, Zigang; Gu, Feifei; Zhao, Zixin; Ma, Yueyang; Fang, Meiqi

    2018-05-01

    In global stereo vision, matching efficiency and accuracy are difficult to balance because the two goals conflict, and in the case of a long baseline this contradiction becomes more pronounced. To address this problem, this paper proposes a novel idea for improving both the efficiency and the accuracy of global stereo matching over a long baseline. Reference images located between the long-baseline image pair are first chosen to form new image pairs with short baselines. The relationship between the disparities of pixels in image pairs with different baselines is derived, taking quantization error into account, so that the disparity search range under the long baseline can be reduced under the guidance of the short baseline, gaining matching efficiency. This idea is then integrated into graph cuts (GCs) to form a multi-step GC algorithm based on short-baseline estimation, by which the disparity map under the long baseline is computed iteratively on the basis of the previous matching. Furthermore, image information from pixels that are non-occluded under the short baseline but occluded under the long baseline can be employed to improve matching accuracy. Although the time complexity of the proposed method depends on the locations of the chosen reference images, it is usually much lower for long-baseline stereo matching than that of the traditional GC algorithm. Finally, the validity of the proposed method is examined in experiments on benchmark datasets. The results show that the proposed method is superior to the traditional GC method in both efficiency and accuracy, and is thus suitable for long-baseline stereo matching.
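    The short-to-long baseline guidance rests on the pinhole relation d = f·B/Z, under which disparity scales linearly with baseline length. The sketch below shows how a short-baseline estimate can shrink the long-baseline disparity search range; it assumes a rectified setup, and the function and its margin parameter are illustrative, not the paper's exact formulation.

    ```python
    def long_baseline_search_range(d_short, B_short, B_long, quant_margin=1):
        """Predict the long-baseline disparity from a short-baseline estimate
        and return a reduced search range around it.

        Uses d = f*B/Z, so disparity scales linearly with the baseline;
        'quant_margin' (pixels) widens the window to absorb estimation error
        beyond the quantization of d_short. Illustrative sketch only.
        """
        scale = B_long / B_short
        d_pred = d_short * scale
        # The +/-0.5 pixel quantization of d_short maps to +/-0.5*scale pixels.
        half_width = 0.5 * scale + quant_margin
        return max(0, d_pred - half_width), d_pred + half_width

    lo, hi = long_baseline_search_range(d_short=12, B_short=0.05, B_long=0.20)
    print(lo, hi)  # a window of a few pixels around 48, not the full range
    ```

    Searching only this narrow window, instead of the full disparity range, is what recovers efficiency for the long-baseline graph-cut pass.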

  6. First Grade Baseline Evaluation

    Science.gov (United States)

    Center for Innovation in Assessment (NJ1), 2013

    2013-01-01

    The First Grade Baseline Evaluation is an optional tool that can be used at the beginning of the school year to help teachers get to know the reading and language skills of each student. The evaluation is composed of seven screenings. Teachers may use the entire evaluation or choose to use those individual screenings that they find most beneficial…

  7. Rationing with baselines

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Moreno-Ternero, Juan D.; Østerdal, Lars Peter Raahave

    2013-01-01

    We introduce a new operator for general rationing problems in which, besides conflicting claims, individual baselines play an important role in the rationing process. The operator builds onto ideas of composition, which are not only frequent in rationing, but also in related problems...... such as bargaining, choice, and queuing. We characterize the operator and show how it preserves some standard axioms in the literature on rationing. We also relate it to recent contributions in such literature....

  8. The TDAQ Baseline Architecture

    CERN Multimedia

    Wickens, F J

    The Trigger-DAQ community is currently busy preparing material for the DAQ, HLT and DCS TDR. Over the last few weeks a very important step has been a series of meetings to complete agreement on the baseline architecture. An overview of the architecture indicating some of the main parameters is shown in figure 1. As reported at the ATLAS Plenary during the February ATLAS week, the main area where the baseline had not yet been agreed was around the Read-Out System (ROS) and details in the DataFlow. The agreed architecture has: Read-Out Links (ROLs) from the RODs using S-Link; Read-Out Buffers (ROB) sited near the RODs, mounted in a chassis - today assumed to be a PC, using PCI bus at least for configuration, control and monitoring. The baseline assumes data aggregation, in the ROB and/or at the output (which could either be over a bus or in the network). Optimization of the data aggregation will be made in the coming months, but the current model has each ROB card receiving input from 4 ROLs, and 3 such c...

  9. THE US LONG BASELINE NEUTRINO EXPERIMENT STUDY.

    Energy Technology Data Exchange (ETDEWEB)

    BISHAI,M.

    2007-08-06

    The US Long Baseline Neutrino Experiment Study was commissioned jointly by Brookhaven National Laboratory (BNL) and Fermi National Accelerator Laboratory (FNAL) to investigate the potential for future U.S. based long baseline neutrino oscillation experiments using MW class conventional neutrino beams that can be produced at FNAL. The experimental baselines are based on two possible detector locations: (1) off-axis to the existing FNAL NuMI beamline at baselines of 700 to 810 km and (2) NSF's proposed future Deep Underground Science and Engineering Laboratory (DUSEL) at baselines greater than 1000 km. Two detector technologies are considered: a megaton class Water Cherenkov detector deployed deep underground at a DUSEL site, or a 100 kT Liquid Argon Time-Projection Chamber (TPC) deployed on the surface at any of the proposed sites. The physics sensitivities of the proposed experiments are summarized. We find that conventional horn focused wide-band neutrino beam options from FNAL aimed at a massive detector with a baseline of > 1000 km have the best sensitivity to CP violation and the neutrino mass hierarchy for values of the mixing angle θ13 down to 2°.

  10. 2017 Annual Technology Baseline

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hand, M. M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Eberle, Annika [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Beiter, Philipp C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Kurup, Parthiv [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Turchi, Craig S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Feldman, David J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Margolis, Robert M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Augustine, Chad R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Maness, Michael [Formerly NREL]; O'Connor, Patrick [Oak Ridge National Laboratory]

    2018-03-26

    Consistent cost and performance data for various electricity generation technologies can be difficult to find and may change frequently for certain technologies. With the Annual Technology Baseline (ATB), the National Renewable Energy Laboratory annually provides an organized and centralized set of such cost and performance data. The ATB uses the best information from the Department of Energy national laboratories' renewable energy analysts as well as information from the Energy Information Administration for fuel-based technologies. The ATB has been reviewed by experts and it includes the following electricity generation technologies: land-based wind, offshore wind, utility-scale solar photovoltaics (PV), commercial-scale solar PV, residential-scale solar PV, concentrating solar power, geothermal power, hydropower, coal, natural gas, nuclear, and conventional biopower. This webinar presentation introduces the 2017 ATB.

  11. Biofuels Baseline 2008

    Energy Technology Data Exchange (ETDEWEB)

    Hamelinck, C.; Koper, M.; Berndes, G.; Englund, O.; Diaz-Chavez, R.; Kunen, E.; Walden, D.

    2011-10-15

    The European Union is promoting the use of biofuels and other renewable energy in transport. In April 2009, the Renewable Energy Directive (2009/28/EC) was adopted, setting a 10% target for renewable energy in transport in 2020. The directive sets several requirements on the sustainability of biofuels marketed in the frame of the Directive. The Commission is required to report to the European Parliament on a regular basis on a range of sustainability impacts resulting from the use of biofuels in the EU. This report serves as a baseline of information for regular monitoring of the impacts of the Directive. Chapter 2 discusses the EU biofuels market, the production and consumption of biofuels, and international trade, and traces where the feedstock for biofuels consumed in the EU originally comes from. Chapter 3 discusses the biofuel policy framework in the EU and major third countries of supply, looking at various policy aspects that are relevant to compliance with the EU sustainability requirements. Chapter 4 discusses the environmental and social sustainability aspects associated with EU biofuels and their feedstock. Chapter 5 discusses the macro-economic effects that indirectly result from increased EU biofuels consumption, on commodity prices and land use. Chapter 6 presents country factsheets for the main third countries that supplied biofuels to the EU market in 2008.

  12. Hydrogeology baseline study Aurora Mine

    International Nuclear Information System (INIS)

    1996-01-01

    A baseline hydrogeologic study was conducted in the area of Syncrude's proposed Aurora Mine in order to develop a conceptual regional hydrogeologic model for the area that could be used to understand groundwater flow conditions. Geologic information was obtained from over 2,000 coreholes and from data obtained between 1980 and 1996 regarding water level for the basal aquifer. A 3-D numerical groundwater flow model was developed to provide quantitative estimates of the potential environmental impacts of the proposed mining operations on the groundwater flow system. The information was presented in the context of a regional study area which encompassed much of the Athabasca Oil Sands Region, and a local study area which was defined by the lowlands of the Muskeg River Basin. Characteristics of the topography, hydrology, climate, geology, and hydrogeology of the region are described. The conclusion is that groundwater flow in the aquifer occurs mostly in a westerly direction beneath the Aurora Mine towards its inferred discharge location along the Athabasca River. Baseflow in the Muskeg River is mostly related to discharge from shallow surficial aquifers. Water in the river under baseflow conditions was fresh, of calcium-carbonate type, with very little indication of mineralization associated with deeper groundwater in the Aurora Mine area. 44 refs., 5 tabs., 31 figs

  13. Baseline conditions at Olkiluoto

    International Nuclear Information System (INIS)

    2003-09-01

    The main purpose of this report is to establish a reference point - defined as the data collected up until the end of year 2002 - for the coming phases of the Finnish spent nuclear fuel disposal programme. The focus is: to define the current surface and underground conditions at the site, both as regards the properties for which a change is expected and for the properties which are of particular interest for long-term safety or environmental impact; to establish, as far as possible, the natural fluctuation of properties that are potentially affected by construction of the underground laboratory, the ONKALO; and to provide references to data on parameters for use in model development and testing and to use models to assist in understanding and interpreting the data. The emphasis of the baseline description is on bedrock characteristics that are relevant to the long-term safety of a spent fuel repository and, hence, it includes the hydrogeological, hydrogeochemical, rock mechanical, tectonic and seismic conditions of the site. The construction of the ONKALO will also affect some conditions on the surface, and, therefore, a description of the main characteristics of the nature and the man-made constructions at Olkiluoto is also given. This report is primarily a road map to the available information on the prevailing conditions at the Olkiluoto site and a framework for understanding the data collected. Hence, it refers to numerous available background reports and other archived information produced over the past 20 years or more, and forms a recapitulation and re-evaluation of the characterisation data of the Olkiluoto site. (orig.)

  14. Directed Activities Related to Text: Text Analysis and Text Reconstruction.

    Science.gov (United States)

    Davies, Florence; Greene, Terry

    This paper describes Directed Activities Related to Text (DART), procedures that were developed and are used in the Reading for Learning Project at the University of Nottingham (England) to enhance learning from texts and that fall into two broad categories: (1) text analysis procedures, which require students to engage in some form of analysis of…

  15. Strategy as Texts

    DEFF Research Database (Denmark)

    Obed Madsen, Søren

    This article shows empirically how managers translate a strategy plan at an individual level. By analysing how managers in three organizations translate strategies, it identifies that the translation happens in two steps: First, the managers decipher the strategy by coding the different parts of the strategy into four categories. Second, the managers produce new texts based on the original strategy document by using four different ways of translation models. The study's findings contribute to three areas. Firstly, it shows that translation is more than a sociological process. It is also a craftsmanship that requires knowledge and skills, which unfortunately seems to be overlooked in both the literature and in practice. Secondly, it shows that even though a strategy text is in singular, the translation makes strategy plural. Thirdly, the article proposes a way to open up the black box of what…

  16. A Customizable Text Classifier for Text Mining

    Directory of Open Access Journals (Sweden)

    Yun-liang Zhang

    2007-12-01

    Full Text Available Text mining deals with complex and unstructured texts. Usually a particular collection of texts specific to one or more domains is necessary. We have developed a customizable text classifier for users to mine the collection automatically. It derives from the sentence category of the HNC theory and corresponding techniques. It can start with a few texts, and it can adjust automatically or be adjusted by the user. The user can also control the number of domains chosen and decide the standard with which to choose the texts, based on demand and the abundance of materials. The performance of the classifier varies with the user's choice.

  17. Predicting Prosody from Text for Text-to-Speech Synthesis

    CERN Document Server

    Rao, K Sreenivasa

    2012-01-01

    Predicting Prosody from Text for Text-to-Speech Synthesis covers the specific aspects of prosody, mainly focusing on how to predict the prosodic information from linguistic text, and then how to exploit the predicted prosodic knowledge for various speech applications. Author K. Sreenivasa Rao discusses proposed methods along with state-of-the-art techniques for the acquisition and incorporation of prosodic knowledge for developing speech systems. Positional, contextual and phonological features are proposed for representing the linguistic and production constraints of the sound units present in the text. This book is intended for graduate students and researchers working in the area of speech processing.

  18. Performance Measurement Baseline Change Request

    Data.gov (United States)

    Social Security Administration — The Performance Measurement Baseline Change Request template is used to document changes to scope, cost, schedule, or operational performance metrics for SSA's Major...

  19. DairyBISS Baseline report

    NARCIS (Netherlands)

    Buizer, N.N.; Berhanu, Tinsae; Murutse, Girmay; Vugt, van S.M.

    2015-01-01

    This baseline report of the Dairy Business Information Service and Support (DairyBISS) project presents the findings of a baseline survey among 103 commercial farms and 31 firms and advisors working in the dairy value chain. Additional results from the survey among commercial dairy farms are

  20. Text Maps: Helping Students Navigate Informational Texts.

    Science.gov (United States)

    Spencer, Brenda H.

    2003-01-01

    Notes that a text map is an instructional approach designed to help students gain fluency in reading content area materials. Discusses how the goal is to teach students about the important features of the material and how the maps can be used to build new understandings. Presents the procedures for preparing and using a text map. (SG)

  1. Oscillation Baselining and Analysis Tool

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-27

    PNNL developed a new tool for oscillation analysis and baselining. This tool has been developed under a new DOE Grid Modernization Laboratory Consortium (GMLC) Project (GM0072 - “Suite of open-source applications and models for advanced synchrophasor analysis”) and it is based on the open platform for PMU analysis. The Oscillation Baselining and Analysis Tool (OBAT) performs oscillation analysis and identifies modes of oscillation (frequency, damping, energy, and shape). The tool also does oscillation event baselining (finding correlations between oscillation characteristics and system operating conditions).
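The modal quantities OBAT identifies (frequency and damping) can be illustrated on a synthetic ringdown. The estimators below, an FFT peak for frequency and a log-envelope line fit for damping, are a minimal sketch under invented signal parameters, not OBAT's actual algorithms:

```python
import numpy as np

# Synthetic ringdown as a PMU might record it: a 0.7 Hz inter-area mode
# decaying at sigma = -0.05 1/s, sampled at a 30 frames/s reporting rate.
fs = 30.0
t = np.arange(0.0, 20.0, 1.0 / fs)
x = np.exp(-0.05 * t) * np.cos(2.0 * np.pi * 0.7 * t)

# Mode frequency: dominant bin of the magnitude spectrum (DC excluded)
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
f_est = freqs[1 + np.argmax(spec[1:])]

# Mode damping: slope of a line fitted to the log of the rectified peaks
peak_idx = [i for i in range(1, len(x) - 1)
            if abs(x[i]) >= abs(x[i - 1]) and abs(x[i]) >= abs(x[i + 1])]
sigma_est = np.polyfit(t[peak_idx], np.log(np.abs(x)[peak_idx]), 1)[0]
```

For a real event, baselining would then correlate many such (f_est, sigma_est) pairs with the operating conditions in effect at the time.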

  2. Report on behalf of the ''Commission mixte Paritaire'' commissioned to propose a text on the arrangements remaining in discussion and concerning the law project relative to the gas and electric power market and to the energy public utilities

    International Nuclear Information System (INIS)

    2002-12-01

    In the framework of the energy market opening, this document presents, in a comparative table, the recommendations of the Senate and the National Assembly. After examination of this table, the ''Commission Paritaire'' elaborated a common text, presented in the second part of the document. (A.L.B.)

  3. 324 Building Baseline Radiological Characterization

    Energy Technology Data Exchange (ETDEWEB)

    R.J. Reeder, J.C. Cooper

    2010-06-24

    This report documents the analysis of radiological data collected as part of the characterization study performed in 1998. The study was performed to create a baseline of the radiological conditions in the 324 Building.

  4. Baseline restoration using current conveyors

    International Nuclear Information System (INIS)

    Morgado, A.M.L.S.; Simoes, J.B.; Correia, C.M.

    1996-01-01

    Good performance of high resolution nuclear spectrometry systems at high pulse rates demands restoration of the baseline between pulses, in order to remove rate-dependent baseline shifts. This restoration is performed by circuits named baseline restorers (BLRs), which also remove low frequency noise, such as power supply hum and detector microphonics. This paper presents simple circuits for baseline restoration based on a commercial current conveyor (CCII01). Tests were performed on two circuits, with periodic trapezoidal-shaped pulses, in order to measure the baseline restoration for several pulse rates and restorer duty cycles. For the current conveyor based Robinson restorer, the peak shift was less than 10 mV for duty cycles up to 60%, at high pulse rates. Duty cycles up to 80% were also tested, with a maximum peak shift of 21 mV. The peak shift for the current conveyor based Grubic restorer was also measured; the maximum value found was 30 mV at 82% duty cycle. Keeping the duty cycle below 60% greatly improves the restorer performance. The ability of both baseline restorer architectures to reject low frequency modulation was also measured, with good results for both circuits

  5. Report on behalf of the ''Commission mixte Paritaire'' commissioned to propose a text on the arrangements remaining in discussion and concerning the law project relative to the gas and electric power market and to the energy public utilities; Rapport au nom de la Commission mixte paritaire chargee de proposer un texte sur les dispositions restant en discussion du projet de loi relatif aux marches energetiques et au service public de l'energie

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-12-01

    In the framework of the energy market opening, this document presents, in a comparative table, the recommendations of the Senate and the National Assembly. After examination of this table, the ''Commission Paritaire'' elaborated a common text, presented in the second part of the document. (A.L.B.)

  6. DEEP LEARNING MODEL FOR BILINGUAL SENTIMENT CLASSIFICATION OF SHORT TEXTS

    Directory of Open Access Journals (Sweden)

    Y. B. Abdullin

    2017-01-01

    Full Text Available Sentiment analysis of short texts such as Twitter messages and comments in news portals is challenging due to the lack of contextual information. We propose a deep neural network model that uses bilingual word embeddings to effectively solve the sentiment classification problem for a given pair of languages. We apply our approach to two corpora of two different language pairs: English-Russian and Russian-Kazakh. We show how to train a classifier in one language and predict in another. Our approach achieves 73% accuracy for English and 74% accuracy for Russian. For Kazakh sentiment analysis, we propose a baseline method that achieves 60% accuracy, and a method to learn bilingual embeddings from a large unlabeled corpus using bilingual word pairs.

  7. Accelerated Best Basis Inventory Baselining Task

    International Nuclear Information System (INIS)

    SASAKI, L.M.

    2001-01-01

    The baselining effort was recently proposed to bring the Best-Basis Inventory (BBI) and Question No.8 of the Tank Interpretive Report (TIR) for all 177 tanks to the current standards and protocols and to prepare a TIR Question No.8 if one is not already available. This plan outlines the objectives and methodology of the accelerated BBI baselining task. BBI baselining meetings held during December 2000 resulted in a revised BBI methodology and an initial set of BBI creation rules to be used in the baselining effort. The objectives of the BBI baselining effort are to: (1) Provide inventories that are consistent with the revised BBI methodology and new BBI creation rules. (2) Split the total tank waste in each tank into six waste phases, as appropriate (Supernatant, saltcake solids, saltcake liquid, sludge solids, sludge liquid, and retained gas). In some tanks, the solids and liquid portions of the sludge and/or saltcake may be combined into a single sludge or saltcake phase. (3) Identify sampling events that are to be used for calculating the BBIs. (4) Update waste volumes for subsequent reconciliation with the Hanlon (2001) waste tank summary. (5) Implement new waste type templates. (6) Include any sample data that might have been unintentionally omitted in the previous BBI and remove any sample data that should not have been included. Sample data to be used in the BBI must be available on TWINS. (7) Ensure that an inventory value for each standard BBI analyte is provided for each waste component. Sample based inventories for supplemental BBI analytes will be included when available. (8) Provide new means and confidence interval reports if one is not already available and include uncertainties in reporting inventory values

  8. A Mean-Shift-Based Feature Descriptor for Wide Baseline Stereo Matching

    Directory of Open Access Journals (Sweden)

    Yiwen Dou

    2015-01-01

    Full Text Available We propose a novel Mean-Shift-based feature description approach for wide baseline stereo matching. Initially, the scale-invariant feature transform (SIFT) approach is used to extract relatively stable feature points. Each matched SIFT feature point needs a reasonable neighborhood range from which to choose its feature point set. Subsequently, with a view to selecting repeatable and highly robust feature points, Mean-Shift controls the corresponding feature scale. Finally, our approach is employed for depth image acquisition in wide baseline, and the Graph Cut algorithm optimizes the disparity information. Compared with existing methods such as SIFT, speeded-up robust features (SURF), and normalized cross-correlation (NCC), the presented approach has the advantages of higher robustness and accuracy rate. Experimental results on low resolution images and weak feature description in wide baseline confirm the validity of our approach.

  9. An Energy Efficiency Evaluation Method Based on Energy Baseline for Chemical Industry

    Directory of Open Access Journals (Sweden)

    Dong-mei Yao

    2016-01-01

    Full Text Available According to the requirements and structure of the ISO 50001 energy management system, this study proposes an energy efficiency evaluation method based on an energy baseline for the chemical industry. Using this method, the effect of energy plan implementation in chemical production processes can be evaluated quantitatively, and evidence for system fault diagnosis can be provided. The method establishes energy baseline models that can meet the demands of different kinds of production processes and gives a general solving method for each kind of model according to the production data. The energy plan implementation effect can then be evaluated, and whether the system is running normally can be determined, through the baseline model. Finally, the method is applied to the cracked-gas compressor unit of an ethylene plant in a petrochemical enterprise, demonstrating that it is correct and practical.
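The baseline-evaluation idea above can be sketched in a few lines, assuming the simplest model form the ISO 50001 approach permits (a linear energy baseline E = a·q + b) and invented throughput/energy numbers rather than plant data:

```python
import numpy as np

# Baseline period: energy use vs. throughput for one process unit
# (illustrative numbers, perfectly linear for clarity).
throughput = np.array([80.0, 90.0, 100.0, 110.0, 120.0])    # t/h
energy     = np.array([410.0, 455.0, 500.0, 545.0, 590.0])  # GJ/h
a, b = np.polyfit(throughput, energy, 1)  # fit E = a*q + b

# Reporting period: evaluate the energy plan against the baseline.
q_new, e_measured = 105.0, 500.0
e_expected = a * q_new + b
savings = e_expected - e_measured   # positive => better than baseline
```

A persistent negative `savings`, or a drift well outside the baseline model's residual band, is the kind of signal the paper uses as evidence for fault diagnosis.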

  10. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can potentially be used for the baseline correction of other analytical instrument signals, such as IR spectra and chromatograms.
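The iterative-morphological idea can be sketched with a grey-scale opening whose structuring element grows until the baseline estimate stabilizes. The growth schedule and stopping rule below are illustrative assumptions, not the authors' published algorithm:

```python
import numpy as np
from scipy.ndimage import grey_opening

def estimate_baseline(y, start_size=3, step=2, max_size=201, tol=1e-6):
    """Iteratively open the signal with a growing flat structuring element.

    Peaks narrower than the element are removed while a slowly varying
    baseline survives; iteration stops once the estimate no longer changes.
    """
    baseline = np.asarray(y, dtype=float)
    size = start_size
    while size <= max_size:
        opened = grey_opening(baseline, size=size)
        if np.max(np.abs(opened - baseline)) < tol:
            break
        baseline = opened
        size += step
    return baseline

# Synthetic Raman-like spectrum: two peaks on a sloping baseline
x = np.linspace(0.0, 10.0, 1000)
signal = (0.5 + 0.1 * x                       # drifting baseline
          + np.exp(-(x - 3.0) ** 2 / 0.01)    # narrow peak
          + 0.8 * np.exp(-(x - 7.0) ** 2 / 0.02))
corrected = signal - estimate_baseline(signal)
```

Because opening is anti-extensive, the estimate never rises above the measured signal, so subtracting it cannot push real peaks below zero.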

  11. Physics Potential of Long-Baseline Experiments

    Directory of Open Access Journals (Sweden)

    Sanjib Kumar Agarwalla

    2014-01-01

    Full Text Available The discovery of neutrino mixing and oscillations over the past decade provides firm evidence for new physics beyond the Standard Model. Recently, θ13 has been determined to be moderately large, quite close to its previous upper bound. This represents a significant milestone in establishing the three-flavor oscillation picture of neutrinos. It has opened up exciting prospects for current and future long-baseline neutrino oscillation experiments towards addressing the remaining fundamental questions, in particular the type of the neutrino mass hierarchy and the possible presence of a CP-violating phase. Another recent and crucial development is the indication of non-maximal 2-3 mixing angle, causing the octant ambiguity of θ23. In this paper, I will review the phenomenology of long-baseline neutrino oscillations with a special emphasis on sub-leading three-flavor effects, which will play a crucial role in resolving these unknowns. First, I will give a brief description of neutrino oscillation phenomenon. Then, I will discuss our present global understanding of the neutrino mass-mixing parameters and will identify the major unknowns in this sector. After that, I will present the physics reach of current generation long-baseline experiments. Finally, I will conclude with a discussion on the physics capabilities of accelerator-driven possible future long-baseline precision oscillation facilities.

  12. Developing RESRAD-BASELINE for environmental baseline risk assessment

    International Nuclear Information System (INIS)

    Cheng, Jing-Jy.

    1995-01-01

    RESRAD-BASELINE is a computer code developed at Argonne National Laboratory for the US Department of Energy (DOE) to perform both radiological and chemical risk assessments. The code implements the baseline risk assessment guidance of the US Environmental Protection Agency (EPA 1989). The computer code calculates (1) radiation doses and cancer risks from exposure to radioactive materials, and (2) hazard indexes and cancer risks from exposure to noncarcinogenic and carcinogenic chemicals, respectively. The user can enter measured or predicted environmental media concentrations from the graphic interface and can simulate different exposure scenarios by selecting the appropriate pathways and modifying the exposure parameters. The database used by RESRAD-BASELINE includes dose conversion factors and slope factors for radionuclides and toxicity information and properties for chemicals. The user can modify the database for use in the calculation. Sensitivity analysis can be performed while running the computer code to examine the influence of the input parameters. Use of RESRAD-BASELINE for risk analysis is easy, fast, and cost-saving. Furthermore, it ensures consistency in methodology for both radiological and chemical risk analyses

  13. Text-Fabric

    NARCIS (Netherlands)

    Roorda, Dirk

    2016-01-01

    Text-Fabric is a Python3 package for Text plus Annotations. It provides a data model, a text file format, and a binary format for (ancient) text plus (linguistic) annotations. The emphasis of this all is on: data processing; sharing data; and contributing modules. A defining characteristic is that

  14. Contextual Text Mining

    Science.gov (United States)

    Mei, Qiaozhu

    2009-01-01

    With the dramatic growth of text information, there is an increasing need for powerful text mining systems that can automatically discover useful knowledge from text. Text is generally associated with all kinds of contextual information. Those contexts can be explicit, such as the time and the location where a blog article is written, and the…

  15. E-text

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2018-01-01

    text can be defined by taking as point of departure the digital format in which everything is represented in the binary alphabet. While the notion of text, in most cases, lends itself to be independent of medium and embodiment, it is also often tacitly assumed that it is, in fact, modeled around the print medium, rather than written text or speech. In late 20th century, the notion of text was subject to increasing criticism, as in the question raised within literary text theory: is there a text in this class? At the same time, the notion was expanded by including extra-linguistic sign modalities…

  16. Long baseline neutrino oscillation experiments

    International Nuclear Information System (INIS)

    Gallagher, H.

    2006-01-01

    In this paper I will review briefly the experimental results which established the existence of neutrino mixing, the current generation of long baseline accelerator experiments, and the prospects for the future. In particular I will focus on the recent analysis of the MINOS experiment. (author)

  17. Baseline Report on HB2320

    Science.gov (United States)

    State Council of Higher Education for Virginia, 2015

    2015-01-01

    Staff provides this baseline report as a summary of its preliminary considerations and initial research in fulfillment of the requirements of HB2320 from the 2015 session of the General Assembly. Codified as § 23-7.4:7, this legislation compels the Education Secretary and the State Council of Higher Education for Virginia (SCHEV) Director, in…

  18. Quality Inspection of Printed Texts

    DEFF Research Database (Denmark)

    Pedersen, Jesper Ballisager; Nasrollahi, Kamal; Moeslund, Thomas B.

    2016-01-01

    -folded: for customers of the printing and verification system, the overall grade is used to verify whether the text is of sufficient quality, while for the printer's manufacturer, the detailed character/symbol grades and quality measurements are used for the improvement and optimization of the printing task. The proposed system…

  19. Texting on the Move

    Science.gov (United States)

    ... text. What's the Big Deal? The problem is multitasking. No matter how young and agile we are, ... on something other than the road. In fact, driving while texting (DWT) can be more dangerous than ...

  20. Text Coherence in Translation

    Science.gov (United States)

    Zheng, Yanping

    2009-01-01

    In the thesis a coherent text is defined as a continuity of senses of the outcome of combining concepts and relations into a network composed of knowledge space centered around main topics. And the author maintains that in order to obtain the coherence of a target language text from a source text during the process of translation, a translator can…

  1. Biomarker Identification Using Text Mining

    Directory of Open Access Journals (Sweden)

    Hui Li

    2012-01-01

    Full Text Available Identifying molecular biomarkers has become one of the important tasks for scientists to assess the different phenotypic states of cells or organisms correlated to the genotypes of diseases from large-scale biological data. In this paper, we propose a text-mining-based method to discover biomarkers from PubMed. First, we construct a database based on a dictionary, and then we use a finite state machine to identify the biomarkers. Our text mining method provides a highly reliable approach to discovering biomarkers in the PubMed database.
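The dictionary-plus-finite-state-machine pipeline can be sketched as follows; the biomarker names and the tiny matcher are toy assumptions, not the authors' curated database or their exact automaton:

```python
import re

# Toy dictionary of biomarker names (a real system would load curated
# gene/protein nomenclature). Multi-word names are tuples of tokens; the
# matcher's states are the partial matches still alive at each token.
BIOMARKERS = {("psa",), ("her2",), ("c", "reactive", "protein")}

def find_biomarkers(text):
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    hits, states = set(), []
    for tok in tokens:
        # extend every live partial match, and start a fresh one
        states = [s + (tok,) for s in states + [()]]
        # keep only states that are still a prefix of some dictionary entry
        states = [s for s in states if any(b[:len(s)] == s for b in BIOMARKERS)]
        hits.update(" ".join(s) for s in states if s in BIOMARKERS)
    return hits

abstract = "Serum PSA and C-reactive protein were elevated in the cohort."
print(sorted(find_biomarkers(abstract)))  # ['c reactive protein', 'psa']
```

Scaling this to all of PubMed would replace the prefix scan with a proper automaton (e.g. Aho-Corasick), but the state-tracking idea is the same.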

  2. Bandwidth Optimization of Normal Equation Matrix in Bundle Block Adjustment in Multi-baseline Rotational Photography

    Directory of Open Access Journals (Sweden)

    WANG Xiang

    2016-02-01

    Full Text Available A new bandwidth optimization method for the normal equation matrix in bundle block adjustment in multi-baseline rotational close range photography, based on image index re-sorting, is proposed. The equivalent exposure station of each image is calculated from its object space coverage and its relationship with other adjacent images. Then, according to the coordinate relations between equivalent exposure stations, new logical indices of all images are computed, from which the optimized bandwidth value can be obtained. Experimental results show that the bandwidth determined by the proposed method is significantly better than its original value; thus the operational efficiency, as well as the memory consumption, of multi-baseline rotational close range photography in real-data applications is optimized to a certain extent.
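The effect of index re-sorting on bandwidth can be illustrated with reverse Cuthill-McKee, a standard bandwidth-reduction reordering used here as a generic stand-in for the paper's geometry-based re-sorting of exposure stations:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def half_bandwidth(A):
    """Largest |row - col| over the nonzeros of a sparse matrix."""
    coo = A.tocoo()
    return int(np.max(np.abs(coo.row - coo.col)))

# Connectivity of 8 exposure stations forming a strip, but numbered in an
# interleaved order, so the normal-equation structure has a wide band.
n = 8
bad_order = [0, 4, 1, 5, 2, 6, 3, 7]
A = np.eye(n)
for u, v in zip(bad_order, bad_order[1:]):
    A[u, v] = A[v, u] = 1.0       # adjacent stations share observations
A = csr_matrix(A)

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_opt = A[perm][:, perm]          # re-sorted indexing shrinks the band
print(half_bandwidth(A), "->", half_bandwidth(A_opt))
```

A banded solver's cost grows with the square of the bandwidth, which is why the re-sorting pays off in both runtime and memory.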

  3. Vocabulary Constraint on Texts

    Directory of Open Access Journals (Sweden)

    C. Sutarsyah

    2008-01-01

    Full Text Available This case study was carried out in the English Education Department of the State University of Malang. The aim of the study was to identify and describe the vocabulary in the reading texts and to determine whether the texts are useful for reading skill development. A descriptive qualitative design was applied to obtain the data. For this purpose, some available computer programs were used to produce a description of the vocabulary in the texts. It was found that the 20 texts, containing 7,945 words, are dominated by low frequency words, which account for 16.97% of the words in the texts. The high frequency words occurring in the texts were dominated by function words. In the case of word levels, it was found that the texts have a very limited number of words from the GSL (General Service List of English Words; West, 1953). The proportion of the first 1,000 words of the GSL only accounts for 44.6%. The data also show that the texts contain too large a proportion of words which are not in the three levels (the first 2,000 words of the GSL and the UWL); these words account for 26.44% of the running words in the texts. It is believed that these constraints are due to the selection of the texts, which are made up of a series of short, unrelated texts. This kind of text is subject to the accumulation of low frequency words, especially content words, and a limited number of words from the GSL. It could also hinder the development of students' reading skills and vocabulary enrichment.
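The coverage proportions reported above come down to a simple token count. A minimal sketch, with a toy word set standing in for the first 1,000 words of the GSL:

```python
import re
from collections import Counter

def coverage(text, word_list):
    """Proportion of running words in `text` covered by `word_list`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return sum(c for w, c in counts.items() if w in word_list) / len(tokens)

# Stand-ins: a tiny sample text and a toy slice of the first 1,000 GSL words
gsl_first_1000 = {"the", "of", "and", "a", "to", "in", "is", "that", "it"}
sample = "The aim of the study is to describe the vocabulary in the text."
print(f"GSL coverage: {coverage(sample, gsl_first_1000):.1%}")  # GSL coverage: 61.5%
```

Running the same count against the real GSL and UWL levels yields the 44.6% and 26.44% figures the study reports.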

  4. Baseline budgeting for continuous improvement.

    Science.gov (United States)

    Kilty, G L

    1999-05-01

    This article is designed to introduce the techniques used to convert traditionally maintained department budgets to baseline budgets. This entails identifying key activities, evaluating them for value added, and implementing continuous improvement opportunities. Baseline Budgeting for Continuous Improvement was created as a result of a newly named company president's request to implement zero-based budgeting. The president was frustrated with the mind-set of the organization, namely, "Next year's budget should be 10 to 15 percent more than this year's spending." Zero-based budgeting was not the answer, but combining the principles of activity-based costing with the Just-in-Time philosophy of eliminating waste and continuous improvement did provide a solution to the problem.

  5. Dictionaries for text production

    DEFF Research Database (Denmark)

    Fuertes-Olivera, Pedro; Bergenholtz, Henning

    2018-01-01

    Dictionaries for Text Production are information tools that are designed and constructed for helping users to produce (i.e. encode) texts, both oral and written texts. These can be broadly divided into two groups: (a) specialized text production dictionaries, i.e., dictionaries that only offer...... a small amount of lexicographic data, most or all of which are typically used in a production situation, e.g. synonym dictionaries, grammar and spelling dictionaries, collocation dictionaries, concept dictionaries such as the Longman Language Activator, which is advertised as the World’s First Production...... Dictionary; (b) general text production dictionaries, i.e., dictionaries that offer all or most of the lexicographic data that are typically used in a production situation. A review of existing production dictionaries reveals that there are many specialized text production dictionaries but only a few general...

  6. Long-baseline neutrino oscillation experiments

    International Nuclear Information System (INIS)

    Crane, D.; Goodman, M.

    1994-01-01

    There is no unambiguous definition of long-baseline neutrino oscillation experiments. The term is generally used for accelerator neutrino oscillation experiments which are sensitive to small values of Δm², and for which the detector is not on the accelerator site. The Snowmass N2L working group met to discuss the issues facing such experiments. The Fermilab Program Advisory Committee adopted several recommendations concerning the Fermilab neutrino program at their Aspen meeting immediately prior to the Snowmass Workshop. This heightened the attention given at the workshop to the proposals to use Fermilab for a long-baseline neutrino experiment. The plan for a neutrino oscillation program at Brookhaven was also thoroughly discussed. Opportunities at CERN were considered, particularly the use of detectors at the Gran Sasso laboratory. The idea to build a neutrino beam from KEK towards Superkamiokande was not discussed at the Snowmass meeting, but there has been considerable development of this idea since then. Brookhaven and KEK would use low-energy neutrino beams, while FNAL and CERN plan to have medium-energy beams. This report summarizes a few topics common to LBL proposals and attempts to give a snapshot of where things stand in this fast-developing field.

  7. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    Directory of Open Access Journals (Sweden)

    Yueqian Shen

    2016-12-01

    Full Text Available A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. Either targets or virtual points corresponding to some reconstructable feature in the scene are used as feature points. The new method is demonstrated on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
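The core of the baseline comparison can be sketched as follows: lengths between corresponding feature points are compared across two epochs, so no registration between the scans is needed. The point coordinates below are hypothetical, not the experiment's data.

```python
# Sketch: compare baseline lengths between two epochs of the same scene.
# Coordinates are hypothetical; a 5 cm shift of one point is injected.
import math

def baseline_length(p, q):
    return math.dist(p, q)

def baseline_changes(pts_epoch1, pts_epoch2, baselines):
    """Length change per baseline (positive = lengthening)."""
    return {
        (a, b): baseline_length(pts_epoch2[a], pts_epoch2[b])
              - baseline_length(pts_epoch1[a], pts_epoch1[b])
        for a, b in baselines
    }

before = {"B1": (0.0, 0.0, 0.0), "B2": (1.00, 0.0, 0.0)}
after  = {"B1": (0.0, 0.0, 0.0), "B2": (1.05, 0.0, 0.0)}  # 5 cm shift
print(baseline_changes(before, after, [("B1", "B2")]))
```

Because only distances within each scan are used, the two epochs may be expressed in entirely different coordinate systems.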

  8. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint

    Directory of Open Access Journals (Sweden)

    Ang Gong

    2015-12-01

    Full Text Available For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the searching strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space, ensuring that the correct ambiguity candidates are within it and allowing the search to be carried out directly by the least squares ambiguity decorrelation algorithm (LAMBDA) method. Some candidate vectors are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that, compared to the traditional method with only a baseline length constraint, the new method can use a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not great.

  9. Instant Sublime Text starter

    CERN Document Server

    Haughee, Eric

    2013-01-01

    A starter which teaches the basic tasks to be performed with Sublime Text with the necessary practical examples and screenshots. This book requires only basic knowledge of the Internet and basic familiarity with any one of the three major operating systems, Windows, Linux, or Mac OS X. However, as Sublime Text 2 is primarily a text editor for writing software, many of the topics discussed will be specifically relevant to software development. That being said, the Sublime Text 2 Starter is also suitable for someone without a programming background who may be looking to learn one of the tools of

  10. Large-baseline InSAR for precise topographic mapping: a framework for TanDEM-X large-baseline data

    Directory of Open Access Journals (Sweden)

    M. Pinheiro

    2017-09-01

    Full Text Available The global Digital Elevation Model (DEM) resulting from the TanDEM-X mission provides information about the world topography with outstanding precision. In fact, performance analyses carried out with the already available data have shown that the global product is well within the requirements of 10 m absolute vertical accuracy and 2 m relative vertical accuracy for flat to moderate terrain. The mission's science phase took place from October 2014 to December 2015. During this phase, bistatic acquisitions with across-track separation between the two satellites of up to 3.6 km at the equator were commanded. Since the relative vertical accuracy of InSAR-derived elevation models is, in principle, inversely proportional to the system baseline, the TanDEM-X science phase opened the door to the generation of elevation models with improved quality with respect to the standard product. However, the interferometric processing of the large-baseline data is troublesome due to the increased volume decorrelation and the very high frequency of the phase variations. Hence, in order to fully profit from the increased baseline, sophisticated algorithms for the interferometric processing, and in particular for the phase unwrapping, have to be considered. This paper proposes a novel dual-baseline region-growing framework for the phase unwrapping of the large-baseline interferograms. Results from two experiments with data from the TanDEM-X science phase are discussed, corroborating the expected increased level of detail of the large-baseline DEMs.

  11. Linguistics in Text Interpretation

    DEFF Research Database (Denmark)

    Togeby, Ole

    2011-01-01

    A model for how text interpretation proceeds from what is pronounced, through what is said, to what is communicated, and a definition of the concepts 'presupposition' and 'implicature'.

  12. LocText

    DEFF Research Database (Denmark)

    Cejuela, Juan Miguel; Vinchurkar, Shrikant; Goldberg, Tatyana

    2018-01-01

    trees and was trained and evaluated on a newly improved LocTextCorpus. Combined with an automatic named-entity recognizer, LocText achieved high precision (P = 86%±4). After completing development, we mined the latest research publications for three organisms: human (Homo sapiens), budding yeast...

  13. Systematic text condensation

    DEFF Research Database (Denmark)

    Malterud, Kirsti

    2012-01-01

    To present background, principles, and procedures for a strategy for qualitative analysis called systematic text condensation and to discuss this approach compared with related strategies.

  14. The Perfect Text.

    Science.gov (United States)

    Russo, Ruth

    1998-01-01

    A chemistry teacher describes the elements of the ideal chemistry textbook. The perfect text is focused and helps students draw a coherent whole out of the myriad fragments of information and interpretation. The text would show chemistry as the central science necessary for understanding other sciences and would also root chemistry firmly in the…

  15. Text 2 Mind Map

    OpenAIRE

    Iona, John

    2017-01-01

    This is a review of the web resource 'Text 2 Mind Map' (www.Text2MindMap.com). It covers what the resource is and how it might be used in library and education contexts, in particular by school librarians.

  16. Text File Comparator

    Science.gov (United States)

    Kotler, R. S.

    1983-01-01

    The file comparator program IFCOMP is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts as input two text files and produces a listing of differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level.

  17. Dynamic baseline detection method for power data network service

    Science.gov (United States)

    Chen, Wei

    2017-08-01

    This paper proposes a dynamic baseline traffic detection method based on historical traffic data for the power data network. The method uses Cisco's NetFlow acquisition tool to collect the original historical traffic data from network elements at fixed intervals. It uses three dimensions of information: the communication port, time, and traffic (number of bytes or number of packets). By filtering, removing deviant values, calculating the dynamic baseline value, and comparing the actual value with the baseline value, the method can detect whether the current network traffic is abnormal.
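The filter-then-compare loop described above can be sketched in a few lines. The filter constant `k` and the abnormality tolerance `tol` are hypothetical choices, not values from the paper.

```python
# Sketch of the dynamic-baseline idea: drop deviant historical samples,
# average the rest into a baseline, then flag traffic that strays too far.
# The constants k and tol are illustrative assumptions.
import statistics

def dynamic_baseline(history, k=2.0):
    """Baseline = mean of history after removing k-sigma deviants."""
    mu = statistics.mean(history)
    sd = statistics.pstdev(history)
    kept = [x for x in history if sd == 0 or abs(x - mu) <= k * sd] or history
    return statistics.mean(kept)

def is_abnormal(current, baseline, tol=0.5):
    """Flag traffic deviating more than tol * baseline from the baseline."""
    return abs(current - baseline) > tol * baseline

history = [100] * 9 + [1000]        # bytes per interval; one deviant sample
base = dynamic_baseline(history)    # deviant sample is filtered out
print(base)                         # 100
print(is_abnormal(450, base))       # True
print(is_abnormal(110, base))       # False
```

In practice one baseline would be kept per (port, time-of-day) bin, so the comparison respects the daily traffic pattern.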

  18. Zum Bildungspotenzial biblischer Texte

    Directory of Open Access Journals (Sweden)

    Theis, Joachim

    2017-11-01

    Full Text Available Biblical education as a holistic process goes far beyond biblical learning. It must be understood as a lifelong process in which both biblical texts and those who seek to understand them appropriate their counterpart in a dialogical way. Neither is the recipient's horizon of understanding an empty room that merely has to be filled with the text, nor is the text dead material that can only be examined cognitively. The recipient discovers the meaning of the biblical text by recomposing it through existential appropriation; in this way the text is brought to life in each individual reality. Both scientific insights and subjective structures, as well as the community of understanders, must be included to avoid potential one-sidedness. Unfortunately, a particular negative association very often obscures the approach to the Bible: biblical work as part of religious education still appears in a cognitively oriented form that does justice neither to the vitality and sovereignty of the biblical texts nor to the students' desire for meaning. Moreover, the Bible is misused for teaching moral precepts or for pontification. Such failings can be countered by a biblical didactics of empowerment. Respecting the sovereignty of biblical texts, such didactics assist understanders with their individuation by opening the texts with a focus on the understander's otherness. Thus both the text and the recipient become subjects in a dialogue. The approach of biblical enabling didactics allows the Bible to become, ever anew, a book of life. Understood from within their hermeneutics, empowerment didactics could be raised to the principle of biblical didactics in general and grow into an essential element of holistic education.

  19. Emotion detection from text

    Science.gov (United States)

    Ramalingam, V. V.; Pandian, A.; Jaiswal, Abhijeet; Bhatia, Nikhar

    2018-04-01

    This paper presents a novel method based on machine learning for emotion detection, using various Support Vector Machine algorithms; the major emotions described are linked to WordNet for enhanced accuracy. The proposed approach plays a promising role in augmenting artificial intelligence in the near future and could be vital in the optimization of the human-machine interface.

  20. STATUS OF THE US LONG BASELINE NEUTRINO EXPERIMENT STUDY.

    Energy Technology Data Exchange (ETDEWEB)

    BISHAI,M.

    2006-09-21

    The US Long Baseline Neutrino Experiment Study was commissioned jointly by Brookhaven National Laboratory and Fermi National Accelerator Laboratory to investigate the potential for future U.S.-based long-baseline neutrino oscillation experiments beyond the currently planned program. The Study focused on MW-class conventional neutrino beams that can be produced at Fermilab or BNL. The experimental baselines are based on two possible detector locations: (1) off-axis to the existing Fermilab NuMI beamline at baselines of 700 to 810 km and (2) NSF's proposed future Deep Underground Science and Engineering Laboratory (DUSEL) at baselines greater than 1000 km. Two detector technologies are considered: a megaton-class water Cherenkov detector deployed deep underground at a DUSEL site, or a 100 kT liquid argon time-projection chamber (TPC) deployed on the surface at any of the proposed sites. The physics sensitivities of the proposed experiments are summarized. We find that conventional horn-focused wide-band neutrino beam options from Fermilab or BNL aimed at a massive detector with a baseline of more than 1000 km have the best sensitivity to CP violation and the neutrino mass hierarchy for values of the mixing angle θ13 down to 2.2°.

  1. EST: Evading Scientific Text.

    Science.gov (United States)

    Ward, Jeremy

    2001-01-01

    Examines chemical engineering students' attitudes to text and other parts of English language textbooks. A questionnaire was administered to a group of undergraduates. Results reveal one way students get around the problem of textbook reading. (Author/VWL)

  2. nal Sesotho texts

    African Journals Online (AJOL)

    with literary texts written in indigenous South African languages. The project ... Homi Bhabha uses the words of Salman Rushdie to underline the fact that new .... I could not conceptualise an African-language-to-African-language dictionary. An.

  3. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster-based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases...... the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created......, the classifier is trained on each cluster, having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...
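The cluster-then-classify idea can be sketched as follows. This is a deliberately tiny stand-in, not the paper's model: documents are grouped by vocabulary overlap against hypothetical seed documents, and a majority-label rule replaces the real per-cluster classifiers; all texts, labels, and seeds are invented for illustration.

```python
# Toy sketch of cluster-based text classification: group documents by
# shared vocabulary (unsupervised), then fit a small model per cluster.
# A majority-label rule stands in for a real per-cluster classifier.
from collections import Counter, defaultdict

def tokens(doc):
    return set(doc.lower().split())

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def nearest_cluster(doc, seeds):
    t = tokens(doc)
    return max(range(len(seeds)), key=lambda s: jaccard(t, tokens(seeds[s])))

def train(docs, labels, seeds):
    groups = defaultdict(list)
    for i, d in enumerate(docs):
        groups[nearest_cluster(d, seeds)].append(labels[i])
    # per-cluster "model": majority label among that cluster's examples
    return {c: Counter(ls).most_common(1)[0][0] for c, ls in groups.items()}

def classify(doc, seeds, models):
    return models[nearest_cluster(doc, seeds)]

docs = ["bomb attack threat", "bomb threat",
        "lunch meeting", "meeting schedule lunch"]
labels = ["suspicious", "suspicious", "normal", "normal"]
seeds = ["bomb attack", "meeting schedule"]   # hypothetical cluster seeds

models = train(docs, labels, seeds)
print(classify("unknown bomb attack", seeds, models))  # suspicious
```

Each per-cluster model only ever sees the reduced vocabulary of its own cluster, which is the simplification the abstract describes.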

  4. Plagiarism in Academic Texts

    Directory of Open Access Journals (Sweden)

    Marta Eugenia Rojas-Porras

    2012-08-01

    Full Text Available The ethical and social responsibility of citing the sources in a scientific or artistic work is undeniable. This paper explores, in a preliminary way, academic plagiarism in its various forms. It includes findings based on a forensic analysis. The purpose of this paper is to raise awareness on the importance of considering these details when writing and publishing a text. Hopefully, this analysis may put the issue under discussion.

  5. Machine Translation from Text

    Science.gov (United States)

    Habash, Nizar; Olive, Joseph; Christianson, Caitlin; McCary, John

    Machine translation (MT) from text, the topic of this chapter, is perhaps the heart of the GALE project. Beyond being a well defined application that stands on its own, MT from text is the link between the automatic speech recognition component and the distillation component. The focus of MT in GALE is on translating from Arabic or Chinese to English. The three languages represent a wide range of linguistic diversity and make the GALE MT task rather challenging and exciting.

  6. Integrated Baseline Review (IBR) Handbook

    Science.gov (United States)

    Fleming, Jon F.; Terrell, Stefanie M.

    2018-01-01

    This handbook is intended as a how-to guide to prepare for, conduct, and close out an Integrated Baseline Review (IBR). It discusses the steps that should be considered; describes roles and responsibilities; offers tips for tailoring the IBR based on risk, cost, and the need for management insight; and provides lessons learned from past IBRs. Appendices contain example documentation typically used in connection with an IBR. Note that these appendices are examples only and should be tailored to meet the needs of individual projects and contracts.

  7. Environmental Baseline File National Transportation

    International Nuclear Information System (INIS)

    Harris, M.

    1999-01-01

    This Environmental Baseline File summarizes and consolidates information related to the national-level transportation of commercial spent nuclear fuel. Topics addressed include: shipments of commercial spent nuclear fuel based on mostly truck and mostly rail shipping scenarios; transportation routing for commercial spent nuclear fuel sites and DOE sites; radionuclide inventories for various shipping container capacities; transportation routing; populations along transportation routes; urbanized area population densities; the impacts of historical, reasonably foreseeable, and general transportation; state-level food transfer factors; Federal Guidance Report No. 11 and 12 radionuclide dose conversion factors; and national average atmospheric conditions

  8. Baseline atmospheric program Australia 1993

    International Nuclear Information System (INIS)

    Francey, R.J.; Dick, A.L.; Derek, N.

    1996-01-01

    This publication reports activities, program summaries and data from the Cape Grim Baseline Air Pollution Station in Tasmania, during the calendar year 1993. These activities represent Australia's main contribution to the Background Air Pollution Monitoring Network (BAPMoN), part of the World Meteorological Organization's Global Atmosphere Watch (GAW). The report includes 5 research reports covering trace gas sampling, ozone and radon interdependence, analysis of atmospheric dimethylsulfide and carbon-disulfide, sampling of trace gas composition of the troposphere, and sulfur aerosol/CCN relationship in marine air. Summaries of program reports for the calendar year 1993 are also included. Tabs., figs., refs

  9. Baseline LAW Glass Formulation Testing

    International Nuclear Information System (INIS)

    Kruger, Albert A.; Mooers, Cavin; Bazemore, Gina; Pegg, Ian L.; Hight, Kenneth; Lai, Shan Tao; Buechele, Andrew; Rielley, Elizabeth; Gan, Hao; Muller, Isabelle S.; Cecil, Richard

    2013-01-01

    The major objective of the baseline glass formulation work was to develop and select glass formulations that are compliant with contractual and processing requirements for each of the LAW waste streams. Other objectives of the work included preparation and characterization of glasses with respect to the properties of interest, optimization of sulfate loading in the glasses, evaluation of ability to achieve waste loading limits, testing to demonstrate compatibility of glass melts with melter materials of construction, development of glass formulations to support ILAW qualification activities, and identification of glass formulation issues with respect to contract specifications and processing requirements

  10. Reasoning with Annotations of Texts

    OpenAIRE

    Ma , Yue; Lévy , François; Ghimire , Sudeep

    2011-01-01

    International audience; Linguistic and semantic annotations are important features for text-based applications. However, achieving and maintaining a good quality of a set of annotations is known to be a complex task. Many ad hoc approaches have been developed to produce various types of annotations, while comparing those annotations to improve their quality is still rare. In this paper, we propose a framework in which both linguistic and domain information can cooperate to reason with annotat...

  11. FED baseline engineering studies report

    Energy Technology Data Exchange (ETDEWEB)

    Sager, P.H.

    1983-04-01

    Studies were carried out on the FED Baseline to improve design definition, establish feasibility, and reduce cost. Emphasis was placed on cost reduction, but significant feasibility concerns existed in several areas, and better design definition was required to establish feasibility and provide a better basis for cost estimates. Design definition and feasibility studies included the development of a labyrinth shield ring concept to prevent radiation streaming between the torus spool and the TF coil cryostat. The labyrinth shield concept which was developed reduced radiation streaming sufficiently to permit contact maintenance of the inboard EF coils. Various concepts of preventing arcing between adjacent shield sectors were also explored. It was concluded that installation of copper straps with molybdenum thermal radiation shields would provide the most reliable means of preventing arcing. Other design studies included torus spool electrical/structural concepts, test module shielding, torus seismic response, poloidal conditions in the magnets, disruption characteristics, and eddy current effects. These additional studies had no significant impact on cost but did confirm the feasibility of the basic FED Baseline concept.

  12. FED baseline engineering studies report

    International Nuclear Information System (INIS)

    Sager, P.H.

    1983-04-01

    Studies were carried out on the FED Baseline to improve design definition, establish feasibility, and reduce cost. Emphasis was placed on cost reduction, but significant feasibility concerns existed in several areas, and better design definition was required to establish feasibility and provide a better basis for cost estimates. Design definition and feasibility studies included the development of a labyrinth shield ring concept to prevent radiation streaming between the torus spool and the TF coil cryostat. The labyrinth shield concept which was developed reduced radiation streaming sufficiently to permit contact maintenance of the inboard EF coils. Various concepts of preventing arcing between adjacent shield sectors were also explored. It was concluded that installation of copper straps with molybdenum thermal radiation shields would provide the most reliable means of preventing arcing. Other design studies included torus spool electrical/structural concepts, test module shielding, torus seismic response, poloidal conditions in the magnets, disruption characteristics, and eddy current effects. These additional studies had no significant impact on cost but did confirm the feasibility of the basic FED Baseline concept

  13. Text segmentation in degraded historical document images

    Directory of Open Access Journals (Sweden)

    A.S. Kavitha

    2016-07-01

    Full Text Available Text segmentation from degraded historical Indus script images helps an Optical Character Recognizer (OCR) achieve good recognition rates for Indus scripts; however, it is challenging due to the complex background in such images. In this paper, we present a new method for segmenting text and non-text in Indus documents based on the fact that text components are less cursive than non-text ones. To achieve this, we propose a new combination of the Sobel and Laplacian operators for enhancing degraded low-contrast pixels. The proposed method then generates skeletons for text components in the enhanced images to reduce the computational burden, which in turn helps in studying component structures efficiently. We propose to study the cursiveness of components based on branch information to remove false text components. The proposed method introduces a nearest-neighbor criterion for grouping components in the same line, which results in clusters, and then classifies these clusters into text and non-text clusters based on the characteristics of text components. We evaluate the proposed method on a large dataset containing a variety of images. The results are compared with those of existing methods to show that the proposed method is effective in terms of recall and precision.
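The abstract does not specify how the Sobel and Laplacian responses are combined, so the sketch below shows one plausible reading: sum the Sobel gradient magnitude and the absolute Laplacian, then normalize. The kernels are the standard 3x3 operators; the step image is an invented example.

```python
# Sketch: enhance low-contrast pixels by combining Sobel gradient
# magnitude with the absolute Laplacian (one plausible combination;
# the paper's exact formula is not given in the abstract).
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)

def conv2(img, k):
    """'Same' 2-D correlation with zero padding, 3x3 kernel."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def enhance(img):
    gx, gy = conv2(img, SOBEL_X), conv2(img, SOBEL_Y)
    grad = np.hypot(gx, gy)          # Sobel gradient magnitude
    lap = np.abs(conv2(img, LAPLACIAN))
    combined = grad + lap
    return combined / combined.max() if combined.max() else combined

img = np.zeros((5, 5))
img[:, 3:] = 1.0                     # vertical intensity step (an "edge")
out = enhance(img)
print(out.max(), out[2, 2] > out[2, 0])
```

The response concentrates at the step while flat regions stay near zero, which is the behavior the enhancement step relies on.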

  14. TEXT Energy Storage System

    International Nuclear Information System (INIS)

    Weldon, W.F.; Rylander, H.G.; Woodson, H.H.

    1977-01-01

    The Texas Experimental Tokamak (TEXT) Energy Storage System, designed by the Center for Electromechanics (CEM), consists of four 50 MJ, 125 V homopolar generators and their auxiliaries and is designed to power the toroidal and poloidal field coils of TEXT on a two-minute duty cycle. The four 50 MJ generators connected in series were chosen because they represent the minimum-cost configuration and also a minimal scale-up from the successful 5.0 MJ homopolar generator designed, built, and operated by the CEM.

  15. New mathematical cuneiform texts

    CERN Document Server

    Friberg, Jöran

    2016-01-01

    This monograph presents in great detail a large number of both unpublished and previously published Babylonian mathematical texts in the cuneiform script. It is a continuation of the work A Remarkable Collection of Babylonian Mathematical Texts (Springer 2007) written by Jöran Friberg, the leading expert on Babylonian mathematics. Focussing on the big picture, Friberg explores in this book several Late Babylonian arithmetical and metro-mathematical table texts from the sites of Babylon, Uruk and Sippar, collections of mathematical exercises from four Old Babylonian sites, as well as a new text from Early Dynastic/Early Sargonic Umma, which is the oldest known collection of mathematical exercises. A table of reciprocals from the end of the third millennium BC, differing radically from well-documented but younger tables of reciprocals from the Neo-Sumerian and Old-Babylonian periods, as well as a fragment of a Neo-Sumerian clay tablet showing a new type of a labyrinth are also discussed. The material is presen...

  16. The Emar Lexical Texts

    NARCIS (Netherlands)

    Gantzert, Merijn

    2011-01-01

    This four-part work provides a philological analysis and a theoretical interpretation of the cuneiform lexical texts found in the Late Bronze Age city of Emar, in present-day Syria. These word and sign lists, commonly dated to around 1100 BC, were almost all found in the archive of a single school.

  17. Text Induced Spelling Correction

    NARCIS (Netherlands)

    Reynaert, M.W.C.

    2004-01-01

    We present TISC, a language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from a very large corpus of raw text, without supervision, and contains word

  18. Texts and Readers.

    Science.gov (United States)

    Iser, Wolfgang

    1980-01-01

    Notes that, since fictional discourse need not reflect prevailing systems of meaning and norms or values, readers gain detachment from their own presuppositions; by constituting and formulating text-sense, readers are constituting and formulating their own cognition and becoming aware of the operations for doing so. (FL)

  19. Documents and legal texts

    International Nuclear Information System (INIS)

    2017-01-01

    This section treats of the following documents and legal texts: 1 - Belgium 29 June 2014 - Act amending the Act of 22 July 1985 on Third-Party Liability in the Field of Nuclear Energy; 2 - Belgium, 7 December 2016. - Act amending the Act of 22 July 1985 on Third-Party Liability in the Field of Nuclear Energy

  20. Baseline Estimation and Outlier Identification for Halocarbons

    Science.gov (United States)

    Wang, D.; Schuck, T.; Engel, A.; Gallman, F.

    2017-12-01

    The aim of this paper is to build a baseline model for halocarbons and to statistically identify outliers under specific conditions. Time series of regional CFC-11 and chloromethane measurements are discussed, taken over the last 4 years at two locations: a monitoring station northwest of Frankfurt am Main (Germany) and the Mace Head station (Ireland). In addition to analyzing the time series of CFC-11 and chloromethane, a statistical approach to outlier identification is introduced in order to make a better estimation of the baseline. A second-order polynomial plus harmonics is fitted to the CFC-11 and chloromethane mixing-ratio data. Measurements with a large distance to the fitted curve are regarded as outliers and flagged. The routine is applied iteratively without the flagged measurements until no additional outliers are found. Both the model fitting and the proposed outlier identification method are realized with the help of the programming language Python. During the period, CFC-11 shows a gradual downward trend, and there is a slight upward trend in the mixing ratios of chloromethane. The concentration of chloromethane also has a strong seasonal variation, mostly due to the seasonal cycle of OH. The use of this statistical method has a considerable effect on the results: it efficiently identifies a series of outliers according to the standard-deviation requirements, and after removing the outliers, the fitted curves and trend estimates are more reliable.
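The fit-and-flag loop described above can be sketched as follows. The annual harmonic frequency, the 3-sigma threshold, and the synthetic mixing-ratio data are illustrative assumptions, not the paper's values.

```python
# Sketch: fit a second-order polynomial plus one annual harmonic, then
# iteratively flag samples far from the fit and refit without them.
# Synthetic data; the frequency and 3-sigma cutoff are assumptions.
import numpy as np

def design(t):
    w = 2 * np.pi                    # one cycle per year, t in years
    return np.column_stack([np.ones_like(t), t, t**2,
                            np.sin(w * t), np.cos(w * t)])

def fit_baseline(t, y, k=3.0, max_iter=10):
    keep = np.ones(len(y), dtype=bool)
    for _ in range(max_iter):
        coef, *_ = np.linalg.lstsq(design(t[keep]), y[keep], rcond=None)
        resid = y - design(t) @ coef
        new_keep = np.abs(resid) <= k * resid[keep].std()
        if np.array_equal(new_keep, keep):
            break                     # no additional outliers found
        keep = new_keep
    return coef, keep

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 200)
y = 230.0 - 2.0 * t + 0.1 * t**2 + 3.0 * np.sin(2 * np.pi * t)
y += rng.normal(0.0, 0.5, t.size)
y[10] += 50.0                        # inject two artificial outliers
y[120] -= 40.0

coef, keep = fit_baseline(t, y)
print(keep[10], keep[120], int(keep.sum()))
```

The injected spikes are flagged on the first pass, and the refit on the remaining points recovers the underlying trend and seasonal cycle.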

  1. Pinellas Plant Environmental Baseline Report

    Energy Technology Data Exchange (ETDEWEB)

    1997-06-01

    The Pinellas Plant has been part of the Department of Energy's (DOE) nuclear weapons complex since the plant opened in 1957. In March 1995, the DOE sold the Pinellas Plant to the Pinellas County Industry Council (PCIC). DOE has leased back a large portion of the plant site to facilitate transition to alternate use and safe shutdown. The current mission is to achieve a safe transition of the facility from defense production and prepare the site for alternative uses as a community resource for economic development. Toward that effort, the Pinellas Plant Environmental Baseline Report (EBR) discusses the current and past environmental conditions of the plant site. Information for the EBR is obtained from plant records. Historical process and chemical usage information for each area is reviewed during area characterizations.

  2. Integrated Baseline Review (IBR) Handbook

    Science.gov (United States)

    Fleming, Jon F.; Kehrer, Kristen C.

    2016-01-01

    This handbook is intended to be a how-to guide to prepare for, conduct, and close out an Integrated Baseline Review (IBR). It discusses the steps that should be considered, describes roles and responsibilities, offers tips for tailoring the IBR based on risk, cost, and need for management insight, and provides lessons learned from past IBRs. Appendices contain example documentation typically used in connection with an IBR. Note that these appendices are examples only, and should be tailored to meet the needs of individual projects and contracts. Following the guidance in this handbook will help customers and suppliers preparing for an IBR understand the expectations of the IBR, and ensure that the IBR meets the requirements for both in-house and contract efforts.

  3. Base-line studies for DAE establishments

    International Nuclear Information System (INIS)

    Puranik, V.D.

    2012-01-01

    to ensure that the seasonal variations in parameters are considered. The data is generated for an area covering at least a 30 km radius around the proposed location of the facility, in a manner such that very dense data is generated close to the location and it becomes gradually sparser for the distant areas. Base-line data is generated with the help of local universities and institutions under constant interaction and supervision of the departmental personnel. The work is carried out as per the set protocols laid down by the department for such purposes. The protocols conform to international practices for carrying out such work. The studies include measurement of concentrations of naturally occurring and man-made radionuclides and also heavy toxic metals and other pollutants in various environmental matrices such as air, sub-soil water, surface water, soil, sediment, biota and locally consumed food items including meat, fish, milk, eggs, vegetables and cereals. Studies on the density and variety of flora and fauna in the region are carried out. Health status and demographic status are recorded in detail. Hydrogeological studies are carried out to establish ground water movement at the location. Based on the data so generated, a Remote Sensing and Geographic Information System is prepared to collate the data. For coastal locations, studies of the nearby marine environment are also carried out. The baseline data is a valuable record of the environmental status of a location prevailing before the start of the departmental activity. Its importance is twofold: firstly, it cannot be generated after the start of the activity at the given location, and secondly, it is the most authentic data set which can be referred to later to assess the environmental impact of the facility by evaluating the changes in the environmental parameters, if any. (author)

  4. Magical properties of a 2540 km baseline superbeam experiment

    International Nuclear Information System (INIS)

    Raut, Sushant K.; Singh, Ravi Shanker; Uma Sankar, S.

    2011-01-01

    Lack of any information on the CP-violating phase δ_CP weakens our ability to determine the neutrino mass hierarchy. A magic baseline of 7500 km was proposed to overcome this problem. However, to obtain large enough fluxes at this very long baseline, one needs new techniques for generating high-intensity neutrino beams. In this Letter, we highlight the magical properties of a 2540 km baseline. At such a baseline, using a narrow-band neutrino superbeam whose no-oscillation event rate peaks around the energy 3.5 GeV, we can determine the neutrino mass hierarchy independently of the CP phase. For sin² 2θ₁₃ ≥ 0.05, a very modest exposure of 10 kiloton-years is sufficient to determine the hierarchy. For 0.02 ≤ sin² 2θ₁₃ ≤ 0.05, an exposure of about 100 kiloton-years is needed.

  5. Reading Authentic Texts

    DEFF Research Database (Denmark)

    Balling, Laura Winther

    2013-01-01

    Most research on cognates has focused on words presented in isolation that are easily defined as cognate between L1 and L2. In contrast, this study investigates what counts as cognate in authentic texts and how such cognates are read. Participants with L1 Danish read news articles in their highly proficient L2, English, while their eye-movements were monitored. The experiment shows a cognate advantage for morphologically simple words, but only when cognateness is defined relative to translation equivalents that are appropriate in the context. For morphologically complex words, a cognate disadvantage … word predictability indexed by the conditional probability of each word.

  6. Documents and legal texts

    International Nuclear Information System (INIS)

    2016-01-01

    This section treats of the following documents and legal texts: 1 - Brazil: Law No. 13,260 of 16 March 2016 (To regulate the provisions of item XLIII of Article 5 of the Federal Constitution on terrorism, dealing with investigative and procedural provisions and redefining the concept of a terrorist organisation; and amends Laws No. 7,960 of 21 December 1989 and No. 12,850 of 2 August 2013); 2 - India: The Atomic Energy (Amendment) Act, 2015; Department of Atomic Energy Notification (Civil Liability for Nuclear Damage); 3 - Japan: Act on Subsidisation, etc. for Nuclear Damage Compensation Funds following the implementation of the Convention on Supplementary Compensation for Nuclear Damage

  7. Journalistic Text Production

    DEFF Research Database (Denmark)

    Haugaard, Rikke Hartmann

    A multiple case study investigated three professional text producers' practices as they unfolded in their natural setting at the Spanish newspaper El Mundo, in Madrid. The study applied a combination of quantitative and qualitative methods, i.e. keystroke logging, participant observation and retrospective interview. • Results indicate that journalists' revisions are related to form markedly more often than to content (approx. three …). • Results suggest two writing phases serving …

  8. Weitere Texte physiognomischen Inhalts

    Directory of Open Access Journals (Sweden)

    Böck, Barbara

    2004-12-01

    Full Text Available The present article offers the edition of three cuneiform texts belonging to the Akkadian handbook of omens drawn from the physical appearance as well as the morals and behaviour of man. The book, comprising up to 27 chapters with more than 100 omens each, was entitled in antiquity Alamdimmû. The edition of the three cuneiform tablets thus completes the author's monographic study on the ancient Mesopotamian divinatory discipline of physiognomy (Die babylonisch-assyrische Morphoskopie, Wien 2000 [= AfO Beih. 27]).

    This article presents the editio princeps of three cuneiform texts held in the British Museum (London) and the Vorderasiatisches Museum (Berlin), which belong to the Assyro-Babylonian book of physiognomic omens. This book, originally entitled Alamdimmû ('form, figure'), consists of 27 chapters, each with more than one hundred omens written in Akkadian. The three texts thus complete the author's monographic study on the divinatory discipline of physiognomy in the ancient Near East (Die babylonisch-assyrische Morphoskopie, Wien 2000 [= AfO Beih. 27]).

  9. Utah Text Retrieval Project

    Energy Technology Data Exchange (ETDEWEB)

    Hollaar, L A

    1983-10-01

    The Utah Text Retrieval project seeks well-engineered solutions to the implementation of large, inexpensive, rapid text information retrieval systems. The project has three major components. Perhaps the best known is the work on the specialized processors, particularly search engines, necessary to achieve the desired performance and cost. The other two concern the user interface to the system and the system's internal structure. The work on user interface development is not only concentrating on the syntax and semantics of the query language, but also on the overall environment the system presents to the user. Environmental enhancements include convenient ways to browse through retrieved documents, access to other information retrieval systems through gateways supporting a common command interface, and interfaces to word processing systems. The system's internal structure is based on a high-level data communications protocol linking the user interface, index processor, search processor, and other system modules. This allows them to be easily distributed in a multi- or specialized-processor configuration. It also allows new modules, such as a knowledge-based query reformulator, to be added. 15 references.

  10. Mobile Robots Path Planning Using the Overall Conflict Resolution and Time Baseline Coordination

    Directory of Open Access Journals (Sweden)

    Yong Ma

    2014-01-01

    Full Text Available This paper aims at resolving the path planning problem in a time-varying environment based on the idea of overall conflict resolution and the algorithm of time baseline coordination. The basic task of the introduced path planning algorithms is the automatic generation of the shortest paths from the defined start poses to their end poses for multiple mobile robots, under numerous constraints. Building on this, by using overall conflict resolution within the polynomial-based paths, we take all the constraints into account together, including smoothness, motion boundary, kinematics constraints, obstacle avoidance, and safety constraints among robots. A time baseline coordination algorithm is proposed to process the formulated problem. The foremost strength of our approach is that much time can be saved. Numerical simulations verify its effectiveness.

  11. The California Baseline Methane Survey

    Science.gov (United States)

    Duren, R. M.; Thorpe, A. K.; Hopkins, F. M.; Rafiq, T.; Bue, B. D.; Prasad, K.; Mccubbin, I.; Miller, C. E.

    2017-12-01

    The California Baseline Methane Survey is the first systematic, statewide assessment of methane point source emissions. The objectives are to reduce uncertainty in the state's methane budget and to identify emission mitigation priorities for state and local agencies, utilities and facility owners. The project combines remote sensing of large areas with airborne imaging spectroscopy and spatially resolved bottom-up data sets to detect, quantify and attribute emissions from diverse sectors including agriculture, waste management, oil and gas production and the natural gas supply chain. Phase 1 of the project surveyed nearly 180,000 individual facilities and infrastructure components across California in 2016 - achieving completeness rates ranging from 20% to 100% per emission sector at < 5 meters spatial resolution. Additionally, intensive studies of key areas and sectors were performed to assess source persistence and variability at time scales ranging from minutes to months. Phase 2 of the project continues with additional data collection in Spring and Fall 2017. We describe the survey design and measurement, modeling and analysis methods. We present initial findings regarding the spatial, temporal and sectoral distribution of methane point source emissions in California and their estimated contribution to the state's total methane budget. We provide case studies and lessons learned about key sectors, including examples where super-emitters were identified and mitigated. We summarize challenges and recommendations for future methane research, inventories and mitigation guidance within and beyond California.

  12. 2016 Annual Technology Baseline (ATB)

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley; Kurup, Parthiv; Hand, Maureen; Feldman, David; Sigrin, Benjamin; Lantz, Eric; Stehly, Tyler; Augustine, Chad; Turchi, Craig; O' Connor, Patrick; Waldoch, Connor

    2016-09-01

    Consistent cost and performance data for various electricity generation technologies can be difficult to find and may change frequently for certain technologies. With the Annual Technology Baseline (ATB), the National Renewable Energy Laboratory provides an organized and centralized dataset that was reviewed by internal and external experts. It uses the best information from the Department of Energy laboratory's renewable energy analysts and Energy Information Administration information for conventional technologies. The ATB will be updated annually in order to provide an up-to-date repository of current and future cost and performance data. Going forward, we plan to revise and refine the values using the best available information. The ATB includes both a presentation with notes (PDF) and an associated Excel Workbook. The ATB includes the following electricity generation technologies: land-based wind; offshore wind; utility-scale solar PV; concentrating solar power; geothermal power; hydropower plants (upgrades to existing facilities, powering non-powered dams, and new stream-reach development); conventional coal; coal with carbon capture and sequestration; integrated gasification combined cycle coal; natural gas combustion turbines; natural gas combined cycle; conventional biopower; and nuclear.

  13. Documents and legal texts

    International Nuclear Information System (INIS)

    2013-01-01

    This section reprints a selection of recently published legislative texts and documents: - Russian Federation: Federal Law No.170 of 21 November 1995 on the use of atomic energy, Adopted by the State Duma on 20 October 1995; - Uruguay: Law No.19.056 On the Radiological Protection and Safety of Persons, Property and the Environment (4 January 2013); - Japan: Third Supplement to Interim Guidelines on Determination of the Scope of Nuclear Damage resulting from the Accident at the Tokyo Electric Power Company Fukushima Daiichi and Daini Nuclear Power Plants (concerning Damages related to Rumour-Related Damage in the Agriculture, Forestry, Fishery and Food Industries), 30 January 2013; - France and the United States: Joint Statement on Liability for Nuclear Damage (Aug 2013); - Franco-Russian Nuclear Power Declaration (1 November 2013)

  14. Interconnectedness und digitale Texte

    Directory of Open Access Journals (Sweden)

    Detlev Doherr

    2013-04-01

    Full Text Available The multimedia information services on the Internet are becoming ever larger and more comprehensive, and even documents available only in printed form are being digitized by libraries and put on the net. These documents can be found via online document management systems or search engines and then provided in common formats such as PDF. This article examines the workings of the Humboldt Digital Library (HDL), which for more than ten years has made documents by Alexander von Humboldt freely available on the web in English translation. Unlike a conventional digital library, it does not merely provide digitized documents as scans or PDFs; rather, the text itself is made available in networked form. The system thus resembles an information system more than a digital library, which is also reflected in the available functions for finding texts in different versions and translations, comparing paragraphs of different documents, or displaying images in their context. The development of dynamic hyperlinks based on the individual text paragraphs of Humboldt's works, in the form of media assets, makes it possible to use the Google Maps programming interface for geographic as well as content-based navigation. Going beyond the services of a digital library, the HDL offers the prototype of a multidimensional information system that works with dynamic structures and enables extensive thematic analyses and comparisons.

  15. Ecotaxonmic baseline evaluation of the plant species in a ...

    African Journals Online (AJOL)

    The survey of the flora composition of an ecosystem is important in several environmental baseline studies. An ecotaxonomic assessment was carried out at the Ase-Ndoni proposed Rivgas Refinery project site in order to identify the plant species of medicinal and other economic value. The investigation was carried out to ...

  16. 2016 Annual Technology Baseline (ATB) - Webinar Presentation

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley; Kurup, Parthiv; Hand, Maureen; Feldman, David; Sigrin, Benjamin; Lantz, Eric; Stehly, Tyler; Augustine, Chad; Turchi, Craig; Porro, Gian; O' Connor, Patrick; Waldoch, Connor

    2016-09-13

    This deck was presented for the 2016 Annual Technology Baseline Webinar. The presentation describes the Annual Technology Baseline, which is a compilation of current and future cost and performance data for electricity generation technologies.

  17. 75 FR 66748 - Notice of Baseline Filings

    Science.gov (United States)

    2010-10-29

    ...- 000] Notice of Baseline Filings October 22, 2010. ONEOK Gas Transportation, L.L.C Docket No. PR11-68... above submitted a revised baseline filing of their Statement of Operating Conditions for services...

  18. Documents and legal texts

    International Nuclear Information System (INIS)

    2015-01-01

    This section treats of the following Documents and legal texts: 1 - Canada: Nuclear Liability and Compensation Act (An Act respecting civil liability and compensation for damage in case of a nuclear incident, repealing the Nuclear Liability Act and making consequential amendments to other acts); 2 - Japan: Act on Compensation for Nuclear Damage (The purpose of this act is to protect persons suffering from nuclear damage and to contribute to the sound development of the nuclear industry by establishing a basic system regarding compensation in case of nuclear damage caused by reactor operation etc.); Act on Indemnity Agreements for Compensation of Nuclear Damage; 3 - Slovak Republic: Act on Civil Liability for Nuclear Damage and on its Financial Coverage and on Changes and Amendments to Certain Laws (This Act regulates: a) The civil liability for nuclear damage incurred in the causation of a nuclear incident, b) The scope of powers of the Nuclear Regulatory Authority (hereinafter only as the 'Authority') in relation to the application of this Act, c) The competence of the National Bank of Slovakia in relation to the supervised financial market entities in the financial coverage of liability for nuclear damage; and d) The penalties for violation of this Act)

  19. Documents and legal texts

    International Nuclear Information System (INIS)

    2014-01-01

    This section of the Bulletin presents the recently published documents and legal texts sorted by country: - Brazil: Resolution No. 169 of 30 April 2014. - Japan: Act Concerning Exceptions to Interruption of Prescription Pertaining to Use of Settlement Mediation Procedures by the Dispute Reconciliation Committee for Nuclear Damage Compensation in relation to Nuclear Damage Compensation Disputes Pertaining to the Great East Japan Earthquake (Act No. 32 of 5 June 2013); Act Concerning Measures to Achieve Prompt and Assured Compensation for Nuclear Damage Arising from the Nuclear Plant Accident following the Great East Japan Earthquake and Exceptions to the Extinctive Prescription, etc. of the Right to Claim Compensation for Nuclear Damage (Act No. 97 of 11 December 2013); Fourth Supplement to Interim Guidelines on Determination of the Scope of Nuclear Damage Resulting from the Accident at the Tokyo Electric Power Company Fukushima Daiichi and Daini Nuclear Power Plants (Concerning Damages Associated with the Prolongation of Evacuation Orders, etc.); Outline of 'Fourth Supplement to Interim Guidelines (Concerning Damages Associated with the Prolongation of Evacuation Orders, etc.)'. - OECD Nuclear Energy Agency: Decision and Recommendation of the Steering Committee Concerning the Application of the Paris Convention to Nuclear Installations in the Process of Being Decommissioned; Joint Declaration on the Security of Supply of Medical Radioisotopes. - United Arab Emirates: Federal Decree No. (51) of 2014 Ratifying the Convention on Supplementary Compensation for Nuclear Damage; Ratification of the Federal Supreme Council of Federal Decree No. (51) of 2014 Ratifying the Convention on Supplementary Compensation for Nuclear Damage

  20. Automated analysis of instructional text

    Energy Technology Data Exchange (ETDEWEB)

    Norton, L.M.

    1983-05-01

    The development of a capability for automated processing of natural language text is a long-range goal of artificial intelligence. This paper discusses an investigation into the issues involved in the comprehension of descriptive, as opposed to illustrative, textual material. The comprehension process is viewed as the conversion of knowledge from one representation into another. The proposed target representation consists of statements of the Prolog language, which can be interpreted both declaratively and procedurally, much like production rules. A computer program has been written to model in detail some ideas about this process. The program successfully analyzes several heavily edited paragraphs adapted from an elementary textbook on programming, automatically synthesizing as a result of the analysis a working Prolog program which, when executed, can parse and interpret LET commands in the BASIC language. The paper discusses the motivations and philosophy of the project, the many kinds of prerequisite knowledge which are necessary, and the structure of the text analysis program. A sentence-by-sentence account of the analysis of the sample text is presented, describing the syntactic and semantic processing which is involved. The paper closes with a discussion of lessons learned from the project, possible alternative approaches, and possible extensions for future work. The entire project is presented as illustrative of the nature and complexity of the text analysis process, rather than as providing definitive or optimal solutions to any aspects of the task. 12 references.

  1. 324 Building Baseline Radiological Characterization

    International Nuclear Information System (INIS)

    Reeder, R.J.; Cooper, J.C.

    2010-01-01

    This report documents the analysis of radiological data collected as part of the characterization study performed in 1998. The study was performed to create a baseline of the radiological conditions in the 324 Building. A total of 85 technical (100 cm²) smears were collected from the Room 147 hoods, the Shielded Materials Facility (SMF), and the Radiochemical Engineering Cells (REC). Exposure rate readings (window open and window closed) were taken at distances of 2.5 centimeters (cm) and 30 cm from the surface of each smear. Gross beta-gamma and alpha counts of each smear were also performed. The smear samples were analyzed by gamma energy analysis (GEA). Alpha energy analysis (AEA) and strontium-90 analysis were also performed on selected smears. GEA results for one or more samples reported the presence of manganese-54, cobalt-60, silver-108m, antimony-125, cesium-134, cesium-137, europium-154, europium-155, and americium-241. AEA results reported the presence of plutonium-239/240, plutonium-238/americium-241, curium-243/244, curium-242, and americium-243. Tables 5 through 9 present a summary by location of the estimated maximum removable and total contamination levels in the Room 147 hoods, the SMF, and the REC. The smear sample survey data and laboratory analytical results are presented in tabular form by sample in Appendix A. The Appendix A tables combine survey data documented in radiological survey reports found in Appendix B and laboratory analytical results reported in the 324 Building Physical and Radiological Characterization Study (Berk, Hill, and Landsman 1998), supplemented by the laboratory analytical results found in Appendix C.

  2. Program Baseline Change Control Board charter

    International Nuclear Information System (INIS)

    1993-02-01

    The purpose of this Charter is to establish the Program Baseline Change Control Board (PBCCB) for the Office of Civilian Radioactive Waste Management (OCRWM) Program, and to describe its organization, responsibilities, and basic methods of operation. Guidance for implementing this Charter is provided by the OCRWM Baseline Management Plan (BMP) and OCRWM Program Baseline Change Control Procedure

  3. 40 CFR 1042.825 - Baseline determination.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Baseline determination. 1042.825... Provisions for Remanufactured Marine Engines § 1042.825 Baseline determination. (a) For the purpose of this... not valid. (f) Use good engineering judgment for all aspects of the baseline determination. We may...

  4. The artists' text as work of art

    NARCIS (Netherlands)

    van Rijn, I.A.M.J.

    2017-01-01

    Artists’ texts are texts written and produced by visual artists. Their number increasing since the 2000s, it becomes important to clarify their obscure relationship to art institutions. Analysing and comparing four different artists’ texts on a textual level, this research proposes an alternative to

  5. A Fully Customized Baseline Removal Framework for Spectroscopic Applications.

    Science.gov (United States)

    Giguere, Stephen; Boucher, Thomas; Carey, C J; Mahadevan, Sridhar; Dyar, M Darby

    2017-07-01

    The task of proper baseline or continuum removal is common to nearly all types of spectroscopy. Its goal is to remove any portion of a signal that is irrelevant to features of interest while preserving any predictive information. Despite the importance of baseline removal, median or guessed default parameters are commonly employed, often using commercially available software supplied with instruments. Several published baseline removal algorithms have been shown to be useful for particular spectroscopic applications but their generalizability is ambiguous. The new Custom Baseline Removal (Custom BLR) method presented here generalizes the problem of baseline removal by combining operations from previously proposed methods to synthesize new correction algorithms. It creates novel methods for each technique, application, and training set, discovering new algorithms that maximize the predictive accuracy of the resulting spectroscopic models. In most cases, these learned methods either match or improve on the performance of the best alternative. Examples of these advantages are shown for three different scenarios: quantification of components in near-infrared spectra of corn and laser-induced breakdown spectroscopy data of rocks, and classification/matching of minerals using Raman spectroscopy. Software to implement this optimization is available from the authors. By removing subjectivity from this commonly encountered task, Custom BLR is a significant step toward completely automatic and general baseline removal in spectroscopic and other applications.
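    One classic correction method that a framework like Custom BLR can draw on is asymmetric least squares (AsLS) baseline fitting. The sketch below is a generic illustration, not code from the paper; the smoothness (`lam`) and asymmetry (`p`) values are arbitrary assumptions, and a production version would use sparse matrices.

    ```python
    import numpy as np

    def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
        """Asymmetric least squares baseline estimate: a smooth curve is
        fitted to the spectrum, with points above the curve (peaks)
        down-weighted so the fit tracks the continuum, not the features.
        Dense linear algebra for clarity; lam and p are assumed values."""
        L = len(y)
        # Second-difference penalty matrix enforcing smoothness
        D = np.diff(np.eye(L), 2, axis=0)
        w = np.ones(L)
        for _ in range(n_iter):
            W = np.diag(w)
            z = np.linalg.solve(W + lam * D.T @ D, w * y)
            # Points above the baseline (likely peaks) get small weight p,
            # points on or below it get large weight 1 - p
            w = np.where(y > z, p, 1 - p)
        return z

    # Synthetic spectrum: narrow peak on a slowly varying continuum
    x = np.linspace(0, 10, 300)
    signal = np.exp(-((x - 5.0) ** 2) / 0.05)
    baseline_true = 0.5 + 0.05 * x
    y = signal + baseline_true
    corrected = y - asls_baseline(y)
    print("off-peak residual:", round(float(np.abs(corrected[:100]).mean()), 3))
    ```

    Away from the peak, the corrected spectrum sits near zero, which is the goal of continuum removal: the peak survives while the slowly varying background is subtracted.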

  6. Speaker-dependent Dictionary-based Speech Enhancement for Text-Dependent Speaker Verification

    DEFF Research Database (Denmark)

    Thomsen, Nicolai Bæk; Thomsen, Dennis Alexander Lehmann; Tan, Zheng-Hua

    2016-01-01

    The problem of text-dependent speaker verification under noisy conditions is becoming ever more relevant, due to increased usage for authentication in real-world applications. Classical methods for noise reduction such as spectral subtraction and Wiener filtering introduce distortion and do not perform well in this setting. In this work we compare the performance of different noise reduction methods under different noise conditions in terms of speaker verification when the text is known and the system is trained on clean data (mis-matched conditions). We furthermore propose a new approach based on dictionary-based noise reduction and compare it to the baseline methods.
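    The spectral subtraction mentioned as a classical baseline can be sketched as follows: estimate the noise magnitude spectrum from a noise-only excerpt, subtract it frame by frame, and resynthesize with the noisy phase. This is a minimal generic sketch, not the paper's method; the tone stand-in for speech, the frame sizes, and the availability of a noise-only signal are illustrative assumptions.

    ```python
    import numpy as np

    def spectral_subtraction(noisy, noise, frame=256, hop=128):
        """Magnitude spectral subtraction with overlap-add resynthesis.
        The average noise magnitude spectrum is estimated from a
        noise-only excerpt and subtracted from each frame, flooring
        the result at zero and keeping the noisy phase."""
        win = np.hanning(frame)
        noise_frames = [noise[i:i + frame] * win
                        for i in range(0, len(noise) - frame, hop)]
        noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)
        out = np.zeros(len(noisy))
        norm = np.zeros(len(noisy))
        for i in range(0, len(noisy) - frame, hop):
            spec = np.fft.rfft(noisy[i:i + frame] * win)
            mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # floor at zero
            clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
            out[i:i + frame] += clean * win       # overlap-add
            norm[i:i + frame] += win ** 2
        return out / np.maximum(norm, 1e-8)

    rng = np.random.default_rng(1)
    t = np.arange(16000) / 8000.0
    tone = np.sin(2 * np.pi * 440 * t)            # stand-in for a speech signal
    noise = 0.3 * rng.standard_normal(t.size)
    enhanced = spectral_subtraction(tone + noise, noise)
    # Compare mean squared error against the clean signal (edges trimmed)
    err_before = np.mean(((tone + noise) - tone)[256:-256] ** 2)
    err_after = np.mean((enhanced - tone)[256:-256] ** 2)
    print(err_after < err_before)
    ```

    The floor-at-zero step is also where the characteristic "musical noise" distortion of spectral subtraction originates, which is one reason the abstract notes that such classical methods perform poorly for verification.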

  7. Way to increase the user access at the LCLS baseline

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [European XFEL GmbH, Hamburg; Kocharyan, Vitali; Saldin, Evgeni [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2010-10-15

    Although the LCLS photon beam is meant for a single user, the baseline undulator is long enough to serve two users simultaneously. To this end, we propose a setup composed of two simple elements: an X-ray mirror pair for X-ray beam deflection, and a short (4 m-long) magnetic chicane, which creates an offset for mirror pair installation in the middle of the baseline undulator. The insertable mirror pair can be used for spatial separation of the X-ray beams generated in the first and in the second half of the baseline undulator. The method of deactivating one half and activating another half of the undulator is based on the rapid switching of the FEL amplification process. As proposed elsewhere, using a kicker installed upstream of the LCLS baseline undulator and an already existing corrector in the first half of the undulator, it is possible to rapidly switch the X-ray beam from one user to another, thus providing two active beamlines at any time. We present simulation results dealing with the LCLS baseline, and show that it is possible to generate two saturated SASE X-ray beams in the whole 0.8-8 keV photon energy range in the same baseline undulator. These can be exploited to serve two users. Implementation of the proposed technique does not perturb the baseline mode of operation of the LCLS undulator. Moreover, the magnetic chicane setup is very flexible, and can be used as a self-seeding setup too. We present simulation results for the LCLS baseline undulator with SHAB (second harmonic afterburner) and show that one can produce monochromatic radiation at the 2nd harmonic as well as at the 1st. We describe an efficient way for obtaining multi-user operation at the LCLS hard X-ray FEL. To this end, a photon beam distribution system based on the use of crystals in the Bragg reflection geometry is proposed. 
The reflectivity of crystal deflectors can be switched fast enough by flipping the crystals with piezoelectric devices similar to those used for X-ray phase retarders.

  8. Report made on behalf of the joint parity commission charged with proposing a text on the provisions still under discussion of the bill on the electric and gas public utilities and the electric and gas companies; Rapport fait au nom de la Commission mixte paritaire chargee de proposer un texte sur les dispositions restant en discussion du projet de loi relatif au service public de l'electricite et du gaz et aux entreprises electriques et gaziere

    Energy Technology Data Exchange (ETDEWEB)

    Lenoir, J.C.; Poniatowski, L.

    2004-07-01

    This bill aims to adapt the electricity and gas sector to the new economic context of the opening of the energy markets to competition. It gives energy companies the internal organization needed to guarantee a high level of service and transparent, non-discriminatory third-party access to transport and distribution networks. These evolutions will allow the Electricite de France (EdF) and Gaz de France (GdF) companies to compete on equal terms with their European competitors. The bill first confirms the prime public-utility role of both companies and then transposes the dispositions of the European directives relative to the organization of the EdF and GdF integrated companies. It foresees the creation of two subsidiary companies for the management of energy transport activities. The bill also foresees the change of status of the EdF and GdF companies and the reform of the retirement pensions of the personnel. This report presents, first, in a comparative table, the articles adopted by the French National Assembly and the changes adopted by the Senate. Then, a common text is proposed by the joint parity commission for the articles that remained under discussion. (J.S.)

  9. Text summarization as a decision support aid

    Directory of Open Access Journals (Sweden)

    Workman T

    2012-05-01

    Full Text Available Abstract Background PubMed data potentially can provide decision support information, but PubMed was not exclusively designed to be a point-of-care tool. Natural language processing applications that summarize PubMed citations hold promise for extracting decision support information. The objective of this study was to evaluate the efficiency of a text summarization application called Semantic MEDLINE, enhanced with a novel dynamic summarization method, in identifying decision support data. Methods We downloaded PubMed citations addressing the prevention and drug treatment of four disease topics. We then processed the citations with Semantic MEDLINE, enhanced with the dynamic summarization method. We also processed the citations with a conventional summarization method, as well as with a baseline procedure. We evaluated the results using clinician-vetted reference standards built from recommendations in a commercial decision support product, DynaMed. Results For the drug treatment data, Semantic MEDLINE enhanced with dynamic summarization achieved average recall and precision scores of 0.848 and 0.377, while conventional summarization produced 0.583 average recall and 0.712 average precision, and the baseline method yielded average recall and precision values of 0.252 and 0.277. For the prevention data, Semantic MEDLINE enhanced with dynamic summarization achieved average recall and precision scores of 0.655 and 0.329. The baseline technique resulted in recall and precision scores of 0.269 and 0.247. No conventional Semantic MEDLINE method accommodating summarization for prevention exists. Conclusion Semantic MEDLINE with dynamic summarization outperformed conventional summarization in terms of recall, and outperformed the baseline method in both recall and precision. This new approach to text summarization demonstrates potential in identifying decision support data for multiple needs.
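As a minimal illustration of how the recall and precision figures above are computed against a reference standard (the helper function and item names here are hypothetical, not from the study):

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for a set of retrieved findings
    against a vetted reference standard (the relevant set)."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)                       # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 3 of 4 retrieved findings appear in a 6-item reference standard.
p, r = precision_recall({"a", "b", "c", "d"}, {"a", "b", "c", "e", "f", "g"})
print(p, r)  # 0.75 0.5
```

A method with high recall but low precision, as reported for dynamic summarization, retrieves most reference-standard items at the cost of extra non-relevant ones.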

  10. A Kalman Filter-Based Short Baseline RTK Algorithm for Single-Frequency Combination of GPS and BDS

    Directory of Open Access Journals (Sweden)

    Sihao Zhao

    2014-08-01

    Full Text Available The emerging Global Navigation Satellite Systems (GNSS), including the BeiDou Navigation Satellite System (BDS), offer more visible satellites to positioning users. To employ those new satellites in a real-time kinematic (RTK) algorithm to enhance positioning precision and availability, a data processing model for the dual constellation of GPS and BDS is proposed and analyzed. A Kalman filter-based algorithm is developed to estimate the float ambiguities for short-baseline scenarios. The entire work process of the high-precision algorithm based on the proposed model is investigated in detail. The model is validated with real GPS and BDS data recorded in one zero-baseline and two short-baseline experiments. Results show that the proposed algorithm can generate fixed baseline output with the same precision level as that of either a single GPS or BDS RTK algorithm. The significantly improved fixed rate and time to first fix of the proposed method demonstrate a better availability and effectiveness in processing multi-GNSSs.
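The float-ambiguity stage described above is a linear Kalman filter. A toy sketch of that filter structure follows; it is not the paper's GPS/BDS double-difference model, and the state, noise levels, and dimensions are illustrative assumptions:

```python
import numpy as np

# Minimal linear Kalman filter of the kind used to estimate float
# ambiguities in short-baseline RTK. Here the state is reduced to a
# static 2-vector (think: one baseline component, one float ambiguity)
# observed directly through noise.
def kalman_filter(zs, H, Q, R, x0, P0):
    x, P = x0.copy(), P0.copy()
    I = np.eye(len(x0))
    for z in zs:
        # Predict: static state; process noise Q allows slow drift.
        P = P + Q
        # Update with measurement z.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (I - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
x_true = np.array([1.5, 7.0])        # illustrative true state
H = np.eye(2)
zs = [x_true + rng.normal(0, 0.1, 2) for _ in range(200)]
x_hat, _ = kalman_filter(zs, H, Q=1e-6 * np.eye(2), R=0.01 * np.eye(2),
                         x0=np.zeros(2), P0=10.0 * np.eye(2))
print(np.round(x_hat, 2))
```

In an actual RTK pipeline the float ambiguities estimated this way would then be fixed to integers (e.g. by LAMBDA-style search) before computing the fixed baseline.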

  11. MALDI-TOF Baseline Drift Removal Using Stochastic Bernstein Approximation

    Directory of Open Access Journals (Sweden)

    Howard Daniel

    2006-01-01

    Full Text Available Stochastic Bernstein (SB) approximation can tackle the problem of baseline drift correction of instrumentation data. This is demonstrated for spectral data: matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF) data. Two SB schemes for removing the baseline drift are presented: iterative and direct. Following an explanation of the origin of the MALDI-TOF baseline drift that sheds light on the inherent difficulty of its removal by chemical means, SB baseline drift removal is illustrated for both proteomics and genomics MALDI-TOF data sets. SB is an elegant signal processing approach that yields a numerically straightforward baseline drift removal method, as it includes a free parameter that can be optimized for different baseline drift removal applications. Therefore, research that determines putative biomarkers from the spectral data might benefit from a sensitivity analysis to the underlying spectral measurement that is made possible by varying the SB free parameter, which can be tuned manually or with evolutionary computation.
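The classical (deterministic) Bernstein operator that the stochastic variant generalizes can be sketched as follows; the stochastic scheme and its free parameter are not reproduced here, and the sample function is illustrative:

```python
from math import comb

def bernstein(f_samples, x):
    """Evaluate the degree-n Bernstein approximation B_n(f; x) on [0, 1]
    from the n+1 equally spaced samples f(k/n). The stochastic Bernstein
    variant used for baseline-drift removal generalizes this operator."""
    n = len(f_samples) - 1
    return sum(f_samples[k] * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# Smooth approximation of a slowly varying baseline f(x) = x^2.
# The known identity B_n(t^2; x) = x^2 + x(1 - x)/n gives 0.255 at x = 0.5.
n = 50
samples = [(k / n) ** 2 for k in range(n + 1)]
print(round(bernstein(samples, 0.5), 3))  # 0.255
```

The smoothing strength grows as the degree n shrinks, which is one reason a tunable variant is attractive for drift of unknown smoothness.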

  12. 40 CFR 74.20 - Data for baseline and alternative baseline.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Data for baseline and alternative baseline. 74.20 Section 74.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... baseline and alternative baseline. (a) Acceptable data. (1) The designated representative of a combustion...

  13. Text

    International Nuclear Information System (INIS)

    Anon.

    2009-01-01

    The purpose of this act is to safeguard against the dangers and harmful effects of radioactive waste and to contribute to public safety and environmental protection by laying down requirements for the safe and efficient management of radioactive waste. The act covers definitions, interrelation with other legislation, responsibilities of the state and local governments, responsibilities of radioactive waste management companies and generators, formulation of the basic plan for the control of radioactive waste, radioactive waste management (with public information, financing, and part of spent fuel management), the Korea radioactive waste management corporation (business activities, budget), the establishment of a radioactive waste fund in order to secure the financial resources required for radioactive waste management, and penalties in case of improper operation of radioactive waste management. (N.C.)

  14. FAIR - Baseline technical report. Executive summary

    International Nuclear Information System (INIS)

    Gutbrod, H.H.; Augustin, I.; Eickhoff, H.; Gross, K.D.; Henning, W.F.; Kraemer, D.; Walter, G.

    2006-09-01

    This document presents the Executive Summary, the first of six volumes comprising the 2006 Baseline Technical Report (BTR) for the international FAIR project (Facility for Antiproton and Ion Research). The BTR provides the technical description, cost, schedule, and assessments of risk for the proposed new facility. The purpose of the BTR is to provide a reliable basis for the construction, commissioning and operation of FAIR. The BTR is one of the central documents requested by the FAIR International Steering Committee (ISC) and its working groups, in order to prepare the legal process and the decisions on the construction and operation of FAIR in an international framework. It provides the technical basis for legal contracts on contributions to be made by, so far, 13 countries within the international FAIR Consortium. The BTR begins with this extended Executive Summary as Volume 1, which is also intended for use as a stand-alone document. The Executive Summary provides brief summaries of the accelerator facilities, the scientific programs and experimental stations, civil construction and safety, and of the workproject structure, costs and schedule. (orig.)

  15. Data-Driven Baseline Estimation of Residential Buildings for Demand Response

    Directory of Open Access Journals (Sweden)

    Saehong Park

    2015-09-01

    Full Text Available The advent of advanced metering infrastructure (AMI) generates a large volume of data related to energy service. This paper exploits a data mining approach for customer baseline load (CBL) estimation in demand response (DR) management. CBL plays a significant role in the measurement and verification process, which quantifies the amount of demand reduction and authenticates the performance. The proposed data-driven baseline modeling is based on an unsupervised learning technique. Specifically, we leverage both the self-organizing map (SOM) and K-means clustering for accurate estimation. This two-level approach efficiently reduces the large data set into representative weight vectors in the SOM, and these weight vectors are then clustered by K-means to find the load pattern that would be similar to the potential load pattern of the DR event day. To verify the proposed method, we conduct nationwide-scale experiments in which the residential consumption of three major cities is monitored by smart meters. Our evaluation compares the proposed solution with various types of day-matching techniques, showing that our approach outperforms the existing methods by up to a 68.5% lower error rate.
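The second (K-means) level of the two-level approach can be sketched as below. Note that in the paper the inputs would be SOM weight vectors rather than raw meter data, and all profiles here are synthetic:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means (Lloyd's algorithm). In the paper's two-level scheme,
    SOM weight vectors, not raw meter readings, would be clustered; here
    we cluster synthetic 24-hour load profiles directly for illustration."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two synthetic household patterns: evening-peak vs. flat consumption.
hours = np.arange(24)
evening = 1.0 + np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)
flat = np.full(24, 1.2)
rng = np.random.default_rng(1)
X = np.vstack([evening + rng.normal(0, 0.05, (20, 24)),
               flat + rng.normal(0, 0.05, (20, 24))])
labels, centers = kmeans(X, k=2)
print(labels)
```

The cluster whose center best matches the DR-day morning load would then supply the baseline pattern for that day.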

  16. Low-Power Bitstream-Residual Decoder for H.264/AVC Baseline Profile Decoding

    Directory of Open Access Journals (Sweden)

    Xu Ke

    2009-01-01

    Full Text Available Abstract We present the design and VLSI implementation of a novel low-power bitstream-residual decoder for the H.264/AVC baseline profile. It comprises a syntax parser, a parameter decoder, and an Inverse Quantization Inverse Transform (IQIT) decoder. The syntax parser detects and decodes each incoming codeword in the bitstream under the control of a hierarchical Finite State Machine (FSM); the IQIT decoder performs inverse transform and quantization with pipelining and parallelism. Various power reduction techniques, such as data-driven operation based on statistical results, nonuniform partition, precomputation, guarded evaluation, hierarchical FSM decomposition, the TAG method, zero-block skipping, and clock gating, are adopted and integrated throughout the bitstream-residual decoder. With its innovative architecture, the proposed design is able to decode QCIF video sequences at 30 fps at a clock rate as low as 1.5 MHz. A prototype H.264/AVC baseline decoding chip utilizing the proposed decoder is fabricated in UMC 0.18 μm 1P6M CMOS technology. The proposed design is measured under supplies from 1 V to 1.8 V in 0.1 V steps. It dissipates 76 μW at 1 V and 253 μW at 1.8 V.

  17. Hazard Baseline Downgrade Effluent Treatment Facility

    International Nuclear Information System (INIS)

    Blanchard, A.

    1998-01-01

    This Hazard Baseline Downgrade reviews the Effluent Treatment Facility in accordance with Department of Energy Order 5480.23, the WSRC11Q Facility Safety Document Manual, DOE-STD-1027-92, and DOE-EM-STD-5502-94. It provides a baseline grouping based on the chemical and radiological hazards associated with the facility. The determination of the baseline grouping for ETF will aid in establishing the appropriate set of standards for the facility.

  18. Integrated planning: A baseline development perspective

    International Nuclear Information System (INIS)

    Clauss, L.; Chang, D.

    1994-01-01

    The FEMP Baseline establishes the basis for integrating environmental activity technical requirements with their cost and schedule elements. The result is a path forward to successfully achieving the FERMCO mission. Specific to cost management, the FEMP Baseline has been incorporated into the FERMCO Project Control System (PCS) to provide a time-phased budget plan against which contractor performance is measured with an earned value management system. The result is the Performance Measurement Baseline (PMB), an important tool for keeping costs under control.

  19. IPCC Socio-Economic Baseline Dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — The Intergovernmental Panel on Climate Change (IPCC) Socio-Economic Baseline Dataset consists of population, human development, economic, water resources, land...

  20. Teaching Text Structure: Examining the Affordances of Children's Informational Texts

    Science.gov (United States)

    Jones, Cindy D.; Clark, Sarah K.; Reutzel, D. Ray

    2016-01-01

    This study investigated the affordances of informational texts to serve as model texts for teaching text structure to elementary school children. Content analysis of a random sampling of children's informational texts from top publishers was conducted on text structure organization and on the inclusion of text features as signals of text…

  1. Baseline Design and Performance Analysis of Laser Altimeter for Korean Lunar Orbiter

    Directory of Open Access Journals (Sweden)

    Hyung-Chul Lim

    2016-09-01

    Full Text Available Korea’s lunar exploration project includes the launching of an orbiter, a lander (including a rover, and an experimental orbiter (referred to as a lunar pathfinder. Laser altimeters have played an important scientific role in lunar, planetary, and asteroid exploration missions since their first use in 1971 onboard the Apollo 15 mission to the Moon. In this study, a laser altimeter was proposed as a scientific instrument for the Korean lunar orbiter, which will be launched by 2020, to study the global topography of the surface of the Moon and its gravitational field and to support other payloads such as a terrain mapping camera or spectral imager. This study presents the baseline design and performance model for the proposed laser altimeter. Additionally, the study discusses the expected performance based on numerical simulation results. The simulation results indicate that the design of system parameters satisfies performance requirements with respect to detection probability and range error even under unfavorable conditions.
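As a hedged illustration of the kind of link-budget reasoning behind a laser altimeter's detection-probability and range-error model (all symbols and values below are assumptions for illustration, not the instrument parameters from the study):

```python
import math

# Simplified laser-altimeter link budget: expected signal photoelectrons
# per pulse from a Lambertian surface,
#   N = eta * (E / (h*c/lambda)) * rho * A_rx / (pi * R^2),
# where E is pulse energy, rho surface reflectance, A_rx receiver area,
# eta overall efficiency, and R the range to the surface.
def received_photons(E_tx_J, R_m, rho=0.3, A_rx_m2=0.01, eta=0.5,
                     wavelength_m=1064e-9):
    h, c = 6.626e-34, 3.0e8
    photons_tx = E_tx_J * wavelength_m / (h * c)   # photons per pulse
    return eta * photons_tx * rho * A_rx_m2 / (math.pi * R_m**2)

n100 = received_photons(1e-3, 100e3)   # 100 km orbit
n200 = received_photons(1e-3, 200e3)   # doubling the range quarters the return
print(n100 / n200)
```

This inverse-square dependence is why detection probability must be verified under unfavorable conditions (low reflectance, high altitude), as the abstract notes.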

  2. Modeling and Simulation of Offshore Wind Power Platform for 5 MW Baseline NREL Turbine

    Directory of Open Access Journals (Sweden)

    Taufik Roni Sahroni

    2015-01-01

    Full Text Available This paper presents the modeling and simulation of offshore wind power platform for oil and gas companies. Wind energy has become the fastest growing renewable energy in the world and major gains in terms of energy generation are achievable when turbines are moved offshore. The objective of this project is to propose new design of an offshore wind power platform. Offshore wind turbine (OWT is composed of three main structures comprising the rotor/blades, the tower nacelle, and the supporting structure. The modeling analysis was focused on the nacelle and supporting structure. The completed final design was analyzed using finite element modeling tool ANSYS to obtain the structure’s response towards loading conditions and to ensure it complies with guidelines laid out by classification authority Det Norske Veritas. As a result, a new model of the offshore wind power platform for 5 MW Baseline NREL turbine was proposed.

  3. Baseline development, economic risk, and schedule risk: An integrated approach

    International Nuclear Information System (INIS)

    Tonkinson, J.A.

    1994-01-01

    The economic and schedule risks of Environmental Restoration (ER) projects are commonly analyzed toward the end of the baseline development process. Risk analysis is usually performed as the final element of the scheduling or estimating processes for the purpose of establishing cost and schedule contingency. However, there is an opportunity for earlier assessment of risks, during development of the technical scope and Work Breakdown Structure (WBS). Integrating the processes of risk management and baselining provides for early incorporation of feedback regarding schedule and cost risk into the proposed scope of work. Much of the information necessary to perform risk analysis becomes available during development of the technical baseline, as the scope of work and WBS are being defined. The analysis of risk can actually be initiated early on during development of the technical baseline and continue throughout development of the complete project baseline. Indeed, best business practices suggest that information crucial to the success of a project be analyzed and incorporated into project planning as soon as it is available and usable

  4. Flexible frontiers for text division into rows

    Directory of Open Access Journals (Sweden)

    Dan L. Lacrămă

    2009-01-01

    Full Text Available This paper presents an original solution for flexible hand-written text division into rows. Unlike the standard procedure, the proposed method avoids amputating the extensions of isolated characters and reduces the recognition error rate in the final stage.

  5. Ontology Assisted Formal Specification Extraction from Text

    Directory of Open Access Journals (Sweden)

    Andreea Mihis

    2010-12-01

    Full Text Available In the field of knowledge processing, ontologies are the most important means. They make it possible for the computer to better understand natural language and to make judgments. In this paper, a method that uses ontologies in the semi-automatic extraction of formal specifications from natural language text is proposed.

  6. TAPIR--Finnish national geochemical baseline database.

    Science.gov (United States)

    Jarva, Jaana; Tarvainen, Timo; Reinikainen, Jussi; Eklund, Mikael

    2010-09-15

    In Finland, a Government Decree on the Assessment of Soil Contamination and Remediation Needs has generated a need for reliable and readily accessible data on geochemical baseline concentrations in Finnish soils. According to the Decree, baseline concentrations, referring both to the natural geological background concentrations and the diffuse anthropogenic input of substances, shall be taken into account in the soil contamination assessment process. This baseline information is provided in a national geochemical baseline database, TAPIR, that is publicly available via the Internet. Geochemical provinces with elevated baseline concentrations were delineated to provide regional geochemical baseline values. The nationwide geochemical datasets were used to divide Finland into geochemical provinces. Several metals (Co, Cr, Cu, Ni, V, and Zn) showed anomalous concentrations in seven regions that were defined as metal provinces. Arsenic did not follow a similar distribution to any other elements, and four arsenic provinces were separately determined. Nationwide geochemical datasets were not available for some other important elements such as Cd and Pb. Although these elements are included in the TAPIR system, their distribution does not necessarily follow the ones pre-defined for metal and arsenic provinces. Regional geochemical baseline values, presented as upper limit of geochemical variation within the region, can be used as trigger values to assess potential soil contamination. Baseline values have also been used to determine upper and lower guideline values that must be taken into account as a tool in basic risk assessment. If regional geochemical baseline values are available, the national guideline values prescribed in the Decree based on ecological risks can be modified accordingly. The national geochemical baseline database provides scientifically sound, easily accessible and generally accepted information on the baseline values, and it can be used in various

  7. C-018H Pre-Operational Baseline Sampling Plan

    International Nuclear Information System (INIS)

    Guzek, S.J.

    1993-01-01

    The objective of this task is to field-characterize and sample the soil at selected locations along the proposed effluent line routes for Project C-018H. The overall purpose of this effort is to meet the proposed plan to discontinue the disposal of contaminated liquids into the Hanford soil column, as described by DOE (1987). Detailed information describing the proposed transport pipeline route and the associated Kaiser Engineers Hanford Company (KEH) preliminary drawings (H288746...755), all inclusive, has been prepared by KEH (1992). The information developed from field monitoring and sampling will be utilized to characterize surface and subsurface soil along the proposed C-018H effluent pipeline and its associated facilities. Existing contaminant levels may be encountered; therefore, soil characterization will provide a construction preoperational baseline reference, develop personnel safety requirements, and determine the need for any changes in the proposed routes prior to construction of the pipeline.

  8. Measuring complexity with multifractals in texts. Translation effects

    International Nuclear Information System (INIS)

    Ausloos, M.

    2012-01-01

    Highlights: ► Two texts in English and one in Esperanto are transformed into 6 time series. ► D(q) and f(alpha) of such (and shuffled) time series are obtained. ► A model for text construction is presented based on a parametrized Cantor set. ► The model parameters can also be used when examining machine translated texts. ► Suggested extensions to higher dimensions: in 2D image analysis and on hypertexts. - Abstract: Should quality be almost synonymous with complexity? To measure quality appears to be audacious, even very subjective. It is hereby proposed to use a multifractal approach in order to quantify quality, thus through complexity measures. A one-dimensional system is examined. It is known that (all) written texts can be one-dimensional nonlinear maps. Thus, several written texts by the same author are considered, together with their translation into an unusual language, Esperanto, and, as a baseline, their corresponding shuffled versions. Different one-dimensional time series can be used: e.g. (i) one based on word lengths, (ii) the other based on word frequencies; both are used for studying, comparing and discussing the map structure. It is shown that a variety in style can be measured through the D(q) and f(α) curves characterizing multifractal objects. This allows one to observe, on the one hand, whether natural and artificial languages significantly influence the writing and the translation, and, on the other hand, whether one author's texts differ technically from each other. In fact, the f(α) curves of the original texts are similar to each other, but the translated text shows marked differences. However, in each case, the f(α) curves are far from being parabolic, in contrast to the shuffled texts. Moreover, the Esperanto text has more extreme values. Criteria are thereby suggested for estimating a text quality, as if it were a time series only. A model is introduced in order to substantiate the findings: it consists in considering a text as a random Cantor set.
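The parametrized Cantor-set model lends itself to a compact numerical check of the generalized dimensions D(q). A sketch for a binomial (two-scale) measure follows, with the analytic value for comparison; the weight p is an illustrative parameter, not one fitted to any text:

```python
import numpy as np

# Generalized dimensions D(q) of a binomial (Cantor-type) multiplicative
# measure: at every level, each interval splits in two halves carrying
# weights p and 1-p of its measure.
def binomial_measure(p, levels):
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.concatenate([p * mu, (1 - p) * mu])
    return mu                      # box measures at scale eps = 2**-levels

def D_q(mu, q, levels):
    Z = np.sum(mu ** q)            # partition function sum_i mu_i^q
    tau = np.log2(Z) / (-levels)   # Z ~ eps**tau with eps = 2**-levels
    return tau / (q - 1)

p, levels = 0.3, 12
mu = binomial_measure(p, levels)
d2 = D_q(mu, 2, levels)
d2_exact = -np.log2(p**2 + (1 - p)**2)   # analytic correlation dimension
print(round(d2, 4), round(d2_exact, 4))
```

A uniform measure (p = 0.5) gives D(q) = 1 for all q; the spread of D(q) away from that flat line is the kind of complexity signature the study reads off texts.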

  9. Spectral signature verification using statistical analysis and text mining

    Science.gov (United States)

    DeCoster, Mallory E.; Firpi, Alexe H.; Jacobs, Samantha K.; Cone, Shelli R.; Tzeng, Nigel H.; Rodriguez, Benjamin M.

    2016-05-01

    In the spectral science community, numerous spectral signatures are stored in databases representative of many sample materials collected from a variety of spectrometers and spectroscopists. Due to the variety and variability of the spectra that comprise many spectral databases, it is necessary to establish a metric for validating the quality of spectral signatures. This has been an area of great discussion and debate in the spectral science community. This paper discusses a method that independently validates two different aspects of a spectral signature to arrive at a final qualitative assessment: the textual meta-data and the numerical spectral data. Results associated with the spectral data stored in the Signature Database [1] (SigDB) are proposed. The numerical data comprising a sample material's spectrum is validated based on statistical properties derived from an ideal population set. The quality of the test spectrum is ranked based on a spectral angle mapper (SAM) comparison to the mean spectrum derived from the population set. Additionally, the contextual data of a test spectrum is qualitatively analyzed using lexical-analysis text mining. This technique analyzes the syntax of the meta-data to find local learning patterns and trends within the spectral data, indicative of the test spectrum's quality. Text mining applications have successfully been implemented for security [2] (text encryption/decryption), biomedical [3], and marketing [4] applications. The text mining lexical analysis algorithm is trained on the meta-data patterns of a subset of high and low quality spectra, in order to have a model to apply to the entire SigDB data set. The statistical and textual methods combine to assess the quality of a test spectrum existing in a database without the need of an expert user. This method has been compared to other validation methods accepted by the spectral science community, and has provided promising results when a baseline spectral signature is
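The spectral angle mapper (SAM) comparison mentioned above is simply the angle between two spectra treated as vectors; a minimal sketch (the spectra are made up for illustration):

```python
import math

def spectral_angle(a, b):
    """Spectral Angle Mapper: angle (radians) between a test spectrum and
    a reference spectrum (e.g. the mean of a vetted population set).
    Smaller angles mean closer spectral shape; the metric is insensitive
    to overall illumination scaling."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp to guard against floating-point values just outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

ref = [0.2, 0.4, 0.6, 0.8]
print(spectral_angle([0.4, 0.8, 1.2, 1.6], ref))   # scaled copy: angle 0
print(round(spectral_angle([0.8, 0.6, 0.4, 0.2], ref), 3))
```

Ranking test spectra by their SAM angle to the population mean is the numerical half of the validation scheme described in the abstract.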

  10. AN INTEGRATED RANSAC AND GRAPH BASED MISMATCH ELIMINATION APPROACH FOR WIDE-BASELINE IMAGE MATCHING

    Directory of Open Access Journals (Sweden)

    M. Hasheminasab

    2015-12-01

    Full Text Available In this paper we propose an integrated approach in order to increase the precision of feature point matching. Many different algorithms have been developed to optimize short-baseline image matching, whereas wide-baseline image matching is difficult to handle because of illumination differences and viewpoint changes. Fortunately, the recent developments in the automatic extraction of local invariant features make wide-baseline image matching possible. Matching algorithms based on the local feature similarity principle use feature descriptors to establish correspondence between feature point sets. To date, the most remarkable descriptor is the scale-invariant feature transform (SIFT) descriptor, which is invariant to image rotation and scale, and remains robust across a substantial range of affine distortion, presence of noise, and changes in illumination. The epipolar constraint based on the RANSAC (random sample consensus) method is a conventional model for mismatch elimination, particularly in computer vision. Because only the distance from the epipolar line is considered, a few false matches remain in the matching results selected based on epipolar geometry and RANSAC. Aguilar et al. proposed the Graph Transformation Matching (GTM) algorithm to remove outliers, which has difficulties when the mismatched points are surrounded by the same local neighbor structure. In this study, to overcome the limitations mentioned above, a new three-step matching scheme is presented in which the SIFT algorithm is used to obtain initial corresponding point sets. In the second step, in order to reduce the outliers, the RANSAC algorithm is applied. Finally, to remove the remaining mismatches, the GTM is implemented based on the adjacent K-NN graph. Four different close range image datasets with changes in viewpoint are utilized to evaluate the performance of the proposed method and the experimental results indicate its robustness and
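The RANSAC consensus step can be illustrated with a deliberately simplified model; here a 2-D translation stands in for the epipolar geometry used in the paper, and all correspondences are synthetic:

```python
import numpy as np

# RANSAC mismatch elimination sketch. A translation needs only one
# correspondence per hypothesis, so the minimal sample is a single match;
# the hypothesis with the largest consensus set wins.
def ransac_translation(src, dst, iters=100, thresh=1.0, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))              # minimal sample: one match
        t = dst[i] - src[i]                     # hypothesized translation
        residuals = np.linalg.norm(dst - (src + t), axis=1)
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine the model on the consensus set.
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (30, 2))
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.1, (30, 2))
dst[:5] = rng.uniform(0, 100, (5, 2))           # 5 gross mismatches
t, inliers = ransac_translation(src, dst)
print(np.round(t, 1), inliers.sum())
```

The paper's pipeline applies the same consensus idea with an epipolar model, then hands the surviving matches to GTM for graph-based filtering.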

  11. FAQs about Baseline Testing among Young Athletes

    Science.gov (United States)

    ... a similar exam conducted by a health care professional during the season if an athlete has a suspected concussion. Baseline testing generally takes place during the pre-season—ideally prior to the first practice. It is important to note that some baseline ...

  12. 75 FR 74706 - Notice of Baseline Filings

    Science.gov (United States)

    2010-12-01

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Notice of Baseline Filings November 24, 2010. Centana Intrastate Pipeline, LLC. Docket No. PR10-84-001. Centana Intrastate Pipeline, LLC... applicants listed above submitted a revised baseline filing of their Statement of Operating Conditions for...

  13. 76 FR 8725 - Notice of Baseline Filings

    Science.gov (United States)

    2011-02-15

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Notice of Baseline Filings Enstor Grama Ridge Storage and Docket No. PR10-97-002. Transportation, L.L.C.. EasTrans, LLC Docket No. PR10-30-001... revised baseline filing of their Statement of Operating Conditions for services provided under section 311...

  14. 75 FR 57268 - Notice of Baseline Filings

    Science.gov (United States)

    2010-09-20

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR10-103-000; Docket No. PR10-104-000; Docket No. PR10-105- 000 (Not Consolidated)] Notice of Baseline Filings September 13..., 2010, and September 10, 2010, respectively the applicants listed above submitted their baseline filing...

  15. 75 FR 65010 - Notice of Baseline Filings

    Science.gov (United States)

    2010-10-21

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR11-1-000; Docket No. PR11-2-000; Docket No. PR11-3-000] Notice of Baseline Filings October 14, 2010. Cranberry Pipeline Docket..., 2010, respectively the applicants listed above submitted their baseline filing of its Statement of...

  16. 76 FR 5797 - Notice of Baseline Filings

    Science.gov (United States)

    2011-02-02

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR10-114-001; Docket No. PR10-129-001; Docket No. PR10-131- 001; Docket No. PR10-68-002 Not Consolidated] Notice of Baseline... applicants listed above submitted a revised baseline filing of their Statement of Operating Conditions for...

  17. 75 FR 70732 - Notice of Baseline Filings

    Science.gov (United States)

    2010-11-18

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR11-71-000; Docket No. PR11-72-000; Docket No. PR11-73- 000] Notice of Baseline Filings November 10, 2010. Docket No. PR11-71-000..., 2010, the applicants listed above submitted their baseline filing of their Statement of Operating...

  18. Measurement of baseline and orientation between distributed aerospace platforms.

    Science.gov (United States)

    Wang, Wen-Qin

    2013-01-01

    Distributed platforms play an important role in aerospace remote sensing, radar navigation, and wireless communication applications. However, besides the requirement of highly accurate time and frequency synchronization for coherent signal processing, the baseline between the transmitting platform and the receiving platform and the orientation of the platforms towards each other during data recording must be measured in real time. In this paper, we propose an improved pulsed duplex microwave ranging approach, which allows determining the spatial baseline and orientation between distributed aerospace platforms by the proposed high-precision time-interval estimation method. This approach is novel in the sense that it cancels the effect of oscillator frequency synchronization errors due to the separate oscillators used in the platforms. Several performance specifications are also discussed. The effectiveness of the approach is verified by simulation results.
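The basic time-interval-to-baseline conversion in two-way (pulsed duplex) ranging can be sketched as below; the values are illustrative, and the oscillator-error cancellation of the actual approach is not modeled:

```python
# Two-way ranging sketch: the round-trip time, minus the known transponder
# processing delay, gives the baseline length without requiring the two
# platforms' clocks to be synchronized. Values below are illustrative.
C = 299_792_458.0  # speed of light, m/s

def baseline_from_round_trip(t_round_s, t_proc_s):
    """Baseline length d = c * (t_round - t_proc) / 2."""
    return C * (t_round_s - t_proc_s) / 2.0

# A ~10 km baseline: 68.7 us round trip with a 2 us transponder delay.
d = baseline_from_round_trip(t_round_s=68.7e-6, t_proc_s=2.0e-6)
print(round(d), "m")
```

The factor of two reflects the out-and-back path; a 1 ns error in the time-interval estimate therefore maps to about 15 cm of baseline error, which is why the paper's high-precision time-interval estimation matters.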

  19. Important Text Characteristics for Early-Grades Text Complexity

    Science.gov (United States)

    Fitzgerald, Jill; Elmore, Jeff; Koons, Heather; Hiebert, Elfrieda H.; Bowen, Kimberly; Sanford-Moore, Eleanor E.; Stenner, A. Jackson

    2015-01-01

    The Common Core set a standard for all children to read increasingly complex texts throughout schooling. The purpose of the present study was to explore text characteristics specifically in relation to early-grades text complexity. Three hundred fifty primary-grades texts were selected and digitized. Twenty-two text characteristics were identified…

  20. Stripe-PZT Sensor-Based Baseline-Free Crack Diagnosis in a Structure with a Welded Stiffener

    Directory of Open Access Journals (Sweden)

    Yun-Kyu An

    2016-09-01

    This paper proposes a stripe-PZT sensor-based baseline-free crack diagnosis technique for the heat-affected zone (HAZ) of a structure with a welded stiffener. The proposed technique enables one to identify and localize a crack in the HAZ using only current data measured with a stripe-PZT sensor. The stripe-PZT sensor, which embeds multiple piezoelectric sensors onto a printed circuit board, significantly improves applicability to real structures and minimizes man-made errors associated with the installation process. Moreover, a new frequency-wavenumber analysis-based baseline-free crack diagnosis algorithm minimizes false alarms caused by environmental variations by avoiding simple comparison with baseline data accumulated from the pristine condition of a target structure. The proposed technique is numerically as well as experimentally validated using a plate-like structure with a welded stiffener, revealing that it successfully identifies and localizes a crack in the HAZ.

  1. Baseline Architecture of ITER Control System

    Science.gov (United States)

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

    The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; executing the experiment, including distributed feedback control; and finally collecting, archiving, analyzing and presenting all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks defined. Special attention is given to timing and real-time communication for distributed control. Finally, we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.

  2. TWRS technical baseline database manager definition document

    International Nuclear Information System (INIS)

    Acree, C.D.

    1997-01-01

    This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager

  3. Life Support Baseline Values and Assumptions Document

    Science.gov (United States)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.

    2018-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.

  4. Robust extraction of baseline signal of atmospheric trace species using local regression

    Directory of Open Access Journals (Sweden)

    A. F. Ruckstuhl

    2012-11-01

    The identification of atmospheric trace species measurements that are representative of well-mixed background air masses is required for monitoring atmospheric composition change at background sites. We present a statistical method based on robust local regression that is well suited for the selection of background measurements and the estimation of associated baseline curves. The bootstrap technique is applied to calculate the uncertainty in the resulting baseline curve. The non-parametric nature of the proposed approach makes it a very flexible data filtering method. Application to carbon monoxide (CO) measured from 1996 to 2009 at the high-alpine site Jungfraujoch (Switzerland, 3580 m a.s.l.), and to measurements of 1,1-difluoroethane (HFC-152a) from Jungfraujoch (2000 to 2009) and Mace Head (Ireland, 1995 to 2009), demonstrates the feasibility and usefulness of the proposed approach.

    The determined average annual change of CO at Jungfraujoch for the 1996 to 2009 period, as estimated from filtered annual mean CO concentrations, is −2.2 ± 1.1 ppb yr⁻¹. For comparison, the linear trend of unfiltered CO measurements at Jungfraujoch for this time period is −2.9 ± 1.3 ppb yr⁻¹.
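
    The record's core idea of a robust local-regression baseline can be sketched in a much-simplified form. This is a hypothetical stand-in, not the authors' REBS implementation: the function name, window size, and weighting scheme are assumptions. It exploits the one-sidedness of the problem, since pollution events only push trace-gas concentrations above the background.

    ```python
    import numpy as np

    def robust_baseline(t, y, window=30):
        """Simplified robust local-regression baseline: at each point, fit a
        weighted local line, down-weighting observations far *above* the
        local level (pollution spikes are one-sided outliers)."""
        t = np.asarray(t, dtype=float)
        y = np.asarray(y, dtype=float)
        base = np.empty_like(y)
        for i in range(len(y)):
            lo, hi = max(0, i - window), min(len(y), i + window + 1)
            seg_t, seg_y = t[lo:hi], y[lo:hi]
            resid = seg_y - np.median(seg_y)
            scale = 1.4826 * np.median(np.abs(resid)) + 1e-12  # robust MAD scale
            # Gaussian weights, applied only to positive residuals (one-sided)
            w = np.exp(-np.clip(resid / scale, 0.0, None) ** 2)
            A = np.column_stack([np.ones_like(seg_t), seg_t - t[i]])
            coef, *_ = np.linalg.lstsq(A * w[:, None], seg_y * w, rcond=None)
            base[i] = coef[0]  # local intercept = baseline value at t[i]
        return base
    ```

    On a synthetic series with a flat 100 ppb background and two pollution spikes, the estimated baseline stays near 100 ppb even at the spike locations; the bootstrap uncertainty step of the paper is omitted here.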

  5. Text analysis methods, text analysis apparatuses, and articles of manufacture

    Science.gov (United States)

    Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M

    2014-10-28

    Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.

  6. Classroom Texting in College Students

    Science.gov (United States)

    Pettijohn, Terry F.; Frazier, Erik; Rieser, Elizabeth; Vaughn, Nicholas; Hupp-Wilds, Bobbi

    2015-01-01

    A 21-item survey on texting in the classroom was given to 235 college students. Overall, 99.6% of students owned a cellphone and 98% texted daily. Of the 138 students who texted in the classroom, most texted friends or significant others, and indicated that the reason for classroom texting was boredom or work. Students who texted sent a mean of 12.21…

  7. Hanford Site technical baseline database. Revision 1

    International Nuclear Information System (INIS)

    Porter, P.E.

    1995-01-01

    This report lists the Hanford specific files (Table 1) that make up the Hanford Site Technical Baseline Database. Table 2 includes the delta files that delineate the differences between this revision and revision 0 of the Hanford Site Technical Baseline Database. This information is being managed and maintained on the Hanford RDD-100 System, which uses the capabilities of RDD-100, a systems engineering software system of Ascent Logic Corporation (ALC). This revision of the Hanford Site Technical Baseline Database uses RDD-100 version 3.0.2.2 (see Table 3). Directories reflect those controlled by the Hanford RDD-100 System Administrator. Table 4 provides information regarding the platform. A cassette tape containing the Hanford Site Technical Baseline Database is available

  8. The Text Retrieval Conferences (TRECs)

    Science.gov (United States)

    1998-10-01

    perform a monolingual run in the target language to act as a baseline. Thirteen groups participated in the TREC-6 CLIR track. Three major... language; the use of machine-readable bilingual dictionaries or other existing linguistic resources; and the use of corpus resources to train or... performance for each method. In general, the best cross-language performance was between 50%-75% as effective as a quality monolingual run. The TREC-7

  9. Baseline assessment of the fish and benthic communities of the Flower Garden Banks (NODC Accession 0118358)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The proposed work develops baseline information on fish and benthic communities within the Flower Garden Banks National Marine Sanctuary (FGBNMS). Surveys will employ...

  10. Baseline assessment of fish and benthic communities of the Flower Garden Banks (NODC Accession 0118358)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The proposed work develops baseline information on fish and benthic communities within the Flower Garden Banks National Marine Sanctuary (FGBNMS). Surveys will employ...

  11. Baseline assessment of benthic communities of the Flower Garden Banks (2010 - present): 2011

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The proposed work develops baseline information on fish and benthic communities within the Flower Garden Banks National Marine Sanctuary (FGBNMS). Surveys will employ...

  12. Baseline assessment of fish communities of the Flower Garden Banks (2010 - present): 2011

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The proposed work develops baseline information on fish and benthic communities within the Flower Garden Banks National Marine Sanctuary (FGBNMS). Surveys will employ...

  13. Observation of [Formula: see text] and [Formula: see text] decays.

    Science.gov (United States)

    Aaij, R; Adeva, B; Adinolfi, M; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Archilli, F; d'Argent, P; Arnau Romeu, J; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Babuschkin, I; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baker, S; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Baszczyk, M; Batozskaya, V; Batsukh, B; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Bel, L J; Bellee, V; Belloli, N; Belous, K; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bertolin, A; Betancourt, C; Betti, F; Bettler, M-O; van Beuzekom, M; Bezshyiko, Ia; Bifani, S; Billoir, P; Bird, T; Birnkraut, A; Bitadze, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Boettcher, T; Bondar, A; Bondar, N; Bonivento, W; Bordyuzhin, I; Borgheresi, A; Borghi, S; Borisyak, M; Borsato, M; Bossu, F; Boubdir, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britsch, M; Britton, T; Brodzicka, J; Buchanan, E; Burr, C; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Campora Perez, D H; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cavallero, G; Cenci, R; Charles, M; Charpentier, Ph; Chatzikonstantinidis, G; Chefdeville, M; Chen, S; Cheung, S-F; Chobanova, V; Chrzaszcz, M; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collazuol, G; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombs, G; Coquereau, S; Corti, G; Corvo, M; Costa Sobral, C M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; 
Da Cunha Marinho, F; Dall'Occo, E; Dalseno, J; David, P N Y; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Serio, M; De Simone, P; Dean, C-T; Decamp, D; Deckenhoff, M; Del Buono, L; Demmer, M; Dendek, A; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Dijkstra, H; Dordei, F; Dorigo, M; Dosil Suárez, A; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dungs, K; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Déléage, N; Easo, S; Ebert, M; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Farley, N; Farry, S; Fay, R; Fazzini, D; Ferguson, D; Fernandez Prieto, A; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fini, R A; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fleuret, F; Fohl, K; Fontana, M; Fontanelli, F; Forshaw, D C; Forty, R; Franco Lima, V; Frank, M; Frei, C; Fu, J; Furfaro, E; Färber, C; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; Garcia Martin, L M; García Pardiñas, J; Garra Tico, J; Garrido, L; Garsed, P J; Gascon, D; Gaspar, C; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; Gibson, V; Girard, O G; Giubega, L; Gizdov, K; Gligorov, V V; Golubkov, D; Golutvin, A; Gomes, A; Gorelov, I V; Gotti, C; Govorkova, E; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Griffith, P; Grillo, L; Gruberg Cazon, B R; Grünberg, O; Gushchin, E; Guz, Yu; Gys, T; Göbel, C; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; Hatch, M; He, J; Head, T; Heister, A; Hennessy, K; Henrard, P; Henry, L; Hernando Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hombach, C; Hopchev, H; Hulsbergen, W; Humair, T; Hushchyn, 
M; Hussain, N; Hutchcroft, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jawahery, A; Jiang, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Kanso, W; Karacson, M; Kariuki, J M; Karodia, S; Kecke, M; Kelsey, M; Kenyon, I R; Kenzie, M; Ketel, T; Khairullin, E; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Koliiev, S; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Kosmyntseva, A; Kozachuk, A; Kozeiha, M; Kravchuk, L; Kreplin, K; Kreps, M; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; Kuonen, A K; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lanfranchi, G; Langenbruch, C; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J-P; Leflat, A; Lefrançois, J; Lefèvre, R; Lemaitre, F; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Likhomanenko, T; Lindner, R; Linn, C; Lionetto, F; Liu, B; Liu, X; Loh, D; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusiani, A; Lyu, X; Machefert, F; Maciuc, F; Maev, O; Maguire, K; Malde, S; Malinin, A; Maltsev, T; Manca, G; Mancinelli, G; Manning, P; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massacrier, L M; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurin, B; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; Meadows, B; Meier, F; Meissner, M; Melnychuk, D; Merk, M; Merli, A; Michielin, E; Milanes, D A; Minard, M-N; Mitzel, D S; Mogini, A; Molina Rodriguez, J; Monroy, I A; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A B; Mountain, R; Muheim, F; Mulder, M; Mussini, M; Müller, D; Müller, J; Müller, K; Müller, V; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; 
Nguyen, T D; Nguyen-Mau, C; Nieswand, S; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Oldeman, R; Onderwater, C J G; Otalora Goicochea, J M; Otto, A; Owen, P; Oyanguren, A; Pais, P R; Palano, A; Palombo, F; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Pappalardo, L L; Parker, W; Parkes, C; Passaleva, G; Pastore, A; Patel, G D; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petrov, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pikies, M; Pinci, D; Pistone, A; Piucci, A; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poluektov, A; Polyakov, I; Polycarpo, E; Pomery, G J; Popov, A; Popov, D; Popovici, B; Poslavskii, S; Potterat, C; Price, E; Price, J D; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rama, M; Ramos Pernas, M; Rangel, M S; Raniuk, I; Ratnikov, F; Raven, G; Redi, F; Reichert, S; Dos Reis, A C; Remon Alepuz, C; Renaudin, V; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Rogozhnikov, A; Roiser, S; Rollings, A; Romanovskiy, V; Romero Vidal, A; Ronayne, J W; Rotondo, M; Rudolph, M S; Ruf, T; Ruiz Valls, P; Saborido Silva, J J; Sadykhov, E; Sagidova, N; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schellenberg, M; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schubert, K; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; 
Shevchenko, V; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Simone, S; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, I T; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefko, P; Stefkova, S; Steinkamp, O; Stemmle, S; Stenyakin, O; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Syropoulos, V; Szczekowski, M; Szumlak, T; T'Jampens, S; Tayduganov, A; Tekampe, T; Tellarini, G; Teubert, F; Thomas, E; van Tilburg, J; Tilley, M J; Tisserand, V; Tobin, M; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Toriello, F; Tournefier, E; Tourneur, S; Trabelsi, K; Traill, M; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tully, A; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagnoni, V; Valassi, A; Valat, S; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Venkateswaran, A; Vernet, M; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Viemann, H; Vilasis-Cardona, X; Vitti, M; Volkov, V; Vollhardt, A; Voneki, B; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Vázquez Sierra, C; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wang, J; Ward, D R; Wark, H M; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wicht, J; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wraight, K; Wyllie, K; Xie, Y; Xing, Z; Xu, Z; Yang, Z; Yin, H; Yu, J; Yuan, X; Yushchenko, O; Zarebski, K A; Zavertyaev, M; Zhang, L; Zhang, Y; Zhang, Y; Zhelezov, A; Zheng, Y; Zhokhov, A; Zhu, X; Zhukov, V; Zucchelli, S

    2017-01-01

    The decays [Formula: see text] and [Formula: see text] are observed for the first time using a data sample corresponding to an integrated luminosity of 3.0 fb[Formula: see text], collected by the LHCb experiment in proton-proton collisions at the centre-of-mass energies of 7 and 8[Formula: see text]. The branching fractions relative to that of [Formula: see text] are measured to be [Formula: see text]where the first uncertainties are statistical and the second are systematic.

  14. Mining the Text: 34 Text Features that Can Ease or Obstruct Text Comprehension and Use

    Science.gov (United States)

    White, Sheida

    2012-01-01

    This article presents 34 characteristics of texts and tasks ("text features") that can make continuous (prose), noncontinuous (document), and quantitative texts easier or more difficult for adolescents and adults to comprehend and use. The text features were identified by examining the assessment tasks and associated texts in the national…

  15. From Text to Political Positions: Text analysis across disciplines

    NARCIS (Netherlands)

    Kaal, A.R.; Maks, I.; van Elfrinkhof, A.M.E.

    2014-01-01

    From Text to Political Positions addresses cross-disciplinary innovation in political text analysis for party positioning. Drawing on political science, computational methods and discourse analysis, it presents a diverse collection of analytical models including pure quantitative and

  16. Damage Identification of Bridge Based on Chebyshev Polynomial Fitting and Fuzzy Logic without Considering Baseline Model Parameters

    Directory of Open Access Journals (Sweden)

    Yu-Bo Jiao

    2015-01-01

    The paper presents an effective approach for damage identification of bridges based on Chebyshev polynomial fitting and fuzzy logic systems, without requiring baseline model data. The modal curvature of the damaged bridge can be obtained through central difference approximation based on the displacement modal shape. From the modal curvature of the damaged structure, Chebyshev polynomial fitting is applied to acquire the curvature of the undamaged one without considering baseline parameters. The modal curvature difference can therefore be derived and used for damage localization. Subsequently, the normalized modal curvature difference is treated as the input variable of fuzzy logic systems for damage condition assessment. Numerical simulation on a simply supported bridge was carried out to demonstrate the feasibility of the proposed method.
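
    The curvature-difference step of this approach can be sketched as follows. This is a hypothetical illustration under assumed values (mode shape, damage location and severity, fit degree are all inventions for the example); the fuzzy-logic assessment stage is omitted.

    ```python
    import numpy as np

    # Simply supported beam: first bending mode, with a small local
    # stiffness loss imitated by scaling the mode shape near mid-span.
    x = np.linspace(0.0, 1.0, 101)          # normalized span positions
    phi = np.sin(np.pi * x)                 # damaged-mode displacement shape
    phi[48:53] *= 0.97                      # mimic local damage near x = 0.5

    # Modal curvature by central difference approximation.
    h = x[1] - x[0]
    curv = (phi[:-2] - 2.0 * phi[1:-1] + phi[2:]) / h**2

    # A low-order Chebyshev fit serves as the smooth "undamaged" reference
    # curvature, so no baseline finite-element model is needed.
    coeffs = np.polynomial.chebyshev.chebfit(x[1:-1], curv, deg=6)
    ref = np.polynomial.chebyshev.chebval(x[1:-1], coeffs)

    # The curvature difference peaks where the damage sits.
    damage_index = np.abs(curv - ref)
    loc = x[1:-1][np.argmax(damage_index)]  # estimated damage location
    ```

    The local damage produces sharp second-difference spikes that the smooth polynomial fit cannot follow, so the residual localizes the damaged zone without any baseline data.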

  17. Financial Statement Fraud Detection using Text Mining

    OpenAIRE

    Rajan Gupta; Nasib Singh Gill

    2013-01-01

    Data mining techniques have been used extensively by the research community in detecting financial statement fraud. Most of the research in this direction has used the numbers (quantitative information), i.e. the financial ratios present in financial statements, for detecting fraud. There is very little or no research on the analysis of text such as auditors' comments or notes present in published reports. In this study we propose a text mining approach for detecting financial statement frau...

  18. Text mining from ontology learning to automated text processing applications

    CERN Document Server

    Biemann, Chris

    2014-01-01

    This book comprises a set of articles that specify the methodology of text mining, describe the creation of lexical resources in the framework of text mining and use text mining for various tasks in natural language processing (NLP). The analysis of large amounts of textual data is a prerequisite to build lexical resources such as dictionaries and ontologies and also has direct applications in automated text processing in fields such as history, healthcare and mobile applications, just to name a few. This volume gives an update in terms of the recent gains in text mining methods and reflects

  19. Working with text tools, techniques and approaches for text mining

    CERN Document Server

    Tourte, Gregory J L

    2016-01-01

    Text mining tools and technologies have long been a part of the repository world, where they have been applied to a variety of purposes, from pragmatic aims to support tools. Research areas as diverse as biology, chemistry, sociology and criminology have seen effective use made of text mining technologies. Working With Text collects a subset of the best contributions from the 'Working with text: Tools, techniques and approaches for text mining' workshop, alongside contributions from experts in the area. Text mining tools and technologies in support of academic research include supporting research on the basis of a large body of documents, facilitating access to and reuse of extant work, and bridging between the formal academic world and areas such as traditional and social media. Jisc have funded a number of projects, including NaCTem (the National Centre for Text Mining) and the ResDis programme. Contents are developed from workshop submissions and invited contributions, including: Legal considerations in te...

  20. MeSH: a window into full text for document summarization.

    Science.gov (United States)

    Bhattacharya, Sanmitra; Ha-Thuc, Viet; Srinivasan, Padmini

    2011-07-01

    Previous research in the biomedical text-mining domain has historically been limited to titles, abstracts and metadata available in MEDLINE records. Recent research initiatives such as TREC Genomics and BioCreAtIvE strongly point to the merits of moving beyond abstracts and into the realm of full texts. Full texts are, however, more expensive to process not only in terms of resources needed but also in terms of accuracy. Since full texts contain embellishments that elaborate, contextualize, contrast, supplement, etc., there is greater risk for false positives. Motivated by this, we explore an approach that offers a compromise between the extremes of abstracts and full texts. Specifically, we create reduced versions of full text documents that contain only important portions. In the long-term, our goal is to explore the use of such summaries for functions such as document retrieval and information extraction. Here, we focus on designing summarization strategies. In particular, we explore the use of MeSH terms, manually assigned to documents by trained annotators, as clues to select important text segments from the full text documents. Our experiments confirm the ability of our approach to pick the important text portions. Using the ROUGE measures for evaluation, we were able to achieve maximum ROUGE-1, ROUGE-2 and ROUGE-SU4 F-scores of 0.4150, 0.1435 and 0.1782, respectively, for our MeSH term-based method versus the maximum baseline scores of 0.3815, 0.1353 and 0.1428, respectively. Using a MeSH profile-based strategy, we were able to achieve maximum ROUGE F-scores of 0.4320, 0.1497 and 0.1887, respectively. Human evaluation of the baselines and our proposed strategies further corroborates the ability of our method to select important sentences from the full texts. sanmitra-bhattacharya@uiowa.edu; padmini-srinivasan@uiowa.edu.
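
    The core selection idea of the MeSH-based summarization record can be sketched minimally. This is a hypothetical simplification, not the authors' system: the function name, the substring-matching score, and the example sentences are all assumptions, and the ROUGE evaluation is omitted.

    ```python
    # Score each full-text sentence by how many of the document's MeSH
    # terms it mentions, then keep the top-k sentences (in original order)
    # as the reduced version of the document.

    def mesh_summary(sentences, mesh_terms, k=2):
        def score(sentence):
            s = sentence.lower()
            return sum(term.lower() in s for term in mesh_terms)

        ranked = sorted(sentences, key=score, reverse=True)
        keep = set(ranked[:k])
        return [s for s in sentences if s in keep]  # preserve original order

    sentences = [
        "The study recruited thirty volunteers.",
        "Insulin resistance was measured after glucose infusion.",
        "Results were tabulated in spreadsheet software.",
        "Glucose tolerance improved under the insulin protocol.",
    ]
    summary = mesh_summary(sentences, ["Insulin", "Glucose"], k=2)
    ```

    Because MeSH terms are assigned by trained annotators, sentences that mention them are more likely to carry the document's important content than incidental full-text embellishments.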

  1. Forecasting Sensorimotor Adaptability from Baseline Inter-Trial Correlations

    Science.gov (United States)

    Beaton, K. H.; Bloomberg, J. J.

    2016-01-01

    One of the greatest challenges for sensorimotor adaptation to the spaceflight environment is the large variability in symptoms, and corresponding functional impairments, from one crewmember to the next. This renders preflight training and countermeasure development difficult, as a "one-size-fits-all" approach is inappropriate. Therefore, it would be highly advantageous to know ahead of time which crewmembers might have more difficulty adjusting to the novel g-levels inherent to spaceflight. This information could guide individually customized countermeasures, which would enable more efficient use of crew time and provide better outcomes. The principal aim of this work is to look for baseline performance metrics that relate to locomotor adaptability. We propose a novel hypothesis that considers baseline inter-trial correlations, the trial-to-trial fluctuations ("noise") in motor performance, as a predictor of individual adaptive capabilities.
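
    One simple way to quantify "inter-trial correlations" in a baseline performance series is the lag-1 autocorrelation of trial-to-trial fluctuations. This is a hypothetical sketch of such a metric, not the authors' analysis; the synthetic series and the interpretation are assumptions for illustration only.

    ```python
    import numpy as np

    def lag1_autocorrelation(perf):
        """Lag-1 autocorrelation of a trial-by-trial performance series:
        near 0 for unstructured 'white' noise, near 1 for strongly
        correlated (drifting) fluctuations."""
        x = np.asarray(perf, dtype=float)
        x = x - x.mean()
        return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

    rng = np.random.default_rng(0)
    white = rng.normal(size=500)             # uncorrelated trial noise
    drift = np.cumsum(rng.normal(size=500))  # strongly correlated trial noise
    ```

    Under the hypothesis described in the abstract, individuals whose baseline fluctuations look more like `drift` than `white` would be flagged as candidates for different adaptive capabilities and hence customized countermeasures.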

  2. On the feasibility of routine baseline improvement in processing of geomagnetic observatory data

    Science.gov (United States)

    Soloviev, Anatoly; Lesur, Vincent; Kudin, Dmitry

    2018-02-01

    We propose a new approach to the calculation of regular baselines at magnetic observatories. The proposed approach is based on the simultaneous analysis of the irregular absolute observations and the continuous deltaF time series widely used for estimating data quality. The systematic deltaF analysis makes it possible to take into account all available information about the operation of observatory instruments (i.e., continuous records of the field variations and its modulus) in the intervals between the times of absolute observations, whereas the traditional baseline calculation considers only spot values. To establish a connection with the observed spot baseline values, we introduce a function for approximate evaluation of the intermediate baseline values. An important feature of the algorithm is its quantitative estimation of the resulting data precision and thus its identification of problematic fragments in the raw data. We analyze the robustness of the algorithm using synthetic data sets. We also compare baselines and definitive data derived by the proposed algorithm with those derived by the traditional approach using Saint Petersburg observatory data, recorded in 2015 and accepted by INTERMAGNET. It is shown that the proposed method substantially improves the resulting data quality when the baseline data are poor. The obtained results prove that the baseline can vary quite rapidly in time.
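
    The deltaF quality check that this method builds on can be sketched as follows. This is a hypothetical illustration with invented values: deltaF is the difference between the independently measured scalar field F and the field modulus reconstructed from the variometer components plus the adopted baseline, so a baseline error shows up directly as a deltaF offset.

    ```python
    import numpy as np

    def delta_f(variation_xyz, baseline_xyz, f_scalar):
        """deltaF = measured scalar F minus the modulus of the field vector
        reconstructed from variometer readings plus the adopted baseline."""
        full = variation_xyz + baseline_xyz  # reconstructed field vector, nT
        return f_scalar - np.linalg.norm(full, axis=1)

    # Two example variometer samples (nT) and an adopted baseline (nT);
    # all numbers here are illustrative, not observatory data.
    variation = np.array([[120.0, -30.0, 80.0],
                          [121.5, -29.0, 79.0]])
    baseline = np.array([15000.0, 1000.0, 45000.0])
    f_true = np.linalg.norm(variation + baseline, axis=1)

    resid = delta_f(variation, baseline, f_true)  # ~0 for a correct baseline
    ```

    Perturbing the baseline by a few nT produces a comparable systematic offset in deltaF, which is the information the proposed algorithm mines between absolute observations.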

  3. Shifted Baselines Reduce Willingness to Pay for Conservation

    Directory of Open Access Journals (Sweden)

    Loren McClenachan

    2018-02-01

    A loss of memory of past environmental degradation has resulted in shifted baselines, which may lead to conservation and restoration goals that are less ambitious than if stakeholders had full knowledge of ecosystem potential. However, the link between perception of baseline states and support for conservation planning has not been tested empirically. Here, we investigate how perceptions of change in coral reef ecosystems affect stakeholders' willingness to pay (WTP) for the establishment of protected areas. Coral reefs are experiencing rapid, global change that is observable by the public, and therefore provide an ideal ecosystem to test links between beliefs about baseline states and willingness to support conservation. Our survey respondents perceived change to coral reef communities across six variables: coral abundance, fish abundance, fish diversity, fish size, sedimentation, and water pollution. Respondents who accurately perceived declines in reef health had significantly higher WTP for protected areas (US $256.80 vs. $102.50 per year), suggesting that shifted baselines may reduce engagement with conservation efforts. If WTP translates to engagement, goals for restoration and recovery are likely to be more ambitious if the public is aware of long-term change. Communicating the scope and depth of environmental problems is therefore essential to engaging the public in conservation.

  4. Informational Text and the CCSS

    Science.gov (United States)

    Aspen Institute, 2012

    2012-01-01

    What constitutes an informational text covers a broad swath of different types of texts. Biographies & memoirs, speeches, opinion pieces & argumentative essays, and historical, scientific or technical accounts of a non-narrative nature are all included in what the Common Core State Standards (CCSS) envisions as informational text. Also included…

  5. The Only Safe SMS Texting Is No SMS Texting.

    Science.gov (United States)

    Toth, Cheryl; Sacopulos, Michael J

    2015-01-01

    Many physicians and practice staff use short messaging service (SMS) text messaging to communicate with patients. But SMS text messaging is unencrypted, insecure, and does not meet HIPAA requirements. In addition, the short and abbreviated nature of text messages creates opportunities for misinterpretation, and can negatively impact patient safety and care. Until recently, asking patients to sign a statement that they understand and accept these risks--as well as having policies, device encryption, and cyber insurance in place--would have been enough to mitigate the risk of using SMS text in a medical practice. But new trends and policies have made SMS text messaging unsafe under any circumstance. This article explains these trends and policies, as well as why only secure texting or secure messaging should be used for physician-patient communication.

  6. Associations between baseline allergens and polysensitization

    DEFF Research Database (Denmark)

    Carlsen, B.C.; Menne, T.; Johansen, J.D.

    2008-01-01

    Background: Identification of patients at risk of developing polysensitization is not possible at present. An association between weak sensitizers and polysensitization has been hypothesized. Objectives: To examine associations of 21 allergens in the European baseline series with polysensitization. Patients/Methods: From a database-based study of 14 998 patients patch tested with the European baseline series between 1985 and 2005, a group of 759 (5.1%) patients were polysensitized. Odds ratios were calculated to determine the relative contribution of each allergen to polysensitization. Results: No common denominator for the association between the allergens and polysensitization was apparent, and any association, whether positive or negative, was relatively low. Based on these results, sensitization to specific baseline allergens cannot be used as a risk indicator for polysensitization. Publication date: 2008.

  7. Associations between baseline allergens and polysensitization

    DEFF Research Database (Denmark)

    Carlsen, Berit Christina; Menné, Torkil; Johansen, Jeanne Duus

    2008-01-01

    BACKGROUND: Identification of patients at risk of developing polysensitization is not possible at present. An association between weak sensitizers and polysensitization has been hypothesized. OBJECTIVES: To examine associations of 21 allergens in the European baseline series with polysensitization. PATIENTS/METHODS: From a database-based study of 14 998 patients patch tested with the European baseline series between 1985 and 2005, a group of 759 (5.1%) patients were polysensitized. Odds ratios were calculated to determine the relative contribution of each allergen to polysensitization. RESULTS: No common denominator for the association between the allergens and polysensitization was apparent, and any association, whether positive or negative, was relatively low. Based on these results, sensitization to specific baseline allergens cannot be used as a risk indicator for polysensitization.

  8. Monitoring interaction and collective text production through text mining

    Directory of Open Access Journals (Sweden)

    Macedo, Alexandra Lorandi

    2014-04-01

    Full Text Available This article presents the Concepts Network tool, developed using text mining technology. The main objective of this tool is to extract the terms of greatest incidence from a text, relate them, and exhibit the results in the form of a graph. The Network was implemented in the Collective Text Editor (CTE), which is an online tool that allows the production of texts in synchronized or non-synchronized forms. This article describes the application of the Network both to texts produced collectively and to texts produced in a forum. The purpose of the tool is to offer support to the teacher in managing the high volume of data generated in the process of interaction amongst students and in the construction of the text. Specifically, the aim is to facilitate the teacher's job by allowing him/her to process data in a shorter time than is currently demanded. The results suggest that the Concepts Network can aid the teacher, as it provides indicators of the quality of the text produced. Moreover, messages posted in forums can be analyzed without their content necessarily having to be pre-read.
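    As a rough sketch of what a concepts-network extraction can look like, the following hypothetical Python fragment counts term frequencies and builds co-occurrence edges between the most frequent terms. The tokenizer and the `top_n` cut-off are assumptions; the abstract does not describe the actual Concepts Network tool at this level of detail:

```python
import re
from collections import Counter
from itertools import combinations

def concept_network(texts, top_n=5):
    """Count term frequencies and build co-occurrence edges among the
    top_n most frequent terms (one edge per document in which both occur)."""
    tokenized = [re.findall(r"[a-z]+", t.lower()) for t in texts]
    counts = Counter(w for doc in tokenized for w in doc)
    top = {w for w, _ in counts.most_common(top_n)}
    edges = Counter()
    for doc in tokenized:
        present = sorted(set(doc) & top)
        for a, b in combinations(present, 2):
            edges[(a, b)] += 1
    return counts, edges
```

    The resulting `edges` counter is exactly the weighted adjacency data needed to draw the graph of related concepts.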

  9. Text recycling: acceptable or misconduct?

    Science.gov (United States)

    Harriman, Stephanie; Patel, Jigisha

    2014-08-16

    Text recycling, also referred to as self-plagiarism, is the reproduction of an author's own text from a previous publication in a new publication. Opinions on the acceptability of this practice vary, with some viewing it as acceptable and efficient, and others as misleading and unacceptable. In light of the lack of consensus, journal editors often have difficulty deciding how to act upon the discovery of text recycling. In response to these difficulties, we have created a set of guidelines for journal editors on how to deal with text recycling. In this editorial, we discuss some of the challenges of developing these guidelines, and how authors can avoid undisclosed text recycling.

  10. Baseline methodologies for clean development mechanism projects

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M.K. (ed.); Shrestha, R.M.; Sharma, S.; Timilsina, G.R.; Kumar, S.

    2005-11-15

    The Kyoto Protocol and the Clean Development Mechanism (CDM) came into force on 16th February 2005 with its ratification by Russia. The increasing momentum of this process is reflected in more than 100 projects having been submitted to the CDM Executive Board (CDM-EB) for approval of the baselines and monitoring methodologies, which is the first step in developing and implementing CDM projects. A CDM project should result in a net decrease of GHG emissions below any level that would have resulted from other activities implemented in the absence of that CDM project. The 'baseline' defines the GHG emissions of activities that would have been implemented in the absence of a CDM project. The baseline methodology is the process/algorithm for establishing that baseline. The baseline and the baseline methodology are thus the most critical elements of any CDM project towards meeting the important criterion of the CDM, which is that a CDM project should result in 'real, measurable, and long term benefits related to the mitigation of climate change'. This guidebook is produced within the framework of the United Nations Environment Programme (UNEP) facilitated 'Capacity Development for the Clean Development Mechanism (CD4CDM)' project. This document is published as part of the project's effort to develop guidebooks that cover important issues such as project finance, sustainability impacts, legal frameworks and institutional frameworks. These materials aim to help stakeholders better understand the CDM and should ultimately contribute to maximizing the effect of the CDM in achieving the goal of the UNFCCC and its Kyoto Protocol. This guidebook should be read in conjunction with the information provided in the two other guidebooks entitled 'Clean Development Mechanism: Introduction to the CDM' and 'CDM Information and Guidebook', developed under the CD4CDM project. (BA)

  11. Geochemical baseline studies of soil in Finland

    Science.gov (United States)

    Pihlaja, Jouni

    2017-04-01

    Soil element concentrations vary considerably across Finland, mostly because of differences in bedrock type, which are reflected in soil properties. The Geological Survey of Finland (GTK) is carrying out geochemical baseline studies in Finland; in the present phase, the research focuses on urban areas and mine environments. The information can, for example, be used to determine the need for soil remediation, to assess environmental impacts, or to measure the natural state of soil in industrial areas or mine districts. The field work is done by taking soil samples, typically at depths between 0 and 10 cm. Sampling sites are chosen to represent the areas most vulnerable to human exposure to potentially toxic soil element contents: playgrounds, day-care centers, schools, parks and residential areas. In the mine districts, the samples are taken from areas located outside those affected by airborne dust. Element contents of the soil samples are then analyzed with ICP-AES and ICP-MS, and Hg with CV-AAS. The results of the geochemical baseline studies are published in the Finnish national geochemical baseline database (TAPIR). The geochemical baseline map service is freely available via a web browser. Through this map service it is possible to calculate regional soil baseline values using the geochemical data stored in the map service database. Baseline data for 17 elements in total are provided in the map service, which can be viewed on GTK's web pages (http://gtkdata.gtk.fi/Tapir/indexEN.html).

  12. Baseline methodologies for clean development mechanism projects

    International Nuclear Information System (INIS)

    Lee, M.K.; Shrestha, R.M.; Sharma, S.; Timilsina, G.R.; Kumar, S.

    2005-11-01

    The Kyoto Protocol and the Clean Development Mechanism (CDM) came into force on 16th February 2005 with its ratification by Russia. The increasing momentum of this process is reflected in more than 100 projects having been submitted to the CDM Executive Board (CDM-EB) for approval of the baselines and monitoring methodologies, which is the first step in developing and implementing CDM projects. A CDM project should result in a net decrease of GHG emissions below any level that would have resulted from other activities implemented in the absence of that CDM project. The 'baseline' defines the GHG emissions of activities that would have been implemented in the absence of a CDM project. The baseline methodology is the process/algorithm for establishing that baseline. The baseline and the baseline methodology are thus the most critical elements of any CDM project towards meeting the important criterion of the CDM, which is that a CDM project should result in 'real, measurable, and long term benefits related to the mitigation of climate change'. This guidebook is produced within the framework of the United Nations Environment Programme (UNEP) facilitated 'Capacity Development for the Clean Development Mechanism (CD4CDM)' project. This document is published as part of the project's effort to develop guidebooks that cover important issues such as project finance, sustainability impacts, legal frameworks and institutional frameworks. These materials aim to help stakeholders better understand the CDM and should ultimately contribute to maximizing the effect of the CDM in achieving the goal of the UNFCCC and its Kyoto Protocol. This guidebook should be read in conjunction with the information provided in the two other guidebooks entitled 'Clean Development Mechanism: Introduction to the CDM' and 'CDM Information and Guidebook', developed under the CD4CDM project. (BA)

  13. TEXT DEIXIS IN NARRATIVE SEQUENCES

    Directory of Open Access Journals (Sweden)

    Josep Rivera

    2007-06-01

    Full Text Available This study looks at demonstrative descriptions, regarding them as text-deictic procedures that help weave discourse reference. Text deixis is thought of as a metaphorical referential device which maps the ground of utterance onto the text itself. Demonstrative expressions with textual antecedent-triggers, considered the most important text-deictic units, are identified in a narrative corpus consisting of J. M. Barrie’s Peter Pan and its translation into Catalan. Some linguistic and discourse variables related to DemNPs are analysed to adequately characterise text deixis. It is shown that this referential device is usually combined with abstract nouns, thus categorising and encapsulating (non-nominal) complex discourse entities as nouns, while performing a referential cohesive function by means of the text deixis + general noun type of lexical cohesion.

  14. Digital signal processing reveals circadian baseline oscillation in majority of mammalian genes.

    Directory of Open Access Journals (Sweden)

    Andrey A Ptitsyn

    2007-06-01

    Full Text Available In mammals, circadian periodicity has been described for gene expression in the hypothalamus and multiple peripheral tissues. It is accepted that 10%-15% of all genes oscillate in a daily rhythm, regulated by an intrinsic molecular clock. Statistical analyses of periodicity are limited by the small size of datasets and high levels of stochastic noise. Here, we propose a new approach applying digital signal processing algorithms separately to each group of genes oscillating in the same phase. Combined with statistical tests for periodicity, this method identifies circadian baseline oscillation in almost 100% of all expressed genes. Consequently, circadian oscillation in gene expression should be evaluated in any study related to biological pathways. Changes in gene expression caused by mutations or regulation of environmental factors (such as photic stimuli or feeding) should be considered in the context of changes in the amplitude and phase of genetic oscillations.
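    The core signal-processing step, detecting the dominant oscillation period in an expression time series, can be illustrated with a small FFT-based sketch. This is a hypothetical simplification: the study's actual method processes phase-grouped genes and combines digital signal processing with statistical tests for periodicity:

```python
import numpy as np

def dominant_period(series, sampling_hours=4.0):
    """Return the strongest oscillation period (in hours) of a series
    sampled at regular intervals, using the power spectrum."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                       # detrend: drop the DC component
    power = np.abs(np.fft.rfft(x)) ** 2    # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=sampling_hours)
    k = 1 + np.argmax(power[1:])           # skip the zero-frequency bin
    return 1.0 / freqs[k]
```

    For a transcript sampled every 4 hours over 48 hours with a 24-hour rhythm, the spectral peak lands in the 1/24-per-hour bin.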

  15. The SSVEP-Based BCI Text Input System Using Entropy Encoding Algorithm

    Directory of Open Access Journals (Sweden)

    Yeou-Jiunn Chen

    2015-01-01

    Full Text Available Amyotrophic lateral sclerosis (ALS), or motor neuron disease (MND), is a neurodegenerative disease with various causes. It is characterized by muscle spasticity, rapidly progressive weakness due to muscle atrophy, and difficulty in speaking, swallowing, and breathing. Beyond their physical impairments, severely disabled patients share a common problem: communication. Steady-state visually evoked potential based brain-computer interfaces (BCIs), which apply visual stimuli, are well suited to serve as a communication interface for patients with neuromuscular impairments. In this study, an entropy encoding algorithm is proposed to encode the letters of a multilevel selection interface for BCI text input systems. According to the appearance frequency of each letter, the algorithm constructs a variable-length tree for the letter arrangement of the multilevel selection interface. Gaussian mixture models are then applied to recognize the electrical activity of the brain. Based on the recognition results, the multilevel selection interface guides the subject in spelling and typing words. The experimental results showed that the proposed approach outperforms a baseline system that does not consider the appearance frequency of each letter. Hence, the proposed approach eases text input for patients with neuromuscular impairments.
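    A frequency-based variable-length tree of this kind is essentially Huffman coding over letter frequencies: frequent letters sit closer to the root and need fewer selections. A minimal sketch under that assumption (the paper's exact tree construction may differ):

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build variable-length binary codes from letter frequencies;
    more frequent letters receive shorter codes."""
    tick = count()  # tie-breaker so heap tuples never compare the dicts
    heap = [(f, next(tick), {ch: ""}) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]
```

    In the BCI setting, each bit of a letter's code corresponds to one selection step in the multilevel interface, so common letters are reachable in fewer steps.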

  16. Text against Text: Counterbalancing the Hegemony of Assessment.

    Science.gov (United States)

    Cosgrove, Cornelius

    A study examined whether composition specialists can counterbalance the potential privileging of the assessment perspective, or of self-appointed interpreters of that perspective, through the study of assessment discourse as text. Fourteen assessment texts were examined, most of them journal articles and most of them featuring the common…

  17. SparkText: Biomedical Text Mining on Big Data Framework.

    Directory of Open Access Journals (Sweden)

    Zhan Ye

    Full Text Available Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time updates from new articles published daily. SparkText can be extended to other areas of biomedical research.
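    The classification step can be illustrated at toy scale with a plain multinomial Naive Bayes over bag-of-words counts. This stands in for one of the learners in the Spark-based pipeline and is not SparkText's actual code; the labels and words below are invented examples:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial Naive Bayes model from (text, label) pairs."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict_nb(model, text):
    """Pick the label maximizing log prior + Laplace-smoothed log likelihood."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label, n in label_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

    At production scale the same model family is trained distributedly (e.g., via Spark's MLlib), which is what makes the minutes-versus-hours difference possible.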

  18. Text Character Extraction Implementation from Captured Handwritten Image to Text Conversion Using Template Matching Technique

    Directory of Open Access Journals (Sweden)

    Barate Seema

    2016-01-01

    Full Text Available Images contain various types of useful information that should be extracted whenever required. Various algorithms and methods have been proposed to extract text from a given image, so that users can access the text in any image. Variations in text may occur because of differences in the size, style, orientation, and alignment of text; low image contrast and composite backgrounds further complicate text extraction. An application that extracts and recognizes such text accurately in real time can be applied to many important tasks, such as document analysis, vehicle license plate extraction, and text-based image indexing, and many such applications have become realities in recent years. To address the above problems we develop an application that converts an image into text using algorithms such as bounding box, the HSV model, blob analysis, template matching, and template generation.
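    The template-matching step can be sketched as a pixel-agreement score between a binary character patch and a set of stored templates. This is a deliberate simplification (the paper's pipeline also includes bounding-box detection, HSV modelling, and blob analysis), and the glyph arrays below are illustrative:

```python
import numpy as np

def match_score(patch, template):
    """Fraction of pixels that agree between a binary patch and a template."""
    patch, template = np.asarray(patch), np.asarray(template)
    return float((patch == template).mean())

def recognize(patch, templates):
    """Return the label of the template with the highest matching score."""
    return max(templates, key=lambda lbl: match_score(patch, templates[lbl]))
```

    In practice the patch would first be normalized to the template size, and a score threshold would reject non-character regions.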

  19. Knowledge Representation in Travelling Texts

    DEFF Research Database (Denmark)

    Mousten, Birthe; Locmele, Gunta

    2014-01-01

    Today, information travels fast. Texts travel, too. In a corporate context, the question is how to manage which knowledge elements should travel to a new language area or market, and in which form. The decision to let knowledge elements travel or not depends strongly on the limitations and the purpose of the text in its new context, as well as on predefined parameters for text travel. For texts used in marketing and in technology, the question is whether culture-bound knowledge representation should be domesticated or kept as foreign elements, should be mirrored or moulded, or should not travel at all! When should semantic and pragmatic elements in a text be replaced, and by which other elements? The empirical basis of our work is marketing and technical texts in English, which travel into the Latvian and Danish markets, respectively.

  20. Texting while driving: is speech-based text entry less risky than handheld text entry?

    Science.gov (United States)

    He, J; Chaparro, A; Nguyen, B; Burge, R J; Crandall, J; Chaparro, B; Ni, R; Cao, S

    2014-11-01

    Research indicates that using a cell phone to talk or text while maneuvering a vehicle impairs driving performance. However, few published studies directly compare the distracting effects of texting using a hands-free (i.e., speech-based interface) versus handheld cell phone, which is an important issue for legislation, automotive interface design and driving safety training. This study compared the effect of speech-based versus handheld text entries on simulated driving performance by asking participants to perform a car following task while controlling the duration of a secondary text-entry task. Results showed that both speech-based and handheld text entries impaired driving performance relative to the drive-only condition by causing more variation in speed and lane position. Handheld text entry also increased the brake response time and increased variation in headway distance. Text entry using a speech-based cell phone was less detrimental to driving performance than handheld text entry. Nevertheless, the speech-based text entry task still significantly impaired driving compared to the drive-only condition. These results suggest that speech-based text entry disrupts driving, but reduces the level of performance interference compared to text entry with a handheld device. In addition, the difference in the distraction effect caused by speech-based and handheld text entry is not simply due to the difference in task duration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. O livro didático de ciências no ensino fundamental: proposta de critérios para análise do conteúdo zoológico [The science textbook in elementary education: a proposal for zoology content analysis]

    Directory of Open Access Journals (Sweden)

    Simão Dias Vasconcelos

    2003-01-01

    Full Text Available A crescente discussão sobre a qualidade dos livros didáticos tem provocado sensíveis alterações na produção editorial nos últimos anos. Apesar dos significativos avanços, uma considerável quantidade de professores ainda não tem acesso a instrumentos de análise de livros didáticos. Neste contexto, nós propomos uma série de critérios a serem utilizados por professores de ensino fundamental (6a. série) na escolha de seu livro de Ciências, tendo como modelo o conteúdo zoológico. Os seguintes tópicos foram considerados: conteúdo teórico, recursos visuais, atividades práticas e informações complementares. Pretende-se, com este trabalho, contribuir para o debate sobre a necessidade de um maior envolvimento dos professores no processo de escolha do livro. The growing discussion about the quality of science textbooks has clearly altered the editorial market in the past few years in Brazil. Despite remarkable improvement, a considerable number of teachers have not yet had access to means for analyzing science textbooks. In this context, we propose a series of criteria to be used by teachers when selecting the textbook used at junior high schools, using the zoological content as a model. The following topics were considered: theoretical contents, visual information, practical activities and complementary information. Through this study, we intend to contribute to the debate about the necessity of a stronger involvement of teachers in the process of book choice.

  2. SparkText: Biomedical Text Mining on Big Data Framework

    Science.gov (United States)

    He, Karen Y.; Wang, Kai

    2016-01-01

    Background Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. Results In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. Conclusions This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research. PMID:27685652

  3. SparkText: Biomedical Text Mining on Big Data Framework.

    Science.gov (United States)

    Ye, Zhan; Tafti, Ahmad P; He, Karen Y; Wang, Kai; He, Max M

    Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research.

  4. Active Learning for Text Classification

    OpenAIRE

    Hu, Rong

    2011-01-01

    Text classification approaches are used extensively to solve real-world challenges. The success or failure of text classification systems hangs on the datasets used to train them, without a good dataset it is impossible to build a quality system. This thesis examines the applicability of active learning in text classification for the rapid and economical creation of labelled training data. Four main contributions are made in this thesis. First, we present two novel selection strategies to cho...

  5. Text Mining Applications and Theory

    CERN Document Server

    Berry, Michael W

    2010-01-01

    Text Mining: Applications and Theory presents the state-of-the-art algorithms for text mining from both the academic and industrial perspectives.  The contributors span several countries and scientific domains: universities, industrial corporations, and government laboratories, and demonstrate the use of techniques from machine learning, knowledge discovery, natural language processing and information retrieval to design computational models for automated text analysis and mining. This volume demonstrates how advancements in the fields of applied mathematics, computer science, machine learning

  6. Exploring the potential of short-baseline physics at Fermilab

    Science.gov (United States)

    Miranda, O. G.; Pasquini, Pedro; Tórtola, M.; Valle, J. W. F.

    2018-05-01

    We study the capabilities of the short-baseline neutrino program at Fermilab to probe the unitarity of the lepton mixing matrix. We find the sensitivity to be slightly better than the current one. Motivated by the future DUNE experiment, we have also analyzed the potential of an extra liquid argon near detector in the LBNF beamline. Adding such a near detector to the DUNE setup will substantially improve the current sensitivity to nonunitarity. This would help to remove CP degeneracies due to the new complex phase present in the neutrino mixing matrix. We also study the sensitivity of our proposed setup to light sterile neutrinos for various configurations.

  7. Waste management project technical baseline description

    International Nuclear Information System (INIS)

    Sederburg, J.P.

    1997-01-01

    A systems engineering approach has been taken to describe the technical baseline under which the Waste Management Project is currently operating. The document contains a mission analysis, function analysis, requirement analysis, interface definitions, alternative analysis, system definition, documentation requirements, implementation definitions, and discussion of uncertainties facing the Project

  8. Baseline and Multimodal UAV GCS Interface Design

    Science.gov (United States)

    2013-07-01

    complete a computerized version of the NASA-TLX assessment of perceived mental workload. Results: the baseline condition ran smoothly and with... [Acronyms recovered from the report: MALE, medium-altitude long-endurance; NASA-TLX, NASA Task Load Index; SA, situation awareness; TDT, Tucker Davis Technologies; UAV, uninhabited aerial vehicle]

  9. National Cyberethics, Cybersafety, Cybersecurity Baseline Study

    Science.gov (United States)

    Education Digest: Essential Readings Condensed for Quick Review, 2009

    2009-01-01

    This article presents findings from a study that explores the nature of the Cyberethics, Cybersafety, and Cybersecurity (C3) educational awareness policies, initiatives, curriculum, and practices currently taking place in the U.S. public and private K-12 educational settings. The study establishes baseline data on C3 awareness, which can be used…

  10. Guidance on Port Biological Baseline Surveys (PBBS)

    Digital Repository Service at National Institute of Oceanography (India)

    Awad, A.; Haag, F.; Anil, A.C.; Abdulla, A.

    This publication has been prepared by GBP, IOI, CSIR-NIO and IUCN in order to serve as guidance to those who are planning to carry out a port biological baseline survey, in particular in the context of Ballast Water Management. It has been drafted...

  11. Solid Waste Program technical baseline description

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, A.B.

    1994-07-01

    The system engineering approach has been taken to describe the technical baseline under which the Solid Waste Program is currently operating. The document contains a mission analysis, function analysis, system definition, documentation requirements, facility and project bases, and uncertainties facing the program.

  12. Toward Baseline Software Anomalies in NASA Missions

    Science.gov (United States)

    Layman, Lucas; Zelkowitz, Marvin; Basili, Victor; Nikora, Allen P.

    2012-01-01

    In this fast abstract, we provide preliminary findings from an analysis of 14,500 spacecraft anomalies from unmanned NASA missions. We provide some baselines for the distributions of software vs. non-software anomalies in spaceflight systems, the risk ratings of software anomalies, and the corrective actions associated with software anomalies.

  13. Thesis Proposal

    DEFF Research Database (Denmark)

    Sloth, Erik

    2010-01-01

    The structure of the thesis proposal is as follows: first, I present my concrete empirical research projects, which are to result in the dissertation's articles. I then present the theoretical considerations on the concept of experience and on consumer culture theory that form the background for how I arrived at...

  14. Mercury baseline levels in Flemish soils (Belgium)

    International Nuclear Information System (INIS)

    Tack, Filip M.G.; Vanhaesebroeck, Thomas; Verloo, Marc G.; Van Rompaey, Kurt; Ranst, Eric van

    2005-01-01

It is important to establish the contaminant levels normally present in soils in order to provide baseline data for pollution studies. Mercury is a toxic element of concern. This study aimed to assess baseline mercury levels in soils in Flanders. In a previous study, mercury contents in soils in Oost-Vlaanderen were found to be significantly above levels reported elsewhere. For the current study, observations were extended over two more provinces, West-Vlaanderen and Antwerpen. Ranges of soil Hg contents were distinctly higher in the province Oost-Vlaanderen (interquartile range from 0.09 to 0.43 mg/kg) than in the other provinces (interquartile ranges from 0.07 to 0.13 and 0.07 to 0.15 mg/kg for West-Vlaanderen and Antwerpen, respectively). The standard threshold method was applied to separate soils containing baseline levels of Hg from the data. Baseline concentrations for Hg were characterised by a median of 0.10 mg Hg/kg dry soil, an interquartile range from 0.07 to 0.14 mg/kg and a 90th-percentile value of 0.30 mg/kg. The influence of soil properties such as clay and organic carbon contents, and pH, on baseline Hg concentrations was not important. Maps of the spatial distribution of Hg levels showed that the province Oost-Vlaanderen exhibited zones with systematically higher Hg soil contents. This may be related to the former presence of many small-scale industries employing mercury in that region. - Increased mercury levels may reflect human activity
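
In many soil-geochemistry studies, the "standard threshold method" mentioned above amounts to iteratively excluding values that lie more than about two standard deviations above the mean until the remaining set stabilizes; what survives is taken as the baseline population. A minimal sketch of that generic idea (the concentrations below are invented for illustration, not the study's data):

```python
# Iterative threshold method (generic sketch): repeatedly drop values above
# mean + k*sd until no value is excluded; the remainder is the baseline set.

def baseline_threshold(values, k=2.0):
    """Return the subset of values considered baseline."""
    data = sorted(values)
    while True:
        mean = sum(data) / len(data)
        sd = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
        kept = [x for x in data if x <= mean + k * sd]
        if len(kept) == len(data):   # stable: nothing more to exclude
            return kept
        data = kept

# Hypothetical Hg concentrations (mg/kg); two values are clearly elevated.
hg = [0.07, 0.08, 0.10, 0.11, 0.12, 0.14, 0.15, 0.43, 1.2]
base = baseline_threshold(hg)
```

Here the two elevated values (0.43 and 1.2 mg/kg) are flagged as non-baseline, while the remaining seven define the baseline range.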

  15. Layout-aware text extraction from full-text PDF of scientific articles

    Directory of Open Access Journals (Sweden)

    Ramakrishnan Cartic

    2012-05-01

Background: The Portable Document Format (PDF) is the most commonly used file format for online scientific publications. The absence of effective means to extract text from these PDF files in a layout-aware manner presents a significant challenge for developers of biomedical text mining or biocuration informatics systems that use published literature as an information source. In this paper we introduce the 'Layout-Aware PDF Text Extraction' (LA-PDFText) system to facilitate accurate extraction of text from PDF files of research articles for use in text mining applications. Results: Our paper describes the construction and performance of an open source system that extracts text blocks from PDF-formatted full-text research articles and classifies them into logical units based on rules that characterize specific sections. The LA-PDFText system focuses only on the textual content of the research articles and is meant as a baseline for further experiments into more advanced extraction methods that handle multi-modal content, such as images and graphs. The system works in a three-stage process: (1) detecting contiguous text blocks using spatial layout processing to locate and identify blocks of contiguous text, (2) classifying text blocks into rhetorical categories using a rule-based method, and (3) stitching classified text blocks together in the correct order, resulting in the extraction of text from section-wise grouped blocks. We show that our system can identify text blocks and classify them into rhetorical categories with precision = 0.96, recall = 0.89 and F1 = 0.91. We also present an evaluation of the accuracy of the block detection algorithm used in step 2. Additionally, we have compared the accuracy of the text extracted by LA-PDFText to the text from the Open Access subset of PubMed Central. We then compared this accuracy with that of the text extracted by the PDF2Text system, commonly used to extract text from PDF

  16. Text and ideology: text-oriented discourse analysis

    Directory of Open Access Journals (Sweden)

    Maria Eduarda Gonçalves Peixoto

    2018-04-01

The article aims to contribute to the understanding of the connection between text and ideology as articulated by text-oriented discourse analysis (ADTO). Based on the reflections of Fairclough (1989, 2001, 2003) and Fairclough and Chouliaraki (1999), the debate presents the social ontology that ADTO uses to ground its conception of social life as an open, textually mediated system; the article then explains the chronological-narrative development of the main critical theories of ideology, by virtue of which ADTO organizes the assumptions that underpin its particular use of the term. Finally, the discussion presents the main aspects of the connection between text and ideology, offering a conceptual framework that can contribute to mastery of the theme within a critical discourse analysis approach.

  17. Building Background Knowledge through Reading: Rethinking Text Sets

    Science.gov (United States)

    Lupo, Sarah M.; Strong, John Z.; Lewis, William; Walpole, Sharon; McKenna, Michael C.

    2018-01-01

    To increase reading volume and help students access challenging texts, the authors propose a four-dimensional framework for text sets. The quad text set framework is designed around a target text: a challenging content area text, such as a canonical literary work, research article, or historical primary source document. The three remaining…

  18. English Metafunction Analysis in Chemistry Text: Characterization of Scientific Text

    Directory of Open Access Journals (Sweden)

    Ahmad Amin Dalimunte, M.Hum

    2013-09-01

The objectives of this research are to identify which Metafunctions are applied in a chemistry text and how they characterize a scientific text. It was conducted by applying content analysis. The data for this research were a twelve-paragraph chemistry text, collected by applying a documentary technique. The document was read and analyzed to identify the Metafunctions. The data were analyzed by several procedures: identifying the types of process, counting the processes, categorizing and counting the cohesion devices, classifying the types of modulation and determining modality value, and finally counting the sentences and clauses and scoring the grammatical intricacy index. The findings show that Material process (71 of 100) is the most used, and the circumstance of spatial location (26 of 56) is more dominant than the others. Modality (5) is used sparingly in order to avoid subjectivity. Impersonality is implied through limited use of reference, whether pronouns (7) or demonstratives (7); conjunctions (60) are applied to develop ideas; and the total number of clauses (109) is much higher than the total number of sentences (40), which yields a high grammatical intricacy index. The Metafunctions found indicate that the chemistry text fulfils the characteristics of a scientific or academic text, truly reflecting its nature as natural-science writing.

  19. Wide baseline stereo matching based on double topological relationship consistency

    Science.gov (United States)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo matching. A novel scheme is presented called double topological relationship consistency (DCTR). The combined double topological configuration includes the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only establishes a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, overcoming many problems of traditional methods that depend on strong invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown in which the two cameras are located in very different orientations. Epipolar geometry can also be recovered using RANSAC, by far the most widely adopted method. With this method, we can obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.
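
The RANSAC idea mentioned above (hypothesize a model from a minimal sample, score it by inlier count, keep the best) can be illustrated with a toy robust line fit; fundamental-matrix estimation for epipolar geometry follows the same loop with a different model and minimal sample size:

```python
import random

# Minimal RANSAC sketch: robust line fit in the presence of outliers.
def ransac_line(points, iters=200, tol=0.1, seed=1):
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue                                  # degenerate pair
        a = (y2 - y1) / (x2 - x1)                     # hypothesized slope
        b = y1 - a * x1                               # hypothesized intercept
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > len(best_inliers):          # keep best-scoring model
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# Ten points on y = 2x + 1, plus two gross outliers (mismatches).
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 9.0), (7, -2.0)]
model, inliers = ransac_line(pts)
```

The two outliers never attract enough inliers to win, so the true line survives; replacing the line hypothesis with a fundamental matrix estimated from point correspondences gives the wide-baseline version.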

  20. Text Genres in Information Organization

    Science.gov (United States)

    Nahotko, Marek

    2016-01-01

    Introduction: Text genres used by so-called information organizers in the processes of information organization in information systems were explored in this research. Method: The research employed text genre socio-functional analysis. Five genre groups in information organization were distinguished. Every genre group used in information…

  1. Strategies for Translating Vocative Texts

    Directory of Open Access Journals (Sweden)

    Olga COJOCARU

    2014-12-01

The paper deals with the linguistic and cultural elements of vocative texts and the techniques used in translating them, giving some examples of texts that are typically vocative (i.e., advertisements and instructions for use). Semantic and communicative strategies are popular in translation studies, and each has its own advantages and disadvantages in translating vocative texts. The advantage of semantic translation is that it takes more account of the aesthetic value of the SL text, while communicative translation attempts to render the exact contextual meaning of the original text in such a way that both content and language are readily acceptable and comprehensible to the readership. Focus is laid on the strategies used in translating vocative texts, strategies that highlight and introduce a cultural context to the target audience in order to achieve their overall purpose, that is, to sell or to persuade the reader to behave in a certain way. To that end, a number of advertisements from the cosmetics industry and from the field of electronic gadgets were selected for analysis. The aim is to gather insights into vocative text translation and to create new perspectives on this field of research, now considered a process of innovation and diversion, especially in areas as important as economy and marketing.

  2. Systematic characterizations of text similarity in full text biomedical publications.

    Science.gov (United States)

    Sun, Zhaohui; Errami, Mounir; Long, Tara; Renard, Chris; Choradia, Nishant; Garner, Harold

    2010-09-15

    Computational methods have been used to find duplicate biomedical publications in MEDLINE. Full text articles are becoming increasingly available, yet the similarities among them have not been systematically studied. Here, we quantitatively investigated the full text similarity of biomedical publications in PubMed Central. 72,011 full text articles from PubMed Central (PMC) were parsed to generate three different datasets: full texts, sections, and paragraphs. Text similarity comparisons were performed on these datasets using the text similarity algorithm eTBLAST. We measured the frequency of similar text pairs and compared it among different datasets. We found that high abstract similarity can be used to predict high full text similarity with a specificity of 20.1% (95% CI [17.3%, 23.1%]) and sensitivity of 99.999%. Abstract similarity and full text similarity have a moderate correlation (Pearson correlation coefficient: -0.423) when the similarity ratio is above 0.4. Among pairs of articles in PMC, method sections are found to be the most repetitive (frequency of similar pairs, methods: 0.029, introduction: 0.0076, results: 0.0043). In contrast, among a set of manually verified duplicate articles, results are the most repetitive sections (frequency of similar pairs, results: 0.94, methods: 0.89, introduction: 0.82). Repetition of introduction and methods sections is more likely to be committed by the same authors (odds of a highly similar pair having at least one shared author, introduction: 2.31, methods: 1.83, results: 1.03). There is also significantly more similarity in pairs of review articles than in pairs containing one review and one nonreview paper (frequency of similar pairs: 0.0167 and 0.0023, respectively). While quantifying abstract similarity is an effective approach for finding duplicate citations, a comprehensive full text analysis is necessary to uncover all potential duplicate citations in the scientific literature and is helpful when
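
A common way to quantify the kind of text similarity discussed above (though not the eTBLAST algorithm itself) is the Jaccard overlap of word n-grams:

```python
# Jaccard similarity over word trigrams: a simple similarity ratio in [0, 1]
# of the sort used to flag potentially duplicated passages.

def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

s1 = "we measured the frequency of similar text pairs in the corpus"
s2 = "we measured the frequency of similar text pairs across datasets"
sim = jaccard(s1, s2)   # shares 6 of 11 distinct trigrams
```

Identical texts score 1.0; the two near-duplicate sentences above share 6 of their 11 distinct trigrams.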

  3. Layout-aware text extraction from full-text PDF of scientific articles.

    Science.gov (United States)

    Ramakrishnan, Cartic; Patnia, Abhishek; Hovy, Eduard; Burns, Gully Apc

    2012-05-28

The Portable Document Format (PDF) is the most commonly used file format for online scientific publications. The absence of effective means to extract text from these PDF files in a layout-aware manner presents a significant challenge for developers of biomedical text mining or biocuration informatics systems that use published literature as an information source. In this paper we introduce the 'Layout-Aware PDF Text Extraction' (LA-PDFText) system to facilitate accurate extraction of text from PDF files of research articles for use in text mining applications. Our paper describes the construction and performance of an open source system that extracts text blocks from PDF-formatted full-text research articles and classifies them into logical units based on rules that characterize specific sections. The LA-PDFText system focuses only on the textual content of the research articles and is meant as a baseline for further experiments into more advanced extraction methods that handle multi-modal content, such as images and graphs. The system works in a three-stage process: (1) detecting contiguous text blocks using spatial layout processing to locate and identify blocks of contiguous text, (2) classifying text blocks into rhetorical categories using a rule-based method, and (3) stitching classified text blocks together in the correct order, resulting in the extraction of text from section-wise grouped blocks. We show that our system can identify text blocks and classify them into rhetorical categories with precision = 0.96, recall = 0.89 and F1 = 0.91. We also present an evaluation of the accuracy of the block detection algorithm used in step 2. Additionally, we have compared the accuracy of the text extracted by LA-PDFText to the text from the Open Access subset of PubMed Central. We then compared this accuracy with that of the text extracted by the PDF2Text system, commonly used to extract text from PDF. Finally, we discuss preliminary error analysis for
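
Stage (2), rule-based rhetorical classification, can be caricatured with a few keyword rules; the patterns below are invented for illustration and are far cruder than LA-PDFText's actual section rules:

```python
import re

# Hypothetical first-match rule table: each rule maps a cue pattern to a
# rhetorical category; blocks matching no rule fall through to "other".
RULES = [
    ("methods", re.compile(r"\b(we (performed|used)|protocol|assay)\b", re.I)),
    ("results", re.compile(r"\b(figure \d|table \d|we (found|observed))\b", re.I)),
    ("introduction", re.compile(r"\b(in recent years|little is known)\b", re.I)),
]

def classify_block(text):
    for label, pattern in RULES:
        if pattern.search(text):
            return label
    return "other"
```

In a real pipeline such rules are combined with positional and layout features (font size, block order) rather than lexical cues alone.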

  4. Linguistic Dating of Biblical Texts

    DEFF Research Database (Denmark)

    Ehrensvärd, Martin Gustaf

    2003-01-01

For two centuries, scholars have pointed to consistent differences in the Hebrew of certain biblical texts and interpreted these differences as reflecting the date of composition of the texts. Until the 1980s, this was quite uncontroversial, as the linguistic findings largely confirmed the chronology of the texts established by other means: the Hebrew of Genesis-2 Kings was judged to be early and that of Esther, Daniel, Ezra, Nehemiah, and Chronicles to be late. In the current debate, where revisionists have questioned the traditional dating, linguistic arguments in the dating of texts have come more into focus. The study critically examines some linguistic arguments adduced to support the traditional position, and in reviewing the arguments it points to weaknesses in the linguistic dating of EBH texts to pre-exilic times. When viewing the linguistic evidence in isolation it will be clear...

  5. The optimized baseline project: Reinventing environmental restoration at Hanford

    International Nuclear Information System (INIS)

    Goodenough, J.D.; Janaskie, M.T.; Kleinen, P.J.

    1994-01-01

The U.S. Department of Energy Richland Operations Office (DOE-RL) is using a strategic planning effort (termed the Optimized Baseline Project) to develop a new approach to the Hanford Environmental Restoration program. This effort seeks to achieve a quantum-leap improvement in performance through results-oriented prioritization of activities. It was conducted in parallel with the renegotiation of the Tri-Party Agreement and provided DOE with an opportunity to propose innovative initiatives to promote cost effectiveness, accelerate progress in the Hanford Environmental Restoration Program, and involve stakeholders in the decision-making process. The Optimized Baseline Project is an innovative approach to program planning and decision-making in several respects. First, the process is a top-down, value-driven effort that responds to values held by DOE, the regulatory community, and the public. Second, planning is conducted in a way that reinforces the technical management process at Richland, involves the regulatory community in substantive decisions, and includes the public. Third, the Optimized Baseline Project is being conducted as part of a sitewide Hanford initiative to reinvent government. The planning process used for the Optimized Baseline Project has many potential applications at other sites and in other programs where there is a need to build consensus among diverse, independent groups of stakeholders and decision-makers. The project has successfully developed and demonstrated an innovative approach to program planning that accelerates the pace of cleanup, involves the regulators as partners with DOE in priority setting, and builds public understanding and support for the program through meaningful opportunities for involvement.

  6. 75 FR 67768 - Agency Information Collection Activities; Submission for OMB Review; Comment Request; Baseline...

    Science.gov (United States)

    2010-11-03

    ... elements of I2P2 among establishments. The OSHA also proposes to conduct case study interviews with... more than 10 employees. Finally, the OSHA proposes to conduct case study interviews with government... Administration (OSHA) sponsored information collection request (ICR), ``Baseline Safety and Health Practices...

  7. Stemming Malay Text and Its Application in Automatic Text Categorization

    Science.gov (United States)

    Yasukawa, Michiko; Lim, Hui Tian; Yokoo, Hidetoshi

In the Malay language, there are no conjugations or declensions, and affixes have important grammatical functions. In Malay, the same word may function as a noun, an adjective, an adverb or a verb, depending on its position in the sentence. Although simple root words are used extensively in informal conversation, it is essential to use precise words in formal speech or written texts. In Malay, derivative words are used to make sentences clear. Derivation is achieved mainly by the use of affixes. There are approximately a hundred possible derivative forms of a root word in the written language of educated Malay, so the composition of Malay words may be complicated. Although several types of stemming algorithm are available for text processing in English and some other languages, they cannot overcome the difficulties of Malay word stemming. Stemming is the process of reducing various words to their root forms in order to improve the effectiveness of text processing in information systems. It is essential to avoid both over-stemming and under-stemming errors. We have developed a new Malay stemmer (stemming algorithm) for removing inflectional and derivational affixes. Our stemmer uses a set of affix rules and two types of dictionaries: a root-word dictionary and a derivative-word dictionary. The rule set is aimed at reducing the occurrence of under-stemming errors, while the dictionaries are intended to reduce the occurrence of over-stemming errors. We performed an experiment to evaluate the application of our stemmer in text mining software. For the experiment, the text data used were actual web pages collected from the World Wide Web, chosen to demonstrate the effectiveness of our Malay stemming algorithm. The experimental results showed that our stemmer can effectively increase the precision of the extracted Boolean expressions for text categorization.
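
The rule-plus-dictionary design described above can be sketched as follows; the affix lists and root-word dictionary here are tiny illustrative stand-ins, not the authors' actual resources:

```python
# Toy rule-plus-dictionary stemmer: strip candidate affixes only when the
# remainder appears in the root-word dictionary (guarding against
# over-stemming); otherwise return the word unchanged (accepting possible
# under-stemming). Affix lists and roots below are illustrative only.
PREFIXES = ["mem", "me", "ber", "di", "pe"]   # longest-first ordering matters
SUFFIXES = ["kan", "an", "i"]
ROOT_WORDS = {"ajar", "baca", "main"}

def stem(word):
    if word in ROOT_WORDS:
        return word
    for p in PREFIXES:                         # prefix (+ optional suffix)
        if not word.startswith(p):
            continue
        for s in SUFFIXES + [""]:
            if word.endswith(s):
                core = word[len(p):len(word) - len(s)] if s else word[len(p):]
                if core in ROOT_WORDS:         # dictionary check
                    return core
    for s in SUFFIXES:                         # suffix-only derivations
        if word.endswith(s) and word[:-len(s)] in ROOT_WORDS:
            return word[:-len(s)]
    return word                                # fallback: leave unchanged
```

The dictionary lookup is what keeps, for example, an unknown loanword from being mangled by an accidental affix match.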

  8. n-Gram-Based Text Compression

    Directory of Open Access Journals (Sweden)

    Vu H. Nguyen

    2016-01-01

We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigrams to five-grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes based on its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigrams to five-grams, yielding dictionaries with a total size of 12 GB. To evaluate our method, we collected a testing set of 10 text files of different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods.

  9. Anomaly Detection with Text Mining

    Data.gov (United States)

    National Aeronautics and Space Administration — Many existing complex space systems have a significant amount of historical maintenance and problem data bases that are stored in unstructured text forms. The...

  10. Social Studies: Texts and Supplements.

    Science.gov (United States)

    Curriculum Review, 1979

    1979-01-01

    This review of selected social studies texts, series, and supplements, mainly for the secondary level, includes a special section examining eight titles on warfare and terrorism for grades 4-12. (SJL)

  11. BreakingNews: Article Annotation by Image and Text Processing.

    Science.gov (United States)

    Ramisa, Arnau; Yan, Fei; Moreno-Noguer, Francesc; Mikolajczyk, Krystian

    2018-05-01

Building upon recent Deep Neural Network architectures, current approaches lying at the intersection of Computer Vision and Natural Language Processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of news articles, in which the textual content often expresses connotative and ambiguous relations that are only suggested, not directly inferred, from the images. We introduce an adaptive CNN architecture that shares most of its structure across multiple tasks, including source detection, article illustration and geolocation of articles. Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset of approximately 100K news articles including images, text and captions, enriched with heterogeneous meta-data (such as GPS coordinates and user comments). We show this dataset to be appropriate for exploring all the aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures and different representations of the textual and visual features. We report very promising results and bring to light several limitations of the current state of the art in this kind of domain, which we hope will help spur progress in the field.
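
The Great Circle Distance underlying the proposed geolocation loss is the standard spherical distance between two latitude/longitude points (haversine form); the paper's actual loss formulation may differ from this plain distance:

```python
import math

# Great-circle distance between two (lat, lon) points in degrees, using the
# haversine formula on a spherical Earth of mean radius ~6371 km.
def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

d = great_circle_km(48.8566, 2.3522, 51.5074, -0.1278)  # Paris -> London
```

Used as a loss, this penalizes predicted coordinates by their true surface distance rather than by raw coordinate error, which matters near the poles and the date line.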

  12. Predicting Baseline for Analysis of Electricity Pricing

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T. [Ulsan National Inst. of Science and Technology (Korea, Republic of); Lee, D. [Ulsan National Inst. of Science and Technology (Korea, Republic of); Choi, J. [Ulsan National Inst. of Science and Technology (Korea, Republic of); Spurlock, A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Todd, A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, K. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-03

To understand the impact of a new pricing structure on residential electricity demand, we need a baseline model that captures every factor other than the new price. The standard baseline is a randomized control group; however, a good control group is hard to design. This motivates us to develop data-driven approaches. We explored many techniques and designed a strategy, named LTAP, that can predict hourly usage years ahead. The key challenge is that the daily cycle of electricity demand peaks a few hours after the temperature reaches its peak. Existing methods rely on lagged variables of recent past usage to enforce this daily cycle, and therefore have trouble making predictions years ahead. LTAP avoids this trouble by assuming that the daily usage profile is determined by temperature and other factors. In a comparison against a well-designed control group, LTAP is found to produce accurate predictions.
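
The core assumption, that usage is driven by temperature rather than by lagged usage, can be sketched with an ordinary least-squares fit; LTAP itself is more elaborate, and the numbers below are purely illustrative:

```python
# Fit hourly usage as a linear function of temperature (y = a*t + b) on
# historical data, then predict a future hour from forecast temperature
# alone. With no lagged-usage features, the model can look years ahead.

def fit_linear(temps, usage):
    n = len(temps)
    mt = sum(temps) / n
    mu = sum(usage) / n
    a = sum((t - mt) * (u - mu) for t, u in zip(temps, usage)) / \
        sum((t - mt) ** 2 for t in temps)
    return a, mu - a * mt          # slope, intercept

temps = [20, 25, 30, 35]           # degrees C (hypothetical history)
usage = [1.0, 1.5, 2.0, 2.5]       # kWh, rising with cooling demand
a, b = fit_linear(temps, usage)
pred = a * 28 + b                  # baseline forecast for a 28 C hour
```

A production baseline would add hour-of-day, day-of-week and nonlinear temperature terms, but the structure (predict from exogenous drivers, not past usage) is the same.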

  13. CASA Uno GPS orbit and baseline experiments

    Science.gov (United States)

    Schutz, B. E.; Ho, C. S.; Abusali, P. A. M.; Tapley, B. D.

    1990-01-01

    CASA Uno data from sites distributed in longitude from Australia to Europe have been used to determine orbits of the GPS satellites. The characteristics of the orbits determined from double difference phase have been evaluated through comparisons of two-week solutions with one-week solutions and by comparisons of predicted and estimated orbits. Evidence of unmodeled effects is demonstrated, particularly associated with the orbit planes that experience solar eclipse. The orbit accuracy has been assessed through the repeatability of unconstrained estimated baseline vectors ranging from 245 km to 5400 km. Both the baseline repeatability and the comparison with independent space geodetic methods give results at the level of 1-2 parts in 100,000,000. In addition, the Mojave/Owens Valley (245 km) and Kokee Park/Ft. Davis (5409 km) estimates agree with VLBI and SLR to better than 1 part in 100,000,000.
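
As a back-of-the-envelope check, an accuracy of 1-2 parts in 10^8 translates into millimetre-to-centimetre scatter over the quoted baselines:

```python
# Scatter implied by a fractional accuracy of n parts in 10^8 over a
# baseline of given length (1 km = 1e5 cm).
def scatter_cm(baseline_km, parts_in_1e8):
    return baseline_km * 1e5 * parts_in_1e8 * 1e-8

short_b = scatter_cm(245, 1)     # Mojave/Owens Valley at 1 part in 1e8
long_b = scatter_cm(5409, 2)     # Kokee Park/Ft. Davis at 2 parts in 1e8
```

So the 245 km baseline repeats to a few millimetres and the 5409 km baseline to roughly a decimetre at the quoted accuracy.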

  14. Baseline composition of solar energetic particles

    International Nuclear Information System (INIS)

    Meyer, J.

    1985-01-01

We analyze all existing spacecraft observations of the highly variable heavy-element composition of solar energetic particles (SEP) during non-³He-rich events. All data show the imprint of an ever-present basic composition pattern (dubbed the ''mass-unbiased baseline'' SEP composition) that differs from the photospheric composition by a simple bias related to first ionization potential (FIP). In each particular observation, this mass-unbiased baseline composition is distorted by an additional bias, which is always a monotonic function of mass (or Z). This latter bias varies in amplitude and even sign from observation to observation. To first order, it seems related to differences between elements in the A/Z* ratio (Z* = mean effective charge).

  15. Text Mining in Organizational Research.

    Science.gov (United States)

    Kobayashi, Vladimer B; Mol, Stefan T; Berkers, Hannah A; Kismihók, Gábor; Den Hartog, Deanne N

    2018-07-01

Despite the ubiquity of textual data, so far few researchers have applied text mining to answer organizational research questions. Text mining, which essentially entails a quantitative approach to the analysis of (usually) voluminous textual data, helps accelerate knowledge discovery by radically increasing the amount of data that can be analyzed. This article aims to acquaint organizational researchers with the fundamental logic underpinning text mining, the analytical stages involved, and contemporary techniques that may be used to achieve different types of objectives. The specific analytical techniques reviewed are (a) dimensionality reduction, (b) distance and similarity computing, (c) clustering, (d) topic modeling, and (e) classification. We describe how text mining may extend contemporary organizational research by allowing the testing of existing or new research questions with data that are likely to be rich, contextualized, and ecologically valid. After exploring how evidence for the validity of text mining output may be generated, we conclude the article by illustrating the text mining process in a job analysis setting, using a dataset composed of job vacancies.
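
As a taste of stages (a) and (b), a minimal tf-idf weighting (shown here in pure Python; libraries such as scikit-learn provide production versions) turns documents into term vectors in which words common to the whole collection carry zero weight:

```python
import math

# tf-idf over a toy corpus of job-vacancy-like snippets: term frequency
# times log inverse document frequency; collection-wide terms get idf = 0.
docs = ["job requires python and sql",
        "job requires leadership and communication",
        "python and sql for data analysis"]
tokenized = [d.split() for d in docs]
vocab = sorted({w for d in tokenized for w in d})
df = {w: sum(w in d for d in tokenized) for w in vocab}   # document frequency
N = len(docs)

def tfidf(doc):
    return [doc.count(w) * math.log(N / df[w]) for w in vocab]

vecs = [tfidf(d) for d in tokenized]
```

Cosine similarity over these vectors, followed by clustering or classification, reproduces stages (b) through (e) of the pipeline reviewed in the article.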

  16. OCRWM baseline management procedure for document identifiers

    International Nuclear Information System (INIS)

    1993-03-01

    This procedure establishes a uniform numbering system (document identifier) for all Program and project technical, cost, and schedule baselines, and selected management and procurement documents developed for and controlled by the Office of Civilian Radioactive Waste Management (OCRWM) for the Civilian Radioactive Waste Management System (CRWMS). The document identifier defined in this procedure is structured to ensure that the relational integrity between configuration items (CIs) and their associated documentation and software is maintained, traceable, categorical, and retrievable for the life of the program

  17. Information Technology Sector Baseline Risk Assessment

    Science.gov (United States)

    2009-08-01

...were an alternative root economically advantageous, an actor's ability to exploit market forces and create an alternative root would be significantly improved... conduct their operations. Therefore, a loss or disruption to Internet services would not be advantageous for the desired outcomes of these syndicates. [Table residue: "eCommerce service loss or disruption" and "traffic redirection", each marked [C] = undesired consequence.]

  18. On the baseline evolution of automobile fuel economy in Europe

    International Nuclear Information System (INIS)

    Zachariadis, Theodoros

    2006-01-01

'Business as usual' scenarios in long-term energy forecasts are crucial for scenario-based policy analyses. This article focuses on fuel economy of passenger cars and light trucks, a long-disputed issue with serious implications for worldwide energy use and CO₂ emissions. The current status in Europe is explained and future developments are analysed with the aid of historical data of the last three decades from the United States and Europe. As a result of this analysis, fuel economy values are proposed for use as assumptions in baseline energy/transport scenarios in the 15 'old' European Union Member States. Proposed values are given for new gasoline and diesel cars and for the years 2010, 2020 and 2030. The increasing discrepancy between vehicle fuel consumption measured under test conditions and that in the real world is also considered. One main conclusion is that the European Commission's voluntary agreement with the automobile industry should not be assumed to fully achieve its target under baseline conditions, nor should it be regarded as a major stimulus for autonomous vehicle efficiency improvements after 2010. A second conclusion is that three very recent studies enjoying authority across the EU tend to be overly optimistic as regards the technical progress for conventional and alternative vehicle propulsion technologies under 'business as usual' conditions

  19. Detecting CP violation in a single neutrino oscillation channel at very long baselines

    International Nuclear Information System (INIS)

    Latimer, D. C.; Escamilla, J.; Ernst, D. J.

    2007-01-01

    We propose a way of detecting CP violation in a single neutrino oscillation channel at very long baselines (on the order of several thousands of kilometers), given precise knowledge of the smallest mass-squared difference. It is shown that CP violation can be characterized by a shift in L/E of the peak oscillation in the νe-νμ appearance channel, both in vacuum and in matter. In fact, matter effects enhance the shift at a fixed energy. We consider the case in which sub-GeV neutrinos are measured with varying baseline and also the case of a fixed baseline. For the varied baseline, accurate knowledge of the absolute neutrino flux would not be necessary; however, neutrinos must be distinguishable from antineutrinos. For the fixed baseline, it is shown that CP violation can be distinguished if the mixing angle θ13 were known.
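
    The L/E shift of the peak can be made concrete in one standard leading-order vacuum expansion of the appearance probability (a sketch; sign and ordering conventions vary between authors):

```latex
P(\nu_e \to \nu_\mu) \;\approx\;
    \sin^2 2\theta_{13}\,\sin^2\theta_{23}\,\sin^2\Delta_{31}
    \;+\; 8\,J_r\,\sin\Delta_{21}\,\sin\Delta_{31}\,\cos(\Delta_{31} + \delta_{CP}),
\qquad \Delta_{ij} \equiv \frac{\Delta m^2_{ij}\,L}{4E},
```

    where $J_r$ is the reduced Jarlskog factor built from the mixing angles. The interference term $\cos(\Delta_{31} + \delta_{CP})$ displaces the oscillation maximum in $L/E$ as $\delta_{CP}$ varies, which is the observable shift the abstract describes; matter effects modify the effective $\Delta_{ij}$ but preserve the $\delta_{CP}$ dependence.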

  20. A fully automated algorithm of baseline correction based on wavelet feature points and segment interpolation

    Science.gov (United States)

    Qian, Fang; Wu, Yihui; Hao, Peng

    2017-11-01

    Baseline correction is a very important part of pre-processing. A baseline in the spectrum signal can induce uneven amplitude shifts across different wavenumbers and lead to poor results, so these shifts should be compensated before further analysis. Many algorithms are used to remove baselines; however, fully automated baseline correction is more convenient in practical application. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. This algorithm finds feature points through continuous wavelet transformation and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms, using a simulated spectrum signal, a visible spectrum signal and a Raman spectrum signal. The results show that AWFPSI gives better accuracy and has the advantage of easy use.
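
    A minimal sketch of the two-stage idea (feature points, then segment interpolation) is below; it substitutes a moving-average smoother and per-segment minima for the paper's continuous wavelet transform, so the function and parameters are illustrative, not the authors' AWFPSI implementation:

```python
import numpy as np

def correct_baseline(y, n_anchors=12):
    """Estimate and remove a slowly varying baseline: pick anchor (feature)
    points as the minima of a smoothed copy of the signal within equal
    segments, interpolate a piecewise-linear baseline through them, and
    subtract it from the raw signal."""
    x = np.arange(len(y))
    # crude moving-average smoothing stands in for the wavelet transform
    k = max(len(y) // 50, 3)
    smooth = np.convolve(y, np.ones(k) / k, mode="same")
    # one anchor per segment: the index of the segment's smoothed minimum
    seg = np.array_split(x, n_anchors)
    ax = np.array([s[np.argmin(smooth[s])] for s in seg])
    ay = smooth[ax]
    baseline = np.interp(x, ax, ay)
    return y - baseline, baseline
```

    On a synthetic spectrum (linear drift plus one peak) this recovers the peak while flattening the drift; a real implementation would place anchors at wavelet feature points instead of segment minima.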

  1. GPU-Accelerated Text Mining

    International Nuclear Information System (INIS)

    Cui, X.; Mueller, F.; Zhang, Y.; Potok, Thomas E.

    2009-01-01

    Accelerating hardware devices represent a novel promise for improving the performance of many problem domains, but it is not yet clear which accelerators suit which domains. While there is no room in general-purpose processor design to significantly increase the processor frequency, developers are instead resorting to multi-core chips duplicating conventional computing capabilities on a single die. Yet, accelerators offer more radical designs with a much higher level of parallelism and novel programming environments. The present work assesses the viability of text mining on CUDA. Text mining is one of the key concepts that has become prominent as an effective means to index the Internet, but its applications range beyond this scope and extend to providing document similarity metrics, the subject of this work. We have developed and optimized text search algorithms for GPUs to exploit their potential for massive data processing. We discuss the algorithmic challenges of parallelization for text search problems on GPUs and demonstrate the potential of these devices in experiments by reporting significant speedups. Our study may be one of the first to assess more complex text search problems for suitability for GPU devices, and it may also be one of the first to exploit and report on the atomic instruction usage that has recently become available in NVIDIA devices.
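
    The document-similarity metric at the heart of such a system can be sketched on the CPU as TF-IDF cosine similarity (an illustrative baseline, not the paper's GPU code; the GPU version would map these dot products onto CUDA threads):

```python
import numpy as np
from collections import Counter

def tfidf_cosine(docs):
    """Return the pairwise cosine-similarity matrix of TF-IDF vectors for a
    list of whitespace-tokenized documents."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for w, c in Counter(d.split()).items():
            tf[r, idx[w]] = c
    df = (tf > 0).sum(axis=0)              # document frequency per term
    idf = np.log(len(docs) / df)           # inverse document frequency
    m = tf * idf
    norms = np.linalg.norm(m, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                # guard against all-zero rows
    m = m / norms
    return m @ m.T                         # cosine similarities
```

    The final matrix product is exactly the kind of massively parallel kernel that benefits from GPU offloading.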

  2. Comprehending text in literature class

    Directory of Open Access Journals (Sweden)

    Purić Daliborka S.

    2016-01-01

    Full Text Available The paper discusses the problem of understanding a text and the contribution of the methodological apparatus in the reader book to comprehension of a text being read in junior classes of elementary school. By using the technique of content analysis on the methodological apparatuses in eight reader books for the fourth grade of elementary school, approved for use in the 2014/2015 academic year, and surveying 350 teachers in 33 elementary schools and 11 administrative districts in the Republic of Serbia, we examined: (a) to what extent the Serbian language textbook contents enable junior students to understand a literary text; (b) to what extent teachers accept the suggestions offered in the textbook for preparing literature teaching. The results show that a large number of suggestions relate to reading comprehension, but some categories of understanding are unevenly distributed in the methodological apparatus. On the other hand, the majority of teachers use the methodological apparatus given in a textbook for preparing classes, not only the textbook he or she selected for teaching but also other textbooks for the same grade.

  3. A Guide Text or Many Texts? "That is the Question”

    Directory of Open Access Journals (Sweden)

    Delgado de Valencia Sonia

    2001-08-01

    Full Text Available The use of supplementary materials in the classroom has always been an essential part of the teaching and learning process. To restrict our teaching to the scope of one single textbook means to fall behind the advances of knowledge, in any area and context. Young learners appreciate any new and varied support that expands their knowledge of the world: diaries, letters, panels, free texts, magazines, short stories, poems or literary excerpts, and articles taken from the Internet are materials that will allow learners to share more and work more collaboratively. In this article we deal with some of these materials, along with criteria for selecting, adapting, and creating materials that may interest the learner and promote reading and writing processes. Since no text can entirely satisfy the needs of students and teachers, the creativity of both parties will be necessary to improve the quality of teaching through the adequate use and adaptation of supplementary materials.

  4. Individual Profiling Using Text Analysis

    Science.gov (United States)

    2016-04-15

    AFRL-AFOSR-UK-TR-2016-0011, Individual Profiling using Text Analysis, 140333, Mark Stevenson, University of Sheffield, Department of Psychology. Final report covering 15 Sep 2014 to 14 Sep 2015. The data consisted of collections of tweets for a number of Twitter users whose gender, age and personality scores are known. The task was to construct some system

  5. Identifying issue frames in text.

    Directory of Open Access Journals (Sweden)

    Eyal Sagi

    Full Text Available Framing, the effect of context on cognitive processes, is a prominent topic of research in psychology and public opinion research. Research on framing has traditionally relied on controlled experiments and manually annotated document collections. In this paper we present a method that allows for quantifying the relative strengths of competing linguistic frames based on corpus analysis. This method requires little human intervention and can therefore be efficiently applied to large bodies of text. We demonstrate its effectiveness by tracking changes in the framing of terror over time and comparing the framing of abortion by Democrats and Republicans in the U.S.

  6. Finding text in color images

    Science.gov (United States)

    Zhou, Jiangying; Lopresti, Daniel P.; Tasdizen, Tolga

    1998-04-01

    In this paper, we consider the problem of locating and extracting text from WWW images. A previous algorithm based on color clustering and connected components analysis works well as long as the color of each character is relatively uniform and the typography is fairly simple. It breaks down quickly, however, when these assumptions are violated. In this paper, we describe more robust techniques for dealing with this challenging problem. We present an improved color clustering algorithm that measures similarity based on both RGB and spatial proximity. Layout analysis is also incorporated to handle more complex typography. These changes significantly enhance the performance of our text detection procedure.
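
    The improved clustering can be sketched as k-means over joint color-and-position features; the spatial weighting and the deterministic initialization below are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def cluster_pixels(img, k=3, spatial_weight=0.5, iters=15):
    """K-means over 5-D features (R, G, B, x, y), so pixel similarity mixes
    color with spatial proximity. img is an (h, w, 3) uint8 array; returns
    an (h, w) array of cluster labels."""
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.column_stack([
        img.reshape(-1, 3).astype(float) / 255.0,   # RGB in [0, 1]
        spatial_weight * xx.reshape(-1) / w,        # normalized x
        spatial_weight * yy.reshape(-1) / h,        # normalized y
    ])
    # deterministic init: k evenly spaced pixels (fancy indexing copies)
    centers = feats[np.linspace(0, len(feats) - 1, k).astype(int)]
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(0)
    return labels.reshape(h, w)
```

    Raising spatial_weight makes clusters more spatially compact, which is what keeps the strokes of one character together even when anti-aliasing varies its color.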

  7. An Energy Efficiency Evaluation Method Based on Energy Baseline for Chemical Industry

    OpenAIRE

    Yao, Dong-mei; Zhang, Xin; Wang, Ke-feng; Zou, Tao; Wang, Dong; Qian, Xin-hua

    2016-01-01

    According to the requirements and structure of the ISO 50001 energy management system, this study proposes an energy efficiency evaluation method based on an energy baseline for the chemical industry. Using this method, the effect of energy plan implementation in chemical production processes can be evaluated quantitatively, and evidence for system fault diagnosis can be provided. This method establishes energy baseline models which can meet the demand of the different kinds of production proce...
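
    A common way to realize such an energy baseline (consistent with ISO 50001 practice, though the paper's own models are not reproduced here) is a regression of energy use on a production driver, against which later performance is compared:

```python
import numpy as np

def fit_energy_baseline(production, energy):
    """Least-squares energy baseline E = a*P + b over a reference period.
    The linear form is an illustrative assumption; real baselines may use
    several drivers or nonlinear terms."""
    A = np.column_stack([production, np.ones_like(production)])
    (a, b), *_ = np.linalg.lstsq(A, energy, rcond=None)
    return a, b

def savings_vs_baseline(production, energy, a, b):
    """Positive values mean the plant used less energy than the baseline
    predicts for that production level."""
    expected = a * np.asarray(production) + b
    return expected - np.asarray(energy)
```

    Quantitative evaluation then reduces to tracking the sign and size of the residuals; persistent negative residuals can also flag equipment faults.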

  8. The qualitative research proposal

    Directory of Open Access Journals (Sweden)

    H Klopper

    2008-09-01

    Full Text Available Qualitative research in the health sciences has had to overcome many prejudices and a number of misunderstandings, but today qualitative research is as acceptable as quantitative research designs and is widely funded and published. Writing the proposal of a qualitative study, however, can be a challenging feat, due to the emergent nature of the qualitative research design and the description of the methodology as a process. Even today, many sub-standard proposals at post-graduate evaluation committees and application proposals to be considered for funding are still seen. This problem has led the researcher to develop a framework to guide the qualitative researcher in writing the proposal of a qualitative study based on the following research questions: (i What is the process of writing a qualitative research proposal? and (ii What does the structure and layout of a qualitative proposal look like? The purpose of this article is to discuss the process of writing the qualitative research proposal, as well as describe the structure and layout of a qualitative research proposal. The process of writing a qualitative research proposal is discussed with regards to the most important questions that need to be answered in your research proposal with consideration of the guidelines of being practical, being persuasive, making broader links, aiming for crystal clarity and planning before you write. While the structure of the qualitative research proposal is discussed with regards to the key sections of the proposal, namely the cover page, abstract, introduction, review of the literature, research problem and research questions, research purpose and objectives, research paradigm, research design, research method, ethical considerations, dissemination plan, budget and appendices.

  9. n-Gram-Based Text Compression

    Science.gov (United States)

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods. PMID:27965708
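
    The encoding phase can be sketched as a greedy sliding window that prefers the longest matching n-gram; the dictionaries below are toy stand-ins for the 12 GB dictionaries described in the abstract, and the code emission is simplified to integer codes:

```python
def encode(text, dicts):
    """Greedy n-gram encoder. dicts maps n -> {ngram_tuple: code}; at each
    position the longest dictionary n-gram (5-grams down to bigrams) is
    encoded, falling back to the raw word when nothing matches."""
    words = text.split()
    out, i = [], 0
    while i < len(words):
        for n in range(5, 1, -1):          # prefer longer n-grams
            gram = tuple(words[i:i + n])
            if len(gram) == n and gram in dicts.get(n, {}):
                out.append(dicts[n][gram])
                i += n
                break
        else:
            out.append(words[i])           # unigram fallback (raw token)
            i += 1
    return out
```

    In the real scheme each emitted code occupies two to four bytes depending on which dictionary it came from, which is where the ~90% compression ratio on Vietnamese text arises.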

  10. Exploring non standard physics in long-baseline neutrino oscillation experiments

    International Nuclear Information System (INIS)

    Chatterjee, Sabya Sachi

    2015-01-01

    After the recent discovery of large θ13, the focus has shifted to the remaining fundamental issues, namely the neutrino mass ordering and CP violation in the leptonic sector. Proposed future long-baseline facilities like DUNE (1300 km baseline from FNAL to Homestake) and LBNO (2290 km baseline from CERN to Pyhasalmi) are well suited to address these issues at high confidence level. Beyond the standard framework, these experiments are also highly capable of searching for new physics beyond the Standard Model. In this work, we explore whether these high-precision future facilities are sensitive to new U(1) global symmetries and up to which confidence level. (author)

  11. Multilingual text induced spelling correction

    NARCIS (Netherlands)

    Reynaert, M.W.C.

    2004-01-01

    We present TISC, a multilingual, language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from raw text corpora, without supervision, and contains word unigrams
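
    A context-free core of such a corpus-derived checker can be sketched as follows; TISC itself is context-sensitive, so this simplification only illustrates the unsupervised-lexicon idea, and the helper names are mine:

```python
from collections import Counter

def build_lexicon(corpus_text, min_count=1):
    """Derive a lexicon of word unigrams from raw corpus text, without
    supervision; min_count filters rare (likely misspelled) forms."""
    counts = Counter(corpus_text.lower().split())
    return {w for w, n in counts.items() if n >= min_count}

def edit1(word):
    """All strings one edit (delete, replace, insert) from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + replaces + inserts)

def correct(word, lexicon):
    """Return word if known, else a known word one edit away (deterministic
    tie-break), else the word unchanged."""
    if word in lexicon:
        return word
    candidates = edit1(word) & lexicon
    return min(candidates) if candidates else word
```

    A context-sensitive system would additionally rank candidates by the surrounding words rather than by a fixed tie-break.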

  12. Solar Concepts: A Background Text.

    Science.gov (United States)

    Gorham, Jonathan W.

    This text is designed to provide teachers, students, and the general public with an overview of key solar energy concepts. Various energy terms are defined and explained. Basic thermodynamic laws are discussed. Alternative energy production is described in the context of the present energy situation. Described are the principal contemporary solar…

  13. FTP: Full-Text Publishing?

    Science.gov (United States)

    Jul, Erik

    1992-01-01

    Describes the use of file transfer protocol (FTP) on the INTERNET computer network and considers its use as an electronic publishing system. The differing electronic formats of text files are discussed; the preparation and access of documents are described; and problems are addressed, including a lack of consistency. (LRW)

  14. Multi-sensors multi-baseline mapping system for mobile robot using stereovision camera and laser-range device

    Directory of Open Access Journals (Sweden)

    Mohammed Faisal

    2016-06-01

    Full Text Available Countless applications today are using mobile robots, including autonomous navigation, security patrolling, housework, search-and-rescue operations, material handling, manufacturing, and automated transportation systems. Regardless of the application, a mobile robot must use a robust autonomous navigation system. Autonomous navigation remains one of the primary challenges in the mobile-robot industry; many control algorithms and techniques have been recently developed that aim to overcome this challenge. Among autonomous navigation methods, vision-based systems have been growing in recent years due to rapid gains in computational power and the reliability of visual sensors. The primary focus of research into vision-based navigation is to allow a mobile robot to navigate in an unstructured environment without collision. In recent years, several researchers have looked at methods for setting up autonomous mobile robots for navigational tasks. Among these methods, stereovision-based navigation is a promising approach for reliable and efficient navigation. In this article, we create and develop a novel mapping system for a robust autonomous navigation system. The main contribution of this article is the fusion of the multi-baseline stereovision (narrow and wide baselines and laser-range reading data to enhance the accuracy of the point cloud, to reduce the ambiguity of correspondence matching, and to extend the field of view of the proposed mapping system to 180°. Another contribution is the pruning of the region of interest of the three-dimensional point clouds to reduce the computational burden involved in the stereo process. Therefore, we call the proposed system a multi-sensors multi-baseline mapping system. The experimental results illustrate the robustness and accuracy of the proposed system.
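
    The narrow/wide-baseline trade-off that motivates the fusion follows from the pinhole stereo relation Z = fB/d; the small helper below is illustrative, not part of the article's system:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d, with f in pixels, B in metres
    and d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

def depth_resolution(f_px, baseline_m, z_m, dd_px=1.0):
    """First-order depth uncertainty dZ = Z^2 * dd / (f * B): wider
    baselines give finer depth resolution at range, while narrower
    baselines ease correspondence matching."""
    return z_m ** 2 * dd_px / (f_px * baseline_m)
```

    Fusing narrow- and wide-baseline pairs (plus the laser range reads) lets the system keep the matching reliability of the short baseline and the range accuracy of the long one.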

  15. What oral text reading fluency can reveal about reading comprehension

    NARCIS (Netherlands)

    Veenendaal, N.J.; Groen, M.A.; Verhoeven, L.T.W.

    2015-01-01

    Text reading fluency – the ability to read quickly, accurately and with a natural intonation – has been proposed as a predictor of reading comprehension. In the current study, we examined the role of oral text reading fluency, defined as text reading rate and text reading prosody, as a contributor

  16. Baseline review of the U.S. LHC Accelerator project

    International Nuclear Information System (INIS)

    1998-02-01

    The Department of Energy (DOE) Review of the U.S. Large Hadron Collider (LHC) Accelerator project was conducted February 23--26, 1998, at the request of Dr. John R. O'Fallon, Director, Division of High Energy Physics, Office of Energy Research, U.S. DOE. This is the first review of the U.S. LHC Accelerator project. Overall, the Committee found that the U.S. LHC Accelerator project effort is off to a good start and that the proposed scope is very conservative for the funding available. The Committee recommends that the project be initially baselined at a total cost of $110 million, with a scheduled completion date of 2005. The U.S. LHC Accelerator project will supply high technology superconducting magnets for the interaction regions (IRs) and the radio frequency (rf) straight section of the LHC intersecting storage rings. In addition, the project provides the cryogenic support interface boxes to service the magnets and radiation absorbers to protect the IR dipoles and the inner triplet quadrupoles. US scientists will provide support in analyzing some of the detailed aspects of accelerator physics in the two rings. The three laboratories participating in this project are Brookhaven National Laboratory, Fermi National Accelerator Laboratory (Fermilab), and Lawrence Berkeley National Laboratory. The Committee was very impressed by the technical capabilities of the US LHC Accelerator project team. Cost estimates for each subsystem of the US LHC Accelerator project were presented to the Review Committee, with a total cost including contingency of $110 million (then year dollars). The cost estimates were deemed to be conservative. A re-examination of the funding profile, costs, and schedules on a centralized project basis should lead to an increased list of deliverables. The Committee concluded that the proposed scope of US deliverables to CERN can be readily accomplished with the $110 million total cost baseline for the project.
The current deliverables should serve as

  17. Pipeline integrity: ILI baseline data for QRA

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Todd R. [Tuboscope Pipeline Services, Houston, TX (United States)]. E-mail: tporter@varco.com; Silva, Jose Augusto Pereira da [Pipeway Engenharia, Rio de Janeiro, RJ (Brazil)]. E-mail: guto@pipeway.com; Marr, James [MARR and Associates, Calgary, AB (Canada)]. E-mail: jmarr@marr-associates.com

    2003-07-01

    The initial phase of a pipeline integrity management program (IMP) is conducting a baseline assessment of the pipeline system and segments as part of Quantitative Risk Assessment (QRA). This gives the operator's integrity team the opportunity to identify critical areas and deficiencies in the protection, maintenance, and mitigation strategies. As a part of data gathering and integration of a wide variety of sources, in-line inspection (ILI) data is a key element. In order to move forward in the integrity program development and execution, the baseline geometry of the pipeline must be determined with accuracy and confidence. From this, all subsequent analysis and conclusions will be derived. Tuboscope Pipeline Services (TPS), in conjunction with Pipeway Engenharia of Brazil, operate ILI inertial navigation system (INS) and Caliper geometry tools, to address this integrity requirement. This INS and Caliper ILI tool data provides pipeline trajectory at centimeter level resolution and sub-metre 3D position accuracy along with internal geometry - ovality, dents, misalignment, and wrinkle/buckle characterization. Global strain can be derived from precise INS curvature measurements and departure from the initial pipeline state. Accurate pipeline elevation profile data is essential in the identification of sag/over bend sections for fluid dynamic and hydrostatic calculations. This data, along with pipeline construction, operations, direct assessment and maintenance data is integrated in LinaViewPRO™, a pipeline data management system for decision support functions, and subsequent QRA operations. This technology provides the baseline for an informed, accurate and confident integrity management program. This paper/presentation will detail these aspects of an effective IMP, and experience will be presented, showing the benefits for liquid and gas pipeline systems. (author)

  18. Consideration of the baseline environment in examples of voluntary SEAs from Scotland

    International Nuclear Information System (INIS)

    Wright, Fiona

    2007-01-01

    Evidence from analysing and evaluating examples of three voluntary SEAs prepared in Scotland in the mid-late 1990s showed that different spatial and temporal scales were used when providing a baseline environment description. The SEAs analysed were prepared for: a wind farm siting programme that looked at national and short-term impacts; a land use plan that looked at regional and short-term impacts; and a transport plan that examined local and medium-term impacts. It was found that the two SEAs prepared by local government only considered impacts on the baseline environment within their jurisdictional boundaries whilst the SEA prepared by the private business considered impacts on the national baseline. A mixture of baseline data about planning, economic, environmental and social issues was included in the SEAs; however, evidence suggested that each SEA only focussed on those baseline features that might be significantly affected by the proposal. Each SEA also made extensive use of existing baseline information available from a variety of sources including local and central government records and information from statutory bodies. All of the SEAs acknowledged that baseline data deficiencies existed and in certain cases steps were taken to obtain primary field data to help address these; however, it was also acknowledged that resource restrictions and decision-making deadlines limited the amount of primary baseline data that could be collected.

  19. SRP Baseline Hydrogeologic Investigation, Phase 3

    Energy Technology Data Exchange (ETDEWEB)

    Bledsoe, H.W.

    1988-08-01

    The SRP Baseline Hydrogeologic Investigation was implemented for the purpose of updating and improving the knowledge and understanding of the hydrogeologic systems underlying the SRP site. Phase III, which is discussed in this report, includes the drilling of 7 deep coreholes (sites P-24 through P-30) and the installation of 53 observation wells ranging in depth from approximately 50 ft to more than 970 ft below the ground surface. In addition to the collection of geologic cores for lithologic and stratigraphic study, samples were also collected for the determination of physical characteristics of the sediments and for the identification of microorganisms.

  20. Environmental Baseline File for National Transportation

    International Nuclear Information System (INIS)

    1999-01-01

    This Environmental Baseline File summarizes and consolidates information related to the national-level transportation of commercial spent nuclear fuel. Topics addressed include: shipments of commercial spent nuclear fuel based on mostly truck and mostly rail shipping scenarios; transportation routing for commercial spent nuclear fuel sites and DOE sites; radionuclide inventories for various shipping container capacities; transportation routing; populations along transportation routes; urbanized area population densities; the impacts of historical, reasonably foreseeable, and general transportation; state-level food transfer factors; Federal Guidance Report No. 11 and 12 radionuclide dose conversion factors; and national average atmospheric conditions.

  1. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
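
    The core of BDA can be sketched as block-averaging each baseline's visibilities with a factor inversely proportional to baseline length; the linear rule below is an illustrative assumption, whereas a practical system would choose the factor from an acceptable decorrelation loss:

```python
import numpy as np

def bda_average(vis, baseline_len, max_len):
    """Average a baseline's complex visibility time series. The longest
    baseline (fastest fringe rate) keeps every sample; a baseline n times
    shorter is averaged over n samples, shrinking the data volume."""
    n = max(1, int(max_len // max(baseline_len, 1e-9)))
    n = min(n, len(vis))
    m = (len(vis) // n) * n            # drop the ragged tail, if any
    return np.asarray(vis)[:m].reshape(-1, n).mean(axis=1)
```

    For an array dominated by short baselines, this is where the >80 per cent reduction in visibility volume comes from: a 25 m baseline in a 100 m array keeps only one sample in four.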

  2. Spectrometer Baseline Control Via Spatial Filtering

    Science.gov (United States)

    Burleigh, M. R.; Richey, C. R.; Rinehart, S. A.; Quijada, M. A.; Wollack, E. J.

    2016-01-01

    An absorptive half-moon aperture mask is experimentally explored as a broad-bandwidth means of eliminating spurious spectral features arising from reprocessed radiation in an infrared Fourier transform spectrometer. In the presence of the spatial filter, an order of magnitude improvement in the fidelity of the spectrometer baseline is observed. The method is readily accommodated within the context of commonly employed instrument configurations and leads to a factor of two reduction in optical throughput. A detailed discussion of the underlying mechanism and limitations of the method is provided.

  3. Neutrino physics with short baseline experiments

    International Nuclear Information System (INIS)

    Zimmerman, E.D.

    2006-01-01

    Neutrino physics with low- to medium-energy beams has progressed steadily over the last several years. Neutrino oscillation searches at short baseline are sensitive to mass-squared differences down to Δm² ∼ 0.1 eV². One positive signal, from the LSND collaboration, exists and is being tested by the MiniBooNE experiment. Neutrino cross-section measurements are being made by MiniBooNE and K2K, which will be important for reducing systematic errors in present and future oscillation measurements. In the near future, dedicated cross-section experiments will begin operating at Fermilab. (author)

  4. Rationing in the presence of baselines

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Moreno-Ternero, Juan D.; Østerdal, Lars Peter

    2013-01-01

    We analyze a general model of rationing in which agents have baselines, in addition to claims against the (insufficient) endowment of the good to be allocated. Many real-life problems fit this general model (e.g., bankruptcy with prioritized claims, resource allocation in the public health care sector, water distribution in drought periods). We introduce (and characterize) a natural class of allocation methods for this model. Any method within the class is associated with a rule in the standard rationing model, and we show that if the latter obeys some focal properties, the former obeys them...
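
    One natural baselines-first method of this kind can be sketched as a composition with the proportional rule (an illustrative member of the sort of class the paper studies, not its exact characterization):

```python
def proportional(claims, endowment):
    """Standard proportional rationing rule: each claim is honored at the
    same rate, capped at full satisfaction."""
    total = sum(claims)
    if total == 0:
        return [0.0] * len(claims)
    r = min(endowment / total, 1.0)
    return [r * c for c in claims]

def baseline_first(claims, baselines, endowment):
    """Truncate baselines at claims; if the endowment cannot cover even the
    truncated baselines, ration those proportionally. Otherwise grant them
    in full and share the remainder proportionally over residual claims."""
    tb = [min(b, c) for b, c in zip(baselines, claims)]
    if endowment <= sum(tb):
        return proportional(tb, endowment)
    rest = proportional([c - t for c, t in zip(claims, tb)],
                        endowment - sum(tb))
    return [t + r for t, r in zip(tb, rest)]
```

    Swapping `proportional` for another standard rule (constrained equal awards, constrained equal losses) yields other members of the same baselines-first family.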

  5. Baseline scenarios of global environmental change

    International Nuclear Information System (INIS)

    Alcamo, J.; Kreileman, G.J.J.; Bollen, J.C.; Born, G.J. van den; Krol, M.S.; Toet, A.M.C.; Vries, H.J.M. de; Gerlagh, R.

    1996-01-01

    This paper presents three baseline scenarios of no policy action computed by the IMAGE2 model. These scenarios cover a wide range of coupled global change indicators, including: energy demand and consumption; food demand, consumption, and production; changes in land cover including changes in extent of agricultural land and forest; emissions of greenhouse gases and ozone precursors; and climate change and its impacts on sea level rise, crop productivity and natural vegetation. Scenario information is available for the entire world with regional and grid scale detail, and covers the period from 1970 to 2100. (author)

  6. SRP baseline hydrogeologic investigation: Aquifer characterization

    Energy Technology Data Exchange (ETDEWEB)

    Strom, R.N.; Kaback, D.S.

    1992-03-31

    An investigation of the mineralogy and chemistry of the principal hydrogeologic units and the geochemistry of the water in the principal aquifers at Savannah River Site (SRS) was undertaken as part of the Baseline Hydrogeologic Investigation. This investigation was conducted to provide background data for future site studies and reports and to provide a site-wide interpretation of the geology and geochemistry of the Coastal Plain Hydrostratigraphic province. Ground water samples were analyzed for major cations and anions, minor and trace elements, gross alpha and beta, tritium, stable isotopes of hydrogen, oxygen, and carbon, and carbon-14. Sediments from the well borings were analyzed for mineralogy and major and minor elements.

  8. Dynamic Chemical Model for $\text{H}_2$/$\text{O}_2$ Combustion Developed Through a Community Workflow

    KAUST Repository

    Oreluk, James; Needham, Craig D.; Baskaran, Sathya; Sarathy, Mani; Burke, Michael P.; West, Richard H.; Frenklach, Michael; Westmoreland, Phillip R.

    2018-01-01

    Elementary-reaction models for $\text{H}_2$/$\text{O}_2$ combustion were evaluated and optimized through a collaborative workflow, establishing accuracy and characterizing uncertainties. Quantitative findings were the optimized model, the importance of $\text{H}_2 + \text{O}_2(1\Delta) = \text{H} + \text{HO}_2$ in high-pressure flames, and the inconsistency of certain low-temperature shock-tube data. The workflow described here is proposed to be even more important because the approach and publicly available cyberinfrastructure allows future community development of evolving improvements. The workflow steps applied here were to develop an initial reaction set using Burke et al. [2012], Burke et al. [2013], Sellevag et al. [2009], and Konnov [2015]; test it for thermodynamic and kinetics consistency and plausibility against other sets in the literature; assign estimated uncertainties where not stated in the sources; select key data targets (

  9. Linguistic dating of biblical texts

    DEFF Research Database (Denmark)

    Young, Ian; Rezetko, Robert; Ehrensvärd, Martin Gustaf

    Since the beginning of critical scholarship biblical texts have been dated using linguistic evidence. In recent years this has become a controversial topic, especially with the publication of Ian Young (ed.), Biblical Hebrew: Studies in Chronology and Typology (2003). However, until now there has...... been no introduction and comprehensive study of the field. Volume 1 introduces the field of linguistic dating of biblical texts, particularly to intermediate and advanced students of biblical Hebrew who have a reasonable background in the language, having completed at least an introductory course...... in this volume are: What is it that makes Archaic Biblical Hebrew archaic , Early Biblical Hebrew early , and Late Biblical Hebrew late ? Does linguistic typology, i.e. different linguistic characteristics, convert easily and neatly into linguistic chronology, i.e. different historical origins? A large amount...

  10. Text as an Autopoietic System

    DEFF Research Database (Denmark)

    Nicolaisen, Maria Skou

    2016-01-01

    The aim of the present research article is to discuss the possibilities and limitations in addressing text as an autopoietic system. The theory of autopoiesis originated in the field of biology in order to explain the dynamic processes entailed in sustaining living organisms at cellular level. Th....... By comparing the biological with the textual account of autopoietic agency, the end conclusion is that a newly derived concept of sociopoiesis might be better suited for discussing the architecture of textual systems....

  11. The TEXT upgrade vertical interferometer

    International Nuclear Information System (INIS)

    Hallock, G.A.; Gartman, M.L.; Li, W.; Chiang, K.; Shin, S.; Castles, R.L.; Chatterjee, R.; Rahman, A.S.

    1992-01-01

    A far-infrared interferometer has been installed on TEXT upgrade to obtain electron density profiles. The primary system views the plasma vertically through a set of large (60-cm radial × 7.62-cm toroidal) diagnostic ports. A 1-cm channel spacing (59 channels total) and fast electronic time response are used to provide high resolution for radial profiles and perturbation experiments. Initial operation of the vertical system was obtained late in 1991, with six operating channels

  12. Robust keyword retrieval method for OCRed text

    Science.gov (United States)

    Fujii, Yusaku; Takebe, Hiroaki; Tanaka, Hiroshi; Hotta, Yoshinobu

    2011-01-01

    Document management systems have become important because of the growing popularity of electronic filing of documents and scanning of books, magazines, manuals, etc., through a scanner or a digital camera, for storage or reading on a PC or an electronic book. Text information acquired by optical character recognition (OCR) is usually added to the electronic documents for document retrieval. Since texts generated by OCR generally include character recognition errors, robust retrieval methods have been introduced to overcome this problem. In this paper, we propose a retrieval method that is robust against both character segmentation and recognition errors. In the proposed method, allowing the insertion of noise characters and the dropping of characters during keyword matching provides robustness against character segmentation errors, and substituting each keyword character with one of its OCR recognition candidates (or any other character) provides robustness against character recognition errors. The recall rate of the proposed method was 15% higher than that of the conventional method. However, the precision rate was 64% lower.
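The matching described above can be sketched as a small edit-distance computation over the OCR output, where each position carries a set of recognition candidates: skipping an OCR character models inserted noise, skipping a keyword character models a dropped character, and a match is free whenever the keyword character appears among that position's candidates. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the candidate sets and edit budget below are hypothetical.

```python
def robust_find(keyword, ocr_candidates, max_edits=1):
    """Return True if `keyword` occurs in the OCR output within `max_edits`
    edits. `ocr_candidates` is a list of candidate-character sets, one per
    recognized position."""
    n, m = len(ocr_candidates), len(keyword)
    INF = float("inf")
    # dp[i][j]: min edits to match keyword[:j] against a suffix of ocr[:i]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = 0                       # a match may start at any position
    for j in range(1, m + 1):
        dp[0][j] = j                       # leading keyword chars dropped
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if keyword[j - 1] in ocr_candidates[i - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + sub,   # (mis)match
                           dp[i - 1][j] + 1,          # skip noise character
                           dp[i][j - 1] + 1)          # keyword char dropped
    return min(dp[i][m] for i in range(n + 1)) <= max_edits
```

Note how the candidate sets give robustness to recognition errors ('1' standing in for 'l', say), while the two skip transitions give robustness to segmentation errors.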

  13. Wide-Baseline Stereo-Based Obstacle Mapping for Unmanned Surface Vehicles

    Science.gov (United States)

    Mou, Xiaozheng; Wang, Han

    2018-01-01

    This paper proposes a wide-baseline stereo-based static obstacle mapping approach for unmanned surface vehicles (USVs). The proposed approach eliminates the complicated calibration work and the bulky rig of our previous binocular stereo system, and raises the ranging ability from 500 to 1000 m with an even larger baseline obtained from the motion of USVs. Integrating a monocular camera with GPS and compass information in this proposed system, the world locations of the detected static obstacles are reconstructed while the USV is traveling, and an obstacle map is then built. To achieve more accurate and robust performance, multiple pairs of frames are leveraged to synthesize the final reconstruction results in a weighting model. Experimental results based on our own dataset demonstrate the high efficiency of our system. To the best of our knowledge, we are the first to address the task of wide-baseline stereo-based obstacle mapping in a maritime environment. PMID:29617293
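The core ranging step can be illustrated with plane geometry: two camera positions (from GPS) and two bearings to the same obstacle (from camera plus compass) define two rays whose intersection gives the obstacle's world location. The sketch below assumes a flat 2D world and ideal measurements; it is not the paper's multi-frame weighting model.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing rays p_i + t_i * (cos(theta_i), sin(theta_i)).
    p1, p2: camera positions from GPS; theta1, theta2: bearings to the
    obstacle (radians from the x-axis). Returns the intersection point,
    or None if the rays are parallel."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve [d1 -d2] [t1, t2]^T = p2 - p1 by Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        return None                  # degenerate: baseline gives no depth
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

A longer baseline between the two USV positions makes `det` larger, which is why the estimate becomes less sensitive to bearing noise as the vehicle travels.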

  14. Analysis of baseline gene expression levels from ...

    Science.gov (United States)

    The use of gene expression profiling to predict chemical mode of action would be enhanced by better characterization of variance due to individual, environmental, and technical factors. Meta-analysis of microarray data from untreated or vehicle-treated animals within the control arm of toxicogenomics studies has yielded useful information on baseline fluctuations in gene expression. A dataset of control animal microarray expression data was assembled by a working group of the Health and Environmental Sciences Institute's Technical Committee on the Application of Genomics in Mechanism Based Risk Assessment in order to provide a public resource for assessments of variability in baseline gene expression. Data from over 500 Affymetrix microarrays from control rat liver and kidney were collected from 16 different institutions. Thirty-five biological and technical factors were obtained for each animal, describing a wide range of study characteristics, and a subset were evaluated in detail for their contribution to total variability using multivariate statistical and graphical techniques. The study factors that emerged as key sources of variability included gender, organ section, strain, and fasting state. These and other study factors were identified as key descriptors that should be included in the minimal information about a toxicogenomics study needed for interpretation of results by an independent source. Genes that are the most and least variable, gender-selectiv

  15. Structure of a traditional baseline data system

    Energy Technology Data Exchange (ETDEWEB)

    1976-12-01

    Research was conducted to determine whether appropriate data exist for the development of a comprehensive statistical baseline data system on the human environment in the Athabasca oil sands region of Alberta. The existing data sources pertinent to the target area were first reviewed and discussed. Criteria were selected to assist the evaluation of data, including type of data collected, source, degree of detail, geographic identification, accessibility, and time frame. These criteria allowed assessing whether the data would be amenable to geographically-coded, continuous monitoring systems. It was found that the Statistics Canada Census provided the most detail, the most complete coverage of the target area, the smallest statistical areas, the greatest consistency in data and data collection, and the most regular collection. The local agency collection efforts were generally oriented toward specific goals and the data intended primarily for intra-agency use. The smallest statistical units in these efforts may be too large to be of value to a common small-area system, and data collection agencies did not generally use coterminous boundaries. Recommendations were made to give primary consideration to Statistics Canada data in the initial development of the baseline data system. Further development of such a system depends on the adoption by local agencies of a common small-area system for data collection. 38 refs., 6 figs.

  16. Baseline response rates affect resistance to change.

    Science.gov (United States)

    Kuroda, Toshikazu; Cook, James E; Lattal, Kennon A

    2018-01-01

    The effect of response rates on resistance to change, measured as resistance to extinction, was examined in two experiments. In Experiment 1, responding in transition from a variable-ratio schedule and its yoked-interval counterpart to extinction was compared with pigeons. Following training on a multiple variable-ratio yoked-interval schedule of reinforcement, in which response rates were higher in the former component, reinforcement was removed from both components during a single extended extinction session. Resistance to extinction in the yoked-interval component was always either greater or equal to that in the variable-ratio component. In Experiment 2, resistance to extinction was compared for two groups of rats that exhibited either high or low response rates when maintained on identical variable-interval schedules. Resistance to extinction was greater for the lower-response-rate group. These results suggest that baseline response rate can contribute to resistance to change. Such effects, however, can only be revealed when baseline response rate and reinforcement rate are disentangled (Experiments 1 and 2) from the more usual circumstance where the two covary. Furthermore, they are more cleanly revealed when the programmed contingencies controlling high and low response rates are identical, as in Experiment 2. © 2017 Society for the Experimental Analysis of Behavior.

  17. Baseline study of flora fauna at proposed uranium mining site at Gogi, Gulbarga district (Karnataka)

    International Nuclear Information System (INIS)

    Nautiyal, Sunil; Bhaskar, K.; Imran Khan, Y.D.

    2013-03-01

    This report covers authentic data compiled from field investigation, rational explanation and scientific interpretation required to meet the objectives of the study. This research aimed to explore, survey and collect plant and animal specimens to document the species of aquatic and terrestrial ecosystems. The phytosociological assessment and analysis of diversity indices of different vegetation strata, i.e. trees, shrubs, herbs, climbers, tree saplings and seedlings, across the study region are part of the objectives of the study. The study and analysis of conservation status, i.e. identifying and documenting floral and faunal species, was taken up as a research component. The examination of the land use/land cover classes of the region for vegetation analysis was another objective of this research

  18. Performance Analysis for Airborne Interferometric SAR Affected by Flexible Baseline Oscillation

    Directory of Open Access Journals (Sweden)

    Liu Zhong-sheng

    2014-04-01

    Full Text Available The airborne interferometric SAR platform suffers from instability factors, such as air turbulence and mechanical vibrations during flight. Such factors cause the oscillation of the flexible baseline, which leads to significant degradation of the performance of the interferometric SAR system. This study is concerned with the baseline oscillation. First, the error of the slant range model under baseline oscillation conditions is formulated. Then, the SAR complex image signal and dual-channel correlation coefficient are modeled based on the first-order, second-order, and generic slant range error. Subsequently, the impact of the baseline oscillation on the imaging and interferometric performance of the SAR system is analyzed. Finally, simulations of the echo data are used to validate the theoretical analysis of the baseline oscillation in the airborne interferometric SAR.
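A minimal numerical sketch of why the baseline oscillation matters: assuming a sinusoidal oscillation of the parallel baseline with amplitude `AMPLITUDE`, the single-pass interferometric phase error is roughly (2π/λ) times the oscillation, so millimetre-level motion at a centimetre-scale wavelength already produces a sizeable phase error. All constants and the first-order model below are illustrative assumptions, not values or formulas from the paper.

```python
import math

WAVELENGTH = 0.031   # X-band wavelength in metres (illustrative assumption)
AMPLITUDE = 0.002    # 2 mm parallel-baseline oscillation (illustrative)
FREQ = 5.0           # oscillation frequency in Hz (illustrative)

def phase_error(t):
    """Single-pass interferometric phase error (rad) from a sinusoidal
    parallel-baseline oscillation: phi = (2*pi/lambda) * dB_parallel(t)."""
    db_parallel = AMPLITUDE * math.sin(2 * math.pi * FREQ * t)
    return 2 * math.pi / WAVELENGTH * db_parallel

# One second of the oscillation sampled at 1 kHz:
samples = [phase_error(i / 1000.0) for i in range(1000)]
```

Under these assumed numbers the peak phase error is about 0.4 rad, enough to visibly degrade an interferogram, which is the degradation the paper quantifies through the correlation coefficient.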

  19. High baseline activity in inferior temporal cortex improves neural and behavioral discriminability during visual categorization

    Directory of Open Access Journals (Sweden)

    Nazli eEmadi

    2014-11-01

    Full Text Available Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically, we found that a low-frequency (< 8 Hz) oscillation in the spike train, prior to and phase-locked with the stimulus onset, was correlated with increased gamma power and neuronal baseline activity. This enhancement of the baseline activity was then followed by an increase in neural selectivity and response reliability, and eventually a higher behavioral performance.

  20. A comparison of baseline methodologies for 'Reducing Emissions from Deforestation and Degradation'

    Directory of Open Access Journals (Sweden)

    Kok Kasper

    2009-07-01

    Full Text Available Abstract Background A mechanism for emission reductions from deforestation and degradation (REDD is very likely to be included in a future climate agreement. The choice of REDD baseline methodologies will crucially influence the environmental and economic effectiveness of the climate regime. We compare three different historical baseline methods and one innovative dynamic model baseline approach to appraise their applicability under a future REDD policy framework using a weighted multi-criteria analysis. Results The results show that each baseline method has its specific strengths and weaknesses. Although the dynamic model allows for the best environmental and for comparatively good economic performance, its high demand for data and technical capacity limit the current applicability in many developing countries. Conclusion The adoption of a multi-tier approach will allow countries to select the baseline method best suiting their specific capabilities and data availability while simultaneously ensuring scientific transparency, environmental effectiveness and broad political support.

  1. Forecasting Sensorimotor Adaptability from Baseline Inter-Trial Correlations

    Science.gov (United States)

    Beaton, K. H.; Bloomberg, J. J.

    2014-01-01

    One of the greatest challenges surrounding adaptation to the spaceflight environment is the large variability in symptoms, and corresponding functional impairments, from one crewmember to the next. This renders preflight training and countermeasure development difficult, as a "one-size-fits-all" approach is inappropriate. Therefore, it would be highly advantageous to know ahead of time which crewmembers might have more difficulty adjusting to the novel g-levels inherent to spaceflight. Such knowledge could guide individually customized countermeasures, which would enable more efficient use of crew time, both preflight and inflight, and provide better outcomes. The primary goal of this project is to look for a baseline performance metric that can forecast sensorimotor adaptability without exposure to an adaptive stimulus. We propose a novel hypothesis that considers baseline inter-trial correlations, the trial-to-trial fluctuations in motor performance, as a predictor of individual sensorimotor adaptive capabilities. To date, a strong relationship has been found between baseline inter-trial correlations and adaptability in two oculomotor systems. For this project, we will explore an analogous predictive mechanism in the locomotion system. METHODS: Baseline Inter-trial Correlations: Inter-trial correlations specify the relationships among repeated trials of a given task that transpire as a consequence of correcting for previous performance errors over multiple timescales. We can quantify the strength of inter-trial correlations by measuring the decay of the autocorrelation function (ACF), which describes how rapidly information from past trials is "forgotten." Processes whose ACFs decay more slowly exhibit longer-term inter-trial correlations (longer memory processes), while processes whose ACFs decay more rapidly exhibit shorter-term inter-trial correlations (shorter memory processes). Longer-term correlations reflect low-frequency activity, which is more easily
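The ACF-decay measure described above can be sketched directly: estimate the autocorrelation of a trial series at increasing lags and report the first lag at which it drops below a threshold (1/e here, an assumed convention, not necessarily the project's criterion). Shorter decay lags indicate shorter-memory processes.

```python
import math

def autocorrelation(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    if var == 0:
        return 0.0
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / n
    return cov / var

def acf_decay_lag(x, threshold=1 / math.e):
    """First lag at which the ACF falls below `threshold`: a crude proxy for
    how quickly information from past trials is "forgotten"."""
    for lag in range(1, len(x)):
        if autocorrelation(x, lag) < threshold:
            return lag
    return len(x)
```

An alternating series loses its correlation at the first lag, while a slowly drifting series retains it over many trials, i.e. behaves as a longer-memory process.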

  2. Baseline Response Levels Are a Nuisance in Infant Contingency Learning

    Science.gov (United States)

    Millar, W. S.; Weir, Catherine

    2015-01-01

    The impact of differences in level of baseline responding on contingency learning in the first year was examined by considering the response acquisition of infants classified into baseline response quartiles. Whereas the three lower baseline groups showed the predicted increment in responding to a contingency, the highest baseline responders did…

  3. 33 CFR 2.20 - Territorial sea baseline.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Territorial sea baseline. 2.20... JURISDICTION Jurisdictional Terms § 2.20 Territorial sea baseline. Territorial sea baseline means the line.... Normally, the territorial sea baseline is the mean low water line along the coast of the United States...

  4. Text Mining for Protein Docking.

    Directory of Open Access Journals (Sweden)

    Varsha D Badal

    2015-12-01

    Full Text Available The rapidly growing amount of publicly available information from biomedical research is readily accessible on the Internet, providing a powerful resource for predictive biomolecular modeling. The accumulated data on experimentally determined structures transformed structure prediction of proteins and protein complexes. Instead of exploring the enormous search space, predictive tools can simply proceed to the solution based on similarity to the existing, previously determined structures. A similar major paradigm shift is emerging due to the rapidly expanding amount of information, other than experimentally determined structures, which still can be used as constraints in biomolecular structure prediction. Automated text mining has been widely used in recreating protein interaction networks, as well as in detecting small ligand binding sites on protein structures. Combining and expanding these two well-developed areas of research, we applied text mining to structural modeling of protein-protein complexes (protein docking). Protein docking can be significantly improved when constraints on the docking mode are available. We developed a procedure that retrieves published abstracts on a specific protein-protein interaction and extracts information relevant to docking. The procedure was assessed on protein complexes from Dockground (http://dockground.compbio.ku.edu). The results show that correct information on binding residues can be extracted for about half of the complexes. The amount of irrelevant information was reduced by conceptual analysis of a subset of the retrieved abstracts, based on the bag-of-words (features) approach. Support Vector Machine models were trained and validated on the subset. The remaining abstracts were filtered by the best-performing models, which decreased the irrelevant information for ~25% of the complexes in the dataset.
The extracted constraints were incorporated in the docking protocol and tested on the Dockground unbound
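The abstract-filtering step (bag-of-words features feeding a linear classifier) can be sketched as follows. A simple perceptron stands in for the paper's Support Vector Machine models so the sketch stays dependency-free, and the training texts and labels are purely illustrative.

```python
from collections import Counter

def featurize(text):
    """Bag-of-words features."""
    return Counter(text.lower().split())

def train_perceptron(docs, labels, epochs=20):
    """Linear classifier on bag-of-words features; a stand-in for the
    paper's SVM models (vocabulary and labels here are illustrative)."""
    w, b = {}, 0.0
    for _ in range(epochs):
        for doc, y in zip(docs, labels):          # y in {+1, -1}
            f = featurize(doc)
            score = b + sum(w.get(t, 0.0) * c for t, c in f.items())
            if y * score <= 0:                    # misclassified: update
                for t, c in f.items():
                    w[t] = w.get(t, 0.0) + y * c
                b += y
    return w, b

def predict(w, b, doc):
    f = featurize(doc)
    return 1 if b + sum(w.get(t, 0.0) * c for t, c in f.items()) > 0 else -1

# Toy training set: docking-relevant abstracts vs. unrelated text.
docs = ["mutations at the binding interface residues",
        "interface residues mediate the binding affinity",
        "cells were grown in standard culture medium",
        "the samples were stored at low temperature"]
labels = [1, 1, -1, -1]
w, b = train_perceptron(docs, labels)
```

Retrieved abstracts classified as irrelevant would then be discarded before constraint extraction, which is the filtering role the SVM models play in the procedure.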

  5. The Balinese Unicode Text Processing

    Directory of Open Access Journals (Sweden)

    Imam Habibi

    2009-06-01

    Full Text Available In principle, the computer only recognizes numbers as representations of characters. Therefore, there are many encoding systems to allocate these numbers, although not all characters are covered; in Europe, a single language may even need more than one encoding system. Hence, a new encoding system known as Unicode was established to overcome this problem. Unicode provides a unique id for each distinct character, independent of platform, program, and language. The Unicode standard has been adopted across the industry, including by Apple, HP, IBM, JustSystem, Microsoft, Oracle, SAP, Sun, Sybase, and Unisys. In addition, language standards and modern information-exchange formats such as XML, Java, ECMAScript (JavaScript), LDAP, CORBA 3.0, and WML use Unicode as the official vehicle for implementing ISO/IEC 10646. There are four tasks for Balinese script: the transliteration algorithm, searching, sorting, and word boundary analysis (spell checking). To verify the correctness of the algorithms, several applications were built. These applications run on Linux/Windows platforms using J2SDK 1.5 and the J2ME WTK2 library. The input and output of the algorithms/applications are character sequences obtained from keyboard input or external files. This research produces a module, or library, able to process Balinese text based on the Unicode standard. The outputs of this research are mastery of 1. the Unicode standard (21-bit) as a successor to ASCII (7-bit) and ISO 8859-1 (8-bit), the former default character sets in many applications; 2. the Balinese Unicode text processing algorithm; and 3. the experience of working with and learning from an international team of foremost experts in the area: Michael Everson (Ireland), Peter Constable (Microsoft US), I Made Suatjana, and Ida Bagus Adi Sudewa.
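The transliteration task can be illustrated with a greedy longest-match lookup into the Unicode Balinese block (U+1B00-U+1B7F). The table below covers only three letters and ignores vowel signs and conjunct forms, so it is a toy sketch of the idea, not the paper's algorithm.

```python
# Toy Latin-to-Balinese mapping into the Unicode Balinese block
# (U+1B00-U+1B7F); three letters only, chosen for illustration.
BALINESE = {
    "ka": "\u1B13",   # BALINESE LETTER KA
    "ga": "\u1B15",   # BALINESE LETTER GA
    "na": "\u1B26",   # BALINESE LETTER NA
}

def transliterate(latin):
    """Greedy longest-match transliteration; unmapped characters pass through."""
    out, i = [], 0
    while i < len(latin):
        for length in (2, 1):
            chunk = latin[i:i + length]
            if chunk in BALINESE:
                out.append(BALINESE[chunk])
                i += length
                break
        else:
            out.append(latin[i])
            i += 1
    return "".join(out)
```

Because each Balinese letter has a fixed codepoint, searching and sorting can then operate on the codepoints directly, which is what a Unicode-based text-processing module makes possible.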

  6. Text mining by Tsallis entropy

    Science.gov (United States)

    Jamaati, Maryam; Mehri, Ali

    2018-01-01

    Long-range correlations between the elements of natural languages enable them to convey very complex information. The complex structure of human language motivates us to apply nonextensive statistical mechanics to text mining. Tsallis entropy appropriately ranks terms' relevance to the document subject, taking advantage of their spatial correlation length. We apply this statistical concept as a new, powerful word-ranking metric to extract keywords from a single document. We carry out an experimental evaluation, which shows the capability of the presented method in keyword extraction. We find that Tsallis entropy has reliable word-ranking performance, at the same level as the best previous ranking methods.
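The ranking metric itself is compact: the Tsallis entropy S_q = (1 - Σ_i p_i^q) / (q - 1) of a word's occurrence distribution over document segments. In the sketch below, a topical word concentrated in one region scores lower entropy than a function word spread uniformly; the segmentation scheme and q = 2 are assumptions for illustration, not the paper's settings.

```python
def tsallis_entropy(probs, q=2.0):
    """S_q = (1 - sum p_i^q) / (q - 1); tends to Shannon entropy as q -> 1."""
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

def rank_words(text, n_segments=4, q=2.0):
    """Rank words by the Tsallis entropy of their distribution over equal
    document segments; lower entropy = more localized, hence more topical."""
    words = text.lower().split()
    seg = max(1, len(words) // n_segments)
    counts = {}
    for i, w in enumerate(words):
        counts.setdefault(w, [0] * n_segments)[min(i // seg, n_segments - 1)] += 1
    scores = {}
    for w, c in counts.items():
        total = sum(c)
        scores[w] = tsallis_entropy([x / total for x in c], q)
    return sorted(scores, key=scores.get)
```

For a uniform distribution over four segments, S_2 = 1 - 4·(1/4)² = 0.75, while a word confined to one segment scores 0, so sorting ascending surfaces the localized candidates first.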

  7. Biased limiter experiments on text

    International Nuclear Information System (INIS)

    Phillips, P.E.; Wootton, A.J.; Rowan, W.L.; Ritz, C.P.; Rhodes, T.L.; Bengtson, R.D.; Hodge, W.L.; Durst, R.D.; McCool, S.C.; Richards, B.; Gentle, K.W.; Schoch, P.; Forster, J.C.; Hickok, R.L.; Evans, T.E.

    1987-01-01

    Experiments using an electrically biased limiter have been performed on the Texas Experimental Tokamak (TEXT). A small movable limiter is inserted past the main poloidal ring limiter (which is electrically connected to the vacuum vessel) and biased at V_Lim with respect to it. The floating potential, plasma potential and shear layer position can be controlled. With |V_Lim| ≥ 50 V the plasma density increases. For V_Lim > 0 the results obtained are inconclusive. Variation of V_Lim changes the electrostatic turbulence, which may explain the observed total flux changes. (orig.)

  8. New Historicism: Text and Context

    Directory of Open Access Journals (Sweden)

    Violeta M. Vesić

    2016-02-01

    Full Text Available During most of the twentieth century, history was seen as a phenomenon outside of literature that guaranteed the veracity of literary interpretation. History was unique, and it functioned as a basis for reading literary works. During the seventies of the twentieth century there occurred a change of attitude towards history in American literary theory, and there appeared a new theoretical approach which soon became known as New Historicism. Since its inception, New Historicism has been identified with the study of the Renaissance and Romanticism, but nowadays it is increasingly applied to other literary trends. Although there are great differences in the arguments and practices among various representatives of this school, New Historicism has clearly recognizable features, and many new historicists will agree with the statement of Walter Cohen that New Historicism, when it appeared in the eighties, represented something quite new in the studies of theory, criticism and history (Cohen 1987, 33). Theoretical connection with Bakhtin, Foucault and Marx is clear, as well as a kind of uneasy tie with deconstruction and the work of Paul de Man. At the center of this approach is a renewed interest in the study of literary works in the light of the historical and political circumstances in which they were created. Foucault encouraged readers to begin to move literary texts and to link them with discourses and representations that are not literary, as well as to examine the sociological aspects of the texts in order to take part in the social struggles of today. The study of literary works using New Historicism is the study of the politics, history, culture and circumstances in which these works were created.
    Central to this criticism is the claim that history cannot be viewed objectively and that reality can only be understood through the cultural context that the work reveals; hence the re-reading and interpretation of

  9. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.

  10. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
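The performance metrics that the infrastructure's SPARQL queries compute over RDF-encoded annotations reduce to standard precision, recall, and F1 over sets of annotations. The sketch below shows that arithmetic in plain Python over illustrative mutation identifiers; the actual infrastructure evaluates these via SPARQL rather than application code.

```python
def metrics(gold, predicted):
    """Precision, recall and F1 over two sets of annotations; the same
    quantities the infrastructure's SPARQL queries derive from RDF."""
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1
```

Representing both gold and system annotations in RDF means this computation needs no system-specific evaluation code, which is the "no programming needed" point made above.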

  11. Negation handling in sentiment classification using rule-based adapted from Indonesian language syntactic for Indonesian text in Twitter

    Science.gov (United States)

    Amalia, Rizkiana; Arif Bijaksana, Moch; Darmantoro, Dhinta

    2018-03-01

    The presence of a negation word can change the polarity of a text, and if negation is not handled properly it degrades the performance of sentiment classification. Negation words in Indonesian are ‘tidak’, ‘bukan’, ‘belum’ and ‘jangan’. There are also conjunction words able to reverse the actual polarity, such as ‘tetapi’ or ‘tapi’. Unigram features have shortcomings in dealing with negation because they treat the negation word and the negated words as separate tokens. A common approach for negation handling in English text gives the tag ‘NEG_’ to the words following a negation word until the first punctuation mark, but this may tag un-negated words, and it does not handle negation and conjunction occurring in one sentence. In this study, a rule-based method is proposed that determines which words are negated by adapting the syntactic rules of negation in Indonesian to delimit the scope of negation. With the adapted syntactic rules and “NEG_” tagging, an SVM classifier with an RBF kernel achieved better performance than the other experiments. In terms of average F1-score, the proposed method improves on the baseline by 1.79% (baseline without negation handling) and 5% (baseline with existing negation handling) on a dataset in which all tweets contain negation words, and by 2.69% (without negation handling) and 3.17% (with existing negation handling) on a second dataset with varying numbers of negation words per tweet.
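The English-style baseline that this study improves on can be sketched directly: prefix 'NEG_' to every token after a negation word until the first punctuation mark. The negation-word list comes from the abstract; the whitespace tokenization is a simplification. Note in the second assertion's counterpart below that the conjunction 'tapi' and the word after it also get tagged, which is exactly the failure the adapted Indonesian syntactic rules address.

```python
NEGATION_WORDS = {"tidak", "bukan", "belum", "jangan"}
PUNCTUATION = set(".,!?;:")

def tag_negation(sentence):
    """Baseline scope rule: tag every token after a negation word with
    'NEG_' up to (and including) the first token containing punctuation."""
    tokens = sentence.split()
    out, in_scope = [], False
    for tok in tokens:
        if tok.lower() in NEGATION_WORDS:
            out.append(tok)
            in_scope = True
        elif any(ch in PUNCTUATION for ch in tok):
            out.append("NEG_" + tok if in_scope else tok)
            in_scope = False          # scope ends at the first punctuation
        else:
            out.append("NEG_" + tok if in_scope else tok)
    return " ".join(out)
```

The proposed method replaces this fixed punctuation-bounded scope with a scope derived from Indonesian syntactic rules, so that ‘tapi menarik’ would escape the negation scope.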

  12. Basic Test Framework for the Evaluation of Text Line Segmentation and Text Parameter Extraction

    Directory of Open Access Journals (Sweden)

    Darko Brodić

    2010-05-01

    Full Text Available Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, some basic set of measurement methods is required. Currently, there is no commonly accepted one, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross-linked. In the end, its suitability for different types of letters and languages as well as its adaptability are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.

  13. CCM: A Text Classification Method by Clustering

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    In this paper, a new Cluster based Classification Model (CCM) for suspicious email detection and other text classification tasks is presented. Comparative experiments of the proposed model against traditional classification models and the boosting algorithm are also discussed. Experimental results show that the CCM outperforms traditional classification models as well as the boosting algorithm for the task of suspicious email detection on a terrorism-domain email dataset and topic categorization on the Reuters-21578 and 20 Newsgroups datasets. The overall finding is that applying a cluster based...

  14. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented and evaluated herein. The authors' approach relies on information available on pre-existing data to learn how to associate segments in the input string with attributes of a given domain relying on a very effective set of content-based features. The effectiveness of the content-based features is also exploited to directly learn from test data structure-based features, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a

  15. Transfer Learning beyond Text Classification

    Science.gov (United States)

    Yang, Qiang

    Transfer learning is a new machine learning and data mining framework that allows the training and test data to come from different distributions or feature spaces. We can find many novel applications of machine learning and data mining where transfer learning is necessary. While much has been done in transfer learning in text classification and reinforcement learning, there has been a lack of documented success stories of novel applications of transfer learning in other areas. In this invited article, I will argue that transfer learning is in fact quite ubiquitous in many real world applications. In this article, I will illustrate this point through an overview of a broad spectrum of applications of transfer learning that range from collaborative filtering to sensor based location estimation and logical action model learning for AI planning. I will also discuss some potential future directions of transfer learning.

  16. Historical baselines of coral cover on tropical reefs as estimated by expert opinion

    Directory of Open Access Journals (Sweden)

    Tyler D. Eddy

    2018-01-01

    Full Text Available Coral reefs are important habitats that represent global marine biodiversity hotspots and provide important benefits to people in many tropical regions. However, coral reefs are becoming increasingly threatened by climate change, overfishing, habitat destruction, and pollution. Historical baselines of coral cover are important to understand how much coral cover has been lost, e.g., to avoid the ‘shifting baseline syndrome’. There are few quantitative observations of coral reef cover prior to the industrial revolution, and therefore baselines of coral reef cover are difficult to estimate. Here, we use expert and ocean-user opinion surveys to estimate baselines of global coral reef cover. The overall mean estimated baseline coral cover was 59% (±19% standard deviation), compared to an average of 58% (±18% standard deviation) estimated by professional scientists. We did not find evidence of the shifting baseline syndrome, whereby respondents who first observed coral reefs more recently report lower estimates of baseline coral cover. These estimates of historical coral reef baseline cover are important for scientists, policy makers, and managers to understand the extent to which coral reefs have become depleted and to set appropriate recovery targets.

  17. A SURVEY OF ASTRONOMICAL RESEARCH: A BASELINE FOR ASTRONOMICAL DEVELOPMENT

    International Nuclear Information System (INIS)

    Ribeiro, V. A. R. M.; Russo, P.; Cárdenas-Avendaño, A.

    2013-01-01

    Measuring scientific development is a difficult task. Different metrics have been put forward to evaluate scientific development; in this paper we explore a metric that uses the number of peer-reviewed, and when available non-peer-reviewed, research articles as an indicator of development in the field of astronomy. We analyzed the available publication record, using the Smithsonian Astrophysical Observatory/NASA Astrophysics Database System, by country affiliation in the time span between 1950 and 2011 for countries with a gross national income of less than 14,365 USD in 2010. This represents 149 countries. We propose that this metric identifies countries in 'astronomical development' with a culture of research publishing. We also propose that for a country to develop in astronomy, it should invest in outside expert visits, send its staff abroad to study, and establish a culture of scientific publishing. Furthermore, we propose that this paper may be used as a baseline to measure the success of major international projects, such as the International Year of Astronomy 2009.

  18. The WITCH Model. Structure, Baseline, Solutions.

    Energy Technology Data Exchange (ETDEWEB)

    Bosetti, V.; Massetti, E.; Tavoni, M.

    2007-07-01

    WITCH - World Induced Technical Change Hybrid - is a regionally disaggregated hard-link hybrid global model with a neoclassical optimal growth structure (top down) and energy input detail (bottom up). The model endogenously accounts for technological change, both through learning curves affecting the prices of new vintages of capital and through R&D investments. The model features the main economic and environmental policies in each world region as the outcome of a dynamic game. WITCH belongs to the class of Integrated Assessment Models as it possesses a climate module that feeds climate change back into the economy. In this paper we provide a thorough discussion of the model structure and baseline projections. We report detailed information on the evolution of energy demand, technology, and CO2 emissions. Finally, we explicitly quantify the role of free riding in determining the emissions scenarios. (auth)

  19. Global Nuclear Energy Partnership Waste Treatment Baseline

    International Nuclear Information System (INIS)

    Gombert, Dirk; Ebert, William; Marra, James; Jubin, Robert; Vienna, John

    2008-01-01

    The Global Nuclear Energy Partnership (GNEP) program is designed to demonstrate that a proliferation-resistant and sustainable integrated nuclear fuel cycle can be commercialized and used internationally. Alternative stabilization concepts for byproducts and waste streams generated by fuel recycling processes were evaluated and a baseline set of waste forms was recommended for the safe disposition of waste streams. Specific waste forms are recommended based on the demonstrated or expected commercial practicability and technical maturity of the processes needed to make the waste forms, and expected performance of the waste form materials when disposed. Significant issues remain in developing technologies to process some of the wastes into the recommended waste forms, and a detailed analysis of technology readiness may lead to the choice of a different waste form than what is recommended herein. Evolving regulations could also affect the selection of waste forms. (authors)

  20. In-Space Manufacturing Baseline Property Development

    Science.gov (United States)

    Stockman, Tom; Schneider, Judith; Prater, Tracie; Bean, Quincy; Werkheiser, Nicki

    2016-01-01

    The In-Space Manufacturing (ISM) project at NASA Marshall Space Flight Center currently operates a 3D FDM (fused deposition modeling) printer onboard the International Space Station. In order to enable utilization of this capability by designers, the project needs to establish characteristic material properties for materials produced using the process. This is difficult for additive manufacturing, since standards and specifications do not yet exist for these technologies. Because crew time is limited, sample sizes are restricted, which in turn limits the application of traditional design-allowables approaches to developing a material property database for designers. In this study, various approaches to the development of material databases were evaluated for use by designers of space systems who wish to leverage in-space manufacturing capabilities. This study focuses on alternative statistical techniques for baseline property development to support in-space manufacturing.
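
    One alternative small-sample technique of the kind surveyed, a bootstrap lower tolerance bound standing in for a classical B-basis value, can be sketched as follows (the strength data and the exact procedure are illustrative assumptions, not the project's actual method):

```python
import random

def quantile(data, q):
    """Linear-interpolation quantile of a list (0 <= q <= 1)."""
    s = sorted(data)
    pos = q * (len(s) - 1)
    lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def bootstrap_basis_value(samples, coverage=0.90, confidence=0.95,
                          n_boot=5000, seed=1):
    """Estimate a one-sided lower tolerance bound (a B-basis-like value:
    a bound below which at most 10% of the population falls, with 95%
    confidence) by bootstrap resampling of a small coupon batch."""
    rng = random.Random(seed)
    lows = []
    for _ in range(n_boot):
        resample = [rng.choice(samples) for _ in samples]
        lows.append(quantile(resample, 1.0 - coverage))
    return quantile(lows, 1.0 - confidence)

# Tensile strengths (MPa) from a small printed-coupon batch (made-up numbers)
strengths = [31.2, 29.8, 30.5, 32.1, 28.9, 30.0, 31.7, 29.5]
print(round(bootstrap_basis_value(strengths), 2))
```

    Classical design allowables instead use tabulated tolerance-factor methods (e.g., normal-based k-factors); the bootstrap is shown only as one distribution-free alternative for very small samples.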

  1. Global Nuclear Energy Partnership Waste Treatment Baseline

    Energy Technology Data Exchange (ETDEWEB)

    Gombert, Dirk; Ebert, William; Marra, James; Jubin, Robert; Vienna, John [Idaho National laboratory, 2525 Fremont Ave., Idaho Falls, ID 83402 (United States)

    2008-07-01

    The Global Nuclear Energy Partnership (GNEP) program is designed to demonstrate that a proliferation-resistant and sustainable integrated nuclear fuel cycle can be commercialized and used internationally. Alternative stabilization concepts for byproducts and waste streams generated by fuel recycling processes were evaluated and a baseline set of waste forms was recommended for the safe disposition of waste streams. Specific waste forms are recommended based on the demonstrated or expected commercial practicability and technical maturity of the processes needed to make the waste forms, and expected performance of the waste form materials when disposed. Significant issues remain in developing technologies to process some of the wastes into the recommended waste forms, and a detailed analysis of technology readiness may lead to the choice of a different waste form than what is recommended herein. Evolving regulations could also affect the selection of waste forms. (authors)

  2. Global Nuclear Energy Partnership Waste Treatment Baseline

    Energy Technology Data Exchange (ETDEWEB)

    Dirk Gombert; William Ebert; James Marra; Robert Jubin; John Vienna

    2008-05-01

    The Global Nuclear Energy Partnership program (GNEP) is designed to demonstrate a proliferation-resistant and sustainable integrated nuclear fuel cycle that can be commercialized and used internationally. Alternative stabilization concepts for byproducts and waste streams generated by fuel recycling processes were evaluated and a baseline of waste forms was recommended for the safe disposition of waste streams. Waste forms are recommended based on the demonstrated or expected commercial practicability and technical maturity of the processes needed to make the waste forms, and performance of the waste form materials when disposed. Significant issues remain in developing technologies to process some of the wastes into the recommended waste forms, and a detailed analysis of technology readiness and availability may lead to the choice of a different waste form than what is recommended herein. Evolving regulations could also affect the selection of waste forms.

  3. Baselines and test data for cross-lingual inference

    DEFF Research Database (Denmark)

    Agic, Zeljko; Schluter, Natalie

    2018-01-01

    The recent years have seen a revival of interest in textual entailment, sparked by i) the emergence of powerful deep neural network learners for natural language processing and ii) the timely development of large-scale evaluation datasets such as SNLI. Recast as natural language inference, the problem now amounts to detecting the relation between pairs of statements: they either contradict or entail one another, or they are mutually neutral. Current research in natural language inference is effectively exclusive to English. In this paper, we propose to advance the research in SNLI-style natural language inference toward multilingual evaluation. To that end, we provide test data for four major languages: Arabic, French, Spanish, and Russian. We experiment with a set of baselines. Our systems are based on cross-lingual word embeddings and machine translation. While our best system scores an average...

  4. Effects of triplet Higgs bosons in long baseline neutrino experiments

    Science.gov (United States)

    Huitu, K.; Kärkkäinen, T. J.; Maalampi, J.; Vihonen, S.

    2018-05-01

    The triplet scalars (Δ = Δ++, Δ+, Δ0) utilized in the so-called type-II seesaw model to explain the lightness of neutrinos would generate nonstandard interactions (NSI) for a neutrino propagating in matter. We investigate the prospects to probe these interactions in long baseline neutrino oscillation experiments. We analyze the upper bounds that the proposed DUNE experiment might set on the nonstandard parameters and numerically derive bounds, as a function of the lightest neutrino mass, on the ratio of the mass MΔ of the triplet scalars and the strength |λϕ| of the coupling ϕϕΔ of the triplet Δ and the conventional Higgs doublet ϕ. We also discuss the possible misinterpretation of these effects as effects arising from a nonunitarity of the neutrino mixing matrix and compare the results with the bounds that arise from charged lepton flavor violating processes.

  5. The NuMAX Long Baseline Neutrino Factory Concept

    Energy Technology Data Exchange (ETDEWEB)

    Delahaye, J-P. [SLAC; Ankenbrandt, C. [MUONS Inc., Batavia; Bogacz, A. [Jefferson Lab; Huber, P. [Virginia Tech.; Kirk, H. [Brookhaven; Neuffer, D. [Fermilab; Palmer, M. A. [Fermilab; Ryne, R. [LBL, Berkeley; Snopok, P. [IIT, Chicago

    2018-03-19

    A Neutrino Factory, where neutrinos of all species are produced in equal quantities by muon decay, is described as an intensity-frontier facility providing ideal conditions for neutrino studies of ultimate precision and an ideal complement to long-baseline facilities like LBNF at Fermilab. It is foreseen to be built in stages with progressively increasing complexity and performance, taking advantage of existing or proposed facilities at an existing laboratory like Fermilab. A tentative layout based on a recirculating linac, which provides opportunities for considerable savings, is discussed, as well as its possible evolution toward a muon collider if and when requested by physics. Tentative parameters of the various stages are presented, as well as the R&D necessary to address the technological issues and demonstrate their feasibility.

  6. Building a comprehensive syntactic and semantic corpus of Chinese clinical texts.

    Science.gov (United States)

    He, Bin; Dong, Bin; Guan, Yi; Yang, Jinfeng; Jiang, Zhipeng; Yu, Qiubin; Cheng, Jianyi; Qu, Chunyan

    2017-05-01

    To build a comprehensive corpus covering syntactic and semantic annotations of Chinese clinical texts, with corresponding annotation guidelines and methods, and to develop tools trained on the annotated corpus, which supply baselines for research on Chinese texts in the clinical domain. An iterative annotation method was proposed to train annotators and to develop annotation guidelines. Then, by using annotation quality assurance measures, a comprehensive corpus was built, containing annotations of part-of-speech (POS) tags, syntactic tags, entities, assertions, and relations. Inter-annotator agreement (IAA) was calculated to evaluate the annotation quality, and a Chinese clinical text processing and information extraction system (CCTPIES) was developed based on our annotated corpus. The syntactic corpus consists of 138 Chinese clinical documents with 47,426 tokens and 2612 full parsing trees, while the semantic corpus includes 992 documents in which 39,511 entities were annotated with their assertions, along with 7693 relations. IAA evaluation shows that this comprehensive corpus is of good quality and that the system modules are effective. The annotated corpus makes a considerable contribution to natural language processing (NLP) research into Chinese texts in the clinical domain. However, this corpus has a number of limitations. Some additional types of clinical text should be introduced to improve corpus coverage, and active learning methods should be utilized to improve annotation efficiency. In this study, several annotation guidelines and an annotation method for Chinese clinical texts were proposed, and a comprehensive corpus with its NLP modules was constructed, providing a foundation for further study of applying NLP techniques to Chinese texts in the clinical domain. Copyright © 2017. Published by Elsevier Inc.
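
    Inter-annotator agreement of the kind reported is often computed with Cohen's kappa; the record does not state which IAA statistic was used, so the following is an illustrative sketch with made-up label sequences:

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa between two annotators' label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(ann_a) == len(ann_b)
    n = len(ann_a)
    observed = sum(x == y for x, y in zip(ann_a, ann_b)) / n
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    labels = set(freq_a) | set(freq_b)
    chance = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - chance) / (1 - chance)

# Two annotators labeling the same 8 tokens with POS tags
a = ["NN", "VV", "NN", "AD", "NN", "VV", "NN", "NN"]
b = ["NN", "VV", "NN", "VV", "NN", "VV", "NN", "AD"]
print(round(cohens_kappa(a, b), 3))  # 0.568
```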

  7. Resonant island divertor experiments on text

    International Nuclear Information System (INIS)

    deGrassie, J.S.; Evans, T.E.; Jackson, G.L.

    1988-09-01

    The first experimental tests of the resonant island divertor (RID) concept have been carried out on the Texas Experimental Tokamak (TEXT). Modular perturbation coils produce static resonant magnetic fields at the tokamak boundary. The resulting magnetic islands are used to guide heat and particle fluxes around a small scoop limiter head. An enhancement in the limiter collection efficiency over the nonisland operation, as evidenced by enhanced neutral density within the limiter head, of up to a factor of 4 is obtained. This enhancement is larger than one would expect given the measured magnitude of the cross-field particle transport in TEXT. It is proposed that electrostatic perturbations occur which enhance the ion convection rate around the islands. Preliminary experiments utilizing electron cyclotron heating (ECH) in conjunction with RID operation have also been performed. 6 refs., 3 figs

  8. Using ontology network structure in text mining.

    Science.gov (United States)

    Berndt, Donald J; McCart, James A; Luther, Stephen L

    2010-11-13

    Statistical text mining treats documents as bags of words, with a focus on term frequencies within documents and across document collections. Unlike natural language processing (NLP) techniques that rely on an engineered vocabulary or a full-featured ontology, statistical approaches do not make use of domain-specific knowledge. The freedom from biases can be an advantage, but at the cost of ignoring potentially valuable knowledge. The approach proposed here investigates a hybrid strategy based on computing graph measures of term importance over an entire ontology and injecting the measures into the statistical text mining process. As a starting point, we adapt existing search engine algorithms such as PageRank and HITS to determine term importance within an ontology graph. The graph-theoretic approach is evaluated using a smoking data set from the i2b2 National Center for Biomedical Computing, cast as a simple binary classification task for categorizing smoking-related documents, demonstrating consistent improvements in accuracy.
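
    The core idea, running PageRank over an ontology graph so that well-connected terms receive higher importance scores, can be sketched as follows (the toy is-a graph and damping factor are illustrative assumptions, not the i2b2 ontology):

```python
def pagerank(graph, damping=0.85, iters=100):
    """Power-iteration PageRank over a directed graph given as
    {node: [outgoing neighbors]}. Returns {node: score}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for u in outs:
                    new[u] += share
            else:  # dangling node: spread its rank uniformly
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank

# A toy is-a ontology fragment: edges point from specific to general terms,
# so a general term like 'tobacco use' accumulates importance.
onto = {
    "cigarette": ["smoking"],
    "cigar": ["smoking"],
    "smoking": ["tobacco use"],
    "nicotine": ["tobacco use"],
    "tobacco use": [],
}
scores = pagerank(onto)
print(max(scores, key=scores.get))  # tobacco use
```

    The resulting scores would then be injected as term weights into the statistical text mining step, which is the hybrid strategy the record describes.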

  9. A programmed text in statistics

    CERN Document Server

    Hine, J

    1975-01-01

    [Contents residue: exercises and solutions for Sections 1 and 2, each split into physical sciences and engineering, biological sciences, and social sciences; statistical tables covering χ² tests involving variances, one- and two-tailed χ² tests, and the F-distribution.] Preface: This project started some years ago when the Nuffield Foundation kindly gave a grant for writing a programmed text to use with service courses in statistics. The work was carried out by Mrs. Joan Hine and Professor G. B. Wetherill at Bath University, together with some other help from time to time by colleagues at Bath University and elsewhere. Testing was done at various colleges and universities, and some helpful comments were received, but we particularly mention King Edwards School, Bath, who provided some sixth formers as 'guinea pigs' for the fir...

  10. Baseline assessment of benthic communities of the Flower Garden Banks (2010 - 2013) using technical diving operations: 2011

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The proposed work will develop baseline information on fish and benthic communities within the Flower Garden Banks National Marine Sanctuary (FGBNMS). Surveys will employ...

  11. Baseline assessment of fish communities of the Flower Garden Banks (2010 - 2013) using technical diving operations: 2011

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The proposed work will develop baseline information on fish and benthic communities within the Flower Garden Banks National Marine Sanctuary (FGBNMS). Surveys will employ...

  12. DGEMP-OE (2008) Energy Baseline Scenario. Synthesis report

    International Nuclear Information System (INIS)

    2008-01-01

    the CAS scenarios relies primarily on 2000 data, despite the existence of sufficiently complete statistics through to 2005. The DGEMP on the other hand used a study by the BIPE (Office for Economic Information and Forecasting) provided by the SESP, the Ministry for Ecology, Energy, Sustainable Development and Spatial Planning's economic statistics and forecasting department. On the basis of the study's macro-economic projections of the French economy to 2020, the DGEMP was able to re-evaluate the prospects for activity in the industrial and tertiary sectors. In several respects (e.g. supply security, CO2 emissions, energy efficiency), the baseline scenario proposed here is clearly not a scenario conducive to satisfying French energy policy objectives. This is not a surprising conclusion in that it implies the need to implement new policies and measures in addition to those already in place or approved. In particular, this scenario would lead to importing 66 billion cubic meters of gas (59 Mtoe) in 2020 and 78 billion cubic meters (70 Mtoe) in 2030, compared with the present 44 billion cubic meters. In addition to the resulting CO2 emissions, the near doubling of gas imports would pose a twofold problem as to the geographic origin of the gas imported (under appropriate supply contracts) and the infrastructure (LNG terminals, gas pipelines) required to transport it. Finally, the baseline scenario is of course a long way from achieving the Community targets, whether for CO2 emissions, projected to rise continually until 2020 and then even faster until 2030 (due to transport and electric power generation), or for the share of renewable energy in the energy mix. In that regard, the share of renewable energy in 'enlarged' final energy consumption, as it is described in the 'energy and climate change package', would grow to 13.4% in 2020 (versus 23% in the Commission's burden sharing proposal) and to 13.7% in 2030, compared with the 10.3% share observed in 2006

  13. Office of Civilian Radioactive Waste Management Program Cost and Schedule Baseline

    International Nuclear Information System (INIS)

    1992-09-01

    The purpose of this document is to establish quantitative expressions of proposed costs and schedule to serve as a basis for measurement of program performance. It identifies the components of the Program Cost and Schedule Baseline (PCSB) that will be subject to change control by the Executive (Level 0) and Program (Level 1) Change Control Boards (CCBs) and establishes their baseline values. This document also details PCSB reporting, monitoring, and corrective action requirements. The Program technical baseline contained in the Waste Management System Description (WMSD), the Waste Management System Requirements (WMSR), and the Physical System Requirements documents provides the technical basis for the PCSB. Changes to the PCSB will be approved by the Program Change Control Board (PCCB). In addition to the PCCB, the Energy System Acquisition Advisory Board Baseline CCB (ESAAB BCCB) will perform control functions relating to Total Project Cost (TPC) and major schedule milestones for the Yucca Mountain Site Characterization Project and the Monitored Retrievable Storage (MRS) Project

  14. An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau.

    Science.gov (United States)

    Tarlow, Kevin R

    2017-07-01

    Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: its values are inflated and not bound between -1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
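
    The general recipe behind Baseline Corrected Tau, detrending by a robust regression fit to the baseline phase and then rank-correlating phase membership with the corrected scores, can be sketched as follows (Theil-Sen and tau-a are used here for simplicity; the published procedure may differ in details):

```python
def theil_sen_slope(xs, ys):
    """Robust slope estimate: median of all pairwise slopes."""
    slopes = sorted((ys[j] - ys[i]) / (xs[j] - xs[i])
                    for i in range(len(xs)) for j in range(i + 1, len(xs)))
    m = len(slopes)
    return slopes[m // 2] if m % 2 else (slopes[m // 2 - 1] + slopes[m // 2]) / 2

def kendall_tau(xs, ys):
    """Kendall rank correlation (tau-a): (concordant - discordant) / pairs."""
    conc = disc = 0
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            conc += s > 0
            disc += s < 0
    return (conc - disc) / (n * (n - 1) / 2)

def baseline_corrected_tau(baseline, treatment):
    """Detrend all scores using the baseline-phase trend, then correlate
    phase membership (0 = baseline, 1 = treatment) with detrended scores."""
    slope = theil_sen_slope(list(range(len(baseline))), baseline)
    scores = baseline + treatment
    detrended = [y - slope * t for t, y in enumerate(scores)]
    phase = [0] * len(baseline) + [1] * len(treatment)
    return kendall_tau(phase, detrended)

# Baseline already improving at ~1 point/session; treatment adds no extra
# gain, so the trend-corrected effect size is zero.
print(baseline_corrected_tau([10, 11, 12, 13], [14, 15, 16, 17]))  # 0.0
```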

  15. 40 CFR 80.915 - How are the baseline toxics value and baseline toxics volume determined?

    Science.gov (United States)

    2010-07-01

    ... baseline toxics value if it can determine an applicable toxics value for every batch of gasoline produced... of gasoline batch i produced or imported between January 1, 1998 and December 31, 2000, inclusive. i = Individual batch of gasoline produced or imported between January 1, 1998 and December 31, 2000, inclusive. n...

  16. Pakistan, Sindh Province - Baseline Indicators System : Baseline Procurement Performance Assessment Report

    OpenAIRE

    World Bank

    2009-01-01

    This document provides an assessment of the public procurement system in Sindh province using the baseline indicators system developed by the Development Assistance Committee of the Organization for Economic Cooperation and Development (OECD-DAC). For this assessment, interviews and discussions were held with stakeholders from the public and private sectors as well as civil society. Developing...

  17. The texting and driving epidemic : changing norms to change behavior.

    Science.gov (United States)

    2013-09-01

    This campaign was created to reduce texting and driving and to increase awareness of the serious risks involved with texting and driving. The target audience of the campaign is University of Kansas students. This plan proposes an Anti-Texting and ...

  18. Prospects for long baseline neutrino oscillation experiments

    International Nuclear Information System (INIS)

    Goodman, M.

    1991-01-01

    Several recent developments have motivated consideration of neutrino experiments located hundreds or thousands of kilometers from an accelerator. The motivations and experimental challenges for such experiments are examined. Three proposals for using the Fermilab Main Injector are compared. The requirements on mass, distance, and resolution for an 'ideal' detector for such an experiment are considered.

  19. The London low emission zone baseline study.

    Science.gov (United States)

    Kelly, Frank; Armstrong, Ben; Atkinson, Richard; Anderson, H Ross; Barratt, Ben; Beevers, Sean; Cook, Derek; Green, Dave; Derwent, Dick; Mudway, Ian; Wilkinson, Paul

    2011-11-01

    On February 4, 2008, the world's largest low emission zone (LEZ) was established. At 2644 km2, the zone encompasses most of Greater London. It restricts the entry of the oldest and most polluting diesel vehicles, including heavy-goods vehicles (haulage trucks), buses and coaches, larger vans, and minibuses. It does not apply to cars or motorcycles. The LEZ scheme will introduce increasingly stringent Euro emissions standards over time. The creation of this zone presented a unique opportunity to estimate the effects of a stepwise reduction in vehicle emissions on air quality and health. Before undertaking such an investigation, robust baseline data were gathered on air quality and the oxidative activity and metal content of particulate matter (PM) from air pollution monitors located in Greater London. In addition, methods were developed for using databases of electronic primary-care records in order to evaluate the zone's health effects. Our study began in 2007, using information about the planned restrictions in an agreed-upon LEZ scenario, together with year-on-year changes in the vehicle fleet, as model inputs to predict air pollution concentrations in London for the years 2005, 2008, and 2010. Based on this detailed emissions and air pollution modeling, the areas in London expected to show the greatest changes in air pollution concentrations and population exposures after the implementation of the LEZ were identified. Using these predictions, the best placement of a pollution monitoring network was determined and the feasibility of evaluating the health effects using electronic primary-care records was assessed. To measure baseline pollutant concentrations before the implementation of the LEZ, a comprehensive monitoring network was established close to major roadways and intersections. Output-difference plots from statistical modeling for 2010 indicated seven key areas likely to experience the greatest change in concentrations of nitrogen dioxide (NO2) (at least 3

  20. ℓ2-Optimized predictive image coding with ℓ∞ bound.

    Science.gov (United States)

    Chuah, Sceuchin; Dumitrescu, Sorina; Wu, Xiaolin

    2013-12-01

    In many scientific, medical, and defense applications of image/video compression, an ℓ∞ error bound is required. However, pure ℓ∞-optimized image coding, colloquially known as near-lossless image coding, is prone to structured errors such as contours and speckles if the bit rate is not sufficiently high; moreover, most of the previous ℓ∞-based image coding methods suffer from poor rate control. In contrast, the ℓ2 error metric aims for average fidelity and hence preserves the subtlety of smooth waveforms better than the ℓ∞ error metric, and it offers fine granularity in rate control, but pure ℓ2-based image coding methods (e.g., JPEG 2000) cannot bound individual errors as the ℓ∞-based methods can. This paper presents a new compression approach to retain the benefits and circumvent the pitfalls of the two error metrics. A common approach of near-lossless image coding is to embed into a DPCM prediction loop a uniform scalar quantizer of residual errors. The said uniform scalar quantizer is replaced, in the proposed new approach, by a set of context-based ℓ2-optimized quantizers. The optimization criterion is to minimize a weighted sum of the ℓ2 distortion and the entropy while maintaining a strict ℓ∞ error bound. The resulting method obtains good rate-distortion performance in both ℓ2 and ℓ∞ metrics and also increases the rate granularity. Compared with JPEG 2000, the new method not only guarantees lower ℓ∞ error for all bit rates, but it also achieves higher PSNR for relatively high bit rates.
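
    The conventional near-lossless scheme described above, a uniform scalar quantizer of prediction residuals embedded in a DPCM loop, can be sketched for a 1-D integer signal (this shows only the baseline the paper improves upon, not the proposed context-based ℓ2-optimized quantizers; the previous-sample predictor is an illustrative choice):

```python
def near_lossless_encode(samples, delta):
    """DPCM with a uniform scalar quantizer of step 2*delta + 1 applied to
    prediction residuals. Guarantees |reconstruction - original| <= delta.
    The predictor is the previous *reconstructed* sample (closed loop),
    so the quantization error never accumulates."""
    step = 2 * delta + 1
    indices, prev = [], 0
    for x in samples:
        residual = x - prev
        # round residual to the nearest multiple of step
        if residual >= 0:
            q = (residual + delta) // step
        else:
            q = -((-residual + delta) // step)
        indices.append(q)
        prev = prev + q * step  # mirror the decoder's reconstruction
    return indices

def near_lossless_decode(indices, delta):
    step = 2 * delta + 1
    out, prev = [], 0
    for q in indices:
        prev = prev + q * step
        out.append(prev)
    return out

signal = [10, 12, 15, 15, 14, 9, 7, 7]
delta = 2
rec = near_lossless_decode(near_lossless_encode(signal, delta), delta)
print(max(abs(a - b) for a, b in zip(signal, rec)) <= delta)  # True
```

    With delta = 0 the step is 1 and the scheme degenerates to lossless DPCM; the quantizer indices, not the samples, would then be entropy coded.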

  1. A baseline for the multivariate comparison of resting state networks

    Directory of Open Access Journals (Sweden)

    Elena A Allen

    2011-02-01

    Full Text Available As the size of functional and structural MRI datasets expands, it becomes increasingly important to establish a baseline from which diagnostic relevance may be determined, a processing strategy that efficiently prepares data for analysis, and a statistical approach that identifies important effects in a manner that is both robust and reproducible. In this paper, we introduce a multivariate analytic approach that optimizes sensitivity and reduces unnecessary testing. We demonstrate the utility of this mega-analytic approach by identifying the effects of age and gender on the resting state networks of 603 healthy adolescents and adults (mean age: 23.4 years, range: 12 to 71 years). Data were collected on the same scanner, preprocessed using an automated analysis pipeline based in SPM, and studied using group independent component analysis. Resting state networks were identified and evaluated in terms of three primary outcome measures: time course spectral power, spatial map intensity, and functional network connectivity. Results revealed robust effects of age on all three outcome measures, largely indicating decreases in network coherence and connectivity with increasing age. Gender effects were of smaller magnitude but suggested stronger intra-network connectivity in females and more inter-network connectivity in males, particularly with regard to sensorimotor networks. These findings, along with the analysis approach and statistical framework described here, provide a useful baseline for future investigations of brain networks in health and disease.

  2. Pilot evaluation of the text4baby mobile health program

    Directory of Open Access Journals (Sweden)

    Evans William Douglas

    2012-11-01

    Full Text Available Abstract Background Mobile phone technologies for health promotion and disease prevention have evolved rapidly, but few studies have tested the efficacy of mobile health in full-fledged programs. Text4baby is an example of mobile health based on behavioral theory, and it delivers text messages to traditionally underserved pregnant women and new mothers to change their health, health care beliefs, practices, and behaviors in order to improve clinical outcomes. The purpose of this pilot evaluation study is to assess the efficacy of this text messaging campaign. Methods We conducted a randomized pilot evaluation study. All participants were pregnant women first presenting for care at the Fairfax County, Virginia Health Department. We randomized participants to enroll in text4baby and receive usual health care (intervention), or to continue simply to receive usual care (control). We then conducted a 24-item telephone survey of attitudes and behaviors related to text4baby. We surveyed participants at baseline, before text4baby was delivered to the intervention group, and at follow-up at approximately 28 weeks of baby’s gestational age. Results We completed 123 baseline interviews in English and in Spanish. Overall, the sample was predominantly of Hispanic origin (79.7%) with an average age of 27.6 years. We completed 90 follow-up interviews, and achieved a 73% retention rate. We used a logistic generalized estimating equation model to evaluate intervention effects on measured outcomes. We found a significant effect of text4baby intervention exposure on increased agreement with the attitude statement “I am prepared to be a new mother” (OR = 2.73, CI = 1.04, 7.18, p = 0.042) between baseline and follow-up. For those who had attained a high school education or greater, we observed a significantly higher overall agreement to attitudes against alcohol consumption during pregnancy (OR = 2.80, CI = 1.13, 6.90, p = 0.026). We also observed a

  3. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure-from-motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, so occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. with a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the

  4. Integrating risk management into the baselining process

    International Nuclear Information System (INIS)

    Jennett, N.; Tonkinson, A.

    1994-01-01

    These processes work together in building the project baseline (comprised of the technical, schedule, and cost baselines) against which performance is measured and changes to the scope, schedule and cost of a project are managed and controlled. Risk analysis is often performed as the final element of the scheduling or estimating processes, a precursor to establishing cost and schedule contingency. However, best business practices dictate that information that may be crucial to the success of a project be analyzed and incorporated into project planning as soon as it is available and usable. The purpose of risk management is not to eliminate risk. Neither is it intended to suggest wholesale re-estimating and re-scheduling of a project. Rather, the intent is to make provisions to reduce and control the schedule and/or cost ramifications of risk by anticipating events and conditions that cannot be reliably planned for and which have the potential to negatively impact accomplishment of the technical objectives and requirements of the project.

  5. Shifting environmental baselines in the Red Sea.

    Science.gov (United States)

    Price, A R G; Ghazi, S J; Tkaczynski, P J; Venkatachalam, A J; Santillan, A; Pancho, T; Metcalfe, R; Saunders, J

    2014-01-15

    The Red Sea is among the world's top marine biodiversity hotspots. We re-examined coastal ecosystems at sites surveyed during the 1980s using the same methodology. Coral cover increased significantly towards the north, mirroring the reverse pattern for mangroves and other sedimentary ecosystems. Latitudinal patterns are broadly consistent across both surveys and with results from independent studies. Coral cover showed the greatest change, declining significantly from a median score of 4 (1000-9999 m²) to 2 (10-99 m²) per quadrat in 2010/11. This may partly reflect impact from coastal construction, which was evident at 40% of sites and has significantly increased in magnitude over 30 years. Beach oil has significantly declined, but shore debris has increased significantly. Although substantial, levels are lower than at some remote ocean atolls. While earlier reports have suggested that the Red Sea is generally healthy, shifting environmental baselines are evident from the current study. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Regional geochemical baselines for Portuguese shelf sediments

    International Nuclear Information System (INIS)

    Mil-Homens, M.; Stevens, R.L.; Cato, I.; Abrantes, F.

    2007-01-01

    Metal concentrations (Al, Cr, Cu, Ni, Pb and Zn) from the DGM-INETI archive data set have been examined for sediments collected during the 1970s from 267 sites on the Portuguese shelf. Due to differences in the oceanographic and sedimentological settings between the western and Algarve coasts, the archive data set is split into two segments. For both shelf segments, regional geochemical baselines (RGB) are defined using aluminium as a reference element. Seabed samples recovered in 2002 from four distinct areas of the Portuguese shelf are superimposed on these models to identify and compare possible metal enrichments relative to the natural distribution. Metal enrichments associated with anthropogenic influences are identified in three samples collected near the Tejo River and are characterised by the highest enrichment factors (EF; EF Pb Zn < 4). EF values close to 1 suggest a largely natural origin for metal distributions in sediments from the other areas included in the study. - Background metal concentrations and their natural variability must be established before assessing anthropogenic impacts.
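
    The Al-normalization in this record is the standard enrichment-factor construction: a metal concentration is compared with the baseline after both are normalized by aluminium. A small sketch, with made-up concentration values purely for illustration:

```python
def enrichment_factor(metal_sample, al_sample, metal_baseline, al_baseline):
    """Al-normalized enrichment factor relative to a regional baseline:

        EF = (Me / Al)_sample / (Me / Al)_baseline

    EF close to 1 suggests a largely natural origin for the metal;
    EF well above 1 suggests enrichment, e.g. anthropogenic input.
    """
    return (metal_sample / al_sample) / (metal_baseline / al_baseline)

# Hypothetical Pb concentrations in mg/kg (illustrative numbers only):
ef_pb = enrichment_factor(metal_sample=60.0, al_sample=50000.0,
                          metal_baseline=20.0, al_baseline=65000.0)
print(round(ef_pb, 2))  # prints 3.9
```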

  7. Arc melter demonstration baseline test results

    International Nuclear Information System (INIS)

    Soelberg, N.R.; Chambers, A.G.; Anderson, G.L.; Oden, L.L.; O'Connor, W.K.; Turner, P.C.

    1994-07-01

    This report describes the test results and evaluation for the Phase 1 (baseline) arc melter vitrification test series conducted for the Buried Waste Integrated Demonstration program (BWID). Phase 1 tests were conducted on surrogate mixtures of as-incinerated wastes and soil. Some buried wastes, soils, and stored wastes at the INEL and other DOE sites, are contaminated with transuranic (TRU) radionuclides and hazardous organics and metals. The high temperature environment in an electric arc furnace may be used to process these wastes to produce materials suitable for final disposal. An electric arc furnace system can treat heterogeneous wastes and contaminated soils by (a) dissolving and retaining TRU elements and selected toxic metals as oxides in the slag phase, (b) destroying organic materials by dissociation, pyrolyzation, and combustion, and (c) capturing separated volatilized metals in the offgas system for further treatment. Structural metals in the waste may be melted and tapped separately for recycle or disposal, or these metals may be oxidized and dissolved into the slag. The molten slag, after cooling, will provide a glass/ceramic final waste form that is homogeneous, highly nonleachable, and extremely durable. These features make this waste form suitable for immobilization of TRU radionuclides and toxic metals for geologic timeframes. Further, the volume of contaminated wastes and soils will be substantially reduced in the process

  8. Cryogenics Testbed Laboratory Flange Baseline Configuration

    Science.gov (United States)

    Acuna, Marie Lei Ysabel D.

    2013-01-01

    As an intern at Kennedy Space Center (KSC), I was involved in research for the Fluids and Propulsion Division of the NASA Engineering (NE) Directorate. I was immersed in the Integrated Ground Operations Demonstration Units (IGODU) project for the majority of my time at KSC, primarily with the Ground Operations Demonstration Unit Liquid Oxygen (GODU LO2) branch of IGODU. This project was established to develop advancements in cryogenic systems as a part of KSC's Advanced Exploration Systems (AES) program. The vision of AES is to develop new approaches for human exploration and operations in and beyond low Earth orbit. Advanced cryogenic systems are crucial to minimize the consumable losses of cryogenic propellants, develop higher performance launch vehicles, and decrease operations cost for future launch programs. During my internship, I conducted a flange torque tracking study that established a baseline configuration for the flanges in the Simulated Propellant Loading System (SPLS) at the KSC Cryogenics Test Laboratory (CTL) - the testing environment for GODU LO2.

  9. Gated integrator with signal baseline subtraction

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X.

    1996-12-17

    An ultrafast, high precision gated integrator includes an opamp having differential inputs. A signal to be integrated is applied to one of the differential inputs through a first input network, and a signal indicative of the DC offset component of the signal to be integrated is applied to the other of the differential inputs through a second input network. A pair of electronic switches in the first and second input networks define an integrating period when they are closed. The first and second input networks are substantially symmetrically constructed of matched components so that error components introduced by the electronic switches appear symmetrically in both input circuits and, hence, are nullified by the common mode rejection of the integrating opamp. The signal indicative of the DC offset component is provided by a sample and hold circuit actuated as the integrating period begins. The symmetrical configuration of the integrating circuit improves accuracy and speed by balancing out common mode errors, by permitting the use of high speed switching elements and high speed opamps and by permitting the use of a small integrating time constant. The sample and hold circuit substantially eliminates the error caused by the input signal baseline offset during a single integrating window. 5 figs.
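
    The role of the sample-and-hold can be seen in a small numeric analogy (not a circuit simulation): freeze the signal level as the gate opens, subtract it across the window, and the integral becomes independent of the DC baseline offset. All waveforms and numbers below are invented for illustration.

```python
import numpy as np

def gated_integral(t, signal, t_open, t_close):
    """Integrate (signal - held baseline) over the gate window.

    The sample-and-hold is mimicked by capturing the signal value at
    the moment the gate opens and subtracting it from every sample
    inside the window, analogous to feeding the held DC offset into
    the integrator's other differential input.
    """
    held = signal[np.searchsorted(t, t_open)]  # sample-and-hold at gate opening
    gate = (t >= t_open) & (t < t_close)
    dt = t[1] - t[0]
    return float(np.sum(signal[gate] - held) * dt)

t = np.linspace(0.0, 1.0, 10001)
pulse = np.exp(-((t - 0.5) / 0.05) ** 2)       # pulse of interest
for offset in (0.0, 0.3, -1.2):                # different DC baseline offsets
    print(round(gated_integral(t, pulse + offset, 0.2, 0.8), 4))
```

    Because the held value carries the same offset as the windowed samples, the offset cancels exactly and all three integrals agree.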

  10. Gated integrator with signal baseline subtraction

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xucheng (Lisle, IL)

    1996-01-01

    An ultrafast, high precision gated integrator includes an opamp having differential inputs. A signal to be integrated is applied to one of the differential inputs through a first input network, and a signal indicative of the DC offset component of the signal to be integrated is applied to the other of the differential inputs through a second input network. A pair of electronic switches in the first and second input networks define an integrating period when they are closed. The first and second input networks are substantially symmetrically constructed of matched components so that error components introduced by the electronic switches appear symmetrically in both input circuits and, hence, are nullified by the common mode rejection of the integrating opamp. The signal indicative of the DC offset component is provided by a sample and hold circuit actuated as the integrating period begins. The symmetrical configuration of the integrating circuit improves accuracy and speed by balancing out common mode errors, by permitting the use of high speed switching elements and high speed opamps and by permitting the use of a small integrating time constant. The sample and hold circuit substantially eliminates the error caused by the input signal baseline offset during a single integrating window.

  11. LTC vacuum blasting machine (concrete): Baseline report

    International Nuclear Information System (INIS)

    1997-01-01

    The LTC shot blast technology was tested and is being evaluated at Florida International University (FIU) as a baseline technology. In conjunction with FIU's evaluation of efficiency and cost, this report covers the evaluation conducted for safety and health issues. It is a commercially available technology and has been used for various projects at locations throughout the country. The LTC 1073 Vacuum Blasting Machine uses a high-capacity, direct-pressure blasting system which incorporates a continuous feed for the blast media. The blast media cleans the surface within the contained brush area of the blast. It incorporates a vacuum system which removes dust and debris from the surface as it is blasted. The safety and health evaluation during the testing demonstration focused on two main areas of exposure: dust and noise. Dust exposure during maintenance activities was minimal, but due to mechanical difficulties dust monitoring could not be conducted during operation. Noise exposure was significant. Further testing for each of these exposures is recommended because the testing demonstration took place outdoors, which may make the results unrepresentative; it is feasible that the dust and noise levels will be higher in an enclosed environment. In addition, other safety and health issues found were ergonomics, heat stress, tripping hazards, electrical hazards, lockout/tagout, and arm-hand vibration.

  12. Pentek metal coating removal system: Baseline report

    International Nuclear Information System (INIS)

    1997-01-01

    The Pentek coating removal technology was tested and is being evaluated at Florida International University (FIU) as a baseline technology. In conjunction with FIU's evaluation of efficiency and cost, this report covers the evaluation conducted for safety and health issues. It is a commercially available technology and has been used for various projects at locations throughout the country. The Pentek coating removal system consisted of the ROTO-PEEN Scaler, CORNER-CUTTER®, and VAC-PAC®. They are designed to remove coatings from steel, concrete, brick, and wood. The Scaler uses 3M Roto Peen tungsten carbide cutters, while the CORNER-CUTTER® uses solid needles for descaling activities. These hand tools are used with the VAC-PAC® vacuum system to capture dust and debris as the coating is removed. The safety and health evaluation during the testing demonstration focused on two main areas of exposure: dust and noise. Dust exposure was minimal, but noise exposure was significant. Further testing for each exposure is recommended because of the environment where the testing demonstration took place; it is feasible that the dust and noise levels will be higher in an enclosed operating environment of different construction. In addition, other areas of concern found were arm-hand vibration, whole-body vibration, ergonomics, heat stress, tripping hazards, electrical hazards, machine guarding, and lockout/tagout.

  13. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy (¹H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired ¹H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are not able to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for ¹H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The method was applied to both simulated data and in-vivo ¹H-MRS signals of the human brain. Results justify the efficiency of the proposed method in removing the baseline from ¹H-MRS signals.
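
    A full EEMD implementation (ensemble sifting with spline envelopes, e.g. as provided by third-party packages) is beyond a short sketch. As a deliberately simplified stand-in, the toy below estimates the smooth wideband baseline with a low-order polynomial fit, playing the role of the low-frequency IMFs that EEMD would attribute to macromolecules. It illustrates only the correction step on synthetic data; it is not the authors' method.

```python
import numpy as np

def remove_smooth_baseline(spectrum, degree=3):
    """Estimate a slowly varying baseline with a low-order polynomial
    least-squares fit and subtract it (a crude stand-in for summing
    the low-frequency IMFs of an EEMD decomposition)."""
    n = np.arange(spectrum.size)
    baseline = np.polyval(np.polyfit(n, spectrum, degree), n)
    return spectrum - baseline, baseline

# Synthetic "spectrum": two narrow peaks on a smooth wideband baseline.
n = np.arange(512)
peaks = np.exp(-((n - 150) / 4.0) ** 2) + 0.7 * np.exp(-((n - 300) / 5.0) ** 2)
drift = 1e-5 * (n - 256) ** 2 + 0.2
corrected, est = remove_smooth_baseline(peaks + drift)
# The narrow peaks survive nearly unchanged while the drift is removed.
```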

  14. Effect of the Interaction of Text Structure, Background Knowledge and Purpose on Attention to Text.

    Science.gov (United States)

    1982-04-01

    in the sense proposed by Craik and Lockhart (1972). All levels of representation would entail such preliminary processing operations as perceptual... Craik, F. I., & Lockhart, R. S. Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 1972, 11... processes this information to a deeper level than those text elements that are less important or irrelevant. The terminology "deeper" level is used here

  15. Tank waste remediation system retrieval and disposal mission initial updated baseline summary

    International Nuclear Information System (INIS)

    Swita, W.R.

    1998-01-01

    This document provides a summary of the proposed Tank Waste Remediation System Retrieval and Disposal Mission Initial Updated Baseline (scope, schedule, and cost) developed to demonstrate the Tank Waste Remediation System contractor's Readiness-to-Proceed in support of the Phase 1B mission

  16. Baseline projections of transportation energy consumption by mode: 1981 update

    Energy Technology Data Exchange (ETDEWEB)

    Millar, M; Bunch, J; Vyas, A; Kaplan, M; Knorr, R; Mendiratta, V; Saricks, C

    1982-04-01

    A comprehensive set of activity and energy-demand projections for each of the major transportation modes and submodes is presented. Projections are developed for a business-as-usual scenario, which provides a benchmark for assessing the effects of potential conservation strategies. This baseline scenario assumes a continuation of present trends, including fuel-efficiency improvements likely to result from current efforts of vehicle manufacturers. Because of anticipated changes in fuel efficiency, fuel price, modal shifts, and a lower-than-historic rate of economic growth, projected growth rates in transportation activity and energy consumption depart from historic patterns. The text discusses the factors responsible for this departure, documents the assumptions and methodologies used to develop the modal projections, and compares the projections with other efforts.

  17. Relationship of Baseline Hemoglobin Level with Serum Ferritin, Postphlebotomy Hemoglobin Changes, and Phlebotomy Requirements among HFE C282Y Homozygotes

    Directory of Open Access Journals (Sweden)

    Seyed Ali Mousavi

    2015-01-01

    Full Text Available Objectives. We aimed to examine whether baseline hemoglobin levels in C282Y-homozygous patients are related to the degree of serum ferritin (SF) elevation and whether patients with different baseline hemoglobin have different phlebotomy requirements. Methods. A total of 196 patients (124 males and 72 females) who had undergone therapeutic phlebotomy and had SF and both pre- and posttreatment hemoglobin values were included in the study. Results. Bivariate correlation analysis suggested that baseline SF explains approximately 6 to 7% of the variation in baseline hemoglobin. The results also showed that males who had higher (≥150 g/L) baseline hemoglobin levels had a significantly greater reduction in their posttreatment hemoglobin despite requiring fewer phlebotomies to achieve iron depletion than those who had lower (<150 g/L) baseline hemoglobin, regardless of whether baseline SF was below or above 1000 µg/L. There were no significant differences between hemoglobin subgroups regarding baseline and treatment characteristics, except for transferrin saturation between male subgroups with SF above 1000 µg/L. Similar differences were observed when females with higher (≥138 g/L) baseline hemoglobin were compared with those with lower (<138 g/L) baseline hemoglobin. Conclusion. Dividing C282Y-homozygous patients into just two subgroups according to the degree of baseline SF elevation may obscure important subgroup variations.

  18. Nonintrusive methodology for wellness baseline profiling

    Science.gov (United States)

    Chung, Danny Wen-Yaw; Tsai, Yuh-Show; Miaou, Shaou-Gang; Chang, Walter H.; Chang, Yaw-Jen; Chen, Shia-Chung; Hong, Y. Y.; Chyang, C. S.; Chang, Quan-Shong; Hsu, Hon-Yen; Hsu, James; Yao, Wei-Cheng; Hsu, Ming-Sin; Chen, Ming-Chung; Lee, Shi-Chen; Hsu, Charles; Miao, Lidan; Byrd, Kenny; Chouikha, Mohamed F.; Gu, Xin-Bin; Wang, Paul C.; Szu, Harold

    2007-04-01

    We develop an accumulatively effective and affordable set of smart pair devices to save the exuberant expenditure for the healthcare of aging population, which will not be sustainable when all the post-war baby boomers retire (78 millions will cost 1/5~1/4 GDP in US alone). To design an accessible test-bed for distributed points of homecare, we choose two exemplars of the set to demonstrate the possibility of translation of modern military and clinical know-how, because two exemplars share identically the noninvasive algorithm adapted to the Smart Sensor-pairs for the real world persistent surveillance. Currently, the standard diagnoses for malignant tumors and diabetes disorders are blood serum tests, X-ray CAT scan, and biopsy used sometime in the physical checkup by physicians as cohort-average wellness baselines. The loss of the quality of life in making second careers productive may be caused by the missing of timeliness for correct diagnoses and easier treatments, which contributes to the one quarter of human errors generating the lawsuits against physicians and hospitals, which further escalates the insurance cost and wasteful healthcare expenditure. Such a vicious cycle should be entirely eliminated by building an "individual diagnostic aids (IDA)," similar to the trend of personalized drug, developed from daily noninvasive intelligent databases of the "wellness baseline profiling (WBP)". Since our physiology state undulates diurnally, the Nyquist anti-aliasing theory dictates a minimum twice-a-day sampling of the WBP for the IDA, which must be made affordable by means of noninvasive, unsupervised and unbiased methodology at the convenience of homes. Thus, a pair of military infrared (IR) spectral cameras has been demonstrated for the noninvasive spectrogram ratio test of the spontaneously emitted thermal radiation from a normal human body at 37°C temperature. This invisible self-emission spreads from 3 microns to 12 microns of the radiation wavelengths

  19. 100-D Area technical baseline report

    International Nuclear Information System (INIS)

    Carpenter, R.W.

    1993-01-01

    This document is prepared in support of the 100 Area Environmental Restoration activity at the US Department of Energy's Hanford Site near Richland, Washington. It provides a technical baseline of waste sites located at the 100-D Area. The report is based on an environmental investigation undertaken by the Westinghouse Hanford Company (WHC) History Office in support of the Environmental Restoration Engineering Function and on review and evaluation of numerous Hanford Site current and historical reports, drawings, and photographs, supplemented by site inspections and employee interviews. No intrusive field investigation or sampling was conducted. All Hanford coordinate locations are approximate locations taken from several different maps and drawings of the 100-D Area. Every effort was made to derive coordinate locations for the center of each facility or waste site, except where noted, using standard measuring devices. Units of measure are shown as they appear in reference documents. The 100-D Area is made up of three operable units: 100-DR-1, 100-DR-2, and 100-DR-3. All three are addressed in this report. These operable units include liquid and solid waste disposal sites in the vicinity of, and related to, the 100-D and 100-DR Reactors. A fourth operable unit, 100-HR-3, is concerned with groundwater and is not addressed here. This report describes waste sites which include cribs, trenches, pits, french drains, retention basins, solid waste burial grounds, septic tanks, and drain fields. Each waste site is described separately and photographs are provided where available. A complete list of photographs can be found in Appendix A. A comprehensive environmental summary is not provided here but may be found in Hanford Site National Environmental Policy Act Characterization (Cushing 1988), which describes the geology and soils, meteorology, hydrology, land use, population, and air quality of the area

  20. 1993 baseline solid waste management system description

    International Nuclear Information System (INIS)

    Armacost, L.L.; Fowler, R.A.; Konynenbelt, H.S.

    1994-02-01

    Pacific Northwest Laboratory has prepared this report under the direction of Westinghouse Hanford Company. The report provides an integrated description of the system planned for managing Hanford's solid low-level waste, low-level mixed waste, transuranic waste, and transuranic mixed waste. The primary purpose of this document is to illustrate a collective view of the key functions planned at the Hanford Site to handle existing waste inventories, as well as solid wastes that will be generated in the future. By viewing this system as a whole rather than as individual projects, key facility interactions and requirements are identified and a better understanding of the overall system may be gained. The system is described so as to form a basis for modeling the system at various levels of detail. Model results provide insight into issues such as facility capacity requirements, alternative system operating strategies, and impacts of system changes (i.e., startup dates). This description of the planned Hanford solid waste processing system: defines a baseline system configuration; identifies the entering waste streams to be managed within the system; identifies basic system functions and waste flows; and highlights system constraints. This system description will evolve and be revised as issues are resolved, planning decisions are made, additional data are collected, and assumptions are tested and changed. Out of necessity, this document will also be revised and updated so that a documented system description, which reflects current system planning, is always available for use by engineers and managers. It does not provide any results generated from the many alternatives that will be modeled in the course of analyzing solid waste disposal options; such results will be provided in separate documents.

  1. The very-long-baseline array

    International Nuclear Information System (INIS)

    Kellermann, K.I.; Thompson, A.R.

    1988-01-01

    The development of radio technology in World War II opened a completely new window on the universe. When astronomers turned radio antennas to the heavens, they began to find a previously unknown universe of solar and planetary radio bursts, quasars, pulsars, radio galaxies, giant molecular clouds and cosmic masers. Not only do the radio waves reveal a new world of astronomical phenomena but, because they are much longer than light waves, they are also not as severely distorted by atmospheric turbulence or small imperfections in the telescope. About 25 years ago radio astronomers became aware that they could synthesize a resolution equivalent to that of a large aperture by combining data from smaller radio antennas that are widely separated. The effective aperture size would be about equal to the largest separation between the antennas. The technique is called synthesis imaging and is based on the principles of interferometry. Radio astronomers in the U.S. are now building a synthesis radio telescope called the Very-Long-Baseline Array, or VLBA. With 10 antennas sited across the country from the Virgin Islands to Hawaii, it will synthesize a radio antenna 8,000 kilometers across, nearly the diameter of the earth. The VLBA's angular resolution will be less than a thousandth of an arc-second, about three orders of magnitude better than that of the largest conventional ground-based optical telescopes. Astronomers eagerly await the completion early in the next decade of the VLBA, which is expected, among other things, to give an unprecedentedly clear view into the cores of quasars and galactic nuclei and to reveal details of the processes, thought to be powered by black holes, that drive them.
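
    The quoted sub-milliarcsecond resolution follows from the diffraction relation θ ≈ λ/B. A quick check (the 1 cm observing wavelength is an assumed, representative VLBI band, not a figure from the record):

```python
import math

def resolution_mas(wavelength_m, baseline_m):
    """Interferometer angular resolution, theta ~ lambda / B,
    converted from radians to milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return theta_rad * math.degrees(1) * 3600.0 * 1000.0

# 1 cm wavelength on an 8,000 km synthesized aperture:
print(round(resolution_mas(0.01, 8.0e6), 2))  # prints 0.26
```

    About a quarter of a milliarcsecond, consistent with "less than a thousandth of an arc-second."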

  2. Drawing a baseline in aesthetic quality assessment

    Science.gov (United States)

    Rubio, Fernando; Flores, M. Julia; Puerta, Jose M.

    2018-04-01

    Aesthetic classification of images is an inherently subjective task. There is no validated collection of images/photographs labeled by experts as having good or bad quality. Nowadays, the closest approximation is to use databases of photos where a group of users rates each image. Hence, there is not a unique good/bad label but a rating distribution given by the users' votes. Due to this peculiarity, binary aesthetic classification cannot be stated as a supervised problem as directly as other computer vision tasks. Recent literature follows an approach in which researchers take the average rating from the users for each image and establish an arbitrary threshold to determine its class or label. In this way, images above the threshold are considered of good quality, while images below the threshold are seen as bad quality. This paper analyzes the current literature and reviews the attributes able to represent an image, differentiating three families: specific, general and deep features. Among those that have proved most competitive, we have selected a representative subset, our main goal being to establish a clear experimental framework. Finally, once the features were selected, we have used them on the full AVA dataset. We remark that for validation we report not only accuracy values, which are not very informative in this case, but also metrics able to evaluate classification power within imbalanced datasets. We have conducted a series of experiments in which distinct well-known classifiers are learned from the data. In this way, this paper provides what we consider valuable and valid baseline results for the given problem.
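
    The thresholding convention described in this record can be sketched directly. The threshold of 5 on a 1-10 voting scale is one common but still arbitrary choice, and the vote counts below are invented for illustration:

```python
import numpy as np

def label_by_mean_rating(ratings_per_image, threshold=5.0):
    """Binarize crowd-voted aesthetic quality: average each image's
    user ratings and call the image 'good' (1) if the mean reaches
    an arbitrary threshold, else 'bad' (0)."""
    means = np.array([np.mean(r) for r in ratings_per_image])
    return (means >= threshold).astype(int)

# Three images with user votes on a 1-10 scale (invented numbers):
votes = [[7, 8, 6, 9], [3, 4, 2], [5, 5, 6, 4]]
print(label_by_mean_rating(votes).tolist())  # prints [1, 0, 1]
```

    Since labels produced this way are typically imbalanced, accuracy alone says little; per-class recall or balanced accuracy is more informative, as the record argues.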

  3. 10 CFR 850.20 - Baseline beryllium inventory.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Baseline beryllium inventory. 850.20 Section 850.20 Energy... Baseline beryllium inventory. (a) The responsible employer must develop a baseline inventory of the... inventory, the responsible employer must: (1) Review current and historical records; (2) Interview workers...

  4. A study of man made radioactivity baseline in dietary materials

    International Nuclear Information System (INIS)

    de la Paz, L.; Estacio, J.; Palattao, M.V.; Anden, A.

    1986-01-01

    This paper describes the man-made radioactivity baseline using literature data from various countries where data are available. The years 1979-1985 were chosen as the baseline period for the following foods: milk (fresh and powdered), meat and meat products, cereals, fruits, coffee and tea, fish and vegetables. Pre- and post-Chernobyl baseline data are given. (ELC). 21 figs; 17 refs

  5. 40 CFR 80.92 - Baseline auditor requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Baseline auditor requirements. 80.92... (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Anti-Dumping § 80.92 Baseline auditor requirements. (a... determination methodology, resulting baseline fuel parameter, volume and emissions values verified by an auditor...

  6. Baseline axle load survey in Malawi - 2014

    CSIR Research Space (South Africa)

    Roux, M

    2015-07-01

    Full Text Available of 50.4%. The average overloaded mass on the 1 356 overloaded vehicles was 4 264 kg, representing an average degree of overloading of 26.1%. Weighing data from 4 of the 5 permanent weighbridges in Malawi were also analysed to compare the extent and degree...

  7. Stability analysis of geomagnetic baseline data obtained at Cheongyang observatory in Korea

    Directory of Open Access Journals (Sweden)

    S. M. Amran

    2017-07-01

    Full Text Available The stability of the baselines produced by Cheongyang (CYG) observatory over the period 2014 to 2016 is analysed. Step heights of more than 5 nT were found in the H and Z components in 2014 and 2015, due to magnetic noise in the absolute-measurement hut. In addition, a periodic modulation observed in the H and Z baseline curves was related to an annual temperature variation of about 20 °C in the fluxgate magnetometer hut. Improvement in data quality was evidenced by a small dispersion between successive measurements from June 2015 to the end of 2016. Moreover, the baseline was further improved by correcting the discontinuities in the H and Z baselines.

  8. Probing Neutrino Properties with Long-Baseline Neutrino Beams

    International Nuclear Information System (INIS)

    Marino, Alysia

    2015-01-01

    This final report covers an Early Career Award grant that began on April 15, 2010 and concluded on April 14, 2015. Alysia Marino's research is focussed on making precise measurements of neutrino properties using intense accelerator-generated neutrino beams. As part of this grant, she is collaborating on the Tokai-to-Kamioka (T2K) long-baseline neutrino experiment, currently taking data in Japan, and on the Deep Underground Neutrino Experiment (DUNE) design effort for a future Long-Baseline Neutrino Facility (LBNF) in the US. She is also a member of the NA61/SHINE particle production experiment at CERN, but as that effort is supported by other funds, it will not be discussed further here. T2K was designed to search for the disappearance of muon neutrinos (ν_μ) and the appearance of electron neutrinos (ν_e), using a muon neutrino beam that travels 295 km across Japan towards the Super-Kamiokande detector. In 2011 T2K first reported indications of ν_e appearance, a previously unobserved mode of neutrino oscillations. In the past year, T2K has published a combined analysis of ν_μ disappearance and ν_e appearance, and began taking data with a beam of anti-neutrinos, instead of neutrinos, to search for hints of violation of the CP symmetry of the universe. The proposed DUNE experiment has similar physics goals to T2K, but will be much more sensitive due to its more massive detectors and a new higher-intensity neutrino beam. This effort will be a very high-priority particle physics project in the US over the next decade.

  9. Text Clustering Algorithm Based on Random Cluster Core

    Directory of Open Access Journals (Sweden)

    Huang Long-Jun

    2016-01-01

    Full Text Available Clustering has become a popular text mining algorithm, but huge data volumes place higher demands on the accuracy and performance of text mining. In view of the performance bottleneck of traditional text clustering algorithms, this paper proposes a text clustering algorithm with random features. It is a clustering algorithm based on text density that also uses neighboring heuristic rules; the concept of a random cluster core is introduced, which effectively reduces the complexity of the distance calculations.
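The abstract leaves the algorithm's details open; a minimal sketch of the "random cluster core" idea, assuming cosine distance over term-weight vectors and a fixed neighborhood radius (both assumptions, not taken from the paper), might look like:

```python
import math
import random

def cosine_dist(a, b):
    """1 - cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb + 1e-12)

def random_core_cluster(vectors, n_cores=2, radius=0.5, seed=0, cores=None):
    """Assign each vector to the nearest of a few randomly chosen core
    documents, but only if it lies within `radius`; everything farther
    away is left unassigned (-1). Distances are computed only against the
    cores, which is what cuts down the pairwise-distance cost."""
    rng = random.Random(seed)
    if cores is None:
        cores = rng.sample(range(len(vectors)), n_cores)
    labels = []
    for v in vectors:
        d, best = min((cosine_dist(v, vectors[c]), i) for i, c in enumerate(cores))
        labels.append(best if d <= radius else -1)
    return labels
```

Compared with full pairwise clustering, each document is measured against only `n_cores` cores, which is the kind of distance-computation saving the abstract claims.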

  10. A one-way text messaging intervention for obesity.

    Science.gov (United States)

    Ahn, Ahleum; Choi, Jaekyung

    2016-04-01

    Worldwide, there has been a startling increase in the number of people who are obese or overweight. Obesity increases the risk of cardiovascular disease and overall mortality. Mobile phone messaging is an important means of human communication globally. Because the mobile phone can be used anywhere at any time, mobile phone messaging has the potential to manage obesity. We investigated the effectiveness of a one-way text messaging intervention for obesity. Participants' body mass index and waist circumference were measured at the beginning of the programme and again after 12 weeks. The text message group received text messages about exercise, dietary intake, and general information about obesity three times a week, while the control group did not receive any text messages from the study. Of the 80 participants, 25 subjects in the text message group and 29 participants in the control group completed the study. After adjusting for baseline body mass index, the body mass index was significantly lower in the text message group than in the control group (27.9 vs. 28.3; p = 0.02). After adjusting for baseline waist circumference, the difference in waist circumference between the text message group and the control group was not significant (93.4 vs. 94.6; p = 0.13). The one-way text messaging intervention was a simple and effective way to manage obesity. It may be a useful method for lifestyle modification in obese subjects. © The Author(s) 2015.

  11. Physics with a very long neutrino factory baseline

    International Nuclear Information System (INIS)

    Gandhi, Raj; Winter, Walter

    2007-01-01

    We discuss the neutrino oscillation physics of a very long neutrino factory baseline over a broad range of lengths (between 6000 km and 9000 km), centered on the 'magic baseline' (∼7500 km) where correlations with the leptonic CP phase are suppressed by matter effects. Since the magic baseline depends only on the density, we study the impact of matter density profile effects and density uncertainties over this range, and the impact of detector locations off the optimal baseline. We find that the optimal constant density describing the physics over this entire baseline range is about 5% higher than the average matter density. This implies that the magic baseline is significantly shorter than previously inferred. However, while a single detector optimization requires fine-tuning of the (very long) baseline length, its combination with a near detector at a shorter baseline is much less sensitive to the far detector location and to uncertainties in the matter density. In addition, we point out different applications of this baseline which go beyond its excellent correlation and degeneracy resolution potential. We demonstrate that such a long baseline assists in the improvement of the θ13 precision and in the resolution of the octant degeneracy. Moreover, we show that the neutrino data from such a baseline could be used to extract the matter density along the profile up to 0.24% at 1σ for large sin²2θ13, providing a useful discriminator between different geophysical models.

  12. Text2Floss: the feasibility and acceptability of a text messaging intervention to improve oral health behavior and knowledge.

    Science.gov (United States)

    Hashemian, Tony S; Kritz-Silverstein, Donna; Baker, Ryan

    2015-01-01

    Text messaging is useful for promoting numerous health-related behaviors. The Text2Floss Study examines the feasibility and utility of a 7-day text messaging intervention to improve oral health knowledge and behavior in mothers of young children. Mothers were recruited from a private practice and a community clinic. Of 156 mothers enrolled, 129 randomized into text (n = 60) and control (n = 69) groups completed the trial. Participants in the text group received text messages for 7 days, asking about flossing and presenting oral health information. Oral health behaviors and knowledge were surveyed pre- and post-intervention. At baseline, there were no differences between text and control group mothers in knowledge and behaviors (P > 0.10). Post-intervention, text group mothers flossed more (P = 0.01) and had higher total (P = 0.0006) and specific oral health knowledge scores. Text messages were accepted and perceived as useful. Mothers receiving text messages improved their own oral health behaviors and knowledge as well as their behaviors regarding their children's oral health. Text messaging represents a viable method to improve oral health behaviors and knowledge. Its high acceptance may make it useful for preventing oral disease. © 2014 American Association of Public Health Dentistry.

  13. Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.

    Science.gov (United States)

    Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine

    2018-04-05

    Thanks to a reasonable cost and a simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new peak extraction algorithm operating on the raw spectrum. With this method the spectrum baseline and the spectrum peaks are processed jointly. The approach relies on an additive model consisting of a smooth baseline part plus a sparse peak list convolved with a known peak shape. The model is then fitted under a Gaussian noise model. The proposed method is well suited to processing low-resolution spectra with an important baseline and unresolved peaks. We developed a new peak deconvolution procedure. The paper describes the method derivation and discusses some of its interpretations. The algorithm is then described in pseudo-code form, where the required optimization procedure is detailed. For synthetic data the method is compared to a more conventional approach. The new method reduces artifacts caused by the usual two-step procedure of baseline removal followed by peak extraction. Finally, some results on real linear MALDI-ToF spectra are provided. We introduced a new method for peak picking, where peak deconvolution and baseline computation are performed jointly. On simulated data we showed that this global approach performs better than a classical one where baseline and peaks are processed sequentially. A dedicated experiment was conducted on real spectra: a collection of spectra of spiked proteins was acquired and then analyzed. Better performance of the proposed method, in terms of accuracy and reproducibility, was observed and validated by an extended statistical analysis.
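The additive model in the abstract (a smooth baseline plus a sparse peak list convolved with a known peak shape, under Gaussian noise) lends itself to a joint fit by alternating minimization. The sketch below is a generic reconstruction of that idea, not the authors' algorithm: it uses a Whittaker-style second-difference penalty for the baseline and an ISTA soft-thresholding step for nonnegative peak amplitudes, with illustrative regularization weights.

```python
import numpy as np

def joint_peak_baseline_fit(y, g, lam_smooth=1e5, lam_sparse=0.1, n_iter=300):
    """Jointly estimate a smooth baseline b and a sparse nonnegative peak
    list s under the additive model y ~= b + conv(s, g) + Gaussian noise.

    y: raw spectrum (1-D array); g: known peak shape (odd-length kernel).
    Alternates an exact baseline solve with one proximal-gradient step on s.
    """
    n = len(y)
    # second-difference operator: penalizes curvature of the baseline
    D = np.diff(np.eye(n), n=2, axis=0)
    A = np.eye(n) + lam_smooth * (D.T @ D)
    s = np.zeros(n)
    # conservative ISTA step size: 1 / ||K||^2 <= 1 / (sum|g|)^2
    step = 1.0 / (np.sum(np.abs(g)) ** 2)
    for _ in range(n_iter):
        # exact baseline update given the current peaks (Whittaker smoother)
        b = np.linalg.solve(A, y - np.convolve(s, g, mode="same"))
        # one ISTA step on the sparse peak amplitudes (soft-threshold, nonneg)
        r = np.convolve(s, g, mode="same") + b - y
        grad = np.convolve(r, g[::-1], mode="same")   # K^T r via correlation
        s = np.maximum(s - step * grad - step * lam_sparse, 0.0)
    return b, s
```

On a synthetic spectrum (linear baseline plus two Gaussian peaks), the alternation splits the signal into a smooth baseline and a sparse peak list, avoiding the artifacts of removing the baseline first and extracting peaks second.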

  14. Aurora Mine project - historical resources baseline study

    International Nuclear Information System (INIS)

    Reeves, B.

    1996-01-01

    This volume contains the results of a baseline archaeological study of the Aurora Mine Project local study area. It was compiled in support of Syncrude Canada's application to the Alberta Energy and Utilities Board (AEUB) and Alberta Environmental Protection to construct and operate its new Aurora Mine, located northeast of Fort McMurray, Alberta. The objective of this study was to compile, consolidate, review and analyze the reports for the area produced over the past 22 years in and adjacent to the local study area (LSA), particularly those of the existing Syncrude projects and of the previously proposed Alsands and OSLO projects. The report is a summary of the human history of the area, including pre-contact native archaeological sites, past archaeological studies, the Hinterland site pattern, post-contact native traditional sites, oil sands exploration/development related sites and paleontological sites in the subject area and areas adjacent to it. 150 refs., 5 tabs., 43 figs

  15. Intrafractional baseline drift during free breathing breast cancer radiation therapy.

    Science.gov (United States)

    Jensen, Christer Andre; Acosta Roa, Ana María; Lund, Jo-Åsmund; Frengen, Jomar

    2017-06-01

    Intrafraction motion in breast cancer radiation therapy (BCRT) has not yet been thoroughly described in the literature. It has been observed that baseline drift occurs as part of the intrafraction motion. This study aims to measure baseline drift and its incidence in free-breathing BCRT patients using an in-house developed laser system for tracking the position of the sternum. Baseline drift was monitored in 20 right-sided breast cancer patients receiving free-breathing 3D-conformal RT using an in-house developed laser system which measures one-dimensional distance in the AP direction. A total of 357 patient respiratory traces from treatment sessions were logged and analysed. Baseline drift was compared to the patient positioning error measured from in-field portal imaging. The mean overall baseline drift at the end of treatment sessions was -1.3 mm for the patient population. Relatively small baseline drift was observed during the first fraction; however, it was clearly detected from the second fraction onwards. Over 90% of the baseline drift occurs during the first 3 min of each treatment session. The baseline drift rate for the population was -0.5 ± 0.2 mm/min in the posterior direction during the first minute after localization. Only 4% of the treatment sessions had a baseline drift of 5 mm or more at 5 min, all towards the posterior direction. Mean baseline drift in the posterior direction in free-breathing BCRT was observed in 18 of 20 patients over all treatment sessions. This study shows that there is a substantial baseline drift in free-breathing BCRT patients. No clear baseline drift was observed during the first treatment session; however, baseline drift was markedly present during the rest of the sessions. Intrafraction motion due to baseline drift should be accounted for in margin calculations.

  16. Quality baseline of the castilla blackberry (Rubus glaucus in its food chain

    Directory of Open Access Journals (Sweden)

    Fernanda Iza

    2016-09-01

    Full Text Available A proposal for improving the performance of the food chain of the castilla blackberry (Rubus glaucus), in order to boost its productivity, can only start from a baseline or situational diagnosis of the quality of the fruit, from which the main points of improvement can be identified. The food chain of the fruit comprises three stages: harvest, post-harvest (storage and transport) and marketing or sale. The diagnosis proceeded backwards through the chain: the most representative producer was identified, along with the supply route from traders to the point of sale. The quality of the fruit was evaluated through chemical and physical characterization at each stage. Weight losses were evident at all stages, with slight, non-significant changes of color from a bright bluish-red hue at harvest to an opaque bluish red at the point of sale, owing to the short cycle time and the non-climacteric character of the fruit. However, across the stages of collection, storage, transportation and sale there were significant changes in the maturity indices, namely an increase in sugars, a decrease in pH and an increase in acidity. The results indicate that the fruit changed its physicochemical characteristics during the stages of the food chain, affecting its productivity.

  17. Spent Nuclear Fuel Project technical baseline document. Fiscal year 1995: Volume 1, Baseline description

    International Nuclear Information System (INIS)

    Womack, J.C.; Cramond, R.; Paedon, R.J.

    1995-01-01

    This document is a revision of WHC-SD-SNF-SD-002 and is issued to support the individual projects that make up the Spent Nuclear Fuel Project in the lower-tier functions, requirements, interfaces, and technical baseline items. It presents results of engineering analyses since September 1994. The mission of the SNFP on the Hanford site is to provide safe, economic, and environmentally sound management of Hanford SNF in a manner that stages it to final disposition. This particularly involves K Basin fuel, although other SNF is involved as well.

  18. Primary School Text Comprehension Predicts Mathematical Word Problem-Solving Skills in Secondary School

    Science.gov (United States)

    Björn, Piia Maria; Aunola, Kaisa; Nurmi, Jari-Erik

    2016-01-01

    This longitudinal study aimed to investigate the extent to which primary school text comprehension predicts mathematical word problem-solving skills in secondary school among Finnish students. The participants were 224 fourth graders (9-10 years old at the baseline). The children's text-reading fluency, text comprehension and basic calculation…

  19. Instantaneous Real-Time Kinematic Decimeter-Level Positioning with BeiDou Triple-Frequency Signals over Medium Baselines

    Directory of Open Access Journals (Sweden)

    Xiyang He

    2015-12-01

    Full Text Available Many applications, such as marine navigation and land vehicle location, require real-time precise positioning under medium or long baseline conditions. In this contribution, we develop a model of real-time kinematic decimeter-level positioning with BeiDou Navigation Satellite System (BDS) triple-frequency signals over medium distances. The ambiguities of two extra-wide-lane (EWL) combinations are fixed first, and then a wide-lane (WL) combination is reformed from the two EWL combinations for positioning. Theoretical and empirical analyses of the ambiguity fixing rate and the positioning accuracy of the presented method are given. The results indicate that the ambiguity fixing rate can be above 98% when using BDS medium baseline observations, which is much higher than that of the dual-frequency Hatch-Melbourne-Wübbena (HMW) method. As for positioning accuracy, decimeter-level accuracy can be achieved with this method, which is comparable to that of the carrier-smoothed code differential positioning method. A signal interruption simulation experiment indicates that the proposed method can realize fast high-precision positioning, whereas the carrier-smoothed code differential positioning method needs several hundred seconds to obtain high-precision results. We conclude that a relatively high accuracy and high fixing rate can be achieved with the triple-frequency WL method using single-epoch observations, a significant advantage over the traditional carrier-smoothed code differential positioning method.
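The extra-wide-lane and wide-lane combinations rest on simple frequency arithmetic: an integer combination (i, j, k) of the three carriers behaves like a virtual signal at frequency i·f1 + j·f2 + k·f3, and its ambiguity spacing is the wavelength of that combined frequency. A quick sketch with the published BDS-2 B1/B2/B3 carrier frequencies; the specific coefficient choices shown are the commonly used EWL and WL combinations, assumed here for illustration rather than taken from the paper:

```python
C = 299_792_458.0  # speed of light, m/s
# published BDS-2 carrier frequencies, Hz
F1 = 1561.098e6    # B1
F2 = 1207.140e6    # B2
F3 = 1268.520e6    # B3

def combo_wavelength(i, j, k):
    """Wavelength of the (i, j, k) integer carrier-phase combination."""
    f = i * F1 + j * F2 + k * F3
    return C / abs(f)

# extra-wide-lane (0, -1, 1): ~4.88 m wavelength, so its integer ambiguity
# is easy to fix in a single epoch even with noisy code observations;
# wide-lane (1, -1, 0): ~0.85 m, the combination reformed for positioning.
```

The longer the combined wavelength relative to the observation noise, the easier single-epoch ambiguity fixing becomes, which is why the EWL ambiguities are fixed first and the WL combination is then reformed from them.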

  20. Use of Popular Culture Texts in Mother Tongue Education

    Science.gov (United States)

    Bal, Mazhar

    2018-01-01

    The aim of this study was to associate popular culture texts with Turkish language lessons of middle school students. For this purpose, a model was proposed and a suitable curriculum was prepared for this model. It was aimed to determine how this program, which was the result of associating popular culture texts with Turkish language lesson…

  1. A Relational Reasoning Approach to Text-Graphic Processing

    Science.gov (United States)

    Danielson, Robert W.; Sinatra, Gale M.

    2017-01-01

    We propose that research on text-graphic processing could be strengthened by the inclusion of relational reasoning perspectives. We briefly outline four aspects of relational reasoning: "analogies," "anomalies," "antinomies", and "antitheses". Next, we illustrate how text-graphic researchers have been…

  2. Social Media Text Classification by Enhancing Well-Formed Text Trained Model

    Directory of Open Access Journals (Sweden)

    Phat Jotikabukkana

    2016-09-01

    Full Text Available Social media are a powerful communication tool in our era of digital information. The large amount of user-generated data is a useful novel source of data, even though it is not easy to extract the treasures from this vast and noisy trove. Since classification is an important part of text mining, many techniques have been proposed to classify this kind of information. We developed an effective technique for social media text classification by semi-supervised learning, utilizing an online news source consisting of well-formed text. The computer first automatically extracts news categories, well categorized by publishers, as classes for topic classification. A bag of words taken from news articles provides the initial keywords related to each category in the form of word vectors. The principal task is to retrieve a set of new productive keywords. Term Frequency-Inverse Document Frequency (TF-IDF) weighting and the Word Article Matrix (WAM) are used as the main methods. A modification of WAM is recomputed until it becomes the most effective model for social media text classification. The key success factor was enhancing our model with effective keywords from social media. A promising result of 99.50% accuracy was achieved, with precision, recall, and F-measure all above 98.5% after updating the model three times.
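The TF-IDF weighting step can be sketched generically as follows; this is a textbook implementation used for illustration, not the authors' WAM pipeline, and the toy category structure in the usage example is invented:

```python
import math
from collections import Counter

def tfidf_per_category(docs_by_cat, top_k=2):
    """Rank candidate keywords per category by aggregated TF-IDF.

    docs_by_cat: {category: [token list, ...]} built from well-formed news
    text, where the publisher's category supplies the class label.
    Returns {category: [top_k keywords]}.
    """
    all_docs = [d for docs in docs_by_cat.values() for d in docs]
    n_docs = len(all_docs)
    df = Counter()                       # document frequency of each term
    for d in all_docs:
        df.update(set(d))
    keywords = {}
    for cat, docs in docs_by_cat.items():
        tf = Counter()                   # term frequency within the category
        for d in docs:
            tf.update(d)
        score = {w: tf[w] * math.log(n_docs / df[w]) for w in tf}
        keywords[cat] = sorted(score, key=score.get, reverse=True)[:top_k]
    return keywords
```

In the abstract's pipeline these category keyword vectors seed the word-article matrix, which is then iteratively recomputed with productive keywords harvested from the social media text itself.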

  3. Bilingual Text4Walking Food Service Employee Intervention Pilot Study.

    Science.gov (United States)

    Buchholz, Susan Weber; Ingram, Diana; Wilbur, JoEllen; Fogg, Louis; Sandi, Giselle; Moss, Angela; Ocampo, Edith V

    2016-06-01

    Half of all adults in the United States do not meet the level of recommended aerobic physical activity. Physical activity interventions are now being conducted in the workplace. Accessible technology, in the form of widespread usage of cell phones and text messaging, is available for promoting physical activity. The purposes of this study, which was conducted in the workplace, were to determine (1) the feasibility of implementing a bilingual 12-week Text4Walking intervention and (2) the effect of the Text4Walking intervention on change in physical activity and health status in a food service employee population. Before conducting the study reported here, the Text4Walking research team developed a database of motivational physical activity text messages in English. Because Hispanic or Latino adults compose one-quarter of all adults employed in the food service industry, the Text4Walking team translated the physical activity text messages into Spanish. This pilot study was guided by the Physical Activity Health Promotion Framework and used a 1-group 12-week pre- and posttest design with food service employees who self-reported as being sedentary. The aim of the study was to increase the number of daily steps over the baseline by 3000 steps. Three physical activity text messages were delivered weekly. In addition, participants received 3 motivational calls during the study. SPSS version 19.0 and R 3.0 were used to perform the data analysis. There were 33 employees who participated in the study (57.6% female), with a mean age of 43.7 years (SD 8.4). The study included 11 Hispanic or Latino participants, 8 of whom requested that the study be delivered in Spanish. There was a 100% retention rate in the study. At baseline, the participants walked 102 (SD 138) minutes/day (per self-report). This rate increased significantly (P=.008) to 182 (SD 219) minutes/day over the course of the study. 
The participants had a baseline mean of 10,416 (SD 5097) steps, which also increased

  4. Performance of JPEG Image Transmission Using Proposed Asymmetric Turbo Code

    Directory of Open Access Journals (Sweden)

    Siddiqi Mohammad Umar

    2007-01-01

    Full Text Available This paper gives the results of a simulation study on the performance of JPEG image transmission over AWGN and Rayleigh fading channels using typical and proposed asymmetric turbo codes for error control coding. The baseline JPEG algorithm is used to compress a QCIF "Suzie" image. The recursive systematic convolutional (RSC) encoder with generator polynomials (13/11) in decimal, together with the 3G interleaver, is used for the typical WCDMA and CDMA2000 turbo codes. The proposed asymmetric turbo code uses generator polynomials (13/11; 13/9) in decimal and a code-matched interleaver. The effect of the interleaver in the proposed asymmetric turbo code is studied using weight distribution and simulation. The simulation results and the performance bound for the proposed asymmetric turbo code, for the given frame length and code rate with the Log-MAP decoder over the AWGN channel, are compared with the typical system. From the simulation results, it is observed that image transmission using the proposed asymmetric turbo code performs better than with the typical system.

  5. Mining consumer health vocabulary from community-generated text.

    Science.gov (United States)

    Vydiswaran, V G Vinod; Mei, Qiaozhu; Hanauer, David A; Zheng, Kai

    2014-01-01

    Community-generated text corpora can be a valuable resource for extracting consumer health vocabulary (CHV) and linking it to professional terminologies and alternative variants. In this research, we propose a pattern-based text-mining approach to identify pairs of CHV and professional terms from Wikipedia, a large text corpus created and maintained by the community. A novel measure, leveraging the ratio of frequency of occurrence, was used to differentiate consumer terms from professional terms. We empirically evaluated the applicability of this approach using a large data sample consisting of MEDLINE abstracts and all posts from an online health forum, MedHelp. The results show that the proposed approach is able to identify synonymous pairs and to label each term as either a consumer or a professional term with high accuracy. We conclude that the proposed approach has great potential to produce a high-quality CHV that improves the performance of computational applications in processing consumer-generated health text.
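The frequency-ratio idea can be sketched with a plain log-ratio of relative term frequencies between a consumer corpus and a professional corpus; this illustrative measure, its smoothing, and the margin are assumptions, not the paper's exact novel measure:

```python
import math
from collections import Counter

def label_terms(consumer_tokens, professional_tokens, margin=1.0):
    """Label each term 'consumer', 'professional', or 'neutral' by the
    log-ratio of its relative frequency in the two corpora."""
    c = Counter(consumer_tokens)
    p = Counter(professional_tokens)
    n_c, n_p = sum(c.values()), sum(p.values())
    labels = {}
    for term in set(c) | set(p):
        # add-one smoothing so terms unseen on one side still get a score
        ratio = math.log(((c[term] + 1) / (n_c + 1)) /
                         ((p[term] + 1) / (n_p + 1)))
        if ratio > margin:
            labels[term] = "consumer"
        elif ratio < -margin:
            labels[term] = "professional"
        else:
            labels[term] = "neutral"
    return labels
```

A term like "heart attack" would score high on the consumer side while its professional synonym "myocardial infarction" scores high on the professional side, which is exactly the separation the ratio measure is meant to capture.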

  6. Updated baseline for a staged Compact Linear Collider

    CERN Document Server

    Boland, M J; Giansiracusa, P J; Lucas, T G; Rassool, R P; Balazs, C; Charles, T K; Afanaciev, K; Emeliantchik, I; Ignatenko, A; Makarenko, V; Shumeiko, N; Patapenka, A; Zhuk, I; Abusleme Hoffman, A C; Diaz Gutierrez, M A; Gonzalez, M Vogel; Chi, Y; He, X; Pei, G; Pei, S; Shu, G; Wang, X; Zhang, J; Zhao, F; Zhou, Z; Chen, H; Gao, Y; Huang, W; Kuang, Y P; Li, B; Li, Y; Shao, J; Shi, J; Tang, C; Wu, X; Ma, L; Han, Y; Fang, W; Gu, Q; Huang, D; Huang, X; Tan, J; Wang, Z; Zhao, Z; Laštovička, T; Uggerhoj, U; Wistisen, T N; Aabloo, A; Eimre, K; Kuppart, K; Vigonski, S; Zadin, V; Aicheler, M; Baibuz, E; Brücken, E; Djurabekova, F; Eerola, P; Garcia, F; Haeggström, E; Huitu, K; Jansson, V; Karimaki, V; Kassamakov, I; Kyritsakis, A; Lehti, S; Meriläinen, A; Montonen, R; Niinikoski, T; Nordlund, K; Österberg, K; Parekh, M; Törnqvist, N A; Väinölä, J; Veske, M; Farabolini, W; Mollard, A; Napoly, O; Peauger, F; Plouin, J; Bambade, P; Chaikovska, I; Chehab, R; Davier, M; Kaabi, W; Kou, E; LeDiberder, F; Pöschl, R; Zerwas, D; Aimard, B; Balik, G; Baud, J-P; Blaising, J-J; Brunetti, L; Chefdeville, M; Drancourt, C; Geoffroy, N; Jacquemier, J; Jeremie, A; Karyotakis, Y; Nappa, J M; Vilalte, S; Vouters, G; Bernard, A; Peric, I; Gabriel, M; Simon, F; Szalay, M; van der Kolk, N; Alexopoulos, T; Gazis, E N; Gazis, N; Ikarios, E; Kostopoulos, V; Kourkoulis, S; Gupta, P D; Shrivastava, P; Arfaei, H; Dayyani, M K; Ghasem, H; Hajari, S S; Shaker, H; Ashkenazy, Y; Abramowicz, H; Benhammou, Y; Borysov, O; Kananov, S; Levy, A; Levy, I; Rosenblat, O; D'Auria, G; Di Mitri, S; Abe, T; Aryshev, A; Higo, T; Makida, Y; Matsumoto, S; Shidara, T; Takatomi, T; Takubo, Y; Tauchi, T; Toge, N; Ueno, K; Urakawa, J; Yamamoto, A; Yamanaka, M; Raboanary, R; Hart, R; van der Graaf, H; Eigen, G; Zalieckas, J; Adli, E; Lillestøl, R; Malina, L; Pfingstner, J; Sjobak, K N; Ahmed, W; Asghar, M I; Hoorani, H; Bugiel, S; Dasgupta, R; Firlej, M; Fiutowski, T A; Idzik, M; Kopec, M; Kuczynska, M; Moron, J; 
Swientek, K P; Daniluk, W; Krupa, B; Kucharczyk, M; Lesiak, T; Moszczynski, A; Pawlik, B; Sopicki, P; Wojtoń, T; Zawiejski, L; Kalinowski, J; Krawczyk, M; Żarnecki, A F; Firu, E; Ghenescu, V; Neagu, A T; Preda, T; Zgura, I-S; Aloev, A; Azaryan, N; Budagov, J; Chizhov, M; Filippova, M; Glagolev, V; Gongadze, A; Grigoryan, S; Gudkov, D; Karjavine, V; Lyablin, M; Olyunin, A; Samochkine, A; Sapronov, A; Shirkov, G; Soldatov, V; Solodko, A; Solodko, E; Trubnikov, G; Tyapkin, I; Uzhinsky, V; Vorozhtov, A; Levichev, E; Mezentsev, N; Piminov, P; Shatilov, D; Vobly, P; Zolotarev, K; Bozovic-Jelisavcic, I; Kacarevic, G; Lukic, S; Milutinovic-Dumbelovic, G; Pandurovic, M; Iriso, U; Perez, F; Pont, M; Trenado, J; Aguilar-Benitez, M; Calero, J; Garcia-Tabares, L; Gavela, D; Gutierrez, J L; Lopez, D; Toral, F; Moya, D; Ruiz-Jimeno, A; Vila, I; Argyropoulos, T; Blanch Gutierrez, C; Boronat, M; Esperante, D; Faus-Golfe, A; Fuster, J; Fuster Martinez, N; Galindo Muñoz, N; García, I; Giner Navarro, J; Ros, E; Vos, M; Brenner, R; Ekelöf, T; Jacewicz, M; Ögren, J; Olvegård, M; Ruber, R; Ziemann, V; Aguglia, D; Alipour Tehrani, N; Aloev, A; Andersson, A; Andrianala, F; Antoniou, F; Artoos, K; Atieh, S; Ballabriga Sune, R; Barnes, M J; Barranco Garcia, J; Bartosik, H; Belver-Aguilar, C; Benot Morell, A; Bett, D R; Bettoni, S; Blanchot, G; Blanco Garcia, O; Bonnin, X A; Brunner, O; Burkhardt, H; Calatroni, S; Campbell, M; Catalan Lasheras, N; Cerqueira Bastos, M; Cherif, A; Chevallay, E; Constance, B; Corsini, R; Cure, B; Curt, S; Dalena, B; Dannheim, D; De Michele, G; De Oliveira, L; Deelen, N; Delahaye, J P; Dobers, T; Doebert, S; Draper, M; Duarte Ramos, F; Dubrovskiy, A; Elsener, K; Esberg, J; Esposito, M; Fedosseev, V; Ferracin, P; Fiergolski, A; Foraz, K; Fowler, A; Friebel, F; Fuchs, J-F; Fuentes Rojas, C A; Gaddi, A; Garcia Fajardo, L; Garcia Morales, H; Garion, C; Gatignon, L; Gayde, J-C; Gerwig, H; Goldblatt, A N; Grefe, C; Grudiev, A; Guillot-Vignot, F G; Gutt-Mostowy, M L; 
Hauschild, M; Hessler, C; Holma, J K; Holzer, E; Hourican, M; Hynds, D; Inntjore Levinsen, Y; Jeanneret, B; Jensen, E; Jonker, M; Kastriotou, M; Kemppinen, J M K; Kieffer, R B; Klempt, W; Kononenko, O; Korsback, A; Koukovini Platia, E; Kovermann, J W; Kozsar, C-I; Kremastiotis, I; Kulis, S; Latina, A; Leaux, F; Lebrun, P; Lefevre, T; Linssen, L; Llopart Cudie, X; Maier, A A; Mainaud Durand, H; Manosperti, E; Marelli, C; Marin Lacoma, E; Martin, R; Mazzoni, S; Mcmonagle, G; Mete, O; Mether, L M; Modena, M; Münker, R M; Muranaka, T; Nebot Del Busto, E; Nikiforou, N; Nisbet, D; Nonglaton, J-M; Nuiry, F X; Nürnberg, A; Olvegard, M; Osborne, J; Papadopoulou, S; Papaphilippou, Y; Passarelli, A; Patecki, M; Pazdera, L; Pellegrini, D; Pepitone, K; Perez, F; Perez Codina, E; Perez Fontenla, A; Persson, T H B; Petrič, M; Pitters, F; Pittet, S; Plassard, F; Rajamak, R; Redford, S; Renier, Y; Rey, S F; Riddone, G; Rinolfi, L; Rodriguez Castro, E; Roloff, P; Rossi, C; Rude, V; Rumolo, G; Sailer, A; Santin, E; Schlatter, D; Schmickler, H; Schulte, D; Shipman, N; Sicking, E; Simoniello, R; Skowronski, P K; Sobrino Mompean, P; Soby, L; Sosin, M P; Sroka, S; Stapnes, S; Sterbini, G; Ström, R; Syratchev, I; Tecker, F; Thonet, P A; Timeo, L; Timko, H; Tomas Garcia, R; Valerio, P; Vamvakas, A L; Vivoli, A; Weber, M A; Wegner, R; Wendt, M; Woolley, B; Wuensch, W; Uythoven, J; Zha, H; Zisopoulos, P; Benoit, M; Vicente Barreto Pinto, M; Bopp, M; Braun, H H; Csatari Divall, M; Dehler, M; Garvey, T; Raguin, J Y; Rivkin, L; Zennaro, R; Aksoy, A; Nergiz, Z; Pilicer, E; Tapan, I; Yavas, O; Baturin, V; Kholodov, R; Lebedynskyi, S; Miroshnichenko, V; Mordyk, S; Profatilova, I; Storizhko, V; Watson, N; Winter, A; Goldstein, J; Green, S; Marshall, J S; Thomson, M A; Xu, B; Gillespie, W A; Pan, R; Tyrk, M A; Protopopescu, D; Robson, A; Apsimon, R; Bailey, I; Burt, G; Constable, D; Dexter, A; Karimian, S; Lingwood, C; Buckland, M D; Casse, G; Vossebeld, J; Bosco, A; Karataev, P; Kruchinin, K; 
Lekomtsev, K; Nevay, L; Snuverink, J; Yamakawa, E; Boisvert, V; Boogert, S; Boorman, G; Gibson, S; Lyapin, A; Shields, W; Teixeira-Dias, P; West, S; Jones, R; Joshi, N; Bodenstein, R; Burrows, P N; Christian, G B; Gamba, D; Perry, C; Roberts, J; Clarke, J A; Collomb, N A; Jamison, S P; Shepherd, B J A; Walsh, D; Demarteau, M; Repond, J; Weerts, H; Xia, L; Wells, J D; Adolphsen, C; Barklow, T; Breidenbach, M; Graf, N; Hewett, J; Markiewicz, T; McCormick, D; Moffeit, K; Nosochkov, Y; Oriunno, M; Phinney, N; Rizzo, T; Tantawi, S; Wang, F; Wang, J; White, G; Woodley, M

    2016-01-01

    The Compact Linear Collider (CLIC) is a multi-TeV high-luminosity linear e+e- collider under development. For an optimal exploitation of its physics potential, CLIC is foreseen to be built and operated in a staged approach with three centre-of-mass energy stages ranging from a few hundred GeV up to 3 TeV. The first stage will focus on precision Standard Model physics, in particular Higgs and top-quark measurements. Subsequent stages will focus on measurements of rare Higgs processes, as well as searches for new physics processes and precision measurements of new states, e.g. states previously discovered at LHC or at CLIC itself. In the 2012 CLIC Conceptual Design Report, a fully optimised 3 TeV collider was presented, while the proposed lower energy stages were not studied to the same level of detail. This report presents an updated baseline staging scenario for CLIC. The scenario is the result of a comprehensive study addressing the performance, cost and power of the CLIC accelerator complex as a function of...

  7. Baseline brain energy supports the state of consciousness.

    Science.gov (United States)

    Shulman, Robert G; Hyder, Fahmeed; Rothman, Douglas L

    2009-07-07

    An individual, human or animal, is defined empirically to be in a conscious state by the behavioral ability to respond meaningfully to stimuli, whereas the loss of consciousness is defined by unresponsiveness. PET measurements of glucose or oxygen consumption show a widespread, approximately 45% reduction in cerebral energy consumption with anesthesia-induced loss of consciousness. Because baseline brain energy consumption has been shown by (13)C magnetic resonance spectroscopy to be almost exclusively dedicated to neuronal signaling, we propose that the high level of brain energy is a necessary property of the conscious state. Two additional neuronal properties of the conscious state change with anesthesia. The delocalized fMRI activity patterns in rat brain during sensory stimulation at a higher energy state (close to the awake state) collapse to a contralateral somatosensory response at a lower energy state (deep anesthesia). Firing rates of an ensemble of neurons in the rat somatosensory cortex shift from the gamma-band range (20-40 Hz) at the higher energy state to much lower rates at the lower energy state. With the conscious state defined by the individual's behavior and maintained by high cerebral energy, measurable properties of that state are the widespread fMRI patterns and high-frequency neuronal activity, both of which support the extensive interregional communication characteristic of consciousness. This usage of high brain energies when the person is in the "state" of consciousness differs from most studies, which attend to the smaller energy increments observed during the stimulations that form the "contents" of that state.

  8. Conditional analysis of mixed Poisson processes with baseline counts: implications for trial design and analysis.

    Science.gov (United States)

    Cook, Richard J; Wei, Wei

    2003-07-01

    The design of clinical trials is typically based on marginal comparisons of a primary response under two or more treatments. The considerable gains in efficiency afforded by models conditional on one or more baseline responses have been extensively studied for Gaussian models. The purpose of this article is to present methods for the design and analysis of clinical trials in which the response is a count or a point process and a corresponding baseline count is available prior to randomization. The methods are based on a conditional negative binomial model for the response given the baseline count and can be used to examine the effect of introducing selection criteria on power and sample size requirements. We show that designs based on this approach are more efficient than those proposed by McMahon et al. (1994).

  9. Super-NOvA a long-baseline neutrino experiment with two off-axis detectors

    CERN Document Server

    Requejo, O M; Pascoli, S; Requejo, Olga Mena; Palomares-Ruiz, Sergio; Pascoli, Silvia

    2005-01-01

    Establishing the neutrino mass hierarchy is one of the fundamental questions that will have to be addressed in the near future. Its determination could be obtained with long-baseline experiments but typically suffers from degeneracies with other neutrino parameters. We consider here the NOvA experiment configuration and propose to place a second off-axis detector with a shorter baseline such that, by exploiting matter effects, the type of neutrino mass hierarchy could be determined with only the neutrino run. We show that the determination of this parameter is free of degeneracies, provided the ratio L/E, where L is the baseline and E is the neutrino energy, is the same for both detectors.

  10. Large short-baseline νμ disappearance

    International Nuclear Information System (INIS)

    Giunti, Carlo; Laveder, Marco

    2011-01-01

    We analyze the LSND, KARMEN, and MiniBooNE data on short-baseline νμ → νe oscillations and the data on short-baseline νe disappearance obtained in the Bugey-3 and CHOOZ reactor experiments in the framework of 3+1 antineutrino mixing, taking into account the MINOS observation of long-baseline νμ disappearance and the KamLAND observation of very-long-baseline νe disappearance. We show that the fit of the data implies that the short-baseline disappearance of νμ is relatively large. We obtain a prediction of an effective amplitude sin²2θμμ ≳ 0.1 for short-baseline νμ disappearance generated by a mass-squared splitting in the range 0.2 eV² ≲ Δm² ≲ 2 eV², which could be measured in future experiments.

  11. Doing Mathematics with Purpose: Mathematical Text Types

    Science.gov (United States)

    Dostal, Hannah M.; Robinson, Richard

    2018-01-01

    Mathematical literacy includes learning to read and write different types of mathematical texts as part of purposeful mathematical meaning making. Thus in this article, we describe how learning to read and write mathematical texts (proof text, algorithmic text, algebraic/symbolic text, and visual text) supports the development of students'…

  12. The socio-demographics of texting

    DEFF Research Database (Denmark)

    Ling, Richard; Bertel, Troels Fibæk; Sundsøy, Pål

    2012-01-01

    Who texts, and with whom do they text? This article examines the use of texting using metered traffic data from a large dataset (nearly 400 million anonymous text messages). We ask 1) How much do different age groups use mobile phone based texting (SMS)? 2) How wide is the circle of texting...

  13. Informed baseline subtraction of proteomic mass spectrometry data aided by a novel sliding window algorithm.

    Science.gov (United States)

    Stanford, Tyman E; Bagley, Christopher J; Solomon, Patty J

    2016-01-01

    Proteomic matrix-assisted laser desorption/ionisation (MALDI) linear time-of-flight (TOF) mass spectrometry (MS) may be used to produce protein profiles from biological samples with the aim of discovering biomarkers for disease. However, the raw protein profiles suffer from several sources of bias or systematic variation which need to be removed via pre-processing before meaningful downstream analysis of the data can be undertaken. Baseline subtraction, an early pre-processing step that removes the non-peptide signal from the spectra, is complicated by the following: (i) each spectrum has, on average, wider peaks for peptides with higher mass-to-charge ratios ( m / z ), and (ii) the time-consuming and error-prone trial-and-error process for optimising the baseline subtraction input arguments. With reference to the aforementioned complications, we present an automated pipeline that includes (i) a novel 'continuous' line segment algorithm that efficiently operates over data with a transformed m / z -axis to remove the relationship between peptide mass and peak width, and (ii) an input-free algorithm to estimate peak widths on the transformed m / z scale. The automated baseline subtraction method was deployed on six publicly available proteomic MS datasets using six different m/z-axis transformations. Optimality of the automated baseline subtraction pipeline was assessed quantitatively using the mean absolute scaled error (MASE) when compared to a gold-standard baseline subtracted signal. Several of the transformations investigated were able to reduce, if not entirely remove, the peak width and peak location relationship resulting in near-optimal baseline subtraction using the automated pipeline. The proposed novel 'continuous' line segment algorithm is shown to far outperform naive sliding window algorithms with regard to the computational time required. The improvement in computational time was at least four-fold on real MALDI TOF-MS data and at least an order of
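    As a point of reference, the naive sliding-window approach that the proposed 'continuous' line segment algorithm is designed to outperform can be sketched in a few lines on synthetic data. This is an illustrative sketch only; the m/z-axis transformation, peak-width estimation, and window sizing of the actual pipeline are not reproduced here.

```python
import numpy as np

def sliding_window_baseline(intensity, half_width=50):
    """Estimate a baseline as the local minimum within a sliding window.

    A naive window-based baseline estimate; because the window minimum
    is never above the signal, the corrected signal stays non-negative.
    """
    n = len(intensity)
    baseline = np.empty(n)
    for i in range(n):
        lo = max(0, i - half_width)
        hi = min(n, i + half_width + 1)
        baseline[i] = intensity[lo:hi].min()
    return baseline

# Synthetic spectrum: a broad decaying baseline plus two narrow peaks.
x = np.linspace(0, 1, 1000)
baseline_true = 5.0 * np.exp(-x)
peaks = 10 * np.exp(-((x - 0.3) ** 2) / 1e-4) + 8 * np.exp(-((x - 0.7) ** 2) / 1e-4)
y = baseline_true + peaks
corrected = y - sliding_window_baseline(y, half_width=40)
```

The O(n·w) loop over every window is exactly the computational cost the paper's 'continuous' line segment algorithm avoids.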

  14. Scoping paper on new CDM baseline methodology for cross-border power trade

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    Poeyry has been sub-contracted by Carbon Limits, under the African Development Bank CDM Support Programme, to prepare a new CDM baseline methodology for cross border trade, based on a transmission line from Ethiopia to Kenya. The first step in that process is to review the response of the UNFCCC, particularly the Methodologies Panel ('Meth Panel') of the CDM Executive Board, to the various proposals on cross-border trade and interconnection of grids. This report reviews the Methodology Panel and Executive Board decisions on 4 requests for revisions of ACM2 'Consolidated baseline methodology for grid-connected electricity generation from renewable sources', and 5 proposed new baseline methodologies (NM255, NM269, NM272, NM318, NM342), all of which were rejected. We analyse the reasons the methodologies were rejected, and whether the proposed draft Approved Methodology (AM) that the Methodology Panel created in response to NM269 and NM272 is a suitable basis for a new methodology proposal.(auth)

  15. Baseline inventory data recommendations for National Wildlife Refuges

    Data.gov (United States)

    Department of the Interior — The Baseline Inventory Team recommends that each refuge have available abiotic “data layers” for topography, aerial photography, hydrography, soils, boundaries, and...

  16. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    Science.gov (United States)

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate lower-order polynomial interferences, a new quantitative calibration algorithm, "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into the PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirements of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of moisture is found to be 0.53% w/w (range 7-19%). The sugar content is predicted with an RMSECV of 2.04% w/w (range 33-68%).
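    The core of the polynomial-interference problem can be sketched without the full BCC-PLS machinery: projecting each spectrum onto the orthogonal complement of a low-order polynomial basis removes any such baseline exactly. This is an illustrative stand-in, not the paper's weight-selection scheme, and the function name is hypothetical.

```python
import numpy as np

def remove_polynomial_baseline(X, order=2):
    """Project each row-spectrum of X onto the orthogonal complement of
    the space spanned by polynomials up to `order`.

    Any baseline that is exactly a polynomial of that order is removed,
    which is the interference BCC-PLS is built to be insensitive to.
    """
    n_points = X.shape[1]
    t = np.linspace(-1, 1, n_points)
    # Vandermonde basis of the polynomial subspace, orthonormalised via QR.
    P = np.vander(t, order + 1)
    Q, _ = np.linalg.qr(P)
    return X - (X @ Q) @ Q.T

# A "spectrum" consisting only of a quadratic baseline vanishes entirely.
t = np.linspace(-1, 1, 200)
X = np.vstack([1.5 + 0.3 * t - 0.8 * t**2])
X_corr = remove_polynomial_baseline(X, order=2)
```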

  17. Site Outcomes Baseline Multi Year Work Plan Volume 1, River Corridor Restoration Baseline

    International Nuclear Information System (INIS)

    Wintczak, T.M.

    2001-01-01

    The River Corridor Restoration volume is a compilation of Hanford Site scope, which excludes the approximately 194 km² Central Plateau. The River Corridor scope is currently contractually assigned to Fluor Hanford, Bechtel Hanford, Inc., DynCorp, Pacific Northwest National Laboratory, and others. The purpose of this project specification is to provide an overall scoping document for the River Corridor Restoration volume and to provide a link with the overall Hanford Site River Corridor scope. Additionally, this specification provides an integrated and consolidated source of information for the various scopes, by current contract, for the River Corridor Restoration Baseline. It identifies the vision, mission, and goals, as well as the operational history of the Hanford Site, along with environmental setting and hazards

  18. Texting to increase physical activity among teenagers (TXT Me!): Rationale, design, and methods proposal

    Science.gov (United States)

    Physical activity decreases from childhood through adulthood. Among youth, teenagers (teens) achieve the lowest levels of physical activity, and high school age youth are particularly at risk of inactivity. Effective methods are needed to increase youth physical activity in a way that can be maintai...

  19. Improving the accuracy of energy baseline models for commercial buildings with occupancy data

    International Nuclear Information System (INIS)

    Liang, Xin; Hong, Tianzhen; Shen, Geoffrey Qiping

    2016-01-01

    Highlights: • We evaluated several baseline models predicting energy use in buildings. • Including occupancy data improved accuracy of baseline model prediction. • Occupancy is highly correlated with energy use in buildings. • This simple approach can be used in decision making for energy retrofit projects. - Abstract: More than 80% of energy is consumed during the operation phase of a building's life cycle, so energy efficiency retrofit for existing buildings is considered a promising way to reduce energy use in buildings. The investment strategies of retrofit depend on the ability to quantify energy savings by "measurement and verification" (M&V), which compares actual energy consumption to how much energy would have been used without the retrofit (called the "baseline" of energy use). Although numerous models exist for predicting the baseline of energy use, a critical limitation is that occupancy has not been included as a variable, even though occupancy strongly affects energy consumption and has been emphasized by previous studies. This study develops a new baseline model which is built upon the Lawrence Berkeley National Laboratory (LBNL) model but includes the use of building occupancy data. The study also proposes metrics to quantify the accuracy of prediction and the impacts of variables. However, the results show that including occupancy data does not significantly improve the accuracy of the baseline model, especially for HVAC load. The reasons are discussed further. In addition, sensitivity analysis is conducted to show the influence of parameters in baseline models. The results from this study can help us understand the influence of occupancy on energy use, improve energy baseline prediction by including the occupancy factor, reduce the risks of M&V, and facilitate investment strategies of energy efficiency retrofit.
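    The M&V regression at the heart of such baseline models can be sketched with ordinary least squares on synthetic data. One caveat: in this sketch occupancy improves the fit by construction, whereas the study above found the real-world gain to be limited; all variable names and coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
temp = rng.uniform(0, 30, n)          # outdoor temperature (°C)
occupancy = rng.uniform(0, 1, n)      # fraction of occupants present
# Hypothetical building: energy depends on both drivers plus noise.
energy = 50 + 2.0 * temp + 30.0 * occupancy + rng.normal(0, 2, n)

def fit_r2(X, y):
    """Ordinary least squares with an intercept; return R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid.var() / y.var()

r2_temp_only = fit_r2(temp[:, None], energy)
r2_with_occ = fit_r2(np.column_stack([temp, occupancy]), energy)
```

On this synthetic data the occupancy term captures the variance the temperature-only baseline misses, which is the effect the proposed metrics are designed to quantify.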

  20. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    Science.gov (United States)

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is demonstrated on two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
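    The baseline comparison itself is straightforward: because only distances between corresponding feature points are compared, no registration between the two scans is needed. A minimal sketch on made-up coordinates (the brick-centre extraction is not reproduced; the function name and tolerance are hypothetical):

```python
import numpy as np
from itertools import combinations

def baseline_changes(points_a, points_b, tol=0.005):
    """Compare all point-to-point baseline lengths between two scans.

    points_a / points_b: (n, 3) arrays of corresponding feature points
    (targets or virtual points), each in its own scan's coordinate
    system. Returns (i, j, delta) for pairs whose baseline length
    changed by more than `tol` (same length unit as the coordinates).
    """
    changed = []
    for i, j in combinations(range(len(points_a)), 2):
        la = np.linalg.norm(points_a[i] - points_a[j])
        lb = np.linalg.norm(points_b[i] - points_b[j])
        if abs(la - lb) > tol:
            changed.append((i, j, lb - la))
    return changed

# Three stable points and one that moved 3 cm along x between scans.
scan1 = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
scan2 = scan1.copy()
scan2[3, 0] += 0.03
changes = baseline_changes(scan1, scan2, tol=0.005)
```

Note that the baseline from point 1 to point 3 escapes detection because it is nearly perpendicular to the displacement, which illustrates why the geometry of the feature-point network matters for this kind of analysis.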

  1. Baseline recommendations for greenhouse gas mitigation projects in the electric power sector

    Energy Technology Data Exchange (ETDEWEB)

    Kartha, Sivan; Lazarus, Michael [Stockholm Environment Institute/Tellus Institute, Boston, MA (United States); Bosi, Martina [International Energy Agency, Paris, 75 (France)

    2004-03-01

    The success of the Clean Development Mechanism (CDM) and other credit-based emission trading regimes depends on effective methodologies for quantifying a project's emissions reductions. The key methodological challenge lies in estimating a project's counterfactual emissions baseline while balancing the need for accuracy, transparency, and practicality. Baseline standardisation (e.g. of methodology, parameters, and/or emission rate) can be a means to achieve these goals. This paper compares specific options for developing standardised baselines for the electricity sector - a natural starting point for baseline standardisation given the magnitude of the emissions reduction opportunities. The authors review fundamental assumptions that baseline studies have made with respect to estimating the generation sources avoided by CDM or other emission-reducing projects. Typically, studies have assumed that such projects affect either the operation of existing power plants (the operating margin) or the construction of new generation facilities (the build margin). The authors show that both effects are important to consider and thus recommend a combined margin approach for most projects, based on grid-specific data. They propose a three-category framework according to projects' relative scale and environmental risk. (Author)
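    The recommended combined margin reduces to a weighted average of the operating margin and build margin emission factors. A minimal sketch with hypothetical grid numbers (the 50/50 weighting and the emission factors are illustrative, not values from the paper):

```python
def combined_margin_ef(om_ef, bm_ef, w_om=0.5, w_bm=0.5):
    """Combined margin grid emission factor (tCO2/MWh) as a weighted
    average of the operating margin (OM) and build margin (BM).

    Weights must sum to one; actual weightings vary by project type.
    """
    assert abs(w_om + w_bm - 1.0) < 1e-9
    return w_om * om_ef + w_bm * bm_ef

# Hypothetical grid: OM 0.9 tCO2/MWh (mostly coal on the margin),
# BM 0.5 tCO2/MWh (newer gas and hydro capacity).
ef = combined_margin_ef(0.9, 0.5)
reductions = 100_000 * ef   # tCO2 avoided by a 100 GWh/yr project
```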

  2. Sensitivity of amounts and distribution of tropical forest carbon credits depending on baseline rules

    International Nuclear Information System (INIS)

    Griscom, Bronson; Shoch, David; Stanley, Bill; Cortez, Rane; Virgilio, Nicole

    2009-01-01

    One of the largest sources of global greenhouse gas emissions can be addressed through conservation of tropical forests by channeling funds to developing countries at a cost-savings for developed countries. However, questions remain to be resolved in negotiating a system for including reduced emissions from deforestation and forest degradation (REDD) in a post-Kyoto climate treaty. The approach to determine national baselines, or reference levels, for quantifying REDD has emerged as central to negotiations over a REDD mechanism in a post-Kyoto policy framework. The baseline approach is critical to the success of a REDD mechanism because it affects the quantity, credibility, and equity of credits generated from efforts to reduce forest carbon emissions. We compared outcomes of seven proposed baseline approaches as a function of country circumstances, using a retrospective analysis of FAO-FRA data on forest carbon emissions from deforestation. Depending upon the baseline approach used, the total credited emissions avoided ranged over two orders of magnitude for the same quantity of actual emissions reductions. There was also a wide range in the relative distribution of credits generated among the five country types we identified. Outcomes were especially variable for countries with high remaining forest and low rates of deforestation (HFLD). We suggest that the most credible approaches measure emissions avoided with respect to a business-as-usual baseline scenario linked to historic emissions data, and allow limited adjustments based on forest carbon stocks.

  3. A baseline-free procedure for transformation models under interval censorship.

    Science.gov (United States)

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures available so far involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean-zero martingale is provided.

  4. Automatically ordering events and times in text

    CERN Document Server

    Derczynski, Leon R A

    2017-01-01

    The book offers a detailed guide to temporal ordering, exploring open problems in the field and providing solutions and extensive analysis. It addresses the challenge of automatically ordering events and times in text. Aided by TimeML, it also describes and presents concepts relating to time in easy-to-compute terms. Working out the order that events and times happen has proven difficult for computers, since the language used to discuss time can be vague and complex. Mapping out these concepts for a computational system, which does not have its own inherent idea of time, is, unsurprisingly, tough. Solving this problem enables powerful systems that can plan, reason about events, and construct stories of their own accord, as well as understand the complex narratives that humans express and comprehend so naturally. This book presents a theory and data-driven analysis of temporal ordering, leading to the identification of exactly what is difficult about the task. It then proposes and evaluates machine-learning so...

  5. Speech to Text Translation for Malay Language

    Science.gov (United States)

    Al-khulaidi, Rami Ali; Akmeliawati, Rini

    2017-11-01

    A speech recognition system is a front-end and back-end process that receives an audio signal uttered by a speaker and converts it into a text transcription. Speech systems can be used in several fields, including therapeutic technology, education, social robotics, and computer entertainment. In control tasks, which are the target use of the proposed system, speed of performance and response matters, as the system should integrate with other controlling platforms such as voice-controlled robots. This creates a need for flexible platforms that can be easily edited to match the functionality of the surroundings, unlike software that requires recorded audio and multiple training passes for every entry, such as MATLAB and Phoenix. In this paper, a speech recognition system for the Malay language is implemented using Microsoft Visual Studio C#. Ninety Malay phrases were tested by ten speakers of both genders in different contexts. The results show that the overall accuracy (calculated from the confusion matrix) is a satisfactory 92.69%.
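    The reported overall accuracy is the standard confusion-matrix statistic: correctly recognised utterances (the diagonal) divided by all utterances. A small sketch with a hypothetical 3-phrase confusion matrix (the paper's full 90-phrase matrix is not reproduced):

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall accuracy: correctly recognised utterances (diagonal)
    divided by the total number of utterances."""
    confusion = np.asarray(confusion)
    return np.trace(confusion) / confusion.sum()

# Hypothetical confusion matrix: rows = spoken phrase, cols = recognised.
cm = [[28, 1, 1],
      [2, 27, 1],
      [0, 1, 29]]
acc = overall_accuracy(cm)   # 84 correct out of 90 utterances
```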

  6. Microbunch preserving in-line system for an APPLE II helical radiator at the LCLS baseline

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [European XFEL Project Team, Hamburg (Germany); Kocharyan, Vitali; Saldin, Evgeni [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2011-05-15

    In a previous work we proposed a scheme for polarization control at the LCLS baseline, which exploited the microbunching from the planar undulator. After the baseline undulator, the electron beam is transported through a drift by a FODO focusing system, and through a short helical radiator. The microbunching structure can be preserved, and intense coherent radiation is emitted in the helical undulator at fundamental harmonic. The driving idea of this proposal is that the background linearly-polarized radiation from the baseline undulator is suppressed by spatial filtering. Filtering is achieved by letting radiation and electron beam through Be slits upstream of the helical radiator, where the radiation spot size is about ten times larger than the electron beam transverse size. Several changes considered in the present paper were made to improve the previous design. Slits are now placed immediately behind the helical radiator. The advantage is that the electron beam can be spoiled by the slits, and a narrower slit width can be used for spatial filtering. For this fundamental reason, the present setup is shorter than the previous one. The helical radiator is now placed immediately behind the SHAB undulator. It is thus sufficient to use the existing FODO focusing system of the SHAB undulator for transporting the modulated electron beam. This paper presents complete GENESIS code calculations for the new design, starting from the baseline undulator entrance up to the helical radiator exit, including the modulated electron beam transport by the SHAB FODO focusing system. (orig.)

  7. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    International Nuclear Information System (INIS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-01-01

    The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background, so in this paper we present a baseline correction algorithm to suppress it. In this algorithm, the background of the Raman signal is suppressed by fitting a curve called a baseline using a cyclic approximation method. Instead of traditional polynomial fitting, we use the B-spline as the fitting algorithm due to its advantages of low order and smoothness, which can avoid under-fitting and over-fitting effectively. In addition, we also present an automatic adaptive knot generation method to replace traditional uniform knots. This algorithm can obtain the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method. (paper)
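    The cyclic approximation idea (fit a smooth curve, clip the signal down to the fit, refit) can be sketched in a few lines. A low-order polynomial stands in for the paper's adaptive-knot B-spline purely to keep the sketch self-contained; the clipping loop is the shared idea, and all names and parameters are illustrative.

```python
import numpy as np

def iterative_baseline(x, y, degree=4, n_iter=20):
    """Cyclic-approximation baseline estimate: repeatedly fit a smooth
    curve and clip the working signal to it, so narrow Raman peaks stop
    pulling the fit upward and only the broad background remains.
    """
    work = y.copy()
    for _ in range(n_iter):
        coef = np.polyfit(x, work, degree)
        fit = np.polyval(coef, x)
        work = np.minimum(work, fit)   # clip points above the fit
    return fit

x = np.linspace(0, 10, 500)
fluor = 2 + 0.5 * x - 0.03 * x**2              # broad fluorescent background
raman = 5 * np.exp(-((x - 4) ** 2) / 0.01)     # one narrow Raman band
y = fluor + raman
corrected = y - iterative_baseline(x, y)
```

After the loop, the fitted curve tracks the fluorescent background while the narrow band survives the subtraction almost intact.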

  8. Microbunch preserving in-line system for an APPLE II helical radiator at the LCLS baseline

    International Nuclear Information System (INIS)

    Geloni, Gianluca; Kocharyan, Vitali; Saldin, Evgeni

    2011-05-01

    In a previous work we proposed a scheme for polarization control at the LCLS baseline, which exploited the microbunching from the planar undulator. After the baseline undulator, the electron beam is transported through a drift by a FODO focusing system, and through a short helical radiator. The microbunching structure can be preserved, and intense coherent radiation is emitted in the helical undulator at fundamental harmonic. The driving idea of this proposal is that the background linearly-polarized radiation from the baseline undulator is suppressed by spatial filtering. Filtering is achieved by letting radiation and electron beam through Be slits upstream of the helical radiator, where the radiation spot size is about ten times larger than the electron beam transverse size. Several changes considered in the present paper were made to improve the previous design. Slits are now placed immediately behind the helical radiator. The advantage is that the electron beam can be spoiled by the slits, and a narrower slit width can be used for spatial filtering. For this fundamental reason, the present setup is shorter than the previous one. The helical radiator is now placed immediately behind the SHAB undulator. It is thus sufficient to use the existing FODO focusing system of the SHAB undulator for transporting the modulated electron beam. This paper presents complete GENESIS code calculations for the new design, starting from the baseline undulator entrance up to the helical radiator exit, including the modulated electron beam transport by the SHAB FODO focusing system. (orig.)

  9. Business-as-Unusual: Existing policies in energy model baselines

    International Nuclear Information System (INIS)

    Strachan, Neil

    2011-01-01

    Baselines are generally accepted as a key input assumption in long-term energy modelling, but energy models have traditionally been poor at identifying baseline assumptions. Notably, transparency about the current policy content of model baselines is now especially critical, as long-term climate mitigation policies have been underway for a number of years. This paper argues that the range of existing energy and emissions policies is an integral part of any long-term baseline and hence already represents a 'with-policy' baseline, termed here Business-as-Unusual (BAuU). Crucially, existing energy policies are not a sunk effort: as the impacts of existing policy initiatives are targeted at future years, they may be revised through iterative policy making, and their quantitative effectiveness requires ex-post verification. To assess the long-term role of existing policies in energy modelling, currently identified UK policies are explicitly stripped out of the UK MARKAL Elastic Demand (MED) optimisation energy system model to generate a BAuU (with-policy) and a REF (without-policy) baseline. In terms of long-term mitigation costs, policy-baseline assumptions are comparable to another key exogenous modelling assumption - that of global fossil fuel prices. Therefore, best practice in energy modelling would be to have both a no-policy reference baseline and a current-policy reference baseline (BAuU). At a minimum, energy modelling studies should include a transparent assessment of the current policy contained within the baseline. Clearly identifying and comparing policy-baseline assumptions is required for cost-effective and objective policy making; otherwise energy models will underestimate the true cost of long-term emissions reductions.

  10. Bengali text summarization by sentence extraction

    OpenAIRE

    Sarkar, Kamal

    2012-01-01

    Text summarization is a process to produce an abstract or a summary by selecting a significant portion of the information from one or more texts. In an automatic text summarization process, a text is given to the computer and the computer returns a shorter, less redundant extract or abstract of the original text(s). Many techniques have been developed for summarizing English texts, but very few attempts have been made for Bengali text summarization. This paper presents a method for Bengali ...
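    Sentence extraction of this kind can be sketched with a simple term-frequency scorer. This is a generic, language-neutral illustration, not the paper's method; a practical Bengali summarizer would add Bengali tokenisation, stop-word removal, and stemming.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Score each sentence by the summed corpus frequency of its words
    and return the top-scoring sentences in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    tf = Counter(re.findall(r'\w+', text.lower()))
    scores = {i: sum(tf[w] for w in re.findall(r'\w+', s.lower()))
              for i, s in enumerate(sentences)}
    top = sorted(sorted(scores, key=scores.get, reverse=True)[:n_sentences])
    return ' '.join(sentences[i] for i in top)

doc = ("Text summarization selects the important sentences. "
       "The weather was pleasant. "
       "Summarization produces a short text from a longer text.")
summary = extractive_summary(doc, n_sentences=2)
```

The off-topic sentence scores lowest because its words are rare in the document, so the extract keeps only the two sentences about summarization.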

  11. NETWORK DESIGN IN CLOSE-RANGE PHOTOGRAMMETRY WITH SHORT BASELINE IMAGES

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2017-08-01

    Full Text Available The availability of automated software for image-based 3D modelling has changed the way people acquire images for photogrammetric applications. Short baseline images are required to match image points with SIFT-like algorithms, resulting in more images than are necessary for “old fashioned” photogrammetric projects based on manual measurements. This paper presents some considerations on network design for short baseline image sequences, especially regarding the precision and reliability of bundle adjustment. Simulated results reveal that the large number of 3D points used for image orientation has very limited impact on network precision.

  12. Future Long-Baseline Neutrino Facilities and Detectors

    Directory of Open Access Journals (Sweden)

    Milind Diwan

    2013-01-01

    Full Text Available We review the ongoing effort of the scientific community in the US, Japan, and Europe to study the location and the detector performance of the next-generation long-baseline neutrino facility. For many decades, research on the properties of neutrinos, and the use of neutrinos to study the fundamental building blocks of matter, has unveiled new, unexpected laws of nature. Results of neutrino experiments have triggered a tremendous amount of development in theory: theories beyond the standard model (or at least extensions of it), development of the standard solar model, modeling of supernova explosions, and theories to explain the matter-antimatter asymmetry in the universe. Neutrino physics is one of the most dynamic and exciting fields of research in fundamental particle physics and astrophysics. The next-generation neutrino detector will address two aspects: fundamental properties of the neutrino, such as the mass hierarchy, mixing angles, and the CP phase, and low-energy neutrino astronomy with solar, atmospheric, and supernova neutrinos. Such a new detector naturally allows for major improvements in the search for nucleon decay. A next-generation neutrino observatory needs a huge, megaton-scale detector, which in turn has to be installed in a new, international underground laboratory capable of hosting such a huge detector.

  13. Education Organization Baseline Control Protection and Trusted Level Security

    Directory of Open Access Journals (Sweden)

    Wasim A. Al-Hamdani

    2007-12-01

    Full Text Available Many education organizations have adopted enterprise best practices for security implementation on their campuses, while others focus on the ISO standard (or/and the National Institute of Standards and Technology). All these adoptions depend on IT personnel and their experience or knowledge of the standard. On top of this is the size of the education organization: the larger the population of an education organization, the clearer the problem of information security becomes. Thus, larger organizations have been obliged to comply with information security requirements and adopt national or international standards. The case is quite different when the population of the education organization is smaller. Such education organizations may use social security numbers as student IDs, issue administrative rights to faculty and lab managers, or be unaware of the Family Educational Rights and Privacy Act (FERPA) and release some personal information. The problem of education organization security is wide open and depends on the IT staff and their information security knowledge, in addition to the education culture (education, scholarships and services), which has very special characteristics compared with an enterprise or comparable organization. This paper is part of a research project to develop an “Education Organization Baseline Control Protection and Trusted Level Security.” The research has three parts: Adopting (standards), Testing and Modifying (if needed).

  14. Local Stressors, Resilience, and Shifting Baselines on Coral Reefs.

    Directory of Open Access Journals (Sweden)

    Matthew McLean

    Full Text Available Understanding how and why coral reefs have changed over the last twenty to thirty years is crucial for sustaining coral-reef resilience. We used a historical baseline from Kosrae, a typical small island in Micronesia, to examine changes in fish and coral assemblages since 1986. We found that natural gradients in the spatial distribution of fish and coral assemblages have become amplified, as island geography is now a stronger determinant of species abundance patterns, and habitat forming Acropora corals and large-bodied fishes that were once common on the leeward side of the island have become scarce. A proxy for fishing access best predicted the relative change in fish assemblage condition over time, and in turn, declining fish condition was the only factor correlated with declining coral condition, suggesting overfishing may have reduced ecosystem resilience. Additionally, a proxy for watershed pollution predicted modern coral assemblage condition, suggesting pollution is also reducing resilience in densely populated areas. Altogether, it appears that unsustainable fishing reduced ecosystem resilience, as fish composition has shifted to smaller species in lower trophic levels, driven by losses of large predators and herbivores. While prior literature and anecdotal reports indicate that major disturbance events have been rare in Kosrae, small localized disturbances coupled with reduced resilience may have slowly degraded reef condition through time. Improving coral-reef resilience in the face of climate change will therefore require improved understanding and management of growing artisanal fishing pressure and watershed pollution.

  15. Study on the calibration and optimization of double theodolites baseline

    Science.gov (United States)

    Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao

    2018-01-01

    The baseline of a double-theodolite measurement system serves as the benchmark for the system's scale and affects the accuracy of the whole system; this paper therefore puts forward a method for calibrating and optimizing the double-theodolite baseline. The two theodolites measure a reference ruler of known length, and the baseline is then recovered by inverting the baseline formula. Based on the law of error propagation, the analyses show that the baseline error function is an important index of system accuracy, and that the position, posture and other properties of the reference ruler affect the baseline error. An optimization model is established with the baseline error function as the objective function, and the position and posture of the reference ruler are optimized. The simulation results show that the height of the reference ruler has no effect on the baseline error; the effect of its posture is not uniform; and the baseline error is smallest when the reference ruler is placed at x=500mm and y=1000mm in the measurement space. The experimental results are consistent with the theoretical analyses in the measurement space. The study of reference-ruler placement presented here thus provides a useful reference for improving the accuracy of double-theodolite measurement systems.
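The calibration step (measure a ruler of known length, then invert the baseline formula) can be sketched in a simplified planar model. The angle conventions and station geometry below are assumptions for illustration, not the paper's exact formulation:

```python
# Planar two-station triangulation: stations A=(0,0) and B=(b,0) each
# measure the horizontal angle from the baseline to a target point.
import math

def triangulate(b, alpha, beta):
    """Intersect the two rays: alpha is the angle at A (from the baseline
    toward the target), beta the angle at B (from the baseline toward A)."""
    x = b * math.tan(beta) / (math.tan(alpha) + math.tan(beta))
    y = x * math.tan(alpha)
    return x, y

def calibrate_baseline(b_nominal, angles1, angles2, ruler_length):
    """Recover the true baseline from a reference ruler of known length.
    Triangulated coordinates scale linearly with the assumed baseline, so
    b_true = b_nominal * L_known / L_measured."""
    p1 = triangulate(b_nominal, *angles1)
    p2 = triangulate(b_nominal, *angles2)
    measured = math.dist(p1, p2)
    return b_nominal * ruler_length / measured
```

The linear-scaling property is what makes a single known length sufficient to fix the scale of the whole system.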

  16. 75 FR 30014 - Consumers Energy Company; Notice of Baseline Filing

    Science.gov (United States)

    2010-05-28

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR10-25-000] Consumers Energy Company; Notice of Baseline Filing May 21, 2010. Take notice that on May 17, 2010, Consumers Energy Company (Consumers) submitted a baseline filing of its Statement of Operating Conditions for the...

  17. 77 FR 31841 - Hope Gas, Inc.; Notice of Baseline Filing

    Science.gov (United States)

    2012-05-30

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR12-23-001] Hope Gas, Inc.; Notice of Baseline Filing Take notice that on May 16, 2012, Hope Gas, Inc. (Hope Gas) submitted a revised baseline filing of their Statement of Operating Conditions for services provided under Section 311 of the...

  18. 77 FR 26535 - Hope Gas, Inc.; Notice of Baseline Filing

    Science.gov (United States)

    2012-05-04

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR12-23-000] Hope Gas, Inc.; Notice of Baseline Filing Take notice that on April 26, 2012, Hope Gas, Inc. (Hope Gas) submitted a baseline filing of their Statement of Operating Conditions for services provided under Section 311 of the...

  19. Esophageal acid exposure decreases intraluminal baseline impedance levels

    NARCIS (Netherlands)

    Kessing, Boudewijn F.; Bredenoord, Albert J.; Weijenborg, Pim W.; Hemmink, Gerrit J. M.; Loots, Clara M.; Smout, A. J. P. M.

    2011-01-01

    Intraluminal baseline impedance levels are determined by the conductivity of the esophageal wall and can be decreased in gastroesophageal reflux disease (GERD) patients. The aim of this study was to investigate the baseline impedance in GERD patients, on and off proton pump inhibitor (PPI), and in

  20. Tank waste remediation system technical baseline summary description

    International Nuclear Information System (INIS)

    Raymond, R.E.

    1998-01-01

    This document is one of the tools used to develop and control the mission work as depicted in the included figure. This Technical Baseline Summary Description document is the top-level tool for management of the Technical Baseline for waste storage operations

  1. Text Analysis: Critical Component of Planning for Text-Based Discussion Focused on Comprehension of Informational Texts

    Science.gov (United States)

    Kucan, Linda; Palincsar, Annemarie Sullivan

    2018-01-01

    This investigation focuses on a tool used in a reading methods course to introduce reading specialist candidates to text analysis as a critical component of planning for text-based discussions. Unlike planning that focuses mainly on important text content or information, a text analysis approach focuses both on content and how that content is…

  2. SIAM 2007 Text Mining Competition dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — Subject Area: Text Mining Description: This is the dataset used for the SIAM 2007 Text Mining competition. This competition focused on developing text mining...

  3. Measurement of [Formula: see text] polarisation in [Formula: see text] collisions at [Formula: see text] = 7 TeV.

    Science.gov (United States)

    Aaij, R; Adeva, B; Adinolfi, M; Affolder, A; Ajaltouni, Z; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Anderson, J; Andreassen, R; Andreotti, M; Andrews, J E; Appleby, R B; Aquines Gutierrez, O; Archilli, F; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Bachmann, S; Back, J J; Badalov, A; Balagura, V; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Batozskaya, V; Bauer, Th; Bay, A; Beddow, J; Bedeschi, F; Bediaga, I; Belogurov, S; Belous, K; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Bird, T; Bizzeti, A; Bjørnstad, P M; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borgia, A; Borsato, M; Bowcock, T J V; Bowen, E; Bozzi, C; Brambach, T; van den Brand, J; Bressieux, J; Brett, D; Britsch, M; Britton, T; Brook, N H; Brown, H; Bursche, A; Busetto, G; Buytaert, J; Cadeddu, S; Calabrese, R; Callot, O; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Campora Perez, D; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carranza-Mejia, H; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cenci, R; Charles, M; Charpentier, Ph; Cheung, S-F; Chiapolini, N; Chrzaszcz, M; Ciba, K; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coca, C; Coco, V; Cogan, J; Cogneras, E; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombes, M; Coquereau, S; Corti, G; Corvo, M; Counts, I; Couturier, B; Cowan, G A; Craik, D C; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Dalseno, J; David, P; David, P N Y; Davis, A; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Silva, W; De Simone, P; Decamp, D; Deckenhoff, M; Del Buono, L; Déléage, N; Derkach, D; Deschamps, O; Dettori, F; Di Canto, A; Dijkstra, H; 
Donleavy, S; Dordei, F; Dorigo, M; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dupertuis, F; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Easo, S; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; El Rifai, I; Elsasser, Ch; Esen, S; Evans, T; Falabella, A; Färber, C; Farinelli, C; Farry, S; Ferguson, D; Fernandez Albor, V; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fontana, M; Fontanelli, F; Forty, R; Francisco, O; Frank, M; Frei, C; Frosini, M; Fu, J; Furfaro, E; Gallas Torreira, A; Galli, D; Gandelman, M; Gandini, P; Gao, Y; Garofoli, J; Garra Tico, J; Garrido, L; Gaspar, C; Gauld, R; Gavardi, L; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianelle, A; Giani, S; Gibson, V; Giubega, L; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gordon, H; Gotti, C; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graziani, G; Grecu, A; Greening, E; Gregson, S; Griffith, P; Grillo, L; Grünberg, O; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Hampson, T; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; Hartmann, T; He, J; Head, T; Heijne, V; Hennessy, K; Henrard, P; Henry, L; Hernando Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hoballah, M; Hombach, C; Hulsbergen, W; Hunt, P; Hussain, N; Hutchcroft, D; Hynds, D; Iakovenko, V; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jaton, P; Jawahery, A; Jezabek, M; Jing, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kaballo, M; Kandybei, S; Kanso, W; Karacson, M; Karbach, T M; Kelsey, M; Kenyon, I R; Ketel, T; Khanji, B; Khurewathanakul, C; Klaver, S; Kochebina, O; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Korolev, M; Kozlinskiy, A; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Kucharczyk, M; Kudryavtsev, V; 
Kurek, K; Kvaratskheliya, T; La Thi, V N; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lambert, R W; Lanciotti, E; Lanfranchi, G; Langenbruch, C; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Leo, S; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Liles, M; Lindner, R; Linn, C; Lionetto, F; Liu, B; Liu, G; Lohn, S; Longstaff, I; Longstaff, I; Lopes, J H; Lopez-March, N; Lowdon, P; Lu, H; Lucchesi, D; Luisier, J; Luo, H; Lupato, A; Luppi, E; Lupton, O; Machefert, F; Machikhiliyan, I V; Maciuc, F; Maev, O; Malde, S; Manca, G; Mancinelli, G; Manzali, M; Maratas, J; Marchand, J F; Marconi, U; Marino, P; Märki, R; Marks, J; Martellotti, G; Martens, A; Martín Sánchez, A; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massafferri, A; Matev, R; Mathe, Z; Matteuzzi, C; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; McSkelly, B; Meadows, B; Meier, F; Meissner, M; Merk, M; Milanes, D A; Minard, M-N; Molina Rodriguez, J; Monteil, S; Moran, D; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Mountain, R; Muheim, F; Müller, K; Muresan, R; Muster, B; Naik, P; Nakada, T; Nandakumar, R; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; Nguyen, T D; Nguyen-Mau, C; Nicol, M; Niess, V; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; Oblakowska-Mucha, A; Obraztsov, V; Oggero, S; Ogilvy, S; Okhrimenko, O; Oldeman, R; Onderwater, G; Orlandea, M; Otalora Goicochea, J M; Owen, P; Oyanguren, A; Pal, B K; Palano, A; Palombo, F; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Parkes, C; Parkinson, C J; Passaleva, G; Patel, G D; Patel, M; Patrignani, C; Pazos Alvarez, A; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perez Trigo, E; Perret, P; Perrin-Terrin, M; Pescatore, L; Pesen, E; Petridis, K; Petrolini, A; Picatoste Olloqui, E; Pietrzyk, B; Pilař, T; Pinci, D; Pistone, A; Playfer, S; Plo Casasus, M; Polci, F; Polok, G; Poluektov, A; 
Polycarpo, E; Popov, A; Popov, D; Popovici, B; Potterat, C; Powell, A; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Rachwal, B; Rademacker, J H; Rakotomiaramanana, B; Rama, M; Rangel, M S; Raniuk, I; Rauschmayr, N; Raven, G; Redford, S; Reichert, S; Reid, M M; Dos Reis, A C; Ricciardi, S; Richards, A; Rinnert, K; Rives Molina, V; Roa Romero, D A; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Perez, P; Roiser, S; Romanovsky, V; Romero Vidal, A; Rotondo, M; Rouvinet, J; Ruf, T; Ruffini, F; Ruiz, H; Ruiz Valls, P; Sabatino, G; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santovetti, E; Sapunov, M; Sarti, A; Satriano, C; Satta, A; Savrie, M; Savrina, D; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmidt, B; Schneider, O; Schopper, A; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Seco, M; Semennikov, A; Senderowska, K; Sepp, I; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Shires, A; Silva Coutinho, R; Simi, G; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, N A; Smith, E; Smith, E; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Soomro, F; Souza, D; Souza De Paula, B; Spaan, B; Sparkes, A; Spinella, F; Spradlin, P; Stagni, F; Stahl, S; Steinkamp, O; Stenyakin, O; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; Straticiuc, M; Straumann, U; Stroili, R; Subbiah, V K; Sun, L; Sutcliffe, W; Swientek, K; Swientek, S; Syropoulos, V; Szczekowski, M; Szczypka, P; Szilard, D; Szumlak, T; T'Jampens, S; Teklishyn, M; Tellarini, G; Teodorescu, E; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Torr, N; Tournefier, E; Tourneur, S; Tran, M T; Tresch, M; Tsaregorodtsev, A; Tsopelas, P; Tuning, N; Ubeda Garcia, M; Ukleja, A; 
Ustyuzhanin, A; Uwer, U; Vagnoni, V; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vázquez Sierra, C; Vecchi, S; Velthuis, J J; Veltri, M; Veneziano, G; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Vilasis-Cardona, X; Vollhardt, A; Volyanskyy, D; Voong, D; Vorobyev, A; Vorobyev, V; Voß, C; Voss, H; de Vries, J A; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wandernoth, S; Wang, J; Ward, D R; Watson, N K; Webber, A D; Websdale, D; Whitehead, M; Wicht, J; Wiedner, D; Wiggers, L; Wilkinson, G; Williams, M P; Williams, M; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wright, S; Wu, S; Wyllie, K; Xie, Y; Xing, Z; Xu, Z; Yang, Z; Yuan, X; Yushchenko, O; Zangoli, M; Zavertyaev, M; Zhang, F; Zhang, L; Zhang, W C; Zhang, Y; Zhelezov, A; Zhokhov, A; Zhong, L; Zvyagin, A

    The polarisation of prompt [Formula: see text] mesons is measured by performing an angular analysis of [Formula: see text] decays using proton-proton collision data, corresponding to an integrated luminosity of 1.0[Formula: see text], collected by the LHCb detector at a centre-of-mass energy of 7 TeV. The polarisation is measured in bins of transverse momentum [Formula: see text] and rapidity [Formula: see text] in the kinematic region [Formula: see text] and [Formula: see text], and is compared to theoretical models. No significant polarisation is observed.

  4. Associated diacritical watermarking approach to protect sensitive arabic digital texts

    Science.gov (United States)

    Kamaruddin, Nurul Shamimi; Kamsin, Amirrudin; Hakak, Saqib

    2017-10-01

    Among multimedia content, one of the most predominant media is text. There have been many efforts to protect and secure text information over the Internet, and the limitations of existing works have been identified in terms of watermark capacity, time complexity and memory complexity. In this work, an invisible digital watermarking approach is proposed to protect and secure a highly sensitive text, the Digital Holy Quran. The proposed approach works by XOR-ing only those Quranic letters that have certain diacritics associated with them. Due to the sensitive nature of the Holy Quran, diacritics play a vital role in the meaning of a particular verse; hence, securing letters with certain diacritics will preserve the original meaning of Quranic verses in case of an alteration attempt. Initial results show that the proposed approach is promising, with lower memory and time complexity than existing approaches.
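A greatly simplified sketch of the diacritic-based idea follows. This is a hypothetical reduction, not the paper's scheme: it merely XORs the code points of letters that carry an Arabic diacritic (combining marks U+064B–U+0652) into an integrity checksum, so altering any diacritic-bearing letter becomes detectable.

```python
# Hypothetical simplification of diacritic-aware integrity protection:
# fold every (letter, diacritic) pair into a single XOR checksum.
# Arabic diacritics occupy the Unicode range U+064B..U+0652.
DIACRITICS = {chr(c) for c in range(0x064B, 0x0653)}

def diacritic_checksum(text):
    check = 0
    for prev, cur in zip(text, text[1:]):
        if cur in DIACRITICS:          # prev is a letter bearing a diacritic
            check ^= ord(prev) ^ ord(cur)
    return check
```

A real watermarking scheme would embed (not just verify) information and would need to survive reformatting; this sketch only shows why diacritic-bearing letters are a natural anchor.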

  5. Automatic extraction of ontological relations from Arabic text

    Directory of Open Access Journals (Sweden)

    Mohammed G.H. Al Zamil

    2014-12-01

    The proposed methodology has been designed to analyze Arabic text using lexical semantic patterns of the Arabic language according to a set of features. Next, the features are abstracted and enriched with formal descriptions for the purpose of generalizing the resulting rules. These rules then form a classifier that accepts Arabic text, analyzes it, and displays related concepts labeled with their designated relationships. Moreover, to resolve the ambiguity of homonyms, a set of machine translation, text mining, and part-of-speech tagging algorithms has been reused. We performed extensive experiments to measure the effectiveness of our proposed tools. The results indicate that our proposed methodology is promising for automating the process of extracting ontological relations.
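Pattern-based relation extraction of this kind can be sketched with a rule table mapping lexical patterns to relation labels. The patterns below are English stand-ins chosen for illustration; the paper's lexico-semantic patterns are for Arabic and are not reproduced here.

```python
# Rule-based ontological relation extraction: each rule pairs a lexical
# pattern with the relation it signals, and matches yield (source,
# relation, target) triples. (English stand-in patterns, not the paper's.)
import re

PATTERNS = [
    (re.compile(r'(\w+) is a (\w+)'), 'is-a'),
    (re.compile(r'(\w+) is part of (?:a |the )?(\w+)'), 'part-of'),
]

def extract_relations(text):
    triples = []
    for pattern, label in PATTERNS:
        for src, dst in pattern.findall(text):
            triples.append((src, label, dst))
    return triples
```

In a full system the rule table would be learned or enriched from the abstracted features, rather than hand-written.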

  6. Vegetation Parameter Extraction Using Dual Baseline Polarimetric SAR Interferometry Data

    Science.gov (United States)

    Zhang, H.; Wang, C.; Chen, X.; Tang, Y.

    2009-04-01

    For vegetation parameter inversion, single baseline polarimetric SAR interferometry (POLinSAR) techniques, such as the three-stage method and the ESPRIT algorithm, are limited by observed data with the minimum ground-to-volume amplitude ratio, which affects the estimation of the effective phase center for the vegetation canopy or the surface and thus results in underestimated vegetation height. In order to remove this limitation of single baseline inversion techniques to some extent, a second baseline of POLinSAR data is added for vegetation parameter estimation in this paper, and a dual baseline POLinSAR technique for the extraction of vegetation parameters is investigated and improved to reduce the dynamic bias of the estimation. Finally, simulated and real data are used to validate this dual baseline technique.

  7. Mobile characters, mobile texts: homelessness and intertextuality in contemporary texts for young people

    Directory of Open Access Journals (Sweden)

    Mavis Reimer

    2013-06-01

    Full Text Available Since the 1990s, narratives about homelessness for and about young people have proliferated around the world. A cluster of thematic elements shared by many of these narratives of the age of globalization points to the deep anxiety that is being expressed about a social, economic, and cultural system under stress or struggling to find a new formation. More surprisingly, many of the narratives also use canonical cultural texts extensively as intertexts. This article considers three novels from three different national traditions to address the work of intertextuality in narratives about homelessness: Skellig by UK author David Almond, which was published in 1998; Chronicler of the Winds by Swedish author Henning Mankell, which was first published in 1988 in Swedish as Comédia Infantil and published in an English translation in 2006; and Stained Glass by Canadian author Michael Bedard, which was published in 2002. Using Julia Kristeva's definition of intertextuality as the “transposition of one (or several) sign systems into another,” I propose that all intertexts can be thought of as metaphoric texts, in the precise sense that they carry one text into another. In the narratives under discussion in this article, the idea of homelessness is in perpetual motion between texts and intertexts, ground and figure, the literal and the symbolic. What the child characters and the readers who take up the position offered to implied readers are asked to do, I argue, is to put on a way of seeing that does not settle, a way of being that strains forward toward the new.

  8. Establishing a store baseline during interim storage of waste packages and a review of potential technologies for base-lining

    Energy Technology Data Exchange (ETDEWEB)

    McTeer, Jennifer; Morris, Jenny; Wickham, Stephen [Galson Sciences Ltd. Oakham, Rutland (United Kingdom); Bolton, Gary [National Nuclear Laboratory Risley, Warrington (United Kingdom); McKinney, James; Morris, Darrell [Nuclear Decommissioning Authority Moor Row, Cumbria (United Kingdom); Angus, Mike [National Nuclear Laboratory Risley, Warrington (United Kingdom); Cann, Gavin; Binks, Tracey [National Nuclear Laboratory Sellafield (United Kingdom)

    2013-07-01

    Interim storage is an essential component of the waste management lifecycle, providing a safe, secure environment for waste packages awaiting final disposal. In order to be able to monitor and detect change or degradation of the waste packages, storage building or equipment, it is necessary to know the original condition of these components (the 'waste storage system'). This paper presents an approach to establishing the baseline for a waste-storage system, and provides guidance on the selection and implementation of potential base-lining technologies. The approach is made up of two parts: assessment of base-lining needs and definition of the base-lining approach. During the assessment of base-lining needs, a review of available monitoring data and store/package records should be undertaken (if the store is operational). Evolutionary processes (affecting safety functions) and their corresponding indicators that can be measured to provide a baseline for the waste-storage system should then be identified, so that the most suitable indicators can be selected for base-lining. In defining the approach, opportunities to collect data and constraints are identified before selecting the techniques for base-lining and developing a base-lining plan. Base-lining data may be used to establish that the state of the packages is consistent with the waste acceptance criteria for the storage facility, and to support the interpretation of monitoring and inspection data collected during store operations. Opportunities and constraints are identified for different store and package types. Technologies that could potentially be used to measure baseline indicators are also reviewed. (authors)

  9. High Baseline Postconcussion Symptom Scores and Concussion Outcomes in Athletes.

    Science.gov (United States)

    Custer, Aimee; Sufrinko, Alicia; Elbin, R J; Covassin, Tracey; Collins, Micky; Kontos, Anthony

    2016-02-01

    Some healthy athletes report high levels of baseline concussion symptoms, which may be attributable to several factors (eg, illness, personality, somaticizing). However, the role of baseline symptoms in outcomes after sport-related concussion (SRC) has not been empirically examined. To determine if athletes with high symptom scores at baseline performed worse than athletes without baseline symptoms on neurocognitive testing after SRC. Cohort study. High school and collegiate athletic programs. A total of 670 high school and collegiate athletes participated in the study. Participants were divided into groups with either no baseline symptoms (Postconcussion Symptom Scale [PCSS] score = 0, n = 247) or a high level of baseline symptoms (PCSS score > 18 [top 10% of sample], n = 68). Participants were evaluated at baseline and 2 to 7 days after SRC with the Immediate Post-concussion Assessment and Cognitive Test and PCSS. Outcome measures were Immediate Post-concussion Assessment and Cognitive Test composite scores (verbal memory, visual memory, visual motor processing speed, and reaction time) and total symptom score on the PCSS. The groups were compared using repeated-measures analyses of variance with Bonferroni correction to assess interactions between group and time for symptoms and neurocognitive impairment. The no-symptoms group represented 38% of the original sample, whereas the high-symptoms group represented 11% of the sample. The high-symptoms group experienced a larger decline from preinjury to postinjury than the no-symptoms group in verbal (P = .03) and visual memory (P = .05). However, total concussion-symptom scores increased from preinjury to postinjury for the no-symptoms group (P = .001) but remained stable for the high-symptoms group. Reported baseline symptoms may help identify athletes at risk for worse outcomes after SRC. Clinicians should examine baseline symptom levels to better identify patients for earlier referral and treatment for their

  10. Near Detectors based on gas TPCs for neutrino long baseline experiments

    CERN Document Server

    Blondel, A

    2017-01-01

    Time Projection Chambers have been used with success for the T2K ND280 near detector and are proposed for an upgrade of the T2K near detector. High pressure TPCs are also being considered for future long-baseline experiments like Hyper-Kamiokande and DUNE. A High Pressure TPC would be a very sensitive detector for the detailed study of neutrino-nucleus interactions, a limiting factor for extracting the ultimate precision in long baseline experiments. The requirements of TPCs for neutrino detectors are quite specific. We propose here the development of state-of-the-art near detectors based on gas TPC: atmospheric pressure TPCs for T2K-II and a high-pressure TPC for neutrino experiments. The project proposed here benefits from a strong involvement of the European (CERN) members of the T2K collaboration and beyond. It is a strongly synergetic precursor of other projects of near detectors using gas TPCs that are under discussion for the long baseline neutrino projects worldwide. It will help maintain and develop...

  11. Visual Saliency Models for Text Detection in Real World.

    Directory of Open Access Journals (Sweden)

    Renwu Gao

    Full Text Available This paper evaluates the degree of saliency of texts in natural scenes using visual saliency models. A large scale scene image database with pixel level ground truth is created for this purpose. Using this scene image database and five state-of-the-art models, visual saliency maps that represent the degree of saliency of the objects are calculated. The receiver operating characteristic curve is employed in order to evaluate the saliency of scene texts, which is calculated by visual saliency models. A visualization of the distribution of scene texts and non-texts in the space constructed by three kinds of saliency maps, which are calculated using Itti's visual saliency model with intensity, color and orientation features, is given. This visualization of distribution indicates that text characters are more salient than their non-text neighbors, and can be captured from the background. Therefore, scene texts can be extracted from the scene images. With this in mind, a new visual saliency architecture, named hierarchical visual saliency model, is proposed. Hierarchical visual saliency model is based on Itti's model and consists of two stages. In the first stage, Itti's model is used to calculate the saliency map, and Otsu's global thresholding algorithm is applied to extract the salient region that we are interested in. In the second stage, Itti's model is applied to the salient region to calculate the final saliency map. An experimental evaluation demonstrates that the proposed model outperforms Itti's model in terms of captured scene texts.
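The two-stage design can be sketched with a toy saliency operator standing in for Itti's model. The deviation-from-mean map below is an assumption for illustration; the real model combines intensity, color, and orientation channels across scales, and Otsu's thresholding is applied between the stages as described.

```python
# Two-stage ("hierarchical") saliency sketch: stage 1 computes a coarse
# saliency map and Otsu-thresholds it to find the salient region; stage 2
# recomputes saliency within that region only.
import numpy as np

def saliency(img):
    # Toy stand-in for Itti's model: deviation from the mean intensity.
    return np.abs(img - img.mean())

def otsu_threshold(values, bins=64):
    # Classic Otsu: pick the threshold maximizing between-class variance.
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def hierarchical_saliency(img):
    s1 = saliency(img)                              # stage 1: global map
    mask = s1 > otsu_threshold(s1.ravel())          # Otsu-selected region
    ys, xs = np.nonzero(mask)
    r0, r1, c0, c1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    s2 = np.zeros_like(img, dtype=float)
    s2[r0:r1, c0:c1] = saliency(img[r0:r1, c0:c1])  # stage 2: local map
    return s2
```

Restricting the second pass to the salient region is what lets the model sharpen contrast around text-like structures relative to a single global pass.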

  12. Mass hierarchy sensitivity of medium baseline reactor neutrino experiments with multiple detectors

    Directory of Open Access Journals (Sweden)

    Hong-Xin Wang

    2017-05-01

    Full Text Available We report the neutrino mass hierarchy (MH) determination of medium baseline reactor neutrino experiments with multiple detectors, where the sensitivity of measuring the MH can be significantly improved by adding a near detector. Then the impact of the baseline and target mass of the near detector on the combined MH sensitivity has been studied thoroughly. The optimal selections of the baseline and target mass of the near detector are ∼12.5 km and ∼4 kton respectively for a far detector with the target mass of 20 kton and the baseline of 52.5 km. As typical examples of future medium baseline reactor neutrino experiments, the optimal location and target mass of the near detector are selected for the specific configurations of JUNO and RENO-50. Finally, we discuss distinct effects of the reactor antineutrino energy spectrum uncertainty for setups of a single detector and double detectors, which indicate that the spectrum uncertainty can be well constrained in the presence of the near detector.

  13. Baseliner: An open-source, interactive tool for processing sap flux data from thermal dissipation probes

    Directory of Open Access Journals (Sweden)

    A. Christopher Oishi

    2016-01-01

    Full Text Available Estimating transpiration from woody plants using thermal dissipation sap flux sensors requires careful data processing. Currently, researchers accomplish this using spreadsheets, or by personally writing scripts for statistical software programs (e.g., R, SAS). We developed the Baseliner software to help establish a standardized protocol for processing sap flux data. Baseliner enables users to QA/QC and process data using a combination of automated steps, visualization, and manual editing. Data processing requires establishing a zero-flow reference value, or “baseline”, which varies among sensors and with time. Since no set of algorithms currently exists to reliably QA/QC and estimate the zero-flow baseline, Baseliner provides a graphical user interface to allow visual inspection and manipulation of data. Data are first automatically processed using a set of user-defined parameters. The user can then view the data for additional, manual QA/QC and baseline identification using mouse and keyboard commands. The open-source software allows for user customization of data processing algorithms as improved methods are developed.
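
    Baseliner's exact QA/QC rules are not given in the abstract; as a minimal sketch of the zero-flow step, one common heuristic takes the nighttime maximum ΔT as the baseline (assuming negligible nocturnal flow) and derives a Granier-style flow index from it. The readings below are invented.

```python
import numpy as np

# Probe temperature differences (ΔT, °C) for one sensor; sap flow depresses ΔT,
# so the maximum ΔT observed at night approximates the zero-flow condition.
dT = np.array([10.0, 9.9, 9.0, 7.5, 8.0, 9.8])
night = np.array([True, True, False, False, False, True])

dt_max = dT[night].max()        # zero-flow reference value (the "baseline")
K = (dt_max - dT) / dT          # Granier-style dimensionless flow index
```

    In practice the baseline is re-estimated per sensor and allowed to drift over time, which is exactly the part Baseliner leaves to visual inspection.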

  14. Enriching text with images and colored light

    Science.gov (United States)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and subsequently the colors are computed using image processing. A prototype system based on this method is presented where the method is applied to song lyrics. In combination with a lyrics synchronization algorithm the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Per term, representative colors are extracted from the collected images. For this, we use either a histogram-based or a mean-shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of suitability of a term for color extraction based on KL Divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
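
    The KL-divergence suitability measure mentioned above can be sketched as a comparison between normalized color histograms. The 4-bin histograms and term pairings below are hypothetical, purely to show the shape of the computation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) between two normalized histograms; eps guards empty bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical 4-bin hue histograms for one color term in two languages, plus
# a flat histogram standing in for a term with no stable color association.
hist_en = np.array([0.70, 0.20, 0.05, 0.05])      # e.g. images for "red"
hist_fr = np.array([0.65, 0.25, 0.05, 0.05])      # e.g. images for "rouge"
hist_flat = np.array([0.25, 0.25, 0.25, 0.25])

d_translation = kl_divergence(hist_en, hist_fr)   # small: colors agree
d_unsuitable = kl_divergence(hist_en, hist_flat)  # large: term is not color-like
```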

  15. The LIFE Cognition Study: design and baseline characteristics

    Directory of Open Access Journals (Sweden)

    Sink KM

    2014-08-01

    Full Text Available Kaycee M Sink,1 Mark A Espeland,2 Julia Rushing,2 Cynthia M Castro,3 Timothy S Church,4 Ronald Cohen,5 Thomas M Gill,6 Leora Henkin,2 Janine M Jennings,7 Diana R Kerwin,8 Todd M Manini,5 Valerie Myers,9 Marco Pahor,5 Kieran F Reid,10 Nancy Woolard,1 Stephen R Rapp,11 Jeff D Williamson1 On behalf of LIFE Investigators 1Department of Internal Medicine, Section on Gerontology and Geriatric Medicine, Sticht Center on Aging, Wake Forest School of Medicine, Winston-Salem, NC, USA; 2Department of Biostatistical Sciences, Wake Forest School of Medicine, Winston-Salem, NC, USA; 3Stanford Prevention Research Center, Stanford University School of Medicine, Stanford, CA, USA; 4Pennington Biomedical, Louisiana State University, Baton Rouge, LA, USA; 5Institute on Aging and Department of Aging and Geriatric Research, University of Florida, Gainesville, FL, USA; 6Yale School of Medicine, Department of Internal Medicine, New Haven, CT, USA; 7Department of Psychology, Wake Forest University, Winston-Salem, NC, USA; 8Texas Alzheimer’s and Memory Disorders, Texas Health Presbyterian Hospital Dallas, TX, USA; 9Klein Buendel, Inc., Golden, CO, USA; 10Nutrition, Exercise Physiology and Sarcopenia Laboratory, Jean Mayer United States Department of Agriculture Human Nutrition Research Center on Aging, Tufts University, Boston, MA, USA; 11Department of Psychiatry and Behavioral Medicine, Wake Forest School of Medicine, Winston-Salem, NC, USA Abstract: Observational studies have shown beneficial relationships between exercise and cognitive function. Some clinical trials have also demonstrated improvements in cognitive function in response to moderate–high intensity aerobic exercise; however, these have been limited by relatively small sample sizes and short durations. The Lifestyle Interventions and Independence for Elders (LIFE) Study is the largest and longest randomized controlled clinical trial of physical activity with cognitive outcomes, in older sedentary

  16. Examining Text Complexity in the Early Grades

    Science.gov (United States)

    Fitzgerald, Jill; Elmore, Jeff; Hiebert, Elfrieda H.; Koons, Heather H.; Bowen, Kimberly; Sanford-Moore, Eleanor E.; Stenner, A. Jackson

    2016-01-01

    The Common Core raises the stature of texts to new heights, creating a hubbub. The fuss is especially messy at the early grades, where children are expected to read more complex texts than in the past. But early-grades teachers have been given little actionable guidance about text complexity. The authors recently examined early-grades texts to…

  17. Automatic Coding of Short Text Responses via Clustering in Educational Assessment

    Science.gov (United States)

    Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank

    2016-01-01

    Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…

  18. Negation scope and spelling variation for text-mining of Danish electronic patient records

    DEFF Research Database (Denmark)

    Thomas, Cecilia Engel; Jensen, Peter Bjødstrup; Werge, Thomas

    2014-01-01

    Electronic patient records are a potentially rich data source for knowledge extraction in biomedical research. Here we present a method based on the ICD10 system for text-mining of Danish health records. We have evaluated how adding functionalities to a baseline text-mining tool affected...

  19. Arabic Text Categorization Using Improved k-Nearest neighbour Algorithm

    Directory of Open Access Journals (Sweden)

    Wail Hamood KHALED

    2014-10-01

    Full Text Available The quantity of text information published in Arabic on the net requires effective techniques for extracting and classifying relevant information contained in large corpora of texts. In this paper we present an implementation of an enhanced k-NN Arabic text classifier. We apply the traditional k-NN and Naive Bayes classifiers from the Weka Toolkit for comparison. Our proposed modified k-NN algorithm features an improved decision rule that skips classes that are less similar and identifies the correct class from the k nearest neighbours, which increases accuracy. The study evaluates the improved decision rule using the standard measures of recall, precision and F-measure as the basis of comparison. We conclude that the effectiveness of the proposed classifier is promising and that it outperforms the classical k-NN classifier.
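
    The abstract does not spell out the improved decision rule, so the sketch below shows one plausible reading: a k-NN vote that drops neighbours whose similarity falls below a cutoff so weakly similar classes do not dilute the decision. The `min_sim` cutoff and the toy data are assumptions, not the paper's rule.

```python
import numpy as np
from collections import Counter

def knn_filtered_vote(sims, labels, k=5, min_sim=0.2):
    """k-NN vote that skips weakly similar neighbours; min_sim is an assumed cutoff."""
    order = np.argsort(sims)[::-1][:k]            # indices of the k most similar docs
    kept = [i for i in order if sims[i] >= min_sim]
    if not kept:                                   # fall back to the plain k-NN vote
        kept = list(order)
    votes = Counter(labels[i] for i in kept)
    return votes.most_common(1)[0][0]

# Cosine similarities of a test document to 6 training documents and their classes.
sims = np.array([0.90, 0.85, 0.10, 0.05, 0.80, 0.02])
labels = ["sport", "sport", "politics", "politics", "sport", "economy"]
predicted = knn_filtered_vote(sims, labels)
```

    With plain k = 5 voting, the two barely similar "politics" documents would still enter the vote; the cutoff removes them.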

  20. Text Summarization Using FrameNet-Based Semantic Graph Model

    Directory of Open Access Journals (Sweden)

    Xu Han

    2016-01-01

    Full Text Available Text summarization generates a condensed version of an original document. The major issues for text summarization are eliminating redundant information, identifying important differences among documents, and recovering the informative content. This paper proposes a FrameNet-based Semantic Graph Model (FSGM) which exploits the semantic information of sentences. FSGM treats sentences as vertices and the semantic relationships between them as edges. It uses FrameNet and word embeddings to calculate the similarity of sentences, and assigns weights to both sentence nodes and edges. Finally, it proposes an improved method to rank these sentences, considering both internal and external information. The experimental results show that the model is feasible and effective for summarizing text.
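
    The ranking step over a sentence-similarity graph can be sketched with a generic TextRank-style power iteration. This is not FSGM itself (FSGM derives its similarities from FrameNet and word embeddings and modifies the ranking); the similarity matrix below is invented.

```python
import numpy as np

def rank_sentences(S, d=0.85, tol=1e-9, max_iter=200):
    """TextRank-style scores from a sentence-similarity matrix S (zero diagonal)."""
    n = S.shape[0]
    col = S.sum(axis=0).astype(float)
    col[col == 0] = 1.0                       # guard isolated sentences
    M = S / col                               # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * M.dot(r)    # damped power iteration
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new

# Three sentences; sentence 0 is strongly similar to both others.
S = np.array([[0.0, 0.8, 0.6],
              [0.8, 0.0, 0.1],
              [0.6, 0.1, 0.0]])
r = rank_sentences(S)                         # highest-scoring sentences form the summary
```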

  1. Extracting Baseline Electricity Usage Using Gradient Tree Boosting

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taehoon [Ulsan Nat. Inst. of Sci. & Tech., Ulsan (South Korea); Lee, Dongeun [Ulsan Nat. Inst. of Sci. & Tech., Ulsan (South Korea); Choi, Jaesik [Ulsan Nat. Inst. of Sci. & Tech., Ulsan (South Korea); Spurlock, Anna [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Todd, Annika [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-05

    To understand how specific interventions affect a process observed over time, we need to control for the other factors that influence outcomes. Such a model that captures all factors other than the one of interest is generally known as a baseline. In our study of how different pricing schemes affect residential electricity consumption, the baseline would need to capture the impact of outdoor temperature along with many other factors. In this work, we examine a number of different data mining techniques and demonstrate Gradient Tree Boosting (GTB) to be an effective method to build the baseline. We train GTB on data prior to the introduction of new pricing schemes, and apply the known temperature following the introduction of new pricing schemes to predict electricity usage with the expected temperature correction. Our experiments and analyses show that the baseline models generated by GTB capture the core characteristics over the two years with the new pricing schemes. In contrast to the majority of regression based techniques which fail to capture the lag between the peak of daily temperature and the peak of electricity usage, the GTB generated baselines are able to correctly capture the delay between the temperature peak and the electricity peak. Furthermore, subtracting this temperature-adjusted baseline from the observed electricity usage, we find that the resulting values are more amenable to interpretation, which demonstrates that the temperature-adjusted baseline is indeed effective.
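
    The train-on-pre-period, predict-with-post-period-temperature, subtract-the-baseline workflow described above can be sketched end to end. The study's model is Gradient Tree Boosting (typically from a library); to keep this sketch dependency-free, a per-temperature-bin mean stands in for GTB, and all data are synthetic.

```python
import numpy as np

def fit_baseline(temp, usage, bins):
    """Stand-in baseline model: mean usage per outdoor-temperature bin.
    (The paper uses GTB here; the binned mean only illustrates the workflow.)"""
    idx = np.digitize(temp, bins)
    overall = usage.mean()
    means = np.array([usage[idx == b].mean() if np.any(idx == b) else overall
                      for b in range(len(bins) + 1)])
    return lambda t: means[np.digitize(t, bins)]

rng = np.random.default_rng(0)

# Pre-intervention training period: usage follows outdoor temperature.
temp_pre = rng.uniform(10.0, 35.0, 500)
usage_pre = 1.0 + 0.1 * temp_pre + rng.normal(0.0, 0.01, 500)
baseline = fit_baseline(temp_pre, usage_pre, bins=np.linspace(10.0, 35.0, 11))

# Post-intervention period: same weather response plus a 0.5 kWh behavioural shift.
temp_post = rng.uniform(10.0, 35.0, 500)
usage_post = 1.5 + 0.1 * temp_post + rng.normal(0.0, 0.01, 500)

# Subtracting the temperature-adjusted baseline isolates the intervention effect.
effect = usage_post - baseline(temp_post)
```

    The mean of `effect` recovers the injected 0.5 kWh shift, which is the quantity the temperature-adjusted baseline is meant to expose.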

  2. SAW Classification Algorithm for Chinese Text Classification

    OpenAIRE

    Xiaoli Guo; Huiyu Sun; Tiehua Zhou; Ling Wang; Zhaoyang Qu; Jiannan Zang

    2015-01-01

    Considering the explosive growth of data, the increasing volume of text places higher demands on the performance of text categorization than existing classification methods can satisfy. Based on a study of existing text classification technology and semantics, this paper puts forward a Chinese-text-classification-oriented SAW (Structural Auxiliary Word) algorithm. The algorithm uses the special spatial effect of Chinese text where words...

  3. Learning Convolutional Text Representations for Visual Question Answering

    OpenAIRE

    Wang, Zhengyang; Ji, Shuiwang

    2017-01-01

    Visual question answering is a recently proposed artificial intelligence task that requires a deep understanding of both images and texts. In deep learning, images are typically modeled through convolutional neural networks, and texts are typically modeled through recurrent neural networks. While the requirement for modeling images is similar to traditional computer vision tasks, such as object recognition and image classification, visual question answering raises a different need for textual...

  4. Stress: a naturalistic proposal

    Directory of Open Access Journals (Sweden)

    María de Lourdes Rodríguez Campuzano

    2013-08-01

    Full Text Available Some stress-related topics, especially from the conceptual framework of Lazarus and Folkman, are reviewed in this work. It is argued that this approach is dualistic and that research made from this view rests on morphological criteria that do not allow studying important elements of this kind of behavior. From an interbehavioral approach, three functional criteria are proposed to study this phenomenon: the functional nature of situations, aptitude levels of behavior, and its three dimensions. Emphasis is placed on the singular and individual nature of stress reactions. Finally, it is suggested that these functional criteria be taken into account to develop a generic situational taxonomy for studying these reactions as parts of complex behavioral patterns.

  5. A Unified Algorithm for Channel Imbalance and Antenna Phase Center Position Calibration of a Single-Pass Multi-Baseline TomoSAR System

    Directory of Open Access Journals (Sweden)

    Yuncheng Bu

    2018-03-01

    Full Text Available The multi-baseline synthetic aperture radar (SAR) tomography (TomoSAR) system is employed in such applications as disaster remote sensing, urban 3-D reconstruction, and forest carbon storage estimation, because of its 3-D imaging capability from a single-pass platform. However, the high 3-D resolution of TomoSAR rests on the premise that the channel imbalance and antenna phase center (APC) positions are precisely known; if they are not, the 3-D resolution performance is seriously degraded. In this paper, a unified algorithm for channel imbalance and APC position calibration of a single-pass multi-baseline TomoSAR system is proposed. Based on the maximum likelihood method, as well as least squares and the damped Newton method, we can calibrate the channel imbalance and APC position. The algorithm is suitable for near-field conditions, and no phase unwrapping operation is required. The effectiveness of the proposed algorithm has been verified by simulation and experimental results.
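
    The abstract names least squares and the damped Newton method as the optimization machinery. A generic damped Gauss-Newton loop is sketched below on an invented two-parameter "calibration" problem (gain and phase of a sinusoid); it illustrates the damping idea only, not the paper's actual channel-imbalance model.

```python
import numpy as np

def damped_gauss_newton(residual, jacobian, x0, iters=50, lam=1e-3):
    """Damped (Levenberg-style) Gauss-Newton for small least-squares problems."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        H = J.T @ J + lam * np.eye(x.size)    # damping keeps the normal equations well-posed
        step = np.linalg.solve(H, J.T @ r)
        x_try = x - step
        if np.sum(residual(x_try) ** 2) < np.sum(r ** 2):
            x, lam = x_try, lam / 2.0         # accepted: relax the damping
        else:
            lam *= 10.0                       # rejected: damp harder and retry
    return x

# Toy "calibration": recover gain a and phase b of m(t) = a*sin(t + b) from data.
t = np.linspace(0.0, 2.0 * np.pi, 40)
y = 2.0 * np.sin(t + 0.3)
res = lambda p: p[0] * np.sin(t + p[1]) - y
jac = lambda p: np.column_stack([np.sin(t + p[1]), p[0] * np.cos(t + p[1])])
p_hat = damped_gauss_newton(res, jac, x0=[1.0, 0.0])
```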

  6. CryoSat SAR/SARin Level1b products: assessment of BaselineC and improvements towards BaselineD

    Science.gov (United States)

    Scagliola, Michele; Fornari, Marco; Bouffard, Jerome; Parrinello, Tommaso

    2017-04-01

    CryoSat was launched on 8 April 2010 and is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice. CryoSat carries an innovative radar altimeter called the SAR Interferometric Radar Altimeter (SIRAL), which transmits pulses at a high pulse repetition frequency, making the received echoes phase coherent and suitable for azimuth processing. This allows a significantly improved along-track resolution with respect to traditional pulse-width-limited altimeters. CryoSat is the first altimetry mission operating in SAR mode, and continuous improvements in the Level1 Instrument Processing Facility (IPF1) are being identified, tested and validated in order to improve the quality of the Level1b products. The current IPF, Baseline C, was released into operation in April 2015, and the second CryoSat reprocessing campaign was jointly initiated, taking advantage of the upgrades implemented in the IPF1 processing chain as well as of some specific configurations for the calibration corrections. In particular, the CryoSat Level1b BaselineC products generated in the framework of the second reprocessing campaign include refined information concerning the mispointing angles and the calibration corrections. This poster will detail the evolutions currently planned for the CryoSat BaselineD SAR/SARin Level1b products and the corresponding quality improvements that are expected.

  7. Baseline Biomarkers for Outcome of Melanoma Patients Treated with Pembrolizumab

    NARCIS (Netherlands)

    Weide, Benjamin; Martens, Alexander; Hassel, Jessica C.; Berking, Carola; Postow, Michael A.; Bisschop, Kees; Simeone, Ester; Mangana, Johanna; Schilling, Bastian; Di Giacomo, Anna Maria; Brenner, Nicole; Kaehler, Katharina; Heinzerling, Lucie; Gutzmer, Ralf; Bender, Armin; Gebhardt, Christoffer; Romano, Emanuela; Meier, Friedegund; Martus, Peter; Maio, Michele; Blank, Christian; Schadendorf, Dirk; Dummer, Reinhard; Ascierto, Paolo A.; Hospers, Geke; Garbe, Claus; Wolchok, Jedd D.

    2016-01-01

    Purpose: Biomarkers for outcome after immune-checkpoint blockade are strongly needed as these may influence individual treatment selection or sequence. We aimed to identify baseline factors associated with overall survival (OS) after pembrolizumab treatment in melanoma patients. Experimental Design:

  8. Baseline assessment of fish communities of the Flower Garden Banks

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The work developed baseline information on fish and benthic communities within the Flower Garden Banks National Marine Sanctuary (FGBNMS). Surveys employed diving,...

  9. Information architecture. Volume 2, Part 1: Baseline analysis summary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The Department of Energy (DOE) Information Architecture, Volume 2, Baseline Analysis, is a collaborative and logical next-step effort in the processes required to produce a Departmentwide information architecture. The baseline analysis serves a diverse audience of program management and technical personnel and provides an organized way to examine the Department's existing or de facto information architecture. A companion document to Volume 1, The Foundations, it furnishes the rationale for establishing a Departmentwide information architecture. This volume, consisting of the Baseline Analysis Summary (part 1), Baseline Analysis (part 2), and Reference Data (part 3), is of interest to readers who wish to understand how the Department's current information architecture technologies are employed. The analysis identifies how and where current technologies support business areas, programs, sites, and corporate systems.

  10. Parametric estimation of time varying baselines in airborne interferometric SAR

    DEFF Research Database (Denmark)

    Mohr, Johan Jacob; Madsen, Søren Nørvang

    1996-01-01

    A method for estimation of time-varying spatial baselines in airborne interferometric synthetic aperture radar (SAR) is described. The range and azimuth distortions between two images acquired with a non-linear baseline are derived. A parametric model of the baseline is then estimated, in a least-squares sense, from image shifts obtained by cross correlation of numerous small patches throughout the image. The method has been applied to airborne EMISAR imagery from the 1995 campaign over the Storstrommen Glacier in North East Greenland conducted by the Danish Center for Remote Sensing. This has reduced the baseline uncertainties from several meters to the centimeter level in a 36 km scene. Though developed for airborne SAR, the method can easily be adapted to satellite data...
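
    The estimation step described above, fitting a parametric baseline model by least squares to patch-wise cross-correlation shifts, can be sketched as follows. The azimuth positions, quadratic model, and shift values are hypothetical stand-ins, not EMISAR data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical azimuth positions (km) of small cross-correlation patches and the
# range shifts (m) they report; the shifts vary smoothly with the baseline error.
az = np.linspace(0.0, 36.0, 19)
shift = 1.2 + 0.05 * az - 0.001 * az ** 2 + rng.normal(0.0, 0.002, az.size)

# Least-squares fit of a quadratic baseline model b(az) = c2*az^2 + c1*az + c0.
coeffs = np.polyfit(az, shift, deg=2)
resid = shift - np.polyval(coeffs, az)    # centimeter-level residuals remain
```

    The fitted polynomial plays the role of the parametric baseline model; many noisy patch measurements average down to a smooth, low-parameter estimate.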

  11. The semiotics of typography in literary texts. A multimodal approach

    DEFF Research Database (Denmark)

    Nørgaard, Nina

    2009-01-01

    to multimodal discourse proposed, for instance, by Kress & Van Leeuwen (2001) and Baldry & Thibault (2006), and, more specifically, the multimodal approach to typography suggested by Van Leeuwen (2005b; 2006), in order to sketch out a methodological framework applicable to the description and analysis of the semiotic potential of typography in literary texts.

  12. Semantic Linking and Contextualization for Social Forensic Text Analysis

    NARCIS (Netherlands)

    Ren, Z.; van Dijk, D.; Graus, D.; van der Knaap, N.; Henseler, H.; de Rijke, M.; Brynielsson, J.; Johansson, F.

    2013-01-01

    With the development of social media, forensic text analysis is becoming more and more challenging as forensic analysts have begun to include this information source in their practice. In this paper, we report on our recent work related to semantic search in e-discovery and propose the use of entity

  13. Partition of Ni between olivine and sulfide: the effect of temperature, fO2 and fS2

    Science.gov (United States)

    Fleet, M. E.; Macrae, N. D.

    1987-03-01

    The experimental distribution coefficient for Ni/Fe exchange between olivine and monosulfide (KD3) is 35.6 ± 1.1 at 1385 °C, fO2 = 10^-8.87, fS2 = 10^-1.02, and olivine of composition Fo96 to Fo92. These are the physicochemical conditions appropriate to hypothesized sulfur-saturated komatiite magma. The present experiments equilibrated natural olivine grains with sulfide-oxide liquid in the presence of a (Mg, Fe)-alumino-silicate melt. By a variety of different experimental procedures, KD3 is shown to be essentially constant at about 30 to 35 in the temperature range 900 to 1400 °C, for olivine of composition Fo97 to Fo0, monosulfide composition with up to 70 mol.% NiS, and a wide range of fO2 and fS2.

  14. Arabic text classification using Polynomial Networks

    Directory of Open Access Journals (Sweden)

    Mayy M. Al-Tahrawi

    2015-10-01

    Full Text Available In this paper, an Arabic statistical learning-based text classification system has been developed using Polynomial Neural Networks. Polynomial Networks have been recently applied to English text classification, but they were never used for Arabic text classification. In this research, we investigate the performance of Polynomial Networks in classifying Arabic texts. Experiments are conducted on a widely used Arabic dataset in text classification: Al-Jazeera News dataset. We chose this dataset to enable direct comparisons of the performance of Polynomial Networks classifier versus other well-known classifiers on this dataset in the literature of Arabic text classification. Results of experiments show that Polynomial Networks classifier is a competitive algorithm to the state-of-the-art ones in the field of Arabic text classification.

  15. Baselines For Land-Use Change In The Tropics: Application ToAvoided Deforestation Projects

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Sandra; Hall, Myrna; Andrasko, Ken; Ruiz, Fernando; Marzoli, Walter; Guerrero, Gabriela; Masera, Omar; Dushku, Aaron; Dejong,Ben; Cornell, Joseph

    2007-06-01

    /infrastructure factors (less observable) in explaining empirical land-use patterns. We propose, from the lessons learned, a methodology comprised of three main steps and six tasks that can be used to begin developing credible baselines. We also propose that the baselines be projected over a 10-year period because, although projections beyond 10 years are feasible, they are likely to be unrealistic for policy purposes. In the first step, an historic land-use change and deforestation estimate is made by determining the analytic domain (size of the region relative to the size of the proposed project), obtaining historic data, analyzing candidate historic baseline drivers, and identifying three to four major drivers. In the second step, a baseline of where deforestation is likely to occur -- a potential land-use change (PLUC) map -- is produced using a spatial model such as GEOMOD that uses the key drivers from step one. Then rates of deforestation are projected over a 10-year baseline period using any of the three models. Using the PLUC maps, projected rates of deforestation, and carbon stock estimates, baseline projections are developed that can be used for project GHG accounting and crediting purposes. The final step proposes that, at an agreed interval (e.g., +10 years), the baseline assumptions about baseline drivers be re-assessed. This step reviews the viability of the 10-year baseline in light of changes in one or more key baseline drivers (e.g., new roads, new communities, new protected area, etc.). The potential land-use change map and estimates of rates of deforestation could be redone at the agreed interval, allowing the rates and changes in spatial drivers to be incorporated into a defense of the existing baseline, or derivation of a new baseline projection.

  16. Validity and Reliability of Baseline Testing in a Standardized Environment.

    Science.gov (United States)

    Higgins, Kathryn L; Caze, Todd; Maerlender, Arthur

    2017-08-11

    The Immediate Postconcussion Assessment and Cognitive Testing (ImPACT) is a computerized neuropsychological test battery commonly used to determine cognitive recovery from concussion based on comparing post-injury scores to baseline scores. This model is based on the premise that ImPACT baseline test scores are a valid and reliable measure of optimal cognitive function at baseline. Growing evidence suggests that this premise may not be accurate and that a large contributor to invalid and unreliable baseline test scores may be the protocol and environment in which baseline tests are administered. This study examined the effects of a standardized environment and administration protocol on the reliability and performance validity of athletes' baseline test scores on ImPACT by comparing scores obtained in two different group-testing settings. Three hundred sixty-one Division 1 cohort-matched collegiate athletes' baseline data were assessed using a variety of indicators of potential performance invalidity; internal reliability was also examined. Thirty-one to thirty-nine percent of the baseline cases had at least one indicator of low performance validity, but there were no significant differences in validity indicators based on the environment in which the testing was conducted. Internal consistency reliability scores were in the acceptable to good range, with no significant differences between administration conditions. These results suggest that athletes may be reliably performing at levels lower than their best effort would produce.

  17. Multimodal Diversity of Postmodernist Fiction Text

    Directory of Open Access Journals (Sweden)

    U. I. Tykha

    2016-12-01

    Full Text Available The article is devoted to the analysis of structural and functional manifestations of multimodal diversity in postmodernist fiction texts. Multimodality is defined as the coexistence of more than one semiotic mode within a certain context. Multimodal texts feature a diversity of semiotic modes in the communication and development of their narrative. Such experimental texts subvert conventional patterns by introducing various semiotic resources – verbal or non-verbal.

  18. Baseline assessment of fish and benthic communities of the Flower Garden Banks (2010 - present) using remotely operated vehicle (ROV) survey methods: 2011

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The proposed work will develop baseline information on fish and benthic communities within the Flower Garden Banks National Marine Sanctuary (FGBNMS). Surveys will employ...

  19. National greenhouse gas emissions baseline scenarios. Learning from experiences in developing countries

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-04-15

    This report reviews national approaches to preparing baseline scenarios of greenhouse-gas (GHG) emissions. It does so by describing and comparing in non-technical language existing practices and choices made by ten developing countries - Brazil, China, Ethiopia, India, Indonesia, Kenya, Mexico, South Africa, Thailand and Vietnam. The review focuses on a number of key elements, including model choices, transparency considerations, choices about underlying assumptions and challenges associated with data management. The aim is to improve overall understanding of baseline scenarios and facilitate their use for policy-making in developing countries more broadly. The findings are based on the results of a collaborative project involving a number of activities undertaken by the Danish Energy Agency, the Organisation for Economic Co-operation and Development (OECD) and the UNEP Risoe Centre (URC), including a series of workshops on the subject. The ten contributing countries account for approximately 40% of current global GHG emissions - a share that is expected to increase in the future. The breakdown of emissions by sector varies widely among these countries. In some countries, the energy sector is the leading source of emissions; for others, the land-use sector and/or agricultural sector dominate emissions. The report underscores some common technical and financial capacity gaps faced by developing countries when preparing baseline scenarios. It does not endeavour to propose guidelines for preparing baseline scenarios. Rather, it is hoped that the report will inform any future attempts at preparing such kind of guidelines. (Author)

  20. Geochemical baseline level and function and contamination of phosphorus in Liao River Watershed sediments of China.

    Science.gov (United States)

    Liu, Shaoqing; Wang, Jing; Lin, Chunye; He, Mengchang; Liu, Xitao

    2013-10-15

    The quantitative assessment of P contamination in sediments is a challenge due to sediment heterogeneity and the lack of geochemical background or baseline levels. In this study, a procedure was proposed to determine the average P background level and the P geochemical baseline level (GBL), and to develop P geochemical baseline functions (GBF) for riverbed sediments of the Liao River Watershed (LRW). The LRW has two river systems: the Liao River System (LRS) and the Daliao River System (DRS). Eighty-eight samples were collected and analyzed for P, Al, Fe, Ca, organic matter, pH, and texture. The results show that Fe can be used as a better particle-size proxy to construct the GBF of P (P (mg/kg) = 39.98 + 166.19 × Fe (%), R^2 = 0.835, n = 66). The GBL of P was 675 mg/kg, while the average background level of P was 355 mg/kg. Since many large cities are located in the DRS watershed, most of the contaminated sites were within the DRS, and the riverbed sediments were more contaminated by P in the DRS watershed than in the LRS watershed. The geochemical background and baseline information for P is of great importance in managing P levels within the LRW.
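
    The fitted baseline function reported above is directly usable for screening: a sample's measured P is compared against the P level its Fe content predicts. The coefficients are taken from the abstract; the example sample values are hypothetical.

```python
def p_baseline(fe_percent):
    """P geochemical baseline (mg/kg) from Fe (%), using the function fitted
    for Liao River Watershed riverbed sediments."""
    return 39.98 + 166.19 * fe_percent

def p_enrichment_ratio(p_measured, fe_percent):
    """Measured P over its Fe-predicted baseline; values well above 1 flag contamination."""
    return p_measured / p_baseline(fe_percent)

# A hypothetical sediment sample with 3.8% Fe and 1350 mg/kg P:
ratio = p_enrichment_ratio(1350.0, 3.8)   # baseline ~671.5 mg/kg, ratio ~2.0
```

    Normalizing to Fe in this way corrects for grain-size effects before judging enrichment, which is the point of using a baseline function rather than a single background value.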

  1. Youth Texting: Help or Hindrance to Literacy?

    Science.gov (United States)

    Zebroff, Dmitri

    2018-01-01

    An extensive amount of research has been performed in recent years into the widespread practice of text messaging in youth. As part of this broad area of research, the associations between youth texting and literacy have been investigated in a variety of contexts. A comprehensive, semi-systematic review of the literature into texting and literacy…

  2. Choices of texts for literary education

    DEFF Research Database (Denmark)

    Skyggebjerg, Anna Karlskov

    This paper charts the general implications of the choice of texts for literature teaching in the Danish school system, especially in Grades 8 and 9. It will analyze and discuss the premises of the choice of texts, and the possibilities of a certain choice of text in a concrete classroom situation...

  3. Effects of Text Messaging on Academic Performance

    OpenAIRE

    Barks Amanda; Searight H. Russell; Ratwik Susan

    2011-01-01

    University students frequently send and receive cellular phone text messages during classroom instruction. Cognitive psychology research indicates that multi-tasking is frequently associated with performance cost. However, university students often have considerable experience with electronic multi-tasking and may believe that they can devote necessary attention to a classroom lecture while sending and receiving text messages. In the current study, university students who used text messaging were ...

  4. Text-Picture Relations in Cooking Instructions

    NARCIS (Netherlands)

    van der Sluis, Ielka; Leito, Shadira; Redeker, Gisela; Bunt, Harry

    2016-01-01

    Like many other instructions, recipes on packages with ready-to-use ingredients for a dish combine a series of pictures with short text paragraphs. The information presentation in such multimodal instructions can be compact (either text or picture) and/or cohesive (text and picture). In an

  5. Academic Journal Embargoes and Full Text Databases.

    Science.gov (United States)

    Brooks, Sam

    2003-01-01

    Documents the reasons for embargoes of academic journals in full text databases (i.e., publisher-imposed delays on the availability of full text content) and provides insight regarding common misconceptions. Tables present data on selected journals covering a cross-section of subjects and publishers and comparing two full text business databases.…

  6. A quick survey of text categorization algorithms

    Directory of Open Access Journals (Sweden)

    Dan MUNTEANU

    2007-12-01

    Full Text Available This paper contains an overview of basic formulations and approaches to text classification. It surveys the algorithms used in text categorization: handcrafted rules, decision trees, decision rules, on-line learning, linear classifiers, Rocchio's algorithm, k Nearest Neighbor (kNN), and Support Vector Machines (SVM).
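
    Of the surveyed algorithms, kNN is the easiest to show end to end: represent documents as bag-of-words vectors, rank training documents by cosine similarity to the query, and take a majority vote among the k nearest. A toy sketch (whitespace tokenization, the helper names, and the tiny corpus are illustrative assumptions, not from the survey):

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector as a Counter of lowercase whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(query, labeled_docs, k=3):
    """Majority label among the k training documents most similar to query."""
    q = bow(query)
    ranked = sorted(labeled_docs, key=lambda d: cosine(q, bow(d[0])), reverse=True)
    top = [label for _, label in ranked[:k]]
    return Counter(top).most_common(1)[0][0]
```

    With a handful of labeled snippets per category, a query whose vocabulary overlaps one category's documents is assigned that category's label.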

  7. Inclusion in the Workplace - Text Version | NREL

    Science.gov (United States)

    Careers » Inclusion in the Workplace - Text Version Inclusion in the Workplace - Text Version This is the text version for the Inclusion: Leading by Example video. I'm Martin Keller. I'm the NREL of the laboratory. Another very important element in inclusion is diversity. Because if we have a

  8. Effects of Text Messaging on Academic Performance

    Directory of Open Access Journals (Sweden)

    Barks Amanda

    2011-12-01

    Full Text Available University students frequently send and receive cellular phone text messages during classroom instruction. Cognitive psychology research indicates that multi-tasking is frequently associated with performance cost. However, university students often have considerable experience with electronic multi-tasking and may believe that they can devote necessary attention to a classroom lecture while sending and receiving text messages. In the current study, university students who used text messaging were randomly assigned to one of two conditions: 1. a group that sent and received text messages during a lecture or, 2. a group that did not engage in text messaging during the lecture. Participants who engaged in text messaging demonstrated significantly poorer performance on a test covering lecture content compared with the group that did not send and receive text messages. Participants exhibiting higher levels of text messaging skill had significantly lower test scores than participants who were less proficient at text messaging. It is hypothesized that in terms of retention of lecture material, more frequent task shifting by those with greater text messaging proficiency contributed to poorer performance. Overall, the findings do not support the view, held by many university students, that this form of multitasking has little effect on the acquisition of lecture content. Results provide empirical support for teachers and professors who ban text messaging in the classroom.

  9. Hiding Techniques for Dynamic Encryption Text based on Corner Point

    Science.gov (United States)

    Abdullatif, Firas A.; Abdullatif, Alaa A.; al-Saffar, Amna

    2018-05-01

    A hiding technique for dynamic text encryption using an encoding table and a symmetric encryption method (the AES algorithm) is presented in this paper. The encoding table is generated dynamically from the MSB of the cover image points and used as the first phase of encryption. The Harris corner point algorithm is applied to the cover image to generate the corner points, which are used to generate a dynamic AES key for the second phase of text encryption. The embedding process uses the LSB of the image pixels, excluding the Harris corner points, for greater robustness. Experimental results have demonstrated that the proposed scheme has good embedding quality, error-free text recovery, and a high PSNR value.
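
    The full pipeline (dynamic encoding table, AES key derived from Harris corners) is beyond a short snippet, but the final embedding step is classic LSB substitution, which can be sketched on a raw byte buffer (the helper names and plain-bytes interface are assumptions; no encryption or corner-point exclusion is shown):

```python
def embed_lsb(pixels: bytearray, message: bytes) -> bytearray:
    """Hide message bits, MSB first, in the least significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set the message bit
    return out

def extract_lsb(pixels: bytearray, n_bytes: int) -> bytes:
    """Recover n_bytes hidden by embed_lsb."""
    out = []
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[8 * j + i] & 1)
        out.append(byte)
    return bytes(out)
```

    A real implementation of the paper's scheme would embed ciphertext rather than plaintext and skip the corner-point pixels; since only the LSB changes, each cover byte is perturbed by at most 1.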

  10. THE 2014 ALMA LONG BASELINE CAMPAIGN: AN OVERVIEW

    Energy Technology Data Exchange (ETDEWEB)

    Partnership, ALMA [Astrophysics Research Institute, Liverpool John Moores University, IC2, Liverpool Science Park, 146 Brownlow Hill, Liverpool L3 5RF (United Kingdom); Fomalont, E. B.; Vlahakis, C.; Corder, S.; Remijan, A.; Barkats, D.; Dent, W. R. F.; Phillips, N.; Cox, P.; Hales, A. S. [Joint ALMA Observatory, Alonso de Córdova 3107, Vitacura, Santiago (Chile); Lucas, R. [Institut de Planétologie et d’Astrophysique de Grenoble (UMR 5274), BP 53, F-38041 Grenoble Cedex 9 (France); Hunter, T. R.; Brogan, C. L.; Amestica, R.; Cotton, W. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Asaki, Y. [National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Matsushita, S. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 106, Taiwan (China); Hills, R. E. [Astrophysics Group, Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Richards, A. M. S. [Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, University of Manchester, Oxford Road, Manchester M13 9PL (United Kingdom); Broguiere, D., E-mail: efomalon@nrao.edu [Institut de Radioastronomie Millime´trique (IRAM), 300 rue de la Piscine, Domaine Universitaire, F-38406 Saint Martin d’Hères (France); and others

    2015-07-20

    A major goal of the Atacama Large Millimeter/submillimeter Array (ALMA) is to make accurate images with resolutions of tens of milliarcseconds, which at submillimeter (submm) wavelengths requires baselines up to ∼15 km. To develop and test this capability, a Long Baseline Campaign (LBC) was carried out from 2014 September to late November, culminating in end-to-end observations, calibrations, and imaging of selected Science Verification (SV) targets. This paper presents an overview of the campaign and its main results, including an investigation of the short-term coherence properties and systematic phase errors over the long baselines at the ALMA site, a summary of the SV targets and observations, and recommendations for science observing strategies at long baselines. Deep ALMA images of the quasar 3C 138 at 97 and 241 GHz are also compared to VLA 43 GHz results, demonstrating an agreement at a level of a few percent. As a result of the extensive program of LBC testing, the highly successful SV imaging at long baselines achieved angular resolutions as fine as 19 mas at ∼350 GHz. Observing with ALMA on baselines of up to 15 km is now possible, and opens up new parameter space for submm astronomy.
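
    The quoted numbers can be sanity-checked with the diffraction relation θ ≈ λ/B; a sketch (the ~1.22 aperture factor and uv-weighting effects are omitted, so this is only an order-of-magnitude estimate):

```python
import math

C = 299_792_458.0                          # speed of light, m/s
RAD_TO_MAS = 180 / math.pi * 3600 * 1000   # radians -> milliarcseconds

def resolution_mas(freq_ghz: float, baseline_km: float) -> float:
    """Approximate interferometric resolution theta ~ lambda / B, in mas."""
    wavelength = C / (freq_ghz * 1e9)      # metres
    return wavelength / (baseline_km * 1e3) * RAD_TO_MAS
```

    For 350 GHz on a 15 km baseline this comes out near 12 mas, the same order as the 19 mas actually achieved at ~350 GHz.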

  11. THE 2014 ALMA LONG BASELINE CAMPAIGN: AN OVERVIEW

    International Nuclear Information System (INIS)

    Partnership, ALMA; Fomalont, E. B.; Vlahakis, C.; Corder, S.; Remijan, A.; Barkats, D.; Dent, W. R. F.; Phillips, N.; Cox, P.; Hales, A. S.; Lucas, R.; Hunter, T. R.; Brogan, C. L.; Amestica, R.; Cotton, W.; Asaki, Y.; Matsushita, S.; Hills, R. E.; Richards, A. M. S.; Broguiere, D.

    2015-01-01

    A major goal of the Atacama Large Millimeter/submillimeter Array (ALMA) is to make accurate images with resolutions of tens of milliarcseconds, which at submillimeter (submm) wavelengths requires baselines up to ∼15 km. To develop and test this capability, a Long Baseline Campaign (LBC) was carried out from 2014 September to late November, culminating in end-to-end observations, calibrations, and imaging of selected Science Verification (SV) targets. This paper presents an overview of the campaign and its main results, including an investigation of the short-term coherence properties and systematic phase errors over the long baselines at the ALMA site, a summary of the SV targets and observations, and recommendations for science observing strategies at long baselines. Deep ALMA images of the quasar 3C 138 at 97 and 241 GHz are also compared to VLA 43 GHz results, demonstrating an agreement at a level of a few percent. As a result of the extensive program of LBC testing, the highly successful SV imaging at long baselines achieved angular resolutions as fine as 19 mas at ∼350 GHz. Observing with ALMA on baselines of up to 15 km is now possible, and opens up new parameter space for submm astronomy

  12. A Novel Approach for Arabic Text Steganography Based on the “BloodGroup” Text Hiding Method

    Directory of Open Access Journals (Sweden)

    S. Malalla

    2017-04-01

    Full Text Available Steganography is the science of hiding certain messages (data) in groups of irrelevant data, possibly of another form. The purpose of steganography is covert communication that hides the existence of a message from an intermediary. Text steganography is the process of embedding a secret message (text) in another text (the cover text) so that the existence of the secret message cannot be detected by a third party. This paper presents a novel approach for text steganography using the Blood Group (BG) method, based on the behavior of blood groups. Experimentally it is found that the proposed method achieves good results in hiding capacity, time complexity, robustness, visibility, and similarity, which shows its superiority compared to several existing methods.

  13. Figure text extraction in biomedical literature.

    Directory of Open Access Journals (Sweden)

    Daehyun Kim

    2011-01-01

    Full Text Available Figures are ubiquitous in biomedical full-text articles, and they represent important biomedical knowledge. However, the sheer volume of biomedical publications has made it necessary to develop computational approaches for accessing figures. Therefore, we are developing the Biomedical Figure Search engine (http://figuresearch.askHERMES.org) to allow bioscientists to access figures efficiently. Since text frequently appears in figures, automatically extracting such text may assist the task of mining information from figures. Little research, however, has been conducted exploring text extraction from biomedical figures. We first evaluated an off-the-shelf Optical Character Recognition (OCR) tool on its ability to extract text from figures appearing in biomedical full-text articles. We then developed a Figure Text Extraction Tool (FigTExT) to improve the performance of the OCR tool for figure text extraction through the use of three innovative components: image preprocessing, character recognition, and text correction. We first developed image preprocessing to enhance image quality and to improve text localization. Then we adapted the off-the-shelf OCR tool on the improved text localization for character recognition. Finally, we developed and evaluated a novel text correction framework by taking advantage of figure-specific lexicons. The evaluation on 382 figures (9,643 figure texts in total) randomly selected from PubMed Central full-text articles shows that FigTExT performed with 84% precision, 98% recall, and 90% F1-score for text localization and with 62.5% precision, 51.0% recall and 56.2% F1-score for figure text extraction. When limiting figure texts to those judged by domain experts to be important content, FigTExT performed with 87.3% precision, 68.8% recall, and 77% F1-score. 
FigTExT significantly improved the performance of the off-the-shelf OCR tool we used, which on its own performed with 36.6% precision, 19.3% recall, and 25.3% F1-score for
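
    The F1-scores quoted for FigTExT follow from the harmonic mean of precision and recall, which is easy to verify:

```python
def f1(precision: float, recall: float) -> float:
    """F1-score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```

    For instance, f1(0.84, 0.98) ≈ 0.905 and f1(0.625, 0.51) ≈ 0.562, matching the reported 90% and 56.2%.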

  14. Pilot study of psychotherapeutic text messaging for depression.

    Science.gov (United States)

    Pfeiffer, Paul N; Henry, Jennifer; Ganoczy, Dara; Piette, John D

    2017-08-01

    Background Text messaging services could increase access to psychotherapeutic content for individuals with depression by avoiding barriers to in-person psychotherapy such as cost, transportation, and therapist availability. Determining whether text messages reflecting different psychotherapeutic techniques exhibit differences in acceptability or effectiveness may help guide service development. Objectives We aimed to determine: (1) the feasibility of delivering a psychotherapy-based text messaging service to people with depression identified via the internet, (2) whether there is variation in satisfaction with messages according to the type of psychotherapeutic technique they represent, and (3) whether symptoms of depression vary according to receipt of each message type and participants' satisfaction with the messages they received. Methods For this study 190 US adults who screened positive for a major depressive episode (Patient Health Questionnaire (PHQ-9) score ≥10) were recruited from online advertisements. Participants received a daily psychotherapy-based text message 6 days per week for 12 weeks. Text messages were developed by a team of psychiatrists, psychologists, and social workers to reflect three psychotherapeutic approaches: acceptance and commitment therapy (ACT), behavioural activation, and cognitive restructuring. Each week the message type for the week was randomly assigned from one of the three types, allowing for repeats. Participants were asked daily to rate each message. On the 7th day of each week, participants completed a two-item depression screener (PHQ-2). Web-based surveys at baseline, 6, and 12 weeks were used as the primary measure of depressive symptoms (PHQ-9). Results Of the 190 participants enrolled, 85 (45%) completed the 6-week web survey and 67 (35%) completed the 12-week survey. The mean baseline PHQ-9 score was 19.4 (SD 4.2) and there was a statistically significant mean improvement in PHQ-9 scores of -2.9 (SD 6.0; p

  15. Precise baseline determination for the TanDEM-X mission

    Science.gov (United States)

    Koenig, Rolf; Moon, Yongjin; Neumayer, Hans; Wermuth, Martin; Montenbruck, Oliver; Jäggi, Adrian

    The TanDEM-X mission will strive for generating a global precise Digital Elevation Model (DEM) by way of bi-static SAR in a close formation of the TerraSAR-X satellite, already launched on June 15, 2007, and the TanDEM-X satellite to be launched in May 2010. Both satellites carry the Tracking, Occultation and Ranging (TOR) payload supplied by the GFZ German Research Centre for Geosciences. The TOR consists of a high-precision dual-frequency GPS receiver, called Integrated GPS Occultation Receiver (IGOR), and a Laser retro-reflector (LRR) for precise orbit determination (POD) and atmospheric sounding. The IGOR is of vital importance for the TanDEM-X mission objectives, as the millimeter-level determination of the baseline or distance between the two spacecraft is needed to derive meter-level accurate DEMs. Within the TanDEM-X ground segment, GFZ is responsible for the operational provision of precise baselines. For this, GFZ uses two software chains: first, its Earth Parameter and Orbit System (EPOS) software, and second, the BERNESE software, for backup purposes and quality control. In a concerted effort, the German Aerospace Center (DLR) also generates precise baselines independently, with a dedicated Kalman filter approach realized in its FRNS software. Using GRACE as an example, the generation of baselines with millimeter accuracy from on-board GPS data can be validated directly by comparing them to the inter-satellite K-band range measurements. The K-band ranges are accurate down to the micrometer level and may therefore be considered as truth. Both TanDEM-X baseline providers are able to generate GRACE baselines with sub-millimeter accuracy. By merging the independent baselines from GFZ and DLR, the accuracy can even be increased. The K-band validation, however, covers solely the along-track component, as the K-band data measure just the distance between the two GRACE satellites. 
    In addition, they exhibit an unknown bias which must be modelled in the comparison, so the
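
    The abstract notes that merging the independent GFZ and DLR baselines increases accuracy. One standard way to combine independent estimates (not necessarily the operational scheme used here) is inverse-variance weighting:

```python
def merge_estimates(values, sigmas):
    """Inverse-variance weighted mean of independent estimates and its 1-sigma error."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, total ** -0.5  # combined error shrinks as estimates are added
```

    Two independent estimates with 1 mm errors combine to a single estimate with roughly 0.71 mm error, illustrating why merging the two providers' baselines helps.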

  16. An Embedded Application for Degraded Text Recognition

    Directory of Open Access Journals (Sweden)

    Thillou Céline

    2005-01-01

    Full Text Available This paper describes a mobile device which tries to give the blind or visually impaired access to text information. Three key technologies are required for this system: text detection, optical character recognition, and speech synthesis. Blind users and the mobile environment imply two strong constraints. First, pictures will be taken without control over camera settings and without a priori information on the text (font or size) and background. The second issue is to link several techniques together with an optimal compromise between computational constraints and recognition efficiency. We will present the overall description of the system from text detection to OCR error correction.

  17. The Instructional Text as a Textual Genre

    Directory of Open Access Journals (Sweden)

    Adiane Fogali Marinello

    2011-07-01

    Full Text Available This article analyses the instructional text as a textual genre and is part of the research project Reading and text production from the textual genre perspective, conducted at Universidade de Caxias do Sul, Campus Universitário da Região dos Vinhedos. Firstly, some theoretical assumptions about textual genres are presented; then, the instructional text is characterized. After that, an instructional text is analyzed and, finally, some reading and writing activities related to this genre, aimed at high school and university students, are suggested.

  18. Semantic Document Image Classification Based on Valuable Text Pattern

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2011-01-01

    Full Text Available Knowledge extraction from detected document images is a complex problem in the field of information technology. This problem becomes more intricate given that only a negligible percentage of the detected document images are valuable. In this paper, a segmentation-based classification algorithm is used to analyze the document image. In this algorithm, using a two-stage segmentation approach, regions of the image are detected and then classified as document or non-document (pure region) regions in a hierarchical classification. A novel definition of value is proposed to classify document images into valuable or invaluable categories. The proposed algorithm is evaluated on a database consisting of document and non-document images obtained from the Internet. Experimental results show the efficiency of the proposed algorithm in semantic document image classification. The proposed algorithm achieves an accuracy rate of 98.8% on the valuable and invaluable document image classification problem.

  19. Script-independent text line segmentation in freestyle handwritten documents.

    Science.gov (United States)

    Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2008-08-01

    Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ( [1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.

  20. Learning From Short Text Streams With Topic Drifts.

    Science.gov (United States)

    Li, Peipei; He, Lu; Wang, Haiyan; Hu, Xuegang; Zhang, Yuhong; Li, Lei; Wu, Xindong

    2017-09-18

    Short text streams such as search snippets and micro blogs have become popular on the Web with the emergence of social media. Unlike traditional normal text streams, these data present the characteristics of short length, weak signal, high volume, high velocity, topic drift, etc. Short text stream classification is hence a very challenging and significant task. However, this challenge has received little attention from the research community. Therefore, a new feature extension approach is proposed for short text stream classification with the help of a large-scale semantic network obtained from a Web corpus. It is built on an incremental ensemble classification model for efficiency. First, more semantic contexts based on the senses of terms in short texts are introduced to make up for the data sparsity using the open semantic network, in which all terms are disambiguated by their semantics to reduce the noise impact. Second, a concept cluster-based topic drifting detection method is proposed to effectively track hidden topic drifts. Finally, extensive studies demonstrate that as compared to several well-known concept drifting detection methods in data streams, our approach can detect topic drifts effectively, and it enables handling short text streams effectively while maintaining the efficiency as compared to several state-of-the-art short text classification approaches.

  1. Text mining with R a tidy approach

    CERN Document Server

    Silge, Julia

    2017-01-01

    Much of the data available today is unstructured and text-heavy, making it challenging for analysts to apply their usual data wrangling and visualization tools. With this practical book, you'll explore text-mining techniques with tidytext, a package that authors Julia Silge and David Robinson developed using the tidy principles behind R packages like ggraph and dplyr. You'll learn how tidytext and other tidy tools in R can make text analysis easier and more effective. The authors demonstrate how treating text as data frames enables you to manipulate, summarize, and visualize characteristics of text. You'll also learn how to integrate natural language processing (NLP) into effective workflows. Practical code examples and data explorations will help you generate real insights from literature, news, and social media. Learn how to apply the tidy text format to NLP Use sentiment analysis to mine the emotional content of text Identify a document's most important terms with frequency measurements E...

  2. The nuclear modification of charged particles in Pb-Pb at $\\sqrt{\\text{s}_\\text{NN}} = \\text{5.02}\\,\\text{TeV}$ measured with ALICE

    CERN Document Server

    Gronefeld, Julius

    2016-09-21

    The study of inclusive charged-particle production in heavy-ion collisions provides insights into the density of the medium and the energy-loss mechanisms. The observed suppression of high-$\textit{p}_\text{T}$ yield is generally attributed to energy loss of partons as they propagate through a deconfined state of quarks and gluons - Quark-Gluon Plasma (QGP) - predicted by QCD. Such measurements allow the characterization of the QGP by comparison with models. In these proceedings, results on high-$\textit{p}_\text{T}$ particle production measured by ALICE in Pb-Pb collisions at $ \sqrt{\text{s}_\text{NN}}\, = 5.02\ \rm{TeV}$ as well as in pp at $\sqrt{\text{s}}\,=5.02\ \rm{TeV}$ are presented for the first time. The nuclear modification factors ($\text{R}_\text{AA}$) in Pb-Pb collisions are presented and compared with model calculations.

  3. Scheme for Generation highly monochromatic X-Rays from a baseline XFEL undulator

    International Nuclear Information System (INIS)

    Geloni, Gianluca; Kocharyan, Vitali; Saldin, Evgeni

    2010-03-01

    One goal of XFEL facilities is the production of narrow-bandwidth X-ray radiation. The self-seeding scheme was proposed to obtain a bandwidth narrower than that achievable with conventional X-ray SASE FELs. A self-seeded FEL is composed of two undulators separated by a monochromator and an electron beam bypass that must compensate for the path delay of X-rays in the monochromator. This leads to a long bypass, with a length on the order of 40-60 m, which requires modifications of the baseline undulator configuration. As an attempt to get around this obstacle, together with a study of the self-seeding scheme for the European XFEL, here we propose a novel technique based on a pulse doubler concept. Using a crystal monochromator installed within a short magnetic chicane in the baseline undulator, it is possible to decrease the bandwidth of the radiation well beyond the XFEL design, down to 10^-5. The magnetic chicane can be installed without any perturbation of the XFEL focusing structure, and does not interfere with the baseline mode of operation. We present a feasibility study and we make exemplifications with the parameters of the SASE2 line of the European XFEL. (orig.)

  4. Profiling School Shooters: Automatic Text-Based Analysis

    Directory of Open Access Journals (Sweden)

    Yair eNeuman

    2015-06-01

    Full Text Available School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by six school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters' texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology.

  5. Geographical baselines of sustainable planning of the regional development of Zasavje region

    Directory of Open Access Journals (Sweden)

    Dušan Plut

    2002-12-01

    Full Text Available Geographical baselines of planning regional development and interventions into the geographical environment derive from the premise of continuously adjusting anthropogenic changes in the landscape to the specific capacities and limitations of landscape-forming components. In the landscape-degraded region of Zasavje, the improvement of environmental quality (curative measures) and regional economic progress within the carrying capacities of the environment and space (preventative measures) are the primary, developmentally and environmentally devised goals of the development strategy.

  6. ASM Based Synthesis of Handwritten Arabic Text Pages

    Directory of Open Access Journals (Sweden)

    Laslo Dinges

    2015-01-01

    Full Text Available Document analysis tasks, such as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for the case of Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents and detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever no sufficient naturally ground-truthed data is available.

  7. Placental baseline conditions modulate the hyperoxic BOLD-MRI response.

    Science.gov (United States)

    Sinding, Marianne; Peters, David A; Poulsen, Sofie S; Frøkjær, Jens B; Christiansen, Ole B; Petersen, Astrid; Uldbjerg, Niels; Sørensen, Anne

    2018-01-01

    Human pregnancies complicated by placental dysfunction may be characterized by a high hyperoxic Blood oxygen level-dependent (BOLD) MRI response. The pathophysiology behind this phenomenon remains to be established. The aim of this study was to evaluate whether it is associated with altered placental baseline conditions, including a lower oxygenation and altered tissue morphology, as estimated by the placental transverse relaxation time (T2*). We included 49 normal pregnancies (controls) and 13 pregnancies complicated by placental dysfunction (cases), defined by a low birth weight. We obtained the relative ΔBOLD response ((hyperoxic BOLD - baseline BOLD)/baseline BOLD) from a dynamic single-echo gradient-recalled echo (GRE) MRI sequence and the absolute ΔT2* (hyperoxic T2* - baseline T2*) from breath-hold multi-echo GRE sequences. In the control group, the relative ΔBOLD response increased during gestation from 5% in gestational week 20 to 20% in week 40. In the case group, the relative ΔBOLD response was significantly higher (mean Z-score 4.94; 95% CI 2.41, 7.47). The absolute ΔT2*, however, did not differ between controls and cases (p = 0.37), whereas the baseline T2* was lower among cases (mean Z-score -3.13; 95% CI -3.94, -2.32). Furthermore, we demonstrated a strong negative linear correlation between the Log10 ΔBOLD response and the baseline T2* (r = -0.88). The high hyperoxic BOLD response thus appears to be associated with altered placental baseline conditions, as the absolute increase in placental oxygenation (ΔT2*) does not differ between groups. Copyright © 2017 Elsevier Ltd. All rights reserved.
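
    The relative ΔBOLD response used in this study is a simple normalized difference; for concreteness (the function name is illustrative):

```python
def relative_delta_bold(hyperoxic: float, baseline: float) -> float:
    """Relative BOLD change: (hyperoxic - baseline) / baseline."""
    return (hyperoxic - baseline) / baseline
```

    A hyperoxic signal 20% above baseline yields 0.20, matching the control-group value reported for gestational week 40.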

  8. Adaptive Text Entry for Mobile Devices

    DEFF Research Database (Denmark)

    Proschowsky, Morten Smidt

    The reduced size of many mobile devices makes it difficult to enter text with them: the text entry methods are often slow or complicated to use, which affects the performance and user experience of all applications and services on the device. This work introduces new easy-to-use text entry methods for mobile devices and a framework for adaptive context-aware language models. Based on an analysis of current text entry methods, the requirements for the new text entry methods are established. Transparent User guided Prediction (TUP) is a text entry method for devices with one-dimensional touch input, such as touch-sensitive wheels, sliders, or similar input devices. The interaction design of TUP combines high-level task models with low-level models of human motor behaviour. Three prototypes of TUP were designed and evaluated by more than 30 users. Observations from the evaluations are used...

  9. Planning Multisentential English Text Using Communicative Acts

    Science.gov (United States)

    1990-12-01

    ...investigate how attentional constraints relate to text planning and linguistic realization. What is the relation of communicative intentions to text structure and surface form? What effects can texts be designed to have... Subject terms: Natural Language Generation.

  10. Text Mining of Supreme Administrative Court Jurisdictions

    OpenAIRE

    Feinerer, Ingo; Hornik, Kurt

    2007-01-01

    Within the last decade text mining, i.e., extracting sensitive information from text corpora, has become a major factor in business intelligence. The automated textual analysis of law corpora is highly valuable because of its impact on a company's legal options and the raw amount of available jurisdiction. The study of supreme court jurisdiction and international law corpora is equally important due to its effects on business sectors. In this paper we use text mining methods to investigate Au...

  11. MERI: an ultra-long-baseline Moon-Earth radio interferometer.

    Science.gov (United States)

    Burns, J. O.

    Radiofrequency aperture synthesis, pioneered by Ryle and his colleagues at Cambridge in the 1960s, has evolved to ever longer baselines and larger arrays in recent years. The limiting resolution at a given frequency for modern ground-based very-long-baseline interferometry (VLBI) is set simply by the physical diameter of the Earth. A second-generation, totally space-based VLBI network was recently proposed by a group at the Naval Research Laboratory. The next logical extension of space-based VLBI would be a station or stations on the Moon, which could serve as an outpost or even as the primary correlator station for an extended array of space-based antennas.
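
The resolution limit the record refers to follows from the diffraction relation θ ≈ λ/B. A quick comparison of an Earth-diameter baseline with a Moon-Earth baseline (the 21 cm hydrogen-line wavelength is a hypothetical choice for illustration; the record quotes no numbers):

```python
import math

def angular_resolution_mas(wavelength_m, baseline_m):
    """Diffraction-limited resolution theta = lambda / B, in milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return theta_rad * (180.0 / math.pi) * 3600.0 * 1000.0  # rad -> mas

earth_diameter = 1.27e7   # m: ground-based VLBI limit
moon_earth = 3.84e8       # m: mean Earth-Moon distance

print(angular_resolution_mas(0.21, earth_diameter))  # ~3.4 mas
print(angular_resolution_mas(0.21, moon_earth))      # ~0.11 mas
```

Lengthening the baseline by the Earth-Moon distance thus sharpens the resolution by roughly a factor of 30 at any fixed observing wavelength.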

  12. Ecological risk assessments for the baseline condition for the Port Hope and Port Granby Projects

    International Nuclear Information System (INIS)

    Hart, D.R.; Kleb, H.

    2006-01-01

    Baseline ecological risk assessments were completed in and around the areas where cleanup of low-level radioactive waste (LLRW) and marginally contaminated soil (MCS) is planned under the Port Hope Area Initiative (PHAI). Both aquatic and terrestrial environments were assessed, in the vicinity of the proposed waste management facilities near Welcome and Port Granby, in locations potentially influenced by LLRW and MCS that will be cleaned up in future, and in reference locations that are not potentially influenced. The calculated doses and risk quotients suggest potential radiation effects for pre-cleanup benthic invertebrates in Port Hope Harbour, for any ducks feeding exclusively in this area, and for soil invertebrates in some other waste sites. In addition, risk quotients suggest potential baseline effects from some elements, particularly uranium and arsenic, in localized areas that are influenced by LLRW and MCS. (author)
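
Screening-level ecological risk assessments of this kind typically express risk as a quotient of estimated exposure over a toxicity benchmark. A minimal sketch of that convention; the concentrations and benchmark below are hypothetical, not values from the Port Hope assessment.

```python
def risk_quotient(exposure, benchmark):
    """RQ = exposure concentration / toxicity benchmark; RQ >= 1 flags risk."""
    return exposure / benchmark

# Hypothetical soil uranium concentrations (mg/kg) and screening benchmark:
soil_uranium = {"waste site": 250.0, "reference site": 2.5}
benchmark_mg_per_kg = 100.0

for site, conc in soil_uranium.items():
    rq = risk_quotient(conc, benchmark_mg_per_kg)
    flag = "potential effect" if rq >= 1 else "below screening level"
    print(f"{site}: RQ = {rq:.2f} ({flag})")
```

Comparing waste-influenced locations against reference locations, as the study does, amounts to checking whether the quotient exceeds unity only where LLRW and MCS are present.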

  13. Science and Technology Text Mining Basic Concepts

    National Research Council Canada - National Science Library

    Losiewicz, Paul

    2003-01-01

    ...). It then presents some of the most widely used data and text mining techniques, including clustering and classification methods, such as nearest neighbor, relational learning models, and genetic...

  14. Using Unlabeled Data to Improve Text Classification

    National Research Council Canada - National Science Library

    Nigam, Kamal P

    2001-01-01

    .... This dissertation demonstrates that supervised learning algorithms that use a small number of labeled examples and many inexpensive unlabeled examples can create high-accuracy text classifiers...

  15. Atmospheric pressure loading parameters from very long baseline interferometry observations

    Science.gov (United States)

    Macmillan, D. S.; Gipson, John M.

    1994-01-01

    Atmospheric mass loading produces a primarily vertical displacement of the Earth's crust. This displacement is correlated with surface pressure and is large enough to be detected by very long baseline interferometry (VLBI) measurements. Using the measured surface pressure at VLBI stations, we have estimated the atmospheric loading term for each station location directly from VLBI data acquired from 1979 to 1992. Our estimates of the vertical sensitivity to change in pressure range from 0 to -0.6 mm/mbar depending on the station. These estimates agree with inverted barometer model calculations (Manabe et al., 1991; vanDam and Herring, 1994) of the vertical displacement sensitivity computed by convolving actual pressure distributions with loading Green's functions. The pressure sensitivity tends to be smaller for stations near the coast, which is consistent with the inverted barometer hypothesis. Applying this estimated pressure loading correction in standard VLBI geodetic analysis improves the repeatability of estimated lengths of 25 out of 37 baselines that were measured at least 50 times. In a root-sum-square (rss) sense, the improvement generally increases with baseline length at a rate of about 0.3 to 0.6 ppb depending on whether the baseline stations are close to the coast. For the 5998-km baseline from Westford, Massachusetts, to Wettzell, Germany, the rss improvement is about 3.6 mm out of 11.0 mm. The average rss reduction of the vertical scatter for inland stations ranges from 2.7 to 5.4 mm.
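
The loading correction described above is a linear response to local surface-pressure anomalies. A minimal sketch; the -0.4 mm/mbar sensitivity is one value inside the 0 to -0.6 mm/mbar range quoted in the record, and the reference pressure is a standard-atmosphere assumption.

```python
def loading_displacement_mm(pressure_mbar, reference_mbar=1013.25,
                            sensitivity_mm_per_mbar=-0.4):
    """Vertical crustal displacement (mm); negative = subsidence under high pressure."""
    return sensitivity_mm_per_mbar * (pressure_mbar - reference_mbar)

# A 20 mbar high-pressure anomaly depresses the station by 8 mm:
print(loading_displacement_mm(1033.25))   # -8.0
```

Subtracting this term from each station's vertical position is what reduces the baseline-length scatter reported in the record.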

  16. Scheme for generating and transporting THz radiation to the X-ray experimental floor at LCLS baseline

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany); Kocharyan, Vitali; Saldin, Evgeni [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2011-08-15

    This paper describes a novel scheme for integrating a coherent THz source into the baseline of the LCLS facility. Any method relying on the spent electron beam downstream of the baseline undulator must provide a way of transporting the radiation up to the experimental floor. Here we propose to use the dump-area access maze. The THz output must therefore propagate with limited size for at least one hundred meters through a maze with many turns to reach the near experimental hall. A standard, discrete, open beam waveguide formed by periodic reflectors (a mirror guide) would lead to an unacceptable system size. To avoid these problems, we propose an alternative approach based on periodically spaced metallic screens with holes; this quasi-optical transmission line is referred to as an iris line. We present complete calculations for the iris line using both analytical and numerical methods, which we find in good agreement. We present a design of a THz edge-radiation source based on the iris line. The proposed setup requires almost no cost or time to implement at the LCLS baseline and can be used at other facilities as well. The edge-radiation source is limited in the maximally achievable field strength at the sample. An extension based on an undulator operated in the presence of the iris line, feasible at LCLS energies, is proposed as a possible upgrade of the baseline THz source. (orig.)

  17. Scheme for generating and transporting THz radiation to the X-ray experimental floor at LCLS baseline

    International Nuclear Information System (INIS)

    Geloni, Gianluca; Kocharyan, Vitali; Saldin, Evgeni

    2011-08-01

    This paper describes a novel scheme for integrating a coherent THz source into the baseline of the LCLS facility. Any method relying on the spent electron beam downstream of the baseline undulator must provide a way of transporting the radiation up to the experimental floor. Here we propose to use the dump-area access maze. The THz output must therefore propagate with limited size for at least one hundred meters through a maze with many turns to reach the near experimental hall. A standard, discrete, open beam waveguide formed by periodic reflectors (a mirror guide) would lead to an unacceptable system size. To avoid these problems, we propose an alternative approach based on periodically spaced metallic screens with holes; this quasi-optical transmission line is referred to as an iris line. We present complete calculations for the iris line using both analytical and numerical methods, which we find in good agreement. We present a design of a THz edge-radiation source based on the iris line. The proposed setup requires almost no cost or time to implement at the LCLS baseline and can be used at other facilities as well. The edge-radiation source is limited in the maximally achievable field strength at the sample. An extension based on an undulator operated in the presence of the iris line, feasible at LCLS energies, is proposed as a possible upgrade of the baseline THz source. (orig.)

  18. Mouse Chromosome 4 Is Associated with the Baseline and Allergic IgE Phenotypes

    Directory of Open Access Journals (Sweden)

    Cynthia Kanagaratham

    2017-08-01

    Full Text Available Regulation of IgE concentration in the blood is a complex trait, with high concentrations associated with parasitic infections as well as allergic diseases. A/J strain mice have significantly higher plasma concentrations of IgE, both at baseline and after ovalbumin antigen exposure, when compared to C57BL/6J strain mice. Our objective was to determine the genomic regions associated with this difference in phenotype. To achieve this, we used a panel of recombinant congenic strains (RCS) derived from the A/J and C57BL/6J strains. We measured IgE in the RCS panel at baseline and following allergen exposure. Using marker-by-marker analysis of the RCS genotype and phenotype data, we identified multiple regions associated with the IgE phenotype. A single region was associated with baseline IgE level, while multiple regions were associated with the phenotype after allergen exposure. The most significant region was found on Chromosome 4, from 81.46 to 86.17 Mbp. Chromosome 4 substitution strain mice had significantly higher concentrations of IgE than their background parental strain, C57BL/6J. Our data present multiple candidate regions associated with plasma IgE concentration at baseline and following allergen exposure, with the most significant one located on Chromosome 4.

  19. Text-Independent Speaker Identification Using the Histogram Transform Model

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Yu, Hong; Tan, Zheng-Hua

    2016-01-01

    In this paper, we propose a novel probabilistic method for the task of text-independent speaker identification (SI). In order to capture dynamic information during SI, we design super-MFCC features by cascading three neighboring Mel-frequency cepstral coefficient (MFCC) frames together. These super-MFCC vectors are utilized for probabilistic model training such that the speaker's characteristics can be sufficiently captured. The probability density function (PDF) of the aforementioned super-MFCC features is estimated by the recently proposed histogram transform (HT) method. To recedes...
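
The super-MFCC construction described above amounts to concatenating each frame with its left and right neighbors, turning a (T, d) MFCC sequence into a (T-2, 3d) sequence. A minimal sketch; the MFCC values here are random placeholders, as real features would come from an audio front end.

```python
import numpy as np

def super_mfcc(frames):
    """Cascade three neighboring MFCC frames into one super vector per frame."""
    frames = np.asarray(frames)
    # Rows i-1, i, i+1 stacked side by side for every interior frame i:
    return np.hstack([frames[:-2], frames[1:-1], frames[2:]])

mfcc = np.random.randn(100, 13)   # 100 frames of 13 MFCCs (placeholder data)
sup = super_mfcc(mfcc)
print(sup.shape)                  # (98, 39)
```

The two boundary frames are dropped because they lack a full neighborhood; padding the ends would be an equally valid design choice.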

  20. Overfitting Reduction of Text Classification Based on AdaBELM

    Directory of Open Access Journals (Sweden)

    Xiaoyue Feng

    2017-07-01

    Full Text Available Overfitting is an important problem in machine learning. Several algorithms, such as the extreme learning machine (ELM), suffer from this issue when facing high-dimensional sparse data, e.g., in text classification. One common issue is that the extent of overfitting is not well quantified. In this paper, we propose a quantitative measure of overfitting referred to as the rate of overfitting (RO) and a novel model, named AdaBELM, to reduce the overfitting. With RO, the overfitting problem can be quantitatively measured and identified. The newly proposed model can achieve high performance on multi-class text classification. To evaluate the generalizability of the new model, we designed experiments based on three datasets, i.e., the 20 Newsgroups, Reuters-21578, and BioMed corpora, which represent balanced, unbalanced, and real application data, respectively. Experimental results demonstrate that AdaBELM can reduce overfitting and outperform classical ELM, decision trees, random forests, and AdaBoost on all three text-classification datasets; for example, it can achieve 62.2% higher accuracy than ELM. Therefore, the proposed model has good generalizability.
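
The abstract introduces a rate-of-overfitting (RO) measure but does not give its formula. The gap-based measure below is a hypothetical stand-in to show what quantifying overfitting can look like, not the paper's actual definition of RO.

```python
def overfitting_rate(train_acc, test_acc):
    """Relative train/test accuracy gap; 0.0 means no measured overfitting."""
    return max(0.0, (train_acc - test_acc) / train_acc)

print(overfitting_rate(0.99, 0.62))   # ~0.374: large gap, strong overfitting
print(overfitting_rate(0.85, 0.84))   # ~0.012: small gap, little overfitting
```

Any such measure lets the three corpora in the record (balanced, unbalanced, and real application data) be compared on a common overfitting scale rather than by raw accuracy alone.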