WorldWideScience

Sample records for abbreviated scale sapas

  1. Clinical utility of Standardised Assessment of Personality - Abbreviated Scale (SAPAS) among patients with first episode depression

    Bukh, Jens Drachmann; Bock, Camilla; Vinberg, Maj; Gether, Ulrik; Kessing, Lars Vedel

    BACKGROUND: Personality disorder frequently co-occurs with depression and seems to be associated with a poorer outcome of treatment and increased risk for recurrences. However, diagnosing personality disorder can be lengthy and requires some training. Therefore, a brief screening interview for comorbid personality disorder among patients suffering from depression would be of clinical use. METHOD: The present study aimed to assess the utility of the Standardised Assessment of Personality - Abbreviated Scale (SAPAS) as a screen for personality disorder in a population of patients recently... Structured Clinical Interview for DSM-IV Personality Disorders. RESULTS: We found that a cut-off of 3 on the screen correctly identified the presence of comorbid personality disorder in 73.1% of the patients. The sensitivity and specificity were 0.80 and 0.70, respectively. LIMITATIONS: The findings cannot...
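    The screening statistics above are linked by a standard identity: the overall proportion correctly classified equals sensitivity weighted by prevalence plus specificity weighted by (1 − prevalence). A minimal sketch; the prevalence value here is an assumption chosen for illustration (it is the value under which 0.80 and 0.70 reproduce the reported 73.1%), not a figure from the study.

```python
def overall_accuracy(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Proportion of all patients correctly classified by a screening cut-off."""
    return sensitivity * prevalence + specificity * (1.0 - prevalence)

# SAPAS figures from the abstract; prevalence 0.31 is an assumed illustration
acc = overall_accuracy(sensitivity=0.80, specificity=0.70, prevalence=0.31)
print(f"overall accuracy: {acc:.3f}")  # overall accuracy: 0.731
```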

  2. Test Review: Wechsler Abbreviated Scale of Intelligence, Second Edition

    Irby, Sarah M.; Floyd, Randy G.

    2013-01-01

    The Wechsler Abbreviated Scale of Intelligence, Second Edition (WASI-II; Wechsler, 2011) is a brief intelligence test designed for individuals aged 6 through 90 years. It is a revision of the Wechsler Abbreviated Scale of Intelligence (WASI; Wechsler, 1999). During revision, there were three goals: enhancing the link between the Wechsler…

  3. Abbreviations

    Brine, Kevin R.; Ciletti, Elena; Lähnemann, Henrike

    2013-01-01

    List of abbreviations of books of the Bible and versions of the Bible as used in this volume (according to The Chicago Manual of Style, 15th ed., 2003, sections 15.50–54). The Jewish Bible/Old Testament Am Amos 1 Chr 1 Chronicles 2 Chr 2 Chronicles Dn Daniel Dt Deuteronomy Eccl Ecclesiastes Est Esther Ex Exodus Ez Ezekiel Gn Genesis Hg Haggai Hos Hosea Is Isaiah Jer Jeremiah Jb Job Jl Joel Jon Jonah Jo Joshua Jgs Judges 1 Kgs 1 Kings 2 Kgs 2 Kings Lam Lamentations Lv Leviticus Mal Malachi Mi ...

  4. Abbreviations

    2013-01-01

    "AB" The official French logo for certified organic produce ("Agriculture Biologique") CF Conventional farming EF Ecological farming IFS Integrated farming systems LIF Low-input farming OF Organic farming OFgc Organic farming under group certification AFSAA Agence Française de Sécurité Sanitaire des Aliments (French food safety agency) AMAP Association pour le Maintien d'une Agriculture Paysanne (Association for the maintenance of small-scale farming – there is a network of such associations ...

  5. Test Review: Review of the Wechsler Abbreviated Scale of Intelligence, Second Edition (WASI-II)

    McCrimmon, Adam W.; Smith, Amanda D.

    2013-01-01

    The Wechsler Abbreviated Scale of Intelligence, Second Edition (WASI-II; Wechsler, 2011), published by Pearson, is a newly updated abbreviated measure of cognitive intelligence designed for individuals 6 to 90 years of age. Primarily used in clinical, psychoeducational, and research settings, the WASI-II was developed to quickly and accurately…

  6. The Abbreviated Injury Scale: application to autopsy data.

    Adams, V I; Carrubba, C

    1998-09-01

    Twenty autopsy reports, comprising 1 fall, 1 cutting, 1 burn, 1 drowning, 1 strangulation, 3 gunshot wound, and 13 traffic fatalities, were scored by the Abbreviated Injury Scale (AIS) and the Injury Severity Score (ISS). The codes were adequate for wounds of skin and long bones, and for most wounds of viscera. The autopsy descriptions were more detailed than the coding criteria for craniocerebral, cervicovertebral and muscular trauma, and less detailed for thoracoabdominal visceral, and long bone trauma. Lung contusions and rib fractures received scores that seemed unduly high, possibly reflecting the greater sensitivity of autopsy diagnosis over clinical diagnosis for these lesions. Complete hinge fractures of the skull base scored 4 (severe), which does not reflect the almost universally lethal nature of the accompanying cerebral concussion, which was itself not codeable. AIS scores were low and did not seem to reflect the lethal outcome when the lethal mechanism was purely physiologic and without a striking morphologic derangement, as in instances of cerebral or cardiac concussion, compression of the neck, occlusive airway hemorrhage, and visceral herniation into an adjacent body cavity. The scores were similarly low when therapy was delayed or adverse. Low AIS and ISS scores in a fatality from blunt or penetrating trauma may be useful retrospective clues to the presence of purely physiologic death mechanisms or therapeutic problems. PMID:9760090
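    The ISS used above is conventionally derived from AIS scores as the sum of squares of the three highest scores from different body regions, with any AIS of 6 giving the maximum ISS of 75. A short sketch under that standard convention; the example injuries are hypothetical, not taken from the study's cases.

```python
def injury_severity_score(region_ais: dict) -> int:
    """ISS from the highest AIS score (0-6) in each of the six ISS body regions.

    By convention, any AIS of 6 yields the maximum ISS of 75; otherwise
    ISS is the sum of squares of the three highest region scores.
    """
    scores = sorted(region_ais.values(), reverse=True)
    if scores and scores[0] == 6:
        return 75
    return sum(s * s for s in scores[:3])

# Hypothetical traffic fatality: severe head (5), chest (4), abdomen (3)
iss = injury_severity_score(
    {"head": 5, "face": 1, "chest": 4, "abdomen": 3, "extremities": 2, "external": 1}
)
print(iss)  # 25 + 16 + 9 = 50
```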

  7. Adaptation of abbreviated mathematics anxiety rating scale for engineering students

    Nordin, Sayed Kushairi Sayed; Samat, Khairul Fadzli; Sultan, Al Amin Mohamed; Halim, Bushra Abdul; Ismail, Siti Fatimah; Mafazi, Nurul Wirdah

    2015-05-01

    Mathematics is an essential and fundamental tool used by engineers to analyse and solve problems in their field. Due to this, most engineering education programs involve a concentration of study in mathematics, whereby engineering students have to take courses such as numerical methods, differential equations and calculus in the first two years and continue to do so until the completion of the sequence. However, many students struggle with courses that require mathematical abilities. Hence, this study presents the factors that caused mathematics anxiety among engineering students, using the Abbreviated Mathematics Anxiety Rating Scale (AMARS) administered to 95 students of Universiti Teknikal Malaysia Melaka (UTeM). From the 25 items in AMARS, principal component analysis (PCA) suggested four mathematics anxiety factors, namely experiences of learning mathematics, cognitive skills, mathematics evaluation anxiety and students' perception of mathematics. Minitab 16 software was used to analyse the nonparametric statistics. A Kruskal-Wallis test indicated a significant difference in the experience of learning mathematics and mathematics evaluation anxiety among races. A chi-square test of independence revealed that the experience of learning mathematics, cognitive skills and mathematics evaluation anxiety depend on students' SPM Additional Mathematics results. Based on this study, it is recommended to address anxiety problems among engineering students at an early stage of university study. Thus, lecturers should play their part by ensuring a positive classroom environment which encourages students to study mathematics without fear.
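    The Kruskal-Wallis test named above compares groups on ranks rather than raw values. A stdlib-only sketch of the H statistic (average ranks for ties, no tie correction), applied to hypothetical AMARS subscale totals rather than the UTeM data; H is compared against a chi-square distribution with k − 1 degrees of freedom.

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic; tied values get their average rank."""
    pooled = sorted((value, gi) for gi, group in enumerate(groups)
                    for value in group)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n_total:
        j = i
        while j < n_total and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12.0 / (n_total * (n_total + 1)) * sum(
        rs * rs / len(group) for rs, group in zip(rank_sums, groups)
    ) - 3.0 * (n_total + 1)

# Hypothetical anxiety-subscale totals for three groups of students
print(round(kruskal_wallis_h([18, 22, 25], [31, 27, 35], [20, 24, 23]), 2))  # 5.42
```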

  8. A Confirmatory Factor Analysis of the Structure of Abbreviated Math Anxiety Scale

    Farahman Farrokhi; Shahram Vahedi

    2011-01-01

    Objective: The aim of this study is to explore the confirmatory factor analysis results of the Persian adaptation of the Abbreviated Math Anxiety Scale (AMAS), proposed by Hopko, Mahadevan, Bare & Hunt. Method: The validity and reliability assessments of the scale were performed on 298 college students chosen randomly from Tabriz University in Iran. The confirmatory factor analysis (CFA) was carried out to determine the factor structures of the Persian version of AMAS. Results: As expected,...

  9. Application of abbreviated injury scale and injury severity score in fatal cases with abdominopelvic injuries.

    Subedi, Nuwadatta; Yadav, Bishwanath; Jha, Shivendra

    2014-12-01

    In forensic casework, investigation of injury severity is important in evaluating mortality, occasionally in terms of the adequacy of clinical management. The study was conducted with the objective of examining the relationship of injury severity, using the Abbreviated Injury Scale and Injury Severity Score (ISS), with survival period and place of death among fatal cases of abdominopelvic trauma. The total number of cases studied was 80. Injuries in all body parts were scored using the Abbreviated Injury Scale 2005, Update 2008, and the ISS was calculated. The male/female ratio was 4:1, and the mean (SD) age was 30.76 (15.2) years. The cause of trauma was road traffic accidents in 82.5% of the cases. The median duration of survival was 2 hours. The mean (SD) ISS was 38.90 (14.89). Abbreviated Injury Scale scores of 5 and 4 were the most common in the region. With increase in the ISS, the survival period decreased. There was a highly significant difference between the mean ISS of the victims who died prehospital and that of those who died in the emergency department (P < 0.005). The mean ISS of the victims who died in the emergency department and of those who died in the ward, intensive care unit, or after discharge was also significantly different (P < 0.05). Although the cases with more severe injuries died sooner, there should be provision of treatment on the spot without delay. More time taken to start treatment increases the fatalities. PMID:25354224

  10. SAPA: A Multi-objective Metric Temporal Planner

    Do, M; 10.1613/jair.1156

    2011-01-01

    SAPA is a domain-independent heuristic forward chaining planner that can handle durative actions, metric resource constraints, and deadline goals. It is designed to be capable of handling the multi-objective nature of metric temporal planning. Our technical contributions include (i) planning-graph based methods for deriving heuristics that are sensitive to both cost and makespan (ii) techniques for adjusting the heuristic estimates to take action interactions and metric resource limitations into account and (iii) a linear time greedy post-processing technique to improve execution flexibility of the solution plans. An implementation of SAPA using many of the techniques presented in this paper was one of the best domain independent planners for domains with metric and temporal constraints in the third International Planning Competition, held at AIPS-02. We describe the technical details of extracting the heuristics and present an empirical evaluation of the current implementation of SAPA.
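    Contribution (i) above concerns heuristics sensitive to both plan cost and makespan. A schematic way to illustrate that tradeoff, not SAPA's actual heuristic machinery, is a linear combination of the two estimates with a user-chosen weight.

```python
def combined_heuristic(h_cost: float, h_makespan: float, alpha: float = 0.5) -> float:
    """Weighted combination of cost and makespan estimates, alpha in [0, 1].

    alpha near 1 biases search toward cheaper plans; alpha near 0
    biases it toward shorter (faster) plans.
    """
    return alpha * h_cost + (1.0 - alpha) * h_makespan

# Favor cost over makespan for this hypothetical state evaluation
print(round(combined_heuristic(10.0, 4.0, alpha=0.8), 2))  # 8.8
```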

  11. Rearrangement of sapA homologs with conserved and variable regions in Campylobacter fetus.

    Tummuru, M K; Blaser, M J

    1993-08-01

    The Campylobacter fetus surface-layer (S-layer) proteins mediate both complement resistance and antigenic variation in mammalian hosts. Wild-type strain 23D possesses the sapA gene, which encodes a 97-kDa S-layer protein, and several sapA homologs are present in both wild-type and mutant strains. Here we report that a cloned silent gene (sapA1) in C. fetus can express a functional full-length S-layer protein in Escherichia coli. Analysis of sapA and sapA1 and partial analysis of sapA2 indicate that a block of approximately 600 bp beginning upstream and continuing into the open reading frames is completely conserved, and then the sequences diverge completely, but immediately downstream of each gene is another conserved 50-bp sequence. Conservation of sapA1 among strains, the presence of a putative Chi (RecBCD recognition) site upstream of sapA, sapA1, and sapA2, and the sequence identities of the sapA genes suggest a system for homologous recombination. Comparison of the wild-type strain (23D) with a phenotypic variant (23D-11) indicates that variation is associated with removal of the divergent region of sapA from the expression locus and exchange with a corresponding region from a sapA homolog. We propose that site-specific reciprocal recombination between sapA homologs leads to expression of divergent S-layer proteins as one of the mechanisms that C. fetus uses for antigenic variation. PMID:8346244

  12. A comparative validation of the abbreviated Apathy Evaluation Scale (AES-10) with the Neuropsychiatric Inventory apathy subscale against diagnostic criteria of apathy.

    Leontjevas, R.; Evers-Stephan, A.; Smalbrugge, M.; Pot, A.M.; Thewissen, V.; Gerritsen, D.L.; Koopmans, R.T.C.M.

    2012-01-01

    OBJECTIVE: To compare the Neuropsychiatric Inventory apathy subscale (NPIa) with the abbreviated Apathy Evaluation Scale (AES-10) on discriminant validity and on their performance to distinguish residents as apathetic or nonapathetic. DESIGN: Cross-sectional design. SETTING: Nursing home. PARTICIPAN

  13. Self efficacy for fruit, vegetable and water intakes: Expanded and abbreviated scales from item response modeling analyses

    Cullen Karen W

    2010-03-01

    Full Text Available Abstract Objective To improve an existing measure of fruit and vegetable intake self efficacy by including items that varied on levels of difficulty, and testing a corresponding measure of water intake self efficacy. Design Cross sectional assessment. Items were modified to have easy, moderate and difficult levels of self efficacy. Classical test theory and item response modeling were applied. Setting One middle school at each of seven participating sites (Houston TX, Irvine CA, Philadelphia PA, Pittsburg PA, Portland OR, rural NC, and San Antonio TX). Subjects 714 6th grade students. Results Adding items to reflect level (low, medium, high) of self efficacy for fruit and vegetable intake achieved scale reliability and validity comparable to existing scales, but the distribution of items across the latent variable did not improve. Selecting items from among clusters of items at similar levels of difficulty along the latent variable resulted in an abbreviated scale with psychometric characteristics comparable to the full scale, except for reliability. Conclusions The abbreviated scale can reduce participant burden. Additional research is necessary to generate items that better distribute across the latent variable. Additional items may need to tap confidence in overcoming more diverse barriers to dietary intake.
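    The classical-test-theory screening mentioned above typically inspects item difficulty (proportion endorsing) and item-total relationships. A stdlib-only sketch with synthetic binary responses; the data and item names are illustrative, not the study's.

```python
from statistics import mean

def item_difficulty(responses):
    """Classical difficulty: proportion endorsing a binary (0/1) item."""
    return mean(responses)

def item_rest_correlation(item, rest_totals):
    """Pearson correlation between an item and rest-of-scale totals."""
    mx, my = mean(item), mean(rest_totals)
    cov = sum((x - mx) * (y - my) for x, y in zip(item, rest_totals))
    sx = sum((x - mx) ** 2 for x in item) ** 0.5
    sy = sum((y - my) ** 2 for y in rest_totals) ** 0.5
    return cov / (sx * sy)

# Hypothetical: 6 students answering one self-efficacy item (0/1),
# alongside their totals on the remaining items of the scale
item = [1, 1, 0, 1, 0, 0]
rest = [9, 8, 3, 7, 4, 2]
print(item_difficulty(item))                          # 0.5
print(round(item_rest_correlation(item, rest), 2))    # 0.95
```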

  14. Math Anxiety Assessment with the Abbreviated Math Anxiety Scale: Applicability and usefulness: insights from the Polish adaptation

    Krzysztof Cipora

    2015-11-01

    Full Text Available Math anxiety has an important impact on mathematical development and performance. However, although math anxiety is supposed to be a transcultural trait, assessment instruments are scarce and have so far been validated mainly for Western cultures. Therefore, we aimed at examining the transcultural generality of math anxiety by a thorough investigation of the validity of math anxiety assessment in Eastern Europe. We investigated the validity and reliability of a Polish adaptation of the Abbreviated Math Anxiety Scale (AMAS), known to have very good psychometric characteristics in its original, American-English version as well as in its Italian and Iranian adaptations. We also observed high reliability, both for internal consistency and test-retest stability, of the AMAS in the Polish sample. The results also show very good construct, convergent and discriminant validity: the factorial structure in Polish adult participants (n = 857) was very similar to the one previously found in other samples; AMAS scores correlated moderately in expected directions with state and trait anxiety, self-assessed math achievement and skill, as well as temperamental traits of emotional reactivity, briskness, endurance and perseverance. Average scores obtained by participants as well as gender differences and correlations with external measures were also similar across cultures. Beyond the cultural comparison, we used path model analyses to show that math anxiety relates to math grades and self-competence when controlling for trait anxiety. The current study shows the transcultural validity of math anxiety assessment with the AMAS.

  15. Math Anxiety Assessment with the Abbreviated Math Anxiety Scale: Applicability and Usefulness: Insights from the Polish Adaptation

    Cipora, Krzysztof; Szczygieł, Monika; Willmes, Klaus; Nuerk, Hans-Christoph

    2015-01-01

    Math anxiety has an important impact on mathematical development and performance. However, although math anxiety is supposed to be a transcultural trait, assessment instruments are scarce and are validated mainly for Western cultures so far. Therefore, we aimed at examining the transcultural generality of math anxiety by a thorough investigation of the validity of math anxiety assessment in Eastern Europe. We investigated the validity and reliability of a Polish adaptation of the Abbreviated ...

  16. FDA Acronyms and Abbreviations

    U.S. Department of Health & Human Services — The FDA Acronyms and Abbreviations database provides a quick reference to acronyms and abbreviations related to Food and Drug Administration (FDA) activities.

  17. SECURED DATA ON CLOUD ENVIRONMENT BY SAPA PROTOCOL WITH AUTO-RENEWAL

    K. Prashanthi

    2015-10-01

    Full Text Available Cloud computing is emerging as a prevalent data-interactive paradigm in which users' data are stored remotely on an online cloud server. Cloud services offer great convenience for users to enjoy on-demand cloud applications without considering local infrastructure limitations. During data accessing, different users may be in a cooperative relationship, and data sharing then becomes significant for achieving productive benefits. Existing security solutions mainly concentrate on authentication, ensuring that a user's private data cannot be accessed without authorization, but they neglect a subtle privacy issue that arises when a user challenges the cloud server to request other users for data sharing: the challenged access request itself may reveal the user's privacy, no matter whether or not it can obtain the data access permissions. In this paper, we propose a shared authority based privacy-preserving authentication protocol (SAPA) to address the above privacy issue for cloud storage. In the SAPA, (1) shared access authority is achieved by an anonymous access request matching mechanism with security and privacy considerations (e.g., authentication, data anonymity, user privacy, and forward security); (2) attribute based access control is adopted so that a user can only access its own data fields; (3) proxy re-encryption is applied by the cloud server to provide data sharing among multiple users. Meanwhile, a universal composability (UC) model is established to prove that the SAPA theoretically has design correctness. This indicates that the proposed protocol, realizing privacy-preserving data access authority sharing, is attractive for multi-user cooperative cloud applications.

  18. Cross-validation of the factorial structure of the Neighborhood Environment Walkability Scale (NEWS) and its abbreviated form (NEWS-A)

    Cerin Ester

    2009-06-01

    Full Text Available Abstract Background The Neighborhood Environment Walkability Scale (NEWS) and its abbreviated form (NEWS-A) assess perceived environmental attributes believed to influence physical activity. A multilevel confirmatory factor analysis (MCFA) conducted on a sample from Seattle, WA showed that, at the respondent level, the factor-analyzable items of the NEWS and NEWS-A measured 11 and 10 constructs of perceived neighborhood environment, respectively. At the census blockgroup level (used by the US Census Bureau as a subunit of census tracts), the MCFA yielded five factors for both NEWS and NEWS-A. The aim of this study was to cross-validate the individual- and blockgroup-level measurement models of the NEWS and NEWS-A in a geographical location and population different from those used in the original validation study. Methods A sample of 912 adults was recruited from 16 selected neighborhoods (116 census blockgroups) in the Baltimore, MD region. Neighborhoods were stratified according to their socio-economic status and transport-related walkability level measured using Geographic Information Systems. Participants self-completed the NEWS. MCFA was used to cross-validate the individual- and blockgroup-level measurement models of the NEWS and NEWS-A. Results The data provided sufficient support for the factorial validity of the original individual-level measurement models, which consisted of 11 (NEWS) and 10 (NEWS-A) correlated factors. The original blockgroup-level measurement model of the NEWS and NEWS-A showed poor fit to the data and required substantial modifications. These included the combining of aspects of building aesthetics with safety from crime into one factor; the separation of natural aesthetics and building aesthetics into two factors; and, for the NEWS-A, the separation of presence of sidewalks/walking routes from other infrastructure for walking. Conclusion This study provided support for the generalizability of the individual...

  19. A study of abbreviations in MEDLINE abstracts.

    Liu, Hongfang; Aronson, Alan R; Friedman, Carol

    2002-01-01

    Abbreviations are widely used in writing, and the understanding of abbreviations is important for natural language processing applications. Abbreviations are not always defined in a document and they are highly ambiguous. A knowledge base that consists of abbreviations with their associated senses and a method to resolve the ambiguities are needed. In this paper, we studied the UMLS coverage, textual variants of senses, and the ambiguity of abbreviations in MEDLINE abstracts. We restricted ou...

  20. Global change: Acronyms and abbreviations

    Woodard, C.T. [Oak Ridge National Lab., TN (United States)]; Stoss, F.W. [Univ. of Tennessee, Knoxville, TN (United States). Energy, Environment and Resources Center]

    1995-05-01

    This list of acronyms and abbreviations is compiled to provide the user with a ready reference to decipher the linguistic initialisms and abridgements used in the study of global change. The terms included in this first edition were selected from a wide variety of sources: technical reports, policy documents, global change program announcements, newsletters, and other periodicals. The disciplinary interests covered by this document include agriculture, atmospheric science, ecology, environmental science, oceanography, policy science, and other fields. In addition to its availability in hard copy, the list of acronyms and abbreviations is available on DOS-formatted diskettes and through CDIAC's anonymous File Transfer Protocol (FTP) area on the Internet.

  1. 40 CFR 86.1203-85 - Abbreviations.

    2010-07-01

    ... Test Procedures for New Gasoline-Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.1203-85 Abbreviations. The abbreviations in § 86.079-3 apply...

  2. 40 CFR 86.098-3 - Abbreviations.

    2010-07-01

    ..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.098-3 Abbreviations. (a) The abbreviations in §...

  3. 40 CFR 86.000-3 - Abbreviations.

    2010-07-01

    ..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.000-3 Abbreviations. The abbreviations in §...

  4. 40 CFR 86.096-3 - Abbreviations.

    2010-07-01

    ..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.096-3 Abbreviations. (a) The abbreviations in §...

  5. 40 CFR 86.094-3 - Abbreviations.

    2010-07-01

    ...-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.094-3 Abbreviations. (a) The abbreviations in § 86... Petroleum Gas NMHC—Nonmethane Hydrocarbons NMHCE—Non-Methane Hydrocarbon Equivalent PM—Particulate...

  6. Classification and Translation of Chinese Abbreviations

    郭颖婷

    2014-01-01

    Chinese abbreviation, containing fewer words and delivering a wealth of information, is a vital component of the Chinese language. But the tremendous differences between Chinese and English make it an arduous task to translate Chinese abbreviations into English. Based on analyses of the structure and patterns of word-formation of Chinese abbreviations, this paper classifies Chinese abbreviations, summarizes the translation methods, and points out some points of attention in translation. A systematic analysis of the structure and classification of Chinese abbreviations will be beneficial in reducing mistakes in their translation.

  7. Evaluating the Measurement Structure of the Abbreviated HIV Stigma Scale in a Sample of African Americans Living with HIV/AIDS

    Johnson, Eboneé T.; Yaghmaian, Rana A.; Best, Andrew; Chan, Fong; Burrell, Reginald, Jr.

    2016-01-01

    Purpose: The purpose of this study was to validate the 10-item version of the HIV Stigma Scale (HSS-10) in a sample of African Americans with HIV/AIDS. Method: One hundred and ten African Americans living with HIV/AIDS were recruited from 3 case management agencies in Baton Rouge, Louisiana. Measurement structure of the HSS-10 was evaluated using…

  8. New Abbreviations in Colloquial French

    Vladimir Pogačnik

    2015-12-01

    Full Text Available The author of the article treats the process of abbreviation, which he explored forty years ago in his master's thesis. The article is based on a corpus created from the Télématin broadcast on the French television network TV5. According to the author, clipping is a widespread process that occurs primarily in various forms of oral communication.

  9. Abbreviations

    2013-01-01

    ACAME Association des centrales d’achat de médicaments essentiels (Association of central medical stores for essential generic drugs) AMPOT Association malienne pour la promotion des ophtalmologues traditionnels (Malian association for the promotion of traditional ophthalmologists) CT Cicatricial (scar) trachoma DALY Disability Adjusted Life-Years DDT Dichlorodiphenyltrichloroethane DMT Department of traditional medicine DOTS Directly Observed Therapy Short Course FT Follicular trachoma GIS G...

  10. Abbreviations

    2013-01-01

    ADEGE Agence nationale pour la démoustication et la gestion des espaces démoustiqués CAREC Caribbean Epidemiology Center CDC Center for Disease Control and Prevention CIRE Cellule interrégionale d'épidémiologie Antilles-Guyane DHF Dengue Hemorrhagic Fever DSDS Direction de la santé et du développement social DSS Dengue Shock Syndrome EDEN European Association of Public Operators for Mosquito Control FDAs French Départements of America (refers to Guadeloupe, Martinique and French Guiana) IGR I...

  11. Abbreviations used in scientific and technical reports

    Reports contain a large number of abbreviations which have not yet been included in the current specialized dictionaries or lists of abbreviations. It is therefore often time-consuming or even fruitless to search for such abbreviations. The present alphabetical list of more than 4,000 abbreviations gathered from the report inventory of the Central Library of the KFA Juelich in the period from 1982-1986, taking into consideration all the scientific and technical disciplines, is intended to remedy a deficiency and to offer assistance which will undoubtedly be welcomed by scientists and engineers. (orig./HP)

  12. Sapa And Base Communication Of Sambas Society A Case Of Malay-Madurese Post-Conflict 1999-2014

    Wahab

    2015-02-01

    Full Text Available Abstract This article discusses the impact of the 1999 inter-ethnic conflict on multi-ethnic community life in Sambas and offers a concept of education as a modified formulation of the local wisdom, in the communication aspect, that the Malay ethnic community in Sambas draws on in responding to relations between ethnic groups after the 1999 ethnic conflict. The methodology used is qualitative analysis based on literature review, observation, interview and documentation. The result is that the 1999 ethnic conflict in Sambas, West Kalimantan, caused a number of moral and social problems in some small Malay communities. By shaping the value of local wisdom into a new form of education, an effort to respond to the negative post-conflict impact through cultural communication, the greetings of sapa and base show a polite language education in Malay Sambas society; the culture is even believed to be an alternative solution that can deal with inter-ethnic conflicts and prevent conflict from happening again.

  13. 40 CFR 86.090-3 - Abbreviations.

    2010-07-01

    ..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas-Fueled and Methanol-Fueled Heavy-Duty Vehicles § 86.090-3 Abbreviations. (a) The abbreviations in § 86.... GC—Gas chromatograph. HPLC—High-pressure liquid chromatography. MeOH—Methanol (CH3OH)....

  14. Acronyms, initialisms, and abbreviations: Fourth Revision

    Tolman, B.J. [comp.]

    1994-04-01

    This document lists acronyms used in technical writing. The immense list is supplemented by an appendix containing chemical elements, classified information access, common abbreviations used for functions, conversion factors for selected SI units, a flowcharting template, the Greek alphabet, metric terminology, proofreader's marks, signs and symbols, and state abbreviations.

  15. Generating abbreviations using Google Books library

    Solovyev, Valery D.; Bochkarev, Vladimir V.

    2014-01-01

    The article describes the original method of creating a dictionary of abbreviations based on the Google Books Ngram Corpus. The dictionary of abbreviations is designed for Russian, yet as its methodology is universal it can be applied to any language. The dictionary can be used to define the function of the period during text segmentation in various applied systems of text processing. The article describes difficulties encountered in the process of its construction as well as the ways to over...

  16. A Study of Abbreviations in Clinical Notes

    Xu, Hua; Stetson, Peter D.; Friedman, Carol

    2007-01-01

    Various natural language processing (NLP) systems have been developed to unlock patient information from narrative clinical notes in order to support knowledge based applications such as error detection, surveillance and decision support. In many clinical notes, abbreviations are widely used without mention of their definitions, which is very different from the use of abbreviations in the biomedical literature. Thus, it is critical, but more challenging, for NLP systems to correctly interpret...

  17. Abbreviation definition identification based on automatic precision estimates

    Kim Won; Comeau Donald C; Sohn Sunghwan; Wilbur W John

    2008-01-01

    Abstract Background The rapid growth of biomedical literature presents challenges for automatic text processing, and one of the challenges is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity. ...
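    Abbreviation-definition pairing of the kind described above is often approximated with a simple character-matching rule in the spirit of the well-known Schwartz-Hearst algorithm (a common baseline, not necessarily this paper's method): every character of the short form must appear, in order, within the candidate long form preceding the parenthesized abbreviation.

```python
def matches_long_form(short: str, long: str) -> bool:
    """True if every character of `short` occurs, in order, in `long`."""
    pos = 0
    lowered = long.lower()
    for ch in short.lower():
        pos = lowered.find(ch, pos)
        if pos < 0:
            return False
        pos += 1
    return True

# Candidate pairs as they might appear in text: "long form (SF)"
print(matches_long_form("NLP", "natural language processing"))  # True
print(matches_long_form("ISS", "injury severity score"))        # True
print(matches_long_form("AIS", "injury severity score"))        # False
```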

  18. 40 CFR 600.403-77 - Abbreviations.

    2010-07-01

    ....403-77 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Dealer Availability of Fuel Economy Information § 600.403-77 Abbreviations....

  19. 40 CFR 600.203-77 - Abbreviations.

    2010-07-01

    ....203-77 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.203-77 Abbreviations....

  20. Detecting Abbreviations in Discharge Summaries using Machine Learning Methods

    Wu, Yonghui; Rosenbloom, S. Trent; Denny, Joshua C.; Miller, Randolph A; Mani, Subramani; Giuse, Dario A.; Xu, Hua

    2011-01-01

    Recognition and identification of abbreviations is an important, challenging task in clinical natural language processing (NLP). A comprehensive lexical resource comprised of all common, useful clinical abbreviations would have great applicability. The authors present a corpus-based method to create a lexical resource of clinical abbreviations using machine-learning (ML) methods, and tested its ability to automatically detect abbreviations from hospital discharge summaries. Domain experts man...

  1. Abbreviated Guide: Pneumatic Conveying Design Guide

    Mills, David

    1990-01-01

    Abbreviated Guide: Pneumatic Conveying Design Guide describes the selection, design, and specification of conventional pneumatic conveying systems. The design procedure uses previous test data on the materials to be conveyed. The book also discusses system economics, operating costs, the choice of appropriate components or systems, system control, and system flexibility. The design system involves the type of conveying system for installation, the pipeline parameters, and also the plant components. System selection covers the properties of the material to be conveyed, plant layout, material pr

  2. 40 CFR 310.4 - What abbreviations should I know?

    2010-07-01

    ... Pollution Contingency Plan also known as the National Contingency Plan (40 CFR part 300). NRC—National... 40 Protection of Environment 27 2010-07-01 2010-07-01 false What abbreviations should I know? 310... RESPONSE TO HAZARDOUS SUBSTANCE RELEASES General Information § 310.4 What abbreviations should I know?...

  3. Sequenced Contractions and Abbreviations for Model 2 Reading.

    Cronnell, Bruce

    The nature and use of contractions and abbreviations in beginning reading is discussed and applied to the Southwest Regional Laboratory (SWRL) Mod 2 Reading Program, a four-year program (K-3) for teaching reading skills to primary-grade children. The contractions and abbreviations are listed and sequenced for the reading program. The results of…

  4. 7 CFR 4274.302 - Definitions and abbreviations.

    2010-01-01

    ..., which is a problem solving activity. The Agency will determine whether a specific activity qualifies as... 7 Agriculture 15 2010-01-01 2010-01-01 false Definitions and abbreviations. 4274.302 Section 4274... Relending Program (IRP) § 4274.302 Definitions and abbreviations. (a) General definitions. The...

  5. The use of abbreviations in surgical note keeping

    B. Collard

    2015-06-01

    Abbreviations are used to improve the speed of note keeping and to simplify patient notes. However, studies have shown that they can reduce clarity, increase mistakes and cause confusion in management plans. Our review highlights the misuse of abbreviations in surgical note keeping.

  6. 32 CFR 516.3 - Explanation of abbreviations and terms.

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Explanation of abbreviations and terms. 516.3 Section 516.3 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY AID OF CIVIL AUTHORITIES AND PUBLIC RELATIONS LITIGATION General § 516.3 Explanation of abbreviations and terms. (a)...

  7. MacArthur-Bates Communicative Development Inventory (CDI): Proposal of an abbreviated version

    Chamarrita Farkas Klein

    2011-01-01

    The MacArthur-Bates Communicative Development Inventories (CDI) assess language development in children through a report by a significant caregiver. The first inventory assesses verbal and non-verbal language in infants aged 8 to 18 months and comprises 949 items distributed across 6 scales. This study proposes an abbreviated form of the instrument, tested with families and educators of 130 Chilean children aged 11-15 months. Item analyses, reliability and validity analyses, and factor analyses of the subscales were performed. The abbreviated version comprises 241 items distributed across 4 scales. The psychometric properties of the instrument were acceptable, demonstrating adequate reliability and validity.

  8. THE ISSUES OF LATIN ABBREVIATIONS IN DIPLOMATIC DOCUMENTS

    Иван Балта

    2012-12-01

    The issues of Latin abbreviations in diplomatic documents constitute a complex problem in processing and deciphering historiographic documents. Abbreviations formed from Latin letters gave the scribe a welcome advantage but created difficulties for the reader, since they were complicated and ambiguous even at the time. The Romans called Latin abbreviations notae or sigla, using them simply to save space and the materials on which the letters were written. The increased use of abbreviations in medieval manuscripts created a need for manuals for both codices and charters, specifying the form of abbreviations and the centuries to which they belonged. Steffens divided the abbreviations of the Roman epoch, according to their chronological development, into five ways of shortening: suspension, notae Tironianae, contraction, notae iuris, and signs for numbers; these were retained, and new shortcuts added, in the Middle Ages, especially in the territory of today's Pannonia and the Eastern Adriatic coast. Punctuation, like abbreviation, underwent a long process of development, which began with the antique scriptura continua and attempts to clarify meaning by placing points in various positions between individual words. Abbreviations differed not only by the type of Latin letters but also by the territory in which they evolved, at different time periods. Abbreviations of Latin letters from the Beneventan East Adriatic coast were specific, formed gradually, could be used as a criterion for dating, and employed several ways of shortening letters. Schiaparelli probably penetrated deepest into the essence of the development of the Latin minuscule, noting that Frankish influences were much stronger in the first period of the development of the Beneventan East Adriatic coast. However, perhaps no other medieval script developed a better system of abbreviations than the Gothic style, especially in

  9. Selected personality data from the SAPA-Project: On the structure of phrased self-report items

    Condon, David M; Revelle, William

    2015-01-01

    These data were collected to evaluate the structure of personality constructs in the temperament domain. In the context of modern personality theory, these constructs are typically construed in terms of the Big Five (Conscientiousness, Agreeableness, Neuroticism, Openness, and Extraversion) though several additional constructs were included here. Approximately 24,000 individuals were administered random subsets of 696 items from 92 public-domain personality scales using the Synthetic Aperture...

  10. Selected personality data from the SAPA-Project: On the structure of phrased self-report items

    David M Condon

    2015-08-01

    These data were collected to evaluate the structure of personality constructs in the temperament domain. In the context of modern personality theory, these constructs are typically construed in terms of the Big Five (Conscientiousness, Agreeableness, Neuroticism, Openness, and Extraversion), though several additional constructs were included here. Approximately 24,000 individuals were administered random subsets of 696 items from 92 public-domain personality scales using the Synthetic Aperture Personality Assessment method between December 8, 2013 and July 26, 2014. The data are available in RData format and are accompanied by documentation stored as a text file. Re-use potential includes many types of structural and correlational analyses of personality.

  12. Detecting abbreviations in discharge summaries using machine learning methods.

    Wu, Yonghui; Rosenbloom, S Trent; Denny, Joshua C; Miller, Randolph A; Mani, Subramani; Giuse, Dario A; Xu, Hua

    2011-01-01

    Recognition and identification of abbreviations is an important, challenging task in clinical natural language processing (NLP). A comprehensive lexical resource comprised of all common, useful clinical abbreviations would have great applicability. The authors present a corpus-based method to create a lexical resource of clinical abbreviations using machine-learning (ML) methods, and tested its ability to automatically detect abbreviations from hospital discharge summaries. Domain experts manually annotated abbreviations in seventy discharge summaries, which were randomly broken into a training set (40 documents) and a test set (30 documents). We implemented and evaluated several ML algorithms using the training set and a list of pre-defined features. The subsequent evaluation using the test set showed that the Random Forest classifier had the highest F-measure of 94.8% (precision 98.8% and recall of 91.2%). When a voting scheme was used to combine output from various ML classifiers, the system achieved the highest F-measure of 95.7%. PMID:22195219
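The approach described above, a classifier over hand-crafted features of each token, can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the tokens, features, and labels below are invented, and a real system would train on annotated discharge summaries with a much richer feature set.

```python
# Toy sketch: a Random Forest over simple surface features that flags
# abbreviation tokens (invented data; not the paper's feature set).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

def token_features(token):
    # Hand-crafted surface features of the kind such systems typically use.
    return {
        "length": len(token),
        "all_caps": token.isupper(),
        "has_period": "." in token,
        "has_digit": any(c.isdigit() for c in token),
        "vowel_ratio": sum(c in "aeiouAEIOU" for c in token) / max(len(token), 1),
    }

# Toy training data: (token, is_abbreviation) pairs.
train = [("pt", 1), ("hx", 1), ("b.i.d.", 1), ("CHF", 1),
         ("patient", 0), ("history", 0), ("daily", 0), ("failure", 0)]

vec = DictVectorizer()
X = vec.fit_transform(token_features(t) for t, _ in train)
y = [label for _, label in train]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

tokens = ["sob", "shortness"]
pred = clf.predict(vec.transform(token_features(t) for t in tokens))
print(dict(zip(tokens, pred)))
```

A voting scheme like the one reported in the paper would combine several such classifiers and take the majority label per token.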

  13. Sourcing archaeological obsidian by an abbreviated NAA procedure

    An abbreviated NAA procedure has been developed to fingerprint obsidian artifacts in the Mesoamerican region. Despite the large number of available sources, an NAA procedure, which relies on producing short-lived isotopes, has been applied with a success rate greater than 90 percent. The abbreviated NAA procedure is rapid and cost competitive with the XRF technique more often applied in obsidian sourcing. Results from the analysis of over 1,200 obsidian artifacts from throughout Mesoamerica are presented. (author) 8 refs.; 6 figs.; 2 tabs

  14. Predicting Chinese Abbreviations from Definitions: An Empirical Learning Approach Using Support Vector Regression

    Xu Sun; Hou-Feng Wang; Bo Wang

    2008-01-01

    In Chinese, phrases and named entities play a central role in information retrieval. Abbreviations, however, make keyword-based approaches less effective. This paper presents an empirical learning approach to Chinese abbreviation prediction. In this study, each abbreviation is taken as a reduced form of the corresponding definition (expanded form), and the abbreviation prediction is formalized as a scoring and ranking problem among abbreviation candidates, which are automatically generated from the corresponding definition. By employing Support Vector Regression (SVR) for scoring, we can obtain multiple abbreviation candidates together with their SVR values, which are used for candidate ranking. Experimental results show that the SVR method performs better than the popular heuristic rule of abbreviation prediction. In addition, in abbreviation prediction, the SVR method outperforms the hidden Markov model (HMM).
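The generate-candidates-then-rank-by-SVR idea can be illustrated with a minimal sketch. Everything here is a stand-in: the candidate generator, the two features, and the training signal are invented for an English example, whereas the paper's system operates on Chinese definitions with far richer features.

```python
# Hedged sketch of scoring and ranking abbreviation candidates with SVR
# (toy features and training signal; not the paper's system).
from itertools import combinations
from sklearn.svm import SVR

def candidates(words, k=2):
    # Order-preserving k-word reductions of the definition, abbreviated
    # here to each selected word's first letter.
    for combo in combinations(range(len(words)), k):
        yield "".join(words[i][0] for i in combo)

def features(cand, words):
    # Toy features: candidate length, and whether it keeps the first
    # word's initial (real systems use many lexical/statistical features).
    return [len(cand), float(cand[0] == words[0][0])]

words = ["support", "vector", "regression"]
cands = list(candidates(words))  # ["sv", "sr", "vr"]

# Toy training signal: candidates keeping the first initial score higher.
X = [features(c, words) for c in cands]
y = [1.0 if c.startswith("s") else 0.0 for c in cands]

model = SVR(kernel="linear").fit(X, y)
ranked = sorted(cands, key=lambda c: model.predict([features(c, words)])[0],
                reverse=True)
print(ranked)  # candidates ordered by SVR score
```

In the paper, the regressor's score plays exactly this role: each automatically generated candidate receives an SVR value, and the candidates are ranked by it.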

  15. Interactive Hangman Teaches Amino Acid Structures and Abbreviations

    Pennington, Britney O.; Sears, Duane; Clegg, Dennis O.

    2014-01-01

    We developed an interactive exercise to teach students how to draw the structures of the 20 standard amino acids and to identify the one-letter abbreviations by modifying the familiar game of "Hangman." Amino acid structures were used to represent single letters throughout the game. To provide additional practice in identifying…

  16. Abbreviated Pandemic Influenza Planning Template for Primary Care Offices

    HCTT CHE

    2010-01-01

    The Abbreviated Pandemic Influenza Plan Template for Primary Care Provider Offices is intended to assist primary care providers and office managers in quickly putting a plan in place to handle an increase in patient calls and visits, whether during the 2009-2010 influenza season or future influenza seasons.

  17. 40 CFR 205.155 - Motorcycle class and manufacturer abbreviation.

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Motorcycle class and manufacturer...) NOISE ABATEMENT PROGRAMS TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Motorcycles § 205.155 Motorcycle class and manufacturer abbreviation. (a) Motorcycles must be grouped into classes determined...

  18. Investigation of abbreviated 4- and 8-item versions of the PTSD Checklist-5.

    Price, Matthew; Szafranski, Derek D; van Stolk-Cooke, Katherine; Gros, Daniel F

    2016-05-30

    Posttraumatic stress disorder (PTSD) is a significant public health concern associated with marked impairment across the lifespan. Exposure to traumatic events alone, however, is insufficient to determine if an individual has PTSD. PTSD is a heterogeneous diagnosis such that assessment of all 20 symptoms is problematic in time-limited treatment settings. Brief assessment tools that identify those at risk for PTSD and measure symptom severity are needed to improve access to care and assess treatment response. The present study evaluated abbreviated measures of PTSD symptoms derived from the PTSD Checklist for DSM-5 (PCL-5), a 20-item validated measure of PTSD symptoms, across two studies. In the first, using a community sample of adults exposed to a traumatic event, 4- and 8-item versions of the PCL-5 were identified that were highly correlated with the full PCL-5. In the second, using a sample of combat veterans, the 4- and 8-item measures had comparable diagnostic utility to the total-scale PCL-5. These results provide support for an abbreviated measure of the PCL-5 as an alternative to the 20-item total scale. PMID:27137973

  19. Fabrication of Nanodot Decorated Sapphire Substrates for Abbreviated Growth Mode Deposition of Gallium Nitride

    Biser, Jeffrey M.

    The overarching theme of this body of work is the development and demonstration of sapphire substrates with sub-micron scale surface features laid out in arrays with controlled shape, size, and distribution. The key contributions of the work are: (1) the collaborative demonstration that such substrates enable novel GaN fabrication options like the Abbreviated Growth Mode (AGM) approach that can lead to lower cost, higher quality LED devices, (2) the proof-of-concept demonstration that large scale surface patterning with the use of anodic aluminum oxide (AAO) templates is a feasible approach for creating low-cost patterns that should be compatible with AGM, and (3) that the Aluminum-to-sapphire conversion process used to fabricate the surface structures has distinct zones of behavior with regard to feature size and temperature that can be used to suggest an optimized set of process conditions.

  20. Abbreviations [Annex to The Fukushima Daiichi Accident, Technical Volume 2/5

    This annex is a list of abbreviations used in the publication The Fukushima Daiichi Accident, Technical Volume 2/5. The list includes the abbreviations for: • Agency for Natural Resources and Energy; • essential service water; • International Nuclear and Radiological Event Scale; • Integrated Regulatory Review Service; • Japan Atomic Energy Agency; • Japan Atomic Energy Commission; • Japan Power Engineering and Inspection Corp; • Japan Nuclear Energy Safety Organization; • low head safety injection; • low level radioactive waste; • Madras Atomic Power Station; • main control room; • Ministry of Economy, Trade and Industry; • Ministry of Education, Culture, Sports, Science and Technology; • Ministry of International Trade and Industry; • Ministry of Foreign Affairs; • Nuclear and Industrial Safety Agency; • Nuclear Power Corporation of India Limited; • nuclear power plant; • Nuclear Safety Commission; • Nuclear Power Engineering Corporation; • Nuclear Safety Technology Centre; • Onahama Port; • pressurized water reactor; • Science and Technology Agency; • Tokyo Electric Power Company

  1. Unlocking Runes? Reading Anglo-Saxon Runic Abbreviations in Their Immediate Literary Context

    Birkett, Tom

    2015-01-01

    Runic abbreviations appear sporadically in a number of Old English manuscripts, including three of the four major poetic codices. A convincing rationale for the apparently erratic deployment of these unusual abbreviations has yet to be proposed. In this article I identify the immediate literary context as an important factor influencing the distribution of Anglo-Saxon runic abbreviations, noting in particular that the runic brevigraphs often appear in passages which deal with unlocking. To ill...

  2. Use of abbreviations in the nursing records of a teaching hospital

    Sylvia Miranda Carneiro; Herica Silva Dutra; Fernanda Mazzoni da Costa; Simone Emerich Mendes; Cristina Arreguy-Sena

    2016-01-01

    Objective: to evaluate the use of abbreviations in nursing records of a teaching hospital and to describe their profile across different sectors, work shifts and professional nursing categories. Methods: documentary study that analyzed 627 nursing records in 24 patient charts using a systematic observation script. Results: we identified 1,792 abbreviations, of which 35.8% were nonstandard. The incidence of abbreviations was higher in the Intensive Care Unit, among nurses, and on the night shift. Concl...

  3. Pharmacist and Physician Interpretation of Abbreviations for Acetaminophen Intended for Use in a Consumer Icon

    Saul Shiffman; Helene Cotton; Christina Jessurun; Sembower, Mark A.; Steve Pype; Jerry Phillips

    2015-01-01

    Concomitant use of multiple acetaminophen medications is associated with overdose. To help patients identify acetaminophen medications and thus avoid concomitant use, an icon with an abbreviation for “acetaminophen” has been proposed for all acetaminophen medications. This study assessed pharmacists’ and physicians’ use and interpretation of abbreviations for “acetaminophen”, to identify abbreviations with other meanings that might cause confusion. Physicians (n = 150) reported use and interp...

  4. New bilingual version of the VGB abbreviation catalogue for power plant technology released

    Hantschel, Jochen; Seiffert, Joerg [E.ON New Build and Technology GmbH, Gelsenkirchen (Germany); Froehner, Joerg [ct.e Controltechnology Engineering GmbH, Herne (Germany)

    2013-04-01

    The objective of the VGB Standard for power plant technology VGB-S-891-00 (abbreviation catalogue) is to regulate the systematic creation of abbreviations. The determination of abbreviations for terms related to power plants provides a common basis for planners, erectors, and operators of power plants and their systems. In combination with VGB-B 108 "Rules for the creation of denominations and their application for power plant engineering" the abbreviation catalogue is the basis for the creation of denominations.

  5. 49 CFR 1500.3 - Terms and abbreviations used in this chapter.

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Terms and abbreviations used in this chapter. 1500.3 Section 1500.3 Transportation Other Regulations Relating to Transportation (Continued) TRANSPORTATION SECURITY ADMINISTRATION, DEPARTMENT OF HOMELAND SECURITY ADMINISTRATIVE AND PROCEDURAL RULES APPLICABILITY, TERMS, AND ABBREVIATIONS §...

  6. 21 CFR 314.92 - Drug products for which abbreviated applications may be submitted.

    2010-04-01

    ... offered for sale by its manufacturer, a person who wishes to submit an abbreviated new drug application... 21 Food and Drugs 5 2010-04-01 2010-04-01 false Drug products for which abbreviated applications may be submitted. 314.92 Section 314.92 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT...

  7. Automatic Word Sense Disambiguation of Acronyms and Abbreviations in Clinical Texts

    Moon, Sungrim

    2012-01-01

    The use of acronyms and abbreviations is increasing profoundly in the clinical domain in large part due to the greater adoption of electronic health record (EHR) systems and increased electronic documentation within healthcare. A single acronym or abbreviation may have multiple different meanings or senses. Comprehending the proper meaning of an…

  8. 21 CFR 314.100 - Timeframes for reviewing applications and abbreviated applications.

    2010-04-01

    ... 21 Food and Drugs 5 2010-04-01 2010-04-01 false Timeframes for reviewing applications and abbreviated applications. 314.100 Section 314.100 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF... DRUG FDA Action on Applications and Abbreviated Applications § 314.100 Timeframes for...

  9. Compression and the origins of Zipf's law of abbreviation

    Ferrer-i-Cancho, R; Seguin, C

    2015-01-01

    Languages across the world exhibit Zipf's law of abbreviation, namely more frequent words tend to be shorter. The generalized version of the law, namely an inverse relationship between the frequency of a unit and its magnitude, holds also for the behaviors of other species and the genetic code. The apparent universality of this pattern in human language and its ubiquity in other domains calls for a theoretical understanding of its origins. We generalize the information theoretic concept of mean code length as a mean energetic cost function over the probability and the magnitude of the symbols of the alphabet. We show that the minimization of that cost function and a negative correlation between probability and the magnitude of symbols are intimately related.
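The law itself, as opposed to the paper's formal cost model, is easy to check on any word-frequency data: count frequencies and correlate them with word lengths, and under the law the correlation should be negative. The mini-corpus below is invented purely for illustration.

```python
# Toy illustration of Zipf's law of abbreviation: word frequency and
# word length correlate negatively (invented mini-corpus).
from collections import Counter
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

text = ("the cat sat on the mat the dog ran to the cat "
        "an extraordinarily long demonstration sentence containing "
        "infrequent multisyllabic vocabulary items").split()

freq = Counter(text)
words = list(freq)
r = pearson([freq[w] for w in words], [len(w) for w in words])
print(r)  # negative: the frequent words here are also the short ones
```

In the paper's terms, assigning short forms to frequent units is what minimizes the mean cost over probability and magnitude, which is exactly the pattern this correlation detects.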

  10. Dictionary of International Abbreviations - Environment and Natural Sciences

    The dictionary comprises about 3000 acronyms and abbreviations, with explanations in German and English. Subjects: Chemistry, medicine, geology, air, water, soil, waste, air pollution and noise abatement, chemicals and pollutants, agriculture and food, conservation and landscaping, energy, immission protection, radiation protection and nuclear safety, industry and biotechnology, environmental pollution, waste management and recycling. It is intended as a working and communication tool for a wide range of users in industry, administration, universities, scientists and students, journalists, translators and interested laymen. There is an appendix with supplementary information, i.e. mass, volume, SI units, chemical compounds and formulas, occupational pollutant exposure, food additives, environmental disasters, environmental laws, regulations and specifications, international programmes and organisations for environmental protection, and guidelines of environmental and international law. (orig.)

  11. Pharmacist and Physician Interpretation of Abbreviations for Acetaminophen Intended for Use in a Consumer Icon

    Saul Shiffman

    2015-10-01

    Concomitant use of multiple acetaminophen medications is associated with overdose. To help patients identify acetaminophen medications and thus avoid concomitant use, an icon with an abbreviation for “acetaminophen” has been proposed for all acetaminophen medications. This study assessed pharmacists’ and physicians’ use and interpretation of abbreviations for “acetaminophen”, to identify abbreviations with other meanings that might cause confusion. Physicians (n = 150) reported use and interpretation of candidate abbreviations Ac and Acm. Pharmacists’ (n = 150) interpretations of prescription orders using the candidate abbreviations APAP, Ac, Ace and Acm in typed, handwritten or spoken form were judged for critical confusions likely to cause patient harm. Critical confusion was rare, except for omission by pharmacists of the acetaminophen dose for Hydrocodone/APAP prescriptions (10%). Ac was in common use to indicate “before meals”, and was interpreted as such, but some physicians (8%) said they use Ac to indicate anticoagulant drugs. Most pharmacists (54%) interpreted Ace as acetaminophen, and none interpreted it as referring to ACE-inhibitors. Acm was rarely used in prescriptions, had no common interfering meanings, and was often (63%) interpreted as acetaminophen, especially when prescribed in combination with an opiate (85%). The data validated concerns about abbreviations in prescribing: all abbreviations resulted in some misinterpretations. However, Acm was rarely misinterpreted, was readily associated with “acetaminophen”, and seemed appropriate for use in a graphic icon to help consumers/patients identify acetaminophen medications.

  12. Combining Corpus-derived Sense Profiles with Estimated Frequency Information to Disambiguate Clinical Abbreviations

    Xu, Hua; Stetson, Peter D.; Friedman, Carol

    2012-01-01

    Abbreviations are widely used in clinical notes and are often ambiguous. Word sense disambiguation (WSD) for clinical abbreviations therefore is a critical task for many clinical natural language processing (NLP) systems. Supervised machine learning based WSD methods are known for their high performance. However, it is time consuming and costly to construct annotated samples for supervised WSD approaches and sense frequency information is often ignored by these methods. In this study, we prop...

  13. An abbreviated task-oriented assessment (Bay Area Functional Performance Evaluation).

    Mann, W C; Huselid, R

    1993-02-01

    The purpose of this study was to explore the development of an abbreviated version of the Task-Oriented Assessment component of the Bay Area Functional Performance Evaluation (BaFPE). The BaFPE is widely used by occupational therapists practicing in mental health, but therapists have requested an instrument that can be administered and scored more quickly. Both subjective and objective analyses supported the development of an abbreviated version of the Task-Oriented Assessment. PMID:8470740

  14. The Convergent, Discriminant, and Concurrent Validity of Scores on the Abbreviated Self-Leadership Questionnaire

    Faruk Şahin

    2015-10-01

    The present study reports the psychometric properties of a short measure of self-leadership in the Turkish context: the Abbreviated Self-Leadership Questionnaire (ASLQ). The ASLQ was examined using two samples and showed sound psychometric properties. Confirmatory factor analysis showed that the nine-item ASLQ measured a single construct of self-leadership. The results supported the convergent and discriminant validity of the one-factor model of the ASLQ in relation to the 35-item Revised Self-Leadership Questionnaire and the General Self-Efficacy scale, respectively. With regard to internal consistency and test-retest reliability, the ASLQ showed acceptable results. Furthermore, the results provided evidence that scores on the ASLQ positively predicted individuals' self-reported task performance, and self-efficacy mediated this relationship. Taken together, these findings suggest that the Turkish version of the ASLQ is a reliable and valid measure of self-leadership for use in future studies.

  15. Hydrographic and Impairment Statistics Database: SAPA

    National Park Service, Department of the Interior — Hydrographic and Impairment Statistics (HIS) is a National Park Service (NPS) Water Resources Division (WRD) project established to track certain goals created in...

  16. Using Genetic Algorithms in a Large Nationally Representative American Sample to Abbreviate the Multidimensional Experiential Avoidance Questionnaire

    Sahdra, Baljinder K.; Ciarrochi, Joseph; Parker, Philip; Scrucca, Luca

    2016-01-01

    Genetic algorithms (GAs) are robust machine learning approaches for abbreviating a large set of variables into a shorter subset that maximally captures the variance in the original data. We employed a GA-based method to shorten the 62-item Multidimensional Experiential Avoidance Questionnaire (MEAQ) by half without much loss of information. Experiential avoidance or the tendency to avoid negative internal experiences is a key target of many psychological interventions and its measurement is an important issue in psychology. The 62-item MEAQ has been shown to have good psychometric properties, but its length may limit its use in most practical settings. The recently validated 15-item brief version (BEAQ) is one short alternative, but it reduces the multidimensional scale to a single dimension. We sought to shorten the 62-item MEAQ by half while maintaining fidelity to its six dimensions. In a large nationally representative sample of Americans (N = 7884; 52% female; Age: M = 47.9, SD = 16), we employed a GA method of scale abbreviation implemented in the R package, GAabbreviate. The GA-derived short form, MEAQ-30 with five items per subscale, performed virtually identically to the original 62-item MEAQ in terms of inter-subscales correlations, factor structure, factor correlations, and zero-order correlations and unique latent associations of the six subscales with other measures of mental distress, wellbeing and personal strivings. The two measures also showed similar distributions of means across American census regions. The MEAQ-30 provides a multidimensional assessment of experiential avoidance whilst minimizing participant burden. The study adds to the emerging literature on the utility of machine learning methods in psychometrics. PMID:26941672
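The GA-based abbreviation idea can be sketched with a toy stand-in: evolve k-item subsets whose subset score best correlates with the full-scale score. Everything below (the simulated respondents, population size, mutation rate, and operators) is invented for illustration and is not the GAabbreviate implementation.

```python
# Minimal genetic-algorithm sketch of scale abbreviation: select k of n
# items whose subset score best tracks the full-scale score (toy data).
import random

random.seed(0)
n_items, n_people, k = 12, 200, 4

# Simulated responses: each person has a latent trait; items are noisy.
traits = [random.gauss(0, 1) for _ in range(n_people)]
data = [[t + random.gauss(0, 1) for _ in range(n_items)] for t in traits]
full = [sum(row) for row in data]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def fitness(subset):
    short = [sum(row[i] for i in subset) for row in data]
    return corr(short, full)

# GA loop: keep the best subsets, recombine item pools, mutate occasionally.
pop = [tuple(sorted(random.sample(range(n_items), k))) for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        pool = list(set(a) | set(b))
        child = random.sample(pool, k)
        if random.random() < 0.2:  # mutation: swap in a random item
            child[random.randrange(k)] = random.randrange(n_items)
        children.append(tuple(sorted(set(child))) if len(set(child)) == k else a)
    pop = parents + children

best = max(pop, key=fitness)
print(best, round(fitness(best), 3))
```

Real applications replace the toy fitness with a psychometric criterion (for the MEAQ, fidelity to six subscales rather than one total score), but the select-recombine-mutate loop is the same.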

  17. Combining corpus-derived sense profiles with estimated frequency information to disambiguate clinical abbreviations.

    Xu, Hua; Stetson, Peter D; Friedman, Carol

    2012-01-01

    Abbreviations are widely used in clinical notes and are often ambiguous. Word sense disambiguation (WSD) for clinical abbreviations therefore is a critical task for many clinical natural language processing (NLP) systems. Supervised machine learning based WSD methods are known for their high performance. However, it is time consuming and costly to construct annotated samples for supervised WSD approaches and sense frequency information is often ignored by these methods. In this study, we proposed a profile-based method that used dictated discharge summaries as an external source to automatically build sense profiles and applied them to disambiguate abbreviations in hospital admission notes via the vector space model. Our evaluation using a test set containing 2,386 annotated instances from 13 ambiguous abbreviations in admission notes showed that the profile-based method performed better than two baseline methods and achieved a best average precision of 0.792. Furthermore, we developed a strategy to combine sense frequency information estimated from a clustering analysis with the profile-based method. Our results showed that the combined approach largely improved the performance and achieved a highest precision of 0.875 on the same test set, indicating that integrating sense frequency information with local context is effective for clinical abbreviation disambiguation. PMID:23304376
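The profile-based disambiguation step can be sketched as follows. The senses, profile texts, and context are invented for illustration; in the described approach the profiles would be built automatically from a large corpus of dictated discharge summaries in which the expanded forms appear.

```python
# Schematic sketch of profile-based sense disambiguation via the vector
# space model (simplified; toy profiles and context).
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Sense profiles: word vectors from contexts where each expanded form
# of the abbreviation "pt" appears in an external corpus.
profiles = {
    "patient": Counter("the patient was admitted with chest pain".split()),
    "physical therapy": Counter("referred for physical therapy after knee surgery".split()),
}

# Disambiguate "pt" in a new note by comparing its context to each profile.
context = Counter("pt admitted overnight with chest pain".split())
sense = max(profiles, key=lambda s: cosine(context, profiles[s]))
print(sense)  # → patient
```

The paper's combined approach additionally weights these similarity scores by sense frequencies estimated from a clustering analysis, which is what lifted precision from 0.792 to 0.875.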

  18. The Use of Abbreviations in English-Medium Astrophysics Research Paper Titles: A Problematic Issue

    David I. Méndez

    2015-06-01

    In this study, we carry out a qualitative and quantitative analysis of abbreviations in 300 randomly collected research paper titles published in the most prestigious European and US-based Astrophysics journals written in English. Our main results show that the process of shortening words and groups of words is one of the most characteristic and recurrent features in the construction of Astrophysics research paper titles. In spite of the convenience of abbreviations as a mechanism for word-formation, some of them may pose certain difficulties of understanding and/or misinterpretation because of their specificity, ambiguity, or overlapping. To overcome these difficulties, we propose a series of options which would no doubt lead to better interaction among the different branches of Astrophysics in particular and of science in general, and would definitely improve how research is currently performed and communicated. Keywords: Abbreviations; Astrophysics; English; Research Papers

  19. Abbreviations [Annex to The Fukushima Daiichi Accident, Technical Volume 5/5

    This annex is a list of abbreviations used in the publication The Fukushima Daiichi Accident, Technical Volume 5/5. The list includes the abbreviations for: • General Safety Requirements; • International Commission on Radiological Protection; • Intensive Contamination Survey Area; • International Experts Meeting; • Ministry of Education, Culture, Sports, Science and Technology; • Ministry of the Environment; • Nuclear Damage Compensation and Decommissioning Facilitation Corporation; • Nuclear Emergency Response Headquarters; • nuclear power plant; • Nuclear Safety Commission; • OECD Nuclear Energy Agency; • Special Decontamination Area; • Specific Safety Requirements; • technical cooperation; • Three Mile Island; • United Nations Scientific Committee on the Effects of Atomic Radiation

  1. Abbreviated protocol for breast MRI: Are multiple sequences needed for cancer detection?

    Mango, Victoria L., E-mail: vlm2125@columbia.edu [Columbia University Medical Center, Herbert Irving Pavilion, 161 Fort Washington Avenue, 10th Floor, New York, NY 10032 (United States); Memorial Sloan-Kettering Cancer Center, Breast and Imaging Center, 300 East 66th Street, New York, NY 10065 (United States); Morris, Elizabeth A., E-mail: morrise@mskcc.org [Memorial Sloan-Kettering Cancer Center, Breast and Imaging Center, 300 East 66th Street, New York, NY 10065 (United States); David Dershaw, D., E-mail: dershawd@mskcc.org [Memorial Sloan-Kettering Cancer Center, Breast and Imaging Center, 300 East 66th Street, New York, NY 10065 (United States); Abramson, Andrea, E-mail: abramsoa@mskcc.org [Memorial Sloan-Kettering Cancer Center, Breast and Imaging Center, 300 East 66th Street, New York, NY 10065 (United States); Fry, Charles, E-mail: charles_fry@nymc.edu [Memorial Sloan-Kettering Cancer Center, Breast and Imaging Center, 300 East 66th Street, New York, NY 10065 (United States); New York Medical College, 40 Sunshine Cottage Rd, Valhalla, NY 10595 (United States); Moskowitz, Chaya S. [Memorial Sloan-Kettering Cancer Center, Breast and Imaging Center, 300 East 66th Street, New York, NY 10065 (United States); Hughes, Mary, E-mail: hughesm@mskcc.org [Memorial Sloan-Kettering Cancer Center, Breast and Imaging Center, 300 East 66th Street, New York, NY 10065 (United States); Kaplan, Jennifer, E-mail: kaplanj@mskcc.org [Memorial Sloan-Kettering Cancer Center, Breast and Imaging Center, 300 East 66th Street, New York, NY 10065 (United States); Jochelson, Maxine S., E-mail: jochelsm@mskcc.org [Memorial Sloan-Kettering Cancer Center, Breast and Imaging Center, 300 East 66th Street, New York, NY 10065 (United States)

    2015-01-15

    Highlights: • Abbreviated breast MR demonstrates high sensitivity for breast carcinoma detection. • Time to perform/interpret the abbreviated exam is shorter than a standard MRI exam. • An abbreviated breast MRI could reduce costs and make MRI screening more available. - Abstract: Objective: To evaluate the ability of an abbreviated breast magnetic resonance imaging (MRI) protocol, consisting of a precontrast T1 weighted (T1W) image and single early post-contrast T1W image, to detect breast carcinoma. Materials and methods: A HIPAA compliant Institutional Review Board approved review of 100 consecutive breast MRI examinations in patients with biopsy proven unicentric breast carcinoma. 79% were invasive carcinomas and 21% were ductal carcinoma in situ. Four experienced breast radiologists, blinded to carcinoma location, history and prior examinations, assessed the abbreviated protocol evaluating only the first post-contrast T1W image, post-processed subtracted first post-contrast and subtraction maximum intensity projection images. Detection and localization of tumor were compared to the standard full diagnostic examination consisting of 13 pre-contrast, post-contrast and post-processed sequences. Results: All 100 cancers were visualized on initial reading of the abbreviated protocol by at least one reader. The mean sensitivity for each sequence was 96% for the first post-contrast sequence, 96% for the first post-contrast subtraction sequence and 93% for the subtraction MIP sequence. Within each sequence, there was no significant difference between the sensitivities among the 4 readers (p = 0.471, p = 0.656, p = 0.139). Mean interpretation time was 44 s (range 11–167 s). The abbreviated imaging protocol could be performed in approximately 10–15 min, compared to 30–40 min for the standard protocol. Conclusion: An abbreviated breast MRI protocol allows detection of breast carcinoma. One pre- and post-contrast T1W sequence may be adequate for detecting breast carcinoma.

  2. Improving Discrete Trial Instruction by Paraprofessional Staff Through an Abbreviated Performance Feedback Intervention

    Leblanc, Marie-Pierre; Ricciardi, Joseph N.; Luiselli, James K.

    2005-01-01

    We evaluated an abbreviated performance feedback intervention as a training strategy to improve discrete trial instruction of children with autism by three paraprofessional staff (assistant teachers) at a specialized day school. Feedback focused on 10 discrete trial instructional skills demonstrated by the staff during teaching sessions. Following…

  3. Text-Message Abbreviations and Language Skills in High School and University Students

    De Jonge, Sarah; Kemp, Nenagh

    2012-01-01

    This study investigated the use of text-message abbreviations (textisms) in Australian adolescents and young adults, and relations between textism use and literacy abilities. Fifty-two high school students aged 13-15 years, and 53 undergraduates aged 18-24 years, all users of predictive texting, translated conventional English sentences into…

  4. Symbolic Capital in a Virtual Heterosexual Market: Abbreviation and Insertion in Italian iTV SMS

    Herring, Susan C.; Zelenkauskaite, Asta

    2009-01-01

    This study analyzes gender variation in nonstandard typography--specifically, abbreviations and insertions--in mobile phone text messages (SMS) posted to a public Italian interactive television (iTV) program. All broadcast SMS were collected for a period of 2 days from the Web archive for the iTV program, and the frequency and distribution of…

  5. 21 CFR 314.127 - Refusal to approve an abbreviated new drug application.

    2010-04-01

    ... from sale for safety or effectiveness reasons under § 314.161, or the reference listed drug has been... 21 CFR § 314.127: Refusal to approve an abbreviated new drug application. Food and Drugs; FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH...

  6. Psychometric Properties of the Abbreviated Perceived Motivational Climate in Exercise Questionnaire

    Moore, E. Whitney G.; Brown, Theresa C.; Fry, Mary D.

    2015-01-01

    The purpose of this study was to develop an abbreviated version of the Perceived Motivational Climate in Exercise Questionnaire (PMCEQ-A) to provide a more practical instrument for use in applied exercise settings. In the calibration step, two shortened versions' measurement and latent model values were compared to each other and the original…

  7. A dictionary of nuclear power and waste management with abbreviations and acronyms

    This book provides defined terms from the nuclear power and radioactive waste management industries. It includes abbreviations and acronyms associated with nuclear power and the sister industry of waste management, for example, NIMBY (not in my backyard). Technical definitions from other sciences which are related to the subject of nuclear waste management have also been included

  8. Evaluating an Abbreviated Version of the Paths Curriculum Implemented by School Mental Health Clinicians

    Gibson, Jennifer E.; Werner, Shelby S.; Sweeney, Andrew

    2015-01-01

    When evidence-based prevention programs are implemented in schools, adaptations are common. It is important to understand which adaptations can be made while maintaining positive outcomes for students. This preliminary study evaluated an abbreviated version of the Promoting Alternative Thinking Strategies (PATHS) Curriculum implemented by…

  9. An evaluation of expert human and automated Abbreviated Injury Scale and ICD-9-CM injury coding.

    Long, W B; Sacco, W J; Copes, W S; Lawnick, M M; Proctor, S M; Sacco, J B

    1994-04-01

    Two hundred ninety-five injury descriptions from 135 consecutive patients treated at a level-I trauma center were coded by three human coders (H1, H2, H3) and by TRI-CODE (T), a PC-based artificial intelligence software program. Two study coders are nationally recognized experts who teach AIS coding for its developers (the Association for the Advancement of Automotive Medicine); the third has 5 years experience in ICD and AIS coding. A "correct coding" (CC) was established for the study injury descriptions. Coding results were obtained for each coder relative to the CC. The correct ICD codes were selected in 96% of cases for H2, 92% for H1, 91% for T, and 86% for H3. The three human coders agreed on 222 (75%) injuries. The correct 7 digit AIS codes (six identifying digits and the severity digit) were selected in 93% of cases for H2, 87% for T, 77% for H3, and 73% for H1. The correct AIS severity codes (seventh digit only) were selected in 98.3% of cases for H2, 96.3% for T, 93.9% for H3, and 90.8% for H1. On the basis of the weighted kappa statistic TRI-CODE had excellent agreement with the correct coding (CC) of AIS severities. Each human coder had excellent agreement with CC and with TRI-CODE. Coders H1 and H2 were in excellent agreement. Coder H3 was in good agreement with H1 and H2. However, errors among the human coders often occur for different codes, accentuating the variability.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:8158710
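The weighted kappa statistic used above to quantify coder agreement can be computed directly. Here is a minimal Python version of Cohen's kappa with linear weights over ordered categories (the linear weighting scheme is an assumption; the abstract does not specify which weights were used):

```python
from collections import Counter

def weighted_kappa(rater_a, rater_b, categories):
    """Cohen's kappa with linear disagreement weights over ordered categories."""
    idx = {c: i for i, c in enumerate(categories)}
    k, n = len(categories), len(rater_a)
    # Disagreement weight grows linearly with distance between categories.
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    # Observed joint distribution of the two raters' codes.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1.0 / n
    # Expected joint distribution under independent marginals.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    exp = [[count_a[categories[i]] * count_b[categories[j]] / (n * n)
            for j in range(k)] for i in range(k)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * exp[i][j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp
```

Perfect agreement yields kappa = 1; near-misses on adjacent severity codes are penalized less than distant disagreements, which is why weighted kappa suits ordinal AIS severity scores.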

  10. The Abbreviated Character Strengths Test (ACST): A Preliminary Assessment of Test Validity.

    Vanhove, Adam J; Harms, P D; DeSimone, Justin A

    2016-01-01

    The 24-item Abbreviated Character Strengths Test (ACST) was developed to efficiently measure character strengths (Peterson, Park, & Castro, 2011). However, its validity for this purpose has not yet been sufficiently established. Using confirmatory factor analysis to test a series of structural models, only a modified bifactor model showed reasonably acceptable fit. Further analyses of this model failed to demonstrate measurement invariance between male and female respondents. Relationships between ACST dimension and Big Five personality trait scores were generally weak-to-moderate, and support for hypotheses regarding each ACST virtue's expected correspondence with specific Big Five dimensions was mixed. Finally, scores on ACST dimensions accounted for a combined 12% of the variance in satisfaction with life scores, after controlling for social desirability. Although an abbreviated measure of character strengths addresses a practical need, considerable improvements to the ACST are needed for it to adequately meet this purpose. PMID:26983465

  11. Evaluation of an abbreviated abdominal-pelvic CT blunt trauma protocol

    In an attempt to expedite computed tomographic (CT) imaging in patients who have suffered multiple blunt trauma, an abbreviated abdominal-pelvic CT protocol was designed and tested. From 30 cases of abnormal full, 1.0-cm abdominal-pelvic scans, only post-contrast scans were selected for blind review at 1.0-cm increments through the spleen, 2.0-cm increments through the liver, and 3.0-cm increments to the symphysis pubis. Results and receiver operating characteristic curves were correlated with formal scan results and medical records. Preliminary results suggest that with adequate plain film spine and pelvic evaluation, abbreviated abdominal-pelvic CT scanning may be effective in evaluating solid organ injury and in establishing trauma imaging protocols for the victim of multiple blunt trauma.

  12. Abbreviated epitaxial growth mode (AGM) method for reducing cost and improving quality of LEDs and lasers

    Tansu, Nelson; Chan, Helen M; Vinci, Richard P; Ee, Yik-Khoon; Biser, Jeffrey

    2013-09-24

    The use of an abbreviated GaN growth mode on nano-patterned AGOG sapphire substrates, which utilizes a 15 nm low temperature GaN buffer and bypasses the etch-back and recovery processes during epitaxy, enables the growth of a high-quality GaN template on nano-patterned AGOG sapphire. The GaN template grown on nano-patterned AGOG sapphire by employing the abbreviated growth mode has a threading dislocation density two orders of magnitude lower than that of a conventional GaN template grown on planar sapphire. The use of the abbreviated growth mode also leads to a significant reduction in the cost of the epitaxy. The growth and characteristics of InGaN quantum well (QW) light emitting diodes (LEDs) on both templates were compared. The InGaN QW LEDs grown on the nano-patterned AGOG sapphire demonstrated at least a 24% enhancement in output power over LEDs grown on conventional GaN templates.

  13. BUSINESS ENGLISH OUTSIDE THE BOX. BUSINESS JARGON AND ABBREVIATIONS IN BUSINESS COMMUNICATION

    Pop Anamaria-Mirabela

    2014-12-01

    Business English is a commonly understood language, yet Harvard Business Review called business jargon "The Silent Killer of Big Companies". As we were all taught in school, in communication we must comply with linguistic rules so that our message gets across succinctly. Yet, there is one place where all these rules can be omitted (at least in recent decades): the corporate office. Here, one can use euphemisms and clichés, capitalize any word that is considered important, use the passive voice wherever possible, and put abbreviations in every sentence. The worst part is that all of these linguistic enormities are carried out deliberately. The purpose of this paper is to analyse to what extent business jargon and abbreviations have affected business communication (which, much of the time, is filled with opaque language to mask different activities and operations) and the reasons for which these linguistic phenomena have become so successful in the present. One of the reasons for the research is that in business English, jargon can be annoying because it overcomplicates. It is frequently unnecessary and it can transform a simple idea or instruction into something very confusing. It is true that every field has its jargon. Education, journalism, law, politics, medicine, urban planning: no field is immune. Yet, it seems that business jargon has been described as "the most annoying". Another reason is that jargon tends to be elitist. Those who do not understand the terms feel confused and uncertain. The paper starts with defining these two concepts, business jargon and abbreviations, and then it attempts to explain the "unusual" pervasion of these, both in business communication and in everyday communication. For this, the paper includes a list with the most common business jargon and abbreviations. In this view, the authors have accessed different economic blogs and specialty journals

  14. An Analysis of the Predictive Validity of the New Ecological Paradigm Scale.

    Cordano, Mark; Welcomer, Stephanie A.; Scherer, Robert F.

    2003-01-01

    Evaluates the predictive validity of the original and revised versions of the New Environmental Paradigm (NEP) scale, some abbreviated NEP-derived scales, and a non-NEP environmental attitudes scale. Finds that all scales explain a significant amount of the variance in a measure of intention to engage in pro-environmental behavior. (Contains 33…

  15. Abbreviations list

    2013-01-01

    Adege National Agency for Mosquito Destruction and Management of Mosquito-controlled Areas(Agence nationale pour la démoustication et la gestion des espaces naturels démoustiqués) AFD French Development Agency(Agence française de développement) Afssa French Agency for Health Security of Food(Agence française de sécurité sanitaire des aliments) Afsset French Agency for Environmental and Occupational Safety(Agence française de sécurité sanitaire de l’environnement et du travail) Anaes French Na...

  16. An abbreviated version of the brief assessment of cognition in schizophrenia (BACS)

    MD Yasuhiro Kaneda

    2015-06-01

    Background and Objectives: A short version of the Brief Assessment of Cognition in Schizophrenia (BACS) was derived. Methods: We calculated the corrected item-total correlation (CITC) for each test score relative to the composite score, and then computed the proportion of variance that each test shares with the global score excluding that test (Rt² = CITCt²) and the variance explained per minute of administration time for each test (Rt²/mint). Results and Conclusions: The 3 tests with the highest Rt²/mint, Symbol Coding, Digit Sequencing, and Token Motor, were selected for the Abbreviated BACS.
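The selection criterion is straightforward to reproduce. A small Python sketch with made-up subtest scores (the data and parameter names are illustrative only, not the BACS subtests) computes each subtest's corrected item-total correlation and ranks subtests by variance explained per minute of administration time, CITC² / minutes:

```python
import statistics

def pearson_r(x, y):
    """Plain Pearson correlation, standard library only."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def citc(scores, t):
    """Corrected item-total correlation: subtest t vs. composite minus t."""
    col = [row[t] for row in scores]
    rest = [sum(row) - row[t] for row in scores]
    return pearson_r(col, rest)

def rank_by_efficiency(scores, minutes):
    """Rank subtests by variance explained per minute: CITC_t**2 / min_t."""
    ranked = [(t, citc(scores, t) ** 2 / m) for t, m in enumerate(minutes)]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```

A subtest that tracks the composite poorly, or that takes long to administer, falls to the bottom of the ranking, mirroring how the three most "efficient" BACS tests were chosen.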

  17. USAGE OF ABBREVIATIONS IN THE NAMES OF WORKPLACES

    Gülnaz ÇETINOĞLU BERBEROĞLU

    2009-12-01

    Abbreviations occupy an important place in the names of workplaces, which distinguish a business from similar ones by indicating the type of service it provides and which carry a kind of identity. In the formation of these abbreviations, concerns such as attracting attention, standing out from similar names, being easy to remember, and sounding pleasant come to the fore, alongside the wish to save time and effort. As a result, the effort to comply with the rules of the language is neglected when the abbreviations are formed. Evaluation of the data leads to the conclusion that the abbreviations are formed casually, with the rules of the language ignored; it is thus determined that no order can be found in them and that they cannot be bound to rules.

  18. Abbreviations, acronyms, and initialisms frequently used by Martin Marietta Energy Systems, Inc.. Second edition

    Miller, J.T.

    1994-09-01

    Guidelines are given for using abbreviations, acronyms, and initialisms (AAIs) in documents prepared by US Department of Energy facilities managed by Martin Marietta Energy Systems, Inc., in Oak Ridge, Tennessee. The more than 10,000 AAIs listed represent only a small portion of those found in recent documents prepared by contributing editors of the Information Management Services organization of Oak Ridge National Laboratory, the Oak Ridge K-25 Site, and the Oak Ridge Y-12 Plant. This document expands on AAIs listed in the Document Preparation Guide and is intended as a companion document.

  19. Abbreviated laparotomy or damage control laparotomy: Why, when and how to do it?

    Voiglio, E J; Dubuisson, V; Massalou, D; Baudoin, Y; Caillot, J L; Létoublon, C; Arvieux, C

    2016-08-01

    The goal of abbreviated laparotomy is to treat severely injured patients whose condition requires an immediate surgical operation but for whom a prolonged procedure would worsen physiological impairment and metabolic failure. Indeed, in severely injured patients, blood loss and tissue injuries enhance the onset of the "bloody vicious circle", triggered by the triad of acidosis-hypothermia-coagulopathy. Abbreviated laparotomy is a surgical strategy that forgoes the completeness of operation in favor of a physiological approach, the overriding preference going to rapidity and limiting the procedure to control the injuries. Management is based on sequential association of the shortest possible preoperative resuscitation with surgery limited to essential steps to control injury (stop the bleeding and contamination), without definitive repair. The latter will be ensured during a scheduled re-operation after a period of resuscitation aiming to correct physiological abnormalities induced by the trauma and its treatment. This strategy necessitates a pre-defined plan and involvement of the entire medical and nursing staff to reduce time loss to a strict minimum. PMID:27542655

  20. [Abbreviated laparotomy for treatment of severe abdominal trauma: use in austere settings].

    Balandraud, P; Biance, N; Peycru, T; Savoie, P H; Avaro, J P; Tardat, E; Pourrière, M; Cador, L

    2007-10-01

    Abbreviated laparotomy is a recent technique for management of patients with severe abdominal trauma. It is based on a unified approach taking into account the overall extent of injury and the victim's physiologic potential to respond to hemorrhage. It is the first step in a multi-modal strategy. The second step is the critical care phase. The third step consists of "second-look" laparotomy that should ideally be performed on an elective basis within 48 hours and is aimed at definitive treatment of lesions. The goal of abbreviated laparotomy is damage control using temporary quick-fix procedures limited to conspicuous lesions and rapid hemostasis and/or viscerostasis procedures so that the patient can survive the acute critical period. Tension-free closure of the abdominal wall, if necessary using laparostomy, is essential to avoid abdominal compartment syndrome. With reported survival rates of about 50% in Europe and the United States, this simple life-saving technique that requires limited resources should be introduced in Africa where severe abdominal trauma often involves young patients. PMID:18225739

  1. Rest improves performance, nature improves happiness: Assessment of break periods on the abbreviated vigilance task.

    Finkbeiner, Kristin M; Russell, Paul N; Helton, William S

    2016-05-01

    The abbreviated vigilance task can quickly generate vigilance decrements, which, it has been argued, is due to the depletion of cognitive resources needed to sustain performance. Researchers suggest that the inclusion of rest breaks within vigilance tasks improves overall performance (Helton & Russell, 2015; Ross, Russell, & Helton, 2014), while different types of breaks demonstrate different effects. Some literature suggests exposure to natural movements/stimuli helps restore attention (Herzog, Black, Fountaine, & Knotts, 1997; Kaplan, 1995). Participants were randomly assigned to one of four experimental conditions: dog video breaks, robot video breaks, countdown breaks or continuous vigilance. We assessed task performance and subjective reports of stress/workload. The continuous group displayed the worst performance, suggesting breaks help restore attention. The dog video breaks did not affect performance but did decrease reports of distress. These results support the importance of rest breaks and acknowledge the benefit of natural stimuli for promoting wellbeing/stress relief, suggesting overall that performance and wellbeing may be independent, which warrants future study. PMID:27089530

  2. Abbreviations [Annex to The Fukushima Daiichi Accident, Technical Volume 1/5

    This annex is a list of abbreviations used in the publication The Fukushima Daiichi Accident, Technical Volume 1/5. The list includes the abbreviations for: • accident management; • accident management guideline; • auxiliary operator; • abnormal operating procedure; • air operated valve; • alarm pocket dosimeter; • all rods in; • containment atmospheric monitoring system; • containment cooling system; • control rod; • core spray; • condensate storage tank; • diesel driven fire pump; • dry well; • emergency core cooling system; • emergency diesel generator; • emergency operating procedure; • Emergency Response Centre; • fire protection; • high pressure coolant injection; • heating, ventilating and air-conditioning; • isolation condenser; • loss of off-site power; • metal clad switch gear; • motor control centre; • main control room; • Ministry of Economy, Trade and Industry; • motor operated valve; • measuring point/monitoring post; • main steam isolation valve; • make-up water condensate; • Nuclear and Industrial Safety Agency; • nuclear power plant; • Nuclear Regulatory Authority; • power centre; • primary containment isolation signal; • primary containment vessel; • reactor building; • reactor core isolation cooling; • residual heat removal; • residual heat removal and cooling seawater; • reactor pressure vessel; • station blackout; • suppression chamber; • self-contained breathing apparatus; • spent fuel pool; • standby gas treatment system; • standby liquid control; • standby liquid control system; • safety relief valve; • top of active fuel; • turbine building; • Tokyo Electric Power Company; • wide range

  3. 78 FR 25749 - Submission of New Drug Application/Abbreviated New Drug Application Field Alert Reports: Notice...

    2013-05-02

    ... Application Field Alert Reports: Notice of Form FDA 3331--Automated Pilot Program AGENCY: Food and Drug... submit new drug application (NDA) and abbreviated new drug application (ANDA) Field Alert Reports (FARs... program should be sent to district Drug Field Alert Monitors (contact information for each of...

  4. Matching Element Symbols with State Abbreviations: A Fun Activity for Browsing the Periodic Table of Chemical Elements

    Woelk, Klaus

    2009-01-01

    A classroom activity is presented in which students are challenged to find matches between the United States two-letter postal abbreviations for states and chemical element symbols. The activity aims to lessen negative apprehensions students might have when the periodic table of the elements with its more than 100 combinations of letters is first…

  5. 76 FR 26307 - Guidance for Industry on the Submission of Summary Bioequivalence Data for Abbreviated New Drug...

    2011-05-06

    ... the Federal Register in January 2009 (74 FR 2849, January 16, 2009). The final rule requires ANDA... announced the availability of the draft version of this guidance (74 FR 17872). The public comment period... Bioequivalence Data for Abbreviated New Drug Applications; Availability AGENCY: Food and Drug Administration,...

  6. Txt Msg N School Literacy: Does Texting and Knowledge of Text Abbreviations Adversely Affect Children's Literacy Attainment?

    Plester, Beverly; Wood, Clare; Bell, Victoria

    2008-01-01

    This paper reports on two studies which investigated the relationship between children's texting behaviour, their knowledge of text abbreviations and their school attainment in written language skills. In Study One, 11-12-year-old children provided information on their texting behaviour. They were also asked to translate a standard English…

  7. Magnetic resonance imaging. Sequence acronyms and other abbreviations in MR imaging

    The role of magnetic resonance imaging in clinical routine is still increasing. The large number of possible MR acquisition schemes reflects the variety of tissue-dependent parameters that may influence the contrast within the image. Those schemes can be categorized into gradient echo and spin echo techniques. Within these groups, further sorting can be done to differentiate between single-echo, multi-echo, and single-shot techniques. Each of these techniques can be combined with preparation schemes for modifying the longitudinal magnetization. Hybrids are found between the groups, which are those techniques that utilize spin echoes as well as gradient echoes. Academic groups as well as vendors often have different sequence acronyms for the same acquisition scheme. This contribution will sort these sequence acronyms into the previously mentioned scheme. The basic principle of the data acquisition is elaborated on and hints are given for potential clinical applications. Besides the sequence-specific acronyms, new abbreviations have surfaced recently in conjunction with "parallel acquisition techniques." The latter means the utilization of multiple surface coils where the position and the sensitivity profile of the coils provide additional spatial information, allowing the application of reduced matrixes leading to a shorter measurement time. (orig.)

  8. ‘LOL’, ‘OMG’ and Other Acronyms and Abbreviations : A study in the creation of initialisms

    Lundell, Ida

    2012-01-01

    Marchand (1969) claims that abbreviations and acronyms, which are also known as ‘initialisms’, are used to create “names of new scientific discoveries, trade-names, names of organizations, new foundations or offices, but occasionally, and chiefly in American English, personal and geographical names are also coined in this way” (Marchand, 1969: 452). However, initialisms that originate from netspeak, such as ‘LOL’, are different from the initialisms Marchand (1969) describes. These initialisms...

  9. The effectiveness of an abbreviated training program for health workers in breast cancer awareness: innovative strategies for resource constrained environments

    Mutebi, Miriam; Wasike, Ronald; Mushtaq, Ahmed; Kahie, Aideed; Ntoburi, Stephen

    2013-01-01

    Background Breast cancer is characterized by late presentation and significant morbidity and mortality in developing countries. Breast screening aids in early detection of breast cancer. Nurses are uniquely placed to provide advocacy and screening in a resource limited environment. Objectives To assess the effectiveness of an abbreviated training program in breast cancer awareness on nurses at a tertiary hospital, in a resource constrained environment. Methods Using a statistical tool, the So...

  10. Diagnostic per-patient accuracy of an abbreviated hepatobiliary phase gadoxetic acid-enhanced MRI for hepatocellular carcinoma surveillance.

    Marks, Robert M; Ryan, Andrew; Heba, Elhamy R; Tang, An; Wolfson, Tanya J; Gamst, Anthony C; Sirlin, Claude B; Bashir, Mustafa R

    2015-03-01

    OBJECTIVE. The purpose of this study is to evaluate the per-patient diagnostic performance of an abbreviated gadoxetic acid-enhanced MRI protocol for hepatocellular carcinoma (HCC) surveillance. MATERIALS AND METHODS. A retrospective review identified 298 consecutive patients at risk for HCC enrolled in a gadoxetic acid-enhanced MRI-based HCC surveillance program. For each patient, the first gadoxetic acid-enhanced MRI was analyzed. To simulate an abbreviated protocol, two readers independently read two image sets per patient: set 1 consisted of T1-weighted 20-minute hepatobiliary phase and T2-weighted single-shot fast spin-echo (SSFSE) images; set 2 included diffusion-weighted imaging (DWI) and images from set 1. Image sets were scored as positive or negative according to the presence of at least one nodule 10 mm or larger that met the predetermined criteria. Agreement was assessed using Cohen kappa statistics. A composite reference standard was used to determine the diagnostic performance of each image set for each reader. RESULTS. Interreader agreement was substantial for both image sets (κ = 0.72 for both) and intrareader agreement was excellent (κ = 0.97-0.99). Reader performance for image set 1 was sensitivity of 85.7% for reader A and 79.6% for reader B, specificity of 91.2% for reader A and 95.2% for reader B, and negative predictive value of 97.0% for reader A and 96.0% for reader B. Reader performance for image set 2 was nearly identical, with only one of 298 examinations scored differently on image set 2 compared with set 1. CONCLUSION. An abbreviated MRI protocol consisting of T2-weighted SSFSE and gadoxetic acid-enhanced hepatobiliary phase has high negative predictive value and may be an acceptable method for HCC surveillance. The inclusion of a DWI sequence did not significantly alter the diagnostic performance of the abbreviated protocol. PMID:25714281
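    The figures above follow from standard confusion-matrix arithmetic. As an illustration, here is a minimal sketch: the per-cell counts below are a reconstruction consistent with reader A's reported figures for image set 1 (85.7% sensitivity, 91.2% specificity, 97.0% NPV over 298 examinations), not data taken from the paper, and the kappa helper is shown only with a synthetic perfect-agreement case.

```python
# Hedged reconstruction, not the study's raw data: counts chosen to be
# consistent with reader A's reported figures; the actual breakdown is
# not given in the abstract.

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and negative predictive value from counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, npv

def cohen_kappa(both_pos, a_only, b_only, both_neg):
    """Cohen's kappa for two readers with binary positive/negative scores."""
    n = both_pos + a_only + b_only + both_neg
    p_observed = (both_pos + both_neg) / n
    p_a = (both_pos + a_only) / n          # reader A's positive rate
    p_b = (both_pos + b_only) / n          # reader B's positive rate
    p_expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_observed - p_expected) / (1 - p_expected)

sens, spec, npv = diagnostic_metrics(tp=42, fp=22, fn=7, tn=227)
```

    With these hypothetical counts (which sum to the 298 examinations), 42/(42+7) ≈ 85.7%, 227/(227+22) ≈ 91.2% and 227/(227+7) ≈ 97.0%; under perfect agreement `cohen_kappa` returns 1.0.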

  11. Towards a Theory and View of Teaching Compressed and Abbreviated Research Methodology and Statistics Courses

    James Carifio

    2007-01-01

    One of the highly questionable effects of educational reform and other curriculum-reshaping factors at the high school, post-secondary, and graduate levels has been the shift to teaching compressed, pared-down, or abbreviated courses in still-needed or required subject matter that became de-emphasized in the current educational reformation. Research methodology, particularly the highly quantitative and experimental kind, and statistics are two still-needed subject matters that have been especially affected by this demotion and compression movement at the pre-service, in-service, professional development, undergraduate, continuing education, and graduate levels, even though education, science, business, politics, and most other professional areas (including history) have become far more quantitative and objective-research oriented than in the past. Until there are more enlightened policy shifts, effective means of teaching such compressed courses need to be devised and tested, if only to lessen the negative outcomes of compressing such critical courses. This article, therefore, analyzes compressed courses from the point of view of cognitive learning and then describes five methods and approaches that were tested to improve the effectiveness of research methodology and statistics courses taught in these formats. Each of the formats helped to reduce student stress and anxiety about the content and its compressed presentation and improved understanding and achievement. The theory and view developed in this article are also applicable to similar compressed courses in scientific and/or technical content, which are currently prevalent in allied health and biotechnology areas.

  12. Essay on the pertinence of Luscher's abbreviated test in the psychological evaluation of the radioactive accident victims of Goiania

    The essay on the pertinence of Luscher's abbreviated test in the psychological evaluation of the victims of the radioactive accident of Goiania (a Brazilian city), which occurred in 1987, is the result of confronting data obtained in two distinct situations, using as criteria time, efficiency and pertinence. Besides this, the palographic and house-tree-person (HTP) tests are introduced. These tests aimed at verifying the psychological characteristics common to the personalities of the radioactive accident victims of Goiania and at capturing the existential moment of those people. Among the three tests, Luscher's obtained the best acceptance index among the interviewees.

  13. Perceived HIV-Associated Stigma among HIV-Seropositive Men: Psychometric Study of HIV Stigma Scale

    Valle, Adrian; Treviño, Ana Cecilia; Zambrano, Farith Francisco; Urriola, Karla Elizabeth; Sánchez, Luis Antonio; Elizondo, Jesus Eduardo

    2015-01-01

    Objectives To assess the internal consistency and factor structure of the abridged Spanish version of the Berger HIV Stigma Scale (HSS-21), to provide evidence for its convergent and discriminant validity, and to describe perceived stigma in an urban population from northeast Mexico. Methods Seventy-five HIV-positive men who have sex with men (MSM) were recruited. Participants answered the Spanish versions of three Likert-type scales: HSS-21, Rosenberg’s self-esteem scale, and the abbreviate...

  14. Order of age at onset for substance use, substance use disorder, conduct disorder and psychiatric illness

    Guldager, Steen; Linneberg, Inger Holm; Hesse, Morten

    2012-01-01

    This study aimed to assess the number of patients who reported earlier age at onset for psychiatric illness versus those with an earlier age at onset for substance use. Subjects were 194 patients from substance use disorder (SUD) treatment services in the Municipality of Fredericia who accepted an offer of psychological assessment. Patients were administered the Mini International Neuropsychiatric Interview (MINI), and when diagnoses were indicated, queried about the age at onset for each disorder. Additionally, subjects were administered the WAIS-III vocabulary scale and the Structured Assessment of Personality – Abbreviated Scale (SAPAS), completed the MCMI-III and the Beck Anxiety Inventory (BAI), and were rated with the Montgomery Åsberg Depression Rating Scale. Age at onset was lowest for conduct disorder/antisocial behaviour, followed by tasting alcohol, trying drugs, post-traumatic stress...

  15. Costs for renewable electricity. Small-scale hydroelectric power

    The aim of the study on the title subject is to provide an objective basis for the determination of the assumptions that are used for the calculation of the so-called uneconomic top of electricity production from renewable energy sources, carried out by ECN and KEMA. The results will be used for the determination of the subsidy tariffs for small-scale hydroelectric power plants and are part of the Environmental Quality of Electricity Production (MEP, abbreviated in Dutch) policy

  16. 21 CFR 314.107 - Effective date of approval of a 505(b)(2) application or abbreviated new drug application under...

    2010-04-01

    ... this chapter, but does not include transfer of the drug product for reasons other than sale within the...) application or abbreviated new drug application under section 505(j) of the act. 314.107 Section 314.107 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS...

  17. Screening for personality disorder in incarcerated adolescent boys

    Kongerslev, Mickey Toftkjær; Moran, Paul; Bo, Sune;

    2012-01-01

    ABSTRACT: BACKGROUND: Personality disorder (PD) is associated with significant functional impairment and an elevated risk of violent and suicidal behaviour. The prevalence of PD in populations of young offenders is likely to be high. However, because the assessment of PD is time-consuming, it is...... identification of PD in adults (Standardised Assessment of Personality - Abbreviated Scale; SAPAS) for use with adolescents and then carried out a study of the reliability and validity of the adapted instrument in a sample of 80 adolescent boys in secure institutions. Participants were administered the screen......, reliability, and usefulness of the screen in secure institutions for adolescent male offenders. It can be used in juvenile offender institutions with limited resources, as a brief, acceptable, staff-administered routine screen to identify individuals in need of further assessment of PD or by researchers...

  18. Screening for personality disorder in incarcerated adolescent boys

    Simonsen, Erik; Kongerslev, Mickey; Moran, Paul;

    2012-01-01

    Background: Personality disorder (PD) is associated with significant functional impairment and an elevated risk of violent and suicidal behaviour. The prevalence of PD in populations of young offenders is likely to be high. However, because the assessment of PD is time-consuming, it is not...... in adults (Standardised Assessment of Personality – Abbreviated Scale; SAPAS) for use with adolescents and then carried out a study of the reliability and validity of the adapted instrument in a sample of 80 adolescent boys in secure institutions. Participants were administered the screen and shortly......, and usefulness of the screen in secure institutions for adolescent male offenders. It can be used in juvenile offender institutions with limited resources, as a brief, acceptable, staff-administered routine screen to identify individuals in need of further assessment of PD or by researchers conducting...

  19. One of These Things Is Not Quite the Same: A Comparison of the Patent Doctrine of Equivalents with Suitability for Filing an Abbreviated New Drug Application

    Halstead, David P.

    2002-01-01

    The doctrine of equivalents as applied to chemical patents is compared to the FDA’s findings of bioequivalence in reviewing suitability petitions for filing Abbreviated New Drug Applications (ANDAs). The doctrine of equivalents provides the greatest flexibility early in the drug-development process, gradually diminishing as the product refinements become increasingly minor. Determinations of bioequivalence, however, exhibit the reverse trend as applied to analogous situatio...

  20. Validation of the Abbreviated Brucella AMOS PCR as a Rapid Screening Method for Differentiation of Brucella abortus Field Strain Isolates and the Vaccine Strains, 19 and RB51

    Ewalt, Darla R; Bricker, Betsy J.

    2000-01-01

    The Brucella AMOS PCR assay was previously developed to identify and differentiate specific Brucella species. In this study, an abbreviated Brucella AMOS PCR test was evaluated to determine its accuracy in differentiating Brucella abortus into three categories: field strains, vaccine strain 19 (S19), and vaccine strain RB51/parent strain 2308 (S2308). Two hundred thirty-one isolates were identified and tested by the conventional biochemical tests and Brucella AMOS PCR. This included 120 isola...

  1. Assessing Giftedness in Children: Comparing the Accuracy of Three Shortened Measures of Intelligence to the Stanford-Binet Intelligence Scales, Fifth Edition

    Newton, Jocelyn H.; McIntosh, David E.; Dixon, Felicia; Williams, Tasha; Youman, Elizabeth

    2008-01-01

    This study examined the accuracy of three shortened measures of intelligence: the Woodcock-Johnson Tests of Cognitive Ability, Third Edition Brief Intellectual Ability (WJ III COG BIA) score; the Stanford-Binet Intelligence Scale, Fifth Edition Abbreviated IQ (SB5 ABIQ); and the Kaufman Brief Intelligence Test IQ Composite (K-BIT) in predicting…

  2. Abbreviation of larval development and extension of brood care as key features of the evolution of freshwater Decapoda.

    Vogt, Günter

    2013-02-01

    The transition from marine to freshwater habitats is one of the major steps in the evolution of life. In the decapod crustaceans, four groups have colonized fresh water at different geological times since the Triassic, the freshwater shrimps, freshwater crayfish, freshwater crabs and freshwater anomurans. Some families have even colonized terrestrial habitats via the freshwater route or directly via the sea shore. Since none of these taxa has ever reinvaded its environment of origin the Decapoda appear particularly suitable to investigate life-history adaptations to fresh water. Evolutionary comparison of marine, freshwater and terrestrial decapods suggests that the reduction of egg number, abbreviation of larval development, extension of brood care and lecithotrophy of the first posthatching life stages are key adaptations to fresh water. Marine decapods usually have high numbers of small eggs and develop through a prolonged planktonic larval cycle, whereas the production of small numbers of large eggs, direct development and extended brood care until the juvenile stage is the rule in freshwater crayfish, primary freshwater crabs and aeglid anomurans. The amphidromous freshwater shrimp and freshwater crab species and all terrestrial decapods that invaded land via the sea shore have retained ocean-type planktonic development. Abbreviation of larval development and extension of brood care are interpreted as adaptations to the particularly strong variations of hydrodynamic parameters, physico-chemical factors and phytoplankton availability in freshwater habitats. These life-history changes increase fitness of the offspring and are obviously favoured by natural selection, explaining their multiple origins in fresh water. There is no evidence for their early evolution in the marine ancestors of the extant freshwater groups and a preadaptive role for the conquest of fresh water. The costs of the shift from relative r- to K-strategy in freshwater decapods are traded

  3. Little Rock and El Dorado 1° x 2° NTMS quadrangles and adjacent areas, Arkansas: data report (abbreviated)

    This abbreviated data report presents results of ground water and stream sediment reconnaissance in the National Topographic Map Series Little Rock 1° x 2° quadrangle (Cleveland, Dallas, and Howard Counties do not have stream sediment analyses); the El Dorado 1° x 2° quadrangle (only Clark County has stream sediment analyses); the western part (Lonoke and Jefferson Counties) of Helena 1° x 2° quadrangle; the southern part (Franklin, Logan, Yell, Perry, Faulkner, and Lonoke Counties) of Russellville 1° x 2° quadrangle; and the southwestern corner (Ashley County) of the Greenwood 1° x 2° quadrangle. Stream samples were collected at 943 sites in the Little Rock quadrangle, 806 sites in the El Dorado quadrangle, 121 sites in the Helena area, 292 sites in the Russellville area, and 77 in the Greenwood area. Ground water samples were collected at 1211 sites in the Little Rock quadrangle, 1369 sites in the El Dorado quadrangle, 186 sites in the Helena area, 470 sites in the Russellville area, and 138 sites in the Greenwood area. Stream sediment and stream water samples were collected from small streams at a nominal density of one site per 21 square kilometers in rural areas. Ground water samples were collected at a nominal density of one site per 13 square kilometers. Neutron activation analysis results are given for uranium and 16 other elements in sediments, and for uranium and 8 other elements in ground water. Field measurements and observations are reported for each site. Uranium concentrations in the sediments ranged from less than 0.1 ppm to 23.5 ppm with a mean of 1.7 ppm. The ground water uranium mean concentration is 0.113 ppb, and the uranium concentrations range from less than 0.002 ppb to 15.875 ppb. High ground water uranium values in the Ouachita Mountain region of the Little Rock quadrangle appear to be associated with Ordovician black shale units

  4. Some Common Abbreviations

    ... Coronary artery disease A common type of heart disease CAT Computerized axial tomography A type of x-ray ... Gastrointestinal Another term for your digestive system GFR Glomerular ... that causes one type of liver disease HBV Hepatitis B virus A virus that causes ...

  5. Discussion of the Abbreviations by Simplifying Sentence of “Life is Already Hard, so Don't Expose the Truth”

    白鑫

    2015-01-01

    The abbreviation by simplifying sentences is a new language phenomenon that originated on the internet and is a kind of abbreviation. These abbreviations mostly take the form of four characters, each character being a single-character word and at least one morpheme being a verb; however, most of them are semantically ambiguous. The paper holds that there is a mapping relationship between the abbreviation and the original sentence, and that the theory of spreading activation and grammar generation theory can explain this mapping relationship and the cognitive process of sentence reduction. In addition, the cognitive limitations of abbreviation by simplifying mainly come from the strength of spreading activation and the scope of cognition.

  6. DYNAMIC SCALING OF GROWING SURFACES WITH GROWTH INHOMOGENEITIES OF SCREENED COULOMBIC FUNCTION

    TANG GANG; MA BEN-KUN

    2000-01-01

    The dynamic scaling properties of growing surfaces with growth inhomogeneities are studied by applying a dynamic renormalization-group analysis to the generalized Kardar-Parisi-Zhang (hereafter abbreviated to KPZ) equation, which contains an additional term of growth inhomogeneities. In a practical crystal growth process, these growth inhomogeneities can be induced by surface impurities and defects and are modeled by a screened Coulomb function in this paper. Our results show that the existence of the growth inhomogeneities can significantly change the dynamic scaling properties of a growing surface and can lead to a rougher surface.
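    For orientation, the standard KPZ equation augmented with a static inhomogeneity term can be sketched as follows; the abstract does not give the paper's exact form, so the screened-Coulomb term here is only indicative:

```latex
\frac{\partial h(\mathbf{x},t)}{\partial t}
  = \nu \nabla^{2} h
  + \frac{\lambda}{2}\left(\nabla h\right)^{2}
  + \eta(\mathbf{x},t)
  + f(\mathbf{x}),
\qquad
f(\mathbf{x}) \sim \frac{e^{-|\mathbf{x}|/\ell}}{|\mathbf{x}|},
```

    where h is the surface height, ν the surface tension, λ the strength of the nonlinearity, η the noise, and f a quenched inhomogeneity with screening length ℓ.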

  7. Consideration of Change of Abbreviated Network Chinese Words

    李新梅

    2016-01-01

    This paper analyzes the latest abbreviated network Chinese word “Ranbingluan” from the three aspects of origin, structure, and semantics. It also discusses the generating motivation of abbreviated network Chinese words from the perspectives of social environment and human mentality, including the latest development of combining abbreviation with regional dialects. Considering the rationality, possibility, innovativeness, openness, and uncontrollability of network language, the paper analyzes the changes of abbreviated network Chinese words.

  8. Weight scaling

    Hultnäs, Mikael

    2012-01-01

    Scaling pulpwood according to its weight for payment purposes is done in many places around the world. In North America, pulpwood is commonly scaled according to its green weight, while in Central Europe the wood is commonly scaled according to its dry weight. In Sweden pulpwood is measured according to its volume. This doctoral thesis aims to study the prerequisites for Sweden to weight scale its pulpwood. Historical observations of the green density plus meteorological data were used...

  9. Evaluation of the small-scale hydro-energetic potential in micro-basins in Colombia

    A definition of small-scale hydroelectric power plants (PCHs, abbreviation in Spanish) is presented, along with their classification according to power and head and their classification by form of utilization. The general parameters for PCH design are described, covering topographic, geological and geotechnical, and hydrological studies. The primary elements of a PCH are shown: the dam (small dam), conduction, charge tank, sand trap (water reclaimer), floodgate, grating, pressure pipe, fall, principal valve, and turbine. In the study of the potential of micro-basins, general points such as topography, drainage, population, supply and demand of electric energy, morphology, hydrology, geology, and hydraulic potential are considered

  10. Scaling laws

    Scaling of beam transport systems to non-relativistic particles of different mass, charge and energy in the presence of external magnetic and self-electric fields is discussed in the context of HIF simulation experiments. Paraxial scaling is considered first; then more general scaling, including non-paraxial effects such as aberrations, is discussed. (U.K.)

  11. System state monitoring and lifetime scales--II

    The first part of this paper [Kordonsky, Kh. B. and Gertsbakh, I. B., System state monitoring and lifetime scales - I. Reliab. Engng and System Safety, 47 (1995) 1-14] was devoted to finding a time-scale for system state monitoring. The Best Monitoring Scale (BMS) was defined as a linear combination of several observable 'principal' time scales like the operational time, number of cycles, etc. These 'principal' time scales were chosen in such a way that they capture the significant dimensions of failure behaviour. In practice, the key issue is the estimation of the BMS. In the first part we suggested an estimation procedure when all observations (in two time-scales) are complete. This is rarely the case in real-life situations, where most observations are censored. The second part is devoted to finding the BMS on the basis of incomplete observations. We consider two types of data: observations up to a certain time or up to the first failure, and observations of a renewable system. We preserve the notation, abbreviations, and the terminology of part I of this paper
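    The linear-combination idea above can be sketched in a few lines; the weights and observations here are entirely hypothetical (in the paper the coefficients are estimated from possibly censored failure data):

```python
# Hypothetical sketch of a Best Monitoring Scale (BMS): a linear
# combination of observable "principal" time scales, e.g. operational
# hours and number of duty cycles. The weights are placeholders; the
# paper estimates them from failure observations.

def bms(principal_scales, weights):
    """Combined lifetime scale t* = sum_i a_i * t_i."""
    return sum(a * t for a, t in zip(weights, principal_scales))

# A unit that has run 120 hours and 45 duty cycles (made-up numbers):
t_star = bms([120.0, 45.0], [0.7, 0.3])
```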

  12. Risk adapted transmission prophylaxis to prevent vertical HIV–1 transmission: Effectiveness and safety of an abbreviated regimen of postnatal oral Zidovudine

    Neubert Jennifer

    2013-01-01

    Background: Antiretroviral drugs including zidovudine (ZDV) are effective in reducing HIV mother-to-child transmission (MTCT); however, safety concerns remain. The optimal duration of postnatal ZDV has not been established in clinical studies and there is a lack of consensus regarding optimal management. The objective of this study was to investigate the effectiveness and safety of a risk-adapted two-week course of oral postnatal ZDV as part of a combined intervention to reduce MTCT. Methods: 118 mother-infant pairs were treated according to the German-Austrian recommendations for HIV therapy in pregnancy and in HIV-exposed newborns between 2000 and 2010. In the absence of factors associated with an increased HIV-1 transmission risk, children were assigned to the low-risk group and treated with an abbreviated postnatal regimen of oral ZDV for 2 weeks. In the presence of risk factors, postnatal ZDV was escalated accordingly. Results: Of 118 mother-infant pairs, 79 were stratified to the low-risk group, 27 to the high-risk group and 11 to the very-high-risk group for HIV-1 MTCT. 4 children were lost to follow-up. Overall transmission risk in the group regardless of risk factors and completion of prophylaxis was 1.8% (95% confidence interval (CI) 0.09-6.6). If transmission prophylaxis was complete, transmission risk was 0.9% (95% CI 0.01-5.7). In the low-risk group receiving two weeks of oral ZDV, transmission risk was 1.4% (95% CI 0.01-8.4). Conclusion: These data demonstrate the effectiveness of a short neonatal ZDV regimen in infants of women on stable ART with effective HIV-1 suppression. Further evaluation is needed in larger studies.

  13. Maslowian Scale.

    Falk, C.; And Others

    The development of the Maslowian Scale, a method of revealing a picture of one's needs and concerns based on Abraham Maslow's levels of self-actualization, is described. This paper also explains how the scale is supported by the theories of L. Kohlberg, C. Rogers, and T. Rusk. After a literature search, a list of statements was generated…

  14. Planck Scale to Hubble Scale

    Sidharth, B. G.

    1998-01-01

    Within the context of the usual semiclassical investigation of Planck scale Schwarzschild black holes, as in Quantum Gravity, and later attempts at a full Quantum Mechanical description in terms of a Kerr-Newman metric including the spinorial behaviour, we attempt to present a formulation that extends from the Planck scale to the Hubble scale. In the process the so called large number coincidences as also the hitherto inexplicable relations between the pion mass and the Hubble Constant, point...

  15. Appendix B: Some Common Abbreviations

    ... virus that causes one type of liver disease HCT Hematocrit A blood test measurement HCV Hepatitis C ... A type of joint disease RBC Red blood cell A type of blood cell RSV Respiratory syncytial ...

  16. Scaling satan.

    Wilson, K M; Huff, J L

    2001-05-01

    The influence on social behavior of beliefs in Satan and the nature of evil has received little empirical study. Elaine Pagels (1995) in her book, The Origin of Satan, argued that Christians' intolerance toward others is due to their belief in an active Satan. In this study, more than 200 college undergraduates completed the Manitoba Prejudice Scale and the Attitudes Toward Homosexuals Scale (B. Altemeyer, 1988), as well as the Belief in an Active Satan Scale, developed by the authors. The Belief in an Active Satan Scale demonstrated good internal consistency and temporal stability. Correlational analyses revealed that for the female participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men and intolerance toward ethnic minorities. For the male participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men but was not significantly related to intolerance toward ethnic minorities. Results of this research showed that it is possible to meaningfully measure belief in an active Satan and that such beliefs may encourage intolerance toward others. PMID:11577971

  17. A Study of Abbreviation Identification in Official Documents Based on Bivariate Correlations

    孙启高

    2014-01-01

    According to correlation theory, this paper establishes a contemporary Chinese political and educational documents corpus of more than 12 million words. On the basis of segmenting, tagging and otherwise processing the corpus, the bivariate correlation combinations of its words are analyzed through sampling statistics, and double-syllable abbreviations are identified and extracted on this basis with satisfactory results, which provides new ideas and methods for the automatic identification of abbreviations and the automatic understanding of official documents.

  18. Nuclear scales

    Friar, J.L.

    1998-12-01

    Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the {pi}-{gamma} force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effectively parameterized by moments of the interaction that are independent of the details of the force model, in analogy to chiral perturbation theory. Results of GFMC calculations in light nuclei are interpreted in terms of fundamental scales, which are in good agreement with expectations from chiral effective field theories. Problems with spin-orbit-type observables are noted.

  19. Scaling CMOS

    G.A Brown

    2004-01-01

    The scaling of silicon integrated circuits to smaller physical dimensions became a primary activity of advanced device development almost as soon as the basic technology was established. The importance and persistence of this activity is rooted in the confluence of two of the strongest drives governing the business: the push for greater device performance, measured in terms of switching speed, and the desire for greater manufacturing profitability, dependent upon reduced cost per good device built.

  20. New Meta-Heuristic for Combinatorial Optimization Problems:Intersection Based Scaling

    Peng Zou; Zhi Zhou; Ying-Yu Wan; Guo-Liang Chen; Jun Gu

    2004-01-01

    Combinatorial optimization problems are found in many application fields such as computer science, engineering and economy. In this paper, a new efficient meta-heuristic, Intersection-Based Scaling (abbreviated IBS), is proposed that can be applied to combinatorial optimization problems. The main idea of IBS is to scale down the size of the instance based on the intersection of some local optima, and to simplify the search space by extracting the intersection from the instance, which makes the search more efficient. The combination of IBS with local search heuristics for different combinatorial optimization problems such as the Traveling Salesman Problem (TSP) and the Graph Partitioning Problem (GPP) is studied, and comparisons are made with some of the best heuristic and meta-heuristic algorithms. It is found that IBS significantly improves the performance of existing local search heuristics and significantly outperforms the best known algorithms.
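    A toy sketch of the core idea as it reads from the abstract (assumed, not the authors' implementation): edges shared by several local-optimum TSP tours are extracted, and holding them fixed shrinks the instance that the next search round has to explore.

```python
# Toy sketch of intersection-based scaling for TSP, assumed from the
# abstract: the intersection of the edge sets of several local optima
# is the part of the solution that can be frozen before further search.

def tour_edges(tour):
    """Undirected edge set of a cyclic tour given as a vertex sequence."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def intersection_of_optima(tours):
    """Edges common to every local optimum -- the part IBS keeps fixed."""
    edge_sets = [tour_edges(t) for t in tours]
    return set.intersection(*edge_sets)

local_optima = [
    [0, 1, 2, 3, 4, 5],
    [0, 1, 2, 4, 3, 5],
    [0, 2, 1, 3, 4, 5],
]
shared = intersection_of_optima(local_optima)
```

    In this example the three local optima agree on edges (1,2), (3,4) and (5,0), so only the remaining connections need further search.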

  1. Psychometric properties of the Chinese translation of the proactive personality scale.

    Zhou, Le; Shi, Junqi

    2009-08-01

    The purpose of the present study was to examine the psychometric properties of the Chinese translation of the Proactive Personality Scale. Four samples were surveyed. In Sample 1 and Sample 2, exploratory and confirmatory factor analysis results confirmed the unidimensional structure of the 10-item abbreviated version of the scale. In Sample 2, Proactive Personality scores were found to be positively correlated with the personality factors of Extraversion, Conscientiousness, Openness, and Agreeableness, and negatively correlated with Neuroticism. In Sample 3, Proactive Personality was found to be positively related to self-efficacy and political skill. In Sample 4, demographics controlled, Proactive Personality explained significant incremental variance in the employees' self-rated career satisfaction and job performance as rated by immediate supervisors. PMID:19810432
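    Correlational results like those summarized above reduce to Pearson's r. A self-contained sketch of the computation follows; the data are made up purely for illustration and have nothing to do with the study's samples.

```python
# Minimal Pearson correlation, stdlib only. Illustrative data, not the
# study's: a perfectly linear pair of series gives r = 1.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])  # perfectly linear
```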

  2. Scaling Cosmology

    Zimdahl, Winfried; Pavón, Diego

    2002-01-01

    We show that with the help of a suitable coupling between dark energy and cold dark matter it is possible to reproduce any scaling solution $\rho_X \propto \rho_M a^{\xi}$, where $\rho_X$ and $\rho_M$ are the densities of dark energy and dark matter, respectively. We demonstrate how the case $\xi = 1$ alleviates the coincidence problem. Future observations of supernovae at high redshift as well as quasar pairs which are planned to discriminate between different cosmological models will...

  3. Nuclear Scales

    Friar, J. L.

    1998-01-01

    Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the $\\pi$-$\\gamma$ force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effe...

  4. Study on Automatic Identification of Academic Abbreviations and Their Definitions Based on a Maximum Entropy Model

    张秋子; 陆伟; 程齐凯; 黄永

    2015-01-01

    In order to effectively identify abbreviations and their corresponding definitions in large volumes of English academic text, this paper proposes an automatic identification algorithm called MELearn-AI. Treating the task as sequence labelling, MELearn-AI trains a maximum entropy model on a manually labelled dataset and then uses the model to identify abbreviations in computer science academic texts. The method achieves a precision of 95.8% and a recall of 86.3% on the "Paren-sen" evaluation dataset created in this paper, a clear improvement over the two comparison algorithms. Tested on English academic texts in computer science, the algorithm achieves satisfactory results, which is helpful for better understanding and adopting the terminology of this field.
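
    MELearn-AI itself requires a trained maximum entropy model and labelled data. As a point of reference, the classic rule-based baseline for the same task scans for "long form (ABBR)" patterns and checks that the abbreviation's letters appear, in order, in the preceding words; the sketch below is that heuristic baseline, illustrative and not the paper's algorithm:

```python
import re

def find_abbreviations(sentence):
    """Find (abbreviation, candidate definition) pairs in one sentence.

    Rule-based heuristic: for each parenthesised token of 2-10 capitals,
    take the preceding len(abbr) words as the candidate long form and keep
    the pair only if the abbreviation's letters occur in order in it.
    """
    pairs = []
    for m in re.finditer(r"\(([A-Z]{2,10})\)", sentence):
        abbr = m.group(1)
        words = sentence[:m.start()].split()
        candidate = " ".join(words[-len(abbr):])  # heuristic word window
        # Subsequence check: every letter of abbr, in order, in the candidate.
        it = iter(candidate.lower())
        if all(ch in it for ch in abbr.lower()):
            pairs.append((abbr, candidate))
    return pairs

text = "We use a hidden Markov model (HMM) and maximum entropy (ME) features."
print(find_abbreviations(text))
```

    A learned sequence labeller like MELearn-AI wins over this kind of rule precisely on the cases the rule misses: reordered letters, lowercase abbreviations, and definitions longer than the fixed word window.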

  5. Nano Revolution--Big Impact: How Emerging Nanotechnologies Will Change the Future of Education and Industry in America (and More Specifically in Oklahoma). An Abbreviated Account

    Holley, Steven E.

    2009-01-01

    Scientists are creating new and amazing materials by manipulating molecules at the ultra-small scale of 0.1 to 100 nanometers. Nanosize super particles demonstrate powerful and unprecedented electrical, chemical, and mechanical properties. This study examines how nanotechnology, as the multidisciplinary engineering of novel nanomaterials into…

  6. Selected ICAR Data from the SAPA-Project: Development and Initial Validation of a Public-Domain Measure

    Condon, David M.; Revelle, William

    2016-01-01

    These data were collected during the initial evaluation of the International Cognitive Ability Resource (ICAR) project. ICAR is an international collaborative effort to develop open-source public-domain tools for cognitive ability assessment, including tools that can be administered in non-proctored environments (e.g., online administration) and those which are based on automatic item generation algorithms. These data provide initial validation of the first four ICAR item types as reported in...

  7. The Substance Use Risk Profile Scale: a scale measuring traits linked to reinforcement-specific substance use profiles.

    Woicik, Patricia A; Stewart, Sherry H; Pihl, Robert O; Conrod, Patricia J

    2009-12-01

    The Substance Use Risk Profile Scale (SURPS) is based on a model of personality risk for substance abuse in which four personality dimensions (hopelessness, anxiety sensitivity, impulsivity, and sensation seeking) are hypothesized to differentially relate to specific patterns of substance use. The current series of studies is a preliminary exploration of the psychometric properties of the SURPS in two populations (undergraduate and high school students). In Study 1, an analysis of the internal structure of two versions of the SURPS shows that the abbreviated version best reflects the 4-factor structure. Concurrent, discriminant, and incremental validity of the SURPS is supported by convergent/divergent relationships between the SURPS subscales and other theoretically relevant personality and drug use criterion measures. In Study 2, the factorial structure of the SURPS is confirmed and evidence is provided for its test-retest reliability and validity with respect to measuring personality vulnerability to reinforcement-specific substance use patterns. In Study 3, the SURPS was administered in a more youthful population to test its sensitivity in identifying younger problematic drinkers. The results from the current series of studies demonstrate support for the reliability and construct validity of the SURPS, and suggest that four personality dimensions may be linked to substance-related behavior through different reinforcement processes. This brief assessment tool may have important implications for clinicians and future research. PMID:19683400

  9. Safety of oral glutamine in the abbreviation of preoperative fasting: a double-blind, controlled, randomized clinical trial Seguridad de la glutamina oral en la abreviación del ayuno preoperatorio: un ensayo clínico doble ciego, controlado, aleatorizado

    D. Borges Dock-Nascimento; J. E. D Aguilar-Nascimento; C. Caporossi; M. Sepulveda Magalhães Faria; R. Bragagnolo; F. S. Caporossi; D. Linetzky Waitzberg

    2011-01-01

    Introduction: No study so far has tested a beverage containing glutamine 2 h before anesthesia in patients undergoing surgery. Objectives: The aim of the study was to investigate: 1) the safety of the abbreviation of preoperative fasting to 2 h with a carbohydrate-L-glutamine-rich drink; and 2) the residual gastric volume (RGV) measured after the induction of anesthesia for laparoscopic cholecystectomies. Methods: Randomized controlled trial with 56 women (42 (17-65) years-old) submitted to e...

  10. Scaling Solution in the Large Population Limit of the General Asymmetric Stochastic Luria-Delbrück Evolution Process

    Kessler, David A.; Levine, Herbert

    2015-02-01

    One of the most popular models for quantitatively understanding the emergence of drug resistance both in bacterial colonies and in malignant tumors was introduced long ago by Luria and Delbrück. Here, individual resistant mutants emerge randomly during the birth events of an exponentially growing sensitive population. A most interesting limit of this process occurs when the population size is large and mutation rates are low, but not necessarily small compared to . Here we provide a scaling solution valid in this limit, making contact with the theory of Lévy α-stable distributions, in particular one discussed long ago by Landau. One consequence of this association is that moments of the distribution are highly misleading as far as characterizing typical behavior. A key insight that enables our solution is that working in the fixed population size ensemble is not the same as working in a fixed time ensemble. Some of our results have been presented previously in abbreviated form [12].
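
    The fixed-time ensemble discussed here is easy to probe by direct simulation, and a small Monte Carlo run makes the heavy tail behind the "misleading moments" visible. Below is a minimal discrete-generation sketch; the parameter values and function names are illustrative, not taken from the paper:

```python
import random
import statistics

def mutants_after_growth(generations, mu, rng):
    """One Luria-Delbruck culture in discrete generations: every cell divides
    each generation, each daughter of a sensitive cell mutates with probability
    mu, and mutants breed true. Returns the final number of resistant mutants."""
    sensitive, mutants = 1, 0
    for _ in range(generations):
        daughters = 2 * sensitive
        new_mutants = sum(1 for _ in range(daughters) if rng.random() < mu)
        sensitive = daughters - new_mutants
        mutants = 2 * mutants + new_mutants
    return mutants

rng = random.Random(0)  # fixed seed for reproducibility
counts = sorted(mutants_after_growth(12, 3e-4, rng) for _ in range(300))
# Rare early mutations produce "jackpot" cultures, so the mean sits well
# above the median and the largest count dwarfs typical ones.
print(statistics.mean(counts), statistics.median(counts), counts[-1])
```

    Increasing `generations` pushes the empirical mutant-count distribution toward the heavy-tailed limiting law analyzed in the paper, which is why the mean is a poor summary of typical behavior.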

  11. Scales in space

    Veen, van der Anne; Otter, Henriëtte S.

    2002-01-01

    Economists have devoted more attention to the scale of time than to the scale of space. What has been done in the field of space is often general and abstract, not connected to an explicit observation set in time and space. Moreover, time scales and spatial scales are not tied, making the choice for a macro, meso or microeconomic theory a rather arbitrary process.

  12. Scaling of Metabolic Scaling within Physical Limits

    Douglas S. Glazier

    2014-10-01

    Both the slope and elevation of scaling relationships between log metabolic rate and log body size vary taxonomically and in relation to physiological or developmental state, ecological lifestyle and environmental conditions. Here I discuss how the recently proposed metabolic-level boundaries hypothesis (MLBH) provides a useful conceptual framework for explaining and predicting much, but not all, of this variation. This hypothesis is based on three major assumptions: (1) various processes related to body volume and surface area exert state-dependent effects on the scaling slope for metabolic rate in relation to body mass; (2) the elevation and slope of metabolic scaling relationships are linked; and (3) both intrinsic (anatomical, biochemical and physiological) and extrinsic (ecological) factors can affect metabolic scaling. According to the MLBH, the diversity of metabolic scaling relationships occurs within physical boundary limits related to body volume and surface area. Within these limits, specific metabolic scaling slopes can be predicted from the metabolic level (or scaling elevation) of a species or group of species. In essence, metabolic scaling itself scales with metabolic level, which is in turn contingent on various intrinsic and extrinsic conditions operating in physiological or evolutionary time. The MLBH represents a “meta-mechanism” or collection of multiple, specific mechanisms that have contingent, state-dependent effects. As such, the MLBH is Darwinian in approach (the theory of natural selection is also meta-mechanistic), in contrast to currently influential metabolic scaling theory that is Newtonian in approach (i.e., based on unitary deterministic laws). Furthermore, the MLBH can be viewed as part of a more general theory that includes other mechanisms that may also affect metabolic scaling.
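
    Scaling slopes and elevations of the kind discussed here are conventionally estimated by ordinary least squares on log-log axes, b being the exponent in R = a·M^b. A self-contained sketch on synthetic data follows; the 3/4 exponent and noise level are illustrative choices, not data from the article:

```python
import math
import random

def loglog_slope(masses, rates):
    """Ordinary least-squares slope of log(rate) on log(mass), i.e. the
    scaling exponent b in rate = a * mass**b."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

rng = random.Random(1)
masses = [10 ** rng.uniform(0, 6) for _ in range(200)]  # six orders of magnitude
# Synthetic metabolic rates generated with exponent b = 0.75 plus noise.
rates = [0.5 * m ** 0.75 * math.exp(rng.gauss(0, 0.1)) for m in masses]
b = loglog_slope(masses, rates)
print(round(b, 3))  # recovers a value close to the generating exponent 0.75
```

    In the MLBH framing, the interesting quantity is how this fitted slope b shifts with the elevation (metabolic level) of the group being fitted, rather than whether it matches a single universal value.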

  13. Atomic Scale Plasmonic Switch

    Emboras, A.; Niegemann, J.; Ma, P; Haffner, C; Pedersen, A.; Luisier, M.; Hafner, C; Schimmel, T.; Leuthold, J.

    2016-01-01

    The atom sets an ultimate scaling limit to Moore’s law in the electronics industry. While electronics research already explores atomic-scale devices, photonics research still deals with devices at the micrometer scale. Here we demonstrate that photonic scaling, similar to electronics, is only limited by the atom. More precisely, we introduce an electrically controlled plasmonic switch operating at the atomic scale. The switch allows for fast and reproducible switching by means of the relocat...

  14. Occupational Cohort Time Scales

    Deubner, David C.; Roth, H. Daniel

    2015-01-01

    Purpose: This study explores how highly correlated time variables (occupational cohort time scales) contribute to confounding and ambiguity of interpretation. Methods: Occupational cohort time scales were identified and organized through simple equations of three time scales (relational triads) and the connections between these triads (time scale web). The behavior of the time scales was examined when constraints were imposed on variable ranges and interrelationships. Results: Constraints on ...

  15. On Quantitative Rorschach Scales.

    Haggard, Ernest A.

    1978-01-01

    Two types of quantitative Rorschach scales are discussed: first, those based on the response categories of content, location, and the determinants, and second, global scales based on the subject's responses to all ten stimulus cards. (Author/JKS)

  16. Atlantic Salmon Scale Measurements

    National Oceanic and Atmospheric Administration, Department of Commerce — Scales are collected annually from smolt trapping operations in Maine as wellas other sampling opportunities (e.g. marine surveys, fishery sampling etc.). Scale...

  17. Cross-scale morphology

    Allen, Craig R.; Holling, Crawford S.; Garmestani, Ahjond S.

    2013-01-01

    The scaling of physical, biological, ecological and social phenomena is a major focus of efforts to develop simple representations of complex systems. Much of the attention has been on discovering universal scaling laws that emerge from simple physical and geometric processes. However, there are regular patterns of departures both from those scaling laws and from continuous distributions of attributes of systems. Those departures often demonstrate the development of self-organized interactions between living systems and physical processes over narrower ranges of scale.

  18. Civilian PTSD Scales

    Shapinsky, Alicia C.; Rapport, Lisa J.; Henderson, Melinda J.; Axelrod, Bradley N.

    2005-01-01

    Strong associations between civilian posttraumatic stress disorder (PTSD) scales and measures of general psychological distress suggest that the scales are nonspecific to PTSD. Three common PTSD scales were administered to 122 undergraduates who had experienced an emotionally salient, nontraumatic event: a college examination. Results indicated…

  19. Dimensional approach to symptom factors of major depressive disorder in Koreans, using the Brief Psychiatric Rating Scale: the Clinical Research Center for Depression of South Korea study.

    Park, Seon-Cheol; Jang, Eun Young; Kim, Daeho; Jun, Tae-Youn; Lee, Min-Soo; Kim, Jae-Min; Kim, Jung-Bum; Jo, Sun-Jin; Park, Yong Chon

    2015-01-01

    Although major depressive disorder (MDD) has a variety of symptoms beyond the affective dimensions, the factor structure and contents of comprehensive psychiatric symptoms of this disorder have rarely been explored using the 18-item Brief Psychiatric Rating Scale (BPRS). We aimed to identify the factor structure of the 18-item BPRS in Korean MDD patients. A total of 258 MDD patients were recruited from a multicenter sample of the Clinical Research Center for Depression of South Korea study. Psychometric scales were used to assess overall psychiatric symptoms (BPRS), depression (Hamilton Depression Rating Scale), anxiety (Hamilton Anxiety Rating Scale), global severity (Clinical Global Impression of Severity Scale), suicidal ideation (Scale for Suicide Ideation), functioning (Social and Occupational Functioning Assessment Scale), and quality of life (World Health Organization Quality of Life Assessment-abbreviated version). Common factor analysis with oblique rotation was used to yield factor structure. A four-factor structure was designed and interpreted by the symptom dimensions to reflect mood disturbance, positive symptoms/apathy, bipolarity, and thought distortion/mannerism. These individual factors were also significantly correlated with clinical variables. The findings of this study support the view that the BPRS may be a promising measuring tool for the initial assessment of MDD patients. In addition, the four-factor structure of the BPRS may be useful in understanding the mood and psychotic characteristics of these patients. PMID:25600920

  20. 40 CFR 94.3 - Abbreviations.

    2010-07-01

    ... subparts of this part and have the following meanings: AECD—Auxiliary emission control device. API—American... emission limit. ft—foot or feet. FTP—Federal Test Procedure. g—gram(s). g/kW-hr—Grams per kilowatt...

  1. 40 CFR 92.3 - Abbreviations.

    2010-07-01

    ...—American Petroleum Institute ASTM—American Society for Testing and Materials BHP—Brake horsepower BSCO...—Kelvin kg—kilogram(s) km—kilometer(s) kPa—kilopascal(s) lb—pound(s) LPG—Liquified Petroleum Gas...

  2. 40 CFR 86.078-3 - Abbreviations.

    2010-07-01

    ..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas... following meanings: accel.—acceleration. AECD—Auxiliary emission control device. API—American...

  3. 40 CFR 88.103-94 - Abbreviations.

    2010-07-01

    ... Hydrocarbon Equivalent NMOG—Non-Methane Organic Gas NOx—Nitrogen Oxides PM—Particulate Matter GVWR—Gross Vehicle Weight Rating LVW—Loaded Vehicle Weight TW—Test Weight TLEV—Transitional Low-Emission Vehicle LEV—Low-Emission Vehicle ULEV—Ultra Low-Emission Vehicle ZEV—Zero-Emission Vehicle...

  4. 40 CFR 300.4 - Abbreviations.

    2010-07-01

    ... COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION CONTINGENCY PLAN...—CERCLA Information System CRC—Community Relations Coordinator CRP—Community Relations Plan DRAT—District...—Remedial Investigation ROD—Record of Decision RPM—Remedial Project Manager RRC—Regional Response Center...

  5. 40 CFR 86.1503 - Abbreviations.

    2010-07-01

    ... Heavy-Duty Engines, New Methanol-Fueled Natural Gas-Fueled, and Liquefied Petroleum Gas-Fueled Diesel-Cycle Heavy-Duty Engines, New Otto-Cycle Light-Duty Trucks, and New Methanol-Fueled Natural Gas-Fueled, and Liquefied Petroleum Gas-Fueled Diesel-Cycle Light-Duty Trucks; Idle Test Procedures §...

  6. 7 CFR 1945.5 - Abbreviations.

    2010-01-01

    ... 103-354. (l) LFAC—Local Food and Agriculture Council. (m) NASS—State Statistical Office of the USDA.... (e) EM—Emergency. (f) EOH—USDA Emergency Operations Handbook. (g) FAC—Food and Agriculture Council... Administration. (p) SFAC—USDA State Food and Agriculture Council. (q) USDA—United States Department...

  7. 77 FR 7517 - Definitions and Abbreviations

    2012-02-13

    ...'' in the definition section of the regulation and clarifying the Agency's policy as it relates to... the use of the Internet and other information technologies, to provide increased opportunities for... is amended by adding a new definition of Interest, to read as follows: Sec. 4279.2 Definitions...

  8. 77 FR 7546 - Definitions and Abbreviations

    2012-02-13

    ... interest with prior Agency approval. By defining ``interest'' in the definition section of the regulation... Internet and other information technologies, to provide increased opportunities for citizen access to... new definition of Interest, to read as follows: Sec. 4279.2 Definitions and...

  9. Abbreviated Case Studies in Organizational Communication

    Wanguri, Deloris McGee

    2005-01-01

    The cases contained within organizational communication texts are generally two to three pages, often followed by questions. These case studies are certainly useful. They generally describe events in the present, provide some type of organizational context, include first-hand data, include a record of what people say and think, develop a…

  10. Biology Attitude Scale

    YEŞİLYURT, Selami; GÜL, Şeyda

    2009-01-01

    The aim of this study is to develop a scale determining secondary school students’ attitudes towards biology. To this end, a total of 92 scale items was first prepared by reviewing the relevant literature; 88 were five-point Likert-type items and 4 covered demographic variables. The scale was applied to a sample of 109 students randomly selected from two secondary schools in Erzurum. At the end of this application, the SPSS 12.0 statistical program was used to ...

  12. Optimal renormalization scales and commensurate scale relations

    Brodsky, S.J. [Stanford Linear Accelerator Center, Menlo Park, CA (United States); Lu, H.J. [Univ. of Arizona, Tucson, AZ (United States). Dept. of Physics

    1996-01-01

    Commensurate scale relations relate observables to observables and thus are independent of theoretical conventions, such as the choice of intermediate renormalization scheme. The physical quantities are related at commensurate scales which satisfy a transitivity rule which ensures that predictions are independent of the choice of an intermediate renormalization scheme. QCD can thus be tested in a new and precise way by checking that the observables track both in their relative normalization and in their commensurate scale dependence. For example, the radiative corrections to the Bjorken sum rule at a given momentum transfer Q can be predicted from measurements of the e+e- annihilation cross section at a corresponding commensurate energy scale √s ∝ Q, thus generalizing Crewther's relation to non-conformal QCD. The coefficients that appear in this perturbative expansion take the form of a simple geometric series and thus have no renormalon divergent behavior. The authors also discuss scale-fixed relations between the threshold corrections to the heavy quark production cross section in e+e- annihilation and the heavy quark coupling αV which is measurable in lattice gauge theory.

  13. Scaling of differential equations

    Langtangen, Hans Petter

    2016-01-01

    The book serves both as a reference for various scaled models with corresponding dimensionless numbers, and as a resource for learning the art of scaling. A special feature of the book is the emphasis on how to create software for scaled models, based on existing software for unscaled models. Scaling (or non-dimensionalization) is a mathematical technique that greatly simplifies the setting of input parameters in numerical simulations. Moreover, scaling enhances the understanding of how different physical processes interact in a differential equation model. Compared to the existing literature, where the topic of scaling is frequently encountered, but very often in only a brief and shallow setting, the present book gives much more thorough explanations of how to reason about finding the right scales. This process is highly problem dependent, and therefore the book features a lot of worked examples, from very simple ODEs to systems of PDEs, especially from fluid mechanics. The text is easily accessible and exam...
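
    The technique the book systematizes can be illustrated in a few lines. As a standard worked example (notation chosen here for illustration), scaling the logistic growth equation collapses its two physical parameters into a single dimensionless group:

```latex
% Dimensional model: growth rate a, carrying capacity M, initial value I
\frac{du}{dt} = a\,u\Bigl(1 - \frac{u}{M}\Bigr), \qquad u(0) = I
% Choose scales u_c = M and t_c = 1/a, i.e. \bar{u} = u/M, \bar{t} = a t:
\frac{d\bar{u}}{d\bar{t}} = \bar{u}\,(1 - \bar{u}), \qquad \bar{u}(0) = \alpha \equiv \frac{I}{M}
% The parameters (a, M) survive only through the single group \alpha.
```

    A simulation of the scaled model therefore needs one input (α) instead of three (a, M, I), which is the practical payoff of non-dimensionalization emphasized in the book.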

  14. Scaling and Scale Breaking in Polyelectrolyte

    Peterson, C; Söderberg, B; Peterson, Carsten; Sommelius, Ola

    1996-01-01

    We consider the thermodynamics of a uniformly charged polyelectrolyte with harmonic bonds. For such a system there is at high temperatures an approximate scaling of global properties like the end-to-end distance and the interaction energy with the chain-length divided by the temperature. This scaling is broken at low temperatures by the ultraviolet divergence of the Coulomb potential. By introducing a renormalization of the strength of the nearest-neighbour interaction the scaling is restored, making possible an efficient blocking method for emulating very large polyelectrolytes using small systems. The high temperature behaviour is well reproduced by the analytical high-$T$ expansions even for fairly low temperatures and system sizes. In addition, results from low-$T$ expansions, where the coefficients have been computed numerically, are presented. These results approximate well the corresponding Monte Carlo results at realistic temperatures. A corresponding analysis of screened chains is performed. The sit...

  15. Economic scale of utilization of radiation (2): agriculture. Comparison between Japan and the U.S.A

    The economic scale of the application of radiation in the field of agriculture in Japan was estimated from public documents to be about 964M$ (million dollars) in 1997. In food irradiation, the 15,000 t of potatoes irradiated per year in Hokkaido were estimated to be worth 16M$. The Sterile Insect Technique (SIT), used to combat losses due to the melon fly mainly in the Okinawa region, produced as much as 70M$ in benefits. Production of rice from varieties developed by mutation breeding was about 3% of overall production in Japan, an economic scale of 774M$. Radioisotope (RI) utilization in laboratory work, environmental analysis and chronology accounted for as much as 24M$. The relative shares of radiation processing (136M$), mutation breeding (804M$) and RI utilization (24M$) were 14%, 83% and 3%, respectively. The economic scale surveys of food irradiation and mutation breeding were extended to the United States of America (hereinafter abbreviated as U.S.A. or U.S.) for a direct comparison with the situation in Japan. At the maximum estimate, the totals were 3.2b$ (billion dollars) for food irradiation and 11.2b$ for mutation breeding. The economic scale for the agricultural products within our scope was 14.5b$ for the U.S. and about 0.8b$ for Japan, implying that the former is larger by a factor of about 18. (author)

  16. Scales in space

    Veen, van der, J.T.; Otter, Henriëtte S.

    2002-01-01

    Economists have devoted more attention to the scale of time than to the scale of space. What has been done in the field of space is often general and abstract, not connected to an explicit observation set in time and space. Moreover, time scales and spatial scales are not tied, making the choice for a macro, meso or microeconomic theory a rather arbitrary process. We devote attention to the explanation of the phenomenon of emerging spatial structures. We will discuss the standard economic the...

  17. Aggregation of Scale Efficiency

    Valentin Zelenyuk

    2012-01-01

    In this article we extend the aggregation theory in efficiency and productivity analysis by deriving solutions to the problem of aggregation of individual scale efficiency measures, primal and dual, into aggregate primal and dual scale efficiency measures of a group. The new aggregation result is coherent with aggregation framework and solutions for the other related efficiency measures that already exist in the literature.

  18. Genome-Scale Models

    Bergdahl, Basti; Sonnenschein, Nikolaus; Machado, Daniel;

    2016-01-01

    An introduction to genome-scale models, how to build and use them, will be given in this chapter. Genome-scale models have become an important part of systems biology and metabolic engineering, and are increasingly used in research, in both academia and industry, both for modeling chemical...

  19. A Scale for Sexism

    Pingree, Suzanne; And Others

    1976-01-01

    Defines the consciousness scale as a measurement technique which divides media portrayals of women into five conceptually-derived categories that can be placed in ordinal relationships with one another. Suggests that such a scale may be useful as a tool for analyzing mass media content. (MH)

  20. Scaling up as Catachresis

    Tobin, Joseph

    2005-01-01

    The metaphor of scaling up is the wrong one to use for describing and prescribing educational change. Many of the strategies being employed to achieve scaling up are counter-productive: they conceive of practitioners as delivery agents or consumers, rather than as co-constructors of change. An approach to educational innovation based on the…

  1. The Fatherhood Scale

    Dick, Gary L.

    2004-01-01

    This article reports on the initial validation of the Fatherhood Scale (FS), a 64-item instrument designed to measure the type of relationship a male adult had with his father while growing up. The FS was validated using a convenience sample of 311 males. The assessment packet contained a demographic form, the Conflict Tactics Scale (2),…

  2. Maximum likely scale estimation

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and...

  3. The RRR Scale.

    Christensen, K. Eleanor

    The School Readiness Rating Scale was developed to help teachers organize their suggestions to parents about how parents can help their children prepare for beginning reading experiences. The scale surveys five important aspects of readiness for beginning reading: visual perception, visual motor perception, auditory perception and discrimination,…

  4. The inflationary energy scale

    Liddle, Andrew R.

    1994-01-01

    The energy scale of inflation is of much interest, as it suggests the scale of grand unified physics, governs whether cosmological events such as topological defect formation can occur after inflation, and also determines the amplitude of gravitational waves which may be detectable using interferometers. The COBE results are used to limit the energy scale of inflation at the time large scale perturbations were imprinted. An exact dynamical treatment based on the Hamilton-Jacobi equations is then used to translate this into limits on the energy scale at the end of inflation. General constraints are given, and then tighter constraints based on physically motivated assumptions regarding the allowed forms of density perturbation and gravitational wave spectra. These are also compared with the values of familiar models.

  5. The career distress scale

    Creed, Peter; Hood, Michelle; Praskova, Anna;

    2016-01-01

    Career distress is a common and painful outcome of many negative career experiences, such as career indecision, career compromise, and discovering career barriers. However, there are very few scales devised to assess career distress, and the two existing scales identified have psychometric weaknesses. The absence of a practical, validated scale to assess this construct restricts research related to career distress and limits practitioners who need to assess and treat it. Using a sample of 226 young adults (mean age 20.5 years), we employed item response theory to assess 12 existing career distress items, which we combined into a scale labelled the Career Distress Scale. The scale demonstrated excellent psychometric properties, meaning that both researchers and practitioners can use it with confidence, although continued validation is required, including testing its relationship to other nomological net variables...

  6. Universities Scale Like Cities

    van Raan, Anthony F J

    2012-01-01

    Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit a nonlinear, particularly a power law scaling with population size. These nonlinear effects are common to all cities, with similar power law exponents. These findings mean that the larger the city, the more disproportionally they are places of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power law scaling. In this paper we demonstrate that universities show a similar behavior as cities in the distribution of the gross university income in terms of total number of citations over size in terms of total number of publications. Moreover, the power law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university, and its ...

  7. Use of NON-PARAMETRIC Item Response Theory to develop a shortened version of the Positive and Negative Syndrome Scale (PANSS)

    Khan Anzalee

    2011-11-01

    Full Text Available Abstract Background Nonparametric item response theory (IRT) was used to examine (a) the performance of the 30 Positive and Negative Syndrome Scale (PANSS) items and their options (levels of severity), (b) the effectiveness of various subscales to discriminate among differences in symptom severity, and (c) the development of an abbreviated PANSS (Mini-PANSS) based on IRT and a method to link scores to the original PANSS. Methods Baseline PANSS scores from 7,187 patients with schizophrenia or schizoaffective disorder who were enrolled between 1995 and 2005 in psychopharmacology trials were obtained. Option characteristic curves (OCCs) and item characteristic curves (ICCs) were constructed to examine the probability of rating each of seven options within each of 30 PANSS items as a function of subscale severity, and summed-score linking was applied to items selected for the Mini-PANSS. Results The majority of items forming the Positive and Negative subscales (i.e. 19 items) performed very well and discriminated better along symptom severity compared to the General Psychopathology subscale. Six of the seven Positive Symptom items, six of the seven Negative Symptom items, and seven of the 16 General Psychopathology items were retained for inclusion in the Mini-PANSS. Summed-score linking and linear interpolation were able to produce a translation table for comparing total subscale scores of the Mini-PANSS to total subscale scores on the original PANSS. Results show scores on the subscales of the Mini-PANSS can be linked to scores on the original PANSS subscales, with very little bias. Conclusions The study demonstrated the utility of non-parametric IRT in examining the item properties of the PANSS and to allow selection of items for an abbreviated PANSS scale. The comparisons between the 30-item PANSS and the Mini-PANSS revealed that the shorter version is comparable to the 30-item PANSS, but when applying IRT, the Mini-PANSS is also a good indicator of
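    The item analysis sketched in this abstract can be illustrated with a simple parametric stand-in. The study itself is nonparametric, but a two-parameter logistic (2PL) item characteristic curve shows the same core idea: the probability of a severe rating rises with latent symptom severity, and faster for more discriminating items. The parameter values below are illustrative only, not taken from the study.

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability of a severe rating
    at latent severity theta, with discrimination a and location b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A highly discriminating item (a=2.0) separates severity levels more
# sharply than a weakly discriminating one (a=0.5); both centred at b=0,
# where the probability is exactly 0.5.
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(icc_2pl(theta, 2.0, 0.0), 3), round(icc_2pl(theta, 0.5, 0.0), 3))
```

    Items whose curves rise steeply over the relevant severity range are the ones worth retaining in a shortened scale, which is the intuition behind the item selection described above.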

  8. Parallel Computing in SCALE

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  9. Allometric Scaling in Biology

    Banavar, Jayanth

    2009-03-01

    The unity of life is expressed not only in the universal basis of inheritance and energetics at the molecular level, but also in the pervasive scaling of traits with body size at the whole-organism level. More than 75 years ago, Kleiber and Brody and Proctor independently showed that the metabolic rates, B, of mammals and birds scale as the three-quarter power of their mass, M. Subsequent studies showed that most biological rates and times scale as M^-1/4 and M^1/4 respectively, and that these so-called quarter-power scaling relations hold for a variety of organisms, from unicellular prokaryotes and eukaryotes to trees and mammals. The wide applicability of Kleiber's law, across the 22 orders of magnitude of body mass from minute bacteria to giant whales and sequoias, raises the hope that there is some simple general explanation that underlies the incredible diversity of form and function. We will present a general theoretical framework for understanding the relationship between metabolic rate, B, and body mass, M. We show how the pervasive quarter-power biological scaling relations arise naturally from optimal directed resource supply systems. This framework robustly predicts that: 1) whole organism power and resource supply rate, B, scale as M^3/4; 2) most other rates, such as heart rate and maximal population growth rate, scale as M^-1/4; 3) most biological times, such as blood circulation time and lifespan, scale as M^1/4; and 4) the average velocity of flow through the network, v, such as the speed of blood and oxygen delivery, scales as M^1/12. Our framework is valid even when there is no underlying network. Our theory is applicable to unicellular organisms as well as to large animals and plants. This work was carried out in collaboration with Amos Maritan along with Jim Brown, John Damuth, Melanie Moses, Andrea Rinaldo, and Geoff West.
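    The quarter-power relations above are straight lines on log-log axes, so a scaling exponent can be recovered by ordinary least squares on the logarithms. A minimal sketch on noise-free synthetic data (the prefactor 3.4 and the mass range are arbitrary illustrative choices):

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = c * x**b on log-log axes; returns (b, c)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(lx, ly)) / \
        sum((xi - mx) ** 2 for xi in lx)
    c = math.exp(my - b * mx)
    return b, c

# Synthetic "metabolic rate" data obeying Kleiber's law, B = 3.4 * M**0.75.
masses = [10 ** k for k in range(-3, 8)]   # 11 orders of magnitude of body mass
rates = [3.4 * m ** 0.75 for m in masses]
exponent, prefactor = fit_power_law(masses, rates)
print(round(exponent, 3))  # prints 0.75 on this noise-free data
```

    On real data the interesting question is whether the fitted exponent is closer to 3/4 than to the surface-area value of 2/3, which is what the body-size debates summarized above turn on.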

  10. Elucidating Sweet Corrosion Scales

    Joshi, Gaurav Ravindra

    2015-01-01

    The objective of this thesis is to improve understanding of the development of corrosion products (scales) that form on the inner walls of carbon steel pipelines in CO2-rich (sweet) oilfield environments. If well adherent to the carbon steel surface, such scales can significantly reduce the metal’s rate of corrosion. Typically, the open literature labels sweet corrosion scale as ferrous (II) carbonate (FeCO3) or siderite, although this may not always be the case. For example, Fe2(OH)2CO3 (chu...

  11. Small scale optics

    Yupapin, Preecha

    2013-01-01

    The behavior of light in small scale optics or nano/micro optical devices has shown promising results, which can be used for basic and applied research, especially in nanoelectronics. Small Scale Optics presents the use of optical nonlinear behaviors for spins, antennae, and whispering gallery modes within micro/nano devices and circuits, which can be used in many applications. This book proposes a new design for a small scale optical device: a microring resonator device. Most chapters are based on the proposed device, which uses a configuration known as a PANDA ring resonator. Analytical and nu

  12. Allometric Scaling of Countries

    Zhang, Jiang

    2010-01-01

    As huge complex systems consisting of geographic regions, natural resources, people and economic entities, countries follow the allometric scaling law which is ubiquitous in ecological and urban systems. We systematically investigated the allometric scaling relationships between a large number of macroscopic properties and the geographic (area), demographic (population) and economic (GDP, gross domestic product) sizes of countries respectively. We found that most of the economic, trade, energy consumption, and communication related properties have significant super-linear (the exponent is larger than 1) or nearly linear allometric scaling relations with GDP. Meanwhile, the geographic (arable area, natural resources, etc.), demographic (labor force, military age population, etc.) and transportation-related properties (road length, airports) have significant and sub-linear (the exponent is smaller than 1) allometric scaling relations with area. Several differences of power law relations with respect to population betwee...

  13. SMALL SCALE MORPHODYNAMICAL MODELLING

    D. Ditschke; O. Gothel; H. Weilbeer

    2001-01-01

    Long term morphological simulations using complete coupled models lead to very time consuming computations. Latteux (1995) presented modelling techniques developed for tidal current situations in order to reduce the computational effort. In this paper the applicability of such methods to small scale problems is investigated. It is pointed out that these methods can be transferred to small scale problems using the periodicity of the vortex shedding process.

  14. Hierarchical Dirichlet Scaling Process

    Kim, Dongwoo; Oh, Alice

    2014-01-01

    We present the hierarchical Dirichlet scaling process (HDSP), a Bayesian nonparametric mixed membership model. The HDSP generalizes the hierarchical Dirichlet process (HDP) to model the correlation structure between metadata in the corpus and mixture components. We construct the HDSP based on the normalized gamma representation of the Dirichlet process, and this construction allows incorporating a scaling function that controls the membership probabilities of the mixture components. ...

  15. Preliminary application of the abbreviated C-SPSI to nursing students in Shanghai

    王伟; 程云; 袁浩斌

    2010-01-01

    Objective To explore the applicability of the abbreviated Chinese version of the Social Problem-Solving Inventory (C-SPSI) for nursing students in Shanghai. Methods The abbreviated C-SPSI was revised and 603 nursing students in Shanghai were surveyed with it; reliability and construct validity were evaluated by inter-item consistency analysis, test-retest reliability and principal factor analysis. Results The total CVI was 0.968. The construct validity was confirmed by factor analysis, with 64.917% of the variance explained by four factors. The total Cronbach's α of the C-SPSI was 0.897, and the total test-retest reliability coefficient was 0.781. Conclusions The abbreviated C-SPSI is an instrument with good reliability and validity and can be used in assessing nursing students' social problem-solving abilities and deficits.
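    The test-retest coefficient quoted in this record is, in the usual approach, a Pearson correlation between total scores from two administrations of the scale. A minimal sketch with made-up scores for six hypothetical respondents (the numbers are illustrative, not from the study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation; between two administrations of the same
    scale this is the conventional test-retest reliability coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical total scores at time 1 and two weeks later.
t1 = [88, 72, 95, 60, 81, 77]
t2 = [85, 75, 92, 64, 80, 74]
print(round(pearson_r(t1, t2), 3))
```

    Coefficients in the high 0.7s, like the 0.781 reported above, indicate that respondents keep roughly the same rank order across administrations.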

  16. Atomic Scale Plasmonic Switch.

    Emboras, Alexandros; Niegemann, Jens; Ma, Ping; Haffner, Christian; Pedersen, Andreas; Luisier, Mathieu; Hafner, Christian; Schimmel, Thomas; Leuthold, Juerg

    2016-01-13

    The atom sets an ultimate scaling limit to Moore's law in the electronics industry. While electronics research already explores atomic-scale devices, photonics research still deals with devices at the micrometer scale. Here we demonstrate that photonic scaling, similar to electronics, is only limited by the atom. More precisely, we introduce an electrically controlled plasmonic switch operating at the atomic scale. The switch allows for fast and reproducible switching by means of the relocation of an individual or, at most, a few atoms in a plasmonic cavity. Depending on the location of the atom, either of two distinct plasmonic cavity resonance states is supported. Experimental results show reversible digital optical switching with an extinction ratio of 9.2 dB and operation at room temperature up to MHz with femtojoule (fJ) power consumption for a single switch operation. This demonstration of an integrated quantum device that allows control of photons at the atomic level opens intriguing perspectives for a fully integrated and highly scalable chip platform, a platform where optics, electronics, and memory may be controlled at the single-atom level. PMID:26670551

  17. Universities scale like cities.

    Anthony F J van Raan

    Full Text Available Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit a nonlinear, particularly a power law scaling with population size. These nonlinear effects are common to all cities, with similar power law exponents. These findings mean that the larger the city, the more disproportionally they are places of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power law scaling. In this paper we demonstrate that universities show a similar behavior as cities in the distribution of the 'gross university income' in terms of total number of citations over 'size' in terms of total number of publications. Moreover, the power law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university, and its quality in terms of field-normalized citation impact. By studying both the set of the 500 largest universities worldwide and a specific subset of these 500 universities--the top-100 European universities--we are also able to distinguish between properties of universities with as well as without selection of one specific local property, the quality of a university in terms of its average field-normalized citation impact. It also reveals an interesting observation concerning the working of a crucial property in networked systems, preferential attachment.

  18. Urban Scaling in Europe

    Bettencourt, Luis M A

    2015-01-01

    Over the last decades, in disciplines as diverse as economics, geography, and complex systems, a perspective has arisen proposing that many properties of cities are quantitatively predictable due to agglomeration or scaling effects. Using new harmonized definitions for functional urban areas, we examine to what extent these ideas apply to European cities. We show that while most large urban systems in Western Europe (France, Germany, Italy, Spain, UK) approximately agree with theoretical expectations, the small number of cities in each nation and their natural variability preclude drawing strong conclusions. We demonstrate how this problem can be overcome so that cities from different urban systems can be pooled together to construct larger datasets. This leads to a simple statistical procedure to identify urban scaling relations, which then clearly emerge as a property of European cities. We compare the predictions of urban scaling to Zipf's law for the size distribution of cities and show that while the for...

  19. No-scale inflation

    Ellis, John; Garcia, Marcos A. G.; Nanopoulos, Dimitri V.; Olive, Keith A.

    2016-05-01

    Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation naturally yields predictions similar to those of the Starobinsky model based on R + R^2 gravity, with a tilted spectrum of scalar perturbations, n_s ~ 0.96, and small values of the tensor-to-scalar perturbation ratio r < 0.1, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.

  20. Systematics of geometric scaling

    Using all available data on the deep-inelastic cross-sections at HERA at x ≤ 10^-2, we look for geometric scaling of the form σ_γ*p(τ), where the scaling variable τ behaves alternatively like log Q^2 - λY, as in the original definition, or log Q^2 - λ√Y, which is suggested by the asymptotic properties of the Balitsky-Kovchegov (BK) equation with running QCD coupling constant. A 'Quality Factor' (QF) is defined, quantifying the phenomenological validity of the scaling and the uncertainty on the intercept λ. Both choices have a good QF, showing that the second choice is as valid as the first one, predicted for fixed coupling constant. A comparison between the QCD asymptotic predictions and data is made, and the QF analysis shows that the agreement can be reached, provided one goes beyond leading logarithmic accuracy for the BK equation

  1. Scale of Critical Thinking

    Semerci, Nuriye; Fırat Üniversitesi Teknik Eğitim Fakültesi Eğitim Bilimleri Bölümü

    2000-01-01

    The main purpose of this study is to develop a scale for critical thinking. The Scale of Critical Thinking was applied to 200 students. In this scale, there are 55 items in total, four of which are negative and 51 of which are positive. The KMO (Kaiser-Meyer-Olkin) value is 0.75, the Bartlett test value is 7145.41, and the Cronbach Alpha value is 0.90.
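    Internal-consistency figures like the Cronbach Alpha of 0.90 reported above can be computed directly from an item-by-respondent score matrix as α = k/(k-1) · (1 - Σ s²_item / s²_total). A minimal sketch with made-up responses (the data are illustrative, not from the study):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of items, each a list of scores
    for the same respondents in the same order."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three hypothetical items answered by five respondents (1-5 ratings);
# highly consistent responses drive alpha toward 1.
responses = [
    [1, 2, 3, 4, 5],
    [1, 2, 3, 4, 5],
    [2, 2, 3, 4, 5],
]
print(round(cronbach_alpha(responses), 2))
```

    Values around 0.9, as in the scale above, are conventionally read as high internal consistency across the items.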

  2. X and Y scaling

    Although much of the intuition for interpreting the high energy data as scattering from structureless constituents came from nuclear physics (and to a lesser extent atomic physics), virtually no data existed for nuclear targets in the non-relativistic regime until relatively recently. It is therefore not so surprising that, in spite of the fact that the basic nuclear physics has been well understood for a very long time, the corresponding non-relativistic scaling law was not written down until after the relativistic one, relevant to particle physics, had been explored. Of course, to the extent that these scaling laws simply reflect quasi-elastic scattering of the probe from the constituents, they contain little new physics once the nature of the constituents is known and understood. On the other hand, deviations from scaling represent corrections to the impulse approximation and can reflect important dynamical and coherent features of the system. Furthermore, as will be discussed in detail here, the scaling curve itself represents the single particle momentum distribution of constituents inside the target. It is therefore prudent to plot the data in terms of a suitable scaling variable, since this immediately focuses attention on the dominant physics. Extraneous physics, such as Rutherford scattering in the case of electrons, or magnetic scattering in the case of thermal neutrons, is factored out, and the use of a scaling variable (such as y) automatically takes into account the fact that the target is a bound state of well-defined constituents. In this talk I shall concentrate almost entirely on non-relativistic systems. Although the formalism applies equally well to both electron scattering from nuclei and thermal neutron scattering from liquids, I shall, because of my background, usually be thinking of the former. On the other hand, I shall completely ignore spin considerations, so, ironically, the results actually apply more to the latter case!

  3. Angular Scaling In Jets

    Jankowiak, Martin; Larkoski, Andrew J.; /SLAC

    2012-02-17

    We introduce a jet shape observable defined for an ensemble of jets in terms of two-particle angular correlations and a resolution parameter R. This quantity is infrared and collinear safe and can be interpreted as a scaling exponent for the angular distribution of mass inside the jet. For small R it is close to the value 2 as a consequence of the approximately scale invariant QCD dynamics. For large R it is sensitive to non-perturbative effects. We describe the use of this correlation function for tests of QCD, for studying underlying event and pile-up effects, and for tuning Monte Carlo event generators.

  4. Rolling at small scales

    Nielsen, Kim L.; Niordson, Christian F.; Hutchinson, John W.

    2016-01-01

    The rolling process is widely used in the metal forming industry and has been so for many years. However, the process has attracted renewed interest as it recently has been adapted to very small scales where conventional plasticity theory cannot accurately predict the material response. It is well...... plasticity. Metals are known to be stronger when large strain gradients appear over a few microns; hence, the forces involved in the rolling process are expected to increase relatively at these smaller scales. In the present numerical analysis, a steady-state modeling technique that enables convergence...

  5. On the Application of Abbreviation of the Text to College English Class Teaching: A Case Study on "The Gratitude We Need"

    李静

    2012-01-01

    This paper analyzes the strengths and weaknesses of the product, process and genre approaches to writing, and finds that abbreviation of the text is an effective training method for the teaching of writing. Besides, it can combine the teaching of writing with the teaching of reading, so as to enhance the students' input and output through noticing of the text.

  6. Developing a Biology Attitude Scale

    Melih KOÇAKOĞLU; Türkmen, Lütfullah

    2010-01-01

    The purpose of this study was to develop a Likert-style, reliable and valid biology attitude scale. Before developing the biology attitude scale, the current scales had been carefully scrutinized, the views of experts taken, and the first draft of the scale prepared by choosing statements. The validity and reliability studies of the scale were carried out by applying the first draft to 168 students as a pilot study. The content validity of the scale was confirmed by the panel of judges. The factor ...

  7. A Framework of Fingerprint Scaling

    Chunxiao Ren; Jianmin Guo; Dong Qiu; Guolei Chang; Yuxiao Wu

    2013-01-01

    Fingerprint scaling refers to the adjustment of fingerprint images to solve the problem of sensor interoperability. In this paper, we present an innovative framework of fingerprint scaling with minimum modification to existing systems. For the purpose of facilitating system configuration, we have developed a series of scaling methods, including scaling factors, graph- and template-based scaling methods. In graph-based scaling methods, we have explored the application of various technologies i...

  8. Scaling violation in QCD

    The effects of scaling violation in QCD are discussed in the perturbative scheme, based on the factorization of mass singularities in the light-like gauge. Some recent applications including the next-to-leading corrections are presented (large p_T scattering, numerical analysis of the leptoproduction data). A proposal is made for extending the method to the higher twist sector. (author)

  9. Is this scaling nonlinear?

    Leitao, J C; Gerlach, M; Altmann, E G

    2016-01-01

    One of the most celebrated findings in complex systems in the last decade is that different indexes y (e.g., patents) scale nonlinearly with the population x of the cities in which they appear, i.e., y ~ x^β with β ≠ 1.

  10. LARGE SCALE GLAZED

    Bache, Anja Margrethe

    WORLD FAMOUS ARCHITECTS CHALLENGE TODAY THE EXPOSURE OF CONCRETE IN THEIR ARCHITECTURE. IT IS MY HOPE TO BE ABLE TO COMPLEMENT THESE. I TRY TO DEVELOP NEW AESTHETIC POTENTIALS FOR THE CONCRETE AND CERAMICS, IN LARGE SCALES THAT HAS NOT BEEN SEEN BEFORE IN THE CERAMIC AREA. IT IS EXPECTED TO RESUL......: COLOR, LIGHT AND TEXTURE, GLAZED AND UNGLAZED, BUILDING FACADES...

  11. Bracken Basic Concept Scale.

    Naglieri, Jack A.; Bardos, Achilles N.

    1990-01-01

    The Bracken Basic Concept Scale, for use with preschool and primary-aged children, determines a child's school readiness and knowledge of English-language verbal concepts. The instrument measures 258 basic concepts in such categories as comparisons, time, quantity, and letter identification. This paper describes test administration, scoring and…

  12. xi-scaling

    A class of purely kinematical corrections to xi-scaling is exposed. These corrections are inevitably present in any realistic hadron model with spin and gauge invariance and lead to phenomenologically important M_hadron^2/Q^2 corrections to Nachtmann moments

  13. Cardinal scales for health evaluation

    Harvey, Charles; Østerdal, Lars Peter Raahave

    2010-01-01

    Policy studies often evaluate health for an individual or for a population by using measurement scales that are ordinal scales or expected-utility scales. This paper develops scales of a different type, commonly called cardinal scales, that measure changes in health. Also, we argue that cardinal scales provide a meaningful and useful means of evaluating health policies. Thus, we develop a means of using the perspective of early neoclassical welfare economics as an alternative to ordinalist and expected-utility perspectives.

  14. An Elastica Arm Scale

    Bosi, F; Corso, F Dal; Bigoni, D

    2015-01-01

    The concept of a 'deformable arm scale' (completely different from a traditional rigid arm balance) is theoretically introduced and experimentally validated. The idea is not intuitive, but is the result of nonlinear equilibrium kinematics of rods inducing configurational forces, so that deflection of the arms becomes necessary for the equilibrium, which would be impossible for a rigid system. In particular, the rigid arms of usual scales are replaced by a flexible elastic lamina, free to slide in a frictionless and inclined sliding sleeve, which can reach a unique equilibrium configuration when two vertical dead loads are applied. Prototypes realized to demonstrate the feasibility of the system show a high accuracy in the measurement of load within a certain range of use. It is finally shown that the presented results are strongly related to snaking of confined beams, with implications for the locomotion of serpents, plumbing, and smart oil drilling.

  15. [COMPREHENSIVE GERIATRIC ASSESSMENT SCALES].

    Casado Verdejo, Inés; Postigo Mota, Salvador; Muñoz Bermejo, Laura; Vallejo Villalobos, José Ramón; Arrabal Léon, Nazaret; Pinto Montealegre, Jose Eduardo

    2016-01-01

    The process of comprehensive geriatric assessment is one of the key elements of geriatric care management aimed at the population. It includes evaluating the clinical, functional, mental and social aspects resulting from aging and/or the pathological processes that appear at this stage of the life cycle. For this purpose, alongside other tools, professionals have a large number of validated rating scales specifically designed for the assessment of the different areas or fields. Their use can be very helpful, especially for the objectification of evaluation results. The future of research in this area involves deepening the adequacy of the scales to the characteristics and needs of older people in each care level or place of care. PMID:26996044

  16. LARGE SCALE GLAZED

    Bache, Anja Margrethe

    2010-01-01

    WORLD FAMOUS ARCHITECTS CHALLENGE TODAY THE EXPOSURE OF CONCRETE IN THEIR ARCHITECTURE. IT IS MY HOPE TO BE ABLE TO COMPLEMENT THESE. I TRY TO DEVELOP NEW AESTHETIC POTENTIALS FOR THE CONCRETE AND CERAMICS, IN LARGE SCALES THAT HAS NOT BEEN SEEN BEFORE IN THE CERAMIC AREA. IT IS EXPECTED TO RESULT...... IN NEW TYPES OF LARGE SCALE AND VERY THIN, GLAZED CONCRETE FAÇADES IN BUILDING. IF SUCH ARE INTRODUCED IN AN ARCHITECTURAL CONTEXT THEY WILL HAVE A DISTINCTIVE IMPACT ON THE VISUAL EXPRESSION OF THE BUILDING. THE QUESTION IS WHAT KIND. THAT I WILL ATTEMPT TO ANSWER IN THIS ARTICLE THROUGH OBSERVATION...... OF SELECTED EXISTING BUILDINGS IN AND AROUND COPENHAGEN COVERED WITH MOSAIC TILES, UNGLAZED OR GLAZED CLAY TILES. ITS BUILDINGS WHICH HAVE QUALITIES THAT I WOULD LIKE APPLIED, PERHAPS TRANSFORMED OR MOST PREFERABLY, INTERPRETED ANEW, FOR THE LARGE GLAZED CONCRETE PANELS I AM DEVELOPING. KEYWORDS...

  17. Large-Scale Disasters

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  18. The Chinese Politeness Scale

    王喜凤

    2012-01-01

    In order to make sense of what is said in an interaction, we have to look at various factors which relate to social distance and closeness. Generally, these factors include the specific situation in which language use takes place, the relative status of the two participants, the message being delivered, and finally the age of the participants. In this article, the Chinese Politeness Scale, based on Chinese social values and tradition, will be explained and demonstrated in detail.

  19. Scaling up Copy Detection

    Li, Xian; Dong, Xin Luna; Lyons, Kenneth B.; Meng, Weiyi; Srivastava, Divesh

    2015-01-01

    Recent research shows that copying is prevalent for Deep-Web data and considering copying can significantly improve truth finding from conflicting values. However, existing copy detection techniques do not scale for large sizes and numbers of data sources, so truth finding can be slowed down by one to two orders of magnitude compared with the corresponding techniques that do not consider copying. In this paper, we study how to improve scalability of copy detection on structured data. Ou...

  20. Indian scales and inventories

    Venkatesan, S.

    2010-01-01

    This conceptual, perspective and review paper on Indian scales and inventories begins with clarification on the historical and contemporary meanings of psychometry before linking itself to the burgeoning field of clinimetrics in their applications to the practice of clinical psychology and psychiatry. Clinimetrics is explained as a changing paradigm in the design, administration, and interpretation of quantitative tests, techniques or procedures applied to measurement of clinical variables, t...

  1. Urban Scaling in Europe

    Bettencourt, Luís M. A.; Lobo, José

    2015-01-01

    Over the last few decades, in disciplines as diverse as economics, geography and complex systems, a perspective has arisen proposing that many properties of cities are quantitatively predictable due to agglomeration or scaling effects. Using new harmonized definitions for functional urban areas, we examine to what extent these ideas apply to European cities. We show that while most large urban systems in Western Europe (France, Germany, Italy, Spain, UK) approximately agree with theoretical e...

  2. Toughness Scaling Model Applications

    Dlouhý, Ivo; Kozák, Vladislav; Holzmann, Miloslav

    78. Dordrecht : Kluwer Academic Publishers, 2002 - (Dlouhý, I.), s. 195-212 - (NATO Science Series. Mathematics, Physics and Chemistry. 2) R&D Projects: GA AV ČR IAA2041003; GA MŠk ME 303 Institutional research plan: CEZ:AV0Z2041904 Keywords: fracture toughness transferability * pre-cracked Charpy specimen * toughness scaling models Subject RIV: JL - Materials Fatigue, Friction Mechanics

  3. ATLAS Jet Energy Scale

    Schouten, D.; Tanasijczuk, A.; Vetterli, M. (Department of Physics, Simon Fraser University, Burnaby, BC, Canada); for the ATLAS Collaboration

    2012-01-01

    Jets originating from the fragmentation of quarks and gluons are the most common, and most complicated, final state objects produced at hadron colliders. A precise knowledge of their energy calibration is therefore of great importance at experiments at the Large Hadron Collider at CERN, yet it is very difficult to ascertain. We present in-situ techniques and results for the jet energy scale at ATLAS using recent collision data. ATLAS has demonstrated an understanding of the necessary jet energy cor...

  4. Beyond the Planck Scale

    I outline motivations for believing that important quantum gravity effects lie beyond the Planck scale at both higher energies and longer distances and times. These motivations arise in part from the study of ultra-high energy scattering, and also from considerations in cosmology. I briefly summarize some inferences about such ultra-planckian physics, and clues we might pursue towards the principles of a more fundamental theory addressing the known puzzles and paradoxes of quantum gravity.

  5. Weighted metric multidimensional scaling

    Greenacre, Michael J.

    2004-01-01

    This paper establishes a general framework for metric scaling of any distance measure between individuals based on a rectangular individuals-by-variables data matrix. The method allows visualization of both individuals and variables as well as preserving all the good properties of principal axis methods such as principal components and correspondence analysis, based on the singular-value decomposition, including the decomposition of variance into components along principal axes which provide...
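    The principal-axis machinery described above rests on the eigendecomposition (equivalently, SVD) of a double-centered squared-distance matrix. Below is a minimal sketch of ordinary, unweighted classical metric scaling; the paper's contribution is a weighted generalization of this, which the sketch does not attempt to reproduce.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) metric scaling of a distance matrix D.

    Plain, unweighted sketch; the paper generalizes this to arbitrary
    weights on individuals and variables.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]    # keep the k largest
    L = np.sqrt(np.maximum(vals[order], 0.0))
    return vecs[:, order] * L             # principal coordinates

# Points on a line: pairwise distances are recovered up to rotation/sign.
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, k=1)
assert np.allclose(np.abs(Y - Y.T), D, atol=1e-8)
```

    The double-centering step is what turns distances into inner products, so the usual variance decomposition along principal axes carries over directly.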

  6. The International Retrievability Scale

    The International Retrievability Scale has been developed with two main objectives: to support dialogue with stakeholders and to establish a common international framework. The notion of establishing an international retrievability scale (R-scale) was being tested even before the launch of the R and R project. Once the project was established, further development of the R-scale was undertaken by a dedicated working group, which was equally tasked with the drafting of a leaflet. More than 18 months were spent testing and improving the leaflet and the R-scale, both within the working group and beyond. It is hoped that discussion during the R and R Conference will lead to further refinement; feedback from interested parties is encouraged and appreciated. The R-scale is presented in schematic form in Figure 1. For added clarity, a tabular version of the R-scale is also provided in Table 1. As can be seen at the top of Figure 1, the different stages of waste disposal can be reduced to a series of common steps. The duration of steps is variable according to specific national programme provisions. After the visualisation of stages, the second part of the R-scale allows us to examine conceptually the ease and cost of retrieval at each stage. Again, the duration of each block, and the relative proportion between ease and cost, will depend on the national programme in place. In the third part of Figure 1, the character of safety assurance at each stage is represented through the relative weight of active and passive controls. A four-page leaflet entitled International Understanding of Reversibility of Decisions and Retrievability of Waste in Geological Disposal was prepared for distribution at this conference. The leaflet is divided into three sections. Section 1 provides a general description of the geological disposal process, and addresses topics such as the objective of a geological repository, the life cycle stages of the repository, the role of observation along the

  7. Urban scaling in Europe.

    Bettencourt, Luís M A; Lobo, José

    2016-03-01

    Over the last few decades, in disciplines as diverse as economics, geography and complex systems, a perspective has arisen proposing that many properties of cities are quantitatively predictable due to agglomeration or scaling effects. Using new harmonized definitions for functional urban areas, we examine to what extent these ideas apply to European cities. We show that while most large urban systems in Western Europe (France, Germany, Italy, Spain, UK) approximately agree with theoretical expectations, the small number of cities in each nation and their natural variability preclude drawing strong conclusions. We demonstrate how this problem can be overcome so that cities from different urban systems can be pooled together to construct larger datasets. This leads to a simple statistical procedure to identify urban scaling relations, which then clearly emerge as a property of European cities. We compare the predictions of urban scaling to Zipf's law for the size distribution of cities and show that while the former holds well the latter is a poor descriptor of European cities. We conclude with scenarios for the size and properties of future pan-European megacities and their implications for the economic productivity, technological sophistication and regional inequalities of an integrated European urban system. PMID:26984190
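    Urban scaling relations of the kind identified here are conventionally written Y = Y0 * N^beta and estimated by regression in log-log space. The sketch below uses synthetic data; the helper name and the exponent 1.15 are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fit_scaling_exponent(population, indicator):
    """Estimate beta in indicator ~ Y0 * population**beta by OLS in log-log space."""
    logN = np.log(population)
    logY = np.log(indicator)
    beta, logY0 = np.polyfit(logN, logY, 1)  # slope = scaling exponent
    return beta, np.exp(logY0)

# Synthetic "cities" generated with a known superlinear exponent.
rng = np.random.default_rng(0)
N = rng.uniform(1e4, 1e7, size=200)
Y = 2.0 * N ** 1.15 * np.exp(rng.normal(0, 0.05, size=200))
beta, Y0 = fit_scaling_exponent(N, Y)
assert abs(beta - 1.15) < 0.02
```

    Pooling cities from several national urban systems, as the paper proposes, amounts to enlarging the sample fed to this regression so the exponent estimate stabilizes.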

  8. The "invisible" radioactive scale

    Production and up-concentration of naturally occurring radioactive material (NORM) in the petroleum industry has attracted steadily increasing attention during the last 15 years. Most production engineers today associate this radioactivity with precipitates (scales) and sludges in production tubing, pumps, valves, separators, settling tanks etc., wherever water is being transported or treated. 226Ra and 228Ra are the best-known radioactive constituents in scale. Surprisingly little known, however, is the radioactive contamination by 210Pb and its progeny 210Bi and 210Po. These are found in combination with 226Ra in ordinary scale, often in a layer of non-radioactive metallic lead in water transportation systems, but also in pure gas and condensate handling systems "unsupported" by 226Ra, arising from transportation and decay of the noble gas 222Rn in NG/LNG. This latter contamination may be rather thin, in some cases virtually invisible. When, in addition, the radiation energies are too low to be detected on the outer surface of the equipment, its existence has remained a secret to most people in the industry. The report discusses transportation and deposition mechanisms and detection methods, and provides some examples of measured results from the North Sea on equipment sent for maintenance. It is concluded that a regular measurement program for this type of contamination should be mandatory in all dismantling of transportation and fluid-handling equipment for fluids and gases, offshore and onshore

  9. Advanced scale conditioning agents

    A technical description of Advanced Scale Conditioning Agent (ASCA) technology was published in the May-June 2003 edition of the Nuclear Plant Journal. That article described the development of advanced scale conditioning agent programs, and the specific agent types, for keeping the secondary side of steam generators in a pressurized water reactor free of deposited corrosion products and corrosion-inducing contaminants to ensure their long-term operation. This article describes the first two plant applications of advanced scale conditioning agents, implemented at Southern Nuclear Operating Company's Vogtle Units 1 and 2 during their 2002 scheduled outages to minimize tube degradation and maintain full power operation using the most effective techniques while minimizing outage costs. The goal was to remove three to four fuel cycles of deposits from each steam generator so that, after future chemical cleaning activities, ASCAs could be used to maintain the cleanliness of the steam generators without the need for additional chemical cleaning efforts. The goal was achieved, along with several other benefits that resulted in cost savings to the plant

  10. Mechanism for salt scaling

    Valenza, John J., II

    Salt scaling is superficial damage caused by freezing a saline solution on the surface of a cementitious body. The damage consists of the removal of small chips or flakes of binder. The discovery of this phenomenon in the early 1950s prompted hundreds of experimental studies, which clearly elucidated the characteristics of this damage. In particular, it was shown that a pessimum salt concentration exists, where a moderate salt concentration (˜3%) results in the most damage. Despite the numerous studies, the mechanism responsible for salt scaling had not been identified. In this work it is shown that salt scaling is a result of the large thermal expansion mismatch between ice and the cementitious body, and that the mechanism responsible for damage is analogous to glue-spalling. When ice forms on a cementitious body, a bi-material composite is formed. The thermal expansion coefficient of the ice is ˜5 times that of the underlying body, so when the temperature of the composite is lowered below the melting point, the ice goes into tension. Once this stress exceeds the strength of the ice, cracks initiate in the ice and propagate into the surface of the cementitious body, removing a flake of material. The glue-spall mechanism accounts for all of the characteristics of salt scaling. In particular, a theoretical analysis is presented which shows that the pessimum concentration is a consequence of the effect of brine pockets on the mechanical properties of ice, and that the damage morphology is accounted for by fracture mechanics. Finally, empirical evidence is presented that shows that the glue-spall mechanism is the primary cause of salt scaling. The primary experimental tool used in this study is a novel warping experiment, where a pool of liquid is formed on top of a thin (˜3 mm) plate of cement paste. Stresses in the plate, including thermal expansion mismatch, result in warping of the plate, which is easily detected. This technique revealed the existence of

  11. Extragalactic Large-Scale Structures behind the Southern Milky Way. IV. Redshifts Obtained with MEFOS

    Woudt, Patrick A.; Kraan-Korteweg, Renee C.; Cayatte, Veronique; Balkowski, Chantal; Felenbok, Paul

    2004-01-01

    Abbreviated: As part of our efforts to unveil extragalactic large-scale structures behind the southern Milky Way, we here present redshifts for 764 galaxies in the Hydra/Antlia, Crux and Great Attractor region (266deg < l < 338deg, |b| < 10deg), obtained with the Meudon-ESO Fibre Object Spectrograph (MEFOS) at the 3.6-m telescope of ESO. The observations are part of a redshift survey of partially obscured galaxies recorded in the course of a deep optical galaxy search behind the southern Milky Way. A total of 947 galaxies have been observed, a small percentage of the spectra (N=109, 11.5%) were contaminated by foreground stars, and 74 galaxies (7.8%) were too faint to allow a reliable redshift determination. With MEFOS we obtained spectra down to the faintest galaxies of our optical galaxy survey, and hence probe large-scale structures out to larger distances (v <~ 30000 km/s) than our other redshift follow-ups. The most distinct large-scale structures revealed in the southern Zone of Avoidance ar...

  12. Comparing the theoretical versions of the Beaufort scale, the T-Scale and the Fujita scale

    Meaden, G. Terence; Kochev, S.; Kolendowicz, L.; Kosa-Kiss, A.; Marcinoniene, Izolda; Sioutas, Michalis; Tooming, Heino; Tyrrell, John

    2007-02-01

    2005 is the bicentenary of the Beaufort Scale and its wind-speed codes: the marine version in 1805 and the land version later. In the 1920s, when anemometers had come into general use, the Beaufort Scale was quantified by a formula based on experiment. In the early 1970s two tornado wind-speed scales were proposed: (1) an International T-Scale based on the Beaufort Scale; and (2) Fujita's damage scale developed for North America. The International Beaufort Scale and the T-Scale share a common root in having an integral theoretical relationship with an established scientific basis, whereas Fujita's Scale introduces criteria that make its intensities non-integral with Beaufort. Forces on the T-Scale, where T stands for Tornado force, span the range 0 to 10, which is highly useful worldwide. The shorter range of Fujita's Scale (0 to 5) is acceptable for American use but less convenient elsewhere. To illustrate the simplicity of the decimal T-Scale, the mean hurricane wind speed of Beaufort 12 is T2 on the T-Scale but F1.121 on the F-Scale, while a tornado wind speed of T9 (= B26) becomes F4.761. However, the three wind scales can be unified either by making F-Scale numbers exactly half the magnitude of T-Scale numbers [i.e. F'half = T / 2 = (B / 4) - 4] or by doubling the numbers of this revised version to give integral equivalence with the T-Scale. The result is a decimal formula F'double = T = (B / 2) - 4, named the TF-Scale, where TF stands for Tornado Force. This harmonious 10-digit scale has all the criteria needed for worldwide practical effectiveness.
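    The linear relations quoted in the abstract (T = B/2 - 4, with the proposed TF-Scale equal to T) can be checked directly against its worked examples; the function names below are illustrative, not from the article.

```python
def beaufort_to_T(B):
    """T-Scale force from Beaufort number, per the relation T = B/2 - 4."""
    return B / 2 - 4

def beaufort_to_TF(B):
    """The proposed unified TF-Scale equals the T-Scale: TF = T = B/2 - 4."""
    return beaufort_to_T(B)

# Examples from the text: mean hurricane wind (Beaufort 12) is T2,
# and a T9 tornado corresponds to Beaufort 26.
assert beaufort_to_T(12) == 2
assert beaufort_to_T(26) == 9
assert beaufort_to_TF(12) == 2
```

    The half-magnitude variant F'half = (B / 4) - 4 follows by dividing the same expression by two, which is exactly the doubling relationship the abstract describes.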

  13. Challenging comparison of stroke scales

    Kavian Ghandehari

    2013-01-01

    Stroke scales can be classified as clinimetric scales and functional impairment/handicap scales. All studies describing stroke scales were reviewed using internet search engines, with the final search performed on January 1, 2013. The following keywords were entered into the search engines: stroke, scale, score and disability. Despite the advantages of the modified National Institutes of Health Stroke Scale and the Scandinavian Stroke Scale compared to the NIHSS, including their simplicity and lower inter-rater variability, most stroke neurologists around the world continue using the NIHSS. The modified Rankin Scale (mRS) and Barthel Index (BI) are widely used functional impairment and disability scales. The distinction between grades of the mRS is poorly defined. The Asian Stroke Disability Scale is a simplified functional impairment/handicap scale which is as valid as the mRS and BI. At present, the NIHSS, mRS and BI are the routine stroke scales simply because physicians have been working with these scales for more than two decades, although that alone is not an acceptable reason. On the other hand, the results of previous stroke trials, which form the basis of stroke management guidelines, were derived using these scales.

  14. Soil organic carbon across scales.

    O'Rourke, Sharon M; Angers, Denis A; Holden, Nicholas M; McBratney, Alex B

    2015-10-01

    Mechanistic understanding of scale effects is important for interpreting the processes that control the global carbon cycle. Greater attention should be given to scale in soil organic carbon (SOC) science so that we can devise better policy to protect/enhance existing SOC stocks and ensure sustainable use of soils. Global issues such as climate change require consideration of SOC stock changes at the global and biosphere scale, but human interaction occurs at the landscape scale, with consequences at the pedon, aggregate and particle scales. This review evaluates our understanding of SOC across all these scales in the context of the processes involved in SOC cycling at each scale and with emphasis on stabilizing SOC. Current synergy between science and policy is explored at each scale to determine how well each is represented in the management of SOC. An outline of how SOC might be integrated into a framework of soil security is examined. We conclude that SOC processes at the biosphere to biome scales are not well understood. Instead, SOC has come to be viewed as a large-scale pool subject to carbon flux. Better understanding exists for SOC processes operating at the scales of the pedon, aggregate and particle. At the landscape scale, the influence of large- and small-scale processes has the greatest interaction and is exposed to the greatest modification through agricultural management. Policy implemented at regional or national scale tends to focus at the landscape scale without due consideration of the larger scale factors controlling SOC or the impacts of policy for SOC at the smaller SOC scales. What is required is a framework that can be integrated across a continuum of scales to optimize SOC management. PMID:25918852

  15. Cellular-scale hydrodynamics

    Abkarian, Manouk; Faivre, Magalie; Horton, Renita; Smistrup, Kristian; Best-Popescu, Catherine A; Stone, Howard A.

    2008-01-01

    Microfluidic tools are providing many new insights into the chemical, physical and physicochemical responses of cells. Both suspension-level and single-cell measurements have been studied. We review our studies of these kinds of problems for red blood cells with particular focus on the shapes of ...... mechanical effects on suspended cells can be studied systematically in small devices, and how these features can be exploited to develop methods for characterizing physicochemical responses and possibly for the diagnosis of cellular-scale changes to environmental factors....

  16. Scaling CouchDB

    Holt, Bradley

    2011-01-01

    This practical guide offers a short course on scaling CouchDB to meet the capacity needs of your distributed application. Through a series of scenario-based examples, this book lets you explore several methods for creating a system that can accommodate growth and meet expected demand. In the process, you learn about several tools that can help you with replication, load balancing, clustering, and load testing and monitoring.
    - Apply performance tips for tuning your database
    - Replicate data, using Futon and CouchDB's RESTful interface
    - Distribute CouchDB's workload through load balancing
    - Learn option...

  17. ATLAS Jet Energy Scale

    Schouten, D; Vetterli, M

    2012-01-01

    Jets originating from the fragmentation of quarks and gluons are the most common, and most complicated, final state objects produced at hadron colliders. A precise knowledge of their energy calibration is therefore of great importance at experiments at the Large Hadron Collider at CERN, yet it is very difficult to ascertain. We present in-situ techniques and results for the jet energy scale at ATLAS using recent collision data. ATLAS has demonstrated an understanding of the necessary jet energy corrections to within approximately 4% in the central region of the calorimeter.

  18. Perceived prominence and scale types

    Tøndering, John; Jensen, Christian

    2005-01-01

    Three different scales which have been used to measure perceived prominence are evaluated in a perceptual experiment. Average scores of raters using a multi-level (31-point) scale, a simple binary (2-point) scale and an intermediate 4-point scale are almost identical. The potentially finer gradation possible with the multi-level scale(s) is compensated for by having multiple listeners, which is also a requirement for obtaining reliable data. In other words, a high number of levels is neither a sufficient nor a necessary requirement. Overall, the best results were obtained using the 4-point...

  19. A scale of risk.

    Gardoni, Paolo; Murphy, Colleen

    2014-07-01

    This article proposes a conceptual framework for ranking the relative gravity of diverse risks. This framework identifies the moral considerations that should inform the evaluation and comparison of diverse risks. A common definition of risk includes two dimensions: the probability of occurrence and the associated consequences of a set of hazardous scenarios. This article first expands this definition to include a third dimension: the source of a risk. The source of a risk refers to the agents involved in the creation or maintenance of a risk and captures a central moral concern about risks. Then, a scale of risk is proposed to categorize risks along a multidimensional ranking, based on a comparative evaluation of the consequences, probability, and source of a given risk. A risk is ranked higher on the scale the larger the consequences, the greater the probability, and the more morally culpable the source. The information from the proposed comparative evaluation of risks can inform the selection of priorities for risk mitigation. PMID:24372160
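    As an illustration only, one way to realize such a multidimensional ranking in code. The numeric culpability grades and the lexicographic ordering below are assumptions made for this sketch, not the authors' formulation, which compares the three dimensions without committing to a single scoring rule.

```python
from dataclasses import dataclass

# Assumed ordinal grades for source culpability (illustrative, not from the paper).
CULPABILITY = {"natural": 0, "negligent": 1, "reckless": 2, "intentional": 3}

@dataclass
class Risk:
    name: str
    probability: float   # probability of the hazardous scenario
    consequences: float  # severity of outcomes (arbitrary units)
    source: str          # kind of agent behind the risk

def rank_key(r: Risk):
    # Larger consequences, greater probability, and a more culpable
    # source all rank a risk higher on the scale.
    return (r.consequences, r.probability, CULPABILITY[r.source])

risks = [
    Risk("levee failure", 0.01, 9.0, "negligent"),
    Risk("river flood", 0.05, 9.0, "natural"),
    Risk("minor landslide", 0.20, 2.0, "natural"),
]
ranked = sorted(risks, key=rank_key, reverse=True)
assert ranked[0].name == "river flood"  # consequences tie; higher probability wins
```

    A lexicographic key is only one design choice; any monotone combination of the three dimensions would respect the ordering principle the article states.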

  20. A Validity Scale for the Sharp Consumer Satisfaction Scales.

    Tanner, Barry A.; Stacy, Webb, Jr.

    1985-01-01

    A validity scale for the Sharp Consumer Satisfaction Scale was developed and used in experiments to assess patients' satisfaction with community mental health centers. The scale discriminated between clients who offered suggestions and those who did not. It also improved the researchers' ability to predict true scores from obtained scores. (DWH)

  1. Design and Optimization of Tramadol HCL immediate release tablets as per scale Up and Post Approval Changes (SUPAC) Level II

    Jaikishan Khatri; Mugdha Karde; Savita Yadav; Dr. S.S. Bhalerao

    2012-01-01

    The filing of a New Drug Application (NDA), Abbreviated New Drug Application (ANDA) or Abbreviated Antibiotic Drug Application (AADA) is only the beginning of a drug's path to market. SUPAC Level changes arise from changes in site, excipient levels, batch size and equipment. The objective of this experiment was to make a robust, stable formulation which would withstand the SUPAC changes. An immediate release tablet formulation was made in order to carry...

  2. Global scale precipitation from monthly to centennial scales: empirical space-time scaling analysis, anthropogenic effects

    de Lima, Isabel; Lovejoy, Shaun

    2016-04-01

    The characterization of precipitation scaling regimes is a key contribution to improved understanding of space-time precipitation variability, which is the focus here. We conduct space-time scaling analyses of spectra and Haar fluctuations in precipitation, using three global-scale precipitation products (one instrument based, one reanalysis based, one satellite and gauge based), from monthly to centennial time scales and from planetary down to several hundred kilometers in spatial scale. Results show the presence, as in other atmospheric fields, of an intermediate "macroweather" regime between the familiar weather and climate regimes: we systematically characterize the temporal, spatial and joint space-time statistics and variability of macroweather precipitation, and the outer scale limit of temporal scaling. These regimes qualitatively and quantitatively alternate in the way fluctuations vary with scale. In the macroweather regime, fluctuations diminish with time scale (this is important for seasonal, annual, and decadal forecasts) while anthropogenic effects increase with time scale. Our approach determines the time scale at which the anthropogenic signal can be detected above the natural variability noise: the critical scale is about 20-40 years (depending on the product and on the spatial scale). This explains, for example, why studies that use data covering only a few decades do not easily give evidence of anthropogenic changes in precipitation as a consequence of warming: the period is too short. Overall, while showing that precipitation can be modeled with space-time scaling processes, our results clarify the different precipitation scaling regimes and further allow us to quantify the agreement (and lack of agreement) of the precipitation products as a function of space and time scales. Moreover, this work contributes to clarifying a basic problem in hydro-climatology, which is to measure precipitation trends at decadal and longer scales and to
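    A minimal sketch of a Haar fluctuation analysis, assuming the textbook definition (the difference between the means of consecutive half-intervals); this is illustrative only, not the authors' exact estimator. Fluctuations that shrink with scale signal a macroweather-like regime; fluctuations that grow signal a weather-like regime.

```python
import numpy as np

def haar_fluctuation(x, lag):
    """Mean absolute Haar fluctuation of series x at an even lag.

    The Haar fluctuation over an interval of length `lag` is the mean of
    the second half minus the mean of the first half; scaling regimes
    appear as power laws of the average |fluctuation| versus lag.
    """
    half = lag // 2
    flucts = []
    for start in range(0, len(x) - lag + 1, lag):
        first = x[start:start + half].mean()
        second = x[start + half:start + lag].mean()
        flucts.append(abs(second - first))
    return float(np.mean(flucts))

# White noise: fluctuations shrink with scale (macroweather-like);
# a random walk: fluctuations grow with scale (weather-like).
rng = np.random.default_rng(1)
noise = rng.normal(size=2**14)
walk = np.cumsum(noise)
assert haar_fluctuation(noise, 256) < haar_fluctuation(noise, 4)
assert haar_fluctuation(walk, 256) > haar_fluctuation(walk, 4)
```

    Fitting log |fluctuation| against log lag over such a series is how the outer scale limit of a scaling regime would be located in practice.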

  3. Northeast Snowfall Impact Scale (NESIS)

    National Oceanic and Atmospheric Administration, Department of Commerce — While the Fujita and Saffir-Simpson Scales characterize tornadoes and hurricanes respectively, there is no widely used scale to classify snowstorms. The Northeast...

  4. ScaleUp America Communities

    Small Business Administration — SBA’s new ScaleUp America Initiative is designed to help small firms with high potential “scale up” and grow their businesses so that they will provide more jobs...

  5. Developing a Biology Attitude Scale

    Melih KOÇAKOĞLU

    2010-08-01

    The purpose of this study was to develop a reliable and valid Likert-style biology attitude scale. Before developing the scale, existing scales were carefully scrutinized, the views of experts were taken, and the first draft of the scale was prepared by selecting statements. The validity and reliability studies of the scale were carried out by applying the first draft to 168 students as a pilot study. The content validity of the scale was confirmed by a panel of judges, and factor analysis established its construct validity. The scale consists of 36 positive and negative statements. The Kaiser-Meyer-Olkin value is 0.91, Bartlett's test value is 5415.115, and the Cronbach alpha coefficient for reliability is 0.941.
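    The reported reliability coefficient comes from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A hedged sketch of that computation (synthetic data; the study's own item responses are of course not reproduced here):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / total-score variance)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly consistent items (every item identical) give alpha = 1.
consistent = np.tile(np.array([[1], [2], [3], [4], [5]]), (1, 4))
assert abs(cronbach_alpha(consistent) - 1.0) < 1e-12
```

    With 36 items, as in this scale, an alpha of 0.941 indicates that the item variances are small relative to the variance of the summed attitude score.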

  6. MULTIPLE SCALES FOR SUSTAINABLE RESULTS

    This session will highlight recent research that incorporates the use of multiple scales and innovative environmental accounting to better inform decisions that affect sustainability, resilience, and vulnerability at all scales. Effective decision-making involves assessment at mu...

  7. Scaling Equation for Invariant Measure

    LIU Shi-Kuo; FU Zun-Tao; LIU Shi-Da; REN Kui

    2003-01-01

    An iterated function system (IFS) is constructed. It is shown that the invariant measure of the IFS satisfies the same equation as the scaling equation for the wavelet transform (WT). Evidently, both the IFS and the scaling equation of the WT obey the contraction mapping principle.
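    A minimal illustration of the contraction-mapping connection: the "chaos game" below approximates the invariant measure of a simple two-map IFS whose invariant measure (Lebesgue measure on [0, 1]) has a density solving the Haar refinement (scaling) equation phi(x) = phi(2x) + phi(2x - 1). The specific maps are an illustrative choice, not necessarily the IFS constructed in the paper.

```python
import random

# Two contractions with equal probabilities: w0(x) = x/2, w1(x) = x/2 + 1/2.
# Iterating a random composition of them (the "chaos game") samples the
# unique invariant measure guaranteed by the contraction mapping principle.
random.seed(0)
x = 0.5
samples = []
for _ in range(200_000):
    if random.random() < 0.5:
        x = x / 2          # w0: contraction toward 0
    else:
        x = x / 2 + 0.5    # w1: contraction toward 1
    samples.append(x)

# The empirical measure is close to uniform: each half of [0, 1] gets ~50%.
left = sum(1 for s in samples if s < 0.5) / len(samples)
assert abs(left - 0.5) < 0.01
```

    Changing the maps or their probabilities yields other self-similar invariant measures (e.g. the Cantor measure), each again a fixed point of a contraction on the space of measures.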

  8. Clinical and metabolic results of fasting abbreviation with carbohydrates in coronary artery bypass graft surgery

    Gibran Roder Feguri

    2012-03-01

    INTRODUCTION: Limited information is available about preoperative fasting abbreviation with administration of liquid enriched with carbohydrates (CHO) in cardiovascular surgeries. OBJECTIVES: To evaluate clinical variables, the safety of the method and the effects on the metabolism of patients undergoing fasting abbreviation in coronary artery bypass graft (CABG) surgery. METHODS: Forty patients undergoing CABG were randomized to receive 400 ml (6 hours before) and 200 ml (2 hours before) of 12.5% maltodextrin (Group I, n=20) or water only (Group II, n=20) before anesthetic induction. Several perioperative clinical variables were evaluated, as well as insulin resistance (by the HOMA-IR index and by the need for exogenous insulin), the excretory function of the pancreatic beta cell (by HOMA-Beta), and glycemic control (by capillary blood glucose tests). RESULTS: There were no perioperative deaths, bronchoaspiration, mediastinitis, acute myocardial infarction or stroke. Atrial fibrillation occurred in two patients in each group, and infectious complications did not differ between the groups (P=0.611). Patients in Group I spent two fewer days in hospital (P=0.025) and one fewer day in the ICU (P<0.05). A decline in endogenous insulin production occurred in both groups (P...

  9. Mixed scale joint graphical lasso

    Pircalabelu, Eugen; Claeskens, Gerda; Waldorp, Lourens

    2016-01-01

    We develop a method for estimating brain networks from fMRI datasets that have not all been measured using the same set of brain regions; some of the coarse-scale regions have been split into smaller subregions. The proposed penalized estimation procedure selects undirected graphical models with similar structures that combine information from several subjects and several coarseness scales. Both within-scale edges and between-scale edges that identify possible connections between a large region...

  10. Scale setting in lattice QCD

    The principles of scale setting in lattice QCD as well as the advantages and disadvantages of various commonly used scales are discussed. After listing criteria for good scales, I concentrate on the main presently used ones with an emphasis on scales derived from the Yang-Mills gradient flow. For these I discuss discretisation errors, statistical precision and mass effects. A short review on numerical results also brings me to an unpleasant disagreement which remains to be explained.

  11. Scale setting in lattice QCD

    Sommer, Rainer [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC]

    2014-02-15

    The principles of scale setting in lattice QCD as well as the advantages and disadvantages of various commonly used scales are discussed. After listing criteria for good scales, I concentrate on the main presently used ones with an emphasis on scales derived from the Yang-Mills gradient flow. For these I discuss discretisation errors, statistical precision and mass effects. A short review on numerical results also brings me to an unpleasant disagreement which remains to be explained.

  12. Coma scales: a historical review

    Ana Luisa Bordini

    2010-12-01

    OBJECTIVE: To describe the most important coma scales developed in the last fifty years. METHOD: A review of the literature between 1969 and 2009 in the Medline and Scielo databases was carried out using the following keywords: coma scales, coma, disorders of consciousness, coma score and levels of coma. RESULTS: Five main scales were found, in chronological order: the Jouvet coma scale, the Moscow coma scale, the Glasgow coma scale (GCS), the Bozza-Marrubini scale and the FOUR score (Full Outline of UnResponsiveness), as well as other scales that have had less impact and are rarely used outside their country of origin. DISCUSSION: Of the five main scales, the GCS is by far the most widely used. It is easy to apply and very suitable for cases of traumatic brain injury (TBI). However, it has shortcomings, such as the fact that the speech component in intubated patients cannot be tested. While the Jouvet scale is quite sensitive, particularly for levels of consciousness closer to normal, it is difficult to use. The Moscow scale has good predictive value but is little used by the medical community. The FOUR score is easy to apply and provides more neurological details than the Glasgow scale.

  13. Scaling Laws in Gravitational Collapse

    Cai, Rong-Gen; Yang, Run-Qiu (State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, 100190, China)

    2015-01-01

    This paper presents two interesting scaling laws, which relate some critical exponents in the critical behavior of spherically symmetric gravitational collapse. These scaling laws are independent of the details of the gravity theory under consideration and take forms similar to those in thermodynamic and geometric phase transitions in condensed matter systems. The properties of the scaling laws are discussed and some numerical checks are given.

  14. Westside Test Anxiety Scale Validation

    Driscoll, Richard

    2007-01-01

    The Westside Test Anxiety Scale is a brief, ten-item instrument designed to identify students with anxiety impairments who could benefit from an anxiety-reduction intervention. The scale items cover self-assessed anxiety impairment and cognitions which can impair performance. Correlations between anxiety-reduction as measured by the scale and…

  15. KNO scaling violations

    The author discusses KNO scaling violations from ISR to collider energies, and related topics such as cluster size (are clusters produced at colliders just ordinary resonances?), long-range correlations, etc. The author begins with a result on the energy dependence of heavy flavor production. The dual parton model (DPM) is based on the 1/N expansion, which is valid for any field theory with N degrees of freedom, and provides a perturbative expansion in cases where the coupling constant is not small. It turns out that graphs with the simplest topology are dominant; graphs with increasing topological complexity are suppressed by factors of 1/N². It has been shown that there is a one-to-one correspondence between the graphs in the 1/N expansion and those in a multiple-scattering model. This observation allows one to compute the weights σ_n of the different graphs in the 1/N expansion, essentially from unitarity

  16. Indian scales and inventories.

    Venkatesan, S

    2010-01-01

    This conceptual, perspective and review paper on Indian scales and inventories begins with clarification on the historical and contemporary meanings of psychometry before linking itself to the burgeoning field of clinimetrics in their applications to the practice of clinical psychology and psychiatry. Clinimetrics is explained as a changing paradigm in the design, administration, and interpretation of quantitative tests, techniques or procedures applied to measurement of clinical variables, traits and processes. As an illustrative sample, this article assembles a bibliographic survey of about 105 out of 2582 research papers (4.07%) scanned through 51 back-dated volumes covering 185 issues related to clinimetry as reviewed across a span of over fifty years (1958-2009) in the Indian Journal of Psychiatry. A content analysis of the contributions across distinct categories of mental measurements is explained before linkages are proposed for future directions along these lines. PMID:21836709

  17. Galactic-scale civilization

    Kuiper, T. B. H.

    1980-01-01

    Evolutionary arguments are presented in favor of the existence of civilization on a galactic scale. Patterns of physical, chemical, biological, social and cultural evolution leading to increasing levels of complexity are pointed out and explained thermodynamically in terms of the maximization of free energy dissipation in the environment of the organized system. The possibility of the evolution of a global and then a galactic human civilization is considered, and probabilities that the galaxy is presently in its colonization state and that life could have evolved to its present state on earth are discussed. Fermi's paradox of the absence of extraterrestrials in light of the probability of their existence is noted, and a variety of possible explanations is indicated. Finally, it is argued that although mankind may be the first occurrence of intelligence in the galaxy, it is unjustified to presume that this is so.

  18. Biological scaling and physics

    A R P Rau

    2002-09-01

    Kleiber’s law in biology states that the specific metabolic rate (metabolic rate per unit mass) scales as -1/4 in terms of the mass of the organism. A long-standing puzzle is the (-1/4) power in place of the usual expectation of (-1/3) based on the surface-to-volume ratio in three dimensions. While recent papers by physicists have focused exclusively on geometry in attempting to explain the puzzle, we consider here a specific law of physics that governs fluid flow to show how the (-1/4) power arises under certain conditions. More generally, such a line of approach that identifies a specific physical law as involved and then examines the implications of a power law may illuminate better the role of physics in biology.
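    As a quick numerical illustration (not from the paper; the mass ratio and code are mine), the two candidate exponents predict markedly different specific metabolic rates across a large mass range:

```python
# Illustrative sketch: specific metabolic rate (rate per unit mass)
# under Kleiber's M^(-1/4) scaling vs. the naive surface-to-volume
# expectation M^(-1/3). Mass units are arbitrary; only ratios matter.
def specific_rate(mass_ratio, exponent):
    """Relative specific metabolic rate for a given scaling exponent."""
    return mass_ratio ** exponent

# A 10,000-fold mass increase (roughly mouse -> elephant):
print(specific_rate(10_000, -1 / 4))   # Kleiber: drops to 0.1 of the small-animal value
print(specific_rate(10_000, -1 / 3))   # surface-to-volume: drops further, to ~0.046
```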

  19. Large Scale Solar Heating

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out… model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors…). Simulation programs are proposed as a control supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given.…

  20. Excitable Scale Free Networks

    Copelli, Mauro

    2007-01-01

    When a simple excitable system is continuously stimulated by a Poissonian external source, the response function (mean activity versus stimulus rate) generally shows a linear saturating shape. This is experimentally verified in some classes of sensory neurons, which accordingly present a small dynamic range (defined as the interval of stimulus intensity which can be appropriately coded by the mean activity of the excitable element), usually about one or two decades only. The brain, on the other hand, can handle a significantly broader range of stimulus intensity, and a collective phenomenon involving the interaction among excitable neurons has been suggested to account for the enhancement of the dynamic range. Since the role of the pattern of such interactions is still unclear, here we investigate the performance of a scale-free (SF) network topology in this dynamic range problem. Specifically, we study the transfer function of disordered SF networks of excitable Greenberg-Hastings cellular automata. We obser...

  1. Engines at molecular scales

    Krishnan, R; Krishnan, Raishma

    2004-01-01

    In recent literature there has been a lot of interest in the phenomena of noise induced transport in the absence of an average bias occurring in spatially periodic systems far from equilibrium. One of the main motivations in this area is to understand the mechanism behind the operation of biological motors at molecular scale. These molecular motors convert chemical energy available during the hydrolysis of ATP into mechanical motion to transport cargo and vesicles in living cells with very high reliability, adaptability and efficiency in a very noisy environment. The basic principle behind such a motion, namely the Brownian ratchet principle, has applications in nanotechnology as novel nanoparticle separation devices. Also, the mechanism of ratchet operation finds applications in game theory. Here, we briefly focus on the physical concepts underlying the constructive role of noise in assisting transport at a molecular level. The nature of particle currents, the energetic efficiency of these motors, the entrop...

  2. Scaling MongoDB

    Chodorow, Kristina

    2011-01-01

    Create a MongoDB cluster that will grow to meet the needs of your application. With this short and concise book, you'll get guidelines for setting up and using clusters to store a large volume of data, and learn how to access the data efficiently. In the process, you'll understand how to make your application work with a distributed database system. Scaling MongoDB will help you: set up a MongoDB cluster through sharding; work with a cluster to query and update data; operate, monitor, and back up your cluster; and plan your application to deal with outages. By following the advice in this book, you'l

  3. Scaling up: Assessing social impacts at the macro-scale

    Social impacts occur at various scales, from the micro-scale of the individual to the macro-scale of the community. Identifying the macro-scale social changes that result from an impacting event is a common goal of social impact assessment (SIA), but is challenging, as multiple factors simultaneously influence social trends at any given time, and there are usually only a small number of cases available for examination. While some methods have been proposed for establishing the contribution of an impacting event to macro-scale social change, they remain relatively untested. This paper critically reviews methods recommended to assess macro-scale social impacts, and proposes and demonstrates a new approach. The 'scaling up' method involves developing a chain of logic linking change at the individual/site scale to the community scale. It enables a more problematised assessment of the likely contribution of an impacting event to macro-scale social change than previous approaches. The use of this approach in a recent study of change in dairy farming in south east Australia is described.

  4. The Fundamental Scale of Descriptions

    Febres, Gerardo

    2014-01-01

    The complexity of a system description is a function of the entropy of its symbolic description. Prior to computing the entropy of the system description, an observation scale has to be assumed. In natural language texts, typical scales are binary, characters, and words. However, considering languages as structures built around certain preconceived sets of symbols, like words or characters, is only a presumption. This study introduces the notion of the Description Fundamental Scale as a set of symbols which serves to analyze the essence of a language structure. The concept of Fundamental Scale is tested using English and MIDI music texts by means of an algorithm developed to search for a set of symbols which minimizes the system's observed entropy, and therefore best expresses the fundamental scale of the language employed. Test results show that it is possible to find the Fundamental Scale of some languages. The concept of Fundamental Scale, and the method for its determination, emerges as an interesting tool to fac...
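    The entropy-versus-scale idea can be sketched in a few lines (a toy illustration of mine, not the paper's algorithm): the same text yields a different entropy per symbol depending on which symbol set is taken as the observation scale.

```python
# Toy illustration: the measured entropy of a text depends on the
# observation scale (characters vs. words). The paper's algorithm goes
# further and searches for the symbol set minimizing observed entropy.
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Shannon entropy in bits per symbol of a sequence of symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

text = "the quick brown fox jumps over the lazy dog"
print(f"character scale: {shannon_entropy(list(text)):.3f} bits/symbol")
print(f"word scale:      {shannon_entropy(text.split()):.3f} bits/symbol")
```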

  5. KNO scaling 30 years later

    KNO scaling, i.e. the collapse of multiplicity distributions Pn onto a universal scaling curve, manifests itself when Pn is expressed as a distribution of the standardized multiplicity (n - c)/λ, with c and λ being location and scale parameters governed by leading-particle effects and the growth of the average multiplicity. At very high energies, strong violation of KNO scaling behavior is observed (pp-bar) and expected to occur (e+e-). This challenges one to introduce novel, physically well motivated and preferably simple scaling rules obeyed by high-energy data. One possibility which I find useful and which satisfies the above requirements is the repetition of the original scaling prescription (shifting and rescaling) in Mellin space, that is, for the rank of the multiplicity moments. This scaling principle will be discussed here, illustrating its capabilities both on model predictions and on real data
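    The shift-and-rescale prescription described above can be written compactly (a sketch in my notation; c and λ are the abstract's location and scale parameters):

```latex
% Generalized KNO form with location c and scale \lambda:
P_n \;\approx\; \frac{1}{\lambda}\,\psi\!\left(\frac{n-c}{\lambda}\right),
\qquad
% the original KNO (1972) form is the special case c = 0,\ \lambda = \langle n\rangle:
\langle n\rangle\, P_n \;\longrightarrow\; \psi\!\left(\frac{n}{\langle n\rangle}\right).
```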

  6. Solar system to scale

    Gerwig López, Susanne

    2016-04-01

    One of the most important successes in astronomical observations has been to determine the limits of the Solar System. It is said that the first man able to measure the Earth-Sun distance with only a very slight error, in the second century BC, was the Greek astronomer Aristarchus of Samos. Thanks to Newton's law of universal gravitation, it was possible to measure, with a small margin of error, the distances between the Sun and the planets. Twelve-year-old students are very interested in everything related to the universe. However, it seems too difficult for them to imagine and understand the real distances among the different celestial bodies. To teach the differences between the inner and outer planets and how far away the outer ones are, I have my pupils work on the sizes and distances in our solar system by constructing it to scale. The purpose is to reproduce our solar system to scale on a cardboard. The procedure is very easy and simple. Students in the first year of ESO (12 years old) receive the instructions on a sheet of paper (things they need: a black cardboard, a pair of scissors, colored pencils, a ruler, adhesive tape, glue, the photocopies of the planets and satellites, and the measurements they have to use). In another photocopy they get pictures of the edge of the Sun, the planets, dwarf planets and some satellites, which they have to color, cut out and stick on the cardboard. This activity is planned for both Spanish and bilingual learning students as a science project. Depending on the group, they receive these instructions in Spanish or in English. When the time is over, the students bring their work on the cardboard to class. They obtain a final mark: pass, good or excellent, depending on the accuracy of the measurements, the position of all the celestial bodies, the asteroid belts, personal contributions, etc. If any of the students has not followed the instructions, they get the chance to remake it properly, in order not
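    The arithmetic behind such a cardboard model can be sketched as follows (a hypothetical example of mine; the board size and the choice of Neptune as the outermost anchor are not from the text):

```python
# Hypothetical worked example: pick a scale so that Neptune's mean
# orbital distance fits on a 150 cm cardboard, then place every planet.
distances_au = {          # mean distance from the Sun, in astronomical units
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Jupiter": 5.20, "Saturn": 9.58, "Uranus": 19.2, "Neptune": 30.1,
}
board_cm = 150.0                                 # usable cardboard length
cm_per_au = board_cm / distances_au["Neptune"]   # scale factor, ~5 cm per AU

for planet, d_au in distances_au.items():
    print(f"{planet:8s} {d_au * cm_per_au:6.1f} cm from the Sun")
```

    The same scale factor makes the gap between the inner and outer planets immediately visible: the four inner planets all land within the first ~8 cm of the board.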

  7. A multi scale model for small scale plasticity

    A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales, ranging from the nano-microscale to the mesoscale, is presented. The model is based on fundamental physical laws that govern dislocation motion and their interaction with various defects and interfaces. Particularly, a multi-scale model is developed merging two scales: the nano-microscale, where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length-scale, and the continuum scale, where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear band

  8. Loops: Twisting and Scaling

    Walsh, R. W.

    2004-01-01

    Loop-like structures are the fundamental magnetic building blocks of the solar atmosphere. Recent space-based EUV and X-ray satellite observations (from Yohkoh, SOHO and TRACE) have challenged the view that these features are simply static, gravitationally stratified plasma pipes. Rather, it is now surmised that each loop may consist of a bundle of fine plasma threads that are twisted around one another and can brighten independently. This invited review will outline the latest developments in "untangling" the topology of these features through three-dimensional magnetohydrodynamic modelling, and how their properties are being deduced through spectroscopic observations coupled to theoretical scaling laws. In particular, recent interest has centred on how the observed thermal profile along loops can be employed as a tool to diagnose any localised energy input to the structure and hence constrain the presence of a particular coronal heating mechanism. The dynamic nature of loops will be highlighted and the implications of superior-resolution plasma thread observations (whether spatial, temporal or spectral) from future space missions (Solar-B, STEREO, SDO and Solar Orbiter) will be discussed.

  9. Large scale traffic simulations

    Nagel, K.; Barrett, C.L. [Los Alamos National Lab., NM (United States)]|[Santa Fe Institute, NM (United States); Rickert, M. [Los Alamos National Lab., NM (United States)]|[Universitaet zu Koeln (Germany)

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  10. Scaling of structural failure

    Bazant, Z.P. [Northwestern Univ., Evanston, IL (United States); Chen, Er-Ping [Sandia National Lab., Albuquerque, NM (United States)

    1997-01-01

    This article attempts to review the progress achieved in the understanding of scaling and size effect in the failure of structures. Particular emphasis is placed on quasibrittle materials for which the size effect is complicated. Attention is focused on three main types of size effects, namely the statistical size effect due to randomness of strength, the energy release size effect, and the possible size effect due to fractality of fracture or microcracks. Definitive conclusions on the applicability of these theories are drawn. Subsequently, the article discusses the application of the known size effect law for the measurement of material fracture properties, and the modeling of the size effect by the cohesive crack model, nonlocal finite element models and discrete element models. Extensions to compression failure and to the rate-dependent material behavior are also outlined. The damage constitutive law needed for describing a microcracked material in the fracture process zone is discussed. Various applications to quasibrittle materials, including concrete, sea ice, fiber composites, rocks and ceramics are presented.

  11. Large scale tracking algorithms.

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  12. Analyzing the Feasibility of Teaching Critical Reading Strategies with the Abbreviated Version of English Literary Works

    董美珍

    2015-01-01

    Critical reading, an important approach to developing students' critical thinking in English teaching, has gained wide attention from scholars at home and abroad. Using abbreviated versions of English literary works as reading material for high school students, this article explores critical reading strategies suited to fiction, with the aim of helping students read critically and improve their critical thinking.

  13. Report on the draft of the law No. 1253 concerning the Revamping and Expanding Domestic Electricity Supply. Volume III. Appendices and Table of abbreviations; Rapport sur le projet de loi (no. 1253) relatif a la modernisation et au developpement du service public de l'electricite. Tome III. Annexes et Table des sigles

    Bataille, Christian [Assemblee Nationale, Paris (France)

    1999-02-11

    The third volume of the Report on behalf of the Production and Exchange Commission on the draft of the law No. 1253 concerning the Revamping and Expanding Domestic Electricity Supply contains Appendices. Appendix number 1 presents Directive 96/92/EC of the European Parliament and Council of 19 December 1996, concerning common rules for the internal electricity market. It contains the chapters titled: 1. Field of application and definitions; 2. General rules for sector organization; 3. Production; 4. Exploitation of the transport grid; 5. Exploitation of the distribution grid; 6. Accounting dissociation and transparency; 7. Organization of the grid access; 8. Final dispositions. Appendix number 2 gives law no. 46-628 of 8 April 1946, as modified, on the nationalization of electricity and gas. The third appendix reproduces Decree no. 55-662 of 20 May 1955 concerning relationships between the establishments covered by articles 2 and 23 of the law of 8 April 1946 and the autonomous producers of electric energy. Appendix number 4 contains the notification of the State Council of 7 July 1994 regarding the diversification of EDF and GDF activities. The fifth appendix is a chronological list of the European negotiations concerning the opening of the electricity market (1987-1997). Finally, a list of the following abbreviations is given: ART, ATR, CNES, CRE, CTE, DNN, FACE, FPE, GRT, IEG, INB, PPI, RAG and SICAE.

  14. Contact kinematics of biomimetic scales

    Dermal scales, prevalent across biological groups, considerably boost survival by providing multifunctional advantages. Here, we investigate the nonlinear mechanical effects of biomimetic scale-like attachments on the behavior of an elastic substrate brought about by the contact interaction of scales in pure bending, using qualitative experiments, analytical models, and detailed finite element (FE) analysis. Our results reveal the existence of three distinct kinematic phases of operation, spanning linear, nonlinear, and rigid behavior, driven by kinematic interactions of scales. The response of the modified elastic beam strongly depends on the size and spatial overlap of rigid scales. The nonlinearity is perceptible even in a relatively small strain regime and without invoking material-level complexities of either the scales or the substrate

  15. Scaling structure loads for SMA

    Lee, Dong Won; Song, Jeong Guk; Jeon, Sang Ho; Lim, Hak Kyu; Lee, Kwang Nam [KEPCO ENC, Yongin (Korea, Republic of)

    2012-10-15

    When a Seismic Margin Analysis (SMA) is conducted, generating new structural loads for the Seismic Margin Earthquake (SME) is time-consuming work. For convenience, EPRI NP-6041 suggests scaling the structure loads. The report recommends this approach for fixed-base (rock foundation) structures designed using either constant modal damping or modal damping ratios developed for a single material damping. For these cases, the SME loads can easily and accurately be calculated by scaling the spectral accelerations of the individual modes for the new SME response spectra. EPRI NP-6041 provides two simple methodologies for scaling structure seismic loads: the dominant-frequency scaling methodology and the mode-by-mode scaling methodology. Scaling an existing analysis to develop SME loads is much easier and more efficient than performing a new analysis. This paper compares the results of the two methodologies.
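    A minimal sketch of the mode-by-mode idea (variable names and numbers are mine, not from EPRI NP-6041): scale each modal response by the ratio of the SME to the design spectral acceleration at that mode's frequency, then recombine the modes, here by SRSS:

```python
import math

def scale_modal_loads(modal_loads, sa_design, sa_sme):
    """Scale each modal load by Sa_SME/Sa_design and combine by SRSS."""
    scaled = [load * (sme / design)
              for load, design, sme in zip(modal_loads, sa_design, sa_sme)]
    return math.sqrt(sum(s * s for s in scaled))

# Example with three dominant modes (illustrative numbers only):
loads  = [100.0, 40.0, 15.0]   # modal responses from the existing analysis
sa_old = [0.50, 0.80, 1.10]    # design-spectrum accelerations (g)
sa_new = [0.75, 1.00, 1.20]    # SME-spectrum accelerations (g)
print(f"SME load (SRSS): {scale_modal_loads(loads, sa_old, sa_new):.1f}")
```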

  16. International Symposia on Scale Modeling

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

    This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical processes in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale; offers engineers and designers a new point of view, liberating creative and inno...

  17. A Study of Developing Scale

    AKBAYIRLI, Yadigar; Aydin, Betül

    2013-01-01

    The focus of the study is to construct a reliable and valid scale measuring the competitiveness attitude of individuals. For the initial items, item analysis and factor analysis (principal components) studies were carried out. The scale is conceptualized as a unipolar scale, and Cronbach's alpha, internal consistency, Harst technique and test-retest reliability studies were completed. Validity studies were held in five successive stages and satisfactory results were attained. Key words: Competitiveness, ...

  18. Temperature scaling concept of MOSFET

    Masu, K.; Tsubouchi, K.

    1994-01-01

    Lowering both the threshold voltage (Vth) and the subthreshold swing (S) at the same time is essential for MOSFETs at and below 0.1 µm with low supply voltage. In this paper, we discuss a temperature scaling concept for MOSFETs and the device characteristics of the fabricated 77 K MOSFETs. In the temperature scaling concept, the physical quantities relating to potential are scaled with operation temperature, while the dimensional quantities are held constant. The distribution of mobile carri...

  19. Generic maximum likely scale selection

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based on…

  20. Institutions and the Scale Effect

    Trew, Alex

    2009-01-01

    Growth models which imply a scale effect are commonly refuted on the basis of empirical evidence. A focus on the extent of the market as opposed to the scale of the country has led recent studies to reconsider the role that country scale plays when conditioning on other factors. We consider a variant of a simple learning-by-doing model to account for the potential role of institutions in determining the strength (and direction) of the scale effect. Using cross-country data, we find a ...

  1. Scale invariance and renormalization group

    Scale invariance enabled the understanding of cooperative phenomena and the study of elementary interactions, such as phase transition phenomena, the Curie critical temperature and spin rearrangement in crystals. The renormalization group method, due to K. Wilson in 1971, allowed the study of collective phenomena, using an iterative process from smaller to larger scales, leading to universal properties and the description of matter state transitions or long-polymer behaviour; it also made it possible to reconsider quantum electrodynamics and its relation to time and distance scales

  2. Statistical inference across time scales

    Duval, Céline

    2011-01-01

    We investigate statistical inference across time scales. We take as toy model the estimation of the intensity of a discretely observed compound Poisson process with symmetric Bernoulli jumps. We have data at different time scales: microscopic, intermediate and macroscopic. We quantify the smooth statistical transition from a microscopic Poissonian regime to a macroscopic Gaussian regime. The classical quadratic variation estimator is efficient in both microscopic and macroscopic scales but surprisingly shows a substantial loss of information in the intermediate scale that can be explicitly related to the sampling rate. We discuss the implications of these findings beyond this idealised framework.
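    For intuition, the quadratic-variation estimator in the microscopic regime can be sketched as follows (a toy simulation of mine, not the paper's code; for ±1 jumps the squared increments sum to the jump count, so the quadratic variation divided by the observation window estimates the intensity):

```python
import random

def qv_intensity_estimate(increments, delta):
    """Quadratic-variation estimate of the jump intensity lambda:
    sum of squared increments divided by the total time span."""
    qv = sum(x * x for x in increments)
    return qv / (len(increments) * delta)

# Microscopic regime: at most ~one +/-1 jump per sampling interval delta.
rng = random.Random(0)
lam, delta, n = 2.0, 1e-3, 200_000
increments = [(1 if rng.random() < 0.5 else -1) if rng.random() < lam * delta else 0
              for _ in range(n)]
print(f"estimated intensity: {qv_intensity_estimate(increments, delta):.2f}")
```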

  3. Relating urban scaling, fundamental allometry, and density scaling

    Rybski, Diego

    2016-01-01

    We study the connection between urban scaling, fundamental allometry (between city population and city area), and per capita vs.\\ population density scaling. From simple analytical derivations we obtain the relation between the 3 involved exponents. We discuss particular cases and ranges of the exponents which we illustrate in a "phase diagram". As we show, the results are consistent with previous work.
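    The exponent relation can be sketched as follows (my notation, not necessarily the paper's): let an urban indicator scale as Y ∝ N^β, the fundamental allometry be A ∝ N^α, and the population density be ρ = N/A; then

```latex
% Per-capita quantity: y = Y/N \propto N^{\beta-1};
% density: \rho = N/A \propto N^{1-\alpha} \;\Rightarrow\; N \propto \rho^{1/(1-\alpha)}.
% Eliminating N gives the density-scaling exponent:
y \;\propto\; \rho^{\,\delta},
\qquad
\delta \;=\; \frac{\beta-1}{1-\alpha}.
```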

  4. Drift Scale THM Model

    J. Rutqvist

    2004-10-07

    This model report documents the drift scale coupled thermal-hydrological-mechanical (THM) processes model development and presents simulations of the THM behavior in fractured rock close to emplacement drifts. The modeling and analyses are used to evaluate the impact of THM processes on permeability and flow in the near-field of the emplacement drifts. The results from this report are used to assess the importance of THM processes on seepage and support in the model reports "Seepage Model for PA Including Drift Collapse" and "Abstraction of Drift Seepage", and to support arguments for exclusion of features, events, and processes (FEPs) in the analysis reports "Features, Events, and Processes in Unsaturated Zone Flow and Transport" and "Features, Events, and Processes: Disruptive Events". The total system performance assessment (TSPA) calculations do not use any output from this report. Specifically, the coupled THM process model is applied to simulate the impact of THM processes on hydrologic properties (permeability and capillary strength) and flow in the near-field rock around a heat-releasing emplacement drift. The heat generated by the decay of radioactive waste results in elevated rock temperatures for thousands of years after waste emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, resulting in water redistribution and altered flow paths. These temperatures will also cause thermal expansion of the rock, with the potential of opening or closing fractures and thus changing fracture permeability in the near-field. Understanding the THM coupled processes is important for the performance of the repository because the thermally induced permeability changes potentially affect the magnitude and spatial distribution of percolation flux in the vicinity of the drift, and hence the seepage of water into the drift. This is important because

  5. Scaling of the AP600 containment large scale test facility

The Large Scale Test (LST) Facility is an integral effects test facility which provides experimental data for validation of heat and mass transfer correlations used in WGOTHIC, the analysis code for the AP600 containment atmosphere. The LST tests consist of a series of pressurization transients in a 1/8th-scale containment vessel with simulated internal heat sinks and a passive containment cooling system (PCCS). These pressurization transients provide a range of transient as well as steady-state data. To establish the applicability of the LST data as an integral effects facility to the AP600 containment, top-down, system-level scaling and bottom-up scaling similar to that described by Zuber (Hierarchical, Two-Tiered Scaling Methodology, 1991) were used to license the AP600 containment. Top-down, system-level scaling applied to the AP600 containment and the LST facility is the focus of this paper. Since pressure is the parameter of primary importance for the AP600 containment, the top-down, system-level scaling effort focuses on physical phenomena or processes such as heat and mass transfer. A detailed order-of-magnitude analysis by Spencer et al. (1997) has shown break mass-energy release, convection heat transfer, and mass transfer (condensation and evaporation) to be the dominant phenomena associated with the AP600 passive containment. To capture the aggregate influence (on pressure) of these dominant phenomena inside the AP600 containment, a simplified Rate of Pressure Change (RPC) equation is used. Variables in this equation are normalized using reference values associated with containment. The resulting coefficients in the non-dimensional RPC equation represent scaling groups which are numerically evaluated for AP600 and the LST. By comparing the magnitudes of the ratios of the scaling groups for AP600 and LST, the applicability of LST data to AP600 is quantitatively assessed to judge whether the test facility is adequately scaled. Scaling ratios within the
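The final comparison step this abstract describes, forming ratios of non-dimensional scaling groups between model and prototype, can be sketched as follows. The group names, numerical values, and the acceptance band are all invented for illustration; only the procedure (compute model/prototype ratios, flag distortions) follows the abstract:

```python
# Hypothetical top-down scaling assessment: evaluate each dimensionless
# scaling group for prototype (AP600) and model (LST), take their ratio,
# and flag any group whose ratio falls outside an acceptance band.
groups_ap600 = {"break_energy": 1.00, "convection": 0.45, "condensation": 0.30}
groups_lst   = {"break_energy": 0.95, "convection": 0.50, "condensation": 0.28}

def scaling_ratios(model, prototype):
    """Ratio of each scaling group, model over prototype."""
    return {k: model[k] / prototype[k] for k in prototype}

ratios = scaling_ratios(groups_lst, groups_ap600)
# A ratio near 1 means the phenomenon is similarly weighted in both facilities.
well_scaled = {k: 0.5 <= r <= 2.0 for k, r in ratios.items()}
print(ratios, well_scaled)
```

A facility is judged adequately scaled when the dominant groups' ratios stay close to unity; large distortions signal phenomena the test cannot represent faithfully.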

  6. Voice, Schooling, Inequality, and Scale

    Collins, James

    2013-01-01

    The rich studies in this collection show that the investigation of voice requires analysis of "recognition" across layered spatial-temporal and sociolinguistic scales. I argue that the concepts of voice, recognition, and scale provide insight into contemporary educational inequality and that their study benefits, in turn, from paying attention to…

  7. Large-scale data analytics

    Gkoulalas-Divanis, Aris

    2014-01-01

Provides cutting-edge research in large-scale data analytics from diverse scientific areas; surveys varied subject areas and reports on individual results of research in the field; shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  8. A Scale of Mobbing Impacts

    Yaman, Erkan

    2012-01-01

    The aim of this research was to develop the Mobbing Impacts Scale and to examine its validity and reliability analyses. The sample of study consisted of 509 teachers from Sakarya. In this study construct validity, internal consistency, test-retest reliabilities and item analysis of the scale were examined. As a result of factor analysis for…

  9. Possible Scales of New Physics

    Dine, Michael

    1999-01-01

The biggest question in physics beyond the standard model is the scale of new physics. Ideas about scales, as well as experimental evidence and constraints, are surveyed for a variety of possible forms of new physics: supersymmetry, neutrino masses, unification, and superstring theory.

  10. Scaling and small-scale structure in cosmic string networks

    We examine the scaling properties of an evolving network of strings in Minkowski spacetime and study the evolution of length scales in terms of a three-scale model proposed by Austin, Copeland, and Kibble (ACK). We find good qualitative and some quantitative agreement between the model and our simulations. We also investigate small-scale structure by altering the minimum allowed size for loop production Ec. Certain quantities depend significantly on this parameter: for example, the scaling density can vary by a factor of 2 or more with increasing Ec. Small-scale structure as defined by ACK disappears if no restrictions are placed on loop production, and the fractal dimension of the string changes smoothly from 2 to 1 as the resolution scale is decreased. Loops are nearly all produced at the lattice cutoff. We suggest that the lattice cutoff should be interpreted as corresponding to the string width, and that in a real network loops are actually produced with this size. This leads to a radically different string scenario, with particle production rather than gravitational radiation being the dominant mode of energy dissipation. At the very least, a better understanding of the discretization effects in all simulations of cosmic strings is called for. copyright 1997 The American Physical Society

  11. Multi-scale brain networks

    Betzel, Richard F

    2016-01-01

    The network architecture of the human brain has become a feature of increasing interest to the neuroscientific community, largely because of its potential to illuminate human cognition, its variation over development and aging, and its alteration in disease or injury. Traditional tools and approaches to study this architecture have largely focused on single scales -- of topology, time, and space. Expanding beyond this narrow view, we focus this review on pertinent questions and novel methodological advances for the multi-scale brain. We separate our exposition into content related to multi-scale topological structure, multi-scale temporal structure, and multi-scale spatial structure. In each case, we recount empirical evidence for such structures, survey network-based methodological approaches to reveal these structures, and outline current frontiers and open questions. Although predominantly peppered with examples from human neuroimaging, we hope that this account will offer an accessible guide to any neuros...

  12. Important Scaling Parameters for Testing Model-Scale Helicopter Rotors

    Singleton, Jeffrey D.; Yeager, William T., Jr.

    1998-01-01

An investigation into the effects of aerodynamic and aeroelastic scaling parameters on model-scale helicopter rotors has been conducted in the NASA Langley Transonic Dynamics Tunnel. The effect of varying Reynolds number, blade Lock number, and structural elasticity on rotor performance has been studied, and the performance results are discussed herein for two different rotor blade sets at two rotor advance ratios. One set of rotor blades was rigid and the other set was dynamically scaled to be representative of a main rotor design for a utility-class helicopter. The investigation was conducted in a test medium whose variable density permits the acquisition of data for several Reynolds and Lock number combinations.

  13. Scale effect on overland flow connectivity at the plot scale

    A. Peñuela

    2012-06-01

A major challenge in present-day hydrological sciences is to enhance the performance of existing distributed hydrological models through a better description of subgrid processes, in particular the subgrid connectivity of flow paths. The relative surface connection function (RSC) was proposed by Antoine et al. (2009) as a functional indicator of runoff flow connectivity. For a given area, it expresses the percentage of the surface connected to the outflow boundary (C) as a function of the degree of filling of the depression storage. This function explicitly integrates the flow network at the soil surface and hence provides essential information regarding the flow paths' connectivity. It has been shown that this function could help improve the modeling of the hydrogram at the square meter scale, yet it is unknown how the scale affects the RSC function, and whether and how it can be extrapolated to other scales. The main objective of this research is to study the scale effect on overland flow connectivity (the RSC function). For this purpose, digital elevation data of a real field (9 × 3 m) and three synthetic fields (6 × 6 m) with contrasting hydrological responses were used, and the RSC function was calculated at different scales by changing the length (l) or width (w) of the field. Border effects, at different extents depending on the microtopography, were observed for the smaller scales, when decreasing l or w, which resulted in a strong decrease or increase of the maximum depression storage, respectively. There was no scale effect on the RSC function when changing w. On the contrary, a remarkable scale effect was observed in the RSC function when changing l. In general, for a given degree of filling of the depression storage, C decreased as l increased. This change in C was inversely proportional to the change in l. This observation applied only up to approx. 50–70
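The RSC concept can be illustrated with a toy grid model. This is a deliberate simplification, not the authors' algorithm: each cell is treated as transmitting once its depression storage (depths invented below) is filled, and C is the fraction of cells connected through transmitting neighbours to the outflow boundary, taken here as the bottom row:

```python
from collections import deque

def rsc(depths, fill):
    """Toy relative surface connection function: fraction of cells
    connected (4-neighbour paths) to the bottom/outflow boundary once
    every depression shallower than `fill` is full."""
    rows, cols = len(depths), len(depths[0])
    open_ = [[depths[r][c] <= fill for c in range(cols)] for r in range(rows)]
    seen, q = set(), deque()
    for c in range(cols):                     # seed from the outflow boundary
        if open_[rows - 1][c]:
            seen.add((rows - 1, c)); q.append((rows - 1, c))
    while q:                                  # breadth-first flood of connectivity
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and open_[rr][cc] and (rr, cc) not in seen:
                seen.add((rr, cc)); q.append((rr, cc))
    return len(seen) / (rows * cols)

depths = [[0.2, 0.9, 0.1],
          [0.1, 0.8, 0.1],
          [0.1, 0.1, 0.1]]
print(rsc(depths, 0.5))   # deep central depressions still block connection
print(rsc(depths, 1.0))   # all storage filled -> fully connected, 1.0
```

Sweeping `fill` from 0 to the maximum depth traces out the C-versus-filling curve that the study computes for fields of different lengths and widths.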

  14. Universal finite-size scaling amplitudes in anisotropic scaling

    Phenomenological scaling arguments suggest the existence of universal amplitudes in the finite-size scaling of certain correlation lengths in strongly anisotropic or dynamical phase transitions. For equilibrium systems, provided that translation invariance and hyperscaling are valid, the Privman-Fisher scaling form of isotropic equilibrium phase transitions is readily generalized. For non-equilibrium systems, universality is shown analytically for directed percolation and is tested numerically in the annihilation-coagulation model and in the pair contact process with diffusion. In these models, for both periodic and free boundary conditions, the universality of the finite-size scaling amplitude of the leading relaxation time is checked. Amplitude universality reveals strong transient effects along the active-inactive transition line in the pair contact process. (author)

  15. A Scale to Measure Superstition

    Md. M. Huque

    2007-01-01

The purpose of the study was to construct a scale to measure superstition in a rural setting. A total of 31 statements or items expressing superstition were collected through reviewing relevant literature and consultation with extension experts, social scientists, progressive farmers and local leaders. Statements were carefully examined and edited as per the criteria of Edwards[1]. The statements were employed to the rating by a battery of judges selected from Tilli union under Saturia upazila of Manikganj. Scale values (S) and inter-quartile range values (Q) were computed for these statements. Twenty-two statements were selected for preparation of a draft scale considering their scale and inter-quartile range values. The draft scale was administered on 100 randomly selected maize farmers of four villages of Tilli union under Saturia upazila. The critical ratio (t) was calculated for each of the statements. Finally, 20 statements having t ≥ 1.75 were retained in the scale. Both reliability and validity of the scale were ascertained.
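The item statistics named here, a scale value S and an inter-quartile range Q per statement, can be computed from judges' category ratings as in this minimal sketch (ratings invented; real Thurstone-style successive-interval scaling interpolates within rating categories rather than using raw quantiles):

```python
import statistics

def scale_value_and_q(ratings):
    """Per-item statistics from a battery of judges' category ratings:
    S = median rating, Q = inter-quartile range (Q3 - Q1).
    Items with small Q (high judge agreement) are preferred."""
    s = statistics.median(ratings)
    q1, _, q3 = statistics.quantiles(ratings, n=4, method="inclusive")
    return s, q3 - q1

ratings = [2, 3, 3, 4, 4, 4, 5, 5, 6, 7]   # ten hypothetical judges
print(scale_value_and_q(ratings))
```

Statements are then kept or dropped by comparing their S and Q values, as the abstract describes for the 22-item draft scale.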

  16. Multiple scale mesh free analysis

    Recent developments of mesh free and multi-scale methods and their applications in applied mechanics are surveyed. Three major methodologies are reviewed. First, smoothed particle hydrodynamics (SPH) is discussed as a representative of a non-local kernel, strong form collocation approach. Second, mesh-free Galerkin methods, which have been active research area in recent years, are reviewed. Third, some applications of molecular dynamics (MD) in applied mechanics are discussed. The emphases of this survey are placed on simulations of finite deformations, fracture, shear bands, multi-scale methods, and nano-scale mechanics. Refs. 13 (author)

  17. Japanese large-scale interferometers

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D

  18. Urban transfer entropy across scales

    Murcio, Roberto; Gershenson, Carlos; Batty, Michael

    2015-01-01

    The morphology of urban agglomeration is studied here in the context of information exchange between different spatio-temporal scales. Cities are multidimensional non-linear phenomena, so understanding the relationships and connectivity between scales is important in determining how the interplay of local/regional urban policies may affect the distribution of urban settlements. In order to quantify these relationships, we follow an information theoretic approach using the concept of Transfer Entropy. Our analysis is based on a stochastic urban fractal model, which mimics urban growing settlements and migration waves. The results indicate how different policies could affect urban morphology in terms of the information generated across geographical scales.
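Transfer entropy, the quantity this record relies on, has a standard plug-in estimator: T(Y→X) = Σ p(x₁, x₀, y₀) log₂[ p(x₁|x₀, y₀) / p(x₁|x₀) ]. The toy sketch below uses binary series and invented data, not the paper's urban fractal model, just to show the computation:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in transfer entropy T(Y -> X) for two equal-length
    discrete series: how much the past of y reduces uncertainty
    about the next value of x beyond x's own past."""
    n = len(x) - 1
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_next, x_now, y_now)
    pairs = Counter(zip(x[1:], x[:-1]))             # (x_next, x_now)
    hist = Counter(zip(x[:-1], y[:-1]))             # (x_now, y_now)
    marg = Counter(x[:-1])                          # x_now
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / hist[(x0, y0)]            # p(x1 | x0, y0)
        p_cond_x = pairs[(x1, x0)] / marg[x0]       # p(x1 | x0)
        te += p_joint * log2(p_cond_full / p_cond_x)
    return te

y = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
x = [0] + y[:-1]                 # x copies y with a one-step lag
print(transfer_entropy(x, y))    # strongly positive: y drives x
print(transfer_entropy(y, x))
```

The asymmetry between the two directions is what makes transfer entropy useful for detecting which scale (or settlement layer) is informationally driving another.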

  19. Psychometric Properties of Work-Related Behavior and Experience Patterns (AVEM) Scale

    Gencer, R. Timucin; Boyacioglu, Hayal; Kiremitci, Olcay; Dogan, Birol

    2010-01-01

    "Work-Related Behaviour and Experience Patterns" (AVEM) has been developed with the intention of determining the occupation related behaviour and lifestyle models of professionals. This study has been conducted to test the validity and reliability of MEDYAM, the abbreviated Turkish equivalent of AVEM. 373 teachers from 10 different primary and…

  20. Local Scale Invariance and Inflation

    Singh, Naveen K

    2016-01-01

We study the inflation and the cosmological perturbations generated during the inflation in a local scale invariant model. The local scale invariant model introduces a vector field $S_{\mu}$ in this theory. In this paper, for simplicity, we consider the temporal part of the vector field, $S_t$. We show that the temporal part is associated with the slow roll parameter of the scalar field. Due to local scale invariance, we have a gauge degree of freedom. In a particular gauge, we show that the local scale invariance provides a sufficient number of e-foldings for the inflation. Finally, we estimate the power spectrum of scalar perturbation in terms of the parameters of the theory.

  1. Scaling laws of turbulent dynamos

    Fauve, Stephan; Petrelis, Francois

    2007-01-01

    We consider magnetic fields generated by homogeneous isotropic and parity invariant turbulent flows. We show that simple scaling laws for dynamo threshold, magnetic energy and Ohmic dissipation can be obtained depending on the value of the magnetic Prandtl number.

  2. Dimensional scaling in chemical physics

    Avery, John; Goscinski, Osvaldo

    1993-01-01

    Dimensional scaling offers a new approach to quantum dynamical correlations. This is the first book dealing with dimensional scaling methods in the quantum theory of atoms and molecules. Appropriately, it is a multiauthor production, derived chiefly from papers presented at a workshop held in June 1991 at the Ørsted Institute in Copenhagen. Although focused on dimensional scaling, the volume includes contributions on other unorthodox methods for treating nonseparable dynamical problems and electronic correlation. In shaping the book, the editors serve three needs: an introductory tutorial for this still fledgling field; a guide to the literature; and an inventory of current research results and prospects. Part I treats basic aspects of dimensional scaling. Addressed to readers entirely unfamiliar with the subject, it provides both a qualitative overview, and a tour of elementary quantum mechanics. Part II surveys the research frontier. The eight chapters exemplify current techniques and outline results. Part...

  3. Pilot Scale Advanced Fogging Demonstration

    Demmer, Rick L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Fox, Don T. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Archiblad, Kip E. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-01-01

    Experiments in 2006 developed a useful fog solution using three different chemical constituents. Optimization of the fog recipe and use of commercially available equipment were identified as needs that had not been addressed. During 2012 development work it was noted that low concentrations of the components hampered coverage and drying in the United Kingdom’s National Nuclear Laboratory’s testing much more so than was evident in the 2006 tests. In fiscal year 2014 the Idaho National Laboratory undertook a systematic optimization of the fogging formulation and conducted a non-radioactive, pilot scale demonstration using commercially available fogging equipment. While not as sophisticated as the equipment used in earlier testing, the new approach is much less expensive and readily available for smaller scale operations. Pilot scale testing was important to validate new equipment of an appropriate scale, optimize the chemistry of the fogging solution, and to realize the conceptual approach.

  4. Fluid dynamics: Swimming across scales

    Baumgart, Johannes; Friedrich, Benjamin M.

    2014-10-01

    The myriad creatures that inhabit the waters of our planet all swim using different mechanisms. Now, a simple relation links key physical observables of underwater locomotion, on scales ranging from millimetres to tens of metres.

  5. Hidden scale invariance of metals

Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.; Pedersen, Ulf R.

    2015-01-01

Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general “hidden” scale invariance of metals making the condensed part of the thermodynamic phase diagram effectively one dimensional with respect to structure and dynamics. DFT computed density scaling exponents, related to the Grüneisen parameter, are in good agreement with experimental values for the 16 elements where reliable data were available. Hidden scale invariance is demonstrated in detail for magnesium by showing invariance of structure and dynamics. Computed melting curves of period three metals follow curves with invariance (isomorphs). The experimental structure factor of magnesium is predicted by assuming scale invariant...

  6. Scale issues in remote sensing

    Weng, Qihao

    2014-01-01

    This book provides up-to-date developments, methods, and techniques in the field of GIS and remote sensing and features articles from internationally renowned authorities on three interrelated perspectives of scaling issues: scale in land surface properties, land surface patterns, and land surface processes. The book is ideal as a professional reference for practicing geographic information scientists and remote sensing engineers as well as a supplemental reading for graduate level students.

  7. Development of Dyadic Relationship Scale

    AVCI, Özlem HASKAN

    2014-01-01

    Problem Statement: The rise of premarital studies brings along questions about the evaluation of effectiveness of educational programs developed for preparing young individuals for marriage and family life. Purpose of Study: The purpose of this study is to develop Dyadic Relationship Scale for university students. This study introduces Dyadic Relationship Scale (DRS) developed on the basis of Turkish culture. Methods: Validity and reliability studies for the DRS were conducted with the partic...

  8. Measurement scale for colour perception

    Benoit, Eric

    2012-01-01

    The colour, with the particularity to be defined simultaneously as a physical quantity and as a psychophysical quantity, is one of the concepts that can link hard sciences and behavioural sciences. From the viewpoint of behavioural sciences colours are basically measured with nominal scales, and in hard science colours are measured with interval scales. Our hypothesis is that the main relation that must be preserved during colour measurement is a metric. We suggest then that colours must be m...

  9. Scaling exponents of star polymers

    von Ferber, Christian; Holovatch, Yurij

    2002-01-01

    We review recent results of the field theoretical renormalization group analysis on the scaling properties of star polymers. We give a brief account of how the numerical values of the exponents governing the scaling of star polymers were obtained as well as provide some examples of the phenomena governed by these exponents. In particular we treat the interaction between star polymers in a good solvent, the Brownian motion near absorbing polymers, and diffusion-controlled reactions involving p...

  10. Content adaptive screen image scaling

    Zhai, Yao; Wang, Qifei; Lu, Yan; Li, Shipeng

    2015-01-01

    This paper proposes an efficient content adaptive screen image scaling scheme for the real-time screen applications like remote desktop and screen sharing. In the proposed screen scaling scheme, a screen content classification step is first introduced to classify the screen image into text and pictorial regions. Afterward, we propose an adaptive shift linear interpolation algorithm to predict the new pixel values with the shift offset adapted to the content type of each pixel. The shift offse...

  11. Normalization of emotion control scale

    Hojatoolah Tahmasebian

    2014-09-01

Background: Emotion control skill teaches individuals how to identify their emotions and how to express and control them in various situations. The aim of this study was to normalize and measure the internal and external validity and reliability of an emotion control test. Methods: This standardization study was carried out on a statistical society, including all pupils, students, teachers, nurses and university professors in Kermanshah in 2012, using Williams' emotion control scale. The subjects included 1,500 (810 females and 690 males) people who were selected by stratified random sampling. Williams' (1997) emotion control scale was used to collect the required data. The Emotional Control Scale is a tool for measuring the degree of control people have over their emotions. This scale has four subscales: anger, depressed mood, anxiety and positive affect. The collected data were analyzed by SPSS software using correlation and Cronbach's alpha tests. Results: The internal consistency of the questionnaire, reported by Cronbach's alpha, indicated an acceptable internal consistency for the emotional control scale, and the correlation between the subscales of the test and between the items of the questionnaire was significant at the 0.01 confidence level. Conclusion: The validity of the emotion control scale among pupils, students, teachers and nurses in Iran has an acceptable range, and the test items were correlated with each other, thereby making them appropriate for measuring emotion control.
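The Cronbach's alpha statistic used in this study has a simple closed form, α = k/(k−1) · (1 − Σ item variances / variance of totals). A minimal sketch with invented item scores (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns, each column
    holding one item's scores across all respondents. Uses sample
    variances (n-1 denominator)."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Three items answered by five respondents (invented Likert-type scores).
items = [[3, 4, 2, 5, 4],
         [2, 4, 3, 5, 3],
         [3, 5, 2, 4, 4]]
print(cronbach_alpha(items))  # ~0.87, conventionally "acceptable" internal consistency
```

Values of roughly 0.7 and above are conventionally read as acceptable internal consistency, which is the criterion the Results section invokes.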

  12. The Internet Gaming Disorder Scale.

    Lemmens, Jeroen S; Valkenburg, Patti M; Gentile, Douglas A

    2015-06-01

    Recently, the American Psychiatric Association included Internet gaming disorder (IGD) in the appendix of the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). The main aim of the current study was to test the reliability and validity of 4 survey instruments to measure IGD on the basis of the 9 criteria from the DSM-5: a long (27-item) and short (9-item) polytomous scale and a long (27-item) and short (9-item) dichotomous scale. The psychometric properties of these scales were tested among a representative sample of 2,444 Dutch adolescents and adults, ages 13-40 years. Confirmatory factor analyses demonstrated that the structural validity (i.e., the dimensional structure) of all scales was satisfactory. Both types of assessment (polytomous and dichotomous) were also reliable (i.e., internally consistent) and showed good criterion-related validity, as indicated by positive correlations with time spent playing games, loneliness, and aggression and negative correlations with self-esteem, prosocial behavior, and life satisfaction. The dichotomous 9-item IGD scale showed solid psychometric properties and was the most practical scale for diagnostic purposes. Latent class analysis of this dichotomous scale indicated that 3 groups could be discerned: normal gamers, risky gamers, and disordered gamers. On the basis of the number of people in this last group, the prevalence of IGD among 13- through 40-year-olds in the Netherlands is approximately 4%. If the DSM-5 threshold for diagnosis (experiencing 5 or more criteria) is applied, the prevalence of disordered gamers is more than 5%. PMID:25558970
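The DSM-5 cut described at the end of the abstract, counting a respondent as disordered when 5 or more of the 9 dichotomous criteria are endorsed, amounts to a simple threshold rule. Respondent data below are invented for illustration:

```python
def classify(criteria_met, threshold=5):
    """DSM-5-style cut: a respondent endorsing `threshold` or more of
    the 9 dichotomous IGD criteria counts as disordered."""
    return sum(criteria_met) >= threshold

respondents = [
    [1, 1, 1, 1, 1, 0, 0, 0, 0],   # 5 criteria -> disordered
    [1, 1, 0, 0, 0, 0, 0, 0, 0],   # 2 criteria -> not disordered
    [1, 1, 1, 1, 1, 1, 1, 0, 0],   # 7 criteria -> disordered
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
]
prevalence = sum(classify(r) for r in respondents) / len(respondents)
print(prevalence)  # -> 0.5
```

The study's ~4% versus >5% prevalence contrast comes precisely from choosing between the latent-class grouping and this fixed 5-of-9 threshold.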

  13. Re-scaling Landscape. Re-scaling Identity

    Julia Sulina

    2012-05-01

To understand the bonds cultural groups living in Estonia have with their cultural landscape, and why they identify themselves with a particular territory (region), the general process of presenting the landscape's role in their identity needs to be analysed. Scales of landscape and regional identity of cultural groups are examined as belonging to different historical social formation periods, including the present day, also taking into account the relationship between identity and physical setting, as well as the results of questionnaires and previous studies. The tendency is that, in becoming more open, society is influenced by globalisation, new technologies and freedom of movement, thus changing both the identities and the scales of landscapes.

  14. Scalings and relative scalings in the Navier-Stokes turbulence

    High-resolution direct numerical simulations of 3D Navier-Stokes turbulence with normal viscosity and hyperviscosity are carried out. It is found that the inertial-range statistics, both the scalings and the probability density functions, are independent of the dissipation mechanism, while the near-dissipation-range fluctuations show significant structural differences. Nevertheless, the relative scalings expressing the dependence of the moments at different orders are universal, and show unambiguous departure from the Kolmogorov 1941 description, including the 2/3 law for the kinetic energy. Implications for numerical modeling of turbulence are discussed. copyright 1996 The American Physical Society

  15. Scaling Effect In Trade Network

    Konar, M.; Lin, X.; Rushforth, R.; Ruddell, B. L.; Reimer, J.

    2015-12-01

    Scaling is an important issue in the physical sciences. Economic trade is increasingly of interest to the scientific community due to the natural resources (e.g. water, carbon, nutrients, etc.) embodied in traded commodities. Trade refers to the spatial and temporal redistribution of commodities, and is typically measured annually between countries. However, commodity exchange networks occur at many different scales, though data availability at finer temporal and spatial resolution is rare. Exchange networks may prove an important adaptation measure to cope with future climate and economic shocks. As such, it is essential to understand how commodity exchange networks scale, so that we can understand opportunities and roadblocks to the spatial and temporal redistribution of goods and services. To this end, we present an empirical analysis of trade systems across three spatial scales: global, sub-national in the United States, and county-scale in the United States. We compare and contrast the network properties, the self-sufficiency ratio, and performance of the gravity model of trade for these three exchange systems.
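The gravity model of trade evaluated in this abstract predicts flows proportional to the product of the regions' economic masses, decaying with distance. A minimal sketch with an inverse-power distance term (parameters and numbers invented, not fitted to any of the three datasets):

```python
def gravity_flow(mass_i, mass_j, distance, g=1.0, beta=2.0):
    """Gravity model of trade: predicted flow between regions i and j
    is proportional to the product of their economic masses divided by
    distance raised to a decay exponent beta."""
    return g * mass_i * mass_j / distance ** beta

# Doubling the distance at beta = 2 quarters the predicted flow.
near = gravity_flow(100.0, 50.0, 10.0)
far = gravity_flow(100.0, 50.0, 20.0)
print(near / far)  # -> 4.0
```

In practice the parameters g and beta are fitted (often by log-linear regression) separately at each spatial scale, which is how the model's scale-dependence can be assessed.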

  16. Strength scaling in fiber composites

    Kellas, Sotiris; Morton, John

    1991-01-01

A research program was initiated to study and isolate the factors responsible for scale effects in the tensile strength of graphite/epoxy composite laminates. Four layups were chosen with appropriate stacking sequences so as to highlight individual and interacting failure modes. Four scale sizes were selected for investigation: full scale, 3/4, 2/4, and 1/4, with n = 4, 3, 2, and 1, respectively. The full-scale specimen size was 32 plies thick, as compared to 24, 16, and 8 plies for the 3/4, 2/4, and 1/4 specimen sizes, respectively. Results were obtained in the form of tensile strength, stress-strain curves and damage development. Problems associated with strength degradation with increasing specimen size are isolated and discussed. Inconsistencies associated with strain measurements were also identified. Enhanced X-ray radiography was employed for damage evaluation, following step loading. It was shown that fiber-dominated layups were less sensitive to scaling effects compared to the matrix-dominated layups.

  17. Hidden scale invariance of metals

    Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.; Pedersen, Ulf R.

    2015-11-01

Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general "hidden" scale invariance of metals making the condensed part of the thermodynamic phase diagram effectively one dimensional with respect to structure and dynamics. DFT computed density scaling exponents, related to the Grüneisen parameter, are in good agreement with experimental values for the 16 elements where reliable data were available. Hidden scale invariance is demonstrated in detail for magnesium by showing invariance of structure and dynamics. Computed melting curves of period three metals follow curves with invariance (isomorphs). The experimental structure factor of magnesium is predicted by assuming scale invariant inverse power-law (IPL) pair interactions. However, crystal packings of several transition metals (V, Cr, Mn, Fe, Nb, Mo, Ta, W, and Hg), most post-transition metals (Ga, In, Sn, and Tl), and the metalloids Si and Ge cannot be explained by the IPL assumption. The virial-energy correlation coefficients of iron and phosphorus are shown to increase at elevated pressures. Finally, we discuss how scale invariance explains the Grüneisen equation of state and a number of well-known empirical melting and freezing rules.

  18. Featured Invention: Laser Scaling Device

    Dunn, Carol Anne

    2008-01-01

    In September 2003, NASA signed a nonexclusive license agreement with Armor Forensics, a subsidiary of Armor Holdings, Inc., for the laser scaling device under the Innovative Partnerships Program. Coupled with a measuring program, also developed by NASA, the unit provides crime scene investigators with the ability to shoot photographs at scale without having to physically enter the scene, analyzing details such as bloodspatter patterns and graffiti. This ability keeps the scene's components intact and pristine for the collection of information and evidence. The laser scaling device elegantly solved a pressing problem for NASA's shuttle operations team and also provided industry with a useful tool. For NASA, the laser scaling device is still used to measure divots or damage to the shuttle's external tank and other structures around the launchpad. When the invention also met similar needs within industry, the Innovative Partnerships Program provided information to Armor Forensics for licensing and marketing the laser scaling device. Jeff Kohler, technology transfer agent at Kennedy, added, "We also invited a representative from the FBI's special photography unit to Kennedy to meet with Armor Forensics and the innovator. Eventually the FBI ended up purchasing some units. Armor Forensics is also beginning to receive interest from DoD [Department of Defense] for use in military crime scene investigations overseas."

  19. Characteristic Scales in Galaxy Formation

    Dekel, A

    2004-01-01

    Recent data, e.g. from SDSS and 2dF, reveal a robust bi-modality in the distribution of galaxy properties, with a characteristic transition scale at stellar mass M_*~3x10^{10} Msun (near L_*), corresponding to virial velocity V~100 km/s. Smaller galaxies tend to be blue disks of young populations. They define a "fundamental line" of decreasing surface brightness, metallicity and velocity with decreasing M_*, which extends to the smallest dwarf galaxies. Galaxies above the critical scale are dominated by red spheroids of old populations, with roughly constant high surface brightness and metallicity, and they tend to host AGNs. A minimum in the virial M/L is obtained at the same magic scale. This bi-modality can be the combined imprint of several different physical processes. On smaller scales, disks are built by cold flows, and supernova feedback is effective in regulating star formation. On larger scales, the infalling gas is heated by a virial shock and star formation can be suppressed by AGN feedback. Anothe...

  20. Scales of Natural Flood Management

    Nicholson, Alex; Quinn, Paul; Owen, Gareth; Hetherington, David; Piedra Lara, Miguel; O'Donnell, Greg

    2016-04-01

    The scientific field of Natural Flood Management (NFM) is receiving much attention and is now widely seen as a valid solution for sustainably managing flood risk whilst offering significant multiple benefits. However, few examples exist examining NFM at a large scale (>10 km2). Well-implemented NFM has the effect of restoring more natural catchment hydrological and sedimentological processes, which in turn can have significant flood-risk and WFD benefits for catchment waterbodies. These catchment-scale improvements in turn allow more 'natural' processes to be returned to rivers and streams, creating a more resilient system. Although certain NFM interventions may appear distant and disconnected from main-stem waterbodies, they will undoubtedly be contributing to WFD objectives at the catchment waterbody scale. This paper offers examples of NFM and explains how its benefits can be maximised through practical design across many scales (from individual feature up to the whole catchment). New tools are presented to assist in the selection of measures and their location: firstly, to appreciate the flooding benefit at the local catchment scale, and then a Flood Impact Model that can best reflect the impacts of local changes further downstream. The tools will be discussed in the context of our most recent experiences on NFM projects, including river catchments in the north east of England and in Scotland. This work has encouraged a more integrated approach to flood management planning that can use both traditional and novel NFM strategies in an effective and convincing way.

  1. Single-Scale Natural SUSY

    Randall, Lisa

    2012-01-01

    We consider the prospects for natural SUSY models consistent with current data. Recent constraints make the standard paradigm unnatural so we consider what could be a minimal extension consistent with what we now know. The most promising such scenarios extend the MSSM with new tree-level Higgs interactions that can lift its mass to at least 125 GeV and also allow for flavor-dependent soft terms so that the third generation squarks are lighter than current bounds on the first and second generation squarks. We argue that a common feature of almost all such models is the need for a new scale near 10 TeV, such as a scale of Higgsing or confinement of a new gauge group. We consider the question whether such a model can naturally derive from a single mass scale associated with supersymmetry breaking. Most such models simply postulate new scales, leaving their proximity to the scale of MSSM soft terms a mystery. This coincidence problem may be thought of as a mild tuning, analogous to the usual mu problem. We find t...

  2. The scaling of attention networks

    Wang, Cheng-Jun; Wu, Lingfei

    2016-04-01

    We use clicks as a proxy of collective attention and construct networks to study the temporal dynamics of attention. In particular, we collect the browsing records of millions of users on 1000 Web forums over two months. In the constructed networks, nodes are threads and edges represent the switching of users between threads within an hour. The investigated network properties include the number of threads N, the number of users UV, and the number of clicks PV. We find scaling functions PV ∼ UV^θ1, PV ∼ N^θ3, and UV ∼ N^θ2, in which the scaling exponents are always greater than 1. This means that (1) the studied networks maintain a self-similar flow structure in time, i.e., large networks are simply the scale-up versions of small networks; and (2) large networks are more "productive", in the sense that an average user would generate more clicks in the larger systems. We propose a revised version of Zipf's law to quantify the time-invariant flow structure of attention networks and relate it to the observed scaling properties. We also demonstrate the applied consequences of our research: forum classification based on scaling properties.
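
    As an illustrative sketch (synthetic data, not from the study), a scaling exponent such as θ1 in PV ∼ UV^θ1 can be estimated as the slope of an ordinary least-squares fit on log-log axes:

    ```python
    import numpy as np

    # Synthetic forum data: UV (users) spanning three decades, PV (clicks)
    # generated with a known exponent of 1.2 plus multiplicative noise.
    rng = np.random.default_rng(42)
    UV = np.logspace(2, 5, 60)
    PV = UV**1.2 * np.exp(rng.normal(0.0, 0.1, 60))

    # The slope of log(PV) vs log(UV) estimates the scaling exponent theta_1;
    # a value > 1 indicates the superlinear ("more productive") regime.
    theta1, intercept = np.polyfit(np.log(UV), np.log(PV), 1)
    print(f"theta_1 ≈ {theta1:.2f}")
    ```

    The same log-log regression applies to the other two relations, PV ∼ N^θ3 and UV ∼ N^θ2.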

  3. Visions of Atomic Scale Tomography

    Kelly, T. F. [Cameca Instruments; Miller, Michael K [ORNL; Rajan, Krishna [Iowa State University; Ringer, S. P. [University of Sydney, Australia

    2012-01-01

    A microscope, by definition, provides structural and analytical information about objects that are too small to see with the unaided eye. From the very first microscope, efforts to improve its capabilities and push them to ever-finer length scales have been pursued. In this context, it would seem that the concept of an ultimate microscope would have received much attention by now; but has it really ever been defined? Human knowledge extends to structures on a scale much finer than atoms, so it might seem that a proton-scale microscope or a quark-scale microscope would be the ultimate. However, we argue that an atomic-scale microscope is the ultimate for the following reason: the smallest building block for either synthetic structures or natural structures is the atom. Indeed, humans and nature both engineer structures with atoms, not quarks. So far as we know, all building blocks (atoms) of a given type are identical; it is the assembly of the building blocks that makes a useful structure. Thus, would a microscope that determines the position and identity of every atom in a structure with high precision and for large volumes be the ultimate microscope? We argue, yes. In this article, we consider how it could be built, and we ponder the answer to the equally important follow-on questions: who would care if it is built, and what could be achieved with it?

  4. Flavor hierarchies from dynamical scales

    Panico, Giuliano

    2016-01-01

    One main obstacle for any beyond the SM (BSM) scenario solving the hierarchy problem is its potentially large contributions to electric dipole moments. An elegant way to avoid this problem is to have the light SM fermions couple to the BSM sector only through bilinears, $\bar f f$. This possibility can be neatly implemented in composite Higgs models. We study the implications of dynamically generating the fermion Yukawa couplings at different scales, relating larger scales to lighter SM fermions. We show that all flavor and CP-violating constraints can be easily accommodated for a BSM scale of a few TeV, without requiring any extra symmetry. Contributions to B physics are mainly mediated by the top, giving a predictive pattern of deviations in $\Delta F=2$ and $\Delta F=1$ flavor observables that could be seen in future experiments.

  5. Critical Multicultural Education Competencies Scale: A Scale Development Study

    Acar-Ciftci, Yasemin

    2016-01-01

    The purpose of this study is to develop a scale in order to identify the critical multicultural education competencies of teachers. For this reason, first of all, drawing on the knowledge in the literature, a new conceptual framework was created with a deductive method based on critical theory, critical race theory and critical multicultural…

  6. Fish scale development: Hair today, teeth and scales yesterday?

    Sharpe, P T

    2001-09-18

    A group of genes in the tumour necrosis factor signalling pathway are mutated in humans and mice with ectodermal dysplasias--a failure of hair and tooth development. A mutation has now been identified in one of these genes, ectodysplasin-A receptor, in the teleost fish Medaka, that results in a failure of scale formation. PMID:11566120

  7. Optical properties of bud scales and protochlorophyll(ide) forms in leaf primordia of closed and opened buds.

    Solymosi, Katalin; Böddi, Béla

    2006-08-01

    The transmission spectra of bud scales of 14 woody species and the 77 K fluorescence emission spectra of the innermost leaf primordia of closed and opened buds of 37 woody species were studied. Pigment concentrations were determined in some species. Bud scales had low transmittance between 400 and 680 nm with a local minimum around 680 nm. Transmittance increased steeply above 680 nm and was > 80% in the 700-800 nm spectral region. Significant protochlorophyllide (Pchlide) accumulation was observed in leaf primordia of tightly packed, closed buds with relatively thick, dark bud scales. In common ash (Fraxinus excelsior L.) and Hungarian ash (Fraxinus angustifolia Vahl.), the innermost leaf primordia of the closed buds contained protochlorophyll (Pchl) and Pchlide (abbreviated as Pchl(ide)), but no chlorophyll. We observed Pchl(ide) forms with emission maxima at 633, 643 and 655 nm in these leaves. Complete transformation of Pchlide(655) (protochlorophyllide form with maximum emission at 655 nm) into Chlide(692) (chlorophyllide form with maximum emission at 692 nm) occurred after irradiation for 10 s. The innermost leaf primordia of the buds of four species (flowering ash (Fraxinus ornus L.), horse chestnut (Aesculus hippocastanum L.), tree of heaven (Ailanthus altissima P. Mill.) and common walnut (Juglans regia L.)) contained Pchl(ide)(633), Pchl(ide)(643) and Pchlide(655) as well as an emission band at 688 nm corresponding to a chlorophyll form. The Pchlide(655) was fully photoactive in these species. The outermost leaf primordia of these four species and the innermost leaf primordia of 28 other species contained all of the above described Pchl(ide) forms in various ratios but in small amounts. In addition, Chl forms were present and the main bands in the fluorescence emission spectra were at 690 or 740 nm, or both. 
The results indicate that Pchl(ide) accumulation occurs in leaf primordia in near darkness inside the tightly closed buds, where the bud scales and

  8. Computational applications of DNA physical scales

    Baldi, Pierre; Chauvin, Yves; Brunak, Søren;

    1998-01-01

    The authors study from a computational standpoint several different physical scales associated with structural features of DNA sequences, including dinucleotide scales such as base stacking energy and propellor twist, and trinucleotide scales such as bendability and nucleosome positioning. We show...

  9. Dynamics of convective scale interaction

    Purdom, James F. W.; Sinclair, Peter C.

    1988-01-01

    Several of the mesoscale dynamic and thermodynamic aspects of convective scale interaction are examined. An explanation is given of how sounding data can be coupled with satellite-observed cumulus development in the warm sector and with the arc cloud line's time evolution to develop a short-range forecast of expected convective intensity along an arc cloud line. The formative, mature and dissipating stages of the arc cloud line life cycle are discussed. Specific properties of convective scale interaction are presented, and the relationship between arc cloud lines and tornado-producing thunderstorms is considered.

  10. Large scale biomimetic membrane arrays

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg;

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro... peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays, and furthermore demonstrate that the design can conveniently be scaled up to support planar lipid bilayers in large square-centimeter partition arrays.

  11. Image Filtering via Generalized Scale

    De Souza, Andre; Udupa, Jayaram K.; Madabhushi, Anant

    2007-01-01

    In medical imaging, low signal-to-noise ratio (SNR) and/or contrast-to-noise ratio (CNR) often cause many image processing algorithms to perform poorly. Postacquisition image filtering is an important off-line image processing approach widely employed to enhance the SNR and CNR. A major drawback of many filtering techniques is image degradation by diffusing/blurring edges and/or fine structures. In this paper, we introduce a scale-based filtering method that employs scale-dependent diffusion ...

  12. Learning From the Furniture Scale

    Hvejsel, Marie Frier; Kirkegaard, Poul Henning

    2016-01-01

    Given its proximity to the human body, the furniture scale holds a particular potential in grasping the fundamental aesthetic potential of architecture to address its inhabitants by means of spatial ‘gestures’. Likewise, it holds a technical germ in realizing this potential given its immediate tangibility, allowing experimentation with the ‘principles’ of architectural construction. In the present paper we explore this dual tectonic potential of the furniture scale as an epistemological foundation in architectural education. In this matter, we discuss the conduct of a master-level course where we...

  13. Characteristic Scales in Galaxy Formation

    Dekel, Avishai

    2004-01-01

    Recent data, e.g. from SDSS and 2dF, reveal a robust bi-modality in the distribution of galaxy properties, with a characteristic transition scale at stellar mass M_*~3x10^{10} Msun (near L_*), corresponding to virial velocity V~100 km/s. Smaller galaxies tend to be blue disks of young populations. They define a "fundamental line" of decreasing surface brightness, metallicity and velocity with decreasing M_*, which extends to the smallest dwarf galaxies. Galaxies above the critical scale are d...

  14. Energy scale in inclusive spectra

    Based on a model valid in a limited domain of phase space, it is shown that the inclusive spectra exhibit a universal dependence that is not related to the types of the initial and detected particles. The only dependence on the reaction quantum numbers is that contained in the scale coefficient of the total energy. The experimental data presented provide evidence that the scale coefficient is universal over the whole region of the variables and that its value is related to the behaviour of the spectra in the central region.

  15. Energy scale in inclusive spectra

    Likhoded, A.K.; Tolstenkov, A.N.

    1976-07-01

    It is shown, on the basis of a model that is valid in a certain limited phase-space region, that a universal relation exists for the inclusive spectra which is not connected with the type of the initial and detected particles. The entire dependence on the quantum numbers of the reaction is contained in a redefined scale coefficient for the total energy. The experimental data presented favor the assumption that the scale coefficient is universal in the entire range of the variables and that its value is connected with the behavior of the spectra in the central region. (AIP)

  16. 7 CFR 1951.852 - Definitions and abbreviations.

    2010-01-01

    ... the intermediary for the benefit of the ultimate recipient. (10) Working capital. The excess of current assets over current liabilities. It identifies the liquid portion of total enterprise capital which constitutes a margin or buffer for meeting obligations within the ordinary operating cycle of...

  17. An abbreviated history of the National Elk Refuge

    US Fish and Wildlife Service, Department of the Interior — This report is a timeline of history and the management of the National Elk Refuge from 1909 to 1984. It includes a short summary and charts relating to elk...

  18. 40 CFR 86.1804-01 - Acronyms and abbreviations.

    2010-07-01

    ... ALVW—Adjusted Loaded Vehicle Weight. API—American Petroleum Institute. ASTM—American Society for...—Certification Short Test. cu. in.—Cubic inch(es). CVS—Constant volume sampler. DDV—Durability Data Vehicle. deg.—Degree(s). DNPH—2,4-dinitrophenylhydrazine. EDV—Emission Data Vehicle. EP—End point. ETW—Equivalent...

  19. 7 CFR 761.2 - Abbreviations and definitions.

    2010-01-01

    ... cash flow budget may be completed either for a 12-month period, a typical production cycle, or the life..., or consume chattel security and the planned use of any proceeds during a specific production cycle... and management ability who is generally charged a higher interest rate by conventional...

  20. 7 CFR 4279.2 - Definitions and abbreviations.

    2010-01-01

    ... concept of a failure to act, but also not acting in a timely manner, or acting in a manner contrary to the... median household income at or below the poverty line for a family of four; has a median household income... or more have income at or below the poverty line. Promissory Note. Evidence of debt. “Note”...

  1. 7 CFR 1980.302 - Definitions and abbreviations.

    2010-01-01

    ... who (and whose spouse) has had no present ownership in a principal residence during the 3 year period... established and the trust is not revocable by, or under the control of, any member of the household, so long... excess of the consideration received therefore. In the case of a disposition as part of a separation...

  2. An abbreviated history of the ear: from Renaissance to present.

    Hachmeister, Jorge E.

    2003-01-01

    In this article we discuss important discoveries in relation to the anatomy and physiology of the ear from Renaissance to present. Before the Renaissance, there was a paucity of knowledge of the anatomy of the ear, because of the relative inaccessibility of the temporal bone and the general perception that human dissections should not be conducted. It was not until the sixteenth century that the middle ear was described with detail. Further progress would be made between the sixteenth and eig...

  3. 40 CFR 1033.905 - Symbols, acronyms, and abbreviations.

    2010-07-01

    ... POLLUTION CONTROLS CONTROL OF EMISSIONS FROM LOCOMOTIVES Definitions and Other Reference Information § 1033... diesel. MW—megawatt. N2O—nitrous oxide. NIST—National Institute of Standards and Technology. NMHC—nonmethane hydrocarbons. NOX—oxides of nitrogen. PM—particulate matter. rpm—revolutions per minute. SAE—Society of...

  4. 40 CFR 1042.905 - Symbols, acronyms, and abbreviations.

    2010-07-01

    ... POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE MARINE COMPRESSION-IGNITION ENGINES AND VESSELS... Organization. kPa—kilopascals. kW—kilowatts. L—liters. LTR—Limited Testing Region. N2O—nitrous oxide. NARA—National Archives and Records Administration. NMHC—nonmethane hydrocarbons. NOX—oxides of nitrogen (NO and...

  5. 7 CFR 772.2 - Abbreviations and Definitions.

    2010-01-01

    ... Associations and Irrigation and Drainage Associations. Entity: Cooperative, corporation, partnership, joint operation, trust, or limited liability company. Graduation: The requirement contained in loan documents that... Recreation loans to individuals. Member: Any individual who has an ownership interest in the entity which...

  6. 49 CFR 171.8 - Definitions and abbreviations.

    2010-10-01

    ... support of exploration or production of offshore mineral or energy resources. Operator means a person who... device allowing the contents to be ejected by the gas. Aggregate lithium content means the sum of the grams of lithium content or equivalent lithium content contained by the cells comprising a...

  7. 32 CFR Appendix B to Part 806 - Abbreviations and Acronyms

    2010-07-01

    ...—American Standard Code for Information Interchange CFR—Code of Federal Regulations DFAS—Defense Finance and... Responsibility OMB—Office of Management and Budget OPR—Office of Primary Responsibility PA—Privacy Act PAO—Public... SSN—Social Security Number USAF—United States Air Force U.S.C.—United States Code WWW—World Wide Web...

  8. 40 CFR 90.5 - Acronyms and abbreviations.

    2010-07-01

    ... Section 90.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED... monoxide CO2—Carbon dioxide EPA—Environmental Protection Agency FTP—Federal Test Procedure g/kW-hr—grams... Enforcement Auditing SI—spark-ignition U.S.C.—United States Code VOC—Volatile organic compounds...

  9. 40 CFR 91.4 - Acronyms and abbreviations.

    2010-07-01

    ... Section 91.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED... dioxide EPA—Environmental Protection Agency FEL—Family Emission Limit g/kw-hr—grams per kilowatt hour HC... minute SAE—Society of Automotive Engineers SEA—Selective Enforcement Auditing SI—Spark-ignition...

  10. 40 CFR 89.3 - Acronyms and abbreviations.

    2010-07-01

    ... Section 89.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED... dioxide EGR Exhaust gas recirculation EPA Environmental Protection Agency FEL Family emission limit FTP... Selective Enforcement Auditing SI Spark-ignition THC Total hydrocarbon U.S.C. United States Code...

  11. 38 CFR 21.8010 - Definitions and abbreviations.

    2010-07-01

    ... birth defect means the same as defined at § 3.815(c)(3) of this title. Eligible child means, as... an individual as defined at § 3.815(c)(2) of this title who has a covered birth defect other than a birth defect described in § 3.815(a)(2). Employment assistance means employment counseling,...

  12. 40 CFR 60.4103 - Measurements, abbreviations, and acronyms.

    2010-07-01

    ... Times for Coal-Fired Electric Steam Generating Units Hg Budget Trading Program General Provisions § 60... part are defined as follows: Btu—British thermal unit. CO2—carbon dioxide. H2O—water. Hg—mercury....

  13. 48 CFR 302.7000 - Common HHSAR acronyms and abbreviations.

    2010-10-01

    ...-3(b)(3). CDC Centers for Disease Control and Prevention 301.270(b). CFR Code of Federal Regulations... listed in the table. An example is DCIS (Departmental Contracts Information System) cited in subpart...

  14. 14 CFR 1.2 - Abbreviations and symbols.

    2010-01-01

    ... affecting § 1.2, see the List of CFR Sections Affected, which appears in the Finding Aids section of the... above ground level. ALS means approach light system. APU means auxiliary power unit. ASR means...

  15. Real-time sampling of reasons for hedonic food consumption: further validation of the Palatable Eating Motives Scale.

    Boggiano, Mary M; Wenger, Lowell E; Turan, Bulent; Tatum, Mindy M; Sylvester, Maria D; Morgan, Phillip R; Morse, Kathryn E; Burgess, Emilee E

    2015-01-01

    Highly palatable foods play a salient role in obesity and binge-eating, and if habitually eaten to deal with intrinsic and extrinsic factors unrelated to metabolic need, may compromise adaptive coping and interpersonal skills. This study used event sampling methodology (ESM) to examine whether individuals who report eating palatable foods primarily to cope, to enhance reward, to be social, or to conform, as measured by the Palatable Eating Motives Scale (PEMS), actually eat these foods primarily for the motive(s) they report on the PEMS. Secondly, this study examined whether the previously reported ability of the PEMS Coping motive to predict BMI would replicate if the real-time (ESM-reported) coping motive was used to predict BMI. A total of 1691 palatable eating events were collected from 169 college students over 4 days. Each event included the day, time, and types of tasty foods or drinks consumed, followed by a survey that included an abbreviated version of the PEMS, hunger as an additional possible motive, and a question assessing general perceived stress during the eating event. Two-level mixed modeling confirmed that ESM-reported motives correlated most strongly with their respective PEMS motives and that all were negatively associated with eating for hunger. While stress surrounding the eating event was strongly associated with the ESM coping motive, its inclusion in the model as a predictor of this motive did not abolish the significant association between ESM and PEMS Coping scores. Regression models confirmed that scores on the ESM coping motive predicted BMI. These findings provide ecological validity for the PEMS to identify true-to-life motives for consuming palatable foods. This further adds to the utility of the PEMS in individualizing, and hence improving, treatment strategies for obesity, binge-eating, dietary nutrition, coping, reward acquisition, and psychosocial skills. PMID:26082744

  16. The New Environmental Paradigm Scale.

    Albrecht, Don; And Others

    1982-01-01

    Reports the reliability, validity, and unidimensionality of New Environmental Paradigm (NEP) scale, an instrument designed to measure how people feel about nature. Based on statewide samples of farmers (N=348) and metropolitan residents (N=407) of Iowa, the NEP was determined to be valid, reliable, and multidimensional, measuring three distinct…

  17. Modifiers and Perceived Stress Scale.

    Linn, Margaret W.

    1986-01-01

    The Modifiers and Perceived Stress Scale measures stressful life events by number and amount of perceived stresses and provides scores for variables such as anticipation of events, responsibility for events, and amount of social support from family and friends in coping with each event that modify the way stress is perceived. (Author)

  18. Global scale predictability of floods

    Weerts, Albrecht; Gijsbers, Peter; Sperna Weiland, Frederiek

    2016-04-01

    Flood (and storm surge) forecasting at the continental and global scale has only become possible in recent years (Emmerton et al., 2016; Verlaan et al., 2015) due to the availability of meteorological forecast, global scale precipitation products and global scale hydrologic and hydrodynamic models. Deltares has setup GLOFFIS a research-oriented multi model operational flood forecasting system based on Delft-FEWS in an open experimental ICT facility called Id-Lab. In GLOFFIS both the W3RA and PCRGLOB-WB model are run in ensemble mode using GEFS and ECMWF-EPS (latency 2 days). GLOFFIS will be used for experiments into predictability of floods (and droughts) and their dependency on initial state estimation, meteorological forcing and the hydrologic model used. Here we present initial results of verification of the ensemble flood forecasts derived with the GLOFFIS system. Emmerton, R., Stephens, L., Pappenberger, F., Pagano, T., Weerts, A., Wood, A. Salamon, P., Brown, J., Hjerdt, N., Donnelly, C., Cloke, H. Continental and Global Scale Flood Forecasting Systems, WIREs Water (accepted), 2016 Verlaan M, De Kleermaeker S, Buckman L. GLOSSIS: Global storm surge forecasting and information system 2015, Australasian Coasts & Ports Conference, 15-18 September 2015,Auckland, New Zealand.

  19. Scaling Laws of Polyelectrolyte Adsorption

    Borukhov, I.; Andelman, D.; Orland, H.

    1997-01-01

    Adsorption of charged polymers (polyelectrolytes) from a semi-dilute solution to a charged surface is investigated theoretically. We obtain simple scaling laws for (i) the amount of polymer adsorbed to the surface, Gamma, and (ii) the width of the adsorbed layer D, as function of the fractional charge per monomer p and the salt concentration c_b. For strongly charged polyelectrolytes (p

  20. Hydrodynamic aspects of shark scales

    Raschi, W. G.; Musick, J. A.

    1986-03-01

    Ridge morphometrics of placoid scales from 12 galeoid shark species were examined in order to evaluate their potential value for frictional drag reduction. The geometry of the shark scales is similar to longitudinal grooved surfaces (riblets) that have previously been shown to give 8 percent skin-friction reduction for turbulent boundary layers. The present study of the shark scales was undertaken to determine whether the physical dimensions of the ridges on the shark scales are of the right magnitude to be used by the sharks for drag reduction, based on previous riblet work. The results indicate that the ridge heights and spacings are normally maintained between the predicted optimal values proposed for voluntary and burst swimming speeds throughout the individual's ontogeny. Moreover, the species which might be considered to be the faster possess smaller and more closely spaced ridges that, based on the riblet work, would suggest a greater frictional drag reduction at high swimming speeds, as compared to their more sluggish counterparts.

  1. Fractional signal processing: scale conversion

    Ortigueira, M.D.; J.C. Matos; Piedade, M. S.

    2001-01-01

    Scale conversion of discrete-time signals is studied, taking fractional discrete-time system theory as a basis. Some simulation results are presented to illustrate the behaviour of the algorithms. A new algorithm for performing the zoom transform is also described.

  2. TeV Scale Leptogenesis

    Dev, P S Bhupal

    2015-01-01

    This is a mini-review on the mechanism of leptogenesis, with a special emphasis on low-scale leptogenesis models which are testable in foreseeable laboratory experiments at Energy and Intensity frontiers. We also stress the importance of flavor effects in the calculation of the lepton asymmetry and the necessity of a flavor-covariant framework to consistently capture these effects.

  3. Symmetric Differentiation on Time Scales

    da Cruz, Artur M. C. Brito; Martins, Natalia; Delfim F. M. Torres

    2012-01-01

    We define a symmetric derivative on an arbitrary nonempty closed subset of the real numbers and derive some of its properties. It is shown that real-valued functions defined on time scales that are neither delta nor nabla differentiable can be symmetric differentiable.

  4. Metrology at the nano scale

    Progress in nano technology relies on ever more accurate measurements of quantities such as distance, force and current. Industry has long depended on accurate measurement. In the 19th century, for example, the performance of steam engines was seriously limited by inaccurately made components, a situation that was transformed by Henry Maudsley's screw micrometer calliper. And early in the 20th century, the development of telegraphy relied on improved standards of electrical resistance. Before this, each country had its own standards and cross border communication was difficult. The same is true today of nano technology if it is to be fully exploited by industry. Principles of measurement that work well at the macroscopic level often become completely unworkable at the nano metre scale - about 100 nm and below. Imaging, for example, is not possible on this scale using optical microscopes, and it is virtually impossible to weigh a nano metre-scale object with any accuracy. In addition to needing more accurate measurements, nano technology also often requires a greater variety of measurements than conventional technology. For example, standard techniques used to make microchips generally need accurate length measurements, but the manufacture of electronics at the molecular scale requires magnetic, electrical, mechanical and chemical measurements as well. (U.K.)

  5. Scaling properties of marathon races

    Alvarez-Ramirez, Jose; Rodriguez, Eduardo

    2006-06-01

    Some regularities in popular marathon races are identified in this paper. It is found that, for high-performance participants (i.e., racing times in the range [2:15,3:15] h), the average velocity as a function of the marathoner's ranking behaves as a power law, which may suggest the presence of critical phenomena. Elite marathoners with racing times below 2:15 h can be considered as outliers with respect to this behavior. For the main marathon pack (i.e., racing times in the range [3:00,6:00] h), the average velocity as a function of the marathoner's ranking behaves linearly. For these racing times, the interpersonal velocity, defined as the difference of velocities between consecutive runners, displays a continuum of scaling behavior ranging from uncorrelated noise at small scales to correlated 1/f-noise at large scales. As a matter of fact, 1/f-noise is characterized by correlations extended over a wide range of scales, a clear indication of some sort of cooperative effect.
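    The power-law regime reported for high-performance runners can be checked with an ordinary least-squares fit in log-log coordinates. A minimal sketch, assuming only that velocities and rankings arrive as arrays; the exponent and prefactor below are illustrative values, not results from the paper:

```python
import numpy as np

def fit_power_law(rank, velocity):
    """Fit velocity = c * rank**(-alpha) by least squares in log-log space.

    A straight line in log-log coordinates is the usual signature of a
    power-law regime; the fitted slope gives the exponent alpha."""
    slope, intercept = np.polyfit(np.log(rank), np.log(velocity), 1)
    return -slope, np.exp(intercept)

# Illustrative synthetic field: velocities decaying as rank**-0.1
rank = np.arange(1, 501)
velocity = 5.2 * rank ** -0.1
alpha, c = fit_power_law(rank, velocity)
print(round(alpha, 3), round(c, 2))  # → 0.1 5.2
```

    For the main pack, the same data would instead be fitted linearly (`np.polyfit(rank, velocity, 1)`), matching the second regime described in the abstract.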

  6. Scale invariance and superfluid turbulence

    Sen, Siddhartha, E-mail: siddhartha.sen@tcd.ie [CRANN, Trinity College Dublin, Dublin 2 (Ireland); R.K. Mission Vivekananda University, Belur 711 202, West Bengal (India); Ray, Koushik, E-mail: koushik@iacs.res.in [Department of Theoretical Physics, Indian Association for the Cultivation of Science, Calcutta 700 032 (India)

    2013-11-11

    We construct a Schroedinger field theory invariant under local spatial scaling. It is shown to provide an effective theory of superfluid turbulence by deriving, analytically, the observed Kolmogorov 5/3 law, and to lead to a Biot–Savart interaction between the observed filament excitations of the system.

  7. Scaling up of renewable chemicals.

    Sanford, Karl; Chotani, Gopal; Danielson, Nathan; Zahn, James A

    2016-04-01

    The transition of promising technologies for the production of renewable chemicals from laboratory scale to commercial scale is often difficult and expensive. As a result, the timeframe for commercialization is typically underestimated, resulting in much slower penetration of these promising new methods and products into the chemical industries. The theme of 'sugar is the next oil' connects biological, chemical, and thermochemical conversions of renewable feedstocks to products that are drop-in replacements for petroleum-derived chemicals or are new-to-market chemicals/materials. The latter typically offer a functionality advantage and can command higher prices that result in less severe scale-up challenges. However, for drop-in replacements, price is of paramount importance, and competitive capital and operating expenditures are a prerequisite for success. Hence, scale-up of relevant technologies must be interfaced with effective and efficient management of both cell and steel factories. Details involved in all aspects of manufacturing, such as utilities, sterility, product recovery and purification, regulatory requirements, and emissions, must be managed successfully. PMID:26874264

  8. SCALING METHODS IN SOIL PHYSICS

    Soil physical properties are needed to understand and manage natural systems spanning an extremely wide range of scales. Much of soil data are obtained from small soil samples and cores, monoliths, or small field plots, yet the goal is to reconstruct soil physical properties across fields, watershed...

  9. Aggregation on finite ordinal scales by scale independent functions

    Marichal, J. L.; Mesiar, Radko

    Vol. 2. Roma: University La Sapienza, 2004 - (Bouchon-Meunier, B.; Coletti, G.; Yager, R.), s. 1243-1250 ISBN 88-87242-54-2. [IPMU 2004 /10./. Perugia (IT), 04.07.2004-09.07.2004] R&D Projects: GA ČR GA402/04/1026 Institutional research plan: CEZ:AV0Z1075907 Keywords : aggregation functions * finite ordinal scales * order invariant functions Subject RIV: BA - General Mathematics

  10. Scale Economies in Nonprofit Provision, Technology Adoption and Entry

    Scharf, Kimberley

    2011-01-01

    We study competition between nonprofit providers that supply a collective service through increasing-returns-to-scale technologies under conditions of free entry. When providers adopt a not-for-profit mission, the absence of a residual claimant can impede entry, protecting the position of an inefficient incumbent. Moreover, when providers supply goods that are at least partly public in nature, they may be unable to sustain the adoption of more efficient technologies that feature fixed costs, ...

  11. Safety of oral glutamine in the abbreviation of preoperative fasting: a double-blind, controlled, randomized clinical trial Seguridad de la glutamina oral en la abreviación del ayuno preoperatorio: un ensayo clínico doble ciego, controlado, aleatorizado

    D. Borges Dock-Nascimento

    2011-02-01

    Full Text Available Introduction: No study so far has tested a beverage containing glutamine 2 h before anesthesia in patients undergoing surgery. Objectives: The aim of the study was to investigate: 1) the safety of the abbreviation of preoperative fasting to 2 h with a carbohydrate-L-glutamine-rich drink; and 2) the residual gastric volume (RGV) measured after the induction of anesthesia for laparoscopic cholecystectomies. Methods: Randomized controlled trial with 56 women (42 (17-65) years old) submitted to elective laparoscopic cholecystectomy. Patients were randomized to receive either conventional preoperative fasting of 8 hours (fasted group, n = 12) or one of three different beverages drunk in the evening before surgery (400 mL) and 2 hours before the initiation of anesthesia (200 mL). The beverages were water (placebo group, n = 12), 12.5% (240 mOsm/L) maltodextrine (carbohydrate group, n = 12), or the latter in addition to 50 g (40 g in the evening drink and 10 g in the morning drink) of L-glutamine (glutamine group, n = 14). A 20 F nasogastric tube was inserted immediately after the induction of general anesthesia to aspirate and measure the RGV. Results: Fifty patients completed the study. None of the patients had either regurgitation during the induction of anesthesia or postoperative complications. The median (range) RGV was 6 (0-80) mL. The RGV was similar (p = 0.29) between the glutamine group (4.5 [0-15] mL), carbohydrate group (7.0 [0-80] mL), placebo group (8.5 [0-50] mL) and fasted group (5.0 [0-50] mL). Conclusion: The abbreviation of preoperative fasting to 2 h with carbohydrate and L-glutamine is safe and does not increase the RGV during induction of anesthesia.

  12. Incorporating Pore-Scale Data in Field-Scale Uncertainty Quantification: A Multi-Scale Bayesian Approach

    Icardi, M.

    2014-12-01

    Pore-scale modeling has recently become an important tool for a deeper understanding of complex transport phenomena in porous media. However, its direct usage for field-scale processes is still hindered by limited predictive capabilities. This is due to the large uncertainties in the micro-scale parameters, in the pore geometries, in the limited number of available samples, and in the numerical errors. These issues are often overlooked because it is usually thought that the computational cost of pore-scale simulation prohibits an extensive uncertainty quantification study with a large number of samples. In this work we propose a computational tool to estimate statistics of pore-scale quantities. The algorithm is based on (i) an efficient automatic CFD solver for pore-scale simulations, (ii) a multi-scale Bayesian theoretical framework, and (iii) a generalized multilevel Monte Carlo method to speed up the statistical computations. Exploiting the variance reduction of the multi-level and multi-scale representation, we demonstrate the feasibility of the forward and inverse uncertainty quantification problems. The former consists in quantifying the effect of micro-scale heterogeneities and parametric uncertainties on macro-scale upscaled quantities. Given some prior information on the pore-scale structures, the latter can be applied to (i) assess the validity and estimate uncertainties of macro-scale models for a wide range of micro-scale properties, and (ii) match macro-scale results with the underlying pore-scale properties.

  13. Scaling of Information in Turbulence

    Granero-Belinchon, Carlos; Garnier, Nicolas B

    2016-01-01

    We propose a new perspective on turbulence using information theory. We compute the entropy rate of a turbulent velocity signal and particularly focus on its dependence on the scale. We first report how the entropy rate is able to describe the distribution of information amongst scales, and how one can use it to isolate the injection, inertial and dissipative ranges, in perfect agreement with the Batchelor model and with an fBm model. In a second stage, we design a conditioning procedure in order to finely probe the asymmetries in the statistics that are responsible for the energy cascade. Our approach is very generic and can be applied to any multiscale complex system.

  14. THE MODERN RACISM SCALE: PSYCHOMETRIC

    MANUEL CÁRDENAS

    2007-08-01

    Full Text Available An adaptation of McConahay, Harder and Batts' (1981) modern racism scale is presented for the Chilean population, and its psychometric properties (reliability and validity) are studied, along with its relationship with other relevant psychosocial variables in studies on prejudice and ethnic discrimination (authoritarianism, religiousness, political position, etc.), as well as with other forms of prejudice (gender stereotypes and homophobia). The sample consisted of 120 participants, students of psychology, resident in the city of Antofagasta (a geographical zone with a high number of Latin-American immigrants). Our findings show that the scale seems to be a reliable instrument to measure prejudice towards Bolivian immigrants in our social environment. Likewise, important differences among the subjects are detected with high and low scores on the psychosocial variables used.

  15. Scaling: Lost in the smog

    Louf, Rémi

    2014-01-01

    In this commentary we discuss the validity of scaling laws and their relevance for understanding urban systems and helping policy makers. We show how the recent controversy about the scaling of CO2 transport-related emissions with population size, where different authors reach contradictory conclusions, is symptomatic of the lack of understanding of the underlying mechanisms. In particular, we highlight different sources of errors, ranging from incorrect estimates of CO2 to problems related to the definition of cities. We argue here that while data are necessary to build a new science of cities, they are not enough: they have to go hand in hand with a theoretical understanding of the main processes. This effort of building models whose predictions agree with data is the prerequisite for a science of cities. In the meantime, policy advice is, at best, a shot in the dark.

  16. Optimal scales in weighted networks

    Garlaschelli, Diego; Fink, Thomas M A; Caldarelli, Guido

    2013-01-01

    The analysis of networks characterized by links with heterogeneous intensity or weight suffers from two long-standing problems of arbitrariness. On one hand, the definitions of topological properties introduced for binary graphs can be generalized in non-unique ways to weighted networks. On the other hand, even when a definition is given, there is no natural choice of the (optimal) scale of link intensities (e.g. the money unit in economic networks). Here we show that these two seemingly independent problems can be regarded as intimately related, and propose a common solution to both. Using a formalism that we recently proposed in order to map a weighted network to an ensemble of binary graphs, we introduce an information-theoretic approach leading to the least biased generalization of binary properties to weighted networks, and at the same time fixing the optimal scale of link intensities. We illustrate our method on various social and economic networks.

  17. Frequency scaling for angle gathers

    Zuberi, M. A H

    2014-01-01

    Angle gathers provide an extra dimension to analyze the velocity after migration. Space-shift and time-shift imaging conditions are two methods used to obtain angle gathers, but both are reasonably expensive. By scaling the time-lag axis of the time-shifted images, the computational cost of the time-shift imaging condition can be considerably reduced. In imaging, and more so in full waveform inversion, frequency-domain Helmholtz solvers are used more often to solve for the wavefields than conventional time-domain extrapolators. In such cases, we do not need to extend the image; instead, we scale the frequency axis of the frequency-domain image to obtain the angle gathers more efficiently. Applications on synthetic data demonstrate these features.

  18. Scaling in public transport networks

    C. von Ferber

    2005-01-01

    Full Text Available We analyse the statistical properties of public transport networks. These networks are defined by a set of public transport routes (bus lines and the stations serviced by these. For larger networks these appear to possess a scale-free structure, as it is demonstrated e.g. by the Zipf law distribution of the number of routes servicing a given station or for the distribution of the number of stations which can be visited from a chosen one without changing the means of transport. Moreover, a rather particular feature of the public transport network is that many routes service common subsets of stations. We discuss the possibility of new scaling laws that govern intrinsic properties of such subsets.

  19. Plasmonic Scaling of Superconducting Metamaterials

    Kurter, C.; Abrahams, J.; Shvets, G.; Anlage, Steven M.

    2013-01-01

    Superconducting metamaterials are utilized to study the approach to the plasmonic limit simply by tuning temperature to modify the superfluid density, and thus the superfluid plasma frequency. We examine the persistence of artificial magnetism in a metamaterial made with superconductors in the plasmonic limit, and compare to the electromagnetic behavior of normal metals as a function of frequency as the plasma frequency is approached from below. Spiral-shaped Nb thin film meta-atoms of scaled...

  20. Parental Practices Scale for Children

    Laura Hernández-Guzmán; Manuel González Montesinos; Graciela Bermúdez-Ornelas; Miguel-Ángel Freyre; Alcázar-Olán, Raúl J.

    2013-01-01

    Confirmatory factor analysis conducted in a sample of 706 children 7 to 16 years of age, 354 girls and 352 boys, revealed a 5-factor solution (Rejection, Corporal Punishment, Support, Responsiveness, Warmth). Results supported the measurement model of the Parental Practices Scale for Children, which evaluates children's perception of parental practices associated with offspring emotional adjustment. This finding was replicated in a second study (N=233, 126 girls and 107 boys). The measure demons...

  1. SENSATION SEEKING SCALE: INDIAN ADAPTATION

    Basu, Debasish; Verma, Vijoy K.; Malhotra, Savita; Malhotra, Anil

    1993-01-01

    SUMMARY Sensation seeking refers to a biologically based personality dimension defined as the need for varied, novel and complex sensations and experiences, and the willingness to take physical and social risks for the sake of such experiences. Although researched worldwide for nearly three decades now, there is to date no published Indian study utilizing the concept of sensation seeking. This paper describes adaptation of the Sensation Seeking Scale for the Indian population. After due modif...

  2. Global scale groundwater flow model

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global-scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minutes resolution. Aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moorsdorff, in press). We force the groundwater model with the output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated calculated groundwater heads and depths with available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information, and estimate water table depths within acceptable accuracy in many parts of the world.

  3. Scaling in Athletic World Records

    Savaglio, Sandra; Carbone, Vincenzo

    2000-01-01

    World records in athletics provide a measure of physical as well as physiological human performance. Here we analyse running records and show that the mean speed as a function of race time can be described by two scaling laws that have a breakpoint at about 150-170 seconds (corresponding to the ~1,000 m race). We interpret this as being the transition time between anaerobic and aerobic energy expenditure by athletes.

  4. Virtualized Traffic at Metropolitan Scales

    Wilkie, David; Sewall, Jason; Li, Weizi; Lin, Ming C.

    2015-01-01

    Few phenomena are more ubiquitous than traffic in urban scenes, and few are more significant economically, socially, or environmentally. Many virtual-reality applications and systems, including virtual globes and immersive multi-player worlds that are often set in a large-scale modern or futuristic setting, feature traffic systems. Virtual-reality models can also aid in addressing the challenges of real-world traffic – the ever-present gridlock and congestion in cities worldwide: traffic engi...

  5. THE MODERN RACISM SCALE: PSYCHOMETRIC

    MANUEL CÁRDENAS

    2007-01-01

    An adaptation of McConahay, Harder and Batts' (1981) modern racism scale is presented for the Chilean population, and its psychometric properties (reliability and validity) are studied, along with its relationship with other relevant psychosocial variables in studies on prejudice and ethnic discrimination (authoritarianism, religiousness, political position, etc.), as well as with other forms of prejudice (gender stereotypes and homophobia). The sample consisted of 120 participants, students of psychol...

  6. Large-scale solar heat

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  7. Testing gravity on Large Scales

    Raccanelli Alvise

    2013-01-01

    We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep...

  8. Scaling Exponents in Financial Markets

    Kim, Kyungsik; Kim, Cheol-Hyun; Kim, Soo Yong

    2007-03-01

    We study the dynamical behavior of four exchange rates in foreign exchange markets. A detrended fluctuation analysis (DFA) is applied to detect the long-range correlation embedded in the non-stationary time series. For our case, it is found that there exists a persistent long-range correlation in volatilities, which implies a deviation from the efficient market hypothesis. In particular, a crossover is shown to exist in the scaling behaviors of the volatilities.
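    The DFA procedure named in this abstract admits a compact implementation. Below is a minimal DFA-1 sketch, not the authors' code; the window sizes and white-noise input are illustrative. An exponent near 0.5 indicates uncorrelated noise, while values above 0.5 signal the kind of persistent long-range correlation reported for the volatilities:

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis (DFA-1), a minimal sketch.

    Integrates the series, removes a local linear trend in windows of
    each size in `scales`, and returns the slope of log F(n) vs log n,
    i.e. the scaling exponent of the fluctuation function F(n)."""
    y = np.cumsum(x - np.mean(x))  # integrated profile
    fluctuations = []
    for n in scales:
        f2 = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            f2.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(0)
# White noise should give an exponent close to 0.5
alpha = dfa_exponent(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256])
```

    Applied to absolute returns (volatilities) instead of white noise, the same routine would exhibit the persistence and crossover described in the abstract.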

  9. Scaling Group Testing Similarity Search

    Iscen, Ahmet; Amsaleg, Laurent; Furon, Teddy

    2016-01-01

    The large dimensionality of modern image feature vectors, up to thousands of dimensions, challenges high-dimensional indexing techniques. Traditional approaches fail at returning good-quality results within a response time that is usable in practice. However, similarity search techniques inspired by the group testing framework have recently been proposed in an attempt to specifically defeat the curse of dimensionality. Yet, group testing does not scale and fails at indexing very large...

  10. Local magnitude scale in Slovenia

    J. Bajc; Zaplotnik, Ž.; Živčić, M.; M. Čarman

    2013-01-01

    In the paper a calibration study of the local magnitude scale in Slovenia is presented. The Seismology and Geology Office of the Slovenian Environment Agency routinely reports the magnitudes MLV of the earthquakes recorded by the Slovenian seismic stations. The magnitudes are computed from the maximum vertical component of the ground velocity with the magnitude equation that was derived some thirty years ago by regression analysis of the magnitudes recorded by a Wood-Ander...

  11. SCALe-invariant Integral Surfaces

    Zanni, C.; A. Bernhardt; Quiblier, M.; Cani, M.-P.

    2013-01-01

    Extraction of skeletons from solid shapes has attracted quite a lot of attention, but less attention was paid so far to the reverse operation: generating smooth surfaces from skeletons and local radius information. Convolution surfaces, i.e. implicit surfaces generated by integrating a smoothing kernel along a skeleton, were developed to do so. However, they failed to reconstruct prescribed radii and were unable to model large shapes with fine details. This work introduces SCALe-invariant Int...

  12. Development of a Facebook Addiction Scale.

    Andreassen, Cecilie Schou; Torsheim, Torbjørn; Brunborg, Geir Scott; Pallesen, Ståle

    2012-04-01

    The Bergen Facebook Addiction Scale (BFAS), initially a pool of 18 items, three reflecting each of the six core elements of addiction (salience, mood modification, tolerance, withdrawal, conflict, and relapse), was constructed and administered to 423 students together with several other standardized self-report scales (Addictive Tendencies Scale, Online Sociability Scale, Facebook Attitude Scale, NEO-FFI, BIS/BAS scales, and Sleep questions). The item within each of the six addiction elements with the highest corrected item-total correlation was retained in the final scale. The factor structure of the scale was good (RMSEA = .046, CFI = .99) and coefficient alpha was .83. The 3-week test-retest reliability coefficient was .82. The scores converged with scores for other scales of Facebook activity. Also, they were positively related to Neuroticism and Extraversion, and negatively related to Conscientiousness. High scores on the new scale were associated with delayed bedtimes and rising times. PMID:22662404
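    The retention rule described above, keeping per element the item with the highest corrected item-total correlation, can be sketched as follows. The data and helper names are simulated for illustration; this is not the BFAS dataset:

```python
import numpy as np

def best_item(responses, element_cols):
    """Among the candidate items of one addiction element, pick the item
    with the highest corrected item-total correlation, i.e. the
    correlation of the item with the sum of all *other* items.
    `responses` is an (n_subjects, n_items) array; `element_cols` indexes
    the candidate items of one element."""
    total = responses.sum(axis=1)
    best, best_r = None, -np.inf
    for j in element_cols:
        corrected = total - responses[:, j]  # total score excluding the item itself
        r = np.corrcoef(responses[:, j], corrected)[0, 1]
        if r > best_r:
            best, best_r = j, r
    return best, best_r

rng = np.random.default_rng(1)
latent = rng.normal(size=1000)
# Three simulated items of one element: increasingly noisy measures of one trait
items = np.column_stack([latent + rng.normal(scale=s, size=1000)
                         for s in (0.5, 1.0, 2.0)])
chosen, r = best_item(items, element_cols=[0, 1, 2])
```

    The least noisy item ends up retained, mirroring how one item per element survived into the final six-item scale.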

  13. A laboratory scale fundamental time?

    Mendes, R.V. [Instituto para a Investigacao Interdisciplinar, CMAF, Lisboa (Portugal); Instituto Superior Tecnico, IPFN - EURATOM/IST Association, Lisboa (Portugal)

    2012-11-15

    The existence of a fundamental time (or fundamental length) has been conjectured in many contexts. However, the ''stability of physical theories principle'' seems to be the one that provides, through the tools of algebraic deformation theory, an unambiguous derivation of the stable structures that Nature might have chosen for its algebraic framework. It is well-known that c and ℏ are the deformation parameters that stabilize the Galilean and the Poisson algebra. When the stability principle is applied to the Poincare-Heisenberg algebra, two deformation parameters emerge which define two time (or length) scales. In addition there are, for each of them, a plus or minus sign possibility in the relevant commutators. One of the deformation length scales, related to non-commutativity of momenta, is probably related to the Planck length scale but the other might be much larger and already detectable in laboratory experiments. In this paper, this is used as a working hypothesis to look for physical effects that might settle this question. Phase-space modifications, resonances, interference, electron spin resonance and non-commutative QED are considered. (orig.)

  14. Temporal scaling in information propagation

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-06-01

    For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of the information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using a dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability that a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and yielding a 9.7% gain in incremental customers for viral marketing.
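    The power-law decay reported here can be phrased as a tiny scoring function. A hedged sketch: the functional form follows the abstract, but `p0`, `gamma` and `t_min` are illustrative parameters, not values fitted in the paper:

```python
def propagation_probability(p0, latency_hours, gamma=0.5, t_min=1.0):
    """Power-law temporal decay of the person-to-person propagation
    probability: p(t) = p0 * (t / t_min)**(-gamma) for t >= t_min.

    p0 is the probability right after an interaction; gamma controls how
    fast the probability decays with the latency since the latest
    interaction (all three parameters are illustrative)."""
    t = max(latency_hours, t_min)
    return p0 * (t / t_min) ** (-gamma)

# The longer the silence since the last interaction, the smaller the
# chance that the next message propagates along that edge:
probs = [propagation_probability(0.2, t) for t in (1, 4, 16, 64)]
```

    In a prediction pipeline, such decayed edge probabilities would replace a static propagation probability, which is how the temporal model improves on a time-independent baseline.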

  15. A laboratory scale fundamental time?

    The existence of a fundamental time (or fundamental length) has been conjectured in many contexts. However, the ''stability of physical theories principle'' seems to be the one that provides, through the tools of algebraic deformation theory, an unambiguous derivation of the stable structures that Nature might have chosen for its algebraic framework. It is well-known that c and ℏ are the deformation parameters that stabilize the Galilean and the Poisson algebra. When the stability principle is applied to the Poincare-Heisenberg algebra, two deformation parameters emerge which define two time (or length) scales. In addition there are, for each of them, a plus or minus sign possibility in the relevant commutators. One of the deformation length scales, related to non-commutativity of momenta, is probably related to the Planck length scale but the other might be much larger and already detectable in laboratory experiments. In this paper, this is used as a working hypothesis to look for physical effects that might settle this question. Phase-space modifications, resonances, interference, electron spin resonance and non-commutative QED are considered. (orig.)

  16. Mechanically reliable scales and coatings

    Tortorelli, P.F.; Alexander, K.B. [Oak Ridge National Lab., TN (United States)

    1995-06-01

    In many high-temperature fossil energy systems, corrosion and deleterious environmental effects arising from reactions with reactive gases and condensible products often compromise materials performance and, as a consequence, degrade operating efficiencies. Protection of materials from such reactions is best afforded by the formation of stable surface oxides (either as deposited coatings or thermally grown scales) that are slowly reacting, continuous, dense, and adherent to the substrate. However, the ability of normally brittle ceramic films and coatings to provide such protection has long been problematical, particularly for applications involving numerous or severe high-temperature thermal cycles or very aggressive (for example, sulfidizing) environments. A satisfactory understanding of how scale and coating integrity and adherence are improved by compositional, microstructural, and processing modifications is lacking. Therefore, to address this issue, the present work is intended to define the relationships between substrate characteristics (composition, microstructure, and mechanical behavior) and the structure and protective properties of deposited oxide coatings and/or thermally grown scales. Such information is crucial to the optimization of the chemical, interfacial, and mechanical properties of the protective oxides on high-temperature materials through control of processing and composition and directly supports the development of corrosion-resistant, high-temperature materials for improved energy and environmental control systems.

  17. Scaling theory of polymer thermodiffusion

    Bringuier, E.

    2010-11-01

    The motion of a linear polymer chain in a good solvent under a temperature gradient is examined theoretically by breaking up the flexible chain into Brownian rigid rods, and writing down an equation of motion for each rod. The motion is driven by two forces. The first one is Waldmann’s thermophoretic force (stemming from the departure of the solvent’s molecular-velocity distribution from Maxwell’s equilibrium distribution) which here is extrapolated to a dense medium. The second force is due to the fact that the viscous friction varies with position owing to the temperature gradient, which brings an important correction to the Stokes law of friction. We use scaling considerations relying upon disparate length scales and omitting non-universal numerical prefactors. The present scaling theory is compared with recent experiments on the thermodiffusion of polymers and is shown to account for (i) the existence of both signs of the thermodiffusion coefficient of long chains, (ii) the order of magnitude of the coefficient, (iii) its independence of the chain length in the high-polymer limit and (iv) its dependence on the solvent viscosity.

  18. Large scale cluster computing workshop

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of 1000s of processors, to be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community.

  19. Models of large scale structure

    The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, Ω ≳ 0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales. (orig.)

  20. Multi-scale modelling in computational biomedicine

    Sloot P.M.; Hoekstra A.G.

    2010-01-01

    The inherent complexity of biomedical systems is well recognized; they are multi-scale, multi-science systems, bridging a wide range of temporal and spatial scales. This article reviews the currently emerging field of multi-scale modelling in computational biomedicine. Many exciting multi-scale models exist or are under development. However, an underpinning multi-scale modelling methodology seems to be missing. We propose a direction that complements the classic dynamical systems approach and...

  1. Feature detection with automatic scale selection

    Lindeberg, Tony

    1998-01-01

    The fact that objects in the world appear in different ways depending on the scale of observation has important implications if one aims at describing them. It shows that the notion of scale is of utmost importance when processing unknown measurement data by automatic methods. Whereas scale-space representation provides a well-founded framework for dealing with this issue by representing image structures at different scales, traditional scale-space theory does not address the problem of how t...
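Lindeberg's scale-selection principle, choosing the scale at which a scale-normalized differential operator attains an extremum, can be illustrated with the normalized Laplacian t·(Lxx + Lyy), t = σ². A minimal sketch (generic, not code from the paper); for an ideal bright disc of radius r the selected scale is σ ≈ r/√2:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def select_blob_scale(image, sigmas):
    """Return (sigma, position) maximizing |t * LoG|, with t = sigma^2."""
    # scale-normalized Laplacian-of-Gaussian responses over a scale stack
    stack = np.stack([s**2 * gaussian_laplace(image, s) for s in sigmas])
    k, i, j = np.unravel_index(np.argmax(np.abs(stack)), stack.shape)
    return sigmas[k], (i, j)

# synthetic bright disc of radius 6 centered at (32, 32)
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32)**2 + (yy - 32)**2 <= 6.0**2).astype(float)
sigma_sel, pos = select_blob_scale(img, np.linspace(1.0, 12.0, 45))
# sigma_sel should come out near 6/sqrt(2) ~ 4.2, pos near the disc center
```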

  2. On scale selection for differential operators

    Lindeberg, Tony

    1993-01-01

    Although traditional scale-space theory provides a well-founded framework for dealing with image structures at different scales, it does not directly address the problem of how to select appropriate scales for further analysis. This paper describes a systematic approach for dealing with this problem: a heuristic principle stating that local extrema over scales of different combinations of normalized scale-invariant derivatives are likely candidates to correspond to interesting structures. Sup...

  3. Renewable Energy Perception Scale: Reliability and Validity

    Yakut İpekoğlu, Hilal; İbrahim ÜÇGÜL; YAKUT, Gamze

    2014-01-01

    In this study, a scale was developed to determine university students' perceptions of renewable energy sources. The Renewable Energy Perception Scale was prepared in Likert format and comprises three subscales: Renewable Energy Knowledge, Renewable Energy Future Vision and Renewable Energy Tendency. The reliability and validity of these subscales were evaluated using exploratory factor analysis, Cronbach's alpha and item-total correlation. As a result of these analyses were sp...

  4. A scale invariance criterion for LES parametrizations

    Urs Schaefer-Rolffs

    2015-01-01

    Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check if the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant, and that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by the hydrostatic balance, the energy cascade is due to horizontal advection and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.
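The breakdown for the classical Smagorinsky model can be sketched with the standard scaling symmetry of the incompressible equations. This is a schematic rendering of the type of check such a criterion performs, not the paper's own derivation:

```latex
% Scaling symmetry: t -> lambda^{1-h} t, x -> lambda x, u -> lambda^h u,
% so the resolved strain rate scales as |S| -> lambda^{h-1} |S|.
% Invariance of the filtered equations requires the eddy viscosity to
% transform like a viscosity, nu_t -> lambda^{1+h} nu_t.
\nu_t = (C_s \Delta)^2 |\bar S|
\;\longrightarrow\; \lambda^{h-1} (C_s \Delta)^2 |\bar S|
\quad \text{(fixed } \Delta \text{: symmetry broken)},
% whereas a dynamic length l -> lambda l restores invariance:
\nu_t = l^2 |\bar S| \;\longrightarrow\;
\lambda^{2}\,\lambda^{h-1}\, l^2 |\bar S| = \lambda^{1+h}\,\nu_t .
```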

  5. Evaluating the impact of farm scale innovation at catchment scale

    van Breda, Phelia; De Clercq, Willem; Vlok, Pieter; Querner, Erik

    2014-05-01

    Hydrological modelling lends itself to other disciplines very well, normally as a process based system that acts as a catalogue of events taking place. These hydrological models are spatial-temporal in their design and are generally well suited for what-if situations in other disciplines. Scaling should therefore be a function of the purpose of the modelling. Process is always linked with scale or support but the temporal resolution can affect the results if the spatial scale is not suitable. The use of hydrological response units tends to lump area around physical features but disregards farm boundaries. Farm boundaries are often the more crucial uppermost resolution needed to gain more value from hydrological modelling. In the Letaba Catchment of South Africa, we find a generous portion of landuses, different models of ownership, different farming systems ranging from large commercial farms to small subsistence farming. All of these have the same basic right to water but water distribution in the catchment is somewhat of a problem. Since water quantity is also a problem, the water supply systems need to take into account that valuable production areas not be left without water. Clearly hydrological modelling should therefore be sensitive to specific landuse. As a measure of productivity, a system of small farmer production evaluation was designed. This activity presents a dynamic system outside hydrological modelling that is generally not being considered inside hydrological modelling but depends on hydrological modelling. For sustainable development, a number of important concepts needed to be aligned with activities in this region, and the regulatory actions also need to be adhered to. This study aimed at aligning the activities in a region to the vision and objectives of the regulatory authorities. South Africa's system of socio-economic development planning is complex and mostly ineffective. 
There are many regulatory authorities involved, often with unclear

  6. Shadows of the Planck scale: Scale dependence of compactification geometry

    By studying the effects of the shape moduli associated with toroidal compactifications, we demonstrate that Planck-sized extra dimensions can cast significant 'shadows' over low-energy physics. These shadows distort our perceptions of the compactification geometry associated with large extra dimensions and place a fundamental limit on our ability to probe the geometry of compactification by measuring Kaluza-Klein states. We also find that compactification geometry is effectively renormalized as a function of energy scale, with 'renormalization group equations' describing the 'flow' of geometric parameters such as compactification radii and shape angles as functions of energy

  7. Preliminary Scaling Estimate for Select Small Scale Mixing Demonstration Tests

    Wells, Beric E.; Fort, James A.; Gauglitz, Phillip A.; Rector, David R.; Schonewill, Philip P.

    2013-09-12

    The Hanford Site double-shell tank (DST) system provides the staging location for waste that will be transferred to the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Specific WTP acceptance criteria for waste feed delivery describe the physical and chemical characteristics of the waste that must be met before the waste is transferred from the DSTs to the WTP. One of the more challenging requirements relates to the sampling and characterization of the undissolved solids (UDS) in a waste feed DST because the waste contains solid particles that settle and their concentration and relative proportion can change during the transfer of the waste in individual batches. A key uncertainty in the waste feed delivery system is the potential variation in UDS transferred in individual batches in comparison to an initial sample used for evaluating the acceptance criteria. To address this uncertainty, a number of small-scale mixing tests have been conducted as part of Washington River Protection Solutions’ Small Scale Mixing Demonstration (SSMD) project to determine the performance of the DST mixing and sampling systems.

  8. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales, already observed in other works, depends on the crosswise location: large-scale positive fluctuations correlate with a stronger activity of the small scales on the low-speed side of the mixing layer, and with a reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has additionally been investigated.
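Amplitude modulation of this kind is commonly quantified with an envelope-correlation approach (in the spirit of Mathis et al.; the present paper's analysis may differ in detail): correlate the large-scale signal with the low-pass-filtered envelope of the small scales. A minimal single-point sketch with an assumed spectral cutoff:

```python
import numpy as np
from scipy.signal import hilbert

def lowpass(x, cutoff):
    """Zero out Fourier modes at or above `cutoff` (cycles/sample)."""
    X = np.fft.rfft(x)
    X[np.fft.rfftfreq(x.size) >= cutoff] = 0.0
    return np.fft.irfft(X, x.size)

def am_coefficient(u, cutoff):
    """Correlation between the large scales and the small-scale envelope."""
    u = u - u.mean()
    large = lowpass(u, cutoff)
    small = u - large
    env = np.abs(hilbert(small))            # small-scale activity envelope
    env_l = lowpass(env - env.mean(), cutoff)
    return np.corrcoef(large, env_l)[0, 1]

# synthetic check: small scales explicitly modulated by the large scale
t = np.arange(8192)
L = np.sin(2 * np.pi * t / 1024.0)          # large-scale signal
u = L + (1.0 + 0.5 * L) * np.sin(2 * np.pi * t / 16.0)
R = am_coefficient(u, cutoff=0.01)          # strongly positive by design
```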

  9. Proposing a tornado watch scale

    Mason, Jonathan Brock

    This thesis provides an overview of language used in tornado safety recommendations from various sources, along with developing a rubric for scaled tornado safety recommendations, and subsequent development and testing of a tornado watch scale. The rubric is used to evaluate tornado refuge/shelter adequacy responses of Tuscaloosa residents gathered following the April 27, 2011 Tuscaloosa, Alabama EF4 tornado. There was a significant difference in the counts of refuge adequacy for Tuscaloosa residents when holding the locations during the April 27th tornado constant and comparing adequacy ratings for weak (EF0-EF1), strong (EF2-EF3) and violent (EF4-EF5) tornadoes. There was also a significant difference when comparing future tornado refuge plans of those same participants to the adequacy ratings for weak, strong and violent tornadoes. The tornado refuge rubric is then revised into a six-class, hierarchical Tornado Watch Scale (TWS) from Level 0 to Level 5 based on the likelihood of high-impact or low-impact severe weather events containing weak, strong or violent tornadoes. These levels represent maximum expected tornado intensity and include tornado safety recommendations from the tornado refuge rubric. Audio recordings similar to those used in current National Oceanic and Atmospheric Administration (NOAA) weather radio communications were developed to correspond to three levels of the TWS, a current Storm Prediction Center (SPC) tornado watch and a particularly dangerous situation (PDS) tornado watch. These were then used in interviews of Alabama residents to determine how changes to the information contained in the watch statements would affect each participant's tornado safety actions and perception of event danger. Results from interview participants (n=38) indicate a strong preference (97.37%) for the TWS when compared to current tornado watch and PDS tornado watch statements. 
Results also show the TWS elicits more adequate safety decisions from participants

  10. Almost Periodic Time Scales and Almost Periodic Functions on Time Scales

    Yongkun Li; Bing Li

    2015-01-01

    We propose some new concepts of almost periodic time scales and almost periodic functions on time scales and give some basic properties of these new types of almost periodic time scales and almost periodic functions on time scales. We also give some comments on a recent paper by Wang and Agarwal (2014) concerning a new almost periodic time scale.
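For reference, the definitions at stake, in the form commonly used in this literature (the paper's own refinements may differ):

```latex
% A time scale T is a nonempty closed subset of R. It is called
% almost periodic when its translation set is nontrivial:
\Pi := \{\tau \in \mathbb{R} : t \pm \tau \in \mathbb{T}
        \ \text{for all } t \in \mathbb{T}\} \neq \{0\}.
% A function f : T -> R^n is almost periodic on T if, for every eps > 0,
% the eps-translation set
E(\varepsilon, f) = \{\tau \in \Pi : |f(t+\tau) - f(t)| < \varepsilon
        \ \text{for all } t \in \mathbb{T}\}
% is relatively dense in Pi.
```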

  11. Drift-Scale Radionuclide Transport

    J. Houseworth

    2004-09-22

    The purpose of this model report is to document the drift scale radionuclide transport model, taking into account the effects of emplacement drifts on flow and transport in the vicinity of the drift, which are not captured in the mountain-scale unsaturated zone (UZ) flow and transport models "UZ Flow Models and Submodels" (BSC 2004 [DIRS 169861]), "Radionuclide Transport Models Under Ambient Conditions" (BSC 2004 [DIRS 164500]), and "Particle Tracking Model and Abstraction of Transport Process" (BSC 2004 [DIRS 170041]). The drift scale radionuclide transport model is intended to be used as an alternative model for comparison with the engineered barrier system (EBS) radionuclide transport model "EBS Radionuclide Transport Abstraction" (BSC 2004 [DIRS 169868]). For that purpose, two alternative models have been developed for drift-scale radionuclide transport. One of the alternative models is a dual continuum flow and transport model called the drift shadow model. The effects of variations in the flow field and fracture-matrix interaction in the vicinity of a waste emplacement drift are investigated through sensitivity studies using the drift shadow model (Houseworth et al. 2003 [DIRS 164394]). In this model, the flow is significantly perturbed (reduced) beneath the waste emplacement drifts. However, comparisons of transport in this perturbed flow field with transport in an unperturbed flow field show similar results if the transport is initiated in the rock matrix. This has led to a second alternative model, called the fracture-matrix partitioning model, that focuses on the partitioning of radionuclide transport between the fractures and matrix upon exiting the waste emplacement drift. The fracture-matrix partitioning model computes the partitioning, between fractures and matrix, of diffusive radionuclide transport from the invert (for drifts without seepage) into the rock water

  12. The Scales of Gravitational Lensing

    De Paolis, Francesco; Ingrosso, Gabriele; Manni, Luigi; Nucita, Achille; Strafella, Francesco

    2016-01-01

    After exactly a century since the formulation of the general theory of relativity, the phenomenon of gravitational lensing remains an extremely powerful method of investigation in astrophysics and cosmology. Indeed, it is adopted to study the distribution of the stellar component in the Milky Way, to study dark matter and dark energy on very large scales and even to discover exoplanets. Moreover, thanks to technological developments, it will allow the measurement of the physical parameters (mass, angular momentum and electric charge) of supermassive black holes at the centers of our own and nearby galaxies.
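The characteristic angular scale behind all of these applications (Galactic microlensing, exoplanet detection, lensing by large-scale structure) is the Einstein radius; in the standard point-lens form:

```latex
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}}
% M: lens mass; D_L, D_S, D_LS: observer-lens, observer-source and
% lens-source angular-diameter distances.
```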

  13. The Scales of Gravitational Lensing

    Francesco De Paolis

    2016-03-01

    After exactly a century since the formulation of the general theory of relativity, the phenomenon of gravitational lensing remains an extremely powerful method of investigation in astrophysics and cosmology. Indeed, it is adopted to study the distribution of the stellar component in the Milky Way, to study dark matter and dark energy on very large scales and even to discover exoplanets. Moreover, thanks to technological developments, it will allow the measurement of the physical parameters (mass, angular momentum and electric charge) of supermassive black holes at the centers of our own and nearby galaxies.

  14. Scaling Aspects of Lymphocyte Trafficking

    Perelson, Alan S.; Wiegel, Frederik W.

    2008-01-01

    We consider the long lived pool of B and T cells that recirculate through blood, tissues and the lymphatic system of an animal with body mass M. We derive scaling rules (allometric relations) for: (1) the rate of production of mature lymphocytes; (2) the accumulation of lymphocytes in the tissues; (3) the flux of lymphocytes through the lymphatic system; (4) the number of lymph nodes, (5) the number of lymphocytes per clone within a lymph node, and (6) the total number of lymphocytes within a...

  15. Time Scales in Spectator Fragmentation

    Schwarz, C; Bassini, R; Begemann-Blaich, M L; Gaff-Ejakov, S J; Gourio, D; Gross, C; Imme, G; Iori, I; Kleinevoss, U; Kunde, G J; Kunze, W D; Lynen, U; Maddalena, V; Mahi, M; Möhlenkamp, T; Moroni, A; Müller, W F J; Nociforo, C; Ocker, B; Odeh, T; Petruzzelli, F; Pochodzalla, J; Raciti, G; Riccobene, G; Romano, F; Saija, A; Schnittker, M; Schüttauf, A; Seidel, W; Serfling, V; Sfienti, C; Trautmann, W; Trzcinski, A; Verde, G; Wörner, A; Hong Fei Xi; Zwieglinski, B

    2001-01-01

    Proton-proton correlations and correlations of p-alpha, d-alpha, and t-alpha from spectator decays following Au + Au collisions at 1000 AMeV have been measured with a highly efficient detector hodoscope. The constructed correlation functions indicate a moderate expansion and low breakup densities similar to assumptions made in statistical multifragmentation models. In agreement with a volume breakup, rather short time scales were deduced by employing directional cuts in proton-proton correlations. PACS numbers: 25.70.Pq, 21.65.+f, 25.70.Mn

  16. Scaling of interfacial jump conditions

    To model the behavior of a nuclear reactor accurately, one needs balance models that take into account the different phenomena occurring in the reactor. These balances have to be coupled together through boundary conditions. The boundary conditions have been studied and different treatments have been given to the interface. This paper gives a brief description of some of the interfacial jump conditions that have been proposed in recent years. In addition, the scaling of an interfacial jump condition is proposed for coupling the different materials that are in contact within a nuclear reactor. (Author)
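For orientation, the textbook mass and momentum jump conditions across an interface moving with velocity u_i and unit normal n (the scaled forms proposed in the paper build on conditions of this general type):

```latex
% Mass: no accumulation at the interface
[\![\, \rho\, (\mathbf{u} - \mathbf{u}_i) \cdot \mathbf{n} \,]\!] = 0,
% Momentum: stress jump balanced by surface tension
% (sigma: surface-tension coefficient, kappa: interface curvature;
% Marangoni terms neglected)
[\![\, \rho\, \mathbf{u}\, (\mathbf{u} - \mathbf{u}_i) \cdot \mathbf{n}
      - \mathbf{T} \cdot \mathbf{n} \,]\!] = \sigma \kappa\, \mathbf{n},
\qquad \mathbf{T} = -p\,\mathbf{I} + \boldsymbol{\tau}.
```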

  17. Development of Sport Courage Scale

    Konter, Erkut; Ng, Johan

    2012-01-01

    While theory and practice of sport have much to say about fear, stress and anxiety, they have little to say about courage. Therefore, the purpose of this study was to develop a Sport Courage Scale. Data were collected from two groups of male and female athletes aged from 13 to 22 in different individual and team sports. The first set of data (N = 380) was analyzed by exploratory factor analysis, and the second set of data (N = 388) was analyzed by confirmatory factor analysis. Analyses reveal...

  18. JavaScript at scale

    Boduch, Adam

    2015-01-01

    Have you ever come up against an application that felt like it was built on sand? Maybe you've been tasked with creating an application that needs to last longer than a year before a complete re-write? If so, JavaScript at Scale is your missing documentation for maintaining scalable architectures. There's no prerequisite framework knowledge required for this book, however, most concepts presented throughout are adaptations of components found in frameworks such as Backbone, AngularJS, or Ember. All code examples are presented using ECMAScript 6 syntax, to make sure your applications are ready

  19. Drift-Scale Radionuclide Transport

    The purpose of this model report is to document the drift scale radionuclide transport model, taking into account the effects of emplacement drifts on flow and transport in the vicinity of the drift, which are not captured in the mountain-scale unsaturated zone (UZ) flow and transport models "UZ Flow Models and Submodels" (BSC 2004 [DIRS 169861]), "Radionuclide Transport Models Under Ambient Conditions" (BSC 2004 [DIRS 164500]), and "Particle Tracking Model and Abstraction of Transport Process" (BSC 2004 [DIRS 170041]). The drift scale radionuclide transport model is intended to be used as an alternative model for comparison with the engineered barrier system (EBS) radionuclide transport model "EBS Radionuclide Transport Abstraction" (BSC 2004 [DIRS 169868]). For that purpose, two alternative models have been developed for drift-scale radionuclide transport. One of the alternative models is a dual continuum flow and transport model called the drift shadow model. The effects of variations in the flow field and fracture-matrix interaction in the vicinity of a waste emplacement drift are investigated through sensitivity studies using the drift shadow model (Houseworth et al. 2003 [DIRS 164394]). In this model, the flow is significantly perturbed (reduced) beneath the waste emplacement drifts. However, comparisons of transport in this perturbed flow field with transport in an unperturbed flow field show similar results if the transport is initiated in the rock matrix. This has led to a second alternative model, called the fracture-matrix partitioning model, that focuses on the partitioning of radionuclide transport between the fractures and matrix upon exiting the waste emplacement drift. The fracture-matrix partitioning model computes the partitioning, between fractures and matrix, of diffusive radionuclide transport from the invert (for drifts without seepage) into the rock water. 
The invert is the structure constructed in a drift to provide the floor of the

  20. Scale-free convection theory

    Pasetto, Stefano; Chiosi, Cesare; Cropper, Mark; Grebel, Eva K.

    2015-08-01

    Convection is one of the fundamental mechanisms of energy transport, e.g., in planetology and oceanography as well as in astrophysics, where stellar structure is customarily described by mixing-length theory. This theory makes use of the mixing-length scale parameter to express the convective flux, velocity, and temperature gradients of the convective elements and the stellar medium. The mixing length is taken to be proportional to the local pressure scale height of the star, and the proportionality factor (the mixing-length parameter) must be determined by comparing the stellar models to some calibrator, usually the Sun. No strong arguments exist to claim that the mixing-length parameter is the same in all stars and all evolutionary phases, so all stellar models in the literature are hampered by this basic uncertainty. In a recent paper (Pasetto et al. 2014) we presented the first fully analytical scale-free theory of convection that does not require the mixing-length parameter. Our self-consistent analytical formulation of convection determines all the properties of convection as a function of the physical behaviour of the convective elements themselves and of the surrounding medium (be it a star, an ocean, or a primordial planet). The new theory of convection is formulated starting from a conventional solution of the Navier-Stokes/Euler equations, i.e. the Bernoulli equation for a perfect fluid, but expressed in a non-inertial reference frame co-moving with the convective elements. In our formalism, the motion of convective cells inside convectively unstable layers is fully determined by a new system of equations for convection in a non-local and time-dependent formalism. We obtained an analytical, non-local, time-dependent solution for the convective energy transport that does not depend on any free parameter. The predictions of the new theory in astrophysical environments are compared with those from the standard mixing-length paradigm in stars with
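For context, the quantities the scale-free theory dispenses with are, in standard mixing-length theory:

```latex
% Pressure scale height (from hydrostatic equilibrium dP/dr = -rho g)
% and the mixing length; alpha_MLT is the free parameter that must be
% calibrated (solar-calibrated values are typically ~1.6-2.0)
H_P = -\left(\frac{d \ln P}{dr}\right)^{-1} = \frac{P}{\rho g},
\qquad l = \alpha_{\mathrm{MLT}}\, H_P .
```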

  1. Choosing a scale for measuring perceived prominence

    Jensen, Christian; Tøndering, John

    2005-01-01

    Three different scales which have been used to measure perceived prominence are evaluated in a perceptual experiment. Average scores of raters using a multi-level (31-point) scale, a simple binary (2-point) scale and an intermediate 4-point scale are almost identical. The potentially finer gradation possible with the multi-level scale is compensated for by having multiple listeners, which is also a requirement for obtaining reliable data. In other words, a high number of levels is neither a sufficient nor a necessary requirement. Overall the best results were obtained using the 4-point...

  2. Determination of heavy metals in fish scales

    Hana Nováková

    2010-12-01

    Measurements of the amounts of selected elements in the scales of common carp are presented. Concentrations in the scales were determined, and differences in heavy-metal storage between the exposed and covered parts of the scale were studied. The spatial distribution of elements in the surface layer of the fish scale was measured by Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS). The average amount of elements in the dissolved scales was quantified by ICP-MS. The fine structure of the fish scales was visualized by phase-contrast synchrotron radiation (SR) microradiography.

  3. Analysis of the efficacy of methylphenidate using the abbreviated version of the Conners' questionnaire in attention deficit/hyperactivity disorder

    Ênio Roberto de Andrade

    2004-03-01

    Attention deficit/hyperactivity disorder (ADHD) is a rather complex diagnostic condition, with early onset and a chronic course that affects the subject's performance in several contexts. Three to 5% of school-aged children present this disorder. The Conners' questionnaire has been used as an instrument of epidemiological survey for ADHD. OBJECTIVE: This study investigated whether this instrument is a useful tool for analyzing the efficacy of methylphenidate treatment in children with ADHD. METHOD: Twenty-one male children with ADHD of the combined type, with chronological ages ranging from seven years to 10 years and 11 months, were selected, and all were treated with methylphenidate. The abbreviated version of the Conners' questionnaire for parents and teachers was applied at two time points: before medication and six to eight months after its start. RESULTS: A reduction in Conners' questionnaire scores, concomitant with clinical improvement, was obtained in all children with ADHD. CONCLUSION: The Conners' questionnaire proved useful not only as a diagnostic aid but also as an instrument for evaluating the efficacy of ADHD treatment.

  4. Comparison of GFR calculation methods: 99mTc-DTPA renal dynamic imaging, abbreviated MDRD and CKD-EPI equations in patients with chronic kidney disease

    曾凤伟; 李建芳; 谢良骏; 张峰; 彭小林; 吴春兴; 程木华

    2014-01-01

    Objective: To compare renal dynamic imaging with 99mTc-DTPA (Gates' method), the abbreviated MDRD (aMDRD) equation and the CKD-EPI equation for assessing renal function in patients with chronic kidney disease (CKD). Methods: A total of 159 adult patients with CKD were included. The glomerular filtration rate (GFR) obtained by Gates' method, the aMDRD equation and the CKD-EPI equation was compared against 99mTc-DTPA dual plasma clearance (rGFR) using Pearson correlation analysis and the Bland-Altman method. Results: The GFR estimated by Gates' method correlated positively with rGFR (r = 0.900, P < 0.001), more strongly than the aMDRD equation (r = 0.816, P < 0.001) and the CKD-EPI equation (r = 0.825, P < 0.001). All three methods underestimated rGFR, but Gates' method showed the smallest bias [-3.4 mL/(min·1.73 m²)], the highest precision [11.8 mL/(min·1.73 m²)] and the highest ±30% accuracy (71.7%). Except in CKD stages 1-2, the ±30% accuracy of Gates' method was higher than that of the aMDRD and CKD-EPI equations (P < 0.05); the difference in accuracy between the aMDRD and CKD-EPI equations was not significant (P > 0.05). Conclusion: All three methods underestimate rGFR; the new CKD-EPI equation performs no better than the aMDRD equation, and neither provides the accuracy of Gates' method.
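The two estimating equations compared in this study have standard published forms. A minimal sketch using the original (non-IDMS) aMDRD constant 186 and the 2009 CKD-EPI creatinine coefficients (serum creatinine in mg/dL; illustrative only, not the study's code):

```python
def egfr_amdrd(scr, age, female, black=False):
    """Abbreviated (4-variable) MDRD equation, mL/min/1.73 m^2."""
    gfr = 186.0 * scr ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

def egfr_ckd_epi(scr, age, female, black=False):
    """CKD-EPI 2009 creatinine equation, mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9       # sex-specific creatinine knot
    alpha = -0.329 if female else -0.411
    gfr = (141.0
           * min(scr / kappa, 1.0) ** alpha
           * max(scr / kappa, 1.0) ** -1.209
           * 0.993 ** age)
    if female:
        gfr *= 1.018
    if black:
        gfr *= 1.159
    return gfr

# example: 50-year-old man with serum creatinine 1.2 mg/dL
m_mdrd = egfr_amdrd(1.2, 50, female=False)
m_epi = egfr_ckd_epi(1.2, 50, female=False)
```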

  5. Three Scales of Acephalous Organization

    Victor MacGill

    2016-04-01

    Dominance-based hierarchies have been taken for granted as the way we structure our organizations, but they are part of a paradigm that has put our whole existence in peril. There is an urgent need to explore alternative paradigms that take us away from dystopic futures towards preferred, life-enhancing paradigms based on wellbeing. One of the alternative ways of organizing ourselves that avoids much of the structural violence of existing organizations is the acephalous group (operating without any structured, ongoing leadership). Decision making becomes distributed, transitory and self-selecting. Such groups are not always appropriate and have their strengths and weaknesses, but they can be a more effective, humane way of organizing ourselves and can open windows to new ways of being. Acephalous groups operate at many different scales and adapt their structure accordingly. For this reason, a comparison of small, medium and large-scale acephalous groups reveals some of the dynamics involved in acephalous functioning, provides a useful overview of these emergent forms of organization and foreshadows the role they may play in the future.

  6. The SCALE-UP Project

    Beichner, Robert

    2015-03-01

    The Student Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) project was developed nearly 20 years ago as an economical way to provide collaborative, interactive instruction even for large enrollment classes. Nearly all research-based pedagogies have been designed with fairly high faculty-student ratios. The economics of introductory courses at large universities often precludes that situation, so SCALE-UP was created as a way to facilitate highly collaborative active learning with large numbers of students served by only a few faculty and assistants. It enables those students to learn and succeed not only in acquiring content, but also to practice important 21st century skills like problem solving, communication, and teamsmanship. The approach was initially targeted at undergraduate science and engineering students taking introductory physics courses in large enrollment sections. It has since expanded to multiple content areas, including chemistry, math, engineering, biology, business, nursing, and even the humanities. Class sizes range from 24 to over 600. Data collected from multiple sites around the world indicates highly successful implementation at more than 250 institutions. NSF support was critical for initial development and dissemination efforts. Generously supported by NSF (9752313, 9981107) and FIPSE (P116B971905, P116B000659).

  7. Scaling analysis of stock markets

    Bu, Luping; Shang, Pengjian

    2014-06-01

    In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. DFA detects long-range correlations in time series. LSDFA reveals more local properties by using local scale exponents. DCCA is a method developed to quantify the cross-correlation of two non-stationary time series. We report the auto-correlation and cross-correlation behaviors of three western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the crisis), and 2010-2012 (after the crisis), using DFA, LSDFA, and DCCA. We find that stock correlations are influenced by the economic systems of the different countries and by the financial crisis. The results indicate stronger auto-correlations in Chinese stocks than in western stocks in every period, and stronger auto-correlations after the crisis for every stock except Shen Cheng. LSDFA shows more comprehensive and detailed features than traditional DFA, and reveals the economic integration of China with the world after the crisis. The cross-correlations differ across the six stock markets; among the three Chinese stocks, cross-correlations are weakest during the crisis.
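    The DFA procedure summarized above is compact enough to sketch. The following is a minimal illustration (our sketch, not the authors' code): integrate the series, detrend it in windows of increasing size, and fit the log-log slope of the fluctuation function to obtain the scaling exponent.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: return the scaling exponent alpha.

    For each window size s the integrated profile is split into
    non-overlapping windows, a linear trend is removed from each, and
    the RMS fluctuation F(s) is recorded; alpha is the slope of
    log F(s) versus log s.
    """
    y = np.cumsum(x - np.mean(x))  # integrated profile
    flucts = []
    for s in scales:
        n = len(y) // s                      # number of full windows
        segments = y[: n * s].reshape(n, s)
        t = np.arange(s)
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segments]
        flucts.append(np.sqrt(np.mean(np.square(resid))))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# Uncorrelated noise has alpha close to 0.5; persistent
# (long-range-correlated) series give alpha > 0.5.
alpha = dfa(np.random.default_rng(0).standard_normal(10_000), [16, 32, 64, 128, 256])
```

LSDFA differs only in estimating this slope locally over narrow bands of scales rather than over the full range.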

  8. Creating Large Scale Database Servers

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper describes the design of the database, the changes we needed to make in the AMS for scalability, and how the lessons we learned are applicable to virtually any kind of database server seeking to operate in the petabyte region.

  9. Scaling characteristics of topographic depressions

    Le, P. V.; Kumar, P.

    2013-12-01

    Topographic depressions, areas with no lateral surface flow, are a ubiquitous characteristic of the land surface that controls many ecosystem and biogeochemical processes. Landscapes with a high density of depressions have greater surface storage capacity, whereas a lower depression density increases runoff, thus influencing soil moisture states, hydrologic connectivity and climate-soil-vegetation interactions. With the widespread availability of high resolution LiDAR-based digital elevation model (lDEM) data, it is now possible to identify and characterize the spatial distribution of topographic depressions for incorporation in ecohydrologic and biogeochemical studies. Here we use lDEM data to document the prevalence and patterns of topographic depressions across five different landscapes in the United States and to quantitatively characterize the distribution of attributes such as surface area, storage volume, and distance to the nearest neighbor. Using a depression identification algorithm, we show that these attributes follow scaling laws indicative of a fractal structure, in which a large fraction of the land surface can consist of a high number of topographic depressions accounting for 4 to 200 mm of depression storage. This implies that the impacts of small-scale topographic depressions in such fractal landscapes on the redistribution of surface energy fluxes, evaporation, and hydrologic connectivity are quite significant.

  10. The Regret/Disappointment Scale

    Francesco Marcatto

    2008-01-01

    Full Text Available The present article investigates the effectiveness of methods traditionally used to distinguish between the emotions of regret and disappointment and presents a new method --- the Regret and Disappointment Scale (RDS) --- for assessing the two emotions in decision making research. The validity of the RDS was tested in three studies. Study 1 used two scenarios, one prototypical of regret and the other of disappointment, to test and compare traditional methods ("How much regret do you feel?" and "How much disappointment do you feel?") with the RDS. Results showed that only the RDS clearly differentiated between the constructs of regret and disappointment. Study 2 confirmed the validity of the RDS in a real-life scenario, in which both feelings of regret and disappointment could be experienced. Study 2 also demonstrated that the RDS can discriminate between regret and disappointment with results similar to those obtained by using a context-specific scale. Study 3 showed the advantages of the RDS over the traditional methods in gambling situations commonly used in decision making research, and provided evidence for the convergent validity of the RDS.

  11. The Weak Scale from BBN

    Hall, Lawrence J; Ruderman, Joshua T

    2014-01-01

    The measured values of the weak scale, $v$, and the first generation masses, $m_{u,d,e}$, are simultaneously explained in the multiverse, with all these parameters scanning independently. At the same time, several remarkable coincidences are understood. Small variations in these parameters away from their measured values lead to the instability of hydrogen, the instability of heavy nuclei, and either a hydrogen or a helium dominated universe from Big Bang Nucleosynthesis. In the 4d parameter space of $(m_u,m_d,m_e,v)$, catastrophic boundaries are reached by separately increasing each parameter above its measured value by a factor of $(1.4,1.3,2.5,\\sim5)$, respectively. The fine-tuning problem of the weak scale in the Standard Model is solved: as $v$ is increased beyond the observed value, it is impossible to maintain a significant cosmological hydrogen abundance for any values of $m_{u,d,e}$ that yield both hydrogen and heavy nuclei stability. For very large values of $v$ a new regime is entered where weak in...

  12. Gamma scale chemistry progress report

    Economides, M.; Estabrook, E.; Joy, E.F. [and others

    1948-06-01

    This report considers the work done during the year ending June 30, 1948, present work being done and future plans on the determination of formulas, methods of preparation, and properties of as many compounds of postum as possible. An experimental approach to such a research problem on the element postum requires that procedures which may be used deal with ultramicro quantities of material. Such procedures on an ultramicro or gamma scale require special techniques by personnel trained in manipulating these small quantities of radioactive material. Equipment which may be used varies with the experiment considered. Often new apparatus must be developed or equipment previously developed and used in some other experiment must be modified. This generalized research problem is subdivided in the "Research Problems Outline". The presentation of a survey of these research problems with reference to the outline for the year ending June 30, 1948 is a critical review of the work done by the Gamma Scale Chemistry Group as well as a consideration of future plans. The course which these future plans may follow will depend upon information which may be obtained when carrying out planned experiments.

  13. Scaling analysis of affinity propagation.

    Furtlehner, Cyril; Sebag, Michèle; Zhang, Xiangliang

    2010-06-01

    We analyze and exploit some scaling properties of the affinity propagation (AP) clustering algorithm proposed by Frey and Dueck [Science 315, 972 (2007)]. Following a divide and conquer strategy we set up an exact renormalization-based approach to address the question of clustering consistency, in particular, how many clusters are present in a given data set. We first observe that the divide and conquer strategy, used on a large data set, hierarchically reduces the complexity O(N^2) to O(N^((h+2)/(h+1))), for a data set of size N and a depth h of the hierarchical strategy. For a data set embedded in a d-dimensional space, we show that this is obtained without notably damaging the precision except in dimension d=2. In fact, for d larger than 2 the relative loss in precision scales as N^((2-d)/((h+1)d)). Finally, under some conditions we observe that there is a value s* of the penalty coefficient, a free parameter used to fix the number of clusters, which separates a fragmentation phase (for s < s*) from a coalescent phase (for s > s*) of the underlying hidden cluster structure. At this precise point a self-similarity property holds, which can be exploited by the hierarchical strategy to actually locate its position, as a result of an exact decimation procedure. From this observation, a strategy based on AP can be defined to find out how many clusters are present in a given data set. PMID:20866473
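    The AP message-passing updates the abstract builds on are compact enough to sketch. Below is a minimal NumPy implementation of plain affinity propagation (our sketch of Frey and Dueck's update rules, not the authors' renormalization scheme); the diagonal of the similarity matrix holds the "preference", which plays the role of the penalty coefficient s discussed above.

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Plain affinity propagation message passing (Frey & Dueck 2007).

    S is an n x n similarity matrix whose diagonal holds the
    preference (penalty coefficient).  Returns, for each point,
    the index of its assigned exemplar.
    """
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities a(i, k)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        M = A + S
        top = M.argmax(axis=1)
        first = M[np.arange(n), top]
        M[np.arange(n), top] = -np.inf
        second = M.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        colsum = Rp.sum(axis=0)
        Anew = np.minimum(0, colsum[None, :] - Rp)
        np.fill_diagonal(Anew, colsum - Rp.diagonal())
        A = damping * A + (1 - damping) * Anew
    return (A + R).argmax(axis=1)

# Toy data: two well-separated 1-D groups; preference = median similarity
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -(x[:, None] - x[None, :]) ** 2
np.fill_diagonal(S, np.median(S[S < 0]))
labels = affinity_propagation(S)
```

Sweeping the preference (diagonal) value and watching the number of distinct exemplars change is exactly the fragmentation/coalescence behavior around s* that the abstract describes.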

  14. Unified Theory of Allometric Scaling

    Silva, J K L; Silva, P R; Silva, Jafferson K. L. da; Barbosa, Lauro A.; Silva, Paulo Roberto

    2006-01-01

    A general simple theory for allometric scaling is developed in the $d+1$-dimensional space ($d$ biological lengths and a physiological time) of metabolic states of organisms. It is assumed that natural selection shaped the metabolic states in such a way that the mass and energy $d+1$-densities are size-invariant quantities (independent of body mass). The different metabolic states (basal and maximum) are described by considering that the biological lengths and the physiological time are related by different transport processes of energy and mass. In basal metabolism, transportation occurs by ballistic and diffusion processes. In $d=3$, the 3/4 law occurs if the ballistic movement is the dominant process, while the 2/3 law appears when both transport processes are equivalent. Accelerated movement during the biological time is related to the maximum aerobic sustained metabolism, which is characterized by the scaling exponent $2d/(2d+1)$ (6/7 in $d=3$). The results are in good agreement with empirical data...

  15. Goethite Bench-scale and Large-scale Preparation Tests

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate (99TcO4-) can be reduced and captured into a solid solution of α-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for 99Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. 
A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO4-) to Tc(IV) by reaction with the ferrous ion, Fe2+; Fe2+ is oxidized to Fe3+ in

  16. Goethite Bench-scale and Large-scale Preparation Tests

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate (99TcO4-) can be reduced and captured into a solid solution of α-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for 99Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO4-) to Tc(IV) by reaction with the

  17. Dimensional analysis, scaling and fractals

    Dimensional analysis refers to the study of the dimensions that characterize physical entities, like mass, force and energy. Classical mechanics is based on three fundamental entities, with dimensions MLT, the mass M, the length L and the time T. The combination of these entities gives rise to derived entities, like volume, speed and force, of dimensions L^3, LT^-1, MLT^-2, respectively. In other areas of physics, four other fundamental entities are defined, among them the temperature θ and the electrical current I. The parameters that characterize physical phenomena are related among themselves by laws, in general of quantitative nature, in which they appear as measures of the considered physical entities. The measure of an entity is the result of its comparison with another one, of the same type, called unit. Maps are also drawn to scale: for example, at a scale of 1:10,000, 1 cm^2 of paper can represent 10,000 m^2 in the field. Entities that differ in scale cannot be compared in a simple way. Fractal geometry, in contrast to the Euclidean geometry, admits fractional dimensions. The term fractal is defined in Mandelbrot (1982) as coming from the Latin fractus, derived from frangere which signifies to break, to form irregular fragments. The term fractal is opposite to the term algebra (from the Arabic: jabara) which means to join, to put together the parts. For Mandelbrot, fractals are non topologic objects, that is, objects which have as their dimension a real, non integer number, which exceeds the topologic dimension. For the topologic objects, or Euclidean forms, the dimension is an integer (0 for the point, 1 for a line, 2 for a surface, and 3 for a volume). The fractal dimension of Mandelbrot is a measure of the degree of irregularity of the object under consideration. It is related to the speed by which the estimate of the measure of an object increases as the measurement scale decreases. An object normally taken as uni-dimensional, like a piece of a straight
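    The fractal dimension described above can be estimated numerically by box counting: count the occupied boxes N(ε) at shrinking scales ε and fit the slope of log N(ε) against log(1/ε). The sketch below (ours, not from the record) recovers the non-integer dimension log 2 / log 3 ≈ 0.63 of the middle-thirds Cantor set.

```python
import numpy as np
from itertools import product

def box_count_dimension(points, eps_list):
    """Box-counting dimension: slope of log N(eps) versus log(1/eps)."""
    pts = np.asarray(points, dtype=float)
    if pts.ndim == 1:
        pts = pts[:, None]
    counts = []
    for eps in eps_list:
        # small offset guards against float round-off at box boundaries
        idx = np.floor(pts / eps + 1e-9).astype(int)
        counts.append(len({tuple(row) for row in idx}))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)), np.log(counts), 1)
    return slope

# Middle-thirds Cantor set truncated at 12 ternary digits (digits 0 or 2)
cantor = [sum(d * 3.0 ** -(k + 1) for k, d in enumerate(digs))
          for digs in product((0, 2), repeat=12)]
dim = box_count_dimension(cantor, [3.0 ** -m for m in range(1, 7)])
# dim comes out close to log(2)/log(3), a real, non-integer dimension
```

For a Euclidean object such as a filled square, the same estimator returns the integer topologic dimension 2.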

  18. The Adaptive Multi-scale Simulation Infrastructure

    Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2015-09-01

    The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation meta-data, AMSI allows minimally intrusive work to adapt existing single-scale simulations for use in multi-scale simulations. Support for dynamic runtime operations such as single- and multi-scale adaptive properties is a key focus of AMSI. Particular focus has been placed on the development of scale-sensitive load balancing operations to allow single-scale simulations incorporated into a multi-scale simulation using AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.

  19. Handbook of Large-Scale Random Networks

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  20. Scale Interaction in a California precipitation event

    Leach, M. J., LLNL

    1997-09-01

    Heavy rains and severe flooding frequently plague California. The heavy rains are most often associated with large-scale cyclonic and frontal systems, where large-scale dynamics and a large moisture influx from the tropical Pacific interact. However, the complex topography along the west coast also interacts with these large-scale influences, producing local areas with heavier precipitation. In this paper, we look at some of the local interactions with the large scale.

  1. The scaling problems in service quality evaluation:

    Gallo, Michele

    2007-01-01

    In service quality evaluation we have to treat data having different kinds of scales. In order to obtain a measure of the service quality level a conventional ordinal rating scale for each attribute of a service is used. Moreover additional information on the customers or on the objective characteristics of the service is available (interval, ordinal and or categorical scale). In the latter the importance or weight assigned to the different items must be also considered (compositional scale)....

  2. Improving the Factor Structure of Psychological Scales

    Zhang, Xijuan; Savalei, Victoria

    2015-01-01

    Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format, which replaces each response option in the Likert scale with a full sentence. We hypothesized that this format would result in a cleaner factor structu...

  3. Calculating the scale elasticity in DEA models.

    Førsund, Finn R.; Lennart Hjalmarsson

    2002-01-01

    In economics, the scale properties of a production function are characterised by the value of the scale elasticity. In the field of efficiency studies this is also a valid approach for the frontier production function. It is not meaningful to talk about scale properties of inefficient observations. In the DEA literature a qualitative characterisation is most common. The contribution of the paper is to apply the concept of scale elasticity from multi-output production theory in economics to the pie...
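    The scale elasticity in question has a simple operational definition: it is the log-derivative of output along a scaled input ray t·x, evaluated at t = 1. A small numerical sketch (ours, with a hypothetical Cobb-Douglas frontier, not the paper's DEA estimator):

```python
import numpy as np

def scale_elasticity(f, x, h=1e-6):
    """Scale elasticity e(x) = d ln f(t*x) / d ln t evaluated at t = 1.

    e > 1 means increasing returns to scale, e = 1 constant returns,
    e < 1 decreasing returns.
    """
    up, down = f((1 + h) * x), f((1 - h) * x)
    return (np.log(up) - np.log(down)) / (np.log(1 + h) - np.log(1 - h))

# Hypothetical Cobb-Douglas frontier: elasticity = 0.4 + 0.3 = 0.7 at any x
f = lambda x: 2.0 * x[0] ** 0.4 * x[1] ** 0.3
e = scale_elasticity(f, np.array([3.0, 5.0]))
```

For a Cobb-Douglas technology the elasticity is the sum of the exponents everywhere; a DEA frontier instead yields a piecewise value that varies with the position of the efficient observation.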

  4. Determination of heavy metals in fish scales

    Hana Nováková; Markéta Holá; Jozef Kaiser; Jiří Kalvoda; Viktor Kanický

    2010-01-01

    The outcomes from measurements of the amounts of selected elements in the fish scales of common carp are presented. Concentrations in the scales were identified and differences between storage of heavy metals in the exposed and covered parts of the scale were studied. The spatial distribution of elements in the fish scale's surface layer was measured by Laser Ablation–Inductively Coupled Plasma–Mass Spectrometry (LA–ICP–MS). The average amount of elements in the dissolved scal...

  5. Broken scaling in the Forest Fire Model

    Pruessner, Gunnar; Jensen, Henrik Jeldtoft

    2002-01-01

    We investigate the scaling behavior of the cluster size distribution in the Drossel-Schwabl Forest Fire model (DS-FFM) by means of large scale numerical simulations, partly on (massively) parallel machines. It turns out that simple scaling is clearly violated, as already pointed out by Grassberger [P. Grassberger, J. Phys. A: Math. Gen. 26, 2081 (1993)], but largely ignored in the literature. Most surprisingly, the statistics do not seem to be described by a universal scaling function, and the s...
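    The quantity under study is the distribution of burned cluster sizes. A toy version of the DS-FFM (our sketch, using one random growth attempt per step rather than a full lattice sweep, and far smaller than the simulations in the record) shows how those statistics are collected:

```python
import random

def ds_ffm(L=64, p=0.05, f=0.01, steps=20_000, seed=0):
    """Toy Drossel-Schwabl forest-fire model on an L x L periodic grid.

    Each step one random site may grow a tree (probability p if empty);
    with probability f lightning strikes a random site and, if it holds
    a tree, its whole connected cluster burns instantly (the usual
    separation-of-time-scales approximation).  Returns the list of
    burned cluster sizes.
    """
    rng = random.Random(seed)
    grid = [[False] * L for _ in range(L)]
    sizes = []
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        if not grid[i][j] and rng.random() < p:
            grid[i][j] = True
        if rng.random() < f:
            i, j = rng.randrange(L), rng.randrange(L)
            if grid[i][j]:
                grid[i][j] = False
                stack, burned = [(i, j)], 0
                while stack:  # flood fill over the connected cluster
                    a, b = stack.pop()
                    burned += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = (a + da) % L, (b + db) % L
                        if grid[na][nb]:
                            grid[na][nb] = False
                            stack.append((na, nb))
                sizes.append(burned)
    return sizes

sizes = ds_ffm(L=32, p=0.1, f=0.02, steps=20_000, seed=1)
```

A histogram of `sizes` on log-log axes is the cluster size distribution whose deviation from simple scaling the abstract reports; probing that regime requires much larger lattices and p/f ratios than this sketch.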

  6. Architecture of Large-Scale Systems

    Koschel, Arne; Astrova, Irina; Deutschkämer, Elena; Ester, Jacob; Feldmann, Johannes

    2013-01-01

    In this paper various techniques in relation to large-scale systems are presented. At first, explanation of large-scale systems and differences from traditional systems are given. Next, possible specifications and requirements on hardware and software are listed. Finally, examples of large-scale systems are presented.

  7. Lethality of Suicide Attempt Rating Scale.

    Smith, K.; And Others

    1984-01-01

    Presents an 11-point scale for measuring the degree of lethality of suicide attempts. The scale has nine example "anchors" and uses the relative lethality of an extensive table of drugs. The scale can be used reliably by nonmedical personnel with no prior training. (Author/BL)

  8. Corrections to scaling at the Anderson transition

    Slevin, Keith; Ohtsuki, Tomi

    1998-01-01

    We report a numerical analysis of corrections to finite size scaling at the Anderson transition due to irrelevant scaling variables and non-linearities of the scaling variables. By taking proper account of these corrections, the universality of the critical exponent for the orthogonal universality class for three different distributions of the random potential is convincingly demonstrated.

  9. An Aesthetic Value Scale of the Rorschach.

    Insua, Ana Maria

    1981-01-01

    An aesthetic value scale of the Rorschach cards was built by the successive interval method. This scale was compared with the ratings obtained by means of the Semantic Differential Scales and was found to successfully differentiate sexes in their judgment of card attractiveness. (Author)

  10. Size Scaling in Visual Pattern Recognition

    Larsen, Axel; Bundesen, Claus

    1978-01-01

    Human visual recognition on the basis of shape but regardless of size was investigated by reaction time methods. Results suggested two processes of size scaling: mental-image transformation and perceptual-scale transformation. Image transformation accounted for matching performance based on visual short-term memory, whereas scale transformation…

  11. Three scales of motions associated with tornadoes

    This dissertation explores three scales of motion commonly associated with tornadoes, and the interaction of these scales: the tornado cyclone, the tornado, and the suction vortex. The goal of the research is to specify in detail the character and interaction of these scales of motion to explain tornadic phenomena

  12. Inflation, large scale structure and particle physics

    S F King

    2004-02-01

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, and show how such a theory is completely natural in the framework of extra dimensions with an intermediate string scale.

  13. Mechanics over micro and nano scales

    Chakraborty, Suman

    2011-01-01

    Discusses the fundaments of mechanics over micro and nano scales in a level accessible to multi-disciplinary researchers, with a balance of mathematical details and physical principles Covers life sciences and chemistry for use in emerging applications related to mechanics over small scales Demonstrates the explicit interconnection between various scale issues and the mechanics of miniaturized systems

  14. Large-scale river regulation

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  15. Mirages in galaxy scaling relations

    Mosenkov, A V; Reshetnikov, V P

    2014-01-01

    We analyzed several basic correlations between structural parameters of galaxies. The data were taken from various samples in different passbands which are available in the literature. We discuss disc scaling relations as well as some debatable issues concerning the so-called Photometric Plane for bulges and elliptical galaxies in different forms and various versions of the famous Kormendy relation. We show that some of the correlations under discussion are artificial (self-correlations), while others truly reveal some new essential details of the structural properties of galaxies. Our main results are as follows: (1) At present, we cannot conclude that faint stellar discs are, on average, thinner than discs in high surface brightness galaxies. The "central surface brightness -- thickness" correlation appears only as a consequence of the transparent exponential disc model used to describe real galaxy discs. (2) The Photometric Plane appears to have no independent physical sense. Various forms of this plane ar...

  16. Testing gravity on Large Scales

    Raccanelli Alvise

    2013-09-01

    Full Text Available We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep surveys those corrections need to be taken into account if we want to measure the growth of structures at a few percent level, and so perform tests on gravity, without introducing systematic errors. Finally, we report the results of some recent cosmological model tests carried out using those precise models.

  17. Relating Biophysical Properties Across Scales

    Flenner, Elijah; Neagu, Adrian; Kosztin, Ioan; Forgacs, Gabor

    2007-01-01

    A distinguishing feature of a multicellular living system is that it operates at various scales, from the intracellular to organismal. Very little is known at present on how tissue level properties are related to cell and subcellular properties. Modern measurement techniques provide quantitative results at both the intracellular and tissue level, but not on the connection between these. In the present work we outline a framework to address this connection. We specifically concentrate on the morphogenetic process of tissue fusion, by following the coalescence of two contiguous multicellular aggregates. The time evolution of this process can accurately be described by the theory of viscous liquids. We also study fusion by Monte Carlo simulations and a novel Cellular Particle Dynamics (CPD) model, which is similar to the earlier introduced Subcellular Element Model (Newman, 2005). Using the combination of experiments, theory and modeling we are able to relate the measured tissue level biophysical quantities to s...

  18. Small-scale classification schemes

    Hertzum, Morten

    2004-01-01

    Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. This requirements classification inherited a lot of its structure from the existing system and rendered requirements that transcended the framework laid out by the existing system almost invisible. As a result, the requirements classification became a defining element of the requirements-engineering process, though its main ... important means of discretely balancing the contractual aspect of requirements engineering against facilitating the users in an open-ended search for their system requirements. The requirements classification is analysed in terms of the complementary concepts of boundary objects and coordination mechanisms.

  19. Matter perturbations in scaling cosmology

    Fuño, A. Romero; Hipólito-Ricaldi, W. S.; Zimdahl, W.

    2016-04-01

    A suitable nonlinear interaction between dark matter with an energy density ρM and dark energy with an energy density ρX is known to give rise to a non-canonical scaling ρM ∝ ρX a^(-ξ), where ξ is a parameter which generally deviates from ξ = 3. Here, we present a covariant generalization of this class of models and investigate the corresponding perturbation dynamics. The resulting matter power spectrum for the special case of a time-varying Lambda model is compared with data from the Sloan Digital Sky Survey (SDSS) DR9 catalogue (Ahn et al.). We find a best-fitting value of ξ = 3.25, which corresponds to a decay of dark matter into the cosmological term. Our results are compatible with the Lambda Cold Dark Matter model at the 2σ confidence level.

  20. Black generation using lightness scaling

    Cholewo, Tomasz J.

    1999-12-01

    This paper describes a method for constructing a lookup table relating a 3D CMY coordinate system to CMYK colorant amounts in a way that maximizes the utilization of the printer gamut volume. The method is based on an assumption, satisfied by most printers, that adding a black colorant to any combination of CMY colorants does not result in a color with more chroma. Therefore the CMYK gamut can be obtained from the CMY gamut by expanding it towards lower lightness values. Use of black colorant on the gray axis is enforced by modifying the initial distribution of CMY points through an approximate black generation transform. Lightness values of a resulting set of points in CIELAB space are scaled to fill the four-color gamut volume. The output CMYK values corresponding to the modified CIELAB colors are found by inverting a printer model. This last step determines a specific black use rate which can depend on the region of the color space.

  1. Hypoallometric scaling in international collaborations

    Hsiehchen, David; Espinoza, Magdalena; Hsieh, Antony

    2016-02-01

    Collaboration is a vital process and dominant theme in knowledge production, although the effectiveness of policies directed at promoting multinational research remains ambiguous. We examined approximately 24 million research articles published over four decades and demonstrated that the scaling of international publications to research productivity for each country obeys a universal and conserved sublinear power law. Inefficient mechanisms in transborder team dynamics or organization as well as increasing opportunity costs may contribute to the disproportionate growth of international collaboration rates with increasing productivity among nations. Given the constrained growth of international relationships, our findings advocate a greater emphasis on the qualitative aspects of collaborations, such as with whom partnerships are forged, particularly when assessing research and policy outcomes.
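The sublinear (hypoallometric) power law reported here has the form I ∝ P^β with β < 1, where P is a country's research productivity and I its international output; such an exponent is conventionally estimated as the slope of a linear regression in log-log space. A minimal sketch on synthetic data (the value β = 0.8 and the variable names are illustrative, not the paper's fitted exponent):

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = c * x**beta by least squares in log-log space; return (c, beta)."""
    beta, log_c = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_c), beta

# Synthetic country-level data obeying a sublinear law with beta = 0.8 (assumed):
productivity = np.logspace(2, 6, 50)        # total publications per country
international = 0.5 * productivity ** 0.8   # international publications

c, beta = fit_power_law(productivity, international)
# beta < 1: international output grows disproportionately slowly with productivity
```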

  2. Bacterial Communities: Interactions to Scale

    Stubbendieck, Reed M.; Vargas-Bautista, Carol; Straight, Paul D.

    2016-01-01

    In the environment, bacteria live in complex multispecies communities. These communities span in scale from small, multicellular aggregates to billions or trillions of cells within the gastrointestinal tract of animals. The dynamics of bacterial communities are determined by pairwise interactions that occur between different species in the community. Though interactions occur between a few cells at a time, the outcomes of these interchanges have ramifications that ripple through many orders of magnitude, and ultimately affect the macroscopic world including the health of host organisms. In this review we cover how bacterial competition influences the structures of bacterial communities. We also emphasize methods and insights garnered from culture-dependent pairwise interaction studies, metagenomic analyses, and modeling experiments. Finally, we argue that the integration of multiple approaches will be instrumental to future understanding of the underlying dynamics of bacterial communities. PMID:27551280

  3. Conference on Large Scale Optimization

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abroad.

  4. Large Scale Nanolaminate Deformable Mirror

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.

  5. Small-scale field experiments accurately scale up to predict density dependence in reef fish populations at large scales.

    Steele, Mark A; Forrester, Graham E

    2005-09-20

    Field experiments provide rigorous tests of ecological hypotheses but are usually limited to small spatial scales. It is thus unclear whether these findings extrapolate to larger scales relevant to conservation and management. We show that the results of experiments detecting density-dependent mortality of reef fish on small habitat patches scale up to have similar effects on much larger entire reefs that are the size of small marine reserves and approach the scale at which some reef fisheries operate. We suggest that accurate scaling is due to the type of species interaction causing local density dependence and the fact that localized events can be aggregated to describe larger-scale interactions with minimal distortion. Careful extrapolation from small-scale experiments identifying species interactions and their effects should improve our ability to predict the outcomes of alternative management strategies for coral reef fishes and their habitats. PMID:16150721

  6. Scaling solutions for Dilaton Quantum Gravity

    Henz, Tobias; Wetterich, Christof

    2016-01-01

    Scaling solutions for the effective action in dilaton quantum gravity are investigated within the functional renormalization group approach. We find numerical solutions that connect ultraviolet and infrared fixed points as the ratio between scalar field and renormalization scale k is varied. In the Einstein frame the quantum effective action corresponding to the scaling solutions becomes independent of k. The field equations derived from this effective action can be used directly for cosmology. Scale symmetry is spontaneously broken by a non-vanishing cosmological value of the scalar field. For the cosmology corresponding to our scaling solutions, inflation arises naturally. The effective cosmological constant becomes dynamical and vanishes asymptotically as time goes to infinity.

  7. Weak scale: Dynamical determination versus accidental stabilization

    Triantaphyllou, George

    2001-01-01

    Is it a mere accident that the weak scale is exactly so much smaller than the Planck scale, and at the same time exactly so much larger than the QCD scale? Or are the experimentally-measured values of the corresponding gauge couplings enough to help us determine dynamically these energy scales? And if nature has indeed offered us the possibility of such a determination, why dismiss it and fix these scales instead by means of arbitrary parameters within a multitude of jejune theoretical framew...

  8. Computational applications of DNA structural scales

    Baldi, P.; Chauvin, Y.; Brunak, Søren; Gorodkin, Jan; Pedersen, Anders Gorm

    1998-01-01

    Studies several different physical scales associated with the structural features of DNA sequences from a computational standpoint, including dinucleotide scales, such as base stacking energy and propeller twist, and trinucleotide scales, such as bendability and nucleosome positioning. We show that...... models (HMMs). The scales are applied to HMMs of human promoter sequences, revealing a number of significant differences between regions upstream and downstream of the transcriptional start-point. Finally, we show (with some qualifications) that such scales are, by and large, independent, and therefore...

  9. Scaling Region in Desynchronous Coupled Chaotic Maps

    LI Xiao-Wen; XUE Yu; SHI Peng-Liang; HU Gang

    2005-01-01

    The largest Lyapunov exponent and the Lyapunov spectrum of a coupled map lattice are studied when the system state is desynchronous chaos. In the large system size limit a scaling region is found in the parameter space where the largest Lyapunov exponent is independent of the system size and the coupling strength. Some scaling relation between the Lyapunov spectrum distributions for different coupling strengths is found when the coupling strengths are taken in the scaling parameter region. The existence of the scaling domain and the scaling relation of Lyapunov spectra there are heuristically explained.
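The largest Lyapunov exponent of a coupled map lattice is commonly estimated by evolving a tangent vector alongside the lattice and averaging its logarithmic growth rate. A minimal sketch for a diffusively coupled logistic lattice (the lattice size, coupling strength, and map parameter here are illustrative choices, not the paper's):

```python
import numpy as np

def largest_lyapunov_cml(L=64, eps=0.3, a=4.0, steps=4000, transient=500, seed=0):
    """Estimate the largest Lyapunov exponent of the coupled map lattice
    x_{t+1}(i) = (1-eps) f(x_i) + (eps/2) (f(x_{i-1}) + f(x_{i+1}))
    with the logistic map f(x) = a x (1 - x) and periodic boundaries."""
    rng = np.random.default_rng(seed)
    x = rng.random(L)
    v = rng.random(L)
    v /= np.linalg.norm(v)
    f = lambda x: a * x * (1.0 - x)
    fp = lambda x: a * (1.0 - 2.0 * x)          # derivative of the local map
    lyap = 0.0
    for t in range(steps + transient):
        fx, dfx = f(x), fp(x) * v               # evaluate before updating x
        x = (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
        v = (1 - eps) * dfx + 0.5 * eps * (np.roll(dfx, 1) + np.roll(dfx, -1))
        n = np.linalg.norm(v)                   # tangent-vector growth factor
        v /= n
        if t >= transient:
            lyap += np.log(n)
    return lyap / steps

lam = largest_lyapunov_cml()  # lam > 0 signals desynchronous chaos
```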

  10. Development of music lesson attitude scale

    Serpil Umuzdaş

    2012-01-01

    The purpose of this study is to develop a scale to measure primary school students' attitudes towards music lessons. The processing steps in the development of a scale are explained. In the process of developing the scale, first a literature review was conducted and similar scales were examined. A pool of 35 items was prepared with the help of information from the literature and the contributions of professionals. The scale was applied to 692 students. Factor analysis has been used ...

  11. Effects of scale on internal blast measurements

    Granholm, R.; Sandusky, H.; Lee, R.

    2014-05-01

    This paper presents a comparative study between large and small-scale internal blast experiments with the goal of using the small-scale analog for energetic performance evaluation. In the small-scale experiment, highly confined explosive samples <0.5 g were subjected to the output from a PETN detonator while enclosed in a 3-liter chamber. Large-scale tests up to 23 kg were unconfined and released in a chamber with a factor of 60,000 increase in volume. The comparative metric in these experiments is peak quasi-static overpressure, with the explosive sample expressed as sample energy/chamber volume, which normalizes measured pressures across scale. Small-scale measured pressures were always lower than the large-scale measurements, because of heat-loss to the high confinement inherent in the small-scale apparatus. This heat-loss can be quantified and used to correct the small-scale pressure measurements. In some cases the heat-loss was large enough to quench reaction of lower energy samples. These results suggest that small-scale internal blast tests do correlate with their large-scale counterparts, provided that heat-loss to confinement can be measured, and that less reactive or lower energy samples are not quenched by heat-loss.

  12. Scaling Analysis for PWR Steam Generator

    Li, Yuquan [State Nuclear Power Technology R and D Center, Beijing (China)]; Ye, Zishen [Tsinghua University, Beijing (China)]

    2011-08-15

    To test the nuclear power plant safety system performance and verify the relative safety analysis code, a widely used approach is to design and construct a scaled model based on a scaling methodology. For a pressurized water reactor (PWR), the SG scaling analysis is important before designing a scale model which is expected to well simulate the system response to the accident. In this work, a review of the transient process in SG during a loss of coolant accident (LOCA) is first presented, and then a brief natural circulation scaling analysis is performed to get the basic SG scaling design rules. The U-tube scaling design shows the scaling will enlarge the thermal center height ratio while keeping the length ratio when the scale model uses a different diameter ratio and the height ratio, which causes distortion in natural circulation simulation. And then, by the heat transfer scaling analysis, a relation between the U-tube diameter ratio and model height ratio is obtained, and it shows the diameter ratio decreases with the decreasing model height ratio. In the end, the SG transition from the heat sink to the heat source is analyzed, and the results show the SG secondary inventory and the total material heat capacity need to be properly scaled to represent the transition correctly.

  13. Scaling Analysis for PWR Steam Generator

    To test the nuclear power plant safety system performance and verify the relative safety analysis code, a widely used approach is to design and construct a scaled model based on a scaling methodology. For a pressurized water reactor (PWR), the SG scaling analysis is important before designing a scale model which is expected to well simulate the system response to the accident. In this work, a review of the transient process in SG during a loss of coolant accident (LOCA) is first presented, and then a brief natural circulation scaling analysis is performed to get the basic SG scaling design rules. The U-tube scaling design shows the scaling will enlarge the thermal center height ratio while keeping the length ratio when the scale model uses a different diameter ratio and the height ratio, which causes distortion in natural circulation simulation. And then, by the heat transfer scaling analysis, a relation between the U-tube diameter ratio and model height ratio is obtained, and it shows the diameter ratio decreases with the decreasing model height ratio. In the end, the SG transition from the heat sink to the heat source is analyzed, and the results show the SG secondary inventory and the total material heat capacity need to be properly scaled to represent the transition correctly.

  14. Fluctuation scaling, Taylor's law, and crime.

    Quentin S Hanley

    Fluctuation scaling relationships have been observed in a wide range of processes ranging from internet router traffic to measles cases. Taylor's law is one such scaling relationship and has been widely applied in ecology to understand communities including trees, birds, human populations, and insects. We show that monthly crime reports in the UK show complex fluctuation scaling which can be approximated by Taylor's law relationships corresponding to local policing neighborhoods and larger regional and countrywide scales. Regression models applied to local scale data from Derbyshire and Nottinghamshire found that different categories of crime exhibited different scaling exponents with no significant difference between the two regions. On this scale, violence reports were close to a Poisson distribution (α = 1.057 ± 0.026), while burglary exhibited a greater exponent (α = 1.292 ± 0.029), indicative of temporal clustering. These two regions exhibited significantly different pre-exponential factors for the categories of anti-social behavior and burglary, indicating that local variations in crime reports can be assessed using fluctuation scaling methods. At regional and countrywide scales, all categories exhibited scaling behavior indicative of temporal clustering, evidenced by Taylor's law exponents from 1.43 ± 0.12 (Drugs) to 2.094 ± 0.081 (Other Crimes). Investigating crime behavior via fluctuation scaling gives insight beyond that of raw numbers; it is unique in reporting on all processes contributing to the observed variance and is either robust to, or exhibits signs of, many types of data manipulation.
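Taylor's law states that the variance of counts scales with the mean as var = a·mean^α, so α is read off as the slope of a log-log regression; α ≈ 1 is the Poisson (unclustered) signature reported here for violence, while α > 1 indicates temporal clustering. A sketch on synthetic Poisson counts (the neighborhood rates and sample sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Monthly report counts for 50 hypothetical neighborhoods, each Poisson with
# its own rate, over 120 months: variance should track the mean with alpha ~ 1.
rates = np.linspace(2, 80, 50)
counts = rng.poisson(rates, size=(120, 50))

means = counts.mean(axis=0)
variances = counts.var(axis=0, ddof=1)

# Taylor's law: var = a * mean**alpha  ->  log var = log a + alpha * log mean
alpha, log_a = np.polyfit(np.log(means), np.log(variances), 1)
# alpha near 1: Poisson-like (unclustered) counts; alpha > 1 would mean clustering
```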

  15. Effects of scale on internal blast measurements

    This paper presents a comparative study between large and small-scale internal blast experiments with the goal of using the small-scale analog for energetic performance evaluation. In the small-scale experiment, highly confined explosive samples <0.5 g were subjected to the output from a PETN detonator while enclosed in a 3-liter chamber. Large-scale tests up to 23 kg were unconfined and released in a chamber with a factor of 60,000 increase in volume. The comparative metric in these experiments is peak quasi-static overpressure, with the explosive sample expressed as sample energy/chamber volume, which normalizes measured pressures across scale. Small-scale measured pressures were always lower than the large-scale measurements, because of heat-loss to the high confinement inherent in the small-scale apparatus. This heat-loss can be quantified and used to correct the small-scale pressure measurements. In some cases the heat-loss was large enough to quench reaction of lower energy samples. These results suggest that small-scale internal blast tests do correlate with their large-scale counterparts, provided that heat-loss to confinement can be measured, and that less reactive or lower energy samples are not quenched by heat-loss.

  16. MULTI-SCALE GAUSSIAN PROCESSES MODEL

    Zhou Yatong; Zhang Taiyi; Li Xiaohe

    2006-01-01

    A novel model named Multi-scale Gaussian Processes (MGP) is proposed. Motivated by the ideas of multi-scale representations in wavelet theory, in the new model a Gaussian process is represented at a scale by a linear basis that is composed of a scale function and its different translations. Finally, the distribution of the targets of the given samples can be obtained at different scales. Compared with the standard Gaussian Processes (GP) model, the MGP model can control its complexity conveniently just by adjusting the scale parameter, so it can rapidly trade off generalization ability against empirical risk. Experiments verify the feasibility of the MGP model and show that its performance is superior to the GP model if appropriate scales are chosen.

  17. Scaling Laws in the Distribution of Galaxies

    Jones, Bernard J. T.; Martinez, Vicent J.; Saar, Enn; Trimble, Virginia

    2004-01-01

    Research done during the previous century established our Standard Cosmological Model. There are many details still to be filled in, but few would seriously doubt the basic premise. Past surveys have revealed that the large-scale distribution of galaxies in the Universe is far from random: it is highly structured over a vast range of scales. To describe cosmic structures, we need to build mathematically quantifiable descriptions of structure. Identifying where scaling laws apply and the nature of those scaling laws is an important part of understanding which physical mechanisms have been responsible for the organization of clusters, superclusters of galaxies and the voids between them. Finding where these scaling laws are broken is equally important since this indicates the transition to different underlying physics. In describing scaling laws we are helped by making analogies with fractals: mathematical constructs that can possess a wide variety of scaling properties. We must beware, however, of saying that ...

  18. Invariant relationships deriving from classical scaling transformations

    Because scaling symmetries of the Euler-Lagrange equations are generally not variational symmetries of the action, they do not lead to conservation laws. Instead, an extension of Noether's theorem reduces the equations of motion to evolutionary laws that prove useful, even if the transformations are not symmetries of the equations of motion. In the case of scaling, symmetry leads to a scaling evolutionary law, a first-order equation in terms of scale invariants, linearly relating kinematic and dynamic degrees of freedom. This scaling evolutionary law appears in dynamical and in static systems. Applied to dynamical central-force systems, the scaling evolutionary equation leads to generalized virial laws, which linearly connect the kinetic and potential energies. Applied to barotropic hydrostatic spheres, the scaling evolutionary equation linearly connects the gravitational and internal energy densities. This implies well-known properties of polytropes, describing degenerate stars and chemically homogeneous nondegenerate stellar cores.
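For a central power-law potential, the generalized virial law mentioned above takes a simple linear form between kinetic and potential energy (a standard result, stated here for orientation rather than taken from the paper's derivation):

```latex
V(r) = \alpha r^{\,n} \;\Longrightarrow\; 2\,\langle T \rangle = n\,\langle V \rangle ,
```

so n = -1 (Kepler) gives 2⟨T⟩ = -⟨V⟩, and n = 2 (harmonic oscillator) gives ⟨T⟩ = ⟨V⟩.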

  19. Human mobility patterns at the smallest scales

    Lind, Pedro G

    2014-01-01

    We present a study on human mobility at small spatial scales. Unlike large-scale mobility, recently studied through dollar-bill tracking and mobile phone data sets within one big country or continent, we report Brownian features of human mobility at smaller scales. In particular, the scaling exponent found at the smallest scales is typically close to one-half, in contrast to the larger values of the exponent characterizing mobility at larger scales. We carefully analyze 12-month data of the Eduroam database within the Portuguese university of Minho. A full procedure is introduced with the aim of properly characterizing the human mobility within the network of access points composing the wireless system of the university. In particular, measures of flux are introduced for estimating a distance between access points. This distance is typically non-Euclidean, since the spatial constraints at such small scales distort the continuum space on which human mobility occurs. Since two different exponent...
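The Brownian signature, an exponent close to one-half, follows because the mean squared displacement of an uncorrelated random walk grows linearly in time, so the RMS displacement grows as t^(1/2). A sketch recovering the exponent from simulated walkers (the walker count and Gaussian step distribution are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# 2000 independent 2D random walkers, 1000 unit-variance Gaussian steps each.
steps = rng.normal(size=(2000, 1000, 2))
paths = steps.cumsum(axis=1)

# RMS displacement from the origin as a function of time.
rms = np.sqrt((paths ** 2).sum(axis=2).mean(axis=0))

t = np.arange(1, 1001)
H, _ = np.polyfit(np.log(t), np.log(rms), 1)
# H ~ 0.5: the Brownian exponent reported for the smallest mobility scales
```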

  20. DEVELOPMENT OF A VISUAL MATH LITERACY SELF EFFICACY PERCEPTION SCALE (VMLSEPS FOR ELEMENTARY STUDENTS

    Mehmet BEKDEMİR

    2012-03-01

    mathematics, a 58-item draft scale that can measure the self-efficacy of elementary students concerning their visual math literacy was prepared by a group of six experts, of whom four were elementary school math teachers and two were faculty members at a department of mathematics, by taking into account the majority of votes; it was decided to entitle the scale the Visual Math Literacy Self-efficacy Perception Scale, abbreviated as VMLSEPS. The researchers decided to prepare a Likert-type scale because it is the most direct way to determine students' self-efficacy perceptions of visual math literacy, and a five-point scale was developed in order to make it more sensitive and useful. Of the 58 items, 19 were stated as negative and the remaining 39 as positive. Having obtained the necessary permissions, the scale was tested on 428 students at the 6th, 7th, and 8th grades in the related schools. The test took a course hour; it was revealed that students can generally answer the scale in 15 to 25 minutes. The VMLSEPS was analyzed with regard to its content and structural validity. Considering the structural validity of the VMLSEPS, experts were consulted on whether each statement is related to self-efficacy perceptions of visual math literacy or its sub-dimensions, and whether it is congruent with the elementary mathematics program and students' levels; corrections were made in accordance with the experts' opinions. Factor analysis was done for the structural validity of the scale. For the adequacy of the sample, the KMO value was calculated as 0.959, which showed that the adequacy of the selected sample is perfect. The Bartlett sphericity test was conducted in order to determine if the sample provides a normal range, and the value of significance was found as .000, showing that the sample provides a normal range within the population. At the end of the factor analysis the items were clustered around three factors,

  1. Implications from dimensionless parameter scaling experiments

    The dimensionless parameter scaling approach is increasingly useful for predicting future tokamak performance and guiding theoretical models of energy transport. Experiments to determine the ρ* (gyroradius normalized to plasma size) scaling have been carried out in many regimes. The electron ρ* scaling is always "gyro-Bohm", while the ion ρ* scaling varied with regime. The ion variation is correlated with both density scale length (L mode, H mode) and current profile. The ion ρ* scaling in the low-q, H-mode regime is gyro-Bohm, which is the most favorable confinement scaling observed. New experiments in β scaling and collisionality scaling have been carried out in low-q discharges in both L mode and H mode. In L mode, global analysis shows that there is a slightly unfavorable β dependence (β^(-0.1)) and no ν* dependence. In H-mode, global analysis finds a weak β dependence (β^(0.1)) and an unfavorable dependence on ν*. The lack of significant β scaling spans the range of βN from 0.25 to 2.0. The very small β dependence in L mode and H mode is in contradiction with the standard global scaling relations. This contradiction in H mode may be indicative of the impact on the H-mode database of low-n tearing instabilities which are observed at slightly higher βN in the β scaling experiments. The measured β and ν* scalings explain the weak density dependence observed in engineering parameter scans. It also points to the power of the dimensionless parameter approach, since it is possible to obtain a definitive size scaling from experiments on a single tokamak

  2. Reviving large-scale projects

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890-foot-long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both ...

  3. Scales and scaling in turbulent ocean sciences; physics-biology coupling

    Schmitt, Francois

    2015-04-01

    Geophysical fields possess huge fluctuations over many spatial and temporal scales. In the ocean, such property at smaller scales is closely linked to marine turbulence. The velocity field varies from large scales down to the Kolmogorov scale (mm) and scalar fields from large scales down to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models typically have a grid size of a hundred meters or more, which is more than 10^5 times larger than the smallest turbulence scale (the Kolmogorov scale). Such a scale is fine for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, due to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...). In this framework, we will discuss the scale problem in turbulence modeling in the ocean, and the relation of the Kolmogorov and Batchelor scales of ocean turbulence to the sizes of marine animals. We will also consider scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organisms' accelerations.
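The two dissipation scales invoked here have standard closed forms: the Kolmogorov length η = (ν³/ε)^(1/4) and, for a scalar with Schmidt number Sc = ν/D, the Batchelor length λ_B = η·Sc^(-1/2). A sketch with seawater-like values (the specific numbers are assumed for illustration):

```python
def kolmogorov_scale(nu, eps):
    """Kolmogorov length eta = (nu^3 / eps)^(1/4); nu in m^2/s, eps in W/kg."""
    return (nu ** 3 / eps) ** 0.25

def batchelor_scale(nu, eps, Sc):
    """Batchelor length for a scalar with Schmidt number Sc = nu / D."""
    return kolmogorov_scale(nu, eps) / Sc ** 0.5

# Seawater-like values (assumed): nu = 1e-6 m^2/s, dissipation eps = 1e-6 W/kg
# give eta = 1e-3 m, i.e. about a millimetre, as quoted in the abstract.
eta = kolmogorov_scale(1e-6, 1e-6)          # ~1.0e-3 m
lam_b = batchelor_scale(1e-6, 1e-6, 700.0)  # salinity-like Sc: tens of microns
```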

  4. Do Plot Scale Studies Yield Useful Data When Assessing Field Scale Practices?

    Plot-scale data have been used to develop models for assessing field- and watershed-scale nutrient losses. The objective of this study was to determine if phosphorus (P) loss results from plot-scale rainfall simulation studies are “directionally correct” when compared to field-scale P losses. Two fie...

  5. Universal scaling in sports ranking

    Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on and so forth. Herewith, we study a specific kind—sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to the modifications of some parameters. (paper)
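The toy model described, in which the probability that the higher-ranked player wins is a sigmoidal function of the rank difference, can be sketched by computing expected scores over a single round-robin; the steepness parameter k and the one-point-per-win scoring are illustrative assumptions, not the paper's fitted values:

```python
import math

def win_prob(rank_diff, k=0.1):
    """Probability that the better-ranked player wins, as a sigmoid of the
    rank difference (k is an assumed steepness; the paper fits this to data)."""
    return 1.0 / (1.0 + math.exp(-k * rank_diff))

def expected_scores(n_players=100, k=0.1):
    """Expected wins per rank in a round-robin, one point per win."""
    scores = [0.0] * n_players
    for i in range(n_players):
        for j in range(i + 1, n_players):
            p = win_prob(j - i, k)  # player i is ranked above player j
            scores[i] += p
            scores[j] += 1.0 - p
    return scores

s = expected_scores()
# Better-ranked players accumulate systematically more expected points,
# which is the mechanism behind the heavy-tailed score distributions.
```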

  6. Mirages in galaxy scaling relations

    Mosenkov, A. V.; Sotnikova, N. Ya.; Reshetnikov, V. P.

    2014-06-01

    We analysed several basic correlations between structural parameters of galaxies. The data were taken from various samples in different passbands which are available in the literature. We discuss disc scaling relations as well as some debatable issues concerning the so-called Photometric Plane for bulges and elliptical galaxies in different forms and various versions of the famous Kormendy relation. We show that some of the correlations under discussion are artificial (self-correlations), while others truly reveal some new essential details of the structural properties of galaxies. Our main results are as follows: At present, we cannot conclude that faint stellar discs are, on average, thinner than discs in high surface brightness galaxies. The 'central surface brightness-thickness' correlation appears only as a consequence of the transparent exponential disc model used to describe real galaxy discs. The Photometric Plane appears to have no independent physical sense. Various forms of this plane are merely sophisticated versions of the Kormendy relation or of the self-relation involving the central surface brightness of a bulge/elliptical galaxy and the Sérsic index n. The Kormendy relation is a physical correlation presumably reflecting the difference in the origin of bright and faint ellipticals and bulges. We present arguments that involve creating artificial samples to prove our main idea.

  7. Scaling Agile Infrastructure to People

    Jones, B; Traylen, S; Arias, N Barrientos

    2015-01-01

When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains growing around this ecosystem would be a good choice; the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine the challenges of adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow ...

  8. Scaling Agile Infrastructure to People

    Jones, B.; McCance, G.; Traylen, S.; Barrientos Arias, N.

    2015-12-01

When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains growing around this ecosystem would be a good choice; the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine the challenges of adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow for this will be examined.

  9. Universal scaling in sports ranking

    Deng, Weibing; Cai, Xu; Bulou, Alain; Wang, Qiuping A

    2011-01-01

Ranking is a ubiquitous phenomenon in human society. Browsing the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, top-paid tennis stars, and so on. Herewith, we study a specific kind: sports ranking systems in which players' scores and prize money are calculated based on their performances in various tournaments. A typical example is tennis. It is found that the distributions of both scores and prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player will top the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simul...

  10. Harmonic regression and scale stability.

    Lee, Yi-Hsuan; Haberman, Shelby J

    2013-10-01

    Monitoring a very frequently administered educational test with a relatively short history of stable operation imposes a number of challenges. Test scores usually vary by season, and the frequency of administration of such educational tests is also seasonal. Although it is important to react to unreasonable changes in the distributions of test scores in a timely fashion, it is not a simple matter to ascertain what sort of distribution is really unusual. Many commonly used approaches for seasonal adjustment are designed for time series with evenly spaced observations that span many years and, therefore, are inappropriate for data from such educational tests. Harmonic regression, a seasonal-adjustment method, can be useful in monitoring scale stability when the number of years available is limited and when the observations are unevenly spaced. Additional forms of adjustments can be included to account for variability in test scores due to different sources of population variations. To illustrate, real data are considered from an international language assessment. PMID:24092490
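As a rough illustration of harmonic regression on unevenly spaced administrations, one can regress scores on sine and cosine terms of the fractional-year administration dates by least squares. Everything below (the function name, the single annual harmonic, and the toy data) is invented for illustration; the operational model in the paper includes additional adjustment terms for population variation.

```python
import math

def harmonic_fit(t, y):
    # Least-squares fit of y ~ a + b*sin(2*pi*t) + c*cos(2*pi*t) for
    # unevenly spaced times t in fractional years (one annual harmonic).
    X = [[1.0, math.sin(2 * math.pi * ti), math.cos(2 * math.pi * ti)] for ti in t]
    # normal equations: (X^T X) beta = X^T y
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, 3))) / A[r][r]
    return beta

# unevenly spaced administrations over three years with an annual cycle
t = [0.05, 0.2, 0.55, 0.8, 1.1, 1.4, 1.6, 2.05, 2.3, 2.7, 2.9]
noise = [0.3, -0.2, 0.1, 0.4, -0.3, 0.2, -0.1, 0.3, -0.4, 0.1, 0.2]
y = [500 + 10 * math.sin(2 * math.pi * ti) + n for ti, n in zip(t, noise)]
beta = harmonic_fit(t, y)  # beta roughly [500, 10, 0]
```

The fitted seasonal component can then be subtracted from incoming scores, so that a genuinely unusual score distribution stands out from the ordinary seasonal swing.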

  11. Plasma opening switch conduction scaling

    Plasma opening switch (POS) experiments performed on the Hawk generator [Commisso et al., Phys. Fluids B 4, 2368 (1992)] (750 kA, 1.2 μs) determine the dependence of the conduction current and conduction time on plasma density, electrode dimensions, and current rise rate. The experiments indicate that for a range of parameters, conduction is controlled by magnetohydrodynamic (MHD) distortion of the plasma, resulting in a low density region where opening can occur, possibly by erosion. The MHD distortion corresponds to an axial translation of the plasma center-of-mass by half the initial plasma length, leading to a simple scaling relation between the conduction current and time, and the injected plasma density and POS electrode dimensions that is applicable to a large number of POS experiments. For smaller currents and conduction times, the Hawk data suggest a non-MHD conduction limit that may correspond to electromagnetohydrodynamic (EMH) field penetration through the POS plasma. copyright 1995 American Institute of Physics

  12. The scaling issue: scientific opportunities

    A brief history of the Leadership Computing Facility (LCF) initiative is presented, along with the importance of SciDAC to the initiative. The initiative led to the initiation of the Innovative and Novel Computational Impact on Theory and Experiment program (INCITE), open to all researchers in the US and abroad, and based solely on scientific merit through peer review, awarding sizeable allocations (typically millions of processor-hours per project). The development of the nation's LCFs has enabled available INCITE processor-hours to double roughly every eight months since its inception in 2004. The 'top ten' LCF accomplishments in 2009 illustrate the breadth of the scientific program, while the 75 million processor hours allocated to American business since 2006 highlight INCITE contributions to US competitiveness. The extrapolation of INCITE processor hours into the future brings new possibilities for many 'classic' scaling problems. Complex systems and atomic displacements to cracks are but two examples. However, even with increasing computational speeds, the development of theory, numerical representations, algorithms, and efficient implementation are required for substantial success, exhibiting the crucial role that SciDAC will play.

  13. The development of basketball attitude scale

    Erman Öncü

    2012-07-01

The objective of this study is to develop a valid and reliable measurement tool to assess university students' attitudes towards basketball. The basic starting point of the study is the need to examine the processes involved in the formation of attitudes towards basketball, one of today's most popular sports. Therefore, considering that it is particularly necessary to determine the factors affecting attitudes, a study was carried out to develop and make available a 'Basketball Attitude Scale'. In the 2005-2006 spring term, 76 female and 45 male students, 121 university students in total from different universities in Ankara, participated in the study. In the exploratory factor analysis conducted on the 35 items of the scale in order to check its validity, it was determined that the scale has a 2-factor structure, and the number of items was reduced to 21. Item load values are between 0.631 and 0.864. In order to test the reliability of the scale, the Cronbach alpha reliability and split-half test correlation (Spearman-Brown) coefficients were checked, and these values were found to be 0.95 and 0.93, respectively. In conclusion, the validity and reliability study determined that the scale prepared for assessing university students' attitudes towards basketball is a practicable measurement tool.
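The Cronbach alpha reliability reported above can be computed directly from item-level scores. A minimal sketch, using population variances and invented toy data (three items, five respondents), might look like:

```python
def cronbach_alpha(items):
    # Cronbach's alpha for a list of item-score columns of equal length;
    # items: list of lists, one inner list per item.
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# toy data: 3 items scored by 5 respondents on a 1-5 scale
alpha = cronbach_alpha([[3, 4, 5, 2, 4], [2, 4, 5, 1, 4], [3, 5, 5, 2, 3]])
```

The split-half (Spearman-Brown) coefficient would be computed analogously by correlating two halves of the item set and stepping the correlation up to full test length.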

  14. Learning scale-variant and scale-invariant features for deep image classification

    van Noord, Nanne; Postma, Eric

    2016-01-01

    Convolutional Neural Networks (CNNs) require large image corpora to be trained on classification tasks. The variation in image resolutions, sizes of objects and patterns depicted, and image scales, hampers CNN training and performance, because the task-relevant information varies over spatial scales. Previous work attempting to deal with such scale variations focused on encouraging scale-invariant CNN representations. However, scale-invariant representations are incomplete representations of ...

  15. Scaling: From quanta to nuclear reactors

    Zuber, Novak, E-mail: rohatgi@bnl.go [703 New Mark Esplanade, Rockville, MD 20850 (United States)

    2010-08-15

    This paper has three objectives. The first objective is to show how the Einstein-de Broglie equation (EdB) can be extended to model and scale, via fractional scaling, both conservative and dissipative processes ranging in scale from quanta to nuclear reactors. The paper also discusses how and why a single equation and associated fractional scaling method generate for each process of change the corresponding scaling criterion. The versatility and capability of fractional scaling are demonstrated by applying it to: (a) particle dynamics, (b) conservative (Bernoulli) and dissipative (hydraulic jump) flows, (c) viscous and turbulent flows through rough and smooth pipes, and (d) momentum diffusion in a semi-infinite medium. The capability of fractional scaling to scale a process over a vast range of temporal and spatial scales is demonstrated by applying it to fluctuating processes. The application shows that the modeling of fluctuations in fluid mechanics is analogous to that in relativistic quantum field theory. Thus, Kolmogorov dissipation frequency and length are the analogs of the characteristic time and length of quantum fluctuations. The paper briefly discusses the applicability of the fractional scaling approach (FSA) to nanotechnology and biology. It also notes the analogy between FSA and the approach used to scale polymers. These applications demonstrate the power of scaling as well as the validity of Pierre-Gilles de Gennes' ideas concerning scaling, analogies and simplicity. They also demonstrate the usefulness and efficiency of his approach to solving scientific problems. The second objective is to note and discuss the benefits of applying FSA to NPP technology. The third objective is to present a state of the art assessment of thermal-hydraulics (T/H) capabilities and needs relevant to NPP.

  16. Geological analysis and FT dating of the large-scale right-lateral strike-slip movement of the Red River fault zone

    XIANG HongFa; WAN JingLin; HAN ZhuJun; GUO ShunMin; ZHANG WanXia; CHEN LiChun; DONG XingQuan

    2007-01-01

Tectonically, the large-scale right-lateral strike-slip movement along the Red River fault zone is characterized in its late phase by southeastward extension and deformation of the Northwestern Yunnan normal fault depression on its northern segment, and by dextral shear displacement on its central-southern segment. Research on the relations between stratum deformation and fault movement on typical fault segments, such as Jianchuan, southeast Midu, Yuanjiang River, and Yuanyang, shows that two episodes of dextral faulting dominated by normal shearing have occurred along the Red River fault zone since the Miocene Epoch. Fission track dating (abbreviated to FT dating below) was conducted on apatite samples collected from the above fault segments and related to these movements. Based on the measured single-grain ages and the confined track lengths, we chose the Laslett annealing model to retrieve the thermal history of the samples; the results show that the fault zone experienced two episodes of marked shear displacement, one at 5.5 ± 1.5 MaBP and the other at 2.1 ± 0.8 MaBP. The central-southern segment saw two intensive uplifts of the mountain mass in the Yuanjiang River-Yuanyang region at 3.6-3.8 MaBP and 1.6-2.3 MaBP, which correspond to the two dextral normal displacement events since the late Miocene Epoch mentioned above.

  17. Thermodynamic scaling behavior in genechips

    Van Hummelen Paul

    2009-01-01

Background: Affymetrix GeneChips are characterized by probe pairs, a perfect match (PM) and a mismatch (MM) probe differing by a single nucleotide. Most data preprocessing algorithms neglect MM signals, as it was shown that MMs cannot be used as estimators of non-specific hybridization as originally proposed by Affymetrix. The aim of this paper is to study in detail, on a large number of experiments, the behavior of the average PM/MM ratio. This is taken as an indicator of the quality of the hybridization and, when compared between different chip series, of the quality of the chip design. Results: About 250 different GeneChip hybridizations performed at the VIB Microarray Facility for Homo sapiens, Drosophila melanogaster, and Arabidopsis thaliana were analyzed. The investigation of such a large set of data from the same source minimizes systematic experimental variations that may arise from differences in protocols or between laboratories. The PM/MM ratios are derived theoretically from thermodynamic laws, and a link is made with the sequences of the PM and MM probes, more specifically with their central nucleotide triplets. Conclusion: The PM/MM ratios, subdivided according to the different central nucleotide triplets, qualitatively follow those deduced from the hybridization free energies in solution. It is also shown that the PM and MM histograms are related by a simple scale transformation, in agreement with what is to be expected from hybridization thermodynamics. Different quantitative behavior is observed for the different chip organisms analyzed, suggesting that some organism chips have superior probe design compared to others.

  18. Large-Scale Information Systems

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  19. Does abbreviation of preoperative fasting to two hours with carbohydrates increase the anesthetic risk?

    Kátia Gomes Bezerra de Oliveira

    2009-10-01

BACKGROUND AND OBJECTIVES: The objective of the present study was to evaluate the incidence of possible anesthetic complications related to the abbreviation of preoperative fasting to two hours with a solution of 12.5% dextrinomaltose within the ACERTO (from the Portuguese for Acceleration of Total Postoperative Recovery) project. METHODS: All patients undergoing different types of digestive tract and abdominal wall surgeries within a new protocol of perioperative conduct, established by the ACERTO project between August 2005 and December 2007, were evaluated. All patients received oral nutritional supplementation (12.5% dextrinomaltose) six and two hours before the procedure. Data were collected prospectively without the knowledge of the professionals in the department. The length of preoperative fasting and anesthetic complications related to the short fasting time (pulmonary aspiration) were recorded. RESULTS: Three hundred and seventy-five patients, 174 male (46.4%) and 201 female (53.6%), ages 18 to 90 years, were evaluated. The mean preoperative fasting time was four hours, ranging from 2 to 20 hours. Pulmonary aspiration was not observed during the procedures. The length of fasting was longer (p < 0.01) when combined anesthesia (blockade + general) was used.
CONCLUSIONS: Adopting the multidisciplinary preoperative measures of the ACERTO project was not associated with any preoperative

  20. A cumulative scale of severe sexual sadism.

    Nitschke, Joachim; Osterheider, Michael; Mokros, Andreas

    2009-09-01

    The article assesses the scale properties of the criterion set for severe sexual sadism in a sample of male forensic patients (N = 100). Half of the sample consists of sexual sadists; the remainder is sampled at random from the general group of nonsadistic sex offenders. Eleven of 17 criteria (plus the additional item of inserting objects into the victim's bodily orifices) of Marshall, Kennedy, Yates, and Serran's list form a cumulative scale. More specifically, this scale comprises all the 5 core criteria that Marshall and his colleagues considered particularly relevant. The resulting 11-item scale of severe sexual sadism is highly reliable (r(tt) = .93) and represents a strong scale (H = .83) of the Guttman type (coefficient of reproducibility = .97). The 11-item scale distinguishes perfectly between sexual sadists and nonsadistic sex offenders in the sample. PMID:19605691
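A Guttman coefficient of reproducibility of the kind reported for the 11-item scale can be sketched for binary item data. The convention below (ideal response pattern derived from item popularity order, Goodenough-Edwards style error counting) is one common choice, not necessarily the authors' exact scoring procedure, and the data are invented.

```python
def reproducibility(data):
    # Coefficient of reproducibility for binary item data;
    # data: list of respondent rows, each a list of 0/1 item responses.
    n_items = len(data[0])
    # order items from most to least frequently endorsed
    popularity = sorted(range(n_items), key=lambda j: -sum(row[j] for row in data))
    errors = 0
    for row in data:
        s = sum(row)
        # ideal Guttman pattern: endorse exactly the s most popular items
        ideal = {j: (rank < s) for rank, j in enumerate(popularity)}
        errors += sum(1 for j in range(n_items) if bool(row[j]) != ideal[j])
    return 1 - errors / (len(data) * n_items)

# a perfectly cumulative (Guttman) response pattern yields CR = 1.0
perfect = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
cr = reproducibility(perfect)
```

A CR of about .90 or above is conventionally taken as evidence of a usable Guttman scale, so the .97 reported above indicates a very strong cumulative structure.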