WorldWideScience

Sample records for macroseismic scale ems-98

  1. The 7 and 11 May 1984 earthquakes in Abruzzo-Latium (Central Italy): reappraisal of the existing macroseismic datasets according to the EMS98

    Science.gov (United States)

    Graziani, Laura; Tertulliani, Andrea; Maramai, Alessandra; Rossi, Antonio; Arcoraci, Luca

    2017-09-01

The aim of this paper is to provide a complete and reliable macroseismic knowledge of the events that struck a large area in Central Italy on 7 and 11 May 1984. Previous studies, together with original accounts integrated with new and unpublished information, have been gathered and examined in order to re-evaluate macroseismic intensities in terms of the European Macroseismic Scale (EMS98). New intensity maps have been compiled; the total number of localities with available information for both shocks increases from 1254 in the previous study to 1576. On the basis of the new dataset, the macroseismic magnitude of the first shock is MW 5.6, which is lower than the previous macroseismic estimate (MW 5.7). Moreover, the problem of assessing macroseismic intensity in the presence of multiple shocks has also been investigated, and an unconventional approach to presenting the macroseismic data is proposed: an overall picture of the cumulative effects produced by the whole seismic sequence is given to support a partial but faithful reconstruction of the second shock. This approach is inspired by common experience in interpreting historical seismic sequences and gives a picture of the impact of the 1984 events on the territory.

  2. Analysis of the 2016 Amatrice earthquake macroseismic data

    Directory of Open Access Journals (Sweden)

    Lorenzo Hofer

    2016-12-01

Full Text Available On August 24, 2016, a sudden MW 6.0 seismic event hit central Italy, causing 298 victims and significant damage to residential buildings and cultural heritage. In the days following the mainshock, a macroseismic survey was conducted by teams of the University of Padova, according to the European Macroseismic Scale (EMS98). In this contribution, a critical analysis of the collected macroseismic data is presented and some comparisons are made with the recent 2012 Emilia sequence.

  3. Automatic single questionnaire intensity (SQI, EMS98 scale) estimation using ranking models built on the existing BCSF database

    Science.gov (United States)

    Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.

    2013-12-01

In charge of intensity estimations in France, the BCSF has collected and manually analyzed more than 47,000 individual online macroseismic questionnaires since 2000, covering intensities up to VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS98 scale. The reliability of the automatic intensity estimation is important, as these values are today used for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected from a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a map of communal intensity by averaging the SQIs for each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time consuming and no longer suitable considering the increasing number of testimonies at BCSF; it can, however, take incoherent answers into account. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave us moderate scores (between 50 and 60% of SQIs correctly determined, and 35 to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL) and 3) support vector machines (SVMs). The first two methods are standard, while the third is more recent. These methods can be applied because the BCSF already has more than 47,000 forms in its database and because their questions and answers are well suited to statistical analysis. The ranking models could then be used as an automatic method constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQI can be evaluated thanks to
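
    [Editor's illustration] A minimal sketch of the first ranking method named above (multinomial logistic regression) applied to questionnaire classification. The feature encoding, the number of coded answers, and the synthetic labels are all hypothetical stand-ins; the BCSF questionnaire schema and expert SQI labels are not given in the abstract. The exact/within-one-degree scores mirror how the abstract reports performance.

    ```python
    # Hedged sketch: ranking-model estimation of single-questionnaire intensity (SQI).
    # Column meanings and labels are hypothetical; only the method (multinomial
    # logistic regression over coded answers) comes from the abstract.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000                                  # stand-in for the ~47,000 BCSF forms
    X = rng.integers(0, 4, size=(n, 12))      # 12 coded answers per questionnaire (assumed)
    # Synthetic "expert SQI" labels I..VI loosely tied to the answers:
    y = np.clip((X.mean(axis=1) + rng.normal(0, 0.4, n)).round(), 1, 6).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LogisticRegression(max_iter=1000)  # multinomial for multiclass targets
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    exact = np.mean(pred == y_te)                 # share of SQIs matched exactly
    within1 = np.mean(np.abs(pred - y_te) <= 1)   # share within +/- one intensity degree
    print(f"exact: {exact:.2f}, within one degree: {within1:.2f}")
    ```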

  4. Preliminary macroseismic survey of the 2016 Amatrice seismic sequence

    Directory of Open Access Journals (Sweden)

    Mariano Angelo Zanini

    2016-11-01

Full Text Available After the recent destructive L’Aquila 2009 and Emilia-Romagna 2012 earthquakes, a sudden Mw 6.0 seismic event hit Central Italy on August 24, 2016. A low population density characterizes the area but, due to the nighttime occurrence of the event, about 300 victims were registered. This work presents the first preliminary results of a macroseismic survey conducted by two teams of the University of Padova in the territories that suffered the heaviest damage. Macroseismic intensities were assessed according to the European Macroseismic Scale (EMS98) for 180 sites.

  5. Advances and Limitations of Modern Macroseismic Data Gathering

    Science.gov (United States)

    Wald, D. J.; Dewey, J. W.; Quitoriano, V. P. R.

    2016-12-01

All macroseismic data are not created equal. At about the time that the European Macroseismic Scale 1998 (EMS-98; itself a revision of EMS-92) formalized a procedure to account for building vulnerability and damage grade statistics in assigning intensities from traditional field observations, a parallel universe of internet-based intensity reporting was coming online. The divergence of intensities assigned by field reconnaissance and intensities based on volunteered reports poses unique challenges. The U.S. Geological Survey's Did You Feel It? (DYFI) system and its Italian (National Institute of Geophysics and Volcanology) counterpart use questionnaires based on the traditional format, submitted by volunteers. The Italian strategy uses fuzzy logic to assign integer values of intensity from questionnaire responses, whereas DYFI assigns weights to macroseismic effects and computes real-valued intensities to a precision of 0.1 MMI units. DYFI responses may be grouped together by postal code, or by smaller latitude-longitude boxes; calculated intensities may vary depending on how observations are grouped. New smartphone-based procedures depart further from tradition by asking respondents to select from cartoons corresponding to various intensity levels that best fit their experience. While nearly instantaneous, these thumbnail-based intensities are strictly integer values and do not record specific macroseismic effects. Finally, a recent variation on traditional intensity assignments derives intensities not from field surveys or questionnaires sent to target audiences but rather from media reports, photojournalism, and internet posts that may or may not constitute the representative observations needed for consistent EMS-98 assignments. We review these issues and suggest due-diligence strategies for utilizing varied macroseismic data sets within real-time applications and in quantitative hazard and engineering analyses.
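
    [Editor's illustration] A minimal sketch of the weighted-sum computation the abstract attributes to DYFI (weights on macroseismic effects, then a real-valued intensity at 0.1-unit precision). The weights and regression coefficients below follow the early DYFI literature (Wald et al., 1999); the operational system has since been recalibrated, so treat the numbers as indicative only.

    ```python
    # Hedged sketch of a DYFI-style community decimal intensity from weighted effects.
    # Weights/coefficients are illustrative values from the early DYFI papers.
    import math

    WEIGHTS = {"felt": 5, "motion": 1, "reaction": 1, "stand": 2,
               "shelf": 5, "picture": 2, "furniture": 3, "damage": 5}

    def community_decimal_intensity(indices: dict) -> float:
        """Map aggregated questionnaire effect indices to a real-valued intensity."""
        cws = sum(WEIGHTS[k] * v for k, v in indices.items())   # community weighted sum
        if cws <= 0:
            return 1.0
        return round(max(1.0, min(9.0, 3.40 * math.log(cws) - 4.38)), 1)

    # One aggregated set of responses for a postal code or lat-lon box:
    print(community_decimal_intensity(
        {"felt": 1, "motion": 2, "reaction": 2, "stand": 1,
         "shelf": 1, "picture": 1, "furniture": 0, "damage": 0}))   # -> about 5.4
    ```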

  6. Macroseismic investigation of the 2008-2010 low magnitude seismic swarm in the Brabant Massif, Belgium. The link between macroseismic intensity and geomorphology

    Science.gov (United States)

    Van Noten, Koen; Lecocq, Thomas; Vleminckx, Bart; Camelbeeck, Thierry

    2013-04-01

Between July 2008 and January 2010 a seismic swarm took place in a region 20 km south of Brussels, Belgium. The sequence started on the 12th of July 2008 with a ML = 2.2 event and was followed the day after by the largest event in the sequence (ML = 3.2). Thanks to a locally installed temporary seismic monitoring system, more than 300 low-magnitude events, down to ML = -0.7, were detected. Relocation of the hypocenters and analysis of the focal mechanisms show that the majority of these earthquakes took place at several kilometers' depth (3 to 6 km) along a (possibly blind) 1.5 km long NW-SE fault (zone) situated in the Cambrian basement rocks of the Brabant Massif. Remarkably, 60 events (0.6 ≤ ML ≤ 3.2) were felt, or sometimes only heard, by the local population, as detected by the "Did you feel it?" macroseismic inquiries on the ROB seismology website (www.seismology.be). For each event a classical macroseismic intensity map has been constructed based on the average macroseismic intensity of each community. Within a single community, however, the reported macroseismic intensities often vary locally, ranging between non-damaging intensities of I and IV (on the EMS-98 scale). Using the average macroseismic intensity of a community therefore often oversimplifies the local intensity, especially in hilly areas in which local site effects could have influenced the impact of the earthquakes at the surface. In this presentation we investigate whether people's perception of the small events (sound, vibrations) was influenced by local geomorphological site effects. First, based on available borehole and outcrop data, a sediment thickness map of the Cenozoic and Quaternary cover above the basement rocks of the Brabant Massif is constructed for a 200 km² area around the different epicenters. Second, several electrical resistivity tomography (ERT) profiles are conducted in order to locally improve the

  7. Critical considerations on the evaluation of macroseismic effects

    Directory of Open Access Journals (Sweden)

    M. C. SPADEA

    1980-06-01

Full Text Available The definition of some of the standards used for evaluating local effects and optimizing the relative macroseismic procedures is critically considered, also from the different interpretative points of view to have come out of the «Earthquake Catalogue» work group of the Italian Geodynamics Project (PFG). Particular stress has been laid on the significance and reliability of the main macroseismic parameters, which depend most directly on the investigative criteria used and on their ability to characterize efficiently the interaction of earthquakes and environment. Essentially, the analysis consists of critical considerations and field observations, the fruit of macroseismic investigations carried out mainly in the Calabro-Peloritan Arc region. Seismic intensity, the use of macroseismic scales, the investigatory criteria, and the macroseismic field and its anomalies are the topics chosen for a study which, even if limited in certain aspects, it is hoped will stimulate further thought and evaluation.

  8. Macroseismic intensity evaluation with the «fuzzy sets logic»

    Directory of Open Access Journals (Sweden)

    E. Guidoboni

    1995-06-01

Full Text Available The use of a macroseismic scale often requires subjective choices and judgments which may produce inhomogeneities and biases in the resulting intensities. To get over this problem it would be necessary to formalize the decision process leading to the estimation of the macroseismic intensity but, for historical records, this is often hindered by the poorness and incompleteness of the available information and by the intrinsic ambiguity of common language. Moreover, all the intensity scales have always been created and updated to be used by experts. This paper proposes an approach to the evaluation of macroseismic intensity which makes use of the «fuzzy sets logic». This approach reproduces the tasks performed by the human brain which, taking advantage of the tolerance of imprecision, is able to handle information bearing only an approximate relation to the data. This allows us to understand and make explicit some steps of the evaluation process that are unconsciously followed by the macroseismic expert.

  9. Simulation of macroseismic field in Central Greece

    Directory of Open Access Journals (Sweden)

    J. Drakopoulos

    1996-06-01

Full Text Available The distribution of seismic intensity is generally influenced by major geological and tectonic features and, on a smaller scale, by local geological conditions, such as the type of surface soil, the surface-to-bedrock soil structure in sedimentary basins and the depth of the saturated zone. The present paper attempts to determine the distribution of macroseismic intensities based on published attenuation laws in the area of Central Greece, using the epicentral intensity, magnitude, length and direction of the fault, and a considerable number of observation sites for which the above-mentioned information is available. The expected intensity values were then compared to those observed at the same sites for four earthquakes in Volos, Central Greece, for which the fault plane solutions are also known. The deviations of the observed values from the theoretical model were then related to the local geological conditions, and the corresponding correction factor was determined for each site.
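
    [Editor's illustration] A minimal sketch of the kind of intensity attenuation law such studies apply, and of the residual that becomes the per-site correction factor. The functional form is the classical Kövesligethy/Blake type; the coefficients a, b and the focal depth h are hypothetical placeholders, not values fitted for Central Greece.

    ```python
    # Hedged sketch: predicted intensity vs. distance, plus a site-correction residual.
    import math

    def expected_intensity(I0: float, r_epi_km: float, h_km: float = 10.0,
                           a: float = 3.0, b: float = 0.002) -> float:
        """Intensity at epicentral distance r given epicentral intensity I0:
        geometric-spreading term plus anelastic-absorption term (coefficients assumed)."""
        r_hypo = math.hypot(r_epi_km, h_km)               # slant distance to the focus
        return I0 - a * math.log10(r_hypo / h_km) - b * (r_hypo - h_km)

    # Residual (observed minus predicted) plays the role of the site correction factor:
    I_obs, I0 = 6.0, 8.0
    pred = expected_intensity(I0, r_epi_km=35.0)
    print(f"predicted {pred:.1f}, site correction {I_obs - pred:+.1f}")
    ```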

  10. Re-evaluation of the macroseismic effects produced by the March 4, 1977, strong Vrancea earthquake in Romanian territory

    Directory of Open Access Journals (Sweden)

    Aurelian Pantea

    2013-04-01

Full Text Available In this paper, the macroseismic effects of the subcrustal earthquake in Vrancea (Romania) that occurred on March 4, 1977, have been re-evaluated. This was the second strongest seismic event that occurred in this area during the twentieth century, following the event of November 10, 1940, and it is thus of importance for our understanding of the seismicity of the Vrancea zone. The earthquake was felt over a large area, which included the territories of the neighboring states, and it produced major damage. Due to its effects, macroseismic studies were developed by Romanian researchers soon after its occurrence, with foreign scientists also involved, such as Medvedev, the founder of the Medvedev-Sponheuer-Karnik (MSK) seismic intensity scale. The original macroseismic questionnaires were re-examined to take into account the recommendations for intensity assessment according to the MSK-64 macroseismic scale used in Romania. After the re-evaluation of the macroseismic field of this earthquake, an intensity dataset was obtained for 1,620 sites in Romanian territory. The re-evaluation was necessary, as it confirmed that the previous macroseismic map underestimated the intensities. On the new map, only the intensity data points are plotted, without tracing the isoseismals.

  11. The 24 August 2016 Amatrice earthquake: macroseismic survey in the damage area and EMS intensity assessment

    Directory of Open Access Journals (Sweden)

    QUEST W.G. :

    2016-11-01

Full Text Available The 24 August 2016 earthquake very heavily struck the central sector of the Apennines between the Lazio, Umbria, Marche and Abruzzi regions, devastating the town of Amatrice, the nearby villages and other localities along the Tronto valley. In this paper we present the results of the macroseismic field survey carried out using the European Macroseismic Scale (EMS) to take the heterogeneity of the building stock into account. We focused on the epicentral area, where geological conditions may also have contributed to the severity of the damage. On the whole, we investigated 143 localities; the maximum intensity, 10 EMS, has been estimated for Amatrice, Pescara del Tronto and some villages in between. The severely damaged area (8-9 EMS) covers a strip trending broadly N-S, extending 15 km in length and 5 km in width; minor damage occurred over an area extending up to 35 km northward from the epicenter.

  12. Macroseismic intensity investigation of the November 2014, M=5.7, Vrancea (Romania crustal earthquake

    Directory of Open Access Journals (Sweden)

    Angela Petruta Constantin

    2016-11-01

Full Text Available On November 22, 2014 at 21:14:17 local time (19:14:17 GMT) an ML=5.7 crustal earthquake occurred in the area of Marasesti city, Vrancea county (Romania); the epicenter was located at 45.87° north latitude and 27.16° east longitude, with a focal depth of 39 km. This earthquake was the main shock of a sequence that lasted until the end of January. During the sequence, characterized by the absence of foreshocks, 75 earthquakes were recorded within 72 hours, the largest of which occurred on the same day as the main shock, at 22:30 (ML=3.1). The crustal seismicity of the Vrancea seismogenic region is characterized by moderate earthquakes with magnitudes that have not exceeded MW 5.9, this value being assigned to an earthquake that occurred in historical times, on March 1, 1894 (Romplus catalogue). Immediately after the 2014 earthquake, the National Institute for Earth Physics (NIEP) sent macroseismic questionnaires to all affected areas in order to define the macroseismic field of ground shaking. According to the questionnaire survey, the intensity in the epicentral area reached VI MSK, and the seismic event was felt throughout the extra-Carpathian area. This earthquake caused general panic and minor to moderate damage to buildings in the epicentral area and the northeastern part of the country. The main purpose of this paper is to present the macroseismic map of the earthquake based on the MSK-64 intensity scale.

  13. The MCS macroseismic survey of the Emilia 2012 earthquakes

    Directory of Open Access Journals (Sweden)

    Paolo Galli

    2012-10-01

Full Text Available Most of the inhabitants of northern Italy were woken during the night of May 20, 2012, by the Mw 6.1 earthquake [QRCMT 2012] that occurred in the eastern Po Plain. The mainshock was preceded a few hours earlier by a Mw 4.3 shock, and it was followed by a dozen Ml >4 aftershocks in May and June, amongst which 11 had Ml ≥4.5. On May 29, 2012, a second, Mw 6.0 mainshock struck roughly the same area [QRCMT 2012], resulting in further victims, most of whom were caught under the collapse of industrial warehouses. Such earthquakes were an unexpected event in this region, as testified by the lack of local epicenters in the Italian seismic catalog [Rovida et al. 2011: CPTI11 from now on] and by the consequent low level of the local seismic classification (seismic zone 3) [DPC 2012]. Apart from the warehouses and hundreds of old, crumbling farmsteads, severe damage was concentrated in ancient, tall buildings, such as churches, bell towers, castles, towers and palaces. Residential buildings generally suffered only light and/or moderate effects, apart from some exceptional cases. Using the Mercalli-Cancani-Sieberg (MCS) scale [Sieberg 1930], we began a macroseismic survey in the early morning of May 20, 2012, that ultimately included visits to almost 200 localities, 52 of which were carried out before the second mainshock. […

  14. Analysis of the impact of fault mechanism radiation patterns on macroseismic fields in the epicentral area of 1998 and 2004 Krn Mountains earthquakes (NW Slovenia).

    Science.gov (United States)

    Gosar, Andrej

    2014-01-01

Two moderate magnitude (Mw = 5.6 and 5.2) earthquakes occurred in the Krn Mountains in 1998 and 2004, with maximum intensities of VII-VIII and VI-VII EMS-98, respectively. Comparison of the two macroseismic fields showed unexpected differences in the epicentral area which cannot be explained by site effects. A considerably different distribution of the highest intensities can be noticed with respect to the strike of the seismogenic fault, and in some localities even higher intensities were estimated for the smaller earthquake. Although the hypocentres of the two earthquakes were only 2 km apart and were located on the same seismogenic Ravne fault, their focal mechanisms showed a slight difference: an almost pure dextral strike-slip for the first event and a strike-slip with a small reverse component on a steep fault plane for the second. Seismotectonically, the difference is explained as active growth of the Ravne fault at its NW end. The radiation patterns of both events were studied to explain their possible impact on the observed variations in macroseismic fields and damage distribution. Radiation amplitude lobes were computed for three orthogonal directions: radial P, SV, and SH. The highest intensities of both earthquakes were systematically observed in the directions of four (1998) or two (2004) large amplitude lobes in the SH component (which corresponds mainly to Love waves), which have significantly different orientations for the two events. On the other hand, the radial P direction, which is almost purely symmetrical for the strike-slip mechanism of the 1998 event, showed that for the 2004 event the small reverse component of movement resulted in a very pronounced amplitude lobe in the SW direction, where two settlements are located which showed higher intensities for the 2004 event than for the 1998 one. Although both macroseismic fields are very complex due to the influences of multiple earthquakes, retrofitting activity after 1998, site effects, and sparse

  15. Design and first tests of a Macroseismic Sensor System

    Science.gov (United States)

    Brueckl, Ewald; Polydor, Stefan; Ableitinger, Klaus; Rafeiner-Magor, Walter; Kristufek, Werner; Mertl, Stefan; Lenhardt, Wolfgang

    2017-04-01

Seismic observatories are located in remote, low-noise areas for good reason and do not probe areas of dense and sensitive infrastructure. Complementary macroseismic data provide dense, qualitative information on ground motion in populated areas. Motivated by the QCN (Quake Catcher Network), a new low-cost sensor system (Macroseismic Sensor System = MSS) has been developed to support the evaluation of macroseismic data with quantitative information on ground movement in populated and industrial areas. Scholars, alumni and teachers from a technical high school contributed substantially to this development within the Sparkling Science project Schools & Quakes and the Citizen Science project QuakeWatch Austria. The MSS uses horizontal 4.5 Hz geophones and 16-bit A/D conversion; 100 Hz sampling, formatting to MiniSEED, and continuous data transmission via LAN or WLAN to a server are controlled by an integrated microcomputer (Raspberry Pi). Real-time generation of shake and source maps (based on proxies of the PGV in successive time windows) allows for differentiation between local seismic events (e.g., traffic noise, a shock close to the sensor) and signals from earthquakes or quarry blasts. The inherent noise of the MSS is about 1% of the PGV corresponding to the lower boundary of intensity I = 2, which is below the ambient noise level at stations in highly populated or industrial areas. The MSS is already being tested at locations around a quarry with regular production blasts. An expansion to a local network in the Vienna Basin will be the next step.
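
    [Editor's illustration] A minimal sketch of the PGV-proxy computation described above: the peak absolute velocity in successive fixed-length windows of a 100 Hz geophone stream. The window length and counts-to-velocity gain are assumptions, not MSS design values; only the 100 Hz rate comes from the abstract.

    ```python
    # Hedged sketch: per-window PGV proxies from a digitized geophone trace.
    import numpy as np

    FS = 100                      # Hz, sampling rate from the abstract
    GAIN = 1.0e-7                 # m/s per count -- hypothetical calibration
    WIN = 1 * FS                  # 1-s windows (assumed)

    def pgv_per_window(counts: np.ndarray) -> np.ndarray:
        """Return the peak absolute ground velocity of each successive window."""
        v = counts.astype(float) * GAIN
        n = len(v) // WIN
        return np.abs(v[: n * WIN]).reshape(n, WIN).max(axis=1)

    # A spike at one station raises a single window; a real event raises several
    # stations at once, which is how local disturbances are told apart from
    # earthquakes or quarry blasts.
    trace = np.random.default_rng(1).integers(-50, 50, 10 * FS)
    trace[350] = 20000            # local shock close to the sensor
    print(pgv_per_window(trace))
    ```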

  16. Procedure to estimate maximum ground acceleration from macroseismic intensity rating: application to the Lima, Perú data from the October-3-1974-8.1-Mw earthquake

    Directory of Open Access Journals (Sweden)

    L. Ocola

    2008-01-01

Full Text Available Post-disaster reconstruction management of urban areas requires timely information on the ground response microzonation to strong levels of ground shaking, to minimize the vulnerability of the rebuilt environment to future earthquakes. In this paper, a procedure is proposed to quantitatively estimate the severity of ground response in terms of peak ground acceleration, computed from macroseismic rating data, soil properties (acoustic impedance) and the predominant frequency of shear waves at a site. The basic mathematical relationships are derived from the properties of wave propagation in a homogeneous and isotropic medium. We define a Macroseismic Intensity Scale IMS as the logarithm of the quantity of seismic energy that flows through a unit area normal to the direction of wave propagation in unit time. The derived constants that relate the IMS scale and peak acceleration agree well with coefficients derived from a linear regression between MSK macroseismic ratings and peak ground acceleration for historical earthquakes recorded at a strong motion station at IGP's former headquarters since 1954. The procedure was applied to the 3 October 1974 Lima macroseismic intensity data at places where geotechnical data and predominant ground frequency information were available. The observed and computed peak acceleration values at nearby sites agree well.
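
    [Editor's illustration] A worked sketch of the chain of relations the abstract describes: intensity defined as the logarithm of energy flux, and peak acceleration recovered from that flux via the site's acoustic impedance and predominant frequency. The scaling constant K is hypothetical; the paper derives its own constants by regression against MSK ratings.

    ```python
    # Hedged sketch: PGA from a log-energy-flux intensity, acoustic impedance rho*vs,
    # and predominant frequency f0, for a harmonic plane wave (F = rho*vs*v^2).
    import math

    def pga_from_intensity(I_ms: float, rho: float, vs: float, f0: float,
                           K: float = -3.0) -> float:
        """rho: density (kg/m^3), vs: shear velocity (m/s), f0: predominant
        frequency (Hz). v = sqrt(F/(rho*vs)); a = 2*pi*f0*v. K is an assumed offset."""
        flux = 10.0 ** (I_ms + K)              # W/m^2, from the log definition
        v_peak = math.sqrt(flux / (rho * vs))  # peak particle velocity, m/s
        return 2.0 * math.pi * f0 * v_peak     # peak acceleration, m/s^2

    # Example: soft-soil site (rho=1800 kg/m^3, vs=300 m/s, f0=2.5 Hz), I_ms = 7:
    print(f"{pga_from_intensity(7.0, 1800.0, 300.0, 2.5):.2f} m/s^2")  # ~2.1 m/s^2
    ```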

  17. Application of Environmental Seismic Intensity scale (ESI 2007) to Krn Mountains 1998 Mw = 5.6 earthquake (NW Slovenia) with emphasis on rockfalls

    Science.gov (United States)

    Gosar, A.

    2012-05-01

The 12 April 1998 Mw = 5.6 Krn Mountains earthquake, with a maximum intensity of VII-VIII on the EMS-98 scale, caused extensive environmental effects in the Julian Alps. The application of intensity scales based mainly on damage to buildings was limited in the epicentral area, because it is a high mountain area and thus very sparsely populated. On the other hand, the effects on the natural environment were prominent and widespread. These facts and the introduction of the new Environmental Seismic Intensity scale (ESI 2007) motivated research aimed at evaluating the applicability of ESI 2007 to this event. All environmental effects were described, classified and evaluated through a field survey, analysis of aerial images and analysis of macroseismic questionnaires. These effects include rockfalls, landslides, secondary ground cracks and hydrogeological effects. It was realized that only rockfalls (78 were registered) are widespread enough to be used for intensity assessment, together with the total size of the affected area, which is around 180 km². Rockfalls were classified into five categories according to their volume. The volumes of the two largest rockfalls were quantitatively assessed, by comparison of Digital Elevation Models, to be 15 × 10⁶ m³ and 3 × 10⁶ m³. The distribution of very large, large and medium-size rockfalls clearly defined an elliptical zone, elongated parallel to the strike of the seismogenic fault, for which intensity VII-VIII was assessed. This isoseismal line was compared to the tentative EMS-98 isoseism derived from damage-related macroseismic data. The VII-VIII EMS-98 isoseism was defined by four points alone, but a similar elongated shape was obtained. This isoseism is larger than the corresponding ESI 2007 isoseism, but its size is strongly controlled by a single intensity point lying quite far from the others, at a location where local amplification is likely. The ESI 2007 scale has proved to be an effective tool for intensity assessment in

  18. Macroseismic survey of the April 6, 2009 L’Aquila earthquake (central Italy)

    Science.gov (United States)

    Camassi, R.; Azzaro, R.; Bernardini, F.; D'Amico, S.; Ercolani, E.; Rossi, A.; Tertulliani, A.; Vecchi, M.; Galli, P.

    2009-12-01

On April 6, 2009, at 01:33 GMT, central Italy was hit by a strong earthquake (Ml 5.8, Mw 6.3), the mainshock of a seismic sequence of over 20,000 aftershocks recorded in about five months. The event, located in the interior of the Abruzzi region just a few kilometres SW of the town of L’Aquila, produced destruction and heavy damage over a 30 km wide area and was felt in almost all of Italy, as far as the coasts of Slovenia, Croatia and Albania. In all, 308 people lost their lives. A macroseismic survey was carried out soon after the earthquake by the QUEST group (QUick Earthquake Survey Team) with the aim of defining, for Civil Protection purposes, the damage scenario over a densely urbanised territory. Damage generally depended on the high vulnerability of the buildings, both for problems related to the old age of the buildings, as in the historical centre of L’Aquila, and for site effects, as in some quarters of the town and in the nearby villages. Rubble-stone and masonry buildings suffered the heaviest damage (many old small villages almost entirely collapsed), while reinforced concrete (RC) frame buildings generally experienced moderate structural damage except in particular conditions. The macroseismic effects reached intensity IX-X MCS (Mercalli-Cancani-Sieberg scale) at Onna and Castelnuovo, while many other villages reached VIII-IX MCS, amongst them the historical centre of L’Aquila. This town was investigated in detail due to the striking difference in damage between the historical centre and the more recent surrounding areas. In all, more than 300 localities were investigated (Galli and Camassi, 2009). The earthquake also produced effects on the natural surroundings (EMERGEO WG, 2009). Two types of phenomena were detected: (i) surface cracks, mainly observed along previously mapped faults, and (ii) slope instability processes, such as landslides and secondary fractures. The pattern of macroseismic effects

  19. Towards Coupling of Macroseismic Intensity with Structural Damage Indicators

    Science.gov (United States)

    Kouteva, Mihaela; Boshnakov, Krasimir

    2016-04-01

Knowledge of ground motion acceleration time histories during earthquakes is essential to understanding the earthquake-resistant behaviour of structures. Peak and integral ground motion parameters, such as peak ground motion values (acceleration, velocity and displacement), measures of the frequency content of ground motion, the duration of strong shaking and various intensity measures, play important roles in the seismic evaluation of existing facilities and the design of new systems. Macroseismic intensity is an earthquake measure related to the description of seismic hazard and seismic risk. A detailed understanding of the correlations between earthquake damage potential and macroseismic intensity is an important issue in engineering seismology and earthquake engineering. Reliable earthquake hazard estimation is the major prerequisite for successful disaster risk management. The use of advanced earthquake engineering approaches for structural response modelling is essential for reliable evaluation of the damage accumulated in existing buildings and structures due to the history of seismic actions that occurred during their lifetime. Full nonlinear analysis, taking into account a single event or a series of earthquakes, together with the large set of elaborated damage indices, is a suitable contemporary tool for this demanding task. This paper presents some results on the correlation between observed damage states, ground motion parameters and selected analytical damage indices. Damage indices are computed on the basis of nonlinear time history analysis of a test reinforced-concrete structure, characteristic of the building stock of the Mediterranean region designed according to the earthquake-resistant requirements of the mid-twentieth century.
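
    [Editor's illustration] A minimal sketch of one widely used analytical damage index of the kind such studies correlate with macroseismic intensity: the Park-Ang index, which combines peak deformation demand with dissipated hysteretic energy. The threshold bands mapping the index to damage states are indicative literature values, not the paper's calibration.

    ```python
    # Hedged sketch: Park-Ang damage index DI = d_max/d_u + beta * E_h / (F_y * d_u).
    def park_ang(delta_max: float, delta_u: float, e_hyst: float,
                 f_y: float, beta: float = 0.15) -> float:
        """delta_max: peak displacement demand (m); delta_u: ultimate monotonic
        displacement capacity (m); e_hyst: dissipated hysteretic energy (J);
        f_y: yield strength (N); beta: model parameter (typ. 0.05-0.15)."""
        return delta_max / delta_u + beta * e_hyst / (f_y * delta_u)

    def damage_state(di: float) -> str:
        # Indicative bands from the damage-index literature (illustrative only).
        for limit, label in [(0.1, "none/slight"), (0.25, "minor"),
                             (0.4, "moderate"), (1.0, "severe")]:
            if di < limit:
                return label
        return "collapse"

    di = park_ang(delta_max=0.06, delta_u=0.15, e_hyst=4.0e4, f_y=8.0e5)
    print(f"DI = {di:.2f} -> {damage_state(di)}")   # DI = 0.45 -> severe
    ```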

  20. Reevaluation of the macroseismic effects of the 1887 Sonora, Mexico earthquake and its magnitude estimation

    Science.gov (United States)

    Suárez, Gerardo; Hough, Susan E.

    2008-01-01

The Sonora, Mexico, earthquake of 3 May 1887 occurred a few years before the start of the instrumental era in seismology. We revisit all available accounts of the earthquake and assign Modified Mercalli Intensities (MMI), interpreting and analyzing the macroseismic information using the best available modern methods. We find that earlier intensity assignments for this important earthquake were unjustifiably high in many cases. High intensity values had been assigned based on accounts of rock falls, soil failure or changes in the water table, which are now known to be very poor indicators of shaking severity and intensity. Nonetheless, reliable accounts reveal that light damage (intensity VI) occurred at distances of up to ~200 km in both Mexico and the United States. The resulting set of 98 re-evaluated intensity values is used to draw an isoseismal map of this event. Using the attenuation relation proposed by Bakun (2006b), we estimate an optimal moment magnitude of Mw 7.6. Assuming this magnitude is correct, a fact supported independently by documented rupture parameters assuming standard scaling relations, our results support the conclusion that northern Sonora, as well as the Basin and Range province, is characterized by lower attenuation of intensities than California. However, this appears to be at odds with recent results showing that Lg attenuation in the Basin and Range province is comparable to that in California.

  1. Procesos geológicos e intensidad macrosísmica Inqua del sismo de Pisco del 15/08/2007, Perú Geological process and INQUA macro-seismic intensity scale of Pisco earthquake 15/08/2007, Perú

    Directory of Open Access Journals (Sweden)

    Bilberto Zavala

    2009-12-01

the National Institute of Civil Defense totaled 519 casualties and 655 to 679 damaged houses. Cities like Pisco, San Clemente and Tambo de Mora were severely affected, as were agricultural areas in the Pisco and Cañete valleys. The Panamericana highway was considerably affected. In the Paracas National Reserve many tourist places were destroyed, and many secondary roads connecting the coastal area with the high Andes (Ica, Huancavelica and Lima) were blocked by rock falls. Small towns in the Andes built on ancient landslide deposits were damaged (Laraos, Chocos, Huangascar, Tantará). Seaside resorts, creeks and small docks in the coastal area and some chicken farms were affected by the tsunami. Coseismic and postseismic geological processes were responsible for damage within a 200 km radius of the epicenter. Ground deformation and lateral spreading occurred in Tertiary sediments due to a shallow ground water table. Mass movements (rock falls, collapses and landslides) occurred between 32 and 198 km from the epicenter, with accumulated volumes of 14,750 m³ (coastal area) and 9,585 m³ (Andes area). The tsunami waves reached 10 m of run-up (Yumaque beach) and up to 2 km of flooding in the beach zone (Lagunillas beach) in the Paracas National Reserve. The geological and geomorphological descriptions of these processes indicate that the Pisco earthquake reached grades VII and VIII on the INQUA macroseismic intensity scale.

  2. Transfrontier macroseismic data exchange in NW Europe: examples of non-circular intensity distributions

    Science.gov (United States)

    Van Noten, Koen; Lecocq, Thomas; Hinzen, Klaus-G.; Sira, Christophe; Camelbeeck, Thierry

    2016-04-01

Macroseismic data acquisition has recently received a strong increase in interest due to public crowdsourcing through internet-based inquiries and real-time smartphone applications. Macroseismic analysis of felt earthquakes is important because the perception of people can be used to detect local/regional site effects in areas without instrumentation. We will demonstrate how post-processing macroseismic data improves the quality of real-time intensity evaluation of new events. Instead of using the classic DYFI representation in which internet intensities are averaged per community, we first geocoded all individual responses and structured the model area into 100 km² grid cells. Second, the average intensity of all answers within each grid cell was calculated. The resulting macroseismic grid cell distribution shows a less subjective and more homogeneous intensity distribution than the classical irregular community distribution and helps to improve the calculation of intensity attenuation functions. In this presentation, the 'Did You Feel It' (DYFI) macroseismic data of several M>4 earthquakes felt in Belgium, Germany, the Netherlands, France, Luxembourg and the UK, e.g. the 2002 ML 4.9 Alsdorf and 2011 ML 4.3 Goch (Germany) and the 2015 ML 4.1 Ramsgate (UK) events, are analysed. Integration of transfrontier DYFI data from the ROB-BNS, KNMI, BCSF and BGS networks results in a particular non-circular distribution of the macroseismic data, in which the felt area for all these examples extends significantly more in the E-W than in the N-S direction. This intensity distribution cannot be explained by geometrical amplitude attenuation alone, but rather illustrates a low-pass filtering effect due to the south-to-north increasing thickness of cover sediments above the London-Brabant Massif. For the studied M4 to M5 earthquakes, the thick sediments attenuate seismic energy at higher frequencies, and consequently fewer people feel the vibrations at the surface. This example of successful macroseismic data exchange
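
    [Editor's illustration] A minimal sketch of the grid-cell aggregation described above: geocoded individual responses are binned into 10 km × 10 km (100 km²) cells and each cell receives the mean of the intensities inside it, replacing irregular per-community averages. The local metric approximation for projecting latitude/longitude is a simplification of what a real workflow would do.

    ```python
    # Hedged sketch: merge geocoded responses from several institutes into 100 km^2 cells.
    import math
    from collections import defaultdict

    CELL_KM = 10.0

    def cell_of(lat: float, lon: float) -> tuple[int, int]:
        """Map a geocoded response to a ~100 km^2 grid-cell index (approximate projection)."""
        km_n = lat * 111.2                                   # km north of the equator
        km_e = lon * 111.2 * math.cos(math.radians(lat))     # km east, latitude-corrected
        return (int(km_n // CELL_KM), int(km_e // CELL_KM))

    def grid_intensities(responses):
        """responses: iterable of (lat, lon, intensity) pooled from all networks."""
        cells = defaultdict(list)
        for lat, lon, i in responses:
            cells[cell_of(lat, lon)].append(i)
        return {c: sum(v) / len(v) for c, v in cells.items()}

    merged = [(50.60, 4.57, 4.0), (50.61, 4.59, 3.0), (51.02, 4.10, 2.0)]
    print(grid_intensities(merged))   # two cells: one averaged to 3.5, one to 2.0
    ```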

  3. An analytic method for separating local from regional effects on macroseismic intensity

    Directory of Open Access Journals (Sweden)

    C. Gasparini

    1995-06-01

Full Text Available Interpretation of macroseismic data is hazardous due to its qualitative nature. This, linked with errors in evaluation and the variations of local intensity, makes it difficult to draw valid conclusions. This study presents a statistical method as the basis for distinguishing the diverse components that constitute a macroseismic field. The method is based on the polar transformation of the coordinate system and on the analysis of the fractal dimension of the intensity values, exposed to the gradually increasing action of a two-dimensional filter. The fractal dimension is shown to be an ideal parameter with which to measure out the filtering process in order to separate the local components from the regional trend. This method has been applied to two Italian events and to an earthquake which took place in the Former Yugoslav Republic of Macedonia (FYROM).
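
    [Editor's illustration] A minimal sketch of one standard estimator of the fractal dimension the method tracks: box counting, where the number of occupied boxes N(s) is counted at shrinking box sizes s and the slope of log N versus log(1/s) is fitted. The random point cloud here is only a stand-in for a digitized intensity field.

    ```python
    # Hedged sketch: box-counting estimate of the fractal dimension of a 2-D point set.
    import numpy as np

    def box_counting_dimension(points: np.ndarray,
                               sizes=(1/2, 1/4, 1/8, 1/16, 1/32)) -> float:
        """points: (n, 2) array scaled to the unit square."""
        counts = []
        for s in sizes:
            boxes = {tuple(idx) for idx in np.floor(points / s).astype(int)}
            counts.append(len(boxes))                       # occupied boxes at size s
        slope, _ = np.polyfit(np.log(1 / np.asarray(sizes)), np.log(counts), 1)
        return float(slope)

    pts = np.random.default_rng(3).random((2000, 2))        # plane-filling cloud -> D ~ 2
    print(f"estimated fractal dimension: {box_counting_dimension(pts):.2f}")
    ```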

  4. Do seismologists agree upon epicentre determination from macroseismic data? A survey of ESC Working Group ' Macroseismology'

    Directory of Open Access Journals (Sweden)

    M. Stucchi

    1996-06-01

Full Text Available In contrast to the case of instrumental data, the procedures for epicentral parameter determination (coordinates and I0) from macroseismic data are not very well established. Although there are some "rules" upon which most seismologists agree (centre of the isoseismal of largest degree, and so on), the practical application of such rules presents many problems. Therefore, it is common practice for seismologists to devise their own procedures and solutions; this is particularly evident in the more complicated cases, such as offshore epicentres or, as in many cases of historical earthquakes, poor sets of data. One of the major consequences is that parametric catalogues are not homogeneous with respect to macroseismic parameters; moreover, merging catalogues compiled according to different criteria can introduce high noise into any catalogue built in such a way. In order to survey the current practice of epicentre determination from macroseismic data in Europe, a set of cases was distributed to the participants of the first meeting of the ESC WG "Macroseismology". A comparison of the 15 sets of results provided by 16 authors, who gave their own solutions and explanations of the adopted procedures, is given, showing that in some cases the ideas and results are rather distant.

  5. Geological and Macroseismic Data For Seismotectonic Purpose: The 1706 Maiella (Abruzzo, Italy) Earthquake Case Study

    Science.gov (United States)

    de Nardis, R.; Pace, B.; Lavecchia, G.; Visini, F.; Boncio, P.

    2008-12-01

The nature and distribution of the seismicity and of the active structures in central Italy show that the active deformation field is mainly characterised by extension in the axial zone of the Apennines and by co-axial contraction in the frontal part of the belt. In this tectonic context, the seismotectonic characterization of the major earthquakes located between the two seismotectonic provinces becomes crucial from the seismic hazard point of view. The 1706 (Io=IX-X), 1933 (Io=VIII-IX) and 1881 (Io=VIII) Maiella earthquakes struck areas extending outward of the easternmost, NNW-SSE, active normal fault alignment and inward of the N-S oriented active thrust front. These earthquakes have been attributed by some authors to thrust faulting, and by others partly to upper-crust normal faulting and partly to thrust faulting. Due to the poor local configuration of the national seismic network, the available instrumental seismic data are inadequate to constrain the active deformation pattern of the Maiella area. On the other hand, the shallow and deep tectonic setting is rather well known, and macroseismic data for the afore-mentioned earthquakes are available. The present study focuses mainly on the 1706 event, following and integrating three methodological steps: a) selection and definition of the likely 3D seismogenic source models; b) evaluation of possible local effects on the macroseismic field data; c) estimation of the seismic scenario in terms of macroseismic intensity, calculating synthetic strong motion time histories starting from different configurations and depths of the seismogenic source models, with stochastic finite-fault modelling of the ground motion. The method involves discretization of the fault plane into smaller sub-faults; the contributions from all the sub-faults are summed to produce the synthetic acceleration time history. For each point of the 1706 macroseismic field, peak ground acceleration and velocity were determined in order to

  6. The October 4th, 1983 — Magnitude 4 earthquake in Phlegraean Fields: Macroseismic survey

    Science.gov (United States)

    Branno, A.; Esposito, E. G. I.; Luongo, G.; Marturano, A.; Porfido, S.; Rinaldis, V.

    1984-06-01

On Oct. 4th, 1983 the area of Phlegraean Fields, near Naples (Southern Italy), was shaken by an earthquake of magnitude (ML) 4.0 that caused some damage in the town of Pozzuoli and its surroundings. This seismic event was the largest recorded during the recent (1982-84) inflation episode in the Phlegraean volcanic area, and a detailed macroseismic reconstruction of the event was carried out. In the absence of macroseismic data on other earthquakes that occurred in Phlegraean Fields, the attenuation law of intensity as a function of distance obtained for the Oct. 4th earthquake was compared with those obtained for other volcanic areas in central Italy, i.e., Tolfa and Monte Amiata, in order to check the reliability of the results obtained for Phlegraean Fields. Blake's model of the earthquake of Oct. 4th, 1983 does not agree with the experimental data, because the isoseismals contain areas larger than those given by the model. This result has been interpreted as an effect of energy focusing due to a reflecting layer 6-8 km deep.

  7. Regional macroseismic field and intensity residuals of the August 24, 2016, Mw=6.0 central Italy earthquake

    Directory of Open Access Journals (Sweden)

    Valerio De Rubeis

    2016-11-01

Full Text Available A macroseismic investigation of the August 24, 2016, Mw=6.0 Central Italy earthquake was carried out through an online web survey. Data were collected through a macroseismic questionnaire available at the website www.haisentitoilterremoto.it, managed by the Istituto Nazionale di Geofisica e Vulcanologia (INGV). Over 12,000 questionnaires, from over 2,600 municipalities, were compiled soon after the seismic occurrence. A statistical analysis was applied to the collected data in order to investigate the spatial distribution of the intensity of the earthquake. The macroseismic intensity field (I) was described by identifying three main components: an isotropic component (II), a regional anisotropic component (IA) and a local random variations parameter. The anisotropic component highlighted specific and well-defined geographical areas of amplification and attenuation. In general, the area between the Adriatic coast and the Apennines chain was characterized by an amplification of intensity, while the west side of the Apennines showed attenuation, in agreement with the domains found by other works focused on the analysis of instrumental data. Moreover, the regional macroseismic field showed similarities with instrumental PGA data. The results of our analysis confirm the reliability of web questionnaire data.

  8. Predicting the macroseismic intensity from early radiated P wave energy for on-site earthquake early warning in Italy

    Science.gov (United States)

    Brondi, P.; Picozzi, M.; Emolo, A.; Zollo, A.; Mucciarelli, M.

    2015-10-01

Earthquake Early Warning Systems (EEWS) are potentially effective tools for risk mitigation in active seismic regions. The present study explores the possibility of predicting the macroseismic intensity within EEW timeframes using the squared velocity integral (IV2) measured on the early P wave signals, a proxy for the P wave radiated energy of earthquakes. This study shows that IV2 correlates better than the peak displacement measured on P waves with both the peak ground velocity and the Housner Intensity, the latter being recognized by engineers as a reliable proxy for damage assessment. Therefore, using the strong motion recordings of the Italian Accelerometric Archive, a novel relationship between the parameter IV2 and the macroseismic intensity (IM) has been derived. The validity of this relationship has been assessed using the strong motion recordings of the Istituto Nazionale di Geofisica e Vulcanologia Strong Motion Data and Osservatorio Sismico delle Strutture databases and, in the case of the MW 6.0, 29 May 2012 Emilia (Italy) earthquake, by comparing the predicted intensities with those observed in a macroseismic survey. Our results indicate that P wave IV2 can become a key parameter for the design of on-site EEWS, capable of providing real-time predictions of the IM at target sites.
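
    [Editor's illustration] A minimal sketch of the IV2 proxy used above: the integral of squared ground velocity over a short window after the P onset. The 3-s window and the synthetic trace are illustrative assumptions; the study derives its IV2-intensity relation by regression on Italian strong-motion records.

    ```python
    # Hedged sketch: squared velocity integral (IV2) over an early P-wave window.
    import numpy as np

    def iv2(velocity: np.ndarray, fs: float, p_index: int,
            window_s: float = 3.0) -> float:
        """velocity: ground velocity trace (m/s); fs: sampling rate (Hz);
        p_index: sample of the P-wave pick. Returns IV2 in m^2/s."""
        seg = velocity[p_index : p_index + int(window_s * fs)]
        return float(np.trapz(seg ** 2, dx=1.0 / fs))

    # Synthetic example: a decaying P onset buried in low-level noise.
    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    trace = 1e-5 * np.random.default_rng(2).standard_normal(t.size)
    onset = 500                                   # P pick at t = 5 s
    trace[onset:] += 2e-3 * np.exp(-(t[onset:] - t[onset]) / 1.5) \
                          * np.sin(2 * np.pi * 5 * (t[onset:] - t[onset]))
    print(f"IV2 = {iv2(trace, fs, onset):.3e} m^2/s")
    ```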

  9. USGS "Did You Feel It?" internet-based macroseismic intensity maps

    Science.gov (United States)

    Wald, D.J.; Quitoriano, V.; Worden, B.; Hopper, M.; Dewey, J.W.

    2011-01-01

The U.S. Geological Survey (USGS) "Did You Feel It?" (DYFI) system is an automated approach for rapidly collecting macroseismic intensity data from Internet users' shaking and damage reports and generating intensity maps immediately following earthquakes; it has been operating for over a decade (1999-2011). DYFI-based intensity maps made rapidly available through the DYFI system fundamentally depart from more traditional maps made available in the past. The maps are made more quickly, provide more complete coverage and higher resolution, provide for citizen input and interaction, and allow data collection at rates and quantities never before considered. These aspects of Internet data collection, in turn, allow for data analyses, graphics, and ways to communicate with the public, opportunities not possible with traditional data-collection approaches. Yet web-based contributions also pose considerable challenges, as discussed herein. After a decade of operational experience with the DYFI system and users, we document refinements to the processing and algorithmic procedures since DYFI was first conceived. We also describe a number of automatic post-processing tools, operations, applications, and research directions, all of which utilize the extensive DYFI intensity datasets now gathered in near-real time. DYFI can be found online at the website http://earthquake.usgs.gov/dyfi/. © 2011 by the Istituto Nazionale di Geofisica e Vulcanologia.

  10. Transfrontier Macroseismic Data Exchange in Europe: Intensity Assessment of M>4 Earthquakes by a Grid Cell Approach

    Science.gov (United States)

    Van Noten, K.; Lecocq, T.; Sira, C.; Hinzen, K. G.; Camelbeeck, T.

    2016-12-01

In the US, the USGS is the only institute that gathers macroseismic data through its online "Did You Feel It?" (DYFI) system, allowing a homogeneous and consistent intensity assessment. In Europe, however, we face a much more complicated situation. As almost every nation has its own inquiry in its national language(s), and both the EMSC and the USGS run an international DYFI inquiry, responses to European transfrontier-felt seismic events are strongly fragmented across different institutes. To make a realistic ground motion intensity assessment, macroseismic databases need to be merged in a consistent way, dealing with duplicated responses, different intensity calculations and legal issues (observers' privacy). In this presentation, we merge macroseismic datasets by a grid cell approach. Instead of using the irregularly shaped, arbitrary municipal boundaries, we structure the model area into (100 km²) grid cells and assign an intensity value to each grid cell based on all institutional (geocoded) responses in that cell. The resulting macroseismic grid cell distribution shows a less subjective and more homogeneous intensity distribution than the classic community distribution, even though fewer data points are used after geocoding the participants' locations. The method is demonstrated on the 2011 ML 4.3 (MW 3.7) Goch (Germany) and the 2015 ML 4.2 (MW 3.7) Ramsgate (UK) earthquakes, both felt in NW Europe. Integration of the data results in a non-circular distribution in which the felt area extends significantly more in the E-W than in the N-S direction, illustrating a low-pass filtering effect due to the south-to-north increasing thickness of cover sediments above the regional London-Brabant Massif. Ground motions were amplified and attenuated at places with a shallow and deep basement, respectively. To a large extent, the shape of the attenuation model derived through the grid cell intensity points is rather similar to the Atkinson and Wald (2007) CEUS prediction. The attenuation

  11. Nature of the macroseismic paradox of the deep-focus earthquake in the Sea of Okhotsk on May 24, 2013 (Mw = 8.3)

    Science.gov (United States)

    Kuzin, I. P.; Lobkovskii, L. I.; Dozorova, K. A.

    2016-08-01

We analyzed macroseismic data and considered the effect of the extremely long-range propagation of perceptible shocks during the deep-focus earthquake in the Sea of Okhotsk on May 24, 2013 (Mw = 8.3). In order to explain this effect, we formulated and qualitatively solved the problem of the superposition of P-waves on the radial mode 0S0 of the natural oscillations of the Earth during this earthquake. Our results confirmed the possibility of such an interpretation of the observed macroseismic effect and also allowed us to explain the anomalously low decay of seismic disturbances with distance.

  12. Rapid Estimation of Macroseismic Intensity for On-site Earthquake Early Warning in Italy from Early Radiated Energy

    Science.gov (United States)

    Emolo, A.; Zollo, A.; Brondi, P.; Picozzi, M.; Mucciarelli, M.

    2015-12-01

Earthquake Early Warning Systems (EEWS) are effective tools for risk mitigation in active seismic regions. Recently, a feasibility study of a nation-wide earthquake early warning system was conducted for Italy, considering the RAN network and the EEW software platform PRESTo. This work showed that reliable estimates of magnitude and epicentral location would be available within 3-4 seconds after the first P-wave arrival. On the other hand, given the RAN's density, a regional EEWS approach would result in a blind zone (BZ) of 25-30 km on average. Such a BZ dimension would provide lead times greater than zero only for events of magnitude larger than 6.5. Considering that in Italy smaller events are also capable of generating great losses, both human and economic, as dramatically experienced during the recent 2009 L'Aquila (ML 5.9) and 2012 Emilia (ML 5.9) earthquakes, it has become urgent to develop and test on-site approaches. The present study focuses on the development of a new on-site EEW methodology for the estimation of the macroseismic intensity at a target site or area. In this analysis we used a few thousand accelerometric traces recorded by the RAN for the largest earthquakes (ML>4) that occurred in Italy in the period 1997-2013. The work focuses on the integral EW parameter Squared Velocity Integral (IV2) and on its capability to predict the peak ground velocity (PGV) and the Housner Intensity (IH); from these latter parameters we derived a new relation between IV2 and the macroseismic intensity. To assess the performance of the developed on-site EEW relation, we used data from the largest events that occurred in Italy in the last 6 years, recorded by the Osservatorio Sismico delle Strutture, as well as the recordings of moderate earthquakes reported in the INGV Strong Motion Data. The results show that the macroseismic intensity values predicted by IV2 and those estimated from PGV and IH are in good agreement.

  13. The role of instrumental versus macroseismic locations for earthquakes of the last century: a discussion based on the seismicity of the North-Western Apennines (Italy

    Directory of Open Access Journals (Sweden)

    S. Solarino

    2005-06-01

Full Text Available Many seismological observatories began to record and store seismic events in the early years of the twentieth century, contributing to the compilation of very valuable databases of both phase pickings and waveforms. However, despite the availability of instrumental data for some events of the last century, an instrumental location for these earthquakes has not always been computed; moreover, when available, the macroseismic location is often strongly preferred, even if the number of points used for it is low or the spatial distribution of the observations is not optimal or homogeneous. In this work I show how I computed an instrumental location for 19 events which occurred in the Garfagnana-Lunigiana region (Northern Tuscany, Italy) beginning in 1902. The location routine is based on a Joint Hypocentral Determination in which, starting from a group of master events, the systematic errors that may affect the data are summed up in corrective factors complementing the velocity propagation model. All non-systematic errors are carefully checked and possibly discarded by going back to the original data, if necessary. The location is then performed using the classic approach to the inverse problem and solved iteratively. The obtained locations are then compared to those already available from macroseismic studies, with the aim of checking the role to be attributed to the instrumental locations. The study shows that in most cases the locations match, in particular when considering the different significance of the location parameters, especially for the strongest events: the instrumental location provides the point where the rupture begins, while the macroseismic one is an estimate of the area where the earthquake possibly took place. This paper is not meant to discuss the importance and the necessity of macroseismic data; instead, the aim is to show that instrumental data can be used to obtain locations even for older

  14. Using H/V Spectral Ratio Analysis to Map Sediment Thickness and to Explain Macroseismic Intensity Variation of a Low-Magnitude Seismic Swarm in Central Belgium

    Science.gov (United States)

    Van Noten, K.; Lecocq, T.; Camelbeeck, T.

    2013-12-01

Between 2008 and 2010, the Royal Observatory of Belgium received numerous 'Did You Feel It?' reports related to a 2-year-long earthquake swarm at Court-Saint-Etienne, a small town in a hilly area 20 km SE of Brussels, Belgium. These small-magnitude events (-0.7 ≤ ML ≤ 3.2, n = c. 300 events) were recorded both by the permanent seismometer network in Belgium and by a locally installed temporary seismic network deployed in the epicentral area. Relocation of the hypocenters revealed that the seismic swarm can be related to the reactivation of a NW-SE strike-slip fault at 3 to 6 km depth in the basement rocks of the Lower Palaeozoic London-Brabant Massif. This sequence caused a lot of emotion in the region, because more than 60 events were felt by the local population. Given the small magnitudes of the seismic swarm, most events were more often heard than felt by the respondents, which is indicative of a local high-frequency earthquake source. At places where the bedrock is at the surface or where it is covered by thin alluvial sediments (< 30 m), reported macroseismic intensities are higher. In those river valleys that have a considerable alluvial sedimentary cover, macroseismic intensities are again lower. To explain this variation in macroseismic intensity, we present a macroseismic analysis of all DYFI reports related to the 2008-2010 seismic swarm, and a pervasive H/V spectral ratio (HVSR) analysis of ambient noise measurements to model the thickness of the sediments covering the London-Brabant Massif. The HVSR method is a very powerful tool for mapping basement morphology, particularly in regions of unknown subsurface structure. By calculating the soil's fundamental frequency above boreholes, we calibrated the power-law relationship between the fundamental frequency, shear wave velocity and the thickness of sediments. This relationship is useful for places where the sediment thickness is unknown and where the fundamental frequency can be calculated by H/V spectral ratio analysis of ambient noise. In a
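
    [Editor's illustration] A minimal sketch of the power-law step described above: once the fundamental frequency f0 is picked from an H/V spectral ratio curve, sediment thickness follows h = a · f0^b. The coefficients shown are the Lower Rhine Embayment calibration of Ibs-von Seht & Wohlenberg (1999), used here for illustration; the study calibrates its own a and b against Belgian boreholes.

    ```python
    # Hedged sketch: sediment thickness from the HVSR fundamental frequency.
    def sediment_thickness(f0_hz: float, a: float = 96.0, b: float = -1.388) -> float:
        """Thickness (m) of soft cover from the HVSR fundamental frequency (Hz);
        a, b are site-specific regression coefficients (illustrative values here)."""
        return a * f0_hz ** b

    for f0 in (0.5, 1.0, 2.0, 5.0):
        print(f"f0 = {f0:>3} Hz -> h = {sediment_thickness(f0):6.1f} m")
    ```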

  15. Seismological, geodetic, macroseismic and historical context of the 2016 Mw 6.7 Tamenglong (Manipur) India earthquake

    Science.gov (United States)

    Gahalaut, V. K.; Martin, Stacey S.; Srinagesh, D.; Kapil, S. L.; Suresh, G.; Saikia, Saurav; Kumar, Vikas; Dadhich, Harendra; Patel, Aqeel; Prajapati, Sanjay K.; Shukla, H. P.; Gautam, J. L.; Baidya, P. R.; Mandal, Saroj; Jain, Ashish

    2016-10-01

    The 2016 Mw 6.7 Tamenglong earthquake (in the state of Manipur in northeastern India) on 4 January 2016 at 04:35 Indian Standard Time (3 January, 23:05 UTC) was the strongest earthquake to strike Manipur since 1988. Using data from Indian stations, we constrain the hypocentral depth of the mainshock at 59 ± 3.8 km and determine a strike-slip mechanism with a moderate reverse component on a steeply dipping plane. Though coseismic offsets from GPS measurements at four nearby sites were inadequate to provide further constraints on the focal mechanism, they were consistent with the magnitude and hypocentral depth of the earthquake. The epicentre of the mainshock was located 15 km west of the Churachandpur-Mao Fault (CMF), but it was unrelated to this structure and was instead a typical intra-slab earthquake within the Indian plate. A strong-motion instrument at the Loktak Power Station (LOK), 56 km from the epicentre, recorded a peak ground acceleration (PGA) of 0.027g, while a PGA of 0.103g was recorded at Shillong (SHL) at an epicentral distance of 111 km. We also present macroseismic observations from 461 locations in north-eastern India and the adjacent areas for this earthquake. The highest intensities (7 EMS) were observed in the Manipur Valley and in the hills to the west, while shaking was perceptible as far as Delhi and Jaipur. Lastly, we present a catalogue of 333 felt earthquakes in Manipur from 1588 ± 1 CE to 1955 derived from the royal chronicle of the kings of Manipur known as the Cheitharon Kumpapa, discuss important historical earthquakes in the region, and also estimate intensity magnitudes for the 1852 (MI 6.5 ± 0.8), 1869 (MI 7.1 ± 0.7), 1880 (MI 6.3 ± 0.7) and 2016 (MI 6.8 ± 0.8) earthquakes.

  16. Microzonation of seismic risk in a low-rise Latin American city based on the macroseismic evaluation of the vulnerability of residential buildings: Colima city, México

    Directory of Open Access Journals (Sweden)

    V. M. Zobin

    2010-06-01

    Full Text Available A macroseismic methodology of seismic risk microzonation in a low-rise city, based on the vulnerability of residential buildings, is proposed and applied to Colima city, Mexico. The seismic risk microzonation for Colima consists of two elements: the mapping of residential blocks according to their vulnerability level and the calculation of an expert-opinion-based damage probability matrix (DPM) for a given level of earthquake intensity and a given type of residential block. The specified exposure time to seismic risk for this zonation is equal to the interval between two destructive earthquakes. The damage probability matrices were calculated for three types of urban buildings and five types of residential blocks in Colima. It was shown that only 9% of the 1409 residential blocks are able to resist Modified Mercalli (MM) intensity VII and VIII earthquakes without significant damage. The proposed DPM-2007 agrees well with the experimental damage curves based on the macroseismic evaluation of 3332 residential buildings in Colima, carried out after the 21 January 2003 intensity MM VII earthquake. This methodology and the calculated DPM-2007 curves may also be applied to seismic risk microzonation for many low-rise cities in Latin America, Asia, and Africa.
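
    The sketch below illustrates how such a damage probability matrix is used once assembled: for a given intensity and residential-block type it returns a probability distribution over damage grades, from which expected block counts follow. The probabilities are invented for illustration and are not the DPM-2007 values.

        # P(damage grade | MM intensity) for one hypothetical block type.
        DPM = {
            "VII":  {0: 0.30, 1: 0.35, 2: 0.20, 3: 0.10, 4: 0.04, 5: 0.01},
            "VIII": {0: 0.10, 1: 0.25, 2: 0.30, 3: 0.20, 4: 0.10, 5: 0.05},
        }

        def expected_blocks_by_grade(n_blocks: int, intensity: str) -> dict:
            """Expected number of blocks in each damage grade at a given intensity."""
            return {g: n_blocks * p for g, p in DPM[intensity].items()}

        for grade, n in expected_blocks_by_grade(1409, "VIII").items():
            print(f"grade {grade}: {n:.0f} blocks")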

  17. Site response of the Ganges Basin inferred from re-evaluated macroseismic observations from the 1897 Shillong, 1905 Kangra and 1934 Nepal earthquakes

    Indian Academy of Sciences (India)

    Susan E Hough; Roger Bilham

    2008-11-01

    We analyze previously published geodetic data and intensity values for the Mw = 8.1 Shillong (1897), Mw = 7.8 Kangra (1905), and Mw = 8.2 Nepal/Bihar (1934) earthquakes to investigate the rupture zones of these earthquakes as well as the amplification of ground motions throughout the Punjab, Ganges and Brahmaputra valleys. For each earthquake we subtract the observed MSK intensities from a synthetic intensity derived from an inferred planar rupture model of the earthquake, combined with an attenuation function derived from instrumentally recorded earthquakes. The resulting residuals are contoured to identify regions of anomalous intensity caused primarily by local site effects. Observations indicative of liquefaction are treated separately from other indications of shaking severity, lest they inflate the inferred residual shaking estimates. Despite this precaution we find that intensities are 1-3 units higher near the major rivers, as well as at the edges of the Ganges basin. We find evidence for a post-critical Moho reflection from the 1897 and 1905 earthquakes that raises intensities by 1-2 units at distances of the order of 150 km from the rupture zone, and we find that the 1905 earthquake triggered a substantial subsequent earthquake at Dehra Dun, at a distance of approximately 150 km. Four or more Mw = 8 earthquakes are apparently overdue in the region based on seismic moment summation over the past 500 years. Results from the current study permit anticipated intensities in these future earthquakes to be refined to incorporate site effects derived from dense macroseismic data.
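
    The residual-intensity logic described above can be condensed as follows; the attenuation form and all numbers here are generic placeholders, not the calibrated function of the paper. Positive residuals flag likely site amplification.

        import math

        def predicted_intensity(i0: float, dist_km: float,
                                k: float = 3.0, c: float = 0.002) -> float:
            """Generic attenuation: geometric spreading plus anelastic absorption."""
            return i0 - k * math.log10(max(dist_km, 1.0)) - c * dist_km

        i0 = 10.0  # assumed epicentral intensity
        # Hypothetical observations: (site, epicentral distance km, observed intensity)
        for name, d, i_obs in [("river-bank site", 120.0, 5.5), ("bedrock site", 120.0, 3.5)]:
            resid = i_obs - predicted_intensity(i0, d)
            print(f"{name}: residual = {resid:+.1f} intensity units")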

  18. Seismotectonic position of the Kaliningrad September 21, 2004, earthquake

    Science.gov (United States)

    Assinovskaya, B. A.; Ovsov, M. K.

    2008-09-01

    The paper presents an alternative consistent seismotectonic model of the Kaliningrad (Russia) September 21, 2004, earthquake according to which source zones of the two strongest shocks were confined to a N-S fault off the Sambiiskii Peninsula in the Kaliningrad region. A left-lateral deformation fractured a local crustal zone between the town of Yantarnyi and the settlement of Bakalino. The model was constructed with the use of a method developed by the authors for structural analysis of gravity and magnetic data. Initial materials are revised in terms of the EMS-98 macroseismic scale, and modified maps showing the shaking intensity in the NW part of the Sambiiskii Peninsula are compiled.

  19. Reassessment of the historical seismic activity with major impact on S. Miguel Island (Azores)

    Directory of Open Access Journals (Sweden)

    D. Silveira

    2003-01-01

    Full Text Available On account of its tectonic setting, both seismic and volcanic events are frequent in the Azores archipelago. During the historical period, earthquakes and seismic swarms of tectonic and/or volcanic origin have struck S. Miguel Island, causing a significant number of casualties and severe damage. The information present in historical records made a new macroseismic analysis of these major events possible, using the European Macroseismic Scale-1998 (EMS-98). Among the strongest earthquakes of tectonic origin that affected S. Miguel Island, six events were selected for this study. The isoseismal maps drawn for these events enabled us to identify areas characterized by anomalous seismic intensity values, either positive or negative, to constrain epicentre locations and to identify some new seismogenic areas. Regarding seismic activity associated with volcanic phenomena, six cases were also selected. For each of the studied cases, cumulative intensity values were assessed for each locality. The distribution of local intensity values shows that the effects are not homogeneous within a certain distance from the eruptive centre, that the area of major impacts relates to the eruptive style, and that damage equivalent to high intensities may occur in the Furnas and Sete Cidades calderas. Combining all the historical macroseismic data, a maximum intensity map was produced for S. Miguel Island.

  20. Seismic hazard assessment based on the Unified Scaling Law for Earthquakes: the Greater Caucasus

    Science.gov (United States)

    Nekrasova, A.; Kossobokov, V. G.

    2015-12-01

    Losses from natural disasters continue to increase, mainly due to poor understanding, by the majority of the scientific community, decision makers and the public, of the three components of Risk: Hazard, Exposure, and Vulnerability. Contemporary Science bears responsibility for not coping with the challenging changes of Exposure and Vulnerability inflicted by a growing population and its concentration, which result in a steady increase of losses from natural hazards. Scientists are indebted to Society for this lack of knowledge, education, and communication. In fact, Contemporary Science can do a better job in disclosing natural hazards, assessing risks, and delivering such knowledge in advance of catastrophic events. We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on the Unified Scaling Law for Earthquakes (USLE), i.e. log N(M,L) = A - B•(M-6) + C•log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. The parameters A, B, and C of USLE are used to estimate, first, the expected maximum magnitude in a time interval at each seismically prone cell of a uniform grid that covers the region of interest, and then the corresponding expected ground shaking parameters, including macroseismic intensity. After rigorous testing against the available seismic evidence from the past (e.g., the historically reported macroseismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks (e.g., those based on the density of exposed population). The methodology of seismic hazard and risk assessment based on USLE is illustrated by an application to the seismic region of the Greater Caucasus.
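
    A minimal numerical reading of the quoted law: the sketch below evaluates N(M,L) and the corresponding mean return period. The parameter values and the normalization of L by a reference length are my placeholders, not estimates for the Greater Caucasus.

        import math

        def usle_annual_rate(m: float, l_km: float, a: float = -1.2, b: float = 0.9,
                             c: float = 1.2, l0_km: float = 100.0) -> float:
            """Expected annual number of earthquakes of magnitude M in an area of
            linear dimension L, per log N(M,L) = A - B*(M-6) + C*log L (L scaled
            here by a reference length l0 so that A refers to L = l0)."""
            return 10.0 ** (a - b * (m - 6.0) + c * math.log10(l_km / l0_km))

        n = usle_annual_rate(m=6.5, l_km=200.0)
        print(f"expected events/yr: {n:.4f}; mean return period: {1.0 / n:.0f} yr")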

  1. Historical-critical Re-integration of the SED's Annual Reports into ECOS

    Science.gov (United States)

    Grolimund, Remo; Fäh, Donat

    2014-05-01

    With the establishment of the Swiss Earthquake Commission (SEC) as early as 1878, Switzerland has one of the oldest traditions of continual macroseismic data collection. Due to the use of a single intensity scale (Rossi-Forel) in a period spanning from the early 1880s into the 1960s, this dataset might be considered one of the most temporally extended stocks of rather homogeneous macroseismic data. The work of the SEC and the Swiss Seismological Service (SED), established in 1914 as its successor organization, is relatively well documented in a series of annual reports (1879-1963; 1972-1974), which were assumed to have been adequately integrated into the original parametric catalogue version compiled for the first Swiss Seismic Hazard map in the mid-1970s. For pragmatic reasons this earlier period of systematic scientific earthquake observation in Switzerland was thus not addressed in depth in the first phases of the historical-critical revision of the Earthquake Catalogue of Switzerland (ECOS-02; ECOS-09), and the reassessment process was mainly limited to the complete revision of the damaging earthquakes with an assumed epicentral intensity of more than V (EMS-98). In the reassessment of events with a maximum intensity of less than VI (EMS) for the period 1878-1900 we have, however, realized that the wealth of macroseismic data gathered in the annual reports is only incompletely integrated into ECOS. Moreover, from a methodological perspective, a significant part of the parametric information from the catalogue version compiled in the 1970s is questionable, as the process of its determination is neither documented nor reproducible and thus lacks intersubjective traceability as a key criterion for qualitative research. Empirically, a considerable part of the catalogue data proved to be inconsistent with the critical examination of the information documented in the annual reports, for example with regard to the appraisal of certainties. The comparison

  2. Scales, scales and more scales.

    Science.gov (United States)

    Weitzenhoffer, Andre M

    2002-01-01

    This article examines the nature, uses, and limitations of the large variety of existing so-called hypnosis scales, that is, instruments that have been proposed for the assessment of hypnotic behavior. Although the major aim of most of the scales ostensibly seems to be to assess several aspects of hypnotic states, they are generally found to say little about these and much more about responses to suggestions. The greatest application of these scales is to be found in research, but they also have a limited place in clinical work.

  3. Risk prediction of Critical Infrastructures against extreme natural hazards: local and regional scale analysis

    Science.gov (United States)

    Rosato, Vittorio; Hounjet, Micheline; Burzel, Andreas; Di Pietro, Antonio; Tofani, Alberto; Pollino, Maurizio; Giovinazzi, Sonia

    2016-04-01

    Natural hazard events can induce severe impacts on the built environment; they can hit wide and densely populated areas, where there is a large number of (inter)dependent technological systems whose damage could cause the failure or malfunctioning of further services, spreading the impacts over wider geographical areas. The EU project CIPRNet (Critical Infrastructures Preparedness and Resilience Research Network) is realizing an unprecedented Decision Support System (DSS) which enables operational risk prediction for Critical Infrastructures (CI) by predicting the occurrence of natural events (from long-term weather forecasts to short-term nowcasts), correlating the intrinsic vulnerabilities of CI elements with the strengths of the different events' manifestations, and analysing the resulting Damage Scenario. The Damage Scenario is then transformed into an Impact Scenario, where individual CI element damages are transformed into micro- (local area) or meso-scale (regional) service outages. At the smaller scale, the DSS simulates detailed city models (where CI dependencies are explicitly accounted for) that are an important input for crisis management organizations, whereas at the regional scale, using an approximate System-of-Systems model describing systemic interactions, the focus is on raising awareness. The DSS has allowed the development of a novel simulation framework for predicting earthquake shake maps originating from a given seismic event, considering the shock wave propagation in inhomogeneous media and the damage subsequently produced, by estimating building vulnerabilities on the basis of a phenomenological model [1, 2]. Moreover, for areas containing river basins, when abundant precipitation is expected, the DSS solves the 1D/2D hydrodynamic models of the river basins to predict the runoff and the corresponding flood dynamics. This calculation allows the estimation of the Damage Scenario and triggers the evaluation of the Impact Scenario.

  4. Building damage scale proposal from VHR satellite image

    Science.gov (United States)

    Sandu, Constantin; Giulio Tonolo, Fabio; Cotrufo, Silvana; Boccardo, Piero

    2017-04-01

    Natural hazards have a huge impact in terms of economic losses and people affected or killed. Remotely sensed images currently play a fundamental role in delineating the damage generated by catastrophic events. Institutions like the United Nations and the European Commission have designed services that rapidly provide information about the impact of disasters. One of the approaches currently used to carry out damage assessment is based on very high resolution (VHR) remote sensing imagery (from both aerial and satellite platforms). One of the main focuses of responders, especially for events like earthquakes, is buildings and infrastructure. As far as buildings are concerned, to date no international standard guidelines exist that provide essential information on how to assess building damage using VHR images. The aim of this study is to develop a building damage scale tailored to analyses based on VHR vertical imagery and to propose a standard for the related interpretation guidelines. The task is carried out by comparing the scales currently used for damage assessment by the main satellite-based emergency mapping (SEM) services. The study analyses the datasets produced after the Ecuador (April 2016) and Central Italy (August and October 2016) earthquakes. The results suggest that VHR remotely sensed images do not allow the direct use of damage classification scales addressing structural damage (e.g., the 5 grades proposed by EMS-98). A fine-tuning of existing damage classes is therefore required, and the adoption of an internationally agreed standard should be encouraged to streamline the use of SEM products generated by different services.

  5. Regional macroseismic field of the 1980 Irpinia earthquake

    Directory of Open Access Journals (Sweden)

    M. C. SPADEA

    1982-06-01

    Full Text Available An analysis is presented of the macroseismic field of the 1980 Irpinia earthquake which, in terms of magnitude and extent of the affected area, is the largest seismic event to have occurred in Italy in the last fifty years. The data, collected through direct surveys and/or macroseismic questionnaires, allow the seismic intensity to be defined for 1286 inhabited localities in 13 regions. The regional field, compared with Blake's models (Y = 5.0), is compatible with the following focal parameters: I0 = X MSK (computed value 9.99 ± 0.5 MSK); h0 = 15 km. The anisotropy of the regional field is analysed by determining the azimuthal attenuation of intensity (a_z), whose extreme values are 2.0 × 10^-3 and 3.9 × 10^-3 along the NNW and SW directions, respectively. The mesoseismal area is characterized mainly by structural domains, identified with the shadow method, trending in the Apenninic (NW-SE), anti-Apenninic and meridian (N-S) directions.

  6. XIXth century earthquakes in Belgium, the Netherlands and western Germany

    Science.gov (United States)

    Knuts, Elisabeth; Dost, Bernard; Alexandre, Pierre; Camelbeeck, Thierry

    2014-05-01

    Since the last quarter of the XXth century, the rules of historical criticism have been applied to the study of past earthquakes, thanks to collaboration between seismologists and historians. Various monographs have already been published on the historical seismicity of Belgium, the Netherlands and nearby regions, but few about the XIXth century. The list of shocks that occurred in those regions is not clearly established. For the major earthquakes, we can find useful monographs that were published at the time of the events. However, there is a lack of information about the smaller earthquakes mentioned in the Belgian, Dutch, French and German catalogs. For those smaller events it is often not possible to determine the zone of perceptibility. Sometimes we cannot even be sure that the reported event is a real one. The aim of our study is to overcome this gap. Taking into account the rules of historical criticism, we read all the available bibliography, undertook research in the archives and analysed the press in order to establish a reliable list of earthquakes. Several categories of sources were used: narrative and administrative sources, contemporaneous studies, letters sent to scientific institutions, and the press. We could confirm that 84 earthquakes are real and determine a list of fake earthquakes that are unfortunately present in the traditional catalogs. In the list of fake earthquakes, we highlighted several events that we consider doubtful and that require additional research, especially several earthquakes in mining zones. We compiled our results as a four-column table providing the date of the earthquake, the supposed epicentre, the number of sources found and the number of macroseismic datapoints. Based on the macroseismic datapoints, we estimated intensities for each major event according to the EMS-98 scale. The map of the epicenters indicates that the most active zone in the area during the XIXth century is the Lower Rhine

  7. Regional earthquake loss estimation in the Autonomous Province of Bolzano - South Tyrol (Italy)

    Science.gov (United States)

    Huttenlau, Matthias; Winter, Benjamin

    2013-04-01

    Besides storm events, geophysical events cause the majority of natural hazard losses on a global scale. However, in alpine regions with a moderate earthquake risk potential, such as the study area, and a correspondingly weak presence in the collective memory, this source of risk is often neglected in contrast to gravitational and hydrological hazard processes. In this context, the comparative analysis of potential disasters and emergencies on a national level in Switzerland (Katarisk study) has shown that earthquakes are, in general, the most serious source of risk. In order to estimate the potential losses of earthquake events for different return periods and the loss dimensions of extreme events, the following study was conducted in the Autonomous Province of Bolzano - South Tyrol (Italy). The applied methodology follows the generally accepted risk concept based on the risk components hazard, elements at risk and vulnerability, whereby risk is not defined holistically (direct, indirect, tangible and intangible) but through the risk category of losses on buildings and inventory as a general risk proxy. The hazard analysis is based on a regional macroseismic scenario approach, whereby the settlement centre of each community (116 communities) is defined as a potential epicentre. For each epicentre, four epicentral scenarios (return periods of 98, 475, 975 and 2475 years) are calculated based on the simple but proven and generally accepted attenuation law of Sponheuer (1960). The relevant input parameters for the epicentral scenarios are (i) the macroseismic intensity and (ii) the focal depth. The considered macroseismic intensities are based on a probabilistic seismic hazard analysis (PSHA) of the Italian earthquake catalogue at the community level (Dipartimento della Protezione Civile). The relevant focal depths are taken as a mean within a defined buffer of the focal depths of the harmonized earthquake catalogues of Italy and Switzerland as well as
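
    For orientation, the sketch below evaluates an epicentral scenario with the Kövesligethy-Sponheuer attenuation form commonly associated with Sponheuer (1960); the absorption coefficient and scenario inputs are assumed values, not those calibrated for South Tyrol.

        import math

        def intensity(i0: float, d_km: float, h_km: float, alpha: float = 0.002) -> float:
            """I(d) = I0 - 3*log10(r/h) - 3*alpha*(r - h)*log10(e), r = sqrt(d^2 + h^2)."""
            r = math.sqrt(d_km ** 2 + h_km ** 2)
            return i0 - 3.0 * math.log10(r / h_km) - 3.0 * alpha * (r - h_km) * math.log10(math.e)

        # Intensity at 10, 30 and 60 km for a scenario with I0 = VIII, focal depth 8 km.
        for d in (10.0, 30.0, 60.0):
            print(f"d = {d:4.0f} km -> I = {intensity(8.0, d, 8.0):.1f}")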

  8. Application of the loss estimation tool QLARM in Algeria

    Science.gov (United States)

    Rosset, P.; Trendafiloski, G.; Yelles, K.; Semmane, F.; Wyss, M.

    2009-04-01

    During the last six years, WAPMERR has used Quakeloss for real-time loss estimation for more than 440 earthquakes worldwide. Loss reports, posted with an average delay of 30 minutes, include a map showing the average degree of damage in settlements near the epicenter, the total number of fatalities, the total number of injured, and a detailed list of casualties and damage rates in these settlements. After the M6.7 Boumerdes earthquake in 2003, we reported 1690-3660 fatalities; the official death toll was around 2270. Since the El Asnam earthquake, seismic events in Algeria have killed about 6,000 people, injured more than 20,000 and left more than 300,000 homeless. On average, one earthquake with the potential to kill people (M>5.4) happens every three years in Algeria. Within the framework of a collaborative project between WAPMERR and CRAAG, we propose to calibrate our new loss estimation tool QLARM (qlarm.ethz.ch) and to estimate human losses for likely future earthquakes in Algeria. The parameters needed for this calculation are: (1) a ground motion relation and soil amplification factors; (2) the distribution of the building stock and population into vulnerability classes of the European Macroseismic Scale (EMS-98), as given in the PAGER database; and (3) the population by settlement. Considering the resolution of the available data, we construct 1) point city models for cases where only summary data for the city are available and 2) discrete city models when data regarding city districts are available. Damage and losses are calculated using: (a) vulnerability models pertinent to EMS-98 vulnerability classes, previously validated against the existing ones in Algeria (Tipaza and Chlef); (b) building collapse models pertinent to Algeria, as given in the World Housing Encyclopedia; and (c) casualty matrices pertinent to EMS-98 vulnerability classes, assembled from HAZUS casualty rates. As a first trial, we simulated the 2003 Boumerdes earthquake to check the validity of the proposed
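
    To give a feel for the vulnerability step, the sketch below uses the semi-empirical mean-damage-grade curve of Giovinazzi and Lagomarsino (2004), a common companion of EMS-98 vulnerability classes; it is not necessarily the exact model calibrated in QLARM, and the class-to-index mapping is an assumption.

        import math

        V_INDEX = {"A": 0.90, "B": 0.74, "C": 0.58, "D": 0.42}  # assumed typical values

        def mean_damage_grade(intensity: float, ems_class: str) -> float:
            """mu_D = 2.5 * (1 + tanh((I + 6.25*V - 13.1) / 2.3)), on damage grades 0..5."""
            v = V_INDEX[ems_class]
            return 2.5 * (1.0 + math.tanh((intensity + 6.25 * v - 13.1) / 2.3))

        for cls in "ABCD":
            print(f"class {cls}: mean damage grade at I = VIII -> {mean_damage_grade(8.0, cls):.2f}")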

  9. Rapid Assessment of Seismic Vulnerability in Palestinian Refugee Camps

    Science.gov (United States)

    Al-Dabbeek, Jalal N.; El-Kelani, Radwan J.

    Studies of historical and recorded earthquakes in Palestine demonstrate that damaging earthquakes occur frequently along the Dead Sea Transform, e.g., the earthquakes of 11 July 1927 (ML 6.2) and 11 February 2004 (ML 5.2). In order to reduce the seismic vulnerability of buildings and the losses of lives, property and infrastructure, an attempt was made to estimate the percentages of damage grades and losses at selected refugee camps: Al Ama`ri, Balata and Dhaishe. The vulnerability classes of building structures were assessed according to the European Macroseismic Scale 1998 (EMS-98) and Federal Emergency Management Agency (FEMA) guidelines. The rapid assessment results showed that very heavy structural and non-structural damage will occur in the common buildings of the investigated refugee camps (many buildings will suffer damage of grades 4 and 5). The poor quality of buildings in terms of design and construction, the lack of uniformity, the absence of space between buildings and the limited width of roads will definitely increase the seismic vulnerability under the influence of future moderate to strong (M 6-7) earthquakes.

  10. Evaluation and considerations about fundamental periods of damaged reinforced concrete buildings

    Directory of Open Access Journals (Sweden)

    R. Ditommaso

    2013-07-01

    Full Text Available The aim of this paper is the empirical estimation of the fundamental period of reinforced concrete buildings and of its variation due to structural and non-structural damage. The 2009 L'Aquila earthquake highlighted the mismatch between experimental data and code-provision values, not only for undamaged buildings but also for damaged ones. The 6 April 2009 L'Aquila earthquake provided the first opportunity in Italy to estimate the fundamental period of reinforced concrete (RC) buildings after a strong seismic sequence. A total of 68 buildings with different characteristics, such as age, height and damage level, were investigated by performing ambient vibration measurements that provided their fundamental translational period. Four different damage levels were considered according to the definitions of the EMS-98 (European Macroseismic Scale), grouping the estimated fundamental periods versus building height according to damage. The fundamental period of RC buildings estimated for the low damage level matches the relationship previously obtained in Italy and Europe for undamaged buildings, well below code provisions. When damage levels are higher, the fundamental periods increase, but again with values much lower than those provided by codes. Finally, the authors suggest a possible update of the code formula for the simplified estimation of the fundamental period of vibration of existing RC buildings, also taking the inelastic behaviour into account.
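
    The ambient-vibration step can be sketched as follows (an assumed generic workflow, not the authors' processing chain): detrend and window the record, compute the spectrum, and pick the peak in a plausible frequency band.

        import numpy as np

        def fundamental_frequency(signal: np.ndarray, fs: float,
                                  fmin: float = 0.5, fmax: float = 10.0) -> float:
            """Return the dominant spectral frequency (Hz) within [fmin, fmax]."""
            x = signal - signal.mean()
            spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
            freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
            band = (freqs >= fmin) & (freqs <= fmax)
            return float(freqs[band][np.argmax(spec[band])])

        # Synthetic test: a 2 Hz structural response buried in noise, 100 Hz sampling.
        fs = 100.0
        t = np.arange(0.0, 60.0, 1.0 / fs)
        rec = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.random.randn(t.size)
        print(f"T0 = {1.0 / fundamental_frequency(rec, fs):.2f} s")  # ~0.50 s expected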

  11. Estimation of earthquake risk curves of physical building damage

    Science.gov (United States)

    Raschke, Mathias; Janouschkowetz, Silke; Fischer, Thomas; Simon, Christian

    2014-05-01

    In this study, a new approach to quantifying seismic risks is presented. Here, earthquake risk curves for the number of buildings with a defined physical damage state are estimated for South Africa. We define the physical damage states according to the current European Macroseismic Scale (EMS-98). The advantage of this kind of risk curve is that its plausibility can be checked more easily than for other types: the earthquake risk curve for physical building damage can be compared with historical damage and the corresponding empirical return periods. The number of damaged buildings from historical events is generally explored and documented in more detail than the corresponding monetary losses; the latter are also influenced by different economic conditions, such as inflation and price hikes. Furthermore, the monetary risk curve can be derived from the developed risk curve of physical building damage. The earthquake risk curve can also be used for the validation of underlying sub-models such as the hazard and vulnerability modules.
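
    The plausibility check mentioned above amounts to counting exceedances in the historical record, as in this sketch (event list and record length are hypothetical):

        # Buildings reaching damage grade >= 3 (EMS-98) in each historical event.
        events_damaged = [12, 450, 3, 90, 2000, 30]
        years_of_record = 100.0

        def exceedance_rate(threshold: int) -> float:
            """Annual rate of events with at least `threshold` damaged buildings."""
            return sum(1 for d in events_damaged if d >= threshold) / years_of_record

        for thr in (10, 100, 1000):
            rate = exceedance_rate(thr)
            if rate > 0:
                print(f">= {thr:4d} buildings: {rate:.2f}/yr (return period {1.0 / rate:.0f} yr)")
            else:
                print(f">= {thr:4d} buildings: not observed in the record")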

  12. Scale and scaling in soils

    Science.gov (United States)

    Scale is recognized as a central concept in the description of the hierarchical organization of our world. Pressing environmental and societal problems require an understanding of how processes operate at different scales, and how they can be linked across scales. Soil science, as many other dis...

  13. Dynamic evaluation of seismic hazard and risks based on the Unified Scaling Law for Earthquakes

    Science.gov (United States)

    Kossobokov, V. G.; Nekrasova, A.

    2016-12-01

    We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing seismic hazard maps based on the Unified Scaling Law for Earthquakes (USLE), i.e. log N(M,L) = A + B•(6 - M) + C•log L, where N(M,L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L, A characterizes the average annual rate of strong (M = 6) earthquakes, B determines the balance between magnitude ranges, and C estimates the fractal dimension of the seismic locus in projection to the Earth's surface. The parameters A, B, and C of USLE are used to assess, first, the expected maximum magnitude in a time interval at each seismically prone cell of a uniform grid that covers the region of interest, and then the corresponding expected ground shaking parameters. After rigorous testing against the available seismic evidence from the past (e.g., historically reported macroseismic intensities or paleo data), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructure. The hazard maps for a given territory change dramatically when the methodology is applied within a moving time window of a certain size, e.g. about a decade long for an intermediate-term regional assessment, or exponentially increasing intervals for daily local strong-aftershock forecasting. The methodology of dynamic seismic hazard and risk assessment is illustrated by applications to the territory of the Greater Caucasus and Crimea and to the two-year series of aftershocks of the 11 October 2008 Kurchaloy, Chechnya earthquake, whose case history appears encouraging for further systematic testing as a potential short-term forecasting tool.

  14. Helicity scalings

    Energy Technology Data Exchange (ETDEWEB)

    Plunian, F [ISTerre, CNRS, Universite Joseph Fourier, Grenoble (France); Lessinnes, T; Carati, D [Physique Statistique et Plasmas, Universite Libre de Bruxelles (Belgium); Stepanov, R, E-mail: Franck.Plunian@ujf-grenoble.fr [Institute of Continuous Media Mechanics of the Russian Academy of Science, Perm (Russian Federation)

    2011-12-22

    Using a helical shell model of turbulence, Chen et al. (2003) showed that both helicity and energy dissipate at the Kolmogorov scale, independently of any helicity input. This contradicts a previous paper by Ditlevsen and Giuliani (2001) in which, using a GOY shell model of turbulence, they found that helicity dissipates at a scale larger than the Kolmogorov scale and does depend on the helicity input. In a recent paper by Lessinnes et al. (2011), we showed that this discrepancy is due to the fact that in the GOY shell model only one helical mode (+ or -) is present at each scale, instead of both modes as in the helical shell model. With the GOY model, then, the near cancellation of the helicity flux between the + and - modes cannot occur at small scales, as it should in true turbulence. We review the main results with a focus on the numerical procedure needed to obtain accurate statistics.

  15. Framing scales and scaling frames

    NARCIS (Netherlands)

    Lieshout, van M.; Dewulf, A.; Aarts, M.N.C.; Termeer, C.J.A.M.

    2009-01-01

    Policy problems are not just out there. Actors highlight different aspects of a situation as problematic and situate the problem on different scales. In this study we will analyse the way actors apply scales in their talk (or texts) to frame the complex decision-making process of the establishment o

  16. Similarity Scaling

    Science.gov (United States)

    Schnack, Dalton D.

    In Lecture 10, we introduced a non-dimensional parameter called the Lundquist number, denoted by S. This is just one of many non-dimensional parameters that can appear in the formulations of both hydrodynamics and MHD. These generally express the ratio of the time scale associated with some dissipative process to the time scale associated with either wave propagation or transport by flow. These are important because they define regions in parameter space that separate flows with different physical characteristics. All flows that have the same non-dimensional parameters behave in the same way. This property is called similarity scaling.
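
    As a small worked example of such a ratio, the Lundquist number can be computed as the resistive diffusion time over the Alfvén crossing time; the plasma values below are illustrative only.

        import math

        MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

        def lundquist(L: float, B: float, rho: float, eta: float) -> float:
            """S = tau_resistive / tau_alfven = mu0 * L * v_A / eta."""
            v_alfven = B / math.sqrt(MU0 * rho)  # m/s
            tau_resistive = MU0 * L ** 2 / eta   # s
            tau_alfven = L / v_alfven            # s
            return tau_resistive / tau_alfven

        # Tokamak-like numbers: L = 1 m, B = 5 T, rho = 1e-7 kg/m^3, eta = 1e-7 ohm*m.
        print(f"S = {lundquist(1.0, 5.0, 1e-7, 1e-7):.2e}")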

  17. Planck Scale to Hubble Scale

    CERN Document Server

    Sidharth, B G

    1998-01-01

    Within the context of the usual semiclassical investigation of Planck-scale Schwarzschild black holes, as in Quantum Gravity, and later attempts at a full quantum mechanical description in terms of a Kerr-Newman metric including the spinorial behaviour, we attempt to present a formulation that extends from the Planck scale to the Hubble scale. In the process, the so-called large-number coincidences, as also the hitherto inexplicable relations between the pion mass and the Hubble constant pointed out by Weinberg, turn out to be natural consequences in a consistent description.

  18. Scaling down

    Directory of Open Access Journals (Sweden)

    Ronald L Breiger

    2015-11-01

    Full Text Available While “scaling up” is a lively topic in network science and Big Data analysis today, my purpose in this essay is to articulate an alternative problem, that of “scaling down,” which I believe will also require increased attention in coming years. “Scaling down” is the problem of how macro-level features of Big Data affect, shape, and evoke lower-level features and processes. I identify four aspects of this problem: the extent to which findings from studies of Facebook and other Big-Data platforms apply to human behavior at the scale of church suppers and department politics where we spend much of our lives; the extent to which the mathematics of scaling might be consistent with behavioral principles, moving beyond a “universal” theory of networks to the study of variation within and between networks; and how a large social field, including its history and culture, shapes the typical representations, interactions, and strategies at local levels in a text or social network.

  19. Scale interactions

    Science.gov (United States)

    Snow, John T.

    Since the time of the First World War, investigation of synoptic processes has been a major focus of atmospheric research. These are the physical processes that drive the continuously evolving pattern of high and low pressure centers and attendant frontal boundaries seen on continental-scale weather maps. This effort has been motivated both by a spirit of scientific inquiry and by a desire to improve operational weather forecasting by national meteorological services. These national services in turn have supported the development of a global observational network that provides the data required for both operational and research purposes. As a consequence of this research, there now exists a reasonable physical understanding of many of the phenomena found at the synoptic scale. This understanding is reflected in the numerical weather forecast models used by the national services, which have shown significant skill in predicting the evolution of synoptic-scale features for periods extending out to five days.

  20. Scale Holography

    CERN Document Server

    Cembranos, Jose A R; Garay, Luis J

    2016-01-01

    We present a new correspondence between a d-dimensional dynamical system and a whole family of (d+1)-dimensional systems. This new scale-holographic relation is built by the explicit introduction of a dimensionful constant which determines the size of the additional dimension. Scale holography is particularly useful for studying non-local theories, since the equivalent dual system on the higher dimensional manifold can be made to be local, as we illustrate with the specific example of the p-adic string.

  1. Nuclear scales

    Energy Technology Data Exchange (ETDEWEB)

    Friar, J.L.

    1998-12-01

    Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the π-γ force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effectively parameterized by moments of the interaction that are independent of the details of the force model, in analogy to chiral perturbation theory. Results of GFMC calculations in light nuclei are interpreted in terms of fundamental scales, which are in good agreement with expectations from chiral effective field theories. Problems with spin-orbit-type observables are noted.

  2. Nuclear Scales

    CERN Document Server

    Friar, J L

    1998-01-01

    Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the $\\pi$-$\\gamma$ force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effectively parameterized by moments of the interaction that are independent of the details of the force model, in analogy to chiral perturbation theory. Results of GFMC calculations in light nuclei are interpreted in terms of fundamental scales, which are in good agreement with expectations from chiral effective field theories. Problems with spin-orbit-type observables are noted.

  3. Scaling CMOS

    Directory of Open Access Journals (Sweden)

    G.A Brown

    2004-01-01

    Full Text Available The scaling of silicon integrated circuits to smaller physical dimensions became a primary activity of advanced device development almost as soon as the basic technology was established. The importance and persistence of this activity are rooted in the confluence of two of the strongest drives governing the business: the push for greater device performance, measured in terms of switching speed, and the desire for greater manufacturing profitability, dependent upon reduced cost per good device built.

  4. Molecular scale

    Directory of Open Access Journals (Sweden)

    Christopher H. Childers

    2016-03-01

    Full Text Available This manuscript demonstrates the molecular-scale cure-rate dependence of difunctional epoxide-based thermoset polymers cured with amines. A series of cure heating ramp rates was used to determine the influence of ramp rate on the glass transition temperature (Tg), the sub-Tg transitions and the average free-volume hole size in these systems. The networks were composed of 3,3′-diaminodiphenyl sulfone (33DDS) and diglycidyl ether of bisphenol F (DGEBF) and were cured at ramp rates ranging from 0.5 to 20 °C/min. Differential scanning calorimetry (DSC) and NIR spectroscopy were used to explore the cure ramp rate dependence of the polymer network growth, whereas broadband dielectric spectroscopy (BDS) and free-volume hole size measurements were used to interrogate the networks' molecular-level structural variations upon curing at variable heating ramp rates. It was found that although the Tg of the polymer matrices was similar, the NIR and DSC measurements revealed a strong correlation between how these networks grow and the cure heating ramp rate. The free-volume analysis and BDS results for the cured samples suggest differences in the molecular architecture of the matrix polymers due to the cure heating rate dependence.

  5. desirability scale

    Directory of Open Access Journals (Sweden)

    Alejandro C. Cosentino

    2008-01-01

    Full Text Available Social desirability is the need of subjects to obtain approval by responding in a culturally acceptable and appropriate manner. One of the instruments most widely used to measure it is the Marlowe-Crowne Social Desirability Scale (MCSDS), developed by its authors in 1960. It is frequently applied in various types of studies in different areas of Psychology and Medicine. It is suitable both for estimating response biases in a socially desirable direction and for operationalizing psychological constructs such as need for approval or defensiveness. It is the standard measure for discriminating between the stress-response styles of the model of Weinberger, Schwartz and Davidson (1979). Over time, various authors have modified it, with changes in administration format, abbreviations, translations and adaptations to different cultures. This study describes the development of the Escala de Deseabilidad Social de Crowne y Marlowe (EDSCM), an Argentine adaptation of the complete MCSDS in its original paper-and-pencil format. Data obtained from different samples (university students, adults and job applicants) support the conclusion that the EDSCM has adequate reliability and construct validity, as shown by studies of its convergent validity, divergent validity, validity by the differential-instructions technique and known-groups validity. The use of the EDSCM is suggested for research in different areas of Psychology and Medicine in Argentine populations.

  6. Temporal and spatial variations of seismicity scaling behavior in Southern México

    Science.gov (United States)

    Alvarez-Ramirez, J.; Echeverria, J. C.; Ortiz-Cruz, A.; Hernandez, E.

    2012-03-01

    R/S analysis is used in this work to investigate fractal correlations, in terms of the Hurst exponent, in the 1998-2011 seismicity data for Southern Mexico. This region is the most seismically active area in Mexico, where the epicenters of severe earthquakes (e.g., September 19, 1985, Mw = 8.1) causing extensive damage in highly populated areas have been located. By considering only the seismic events that meet the Gutenberg-Richter completeness requirement (b = 0.97, MGR = 3.6), we found time clustering at scales of about 100 and 135 events. In both cases, a cyclic behavior with a dominant spectral component at about one cycle per year is revealed. It is argued that such a one-year cycle could be related to tidal effects on the Pacific coast. Interestingly, it is also found that high-magnitude events (Mw ≥ 6.0) are more likely to occur under increased interevent correlations, with Hurst exponent values H > 0.65. This suggests that major earthquakes can occur when tectonic stress accumulates in preferential directions. In contrast, the high-magnitude seismic risk is reduced when stresses are uniformly distributed in the tectonic shell. Such cointegration between correlations (i.e., the Hurst exponent) and macroseismicity is confirmed by spatial variations of the Hurst exponent. In this way we found that, from the Hurst exponent standpoint, the formerly presumed Michoacan and Guerrero seismic gaps are the riskiest seismic zones. To test this empirical finding, two Southern Mexico local regions with large earthquakes were considered: the Atoyac de Alvarez, Guerrero (Mw = 6.3), and Union Hidalgo, Oaxaca (Mw = 6.6), events. In addition, we used the Loma Prieta, California, earthquake (October 17, 1989, Mw = 6.9) to show that high-magnitude earthquakes in the San Andreas Fault region can also be linked to increases in the determinism (quantified in terms of the Hurst exponent) displayed by the stochastic dynamics of the interevent period time series.
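
    The rescaled-range calculation itself is standard; a minimal version (not necessarily the authors' exact implementation) estimates H as the slope of log(R/S) against log window size:

        import numpy as np

        def hurst_rs(x: np.ndarray, windows=(16, 32, 64, 128, 256)) -> float:
            """Hurst exponent by the classic rescaled-range (R/S) method."""
            rs_means = []
            for w in windows:
                rs_vals = []
                for start in range(0, x.size - w + 1, w):
                    seg = x[start:start + w]
                    dev = np.cumsum(seg - seg.mean())  # cumulative mean-adjusted sum
                    s = seg.std()
                    if s > 0:
                        rs_vals.append((dev.max() - dev.min()) / s)
                rs_means.append(np.mean(rs_vals))
            slope, _ = np.polyfit(np.log(windows), np.log(rs_means), 1)
            return float(slope)

        # White noise should give H close to 0.5; persistent series give H > 0.5.
        print(f"H = {hurst_rs(np.random.randn(4096)):.2f}")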

  7. Seismic vulnerability of dwellings at Sete Cidades Volcano (S. Miguel Island, Azores)

    Directory of Open Access Journals (Sweden)

    A. Gomes

    2006-01-01

    Full Text Available Since the settlement of S. Miguel Island (Azores) in the XV century, several earthquakes have caused important human losses and severe damage on the island. The Sete Cidades Volcano area, located in the westernmost part of the island, has been affected by strong seismic crises of tectonic and volcanic origin, and major events have reached a maximum historical intensity of IX (European Macroseismic Scale 1998) in this zone. Aiming to evaluate the impact of future major earthquakes, a field survey was carried out in ten parishes of Ponta Delgada County, located on the flanks of Sete Cidades volcano and inside its caldera. A total of 7019 buildings were identified, of which 4351 were recognized as dwellings. The total number of inhabitants in the studied area is 11429. In this work, dwellings were classified according to their vulnerability to earthquakes (Classes A to F), using the structure types table of the EMS-98, adapted to the types of construction found in the Azores. It was concluded that 76% (3306) of the houses belong to Class A and 17% (740) to Class B, which are the classes of highest vulnerability. If the area is affected by a seismic event of intensity IX, it is estimated that 57% (2480) to 77% (3350) of the dwellings will partially or totally collapse and 15% (652) to 25% (1088) will need to be rehabilitated. In this scenario, considering the average number of inhabitants per house for each parish, 82% (9372) to 92% (10515) of the population will be affected. The number of deaths, injured and displaced people will pose severe problems to the civil protection authorities and will cause social and economic disruption in the entire archipelago.

  8. Seismic vulnerability of dwellings at Sete Cidades Volcano (S. Miguel Island, Azores)

    Science.gov (United States)

    Gomes, A.; Gaspar, J. L.; Queiroz, G.

    2006-01-01

    Since the settlement of S. Miguel Island (Azores) in the XV century, several earthquakes have caused important human losses and severe damage on the island. The Sete Cidades Volcano area, located in the westernmost part of the island, has been affected by strong seismic crises of tectonic and volcanic origin, and major events have reached a maximum historical intensity of IX (European Macroseismic Scale 1998) in this zone. Aiming to evaluate the impact of future major earthquakes, a field survey was carried out in ten parishes of Ponta Delgada County, located on the flanks of Sete Cidades volcano and inside its caldera. A total of 7019 buildings were identified, of which 4351 were recognized as dwellings. The total number of inhabitants in the studied area is 11429. In this work, dwellings were classified according to their vulnerability to earthquakes (Classes A to F), using the structure types table of the EMS-98, adapted to the types of construction found in the Azores. It was concluded that 76% (3306) of the houses belong to Class A and 17% (740) to Class B, which are the classes of highest vulnerability. If the area is affected by a seismic event of intensity IX, it is estimated that 57% (2480) to 77% (3350) of the dwellings will partially or totally collapse and 15% (652) to 25% (1088) will need to be rehabilitated. In this scenario, considering the average number of inhabitants per house for each parish, 82% (9372) to 92% (10515) of the population will be affected. The number of deaths, injured and displaced people will pose severe problems to the civil protection authorities and will cause social and economic disruption in the entire archipelago.

  9. Effects of Ground Motion Input on the Derived Fragility Functions: Case study of 2010 Haiti Earthquake

    Science.gov (United States)

    Hancilar, Ufuk; Harmandar, Ebru; Çakti, Eser

    2014-05-01

    Empirical fragility functions are derived by statistical processing of data on: i) damaged and undamaged buildings, and ii) ground motion intensity values at the buildings' locations. This study investigates the effects of different ground motion inputs on the derived fragility functions. The previously constructed fragility curves (Hancilar et al. 2013), which rely on the shaking intensity maps published by the USGS after the 2010 Haiti earthquake, are compared with the fragility functions computed in the present study. The building data come from field surveys of 6,347 buildings, grouped with respect to structural material type and number of stories. For damage assessment, the European Macroseismic Scale (EMS-98) damage grades are adopted. The simplest way to account for the variability in ground motion input would have been to employ different ground motion prediction equations (GMPEs) and their standard deviations. In this work, however, we prefer to rely on stochastically simulated ground motions of the Haiti earthquake. We employ five different source models available in the literature and calculate the resulting strong ground motion in the time domain. In our simulations we also consider local site effects, using published studies on NEHRP site classes and microzonation maps of the city of Port-au-Prince. We estimate the regional distributions from waveforms simulated at the same coordinates for which we have damage information. The estimated spatial distributions of peak ground acceleration and velocity (PGA and PGV, respectively) are then used as input to the fragility computations. The results show that changing the ground motion input causes significant variability in the resulting fragility functions.
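
    The statistical-processing step can be illustrated with an assumed lognormal fragility form fitted by maximum likelihood to binary damage observations; this is a generic sketch with invented data, not the exact procedure of the paper.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def fit_fragility(pga: np.ndarray, damaged: np.ndarray):
            """Fit P(damage >= grade | pga) = Phi((ln pga - ln theta) / beta)."""
            def neg_loglike(params):
                ln_theta, beta = params[0], abs(params[1])
                p = np.clip(norm.cdf((np.log(pga) - ln_theta) / beta), 1e-9, 1 - 1e-9)
                return -np.sum(damaged * np.log(p) + (1 - damaged) * np.log(1 - p))
            res = minimize(neg_loglike, x0=[np.log(0.3), 0.6], method="Nelder-Mead")
            return np.exp(res.x[0]), abs(res.x[1])

        # Hypothetical survey: PGA (g) at each building, 1 if damage grade >= 3 reached.
        rng = np.random.default_rng(0)
        pga = rng.uniform(0.05, 1.0, 500)
        damaged = (rng.random(500) < norm.cdf((np.log(pga) - np.log(0.4)) / 0.5)).astype(int)
        theta, beta = fit_fragility(pga, damaged)
        print(f"median theta = {theta:.2f} g, beta = {beta:.2f}")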

  10. Religious attitude scale: scale development and validation

    OpenAIRE

    Üzeyir Ok

    2011-01-01

    In this paper a scale of religious attitude (in an Islamic tradition) was constructed and its metric properties were introduced on the basis of its tests on two different samples (ns=930 and 388) of university students. It was found that the scale, which was named as Ok-Religious Attitude Scale, recorded high alpha scores (.81 and .91). Both explanatory and confirmatory factor analyses confirm that the scale with its four subscales (cognitive, emotional, behavioural and relational) form an id...

  11. Comparing USGS national seismic hazard maps with internet-based macroseismic intensity observations

    Science.gov (United States)

    Mak, Sum; Schorlemmer, Danijel

    2016-04-01

    Verifying a nationwide seismic hazard assessment using data collected after the assessment has been made (i.e., prospective data) is a direct consistency check of the assessment. We directly compared the rates of ground motion exceedance predicted by the four available versions of the USGS national seismic hazard map (NSHMP, 1996, 2002, 2008, 2014) with the rates actually observed during 2000-2013. The data were prospective with respect to the two earlier versions of the NSHMP. We used two sets of somewhat independent data, namely 1) the USGS "Did You Feel It?" (DYFI) intensity reports and 2) instrumental ground motion records extracted from ShakeMap stations. Although both are observed data, they come with different degrees of accuracy. Our results indicated that for California, the predicted and observed hazards were very comparable. The two sets of data gave consistent results, implying robustness. The consistency also encourages the use of DYFI data for hazard verification in the Central and Eastern US (CEUS), where instrumental records are lacking. The results showed that the observed ground-motion exceedance was also consistent with the prediction in the CEUS. The primary value of this study is to demonstrate the usefulness of DYFI data, originally designed for community communication rather than scientific analysis, for the purpose of hazard verification.

  12. Multi-scale electromagnetic imaging of the Monte Aquila Fault (Agri Valley, Southern Italy)

    Science.gov (United States)

    Giocoli, Alessandro; Piscitelli, Sabatino; Romano, Gerardo; Balasco, Marianna; Lapenna, Vincenzo; Siniscalchi, Agata

    2010-05-01

    The Agri Valley is a NW-SE trending intermontane basin formed during Quaternary times along the axial zone of the Southern Apennines thrust belt. The basin is about 30 km long and 12 km wide and is filled by Quaternary continental deposits, which cover down-thrown pre-Quaternary rocks of the Apennines chain. The Agri Valley was hit by the M 7.0, 1857 Basilicata earthquake (Branno et al., 1985), whose macroseismic field covered a wide sector of the Southern Apennines. The latest indications of Late Quaternary faulting processes in the Agri Valley were reported by Maschio et al. (2005), who documented a previously unknown NE-dipping normal fault on the basis of small-scale morphological features of recent tectonic activity. The identified structure was termed the Monte Aquila Fault (MAF) and corresponds to the southern strand of the NW-SE trending Monti della Maddalena Fault System (Maschio et al., 2005; Burrato and Valensise, 2007). The NE-dipping MAF consists of a main northern segment, about 10 km long, and two smaller segments with a cumulative length of ~10 km, bringing the total length to ~20 km. The three segments are arranged in a right-stepping en-echelon pattern and are characterized by subtle geomorphic features. In order to provide more detailed and accurate information about the MAF, a strategy based on the application of complementary investigation tools was employed. In particular, a multi-scale electromagnetic investigation, including Electrical Resistivity Tomography (ERT), Ground Penetrating Radar (GPR) and Magnetotelluric (MT) methods, was used to image the MAF from the near-surface down to several hundred metres depth. The large-scale MT investigation proved useful in detecting the MAF location down to several hundred metres depth, but it did not show any shallow evidence of the MAF. Conversely, the ERT and GPR surveys evidenced signatures of normal-faulting activity at shallow depth (e.g., back-tilting of the bedrock, colluvial wedges, etc.). In

  13. Scaling: An Items Module

    Science.gov (United States)

    Tong, Ye; Kolen, Michael J.

    2010-01-01

    "Scaling" is the process of constructing a score scale that associates numbers or other ordered indicators with the performance of examinees. Scaling typically is conducted to aid users in interpreting test results. This module describes different types of raw scores and scale scores, illustrates how to incorporate various sources of…

  14. Raters & Rating Scales.

    Science.gov (United States)

    Lopez, Winifred A.; Stone, Mark H.

    1998-01-01

    The first article in this section, "Rating Scales and Shared Meaning," by Winifred A. Lopez, discusses the analysis of rating scale data. The second article, "Rating Scale Categories: Dichotomy, Double Dichotomy, and the Number Two," by Mark H. Stone, argues that dichotomies in rating scales are more useful than multiple ratings. (SLD)

  15. Scaling of Metabolic Scaling within Physical Limits

    Directory of Open Access Journals (Sweden)

    Douglas S. Glazier

    2014-10-01

    Full Text Available Both the slope and the elevation of scaling relationships between log metabolic rate and log body size vary taxonomically and in relation to physiological or developmental state, ecological lifestyle and environmental conditions. Here I discuss how the recently proposed metabolic-level boundaries hypothesis (MLBH) provides a useful conceptual framework for explaining and predicting much, but not all, of this variation. This hypothesis is based on three major assumptions: (1) various processes related to body volume and surface area exert state-dependent effects on the scaling slope for metabolic rate in relation to body mass; (2) the elevation and slope of metabolic scaling relationships are linked; and (3) both intrinsic (anatomical, biochemical and physiological) and extrinsic (ecological) factors can affect metabolic scaling. According to the MLBH, the diversity of metabolic scaling relationships occurs within physical boundary limits related to body volume and surface area. Within these limits, specific metabolic scaling slopes can be predicted from the metabolic level (or scaling elevation) of a species or group of species. In essence, metabolic scaling itself scales with metabolic level, which is in turn contingent on various intrinsic and extrinsic conditions operating in physiological or evolutionary time. The MLBH represents a "meta-mechanism" or collection of multiple, specific mechanisms that have contingent, state-dependent effects. As such, the MLBH is Darwinian in approach (the theory of natural selection is also meta-mechanistic), in contrast to currently influential metabolic scaling theory that is Newtonian in approach (i.e., based on unitary deterministic laws). Furthermore, the MLBH can be viewed as part of a more general theory that includes other mechanisms that may also affect metabolic scaling.

  16. Atomic Scale Plasmonic Switch

    OpenAIRE

    Emboras, A.; Niegemann, J.; Ma, P.; Haffner, C; Pedersen, A.; Luisier, M.; Hafner, C.; Schimmel, T.; Leuthold, J.

    2016-01-01

    The atom sets an ultimate scaling limit to Moore's law in the electronics industry. While electronics research already explores atomic-scale devices, photonics research still deals with devices at the micrometer scale. Here we demonstrate that photonic scaling, similar to electronics, is only limited by the atom. More precisely, we introduce an electrically controlled plasmonic switch operating at the atomic scale. The switch allows for fast and reproducible switching by means of the relocat...

  17. On Quantitative Rorschach Scales.

    Science.gov (United States)

    Haggard, Ernest A.

    1978-01-01

    Two types of quantitative Rorschach scales are discussed: first, those based on the response categories of content, location, and the determinants, and second, global scales based on the subject's responses to all ten stimulus cards. (Author/JKS)

  18. Saturation and geometrical scaling

    CERN Document Server

    Praszalowicz, Michal

    2016-01-01

    We discuss the emergence of geometrical scaling as a consequence of the nonlinear evolution equations of QCD, which generate a new dynamical scale, known as the saturation momentum: Qs. In the kinematical region where no other energy scales exist, particle spectra exhibit geometrical scaling (GS), i.e. they depend on the ratio pT/Qs, and the energy dependence enters solely through the energy dependence of the saturation momentum. We confront the hypothesis of GS in different systems with experimental data.
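
    A compact statement of the scaling hypothesis, using the widely used power-law parametrization of the saturation momentum (the constants Q0 and x0 and the exponent λ, typically fitted as λ ≈ 0.2-0.3 at HERA, are assumptions here, not values from this paper):

    ```latex
    \[
      Q_s^2(x) = Q_0^2 \left(\frac{x_0}{x}\right)^{\lambda},
      \qquad
      \frac{d\sigma}{d^2 p_T} = F(\tau),
      \quad \tau = \frac{p_T^2}{Q_s^2(x)},
    \]
    % so spectra taken at different energies collapse onto a single curve in tau.
    ```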

  19. Scale Space Hierarchy

    NARCIS (Netherlands)

    Kuijper, Arjan; Florack, L.M.J.; Viergever, M.A.

    2002-01-01

    We investigate the deep structure of a scale space image. We concentrate on scale space critical points - points with vanishing gradient with respect to both spatial and scale direction. We show that these points are always saddle points. They turn out to be extremely useful, since the iso-intensity

  20. Scaling of differential equations

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    The book serves both as a reference for various scaled models with corresponding dimensionless numbers, and as a resource for learning the art of scaling. A special feature of the book is the emphasis on how to create software for scaled models, based on existing software for unscaled models. Scaling (or non-dimensionalization) is a mathematical technique that greatly simplifies the setting of input parameters in numerical simulations. Moreover, scaling enhances the understanding of how different physical processes interact in a differential equation model. Compared to the existing literature, where the topic of scaling is frequently encountered, but very often in only a brief and shallow setting, the present book gives much more thorough explanations of how to reason about finding the right scales. This process is highly problem dependent, and therefore the book features a lot of worked examples, from very simple ODEs to systems of PDEs, especially from fluid mechanics. The text is easily accessible and exam...

  1. Scale and scaling in agronomy and environmental sciences

    Science.gov (United States)

    Scale is of paramount importance in environmental studies, engineering, and design. The unique course covers the following topics: scale and scaling, methods and theories, scaling in soils and other porous media, scaling in plants and crops; scaling in landscapes and watersheds, and scaling in agro...

  2. Universal scalings of universal scaling exponents

    Energy Technology Data Exchange (ETDEWEB)

    Llave, Rafael de la [Department of Mathematics, University of Texas, Austin, TX 78712 (United States); Olvera, Arturo [IIMAS-UNAM, FENOMEC, Apdo. Postal 20-726, Mexico DF 01000 (Mexico); Petrov, Nikola P [Department of Mathematics, University of Oklahoma, Norman, OK 73019 (United States)

    2007-06-08

    In the last decades, renormalization group (RG) ideas have been applied to describe universal properties of different routes to chaos (quasi-periodic, period doubling or tripling, Siegel disc boundaries, etc.). Each of the RG theories leads to universal scaling exponents which are related to the action of certain RG operators. The goal of this announcement is to show that there is a principle that organizes many of these scaling exponents. We give numerical evidence that the exponents of different routes to chaos satisfy approximately some arithmetic relations. These relations are determined by combinatorial properties of the route and become exact in an appropriate limit. (fast track communication)

  3. Parabolic scaling beams.

    Science.gov (United States)

    Gao, Nan; Xie, Changqing

    2014-06-15

    We generalize the concept of diffraction free beams to parabolic scaling beams (PSBs), whose normalized intensity scales parabolically during propagation. These beams are nondiffracting in the circular parabolic coordinate systems, and all the diffraction free beams of Durnin's type have counterparts as PSBs. Parabolic scaling Bessel beams with Gaussian apodization are investigated in detail, their nonparaxial extrapolations are derived, and experimental results agree well with theoretical predictions.

  4. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  5. Genome-Scale Models

    DEFF Research Database (Denmark)

    Bergdahl, Basti; Sonnenschein, Nikolaus; Machado, Daniel

    2016-01-01

    An introduction to genome-scale models, how to build and use them, will be given in this chapter. Genome-scale models have become an important part of systems biology and metabolic engineering, and are increasingly used in research, both in academia and in industry, both for modeling chemical pr...

  6. Universities Scale Like Cities

    CERN Document Server

    van Raan, Anthony F J

    2012-01-01

    Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit a nonlinear, particularly a power law scaling with population size. These nonlinear effects are common to all cities, with similar power law exponents. These findings mean that the larger the city, the more disproportionately it is a place of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power law scaling. In this paper we demonstrate that universities show a similar behavior as cities in the distribution of the gross university income in terms of total number of citations over size in terms of total number of publications. Moreover, the power law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university, and its ...

  7. The career distress scale

    DEFF Research Database (Denmark)

    Creed, Peter; Hood, Michelle; Praskova, Anna

    2016-01-01

    Career distress is a common and painful outcome of many negative career experiences, such as career indecision, career compromise, and discovering career barriers. However, there are very few scales devised to assess career distress, and the two existing scales identified have psychometric… weaknesses. The absence of a practical, validated scale to assess this construct restricts research related to career distress and limits practitioners who need to assess and treat it. Using a sample of 226 young adults (mean age 20.5 years), we employed item response theory to assess 12 existing career…, which we combined into a scale labelled the Career Distress Scale, demonstrated excellent psychometric properties, meaning that both researchers and practitioners can use it with confidence, although continued validation is required, including testing its relationship to other nomological net variables…

  8. Parallel Computing in SCALE

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D [ORNL; Williams, Mark L [ORNL; Bowman, Stephen M [ORNL

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  9. Scaling the universe

    CERN Document Server

    Frankel, Norman E

    2014-01-01

    A model is presented for the origin of the large-scale structures of the universe and their Mass-Radius scaling law. The physics is conventional, orthodox, but it is used to fashion a highly unorthodox model of the origin of the galaxies, their groups, clusters, super-clusters, and great walls. The scaling law fits the observational results and the model offers new suggestions and predictions. These include a largest, a supreme, cosmic structure, and possible implications for the recently observed pressing cosmological anomalies.

  10. Scaling the Universe

    Science.gov (United States)

    Frankel, Norman E.

    2014-04-01

    A model is presented for the origin of the large-scale structures of the universe and their Mass-Radius scaling law. The physics is conventional, orthodox, but it is used to fashion a highly unorthodox model of the origin of the galaxies, their groups, clusters, super-clusters, and great walls. The scaling law fits the observational results and the model offers new suggestions and predictions. These include a largest, a supreme, cosmic structure, and possible implications for the recently observed pressing cosmological anomalies.

  11. Small scale optics

    CERN Document Server

    Yupapin, Preecha

    2013-01-01

    The behavior of light in small scale optics or nano/micro optical devices has shown promising results, which can be used for basic and applied research, especially in nanoelectronics. Small Scale Optics presents the use of optical nonlinear behaviors for spins, antennae, and whispering gallery modes within micro/nano devices and circuits, which can be used in many applications. This book proposes a new design for a small scale optical device - a microring resonator device. Most chapters are based on the proposed device, which uses a configuration known as a PANDA ring resonator. Analytical and nu...

  12. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.

  13. ON BANDLIMITED SCALING FUNCTION

    Institute of Scientific and Technical Information of China (English)

    Wei Chen; Qiao Yang; Wei-jun Jiang; Si-long Peng

    2002-01-01

    This paper discusses band-limited scaling functions, especially the interval band case and the three interval bands case; their relationship to the oversampling property and weak translation invariance is also studied. At the end, we propose an open problem.

  14. Small-scale Biorefining

    NARCIS (Netherlands)

    Visser, de C.L.M.; Ree, van R.

    2016-01-01

    One promising way to accelerate the market implementation of integrated biorefineries is to promote small (regional) biorefinery initiatives. Small-scale biorefineries require relatively low initial investments and therefore often avoid the financing problems that larger facilities face. They…

  15. Allometric Scaling of Countries

    CERN Document Server

    Zhang, Jiang

    2010-01-01

    As huge complex systems consisting of geographic regions, natural resources, people and economic entities, countries follow the allometric scaling law which is ubiquitous in ecological and urban systems. We systematically investigated the allometric scaling relationships between a large number of macroscopic properties and the geographic (area), demographic (population) and economic (GDP, gross domestic product) sizes of countries respectively. We found that most of the economic, trade, energy consumption and communication related properties have significant super-linear (the exponent is larger than 1) or nearly linear allometric scaling relations with GDP. Meanwhile, the geographic (arable area, natural resources, etc.), demographic (labor force, military age population, etc.) and transportation-related properties (road length, airports) have significant and sub-linear (the exponent is smaller than 1) allometric scaling relations with area. Several differences of power law relations with respect to the population betwee...

  16. Scaling Foreign Exchange Volatility

    OpenAIRE

    Jonathan Batten; Craig Ellis

    2001-01-01

    When asset returns are normally distributed, the risk of an asset over a long return interval may be estimated by scaling the risk from shorter return intervals. While it is well known that asset returns are not normally distributed, a key empirical question concerns the effect that scaling the volatility of dependent processes will have on the pricing of related financial assets. This study provides an insight into this issue by investigating the return properties of the most important currenc...
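
    The scaling at issue is the square-root-of-time rule; a minimal sketch (hypothetical numbers) of how the scaling exponent changes the long-horizon risk estimate when returns are dependent:

    ```python
    # Square-root-of-time rule: under i.i.d. normal returns, sigma_T = sigma_1 * T**0.5.
    # For dependent (long-memory) processes the scaling exponent H departs from 0.5,
    # so naive square-root scaling misprices long-horizon risk.
    daily_vol = 0.006        # hypothetical daily FX return volatility
    horizon = 21             # trading days (about one month)

    for H in (0.5, 0.6):     # 0.5 = i.i.d. benchmark; 0.6 = persistent process
        print(f"H = {H}: implied monthly vol = {daily_vol * horizon**H:.4f}")
    ```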

  17. SMALL SCALE MORPHODYNAMICAL MODELLING

    Institute of Scientific and Technical Information of China (English)

    D. Ditschke; O. Gothel; H. Weilbeer

    2001-01-01

    Long term morphological simulations using complete coupled models lead to very time consuming computations. Latteux (1995) presented modelling techniques developed for tidal current situations in order to reduce the computational effort. In this paper the applicability of such methods to small scale problems is investigated. It is pointed out that these methods can be transferred to small scale problems using the periodicity of the vortex shedding process.

  18. Atomic Scale Plasmonic Switch.

    Science.gov (United States)

    Emboras, Alexandros; Niegemann, Jens; Ma, Ping; Haffner, Christian; Pedersen, Andreas; Luisier, Mathieu; Hafner, Christian; Schimmel, Thomas; Leuthold, Juerg

    2016-01-13

    The atom sets an ultimate scaling limit to Moore's law in the electronics industry. While electronics research already explores atomic-scale devices, photonics research still deals with devices at the micrometer scale. Here we demonstrate that photonic scaling, similar to electronics, is only limited by the atom. More precisely, we introduce an electrically controlled plasmonic switch operating at the atomic scale. The switch allows for fast and reproducible switching by means of the relocation of an individual or, at most, a few atoms in a plasmonic cavity. Depending on the location of the atom, either of two distinct plasmonic cavity resonance states is supported. Experimental results show reversible digital optical switching with an extinction ratio of 9.2 dB and operation at room temperature up to MHz with femtojoule (fJ) power consumption for a single switch operation. This demonstration of an integrated quantum device that allows photons to be controlled at the atomic level opens intriguing perspectives for a fully integrated and highly scalable chip platform, a platform where optics, electronics, and memory may be controlled at the single-atom level.

  19. Universities scale like cities.

    Directory of Open Access Journals (Sweden)

    Anthony F J van Raan

    Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit a nonlinear, particularly a power law scaling with population size. These nonlinear effects are common to all cities, with similar power law exponents. These findings mean that the larger the city, the more disproportionately it is a place of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power law scaling. In this paper we demonstrate that universities show a similar behavior as cities in the distribution of the 'gross university income' in terms of total number of citations over 'size' in terms of total number of publications. Moreover, the power law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university, and its quality in terms of field-normalized citation impact. By studying both the set of the 500 largest universities worldwide and a specific subset of these 500 universities--the top-100 European universities--we are also able to distinguish between properties of universities with as well as without selection of one specific local property, the quality of a university in terms of its average field-normalized citation impact. It also reveals an interesting observation concerning the working of a crucial property in networked systems, preferential attachment.

  20. Cardinal scales for health evaluation

    DEFF Research Database (Denmark)

    Harvey, Charles; Østerdal, Lars Peter Raahave

    2010-01-01

    Policy studies often evaluate health for an individual or for a population by using measurement scales that are ordinal scales or expected-utility scales. This paper develops scales of a different type, commonly called cardinal scales, that measure changes in health. Also, we argue that cardinal ...

  1. Scaled-Free Objects

    CERN Document Server

    Grilliette, Will

    2010-01-01

    Several functional analysts and C*-algebraists have been moving toward a categorical means of understanding normed objects. In this work, I address a primary issue with adapting these abstract concepts to functional analytic settings, the lack of free objects. Using a new object, called a "crutched set", and associated categories, I devise a generalized construction of normed objects as a left adjoint functor to a natural forgetful functor. Further, the universal property in each case yields a "scaled-free" mapping property, which extends previous notions of "free" normed objects. In particular, I construct the following types of scaled-free objects: Banach spaces, Banach algebras, C*-algebras, operator spaces, and operator algebras. In subsequent papers, this scaled-free property, coupled with the associated functorial results, will give rise to a new view of presentation theory for C*-algebras, which inherits many properties and constructions from its algebraic counterpart.

  2. No-Scale Inflation

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V.; Olive, Keith A.

    2016-01-01

    Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation naturally yields predictions similar to those of the Starobinsky model based on $R + R^2$ gravity, with a tilted spectrum of scalar perturbations, $n_s \sim 0.96$, and small values of the tensor-to-scalar perturbation ratio, $r < 0.1$, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.
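
    For context, the Starobinsky-like predictions quoted above are commonly written in terms of the number of e-folds N (a standard result, not a formula from this paper):

    ```latex
    \[
      n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12}{N^2},
    \]
    % e.g. N ~ 55 gives n_s ~ 0.964 and r ~ 0.004, consistent with the quoted
    % n_s ~ 0.96 and r < 0.1.
    ```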

  3. Wavelets, vibrations and scalings

    CERN Document Server

    Meyer, Yves

    1997-01-01

    Physicists and mathematicians are intensely studying fractal sets of fractal curves. Mandelbrot advocated modeling of real-life signals by fractal or multifractal functions. One example is fractional Brownian motion, where large-scale behavior is related to a corresponding infrared divergence. Self-similarities and scaling laws play a key role in this new area. There is a widely accepted belief that wavelet analysis should provide the best available tool to unveil such scaling laws. And orthonormal wavelet bases are the only existing bases which are structurally invariant through dyadic dilations. This book discusses the relevance of wavelet analysis to problems in which self-similarities are important. Among the conclusions drawn are the following: 1) A weak form of self-similarity can be given a simple characterization through size estimates on wavelet coefficients, and 2) Wavelet bases can be tuned in order to provide a sharper characterization of this self-similarity. A pioneer of the wavelet "saga", Meye...

  4. Rolling at small scales

    DEFF Research Database (Denmark)

    Nielsen, Kim L.; Niordson, Christian F.; Hutchinson, John W.

    2016-01-01

    The rolling process is widely used in the metal forming industry and has been so for many years. However, the process has attracted renewed interest as it recently has been adapted to very small scales, where conventional plasticity theory cannot accurately predict the material response. It is well… Metals are known to be stronger when large strain gradients appear over a few microns; hence, the forces involved in the rolling process are expected to increase relatively at these smaller scales. In the present numerical analysis, a steady-state modeling technique that enables convergence without dealing with the transient response period is employed. This allows for a comprehensive parameter study. Coulomb friction, including a stick-slip condition, is used as a first approximation. It is found that length scale effects increase both the forces applied to the roll, the roll torque, and thus…

  5. Urban Scaling in Europe

    CERN Document Server

    Bettencourt, Luis M A

    2015-01-01

    Over the last decades, in disciplines as diverse as economics, geography, and complex systems, a perspective has arisen proposing that many properties of cities are quantitatively predictable due to agglomeration or scaling effects. Using new harmonized definitions for functional urban areas, we examine to what extent these ideas apply to European cities. We show that while most large urban systems in Western Europe (France, Germany, Italy, Spain, UK) approximately agree with theoretical expectations, the small number of cities in each nation and their natural variability preclude drawing strong conclusions. We demonstrate how this problem can be overcome so that cities from different urban systems can be pooled together to construct larger datasets. This leads to a simple statistical procedure to identify urban scaling relations, which then clearly emerge as a property of European cities. We compare the predictions of urban scaling to Zipf's law for the size distribution of cities and show that while the for...

  6. Scaled Sparse Linear Regression

    CERN Document Server

    Sun, Tingni

    2011-01-01

    Scaled sparse linear regression jointly estimates the regression coefficients and noise level in a linear model. It chooses an equilibrium with a sparse regression method by iteratively estimating the noise level via the mean residual squares and scaling the penalty in proportion to the estimated noise level. The iterative algorithm costs nearly nothing beyond the computation of a path of the sparse regression estimator for penalty levels above a threshold. For the scaled Lasso, the algorithm is a gradient descent in a convex minimization of a penalized joint loss function for the regression coefficients and noise level. Under mild regularity conditions, we prove that the method yields simultaneously an estimator for the noise level and an estimated coefficient vector in the Lasso path satisfying certain oracle inequalities for the estimation of the noise level, prediction, and the estimation of regression coefficients. These oracle inequalities provide sufficient conditions for the consistency and asymptotic...
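
    A minimal sketch of that iteration, with scikit-learn's Lasso as the sparse-regression step (the base penalty lam0 and the stopping rule are simplifications for illustration, not the paper's exact algorithm):

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def scaled_lasso(X, y, lam0, n_iter=50, tol=1e-8):
        """Sketch of the scaled-Lasso iteration: alternate between estimating the
        noise level from the mean residual squares and refitting the Lasso with a
        penalty proportional to that estimate."""
        sigma = np.std(y)                          # initial noise-level guess
        model = None
        for _ in range(n_iter):
            model = Lasso(alpha=lam0 * sigma).fit(X, y)
            new_sigma = np.sqrt(np.mean((y - model.predict(X)) ** 2))
            if abs(new_sigma - sigma) < tol:
                break
            sigma = new_sigma
        return model.coef_, sigma

    # Hypothetical usage with lam0 ~ sqrt(2 log p / n), a common theoretical choice.
    rng = np.random.default_rng(0)
    n, p = 200, 50
    X = rng.normal(size=(n, p))
    y = X[:, 0] * 3.0 + rng.normal(scale=0.5, size=n)
    coef, sigma = scaled_lasso(X, y, lam0=np.sqrt(2 * np.log(p) / n))
    print(f"estimated noise level: {sigma:.3f}")
    ```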

  7. Scaling up Telemedicine

    DEFF Research Database (Denmark)

    Christensen, Jannie Kristine Bang; Nielsen, Jeppe Agger; Gustafsson, Jeppe

    Although the processes of innovation have attracted the attention of an increasing number of scholars, their political dynamics remain underexplored. Against this backdrop, this paper examines political behavior as a critical aspect of the process of scaling up innovations. We revisit the concepts… telemedicine project through simultaneous translation and theorization efforts in a cross-sectorial, politicized social context. Although we focus on upscaling as a bottom-up process (from pilot to large scale), we argue that translation and theorization, and associated political behavior, occur in a broader… through negotiating, mobilizing coalitions, and legitimacy building. To illustrate and further develop this conceptualization, we build on insights from a longitudinal case study (2008-2014) and provide a rich empirical account of how a Danish telemedicine pilot was transformed into a large-scale…

  8. Elders Health Empowerment Scale

    Science.gov (United States)

    2014-01-01

    Introduction: Empowerment refers to patient skills that allow them to become primary decision-makers in control of daily self-management of health problems. As important as the concept is, particularly for elders with chronic diseases, few available instruments have been validated for use with Spanish-speaking people. Objective: Translate and adapt the Health Empowerment Scale (HES) for a Spanish-speaking older adult sample and perform its psychometric validation. Methods: The HES was adapted based on the Diabetes Empowerment Scale-Short Form. Where "diabetes" was mentioned in the original tool, it was replaced with "health" terms to cover all kinds of conditions that could affect health empowerment. Statistical and psychometric analyses were conducted on 648 urban-dwelling seniors. Results: The HES had an acceptable internal consistency with a Cronbach's α of 0.89. The convergent validity was supported by significant Pearson's coefficient correlations between the HES total and item scores and the General Self Efficacy Scale (r = 0.77), Swedish Rheumatic Disease Empowerment Scale (r = 0.69) and Making Decisions Empowerment Scale (r = 0.70). Construct validity was evaluated using item analysis, the split-half test and corrected item-to-total correlation coefficients, with good internal consistency (α > 0.8). The content validity was supported by Scale and Item Content Validity Indices of 0.98 and 1.0, respectively. Conclusions: The HES had acceptable face validity and reliability coefficients, which, added to its easy administration and users' unbiased comprehension, could make it a suitable tool for evaluating elders' outpatient empowerment-based medical education programs. PMID:25767307
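
    The reliability figure quoted above is Cronbach's α, which is easy to compute directly; a minimal sketch with a hypothetical score matrix:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Hypothetical 5-point Likert responses: 648 respondents, 8 correlated items.
    rng = np.random.default_rng(3)
    base = rng.integers(1, 6, size=(648, 1))
    scores = np.clip(base + rng.integers(-1, 2, size=(648, 8)), 1, 5)
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```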

  9. Global Scale Impacts

    CERN Document Server

    Asphaug, Erik; Jutzi, Martin

    2015-01-01

    Global scale impacts modify the physical or thermal state of a substantial fraction of a target asteroid. Specific effects include accretion, family formation, reshaping, mixing and layering, shock and frictional heating, fragmentation, material compaction, dilatation, stripping of mantle and crust, and seismic degradation. Deciphering the complicated record of global scale impacts, in asteroids and meteorites, will lead us to understand the original planet-forming process and its resultant populations, and their evolution in time as collisions became faster and fewer. We provide a brief overview of these ideas, and an introduction to models.

  10. Irreversibility time scale.

    Science.gov (United States)

    Gallavotti, G

    2006-06-01

    Entropy creation rate is introduced for a system interacting with thermostats (i.e., for a system subject to internal conservative forces interacting with "external" thermostats via conservative forces) and a fluctuation theorem for it is proved. As an application, a time scale is introduced, to be interpreted as the time over which irreversibility becomes manifest in a process leading from an initial to a final stationary state of a mechanical system in a general nonequilibrium context. The time scale is evaluated in a few examples, including the classical Joule-Thomson process (gas expansion in a vacuum).

  11. Seismic risk assessment of Navarre (Northern Spain)

    Science.gov (United States)

    Gaspar-Escribano, J. M.; Rivas-Medina, A.; García Rodríguez, M. J.; Benito, B.; Tsige, M.; Martínez-Díaz, J. J.; Murphy, P.

    2009-04-01

    on rock conditions (for the same probability level). Again, the highest hazard is found in the northeastern part of the region. The lowest hazard is obtained along major river valleys. The vulnerability assessment of the Navarre building stock is accomplished using as a proxy a combination of building age, location, number of floors and the implementation of building codes. Field surveys help constrain the extent of traditional and technological construction types. The vulnerability characterization is carried out following three methods: the European Macroseismic Scale (EMS98), the RISK-UE vulnerability index and the capacity spectrum method implemented in Hazus. Vulnerability distribution maps for each municipality of Navarre are provided, adapted to the EMS98 vulnerability classes. The vulnerability of Navarre is medium to high, except for recent urban, highly populated developments. For each vulnerability class and expected ground motion, the damage distribution is estimated by means of damage probability matrices. Several damage indexes, embracing relative and absolute damage estimates, are used. Expected average damage is low. Whereas the largest amounts of damaged structures are found in big cities, the highest percentages are obtained in some municipalities of northeastern Navarre. Additionally, expected percentages and numbers of persons affected by earthquake damage are calculated for each municipality. Expected numbers of affected people are low, reflecting the low expected damage degree.
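
    The damage-probability-matrix step can be illustrated in a few lines; a sketch with made-up probabilities (not the study's values):

    ```python
    import numpy as np

    # Illustrative damage probability matrix (DPM) for one intensity level:
    # rows = EMS-98 vulnerability classes, columns = damage grades D0..D5.
    dpm = {
        "A": [0.05, 0.15, 0.30, 0.30, 0.15, 0.05],
        "B": [0.15, 0.30, 0.30, 0.15, 0.08, 0.02],
        "C": [0.40, 0.35, 0.15, 0.07, 0.02, 0.01],
    }
    grades = np.arange(6)                       # damage grades D0..D5
    for cls, probs in dpm.items():
        mean_grade = float(np.dot(grades, probs))
        print(f"class {cls}: expected mean damage grade = {mean_grade:.2f}")
    ```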

  12. The career distress scale

    DEFF Research Database (Denmark)

    Creed, Peter; Hood, Michelle; Praskova, Anna

    2016-01-01

    weaknesses. The absence of a practical, validated scale to assess this construct restricts research related to career distress and limits practitioners who need to assess and treat it. Using a sample of 226 young adults (mean age 20.5 years), we employed item response theory to assess 12 existing career...

  13. Symbolic Multidimensional Scaling

    NARCIS (Netherlands)

    P.J.F. Groenen (Patrick); Y. Terada

    2015-01-01

    Multidimensional scaling (MDS) is a technique that visualizes dissimilarities between pairs of objects as distances between points in a low dimensional space. In symbolic MDS, a dissimilarity is not just a value but can represent an interval or even a histogram. Here, w…

  14. LARGE SCALE GLAZED

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    World-famous architects today challenge the exposure of concrete in their architecture. It is my hope to be able to complement these. I try to develop new aesthetic potentials for concrete and ceramics, at large scales that have not been seen before in the ceramic area. It is expected to result…

  15. Scaling the Salary Heights.

    Science.gov (United States)

    McNamee, Mike

    1986-01-01

    Federal cutbacks have created new demand for fund-raisers everywhere. Educational fund-raisers are thinking about "pay for performance"--incentive-based pay plans that can help them retain, reward, and motivate talented fund raisers within the tight pay scales common at colleges and universities. (MLW)

  16. Vineland Adaptive Behavior Scales.

    Science.gov (United States)

    Icabone, Dona G.

    1999-01-01

    This article describes the Vineland Adaptive Behavior Scales, a general assessment of personal and social sufficiency of individuals from birth through adulthood to determine areas of strength and weakness. The instrument assesses communication, daily living skills, socialization, and motor skills. Its administration, standardization, reliability,…

  17. The Spiritual Competency Scale

    Science.gov (United States)

    Robertson, Linda A.

    2010-01-01

    This study describes the development of the Spiritual Competency Scale, which was based on the Association for Spiritual, Ethical and Religious Values in Counseling's original Spiritual Competencies. Participants were 662 counseling students from religiously based and secular universities nationwide. Exploratory factor analysis revealed a 22-item,…

  18. Is this scaling nonlinear?

    CERN Document Server

    Leitao, J C; Gerlach, M; Altmann, E G

    2016-01-01

    One of the most celebrated findings in complex systems in the last decade is that different indexes y (e.g., patents) scale nonlinearly with the population x of the cities in which they appear, i.e., $y \sim x^\beta$, $\beta \neq 1$…

  19. Scaling School Turnaround

    Science.gov (United States)

    Herman, Rebecca

    2012-01-01

    This article explores the research on turning around low performing schools to summarize what we know, what we don't know, and what this means for scaling school turnaround efforts. "School turnaround" is defined here as quick, dramatic gains in academic achievement for persistently low performing schools. The article first considers the…

  20. Supergranulation Scale Connection Simulations

    CERN Document Server

    Stein, R F; Georgobiani, D; Benson, D; Schaffenberger, W

    2008-01-01

    Results of realistic simulations of solar surface convection on the scale of supergranules (96 Mm wide by 20 Mm deep) are presented. The simulations cover only 10% of the geometric depth of the solar convection zone, but half its pressure scale heights. They include the hydrogen, first and most of the second helium ionization zones. The horizontal velocity spectrum is a power law and the horizontal size of the dominant convective cells increases with increasing depth. Convection is driven by buoyancy work which is largest close to the surface, but significant over the entire domain. Close to the surface buoyancy driving is balanced by the divergence of the kinetic energy flux, but deeper down it is balanced by dissipation. The damping length of the turbulent kinetic energy is 4 pressure scale heights. The mass mixing length is 1.8 scale heights. Two thirds of the area is upflowing fluid except very close to the surface. The internal (ionization) energy flux is the largest contributor to the convective flux fo...

  1. Allometric scaling of countries

    Science.gov (United States)

    Zhang, Jiang; Yu, Tongkui

    2010-11-01

    As huge complex systems consisting of geographic regions, natural resources, people and economic entities, countries follow the allometric scaling law which is ubiquitous in ecological and urban systems. We systematically investigated the allometric scaling relationships between a large number of macroscopic properties and the geographic (area), demographic (population) and economic (GDP, gross domestic product) sizes of countries respectively. We found that most of the economic, trade, energy consumption and communication related properties have significant super-linear (the exponent is larger than 1) or nearly linear allometric scaling relations with the GDP. Meanwhile, the geographic (arable area, natural resources, etc.), demographic (labor force, military age population, etc.) and transportation-related properties (road length, airports) have significant and sub-linear (the exponent is smaller than 1) allometric scaling relations with area. Several differences in the power law relations with respect to population between countries and cities were pointed out. First, population increases sub-linearly with area in countries. Second, GDP increases linearly with population in countries, not super-linearly as in cities. Finally, electricity or oil consumption per capita increases with population faster in countries than in cities.
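
    Testing for super- or sub-linearity amounts to checking whether the confidence interval of the fitted exponent excludes 1; a sketch with synthetic data (not the paper's):

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical country data: property y scaling with GDP as y ~ GDP**beta.
    rng = np.random.default_rng(1)
    gdp = np.logspace(9, 13, 120)
    y = 1e-3 * gdp**1.15 * rng.lognormal(0, 0.3, 120)   # true beta = 1.15

    X = sm.add_constant(np.log(gdp))
    fit = sm.OLS(np.log(y), X).fit()
    beta, (lo, hi) = fit.params[1], fit.conf_int()[1]
    print(f"beta = {beta:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
    print("super-linear" if lo > 1 else
          "sub-linear" if hi < 1 else "indistinguishable from linear")
    ```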

  2. Gravo-Aeroelastic Scaling for Extreme-Scale Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Fingersh, Lee J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Loth, Eric [University of Virginia; Kaminski, Meghan [University of Virginia; Qin, Chao [University of Virginia; Griffith, D. Todd [Sandia National Laboratories

    2017-06-09

    A scaling methodology is described in the present paper for extreme-scale wind turbines (rated at 10 MW or more) that allows their sub-scale turbines to capture the key blade dynamics and aeroelastic deflections. For extreme-scale turbines, such deflections and dynamics can be substantial and are primarily driven by centrifugal, thrust and gravity forces as well as the net torque. Each of these is in turn a function of various wind conditions, including turbulence levels that cause shear, veer, and gust loads. The 13.2 MW rated SNL100-03 rotor design, having a blade length of 100 meters, is herein scaled to the CART3 wind turbine at NREL using 25% geometric scaling, with blade mass and wind speed scaled by gravo-aeroelastic constraints. In order to mimic the ultralight structure of the advanced-concept extreme-scale design, the scaling results indicate that the gravo-aeroelastically scaled blades for the CART3 would be three times lighter and 25% longer than the current CART3 blades. A benefit of this scaling approach is that the scaled wind speeds needed for testing are reduced (in this case by a factor of two), allowing testing under extreme gust conditions to be much more easily achieved. Most importantly, this scaling approach can investigate extreme-scale concepts, including dynamic behaviors and aeroelastic deflections (including flutter), at an extremely small fraction of the full-scale cost.
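
    The reported numbers are consistent with Froude-type (gravity-dominated) similarity; a small sanity-check script (the Froude assumption is ours, not stated in the abstract):

    ```python
    import math

    # Froude-type similarity: if gravity forces set the velocity scale,
    # V scales with sqrt(L) and mass with L**3 for a geometrically similar,
    # equal-density structure. The 25% length scale is from the paper.
    s = 0.25                                   # geometric (length) scale factor
    print(f"wind-speed scale: {math.sqrt(s):.2f}  (speeds reduced by a factor of two)")
    print(f"naive mass scale: {s**3:.4f} (before the additional 'ultralight' lightening)")
    ```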

  3. Global Dynamic Exposure and the OpenBuildingMap - Communicating Risk and Involving Communities

    Science.gov (United States)

    Schorlemmer, Danijel; Beutin, Thomas; Hirata, Naoshi; Hao, Ken; Wyss, Max; Cotton, Fabrice; Prehn, Karsten

    2017-04-01

    Detailed understanding of local risk factors regarding natural catastrophes requires in-depth characterization of the local exposure. Current exposure capture techniques have to find a balance between resolution and coverage. We aim at bridging this gap by employing a crowd-sourced approach to exposure capturing, focusing on risk related to earthquake hazard. OpenStreetMap (OSM), the rich and constantly growing geographical database, is an ideal foundation for this task. More than 3.5 billion geographical nodes, more than 200 million building footprints (growing by 100,000 per day), and a plethora of information about schools, hospitals, and other critical facilities allow us to exploit this dataset for risk-related computations. We are combining the strengths of crowd-sourced data collection with the knowledge of experts in extracting the most information from these data. Besides relying on the very active OpenStreetMap community and the Humanitarian OpenStreetMap Team, which are collecting building information at a high pace, we are providing a tailored building-capture tool for mobile devices. This tool facilitates simple and fast capture of building properties for OpenStreetMap by any person or interested community. With our OpenBuildingMap system, we are harvesting this dataset by processing every building in near real time. We are collecting exposure and vulnerability indicators from explicitly provided data (e.g. hospital locations), implicitly provided data (e.g. building shapes and positions), and semantically derived data, i.e. interpretation applying expert knowledge. The expert knowledge is needed to translate the simple building properties captured by OpenStreetMap users into vulnerability and exposure indicators and subsequently into building classifications as defined in the Building Taxonomy 2.0 developed by the Global Earthquake Model (GEM) and the European Macroseismic Scale (EMS98). With this approach, we increase the resolution of existing
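
    As a sketch of the kind of expert rule that turns OSM tags into EMS-98 vulnerability classes, consider the following Python fragment. The tag keys follow OSM conventions, but the mapping rules are illustrative placeholders, not the GEM/OpenBuildingMap logic:

    ```python
    # Hypothetical mapping from OpenStreetMap building tags to a coarse
    # EMS-98 vulnerability class (A = most vulnerable).
    def ems98_class(tags: dict) -> str:
        material = tags.get("building:material", "")
        levels = int(tags.get("building:levels", 1))
        if material in ("mud_brick", "adobe", "rubble"):
            return "A"                      # weakest masonry types
        if material in ("brick", "stone"):
            return "B" if levels <= 2 else "A"
        if material in ("concrete", "reinforced_concrete"):
            return "C" if levels <= 5 else "D"
        return "B"                          # conservative default for sparse tags

    print(ems98_class({"building:material": "brick", "building:levels": "3"}))
    ```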

  4. The KnowRISK project: Tools and strategies for risk communication and learning

    Science.gov (United States)

    Musacchio, Gemma; Amaral Ferreira, Mónica; Falsaperla, Susanna; Piangiamore, Giovanna Lucia; Pino, Nicola Alessandro; Solarino, Stefano; Crescimbene, Massimo; Eva, Elena; Reitano, Danilo; Þorvaldsdottir, Solveig; Sousa Silva, Delta; Rupakhety, Rajesh; Sousa Oliveira, Carlos

    2016-04-01

    Damage to non-structural elements of buildings (i.e. partitions, ceilings, cladding, electrical and mechanical systems and furniture) is known to cause injuries and human losses. It also has a significant impact on earthquake resilience, yet it is underestimated worldwide. The project KnowRISK (Know your city, Reduce seISmic risK through non-structural elements) is financed by the European Commission to develop prevention measures that may reduce non-structural damage in urban areas. The pilot areas of the project lie within the three participating European countries, namely Portugal, Iceland and Italy. They were chosen because they are prone to damage levels 2 and 3 (EMS-98, European Macroseismic Scale), which typically affect non-structural elements. We will develop and test a risk communication strategy taking into account the needs of households and schools, putting into practice a portfolio of best practices to reduce the most common non-structural vulnerabilities. We will target our actions to different societal groups, considering their cultural background and social vulnerabilities, and implement a participatory approach that will promote engagement and interaction between the scientific community, practitioners and citizens to foster knowledge of everyone's own neighborhood, resilience and vulnerability. A practical guide for citizens will highlight that low-cost actions can be implemented to increase the safety of households, meant as the places where the most vulnerable societal groups, including children and elderly people, spend much of their time. Since our actions towards communication will include education, we will define tools that allow a clear and direct understanding of the elements exposed to risk. Schools will be one of our target societal groups, and their central role at the community level will ensure the spreading and strengthening of the communication process. Schools are often located in old or re-adapted buildings, formerly used for

  5. The Conscientious Responders Scale

    Directory of Open Access Journals (Sweden)

    Zdravko Marjanovic

    2014-07-01

    This investigation introduces a novel tool for identifying conscientious responders (CRs) and random responders (RRs) in psychological inventory data. The Conscientious Responders Scale (CRS) is a five-item validity measure that uses instructional items to identify responders. Because each item instructs responders exactly how to answer that particular item, each response can be scored as either correct or incorrect. Given the long odds of answering a CRS item correctly by chance alone on a 7-point scale (14.29%), we reasoned that RRs would answer most items incorrectly, whereas CRs would answer them correctly. This rationale was evaluated in two experiments in which CRs' CRS scores were compared against RRs' scores. As predicted, results showed large differences in CRS scores across responder groups. Moreover, the CRS correctly classified responders as either conscientious or random with greater than 93% accuracy. Implications for the reliability and effectiveness of the CRS are discussed.
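
    The "long odds" argument is simple binomial arithmetic; a minimal sketch:

    ```python
    from math import comb

    # Chance of a random responder answering a 7-point CRS item "correctly" is 1/7.
    p = 1 / 7
    # Probability a purely random responder gets k of the 5 CRS items right:
    for k in range(6):
        prob = comb(5, k) * p**k * (1 - p)**(5 - k)
        print(f"P({k} correct) = {prob:.4f}")
    # P(4 or 5 correct) is tiny (~0.2%), which is why a simple cutoff separates
    # conscientious from random responders with high accuracy.
    ```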

  6. An Elastica Arm Scale

    CERN Document Server

    Bosi, F; Corso, F Dal; Bigoni, D

    2015-01-01

    The concept of a 'deformable arm scale' (completely different from a traditional rigid arm balance) is theoretically introduced and experimentally validated. The idea is not intuitive, but is the result of the nonlinear equilibrium kinematics of rods inducing configurational forces, so that deflection of the arms becomes necessary for equilibrium, which would be impossible for a rigid system. In particular, the rigid arms of usual scales are replaced by a flexible elastic lamina, free to slide in a frictionless, inclined sliding sleeve, which can reach a unique equilibrium configuration when two vertical dead loads are applied. Prototypes realized to demonstrate the feasibility of the system show a high accuracy in the measurement of load within a certain range of use. It is finally shown that the presented results are strongly related to the snaking of confined beams, with implications for the locomotion of serpents, plumbing, and smart oil drilling.

  7. Evolution of Scale Worms

    DEFF Research Database (Denmark)

    Gonzalez, Brett Christopher

    …) caves, and the interstitium, recovering six monophyletic clades within Aphroditiformia: Acoetidae, Aphroditidae, Eulepethidae, Iphionidae, Polynoidae, and Sigalionidae (inclusive of the former 'Pisionidae' and 'Pholoidae'), respectively. Tracing of morphological character evolution showed a high degree of adaptability and convergent evolution between relatively closely related scale worms. While some morphological and behavioral modifications in cave polynoids reflected troglomorphism, other modifications like eye loss were found to stem from a common ancestor inhabiting the deep sea, further corroborating the deep sea ancestry of scale worm cave fauna. In conclusion, while morphological characterization across Aphroditiformia appears deceptively easy due to the presence of elytra, convergent evolution during multiple early radiations across wide-ranging habitats has confounded our ability to reconstruct…

  8. Scaling macroscopic aquatic locomotion

    Science.gov (United States)

    Gazzola, Mattia; Argentina, Mederic; Mahadevan, Lakshminarayanan

    2014-11-01

    Inertial aquatic swimmers that use undulatory gaits range in length L from a few millimeters to 30 meters, across a wide array of biological taxa. Using elementary hydrodynamic arguments, we uncover a unifying mechanistic principle characterizing their locomotion by deriving a scaling relation that links swimming speed U to body kinematics (tail beat amplitude A and frequency ω) and fluid properties (kinematic viscosity ν). This principle can be simply couched as the power law Re ~ Sw^α, where Re = UL/ν >> 1 and Sw = ωAL/ν, with α = 4/3 for laminar flows, and α = 1 for turbulent flows. Existing data from over 1000 measurements on fish, amphibians, larvae, reptiles, mammals and birds, as well as direct numerical simulations, are consistent with our scaling. We interpret our results as the consequence of the convergence of aquatic gaits to the performance limits imposed by hydrodynamics.
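
    A back-of-the-envelope use of the reported law (order-one prefactors from the paper are omitted, so the numbers are only indicative):

    ```python
    # Re ~ Sw**alpha, i.e. U*L/nu ~ (omega*A*L/nu)**alpha, solved for U.
    nu = 1e-6        # kinematic viscosity of water, m^2/s
    L = 0.1          # body length, m (hypothetical small fish)
    A = 0.02         # tail-beat amplitude, m
    omega = 10.0     # tail-beat frequency, rad/s

    for alpha, regime in ((4 / 3, "laminar"), (1.0, "turbulent")):
        Sw = omega * A * L / nu
        U = Sw**alpha * nu / L      # prefactor of order one assumed
        print(f"{regime}: Sw = {Sw:.0f}, implied U ~ {U:.2f} m/s")
    ```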

  9. Perceived prominence and scale types

    DEFF Research Database (Denmark)

    Tøndering, John; Jensen, Christian

    2005-01-01

    Three different scales which have been used to measure perceived prominence are evaluated in a perceptual experiment. Average scores of raters using a multi-level (31-point) scale, a simple binary (2-point) scale and an intermediate 4-point scale are almost identical. The potentially finer gradat...

  10. Seismic Hazard and risk assessment for Romania -Bulgaria cross-border region

    Science.gov (United States)

    Simeonova, Stela; Solakov, Dimcho; Alexandrova, Irena; Vaseva, Elena; Trifonova, Petya; Raykova, Plamena

    2016-04-01

    Among the many kinds of natural and man-made disasters, earthquakes dominate with regard to their social and economic impact on the urban environment. Global seismic hazard and vulnerability to earthquakes are steadily increasing as urbanization and development occupy more areas that are prone to the effects of strong earthquakes. The assessment of seismic hazard and risk is particularly important, because it provides valuable information for seismic safety and disaster mitigation, and it supports decision making for the benefit of society. Romania and Bulgaria, situated in the Balkan Region as a part of the Alpine-Himalayan seismic belt, are characterized by high seismicity and are exposed to a high seismic risk. Over the centuries, both countries have experienced strong earthquakes. The cross-border region encompassing northern Bulgaria and southern Romania is a territory prone to the effects of strong earthquakes. The area is significantly affected by earthquakes occurring in both countries: on the one hand, the events generated by the Vrancea intermediate-depth seismic source in Romania, and on the other hand, the crustal seismicity originating in the seismic sources Shabla (SHB), Dulovo and Gorna Orjahovitza (GO) in Bulgaria. The Vrancea seismogenic zone of Romania is a very peculiar seismic source, often described as unique in the world, and it represents a major concern for most of the northern part of Bulgaria as well. In the present study, the seismic hazard for the Romania-Bulgaria cross-border region is assessed on the basis of integrated basic geo-datasets. The hazard results are obtained by applying two alternative approaches - probabilistic and deterministic. The MSK64 intensity (the MSK64 scale is practically equivalent to the newer EMS98) is used as the output parameter for the hazard maps. We prefer to use the macroseismic intensity here instead of PGA, because it is directly related to the degree of damage and, moreover, the epicentral intensity is the original
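
    The probabilistic branch of such a study rests on standard return-period arithmetic; a generic sketch (the return periods below are common conventions, not values from the study):

    ```python
    import math

    # Poisson occurrence model: the probability that a given intensity is
    # exceeded at least once in t years is P = 1 - exp(-t / T_R), where T_R
    # is the return period of that intensity.
    for T_R in (95, 475, 975):           # commonly used return periods, years
        p50 = 1 - math.exp(-50 / T_R)    # exceedance probability in 50 years
        print(f"T_R = {T_R:4d} yr -> P(exceedance in 50 yr) = {p50:.2%}")
    ```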

  11. The Unintentional Procrastination Scale

    OpenAIRE

    Fernie, BA; Bharucha, Z; Nikčević, AV; Spada, MM

    2016-01-01

    © 2016 The Author(s). Procrastination refers to the delay or postponement of a task or decision and is often conceptualised as a failure of self-regulation. Recent research has suggested that procrastination could be delineated into two domains: intentional and unintentional. In this two-study paper, we aimed to develop a measure of unintentional procrastination (named the Unintentional Procrastination Scale or the 'UPS') and test whether this would be a stronger marker of psychopathology than ...

  12. Extreme Scale Visual Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL; Potok, Thomas E [ORNL; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL; Shipman, Galen M [ORNL; Thornton, Peter E [ORNL; Potok, Thomas E [ORNL

    2013-01-01

    Given the scale and complexity of today's data, visual analytics is rapidly becoming a necessity rather than an option for comprehensive exploratory analysis. In this paper, we provide an overview of three applications of visual analytics for addressing the challenges of analyzing climate, text streams, and biosurveillance data. These systems feature varying levels of interaction and high performance computing technology integration to permit exploratory analysis of large and complex data of global significance.

  13. The Chinese Politeness Scale

    Institute of Scientific and Technical Information of China (English)

    王喜凤

    2012-01-01

    In order to make sense of what is said in an interaction, we have to look at various factors relating to social distance and closeness. Generally, these factors include the specific situation in which language is used, the relative status of the two participants, the message being delivered and, finally, the age of the participants. In this article, the Chinese Politeness Scale, based on Chinese social values and tradition, will be explained and demonstrated in detail.

  14. EARTHQUAKE SCALING PARADOX

    Institute of Scientific and Technical Information of China (English)

    WU ZHONG-LIANG

    2001-01-01

    Two measures of earthquakes, the seismic moment and the broadband radiated energy, show completely different scaling relations. For shallow earthquakes worldwide from January 1987 to December 1998, the frequency distribution of the seismic moment shows a clear kink between moderate and large earthquakes, as revealed by previous works. But the frequency distribution of the broadband radiated energy shows a single power law, a classical Gutenberg-Richter relation. This inconsistency raises a paradox in the self-organized criticality model of earthquakes.
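
    For reference, the classical Gutenberg-Richter relation invoked above is usually written as follows (a standard form, not taken from this paper); recast in radiated energy E it becomes a single power law, which is the behavior reported here for the broadband radiated energy:

    ```latex
    \[
      \log_{10} N(\geq M) = a - bM,
      \qquad
      N(\geq E) \propto E^{-\beta}.
    \]
    ```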

  15. Scaling up Copy Detection

    OpenAIRE

    Li, Xian; Dong, Xin Luna; Lyons, Kenneth B.; Meng, Weiyi; Srivastava, Divesh

    2015-01-01

    Recent research shows that copying is prevalent for Deep-Web data and considering copying can significantly improve truth finding from conflicting values. However, existing copy detection techniques do not scale for large sizes and numbers of data sources, so truth finding can be slowed down by one to two orders of magnitude compared with the corresponding techniques that do not consider copying. In this paper, we study how to improve the scalability of copy detection on structured data. Ou...

  16. Micro-Scale Thermoacoustics

    Science.gov (United States)

    Offner, Avshalom; Ramon, Guy Z.

    2016-11-01

    Thermoacoustic phenomena - the conversion of heat to acoustic oscillations - may be harnessed for the construction of reliable, practically maintenance-free engines and heat pumps. Specifically, the miniaturization of thermoacoustic devices holds great promise for the cooling of micro-electronic components. However, as device size is pushed down to the micrometer scale, it is expected that non-negligible slip effects will exist at the solid-fluid interface. Accordingly, new theoretical models for thermoacoustic engines and heat pumps were derived, accounting for a slip boundary condition. These models are essential for the design process of micro-scale thermoacoustic devices that will operate at ultrasonic frequencies. Stability curves for engines - representing the onset of self-sustained oscillations - were calculated with both no-slip and slip boundary conditions, revealing an improvement in the performance of engines with slip in the resonance frequency range applicable to micro-scale devices. Curves of maximum achievable temperature difference for thermoacoustic heat pumps were calculated, revealing the negative effect of slip on the ability to pump heat up a temperature gradient. The authors acknowledge the support from the Nancy and Stephen Grand Technion Energy Program (GTEP).

  17. H2@Scale Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, Mark

    2017-07-12

    'H2@Scale' is a concept based on the opportunity for hydrogen to act as an intermediate between energy sources and uses. Hydrogen has the potential to be used like the primary intermediate in use today, electricity, because it too is fungible. This presentation summarizes the H2@Scale analysis efforts performed during the first third of 2017. Results of technical potential uses and supply options are summarized and show that the technical potential demand for hydrogen is 60 million metric tons per year and that the U.S. has sufficient domestic resources to meet that demand. A high level infrastructure analysis is also presented that shows an 85% increase in energy on the grid if all hydrogen is produced from grid electricity. However, a preliminary spatial assessment shows that supply is sufficient in most counties across the U.S. The presentation also shows plans for analysis of the economic potential for the H2@Scale concept. Those plans involve developing supply and demand curves for potential hydrogen generation options and comparing them to other options for use of that hydrogen.

  18. Tournament Satisfaction Scale (TOSS)

    Directory of Open Access Journals (Sweden)

    Kubilay Öcal

    2016-04-01

    Full Text Available The increasing popularity of regional sport tourism has led providers to measure participant satisfaction in order to deliver high-quality products and services. The literature indicates a strong need for a reliable and valid scale in the area of sport tourism. For that purpose, this paper describes the process of developing the Tournament Satisfaction Scale (TOSS), which can be used to assess athletes' perception of satisfaction with sport tournaments. An item pool with 33 items was developed through literature review and interviews with experts in the areas of sport tourism, sport management and coaching. Exploratory Factor Analysis with the Maximum Likelihood extraction method and oblique rotation (direct oblimin) was carried out using data obtained from 278 athletes in various sport branches who participated in a tournament as regional sport tourists. The analysis yielded one factor with 22 items, each with a factor loading above .50. The 22-item TOSS was found to explain 40.3% of the variance in tournament satisfaction. The Cronbach alpha coefficient for the TOSS is 0.93, indicating satisfactory reliability. Overall, it can be concluded that the scale is a reliable and valid tool for evaluating tournament satisfaction from the perceptions of athletes. In this way coaches, team managers, and tournament organizers can obtain important clues about their performance.
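
    For reference, the Cronbach alpha statistic quoted above is computed from the item variances and the variance of the total score. This is the standard formula, not anything specific to the TOSS:

    ```latex
    % Cronbach's alpha for a k-item scale, with item variances \sigma_i^2
    % and variance \sigma_X^2 of the summed total score
    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right)
    ```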

  19. Mechanism for salt scaling

    Science.gov (United States)

    Valenza, John J., II

    Salt scaling is superficial damage caused by freezing a saline solution on the surface of a cementitious body. The damage consists of the removal of small chips or flakes of binder. The discovery of this phenomenon in the early 1950s prompted hundreds of experimental studies, which clearly elucidated the characteristics of this damage. In particular it was shown that a pessimum salt concentration exists, where a moderate salt concentration (~3%) results in the most damage. Despite the numerous studies, the mechanism responsible for salt scaling has not been identified. In this work it is shown that salt scaling is a result of the large thermal expansion mismatch between ice and the cementitious body, and that the mechanism responsible for damage is analogous to glue-spalling. When ice forms on a cementitious body a bi-material composite is formed. The thermal expansion coefficient of the ice is ~5 times that of the underlying body, so when the temperature of the composite is lowered below the melting point, the ice goes into tension. Once this stress exceeds the strength of the ice, cracks initiate in the ice and propagate into the surface of the cementitious body, removing a flake of material. The glue-spall mechanism accounts for all of the characteristics of salt scaling. In particular, a theoretical analysis is presented which shows that the pessimum concentration is a consequence of the effect of brine pockets on the mechanical properties of ice, and that the damage morphology is accounted for by fracture mechanics. Finally, empirical evidence is presented that proves that the glue-spall mechanism is the primary cause of salt scaling. The primary experimental tool used in this study is a novel warping experiment, where a pool of liquid is formed on top of a thin (~3 mm) plate of cement paste. Stresses in the plate, including thermal expansion mismatch, result in warping of the plate, which is easily detected. This technique revealed the existence of

  20. Nestedness across biological scales

    Science.gov (United States)

    Marquitti, Flavia M. D.; Raimundo, Rafael L. G.; Sebastián-González, Esther; Coltri, Patricia P.; Perez, S. Ivan; Brandt, Débora Y. C.; Nunes, Kelly; Daura-Jorge, Fábio G.; Floeter, Sergio R.; Guimarães, Paulo R.

    2017-01-01

    Biological networks pervade nature. They describe systems throughout all levels of biological organization, from molecules regulating metabolism to species interactions that shape ecosystem dynamics. Network thinking has revealed recurrent organizational patterns in complex biological systems, such as the formation of semi-independent groups of connected elements (modularity) and non-random distributions of interactions among elements. Other structural patterns, such as nestedness, have been primarily assessed in ecological networks formed by two non-overlapping sets of elements; information on its occurrence on other levels of organization is lacking. Nestedness occurs when interactions of less connected elements form proper subsets of the interactions of more connected elements. Only recently have these properties begun to be appreciated in one-mode networks (where all elements can interact), which describe a much wider variety of biological phenomena. Here, we compute nestedness in a diverse collection of one-mode networked systems from six different levels of biological organization depicting gene and protein interactions, complex phenotypes, animal societies, metapopulations, food webs and vertebrate metacommunities. Our findings suggest that nestedness emerges independently of interaction type or biological scale and reveal that disparate systems can share nested organization features characterized by inclusive subsets of interacting elements with decreasing connectedness. We primarily explore the implications of a nested structure for each of these studied systems, then theorize on how nested networks are assembled. We hypothesize that nestedness emerges across scales due to processes that, although system-dependent, may share a general compromise between two features: specificity (the number of interactions the elements of the system can have) and affinity (how these elements can be connected to each other). Our findings suggesting occurrence of nestedness

  1. (Re)scaling identities

    DEFF Research Database (Denmark)

    Koefoed, Lasse Martin; Simonsen, Kirsten

    2012-01-01

    This article draws attention to life as an ‘internal stranger’ in the city, the nation and other spatial formations. It explores the habitability of the different spatial formations and the possibilities of identification for ethnic minority groups. Drawing on research on citizens in Copenhagen...... and identification is ambivalence in affiliation to the Danish nation expressing the discrepancy between feeling Danish and not being recognized as a full member of the Danish imagined community. This emotional ambivalence gives rise to what we call jumping scale in identification and a search for alternative spaces...

  2. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
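
    To make the term concrete, an inequality join pairs rows under a comparison predicate rather than equality. The sketch below is a deliberately naive nested-loop version in Python that illustrates only the kind of predicate IEJoin accelerates; it is not the IEJoin algorithm itself, which uses sorted arrays and bit-arrays:

    ```python
    # Naive nested-loop inequality join (illustrative baseline, not IEJoin).
    rows_r = [("r1", 100), ("r2", 250), ("r3", 400)]  # (id, salary)
    rows_s = [("s1", 200), ("s2", 300)]               # (id, budget)

    # Join condition: r.salary < s.budget
    result = [(r_id, s_id)
              for r_id, salary in rows_r
              for s_id, budget in rows_s
              if salary < budget]

    print(result)  # [('r1', 's1'), ('r1', 's2'), ('r2', 's2')]
    ```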

  3. Scaling CouchDB

    CERN Document Server

    Holt, Bradley

    2011-01-01

    This practical guide offers a short course on scaling CouchDB to meet the capacity needs of your distributed application. Through a series of scenario-based examples, this book lets you explore several methods for creating a system that can accommodate growth and meet expected demand. In the process, you learn about several tools that can help you with replication, load balancing, clustering, and load testing and monitoring. The book shows you how to: apply performance tips for tuning your database; replicate data using Futon and CouchDB's RESTful interface; distribute CouchDB's workload through load balancing; learn option...
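
    As a minimal illustration of the replication workflow the book covers, CouchDB exposes a /_replicate endpoint over its RESTful interface. The sketch below assumes a local server, an admin account, and database names invented for the example:

    ```python
    import requests  # third-party HTTP client

    # One-shot replication via CouchDB's /_replicate endpoint.
    # Host, credentials, and database names are illustrative assumptions.
    resp = requests.post(
        "http://localhost:5984/_replicate",
        json={
            "source": "http://localhost:5984/mydb",
            "target": "http://replica.example:5984/mydb",
            "create_target": True,  # create the target database if missing
        },
        auth=("admin", "password"),
    )
    print(resp.json())
    ```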

  4. Scales on the scalp

    Directory of Open Access Journals (Sweden)

    Jamil A

    2013-05-01

    Full Text Available A five-year-old boy presented with a six-week history of scales, flaking and crusting of the scalp. He had mild pruritus but no pain. He did not have a history of atopy and there were no pets at home. Examination of the scalp showed thick, yellowish dry crusts on the vertex and parietal areas and the hair was adherent to the scalp in clumps. There was non-scarring alopecia and mild erythema (Figure 1 & 2). There was no cervical or occipital lymphadenopathy. The patient’s nails and skin in other parts of the body were normal.

  5. Soil organic carbon across scales.

    Science.gov (United States)

    O'Rourke, Sharon M; Angers, Denis A; Holden, Nicholas M; McBratney, Alex B

    2015-10-01

    Mechanistic understanding of scale effects is important for interpreting the processes that control the global carbon cycle. Greater attention should be given to scale in soil organic carbon (SOC) science so that we can devise better policy to protect/enhance existing SOC stocks and ensure sustainable use of soils. Global issues such as climate change require consideration of SOC stock changes at the global and biosphere scale, but human interaction occurs at the landscape scale, with consequences at the pedon, aggregate and particle scales. This review evaluates our understanding of SOC across all these scales in the context of the processes involved in SOC cycling at each scale and with emphasis on stabilizing SOC. Current synergy between science and policy is explored at each scale to determine how well each is represented in the management of SOC. An outline of how SOC might be integrated into a framework of soil security is examined. We conclude that SOC processes at the biosphere to biome scales are not well understood. Instead, SOC has come to be viewed as a large-scale pool subject to carbon flux. Better understanding exists for SOC processes operating at the scales of the pedon, aggregate and particle. At the landscape scale, the influence of large- and small-scale processes has the greatest interaction and is exposed to the greatest modification through agricultural management. Policy implemented at regional or national scale tends to focus at the landscape scale without due consideration of the larger scale factors controlling SOC or the impacts of policy for SOC at the smaller SOC scales. What is required is a framework that can be integrated across a continuum of scales to optimize SOC management.

  6. Assessment of earthquake-induced landslides hazard in El Salvador after the 2001 earthquakes using macroseismic analysis

    Science.gov (United States)

    Esposito, Eliana; Violante, Crescenzo; Giunta, Giuseppe; Ángel Hernández, Miguel

    2016-04-01

    Two strong earthquakes and a number of smaller aftershocks struck El Salvador in the year 2001. The January 13 2001 earthquake, Mw 7.7, occurred along the Cocos plate, 40 km off El Salvador's southern coast. It resulted in about 1300 deaths and widespread damage, mainly due to massive landsliding. Two of the largest earthquake-induced landslides, Las Barioleras and Las Colinas (about 2×10⁵ m³), produced major damage to buildings and infrastructures and 500 fatalities. A neighborhood in Santa Tecla, west of San Salvador, was destroyed. The February 13 2001 earthquake, Mw 6.5, occurred 40 km east-southeast of San Salvador. This earthquake caused over 300 fatalities and triggered several landslides over an area of 2,500 km², mostly in poorly consolidated volcaniclastic deposits. The La Leona landslide (5–7×10⁵ m³) caused 12 fatalities and extensive damage to the Panamerican Highway. Two very large landslides of 1.5 km³ and 12 km³ produced hazardous barrier lakes at Rio El Desague and Rio Jiboa, respectively. More than 16,000 landslides occurred throughout the country after both quakes; most of them occurred in pyroclastic deposits, with a volume less than 1×10³ m³. The present work aims to define the relationship between the above-described earthquake intensity, size and areal distribution of induced landslides, as well as to refine the earthquake intensity in sparsely populated zones by using landslide effects. Landslides triggered by the 2001 seismic sequences provided useful indications for a realistic seismic hazard assessment, providing a basis for understanding, evaluating, and mapping the hazard and risk associated with earthquake-induced landslides.

  7. On the modeling of strong motion parameters and correlation with historical macroseismic data: an application to the 1915 Avezzano earthquake

    Directory of Open Access Journals (Sweden)

    G. Longhi

    1995-06-01

    Full Text Available This article describes the results of a ground motion modeling study of the 1915 Avezzano earthquake. The goal was to test assumptions regarding the rupture process of this earthquake by attempting to model the damage to historical monuments and populated habitats during the earthquake. The methodology used combines stochastic and deterministic modeling techniques to synthesize strong ground motion, starting from a simple characterization of the earthquake source on an extended fault plane. The stochastic component of the methodology is used to simulate high-frequency ground motion oscillations. The envelopes of these synthetic waveforms, however, are simulated in a deterministic way based on the isochron formulation for the calculation of radiated seismic energy. Synthetic acceleration time histories representative of ground motion experienced at the towns of Avezzano, Celano, Ortucchio, and Sora are then analyzed in terms of the damage to historical buildings at these sites. The article also discusses how the same methodology can be adapted to efficiently evaluate various strong motion parameters, such as duration and amplitude of ground shaking, at several hundreds of surface sites and as a function of the rupture process. The usefulness of such a technique is illustrated through the modeling of intensity data from the Avezzano earthquake. One of the most interesting results is that it is possible to distinguish between different rupture scenarios for the 1915 earthquake based on the goodness of fit of theoretical intensities to observed values.

  8. Small scale sanitation technologies.

    Science.gov (United States)

    Green, W; Ho, G

    2005-01-01

    Small scale systems can improve the sustainability of sanitation systems as they more easily close the water and nutrient loops. They also provide alternate solutions to centrally managed large scale infrastructures. Appropriate sanitation provision can improve the lives of people with inadequate sanitation through health benefits, reuse products as well as reduced ecological impacts. In the literature there seems to be no compilation of a wide range of available onsite sanitation systems around the world that encompasses black and greywater treatment plus stand-alone dry and urine separation toilet systems. Seventy technologies have been identified and classified according to the different waste source streams. Sub-classification based on major treatment methods included aerobic digestion, composting and vermicomposting, anaerobic digestion, sand/soil/peat filtration and constructed wetlands. Potential users or suppliers of sanitation systems can choose from a wide range of technologies available and examine the different treatment principles used in the technologies. Sanitation systems need to be selected according to the local social, economic and environmental conditions and should aim to be sustainable.

  9. Returns to Scale and Economies of Scale: Further Observations.

    Science.gov (United States)

    Gelles, Gregory M.; Mitchell, Douglas W.

    1996-01-01

    Maintains that most economics textbooks continue to repeat past mistakes concerning returns to scale and economies of scale under assumptions of constant and nonconstant input prices. Provides an adaptation for a calculus-based intermediate microeconomics class that demonstrates the pointwise relationship between returns to scale and economies of…

  10. Tera Scale Remnants of Unification and Supersymmetry at Planck Scale

    CERN Document Server

    Kawamura, Yoshiharu

    2013-01-01

    We predict new particles at the Tera scale based on the assumptions that the standard model gauge interactions are unified around the gravitational scale with a big desert and that new particles originate from hypermultiplets as remnants of supersymmetry, and propose a theoretical framework at the Tera scale and beyond that has predictability.

  11. A Validity Scale for the Sharp Consumer Satisfaction Scales.

    Science.gov (United States)

    Tanner, Barry A.; Stacy, Webb, Jr.

    1985-01-01

    A validity scale for the Sharp Consumer Satisfaction Scale was developed and used in experiments to assess patients' satisfaction with community mental health centers. The scale discriminated between clients who offered suggestions and those who did not. It also improved researchers' ability to predict true scores from obtained scores. (DWH)

  12. Earthquake impact scale

    Science.gov (United States)

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria. On the basis of the estimated cost of damage, one is most suitable for domestic events; the other, on the basis of estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from the systematic analysis of past earthquake impact and associated response levels, are quite effective in communicating predicted impact and response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries in which local building practices typically lend themselves to high collapse and casualty rates, and these impacts lead to prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries in which prevalent earthquake resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures. Useful alerts should
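
    The dual thresholds quoted above translate directly into a small classifier. The sketch below is our reading of the scheme, not code from the paper; in particular, taking the higher of the two alert levels when both estimates are available is an illustrative assumption:

    ```python
    def eis_alert(est_fatalities: int, est_loss_usd: float) -> str:
        """Map impact estimates to an EIS alert color using the thresholds
        quoted in the abstract (fatalities 1/100/1,000; losses
        $1M/$100M/$1B for yellow/orange/red)."""
        levels = ["green", "yellow", "orange", "red"]

        def level(value, thresholds):
            # Count how many onset thresholds the estimate meets or exceeds.
            return sum(value >= t for t in thresholds)

        fatality_level = level(est_fatalities, (1, 100, 1_000))
        loss_level = level(est_loss_usd, (1e6, 1e8, 1e9))
        return levels[max(fatality_level, loss_level)]  # assumed tie-break rule

    print(eis_alert(150, 2e7))  # -> "orange" (fatality criterion dominates)
    ```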

  13. Northeast Snowfall Impact Scale (NESIS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — While the Fujita and Saffir-Simpson Scales characterize tornadoes and hurricanes respectively, there is no widely used scale to classify snowstorms. The Northeast...

  14. Scaling Equation for Invariant Measure

    Institute of Scientific and Technical Information of China (English)

    LIU Shi-Kuo; FU Zun-Tao; LIU Shi-Da; REN Kui

    2003-01-01

    An iterated function system (IFS) is constructed. It is shown that the invariant measure of the IFS satisfies the same equation as the scaling equation for the wavelet transform (WT). Evidently, the IFS and the scaling equation of the WT both rest on the contraction mapping principle.
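
    For readers unfamiliar with either object, the two equations being compared have the following standard forms (assumed here from the general literature; the paper's specific IFS is not reproduced in this record):

    ```latex
    % Scaling (refinement) equation of wavelet theory for the scaling function \varphi
    \varphi(x) = \sum_{k} c_k\, \varphi(2x - k)
    % Invariant measure of an IFS \{w_i\} with probabilities p_i (Hutchinson equation),
    % which exhibits the same self-similar structure
    \mu = \sum_{i} p_i \,\bigl(\mu \circ w_i^{-1}\bigr)
    ```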

  15. ScaleUp America Communities

    Data.gov (United States)

    Small Business Administration — SBA’s new ScaleUp America Initiative is designed to help small firms with high potential “scale up” and grow their businesses so that they will provide more jobs and...

  16. Scale setting in lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Sommer, Rainer [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2014-02-15

    The principles of scale setting in lattice QCD as well as the advantages and disadvantages of various commonly used scales are discussed. After listing criteria for good scales, I concentrate on the main presently used ones with an emphasis on scales derived from the Yang-Mills gradient flow. For these I discuss discretisation errors, statistical precision and mass effects. A short review on numerical results also brings me to an unpleasant disagreement which remains to be explained.
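
    As one concrete example of a gradient-flow scale of the kind discussed, the reference scale t_0 is conventionally defined through the flowed action density. This is the standard definition from the lattice literature, quoted for orientation rather than taken from this talk:

    ```latex
    % Gradient-flow scale t_0: the flow time at which the dimensionless
    % flowed action density reaches the reference value 0.3
    t^{2}\,\langle E(t) \rangle \big|_{t=t_0} = 0.3
    ```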

  17. Indian scales and inventories.

    Science.gov (United States)

    Venkatesan, S

    2010-01-01

    This conceptual, perspective and review paper on Indian scales and inventories begins with clarification on the historical and contemporary meanings of psychometry before linking itself to the burgeoning field of clinimetrics in their applications to the practice of clinical psychology and psychiatry. Clinimetrics is explained as a changing paradigm in the design, administration, and interpretation of quantitative tests, techniques or procedures applied to the measurement of clinical variables, traits and processes. As an illustrative sample, this article assembles a bibliographic survey of about 105 out of 2582 research papers (4.07%) scanned through 51 back volumes covering 185 issues related to clinimetry, reviewed across a span of over fifty years (1958-2009) in the Indian Journal of Psychiatry. A content analysis of the contributions across distinct categories of mental measurements is explained before linkages are proposed for future directions along these lines.

  18. Biological scaling and physics

    Indian Academy of Sciences (India)

    A R P Rau

    2002-09-01

    Kleiber’s law in biology states that the specific metabolic rate (metabolic rate per unit mass) scales as M^(-1/4) in terms of the mass M of the organism. A long-standing puzzle is the (-1/4) power in place of the usual expectation of (-1/3) based on the surface to volume ratio in three dimensions. While recent papers by physicists have focused exclusively on geometry in attempting to explain the puzzle, we consider here a specific law of physics that governs fluid flow to show how the (-1/4) power arises under certain conditions. More generally, such a line of approach that identifies a specific physical law as involved and then examines the implications of a power law may illuminate better the role of physics in biology.
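
    Written out, Kleiber's law and the specific rate quoted above take the following standard forms:

    ```latex
    % Kleiber's law for whole-organism metabolic rate B and body mass M
    B \propto M^{3/4}
    % hence the specific (per unit mass) metabolic rate
    B/M \propto M^{-1/4}
    ```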

  19. Scaling MongoDB

    CERN Document Server

    Chodorow, Kristina

    2011-01-01

    Create a MongoDB cluster that will grow to meet the needs of your application. With this short and concise book, you'll get guidelines for setting up and using clusters to store a large volume of data, and learn how to access the data efficiently. In the process, you'll understand how to make your application work with a distributed database system. Scaling MongoDB will help you: set up a MongoDB cluster through sharding; work with a cluster to query and update data; operate, monitor, and back up your cluster; and plan your application to deal with outages. By following the advice in this book, you'll...
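
    As a minimal sketch of the sharding step, the corresponding admin commands can be issued from Python with PyMongo. The connection string, database, and shard key below are assumptions for illustration, and the commands must be sent to a mongos router of an already-configured cluster:

    ```python
    from pymongo import MongoClient

    # Connect to a mongos router (address is an illustrative assumption).
    client = MongoClient("mongodb://mongos.example:27017")

    # Enable sharding for a database, then shard one collection on a hashed key.
    client.admin.command("enableSharding", "appdb")
    client.admin.command("shardCollection", "appdb.events",
                         key={"_id": "hashed"})
    ```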

  20. Multi Scale Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Huemmer, Matthias [AREVA NP GmbH, Paul-Gossen Strasse 100, Erlangen (Germany)

    2008-07-01

    The safety of Reactor Pressure Vessels (RPV) must be assured and demonstrated by safety assessments against brittle fracture according to the codes and standards. In addition to these deterministic methods, researchers developed statistical methods, so-called local approach (LA) models, to predict specimen or component failure. These models transfer the microscopic fracture events to the macro scale by means of Weibull stresses and can therefore describe the fracture behavior more accurately. This paper proposes a recently developed LA model. After calibration of the model parameters, the wide applicability of the model is demonstrated. To this end a large number of computations, based on 3D finite element simulations, has been conducted, covering different specimen types and materials in unirradiated and irradiated conditions. Comparison of the experimental data with the predictions attained by means of the LA model shows that the fracture behavior can be well described. (authors)
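
    The Weibull stress mentioned above is conventionally defined in Beremin-type local-approach models as follows. This is the standard form, given for orientation only, since the record does not reproduce the authors' modified model:

    ```latex
    % Beremin-type Weibull stress over the plastically deformed volume V_{pl},
    % with maximum principal stress \sigma_1, Weibull modulus m, reference volume V_0
    \sigma_w = \left( \frac{1}{V_0} \int_{V_{pl}} \sigma_1^{\,m}\, \mathrm{d}V \right)^{1/m}
    % Corresponding cleavage failure probability with scale parameter \sigma_u
    P_f = 1 - \exp\!\left[ -\left( \frac{\sigma_w}{\sigma_u} \right)^{m} \right]
    ```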

  1. Biological scaling and physics.

    Science.gov (United States)

    Rau, A R P

    2002-09-01

    Kleiber's law in biology states that the specific metabolic rate (metabolic rate per unit mass) scales as M^(-1/4) in terms of the mass M of the organism. A long-standing puzzle is the (-1/4) power in place of the usual expectation of (-1/3) based on the surface to volume ratio in three-dimensions. While recent papers by physicists have focused exclusively on geometry in attempting to explain the puzzle, we consider here a specific law of physics that governs fluid flow to show how the (-1/4) power arises under certain conditions. More generally, such a line of approach that identifies a specific physical law as involved and then examines the implications of a power law may illuminate better the role of physics in biology.

  2. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out...... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors......). Simulation programs are proposed as control supporting tools for daily operation and performance prediction of central solar heating plants. Finally the CSHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given....

  3. The Children's Loneliness Scale.

    Science.gov (United States)

    Maes, Marlies; Van den Noortgate, Wim; Vanhalst, Janne; Beyers, Wim; Goossens, Luc

    2017-03-01

    The present study examined the factor structure and construct validity of the Children's Loneliness Scale (CLS), a popular measure of childhood loneliness, in Belgian children. Analyses were conducted on two samples of fifth and sixth graders in Belgium, for a total of 1,069 children. A single-factor structure proved superior to alternative solutions proposed in the literature, when taking item wording into account. Construct validity was shown by substantial associations with related constructs, based on both self-reported (e.g., depressive symptoms and low social self-esteem), and peer-reported variables (e.g., victimization). Furthermore, a significant association was found between the CLS and a peer-reported measure of loneliness. Collectively, these findings provide a solid foundation for the continuing use of the CLS as a measure of childhood loneliness.

  4. Excitable Scale Free Networks

    CERN Document Server

    Copelli, Mauro

    2007-01-01

    When a simple excitable system is continuously stimulated by a Poissonian external source, the response function (mean activity versus stimulus rate) generally shows a linear saturating shape. This is experimentally verified in some classes of sensory neurons, which accordingly present a small dynamic range (defined as the interval of stimulus intensity which can be appropriately coded by the mean activity of the excitable element), usually about one or two decades only. The brain, on the other hand, can handle a significantly broader range of stimulus intensity, and a collective phenomenon involving the interaction among excitable neurons has been suggested to account for the enhancement of the dynamic range. Since the role of the pattern of such interactions is still unclear, here we investigate the performance of a scale-free (SF) network topology in this dynamic range problem. Specifically, we study the transfer function of disordered SF networks of excitable Greenberg-Hastings cellular automata. We obser...

  5. An ordinal metrical scale built on a fuzzy nominal scale

    Science.gov (United States)

    Benoit, E.

    2010-07-01

    Measurement theory defines a measurement as a mapping from a set of empirical property manifestations to a set of abstract property values called symbols. The ordinal metrical scales were introduced within the context of psychophysics as a way to solve the problem of multidimensional scaling. Usually the distances used to define such scales are based on the hypothesis that symbols are vectors of numbers and that each component is expressed on an interval scale or a ratio scale. In a recent paper, a distance-based scale was introduced that represents manifestations from an empirical world with fuzzy subsets of lexical terms. This approach supposes only the existence of a fuzzy nominal scale and allows a choice from a wider set of distances to build the ordinal metrical scales. This paper focuses on the knowledge source used to choose a scale definition and takes metrical scales built on a fuzzy nominal scale as an example. It then opens a discussion on the reality of some distances in the empirical world.

  6. Jensen's Functionals on Time Scales

    Directory of Open Access Journals (Sweden)

    Matloob Anwar

    2012-01-01

    Full Text Available We consider Jensen’s functionals on time scales and discuss their properties and applications. Further, we define weighted generalized and power means on time scales. By applying the properties of Jensen’s functionals to these means, we obtain several refinements and converses of Hölder’s inequality on time scales.
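
    For orientation, the basic (unweighted) Jensen inequality on a time scale reads as follows; the weighted functionals studied in the paper generalize this standard form, which is assumed here rather than quoted from the record:

    ```latex
    % Jensen's inequality on a time scale, for rd-continuous f and convex \Phi
    \Phi\!\left( \frac{1}{b-a}\int_a^b f(t)\,\Delta t \right)
      \;\le\; \frac{1}{b-a}\int_a^b \Phi\bigl(f(t)\bigr)\,\Delta t
    ```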

  7. Study of Adherent Oxide Scales

    Science.gov (United States)

    1987-09-14

    oxide scale-metal interface, thereby improving scale adherence. Because the reactive elements which improve scale adherence (yttrium, hafnium, etc...temperature range, the chromium in the alloy lowers the sulfur activity greater than that of aluminium. Despite this ability of chromium to reduce sulfur

  8. Coma scales: a historical review

    Directory of Open Access Journals (Sweden)

    Ana Luisa Bordini

    2010-12-01

    Full Text Available OBJECTIVE: To describe the most important coma scales developed in the last fifty years. METHOD: A review of the literature between 1969 and 2009 in the Medline and Scielo databases was carried out using the following keywords: coma scales, coma, disorders of consciousness, coma score and levels of coma. RESULTS: Five main scales were found, in chronological order: the Jouvet coma scale, the Moscow coma scale, the Glasgow coma scale (GCS), the Bozza-Marrubini scale and the FOUR score (Full Outline of UnResponsiveness), as well as other scales that have had less impact and are rarely used outside their country of origin. DISCUSSION: Of the five main scales, the GCS is by far the most widely used. It is easy to apply and very suitable for cases of traumatic brain injury (TBI). However, it has shortcomings, such as the fact that the speech component in intubated patients cannot be tested. While the Jouvet scale is quite sensitive, particularly for levels of consciousness closer to normal levels, it is difficult to use. The Moscow scale has good predictive value but is little used by the medical community. The FOUR score is easy to apply and provides more neurological details than the Glasgow scale.

  9. The Fundamental Scale of Descriptions

    CERN Document Server

    Febres, Gerardo

    2014-01-01

    The complexity of a system description is a function of the entropy of its symbolic description. Prior to computing the entropy of the system description, an observation scale has to be assumed. In natural language texts, typical scales are binary, characters, and words. However, considering languages as structures built around a certain preconceived set of symbols, like words or characters, is only a presumption. This study depicts the notion of the Description Fundamental Scale as a set of symbols which serves to analyze the essence of a language structure. The concept of the Fundamental Scale is tested using English and MIDI music texts by means of an algorithm developed to search for a set of symbols which minimizes the system's observed entropy, and therefore best expresses the fundamental scale of the language employed. Test results show that it is possible to find the Fundamental Scale of some languages. The concept of the Fundamental Scale, and the method for its determination, emerges as an interesting tool to fac...

  10. Scale-aware shape manipulation

    Institute of Scientific and Technical Information of China (English)

    Zheng LIU; Wei-ming WANG; Xiu-ping LIU; Li-gang LIU

    2014-01-01

    A novel representation of a triangular mesh surface using a set of scale-invariant measures is proposed. The measures consist of angles of the triangles (triangle angles) and dihedral angles along the edges (edge angles), which are scale and rigidity independent. The vertex coordinates of a mesh yield its scale-invariant measures, which in turn determine the mesh uniquely up to scale, rotation, and translation. Based on this representation of the mesh using scale-invariant measures, a two-step iterative deformation algorithm is proposed, which can arbitrarily edit the mesh through simple handle interaction. The algorithm can explicitly preserve the local geometric details as much as possible at different scales even under severe editing operations including rotation, scaling, and shearing. The efficiency and robustness of the proposed algorithm are demonstrated by examples.
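
    The triangle-angle half of the representation is easy to illustrate: interior angles are unchanged by uniform scaling, rotation, and translation. The snippet below demonstrates this for a single triangle (the paper's full representation also includes the dihedral edge angles, omitted here):

    ```python
    import numpy as np

    def triangle_angles(a, b, c):
        """Interior angles (degrees) of triangle (a, b, c); invariant to
        uniform scaling, rotation, and translation of the vertices."""
        a, b, c = map(np.asarray, (a, b, c))

        def angle(p, q, r):  # angle at vertex p
            u, v = q - p, r - p
            cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

        return angle(a, b, c), angle(b, c, a), angle(c, a, b)

    tri = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
    print(triangle_angles(*tri))                    # ~ (90.0, 63.4, 26.6)
    print(triangle_angles(*(np.array(tri) * 5.0)))  # identical after 5x scaling
    ```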

  11. Solar system to scale

    Science.gov (United States)

    Gerwig López, Susanne

    2016-04-01

    One of the most important successes in astronomical observations has been to determine the limit of the Solar System. It is said that the first man able to measure the distance Earth-Sun with only a very slight error, in the second century BC, was the Greek sage Aristarchus of Samos (Aristarco de Samos). Thanks to Newton's law of universal gravitation, it was possible to measure, with a small margin of error, the distances between the Sun and the planets. Twelve-year-old students are very interested in everything related to the universe. However, it seems too difficult for them to imagine and understand the real distances among the different celestial bodies. To learn the differences between the inner and outer planets and how far away the outer ones are, I decided to have my pupils work on the sizes and distances in our solar system by constructing it to scale. The purpose is to reproduce our solar system to scale on a cardboard. The procedure is very easy and simple. Students in the first year of ESO (12 years old) receive the instructions on a sheet of paper (things they need: a black cardboard, a pair of scissors, colored pencils, a ruler, adhesive tape, glue, the photocopies of the planets and satellites, the measurements they have to use). On another photocopy they get the pictures of the edge of the Sun, the planets, dwarf planets and some satellites, which they have to color, cut and stick on the cardboard. This activity is planned for both Spanish and bilingual learning students as a science project. Depending on the group, they will receive these instructions in Spanish or in English. When the time is over, the students bring their work on the cardboard to class. They obtain a final mark: passing, good or excellent, depending on the accuracy of the measurements, the position of all the celestial bodies, the asteroid belts, personal contributions, etc. If any of the students has not followed the instructions they get the chance to remake it properly, in order not
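
    The arithmetic of the activity is a single proportional scaling. A possible helper, using rounded, widely published mean Sun-planet distances and a hypothetical 1 m cardboard (the actual activity sheet's measurements are not given in this record):

    ```python
    # Mean Sun-planet distances in millions of km (rounded, widely published values).
    DISTANCES_MKM = {
        "Mercury": 57.9, "Venus": 108.2, "Earth": 149.6, "Mars": 227.9,
        "Jupiter": 778.5, "Saturn": 1433.5, "Uranus": 2872.5, "Neptune": 4495.1,
    }

    def scaled_positions(board_length_cm: float = 100.0) -> dict:
        """Place each planet on a board of the given length, with Neptune
        at the far edge (the board length is an illustrative assumption)."""
        factor = board_length_cm / DISTANCES_MKM["Neptune"]
        return {p: round(d * factor, 1) for p, d in DISTANCES_MKM.items()}

    for planet, cm in scaled_positions().items():
        print(f"{planet:8s} {cm:6.1f} cm from the Sun")  # Earth lands at ~3.3 cm
    ```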

  12. Scaling of structural failure

    Energy Technology Data Exchange (ETDEWEB)

    Bazant, Z.P. [Northwestern Univ., Evanston, IL (United States); Chen, Er-Ping [Sandia National Lab., Albuquerque, NM (United States)

    1997-01-01

    This article attempts to review the progress achieved in the understanding of scaling and size effect in the failure of structures. Particular emphasis is placed on quasibrittle materials for which the size effect is complicated. Attention is focused on three main types of size effects, namely the statistical size effect due to randomness of strength, the energy release size effect, and the possible size effect due to fractality of fracture or microcracks. Definitive conclusions on the applicability of these theories are drawn. Subsequently, the article discusses the application of the known size effect law for the measurement of material fracture properties, and the modeling of the size effect by the cohesive crack model, nonlocal finite element models and discrete element models. Extensions to compression failure and to the rate-dependent material behavior are also outlined. The damage constitutive law needed for describing a microcracked material in the fracture process zone is discussed. Various applications to quasibrittle materials, including concrete, sea ice, fiber composites, rocks and ceramics are presented.
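
    The "known size effect law" referred to above is usually quoted in the following form for quasibrittle structures. This is the standard expression, given here for orientation; B and D_0 are empirical parameters, f_t the tensile strength, D the structure size:

    ```latex
    % Bazant's energetic size effect law for the nominal strength \sigma_N
    \sigma_N = \frac{B\, f_t}{\sqrt{1 + D/D_0}}
    ```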

  13. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished for neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  14. Large scale tracking algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished for neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  15. Transition from large-scale to small-scale dynamo.

    Science.gov (United States)

    Ponty, Y; Plunian, F

    2011-04-15

    The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time scales, plus turbulent fluctuations at short time scales. The dynamo onset is controlled by the long time scales of the flow, in agreement with the former Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher order effects. Beyond the onset we find that this generalized α effect scales as O(Rm^(-1)), suggesting the takeover of small-scale dynamo action. This is confirmed by simulations in which dynamo occurs even if the large-scale field is artificially suppressed.
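
    In mean-field dynamo theory, the "usual α effect plus turbulent diffusion" corresponds to the leading terms of the expansion of the turbulent electromotive force. The standard parameterization is quoted below for orientation, not taken from the paper:

    ```latex
    % Turbulent electromotive force acting on the mean field \overline{B};
    % \alpha is the alpha effect, \beta the turbulent diffusivity,
    % higher-order contributions are subsumed in the dots
    \mathcal{E} = \alpha\, \overline{B} \;-\; \beta\, \nabla \times \overline{B} \;+\; \dots
    ```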

  16. Large scale-small scale duality and cosmological constant

    CERN Document Server

    Darabi, F

    1999-01-01

    We study a model of quantum cosmology originating from a classical model of gravitation where a self-interacting scalar field is coupled to gravity with the metric undergoing a signature transition. We show that there are dual classical signature changing solutions, one at large scales and the other at small scales. It is possible to fine-tune the physics in both scales with an infinitesimal effective cosmological constant.

  17. Scaling Effects on Materials Tribology: From Macro to Micro Scale

    Science.gov (United States)

    Stoyanov, Pantcho; Chromik, Richard R.

    2017-01-01

    The tribological study of materials inherently involves the interaction of surface asperities at the micro to nanoscopic length scales. This is the case for large scale engineering applications with sliding contacts, where the real area of contact is made up of small contacting asperities that make up only a fraction of the apparent area of contact. This is why researchers have sought to create idealized experiments of single asperity contacts in the field of nanotribology. At the same time, small scale engineering structures known as micro- and nano-electromechanical systems (MEMS and NEMS) have been developed, where the apparent area of contact approaches the length scale of the asperities, meaning the real area of contact for these devices may be only a few asperities. This is essentially the field of microtribology, where the contact size and/or forces involved have pushed the nature of the interaction between two surfaces towards the regime where the scale of the interaction approaches the natural length scale of the features on the surface. This paper provides a review of microtribology with the purpose of understanding how tribological processes differ at the smaller length scales compared to macrotribology. Studies of the interfacial phenomena at the macroscopic length scales (e.g., using in situ tribometry) will be discussed and correlated with new findings and methodologies at the micro-length scale. PMID:28772909

  18. Cryptic individual scaling relationships and the evolution of morphological scaling.

    Science.gov (United States)

    Dreyer, Austin P; Saleh Ziabari, Omid; Swanson, Eli M; Chawla, Akshita; Frankino, W Anthony; Shingleton, Alexander W

    2016-08-01

    Morphological scaling relationships between organ and body size-also known as allometries-describe the shape of a species, and the evolution of such scaling relationships is central to the generation of morphological diversity. Despite extensive modeling and empirical tests, however, the modes of selection that generate changes in scaling remain largely unknown. Here, we mathematically model the evolution of the group-level scaling as an emergent property of individual-level variation in the developmental mechanisms that regulate trait and body size. We show that these mechanisms generate a "cryptic individual scaling relationship" unique to each genotype in a population, which determines body and trait size expressed by each individual, depending on developmental nutrition. We find that populations may have identical population-level allometries but very different underlying patterns of cryptic individual scaling relationships. Consequently, two populations with apparently the same morphological scaling relationship may respond very differently to the same form of selection. By focusing on the developmental mechanisms that regulate trait size and the patterns of cryptic individual scaling relationships they produce, our approach reveals the forms of selection that should be most effective in altering morphological scaling, and directs researcher attention on the actual, hitherto overlooked, targets of selection.

  19. Environmental complexity across scales: mechanism, scaling and the phenomenological fallacy

    Science.gov (United States)

    Lovejoy, Shaun

    2015-04-01

    Ever since Van Leeuwenhoek used a microscope to discover "new worlds in a drop of water" we have become used to the idea that "zooming in" - whether in space or in time - will reveal new processes, new phenomena. Yet in the natural environment - geosystems - this is often wrong. For example, in the temporal domain, a recent publication has shown that from hours to hundreds of millions of years the conventional scale-bound view of atmospheric variability was wrong by a factor of over a quadrillion (10^15). Mandelbrot challenged the "scale-bound" ideology and proposed that many natural systems - including many geosystems - were instead better treated as fractal systems in which the same basic mechanism acts over potentially huge ranges of scale. However, in its original form Mandelbrot's isotropic scaling (self-similar) idea turned out to be too naïve: geosystems are typically anisotropic so that shapes and morphologies (e.g. of clouds, landmasses) are not the same at different resolutions. However it turns out that the scaling idea often still applies on condition that the notion of scale is generalized appropriately (using the framework of Generalized Scale Invariance). The overall result is that unique processes, unique dynamical mechanisms may act over huge ranges of scale even though the morphologies systematically change with scale. Therefore the common practice of inferring mechanism from shapes, forms, morphologies is unjustified, the "phenomenological fallacy". We give examples of the phenomenological fallacy drawn from diverse areas of geoscience.

  20. Integrating Local Scale Drainage Measures in Meso Scale Catchment Modelling

    Directory of Open Access Journals (Sweden)

    Sandra Hellmers

    2017-01-01

    Full Text Available This article presents a methodology to optimize the integration of local scale drainage measures in catchment modelling. The methodology enables to zoom into the processes (physically, spatially and temporally) where detailed physically based computation is required, and to zoom out where lumped conceptualized approaches are applied. It allows the definition of parameters and computation procedures on different spatial and temporal scales. Three methods are developed to integrate features of local scale drainage measures in catchment modelling: (1) different types of local drainage measures are spatially integrated in catchment modelling by a data mapping; (2) interlinked drainage features between data objects are enabled on the meso, local and micro scale; (3) a method for modelling multiple interlinked layers on the micro scale is developed. For the computation of flow routing on the meso scale, the results of the local scale measures are aggregated according to their contributing inlet in the network structure. The implementation of the methods is realized in a semi-distributed rainfall-runoff model. The implemented micro scale approach is validated with a laboratory physical model to confirm the credibility of the model. A study of a river catchment of 88 km² illustrated the applicability of the model on the regional scale.
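
    A minimal sketch of the aggregation step described above, in which the outflow of each local measure is summed at its contributing inlet before meso-scale routing; all names and values are hypothetical, since the record does not include the model's code:

    ```python
    import numpy as np

    def aggregate_at_inlets(local_hydrographs: dict, inlet_of: dict) -> dict:
        """Sum local-measure outflow hydrographs at their contributing inlets.

        local_hydrographs: {measure_id: outflow per time step (m^3/s)}
        inlet_of:          {measure_id: inlet_id in the drainage network}
        """
        inflow = {}
        for measure, q in local_hydrographs.items():
            inlet = inlet_of[measure]
            inflow[inlet] = inflow.get(inlet, np.zeros_like(q)) + q
        return inflow

    # Illustrative hydrographs for two local measures draining to one inlet.
    q_roof = np.array([0.0, 0.2, 0.5, 0.3])
    q_swale = np.array([0.0, 0.1, 0.4, 0.4])
    print(aggregate_at_inlets({"roof": q_roof, "swale": q_swale},
                              {"roof": "inlet_7", "swale": "inlet_7"}))
    ```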

  1. The Scales of Injustice

    Directory of Open Access Journals (Sweden)

    Charles Blattberg

    2008-02-01

    Full Text Available This paper criticises four major approaches to criminal law – consequentialism, retributivism, abolitionism, and “mixed” pluralism – each of which, in its own fashion, affirms the celebrated emblem of the “scales of justice.” The argument is that there is a better way of dealing with the tensions that often arise between the various legal purposes than by merely balancing them against each other. It consists, essentially, of striving to genuinely reconcile those purposes, a goal which is shown to require taking a new, “patriotic” approach to law.

  2. Industrial scale gene synthesis.

    Science.gov (United States)

    Notka, Frank; Liss, Michael; Wagner, Ralf

    2011-01-01

    The most recent developments in the area of deep DNA sequencing and downstream quantitative and functional analysis are rapidly adding a new dimension to understanding biochemical pathways and metabolic interdependencies. These increasing insights pave the way to designing new strategies that address public needs, including environmental applications and therapeutic inventions, or novel cell factories for sustainable and reconcilable energy or chemicals sources. Adding yet another level is building upon nonnaturally occurring networks and pathways. Recent developments in synthetic biology have created economic and reliable options for designing and synthesizing genes, operons, and eventually complete genomes. Meanwhile, high-throughput design and synthesis of extremely comprehensive DNA sequences have evolved into an enabling technology already indispensable in various life science sectors today. Here, we describe the industrial perspective of modern gene synthesis and its relationship with synthetic biology. Gene synthesis contributed significantly to the emergence of synthetic biology by not only providing the genetic material in high quality and quantity but also enabling its assembly, according to engineering design principles, in a standardized format. Synthetic biology on the other hand, added the need for assembling complex circuits and large complexes, thus fostering the development of appropriate methods and expanding the scope of applications. Synthetic biology has also stimulated interdisciplinary collaboration as well as integration of the broader public by addressing socioeconomic, philosophical, ethical, political, and legal opportunities and concerns. The demand-driven technological achievements of gene synthesis and the implemented processes are exemplified by an industrial setting of large-scale gene synthesis, describing production from order to delivery.

  3. H2@Scale Workshop Report

    Energy Technology Data Exchange (ETDEWEB)

    Pivovar, Bryan

    2017-03-31

    Final report from the H2@Scale Workshop held November 16-17, 2016, at the National Renewable Energy Laboratory in Golden, Colorado. The U.S. Department of Energy's National Renewable Energy Laboratory hosted a technology workshop to identify the current barriers and research needs of the H2@Scale concept. H2@Scale is a concept regarding the potential for wide-scale impact of hydrogen produced from diverse domestic resources to enhance U.S. energy security and enable growth of innovative technologies and domestic industries. Feedback received from a diverse set of stakeholders at the workshop will guide the development of an H2@Scale roadmap for research, development, and early stage demonstration activities that can enable hydrogen as an energy carrier at a national scale.

  4. Plague and climate: scales matter.

    Directory of Open Access Journals (Sweden)

    Tamara Ben-Ari

    2011-09-01

    Full Text Available Plague is enzootic in wildlife populations of small mammals in central and eastern Asia, Africa, South and North America, and has been recognized recently as a reemerging threat to humans. Its causative agent Yersinia pestis relies on wild rodent hosts and flea vectors for its maintenance in nature. Climate influences all three components (i.e., bacteria, vectors, and hosts) of the plague system and is a likely factor to explain some of plague's variability from small and regional to large scales. Here, we review effects of climate variables on plague hosts and vectors from individual or population scales to studies on the whole plague system at a large scale. Upscaled versions of small-scale processes are often invoked to explain plague variability in time and space at larger scales, presumably because similar scale-independent mechanisms underlie these relationships. This linearity assumption is discussed in the light of recent research that suggests some of its limitations.

  5. Plague and climate: scales matter.

    Science.gov (United States)

    Ben Ari, Tamara; Neerinckx, Simon; Gage, Kenneth L; Kreppel, Katharina; Laudisoit, Anne; Leirs, Herwig; Stenseth, Nils Chr

    2011-09-01

    Plague is enzootic in wildlife populations of small mammals in central and eastern Asia, Africa, South and North America, and has been recognized recently as a reemerging threat to humans. Its causative agent Yersinia pestis relies on wild rodent hosts and flea vectors for its maintenance in nature. Climate influences all three components (i.e., bacteria, vectors, and hosts) of the plague system and is a likely factor to explain some of plague's variability from small and regional to large scales. Here, we review effects of climate variables on plague hosts and vectors from individual or population scales to studies on the whole plague system at a large scale. Upscaled versions of small-scale processes are often invoked to explain plague variability in time and space at larger scales, presumably because similar scale-independent mechanisms underlie these relationships. This linearity assumption is discussed in the light of recent research that suggests some of its limitations.

  6. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

    This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical process in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: Enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale Offers engineers and designers a new point of view, liberating creative and inno...

  7. Plague and Climate: Scales Matter

    Science.gov (United States)

    Ben Ari, Tamara; Neerinckx, Simon; Gage, Kenneth L.; Kreppel, Katharina; Laudisoit, Anne; Leirs, Herwig; Stenseth, Nils Chr.

    2011-01-01

    Plague is enzootic in wildlife populations of small mammals in central and eastern Asia, Africa, South and North America, and has been recognized recently as a reemerging threat to humans. Its causative agent Yersinia pestis relies on wild rodent hosts and flea vectors for its maintenance in nature. Climate influences all three components (i.e., bacteria, vectors, and hosts) of the plague system and is a likely factor to explain some of plague's variability from small and regional to large scales. Here, we review effects of climate variables on plague hosts and vectors from individual or population scales to studies on the whole plague system at a large scale. Upscaled versions of small-scale processes are often invoked to explain plague variability in time and space at larger scales, presumably because similar scale-independent mechanisms underlie these relationships. This linearity assumption is discussed in the light of recent research that suggests some of its limitations. PMID:21949648

  8. Discrete implementations of scale transform

    Science.gov (United States)

    Djurdjanovic, Dragan; Williams, William J.; Koh, Christopher K.

    1999-11-01

    Scale as a physical quantity is a recently developed concept. The scale transform can be viewed as a special case of the more general Mellin transform, and its mathematical properties are very applicable in the analysis and interpretation of signals subject to scale changes. A number of single-dimensional applications of the scale concept have been made in speech analysis, processing of biological signals, machine vibration analysis and other areas. Recently, the scale transform was also applied in multi-dimensional signal processing and used for image filtering and denoising. Discrete implementation of the scale transform can be carried out using logarithmic sampling and the well-known fast Fourier transform. Nevertheless, in the case of uniformly sampled signals, this implementation involves resampling. An algorithm not involving resampling of the uniformly sampled signals has also been derived. In this paper, a modification of the latter algorithm for discrete implementation of the direct scale transform is presented. In addition, a similar concept was used to improve a recently introduced discrete implementation of the inverse scale transform. Estimation of the absolute discretization errors showed that the modified algorithms have the desirable property of yielding a smaller region of possible error magnitudes. Experimental results are obtained using artificial signals as well as signals evoked from the temporomandibular joint. In addition, discrete implementations for the separable two-dimensional direct and inverse scale transforms are derived. Experiments with image restoration and scaling through the two-dimensional scale domain using the novel implementation of the separable two-dimensional scale transform pair are presented.
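
    A minimal sketch of the log-sampling implementation mentioned above (not the authors' algorithm; the warping, normalization, and test signal are illustrative assumptions). After the substitution t = exp(tau), the scale-transform kernel t^(-jc - 1/2) becomes a Fourier kernel in tau acting on the warped, amplitude-compensated signal, which is what the code computes:

      import numpy as np

      def scale_transform(f, t, n_log=1024):
          """Approximate direct scale transform of samples f of a signal on a
          uniform grid t > 0, via logarithmic resampling plus an FFT."""
          tau = np.linspace(np.log(t[0]), np.log(t[-1]), n_log)
          dtau = tau[1] - tau[0]
          f_warp = np.interp(np.exp(tau), t, f)    # resample on exponential grid
          g = f_warp * np.exp(tau / 2.0)           # e^{tau/2} amplitude compensation
          D = np.fft.fftshift(np.fft.fft(g)) * dtau / np.sqrt(2.0 * np.pi)
          c = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(n_log, d=dtau))
          return c, D

      # Toy check of scale invariance: |D(c)| of sqrt(a)*f(a*t) should match that of f(t).
      t = np.linspace(1e-3, 10.0, 4096)
      f = np.exp(-np.log(t) ** 2)
      c, D1 = scale_transform(f, t)
      _, D2 = scale_transform(np.sqrt(2.0) * np.interp(2.0 * t, t, f, right=0.0), t)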

  9. Fundamental Scaling Laws in Nanophotonics

    OpenAIRE

    Ke Liu; Shuai Sun; Arka Majumdar; Volker J. Sorger

    2016-01-01

    The success of information technology has clearly demonstrated that miniaturization often leads to unprecedented performance, and unanticipated applications. This hypothesis of “smaller-is-better” has motivated optical engineers to build various nanophotonic devices, although an understanding leading to fundamental scaling behavior for this new class of devices is missing. Here we analyze scaling laws for optoelectronic devices operating at micro and nanometer length-scale. We show that optoe...

  10. Electroweak scale neutrinos and Higgses

    CERN Document Server

    Aranda, Alfredo

    2009-01-01

    We present two different models with electroweak scale right-handed neutrinos. One of the models is created under the constraint that any addition to the Standard Model must not introduce new higher scales. The model contains right-handed neutrinos with electroweak scale masses and a lepton number violating singlet scalar field. The scalar phenomenology is also presented. The second model is a triplet Higgs model where again the right-handed neutrinos have electroweak scale masses. In this case the model has a rich scalar phenomenology and in particular we present the analysis involving the doubly charged Higgs.

  11. Extended scaling in high dimensions

    Science.gov (United States)

    Berche, B.; Chatelain, C.; Dhall, C.; Kenna, R.; Low, R.; Walter, J.-C.

    2008-11-01

    We apply and test the recently proposed 'extended scaling' scheme in an analysis of the magnetic susceptibility of Ising systems above the upper critical dimension. The data are obtained by Monte Carlo simulations using both the conventional Wolff cluster algorithm and the Prokof'ev-Svistunov worm algorithm. As already observed for other models, extended scaling is shown to extend the high-temperature critical scaling regime over a range of temperatures much wider than that achieved conventionally. It allows for an accurate determination of leading and sub-leading scaling indices, critical temperatures and amplitudes of the confluent corrections.

  12. Natural Scales in Geographical Patterns

    Science.gov (United States)

    Menezes, Telmo; Roth, Camille

    2017-04-01

    Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion where the introduction of spatial boundaries is pivotal.
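
    A toy sketch of the pipeline the abstract describes, under stated assumptions: synthetic movement triples stand in for geotagged data, networkx's greedy modularity communities stand in for the (unspecified) community detection method, and normalized mutual information between consecutive partitions stands in for the parameter-free discontinuity detection. None of these specific choices come from the paper.

      import numpy as np
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities
      from sklearn.metrics import normalized_mutual_info_score

      rng = np.random.default_rng(0)
      # Synthetic "movements": (origin, destination, distance) triples.
      moves = [(rng.integers(0, 50), rng.integers(0, 50), rng.exponential(10.0))
               for _ in range(2000)]

      def partition_at_percentile(p):
          """Community partition of the movement network restricted to links
          shorter than the p-th percentile of the distance distribution."""
          dmax = np.percentile([d for _, _, d in moves], p)
          G = nx.Graph()
          for o, t, d in moves:
              if o != t and d <= dmax:
                  G.add_edge(o, t)
          labels = {}
          for i, com in enumerate(greedy_modularity_communities(G)):
              for node in com:
                  labels[node] = i
          return labels

      # Sharp drops in similarity between consecutive partitions play the role
      # of the "phase transitions" between natural scales described above.
      prev = None
      for p in range(10, 101, 10):
          cur = partition_at_percentile(p)
          if prev is not None:
              shared = sorted(set(prev) & set(cur))
              nmi = normalized_mutual_info_score([prev[n] for n in shared],
                                                 [cur[n] for n in shared])
              print(f"percentile {p:3d}: similarity to previous = {nmi:.2f}")
          prev = cur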

  13. Natural Scales in Geographical Patterns

    Science.gov (United States)

    Menezes, Telmo; Roth, Camille

    2017-01-01

    Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion where the introduction of spatial boundaries is pivotal. PMID:28374825

  14. Drift Scale THM Model

    Energy Technology Data Exchange (ETDEWEB)

    J. Rutqvist

    2004-10-07

    This model report documents the drift scale coupled thermal-hydrological-mechanical (THM) processes model development and presents simulations of the THM behavior in fractured rock close to emplacement drifts. The modeling and analyses are used to evaluate the impact of THM processes on permeability and flow in the near-field of the emplacement drifts. The results from this report are used to assess the importance of THM processes on seepage and support in the model reports "Seepage Model for PA Including Drift Collapse" and "Abstraction of Drift Seepage", and to support arguments for exclusion of features, events, and processes (FEPs) in the analysis reports "Features, Events, and Processes in Unsaturated Zone Flow and Transport" and "Features, Events, and Processes: Disruptive Events". The total system performance assessment (TSPA) calculations do not use any output from this report. Specifically, the coupled THM process model is applied to simulate the impact of THM processes on hydrologic properties (permeability and capillary strength) and flow in the near-field rock around a heat-releasing emplacement drift. The heat generated by the decay of radioactive waste results in elevated rock temperatures for thousands of years after waste emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, resulting in water redistribution and altered flow paths. These temperatures will also cause thermal expansion of the rock, with the potential of opening or closing fractures and thus changing fracture permeability in the near-field. Understanding the THM coupled processes is important for the performance of the repository because the thermally induced permeability changes potentially affect the magnitude and spatial distribution of percolation flux in the vicinity of the drift, and hence the seepage of water into the drift. This is important because

  15. Ultra-Large-Scale Systems: Scale Changes Everything

    Science.gov (United States)

    2008-03-06

    [Slide excerpts only: this record preserves fragments of a March 2008 briefing by Linda Northrop on ultra-large-scale systems, covering statistical mechanics and complexity, recurring scale-free network structure (e.g., the internet and yeast protein networks), design representation and analysis, and determining and managing requirements.]

  16. Relating urban scaling, fundamental allometry, and density scaling

    CERN Document Server

    Rybski, Diego

    2016-01-01

    We study the connection between urban scaling, fundamental allometry (between city population and city area), and per capita vs. population density scaling. From simple analytical derivations we obtain the relation between the three exponents involved. We discuss particular cases and ranges of the exponents, which we illustrate in a "phase diagram". As we show, the results are consistent with previous work.
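
    The abstract does not reproduce the derivation, but one consistent reading of the three-exponent relation, with assumed notation (Y a scaling urban quantity, P population, A area), is:

      % Assumed notation: urban scaling Y ~ P^beta, fundamental allometry A ~ P^gamma.
      \[
        y \equiv \frac{Y}{P} \propto P^{\beta-1}, \qquad
        \rho \equiv \frac{P}{A} \propto P^{1-\gamma}
        \quad\Longrightarrow\quad
        y \propto \rho^{\delta}, \qquad \delta = \frac{\beta-1}{1-\gamma}.
      \]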

  17. From dynamical scaling to local scale-invariance: a tutorial

    CERN Document Server

    Henkel, Malte

    2016-01-01

    Dynamical scaling arises naturally in various many-body systems far from equilibrium. After a short historical overview, the elements of possible extensions of dynamical scaling to a local scale-invariance will be introduced. Schrödinger-invariance, the simplest example of local scale-invariance, will be introduced as a dynamical symmetry in the Edwards-Wilkinson universality class of interface growth. The Lie algebra construction, its representations and the Bargman superselection rules will be combined with non-equilibrium Janssen-de Dominicis field-theory to produce explicit predictions for responses and correlators, which can be compared to the results of explicit model studies. At the next level, the study of non-stationary states requires going over from Schrödinger-invariance to ageing-invariance. The ageing algebra admits new representations, which act as dynamical symmetries on more general equations, and imply that each non-equilibrium scaling operator is characterised by two distinct, ind...

  18. Scaling properties of small-scale fluctuations in magnetohydrodynamic turbulence

    CERN Document Server

    Perez, J C; Boldyrev, S; Cattaneo, F

    2014-01-01

    Magnetohydrodynamic (MHD) turbulence in the majority of natural systems, including the interstellar medium, the solar corona, and the solar wind, has Reynolds numbers far exceeding the Reynolds numbers achievable in numerical experiments. Much attention is therefore drawn to the universal scaling properties of small-scale fluctuations, which can be reliably measured in the simulations and then extrapolated to astrophysical scales. However, in contrast with hydrodynamic turbulence, where the universal structure of the inertial and dissipation intervals is described by the Kolmogorov self-similarity, the scaling for MHD turbulence cannot be established based solely on dimensional arguments due to the presence of an intrinsic velocity scale -- the Alfven velocity. In this Letter, we demonstrate that the Kolmogorov first self-similarity hypothesis cannot be formulated for MHD turbulence in the same way it is formulated for the hydrodynamic case. Besides profound consequences for the analytical consideration, this...

  19. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas; surveys varied subject areas and reports on individual results of research in the field; shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  20. Very Large Scale Integration (VLSI).

    Science.gov (United States)

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  1. Metric scales for emotion measurement

    Directory of Open Access Journals (Sweden)

    Martin Junge

    2016-09-01

    Full Text Available The scale quality of indirect and direct scalings of the intensity of emotional experiences was investigated from the perspective of representational measurement theory. Study 1 focused on sensory pleasantness and disgust, Study 2 on surprise and amusement, and Study 3 on relief and disappointment. In each study, the emotion intensities elicited by a set of stimuli were estimated using Ordinal Difference Scaling, an indirect probabilistic scaling method based on graded pair comparisons. The obtained scale values were used to select test cases for the quadruple axiom, a central axiom of difference measurement. A parametric bootstrap test was used to decide whether the participants’ difference judgments systematically violated the axiom. Most participants passed this test. The indirect scalings of these participants were then linearly correlated with their direct emotion intensity ratings to determine whether they agreed with them up to measurement error, and hence might be metric as well. The majority of the participants did not pass this test. The findings suggest that Ordinal Difference Scaling allows emotion intensity to be measured on a metric scale level for most participants. As a consequence, quantitative emotion theories become amenable to empirical test on the individual level using indirect measurements of emotional experience.
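
    The quadruple axiom has a simple operational form. The sketch below is a deterministic checker for it (the study itself used graded pair comparisons and a parametric bootstrap, which are not reproduced here); the judge function and stimulus names are illustrative assumptions:

      from itertools import permutations

      def violates_quadruple_axiom(judge, stimuli):
          """Scan all ordered quadruples (a, b, c, d) and report cases where the
          judged order of intensity differences breaks the quadruple condition:
          (a,b) >= (c,d)  iff  (a,c) >= (b,d).
          judge(x, y, u, v) must return True when the intensity difference
          between x and y is judged at least as large as that between u and v."""
          bad = []
          for a, b, c, d in permutations(stimuli, 4):
              if judge(a, b, c, d) != judge(a, c, b, d):
                  bad.append((a, b, c, d))
          return bad

      # Illustration with a noiseless "participant" whose judgments follow a
      # latent metric scale; such a judge can never violate the axiom.
      latent = {"s1": 0.2, "s2": 0.5, "s3": 0.7, "s4": 0.9}
      judge = lambda a, b, c, d: (latent[a] - latent[b]) >= (latent[c] - latent[d])
      assert not violates_quadruple_axiom(judge, list(latent))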

  2. Spiritual Competency Scale: Further Analysis

    Science.gov (United States)

    Dailey, Stephanie F.; Robertson, Linda A.; Gill, Carman S.

    2015-01-01

    This article describes a follow-up analysis of the Spiritual Competency Scale, which initially validated ASERVIC's (Association for Spiritual, Ethical and Religious Values in Counseling) spiritual competencies. The study examined whether the factor structure of the Spiritual Competency Scale would be supported by participants (i.e., ASERVIC…

  3. On the Geologic Time Scale

    NARCIS (Netherlands)

    Gradstein, F.M.; Ogg, J.G.; Hilgen, F.J.

    2012-01-01

    This report summarizes the international divisions and ages in the Geologic Time Scale, published in 2012 (GTS2012). Since 2004, when GTS2004 was detailed, major developments have taken place that bear directly, and have considerable impact, on the intricate science of geologic time scaling. Precambrian…

  4. Voice, Schooling, Inequality, and Scale

    Science.gov (United States)

    Collins, James

    2013-01-01

    The rich studies in this collection show that the investigation of voice requires analysis of "recognition" across layered spatial-temporal and sociolinguistic scales. I argue that the concepts of voice, recognition, and scale provide insight into contemporary educational inequality and that their study benefits, in turn, from paying attention to…

  5. Spiritual Competency Scale: Further Analysis

    Science.gov (United States)

    Dailey, Stephanie F.; Robertson, Linda A.; Gill, Carman S.

    2015-01-01

    This article describes a follow-up analysis of the Spiritual Competency Scale, which initially validated ASERVIC's (Association for Spiritual, Ethical and Religious Values in Counseling) spiritual competencies. The study examined whether the factor structure of the Spiritual Competency Scale would be supported by participants (i.e., ASERVIC…

  6. A Scale of Mobbing Impacts

    Science.gov (United States)

    Yaman, Erkan

    2012-01-01

    The aim of this research was to develop the Mobbing Impacts Scale and to examine its validity and reliability. The study sample consisted of 509 teachers from Sakarya. In this study, the construct validity, internal consistency, test-retest reliability and item analysis of the scale were examined. As a result of factor analysis for…

  7. SCALING AND 4-QUARK FRAGMENTATION

    NARCIS (Netherlands)

    SCHOLTEN, O; BOSVELD, GD

    1991-01-01

    The conditions for a scaling behaviour from the fragmentation process leading to slow protons are discussed. The scaling referred to implies that the fragmentation functions depend on the light-cone momentum fraction only. It is shown that differences in the fragmentation functions for valence- and

  8. Entanglement scaling in lattice systems

    Energy Technology Data Exchange (ETDEWEB)

    Audenaert, K M R [Institute for Mathematical Sciences, Imperial College London, 53 Prince' s Gate, Exhibition Road, London SW7 2PG (United Kingdom); Cramer, M [QOLS, Blackett Laboratory, Imperial College London, Prince Consort Road, London SW7 2BW (United Kingdom); Eisert, J [Institute for Mathematical Sciences, Imperial College London, 53 Prince' s Gate, Exhibition Road, London SW7 2PG (United Kingdom); Plenio, M B [Institute for Mathematical Sciences, Imperial College London, 53 Prince' s Gate, Exhibition Road, London SW7 2PG (United Kingdom)

    2007-05-15

    We review some recent rigorous results on scaling laws of entanglement properties in quantum many-body systems. More specifically, we study the entanglement of a region with its surrounding and determine its scaling behaviour with its size for systems in the ground and thermal states of bosonic and fermionic lattice systems. A theorem connecting entanglement between a region and the rest of the lattice with the surface area of the boundary between the two regions is presented for non-critical systems in arbitrary spatial dimensions. The entanglement scaling in the field limit exhibits a peculiar difference between fermionic and bosonic systems. In one spatial dimension, a logarithmic divergence is recovered for both bosonic and fermionic systems. In two spatial dimensions, in the setting of half-spaces, however, we observe strict area scaling for bosonic systems and a multiplicative logarithmic correction to such an area scaling in fermionic systems. Similar questions may be posed and answered in classical systems.

  9. Multi-scale brain networks

    CERN Document Server

    Betzel, Richard F

    2016-01-01

    The network architecture of the human brain has become a feature of increasing interest to the neuroscientific community, largely because of its potential to illuminate human cognition, its variation over development and aging, and its alteration in disease or injury. Traditional tools and approaches to study this architecture have largely focused on single scales -- of topology, time, and space. Expanding beyond this narrow view, we focus this review on pertinent questions and novel methodological advances for the multi-scale brain. We separate our exposition into content related to multi-scale topological structure, multi-scale temporal structure, and multi-scale spatial structure. In each case, we recount empirical evidence for such structures, survey network-based methodological approaches to reveal these structures, and outline current frontiers and open questions. Although predominantly peppered with examples from human neuroimaging, we hope that this account will offer an accessible guide to any neuros...

  10. Scale effect on overland flow connectivity, at the interill scale

    Science.gov (United States)

    Penuela Fernandez, A.; Bielders, C.; Javaux, M.

    2012-04-01

    The relative surface connection function (RSC) was proposed by Antoine et al. (2009) as a functional indicator of runoff flow connectivity. For a given area, it expresses the percentage of the surface connected to the outlet (C) as a function of the degree of filling of the depression storage. This function explicitly integrates the flow network at the soil surface and hence provides essential information regarding the flow paths' connectivity. It has been shown that this function could help improve the modeling of the hydrograph at the square meter scale, yet it is unknown how the scale affects the RSC function, and whether and how it can be extrapolated to other scales. The main objective of this research is to study the scale effect on overland flow connectivity (RSC function). For this purpose, digital elevation data of a real field (9 × 3 m) and three synthetic fields (6 × 6 m) with contrasting hydrological responses were used, and the RSC function was calculated at different scales by changing the length (L) or width (l) of the field. Border effects were observed for the smaller scales. In most cases, for L or l smaller than 750 mm, increasing L or l resulted in a strong increase or decrease of the maximum depression storage, respectively. There was no scale effect on the RSC function when changing l. On the contrary, a remarkable scale effect was observed in the RSC function when changing L. In general, for a given degree of filling of the depression storage, C decreased as L increased. This change in C was inversely proportional to the change in L. This observation applied only up to approx. 50-70% (depending on the hydrological response of the field) of filling of depression storage, after which no correlation was found between C and L. The results of this study help identify the critical scale to study overland flow connectivity. At scales larger than the critical scale, the RSC function showed a great potential to be extrapolated to other scales.
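
    A strongly simplified sketch of an RSC-style curve (an illustration of the concept, not Antoine et al.'s method): ponding is treated as a uniform depth, and the depth each cell needs before it can spill to the outlet is computed as a min-max path elevation with a priority-flood sweep.

      import heapq
      import numpy as np

      def rsc_curve(z):
          """Toy relative surface connection (RSC) curve for a DEM z that drains
          to its right-hand boundary: connected fraction of the surface versus
          filled fraction of depression storage."""
          ny, nx = z.shape
          spill = np.full(z.shape, np.inf)
          heap = [(z[i, nx - 1], i, nx - 1) for i in range(ny)]  # outlet column
          heapq.heapify(heap)
          while heap:
              lvl, i, j = heapq.heappop(heap)
              if lvl >= spill[i, j]:
                  continue                      # already reached at a lower level
              spill[i, j] = lvl
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  a, b = i + di, j + dj
                  if 0 <= a < ny and 0 <= b < nx:
                      heapq.heappush(heap, (max(lvl, z[a, b]), a, b))
          barrier = spill - z                   # ponding depth needed to connect
          total = barrier.sum()
          fill, conn = [], []
          for d in np.linspace(0.0, barrier.max(), 100):
              fill.append(np.minimum(barrier, d).sum() / total)
              conn.append((barrier <= d).mean())
          return np.array(fill), np.array(conn)

      # Usage on a random microtopography (elevations in mm):
      f, c = rsc_curve(np.random.default_rng(1).normal(0.0, 5.0, (60, 180)))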

  11. Scale effect on overland flow connectivity at the plot scale

    Science.gov (United States)

    Peñuela, A.; Javaux, M.; Bielders, C. L.

    2013-01-01

    A major challenge in present-day hydrological sciences is to enhance the performance of existing distributed hydrological models through a better description of subgrid processes, in particular the subgrid connectivity of flow paths. The Relative Surface Connection (RSC) function was proposed by Antoine et al. (2009) as a functional indicator of runoff flow connectivity. For a given area, it expresses the percentage of the surface connected to the outflow boundary (C) as a function of the degree of filling of the depression storage. This function explicitly integrates the flow network at the soil surface and hence provides essential information regarding the flow paths' connectivity. It has been shown that this function could help improve the modeling of the hydrograph at the square meter scale, yet it is unknown how the scale affects the RSC function, and whether and how it can be extrapolated to other scales. The main objective of this research is to study the scale effect on overland flow connectivity (RSC function). For this purpose, digital elevation data of a real field (9 × 3 m) and three synthetic fields (6 × 6 m) with contrasting hydrological responses were used, and the RSC function was calculated at different scales by changing the length (l) or width (w) of the field. To different extents depending on the microtopography, border effects were observed for the smaller scales when decreasing l or w, which resulted in a strong decrease or increase of the maximum depression storage, respectively. There was no scale effect on the RSC function when changing w, but a remarkable scale effect was observed in the RSC function when changing l. In general, for a given degree of filling of the depression storage, C decreased as l increased, the change in C being inversely proportional to the change in l. However, this observation applied only up to approx. 50-70% (depending on the hydrological response of the field) of filling of depression storage, after which no

  12. Scale effect on overland flow connectivity at the plot scale

    Directory of Open Access Journals (Sweden)

    A. Peñuela

    2012-06-01

    Full Text Available A major challenge in present-day hydrological sciences is to enhance the performance of existing distributed hydrological models through a better description of subgrid processes, in particular the subgrid connectivity of flow paths. The relative surface connection function (RSC) was proposed by Antoine et al. (2009) as a functional indicator of runoff flow connectivity. For a given area, it expresses the percentage of the surface connected to the outflow boundary (C) as a function of the degree of filling of the depression storage. This function explicitly integrates the flow network at the soil surface and hence provides essential information regarding the flow paths' connectivity. It has been shown that this function could help improve the modeling of the hydrograph at the square meter scale, yet it is unknown how the scale affects the RSC function, and whether and how it can be extrapolated to other scales. The main objective of this research is to study the scale effect on overland flow connectivity (RSC function). For this purpose, digital elevation data of a real field (9 × 3 m) and three synthetic fields (6 × 6 m) with contrasting hydrological responses were used, and the RSC function was calculated at different scales by changing the length (l) or width (w) of the field. Border effects, to different extents depending on the microtopography, were observed for the smaller scales when decreasing l or w, which resulted in a strong decrease or increase of the maximum depression storage, respectively. There was no scale effect on the RSC function when changing w. On the contrary, a remarkable scale effect was observed in the RSC function when changing l. In general, for a given degree of filling of the depression storage, C decreased as l increased. This change in C was inversely proportional to the change in l. This observation applied only up to approx. 50–70

  13. Generic Dynamic Scaling in Kinetic Roughening

    OpenAIRE

    Ramasco, José J.; López, Juan M.; Rodríguez, Miguel A.

    2000-01-01

    We study the dynamic scaling hypothesis in invariant surface growth. We show that the existence of power-law scaling of the correlation functions (scale invariance) does not determine a unique dynamic scaling form of the correlation functions, which leads to the different anomalous forms of scaling recently observed in growth models. We derive all the existing forms of anomalous dynamic scaling from a new generic scaling ansatz. The different scaling forms are subclasses of this generic scali...

  14. A Figurine and its Scale, a Scale and its Figurine

    Directory of Open Access Journals (Sweden)

    Fotis Ifantidis

    2015-05-01

    Full Text Available I was taught to think of archaeological photography as faceless, a to-scale and accurate depiction of ancient artefacts and sites, but these rules apply only to one part of archaeological photography, the 'official' one.

  15. Gelation on the microscopic scale

    Science.gov (United States)

    Oppong, Felix K.; Coussot, P.; de Bruyn, John R.

    2008-08-01

    Particle-tracking methods are used to study gelation in a colloidal suspension of Laponite clay particles. We track the motion of small fluorescent polystyrene spheres added to the suspension, and obtain the micron-scale viscous and elastic moduli of the material from their mean-squared displacement. The fluorescent spheres move subdiffusively due to the microstructure of the suspension, with the diffusive exponent decreasing from close to one at early times to near zero as the material gels. The particle-tracking data show that the system becomes more heterogeneous on the microscopic scale as gelation proceeds. We also determine the bulk-scale moduli using small-amplitude oscillatory shear rheometry. Both the macroscopic and microscopic moduli increase with time, and on both scales we observe a transition from a primarily viscous fluid to an elastic gel. We find that the gel point, determined as the time at which the viscous and elastic moduli are equal, is length-scale dependent—gelation occurs earlier on the bulk scale than on the microscopic scale.
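
    The step from mean-squared displacement to micron-scale moduli is conventionally made with the generalized Stokes-Einstein relation; assuming the standard Mason-Weitz form (the abstract does not spell out which variant was used):

      % Generalized Stokes-Einstein relation (Mason-Weitz form): a is the probe
      % radius and <\Delta\tilde r^2(s)> the Laplace-transformed 3-D MSD.
      \[
        \tilde G(s) = \frac{k_B T}{\pi a\, s\, \langle \Delta \tilde r^2(s)\rangle},
        \qquad G^*(\omega) = G'(\omega) + i\,G''(\omega) \ \ \text{via } s \to i\omega .
      \]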

  16. Scaling limits of a model for selection at two scales

    Science.gov (United States)

    Luo, Shishi; Mattingly, Jonathan C.

    2017-04-01

    The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval [0,1] with dependence on a single parameter, λ. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on λ and the behavior of the initial data around 1. The second scaling leads to a measure-valued Fleming–Viot process, an infinite dimensional stochastic process that is frequently associated with a population genetics.
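
    A cartoon simulation of the two-scale tension described above, not the authors' ball-and-urn process or its scaling limits: urns are hosts, balls are virions, a Moran-type step favors the fast strain within hosts, and rare transmission events favor hosts dominated by the slow strain. All parameters are invented.

      import numpy as np

      rng = np.random.default_rng(42)
      N, M = 100, 50          # hosts (urns) and virions (balls) per host
      s_within = 0.10         # within-host advantage of the fast strain
      s_between = 0.50        # between-host advantage of mostly-slow hosts

      # hosts[i] = number of fast-strain balls in urn i; start at 50/50.
      hosts = np.full(N, M // 2)

      for step in range(200_000):
          i = rng.integers(N)
          x = hosts[i] / M
          # Within-host Moran step: the fast strain replicates preferentially.
          p_fast = (1 + s_within) * x / ((1 + s_within) * x + (1 - x)) if 0 < x < 1 else x
          born = rng.random() < p_fast
          dead = rng.random() < x
          hosts[i] += born - dead
          # Occasional transmission: a host is replaced by the offspring of a
          # host sampled with a bias toward urns carrying more slow strain.
          if rng.random() < 1e-3:
              w = 1 + s_between * (1 - hosts / M)
              j = rng.choice(N, p=w / w.sum())
              hosts[i] = rng.binomial(M, hosts[j] / M)   # founder sampling

      print("mean fast-strain frequency:", hosts.mean() / M)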

  17. From dynamical scaling to local scale-invariance: a tutorial

    Science.gov (United States)

    Henkel, Malte

    2017-03-01

    Dynamical scaling arises naturally in various many-body systems far from equilibrium. After a short historical overview, the elements of possible extensions of dynamical scaling to a local scale-invariance will be introduced. Schrödinger-invariance, the simplest example of local scale-invariance, will be introduced as a dynamical symmetry in the Edwards-Wilkinson universality class of interface growth. The Lie algebra construction, its representations and the Bargman superselection rules will be combined with non-equilibrium Janssen-de Dominicis field-theory to produce explicit predictions for responses and correlators, which can be compared to the results of explicit model studies. At the next level, the study of non-stationary states requires going over from Schrödinger-invariance to ageing-invariance. The ageing algebra admits new representations, which act as dynamical symmetries on more general equations, and imply that each non-equilibrium scaling operator is characterised by two distinct, independent scaling dimensions. Tests of ageing-invariance are described in the Glauber-Ising and spherical models of a phase-ordering ferromagnet and the Arcetri model of interface growth.

  18. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.

  19. Rotated and Scaled Alamouti Coding

    CERN Document Server

    Willems, Frans M J

    2008-01-01

    Repetition-based retransmission is used in Alamouti-modulation [1998] for $2\times 2$ MIMO systems. We propose to use, instead of ordinary repetition, so-called "scaled repetition" together with rotation. It is shown that the rotated and scaled Alamouti code has a hard-decision performance only slightly worse than that of the Golden code [2005], the best known $2\times 2$ space-time code. Decoding the Golden code requires an exhaustive search over all codewords, while our rotated and scaled Alamouti code can be decoded with acceptable complexity.

  20. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  1. Physical capability scale: psychometric testing.

    Science.gov (United States)

    Resnick, Barbara; Boltz, Marie; Galik, Elizabeth; Wells, Chris

    2013-02-01

    The purpose of this study was to describe the psychometric testing of the Basic Physical Capability Scale. The study was a secondary data analysis of combined data sets from three studies. Study participants included 93 older adults recruited from two acute-care settings and 110 older adults living in long-term care facilities. Rasch analysis was used for the testing of the measurement model. There was some support for construct validity based on the fit of the items to the scale across both samples. In addition, there was support for hypothesis testing, as physical function was significantly associated with physical capability. There was evidence for internal consistency (alpha coefficients of .77-.83) and interrater reliability based on an intraclass correlation of .81. This study provided preliminary support for the reliability and validity of the Basic Physical Capability Scale, and guidance for scale revisions and continued use.
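
    For reference, the dichotomous Rasch model underlying this kind of item-fit analysis is shown below (whether the study used this form or a polytomous variant is not stated in the abstract):

      % Dichotomous Rasch model: person ability theta_n, item difficulty b_i.
      \[
        P(X_{ni} = 1 \mid \theta_n, b_i)
          = \frac{e^{\,\theta_n - b_i}}{1 + e^{\,\theta_n - b_i}} .
      \]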

  2. Pilot Scale Advanced Fogging Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Demmer, Rick L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Fox, Don T. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Archiblad, Kip E. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-01-01

    Experiments in 2006 developed a useful fog solution using three different chemical constituents. Optimization of the fog recipe and use of commercially available equipment were identified as needs that had not been addressed. During 2012 development work, it was noted that low concentrations of the components hampered coverage and drying in the United Kingdom’s National Nuclear Laboratory’s testing much more than was evident in the 2006 tests. In fiscal year 2014, the Idaho National Laboratory undertook a systematic optimization of the fogging formulation and conducted a non-radioactive, pilot scale demonstration using commercially available fogging equipment. While not as sophisticated as the equipment used in earlier testing, the new approach is much less expensive and readily available for smaller scale operations. Pilot scale testing was important to validate new equipment of an appropriate scale, to optimize the chemistry of the fogging solution, and to realize the conceptual approach.

  3. Fluid dynamics: Swimming across scales

    Science.gov (United States)

    Baumgart, Johannes; Friedrich, Benjamin M.

    2014-10-01

    The myriad creatures that inhabit the waters of our planet all swim using different mechanisms. Now, a simple relation links key physical observables of underwater locomotion, on scales ranging from millimetres to tens of metres.

  4. Hidden scale invariance of metals

    DEFF Research Database (Denmark)

    Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.

    2015-01-01

    Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general “hidden” scale invariance of metals making the condensed part of the thermodynamic phase diagram effectively one dimensional with respect to structure and dynamics. DFT computed density scaling exponents, related to the Grüneisen parameter, are in good agreement with experimental values for the 16 elements where reliable data were available. Hidden scale invariance is demonstrated in detail for magnesium by showing invariance of structure and dynamics. Computed melting curves of period three metals follow curves with invariance (isomorphs). The experimental structure factor of magnesium is predicted by assuming scale invariant…

  5. Local Scale Invariance and Inflation

    CERN Document Server

    Singh, Naveen K

    2016-01-01

    We study inflation and the cosmological perturbations generated during inflation in a local scale invariant model. The local scale invariant model introduces a vector field $S_{\mu}$ in this theory. In this paper, for simplicity, we consider the temporal part of the vector field, $S_t$. We show that the temporal part is associated with the slow roll parameter of the scalar field. Due to local scale invariance, we have a gauge degree of freedom. In a particular gauge, we show that the local scale invariance provides a sufficient number of e-foldings for the inflation. Finally, we estimate the power spectrum of scalar perturbation in terms of the parameters of the theory.

  6. Scale locality of magnetohydrodynamic turbulence.

    Science.gov (United States)

    Aluie, Hussein; Eyink, Gregory L

    2010-02-26

    We investigate the scale locality of cascades of conserved invariants at high kinetic and magnetic Reynolds numbers in the "inertial-inductive range" of magnetohydrodynamic (MHD) turbulence, where velocity and magnetic field increments exhibit suitable power-law scaling. We prove that fluxes of total energy and cross helicity (or, equivalently, fluxes of Elsässer energies) are dominated by the contributions of local triads. Flux of magnetic helicity may be dominated by nonlocal triads. The magnetic stretching term may also be dominated by nonlocal triads, but we prove that it can convert energy only between velocity and magnetic modes at comparable scales. We explain the disagreement with numerical studies that have claimed conversion nonlocally between disparate scales. We present supporting data from a 1024³ simulation of forced MHD turbulence.

  7. Dimensional scaling in chemical physics

    CERN Document Server

    Avery, John; Goscinski, Osvaldo

    1993-01-01

    Dimensional scaling offers a new approach to quantum dynamical correlations. This is the first book dealing with dimensional scaling methods in the quantum theory of atoms and molecules. Appropriately, it is a multiauthor production, derived chiefly from papers presented at a workshop held in June 1991 at the Ørsted Institute in Copenhagen. Although focused on dimensional scaling, the volume includes contributions on other unorthodox methods for treating nonseparable dynamical problems and electronic correlation. In shaping the book, the editors serve three needs: an introductory tutorial for this still fledgling field; a guide to the literature; and an inventory of current research results and prospects. Part I treats basic aspects of dimensional scaling. Addressed to readers entirely unfamiliar with the subject, it provides both a qualitative overview and a tour of elementary quantum mechanics. Part II surveys the research frontier. The eight chapters exemplify current techniques and outline results. Part...

  8. Fluctuation scaling in point processes

    CERN Document Server

    Koyama, Shinsuke

    2014-01-01

    Fluctuation scaling has been observed universally in a wide variety of phenomena. For time series describing sequences of events, it can be expressed as a power-function relationship between the variance and the mean of either the inter-event interval or counting statistics, depending on the measurement variables. In this article, fluctuation scaling for series of events is formulated for the first time in a way that relates the scaling exponents of the inter-event-interval and counting statistics. It is also shown that a simple mechanism consisting of first-passage time to a threshold for Ornstein-Uhlenbeck processes explains fluctuation scaling with various exponents depending on the subthreshold dynamics. A possible implication of the results is discussed in terms of characterizing the 'intrinsic' variability of neuronal discharges.
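
    A small simulation in the spirit of the mechanism described above (the parameters and the reset rule are assumptions, not taken from the article): an Ornstein-Uhlenbeck process is reset at each threshold crossing, and the variance of the inter-event intervals is regressed against their mean to read off a fluctuation-scaling exponent.

      import numpy as np

      rng = np.random.default_rng(7)

      def ou_first_passage(mu, theta=1.0, sigma=0.5, thresh=1.0, dt=1e-3, n=200):
          """Inter-event intervals from an Ornstein-Uhlenbeck process
          dX = theta*(mu - X) dt + sigma dW, reset to 0 whenever it crosses
          `thresh` (a caricature of integrate-and-fire dynamics)."""
          out, x, t = [], 0.0, 0.0
          while len(out) < n:
              x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
              t += dt
              if x >= thresh:
                  out.append(t)
                  x, t = 0.0, 0.0
          return np.array(out)

      # Sweep the drift so the mean interval varies; fit var = A * mean^alpha.
      means, variances = [], []
      for mu in (1.1, 1.3, 1.6, 2.0, 2.5):
          iei = ou_first_passage(mu)
          means.append(iei.mean())
          variances.append(iei.var())
      alpha = np.polyfit(np.log(means), np.log(variances), 1)[0]
      print(f"fluctuation-scaling exponent alpha ~ {alpha:.2f}")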

  9. Scaling of exploding pusher targets

    Energy Technology Data Exchange (ETDEWEB)

    Nuckolls, J.H.

    1977-08-22

    A theory of exploding pusher laser targets is compared to results of LASNEX calculations and to Livermore experiments. A scaling relationship is described which predicts the optimum target/pulse combinations as a function of the laser power.

  10. Scaling of graphene integrated circuits

    Science.gov (United States)

    Bianchi, Massimiliano; Guerriero, Erica; Fiocco, Marco; Alberti, Ruggero; Polloni, Laura; Behnam, Ashkan; Carrion, Enrique A.; Pop, Eric; Sordan, Roman

    2015-04-01

    The influence of transistor size reduction (scaling) on the speed of realistic multi-stage integrated circuits (ICs) represents the main performance metric of a given transistor technology. Despite extensive interest in graphene electronics, scaling efforts have so far focused on individual transistors rather than multi-stage ICs. Here we study the scaling of graphene ICs based on transistors from 3.3 to 0.5 μm gate lengths and with different channel widths, access lengths, and lead thicknesses. The shortest gate delay of 31 ps per stage was obtained in sub-micron graphene ROs oscillating at 4.3 GHz, which is the highest oscillation frequency obtained in any strictly low-dimensional material to date. We also derived the fundamental Johnson limit, showing that scaled graphene ICs could be used at high frequencies in applications with small voltage swing.

  11. Scale issues in remote sensing

    CERN Document Server

    Weng, Qihao

    2014-01-01

    This book provides up-to-date developments, methods, and techniques in the field of GIS and remote sensing and features articles from internationally renowned authorities on three interrelated perspectives of scaling issues: scale in land surface properties, land surface patterns, and land surface processes. The book is ideal as a professional reference for practicing geographic information scientists and remote sensing engineers as well as a supplemental reading for graduate level students.

  12. Two-Dimensional Vernier Scale

    Science.gov (United States)

    Juday, Richard D.

    1992-01-01

    Modified vernier scale gives accurate two-dimensional coordinates from maps, drawings, or cathode-ray-tube displays. Movable circular overlay rests on fixed rectangular-grid overlay. Pitch of circles nine-tenths that of grid and, for greatest accuracy, radii of circles large compared with pitch of grid. Scale enables user to interpolate between finest divisions of regularly spaced rule simply by observing which mark on auxiliary vernier rule aligns with mark on primary rule.
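
    A one-dimensional illustration of the vernier principle behind the overlay (the 9/10 pitch ratio is from the description above; the numbers are otherwise invented):

      # Circle pitch is 9/10 of grid pitch, so mark k lags the grid by k/10 of a
      # division; the index of the best-aligned mark gives the first decimal.
      p = 1.0                     # grid pitch (arbitrary units)
      true_frac = 0.37            # sub-division position to be read off

      def miss(k):                # distance of vernier mark k to nearest grid line
          m = (true_frac + 0.9 * k * p) % p
          return min(m, p - m)

      k_best = min(range(10), key=miss)
      print(f"vernier reading ~ 0.{k_best} of a division (true value 0.37)")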

  13. Normalization of emotion control scale

    Directory of Open Access Journals (Sweden)

    Hojatoolah Tahmasebian

    2014-09-01

    Full Text Available Background: Emotion control skill teaches individuals how to identify their emotions and how to express and control them in various situations. The aim of this study was to normalize and measure the internal and external validity and reliability of an emotion control test. Methods: This standardization study was carried out on a statistical population comprising all pupils, students, teachers, nurses and university professors in Kermanshah in 2012, using Williams’ emotion control scale. The subjects were 1,500 people (810 females and 690 males) selected by stratified random sampling. Williams’ (1997) emotion control scale was used to collect the required data. The Emotional Control Scale is a tool for measuring the degree of control people have over their emotions. It has four subscales: anger, depressed mood, anxiety and positive affect. The collected data were analyzed by SPSS software using correlation and Cronbach's alpha tests. Results: The internal consistency of the questionnaire, reported as Cronbach's alpha, was acceptable for the emotional control scale, and the correlations between the subscales of the test and between the items of the questionnaire were significant at the 0.01 level. Conclusion: The validity of the emotion control scale among pupils, students, teachers and nurses in Iran is within an acceptable range, and the test items were correlated with each other, thereby making them appropriate for measuring emotion control.
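
    For reference, the internal-consistency statistic reported above is Cronbach's alpha, computed from the k item variances and the total-score variance:

      % Cronbach's alpha for k items, with item variances sigma_i^2 and
      % total-score variance sigma_t^2.
      \[
        \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_t^2}\right).
      \]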

  14. Fundamental Scaling Laws in Nanophotonics

    Science.gov (United States)

    Liu, Ke; Sun, Shuai; Majumdar, Arka; Sorger, Volker J.

    2016-11-01

    The success of information technology has clearly demonstrated that miniaturization often leads to unprecedented performance, and unanticipated applications. This hypothesis of “smaller-is-better” has motivated optical engineers to build various nanophotonic devices, although an understanding leading to fundamental scaling behavior for this new class of devices is missing. Here we analyze scaling laws for optoelectronic devices operating at micro and nanometer length-scale. We show that optoelectronic device performance scales non-monotonically with device length due to the various device tradeoffs, and analyze how both optical and electrical constraints influence device power consumption and operating speed. Specifically, we investigate the direct influence of scaling on the performance of four classes of photonic devices, namely laser sources, electro-optic modulators, photodetectors, and all-optical switches based on three types of optical resonators: microring, Fabry-Perot cavity, and plasmonic metal nanoparticle. Results show that while microrings and Fabry-Perot cavities can outperform plasmonic cavities at larger length-scales, they stop working when the device length drops below 100 nanometers, due to insufficient functionality such as feedback (laser), index-modulation (modulator), absorption (detector) or field density (optical switch). Our results provide a detailed understanding of the limits of nanophotonics, towards establishing an opto-electronics roadmap, akin to the International Technology Roadmap for Semiconductors.

  15. Fundamental Scaling Laws in Nanophotonics.

    Science.gov (United States)

    Liu, Ke; Sun, Shuai; Majumdar, Arka; Sorger, Volker J

    2016-11-21

    The success of information technology has clearly demonstrated that miniaturization often leads to unprecedented performance, and unanticipated applications. This hypothesis of "smaller-is-better" has motivated optical engineers to build various nanophotonic devices, although an understanding leading to fundamental scaling behavior for this new class of devices is missing. Here we analyze scaling laws for optoelectronic devices operating at micro and nanometer length-scale. We show that optoelectronic device performance scales non-monotonically with device length due to the various device tradeoffs, and analyze how both optical and electrical constraints influence device power consumption and operating speed. Specifically, we investigate the direct influence of scaling on the performance of four classes of photonic devices, namely laser sources, electro-optic modulators, photodetectors, and all-optical switches based on three types of optical resonators: microring, Fabry-Perot cavity, and plasmonic metal nanoparticle. Results show that while microrings and Fabry-Perot cavities can outperform plasmonic cavities at larger length-scales, they stop working when the device length drops below 100 nanometers, due to insufficient functionality such as feedback (laser), index-modulation (modulator), absorption (detector) or field density (optical switch). Our results provide a detailed understanding of the limits of nanophotonics, towards establishing an opto-electronics roadmap, akin to the International Technology Roadmap for Semiconductors.

  16. Scale Of Fermion Mass Generation

    CERN Document Server

    Niczyporuk, J M

    2002-01-01

    Unitarity of longitudinal weak vector boson scattering implies an upper bound on the scale of electroweak symmetry breaking, Λ_EWSB ≡ √(8π)v ≈ 1 TeV. Appelquist and Chanowitz have derived an analogous upper bound on the scale of fermion mass generation, proportional to v²/m_f, by considering the scattering of same-helicity fermions into pairs of longitudinal weak vector bosons in a theory without a standard Higgs boson. We show that there is no upper bound, beyond that on the scale of electroweak symmetry breaking, in such a theory. This result is obtained by considering the same process, but with a large number of longitudinal weak vector bosons in the final state. We further argue that there is no scale of (Dirac) fermion mass generation in the standard model. In contrast, there is an upper bound on the scale of Majorana-neutrino mass generation, given by Λ_Maj ≡ 4πv²/m_ν. In general, the upper bound on the scale of fermion mass generation depend...

  17. Chemical Measurement and Fluctuation Scaling.

    Science.gov (United States)

    Hanley, Quentin S

    2016-12-20

    Fluctuation scaling reports on all processes producing a data set. Some fluctuation scaling relationships, such as the Horwitz curve, follow exponential dispersion models, which have useful properties. The mean-variance method applied to Poisson-distributed data is a special case of these properties, allowing the gain of a system to be measured. Here, a general method is described for investigating gain (G), dispersion (β), and process (α) in any system whose fluctuation scaling follows a simple exponential dispersion model, a segmented exponential dispersion model, or complex scaling following such a model locally. When gain and dispersion cannot be obtained directly, relative parameters, G_R and β_R, may be used. The method was demonstrated on data sets conforming to simple, segmented, and complex scaling. These included mass, fluorescence intensity, and absorbance measurements and specifications for classes of calibration weights. Changes in gain, dispersion, and process were observed in the scaling of these data sets in response to instrument parameters, photon fluxes, mathematical processing, and calibration weight class. The process parameter, which limits the type of statistical process that can be invoked to explain a data set, typically exhibited values in the range 0 to 4. With two exceptions, calibration weight class definitions only affected β. Adjusting photomultiplier voltage while measuring fluorescence intensity changed all three parameters (0 < α < 0.8; 0 < β_R < 3; 0 < G_R < 4.1). The method provides a framework for calibrating and interpreting uncertainty in chemical measurement, allowing robust comparison of specific instruments, conditions, and methods.
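
    The mean-variance special case mentioned above can be sketched directly. Assuming the simple power-law form var = β·mean^α of an exponential dispersion model (the function and simulated data below are hypothetical, not the paper's procedure), scaled-Poisson data give α = 1 with β equal to the gain:

    ```python
    import numpy as np

    def fit_fluctuation_scaling(groups):
        """Fit var = beta * mean**alpha across replicate groups by ordinary
        least squares in log-log space (a generic sketch of the
        mean-variance method)."""
        means = np.array([np.mean(g) for g in groups])
        variances = np.array([np.var(g, ddof=1) for g in groups])
        alpha, log_beta = np.polyfit(np.log(means), np.log(variances), 1)
        return alpha, np.exp(log_beta)

    # Poisson counts amplified by a gain G give var = G * mean (alpha = 1),
    # so the fitted beta estimates the gain of the system.
    rng = np.random.default_rng(0)
    G = 2.5  # assumed detector gain
    groups = [G * rng.poisson(lam, size=200) for lam in (5, 20, 80, 320)]
    alpha, beta = fit_fluctuation_scaling(groups)
    print(f"alpha ~ {alpha:.2f}, beta (gain estimate) ~ {beta:.2f}")
    ```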

  18. Feasibility of scaling from pilot to process scale.

    Science.gov (United States)

    Ignatova, Svetlana; Wood, Philip; Hawes, David; Janaway, Lee; Keay, David; Sutherland, Ian

    2007-06-01

    The pharmaceutical industry is looking for new technology that is easy to scale up from analytical to process scale and is cheap and reliable to operate. Large-scale counter-current chromatography is an emerging technology that could provide this advance, but little was known about the key variables affecting scale-up. This paper investigates two such variables: the rotor radius and the tubing bore. The effect of rotor radius was studied using coils of identical length, beta-value, helix angle and tubing bore for rotors of different radii (50 mm, 110 mm and 300 mm). The effect of bore was researched using coils of identical length, helix angle and mean beta-value on the Maxi-DE centrifuge (R = 300 mm). The rotor radius results show that there is very little difference in retention and resolution as rotor radius increases at constant bore. The tubing bore results show that good retention is maintained as bore increases and resolution only decreases slightly, but at the highest bore (17.5 mm) resolution can be maintained at very high flow rates, making it possible for process-scale centrifuges to be designed with throughputs exceeding 25 kg/day.

  19. On the scaling of small-scale jet noise to large scale

    Science.gov (United States)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm in diameter, could be used to correctly simulate the overall or perceived noise level (PNL) of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 × 10⁶ based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  20. Scale Construction: Motivation and Relationship Scale in Education

    Directory of Open Access Journals (Sweden)

    Yunus Emre Demir

    2016-01-01

    Full Text Available The aim of this study is to analyze the validity and reliability of the Turkish version of the Motivation and Relationship Scale (MRS; Raufelder, Drury, Jagenow, Hoferichter & Bukowski, 2013). Participants were 526 students of secondary school. The results of confirmatory factor analysis showed that the 21 items loaded on three factors and that the three-dimensional model fit well (χ² = 640.04, df = 185, RMSEA = .068, NNFI = .90, CFI = .91, IFI = .91, SRMR = .079, GFI = .90, AGFI = .87). Overall, the findings demonstrated that the adapted MRS is a valid and reliable instrument for measuring secondary school children’s motivation in Turkey.

  1. Scaling laws of Rydberg excitons

    Science.gov (United States)

    Heckötter, J.; Freitag, M.; Fröhlich, D.; Aßmann, M.; Bayer, M.; Semina, M. A.; Glazov, M. M.

    2017-09-01

    Rydberg atoms have attracted considerable interest due to their huge interaction among each other and with external fields. They demonstrate characteristic scaling laws in dependence on the principal quantum number n for features such as the magnetic field for level crossing or the electric field of dissociation. Recently, the observation of excitons in highly excited states has allowed studying Rydberg physics in cuprous oxide crystals. Fundamentally different insights may be expected for Rydberg excitons, as the crystal environment and associated symmetry reduction compared to vacuum give not only optical access to many more states within an exciton multiplet but also extend the Hamiltonian for describing the exciton beyond the hydrogen model. Here we study experimentally and theoretically the scaling of several parameters of Rydberg excitons with n, for some of which we indeed find laws different from those of atoms. For others we find identical scaling laws with n, even though their origin may be distinctly different from the atomic case. At zero field the energy splitting of a particular multiplet n scales as n⁻³ due to crystal-specific terms in the Hamiltonian, e.g., from the valence band structure. From absorption spectra in magnetic field we find for the first crossing of levels with adjacent principal quantum numbers a B_r ∝ n⁻⁴ dependence of the resonance field strength, B_r, due to the dominant paramagnetic term, unlike for atoms, for which the diamagnetic contribution is decisive, resulting in a B_r ∝ n⁻⁶ dependence. By contrast, the resonance electric field strength shows a scaling as E_r ∝ n⁻⁵, as for Rydberg atoms. Also, similar to atoms with the exception of hydrogen, we observe anticrossings between states belonging to multiplets with different principal quantum numbers at these resonances. The energy splittings at the avoided crossings scale roughly as n⁻⁴, again due to crystal-specific features in the exciton Hamiltonian. The data also allow us to

  2. Featured Invention: Laser Scaling Device

    Science.gov (United States)

    Dunn, Carol Anne

    2008-01-01

    In September 2003, NASA signed a nonexclusive license agreement with Armor Forensics, a subsidiary of Armor Holdings, Inc., for the laser scaling device under the Innovative Partnerships Program. Coupled with a measuring program, also developed by NASA, the unit provides crime scene investigators with the ability to shoot photographs at scale without having to physically enter the scene, analyzing details such as bloodspatter patterns and graffiti. This ability keeps the scene's components intact and pristine for the collection of information and evidence. The laser scaling device elegantly solved a pressing problem for NASA's shuttle operations team and also provided industry with a useful tool. For NASA, the laser scaling device is still used to measure divots or damage to the shuttle's external tank and other structures around the launchpad. When the invention also met similar needs within industry, the Innovative Partnerships Program provided information to Armor Forensics for licensing and marketing the laser scaling device. Jeff Kohler, technology transfer agent at Kennedy, added, "We also invited a representative from the FBI's special photography unit to Kennedy to meet with Armor Forensics and the innovator. Eventually the FBI ended up purchasing some units. Armor Forensics is also beginning to receive interest from DoD [Department of Defense] for use in military crime scene investigations overseas."

  3. Definition of a nucleophilicity scale.

    Science.gov (United States)

    Jaramillo, Paula; Pérez, Patricia; Contreras, Renato; Tiznado, William; Fuentealba, Patricio

    2006-07-06

    This work deals with exploring some empirical scales of nucleophilicity. We started by evaluating the experimental indices of nucleophilicity proposed by Legon and Millen on the basis of force constants derived from vibrational frequencies using a probe dipole H-X (X = F, CN). The correlation of some theoretical parameters with this experimental scale has been evaluated. The theoretical parameters have been chosen as the minimum of the electrostatic potential V(min), the binding energy (BE) between the nucleophile and the H-X dipole, and the electrostatic potential measured at the position of the hydrogen atom V(H) when the nucleophile-dipole complex is in the equilibrium geometry. All of them present good correlations with the experimental nucleophilicity scale. In addition, the BEs of the nucleophiles with two other Lewis acids (one hard, BF(3), and the other soft, BH(3)) have been evaluated. The results suggest that the Legon and Millen nucleophilicity scale and the electrostatic-potential-derived scales can describe in good approximation the reactivity order of the nucleophiles only when the interaction with a probe electrophile is of the hard-hard type. For a covalent interaction that is orbital controlled, a new nucleophilicity index using information from the frontier orbitals of both the nucleophile and the electrophile has been proposed.

  4. Scales of Natural Flood Management

    Science.gov (United States)

    Nicholson, Alex; Quinn, Paul; Owen, Gareth; Hetherington, David; Piedra Lara, Miguel; O'Donnell, Greg

    2016-04-01

    The scientific field of Natural Flood Management (NFM) is receiving much attention and is now widely seen as a valid solution to sustainably manage flood risk whilst offering significant multiple benefits. However, few examples exist looking at NFM on a large scale (>10 km²). Well-implemented NFM has the effect of restoring more natural catchment hydrological and sedimentological processes, which in turn can have significant flood risk and WFD benefits for catchment waterbodies. These catchment-scale improvements in turn allow more 'natural' processes to be returned to rivers and streams, creating a more resilient system. Although certain NFM interventions may appear distant and disconnected from main-stem waterbodies, they will undoubtedly be contributing to WFD at the catchment waterbody scale. This paper offers examples of NFM and explains how they can be maximised through practical design across many scales (from individual feature up to the whole catchment). New tools are presented to assist in the selection of measures and their locations: first to assess the flooding benefit at the local catchment scale, and then a Flood Impact Model that can best reflect the impacts of local changes further downstream. The tools will be discussed in the context of our most recent experiences on NFM projects, including river catchments in the north east of England and in Scotland. This work has encouraged a more integrated approach to flood management planning that can use both traditional and novel NFM strategies in an effective and convincing way.

  5. Allometric scaling in-vitro

    Science.gov (United States)

    Ahluwalia, Arti

    2017-02-01

    About two decades ago, West and coworkers established a model which predicts that metabolic rate follows a three-quarter power relationship with the mass of an organism, based on the premise that tissues are supplied nutrients through a fractal distribution network. Quarter-power scaling is widely considered a universal law of biology, and it is generally accepted that were in-vitro cultures to obey allometric metabolic scaling, they would have more predictive potential and could, for instance, provide a viable substitute for animals in research. This paper outlines a theoretical and computational framework for establishing quarter-power scaling in three-dimensional spherical constructs in-vitro, starting where fractal distribution ends. Allometric scaling in non-vascular spherical tissue constructs was assessed using models of Michaelis-Menten oxygen consumption and diffusion. The models demonstrate that physiological scaling is maintained when about 5 to 60% of the construct is exposed to oxygen concentrations less than the Michaelis-Menten constant, with a significant concentration gradient in the sphere. The results have important implications for the design of downscaled in-vitro systems with physiological relevance.
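
    The oxygen model described above can be sketched numerically: steady-state Michaelis-Menten consumption balancing diffusion in a sphere with a fixed surface concentration. All parameter values below are illustrative, not taken from the paper:

    ```python
    import numpy as np
    from scipy.integrate import solve_bvp

    # Illustrative parameters (not the paper's): oxygen diffusing into a
    # non-vascular spherical construct with Michaelis-Menten consumption.
    D, Vmax, Km, C0, R = 2e-9, 2e-2, 5e-3, 0.2, 5e-4  # SI-like units, assumed

    def ode(r, y):
        # y[0] = concentration C, y[1] = dC/dr; spherical steady state:
        # D * (C'' + 2/r * C') = Vmax * C / (Km + C)
        C, dC = y
        return np.vstack([dC, Vmax / D * C / (Km + C) - 2.0 / r * dC])

    def bc(ya, yb):
        return np.array([ya[1], yb[0] - C0])  # symmetry at centre, fixed surface

    r = np.linspace(R * 1e-3, R, 200)  # start slightly off r = 0 singularity
    y0 = np.zeros((2, r.size)); y0[0] = C0
    sol = solve_bvp(ode, bc, r, y0)

    # Volume fraction below Km (the regime the models tie to 3/4 scaling).
    C = sol.sol(r)[0]
    frac = np.trapz(4 * np.pi * r**2 * (C < Km), r) / (4 / 3 * np.pi * R**3)
    print(f"volume fraction with C < Km: {frac:.0%}")
    ```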

  6. Visions of Atomic Scale Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, T. F. [Cameca Instruments; Miller, Michael K [ORNL; Rajan, Krishna [Iowa State University; Ringer, S. P. [University of Sydney, Australia

    2012-01-01

    A microscope, by definition, provides structural and analytical information about objects that are too small to see with the unaided eye. From the very first microscope, efforts to improve its capabilities and push them to ever-finer length scales have been pursued. In this context, it would seem that the concept of an ultimate microscope would have received much attention by now; but has it really ever been defined? Human knowledge extends to structures on a scale much finer than atoms, so it might seem that a proton-scale microscope or a quark-scale microscope would be the ultimate. However, we argue that an atomic-scale microscope is the ultimate for the following reason: the smallest building block for either synthetic structures or natural structures is the atom. Indeed, humans and nature both engineer structures with atoms, not quarks. So far as we know, all building blocks (atoms) of a given type are identical; it is the assembly of the building blocks that makes a useful structure. Thus, would a microscope that determines the position and identity of every atom in a structure with high precision and for large volumes be the ultimate microscope? We argue, yes. In this article, we consider how it could be built, and we ponder the answer to the equally important follow-on questions: who would care if it is built, and what could be achieved with it?

  7. 30 CFR 56.3202 - Scaling tools.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Scaling tools. 56.3202 Section 56.3202 Mineral... HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Ground Control Scaling and Support § 56.3202 Scaling tools. Where manual scaling is performed, a scaling bar shall be provided. This...

  8. 30 CFR 57.3202 - Scaling tools.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Scaling tools. 57.3202 Section 57.3202 Mineral... HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Ground Control Scaling and Support-Surface and Underground § 57.3202 Scaling tools. Where manual scaling is performed, a scaling...

  9. Large Scale Dynamos in Stars

    Science.gov (United States)

    Vishniac, Ethan T.

    2015-01-01

    We show that a differentially rotating conducting fluid automatically creates a magnetic helicity flux with components along the rotation axis and in the direction of the local vorticity. This drives a rapid growth in the local density of current helicity, which in turn drives a large scale dynamo. The dynamo growth rate derived from this process is not constant, but depends inversely on the large scale magnetic field strength. This dynamo saturates when buoyant losses of magnetic flux compete with the large scale dynamo, providing a simple prediction for magnetic field strength as a function of Rossby number in stars. Increasing anisotropy in the turbulence produces a decreasing magnetic helicity flux, which explains the flattening of the B/Rossby number relation at low Rossby numbers. We also show that the kinetic helicity is always a subdominant effect. There is no kinematic dynamo in real stars.

  10. The Satisfaction With Life Scale.

    Science.gov (United States)

    Diener, E; Emmons, R A; Larsen, R J; Griffin, S

    1985-02-01

    This article reports the development and validation of a scale to measure global life satisfaction, the Satisfaction With Life Scale (SWLS). Among the various components of subjective well-being, the SWLS is narrowly focused to assess global life satisfaction and does not tap related constructs such as positive affect or loneliness. The SWLS is shown to have favorable psychometric properties, including high internal consistency and high temporal reliability. Scores on the SWLS correlate moderately to highly with other measures of subjective well-being, and correlate predictably with specific personality characteristics. It is noted that the SWLS is suited for use with different age groups, and other potential uses of the scale are discussed.

  11. Hidden scale in quantum mechanics

    CERN Document Server

    Giri, Pulak Ranjan

    2007-01-01

    We show that the intriguing localization of a free particle wave-packet is possible due to a hidden scale present in the system. Self-adjoint extensions (SAE) are responsible for introducing this scale into quantum mechanical models through nontrivial boundary conditions. We discuss a couple of classically scale-invariant free particle systems to illustrate the issue. In this context it has been shown that a free quantum particle moving on a full line may have a localized wave-packet around the origin. As a generalization, it has also been shown that particles moving on a portion of a plane or on a portion of a three-dimensional space can have unusual localized wave-packets.

  12. Functional Scaling of Musculoskeletal Models

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark;

    The validity of the predictions from musculoskeletal models depends largely on how well the morphology of the model matches that of the patient. To address this problem, we present a novel method to scale a cadaver-based musculoskeletal model to match both the segment lengths and joint parameters...... orientations are then used to morph/scale a cadaver based musculoskeletal model using a set of radial basis functions (RBFs). Using the functional joint axes to scale musculoskeletal models provides a better fit to the marker data, and allows for representation of patients with considerable difference in bone...... geometry, without the need for MR/CT scans. However, more validation activities are needed to better understand the effect of morphing musculoskeletal models based on functional joint parameters....
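
    The morphing step can be illustrated with a toy example: radial basis functions fitted between source (cadaver) and target (patient) landmarks define a smooth spatial warp that is then applied to the rest of the model geometry. The coordinates below are invented, and scipy's general-purpose RBFInterpolator stands in for whatever RBF implementation the authors used:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Hypothetical landmark sets: points identified on the cadaver model
    # (source) and matched on the patient (target), e.g. from functional
    # joint-axis estimation. Values are placeholders, not from the paper.
    src = np.array([[0.0, 0.0, 0.0], [0.0, 0.4, 0.0], [0.1, 0.4, 0.0],
                    [0.0, 0.8, 0.0], [0.1, 0.0, 0.1], [0.0, 0.4, 0.1]])
    dst = src * np.array([1.05, 1.10, 0.95])  # patient differs in proportions

    # One RBF per output coordinate defines a smooth warp x -> f(x).
    warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")

    # Apply the warp to any model geometry, e.g. muscle attachment points.
    attachments = np.array([[0.02, 0.2, 0.01], [0.05, 0.6, 0.02]])
    print(warp(attachments))
    ```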

  13. Scaling of boson sampling experiments

    Science.gov (United States)

    Drummond, P. D.; Opanchuk, B.; Rosales-Zárate, L.; Reid, M. D.; Forrester, P. J.

    2016-10-01

    Boson sampling is the problem of generating a multiphoton state whose counting probability is the permanent of an n × n matrix. This is created as the output n-photon coincidence rate of a prototype quantum computing device with n input photons. It is a fundamental challenge to verify boson sampling, and therefore the question of how output count rates scale with matrix size n is crucial. Here we apply results from random matrix theory as well as the characteristic function approach from quantum optics to establish analytical scaling laws for average count rates. We treat boson sampling experiments with arbitrary inputs, outputs, and losses. Using the scaling laws we analyze grouping of channel outputs and the count rates for this case.
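
    A small reference implementation of the permanent is handy when checking such count-rate scalings at modest n. This is the standard Ryser formula, not code from the paper:

    ```python
    import numpy as np

    def permanent(A):
        """Permanent via Ryser's formula, O(2^n * n^2); fine for small n."""
        n = A.shape[0]
        total = 0.0
        for subset in range(1, 1 << n):
            cols = [j for j in range(n) if subset >> j & 1]
            total += (-1) ** len(cols) * np.prod(A[:, cols].sum(axis=1))
        return (-1) ** n * total

    # The n-photon coincidence probability in boson sampling is proportional
    # to |perm(A)|^2 for the relevant n x n submatrix of the unitary.
    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(permanent(A))  # 1*4 + 2*3 = 10
    ```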

  14. Large-scale circuit simulation

    Science.gov (United States)

    Wei, Y. P.

    1982-12-01

    The simulation of VLSI (Very Large Scale Integration) circuits falls beyond the capabilities of conventional circuit simulators like SPICE. On the other hand, conventional logic simulators can only give the results of logic levels 1 and 0, with the attendant loss of detail in the waveforms. The aim of developing large-scale circuit simulation is to bridge the gap between conventional circuit simulation and logic simulation. This research investigates new approaches for fast and relatively accurate time-domain simulation of MOS (Metal Oxide Semiconductor), LSI (Large Scale Integration) and VLSI circuits. New techniques and new algorithms are studied in the following areas: (1) analysis sequencing, (2) nonlinear iteration, (3) modified Gauss-Seidel method, and (4) latency criteria and timestep control scheme. The developed methods have been implemented in a simulation program, PREMOS, which can be used as a design verification tool for MOS circuits.
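
    Of the techniques listed, the Gauss-Seidel relaxation underlying item (3) is easy to sketch. This is the textbook baseline on a small nodal conductance system, not PREMOS's modified variant with latency exploitation or timestep control:

    ```python
    import numpy as np

    def gauss_seidel(G, b, tol=1e-10, max_iter=200):
        """Plain Gauss-Seidel iteration for G x = b, e.g. a nodal
        conductance matrix from circuit analysis."""
        n = len(b)
        x = np.zeros(n)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = G[i, :i] @ x[:i] + G[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / G[i, i]
            if np.max(np.abs(x - x_old)) < tol:
                break
        return x

    # Tiny resistive network (diagonally dominant, so the iteration converges).
    G = np.array([[ 3.0, -1.0,  0.0],
                  [-1.0,  3.0, -1.0],
                  [ 0.0, -1.0,  3.0]])
    b = np.array([1.0, 0.0, 1.0])
    print(gauss_seidel(G, b))
    ```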

  15. Flavor hierarchies from dynamical scales

    CERN Document Server

    Panico, Giuliano

    2016-07-20

    One main obstacle for any beyond the SM (BSM) scenario solving the hierarchy problem is its potentially large contributions to electric dipole moments. An elegant way to avoid this problem is to have the light SM fermions couple to the BSM sector only through bilinears, $\\bar ff$. This possibility can be neatly implemented in composite Higgs models. We study the implications of dynamically generating the fermion Yukawa couplings at different scales, relating larger scales to lighter SM fermions. We show that all flavor and CP-violating constraints can be easily accommodated for a BSM scale of few TeV, without requiring any extra symmetry. Contributions to B physics are mainly mediated by the top, giving a predictive pattern of deviations in $\\Delta F=2$ and $\\Delta F=1$ flavor observables that could be seen in future experiments.

  16. Weyl Current, Scale-Invariant Inflation and Planck Scale Generation

    CERN Document Server

    Ferreira, Pedro G; Ross, Graham G

    2016-01-01

    Scalar fields, $\\phi_i$ can be coupled non-minimally to curvature and satisfy the general criteria: (i) the theory has no mass input parameters, including the Planck mass; (ii) the $\\phi_i$ have arbitrary values and gradients, but undergo a general expansion and relaxation to constant values that satisfy a nontrivial constraint, $K(\\phi_i) =$ constant; (iii) this constraint breaks scale symmetry spontaneously, and the Planck mass is dynamically generated; (iv) there can be adequate inflation associated with slow roll in a scale invariant potential subject to the constraint; (v) the final vacuum can have a small to vanishing cosmological constant (vi) large hierarchies in vacuum expectation values can naturally form; (vii) there is a harmless dilaton which naturally eludes the usual constraints on massless scalars. These models are governed by a global Weyl scale symmetry and its conserved current, $K_\\mu$ . At the quantum level the Weyl scale symmetry can be maintained by an invariant specification of renorma...

  17. IMF Length Scales and Predictability: The Two Length Scale Medium

    Science.gov (United States)

    Collier, Michael R.; Szabo, Adam; Slavin, James A.; Lepping, R. P.; Kokubun, S.

    1999-01-01

    We present preliminary results from a systematic study using simultaneous data from three spacecraft, Wind, IMP 8 (Interplanetary Monitoring Platform) and Geotail, to examine interplanetary length scales and their implications on predictability for magnetic field parcels in the typical solar wind. Time periods were selected when the plane formed by the three spacecraft included the GSE (Geocentric Solar Ecliptic) x-direction, so that if the parcel fronts were strictly planar, the two adjacent spacecraft pairs would determine the same phase front angles. After correcting for the motion of the Earth relative to the interplanetary medium and deviations in the solar wind flow from radial, we used differences in the measured front angle between the two spacecraft pairs to determine the structure radius of curvature. Results indicate that the typical radius of curvature for these IMF parcels is of the order of 100 R_E. This implies that there are two important IMF (Interplanetary Magnetic Field) scale lengths relevant to predictability: (1) the well-established scale length over which correlations observed by two spacecraft decay along a given IMF parcel, of the order of a few tens of Earth radii, and (2) the scale length over which two spacecraft are unlikely to even observe the same parcel because of its curvature, of the order of a hundred Earth radii.

  18. Weyl current, scale-invariant inflation, and Planck scale generation

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Pedro G. [Univ. of Oxford (United Kingdom). Dept. of Physics; Hill, Christopher T. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Ross, Graham G. [Univ. of Oxford (United Kingdom). Rudolf Peierls Centre for Theoretical Physics

    2017-02-08

    Scalar fields, $\\phi$i, can be coupled nonminimally to curvature and satisfy the general criteria: (i) the theory has no mass input parameters, including MP=0; (ii) the $\\phi$i have arbitrary values and gradients, but undergo a general expansion and relaxation to constant values that satisfy a nontrivial constraint, K($\\phi$i)=constant; (iii) this constraint breaks scale symmetry spontaneously, and the Planck mass is dynamically generated; (iv) there can be adequate inflation associated with slow roll in a scale-invariant potential subject to the constraint; (v) the final vacuum can have a small to vanishing cosmological constant; (vi) large hierarchies in vacuum expectation values can naturally form; (vii) there is a harmless dilaton which naturally eludes the usual constraints on massless scalars. Finally, these models are governed by a global Weyl scale symmetry and its conserved current, Kμ. At the quantum level the Weyl scale symmetry can be maintained by an invariant specification of renormalized quantities.

  19. Critical Multicultural Education Competencies Scale: A Scale Development Study

    Science.gov (United States)

    Acar-Ciftci, Yasemin

    2016-01-01

    The purpose of this study is to develop a scale in order to identify the critical mutlicultural education competencies of teachers. For this reason, first of all, drawing on the knowledge in the literature, a new conceptual framework was created with deductive method based on critical theory, critical race theory and critical multicultural…

  20. Learning From the Furniture Scale

    DEFF Research Database (Denmark)

    Hvejsel, Marie Frier; Kirkegaard, Poul Henning

    2017-01-01

    Given its proximity to the human body, the furniture scale holds a particular potential in grasping the fundamental aesthetic potential of architecture to address its inhabitants by means of spatial ‘gestures’. Likewise, it holds a technical germ in realizing this potential given its immediate...... tangibility allowing experimentation with the ‘principles’ of architectural construction. In present paper we explore this dual tectonic potential of the furniture scale as an epistemological foundation in architectural education. In this matter, we discuss the conduct of a master-level course where we...

  1. Scaling relation for earthquake networks

    CERN Document Server

    Abe, Sumiyoshi

    2008-01-01

    The scaling relation derived by Dorogovtsev, Goltsev, Mendes and Samukhin [Phys. Rev. E, 68 (2003) 046109] states that the exponents of the power-law connectivity distribution, γ, and the power-law eigenvalue distribution of the adjacency matrix, δ, of a locally treelike scale-free network satisfy 2γ − δ = 1 in the mean field approximation. Here, it is shown that this relation holds well for the reduced simple earthquake networks (without tadpole-loops and multiple edges) constructed from the seismic data taken from California and Japan. The result is interpreted from the viewpoint of the hierarchical organization of the earthquake networks.

  2. Pelamis WEC - intermediate scale demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Yemm, R.

    2003-07-01

    This report describes the successful building and commissioning of an intermediate 1/7th scale model of the Pelamis Wave Energy Converter (WEC) and its testing in the wave climate of the Firth of Forth. Details are given of the design of the semi-submerged articulated structure of cylindrical elements linked by hinged joints. The specific programme objectives and conclusions, development issues addressed, and key remaining risks are discussed along with development milestones to be passed before the Pelamis WEC is ready for full-scale prototype testing.

  3. Integral equations on time scales

    CERN Document Server

    Georgiev, Svetlin G

    2016-01-01

    This book offers the reader an overview of recent developments of integral equations on time scales. It also contains elegant analytical and numerical methods. This book is primarily intended for senior undergraduate students and beginning graduate students of engineering and science courses. The students in mathematical and physical sciences will find many sections of direct relevance. The book contains nine chapters and each chapter is pedagogically organized. This book is specially designed for those who wish to understand integral equations on time scales without having extensive mathematical background.

  4. Scaling properties of universal tetramers.

    Science.gov (United States)

    Hadizadeh, M R; Yamashita, M T; Tomio, Lauro; Delfino, A; Frederico, T

    2011-09-23

    We evidence the existence of a universal correlation between the binding energies of successive four-boson bound states (tetramers), for large two-body scattering lengths (a), related to an additional scale not constrained by three-body Efimov physics. Relevant to ultracold atom experiments, the atom-trimer relaxation peaks for |a|→∞ when the ratio between the tetramer and trimer energies is ≃4.6 and a new tetramer is formed. The new scale is also revealed for a < 0 by the prediction of a correlation between the positions of two successive peaks in the four-atom recombination process.

  5. Continuously-Variable Vernier Scale

    Science.gov (United States)

    Miller, Irvin M.

    1989-01-01

    Easily fabricated device increases precision in reading graphical data. Continuously-variable vernier scale (CVVS) designed to provide greater accuracy to scientists and technologists in reading numerical values from graphical data. Placed on graph and used to interpolate coordinate value of point on curve or plotted point on figure within division on each coordinate axis. Requires neither measurement of line segments where projection of point intersects division nor calculation to quantify projected value. Very flexible device constructed with any kind of scale. Very easy to use, requiring no special equipment of any kind, and saves a considerable amount of time if numerous points are to be evaluated.

  6. Scaling of sand flux over bedforms- experiments to field scale

    Science.gov (United States)

    McElroy, B. J.; Mahon, R. C.; Ashley, T.; Alexander, J. S.

    2015-12-01

    Bed forms are one of the few geomorphic phenomena whose field and laboratory geometric scales have significant overlap. This is similarly true for scales of sediment transport. Whether in the lab or field, at low transport stages and high Rouse numbers where suspension is minimal, sand fluxes scale nonlinearly with transport stage. At high transport stages and low Rouse numbers where suspension is substantial, sand transport scales with Rouse number. In intermediate cases, deformation of bed forms is a direct result of the exchange of sediment between the classically suspended and bed-load volumes. These parameters are straightforwardly measured in the laboratory. However, practical difficulties and cost ineffectiveness often exclude bed-sediment measurements from studies and monitoring efforts aimed at estimating sediment loads in rivers. An alternative to direct sampling is through the measurement of the evolution of bed topography constrained by sediment-mass conservation. Historically, the topographic-evolution approach has been limited to systems with negligible transport of sand in suspension. As was shown decades ago, pure bed-load transport is responsible for the mean migration of trains of bed forms when no sediment is exchanged between individual bed forms. In contrast, the component of bed-material load that moves in suspension is responsible for changes in the size, shape, and spacing of evolving bed forms; collectively this is called deformation. The difference between bed-load flux and bed-material-load flux equals the flux of suspended bed material. We give a partial demonstration of this using available field and laboratory data, comparing them across geometric and sediment-transport scales.

  7. Scaling Limits of Graphene Nanoelectrodes.

    Science.gov (United States)

    Sarwat, Syed Ghazi; Gehring, Pascal; Rodriguez Hernandez, Gerardo; Warner, Jamie H; Briggs, G Andrew D; Mol, Jan A; Bhaskaran, Harish

    2017-06-14

    Graphene nanogap electrodes have been of recent interest in a variety of fields, ranging from molecular electronics to phase change memories. Several recent reports have highlighted that scaling graphene nanogaps to even smaller sizes is a promising route to more efficient and robust molecular and memory devices. Despite the significant interest, the operating and scaling limits of these electrodes are completely unknown. In this paper, we report on our observations of consistent voltage driven resistance switching in sub-5 nm graphene nanogaps. We find that such electrical switching from an insulating state to a conductive state occurs at very low currents and voltages (0.06 μA and 140 mV), independent of the conditions (room ambient, low temperatures, as well as in vacuum), thus portending potential limits to scaling of functional devices with carbon electrodes. We then associate this phenomenon to the formation and rupture of carbon chains. Using a phase change material in the nanogap as a demonstrator device, fabricated using a self-alignment technique, we show that for gap sizes approaching 1 nm the switching is dominated by such carbon chain formation, creating a fundamental scaling limit for potential devices. These findings have important implications, not only for fundamental science, but also in terms of potential applications.

  8. Scaling up of renewable chemicals.

    Science.gov (United States)

    Sanford, Karl; Chotani, Gopal; Danielson, Nathan; Zahn, James A

    2016-04-01

    The transition of promising technologies for production of renewable chemicals from a laboratory scale to commercial scale is often difficult and expensive. As a result the timeframe estimated for commercialization is typically underestimated resulting in much slower penetration of these promising new methods and products into the chemical industries. The theme of 'sugar is the next oil' connects biological, chemical, and thermochemical conversions of renewable feedstocks to products that are drop-in replacements for petroleum derived chemicals or are new to market chemicals/materials. The latter typically offer a functionality advantage and can command higher prices that result in less severe scale-up challenges. However, for drop-in replacements, price is of paramount importance and competitive capital and operating expenditures are a prerequisite for success. Hence, scale-up of relevant technologies must be interfaced with effective and efficient management of both cell and steel factories. Details involved in all aspects of manufacturing, such as utilities, sterility, product recovery and purification, regulatory requirements, and emissions must be managed successfully.

  9. Multidimensional Scaling and Its Applications.

    Science.gov (United States)

    Davison, Mark L., Ed.; Jones, Lawrence E., Ed.

    1983-01-01

    This special issue describes multidimensional scaling (MDS), with emphasis on proximity and preference models. An introduction and six papers review statistical developments in MDS study design and scrutinize MDS research in four areas of application (consumer, social, cognitive, and vocational psychology). (SLD)

  10. Tera Scale Systems and Applications

    Science.gov (United States)

    Niggley, Chuck; Ciotti, Bob; Parks, John W. (Technical Monitor)

    2002-01-01

    This presentation discusses NASA's efforts to develop tera scale systems designed to push the envelope of supercomputing research. Topics covered include: NASA's existing supercomputing facilities and capabilities, NASA's computational challenges in developing these systems, development of a production supercomputer, and potential research projects which could benefit from these types of systems.

  11. A Time scales Noether's theorem

    OpenAIRE

    Anerot, Baptiste; Cresson, Jacky; Pierret, Frédéric

    2016-01-01

    We prove a time-scales version of Noether's theorem relating groups of symmetries and conservation laws. Our result extends the continuous version of Noether's theorem as well as the discrete one, and corrects a previous statement of Bartosiewicz and Torres in \cite{BT}.

  12. Scale invariance and superfluid turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Siddhartha, E-mail: siddhartha.sen@tcd.ie [CRANN, Trinity College Dublin, Dublin 2 (Ireland); R.K. Mission Vivekananda University, Belur 711 202, West Bengal (India); Ray, Koushik, E-mail: koushik@iacs.res.in [Department of Theoretical Physics, Indian Association for the Cultivation of Science, Calcutta 700 032 (India)

    2013-11-11

    We construct a Schroedinger field theory invariant under local spatial scaling. It is shown to provide an effective theory of superfluid turbulence by analytically deriving the observed Kolmogorov 5/3 law, and it leads to a Biot–Savart interaction between the observed filament excitations of the system as well.

  13. Source Code Analysis Laboratory (SCALe)

    Science.gov (United States)

    2012-04-01

    SCALe undertakes. Testing and calibration laboratories that comply with ISO/IEC 17025 also operate in accordance with ISO 9001. • NIST National...17025:2005 accredited and ISO 9001:2008 registered. 4.3 SAIC Accreditation and Certification Services SAIC (Science Applications International...particular implementation, and executing in a particular execution environment [ISO/IEC 2005]. Successful conformance testing of a software system

  14. Accentuation-suppression and scaling

    DEFF Research Database (Denmark)

    Sørensen, Thomas Alrik; Bundesen, Claus

    2012-01-01

    a scaling mechanism modulating the decision bias of the observer and also through an accentuation-suppression mechanism that modulates the degree of subjective relevance of objects, contracting attention around fewer, highly relevant objects while suppressing less relevant objects. These mechanisms may...

  15. Mixed scale joint graphical lasso

    NARCIS (Netherlands)

    Pircalabelu, E.; Claeskens, G.; Waldorp, L.J.

    2016-01-01

    We have developed a method for estimating brain networks from fMRI datasets that have not all been measured using the same set of brain regions. Some of the coarse scale regions have been split in smaller subregions. The proposed penalized estimation procedure selects undirected graphical models wit
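
    For context, the single-scale sparse-precision estimator that a joint, mixed-scale method builds on can be run directly with scikit-learn. The data below are simulated, and the mixed-scale joint machinery itself is not part of scikit-learn:

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    # Simulated fMRI-like data for 6 regions; one conditional dependency
    # is induced so the estimated graph has a clear edge.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))
    X[:, 1] += 0.8 * X[:, 0]

    # Graphical lasso with cross-validated regularization strength.
    model = GraphicalLassoCV().fit(X)
    print(np.round(model.precision_, 2))  # nonzero off-diagonals = graph edges
    ```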

  16. Interspecies Scaling in Blast Neurotrauma

    Science.gov (United States)

    2015-08-27

    Body." American Journal of Physiology -- Legacy Content 184, 1: 119-26. Clifford, C, Jaeger, J, Moe, J and Hess , J. 1984. "Gastrointestinal lesions...Emergency Care 10, 2: 165-72. Savage, VM, Allen, AP, Brown, JH, Gillooly, JF, Herman , AB, Woodruff, WH and West, GB. 2007. "Scaling of number, size

  17. Optimal scaling in ductile fracture

    Science.gov (United States)

    Fokoua Djodom, Landry

    This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey deformation-theory of plasticity and, in order to further simplify the analysis, we assume isotropic rigid-plastic deformations with zero plastic spin. When hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy. Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, thus resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties. In particular, they reveal the relative roles that surface energy and microplasticity

  18. Dark Matter on small scales; Telescopes on large scales

    CERN Document Server

    Gilmore, G

    2007-01-01

    This article reviews recent progress in observational determination of the properties of dark matter on small astrophysical scales, and progress towards the European Extremely Large Telescope. Current results suggest some surprises: the central DM density profile is typically cored, not cusped, with scale sizes never less than a few hundred pc; the central densities are typically 10-20 GeV/cc; no galaxy is found with a dark mass halo less massive than $\sim 5\times 10^7 M_{\odot}$. We are discovering many more dSphs, which we are analysing to test the generality of these results. The European Extremely Large Telescope Design Study is going forward well, supported by an outstanding scientific case, and founded on detailed industrial studies of the technological requirements.

  19. Scaling Irrational Beliefs in the General Attitude and Belief Scale

    Directory of Open Access Journals (Sweden)

    Lindsay R. Owings

    2013-04-01

    Full Text Available Accurate measurement of key constructs is essential to the continued development of Rational-Emotive Behavior Therapy (REBT). The General Attitude and Belief Scale (GABS), a contemporary inventory of rational and irrational beliefs based on current REBT theory, is one of the most valid and widely used instruments available, and recent research has continued to improve its psychometric standing. In this study of 544 students, item response theory (IRT) methods were used (a) to identify the most informative item in each irrational subscale of the GABS, (b) to determine the level of irrationality represented by each of those items, and (c) to suggest a condensed form of the GABS for further study with clinical populations. Administering only the most psychometrically informative items to clients could result in economies of time and effort. Further research based on the scaling of items could clarify the specific patterns of irrational beliefs associated with particular clinical syndromes.

  20. Optimal scales in weighted networks

    CERN Document Server

    Garlaschelli, Diego; Fink, Thomas M A; Caldarelli, Guido

    2013-01-01

    The analysis of networks characterized by links with heterogeneous intensity or weight suffers from two long-standing problems of arbitrariness. On one hand, the definitions of topological properties introduced for binary graphs can be generalized in non-unique ways to weighted networks. On the other hand, even when a definition is given, there is no natural choice of the (optimal) scale of link intensities (e.g. the money unit in economic networks). Here we show that these two seemingly independent problems can be regarded as intimately related, and propose a common solution to both. Using a formalism that we recently proposed in order to map a weighted network to an ensemble of binary graphs, we introduce an information-theoretic approach leading to the least biased generalization of binary properties to weighted networks, and at the same time fixing the optimal scale of link intensities. We illustrate our method on various social and economic networks.

  1. THE MODERN RACISM SCALE: PSYCHOMETRIC

    Directory of Open Access Journals (Sweden)

    MANUEL CÁRDENAS

    2007-08-01

    Full Text Available An adaptation of McConahay, Harder and Batts' (1981) modern racism scale is presented for the Chilean population, and its psychometric properties (reliability and validity) are studied, along with its relationship with other relevant psychosocial variables in studies on prejudice and ethnic discrimination (authoritarianism, religiousness, political position, etc.), as well as with other forms of prejudice (gender stereotypes and homophobia). The sample consisted of 120 participants, students of psychology, resident in the city of Antofagasta (a geographical zone with a high number of Latin-American immigrants). Our findings show that the scale seems to be a reliable instrument to measure prejudice towards Bolivian immigrants in our social environment. Likewise, important differences are detected between subjects with high and low scores on the psychosocial variables used.

  2. Ruby fluorescence pressure scale: Revisited

    Science.gov (United States)

    Liu, Lei; Bi, Yan; Xu, Ji-An

    2013-05-01

    Effect of non-hydrostatic stress on X-ray diffraction in a diamond anvil cell (DAC) is studied. The pressure gradient in the sample chamber leads to broadening of the diffraction peaks, which increases with the hkl index of the crystal. It is found that the difference between the determined d-spacing compressive ratio d/d0 and the real d-spacing compressive ratio dr/d0 is determined by the yield stress of the pressure-transmitting medium (if used) and the shear modulus of the sample. On the basis of the corrected experimental data of Mao et al. (MXB86), which were used to calibrate the most widely used ruby fluorescence scale, a corrected relationship for the ruby fluorescence pressure scale is obtained, i.e., P = (1904/9.827)[(1 + Δλ/λ0)^9.827 - 1].
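
    As a worked example of the corrected scale, a minimal helper follows, assuming the R1-line shift Δλ is measured in nm; the ambient wavelength λ0 = 694.25 nm is a typical value assumed here, not given in the abstract:

    ```python
    def ruby_pressure(dlam, lam0=694.25):
        """Pressure in GPa from the ruby R1 line shift, using the corrected
        relationship quoted in the abstract:
        P = (1904/9.827) * ((1 + dlam/lam0)**9.827 - 1).
        dlam and lam0 in nm; lam0 = 694.25 nm is an assumed ambient value."""
        return (1904.0 / 9.827) * ((1.0 + dlam / lam0) ** 9.827 - 1.0)

    print(f"{ruby_pressure(3.0):.1f} GPa")  # ~8.4 GPa for a 3 nm red-shift
    ```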

  3. Large-Scale Galaxy Bias

    CERN Document Server

    Desjacques, Vincent; Schmidt, Fabian

    2016-01-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a pedagogical proof of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which includes the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in i...

  4. Frequency scaling for angle gathers

    KAUST Repository

    Zuberi, M. A H

    2014-01-01

    Angle gathers provide an extra dimension to analyze the velocity after migration. Space-shift and time-shift imaging conditions are two methods used to obtain angle gathers, but both are reasonably expensive. By scaling the time-lag axis of the time-shifted images, the computational cost of the time-shift imaging condition can be considerably reduced. In imaging, and more so in full waveform inversion, frequency-domain Helmholtz solvers are used more often to solve for the wavefields than conventional time-domain extrapolators. In such cases, we do not need to extend the image; instead, we scale the frequency axis of the frequency-domain image to obtain the angle gathers more efficiently. Applications on synthetic data demonstrate these features.

  5. Scaling: Lost in the smog

    CERN Document Server

    Louf, Rémi

    2014-01-01

    In this commentary we discuss the validity of scaling laws and their relevance for understanding urban systems and helping policy makers. We show how the recent controversy about the scaling of CO2 transport-related emissions with population size, where different authors reach contradictory conclusions, is symptomatic of the lack of understanding of the underlying mechanisms. In particular, we highlight different sources of errors, ranging from incorrect estimates of CO2 to problems related to the definition of cities. We argue here that while data are necessary to build a new science of cities, they are not enough: they have to go hand in hand with a theoretical understanding of the main processes. This effort of building models whose predictions agree with data is the prerequisite for a science of cities. In the meantime, policy advice is, at best, a shot in the dark.

  6. Impedance Scaling and Impedance Control

    Energy Technology Data Exchange (ETDEWEB)

    Chou, W.; Griffin, J.

    1997-06-01

    When a machine becomes really large, such as the Very Large Hadron Collider (VLHC), of which the circumference could reach the order of megameters, beam instability could be an essential bottleneck. This paper studies the scaling of the instability threshold vs. machine size when the coupling impedance scales in a "normal" way. It is shown that the beam would be intrinsically unstable for the VLHC. As a possible solution to this problem, it is proposed to introduce local impedance inserts for controlling the machine impedance. In the longitudinal plane, this could be done by using a heavily detuned rf cavity (e.g., a biconical structure), which could provide large imaginary impedance with the right sign (i.e., inductive or capacitive) while keeping the real part small. In the transverse direction, a carefully designed variation of the cross section of a beam pipe could generate negative impedance that would partially compensate the transverse impedance in one plane.

  7. Inflation and classical scale invariance

    CERN Document Server

    Racioppi, Antonio

    2014-01-01

    BICEP2 measurement of primordial tensor modes in CMB suggests that cosmological inflation is due to a slowly rolling inflaton taking trans-Planckian values and provides further experimental evidence for the absence of large $M_{\\rm P}$ induced operators. We show that classical scale invariance solves the problem and allows for a remarkably simple scale-free inflaton model without any gauge group. Due to trans-Planckian inflaton values and VEVs, a dynamically induced Coleman-Weinberg-type inflaton potential of the model can predict tensor-to-scalar ratio $r$ in a large range. Precise determination of $r$ in future experiments will allow to test the proposed field-theoretic framework.

  8. Scaling of Information in Turbulence

    CERN Document Server

    Granero-Belinchon, Carlos; Garnier, Nicolas B

    2016-01-01

    We propose a new perspective on turbulence using information theory. We compute the entropy rate of a turbulent velocity signal and particularly focus on its dependence on the scale. We first report how the entropy rate is able to describe the distribution of information amongst scales, and how one can use it to isolate the injection, inertial and dissipative ranges, in perfect agreement with the Batchelor model and with an fBm model. In a second stage, we design a conditioning procedure in order to finely probe the asymmetries in the statistics that are responsible for the energy cascade. Our approach is very generic and can be applied to any multiscale complex system.

  9. Inhibiting scale in oil wells

    Energy Technology Data Exchange (ETDEWEB)

    D' Errico, M.J.; Adler, S.F.

    1972-09-27

    An oil well treatment is described to inhibit the formation of hard scale by precipitation from the oil well brine of scale-forming water-insoluble sulfate, carbonate, and other salts. The process consists of incorporating into the oil well, during a fracturing treatment, a fluid containing a solid polymeric material characterized by a molecular weight in the range of 1,000 to 15,000 and a substantially linear structure, derived by the linear polymerization of at least one monoolefinically unsaturated compound through the olefinically unsaturated group. The linear structure has pendent groups, 50% of which are carboxy groups, the carboxy groups being neutralized with a sufficient proportion of at least one compound having a cation of a metal selected from alkaline earth metals, chromium, aluminum, iron, cobalt, zinc, nickel or copper to render the polymer soluble in water at 25 °C to a concentration of not more than 50 ppm. (8 claims)

  10. 21 CFR 880.2720 - Patient scale.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Patient scale. 880.2720 Section 880.2720 Food and... Patient scale. (a) Identification. A patient scale is a device intended for medical purposes that is used to measure the weight of a patient who cannot stand on a scale. This generic device includes...

  11. Full scale lightning test technique

    Science.gov (United States)

    Walko, L. C.; Schneider, J. G.

    1980-01-01

    A test technique was developed for applying a full scale mean value (30 kiloampere peak) simulated lightning return stroke current on a complete flight ready aircraft to assess the threat of lightning to aircraft electrical circuits. A computer-aided generator design was used to establish the parameters of the test system. Data from previous work done on development of low inductance current paths determined the basic system configuration.

  12. Modeling agreement on bounded scales.

    Science.gov (United States)

    Vanbelle, Sophie; Lesaffre, Emmanuel

    2017-01-01

    Agreement is an important concept in medical and behavioral sciences, in particular in clinical decision making where disagreements possibly imply a different patient management. The concordance correlation coefficient is an appropriate measure to quantify agreement between two scorers on a quantitative scale. However, this measure is based on the first two moments, which could poorly summarize the shape of the score distribution on bounded scales. Bounded outcome scores are common in medical and behavioral sciences. Typical examples are scores obtained on visual analog scales and scores derived as the number of positive items on a questionnaire. These kinds of scores often show a non-standard distribution, like a J- or U-shape, questioning the usefulness of the concordance correlation coefficient as an agreement measure. The logit-normal distribution has been shown to be successful in modeling bounded outcome scores of two types: (1) when the bounded score is a coarsened version of a latent score with a logit-normal distribution on the [0,1] interval and (2) when the bounded score is a proportion with the true probability having a logit-normal distribution. In the present work, a model-based approach, based on a bivariate generalization of the logit-normal distribution, is developed in a Bayesian framework to assess agreement on bounded scales. This method permits direct study of the impact of predictors on the concordance correlation coefficient and can be implemented simply in standard Bayesian software, like JAGS and WinBUGS. The performance of the new method is compared to that of the classical approach using simulations. Finally, the methodology is used in two different medical domains: cardiology and rheumatology.
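
    For reference, the moment-based concordance correlation coefficient in question is straightforward to compute; a minimal sketch on made-up bounded scores:

    ```python
    import numpy as np

    def concordance_cc(x, y):
        """Lin's concordance correlation coefficient between two scorers.
        Built from the first two moments only, which is exactly why the
        abstract questions it for J- or U-shaped bounded scores."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.mean((x - x.mean()) * (y - y.mean()))
        return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    scorer1 = [0.10, 0.15, 0.80, 0.92, 0.88, 0.05]  # bounded [0,1] scores
    scorer2 = [0.12, 0.20, 0.75, 0.95, 0.90, 0.10]
    print(f"CCC = {concordance_cc(scorer1, scorer2):.3f}")
    ```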

  13. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40% through dimensioning and optimising the system at the design stage. (orig.)

  14. Scaling Exponents in Financial Markets

    Science.gov (United States)

    Kim, Kyungsik; Kim, Cheol-Hyun; Kim, Soo Yong

    2007-03-01

    We study the dynamical behavior of four exchange rates in foreign exchange markets. A detrended fluctuation analysis (DFA) is applied to detect the long-range correlation embedded in the non-stationary time series. For our case, it is found that there exists a persistent long-range correlation in the volatilities, which implies a deviation from the efficient market hypothesis. In particular, a crossover is shown to exist in the scaling behaviors of the volatilities.
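
    For readers unfamiliar with DFA, the sketch below outlines the standard procedure (integrate the series, detrend in windows, fit a log-log slope); it is a generic illustration under assumed parameters, not the authors' code or data.

```python
import numpy as np

def dfa_exponent(series, scales):
    """Detrended fluctuation analysis: return the exponent alpha of
    F(n) ~ n^alpha. alpha ~ 0.5 means no long-range correlation;
    alpha > 0.5 indicates persistent correlations like those reported
    for exchange-rate volatilities."""
    y = np.cumsum(series - np.mean(series))      # integrated profile
    fluct = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        rms = []
        for seg in segs:                          # linear detrending per window
            coef = np.polyfit(t, seg, 1)
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(1)
noise = rng.normal(size=4096)                    # uncorrelated "returns"
scales = np.unique(np.logspace(1, 3, 12).astype(int))
print(f"alpha ~ {dfa_exponent(np.abs(noise), scales):.2f}")  # near 0.5
```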

  15. Scale effects in workplace innovations

    OpenAIRE

    Jan Kok; Sophie Doove; Peter Oeij; Karolus Kraan

    2014-01-01

    Workplace innovation can be defined as the implementation of new and combined interventions in work organisation, HRM and supportive technologies, and strategies to improve performance of organisations and quality of jobs. Previous research confirms the presence of a positive relationship between workplace innovation and firm performance. Within this study we are interested in the scale effects in workplace innovation. Does firm size moderate the relationship between workplace innovation and ...

  16. Integrable Equations on Time Scales

    OpenAIRE

    Gurses, Metin; Guseinov, Gusein Sh.; Silindir, Burcu

    2005-01-01

    Integrable systems are usually given in terms of functions of continuous variables (on ${\mathbb R}$), functions of discrete variables (on ${\mathbb Z}$) and recently in terms of functions of $q$-variables (on ${\mathbb K}_{q}$). We formulate the Gel'fand-Dikii (GD) formalism on time scales by using the delta differentiation operator and find more general integrable nonlinear evolutionary equations. In particular they yield integrable equations over integers (difference equations) and over $q...

  17. Testing gravity on Large Scales

    OpenAIRE

    Raccanelli Alvise

    2013-01-01

    We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep...

  18. Large scale cluster computing workshop

    Energy Technology Data Exchange (ETDEWEB)

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near term projects within High Energy Physics and other computing communities will deploy clusters at the scale of 1000s of processors and be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community.

  19. Mechanically reliable scales and coatings

    Energy Technology Data Exchange (ETDEWEB)

    Tortorelli, P.F.; Alexander, K.B. [Oak Ridge National Lab., TN (United States)

    1995-06-01

    In many high-temperature fossil energy systems, corrosion and deleterious environmental effects arising from reactions with reactive gases and condensible products often compromise materials performance and, as a consequence, degrade operating efficiencies. Protection of materials from such reactions is best afforded by the formation of stable surface oxides (either as deposited coatings or thermally grown scales) that are slowly reacting, continuous, dense, and adherent to the substrate. However, the ability of normally brittle ceramic films and coatings to provide such protection has long been problematical, particularly for applications involving numerous or severe high-temperature thermal cycles or very aggressive (for example, sulfidizing) environments. A satisfactory understanding of how scale and coating integrity and adherence are improved by compositional, microstructural, and processing modifications is lacking. Therefore, to address this issue, the present work is intended to define the relationships between substrate characteristics (composition, microstructure, and mechanical behavior) and the structure and protective properties of deposited oxide coatings and/or thermally grown scales. Such information is crucial to the optimization of the chemical, interfacial, and mechanical properties of the protective oxides on high-temperature materials through control of processing and composition and directly supports the development of corrosion-resistant, high-temperature materials for improved energy and environmental control systems.

  20. A Lab-Scale CELSS

    Science.gov (United States)

    Flynn, Mark E.; Finn, Cory K.; Srinivasan, Venkatesh; Sun, Sidney; Harper, Lynn D. (Technical Monitor)

    1994-01-01

    It has been shown that prohibitive resupply costs for extended-duration manned space flight missions will demand that a high degree of recycling and in situ food production be implemented. A prime candidate for in situ food production is the growth of higher level plants. Research in the area of plant physiology is currently underway at many institutions. This research is aimed at the characterization and optimization of gas exchange, transpiration and food production of higher plants in order to support human life in space. However, there are a number of unresolved issues involved in making plant chambers an integral part of a closed life support system. For example, issues pertaining to the integration of tightly coupled, non-linear systems with small buffer volumes will need to be better understood in order to ensure successful long term operation of a Controlled Ecological Life Support System (CELSS). The Advanced Life Support Division at NASA Ames Research Center has embarked on a program to explore some of these issues and demonstrate the feasibility of the CELSS concept. The primary goal of the Laboratory Scale CELSS Project is to develop a fully-functioning integrated CELSS on a laboratory scale in order to provide insight, knowledge and experience applicable to the design of human-rated CELSS facilities. Phase I of this program involves the integration of a plant chamber with a solid waste processor. This paper will describe the requirements, design and some experimental results from Phase I of the Laboratory Scale CELSS Program.

  1. Development of emotional stability scale

    Directory of Open Access Journals (Sweden)

    M Chaturvedi

    2010-01-01

    Full Text Available Background: Emotional stability remains the central theme in personality studies. The concept of stable emotional behavior at any level is that which reflects the fruits of normal emotional development. The study aims at the development of an emotional stability scale. Materials and Methods: Based on the available literature, the components of emotional stability were identified and 250 items were developed, covering each component. Two-stage elimination of items was carried out, i.e. through judges' opinions and item analysis. Results: Fifty items with the highest 't' values, covering 5 dimensions of emotional stability, viz. pessimism vs. optimism, anxiety vs. calm, aggression vs. tolerance, dependence vs. autonomy, apathy vs. empathy, were retained in the final scale. Reliability as checked by Cronbach's alpha was .81 and by the split-half method it was .79. Content validity and construct validity were checked. Norms are given in the form of cumulative percentages. Conclusion: Based on psychometric principles, a 50-item, self-administered 5-point Likert-type rating scale was developed for the measurement of emotional stability.
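
    The reliability figures quoted above come from standard formulas. As an illustration, here is a minimal sketch of Cronbach's alpha on simulated Likert responses; the data dimensions and generating model are invented for the example.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# 100 respondents answering 50 five-point Likert items driven by one trait.
rng = np.random.default_rng(2)
trait = rng.normal(size=(100, 1))                 # latent stability
scores = np.clip(np.rint(3 + trait + rng.normal(0, 1, (100, 50))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```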

  2. Temporal scaling in information propagation.

    Science.gov (United States)

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-06-18

    For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
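
    A toy sketch of the kind of estimate involved: fit the decay exponent of a propagation probability against interaction latency by logarithmic binning and a log-log regression. This is an illustrative reconstruction on synthetic data, not the authors' estimator or dataset.

```python
import numpy as np

def decay_exponent(latencies, propagated):
    """Fit P(propagate | latency) ~ latency^(-gamma) by binning the
    latencies logarithmically and regressing log-probability on
    log-latency."""
    bins = np.logspace(0, np.log10(latencies.max()), 15)
    idx = np.digitize(latencies, bins)
    lat, prob = [], []
    for b in range(1, len(bins)):
        mask = idx == b
        if mask.sum() > 10:                       # skip sparse bins
            lat.append(latencies[mask].mean())
            prob.append(propagated[mask].mean())
    gamma, _ = np.polyfit(np.log(lat), np.log(prob), 1)
    return -gamma

# Synthetic interactions: propagation probability decays as t^-0.8.
rng = np.random.default_rng(3)
t = rng.uniform(1, 1000, 50_000)
sent = rng.random(50_000) < 0.9 * t ** -0.8
print(f"gamma ~ {decay_exponent(t, sent):.2f}")
```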

  3. Models of large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C.S. (Physics Dept., Univ. of Durham (UK))

    1991-01-01

    The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, Ω ≳ 0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales. (orig.).

  4. Analytic theories of allometric scaling.

    Science.gov (United States)

    Agutter, Paul S; Tuszynski, Jack A

    2011-04-01

    During the 13 years since it was first advanced, the fractal network theory (FNT), an analytic theory of allometric scaling, has been subjected to a wide range of methodological, mathematical and empirical criticisms, not all of which have been answered satisfactorily. FNT presumes a two-variable power-law relationship between metabolic rate and body mass. This assumption has been widely accepted in the past, but a growing body of evidence during the past quarter century has raised questions about its general validity. There is now a need for alternative theories of metabolic scaling that are consistent with empirical observations over a broad range of biological applications. In this article, we briefly review the limitations of FNT, examine the evidence that the two-variable power-law assumption is invalid, and outline alternative perspectives. In particular, we discuss quantum metabolism (QM), an analytic theory based on molecular-cellular processes. QM predicts the large variations in scaling exponent that are found empirically and also predicts the temperature dependence of the proportionality constant, issues that have eluded models such as FNT that are based on macroscopic and network properties of organisms.
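
    The contested two-variable power law B = aM^b is, on logarithmic axes, a straight line, so the exponent is conventionally estimated as a log-log slope. The sketch below illustrates this on synthetic data (all values invented); it shows the assumption under debate, not a defense of it.

```python
import numpy as np

# Fit B = a * M^b by linear regression of log(B) on log(M).
rng = np.random.default_rng(4)
mass = 10 ** rng.uniform(0, 6, 300)              # body masses (arbitrary units)
rate = 0.7 * mass ** 0.75 * np.exp(rng.normal(0, 0.2, 300))  # 3/4 by construction
b, log_a = np.polyfit(np.log(mass), np.log(rate), 1)
print(f"fitted exponent b = {b:.3f}, prefactor a = {np.exp(log_a):.3f}")
```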

  5. Temporal scaling in information propagation

    Science.gov (United States)

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-06-01

    For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.

  6. Brane World Models Need Low String Scale

    CERN Document Server

    Antoniadis, Ignatios; Calmet, Xavier

    2011-01-01

    Models with large extra dimensions offer the possibility of the Planck scale being of order the electroweak scale, thus alleviating the gauge hierarchy problem. We show that these models suffer from a breakdown of unitarity at around three quarters of the low effective Planck scale. An obvious candidate to fix the unitarity problem is string theory. We therefore argue that it is necessary for the string scale to appear below the effective Planck scale and that the first signature of such models would be string resonances. We further translate experimental bounds on the string scale into bounds on the effective Planck scale.

  7. Patch scales in coastal ecosystems

    Science.gov (United States)

    Broitman, Bernardo R.

    Quantifying the spatial and temporal scales over which ecological processes are coupled to environmental variability is a major challenge for ecologists. Here, I assimilate patterns of oceanographic variability with ecological field studies in an attempt to quantify spatial and temporal scales of coupling. Using coastal time series of chlorophyll-a concentration from remote sensing, the first chapter examines the alongshore extent of coastal regions subject to similar temporal patterns of oceanographic variability in Western North America (WNA) and North-Central Chile (Chile). I found striking interhemispherical differences in the length of coastal sections under similar oceanographic regimes, with the Chile region showing longshore coherency over much smaller spatial scales (˜60 km) than on the coast of WNA (˜140 km). Through a spatial analysis of coastal orientation I suggest that the characteristic length scales may be traced to the geomorphologic character of the ocean margins. The second chapter examines spatial patterns of primary production through long-term means of coastal chlorophyll-a concentration and kelp (Macrocystis pyrifera) cover and explores their relationship with coastal geomorphology and sea surface temperature (SST). Spatial analyses showed a striking match in length scales around 180--250 km. Strong anticorrelations at small spatial lags and positive correlations at longer distances suggest little overlap between patches of kelp and coastal chlorophyll-a. In agreement with findings from the previous chapter, I found that coastal patches could be traced back to spatial patterns of coastal geomorphology. Through SST time series and long-term datasets of larval recruitment in Santa Cruz Island, California, the third chapter examines temporal patterns of oceanographic variability as determinants of ecological patterns. SST time series from sites experiencing low larval recruitment rates were dominated by strong temporal variability. These sites

  8. Evaluating the impact of farm scale innovation at catchment scale

    Science.gov (United States)

    van Breda, Phelia; De Clercq, Willem; Vlok, Pieter; Querner, Erik

    2014-05-01

    Hydrological modelling lends itself to other disciplines very well, normally as a process based system that acts as a catalogue of events taking place. These hydrological models are spatial-temporal in their design and are generally well suited for what-if situations in other disciplines. Scaling should therefore be a function of the purpose of the modelling. Process is always linked with scale or support but the temporal resolution can affect the results if the spatial scale is not suitable. The use of hydrological response units tends to lump area around physical features but disregards farm boundaries. Farm boundaries are often the more crucial uppermost resolution needed to gain more value from hydrological modelling. In the Letaba Catchment of South Africa, we find a generous portion of landuses, different models of ownership, different farming systems ranging from large commercial farms to small subsistence farming. All of these have the same basic right to water but water distribution in the catchment is somewhat of a problem. Since water quantity is also a problem, the water supply systems need to take into account that valuable production areas not be left without water. Clearly hydrological modelling should therefore be sensitive to specific landuse. As a measure of productivity, a system of small farmer production evaluation was designed. This activity presents a dynamic system outside hydrological modelling that is generally not being considered inside hydrological modelling but depends on hydrological modelling. For sustainable development, a number of important concepts needed to be aligned with activities in this region, and the regulatory actions also need to be adhered to. This study aimed at aligning the activities in a region to the vision and objectives of the regulatory authorities. South Africa's system of socio-economic development planning is complex and mostly ineffective. There are many regulatory authorities involved, often with unclear

  9. A scale invariance criterion for LES parametrizations

    Directory of Open Access Journals (Sweden)

    Urs Schaefer-Rolffs

    2015-01-01

    Full Text Available Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check if the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant. The criterion also proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by the hydrostatic balance the energy cascade is due to horizontal advection and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.
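
    As a point of reference for the Smagorinsky discussion, a minimal sketch of the classical closure nu_t = (C_s * Delta)^2 |S| follows. The constant C_s = 0.17 and the sample gradient tensor are illustrative assumptions; it is precisely the fixed constant that breaks scale invariance and that the dynamic model replaces with a flow-dependent value.

```python
import numpy as np

def smagorinsky_viscosity(grad_u, delta, c_s=0.17):
    """Classical Smagorinsky subgrid viscosity nu_t = (C_s*delta)^2 |S|,
    with |S| = sqrt(2 S_ij S_ij). A fixed C_s ties nu_t to one filter
    width; the dynamic procedure computes C_s from the flow instead."""
    s = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))   # strain-rate tensor
    s_mag = np.sqrt(2.0 * np.einsum('...ij,...ij', s, s))
    return (c_s * delta) ** 2 * s_mag

# Velocity-gradient tensor at one grid point, filter width 0.01 (invented).
grad_u = np.array([[0.0, 1.5, 0.0],
                   [0.2, 0.0, 0.0],
                   [0.0, 0.0, -0.1]])
print(f"nu_t = {smagorinsky_viscosity(grad_u, 0.01):.3e}")
```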

  10. Fractional Scaling Analysis for IRIS pressurizer reduced scale experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bezerra da Silva, Mario Augusto, E-mail: mabs500@gmail.co [Departamento de Energia Nuclear - Centro de Tecnologia e Geociencias, Universidade Federal de Pernambuco, Av. Prof. Luiz Freire, 1000, 50740-540 Recife, PE (Brazil); Brayner de Oliveira Lira, Carlos Alberto, E-mail: cabol@ufpe.b [Departamento de Energia Nuclear - Centro de Tecnologia e Geociencias, Universidade Federal de Pernambuco, Av. Prof. Luiz Freire, 1000, 50740-540 Recife, PE (Brazil); Oliveira Barroso, Antonio Carlos de, E-mail: barroso@ipen.b [Instituto de Pesquisas Energeticas e Nucleares - Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes, 2242, 05508-900 Cidade Universitaria, Sao Paulo (Brazil)

    2010-10-15

    About twenty organizations joined in a consortium led by Westinghouse to develop an integral, modular and medium size pressurized water reactor (PWR), known as international reactor innovative and secure (IRIS), which is characterized by having most of its components inside the pressure vessel, eliminating or minimizing the probability of severe accidents. The pressurizer is responsible for pressure control in PWRs. A small continuous flow is maintained by the spray system in conventional pressurizers. This mini-flow allows a mixing between the reactor coolant and the pressurizer water, warranting acceptable limits for occasional differences in boron concentrations. There are neither surge lines nor spray in the IRIS pressurizer, but surge and recirculation orifices that promote a circulation flow between the primary system and the pressurizer, avoiding power transients when outsurges occur. The construction of models is a routine practice in engineering, being supported by similarity rules. A new method of scaling systems, Fractional Scaling Analysis, has been successfully used to analyze pressure variations, considering the most relevant agents of change. The aim of this analysis is to obtain the initial boron concentration ratio and the volumetric flows that ensure similar behavior for boron dispersion in a prototype and its model.

  11. Preliminary Scaling Estimate for Select Small Scale Mixing Demonstration Tests

    Energy Technology Data Exchange (ETDEWEB)

    Wells, Beric E.; Fort, James A.; Gauglitz, Phillip A.; Rector, David R.; Schonewill, Philip P.

    2013-09-12

    The Hanford Site double-shell tank (DST) system provides the staging location for waste that will be transferred to the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Specific WTP acceptance criteria for waste feed delivery describe the physical and chemical characteristics of the waste that must be met before the waste is transferred from the DSTs to the WTP. One of the more challenging requirements relates to the sampling and characterization of the undissolved solids (UDS) in a waste feed DST because the waste contains solid particles that settle and their concentration and relative proportion can change during the transfer of the waste in individual batches. A key uncertainty in the waste feed delivery system is the potential variation in UDS transferred in individual batches in comparison to an initial sample used for evaluating the acceptance criteria. To address this uncertainty, a number of small-scale mixing tests have been conducted as part of Washington River Protection Solutions’ Small Scale Mixing Demonstration (SSMD) project to determine the performance of the DST mixing and sampling systems.

  12. Preamble to the Catalogo Sismico Nazionale (CSN): the information criteria of the Catalogo Sismico Nazionale (CSN)

    Directory of Open Access Journals (Sweden)

    M. VECCHI

    1979-06-01

    Full Text Available

    This note presents a study meant to set out a national catalogue
    (C.S.N.) of seismic events whose epicentres are to be found in
    Italian territory (or nearby). The catalogue, which will start from the
    year 1450 B.C., could be used for various purposes employing modern
    technologies.
    The complete C.S.N. is made of three main parts, each of which can
    also have a separate life:
    1) An Analytical Catalogue, which comprises the greater part of the
    data and is the most complete;
    2) A Macroseismic Atlas, which shows the macroseismic aspect of
    the most relevant events;
    3) A Macroseismic Catalogue, which translates the Atlas into numeric
    terms.

    To these is added a comprehensive Bibliography subdivided into 24
    chapters, each of which covers one of the 24 time periods into which the
    catalogue has been subdivided.
    The Analytical Catalogue, besides giving the main parameters for each
    earthquake (date, epicentre, hypocentral depth, MCS-scale intensity, magnitude,
    each with its own reliability index), gives indications also on the
    following sideline data:
    1) the epicentral location (meaning by this the geographic region and
    the eventual indication that the epicentre is to be found on a borderline
    touching several geographic regions, or in the sea, or on the coast, or otherwise
    in the external band (see text));
    2) with a reference to the Atlas, the event is shown as having been
    dealt with also through macroseismic data, and it will then be found, as such, in
    the Atlas and in the Macroseismic Catalogue;
    3) indications on the typology of the earthquake: i.e. whether it is an
    « isolated » earthquake or a « seismic period » (and if so with the indication
    of the foreshock, main shock or aftershocks, or of a swarm);
    4) the possibility of indicating up to 7 sets of information suitably
    chosen among 40 additional notes regarding instrumental, geophysical

  13. Dimensional Review of Scales for Forensic Photography.

    Science.gov (United States)

    Ferrucci, Massimiliano; Doiron, Theodore D; Thompson, Robert M; Jones, John P; Freeman, Adam J; Neiman, Janice A

    2016-03-01

    Scales for photography provide a geometrical reference in the photographic documentation of a crime scene, pattern, or item of evidence. The ABFO No. 2 Standard Reference Scale (1) is used by the forensic science community as an accurate reference scale. We investigated the overall accuracy of the major centimeter graduations, internal/external diameters of the circles, error in placement of the circle centers, and leg perpendicularity. Four vendors were selected for the scales, and the features were measured on a vision-based coordinate measurement system. The scales were well within the specified tolerance for the length graduations. After 4 years, the same scales were measured to determine what change could be measured. The scales demonstrated acceptable stability in the scale length and center-to-center measurements; however, the perpendicularity exhibited change. The study results indicate that scale quality checks using certified metal rulers are good practice.

  14. Scaling Behaviors of Branched Polymers

    CERN Document Server

    Aoki, H; Kawai, H; Kitazawa, Y; Aoki, Hajime; Iso, Satoshi; Kawai, Hikaru; Kitazawa, Yoshihisa

    2000-01-01

    We study the thermodynamic behavior of branched polymers. We first study random walks in order to clarify the thermodynamic relation between the canonical ensemble and the grand canonical ensemble. We then show that correlation functions for branched polymers are given by those for $\phi^3$ theory with a single mass insertion, not those for the $\phi^3$ theory themselves. In particular, the two-point function behaves as $1/p^4$, not as $1/p^2$, in the scaling region. This behavior is consistent with the fact that the Hausdorff dimension of the branched polymer is four.
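
    The $1/p^4$ behaviour follows directly from the single mass insertion: inserting one mass operator on a propagator line is equivalent to differentiating with respect to $m^2$, which squares the propagator. Schematically (a standard textbook step, not taken from the paper):

```latex
% One mass insertion in a propagator line is a derivative w.r.t. m^2:
\[
  G_2(p) \;\propto\; -\frac{\partial}{\partial m^2}\,\frac{1}{p^2 + m^2}
  \;=\; \frac{1}{(p^2 + m^2)^2}
  \;\xrightarrow{\;p^2 \gg m^2\;}\; \frac{1}{p^4}.
\]
```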

  15. The Scales of Gravitational Lensing

    CERN Document Server

    De Paolis, Francesco; Ingrosso, Gabriele; Manni, Luigi; Nucita, Achille; Strafella, Francesco

    2016-01-01

    After exactly a century since the formulation of the general theory of relativity, the phenomenon of gravitational lensing is still an extremely powerful method for investigation in astrophysics and cosmology. Indeed, it is adopted to study the distribution of the stellar component in the Milky Way, to study dark matter and dark energy on very large scales and even to discover exoplanets. Moreover, thanks to technological developments, it will allow the measurement of the physical parameters (mass, angular momentum and electric charge) of supermassive black holes in the centers of our own and nearby galaxies.

  16. Supergroups and economies of scale.

    Science.gov (United States)

    Schlossberg, Steven

    2009-02-01

    With the changing environment for medical practice, physician practice models will continue to evolve. These "supergroups" create economies of scale, but their advantage is not only in the traditional economic sense. Practices with enough size are able to better meet the challenges of medical practice, with increasing regulatory demands, the explosion of clinical knowledge, quality and information technology initiatives, and an increasingly tight labor market. Smaller practices can adapt some of these strategies selectively. Depending on the topic, smaller practices should think differently about how to approach the challenges of practice.

  17. Strings and large scale magnetohydrodynamics

    CERN Document Server

    Olesen, P

    1995-01-01

    From computer simulations of magnetohydrodynamics one knows that a turbulent plasma becomes very intermittent, with the magnetic fields concentrated in thin flux tubes. This situation looks very "string-like", so we investigate whether strings could be solutions of the magnetohydrodynamics equations in the limit of infinite conductivity. We find that the induction equation is satisfied, and we discuss the Navier-Stokes equation (without viscosity) with the Lorentz force included. We argue that the string equations (with non-universal maximum velocity) should describe the large scale motion of narrow magnetic flux tubes, because of a large reparametrization (gauge) invariance of the magnetic and electric string fields.

  18. Water flow at all scales

    DEFF Research Database (Denmark)

    Sand-Jensen, K.

    2006-01-01

    Continuous water flow is a unique feature of streams and distinguishes them from all other ecosystems. The main flow is always downstream but it varies in time and space and can be difficult to measure and describe. The interest of hydrologists, geologists, biologists and farmers in water flow, and its physical impact, depends on whether the main focus is on the entire stream system, the adjacent fields, the individual reaches or the habitats of different species. It is important to learn how to manage flow at all scales, in order to understand the ecology of streams and the biology

  19. The Scales of Gravitational Lensing

    Directory of Open Access Journals (Sweden)

    Francesco De Paolis

    2016-03-01

    Full Text Available After exactly a century since the formulation of the general theory of relativity, the phenomenon of gravitational lensing is still an extremely powerful method for investigation in astrophysics and cosmology. Indeed, it is adopted to study the distribution of the stellar component in the Milky Way, to study dark matter and dark energy on very large scales and even to discover exoplanets. Moreover, thanks to technological developments, it will allow the measurement of the physical parameters (mass, angular momentum and electric charge) of supermassive black holes in the centers of our own and nearby galaxies.

  20. JavaScript at scale

    CERN Document Server

    Boduch, Adam

    2015-01-01

    Have you ever come up against an application that felt like it was built on sand? Maybe you've been tasked with creating an application that needs to last longer than a year before a complete re-write? If so, JavaScript at Scale is your missing documentation for maintaining scalable architectures. There's no prerequisite framework knowledge required for this book, however, most concepts presented throughout are adaptations of components found in frameworks such as Backbone, AngularJS, or Ember. All code examples are presented using ECMAScript 6 syntax, to make sure your applications are ready

  1. Full-Scale Tunnel (FST)

    Science.gov (United States)

    1929-01-01

    Modified propeller and spinner in Full-Scale Tunnel (FST) model. On June 26, 1929, Elton W. Miller wrote to George W. Lewis proposing the construction of a model of the full-scale tunnel. 'The excellent energy ratio obtained in the new wind tunnel of the California Institute of Technology suggests that before proceeding with our full scale tunnel design, we ought to investigate the effect on energy ratio of such factors as: 1. small included angle for the exit cone; 2. carefully designed return passages of circular section as far as possible, without sudden changes in cross sections; 3. tightness of walls. It is believed that much useful information can be obtained by building a model of about 1/16 scale, that is, having a closed throat of 2 ft. by 4 ft. The outside dimensions would be about 12 ft. by 25 ft. in plan and the height 4 ft. Two propellers will be required about 28 in. in diameter, each to be driven by direct current motor at a maximum speed of 4500 R.P.M. Provision can be made for altering the length of certain portions, particularly the exit cone, and possibly for the application of boundary layer control in order to effect satisfactory air flow. This model can be constructed in a comparatively short time, using 2 by 4 framing with matched sheathing inside, and where circular sections are desired they can be obtained by nailing sheet metal to wooden ribs, which can be cut on the band saw. It is estimated that three months will be required for the construction and testing of such a model and that the cost will be approximately three thousand dollars, one thousand dollars of which will be for the motors. No suitable location appears to exist in any of our present buildings, and it may be necessary to build it outside and cover it with a roof.' George Lewis responded immediately (June 27) granting the authority to proceed. He urged Langley to expedite construction and to employ extra carpenters if necessary. Funds for the model came from the FST project

  2. Scalings of pitches in music

    CERN Document Server

    Shi, Y

    1995-01-01

    We investigate correlations among pitches in several songs and pieces of piano music by mapping them to one-dimensional walks. Two kinds of correlations are studied: one is related to the real values of the frequencies, while for the other the frequencies are treated only as different symbols. Long-range power-law behavior is found in both kinds. The first is more meaningful. The structure of music, such as beat, measure and stanza, is reflected in the change of scaling exponents. Some interesting features are observed. Our results demonstrate the viewpoint that the fundamental principle of music is the balance between repetition and contrast.

  3. Adopted: A practical salinity scale

    Science.gov (United States)

    The Unesco/ICES/SCOR/IAPSO Joint Panel on Oceanographic Tables and Standards has recommended the adoption of a Practical Salinity Scale, 1978, and a corresponding new International Equation of State of Seawater, 1980. A full account of the research leading to their recommendation is available in the series Unesco Technical Papers in Marine Science.The parent organizations have accepted the panel's recommendations and have set January 1, 1982, as the date when the new procedures, formulae, and tables should replace those now in use.

  4. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales already observed in other works depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low speed-side of the mixing layer, and a reduced activity on the high speed-side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the large-scale gradients modulation of the small scales has been additionally investigated.

  5. Determination of heavy metals in fish scales

    Directory of Open Access Journals (Sweden)

    Hana Nováková

    2010-12-01

    Full Text Available The outcomes of measurements of the amounts of selected elements in the fish scales of common carp are presented. Concentrations in the scales were identified and differences between the storage of heavy metals in the exposed and covered parts of the scale were studied. The spatial distribution of elements in the fish scale's surface layer was measured by Laser Ablation–Inductively Coupled Plasma–Mass Spectrometry (LA–ICP–MS). The average amount of elements in the dissolved scales was quantified by ICP–MS. The fine structure of fish scales was visualized by phase-contrast Synchrotron radiation (SR) microradiography.

  6. Observed and estimated economic losses in Guadeloupe (French Antilles) after Les Saintes Earthquake (2004). Application to risk comparison

    Science.gov (United States)

    Monfort, Daniel; Reveillère, Arnaud; Lecacheux, Sophie; Muller, Héloise; Grisanti, Ludovic; Baills, Audrey; Bertil, Didier; Sedan, Olivier; Tinard, Pierre

    2013-04-01

    (industry, commerce and tourism), even though in these municipalities intensities were considerably smaller (V to VI on the EMS98 scale). It seems that the damage scenario cannot fully account for this situation and for the greater complexity of industrial and commercial areas. The next step is to compare seismic risk and storm surge risk at a small scale (for 3 municipalities in the Pointe-à-Pitre area, the capital of Guadeloupe) in terms of potential direct economic losses for different return periods. The methodology relies on (i) a probabilistic hazard assessment, (ii) a loss ratio estimation for the exposed elements and (iii) an economic estimation of these assets. Seismic hazard assessment was done for return periods of 100, 475, 1000 and 5000 years. Storm surge hazard assessment is based on the selection of relevant historical cyclones and on the simulation of the associated wave and cyclonic surge. The combined local sea elevations, called "set-up", are then fitted with a statistical distribution in order to obtain their return-period characteristics. Several run-ups are then extracted, the inundation areas are calculated and the relative losses of the affected assets are deduced. Current building vulnerability was adapted for each single risk: vulnerability indices (RISK-UE method) for seismic risk and vulnerability functions for storm surge. A state-of-the-art review of the vulnerability functions available for storm surge and floods in a tropical context has been carried out (CAPRA software, HAZUS software), even if these functions do not explicitly consider the local context of Guadeloupe. Damage caused by wind is not considered. Data from past storm surge events in the French Antilles are not sufficient to build new vulnerability functions. The results have been achieved in the project MATRIX (http://matrix.gpi.kit.edu/), funded by the European Commission in the Seventh Framework Programme (FP7/2007-2013), under grant agreement n° 265138.
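
    As a side note on the return periods used in the hazard assessment, the standard conversion from a return period T to the probability of at least one exceedance in an n-year exposure window is 1 - (1 - 1/T)^n. A small sketch (a generic formula, not taken from the paper):

```python
# Probability that an event with return period T years is exceeded at
# least once during an exposure window of n years: 1 - (1 - 1/T)^n.
def exceedance_probability(return_period_years, window_years):
    return 1.0 - (1.0 - 1.0 / return_period_years) ** window_years

for T in (100, 475, 1000, 5000):                 # hazard levels in the study
    p50 = exceedance_probability(T, 50)
    print(f"T = {T:>4} yr  ->  {p50:.1%} chance of exceedance in 50 yr")
```

    The 475-year return period comes out near 10% in 50 years, which is the familiar design-level convention in seismic hazard work.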

  7. Three Scales of Acephalous Organization

    Directory of Open Access Journals (Sweden)

    Victor MacGill

    2016-04-01

    Full Text Available Dominance-based hierarchies have been taken for granted as the way we structure our organizations, but they are a part of a paradigm that has put our whole existence in peril. There is an urgent need to explore alternative paradigms that take us away from dystopic futures towards preferred, life enhancing paradigms based on wellbeing. One of the alternative ways of organizing ourselves that avoids much of the structural violence of existing organizations is the acephalous group (operating without any structured, ongoing leadership. Decision making becomes distributed, transitory and self-selecting. Such groups are not always appropriate and have their strengths and weaknesses, but they can be a more effective, humane way of organizing ourselves and can open windows to new ways of being. Acephalous groups operate at many different scales and adapt their structure accordingly. For this reason, a comparison of small, medium and large-scale acephalous groups reveals some of the dynamics involved in acephalous functioning and provides a useful overview of these emergent forms of organization and foreshadows the role they may play in future.

  8. Scaling analysis of stock markets

    Science.gov (United States)

    Bu, Luping; Shang, Pengjian

    2014-06-01

    In this paper, we apply the detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations of several stock markets. The DFA method is used for the detection of long-range correlations in time series. The LSDFA method shows more local properties by using local scaling exponents. The DCCA method is a more recently developed method to quantify the cross-correlation of two non-stationary time series. We report the results of auto-correlation and cross-correlation behaviors in three western countries and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the global financial crisis), and 2010-2012 (after the global financial crisis) by using the DFA, LSDFA, and DCCA methods. The findings are that correlations of stocks are influenced by the economic systems of different countries and by the financial crisis. The results indicate that there are stronger auto-correlations in Chinese stocks than in western stocks in any period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng; the LSDFA shows more comprehensive and detailed features than the traditional DFA method, and the integration of China and the world in economy after the global financial crisis; turning to cross-correlations, the six stock markets show different properties, while for the three Chinese stocks the cross-correlations are weakest during the global financial crisis.
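
    As a companion to the DFA sketch given earlier in these records, here is a compact illustration of the detrended covariance that DCCA evaluates at a single window size; the synthetic series and parameters are invented for the example, and repeating over several window sizes and fitting a log-log slope yields the cross-correlation scaling exponent.

```python
import numpy as np

def dcca_fluctuation(x, y, n):
    """Detrended covariance F_DCCA(n) between two series at window size n.
    With x == y this reduces to ordinary DFA."""
    px = np.cumsum(x - np.mean(x))               # integrated profiles
    py = np.cumsum(y - np.mean(y))
    n_seg = len(px) // n
    t = np.arange(n)
    cov = []
    for k in range(n_seg):
        sx, sy = px[k*n:(k+1)*n], py[k*n:(k+1)*n]
        rx = sx - np.polyval(np.polyfit(t, sx, 1), t)   # detrended residuals
        ry = sy - np.polyval(np.polyfit(t, sy, 1), t)
        cov.append(np.mean(rx * ry))
    return np.sqrt(np.abs(np.mean(cov)))

rng = np.random.default_rng(5)
common = rng.normal(size=4096)                   # shared driver of two "markets"
a = common + rng.normal(size=4096)
b = common + rng.normal(size=4096)
print(f"F_DCCA(100) = {dcca_fluctuation(a, b, 100):.2f}")
```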

  9. Dyons near the Planck scale

    CERN Document Server

    Laperashvili, L V; Laperashvili, Larisa

    2006-01-01

    In the present talk we suggest a new model of preons-dyons making composite quark-leptons and bosons, described by the supersymmetric string-inspired flipped E_6\times \tilde E_6 gauge group of symmetry. This investigation predicts the possible extension of the Standard Model to the Family replicated gauge group model of type G^{N_{fam}}, where N_{fam} is the number of families. Here E_6 and \tilde E_6 are non-dual and dual sectors of the theory with hyper-electric g and hyper-magnetic \tilde g charges, respectively. Our model is based on the recent theory of composite non-Abelian flux tubes in SQCD. Considering the breakdown of E_6 (and \tilde E_6) at the Planck scale into the SU(6)\times U(1) gauge group, we have shown that the six types of composite N = 1 supersymmetric non-Abelian flux tubes are created by the condensation of preons-dyons near the Planck scale and have fluxes quantized according to the Z_6 center group of SU(6): \Phi_n = n\Phi_0 (n = \pm 1, \pm 2, \pm 3). These fluxes give three types of k-str...

  10. Unified Theory of Allometric Scaling

    CERN Document Server

    Silva, J K L; Silva, P R; Silva, Jafferson K. L. da; Barbosa, Lauro A.; Silva, Paulo Roberto

    2006-01-01

    A general simple theory for the allometric scaling is developed in the $d+1$-dimensional space ($d$ biological lengths and a physiological time) of metabolic states of organisms. It is assumed that natural selection shaped the metabolic states in such a way that the mass and energy $d+1$-densities are size-invariant quantities (independent of body mass). The different metabolic states (basal and maximum) are described by considering that the biological lengths and the physiological time are related by different transport processes of energy and mass. In the basal metabolism, transportation occurs by ballistic and diffusion processes. In $d=3$, the 3/4 law occurs if the ballistic movement is the dominant process, while the 2/3 law appears when both transport processes are equivalent. Accelerated movement during the biological time is related to the maximum aerobic sustained metabolism, which is characterized by the scaling exponent $2d/(2d+1)$ (6/7 in $d=3$). The results are in good agreement with empirical data...

  11. The Weak Scale from BBN

    CERN Document Server

    Hall, Lawrence J; Ruderman, Joshua T

    2014-01-01

    The measured values of the weak scale, $v$, and the first generation masses, $m_{u,d,e}$, are simultaneously explained in the multiverse, with all these parameters scanning independently. At the same time, several remarkable coincidences are understood. Small variations in these parameters away from their measured values lead to the instability of hydrogen, the instability of heavy nuclei, and either a hydrogen or a helium dominated universe from Big Bang Nucleosynthesis. In the 4d parameter space of $(m_u,m_d,m_e,v)$, catastrophic boundaries are reached by separately increasing each parameter above its measured value by a factor of $(1.4, 1.3, 2.5, \sim 5)$, respectively. The fine-tuning problem of the weak scale in the Standard Model is solved: as $v$ is increased beyond the observed value, it is impossible to maintain a significant cosmological hydrogen abundance for any values of $m_{u,d,e}$ that yield both hydrogen and heavy nuclei stability. For very large values of $v$ a new regime is entered where weak in...

  12. The weak scale from BBN

    Science.gov (United States)

    Hall, Lawrence J.; Pinner, David; Ruderman, Joshua T.

    2014-12-01

    The measured values of the weak scale, v, and the first generation masses, m u, d, e , are simultaneously explained in the multiverse, with all these parameters scanning independently. At the same time, several remarkable coincidences are understood. Small variations in these parameters away from their measured values lead to the instability of hydrogen, the instability of heavy nuclei, and either a hydrogen or a helium dominated universe from Big Bang Nucleosynthesis. In the 4d parameter space of ( m u , m d , m e , v), catastrophic boundaries are reached by separately increasing each parameter above its measured value by a factor of (1.4, 1.3, 2.5, ˜ 5), respectively. The fine-tuning problem of the weak scale in the Standard Model is solved: as v is increased beyond the observed value, it is impossible to maintain a significant cosmological hydrogen abundance for any values of m u, d, e that yield both hydrogen and heavy nuclei stability.

  13. Scaling analysis of affinity propagation.

    Science.gov (United States)

    Furtlehner, Cyril; Sebag, Michèle; Zhang, Xiangliang

    2010-06-01

    We analyze and exploit some scaling properties of the affinity propagation (AP) clustering algorithm proposed by Frey and Dueck [Science 315, 972 (2007)]. Following a divide and conquer strategy we set up an exact renormalization-based approach to address the question of clustering consistency, in particular, how many clusters are present in a given data set. We first observe that the divide and conquer strategy, used on a large data set, hierarchically reduces the complexity O(N^2) to O(N^((h+2)/(h+1))), for a data set of size N and a depth h of the hierarchical strategy. For a data set embedded in a d-dimensional space, we show that this is obtained without notably damaging the precision except in dimension d=2. In fact, for d larger than 2 the relative loss in precision scales as N^((2-d)/((h+1)d)). Finally, under some conditions we observe that there is a value s* of the penalty coefficient, a free parameter used to fix the number of clusters, which separates a fragmentation phase (for s < s*) from a coalescent phase (for s > s*) of the underlying hidden cluster structure. At this precise point holds a self-similarity property which can be exploited by the hierarchical strategy to actually locate its position, as a result of an exact decimation procedure. From this observation, a strategy based on AP can be defined to find out how many clusters are present in a given data set.
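
    The role of the penalty (preference) parameter can be illustrated with the scikit-learn implementation of affinity propagation, assuming scikit-learn is available; the data and preference values below are invented, and the sketch shows only how the cluster count moves between coalescence and fragmentation, not the authors' renormalization scheme.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Scan the preference and watch the number of clusters change: very
# negative values coalesce the partition, values near zero fragment it.
rng = np.random.default_rng(6)
centers = np.array([[0, 0], [8, 0], [0, 8]])
X = np.vstack([c + rng.normal(0, 1, (60, 2)) for c in centers])

for pref in (-500, -50, -5):
    ap = AffinityPropagation(preference=pref, random_state=0).fit(X)
    print(f"preference {pref:>5}: {len(ap.cluster_centers_indices_)} clusters")
```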

  14. The Autonomy Over Smoking Scale.

    Science.gov (United States)

    DiFranza, Joseph R; Wellman, Robert J; Ursprung, W W Sanouri A; Sabiston, Catherine

    2009-12-01

    Our goal was to create an instrument that can be used to study how smokers lose autonomy over smoking and regain it after quitting. The Autonomy Over Smoking Scale was produced through a process involving item generation, focus-group evaluation, testing in adults to winnow items, field testing with adults and adolescents, and head-to-head comparisons with other measures. The final 12-item scale shows excellent reliability (alphas = .91-.97), with a one-factor solution explaining 59% of the variance in adults and 61%-74% of the variance in adolescents. Concurrent validity was supported by associations with age of smoking initiation, lifetime use, smoking frequency, daily cigarette consumption, history of failed cessation, Hooked on Nicotine Checklist scores, and Diagnostic and Statistical Manual of Mental Disorder (4th ed., text rev.; American Psychiatric Association, 2000) nicotine dependence criteria. Potentially useful features of this new instrument include (a) it assesses tobacco withdrawal, cue-induced craving, and psychological dependence on cigarettes; (b) it measures symptom intensity; and (c) it asks about current symptoms only, so it could be administered to quitting smokers to track the resolution of symptoms.

  15. Scaling analysis of stock markets.

    Science.gov (United States)

    Bu, Luping; Shang, Pengjian

    2014-06-01

    In this paper, we apply the detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations of several stock markets. The DFA method is used for the detection of long-range correlations in time series. The LSDFA method shows more local properties by using local scaling exponents. The DCCA method is a more recently developed method to quantify the cross-correlation of two non-stationary time series. We report the results of auto-correlation and cross-correlation behaviors in three western countries and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the global financial crisis), and 2010-2012 (after the global financial crisis) by using the DFA, LSDFA, and DCCA methods. The findings are that correlations of stocks are influenced by the economic systems of different countries and by the financial crisis. The results indicate that there are stronger auto-correlations in Chinese stocks than in western stocks in any period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng; the LSDFA shows more comprehensive and detailed features than the traditional DFA method, and the integration of China and the world in economy after the global financial crisis; turning to cross-correlations, the six stock markets show different properties, while for the three Chinese stocks the cross-correlations are weakest during the global financial crisis.

  16. Toward ecologically scaled landscape indices.

    Science.gov (United States)

    Vos, C C; Verboom, J; Opdam, P F; Ter Braak, C J

    2001-01-01

    Nature conservation is increasingly based on a landscape approach rather than a species approach. Landscape planning that includes nature conservation goals requires integrated ecological tools. However, species differ widely in their response to landscape change. We propose a framework of ecologically scaled landscape indices that takes into account this variation. Our approach is based on a combination of field studies of spatially structured populations (metapopulations) and model simulations in artificial landscapes. From these, we seek generalities in the relationship among species features, landscape indices, and metapopulation viability. The concept of ecological species profiles is used to group species according to characteristics that are important in metapopulations' response to landscape change: individual area requirements as the dominant characteristic of extinction risk in landscape patches and dispersal distance as the main determinant of the ability to colonize patches. The ecological profiles and landscape indices are then integrated into two ecologically scaled landscape indices (ESLI): average patch carrying capacity and average patch connectivity. The field data show that the fraction of occupied habitat patches is correlated with the two ESLI. To put the ESLI into a perspective of metapopulation persistence, we determine the viability for six ecological profiles at different degrees of habitat fragmentation using a metapopulation model and computer-generated landscapes. The model results show that the fraction of occupied patches is a good indicator for metapopulation viability. We discuss how ecological profiles, ESLI, and the viability threshold can be applied for landscape planning and design in nature conservation.
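
    As an illustration of one of the two indices, the sketch below computes an average patch connectivity with a Hanski-style exponential dispersal kernel. This is an assumed functional form, not necessarily the exact ESLI definition in the paper, and the patch data are invented.

```python
import numpy as np

def average_patch_connectivity(coords, areas, alpha):
    """Average patch connectivity in the spirit of the ESLI: for each
    patch i, S_i = sum over j != i of exp(-alpha * d_ij) * A_j
    (an incidence-function-style measure; 1/alpha plays the role of the
    species' mean dispersal distance, A_j a proxy for carrying capacity)."""
    coords = np.asarray(coords, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kernel = np.exp(-alpha * d)
    np.fill_diagonal(kernel, 0.0)                # exclude the patch itself
    return float(np.mean(kernel @ np.asarray(areas, float)))

patches = [(0, 0), (2, 1), (5, 5), (6, 4)]       # patch centroids, km (invented)
areas = [10.0, 4.0, 6.0, 2.0]                    # ha, ~ carrying capacity
print(f"mean connectivity: {average_patch_connectivity(patches, areas, 0.5):.2f}")
```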

  17. The SCALE-UP Project

    Science.gov (United States)

    Beichner, Robert

    2015-03-01

    The Student Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) project was developed nearly 20 years ago as an economical way to provide collaborative, interactive instruction even for large enrollment classes. Nearly all research-based pedagogies have been designed with fairly high faculty-student ratios. The economics of introductory courses at large universities often precludes that situation, so SCALE-UP was created as a way to facilitate highly collaborative active learning with large numbers of students served by only a few faculty and assistants. It enables those students to learn and succeed not only in acquiring content, but also to practice important 21st century skills like problem solving, communication, and teamsmanship. The approach was initially targeted at undergraduate science and engineering students taking introductory physics courses in large enrollment sections. It has since expanded to multiple content areas, including chemistry, math, engineering, biology, business, nursing, and even the humanities. Class sizes range from 24 to over 600. Data collected from multiple sites around the world indicates highly successful implementation at more than 250 institutions. NSF support was critical for initial development and dissemination efforts. Generously supported by NSF (9752313, 9981107) and FIPSE (P116B971905, P116B000659).

  18. Goethite Bench-scale and Large-scale Preparation Tests

    Energy Technology Data Exchange (ETDEWEB)

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate (⁹⁹TcO₄⁻) can be reduced and captured into a solid solution of α-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of the environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for ⁹⁹Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO₄⁻) to Tc(IV) by reaction with the…

  19. Rating on life valuation scale

    Directory of Open Access Journals (Sweden)

    Lapčević Mirjana

    2006-01-01

    Introduction: The World Health Organization (WHO) Articles of Association define health as the state of complete physical, mental and social well-being and not merely the absence of disease. According to this definition, the concept of health is enlarged and encompasses public and personal needs, motives and the psychological nature of a person, education, culture, tradition, religion, etc. All these needs do not have the same rank on a life valuation scale. Objective: The objective of our study was to rank the 6 most important life values out of 12 suggested. Method: A questionnaire on the Life Valuation Scale was used, created by the Serbian Medical Association and the Department of General Medicine, School of Medicine, University of Belgrade. The survey covered 10% of citizens aged 25 to 64 years in 18 places in Serbia, including the Belgrade commune of Vozdovac, and was performed in health institutions and in citizens' residences in 1995/96 by doctors, nurses and field nurses. Results: A total of 14,801 citizens were questioned in Serbia (42.57% men, 57.25% women), and 852 citizens in the Vozdovac commune (34.62% men, 65.38% women). People value things in their lives differently. On the basis of life value scoring, the most important thing in people's lives was health. In Serbia the public rank of health was 4.79%, and 4.4% in the Vozdovac commune. Relations in family came in second place, and engagement in politics in last place. Conclusion: The results for the whole of Serbia and for the Vozdovac commune do not differ significantly from each other, and both demonstrate that people attach the greatest importance to health on the scale of proposed values, with family relationships in second place and political activity last. The high ranking of health and family relationships generally shows that general practitioners in Serbia take an important part in primary…

  20. The Adaptive Multi-scale Simulation Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2015-09-01

    The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation metadata, AMSI allows existing single-scale simulations to be adapted for use in multi-scale simulations with minimally intrusive changes. Support for dynamic runtime operations, such as single- and multi-scale adaptive properties, is a key focus of AMSI. Particular attention has been paid to developing scale-sensitive load-balancing operations, so that single-scale simulations incorporated into a multi-scale simulation through AMSI can use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.

  1. Scaling ansatz for the jamming transition

    Science.gov (United States)

    Goodrich, Carl P.; Liu, Andrea J.; Sethna, James P.

    2016-08-01

    We propose a Widom-like scaling ansatz for the critical jamming transition. Our ansatz for the elastic energy shows that the scaling of the energy, compressive strain, shear strain, system size, pressure, shear stress, bulk modulus, and shear modulus are all related to each other via scaling relations, with only three independent scaling exponents. We extract the values of these exponents from already known numerical or theoretical results, and we numerically verify the resulting predictions of the scaling theory for the energy and residual shear stress. We also derive a scaling relation between pressure and residual shear stress that yields insight into why the shear and bulk moduli scale differently. Our theory shows that the jamming transition exhibits an emergent scale invariance, setting the stage for the potential development of a renormalization group theory for jamming.
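
    For readers unfamiliar with the term, a Widom-like scaling ansatz posits that, near a critical point, a singular quantity is a homogeneous function of its control parameters. The classic form from equilibrium critical phenomena, shown here only to illustrate the structure (the jamming-specific variables and exponent values are those of the paper), is

        $$ f(t, h) = |t|^{2-\alpha}\, \mathcal{F}\!\left( \frac{h}{|t|^{\Delta}} \right), $$

    where t and h are the two relevant control parameters and F is a universal scaling function. Differentiating such an ansatz yields relations among the exponents (for example the Rushbrooke relation α + 2β + γ = 2), which is how a small number of independent exponents can constrain all the others.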

  2. Large-scale dynamics of magnetic helicity

    Science.gov (United States)

    Linkmann, Moritz; Dallas, Vassilios

    2016-11-01

    In this paper we investigate the dynamics of magnetic helicity in magnetohydrodynamic (MHD) turbulent flows, focusing on scales larger than the forcing scale. Our results show a nonlocal inverse cascade of magnetic helicity, which occurs directly from the forcing scale into the largest scales of the magnetic field. We also observe that neither magnetic helicity nor energy is transferred to an intermediate range of scales sufficiently smaller than the container size and larger than the forcing scale. Thus, the statistical properties of this range of scales, whose extent increases with scale separation, are shown to be described to a large extent by the zero-flux solutions of the absolute statistical equilibrium theory exhibited by the truncated ideal MHD equations.

  3. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as applications in physics, biology, neuroscience, sociology and other technical areas

  4. Scaling ansatz for the jamming transition.

    Science.gov (United States)

    Goodrich, Carl P; Liu, Andrea J; Sethna, James P

    2016-08-30

    We propose a Widom-like scaling ansatz for the critical jamming transition. Our ansatz for the elastic energy shows that the scaling of the energy, compressive strain, shear strain, system size, pressure, shear stress, bulk modulus, and shear modulus are all related to each other via scaling relations, with only three independent scaling exponents. We extract the values of these exponents from already known numerical or theoretical results, and we numerically verify the resulting predictions of the scaling theory for the energy and residual shear stress. We also derive a scaling relation between pressure and residual shear stress that yields insight into why the shear and bulk moduli scale differently. Our theory shows that the jamming transition exhibits an emergent scale invariance, setting the stage for the potential development of a renormalization group theory for jamming.

  5. Inductance Scaling of a Helicoil Using ALEGRA

    Science.gov (United States)

    2013-05-01

    The inductance scaling of several helicoil configurations is investigated using the March 2011 release of the Sandia magnetohydrodynamics (MHD) code ALEGRA.

  6. Strongly Scale-dependent Non-Gaussianity

    CERN Document Server

    Riotto, Antonio

    2011-01-01

    We discuss models of primordial density perturbations where the non-Gaussianity is strongly scale-dependent. In particular, the non-Gaussianity may have a sharp cut-off and be very suppressed on large cosmological scales, but sizeable on small scales. This may have an impact on probes of non-Gaussianity in the large-scale structure and in the cosmic microwave background radiation anisotropies.

  7. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  8. The Scaled Thermal Explosion Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Wardell, J F; Maienschein, J L

    2002-07-05

    We have developed the Scaled Thermal Explosion Experiment (STEX) to provide a database of reaction violence from thermal explosion for explosives of interest. Such data are needed to develop, calibrate, and validate a predictive capability for thermal explosions using simulation computer codes. A cylinder of explosive, 25, 50 or 100 mm in diameter, is confined in a steel cylinder with heavy end caps and heated under controlled conditions until reaction. Reaction violence is quantified through non-contact micropower impulse radar measurements of the cylinder wall velocity and by strain gauge data at reaction onset. Here we describe the test concept, design and diagnostic recording, and report results with HMX- and RDX-based energetic materials.

  9. Significant Scales in Community Structure

    CERN Document Server

    Traag, V A; Van Dooren, P

    2013-01-01

    Many complex networks show signs of modular structure, uncovered by community detection. Although many methods succeed in revealing various partitions, it remains difficult to detect at what scale some partition is significant. This problem shows foremost in multi-resolution methods. Here we introduce an efficient method for scanning across resolutions in one such method. Additionally, we introduce the notion of "significance" of a partition, based on subgraph probabilities. Significance is independent of the exact method used, so it could also be applied in other methods, and can be interpreted as the gain in encoding a graph by making use of a partition. Using significance, we can determine "good" resolution parameters, which we demonstrate on benchmark networks. Moreover, optimizing significance itself also shows excellent performance. We demonstrate our method on voting data from the European Parliament. Our analysis suggests the European Parliament has become increasingly ideologically divided and that nationa...

  10. Cognitive Reserve Scale and ageing

    Directory of Open Access Journals (Sweden)

    Irene León

    2016-01-01

    The construct of cognitive reserve attempts to explain why some individuals with brain impairment, and some people during normal ageing, can solve cognitive tasks better than expected. This study aimed to estimate cognitive reserve in a healthy sample of people aged 65 years and over, with special attention to its influence on cognitive performance. For this purpose, it used the Cognitive Reserve Scale (CRS) and a neuropsychological battery that included tests of attention and memory. The results revealed that women obtained higher total CRS raw scores than men. Moreover, the CRS predicted the learning curve and short-term and long-term memory, but not attentional and working memory performance. Thus, the CRS offers a new proxy of cognitive reserve based on cognitively stimulating activities performed by healthy elderly people. Following an active lifestyle throughout life was associated with better intellectual performance and positive effects on relevant aspects of quality of life.

  11. Validation of the Metacomprehension Scale

    Science.gov (United States)

    Moore; Zabrucky; Commander

    1997-10-01

    Evidence for the factorial, convergent and discriminant, and criterion-related validity of the Metacomprehension Scale (MCS) was examined in a sample of 237 young adults. The instrument was factorially heterogeneous but exhibited homogeneity within each of the seven subscales. Evidence for the convergent and discriminant validity of the MCS was examined by correlating the subscales from the MCS with subscales from metacognitive questionnaires measuring similar constructs from related domains. In general, correlations within constructs were larger than correlations between constructs, providing preliminary evidence of the convergent and discriminant validity of the MCS. The criterion-related validity of the MCS relative to other metacognitive measures was examined by using the metacognitive measures and the MCS to predict comprehension performance. The MCS predicted performance better than the other measures of metacognition and accounted for variance in performance not accounted for by the other measures. These results show promise for the value of self-assessments of metacomprehension. Copyright 1997 Academic Press.

  12. Bacterial Communities: Interactions to Scale

    Directory of Open Access Journals (Sweden)

    Reed M. Stubbendieck

    2016-08-01

    In the environment, bacteria live in complex multispecies communities. These communities span in scale from small, multicellular aggregates to billions or trillions of cells within the gastrointestinal tract of animals. The dynamics of bacterial communities are determined by pairwise interactions that occur between different species in the community. Though interactions occur between a few cells at a time, the outcomes of these interchanges have ramifications that ripple through many orders of magnitude, and ultimately affect the macroscopic world including the health of host organisms. In this review we cover how bacterial competition influences the structures of bacterial communities. We also emphasize methods and insights garnered from culture-dependent pairwise interaction studies, metagenomic analyses, and modeling experiments. Finally, we argue that the integration of multiple approaches will be instrumental to future understanding of the underlying dynamics of bacterial communities.

  13. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 ± 5 μm. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further development of sensitive biosensor assays.

  14. Testing gravity on Large Scales

    Directory of Open Access Journals (Sweden)

    Raccanelli Alvise

    2013-09-01

    We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep surveys those corrections need to be taken into account if we want to measure the growth of structures at a few percent level, and so perform tests on gravity, without introducing systematic errors. Finally, we report the results of some recent cosmological model tests carried out using those precise models.

  15. [Virginia Apgar and her scale].

    Science.gov (United States)

    van Gijn, Jan; Gijselhart, Joost P

    2012-01-01

    Virginia Apgar (1909-1974), born in New Jersey, managed to continue medical school despite the financial crisis of 1929, trained briefly in surgery, and subsequently became one of the first specialists in anaesthesiology. In 1949 she was appointed to a professorship, the first woman to reach this rank at Columbia University in New York. She then dedicated herself to obstetric anaesthesiology and devised the well-known scale for the initial assessment of newborn babies according to 5 criteria. From 1959 she worked for the National Foundation for Infantile Paralysis (now March of Dimes), expanding its activities from prevention of poliomyelitis to other aspects of preventive child care, such as rubella vaccination and testing for rhesus antagonism. She remained single; in her private life she enjoyed fly fishing, took lessons in aviation and was an accomplished violinist.

  16. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abroad.

  17. Large Scale Correlation Clustering Optimization

    CERN Document Server

    Bagon, Shai

    2011-01-01

    Clustering is a fundamental task in unsupervised learning. The focus of this paper is the Correlation Clustering functional, which combines positive and negative affinities between the data points. The contribution of this paper is twofold: (i) a theoretic analysis of the functional; (ii) new optimization algorithms which can cope with large scale problems (>100K variables) that are infeasible using existing methods. Our theoretic analysis provides a probabilistic generative interpretation for the functional, and justifies its intrinsic "model-selection" capability. Furthermore, we draw an analogy between optimizing this functional and the well known Potts energy minimization. This analogy allows us to suggest several new optimization algorithms, which exploit the intrinsic "model-selection" capability of the functional to automatically recover the underlying number of clusters. We compare our algorithms to existing methods on both synthetic and real data. In addition we suggest two new applications t...

  18. Relating Biophysical Properties Across Scales

    CERN Document Server

    Flenner, Elijah; Neagu, Adrian; Kosztin, Ioan; Forgacs, Gabor

    2007-01-01

    A distinguishing feature of a multicellular living system is that it operates at various scales, from the intracellular to organismal. Very little is known at present on how tissue level properties are related to cell and subcellular properties. Modern measurement techniques provide quantitative results at both the intracellular and tissue level, but not on the connection between these. In the present work we outline a framework to address this connection. We specifically concentrate on the morphogenetic process of tissue fusion, by following the coalescence of two contiguous multicellular aggregates. The time evolution of this process can accurately be described by the theory of viscous liquids. We also study fusion by Monte Carlo simulations and a novel Cellular Particle Dynamics (CPD) model, which is similar to the earlier introduced Subcellular Element Model (Newman, 2005). Using the combination of experiments, theory and modeling we are able to relate the measured tissue level biophysical quantities to s...

  19. Hypoallometric scaling in international collaborations

    Science.gov (United States)

    Hsiehchen, David; Espinoza, Magdalena; Hsieh, Antony

    2016-02-01

    Collaboration is a vital process and dominant theme in knowledge production, although the effectiveness of policies directed at promoting multinational research remains ambiguous. We examined approximately 24 million research articles published over four decades and demonstrated that the scaling of international publications to research productivity for each country obeys a universal and conserved sublinear power law. Inefficient mechanisms in transborder team dynamics or organization as well as increasing opportunity costs may contribute to the disproportionate growth of international collaboration rates with increasing productivity among nations. Given the constrained growth of international relationships, our findings advocate a greater emphasis on the qualitative aspects of collaborations, such as with whom partnerships are forged, particularly when assessing research and policy outcomes.

  20. Small Scale High Speed Turbomachinery

    Science.gov (United States)

    London, Adam P. (Inventor); Droppers, Lloyd J. (Inventor); Lehman, Matthew K. (Inventor); Mehra, Amitav (Inventor)

    2015-01-01

    A small scale, high speed turbomachine is described, as well as a process for manufacturing the turbomachine. The turbomachine is manufactured by diffusion bonding stacked sheets of metal foil, each of which has been pre-formed to correspond to a cross section of the turbomachine structure. The turbomachines include rotating elements as well as static structures. Using this process, turbomachines may be manufactured with rotating elements that have outer diameters of less than four inches in size, and/or blading heights of less than 0.1 inches. The rotating elements of the turbomachines are capable of rotating at speeds in excess of 150 feet per second. In addition, cooling features may be added internally to blading to facilitate cooling in high temperature operations.

  1. Relating biophysical properties across scales.

    Science.gov (United States)

    Flenner, Elijah; Marga, Francoise; Neagu, Adrian; Kosztin, Ioan; Forgacs, Gabor

    2008-01-01

    A distinguishing feature of a multicellular living system is that it operates at various scales, from the intracellular to organismal. Genes and molecules set up the conditions for the physical processes to act, in particular to shape the embryo. As development continues the changes brought about by the physical processes lead to changes in gene expression. It is this coordinated interplay between genetic and generic (i.e., physical and chemical) processes that constitutes the modern understanding of early morphogenesis. It is natural to assume that in this multiscale process the smaller defines the larger. In case of biophysical properties, in particular, those at the subcellular level are expected to give rise to those at the tissue level and beyond. Indeed, the physical properties of tissues vary greatly from the liquid to solid. Very little is known at present on how tissue level properties are related to cell and subcellular properties. Modern measurement techniques provide quantitative results at both the intracellular and tissue level, but not on the connection between these. In the present work we outline a framework to address this connection. We specifically concentrate on the morphogenetic process of tissue fusion, by following the coalescence of two contiguous multicellular aggregates. The time evolution of this process can accurately be described by the theory of viscous liquids. We also study fusion by Monte Carlo simulations and a novel Cellular Particle Dynamics (CPD) model, which is similar to the earlier introduced Subcellular Element Model (SEM; Newman, 2005). Using the combination of experiments, theory and modeling we are able to relate the measured tissue level biophysical quantities to subcellular parameters. Our approach has validity beyond the particular morphogenetic process considered here and provides a general way to relate biophysical properties across scales.

  2. Strongly Scale-dependent Non-Gaussianity

    DEFF Research Database (Denmark)

    Riotto, Antonio; Sloth, Martin Snoager

    2010-01-01

    We discuss models of primordial density perturbations where the non-Gaussianity is strongly scale-dependent. In particular, the non-Gaussianity may have a sharp cut-off and be very suppressed on large cosmological scales, but sizeable on small scales. This may have an impact on probes of non...

  3. Development of Capstone Project Attitude Scales

    Science.gov (United States)

    Bringula, Rex P.

    2015-01-01

    This study attempted to develop valid and reliable Capstone Project Attitude Scales (CPAS). Among the scales reviewed, the Modified Fennema-Shermann Mathematics Attitude Scales was adapted in the construction of the CPAS. Usefulness, Confidence, and Gender View were the three subscales of the CPAS. Four hundred sixty-three students answered the…

  4. COVERS Neonatal Pain Scale: Development and Validation

    Directory of Open Access Journals (Sweden)

    Ivan L. Hand

    2010-01-01

    Newborns and infants are often exposed to painful procedures during hospitalization. Several different scales have been validated to assess pain in specific populations of pediatric patients, but no single scale can easily and accurately assess pain in all newborns and infants regardless of gestational age and disease state. A new pain scale was developed, the COVERS scale, which incorporates 6 physiological and behavioral measures for scoring. Newborns admitted to the Neonatal Intensive Care Unit or Well Baby Nursery were evaluated for pain/discomfort during two procedures, a heel prick and a diaper change. Pain was assessed using indicators from three previously established scales (CRIES, the Premature Infant Pain Profile, and the Neonatal Infant Pain Scale), as well as the COVERS scale, depending upon gestational age. For premature infants, the COVERS and PIPP scales yielded similar pain assessments, with r = 0.84. For the full-term infants, the COVERS and NIPS scales yielded similar pain assessments, with r = 0.95. The COVERS scale is a valid pain scale that can be used in the clinical setting to assess pain in newborns and infants and is universally applicable to all neonates, regardless of their age or physiological state.

  5. The Attitudes toward Multiracial Children Scale.

    Science.gov (United States)

    Jackman, Charmain F.; Wagner, William G.; Johnson, J. T.

    2001-01-01

    Two studies evaluated items developed for the Attitudes Toward Multiracial Children Scale. Researchers administered the scale to diverse college students, revised it, then administered it again. The scale's psychometric properties were such that the instrument could be used to research adults' attitudes regarding psychosocial development of…

  6. Prediction of Ductile Fracture Surface Roughness Scaling

    DEFF Research Database (Denmark)

    Needleman, Alan; Tvergaard, Viggo; Bouchaud, Elisabeth

    2012-01-01

    Experimental observations have shown that the roughness of fracture surfaces exhibits certain characteristic scaling properties. Here, calculations are carried out to explore the extent to which a ductile damage/fracture constitutive relation can be used to model fracture surface roughness scaling. The scaling properties of the predicted thickness-average fracture surfaces are calculated and the results are discussed in light of experimental observations.

  7. Measuring Mental Imagery with Visual Analogue Scales.

    Science.gov (United States)

    Quilter, Shawn M.; Band, Jennie P.; Miller, Gary M.

    1999-01-01

    Investigates some of the psychometric characteristics of the results from visual-analogue scales used to measure mental imagery. Reports that the scores from visual-analogue scales are positively related to scores from longer pencil-and-paper measures of mental imagery. Implications and limitations for the use of visual-analogue scales to measure…

  8. Why Online Education Will Attain Full Scale

    Science.gov (United States)

    Sener, John

    2010-01-01

    Online higher education has attained scale and is poised to take the next step in its growth. Although significant obstacles to a full scale adoption of online education remain, we will see full scale adoption of online higher education within the next five to ten years. Practically all higher education students will experience online education in…

  9. The Anti-Social Activities Attitude Scale.

    Science.gov (United States)

    Ribner, Sol; Chein, Isidor

    1979-01-01

    This article presents reliability and validity evidence on the Anti-Social Activities Attitude Scale. The scale is not intended to predict individual behavior but to assess the moral climate within a population as supportive or restrictive of antisocial acts. The scale itself is included. (SJL)

  10. Scale dependent inference in landscape genetics

    Science.gov (United States)

    Samuel A. Cushman; Erin L. Landguth

    2010-01-01

    Ecological relationships between patterns and processes are highly scale dependent. This paper reports the first formal exploration of how changing scale of research away from the scale of the processes governing gene flow affects the results of landscape genetic analysis. We used an individual-based, spatially explicit simulation model to generate patterns of genetic...

  11. Scaling and multiscaling in financial markets

    OpenAIRE

    2000-01-01

    This paper reviews some of the phenomenological models which have been introduced to incorporate the scaling properties of financial data. It also illustrates a microscopic model, based on heterogeneous interacting agents, which provides a possible explanation for the complex dynamics of markets' returns. Scaling and multi-scaling analysis performed on the simulated data is in good quantitative agreement with the empirical results.
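
    The scaling and multiscaling referred to here are usually diagnosed through the moments of returns over a time horizon Δt. In the standard formulation (stated here as general background, not as this paper's specific model),

        $$ \langle |r_{\Delta t}|^{q} \rangle \sim \Delta t^{\,\zeta(q)}, $$

    where uniscaling corresponds to a linear spectrum ζ(q) = qH with a single Hurst exponent H, and multiscaling to a nonlinear (typically concave) ζ(q).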

  12. An Aesthetic Value Scale of the Rorschach.

    Science.gov (United States)

    Insua, Ana Maria

    1981-01-01

    An aesthetic value scale of the Rorschach cards was built by the successive interval method. This scale was compared with the ratings obtained by means of the Semantic Differential Scales and was found to successfully differentiate sexes in their judgment of card attractiveness. (Author)

  13. Mineral scale management. Part II, Fundamental chemistry

    Science.gov (United States)

    Alan W. Rudie; Peter W. Hart

    2006-01-01

    The mineral scale that deposits in digesters and bleach plants is formed by a chemical precipitation process. As such, it is accurately modeled using the solubility product equilibrium constant. Although the solubility product identifies the primary conditions that must be met for a scale problem to exist, the acid-base equilibria of the scaling anions often control where...
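
    As a textbook illustration of the solubility-product reasoning (calcium carbonate is used here as a generic example; the paper itself treats the pulp-mill species in detail), consider

        $$ \mathrm{CaCO_3(s)} \rightleftharpoons \mathrm{Ca^{2+}} + \mathrm{CO_3^{2-}}, \qquad K_{sp} = [\mathrm{Ca^{2+}}][\mathrm{CO_3^{2-}}] \approx 3.3\times10^{-9} \ \text{(calcite, 25 °C)}. $$

    Scale can form once the ion activity product exceeds $K_{sp}$, i.e. when the saturation index $SI = \log_{10}(\mathrm{IAP}/K_{sp})$ is positive; the acid-base equilibria mentioned above matter because they set the free anion concentration at a given pH.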

  14. Mechanics over micro and nano scales

    CERN Document Server

    Chakraborty, Suman

    2011-01-01

    Discusses the fundamentals of mechanics over micro and nano scales at a level accessible to multi-disciplinary researchers, with a balance of mathematical details and physical principles. Covers life sciences and chemistry for use in emerging applications related to mechanics over small scales. Demonstrates the explicit interconnection between various scale issues and the mechanics of miniaturized systems.

  15. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    S F King

    2004-02-01

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, and show how such a theory is completely natural in the framework of extra dimensions with an intermediate string scale.

  16. 76 FR 3485 - Required Scale Tests

    Science.gov (United States)

    2011-01-20

    ... Packers and Stockyards Administration; 9 CFR Part 201; RIN 0580-AB10; Required Scale Tests. AGENCY: Grain ... first of the two scale tests between January 1 and June 30 of the calendar year. The remaining scale test must be completed between July 1 and December 31 of the calendar year. In addition, a...

  17. 76 FR 50881 - Required Scale Tests

    Science.gov (United States)

    2011-08-17

    ... Packers and Stockyards Administration; 9 CFR Part 201; RIN 0580-AB10; Required Scale Tests. AGENCY: Grain ... January 20, 2011, and on April 4, 2011, concerning required scale tests. Those documents defined "limited...), concerning required scale tests. Those documents incorrectly defined limited seasonal basis in Sec. ...

  18. 76 FR 18348 - Required Scale Tests

    Science.gov (United States)

    2011-04-04

    ... Tests AGENCY: Grain Inspection, Packers and Stockyards Administration. ACTION: Correcting amendments... Register on January 20, 2011 (76 FR 3485), defining required scale tests. That document incorrectly defined... packer using such scales may use the scales within a 6-month period following each test. ... Alan...

  19. Development of Capstone Project Attitude Scales

    Science.gov (United States)

    Bringula, Rex P.

    2015-01-01

    This study attempted to develop valid and reliable Capstone Project Attitude Scales (CPAS). Among the scales reviewed, the Modified Fennema-Shermann Mathematics Attitude Scales was adapted in the construction of the CPAS. Usefulness, Confidence, and Gender View were the three subscales of the CPAS. Four hundred sixty-three students answered the…

  20. Conundrum of the Large Scale Streaming

    CERN Document Server

    Malm, T M

    1999-01-01

    The etiology of the large-scale peculiar velocity (large-scale streaming motion) of clusters seems increasingly tenuous within the context of the gravitational instability hypothesis. Are there any alternative testable models that might account for such large-scale streaming of clusters?

  1. [Visual circle scale (VCS)--a patient-friendly scale to measure pain compared to VAS and Likert scale].

    Science.gov (United States)

    Huber, J F; Hüsler, J; Zumstein, M D; Ruflin, G; Lüscher, M

    2007-01-01

    The visual analogue scale (VAS) and Likert scale (LS) are widely used, but patients may have difficulty working with these scales and there may be errors in calculation. The visual circle scale (VCS) is a graphic construct with a simple grading to improve understanding and ease of calculation. This study compares the different scales for postoperative pain assessment in orthopaedic patients. In addition, the scales were rated by the patients for simplicity, understanding and global rating. Included were 65 patients (40 women) with an average age of 66 years, with 330 pain assessments and 65 questionnaire ratings. The average pain was LS 42.7, VAS 39.3, VCS 44. The correlation coefficients r (Spearman) between all scales were > 0.89, and the same held for sensitivity to change. The VCS was the scale preferred by > 50 % of the orthopaedic patients for assessing pain. The VCS measures pain comparably to the known scales (VAS, Likert scale) and, from the patients' point of view, is the preferred scale to work with.

  2. Likert and Guttman scaling of visual function rating scale questionnaires.

    Science.gov (United States)

    Massof, Robert W

    2004-12-01

    To test the assumptions underlying Likert scoring of visual function questionnaires, questionnaires were administered to 284 low-vision subjects by telephone. Each subject was administered two of four questionnaires: ADVS, NEI VFQ-25 plus supplement, expanded VAQ, and VF-14. Z-scores for the cumulative frequency of using each rating category across subjects are not linear with rating category rank, and items are not of the same difficulty for any of the questionnaires. Guttman coefficients of reproducibility ranged from 57% for the ADVS to 51% for the NEI VFQ-25. Cronbach alphas ranged from 0.92 for the VF-14 to 0.96 for the NEI VFQ; however, inter-item consistency coefficients ranged from 0.24 for the VAQ to 0.45 for the NEI VFQ. Likert scores were significantly correlated between instruments, ranging from 0.66 for the NEI VFQ vs. ADVS to 0.90 for the VF-14 vs. ADVS. The rating scales of all four questionnaires fail to satisfy Likert's assumptions. Also, ratings are probabilistic rather than deterministic, which means that the Likert model is not valid for these questionnaires. However, Likert scores for all four instruments are intercorrelated, suggesting that they are monotonic with the latent subject trait distributed in the low-vision sample.
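
    The Guttman coefficient of reproducibility mentioned above can be sketched for dichotomized responses: order the items by difficulty, predict each respondent's ideal cumulative pattern from their total score, and count deviations. This is one common counting convention, simplified to 0/1 items; it is not the authors' scoring code.

        import numpy as np

        def guttman_reproducibility(X):
            # X: subjects x items matrix of dichotomized (0/1) responses.
            X = np.asarray(X)
            # Order items from easiest (most endorsed) to hardest.
            X = X[:, np.argsort(-X.mean(axis=0))]
            n_items = X.shape[1]
            scores = X.sum(axis=1)
            # Ideal Guttman pattern: a subject with total score k endorses
            # exactly the k easiest items.
            ideal = (np.arange(n_items)[None, :] < scores[:, None]).astype(int)
            errors = np.sum(X != ideal)
            return 1.0 - errors / X.size

        # Toy data (hypothetical): 1 = item endorsed as "no difficulty".
        X = [[1, 1, 1, 0],
             [1, 1, 0, 0],
             [1, 0, 1, 0],   # deviates from a perfect cumulative pattern
             [1, 1, 1, 1]]
        print(f"coefficient of reproducibility = {guttman_reproducibility(X):.2f}")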

  3. The modified procedures in coercivity scaling

    Directory of Open Access Journals (Sweden)

    Najgebauer Mariusz

    2015-09-01

    The paper presents a scaling approach to the analysis of coercivity. The Widom-based procedure of coercivity scaling was tested for non-oriented electrical steel. Owing to insufficient results, the scaling procedure was improved following the method proposed by Van den Bossche. The modified procedure of coercivity scaling gave better results in comparison to the original approach. The influence of particular parameters, and of the range of measurement data used in the estimations, on the final result of the coercivity scaling is discussed.

  4. Computational applications of DNA structural scales

    DEFF Research Database (Denmark)

    Baldi, P.; Chauvin, Y.; Brunak, Søren

    1998-01-01

    Studies several different physical scales associated with the structural features of DNA sequences from a computational standpoint, including dinucleotide scales, such as base stacking energy and propeller twist, and trinucleotide scales, such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example, we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models...
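
    Turning a sequence into such a structural profile is mechanically simple: slide a window over the sequence and look each dinucleotide up in the chosen scale. In the sketch below the numeric values are placeholders, not the published base-stacking energies.

        import numpy as np

        # Placeholder dinucleotide scale: values are illustrative only,
        # NOT the published base-stacking energies used in the paper.
        stacking = {
            "AA": -1.0, "AC": -1.4, "AG": -1.2, "AT": -0.9,
            "CA": -1.1, "CC": -1.8, "CG": -2.2, "CT": -1.2,
            "GA": -1.3, "GC": -2.4, "GG": -1.8, "GT": -1.4,
            "TA": -0.6, "TC": -1.3, "TG": -1.1, "TT": -1.0,
        }

        def to_profile(seq, scale):
            # Map a DNA string to a numeric structural profile,
            # one value per overlapping dinucleotide.
            return np.array([scale[seq[i:i + 2]] for i in range(len(seq) - 1)])

        seq = "ACGTGGCCAATT"
        print(to_profile(seq, stacking))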

  5. Computational applications of DNA physical scales

    DEFF Research Database (Denmark)

    Baldi, Pierre; Chauvin, Yves; Brunak, Søren

    1998-01-01

    The authors study from a computational standpoint several different physical scales associated with structural features of DNA sequences, including dinucleotide scales such as base stacking energy and propeller twist, and trinucleotide scales such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models...

  6. Scaling Region in Desynchronous Coupled Chaotic Maps

    Institute of Scientific and Technical Information of China (English)

    LI Xiao-Wen; XUE Yu; SHI Peng-Liang; HU Gang

    2005-01-01

    The largest Lyapunov exponent and the Lyapunov spectrum of a coupled map lattice are studied when the system state is desynchronous chaos. In the large system size limit a scaling region is found in the parameter space where the largest Lyapunov exponent is independent of the system size and the coupling strength. A scaling relation between the Lyapunov spectrum distributions for different coupling strengths is found when the coupling strengths are taken within the scaling parameter region. The existence of the scaling domain and the scaling relation of the Lyapunov spectra there are heuristically explained.
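
    A minimal way to reproduce this kind of measurement is a diffusively coupled logistic-map lattice with the largest Lyapunov exponent estimated from tangent-vector growth. The lattice size, coupling, and map parameter below are arbitrary illustrative choices, not those of the paper.

        import numpy as np

        def largest_lyapunov(L=64, eps=0.3, a=4.0, steps=5000, transient=1000):
            rng = np.random.default_rng(1)
            x = rng.random(L)
            v = rng.random(L) - 0.5
            v /= np.linalg.norm(v)
            f = lambda u: a * u * (1.0 - u)          # logistic map
            df = lambda u: a * (1.0 - 2.0 * u)       # its derivative

            def step(x):
                # diffusive coupling to nearest neighbours (periodic boundary)
                fx = f(x)
                return (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))

            for _ in range(transient):
                x = step(x)

            lyap_sum = 0.0
            for _ in range(steps):
                # evolve the tangent vector with the Jacobian of the coupled map
                dfx = df(x) * v
                v = (1 - eps) * dfx + 0.5 * eps * (np.roll(dfx, 1) + np.roll(dfx, -1))
                x = step(x)
                norm = np.linalg.norm(v)
                lyap_sum += np.log(norm)
                v /= norm                            # renormalize each step
            return lyap_sum / steps

        print(f"lambda_max ~ {largest_lyapunov():.3f}")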

  7. Kibble-Zurek scaling in holography

    Science.gov (United States)

    Natsuume, Makoto; Okamura, Takashi

    2017-05-01

    The Kibble-Zurek (KZ) mechanism describes the generation of topological defects when a system undergoes a second-order phase transition via quenches. We study the holographic KZ scaling using holographic superconductors. The scaling can be understood analytically from a scaling analysis of the bulk action. The argument is reminiscent of the scaling analysis of the mean-field theory but is more subtle and is not entirely obvious. This is because the scaling is not the one of the original bulk theory but is an emergent one that appears only at the critical point. The analysis is also useful to determine the dynamic critical exponent z.
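
    For reference, the standard (non-holographic) KZ prediction relates the defect density to the quench time τ_Q through the equilibrium critical exponents ν and z, quoted here as background:

        $$ \hat{\xi} \sim \tau_Q^{\,\nu/(1+\nu z)}, \qquad n_{\mathrm{defects}} \sim \hat{\xi}^{-d} \sim \tau_Q^{-d\nu/(1+\nu z)}, $$

    where $\hat{\xi}$ is the correlation length frozen in when the system falls out of equilibrium and d is the number of dimensions in which defects are counted.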

  8. Fluctuation scaling, Taylor's law, and crime.

    Directory of Open Access Journals (Sweden)

    Quentin S Hanley

    Fluctuation scaling relationships have been observed in a wide range of processes ranging from internet router traffic to measles cases. Taylor's law is one such scaling relationship and has been widely applied in ecology to understand communities including trees, birds, human populations, and insects. We show that monthly crime reports in the UK show complex fluctuation scaling which can be approximated by Taylor's law relationships corresponding to local policing neighborhoods and larger regional and countrywide scales. Regression models applied to local scale data from Derbyshire and Nottinghamshire found that different categories of crime exhibited different scaling exponents with no significant difference between the two regions. On this scale, violence reports were close to a Poisson distribution (α = 1.057 ± 0.026) while burglary exhibited a greater exponent (α = 1.292 ± 0.029) indicative of temporal clustering. These two regions exhibited significantly different pre-exponential factors for the categories of anti-social behavior and burglary, indicating that local variations in crime reports can be assessed using fluctuation scaling methods. At regional and countrywide scales, all categories exhibited scaling behavior indicative of temporal clustering, evidenced by Taylor's law exponents from 1.43 ± 0.12 (Drugs) to 2.094 ± 0.081 (Other Crimes). Investigating crime behavior via fluctuation scaling gives insight beyond that of raw numbers, reports on all processes contributing to the observed variance, and is either robust to or exhibits signs of many types of data manipulation.
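
    Taylor's law states that the variance of counts grows as a power of the mean, V = a M^α, so α is the slope on log-log axes. The sketch below fits the exponent to synthetic Poisson counts, for which α should come out near 1 (the benchmark the violence data sit close to); the data and group sizes are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(42)

        # Monthly report counts for several hypothetical neighbourhoods.
        means, variances = [], []
        for lam in [2, 5, 10, 20, 50, 100]:
            counts = rng.poisson(lam, size=120)   # Poisson -> alpha near 1
            means.append(counts.mean())
            variances.append(counts.var(ddof=1))

        # Fit log V = log a + alpha * log M.
        slope, intercept = np.polyfit(np.log(means), np.log(variances), 1)
        print(f"Taylor exponent alpha ~ {slope:.2f}")
        print(f"pre-exponential factor a ~ {np.exp(intercept):.2f}")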

  9. Human mobility patterns at the smallest scales

    CERN Document Server

    Lind, Pedro G

    2014-01-01

    We present a study on human mobility at small spatial scales. In contrast to large-scale mobility, recently studied through dollar-bill tracking and mobile-phone data sets within a single large country or continent, we report Brownian features of human mobility at smaller scales. In particular, the scaling exponent found at the smallest scales is typically close to one-half, smaller than the values characterizing mobility at larger scales. We carefully analyze 12 months of data from the Eduroam database within the Portuguese University of Minho. A full procedure is introduced with the aim of properly characterizing human mobility within the network of access points composing the wireless system of the university. In particular, measures of flux are introduced for estimating a distance between access points. This distance is typically non-Euclidean, since the spatial constraints at such small scales distort the continuum space on which human mobility occurs. Since two different exponent...

  10. MULTI-SCALE GAUSSIAN PROCESSES MODEL

    Institute of Scientific and Technical Information of China (English)

    Zhou Yatong; Zhang Taiyi; Li Xiaohe

    2006-01-01

    A novel model named Multi-scale Gaussian Processes (MGP) is proposed. Motivated by the ideas of multi-scale representations in wavelet theory, in the new model a Gaussian process is represented at a scale by a linear basis composed of a scale function and its different translations. The distribution of the targets of the given samples can then be obtained at different scales. Compared with the standard Gaussian Processes (GP) model, the MGP model can control its complexity conveniently, just by adjusting the scale parameter, so it can rapidly trade off generalization ability against empirical risk. Experiments verify the feasibility of the MGP model and show that its performance is superior to the GP model when appropriate scales are chosen.
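
    One simple way to give a GP multi-scale structure, in the spirit of (though not identical to) the wavelet-basis construction described above, is to sum covariance kernels at several length scales. The kernel choices, scales, and data below are illustrative assumptions, not the authors' model.

        import numpy as np

        def rbf(x1, x2, length_scale):
            # Squared-exponential kernel matrix for 1-D inputs.
            d = x1[:, None] - x2[None, :]
            return np.exp(-0.5 * (d / length_scale) ** 2)

        def multi_scale_kernel(x1, x2, scales=(0.1, 0.5, 2.0), weights=(1, 1, 1)):
            # Sum of RBF kernels at several length scales.
            return sum(w * rbf(x1, x2, s) for w, s in zip(weights, scales))

        # GP regression with the composite kernel on a two-scale signal.
        rng = np.random.default_rng(0)
        X = np.linspace(0, 5, 40)
        y = np.sin(X) + 0.3 * np.sin(8 * X) + 0.05 * rng.normal(size=X.size)

        K = multi_scale_kernel(X, X) + 1e-4 * np.eye(X.size)  # noise jitter
        Xs = np.linspace(0, 5, 200)
        Ks = multi_scale_kernel(Xs, X)
        posterior_mean = Ks @ np.linalg.solve(K, y)
        print(posterior_mean[:5])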

  11. Scaling Laws in the Distribution of Galaxies

    CERN Document Server

    Jones, B J T; Saar, E; Trimble, V; Jones, Bernard J. T.; Martinez, Vicent J.; Saar, Enn; Trimble, Virginia

    2004-01-01

    Research done during the previous century established our Standard Cosmological Model. There are many details still to be filled in, but few would seriously doubt the basic premise. Past surveys have revealed that the large-scale distribution of galaxies in the Universe is far from random: it is highly structured over a vast range of scales. To describe cosmic structures, we need to build mathematically quantifiable descriptions of structure. Identifying where scaling laws apply and the nature of those scaling laws is an important part of understanding which physical mechanisms have been responsible for the organization of clusters, superclusters of galaxies and the voids between them. Finding where these scaling laws are broken is equally important since this indicates the transition to different underlying physics. In describing scaling laws we are helped by making analogies with fractals: mathematical constructs that can possess a wide variety of scaling properties. We must beware, however, of saying that ...
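
    The best-known scaling law in this area, included here as background, is the power-law form of the galaxy two-point correlation function on intermediate scales:

        $$ \xi(r) = \left( \frac{r}{r_0} \right)^{-\gamma}, \qquad \gamma \approx 1.8, \quad r_0 \approx 5\,h^{-1}\,\mathrm{Mpc}, $$

    where ξ(r) measures the excess probability, relative to a random distribution, of finding a galaxy pair at separation r; departures from this power law at large separations mark exactly the kind of transition to different underlying physics the abstract describes.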

  12. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scales in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.

  13. Clinical interpretation and use of stroke scales.

    Science.gov (United States)

    Kasner, Scott E

    2006-07-01

    No single outcome measure can describe or predict all dimensions of recovery and disability after acute stroke. Several scales have proven reliability and validity in stroke trials, including the National Institutes of Health stroke scale (NIHSS), the modified Rankin scale (mRS), the Barthel index (BI), the Glasgow outcome scale (GOS), and the stroke impact scale (SIS). Several scales have been combined in stroke trials to derive a global statistic to better define the effect of acute interventions, although this composite statistic is not clinically tenable. In practice, the NIHSS is useful for early prognostication and serial assessment, whereas the BI is useful for planning rehabilitative strategies. The mRS and GOS provide summary measures of outcome and might be most relevant to clinicians and patients considering early intervention. The SIS was designed to measure the patient's perspective on the effect of stroke. Familiarity with these scales could improve clinicians' interpretation of stroke research and their clinical decision-making.

  14. [Self-rating scales reveal psychopathy].

    Science.gov (United States)

    Dåderman, A; Lidberg, L

    1998-01-28

    Psychopathy is regarded as a dimensional concept--i.e., a person can be more or less psychopathic. This approach enables psychopathy to be measured with reliable, validated personality scales, and to be related to impairment of serotonergic function in the brain. Several personality inventories are described in the article, especially the Karolinska Scales of Personality, the Zuckerman Sensation Seeking Scales, form V, the Eysenck Personality Questionnaire, including an impulsiveness scale from the IVE (Impulsiveness-Venturesomeness-Empathy) inventory, and the old dimensional scale, the Marke-Nyman Personality Temperament scale, based on the personality theory of Henrik Sjöbring. In this way both old and new, and both Swedish and foreign, personality concepts are linked together. Personality scales are easy to use and enable better stability and validity of results to be attained.

  15. Collaboration and nested environmental governance: Scale dependency, scale framing, and cross-scale interactions in collaborative conservation.

    Science.gov (United States)

    Wyborn, Carina; Bixler, R Patrick

    2013-07-15

    The problem of fit between social institutions and ecological systems is an enduring challenge in natural resource management and conservation. Developments in the science of conservation biology encourage the management of landscapes at increasingly larger scales. In contrast, sociological approaches to conservation emphasize the importance of ownership, collaboration and stewardship at scales relevant to the individual or local community. Despite the proliferation of initiatives seeking to work with local communities to undertake conservation across large landscapes, there is an inherent tension between these scales of operation. Consequently, questions about the changing nature of effective conservation across scales abound. Through an analysis of three nested cases working in a semiautonomous fashion in the Northern Rocky Mountains in North America, this paper makes an empirical contribution to the literature on nested governance, collaboration and communication across scales. Despite different scales of operation, constituencies and scale frames, we demonstrate a surprising similarity in organizational structure and an implicit dependency between these initiatives. This paper examines the different capacities and capabilities of collaborative conservation from the local to regional to supra regional. We draw on the underexplored concept of 'scale-dependent comparative advantage' (Cash and Moser, 2000), to gain insight into what activities take place at which scale and what those activities contribute to nested governance and collaborative conservation. The comparison of these semiautonomous cases provides fruitful territory to draw lessons for understanding the roles and relationships of organizations operating at different scales in more connected networks of nested governance.

  16. Scaling Agile Infrastructure to People

    CERN Document Server

    Jones, B; Traylen, S; Arias, N Barrientos

    2015-01-01

    When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains that were growing around this ecosystem would be a good choice, the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine what challenges there were in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow ...

  17. Universal scaling in sports ranking

    CERN Document Server

    Deng, Weibing; Cai, Xu; Bulou, Alain; Wang, Qiuping A

    2011-01-01

    Ranking is a ubiquitous phenomenon in human society. By clicking through the web pages of Forbes, you may find all kinds of rankings: the world's most powerful people, the world's richest people, top-paid tennis stars, and so on. Here we study a specific kind, sports ranking systems, in which players' scores and prize money are calculated based on their performances in various tournaments. A typical example is tennis. It is found that the distributions of both scores and prize money follow universal power laws, with exponents nearly identical for most sports fields. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player will top the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simul...
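
    A toy model in this spirit is easy to set up: fix a ranking, let the better-ranked player of a random pair win with a probability that grows sigmoidally with the rank difference, and accumulate scores over many matches. Everything below (the sigmoid steepness, numbers of players and matches) is an illustrative guess, not the authors' calibration; the resulting score distribution can then be compared against the empirical power laws.

        import numpy as np

        rng = np.random.default_rng(7)
        n_players, n_matches = 500, 50_000
        k = 0.02                      # hypothetical sigmoid steepness
        scores = np.zeros(n_players)  # rank 0 = best player

        for _ in range(n_matches):
            i, j = rng.choice(n_players, size=2, replace=False)
            better, worse = min(i, j), max(i, j)
            # Win probability of the better-ranked player rises sigmoidally
            # with the rank difference, as the abstract describes.
            p_better = 1.0 / (1.0 + np.exp(-k * (worse - better)))
            winner = better if rng.random() < p_better else worse
            scores[winner] += 1.0

        # Inspect the tail of the score distribution across players.
        print(np.sort(scores)[::-1][:10])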

  18. Large Scale Magnetostrictive Valve Actuator

    Science.gov (United States)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper will discuss: the potential applications of the technology; an overview of the as-built actuator design; problems that were uncovered during development testing; a review of the test data and an evaluation of weaknesses of the design; and areas of improvement for future work. This actuator holds promise of a low power, high load, proportionally controlled actuator for valves requiring 440 to 1500 newtons load.

  19. Scaling Agile Infrastructure to People

    Science.gov (United States)

    Jones, B.; McCance, G.; Traylen, S.; Barrientos Arias, N.

    2015-12-01

    When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains growing around this ecosystem would be a good choice; the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine the challenges of adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow for this will be examined.

  20. The scaling issue: scientific opportunities

    Science.gov (United States)

    Orbach, Raymond L.

    2009-07-01

    A brief history of the Leadership Computing Facility (LCF) initiative is presented, along with the importance of SciDAC to the initiative. The initiative led to the initiation of the Innovative and Novel Computational Impact on Theory and Experiment program (INCITE), open to all researchers in the US and abroad, and based solely on scientific merit through peer review, awarding sizeable allocations (typically millions of processor-hours per project). The development of the nation's LCFs has enabled available INCITE processor-hours to double roughly every eight months since its inception in 2004. The 'top ten' LCF accomplishments in 2009 illustrate the breadth of the scientific program, while the 75 million processor hours allocated to American business since 2006 highlight INCITE contributions to US competitiveness. The extrapolation of INCITE processor hours into the future brings new possibilities for many 'classic' scaling problems. Complex systems and atomic displacements to cracks are but two examples. However, even with increasing computational speeds, the development of theory, numerical representations, algorithms, and efficient implementation are required for substantial success, exhibiting the crucial role that SciDAC will play.

  1. Supersymmetry at the electroweak scale

    CERN Document Server

    Chankowski, P H

    1996-01-01

    The simplest interpretation of the global success of the Standard Model is that new physics decouples well above the electroweak scale. A supersymmetric extension of the Standard Model offers the possibility of a light chargino and right-handed stop (with masses below $M_Z$) while still maintaining the successful predictions of the Standard Model. The value of $R_b$ can then be enhanced up to $\sim 0.218$ (the Standard Model value is $\sim 0.216$). A light chargino and stop give important contributions to rare processes such as $b\rightarrow s\gamma$, $\overline K^0-K^0$ and $\overline B^0-B^0$ mixing, but consistency with experimental results is maintained in a large region of the parameter space. The exotic four-jet events reported by ALEPH (if confirmed) may constitute a signal for supersymmetry with such a light spectrum and with explicitly broken $R$-parity. Their interpretation as pair production of charginos with $m_C\sim 60$ GeV, with subsequent decay $C\rightarrow \tilde t_R b \rightarrow dsb$ (where $m_...

  2. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to adequately address the complex coupled phenomena to the level of detail that is necessary.

  3. Reactive/Adsorptive transport in (partially-) saturated porous media: from pore scale to core scale

    NARCIS (Netherlands)

    Raoof, A.

    2011-01-01

    Pore-scale modeling provides opportunities to study transport phenomena in fundamental ways because detailed information is available at the microscopic pore scale. This offers the best hope for bridging the traditional gap that exists between pore scale and macro (lab) scale description of the proc

  4. UNIDIMENSIONALITY AND CUMULATIVENESS OF THE LONELINESS SCALE USING MOKKEN-SCALE-ANALYSIS FOR POLYCHOTOMOUS ITEMS

    NARCIS (Netherlands)

    MOORER, P; SUURMEIJER, TPBM

    1993-01-01

    The unidimensionality and cumulativeness of the Loneliness Scale of De Jong-Gierveld was investigated using the Mokken Scale Analysis for polychotomous items. 10 of the 11 items of the original Loneliness Scale constituted a unidimensional, cumulative scale, with a homogeneity coefficient H of 0.37

  5. The 21 May 2014 Mw 5.9 Bay of Bengal earthquake: macroseismic data suggest a high‐stress‐drop event

    Science.gov (United States)

    Martin, Stacey; Hough, Susan E.

    2015-01-01

    A modest but noteworthy Mw 5.9 earthquake occurred in the Bay of Bengal beneath the central Bengal fan at 21:51 Indian Standard Time (16:21 UTC) on 21 May 2014. Centered over 300 km from the eastern coastline of India (Fig. 1), it caused modest damage by virtue of its location and magnitude. However, shaking was very widely felt in parts of eastern India where earthquakes are uncommon. Media outlets reported as many as four fatalities. Although most deaths were blamed on heart attacks, the death of one woman was attributed by different sources to either a roof collapse or a stampede (see Table S1, available in the electronic supplement to this article). Across the state of Odisha, as many as 250 people were injured (see Table S1), most after jumping from balconies or terraces. Light damage was reported from a number of towns on coastal deltaic sediments, including collapsed walls and damage to pukka and thatched dwellings. Shaking was felt well inland into east-central India and was perceptible in multistoried buildings as far as Chennai, Delhi, and Jaipur at distances of ≈1600 km (Table 1).

  6. Meso-scale machining capabilities and issues

    Energy Technology Data Exchange (ETDEWEB)

    BENAVIDES,GILBERT L.; ADAMS,DAVID P.; YANG,PIN

    2000-05-15

    Meso-scale manufacturing processes are bridging the gap between silicon-based MEMS processes and conventional miniature machining. These processes can fabricate two- and three-dimensional parts having micron-size features in traditional materials such as stainless steels, rare earth magnets, ceramics, and glass. Meso-scale processes that are currently available include focused ion beam sputtering, micro-milling, micro-turning, excimer laser ablation, femtosecond laser ablation, and micro electro discharge machining. These meso-scale processes employ subtractive machining technologies (i.e., material removal), unlike LIGA, which is an additive meso-scale process. Meso-scale processes have different material capabilities and machining performance specifications. Machining performance specifications of interest include minimum feature size, feature tolerance, feature location accuracy, surface finish, and material removal rate. Sandia National Laboratories is developing meso-scale electro-mechanical components, which require meso-scale parts that move relative to one another. The meso-scale parts fabricated by subtractive meso-scale manufacturing processes have unique tribology issues because of the variety of materials and the surface conditions produced by the different meso-scale manufacturing processes.

  7. Thermodynamic scaling behavior in genechips

    Directory of Open Access Journals (Sweden)

    Van Hummelen Paul

    2009-01-01

    Full Text Available Abstract Background Affymetrix Genechips are characterized by probe pairs, a perfect match (PM) and a mismatch (MM) probe differing by a single nucleotide. Most of the data preprocessing algorithms neglect MM signals, as it was shown that MMs cannot be used as estimators of the non-specific hybridization as originally proposed by Affymetrix. The aim of this paper is to study in detail, on a large number of experiments, the behavior of the average PM/MM ratio. This is taken as an indicator of the quality of the hybridization and, when compared between different chip series, of the quality of the chip design. Results About 250 different GeneChip hybridizations performed at the VIB Microarray Facility for Homo sapiens, Drosophila melanogaster, and Arabidopsis thaliana were analyzed. The investigation of such a large set of data from the same source minimizes systematic experimental variations that may arise from differences in protocols or from different laboratories. The PM/MM ratios are derived theoretically from thermodynamic laws and a link is made with the sequences of the PM and MM probes, more specifically with their central nucleotide triplets. Conclusion The PM/MM ratios subdivided according to the different central nucleotide triplets follow qualitatively those deduced from the hybridization free energies in solution. It is shown also that the PM and MM histograms are related by a simple scale transformation, in agreement with what is to be expected from hybridization thermodynamics. Different quantitative behavior is observed on the different chip organisms analyzed, suggesting that some organism chips have superior probe design compared to others.
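
    The thermodynamic link can be sketched in a few lines: at equilibrium, the PM/MM intensity ratio is governed by the free-energy penalty of the single mismatch, $I_{PM}/I_{MM} \approx \exp(\Delta\Delta G/RT)$. A minimal illustration (the temperature and penalty values are placeholders, not fitted Genechip parameters):

        import math

        R = 8.314  # gas constant, J/(mol*K)
        T = 318.0  # hybridization temperature, K (about 45 C; illustrative)

        def pm_mm_ratio(ddg):
            # PM/MM intensity ratio from the mismatch free-energy penalty (J/mol),
            # assuming equilibrium hybridization.
            return math.exp(ddg / (R * T))

        # Hypothetical penalties for three central-triplet classes:
        for ddg in (2000.0, 4000.0, 6000.0):
            print(f"ddG = {ddg:6.0f} J/mol -> PM/MM = {pm_mm_ratio(ddg):.2f}")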

  8. Large-Scale Information Systems

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  9. Heritage and scale: settings, boundaries and relations

    DEFF Research Database (Denmark)

    Harvey, David

    2015-01-01

    While recent years have seen increasing interest in the geographies of heritage, very few scholars have interrogated the difference that scale makes. Indeed, in a world in which the nation state appears to be on the wane, the process of articulating heritage on whatever scale – whether of individ... relations. This paper examines how heritage is produced and practised, consumed and experienced, managed and deployed at a variety of scales, exploring how notions of scale, territory and boundedness have a profound effect on the heritage process. Drawing on the work of Doreen Massey and others, the paper...

  10. A clinimetric overview of scar assessment scales.

    Science.gov (United States)

    van der Wal, M B A; Verhaegen, P D H M; Middelkoop, E; van Zuijlen, P P M

    2012-01-01

    Standardized validated evaluation instruments are mandatory to increase the level of evidence in scar management. Scar assessment scales are potentially suitable for this purpose, but the most appropriate scale still needs to be determined. This review will elaborate on several clinically relevant scar features and critically discuss the currently available scar scales in terms of basic clinimetric requirements. Many current scales can produce reliable measurements but seem to require multiple observers to obtain these results reliably, which limits their feasibility in clinical practice. The validation process of scar scales is hindered by the lack of a "gold standard" in subjective scar assessment or other reliable objective instruments which are necessary for a good comparison. The authors conclude that there are scar scales available that can reliably measure scar quality. However, further research may lead to improvement of their clinimetric properties and enhance the level of evidence in scar research worldwide.

  11. Radiatively induced Fermi scale and unification

    CERN Document Server

    Alanne, Tommi

    2016-01-01

    We propose a framework, where the hierarchy between the unification and the Fermi scale emerges radiatively. This work tackles the long-standing question about the connection between the low Fermi scale and a more fundamental scale of Nature. As a concrete example, we study a Pati-Salam-type unification of Elementary-Goldstone-Higgs scenario, where the Standard Model scalar sector is replaced by an SU(4)-symmetric one, and the observed Higgs particle is an elementary pseudo-Goldstone boson. We construct a concrete model where the unification scale is fixed to a phenomenologically viable value, while the Fermi scale is generated radiatively. This scenario provides an interesting link between the unification and Fermi scale physics, and opens up prospects for exploring a wide variety of open problems in particle physics, ranging from neutrinos to cosmic inflation.

  12. Scales used in research and applications

    Directory of Open Access Journals (Sweden)

    Wanderson Lyrio Bermudes

    2016-10-01

    Full Text Available In scientific research, we always seek excellence in methodology, since the definition of the best method is as important as the choice of the scale to be used. This study aims to identify the types of scales used in research and their applications. The four most common types of scale are: nominal, ordinal, interval and ratio. Among the attitude scales used in scientific research, we highlight the Thurstone and the Likert scales. The Thurstone scale is used to measure a probable human attitude without indicating its intensity. The Likert scale consists of five items ranging from complete disagreement to total agreement with a certain statement. It differs from Thurstone's in the degree of intensity covered by its answers, and it has been the more widely used.
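
    A minimal sketch of scoring a five-point Likert battery, including the usual reversal of negatively worded items (the items and responses are invented for illustration):

        # Five-point Likert coding: 1 = strongly disagree ... 5 = strongly agree.
        RESPONSES = {
            "the service met my needs": 4,
            "i would recommend it": 5,
            "the procedure was confusing": 2,   # negatively worded item
        }

        def likert_score(responses, reverse_keyed=()):
            # Sum item scores, reversing negatively worded items on a 5-point scale.
            return sum((6 - v) if item in reverse_keyed else v
                       for item, v in responses.items())

        print(likert_score(RESPONSES, reverse_keyed={"the procedure was confusing"}))  # 4 + 5 + 4 = 13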

  13. Transmissibility scale-up in reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, W.; Gupta, A. [Oklahoma Univ., Oklahoma City, OK (United States)

    1999-11-01

    A study was conducted to develop efficient methods for scaling of petrophysical properties from high resolution geological models to the resolution of reservoir simulation. Data from the Gypsy Field located in northeastern Oklahoma near Lake Keystone was used to evaluate the proposed method. The petrophysical property which was scaled in this study was the transmissibility between two grid blocks. A linear flow scale-up of the transmissibility between two grid blocks was conducted. It was determined that the scale-up of the productivity index is both important and necessary for determining the radial flow around the wellbore. Special consideration was needed for the pinch-out grid blocks in the system. Fine-scale and coarse-scale reservoir models were used to evaluate the feasibility of this proposed method. Performance predictions were compared with various reservoir flow case studies. 21 refs., 2 tabs., 20 figs.
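
    For linear flow, the scale-up combines fine-grid transmissibilities the way series conductances combine; a pinch-out block with zero transmissibility blocks the chain entirely. A one-dimensional sketch (a deliberate simplification of the paper's 3-D workflow; values and units are illustrative):

        def series_transmissibility(ts):
            # Effective transmissibility of fine-scale blocks in series (linear flow):
            # 1/T_eff = sum(1/T_i), as for electrical conductances in series.
            if any(t <= 0.0 for t in ts):   # e.g., a pinch-out grid block
                return 0.0
            return 1.0 / sum(1.0 / t for t in ts)

        # Fine-scale transmissibilities between one coarse-block pair:
        print(series_transmissibility([120.0, 85.0, 40.0]))
        print(series_transmissibility([120.0, 0.0, 40.0]))  # pinch-out: no flow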

  14. Neutrino footprint in Large Scale Structure

    CERN Document Server

    Jimenez, Raul; Verde, Licia

    2016-01-01

    Recent constraints on the sum of neutrino masses, inferred by analyzing cosmological data, show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys, implying a direct determination of the absolute neutrino mass scale. The measurement relies on constraining the shape of the matter power spectrum below the neutrino free-streaming scale: massive neutrinos erase power at these scales. Detection of a lack of small-scale power, however, could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independently of the hierarchy, neutrinos always show a footprint on large, linear scales; the exact location and properties can be related to the measured power suppression (an astrophysical measurement) and the atmospheric neutrino mass splitting (a neutrino oscillation experiment measurement). This feature cannot be easily mimicked by systematic uncertainties or modifications in ...

  15. The sense and non-sense of plot-scale, catchment-scale, continental-scale and global-scale hydrological modelling

    Science.gov (United States)

    Bronstert, Axel; Heistermann, Maik; Francke, Till

    2017-04-01

    Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the appropriate scale to apply depends on the overall question under study. Therefore, it is not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth looking at the advantages and shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large / global-scale approaches and models are increasingly in operation, and the question therefore arises how far, and for what purposes, such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies process scale, measurement scale, and modelling scale differ from each other. In some cases, the differences between these scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in the description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales: Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research, based on controlled experiments (e.g. infiltration; root water uptake; chemical matter transport); (+) data of state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on

  16. Quantum quench and scaling of entanglement entropy

    Science.gov (United States)

    Caputa, Paweł; Das, Sumit R.; Nozaki, Masahiro; Tomiya, Akio

    2017-09-01

    Global quantum quench with a finite quench rate which crosses critical points is known to lead to universal scaling of correlation functions as functions of the quench rate. In this work, we explore scaling properties of the entanglement entropy of a subsystem in a harmonic chain during a mass quench which asymptotes to finite constant values at early and late times and for which the dynamics is exactly solvable. When the initial state is the ground state, we find that for large enough subsystem sizes the entanglement entropy becomes independent of size. This is consistent with Kibble-Zurek scaling for slow quenches, and with recently discussed "fast quench scaling" for quenches fast compared to physical scales, but slow compared to UV cutoff scales.

  17. Diffusiophoresis at the macro-scale

    CERN Document Server

    Mauger, C; Machicoane, N; Bourgoin, M; Cottin-Bizonne, C; Ybert, C; Raynal, F

    2015-01-01

    Diffusiophoresis, a ubiquitous phenomenon which induces particle transport whenever solute gradients are present, was recently put forward in the context of microsystems and shown to strongly impact colloidal transport (from patterning to mixing) at such scales. In the present work, we show experimentally that this nanoscale-rooted mechanism can actually induce changes in the macro-scale mixing of colloids by chaotic advection. Rather than the usual decay of standard deviation of concentration, which is a global parameter, we use different multi-scale tools available for chaotic flows or intermittent turbulent mixing, like concentration spectra, or second and fourth moments of probability density functions of scalar gradients. Not only can those tools be used in open flows (when the mean concentration is not constant), but they also allow for a scale-by-scale analysis. Strikingly, diffusiophoresis is shown to affect all scales, although more particularly the smallest one, resulting in a change of sca...

  18. Weyl's Scale Invariance And The Standard Model

    CERN Document Server

    Gold, B S

    2005-01-01

    This paper is an extension of the work by Dr. Subhash Rajpoot, Ph.D. and Dr. Hitoshi Nishino, Ph.D. I introduce Weyl's scale invariance as an additional local symmetry in the standard model of electroweak interactions. An inevitable consequence is the introduction of general relativity coupled to scalar fields a la Dirac and an additional vector particle called the Weylon. This paper shows that once Weyl's scale invariance is broken, the phenomenon (a) generates Newton's gravitational constant GN and (b) triggers spontaneous symmetry breaking in the normal manner resulting in masses for the conventional fermions and bosons. The scale at which Weyl's scale symmetry breaks is of order Planck mass. If right-handed neutrinos are also introduced, their absence at present energy scales is attributed to their mass, which is tied to the scale where scale invariance breaks.

  19. Kolmogorov Dissipation scales in Weakly Ionized Plasmas

    CERN Document Server

    Krishan, V

    2009-01-01

    In a weakly ionized plasma, the evolution of the magnetic field is described by a "generalized Ohm's law" that includes the Hall effect and the ambipolar diffusion terms. These terms introduce additional spatial and time scales which play a decisive role in the cascading and dissipation mechanisms in magnetohydrodynamic turbulence. We determine the Kolmogorov dissipation scales for the viscous, the resistive and the ambipolar dissipation mechanisms. The plasma, depending on its properties and the energy injection rate, may preferentially select one of these dissipation scales, thus determining the shortest spatial scale of the supposedly self-similar spectral distribution of the magnetic field. The results are illustrated taking the partially ionized part of the solar atmosphere as an example. Thus the shortest spatial scale of the supposedly self-similar spectral distribution of the solar magnetic field is determined by any of the four dissipation scales given by the viscosity, the Spitzer resistivity...
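
    Each dissipation channel defines its own Kolmogorov-type cutoff, $\eta = (D^3/\epsilon)^{1/4}$, with $D$ the relevant diffusivity. A minimal sketch (the diffusivities and injection rate below are placeholders, not the paper's solar-atmosphere values):

        def kolmogorov_scale(diffusivity, epsilon):
            # Kolmogorov-type dissipation scale eta = (D**3 / epsilon)**(1/4).
            return (diffusivity ** 3 / epsilon) ** 0.25

        EPSILON = 1.0e-2          # energy injection rate (illustrative units)
        CHANNELS = {
            "viscous": 1.0e-4,    # kinematic viscosity nu (placeholder)
            "resistive": 5.0e-4,  # magnetic diffusivity (placeholder)
            "ambipolar": 2.0e-3,  # ambipolar diffusivity (placeholder)
        }
        for name, d in CHANNELS.items():
            print(f"{name:9s} cutoff: {kolmogorov_scale(d, EPSILON):.3e}")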

  20. Amorphous silica scale in cooling waters

    Energy Technology Data Exchange (ETDEWEB)

    Midkiff, W.S.; Foyt, H.P.

    1976-01-01

    In 1968, most of the evaporation-cooled recirculating water systems at Los Alamos Scientific Laboratory were nearly inoperable due to scale. These systems, consisting of cooling towers, evaporative water coolers, evaporative condensers, and air washers, had been operated on continuous blowdown without chemical treatment. The feedwater contained 80 mg/l silica. A successful program of routine chemical addition in the make-up water was begun. Blends of chelants, dispersants and corrosion inhibitors were found to gradually remove old scale, prevent new scale, and keep corrosion to less than an indicated rate of one mil per year. An explanation has been proposed that amorphous silica by itself does not form a troublesome scale. When combined with a crystal matrix such as calcite, the resultant silica-containing scale can be quite troublesome. Rapid buildup of silica-containing scale can be controlled and prevented by preventing formation of crystals from other constituents in the water such as hardness or iron. (auth)

  1. Scaling Consumers' Purchase Involvement: A New Approach

    Directory of Open Access Journals (Sweden)

    Jörg Kraigher-Krainer

    2012-06-01

    Full Text Available A two-dimensional scale, called ECID Scale, is presented in this paper. The scale is based on a comprehensive model and captures the two antecedent factors of purchase-related involvement, namely whether motivation is intrinsic or extrinsic and whether risk is perceived as low or high. The procedure of scale development and item selection is described. The scale turns out to perform well in terms of validity, reliability, and objectivity despite the use of a small set of items – four each – allowing for simultaneous measurements of up to ten purchases per respondent. The procedure of administering the scale is described so that it can now easily be applied by both, scholars and practitioners. Finally, managerial implications of data received from its application which provide insights into possible strategic marketing conclusions are discussed.

  2. Further validation of the Indecisiveness Scale.

    Science.gov (United States)

    Gayton, W F; Clavin, R H; Clavin, S L; Broida, J

    1994-12-01

    Scores on the Indecisiveness Scale have been shown to be correlated with scores on measures of obsessive-compulsive tendencies and perfectionism for women. This study examined the validity of the Indecisiveness Scale with 41 men whose mean age was 21.1 yr. Indecisiveness scores were significantly correlated with scores on measures of obsessive-compulsive tendencies and perfectionism. Also, undeclared majors had a significantly higher mean on the Indecisiveness Scale than did declared majors.

  3. Scale-Space Theory in Computer Vision

    OpenAIRE

    1994-01-01

    A basic problem when deriving information from measured data, such as images, originates from the fact that objects in the world, and hence image structures, exist as meaningful entities only over certain ranges of scale. "Scale-Space Theory in Computer Vision" describes a formal theory for representing the notion of scale in image data, and shows how this theory applies to essential problems in computer vision such as computation of image features and cues to surface shape. The subjects rang...

  4. Resource Complementarity and IT Economies of Scale

    DEFF Research Database (Denmark)

    Woudstra, Ulco; Berghout, Egon; Tan, Chee-Wee

    2017-01-01

    In this study, we explore economies of scale for IT infrastructure and application services. An in-depth appreciation of economies of scale is imperative for an adequate understanding of the impact of IT investments. Our findings indicate that even low IT spending organizations can make a difference by devoting at least 60% of their total IT budget to IT infrastructure in order to foster economies of scale and extract strategic benefits.

  5. STABILIZATION OF EXPANSIVE SOIL USING MILL SCALE

    OpenAIRE

    Y.I.Murthy

    2012-01-01

    The present paper deals with evaluating the mechanical properties of black cotton soil mixed with mill scale in varying proportions and comparing them with the results for pure black cotton soil. The mechanical properties of mill scale and black cotton soil are first determined individually, and the two are then combined in varying proportions. Properties such as plastic limit, CBR and permeability are evaluated. It is found that mixing mill scale in varying proportions ...

  6. Socially responsible marketing decisions - scale development

    Directory of Open Access Journals (Sweden)

    Dina Lončarić

    2009-07-01

    Full Text Available The purpose of this research is to develop a measurement scale for evaluating the implementation level of the concept of social responsibility in marketing decision making, in accordance with the quality-of-life marketing paradigm. A new scale of "socially responsible marketing decisions" has been formed and its content validity, reliability and dimensionality have been analyzed. The scale has been tested on a sample of the most successful Croatian firms. The research results lead us to conclude that the scale has satisfactory psychometric characteristics, but that it is necessary to improve it by generating new items and by testing it on a greater number of samples.

  7. Scale Mismatches in Management of Urban Landscapes

    Directory of Open Access Journals (Sweden)

    Christine Alfsen-Norodom

    2006-12-01

    Full Text Available Urban landscapes constitute the future environment for most of the world’s human population. An increased understanding of the urbanization process and of the effects of urbanization at multiple scales is, therefore, key to ensuring human well-being. In many conventional natural resource management regimes, incomplete knowledge of ecosystem dynamics and institutional constraints often leads to institutional management frameworks that do not match the scale of ecological patterns and processes. In this paper, we argue that scale mismatches are particularly pronounced in urban landscapes. Urban green spaces provide numerous important ecosystem services to urban citizens, and the management of these urban green spaces, including recognition of scales, is crucial to the well-being of the citizens. From a qualitative study of the current management practices in five urban green spaces within the Greater Stockholm Metropolitan Area, Sweden, we found that (1) several spatial, temporal, and functional scales are recognized, but the cross-scale interactions are often neglected, and (2) spatial and temporal meso-scales are seldom given priority. One potential effect of the neglect of ecological cross-scale interactions in these highly fragmented landscapes is a gradual reduction in the capacity of the ecosystems to provide ecosystem services. Two important strategies for overcoming urban scale mismatches are suggested: (1) development of an integrative view of the whole urban social–ecological landscape, and (2) creation of adaptive governance systems to support practical management.

  8. Improving the Factor Structure of Psychological Scales

    Science.gov (United States)

    Zhang, Xijuan; Savalei, Victoria

    2015-01-01

    Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format, which replaces each response option in the Likert scale with a full sentence. We hypothesized that this format would result in a cleaner factor structure as compared with the Likert format. We tested this hypothesis on three popular psychological scales: the Rosenberg Self-Esteem scale, the Conscientiousness subscale of the Big Five Inventory, and the Beck Depression Inventory II. Scales in both formats showed comparable reliabilities. However, scales in the Expanded format had better (i.e., lower and more theoretically defensible) dimensionalities than scales in the Likert format, as assessed by both exploratory factor analyses and confirmatory factor analyses. We encourage further study and wider use of the Expanded format, particularly when a scale’s dimensionality is of theoretical interest. PMID:27182074

  9. Modeling and simulation with operator scaling

    CERN Document Server

    Cohen, Serge; Rosinski, Jan

    2009-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical applications. A classification of operator stable Levy processes in two dimensions is provided according to their exponents and symmetry groups. We conclude with some remarks and extensions to general operator self-similar processes.

  10. Large-Scale Damage Control Facility

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Performs large-scale fire protection experiments that simulate actual Navy platform conditions. Remote control firefighting systems are also tested....

  11. Hierarchical Scaling in Systems of Natural Cities

    CERN Document Server

    Chen, Yanguang

    2016-01-01

    Hierarchies can be modeled by a set of exponential functions, from which we can derive a set of power laws indicative of scaling. These scaling laws are followed by many natural and social phenomena such as cities, earthquakes, and rivers. This paper is devoted to revealing the scaling patterns in systems of natural cities by reconstructing the hierarchy with cascade structure. The cities of America, Britain, France, and Germany are taken as examples to make empirical analyses. The hierarchical scaling relations can be well fitted to the data points within the scaling ranges of the size and area of the natural cities. The size-number and area-number scaling exponents are close to 1, and the allometric scaling exponent is slightly less than 1. The results suggest that natural cities follow hierarchical scaling laws and hierarchical conservation law. Zipf's law proved to be one of the indications of the hierarchical scaling, and the primate law of city-size distribution represents a local pattern and can be mer...
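
    The step from an exponential hierarchy to a power law can be made explicit. In generic notation (the symbols are ours, not necessarily the paper's), let level $m$ of the cascade contain $N_m$ cities of mean size $S_m$, with constant inter-level ratios $r_n = N_{m+1}/N_m > 1$ and $r_s = S_m/S_{m+1} > 1$, so that $N_m = N_1 r_n^{m-1}$ and $S_m = S_1 r_s^{-(m-1)}$. Eliminating $m$ between the two exponentials yields the hierarchical scaling law $N_m = N_1 (S_m/S_1)^{-D}$ with $D = \ln r_n / \ln r_s$, a power law whose exponent plays the role of a fractal dimension of the hierarchy; the Zipf-like case corresponds to $r_n = r_s$, i.e. $D = 1$, consistent with the near-unity exponents reported above.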

  12. The Mirage of the Fermi Scale

    DEFF Research Database (Denmark)

    Antipin, Oleg; Sannino, Francesco; Tuominen, Kimmo

    2013-01-01

    The discovery of a light Higgs boson at LHC may be suggesting that we need to revise our model building paradigms to understand the origin of the weak scale. We explore the possibility that the Fermi scale is not fundamental but rather a derived one, i.e. a low energy mirage. We show that this sc...

  13. Ergodicity breakdown and scaling from single sequences

    Energy Technology Data Exchange (ETDEWEB)

    Kalashyan, Armen K. [Center for Nonlinear Science, University of North Texas, P.O. Box 311427, Denton, TX 76203-1427 (United States); Buiatti, Marco [Laboratoire de Neurophysique et Physiologie, CNRS UMR 8119, Universite Rene Descartes - Paris 5, 45 rue des Saints Peres, 75270 Paris Cedex 06 (France); Cognitive Neuroimaging Unit - INSERM U562, Service Hospitalier Frederic Joliot, CEA/DRM/DSV, 4 Place du general Leclerc, 91401 Orsay Cedex (France); Grigolini, Paolo [Center for Nonlinear Science, University of North Texas, P.O. Box 311427, Denton, TX 76203-1427 (United States); Dipartimento di Fisica 'E. Fermi' - Universita di Pisa and INFM, Largo Pontecorvo 3, 56127 Pisa (Italy); Istituto dei Processi Chimico-Fisici del CNR, Area della Ricerca di Pisa, Via G. Moruzzi 1, 56124 Pisa (Italy)], E-mail: grigo@df.unipi.it

    2009-01-30

    In the ergodic regime, several methods efficiently estimate the temporal scaling of time series characterized by long-range power-law correlations by converting them into diffusion processes. However, in the condition of ergodicity breakdown, the same methods give ambiguous results. We show that in such a regime, two different scaling behaviors emerge depending on the age of the windows used for the estimation. We explain the ambiguity of the estimation methods by the different influence of the two scaling behaviors on each method. Our results suggest that aging drastically alters the scaling properties of non-ergodic processes.

  14. Dense Correspondences across Scenes and Scales.

    Science.gov (United States)

    Tau, Moria; Hassner, Tal

    2016-05-01

    We seek a practical method for establishing dense correspondences between two images with similar content, but possibly different 3D scenes. One of the challenges in designing such a system is the local scale differences of objects appearing in the two images. Previous methods often considered only few image pixels, matching only pixels for which stable scales may be reliably estimated. Recently, others have considered dense correspondences, but with substantial costs associated with generating, storing and matching scale invariant descriptors. Our work is motivated by the observation that pixels in the image have contexts (the pixels around them) which may be exploited in order to reliably estimate local scales. We make the following contributions. (i) We show that scales estimated at sparse interest points may be propagated to neighboring pixels where this information cannot be reliably determined. Doing so allows scale invariant descriptors to be extracted anywhere in the image. (ii) We explore three means for propagating this information: using the scales at detected interest points, using the underlying image information to guide scale propagation in each image separately, and using both images together. Finally, (iii) we provide extensive qualitative and quantitative results, demonstrating that scale propagation allows accurate dense correspondences to be obtained even between very different images, at little computational cost beyond that required by existing methods.
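
    A minimal sketch of the first propagation strategy, assigning every pixel the scale of its nearest detected interest point (a deliberate simplification of the image-guided variants; the keypoints below are hypothetical):

        import math

        def propagate_scales(keypoints, width, height):
            # keypoints: list of (x, y, scale) from a sparse detector (e.g., DoG).
            # Returns a dense height x width map of propagated scales
            # (nearest-neighbor propagation; O(N * W * H), fine for a toy example).
            scale_map = [[0.0] * width for _ in range(height)]
            for py in range(height):
                for px in range(width):
                    _, s = min((math.hypot(px - x, py - y), s)
                               for x, y, s in keypoints)
                    scale_map[py][px] = s
            return scale_map

        kps = [(3, 4, 1.6), (12, 9, 3.2)]   # hypothetical interest points
        dense = propagate_scales(kps, 16, 12)
        print(dense[0][0], dense[11][15])    # 1.6 near one point, 3.2 near the other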

  15. Scaling in stratocumulus fields: an emergent property

    CERN Document Server

    Yuan, Tianle

    2015-01-01

    Marine stratocumulus clouds play a critical role in the Earth's climate system. They display an amazing array of complex behaviors at many different spatiotemporal scales. Precipitation in these clouds is in general very light, but it is vital for the clouds' systematic evolution and organization. Here we identify areas of high liquid water path within these clouds as potentially precipitating, or pouches. They are breeding grounds for stratocumuli to change their form of organization. We show, using different satellite data sets, that the size distribution of these pouches shows universal scaling. We argue that such scaling is an emergent property of the cloud system, which results from numerous interactions at the microscopic scale.

  16. The rapid disability rating scale-2.

    Science.gov (United States)

    Linn, M W; Linn, B S

    1982-06-01

    A revised version of the Rapid Disability Rating Scale (RDRS-2) is presented. Item definitions have been sharpened and directions expanded to indicate that ratings are based upon the patient's performance with regard to behavior, and that any prosthesis normally used by the patient should be included in the assessment. Three items have been added to increase the breadth of the scale. Response items have been changed from three-point to four-point ratings in order to increase group discrimination and make the scale more sensitive to changes in treatment. The new appraisals of reliability, factor structure, and validity are reported, along with the potential uses of the scale.

  17. On Definition of Geographic Variable Scaling

    Institute of Scientific and Technical Information of China (English)

    ZHONG Yexun; LI Zhanyuan; HUANG Hu

    2006-01-01

    According to the range of effects they can describe, scaling systems in cartography come in four grades of precision: nominal scaling, ordinal scaling, interval scaling and ratio scaling. The authors have researched their innate character and inherent relations. The essence of the evaluation partial-ordering set (A, ≤) is a mapping of the evaluation object (X, ≤) under fixed conditions. The collection and classification of xi ∈ X corresponds to how Aj ∈ A is expressed, i.e., for xi ∈ X, f(xi) = Aj ∈ A, where Aj is the image of xi under the mapping f. The functional relation between the evaluation partial-ordering set (A, ≤) and the evaluation object (X, ≤) is decided by the spatial character and by the way of collection and classification. Different spatial characters and ways of collection and classification produce different means of expression and evaluation results; accordingly, the authors give mathematical definitions for nominal, ordinal, interval and ratio scaling respectively. These results are proved through examples.

  18. Scaling in ANOVA-simultaneous component analysis.

    Science.gov (United States)

    Timmerman, Marieke E; Hoefsloot, Huub C J; Smilde, Age K; Ceulemans, Eva

    In omics research often high-dimensional data is collected according to an experimental design. Typically, the manipulations involved yield differential effects on subsets of variables. An effective approach to identify those effects is ANOVA-simultaneous component analysis (ASCA), which combines analysis of variance with principal component analysis. So far, pre-treatment in ASCA received hardly any attention, whereas its effects can be huge. In this paper, we describe various strategies for scaling, and identify a rational approach. We present the approaches in matrix algebra terms and illustrate them with an insightful simulated example. We show that scaling directly influences which data aspects are stressed in the analysis, and hence become apparent in the solution. Therefore, the cornerstone for proper scaling is to use a scaling factor that is free from the effect of interest. This implies that proper scaling depends on the effect(s) of interest, and that different types of scaling may be proper for the different effect matrices. We illustrate that different scaling approaches can greatly affect the ASCA interpretation with a real-life example from nutritional research. The principle that scaling factors should be free from the effect of interest generalizes to other statistical methods that involve scaling, as classification methods.
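
    A minimal numeric sketch of the paper's cornerstone, scaling each variable by a factor that is free from the effect of interest; here, the within-group standard deviation stands in for such a factor (the data and group labels are invented):

        import numpy as np

        def within_group_sd(X, groups):
            # Per-variable SD of the residuals left after removing group means,
            # i.e. a scaling factor free from the (group) effect of interest.
            resid = X.astype(float).copy()
            for g in np.unique(groups):
                mask = groups == g
                resid[mask] -= resid[mask].mean(axis=0)
            return resid.std(axis=0, ddof=1)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(10, 4))
        X[5:] += 3.0                      # strong treatment effect on all variables
        groups = np.array([0] * 5 + [1] * 5)
        X_scaled = X / within_group_sd(X, groups)
        print(X_scaled.std(axis=0, ddof=1))  # effect survives the scaling (> 1)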

  19. Ontogenetic and interspecific metabolic scaling in insects.

    Science.gov (United States)

    Maino, James L; Kearney, Michael R

    2014-12-01

    Design constraints imposed by increasing size cause metabolic rate in animals to increase more slowly than mass. This ubiquitous biological phenomenon is referred to as metabolic scaling. However, mechanistic explanations for interspecific metabolic scaling do not apply to ontogenetic size changes within a species, implying different mechanisms for scaling phenomena. Here, we show that the dynamic energy budget theory approach of compartmentalizing biomass into reserve and structural components provides a unified framework for understanding ontogenetic and interspecific metabolic scaling. We formulate the theory for insects and show that it can account for ontogenetic metabolic scaling during the embryonic and larval phases, as well as the U-shaped respiration curve during pupation. After correcting for the predicted ontogenetic scaling effects, which we show to follow universal curves, the scaling of respiration between species is approximated by a three-quarters power law, supporting past empirical studies on insect metabolic scaling and our theoretical predictions. The ability to explain ontogenetic and interspecific metabolic scaling effects under one consistent framework suggests that the partitioning of biomass into reserve and structure is a necessary foundation to a general metabolic theory.

  20. Predicting scale formation during electrodialytic nutrient recovery.

    Science.gov (United States)

    Thompson Brewster, Emma; Ward, Andrew J; Mehta, Chirag M; Radjenovic, Jelena; Batstone, Damien J

    2017-03-01

    Electro-concentration of nutrients from waste streams is a promising technology to enable resource recovery, but it has several operational concerns. One key concern is the formation of inorganic scale on the concentrate side of cation exchange membranes when recovering nutrients from wastewaters containing calcium, magnesium, phosphorus and carbonate, commonly present in anaerobic digester rejection water. Electrodialytic nutrient recovery was trialed on anaerobic digester rejection water in a laboratory-scale electro-concentration unit without treatment (A), following struvite recovery (B), and following struvite recovery with the concentrate controlled at pH 5 for scaling control (C). Treatment A resulted in a large amount of scale, while treatment B significantly reduced the amount of scale formation with a reduction in magnesium phosphates, and treatment C reduced the amount of scale further by limiting the formation of calcium carbonates. Treatment C resulted in an 87 ± 7% by weight reduction in scale compared to treatment A. A mechanistic model for the inorganic processes was validated using a previously published general precipitation model based on saturation index. The model attributed the reduction in struvite scale to the removal of phosphate during the struvite pre-treatment, and the reduction in calcium carbonate scale to pH control resulting in the stripping of carbonate as carbon dioxide gas. This indicates that multiple strategies may be required to control precipitation, and that mechanistic models can assist in developing a combined approach.
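
    The precipitation model keys on the saturation index, $SI = \log_{10}(IAP/K_{sp})$. A minimal sketch of a struvite check (the ion-activity products are invented; the solubility product is an often-quoted approximate literature value, not the paper's calibration):

        import math

        def saturation_index(iap, ksp):
            # SI = log10(IAP / Ksp): SI > 0 supersaturated (scale can form),
            # SI < 0 undersaturated (scale tends to dissolve).
            return math.log10(iap / ksp)

        K_SP = 10.0 ** -13.26   # approximate struvite solubility product
        for label, iap in (("raw rejection water", 1e-12),
                           ("after struvite recovery", 1e-14)):
            print(f"{label:24s} SI = {saturation_index(iap, K_SP):+.2f}")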

  1. Scale-PC shielding analysis sequences

    Energy Technology Data Exchange (ETDEWEB)

    Bowman, S.M.

    1996-05-01

    The SCALE computational system is a modular code system for analyses of nuclear fuel facility and package designs. With the release of SCALE-PC Version 4.3, the radiation shielding analysis community now has the capability to execute the SCALE shielding analysis sequences contained in the control modules SAS1, SAS2, SAS3, and SAS4 on an MS-DOS personal computer (PC). In addition, SCALE-PC includes two new sequences, QADS and ORIGEN-ARP. The capabilities of each sequence are presented, along with example applications.

  2. Reliability of the Ego-Grasping Scale.

    Science.gov (United States)

    Lester, David

    2012-04-01

    Research using Knoblauch and Falconer's Ego-Grasping Scale is reviewed. Using a sample of 695 undergraduate students, the scale had moderate reliability (Cronbach alpha, odd-even numbered items, and test-retest), but a principal-components analysis with a varimax rotation identified five components, indicating heterogeneity in the content of the items. Lower Ego-Grasping scores appear to be associated with better psychological health. The scale has been translated and used with Korean, Kuwaiti, and Turkish students, indicating that the scale can be useful in cross-cultural studies.
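
    The reliability statistic behind such reviews is easy to reproduce. A minimal sketch of Cronbach's alpha on a respondents-by-items matrix (the scores are invented):

        import numpy as np

        def cronbach_alpha(scores):
            # scores: respondents x items. alpha = k/(k-1) * (1 - sum(var_item)/var_total).
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars / total_var)

        data = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 5, 4, 5], [1, 2, 1, 2], [3, 3, 4, 3]]
        print(round(cronbach_alpha(data), 3))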

  3. Water scaling in the North Sea oil and gas fields and scale prediction: An overview

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, M.

    1996-12-31

    Water-scaling is a common and major production chemistry problem in the North Sea oil and gas fields, and scale prediction has been an important means to assess the potential and extent of scale deposition. This paper presents an overview of sulphate and carbonate scaling problems in the North Sea and a review of several widely used, commercially available scale prediction software packages. In the paper, the water chemistries and scale types and severities are discussed relative to the geographical distribution of the fields in the North Sea. The theories behind scale prediction are then briefly described. Five scale or geochemical models are presented and various definitions of saturation index are compared and correlated. Views are then expressed on how to predict scale precipitation under some extreme conditions such as those encountered in HPHT reservoirs. 15 refs., 7 figs., 9 tabs.

  4. Scale-sensitive governance of the environment

    NARCIS (Netherlands)

    Padt, F.; Opdam, P.F.M.; Polman, N.B.P.; Termeer, C.J.A.M.

    2014-01-01

    Sensitivity to scales is one of the key challenges in environmental governance. Climate change, food production, energy supply, and natural resource management are examples of environmental challenges that stretch across scales and require action at multiple levels. Governance systems are typically

  5. An Application of Entropy in Survey Scale

    Directory of Open Access Journals (Sweden)

    Özgül Vupa

    2009-10-01

    Full Text Available This study demonstrates an application of entropy from information theory in the field of survey scales. Based on a computer anxiety scale, we find that the desired information can be obtained with fewer questions. In particular, one question is insufficient and two questions are necessary for a survey subscale.
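
    The information carried by a question can be quantified as the Shannon entropy of its response distribution; an item on which nearly everyone answers alike contributes almost nothing. A minimal sketch (the response counts are invented):

        import math

        def shannon_entropy(counts):
            # Entropy in bits of a discrete response distribution.
            total = sum(counts)
            return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

        uninformative = [98, 1, 0, 1, 0]       # nearly everyone picks option 1
        informative = [22, 18, 20, 21, 19]     # answers spread across options
        print(shannon_entropy(uninformative))  # about 0.16 bits
        print(shannon_entropy(informative))    # close to log2(5) = 2.32 bits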

  6. Anchoring the Panic Disorder Severity Scale

    Science.gov (United States)

    Keough, Meghan E.; Porter, Eliora; Kredlow, M. Alexandra; Worthington, John J.; Hoge, Elizabeth A.; Pollack, Mark H.; Shear, M. Katherine; Simon, Naomi M.

    2012-01-01

    The Panic Disorder Severity Scale (PDSS) is a clinician-administered measure of panic disorder symptom severity widely used in clinical research. This investigation sought to provide clinically meaningful anchor points for the PDSS both in terms of clinical severity as measured by the Clinical Global Impression-Severity Scale (CGI-S) and to extend…

  7. OVERVIEW OF SCALE 6.2

    Energy Technology Data Exchange (ETDEWEB)

    Rearden, Bradley T [ORNL]; Dunn, Michael E [ORNL]; Wiarda, Dorothea [ORNL]; Celik, Cihangir [ORNL]; Bekar, Kursat B [ORNL]; Williams, Mark L [ORNL]; Peplow, Douglas E. [ORNL]; Perfetti, Christopher M [ORNL]; Gauld, Ian C [ORNL]; Wieselquist, William A [ORNL]; Lefebvre, Jordan P [ORNL]; Lefebvre, Robert A [ORNL]; Havluj, Frantisek [Nuclear Research Institute, Rez, Czech Republic]; Skutnik, Steven [The University of Tennessee]; Dugan, Kevin [Texas A&M University]

    2013-01-01

    SCALE is an industry-leading suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a plug-and-play framework that includes three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 provides several new capabilities and significant improvements in many existing features, especially with expanded CE Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. A brief overview of SCALE capabilities is provided with emphasis on new features for SCALE 6.2.

  8. Developmental Work Personality Scale: An Initial Analysis.

    Science.gov (United States)

    Strauser, David R.; Keim, Jeanmarie

    2002-01-01

    The research reported in this article involved using the Developmental Model of Work Personality to create a scale to measure work personality, the Developmental Work Personality Scale (DWPS). Overall, results indicated that the DWPS may have potential applications for assessing work personality prior to client involvement in comprehensive…

  9. Scaling laws in the distribution of galaxies

    NARCIS (Netherlands)

    Jones, BJT; Martinez, VJ; Saar, E; Trimble, [No Value

    2004-01-01

    Past surveys have revealed that the large-scale distribution of galaxies in the universe is far from random: it is highly structured over a vast range of scales. Surveys being currently undertaken and being planned for the next decades will provide a wealth of information about this structure. The u

  10. Happiness Scale Interval Study. Methodological Considerations

    Science.gov (United States)

    Kalmijn, W. M.; Arends, L. R.; Veenhoven, R.

    2011-01-01

    The Happiness Scale Interval Study deals with survey questions on happiness, using verbal response options, such as "very happy" and "pretty happy". The aim is to estimate what degrees of happiness are denoted by such terms in different questions and languages. These degrees are expressed in numerical values on a continuous [0,10] scale, which are…

  11. Cardinal Scales for Public Health Evaluation

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

    Policy studies often evaluate health for a population by summing the individuals' health as measured by a scale that is ordinal or that depends on risk attitudes. We develop a method using a different type of preferences, called preference intensity or cardinal preferences, to construct scales...

  12. Adolescent Time Attitude Scale: Adaptation into Turkish

    Science.gov (United States)

    Çelik, Eyüp; Sahranç, Ümit; Kaya, Mehmet; Turan, Mehmet Emin

    2017-01-01

    This research is aimed at examining the validity and reliability of the Turkish version of the Time Attitude Scale. Data was collected from 433 adolescents; 206 males and 227 females participated in the study. Confirmatory factor analysis was performed to examine the structural validity of the scale. The internal consistency method was used for…

  13. Probing the GUT Scale with Neutrino Oscillations

    Science.gov (United States)

    Eddine Ennadifi, Salah

    In the light of the theoretical and experimental developments in the neutrino sector and their importance, we study its connection with new physics above the electroweak scale M_EW ~ 10^2 GeV. In particular, by considering neutrino oscillations with the possible effective mass, we investigate, according to the experimental data, the underlying GUT scale M_GUT ~ 10^15 GeV.
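
    A back-of-the-envelope seesaw estimate, presumably the kind of "possible effective mass" argument the abstract invokes (the record does not spell out the construction), shows how oscillation data reach up to the GUT scale:

        m_nu ≈ M_EW^2 / M_GUT ≈ (10^2 GeV)^2 / 10^15 GeV = 10^-11 GeV ≈ 10^-2 eV,

    which sits in the mass range implied by the measured atmospheric splitting Δm^2 ~ 10^-3 eV^2. Read in reverse, a measured effective neutrino mass points back to the scale of the new physics that generates it.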

  14. Scaling Laws for Mesoscale and Microscale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Spletzer, Barry

    1999-08-23

    The set of laws developed and presented here is by no means exhaustive. Techniques have been presented to aid in the development of additional scaling laws and to combine these and other laws to produce additional useful relationships. Some of the relationships produced here have yielded perhaps surprising results. Examples include the fifth order scaling law for electromagnetic motor torque and the zero order scaling law for capacitive motor power. These laws demonstrate important facts about actuators in small-scale systems. The primary intent of this introduction into scaling law analysis is to provide needed tools to examine possible areas of research in small-scale systems and direct research toward more fruitful areas. Numerous examples have been included to show the validity of developing scaling laws based on first principles and how real-world systems tend to obey these laws even when many other variables may potentially come into play. Development of further laws may well serve to provide important high-level direction to the continued development of small-scale systems.
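
    The two laws quoted above make a concrete worked example. Taking the abstract's exponents at face value (fifth order for electromagnetic torque, zero order for capacitive power), a minimal Python sketch:

        def scaled(exponent: int, s: float) -> float:
            """Power-law scaling: Q_new / Q_old = s**n for a law of order n."""
            return s ** exponent

        s = 0.1                # shrink every linear dimension tenfold
        print(scaled(5, s))    # electromagnetic motor torque: 1e-05 of original
        print(scaled(0, s))    # capacitive motor power: 1.0, i.e. unchanged

    This is the standard argument for why electrostatic actuation becomes competitive with electromagnetic actuation as devices shrink toward the microscale.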

  15. Multiscaling behavior of atomic-scale friction

    Science.gov (United States)

    Jannesar, M.; Jamali, T.; Sadeghi, A.; Movahed, S. M. S.; Fesler, G.; Meyer, E.; Khoshnevisan, B.; Jafari, G. R.

    2017-06-01

    The scaling behavior of friction between rough surfaces is a well-known phenomenon. It might be asked whether such a scaling feature also exists for friction at an atomic scale despite the absence of roughness on atomically flat surfaces. Indeed, other types of fluctuations, e.g., thermal and instrumental fluctuations, become appreciable at this length scale and can lead to scaling behavior of the measured atomic-scale friction. We investigate this using the lateral force exerted on the tip of an atomic force microscope (AFM) when the tip is dragged over the clean NaCl (001) surface in ultra-high vacuum at room temperature. Here the focus is on the fluctuations of the lateral force profile rather than its saw-tooth trend; we first eliminate the trend using the singular value decomposition technique and then explore the scaling behavior of the detrended data, which contains only fluctuations, using the multifractal detrended fluctuation analysis. The results demonstrate scaling behavior for the friction data over the range 0.2 to 2 nm, with Hurst exponent H = 0.61 ± 0.02 (1σ confidence interval). Moreover, the dependence of the generalized Hurst exponent h(q) on the index variable q confirms the multifractal, or multiscaling, behavior of the nanofriction data. These results show that the fluctuations in empirical nanofriction data are multifractal and deviate from white noise.
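
    The MFDFA step of the analysis is compact enough to sketch. The Python below is a minimal generic implementation run on synthetic data, not the authors' pipeline (in particular it omits their SVD detrending of the saw-tooth trend); a monofractal signal such as white noise should return h(q) ≈ 0.5 for all q.

        import numpy as np

        def mfdfa(x, scales, qs, order=1):
            """Minimal multifractal DFA; returns the generalized Hurst exponents h(q)."""
            y = np.cumsum(x - np.mean(x))                 # profile of the signal
            hq = []
            for q in qs:
                logF = []
                for s in scales:
                    t = np.arange(s)
                    segs = [y[i * s:(i + 1) * s] for i in range(len(y) // s)]
                    # Squared fluctuation around a polynomial fit, per segment.
                    F2 = np.array([np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                                   for seg in segs])
                    if q == 0:                            # q -> 0 limit uses a log average
                        logF.append(0.5 * np.mean(np.log(F2)))
                    else:
                        logF.append(np.log(np.mean(F2 ** (q / 2))) / q)
                hq.append(np.polyfit(np.log(scales), logF, 1)[0])   # slope = h(q)
            return np.array(hq)

        x = np.random.randn(10_000)
        print(mfdfa(x, scales=[16, 32, 64, 128, 256], qs=[-2, 0, 2]))

    A spread of h(q) values across q, as reported above, is the multifractal signature; a flat h(q) would indicate a monofractal.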

  16. Scaling laws predict global microbial diversity.

    Science.gov (United States)

    Locey, Kenneth J; Lennon, Jay T

    2016-05-24

    Scaling laws underpin unifying theories of biodiversity and are among the most predictively powerful relationships in biology. However, scaling laws developed for plants and animals often go untested or fail to hold for microorganisms. As a result, it is unclear whether scaling laws of biodiversity will span evolutionarily distant domains of life that encompass all modes of metabolism and scales of abundance. Using a global-scale compilation of ∼35,000 sites and ∼5.6⋅10^6 species, including the largest ever inventory of high-throughput molecular data and one of the largest compilations of plant and animal community data, we show similar rates of scaling in commonness and rarity across microorganisms and macroscopic plants and animals. We document a universal dominance scaling law that holds across 30 orders of magnitude, an unprecedented expanse that predicts the abundance of dominant ocean bacteria. In combining this scaling law with the lognormal model of biodiversity, we predict that Earth is home to upward of 1 trillion (10^12) microbial species. Microbial biodiversity seems greater than ever anticipated yet predictable from the smallest to the largest microbiome.
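
    The extrapolation step can be sketched in a few lines. Only the form of the dominance scaling law, N_max = c * N**d, is taken from the abstract; the coefficients below are placeholders, not the paper's fitted values.

        # Hypothetical illustration of a dominance scaling law N_max = c * N**d.
        c, d = 0.4, 0.93          # placeholder parameters, not the published fit
        N_ocean = 1.2e29          # an often-cited estimate of total ocean bacterial cells
        print(f"predicted dominant-taxon abundance: {c * N_ocean ** d:.2e}")

    Combining such a law with a lognormal abundance model fixes the shape of the distribution once total abundance and dominance are known, which is the route to the trillion-species estimate quoted above.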

  17. Stochastic dynamic equations on general time scales

    Directory of Open Access Journals (Sweden)

    Martin Bohner

    2013-02-01

    In this article, we construct the stochastic integral and stochastic differential equations on general time scales. We call these equations stochastic dynamic equations. We provide the existence and uniqueness theorem for solutions of stochastic dynamic equations. The crucial tool of our construction is a result about a connection between the time-scales Lebesgue integral and the Lebesgue integral in the usual sense.
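
    For orientation (this framing is standard in the time-scales literature, though the record does not spell it out): a time scale T is any nonempty closed subset of the reals, and the delta-derivative reduces to the ordinary derivative when T = R and to the forward difference when T = Z. The stochastic dynamic equations of the title then take the schematic integral form

        X(t) = X(t_0) + \int_{t_0}^{t} f(s, X(s)) \Delta s + \int_{t_0}^{t} g(s, X(s)) \Delta W(s),

    so a single existence-and-uniqueness theorem covers stochastic differential equations (T = R) and stochastic difference equations (T = Z) at once.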

  18. Getting to Scale with Good Educational Practice.

    Science.gov (United States)

    Elmore, Richard F.

    1996-01-01

    School organization and incentive structures help thwart large-scale adoption of innovative educational practices. Evidence from the progressive movement and past curriculum reform efforts suggest that wide-scale reforms are ineffective under current conditions. Change requires external normative structures, organizations that focus intrinsic…

  19. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  20. Saturation and geometrical scaling in small systems

    CERN Document Server

    Praszalowicz, Michal

    2016-01-01

    Saturation and geometrical scaling (GS) of gluon distributions are a consequence of the non-linear evolution equations of QCD. We argue that in pp GS holds for the inelastic cross-section rather than for the multiplicity distributions. We also discuss possible fluctuations of the proton saturation scale in pA collisions at the LHC.

  1. Designing the Nuclear Energy Attitude Scale.

    Science.gov (United States)

    Calhoun, Lawrence; And Others

    1988-01-01

    Presents a refined method for designing a valid and reliable Likert-type scale to test attitudes toward the generation of electricity from nuclear energy. Discusses various tests of validity that were used on the nuclear energy scale. Reports results of administration and concludes that the test is both reliable and valid. (CW)

  2. Statistics for Locally Scaled Point Patterns

    DEFF Research Database (Denmark)

    Prokesová, Michaela; Hahn, Ute; Vedel Jensen, Eva B.

    2006-01-01

    scale factor. The main emphasis of the present paper is on analysis of such models. Statistical methods are developed for estimation of scaling function and template parameters as well as for model validation. The proposed methods are assessed by simulation and used in the analysis of a vegetation...

  3. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that cont

  4. Student Engagement Scale: Development, Reliability and Validity

    Science.gov (United States)

    Gunuc, Selim; Kuzu, Abdullah

    2015-01-01

    In this study, the purpose was to develop a student engagement scale for higher education. The participants were 805 students. In the process of developing the item pool regarding the scale, related literature was examined in detail and interviews were held. Six factors--valuing, sense of belonging, cognitive engagement, peer relationships…

  5. Price Discrimination, Economies of Scale, and Profits.

    Science.gov (United States)

    Park, Donghyun

    2000-01-01

    Demonstrates that it is possible for economies of scale to induce a price-discriminating monopolist to sell in an unprofitable market where the average cost always exceeds the price. States that higher profits in the profitable market caused by economies of scale may exceed losses incurred in the unprofitable market. (CMK)

  6. Scale inhibition study by turbidity measurement.

    Science.gov (United States)

    Tantayakom, V; Sreethawong, T; Fogler, H Scott; de Moraes, F F; Chavadej, S

    2005-04-01

    The concept of a critical supersaturation ratio (CSSR) has been used to characterize the effectiveness of different types of scale inhibitors, inhibitor concentration, and precipitating solution pH in order to prevent the formation of barium sulfate scale. The scale inhibitors used in this work were aminotrimethylene phosphonic acid (ATMP), diethylenetriaminepentamethylene phosphonic acid (DTPMP), and phosphinopolycarboxylic acid polymer (PPCA). The CSSR at which barium sulfate precipitates was obtained as a function of time for different precipitation conditions and was used as an index to evaluate the effect of the precipitation conditions. The results showed that the CSSR decreases with increasing elapsed time after mixing the precipitating solutions, but increases with increasing scale inhibitor concentration and solution pH. The CSSR varies linearly with the log of the scale inhibitor concentration and with the precipitating solution pH. An SEM analysis showed that the higher the scale inhibitor concentration and solution pH, the smaller and more spherical the BaSO4 precipitates. Analysis of the particle size distribution revealed that increasing the elapsed time, the scale inhibitor concentration, and the precipitating solution pH all produce a broader particle size distribution and a smaller mean diameter of the BaSO4 precipitates. DTPMP and PPCA were the most effective BaSO4 scale inhibitors per ionizable proton and on a concentration basis, respectively.
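
    The reported linear dependence of CSSR on the log of inhibitor concentration and on pH amounts to a two-regressor linear model. A sketch of such a fit in Python; all data and coefficients below are invented for illustration, not taken from the study:

        import numpy as np

        conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])   # inhibitor concentration (ppm), hypothetical
        ph   = np.array([4.0, 5.0, 6.0, 5.5, 6.5])       # precipitating solution pH, hypothetical
        cssr = np.array([12.0, 30.0, 45.0, 70.0, 95.0])  # observed CSSR, hypothetical

        # Fit CSSR = a + b*log10(conc) + c*pH by least squares.
        A = np.column_stack([np.ones_like(conc), np.log10(conc), ph])
        (a, b, c), *_ = np.linalg.lstsq(A, cssr, rcond=None)
        print(f"CSSR ≈ {a:.1f} + {b:.1f}*log10(C) + {c:.1f}*pH")

    Positive fitted slopes b and c would reproduce the trends the study reports.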

  7. Phrase Completions: An Alternative to Likert Scales.

    Science.gov (United States)

    Hodge, David R.; Gillespie, David

    2003-01-01

    Delineates the problems with Likert scales, with a particular emphasis on multidimensionality and coarse response categories, and proposes a new measurement method called "phrase completions," which has been designed to circumvent the problems inherent in Likert scales. Also conducts an exploratory test, in which Likert items were adapted to…

  8. Visuomotor Dissociation in Cerebral Scaling of Size

    NARCIS (Netherlands)

    Potgieser, Adriaan R. E.; de Jong, Bauke M.

    2016-01-01

    Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in whi

  9. Achieving scale strategies for sustained competitive performance.

    Science.gov (United States)

    Grube, Mark E; Gish, Ryan S; Tkach, Sasha N

    2008-05-01

    Growth to achieve scale requires the following strategic initiatives: Having a clear understanding of what the organization is and what it wants to become. Ensuring a structured and rigorous growth process. Leveraging size to achieve benefits of scale. Recognizing the importance of physicians, ambulatory care, and primary care. Establishing and maintaining accountability as growth occurs.

  10. Scale invariant Volkov–Akulov supergravity

    Directory of Open Access Journals (Sweden)

    S. Ferrara

    2015-10-01

    A scale invariant goldstino theory coupled to supergravity is obtained as a standard supergravity dual of a rigidly scale-invariant higher-curvature supergravity with a nilpotent chiral scalar curvature. The bosonic part of this theory describes a massless scalaron and a massive axion in a de Sitter Universe.
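
    The nilpotency is what removes the fundamental scalar in goldstino constructions of this kind. Schematically, and as a standard result for a generic chiral superfield rather than a derivation specific to this paper: for X = \varphi + \sqrt{2}\,\theta G + \theta^2 F,

        X^2 = 0 \;\Rightarrow\; \varphi = G G / (2F),

    so the would-be scalar component collapses into a goldstino bilinear; per the abstract, the analogous constraint is imposed here on the chiral scalar curvature of the higher-curvature theory.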

  11. Developing a News Media Literacy Scale

    Science.gov (United States)

    Ashley, Seth; Maksl, Adam; Craft, Stephanie

    2013-01-01

    Using a framework previously applied to other areas of media literacy, this study developed and assessed a measurement scale focused specifically on critical news media literacy. Our scale appears to successfully measure news media literacy as we have conceptualized it based on previous research, demonstrated through assessments of content,…

  12. Some integral inequalities on time scales

    Institute of Scientific and Technical Information of China (English)

    Adnan Tuna; Servet Kutukcu

    2008-01-01

    In this article, we study the reverse Hölder-type inequality and the Hölder inequality in the two-dimensional case on time scales. We also obtain many integral inequalities by using Hölder inequalities on time scales, which give Hardy's inequalities as special cases.

  13. Adjustment of the Internal Tax Scale

    CERN Multimedia

    2013-01-01

    In application of Article R V 2.03 of the Staff Regulations, the internal tax scale has been adjusted with effect on 1 January 2012. The new scale may be consulted via the CERN Admin e-guide.  The notification of internal annual tax certificate for the financial year 2012 takes into account this adjustment. HR Department (Tel. 73907)

  14. Scaling analysis of meteorite shower mass distributions

    DEFF Research Database (Denmark)

    Oddershede, Lene; Meibom, A.; Bohr, Jakob

    1998-01-01

    Meteorite showers are the remains of extraterrestrial objects which are captured by the gravitational field of the Earth. We have analyzed the mass distribution of fragments from 16 meteorite showers for scaling. The distributions exhibit distinct scaling behavior over several orders of magnitude...
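
    Estimating such a scaling exponent from fragment masses is a short computation. The Python below is a generic rank-mass (complementary cumulative distribution) fit, demonstrated on synthetic Pareto data; it is not the authors' method, and their exponents are not reproduced here.

        import numpy as np

        def scaling_exponent(masses):
            """Slope of N(>m) versus m on log-log axes, i.e. the power-law exponent."""
            m = np.sort(np.asarray(masses))[::-1]      # masses in descending order
            rank = np.arange(1, len(m) + 1)            # rank k estimates N(>m_k)
            slope, _ = np.polyfit(np.log(m), np.log(rank), 1)
            return -slope

        rng = np.random.default_rng(0)
        frags = (1.0 - rng.random(500)) ** (-1.0 / 0.8)   # Pareto sample, exponent 0.8
        print(scaling_exponent(frags))                     # ≈ 0.8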

  15. Topological scaling and gap filling at crisis

    Energy Technology Data Exchange (ETDEWEB)

    Szabo, K. Gabor [Department of Mathematics, University of Kansas, Lawrence, Kansas 66045 (United States); Institute for Theoretical Physics, Eoetvoes University, P.O. Box 32, Budapest, H-1518, (Hungary); Lai, Ying-Cheng [Department of Mathematics and Department of Electrical Engineering, Arizona State University, Tempe, Arizona 85287 (United States); Tel, Tamas [Institute for Theoretical Physics, Eoetvoes University, P.O. Box 32, Budapest, H-1518, (Hungary); Grebogi, Celso [Institute for Plasma Research, Department of Mathematics, and Institute for Physical Science and Technology, University of Maryland, College Park, Maryland 20742 (United States)

    2000-05-01

    Scaling laws associated with an interior crisis of chaotic dynamical systems are studied. We argue that open gaps of the chaotic set become densely filled at the crisis due to the sudden appearance of unstable periodic orbits with extremely long periods. We formulate a scaling theory for the associated growth of the topological entropy. (c) 2000 The American Physical Society.

  16. Education, Wechsler's Full Scale IQ and "g."

    Science.gov (United States)

    Colom, Roberto; Abad, Francisco J.; Garcia, Luis F.; Juan-Espinosa, Manuel

    2002-01-01

    Investigated whether average Full Scale IQ (FSIQ) differences can be attributed to "g" using the Spanish standardization sample of the Wechsler Adult Intelligence Scale III (WAIS III) (n=703 females and 666 men). Results support the conclusion that WAIS III FSIQ does not directly or exclusively measure "g" across the full range…

  17. Facilitating Internet-Scale Code Retrieval

    Science.gov (United States)

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  18. Transdisciplinary Application of Cross-Scale Resilience

    Science.gov (United States)

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are re...

  19. Negative Life Events Scale for Students (NLESS)

    Science.gov (United States)

    Buri, John R.; Cromett, Cristina E.; Post, Maria C.; Landis, Anna Marie; Alliegro, Marissa C.

    2015-01-01

    Rationale is presented for the derivation of a new measure of stressful life events for use with students [Negative Life Events Scale for Students (NLESS)]. Ten stressful life events questionnaires were reviewed, and the more than 600 items mentioned in these scales were culled based on the following criteria: (a) only long-term and unpleasant…

  1. Reliability of Multi-Category Rating Scales

    Science.gov (United States)

    Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.

    2013-01-01

    The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although they require greater inference than traditional data counting, little is known about the inter-rater reliability of these scales. This simulation study examined the performance of nine…

  2. Determining the scale in lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Bornyakov, V.G. [Institute for High Energy Physics, Protvino (Russian Federation); Institute of Theoretical and Experimental Physics, Moscow (Russian Federation); Far Eastern Federal Univ., Vladivostok (Russian Federation). School of Biomedicine; Horsley, R. [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Hudspith, R. [York Univ., Toronto, ON (Canada). Dept. of Physics and Astronomy; and others

    2015-12-15

    We discuss scale setting in the context of 2+1 dynamical fermion simulations where we approach the physical point in the quark mass plane keeping the average quark mass constant. We have simulations at four beta values, and after determining the paths and lattice spacings, we give an estimation of the phenomenological values of various Wilson flow scales.
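
    For context, the conventional Wilson (gradient) flow scales, which the abstract assumes rather than defines, are set through the expectation value of the flowed action density E(t):

        t^2 \langle E(t) \rangle |_{t = t_0} = 0.3,        t \, d/dt [ t^2 \langle E(t) \rangle ] |_{t = w_0^2} = 0.3.

    Measuring the dimensionless combinations t_0/a^2 or w_0/a on an ensemble and equating t_0 or w_0 to its physical value then yields the lattice spacing a.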

  3. Multi-scale modeling of softening materials

    NARCIS (Netherlands)

    Lloberas Valls, O.; Simone, A.; Sluys, L.J.

    2008-01-01

    This paper presents an assessment of a two-scale framework for the study of softening materials. The procedure is based on a hierarchical Finite Element (FE) scheme in which computations are performed both at macro and mesoscopic scale levels. The methodology is chosen specifically to remain valid

  4. The Pain Catastrophizing Scale: Development and Validation.

    Science.gov (United States)

    Sullivan, Michael J. L.; And Others

    1995-01-01

    A series of 4 studies involving 547 college students and community adults report the development of the Pain Catastrophizing Scale, its validity with clinical and nonclinical samples, and its correlation with measures of related constructs. The scale provides information about heightened responses to aversive procedures or events. (SLD)

  5. Test Scaling and Value-Added Measurement

    Science.gov (United States)

    Ballou, Dale

    2009-01-01

    Conventional value-added assessment requires that achievement be reported on an interval scale. While many metrics do not have this property, application of item response theory (IRT) is said to produce interval scales. However, it is difficult to confirm that the requisite conditions are met. Even when they are, the properties of the data that…

  6. Kalman plus weights: a time scale algorithm

    Science.gov (United States)

    Greenhall, C. A.

    2001-01-01

    KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
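
    The weighting rule is simple enough to sketch. The Python below shows only the BTSE averaging step, with made-up numbers throughout; in the actual algorithm the per-clock predictions come from the single ensemble Kalman filter described above.

        import numpy as np

        readings    = np.array([12.3e-9, 11.8e-9, 12.9e-9])  # measured clock phases (s), hypothetical
        predictions = np.array([12.0e-9, 12.0e-9, 12.0e-9])  # predicted phases (s), hypothetical
        wfm_var     = np.array([1.0e-26, 4.0e-26, 2.0e-26])  # white FM variances, hypothetical

        w = (1.0 / wfm_var) / np.sum(1.0 / wfm_var)   # weights ∝ 1/variance, summing to 1
        print(np.sum(w * (readings - predictions)))   # ensemble time correction (s)

    Clocks with quieter white-FM noise therefore pull the ensemble harder, which is the stated design of the KPW weights.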

  7. Scaling service delivery in a failed state

    NARCIS (Netherlands)

    Muilerman, Sander; Vellema, Sietze

    2016-01-01

    The increased use of sustainability standards in the international trade in cocoa challenges companies to find effective modes of service delivery to large numbers of small-scale farmers. A case study of the Sustainable Tree Crops Program targeting the small-scale cocoa producers in Côte d’Ivoire

  8. The executive personal finance scale: item analyses.

    Science.gov (United States)

    Lester, David; Spinella, Marcello

    2007-12-01

    A scale devised to measure executive personal money management was examined for its factorial structure using 138 college students. On the whole, the factor analysis confirmed the subscale structure of the scale, but the Planning subscale appeared to consist of two distinct components, investment behavior and saving behavior.

  9. The minimum scale of grooving on faults

    NARCIS (Netherlands)

    Candela, T.; Brodsky, E.E.

    2016-01-01

    At the field scale, nearly all fault surfaces contain grooves generated as one side of the fault slips past the other. Grooves are so common that they are one of the key indicators of principal slip surfaces. Here, we show that at sufficiently small scales, grooves do not exist on fault surfaces. A

  10. Optimal scaling of paired comparison data

    NARCIS (Netherlands)

    van de Velden, M.

    2004-01-01

    In this paper we consider the analysis of paired comparisons using optimal scaling techniques. In particular, we will, inspired by Guttman's approach for quantifying paired comparisons, formulate a new method to obtain optimal scaling values for the subjects. We will compare our results with those o

  11. Exploring Scaling: From Concept to Applications

    Science.gov (United States)

    Milner-Bolotin, Marina

    2009-01-01

    This paper discusses the concept of scaling and its biological and engineering applications. Scaling, in a scientific context, means proportional adjustment of the dimensions of an object so that the adjusted and original objects have similar shapes yet different dimensions. The paper provides an example of a hands-on, minds-on activity on scaling…
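
    The square-cube law is the canonical instance of the proportional adjustment described here; a four-line Python illustration (generic, not taken from the paper):

        k = 10.0                            # scale every linear dimension tenfold
        area_ratio, volume_ratio = k**2, k**3
        print(area_ratio, volume_ratio)     # 100.0 1000.0
        print(area_ratio / volume_ratio)    # surface-to-volume ratio drops to 0.1

    That one ratio drives many of the biological and engineering consequences of scaling, from heat loss in small animals to the strength of scaled-up structures.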

  12. The Career-Related Parent Support Scale.

    Science.gov (United States)

    Turner, Sherri L.; Alliman-Brissett, Annette; Lapan, Richard T.; Udipi, Sharanya; Ergun, Damla

    2003-01-01

    The authors describe the construction of the Career-Related Parent Support Scale and examine the validity of the scale scores within a sample of at-risk middle school adolescents. Four empirically distinct parent support factors were confirmed along A. Bandura's sources of self-efficacy information. Gender and ethnic differences in perceived…

  13. Uncertainty Consideration in Watershed Scale Models

    Science.gov (United States)

    Watershed scale hydrologic and water quality models have been used with increasing frequency to devise alternative pollution control strategies. With recent reenactment of the 1972 Clean Water Act’s TMDL (total maximum daily load) component, some of the watershed scale models are being recommended ...

  14. Wide-Scale Adoption of Photovoltaic Energy

    DEFF Research Database (Denmark)

    Yang, Yongheng; Enjeti, Prasad; Blaabjerg, Frede

    2015-01-01

    of wide-scale penetration of single-phase PV systems in the distributed grid, disconnection under grid faults can contribute to 1) voltage flickers, 2) power outages, and 3) system instability. This article explores grid code modifications for a wide-scale adoption of PV systems in the distribution grid...

  15. Scaling service delivery in a failed state

    NARCIS (Netherlands)

    Muilerman, Sander; Vellema, Sietze

    2017-01-01

    The increased use of sustainability standards in the international trade in cocoa challenges companies to find effective modes of service delivery to large numbers of small-scale farmers. A case study of the Sustainable Tree Crops Program targeting the small-scale cocoa producers in Côte d’Ivoire

  16. Scaling Health Information Systems in Developing Countries

    DEFF Research Database (Denmark)

    Mengiste, Shegaw Anagaw; Neilsen, Petter

    2006-01-01

    This article addresses the issues of scaling health information system in the context of developing countries by taking a case study from Ethiopia. Concepts of information infrastructure have been used as an analytical lens to better understand scaling of Health Information systems. More...

  17. Scale invariant Volkov–Akulov supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Ferrara, S., E-mail: sergio.ferrara@cern.ch [Th-Ph Department, CERN, CH-1211 Geneva 23 (Switzerland); INFN – Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Italy); Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547 (United States); Porrati, M., E-mail: mp9@nyu.edu [Th-Ph Department, CERN, CH-1211 Geneva 23 (Switzerland); CCPP, Department of Physics, NYU, 4 Washington Pl., New York, NY 10003 (United States); Sagnotti, A., E-mail: sagnotti@sns.it [Th-Ph Department, CERN, CH-1211 Geneva 23 (Switzerland); Scuola Normale Superiore and INFN, Piazza dei Cavalieri 7, 56126 Pisa (Italy)

    2015-10-07

    A scale invariant goldstino theory coupled to supergravity is obtained as a standard supergravity dual of a rigidly scale-invariant higher-curvature supergravity with a nilpotent chiral scalar curvature. The bosonic part of this theory describes a massless scalaron and a massive axion in a de Sitter Universe.

  18. Tube-bending scale/protractor

    Science.gov (United States)

    Millett, A. U.

    1977-01-01

    Combination protractor and scale for measuring tube bends has novel pivot that allows tube to remain in contact with scale arms for all bend angles. Device permits rapid and accurate scribing and measurement of mockup fluid lines to obtain production data.