WorldWideScience

Sample records for shaking intensity distribution

  1. Forecasting probabilistic seismic shaking for greater Tokyo from 400 years of intensity observations (Invited)

    Science.gov (United States)

    Bozkurt, S.; Stein, R. S.; Toda, S.

    2009-12-01

    The long recorded history of earthquakes in Japan affords an opportunity to forecast seismic shaking exclusively from past shaking. We calculate the time-averaged (Poisson) probability of severe shaking by using more than 10,000 intensity observations recorded since AD 1600 in a 350-km-wide box centered on Tokyo. Unlike other hazard assessment methods, source and site effects are included without modeling, and we do not need to know the size or location of any earthquake or the location and slip rate of any fault. The two key assumptions are that the slope of the observed frequency-intensity relation at every site is the same, and that the 400-year record is long enough to encompass the full range of seismic behavior. Tests we conduct here suggest that both assumptions are sound. The resulting 30-year probability of IJMA≥6 shaking (~PGA≥0.9 g or MMI≥IX) is 30-40% in Tokyo, Kawasaki, and Yokohama, and 10-15% in Chiba and Tsukuba. This result means that there is a 30% chance that 4 million people would be subjected to IJMA≥6 shaking during an average 30-year period. We also produce exceedance maps of peak ground acceleration for building code regulations, and calculate short-term hazard associated with a hypothetical catastrophe bond. Our results resemble an independent assessment developed from conventional seismic hazard analysis for greater Tokyo. More than 10,000 intensity observations were stored and analyzed using geostatistical GIS tools.
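
    The core calculation described above can be illustrated with a short sketch (not the authors' code): an observed exceedance rate at a site is converted into a time-averaged Poisson probability for a 30-year window. All numbers below are illustrative placeholders.

    ```python
    # Minimal sketch: convert an observed rate of intensity exceedance at a site
    # into a time-averaged (Poisson) probability, P = 1 - exp(-rate * t).
    # The counts and window length below are illustrative only.
    import math

    record_years = 400.0          # length of the intensity catalogue
    n_exceedances = 4             # hypothetical count of IJMA >= 6 observations at one site
    t_forecast = 30.0             # forecast window in years

    rate = n_exceedances / record_years          # mean annual exceedance rate
    p_30yr = 1.0 - math.exp(-rate * t_forecast)  # Poisson probability of >= 1 exceedance

    print(f"annual rate = {rate:.4f}, 30-yr probability = {p_30yr:.2f}")
    ```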

  2. ShakeCast: Automating and Improving the Use of ShakeMap for Post-Earthquake Decision-Making and Response

    Science.gov (United States)

    Lin, K.; Wald, D. J.

    2007-12-01

    ShakeCast is a freely available, post-earthquake situational awareness application that automatically retrieves earthquake shaking data from ShakeMap, compares intensity measures against users' facilities, sends notifications of potential damage to responsible parties, and generates facility damage maps and other Web-based products for emergency managers and responders. ShakeMap, a tool used to portray the extent of potentially damaging shaking following an earthquake, provides overall information regarding the affected areas. When a potentially damaging earthquake occurs, utility and other lifeline managers, emergency responders, and other critical users have an urgent need for information about the impact on their particular facilities so they can make appropriate decisions and take quick actions to ensure safety and restore system functionality. To this end, ShakeCast estimates the potential damage to a user's widely distributed facilities by comparing the complex shaking distribution with the potentially highly variable damageability of their inventory to provide a simple, hierarchical list and maps showing the structures or facilities most likely impacted. All ShakeMap and ShakeCast files and products are non-proprietary to simplify interfacing with existing users' response tools and to encourage user-made enhancements to the software. ShakeCast uses standard RSS and HTTP requests to communicate with the USGS Web servers that host ShakeMaps, which are widely distributed and heavily mirrored. The RSS approach allows ShakeCast users to initiate and receive selected ShakeMap products and information on software updates. To assess facility damage, ShakeCast users can combine measured or estimated ground motion parameters with damage relationships that are pre-computed, take one of these ground motion parameters as input, and produce a multi-state discrete output of damage likelihood. Presently three common approaches are being used to provide users with an

  3. Topography and geology site effects from the intensity prediction model (ShakeMap) for Austria

    Science.gov (United States)

    del Puy Papí Isaba, María; Jia, Yan; Weginger, Stefan

    2017-04-01

    The seismicity in Austria can be categorized as moderate. Although the hazard seems to be rather low, earthquakes can cause great damage and losses, especially in densely populated and industrialized areas. It is well known that equations which predict intensity as a function of magnitude and distance, among other parameters, are a useful tool for hazard and risk assessment. Therefore, this study aims to determine an empirical model of the ground shaking intensities (ShakeMap) of a series of earthquakes that occurred in Austria between 1000 and 2014. Furthermore, the obtained empirical model will support further interpretation of both contemporary and historical earthquakes. A total of 285 events whose epicenters were located in Austria, and a total of 22,739 reported macroseismic data points from Austria and adjoining countries, were used. These events fall within the period 1000-2014 and are characterized by a local magnitude greater than 3. In the first stage of the model development, the data were carefully selected, e.g. only intensities equal to or greater than III were used. In a second stage the data were fitted to the selected empirical model. Finally, geology and topography corrections were obtained from the model residuals in order to derive intensity-based site amplification effects.
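
    As a rough illustration of the workflow described in this abstract (not the ZAMG implementation), the sketch below fits an intensity prediction equation of the form I = c0 + c1*M + c2*log10(R) by least squares to synthetic data and derives per-site corrections from the mean residuals, mirroring the residual-based site-effect step.

    ```python
    # Minimal sketch: fit an intensity prediction equation by least squares,
    # then average residuals per site to obtain empirical site corrections.
    # All data below are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    mag = rng.uniform(3.0, 6.0, n)               # local magnitudes
    dist = rng.uniform(5.0, 200.0, n)            # hypocentral distances (km)
    site = rng.integers(0, 20, n)                # hypothetical site indices
    site_amp = rng.normal(0.0, 0.3, 20)          # "true" site effects (unknown to the fit)
    intensity = 1.5 * mag - 2.5 * np.log10(dist) + site_amp[site] + rng.normal(0, 0.3, n)

    # Least-squares fit of the regional model (ignoring site terms)
    A = np.column_stack([np.ones(n), mag, np.log10(dist)])
    coeff, *_ = np.linalg.lstsq(A, intensity, rcond=None)
    residuals = intensity - A @ coeff

    # Site corrections = mean residual per site
    site_corr = np.array([residuals[site == s].mean() for s in range(20)])
    print("model coefficients:", np.round(coeff, 2))
    print("first five site corrections:", np.round(site_corr[:5], 2))
    ```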

  4. Energy distribution of the 'shake off' electrons at the 152Eu decay

    International Nuclear Information System (INIS)

    Mitrokhovich, N.F.

    2008-01-01

    Using a dedicated vacuum installation for coincidences of gamma-quanta and beta-particles with low-energy electrons, including e0-electrons of secondary electron emission (gamma-beta-e0 coincidences), the energy spectrum of 'shake-off' electrons in 152Eu decay was investigated for the first time in the range 200-1700 eV. The shake-off electrons are registered via the e0-electrons of secondary electron emission that they create. Threshold measurements yielded the integral spectrum, from which the differential spectrum was computed. It is established that the continuum of shake-off electrons is of low energy and practically ends at 400 eV, with the maximum of the energy distribution observed in the region of 300 eV.
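
    The step of recovering a differential spectrum from threshold (integral) measurements can be sketched numerically; the synthetic integral spectrum below is invented purely for illustration and is not the measured 152Eu data.

    ```python
    # Illustrative sketch only: obtain a differential spectrum from an integral
    # (threshold) measurement by numerical differentiation, as described above.
    # The synthetic "integral counts" below are invented.
    import numpy as np

    e_threshold = np.arange(200.0, 1701.0, 100.0)   # threshold energies (eV)
    # Hypothetical integral spectrum N(>E): counts of shake-off electrons above E
    integral_counts = 1.0e4 * (e_threshold + 300.0) * np.exp(-e_threshold / 300.0)

    # Differential spectrum dN/dE = -d[N(>E)]/dE
    differential = -np.gradient(integral_counts, e_threshold)
    peak_energy = e_threshold[np.argmax(differential)]
    print(f"differential spectrum peaks near {peak_energy:.0f} eV")
    ```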

  5. ShakeCast: Automating and improving the use of ShakeMap for post-earthquake decision-making and response

    Science.gov (United States)

    Wald, D.; Lin, K.-W.; Porter, K.; Turner, Loren

    2008-01-01

    When a potentially damaging earthquake occurs, utility and other lifeline managers, emergency responders, and other critical users have an urgent need for information about the impact on their particular facilities so they can make appropriate decisions and take quick actions to ensure safety and restore system functionality. ShakeMap, a tool used to portray the extent of potentially damaging shaking following an earthquake, on its own can be useful for emergency response, loss estimation, and public information. However, to take full advantage of the potential of ShakeMap, we introduce ShakeCast. ShakeCast facilitates the complicated assessment of potential damage to a user's widely distributed facilities by comparing the complex shaking distribution with the potentially highly variable damageability of their inventory to provide a simple, hierarchical list and maps of structures or facilities most likely impacted. ShakeCast is a freely available, post-earthquake situational awareness application that automatically retrieves earthquake shaking data from ShakeMap, compares intensity measures against users' facilities, sends notifications of potential damage to responsible parties, and generates facility damage maps and other Web-based products for both public and private emergency managers and responders. © 2008, Earthquake Engineering Research Institute.

  6. Earthquake Magnitude and Shaking Intensity Dependent Fragility Functions for Rapid Risk Assessment of Buildings

    Directory of Open Access Journals (Sweden)

    Marie-José Nollet

    2018-01-01

    An integrated web application, referred to as ER2 (rapid risk evaluator), is under development for user-friendly seismic risk assessment by the non-expert public safety community. The assessment of likely negative consequences is based on pre-populated databases of seismic, building inventory, and vulnerability parameters. To further accelerate the computation for near-real-time analyses, implicit building fragility curves were developed as functions of the magnitude and the intensity of the seismic shaking defined with a single intensity measure, the input spectral acceleration at 1.0 s, implicitly accounting for epicentral distance and local soil conditions. Damage probabilities were compared with those obtained with standard fragility functions that explicitly consider epicentral distance and local site class in addition to earthquake magnitude and the respective intensity of the seismic shaking. Different seismic scenarios were considered, first for 53 building classes common in Eastern Canada, and then for a proposed reduced set of 24 combined building classes. Comparison of the results indicates that damage predictions with the implicit fragility functions for short (M ≤ 5.5) and medium (5.5 < M ≤ 7.5) strong-motion durations show low variation with distance and soil class, with an average error of less than 3.6%.
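
    A minimal sketch of an implicit fragility function of the kind described above is given below; the lognormal form is a common choice, and the median, dispersion, and magnitude dependence used here are assumed values, not the ER2 parameters.

    ```python
    # Minimal sketch (parameters invented, not ER2 values): a lognormal fragility
    # curve giving the probability of reaching or exceeding a damage state as a
    # function of spectral acceleration at 1.0 s, with a magnitude-dependent
    # median to mimic the "implicit" magnitude (duration) dependence.
    import math

    def damage_probability(sa_1s, magnitude, theta_ref=0.30, beta=0.6):
        """P(damage state | Sa(1.0 s)) from a lognormal CDF.

        theta_ref : median capacity (g) for a reference M7 event (assumed value)
        beta      : lognormal dispersion (assumed value)
        """
        # Hypothetical duration effect: lower capacity for larger magnitudes
        theta = theta_ref * math.exp(-0.1 * (magnitude - 7.0))
        z = (math.log(sa_1s) - math.log(theta)) / beta
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    print(f"M5.5, Sa=0.2 g: P = {damage_probability(0.2, 5.5):.2f}")
    print(f"M7.0, Sa=0.2 g: P = {damage_probability(0.2, 7.0):.2f}")
    ```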

  7. iShake: Mobile Phones as Seismic Sensors (Invited)

    Science.gov (United States)

    Dashti, S.; Reilly, J.; Bray, J. D.; Bayen, A. M.; Glaser, S. D.; Mari, E.

    2010-12-01

    In general, iPhone and iPod Touch sensors slightly over-estimated ground motion energy (i.e., Arias Intensity, Ia). However, the mean acceleration response spectrum of the seven iPhones compared remarkably well with that of the reference high quality accelerometers. The error in the recorded intensity parameters was dependent on the characteristics of the input ground motion, particularly its PGA and Ia, and increased for stronger motions. The use of a high-friction device cover (e.g., rubber iPhone covers) on unsecured phones yielded substantially improved data by minimizing independent phone movement. Useful information on the ground motion characteristics was even extracted from unsecured phones during intense shaking events. The insight gained from these experiments is valuable in distilling information from a large number of imperfect signals from phones that may not be rigidly connected to the ground. With these ubiquitous measurement devices, a more accurate and rapid portrayal of the damage distribution during an earthquake can be provided to emergency responders and to the public.

  8. An Atlas of ShakeMaps and population exposure catalog for earthquake loss modeling

    Science.gov (United States)

    Allen, T.I.; Wald, D.J.; Earle, P.S.; Marano, K.D.; Hotovec, A.J.; Lin, K.; Hearne, M.G.

    2009-01-01

    We present an Atlas of ShakeMaps and a catalog of human population exposures to moderate-to-strong ground shaking (EXPO-CAT) for recent historical earthquakes (1973-2007). The common purpose of the Atlas and exposure catalog is to calibrate earthquake loss models to be used in the US Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER). The full ShakeMap Atlas currently comprises over 5,600 earthquakes from January 1973 through December 2007, with almost 500 of these maps constrained, to varying degrees, by instrumental ground motions, macroseismic intensity data, community internet intensity observations, and published earthquake rupture models. The catalog of human exposures is derived using current PAGER methodologies. Exposure to discrete levels of shaking intensity is obtained by correlating Atlas ShakeMaps with a global population database. Combining this population exposure dataset with historical earthquake loss data, such as PAGER-CAT, provides a useful resource for calibrating loss methodologies against a systematically derived set of ShakeMap hazard outputs. We illustrate two example uses for EXPO-CAT: (1) simple objective ranking of country vulnerability to earthquakes; and (2) the influence of time of day on earthquake mortality. In general, we observe that countries in similar geographic regions with similar construction practices tend to cluster spatially in terms of relative vulnerability. We also find little quantitative evidence to suggest that time of day is a significant factor in earthquake mortality. Moreover, earthquake mortality appears to be more systematically linked to the population exposed to severe ground shaking (Modified Mercalli Intensity VIII+). Finally, equipped with the full Atlas of ShakeMaps, we merge each of these maps and find the maximum estimated peak ground acceleration at any grid point in the world for the past 35 years. We subsequently compare this "composite ShakeMap" with existing global
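
    The exposure calculation described above can be sketched as a grid overlay; the intensity and population grids below are synthetic placeholders, not PAGER data.

    ```python
    # Minimal sketch of the exposure calculation (not PAGER code): overlay a
    # ShakeMap intensity grid on a co-registered population grid and sum the
    # population exposed at each discrete MMI level.
    import numpy as np

    rng = np.random.default_rng(1)
    mmi = rng.uniform(1.0, 9.5, size=(200, 200))        # ShakeMap MMI grid
    population = rng.integers(0, 500, size=(200, 200))  # population per grid cell

    exposure = {}
    for level in range(1, 11):                           # MMI I..X
        mask = (np.round(mmi) == level)
        exposure[level] = int(population[mask].sum())

    severe = sum(pop for lvl, pop in exposure.items() if lvl >= 8)
    print("population exposed per MMI level:", exposure)
    print("exposed to MMI VIII+ :", severe)
    ```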

  9. U.S. Geological Survey's ShakeCast: A cloud-based future

    Science.gov (United States)

    Wald, David J.; Lin, Kuo-Wan; Turner, Loren; Bekiri, Nebi

    2014-01-01

    When an earthquake occurs, the U.S. Geological Survey (USGS) ShakeMap portrays the extent of potentially damaging shaking. In turn, the ShakeCast system, a freely-available, post-earthquake situational awareness application, automatically retrieves earthquake shaking data from ShakeMap, compares intensity measures against users’ facilities, sends notifications of potential damage to responsible parties, and generates facility damage assessment maps and other web-based products for emergency managers and responders. ShakeCast is particularly suitable for earthquake planning and response purposes by Departments of Transportation (DOTs), critical facility and lifeline utilities, large businesses, engineering and financial services, and loss and risk modelers. Recent important developments to the ShakeCast system and its user base are described. The newly-released Version 3 of the ShakeCast system encompasses advancements in seismology, earthquake engineering, and information technology applicable to the legacy ShakeCast installation (Version 2). In particular, this upgrade includes a full statistical fragility analysis framework for general assessment of structures as part of the near real-time system, direct access to additional earthquake-specific USGS products besides ShakeMap (PAGER, DYFI?, tectonic summary, etc.), significant improvements in the graphical user interface, including a console view for operations centers, and custom, user-defined hazard and loss modules. The release also introduces a new adaptation option to port ShakeCast to the "cloud". Employing Amazon Web Services (AWS), users now have a low-cost alternative to local hosting by fully offloading hardware, software, and communication obligations to the cloud. Other advantages of the "ShakeCast Cloud" strategy include (1) reliability and robustness of offsite operations, (2) scalability naturally accommodated, (3) serviceability, with problems reduced due to software and hardware uniformity, (4

  10. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano

    2012-03-17

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.
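
    Conceptually, the MOS map is the point-wise maximum over all scenario shaking grids. The sketch below illustrates that merge step only; the scenario grids are random placeholders rather than simulated ground motions.

    ```python
    # Minimal sketch (not the authors' workflow): a maximum observable shaking
    # map is the point-wise maximum of the shaking grids from all individual
    # scenario simulations.
    import numpy as np

    rng = np.random.default_rng(2)
    n_scenarios = 100                      # the study merges 8,859 scenarios
    grid_shape = (300, 300)                # regional grid

    mos = np.zeros(grid_shape)             # running maximum (e.g., PGA in g)
    for _ in range(n_scenarios):
        scenario_pga = rng.lognormal(mean=-3.0, sigma=1.0, size=grid_shape)
        mos = np.maximum(mos, scenario_pga)

    print("largest PGA anywhere on the MOS map:", float(mos.max()))
    ```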

  11. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano; Basili, Roberto; Meroni, Fabrizio; Musacchio, Gemma; Mai, Paul Martin; Valensise, Gianluca

    2012-01-01

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.

  12. The ShakeMap Atlas for the City of Naples, Italy

    Science.gov (United States)

    Pierdominici, Simona; Faenza, Licia; Camassi, Romano; Michelini, Alberto; Ercolani, Emanuela; Lauciani, Valentino

    2016-04-01

    Naples is one of the most vulnerable cities in the world because it is threatened by several natural and man-made hazards: earthquakes, volcanic eruptions, tsunamis, landslides, hydrogeological disasters, and morphologic alterations due to human interference. In addition, the risk is increased by the high density of population (Naples and the surrounding area are among the most populated in Italy), and by the type and condition of buildings and monuments. In light of this, it is crucial to assess the ground shaking suffered by the city. We integrate information from five Italian databases and catalogues (DBMI11, CPTI11, CAMAL11, MOLAL08, ITACA) to build a reliable ShakeMap atlas for the area and to reconstruct the seismic history of the city from historical to recent times (1293 to 1999). This large amount of data gives the opportunity to explore several sources of information, expanding the completeness of our data set in both time and magnitude. Eighty-four earthquakes have been analyzed, and for each event a ShakeMap set has been computed using an ad hoc implementation developed for this application: (1) specific ground-motion prediction equations (GMPEs) accounting for the different attenuation properties of volcanic areas compared with tectonic ones, and (2) detailed local microzonation to include the site effects. The ShakeMap atlas has two main applications: a) it is an important instrument in seismic risk management. It quantifies the level of shaking suffered by the city during its history, and it could be extended to quantify the number of people exposed to given degrees of shaking. Intensity data provide an evaluation of the damage caused by earthquakes; the damage is closely linked to the ground shaking, building type, and vulnerability, and it is not possible to separate these contributions; b) the Atlas can be used as a starting point for Bayesian estimation of seismic hazard. This technique allows for the merging

  13. Insights into earthquake hazard map performance from shaking history simulations

    Science.gov (United States)

    Stein, S.; Vanneste, K.; Camelbeeck, T.; Vleminckx, B.

    2017-12-01

    Why recent large earthquakes caused shaking stronger than predicted by earthquake hazard maps is under debate. This issue has two parts. Verification involves how well maps implement probabilistic seismic hazard analysis (PSHA) ("have we built the map right?"). Validation asks how well maps forecast shaking ("have we built the right map?"). We explore how well a map can ideally perform by simulating an area's shaking history and comparing "observed" shaking to that predicted by a map generated for the same parameters. The simulations yield shaking distributions whose mean is consistent with the map, but individual shaking histories show large scatter. Infrequent large earthquakes cause shaking much stronger than mapped, as observed. Hence, PSHA seems internally consistent and can be regarded as verified. Validation is harder because an earthquake history can yield shaking higher or lower than that predicted while being consistent with the hazard map. The scatter decreases for longer observation times because the largest earthquakes and resulting shaking are increasingly likely to have occurred. For the same reason, scatter is much less for the more active plate boundary than for a continental interior. For a continental interior, where the mapped hazard is low, even an M4 event produces exceedances at some sites. Larger earthquakes produce exceedances at more sites. Thus many exceedances result from small earthquakes, but infrequent large ones may cause very large exceedances. However, for a plate boundary, an M6 event produces exceedance at only a few sites, and an M7 produces them in a larger, but still relatively small, portion of the study area. As reality gives only one history, and a real map involves assumptions about more complicated source geometries and occurrence rates, which are unlikely to be exactly correct and thus will contribute additional scatter, it is hard to assess whether misfit between actual shaking and a map — notably higher
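
    A toy version of the simulation experiment described above is sketched below: many synthetic shaking histories are generated at a single site and the scatter of their maximum shaking is examined. The occurrence rate, magnitude distribution, and attenuation relation are invented for illustration and are not the authors' model.

    ```python
    # Toy sketch of the simulation idea (not the authors' code): generate many
    # synthetic shaking histories at one site and look at the scatter of the
    # maximum "observed" shaking across equally valid histories.
    import numpy as np

    rng = np.random.default_rng(3)
    years, n_histories = 50, 1000
    annual_rate = 0.2                       # M >= 4.5 events affecting the site (assumed)
    b_value = 1.0

    def sample_magnitudes(n):
        # Truncated Gutenberg-Richter sampling between M4.5 and M7.5
        u = rng.uniform(size=n)
        mmin, mmax = 4.5, 7.5
        return mmin - np.log10(1 - u * (1 - 10 ** (-b_value * (mmax - mmin)))) / b_value

    def toy_pga(mag, dist_km):
        # Invented attenuation relation with lognormal variability
        ln_pga = -4.0 + 1.0 * mag - 1.3 * np.log(dist_km) + rng.normal(0, 0.6, mag.size)
        return np.exp(ln_pga)

    max_shaking = np.zeros(n_histories)
    for i in range(n_histories):
        n_events = rng.poisson(annual_rate * years)
        if n_events == 0:
            continue
        mags = sample_magnitudes(n_events)
        dists = rng.uniform(10.0, 150.0, n_events)
        max_shaking[i] = toy_pga(mags, dists).max()

    print("median max PGA :", np.round(np.median(max_shaking), 3))
    print("5th-95th pct   :", np.round(np.percentile(max_shaking, [5, 95]), 3))
    ```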

  14. Earthquake Early Warning ShakeAlert System: Testing and certification platform

    Science.gov (United States)

    Cochran, Elizabeth S.; Kohler, Monica D.; Given, Douglas; Guiwits, Stephen; Andrews, Jennifer; Meier, Men-Andrin; Ahmad, Mohammad; Henson, Ivan; Hartog, Renate; Smith, Deborah

    2017-01-01

    Earthquake early warning systems provide warnings to end users of incoming moderate to strong ground shaking from earthquakes. An earthquake early warning system, ShakeAlert, is providing alerts to beta end users in the western United States, specifically California, Oregon, and Washington. An essential aspect of the earthquake early warning system is the development of a framework to test modifications to code to ensure functionality and assess performance. In 2016, a Testing and Certification Platform (TCP) was included in the development of the Production Prototype version of ShakeAlert. The purpose of the TCP is to evaluate the robustness of candidate code that is proposed for deployment on ShakeAlert Production Prototype servers. TCP consists of two main components: a real‐time in situ test that replicates the real‐time production system and an offline playback system to replay test suites. The real‐time tests of system performance assess code optimization and stability. The offline tests comprise a stress test of candidate code to assess if the code is production ready. The test suite includes over 120 events including local, regional, and teleseismic historic earthquakes, recentering and calibration events, and other anomalous and potentially problematic signals. Two assessments of alert performance are conducted. First, point‐source assessments are undertaken to compare magnitude, epicentral location, and origin time with the Advanced National Seismic System Comprehensive Catalog, as well as to evaluate alert latency. Second, we describe assessment of the quality of ground‐motion predictions at end‐user sites by comparing predicted shaking intensities to ShakeMaps for historic events and implement a threshold‐based approach that assesses how often end users initiate the appropriate action, based on their ground‐shaking threshold. TCP has been developed to be a convenient streamlined procedure for objectively testing algorithms, and it has
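
    The threshold-based assessment mentioned above can be sketched as a simple comparison of predicted and ShakeMap intensities against a user's action threshold; the numbers below are synthetic and the sketch is not the ShakeAlert TCP code.

    ```python
    # Minimal sketch of a threshold-based alert assessment: for a user who acts
    # when predicted MMI exceeds a threshold, compare predicted and ShakeMap
    # ("observed") intensities and tally the outcomes. All values are synthetic.
    import numpy as np

    rng = np.random.default_rng(4)
    n_sites = 1000
    threshold = 4.5                                  # user's ground-shaking threshold (MMI)

    observed_mmi = rng.uniform(2.0, 8.0, n_sites)                  # from ShakeMap
    predicted_mmi = observed_mmi + rng.normal(0.0, 0.8, n_sites)   # alert-time prediction

    alerted = predicted_mmi >= threshold
    needed = observed_mmi >= threshold

    true_alerts = int(np.sum(alerted & needed))
    false_alerts = int(np.sum(alerted & ~needed))
    missed_alerts = int(np.sum(~alerted & needed))
    print(f"correct: {true_alerts}, false: {false_alerts}, missed: {missed_alerts}")
    ```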

  15. Finite-Fault and Other New Capabilities of CISN ShakeAlert

    Science.gov (United States)

    Boese, M.; Felizardo, C.; Heaton, T. H.; Hudnut, K. W.; Hauksson, E.

    2013-12-01

    Over the past 6 years, scientists at Caltech, UC Berkeley, the Univ. of Southern California, the Univ. of Washington, the US Geological Survey, and ETH Zurich (Switzerland) have developed the 'ShakeAlert' earthquake early warning demonstration system for California and the Pacific Northwest. We have now started to transform this system into a stable end-to-end production system that will be integrated into the daily routine operations of the CISN and PNSN networks. To quickly determine the earthquake magnitude and location, ShakeAlert currently processes and interprets real-time data-streams from several hundred seismic stations within the California Integrated Seismic Network (CISN) and the Pacific Northwest Seismic Network (PNSN). Based on these parameters, the 'UserDisplay' software predicts and displays the arrival and intensity of shaking at a given user site. Real-time ShakeAlert feeds are currently being shared with around 160 individuals, companies, and emergency response organizations to gather feedback about the system performance, to educate potential users about EEW, and to identify needs and applications of EEW in a future operational warning system. To improve the performance during large earthquakes (M>6.5), we have started to develop, implement, and test a number of new algorithms for the ShakeAlert system: the 'FinDer' (Finite Fault Rupture Detector) algorithm provides real-time estimates of locations and extents of finite-fault ruptures from high-frequency seismic data. The 'GPSlip' algorithm estimates the fault slip along these ruptures using high-rate real-time GPS data. And, third, a new type of ground-motion prediction model derived from over 415,000 rupture simulations along active faults in southern California improves MMI intensity predictions for large earthquakes with consideration of finite-fault, rupture directivity, and basin response effects. FinDer and GPSlip are currently being tested in real time and offline in a separate internal

  16. Isolating social influences on vulnerability to earthquake shaking: identifying cost-effective mitigation strategies.

    Science.gov (United States)

    Bhloscaidh, Mairead Nic; McCloskey, John; Pelling, Mark; Naylor, Mark

    2013-04-01

    Until expensive engineering solutions become more universally available, the objective targeting of resources at demonstrably effective, low-cost interventions might help reverse the trend of increasing mortality in earthquakes. Death tolls in earthquakes are the result of complex interactions between physical effects, such as the exposure of the population to strong shaking, and the resilience of the exposed population along with supporting critical infrastructures and institutions. The identification of socio-economic factors that contribute to earthquake mortality is crucial to identifying and developing successful risk management strategies. Here we develop a quantitative methodology to assess more objectively the ability of communities to withstand earthquake shaking, focusing in particular on those cases where risk management performance appears to exceed or fall below expectations based on economic status. Using only published estimates of the shaking intensity and population exposure for each earthquake, data that are available for earthquakes in countries irrespective of their level of economic development, we develop a model for mortality based on the contribution of population exposure to shaking only. This represents an attempt to remove, as far as possible, the physical causes of mortality from our analysis (where we consider earthquake engineering to reduce building collapse among the socio-economic influences). The systematic part of the variance with respect to this model can therefore be expected to be dominated by socio-economic factors. We find, as expected, that this purely physical analysis partitions countries in terms of basic socio-economic measures, for example GDP, focusing analytical attention on the power of economic measures to explain variance in observed distributions of earthquake risk. The model allows the definition of a vulnerability index which, although broadly it demonstrates the expected income-dependence of vulnerability to

  17. Earthquake Early Warning ShakeAlert system: West Coast wide production prototype

    Science.gov (United States)

    Kohler, Monica D.; Cochran, Elizabeth S.; Given, Douglas; Guiwits, Stephen; Neuhauser, Doug; Hensen, Ivan; Hartog, Renate; Bodin, Paul; Kress, Victor; Thompson, Stephen; Felizardo, Claude; Brody, Jeff; Bhadha, Rayo; Schwarz, Stan

    2017-01-01

    Earthquake early warning (EEW) is an application of seismological science that can give people, as well as mechanical and electrical systems, up to tens of seconds to take protective actions before peak earthquake shaking arrives at a location. Since 2006, the U.S. Geological Survey has been working in collaboration with several partners to develop EEW for the United States. The goal is to create and operate an EEW system, called ShakeAlert, for the highest risk areas of the United States, starting with the West Coast states of California, Oregon, and Washington. In early 2016, the Production Prototype v.1.0 was established for California; then, in early 2017, v.1.2 was established for the West Coast, with earthquake notifications being distributed to a group of beta users in California, Oregon, and Washington. The new ShakeAlert Production Prototype was an outgrowth from an earlier demonstration EEW system that began sending test notifications to selected users in California in January 2012. ShakeAlert leverages the considerable physical, technical, and organizational earthquake monitoring infrastructure of the Advanced National Seismic System, a nationwide federation of cooperating seismic networks. When fully implemented, the ShakeAlert system may reduce damage and injury caused by large earthquakes, improve the nation’s resilience, and speed recovery.

  18. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    Science.gov (United States)

    Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.

    2011-01-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and
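
    The final step, combining peak intensity measures with rupture probabilities into a hazard curve, can be sketched as follows; the per-rupture rates and spectral accelerations below are synthetic placeholders rather than CyberShake outputs.

    ```python
    # Minimal sketch of the hazard-curve step (not CyberShake itself): combine
    # per-rupture annual occurrence rates with the peak intensity measures of
    # their simulated seismograms to get annual exceedance rates at a site.
    import numpy as np

    rng = np.random.default_rng(5)
    n_ruptures = 5000
    annual_rate = rng.uniform(1e-5, 1e-3, n_ruptures)               # per-rupture annual rates
    peak_sa = rng.lognormal(mean=-2.5, sigma=0.9, size=n_ruptures)  # peak SA (g) at the site

    sa_levels = np.logspace(-2, 0, 25)                              # hazard-curve abscissa (g)
    exceedance_rate = np.array(
        [annual_rate[peak_sa > level].sum() for level in sa_levels]
    )
    prob_50yr = 1.0 - np.exp(-exceedance_rate * 50.0)               # Poisson conversion

    for level, p in zip(sa_levels[::6], prob_50yr[::6]):
        print(f"SA > {level:.3f} g: 50-yr exceedance probability = {p:.3f}")
    ```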

  19. Aerial shaking performance of wet Anna's hummingbirds

    Science.gov (United States)

    Ortega-Jimenez, Victor Manuel; Dudley, Robert

    2012-01-01

    External wetting poses problems of immediate heat loss and long-term pathogen growth for vertebrates. Beyond these risks, the locomotor ability of smaller animals, and particularly of fliers, may be impaired by water adhering to the body. Here, we report on the remarkable ability of hummingbirds to perform rapid shakes in order to expel water from their plumage even while in flight. Kinematic performance of aerial versus non-aerial shakes (i.e. those performed while perching) was compared. Oscillation frequencies of the head, body and tail were lower in aerial shakes. Tangential speeds and accelerations of the trunk and tail were roughly similar in aerial and non-aerial shakes, but values for head motions in air were twice as high when compared with shakes while perching. Azimuthal angular amplitudes for both aerial and non-aerial shakes reached values greater than 180° for the head, greater than 45° for the body trunk and slightly greater than 90° for the tail and wings. Using a feather on an oscillating disc to mimic shaking motions, we found that bending increased average speeds by up to 36 per cent and accelerations of the feather tip up to fourfold relative to a hypothetical rigid feather. Feather flexibility may help to enhance shedding of water and reduce body oscillations during shaking. PMID:22072447

  20. Response of base-isolated nuclear structures to extreme earthquake shaking

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Manish, E-mail: mkumar2@buffalo.edu; Whittaker, Andrew S.; Constantinou, Michael C.

    2015-12-15

    Highlights: • Response-history analysis of nuclear structures base-isolated using lead–rubber bearings is performed. • Advanced numerical model of lead–rubber bearing is used to capture behavior under extreme earthquake shaking. • Results of response-history analysis obtained using simplified and advanced model of lead–rubber bearings are compared. • Heating of the lead core and variation in buckling load and axial stiffness affect the response. - Abstract: Seismic isolation using low damping rubber and lead–rubber bearings is a viable strategy for mitigating the effects of extreme earthquake shaking on safety-related nuclear structures. The mechanical properties of these bearings are not expected to change substantially in design basis shaking. However, under shaking more intense than design basis, the properties of the lead cores in lead–rubber bearings may degrade due to heating associated with energy dissipation, some bearings in an isolation system may experience net tension, and the compression and tension stiffness may be affected by the lateral displacement of the isolation system. The effects of intra-earthquake changes in mechanical properties on the response of base-isolated nuclear power plants (NPPs) are investigated using an advanced numerical model of a lead–rubber bearing that has been verified and validated, and implemented in OpenSees. A macro-model is used for response-history analysis of base-isolated NPPs. Ground motions are selected and scaled to be consistent with response spectra for design basis and beyond design basis earthquake shaking at the site of the Diablo Canyon Nuclear Generating Station. Ten isolation systems of two periods and five characteristic strengths are analyzed. The responses obtained using simplified and advanced isolator models are compared. Strength degradation due to heating of lead cores and changes in buckling load most significantly affect the response of the base-isolated NPP.

  1. Response of base-isolated nuclear structures to extreme earthquake shaking

    International Nuclear Information System (INIS)

    Kumar, Manish; Whittaker, Andrew S.; Constantinou, Michael C.

    2015-01-01

    Highlights: • Response-history analysis of nuclear structures base-isolated using lead–rubber bearings is performed. • Advanced numerical model of lead–rubber bearing is used to capture behavior under extreme earthquake shaking. • Results of response-history analysis obtained using simplified and advanced model of lead–rubber bearings are compared. • Heating of the lead core and variation in buckling load and axial stiffness affect the response. - Abstract: Seismic isolation using low damping rubber and lead–rubber bearings is a viable strategy for mitigating the effects of extreme earthquake shaking on safety-related nuclear structures. The mechanical properties of these bearings are not expected to change substantially in design basis shaking. However, under shaking more intense than design basis, the properties of the lead cores in lead–rubber bearings may degrade due to heating associated with energy dissipation, some bearings in an isolation system may experience net tension, and the compression and tension stiffness may be affected by the lateral displacement of the isolation system. The effects of intra-earthquake changes in mechanical properties on the response of base-isolated nuclear power plants (NPPs) are investigated using an advanced numerical model of a lead–rubber bearing that has been verified and validated, and implemented in OpenSees. A macro-model is used for response-history analysis of base-isolated NPPs. Ground motions are selected and scaled to be consistent with response spectra for design basis and beyond design basis earthquake shaking at the site of the Diablo Canyon Nuclear Generating Station. Ten isolation systems of two periods and five characteristic strengths are analyzed. The responses obtained using simplified and advanced isolator models are compared. Strength degradation due to heating of lead cores and changes in buckling load most significantly affect the response of the base-isolated NPP.

  2. Behavioral Response in the Immediate Aftermath of Shaking: Earthquakes in Christchurch and Wellington, New Zealand, and Hitachi, Japan

    Directory of Open Access Journals (Sweden)

    Ihnji Jon

    2016-11-01

    This study examines people’s response actions in the first 30 min after shaking stopped following earthquakes in Christchurch and Wellington, New Zealand, and Hitachi, Japan. Data collected from 257 respondents in Christchurch, 332 respondents in Hitachi, and 204 respondents in Wellington revealed notable similarities in some response actions immediately after the shaking stopped. In all four events, people were most likely to contact family members and seek additional information about the situation. However, there were notable differences among events in the frequency of resuming previous activities. Actions taken in the first 30 min were weakly related to demographic variables, earthquake experience, contextual variables, and actions taken during the shaking, but were significantly related to perceived shaking intensity, risk perception and affective responses to the shaking, and damage/infrastructure disruption. These results have important implications for future research and practice because they identify promising avenues for emergency managers to communicate seismic risks and appropriate responses to risk area populations.

  3. Shake Table Testing of an Elevator System in a Full-Scale Five-Story Building.

    Science.gov (United States)

    Wang, Xiang; Hutchinson, Tara C; Astroza, Rodrigo; Conte, Joel P; Restrepo, José I; Hoehler, Matthew S; Ribeiro, Waldir

    2017-03-01

    This paper investigates the seismic performance of a functional traction elevator as part of a full-scale five-story building shake table test program. The test building was subjected to a suite of earthquake input motions of increasing intensity, first while the building was isolated at its base, and subsequently while it was fixed to the shake table platen. In addition, low-amplitude white noise base excitation tests were conducted while the elevator system was placed in three different configurations, namely, by varying the vertical location of its cabin and counterweight, to study the acceleration amplifications of the elevator components due to dynamic excitations. During the earthquake tests, detailed observation of the physical damage and operability of the elevator as well as its measured response are reported. Although the cabin and counterweight sustained large accelerations due to impact during these tests, the use of well-restrained guide shoes demonstrated its effectiveness in preventing the cabin and counterweight from derailment during high-intensity earthquake shaking. However, differential displacements induced by the building imposed undesirable distortion of the elevator components and their surrounding support structure, which caused damage and inoperability of the elevator doors. It is recommended that these aspects be explicitly considered in elevator seismic design.

  4. Measurement of circulation time distribution in a shaking vessel; Yodo kakuhan sonai no junkan jikan bunpu no sokutei

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Y; Hiraoka, S; Tada, Y; Ue, T [Nagoya Institute of Technology, Nagoya (Japan)]; Koh, S [Toyo Engineering Corp., Tokyo (Japan)]; Lee, Y [Keimyung University (Korea, Republic of)]

    1996-01-20

    The circulation time distribution of a liquid in a horizontally shaken vessel was observed by tracing the motion of a particle moving with the liquid. The distribution was affected by the operating conditions. The distribution was monotonic, and the flow in the vessel was almost purely rotational at large Fr number. The circulation flow rate q_c derived from the mean circulation time was correlated by Nq_c = 22 Fr^2.1 Re^0.2 (d/D)^-2.6, where Nq_c, Fr, and Re are dimensionless numbers defined as q_c/(N d^3), N^2 D/g, and N d^2/ν, respectively, and q_c is defined as πD^2 H/(4 t_c). Comparison of the mean circulation time with the mixing time showed that complete mixing was achieved after about 16 circulations. 12 refs., 9 figs., 1 tab.
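
    A worked example of the reported correlation is given below; only the correlation itself and the definitions of the dimensionless groups are taken from the abstract, while the operating values (and the interpretation of d as the shaking diameter) are assumptions for illustration.

    ```python
    # Worked example of the reported correlation Nq_c = 22 Fr^2.1 Re^0.2 (d/D)^-2.6.
    # Operating values below are invented for illustration.
    import math

    N = 2.0       # shaking frequency, 1/s (assumed)
    d = 0.05      # shaking diameter, m (assumed interpretation of d)
    D = 0.20      # vessel diameter, m (assumed)
    H = 0.15      # liquid height, m (assumed)
    nu = 1.0e-6   # kinematic viscosity of water, m^2/s
    g = 9.81      # gravitational acceleration, m/s^2

    Fr = N**2 * D / g                       # Froude number
    Re = N * d**2 / nu                      # Reynolds number
    Nq_c = 22.0 * Fr**2.1 * Re**0.2 * (d / D) ** (-2.6)

    q_c = Nq_c * N * d**3                   # circulation flow rate, m^3/s
    t_c = math.pi * D**2 * H / (4.0 * q_c)  # mean circulation time, s
    t_mix = 16.0 * t_c                      # complete mixing after ~16 circulations
    print(f"Fr={Fr:.3f}, Re={Re:.0f}, Nq_c={Nq_c:.1f}")
    print(f"q_c={q_c:.2e} m^3/s, t_c={t_c:.2f} s, t_mix~{t_mix:.1f} s")
    ```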

  5. Instrumental shaking thresholds for seismically induced landslides and preliminary report on landslides triggered by the October 17, 1989, Loma Prieta, California earthquake

    Science.gov (United States)

    Harp, E.L.

    1993-01-01

    The generation of seismically induced landslides depends on the characteristics of shaking as well as the mechanical properties of geologic materials. A very important parameter in the study of seismically induced landslides is an intensity measure based on the strong-motion accelerogram: the Arias intensity, which is proportional to the duration of the shaking record as well as its amplitude. Given a theoretical relationship between Arias intensity, magnitude, and distance, it is possible to predict how far away from the seismic source landslides are likely to occur for a given magnitude earthquake. Field investigations have established that the threshold level of Arias intensity also depends on site effects, particularly the fracture characteristics of the outcrops present. -from Author
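
    Arias intensity is defined as Ia = π/(2g) ∫ a(t)² dt. The sketch below evaluates it for a synthetic accelerogram; a real strong-motion record would be used in practice.

    ```python
    # Minimal sketch: Arias intensity Ia = pi/(2g) * integral of a(t)^2 dt.
    # The accelerogram below is a synthetic placeholder, not a real record.
    import numpy as np

    g = 9.81
    dt = 0.01                                         # sample interval, s
    t = np.arange(0.0, 20.0, dt)
    # Synthetic accelerogram: Gaussian-modulated noise with peak around 0.2 g
    accel = 0.2 * g * np.exp(-((t - 6.0) / 3.0) ** 2) \
        * np.random.default_rng(6).normal(size=t.size)

    arias = np.pi / (2.0 * g) * np.sum(accel**2) * dt  # m/s
    print(f"Arias intensity = {arias:.3f} m/s")
    ```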

  6. Shake Warning: Helping People Stay Safe With Lots of Small Boxes in the Ground to Warn Them About Strong Shaking

    Science.gov (United States)

    Reusch, M.

    2017-12-01

    A group of people at schools are joining with the group of people in control of making pictures of the state of rocks on the ground and water in our land. They are working on a plan to help all people be safe in the case of very big ground shaking (when ground breaks in sight or under ground). They will put many small boxes all over the states in the direction of where the sun sets to look for the first shake that might be a sign of an even bigger shake to come. They tell a big computer (with much power) in several large cities in those states. These computers will decide if the first shake is a sign of a very large and close ground shake, a far-away ground shake, a small but close ground shake, or even just a sign of a shake that people wanted to make. If it is a sign of a close and really big shake, then the computers will tell the phones and computers of many people to help them take safe steps before the big shaking arrives where they are. This warning might be several seconds or maybe a couple of minutes. People will be able to hide, take cover, and hold on under tables and desks in case things fall from walls and places up high in their home and work. Doctors will be able to pause hard work and boxes that move people up and down in homes, businesses, and stores will be able to stop on the next floor and open their doors to let people out and not get stuck. It will help slow down trains to be safe and not fly off of the track as well as it will help to shut off water and air that warms homes and is used for when you make food hot. To make this plan become real, people who work for these groups are putting more small boxes in areas where there are not enough and that there are many people. They are also putting small boxes in places where there are no boxes but the big shake might come from that direction. There are problems to get past such as needing many more small boxes, more people to help with this plan, and getting all people who live in these areas to

  7. Reconstituting botulinum toxin drugs: shaking, stirring or what?

    Science.gov (United States)

    Dressler, Dirk; Bigalke, Hans

    2016-05-01

    Most botulinum toxin (BT) drugs are stored as powders which need to be reconstituted with normal saline before clinical use. As botulinum neurotoxin (BNT), the therapeutically active ingredient, is a large double-stranded protein, the process of reconstitution should be performed with special attention to the mechanical stress applied. We wanted to test the mechanical stability of BNT during the reconstitution process. For this, 100 MU onabotulinumtoxinA (Botox®, Irvine, CA, USA) was reconstituted with 2.0 ml of NaCl/H2O. Gentle reconstitution (GR) was performed with a 5 ml syringe, a 0.90 × 70 mm injection needle, one cycle of injection-aspiration-injection and two gentle shakes of the vial. Aggressive reconstitution (AR) was performed with a 5 ml syringe, a 0.40 × 40 mm injection needle, ten injection-aspiration-injection cycles and 30 s of continuous shaking of the vial. AR increased the time to paralysis in the mouse hemidiaphragm assay (HDA) from 72.0 ± 4.6 to 106.0 ± 16.0 min (*p = 0.002, two-tailed t test after Kolmogorov-Smirnov test with Lilliefors correction for normal distribution). Construction of a calibration curve revealed that the increase in the time to paralysis corresponded to a loss of potency from 100 to 58 MU (-42%). BT users should use large-diameter injection needles for reconstitution, apply two or three injection-aspiration-injection cycles and, perhaps, shake the vial a few times to rinse the entire glass wall. Aggressive reconstitution with small-diameter needles, prolonged injection-aspiration-injection and violent shaking should be avoided.

  8. MyShake - Smartphone seismic network powered by citizen scientists

    Science.gov (United States)

    Kong, Q.; Allen, R. M.; Schreier, L.; Strauss, J. A.

    2017-12-01

    MyShake is a global smartphone seismic network that harnesses the power of crowdsourcing. It is driven by the citizen scientists who run MyShake on their personal smartphones. It has two components: an Android application running on the smartphones to detect earthquake-like motion, and a network detection algorithm to aggregate results from multiple smartphones to confirm when an earthquake occurs. The MyShake application was released to the public on Feb 12th, 2016. Within the first year, more than 250,000 people around the world downloaded the MyShake app. More than 500 earthquakes were recorded by the smartphones in this period, including events in Chile, Argentina, Mexico, Morocco, Greece, Nepal, New Zealand, Taiwan, Japan, and across North America. Currently, we are working on earthquake early warning with the MyShake network, and the shaking data provided by MyShake is a unique dataset that can be used by the research community.

  9. Mango Shake

    Science.gov (United States)

    ... this page: https://medlineplus.gov/recipe/mangoshake.html Mango Shake. Prep time: 5 minutes; Cook time: 0 minutes ... cup low-fat (1 percent) milk, 4 Tbsp frozen mango juice (or 1 fresh pitted mango), 1 small ...

  10. A revised ground-motion and intensity interpolation scheme for shakemap

    Science.gov (United States)

    Worden, C.B.; Wald, D.J.; Allen, T.I.; Lin, K.; Garcia, D.; Cua, G.

    2010-01-01

    We describe a weighted-average approach for incorporating various types of data (observed peak ground motions and intensities and estimates from ground-motion prediction equations) into the ShakeMap ground motion and intensity mapping framework. This approach represents a fundamental revision of our existing ShakeMap methodology. In addition, the increased availability of near-real-time macroseismic intensity data, the development of new relationships between intensity and peak ground motions, and new relationships to directly predict intensity from earthquake source information have facilitated the inclusion of intensity measurements directly into ShakeMap computations. Our approach allows for the combination of (1) direct observations (ground-motion measurements or reported intensities), (2) observations converted from intensity to ground motion (or vice versa), and (3) estimated ground motions and intensities from prediction equations or numerical models. Critically, each of the aforementioned data types must include an estimate of its uncertainties, including those caused by scaling the influence of observations to surrounding grid points and those associated with estimates given an unknown fault geometry. The ShakeMap ground-motion and intensity estimates are an uncertainty-weighted combination of these various data and estimates. A natural by-product of this interpolation process is an estimate of total uncertainty at each point on the map, which can be vital for comprehensive inventory loss calculations. We perform a number of tests to validate this new methodology and find that it produces a substantial improvement in the accuracy of ground-motion predictions over empirical prediction equations alone.
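
    The uncertainty-weighted combination described above can be illustrated at a single grid point with an inverse-variance weighted mean; the values and uncertainties below are illustrative, and this sketch is not the ShakeMap implementation.

    ```python
    # Minimal sketch of an uncertainty-weighted combination at one grid point:
    # each input carries its own variance, and the combined estimate is the
    # inverse-variance weighted mean. All numbers are illustrative.
    import numpy as np

    # (value, sigma) pairs in log ground-motion units
    direct_obs = (np.log(0.25), 0.10)     # nearby recorded PGA, small uncertainty
    converted_obs = (np.log(0.30), 0.35)  # PGA converted from a reported intensity
    gmpe_estimate = (np.log(0.18), 0.60)  # prediction-equation estimate

    values = np.array([v for v, s in (direct_obs, converted_obs, gmpe_estimate)])
    sigmas = np.array([s for v, s in (direct_obs, converted_obs, gmpe_estimate)])

    weights = 1.0 / sigmas**2
    combined = np.sum(weights * values) / np.sum(weights)
    combined_sigma = np.sqrt(1.0 / np.sum(weights))

    print(f"combined PGA = {np.exp(combined):.3f} g, sigma(ln) = {combined_sigma:.2f}")
    ```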

  11. Development and utilization of USGS ShakeCast for rapid post-earthquake assessment of critical facilities and infrastructure

    Science.gov (United States)

    Wald, David J.; Lin, Kuo-wan; Kircher, C.A.; Jaiswal, Kishor; Luco, Nicolas; Turner, L.; Slosky, Daniel

    2017-01-01

    The ShakeCast system is an openly available, near real-time post-earthquake information management system. ShakeCast is widely used by public and private emergency planners and responders, lifeline utility operators and transportation engineers to automatically receive and process ShakeMap products for situational awareness, inspection priority, or damage assessment of their own infrastructure or building portfolios. The success of ShakeCast to date and its broad, critical-user base mandate improved software usability and functionality, including improved engineering-based damage and loss functions. In order to make the software more accessible to novice users—while still utilizing advanced users’ technical and engineering background—we have developed a “ShakeCast Workbook”, a well-documented, Excel spreadsheet-based user interface that allows users to input notification and inventory data and export the XML files requisite for operating the ShakeCast system. Users will be able to select structures based on a minimum set of user-specified facility attributes (building location, size, height, use, construction age, etc.). “Expert” users will be able to import user-modified structural response properties into the facility inventory associated with the HAZUS Advanced Engineering Building Modules (AEBM). The goal of the ShakeCast system is to provide simplified real-time potential impact and inspection metrics (i.e., green, yellow, orange and red priority ratings) to allow users to institute customized earthquake response protocols. Previously, fragilities were approximated using individual ShakeMap intensity measures (IMs, specifically PGA and 0.3 and 1 s spectral accelerations) for each facility, but we are now performing capacity-spectrum damage state calculations using a more robust characterization of spectral demand. We are also developing methods for the direct import of ShakeMap’s multi-period spectra in lieu of the assumed three-domain design spectrum (at 0.3s for

  12. CyberShake-derived ground-motion prediction models for the Los Angeles region with application to earthquake early warning

    Science.gov (United States)

    Bose, Maren; Graves, Robert; Gill, David; Callaghan, Scott; Maechling, Phillip J.

    2014-01-01

    Real-time applications such as earthquake early warning (EEW) typically use empirical ground-motion prediction equations (GMPEs) along with event magnitude and source-to-site distances to estimate expected shaking levels. In this simplified approach, effects due to finite-fault geometry, directivity and site and basin response are often generalized, which may lead to a significant under- or overestimation of shaking from large earthquakes (M > 6.5) in some locations. For enhanced site-specific ground-motion predictions considering 3-D wave-propagation effects, we develop support vector regression (SVR) models from SCEC CyberShake low-frequency simulations of more than 415,000 finite-fault rupture scenarios (6.5 ≤ M ≤ 8.5) for southern California defined in UCERF 2.0. We use CyberShake to demonstrate the application of synthetic waveform data to EEW as a ‘proof of concept’, being aware that these simulations are not yet fully validated and might not appropriately sample the range of rupture uncertainty. Our regression models predict the maximum and the temporal evolution of instrumental intensity (MMI) at 71 selected test sites using only the hypocentre, magnitude and rupture ratio, which characterizes uni- and bilateral rupture propagation. Our regression approach is completely data-driven (where here the CyberShake simulations are considered data) and does not enforce pre-defined functional forms or dependencies among input parameters. The models were established from a subset (∼20 per cent) of CyberShake simulations, but can explain the MMI values of all >400 k rupture scenarios with a standard deviation of about 0.4 intensity units. We apply our models to determine threshold magnitudes (and warning times) for various active faults in southern California that earthquakes need to exceed to cause at least ‘moderate’, ‘strong’ or ‘very strong’ shaking in the Los Angeles (LA) basin. These thresholds are used to construct a simple and robust EEW algorithm: to

  13. Roles of Pauli correlations, channel couplings, and shake-off in ion-induced KL/sup v/ and K2L/sup v/ multiple-vacancy production

    International Nuclear Information System (INIS)

    Becker, R.L.; Ford, A.L.; Reading, J.F.

    1983-01-01

    Cross sections for target K-plus-L-shell multiple-vacancy production by ions can be inferred from experimental measurements of K x-ray and Auger satellite intensities. The theory of K/sup n/L/sup v/ multiple-vacancy distributions has been generalized from the single-particle model (the statistically independent electron approximation) to the independent Fermi particle model. The Pauli correlations (electron exchange terms) are found to nearly cancel in many cases because of a tendency toward random phases. This results in the first quantal demonstration that the vacancy distribution is nearly binomial (but slightly narrower). Calculations have been generalized from the traditional first-order approximations to unitary approximations (first Magnus and coupled-channels) which correctly predict the saturation of the mean vacancy probability with increasing projectile charge. The recent availability of satellite and hypersatellite data for the same collision system makes possible the beginning of an investigation of the effects of increased removal energies and increased shaking in hypersatellites (K 2 L/sup v/) as compared with satellites. We review our unified treatment of ion-plus-shaking induced amplitudes for L-vacancy production accompanying ion-generated K-holes. Calculations for C 6+ + Ne satellite and hypersatellite vacancy distributions are presented

  14. Project of Near-Real-Time Generation of ShakeMaps and a New Hazard Map in Austria

    Science.gov (United States)

    Jia, Yan; Weginger, Stefan; Horn, Nikolaus; Hausmann, Helmut; Lenhardt, Wolfgang

    2016-04-01

    Target-orientated prevention and effective crisis management can reduce or avoid damage and save lives in case of a strong earthquake. To achieve this goal, a project for automatically generated ShakeMaps (maps of ground motion and shaking intensity) and for updating the Austrian hazard map was started at ZAMG (Zentralanstalt für Meteorologie und Geodynamik) in 2015. The first goal of the project is the near-real-time generation of ShakeMaps following strong earthquakes in Austria, to provide rapid, accurate and official information in support of governmental crisis management. Using newly developed methods and software from SHARE (Seismic Hazard Harmonization in Europe) and GEM (Global Earthquake Model), which allow a transnational analysis at the European level, a new generation of Austrian hazard maps will ultimately be calculated. More information and the status of our project will be given in this presentation.

  15. Shaking table control taking account of reaction force. Two-degree-of-freedom controller design of shaking-table acceleration control

    International Nuclear Information System (INIS)

    Hironaka, Koji; Suzuki, Kitami; Narutaki, Mamoru; Tagawa, Yasutaka

    2011-01-01

    When carrying out a seismic performance examination of a structure by using a shaking table, it is important to reproduce the acceleration faithfully. In the conventional method, the acceleration wave is transformed into a displacement wave and a hydraulic actuator is used for displacement control. However, this method has several disadvantages related to disturbance rejection, tracking performance, and stability. In this study, we have developed full-closed-loop compensation, in which the shaking-table acceleration is used as the feedback signal for the reference acceleration wave. We also adopt the dual model matching (DMM) control technique in order to design the controller. To confirm the disturbance-rejection performance and to investigate how DMM control enhances the reproducibility of the target waveform, we perform an experiment using a one-degree-of-freedom specimen placed on a shaking table driven by a hydraulic actuator. (author)

  16. Real-time 3-D space numerical shake prediction for earthquake early warning

    Science.gov (United States)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, waves are assumed to propagate over the 2-D surface of the earth in these methods. In fact, since seismic waves propagate through the 3-D volume of the earth, 2-D modelling of wave propagation results in inaccurate estimates. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and that overprediction is alleviated when the 3-D space model is used.

  17. Shake-off processes at the electron transitions in atoms

    International Nuclear Information System (INIS)

    Matveev, V.I.; Parilis, Eh.S.

    1982-01-01

    Elementary processes in multielectron atoms - radiative and Auger transitions, photoionization, ionization by electron impact, etc. - are usually followed by the relaxation of electron shells. The conditions under which such a multielectron problem can be solved in the shake-off approximation are considered. The shake-off processes occurring as a result of electron transitions are described from a general point of view. The common characteristics and peculiar features of this type of excitation, in comparison with electron shake-off under nuclear transformations, are pointed out. Several electron shake-off processes are considered, namely: the radiative Auger effect, the 'two electrons-one photon' transition, dipole ionization, spectral line broadening, post-collision interaction, Auger decay stimulated by collision with fast electrons, and three-electron Auger transitions (double and half Auger effect). Their classification is given according to the type of electron transition causing the shake-off process. The experimental data are presented and the methods of theoretical description are reviewed. Other similar effects which could follow transitions in electron shells are pointed out. The derivation of the shake-off approximation is presented, and it is pointed out that this approach is analogous to the distorted-wave approximation in scattering theory. It is shown that in atoms the shake-off approximation is a very effective method, which allows one to obtain the probabilities of different electronic effects

  18. Optimizing CyberShake Seismic Hazard Workflows for Large HPC Resources

    Science.gov (United States)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2014-12-01

    The CyberShake computational platform is a well-integrated collection of scientific software and middleware that calculates 3D simulation-based probabilistic seismic hazard curves and hazard maps for the Los Angeles region. Currently each CyberShake model comprises about 235 million synthetic seismograms from about 415,000 rupture variations computed at 286 sites. CyberShake integrates large-scale parallel and high-throughput serial seismological research codes into a processing framework in which early stages produce files used as inputs by later stages. Scientific workflow tools are used to manage the jobs, data, and metadata. The Southern California Earthquake Center (SCEC) developed the CyberShake platform using USC High Performance Computing and Communications systems and open-science NSF resources. CyberShake calculations were migrated to the NSF Track 1 system NCSA Blue Waters when it became operational in 2013, via an interdisciplinary team approach including domain scientists, computer scientists, and middleware developers. Due to the excellent performance of Blue Waters and CyberShake software optimizations, we reduced the makespan (a measure of wallclock time-to-solution) of a CyberShake study from 1467 to 342 hours. We will describe the technical enhancements behind this improvement, including judicious introduction of new GPU software, improved scientific software components, increased workflow-based automation, and Blue Waters-specific workflow optimizations. Our CyberShake performance improvements highlight the benefits of scientific workflow tools. The CyberShake workflow software stack includes the Pegasus Workflow Management System (Pegasus-WMS, which includes Condor DAGMan), HTCondor, and Globus GRAM, with Pegasus-mpi-cluster managing the high-throughput tasks on the HPC resources. The workflow tools handle data management, automatically transferring about 13 TB back to SCEC storage. We will present performance metrics from the most recent CyberShake

  19. Shaking table testing of mechanical components

    International Nuclear Information System (INIS)

    Jurukovski, D.; Taskov, Lj.; Mamucevski, D.; Petrovski, D.

    1995-01-01

    Presented is the experience of the Institute of Earthquake Engineering and Engineering Seismology, Skopje, Republic of Macedonia, in seismic qualification of mechanical components by shaking table testing. Technical data and characteristics for the three shaking tables available at the Institute are given. Also given, for characteristic mechanical components tested at the Institute laboratories, are basic data such as the producer, testing investor, description of the component, testing regulation, testing equipment, and final user of the results. (author)

  20. Raspberry Shake- A World-Wide Citizen Seismograph Network

    Science.gov (United States)

    Christensen, B. C.; Blanco Chia, J. F.

    2017-12-01

    Raspberry Shake was conceived as an inexpensive plug-and-play solution to satisfy the need for universal, quick and accurate earthquake detections. First launched on Kickstarter's crowdfunding platform in July of 2016, the Raspberry Shake project was funded within hours of the launch date and, by the end of the campaign, reached more than 1000% of its initial funding goal. This demonstrated for the first time that there exists a strong interest in personal seismographs among Makers, Hobbyists and Do-It-Yourselfers. From there, a citizen scientist network was created and it has been growing steadily. The Raspberry Shake network is currently being used in conjunction with publicly available broadband data from the GSN and other state-run seismic networks available through the IRIS, Geoscope and GEOFON data centers to detect and locate earthquakes large and small around the globe. Raspberry Shake looks well positioned to improve local monitoring of earthquakes on a global scale, deepen communities' understanding of earthquakes, and serve as a formidable teaching tool. We present the main results of the project, the current state of the network, and the new Raspberry Shake models that are being built.

  1. Using CyberShake Workflows to Manage Big Seismic Hazard Data on Large-Scale Open-Science HPC Resources

    Science.gov (United States)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2015-12-01

    The CyberShake computational platform, developed by the Southern California Earthquake Center (SCEC), is an integrated collection of scientific software and middleware that performs 3D physics-based probabilistic seismic hazard analysis (PSHA) for Southern California. CyberShake integrates large-scale and high-throughput research codes to produce probabilistic seismic hazard curves for individual locations of interest and hazard maps for an entire region. A recent CyberShake calculation produced about 500,000 two-component seismograms for each of 336 locations, resulting in over 300 million synthetic seismograms in a Los Angeles-area probabilistic seismic hazard model. CyberShake calculations require a series of scientific software programs. Early computational stages produce data used as inputs by later stages, so we describe CyberShake calculations using a workflow definition language. Scientific workflow tools automate and manage the input and output data and enable remote job execution on large-scale HPC systems. To satisfy the requests of broad impact users of CyberShake data, such as seismologists, utility companies, and building code engineers, we successfully completed CyberShake Study 15.4 in April and May 2015, calculating a 1 Hz urban seismic hazard map for Los Angeles. We distributed the calculation between the NSF Track 1 system NCSA Blue Waters, the DOE Leadership-class system OLCF Titan, and USC's Center for High Performance Computing. This study ran for over 5 weeks, burning about 1.1 million node-hours and producing over half a petabyte of data. The CyberShake Study 15.4 results doubled the maximum simulated seismic frequency from 0.5 Hz to 1.0 Hz as compared to previous studies, representing a factor of 16 increase in computational complexity. We will describe how our workflow tools supported splitting the calculation across multiple systems. We will explain how we modified CyberShake software components, including GPU implementations and
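
    The quoted factor of 16 is consistent with the usual scaling argument for explicit finite-difference wave simulation, assuming the grid spacing and time step both scale inversely with the maximum resolved frequency (an assumption about the solver, not stated in the record):

        $\mathrm{cost} \propto \Delta x^{-3}\,\Delta t^{-1} \propto f_{\max}^{3}\cdot f_{\max} = f_{\max}^{4}$, hence $(1.0\ \mathrm{Hz}/0.5\ \mathrm{Hz})^{4} = 2^{4} = 16$.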

  2. Shaking table testing of electrical equipment in Argentina

    International Nuclear Information System (INIS)

    Carmona, J.S.; Zabala, F.; Santalucia, J.; Sisterna, C.; Magrini, M.; Oldecop, L.

    1995-01-01

    This paper describes the testing facility, the methodology applied and the results obtained in the seismic qualification tests of different types of electrical equipment. These tests were carried out on a shaking table that was developed and built at the Earthquake Research Institute of the National University of San Juan, Argentina. The equipment tested consists of 500 kV and 132 kV current transformers, a 500 kV voltage transformer, a 145 kV disconnector and a relay cabinet. The acceleration response of the tested equipment was measured at several locations distributed along its height, and strains were measured at critical points by strain gauges cemented on the base of the porcelain insulator. All the information was recorded with a data acquisition system at a sampling rate of 200 samples per second per channel. The facility developed at this Institute is the largest one in operation in Argentina at present, and the equipment tested is the tallest, heaviest and most slender that has been seismically qualified on a shaking table in this country. These tests have been a valuable experience in the field of structural dynamic testing applied to equipment of hydroelectric and nuclear power plants. (author)

  3. Earthquake shaking hazard estimates and exposure changes in the conterminous United States

    Science.gov (United States)

    Jaiswal, Kishor S.; Petersen, Mark D.; Rukstales, Kenneth S.; Leith, William S.

    2015-01-01

    A large portion of the population of the United States lives in areas vulnerable to earthquake hazards. This investigation aims to quantify the population and infrastructure within the conterminous U.S. that are exposed to varying levels of earthquake ground motion by systematically analyzing the last four cycles of the U.S. Geological Survey's (USGS) National Seismic Hazard Models (published in 1996, 2002, 2008 and 2014). Using the 2013 LandScan data, we estimate the numbers of people who are exposed to potentially damaging ground motions (peak ground accelerations at or above 0.1g). At least 28 million (~9% of the total population) may experience 0.1g shaking at relatively frequent intervals (annual rate of 1 in 72 years, or 50% probability of exceedance (PE) in 50 years), 57 million (~18% of the total population) may experience this level of shaking at moderately frequent intervals (annual rate of 1 in 475 years, or 10% PE in 50 years), and 143 million (~46% of the total population) may experience such shaking at relatively infrequent intervals (annual rate of 1 in 2,475 years, or 2% PE in 50 years). We also show that a significant number of critical infrastructure facilities are located in high earthquake-hazard areas (Modified Mercalli Intensity ≥ VII with a moderately frequent recurrence interval).
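
    For reference, the exceedance probabilities quoted above follow from the standard Poisson relation between the annual exceedance rate $\lambda$ and the exposure time $t$,

        $P = 1 - e^{-\lambda t}$,

    so that for $t = 50$ years: $\lambda = 1/72$ per year gives $P \approx 0.50$, $\lambda = 1/475$ per year gives $P \approx 0.10$, and $\lambda = 1/2475$ per year gives $P \approx 0.02$, matching the 50%, 10% and 2% PE values above.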

  4. Regression analysis of MCS Intensity and peak ground motion data in Italy

    Science.gov (United States)

    Faenza, L.; Michelini, A.

    2009-04-01

    Intensity scales are historically important because no instrumentation is necessary, and useful measurements of earthquake shaking can be made by an unequipped observer. The use of macroseismic data is essential for the revision of historical seismicity and of great importance for seismic hazard assessment of vulnerable areas. The ShakeMap procedure (Wald et al., Earthquake Spectra, 15, 1999) provides instrumentally based estimates of intensity maps. In Italy, intensities have hitherto been reported using the MCS (Mercalli-Cancani-Sieberg) intensity scale. The DBMI2004 (and the most recent DBMI08) report intensities for earthquakes in Italy dating back to the Roman age. In order to fully exploit the potential of such a long intensity catalogue for past large events, and with the aim of presenting ShakeMaps using an intensity scale consistent with that of the past, we have re-calibrated the relationships between MCS intensity and observed peak ground motion (PGM) values, in terms of both peak ground acceleration and peak ground velocity. To this end, we have used the two most updated and complete datasets available for Italy - the strong-motion Itaca database and the DBMI08 macroseismic database. In this work we have first assembled a data set consisting of PGM-intensity pairs, and we have then determined the most suitable regression parameters. Many tests have been made to quantify the accuracy and robustness of the results. The new instrumental intensity scale is going to be adopted for mapping the level of shaking resulting from earthquakes in Italy, replacing the instrumental Modified Mercalli scale currently in use (Michelini et al., SRL, 79, 2008), and to determine ShakeMaps for historical events.
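
    A minimal sketch (synthetic data and made-up coefficients, not the Italian regression) of the kind of relation between macroseismic intensity and the logarithm of peak ground motion that such a calibration produces:

        # Sketch: least-squares regression of intensity against log10(PGA).
        # The data and the recovered coefficients below are synthetic and illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        pga = rng.uniform(0.01, 0.5, 200)                              # peak ground acceleration, g
        imcs = 2.5 * np.log10(pga) + 7.0 + rng.normal(0, 0.5, 200)     # made-up intensities

        a, b = np.polyfit(np.log10(pga), imcs, 1)                      # I = a*log10(PGA) + b
        print(f"I_MCS = {a:.2f} * log10(PGA) + {b:.2f}")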

  5. MyEEW: A Smartphone App for the ShakeAlert System

    Science.gov (United States)

    Strauss, J. A.; Allen, S.; Allen, R. M.; Hellweg, M.

    2015-12-01

    Earthquake Early Warning (EEW) is a system that can provide a few to tens of seconds warning prior to ground shaking at a user's location. The goal and purpose of such a system is to reduce, or minimize, the damage, costs, and casualties resulting from an earthquake. A demonstration earthquake early warning system (ShakeAlert) is undergoing testing in the United States by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, University of Washington, the USGS, and beta users in California and the Pacific Northwest. The UC Berkeley Seismological Laboratory has created a smartphone app called MyEEW, which interfaces with the ShakeAlert system to deliver early warnings to individual users. Many critical facilities (transportation, police, and fire) have control rooms, which could run a centralized interface, but our ShakeAlert Beta Testers have also expressed their need for mobile options. This app augments the basic ShakeAlert Java desktop applet by allowing workers off-site (or merely out of hearing range) to be informed of coming hazards. MyEEW receives information from the ShakeAlert system to provide users with real-time information about shaking that is about to happen at their individual location. It includes a map, timer, and earthquake information similar to the Java desktop User Display. The app will also feature educational material to help users craft their own response and resiliency strategies. The app will be open to UC Berkeley Earthquake Research Affiliates members for testing in the near future.

  6. Practices of shake-flask culture and advances in monitoring CO2 and O2.

    Science.gov (United States)

    Takahashi, Masato; Aoyagi, Hideki

    2018-05-01

    About 85 years have passed since the shaking culture was devised. Since then, various monitoring devices have been developed to measure culture parameters. The O2 consumed and CO2 produced by the respiration of cells in shaking cultures are of paramount importance due to their presence in both the culture broth and the headspace of the shake flask. Monitoring in situ conditions during shake-flask culture is useful for analysing the behaviour of O2 and CO2, which interact according to Henry's law, and is more convenient than conventional sampling that requires interruption of shaking. In situ monitoring devices for shake-flask cultures are classified as direct or the recently developed bypass type. It is important to understand the characteristics of each type, along with their unintended effects on shake-flask cultures, in order to improve the existing devices and culture conditions. Technical developments in bypass monitoring devices are strongly desired in the future. It is also necessary to understand the mechanism underlying conventional shake-flask culture. The existing shaking culture methodology can be expanded into next-generation shake-flask cultures constituting a novel culture environment through a judicious selection of monitoring devices depending on the intended purpose of the shake-flask culture. Constructing and sharing databases compatible with the various types of monitoring devices and measurement instruments adapted for shaking culture can provide a valuable resource for broadening the application of cells grown in shake-flask culture.

  7. Numerical evaluation of the intensity transport equation for well-known wavefronts and intensity distributions

    Science.gov (United States)

    Campos-García, Manuel; Granados-Agustín, Fermín.; Cornejo-Rodríguez, Alejandro; Estrada-Molina, Amilcar; Avendaño-Alejo, Maximino; Moreno-Oliva, Víctor Iván.

    2013-11-01

    In order to obtain a clearer interpretation of the Intensity Transport Equation (ITE), we propose in this work an algorithm to solve it for some particular wavefronts and their corresponding intensity distributions. By simulating intensity distributions in some planes, the ITE turns into a Poisson equation with Neumann boundary conditions. The Poisson equation is solved by means of the iterative SOR (Successive Over-Relaxation) algorithm.
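
    A minimal sketch (illustrative grid and source term, not the authors' code) of a successive over-relaxation iteration for a 2-D Poisson equation with zero-flux Neumann boundaries of the kind the record describes:

        # Sketch: SOR for del^2 phi = f on a square grid with Neumann boundaries.
        # Grid size, source and relaxation factor are illustrative only.
        import numpy as np

        n, omega, sweeps = 33, 1.8, 500
        h = 1.0 / (n - 1)
        f = np.zeros((n, n))
        f[n // 4, n // 4], f[3 * n // 4, 3 * n // 4] = 1.0, -1.0   # zero-mean source

        phi = np.zeros((n, n))
        for _ in range(sweeps):
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    gs = 0.25 * (phi[i + 1, j] + phi[i - 1, j] +
                                 phi[i, j + 1] + phi[i, j - 1] - h ** 2 * f[i, j])
                    phi[i, j] = (1.0 - omega) * phi[i, j] + omega * gs
            # Neumann boundaries: zero normal derivative via copying interior neighbours
            phi[0, :], phi[-1, :] = phi[1, :], phi[-2, :]
            phi[:, 0], phi[:, -1] = phi[:, 1], phi[:, -2]

        print("phi range:", phi.min(), phi.max())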

  8. Earthquake Monitoring with the MyShake Global Smartphone Seismic Network

    Science.gov (United States)

    Inbal, A.; Kong, Q.; Allen, R. M.; Savran, W. H.

    2017-12-01

    Smartphone arrays have the potential for significantly improving seismic monitoring in sparsely instrumented urban areas. This approach benefits from the dense spatial coverage of users, as well as from communication and computational capabilities built into smartphones, which facilitate big seismic data transfer and analysis. Advantages in data acquisition with smartphones trade off against factors such as the low-quality sensors installed in phones, high noise levels, and strong network heterogeneity, all of which limit effective seismic monitoring. Here we utilize network and array-processing schemes to assess event detectability with the MyShake global smartphone network. We examine the benefits of using this network in either triggered or continuous modes of operation. A global database of ground motions measured on stationary phones triggered by M2-6 events is used to establish detection probabilities. We find that the probability of detecting an M=3 event with a single phone located 20 nearby phones closely match the regional catalog locations. We use simulated broadband seismic data to examine how location uncertainties vary with user distribution and noise levels. To this end, we have developed an empirical noise model for the metropolitan Los Angeles (LA) area. We find that densities larger than 100 stationary phones/km2 are required to accurately locate M 2 events in the LA basin. Given the projected MyShake user distribution, that condition may be met within the next few years.

  9. On the shake-off probability for atomic systems

    Energy Technology Data Exchange (ETDEWEB)

    Santos, A.C.F., E-mail: toniufrj@gmail.com [Instituto de Física, Universidade Federal do Rio de Janeiro, P.O. Box 68528, 21941-972 Rio de Janeiro, RJ (Brazil); Almeida, D.P. [Departamento de Física, Universidade Federal de Santa Catarina, 88040-900 Florianópolis (Brazil)

    2016-07-15

    Highlights: • The scope is to find the relationship among SO probabilities, Z and electron density. • A scaling law is suggested, allowing us to find the SO probabilities for atoms. • SO probabilities have been scaled as a function of target Z and polarizability. - Abstract: The main scope of this work has been the relationship between shake-off probabilities, target atomic number and electron density. By comparing the saturation values of measured double-to-single photoionization ratios from the literature, a simple scaling law has been found, which allows us to predict the shake-off probabilities for several elements up to Z = 54 within a factor of 2. The electron shake-off probabilities accompanying valence-shell photoionization have been scaled as a function of the target atomic number, Z, and polarizability, α. This behavior is in qualitative agreement with the experimental results.

  10. Phonon shake-up satellites in x-ray absorption: an operator approach

    International Nuclear Information System (INIS)

    Bryant, G.W.

    1980-01-01

    The phonon shake-up that occurs when the linear and quadratic phonon potentials both change during x-ray absorption is considered. Full account is taken of all quadratic terms and of the competition between linear and quadratic shake-up effects. Many previous studies of quadratic phonon shake-up have used a wavefunction approach, in which the phonon matrix elements are determined by explicit evaluation of the overlap integrals. Here, however, an equations-of-motion approach is used to transform the time evolution operator to a form that allows an exact evaluation of the phonon matrix elements needed to describe the spectra. This theory is used to determine the strengths of the phonon shake-up satellites in x-ray absorption spectra at zero temperature. An exact expression is obtained for the strength of each satellite. During quadratic shake-up, two-phonon transitions and phonon frequency shifts occur. Both effects significantly change the strength of a satellite from that predicted for linear shake-up alone. Inclusion of the two-phonon transitions enhances the high-energy satellites. Inclusion of the frequency shifts can either broaden the spectra or increase the strength of the zero-phonon lines, depending on the sign of the frequency shift. (author)

  11. Shaking table qualification tests of mechanical and electrical components

    International Nuclear Information System (INIS)

    Jurukovski, D.

    1993-01-01

    This presentation covers the experience of the Institute of Earthquake Engineering and Engineering Seismology, Skopje, Republic of Macedonia, in seismic qualification of mechanical components by shaking table testing. The characteristics of the biaxial seismic and single-component shaking tables used at the Institute are given. Some examples of the experience from tests performed on reactor components are included

  12. Distribution Channel Intensity among Table Water Producers in Nigeria

    Directory of Open Access Journals (Sweden)

    Joseph Edewor Agbadudu

    2017-09-01

    Full Text Available Planning for and making reasonable decisions regarding reaching the target market with an organization's product is a critical task on the part of management, which involves a careful evaluation and selection of its channel structure and intensity. This study therefore examines distribution channel intensity among table water producers in Edo State, Nigeria. The focus of the study is to ascertain the variables that significantly predict distribution intensity among firms in the table water industry in Edo State. The study seeks to proffer an answer to the fundamental question of why brands within a single category of a given consumer good differ significantly in their distribution intensity. Using a survey research design, the data used for this study were obtained by taking a sample of 110 table water firms within the three senatorial districts in the State. The data obtained were presented and analyzed using different statistical tools, such as means and multiple regression, through the Statistical Package for the Social Sciences (SPSS version 22) software. Findings revealed that manufacturers' target focus, manufacturers' support programs, brand quality and the level of a firm's technological advancement were significant predictors of distribution channel intensity among the industrial players in the table water industry in the State. Based on the findings, the study recommended that table water firms within the State can secure a competitive edge over their counterparts in the industry by designing an optimal distribution intensity that will meet their marketing objectives. It is also recommended that the adoption of modern technology in the form of online sales is an efficient way of selling and distributing, which could be used to enhance their distribution techniques if there is a need to cut down on middlemen due to increased costs. The study concluded that optimal distribution intensity could be achieved not by mere imitation of competitors but through

  13. Shaking alone induces de novo conversion of recombinant prion proteins to β-sheet rich oligomers and fibrils.

    Directory of Open Access Journals (Sweden)

    Carol L Ladner-Keay

    Full Text Available The formation of β-sheet rich prion oligomers and fibrils from native prion protein (PrP) is thought to be a key step in the development of prion diseases. Many methods are available to convert recombinant prion protein into β-sheet rich fibrils using various chemical denaturants (urea, SDS, GdnHCl), high temperature, phospholipids, or mildly acidic conditions (pH 4). Many of these methods also require shaking or another form of agitation to complete the conversion process. We have identified that shaking alone causes the conversion of recombinant PrP to β-sheet rich oligomers and fibrils at near physiological pH (pH 5.5 to pH 6.2) and temperature. This conversion does not require any denaturant, detergent, or any other chemical cofactor. Interestingly, this conversion does not occur when the water-air interface is eliminated in the shaken sample. We have analyzed shaking-induced conversion using circular dichroism, resolution enhanced native acidic gel electrophoresis (RENAGE), electron microscopy, Fourier transform infrared spectroscopy, thioflavin T fluorescence and proteinase K resistance. Our results show that shaking causes the formation of β-sheet rich oligomers with a population distribution ranging from octamers to dodecamers and that further shaking causes a transition to β-sheet fibrils. In addition, we show that shaking-induced conversion occurs for a wide range of full-length and truncated constructs of mouse, hamster and cervid prion proteins. We propose that this method of conversion provides a robust, reproducible and easily accessible model for scrapie-like amyloid formation, allowing the generation of milligram quantities of physiologically stable β-sheet rich oligomers and fibrils. These results may also have interesting implications regarding our understanding of prion conversion and propagation both within the brain and via techniques such as protein misfolding cyclic amplification (PMCA) and quaking induced conversion (QuIC).

  14. A new wireless system for decentralised measurement of physiological parameters from shake flasks

    Directory of Open Access Journals (Sweden)

    Illmann Lutz

    2006-02-01

    Full Text Available Abstract Background: Shake flasks are widely used because of their low price and simple handling. Many researchers are, however, not aware of the physiological consequences of oxygen limitation and substrate overflow metabolism that occur in shake flasks. The availability of a wireless measuring system brings possibilities for quality control and for the design of cultivation conditions. Results: Here we present a new wireless solution for the measurement of pH and oxygen from shake flasks with standard sensors, which allows data transmission over a distance of more than 100 metres in laboratory environments. This new system was applied to the monitoring of cultivation conditions in shake flasks. Monitoring of the growth conditions in real time became possible by simple means. Here we demonstrate that, with typical protocols, E. coli shake flask cultures run into severe oxygen limitation and the medium is strongly acidified. Additionally, the strength of the new system is demonstrated by continuous monitoring of the oxygen level in methanol-fed Pichia pastoris shake flask cultures, which allows the optimisation of substrate feeding to prevent starvation or methanol overfeed. A 40% higher cell density was obtained by preventing the starvation phases which occur in standard shake flask protocols, by adding methanol when the respiration activity decreased in the cultures. Conclusion: The wireless system introduced here can read sensor data in parallel over long distances from shake flasks that are under vigorous shaking in cultivation rooms or closed incubators. The presented technology allows centralised monitoring of decentralised targets. It is useful for the monitoring of pH and dissolved oxygen in shake flask cultures. It is not limited to standard sensors, but can easily be adapted to new types of sensors and measurement places (e.g., new sensor points in large-scale bioreactors).

  15. Automated Detection of Branch Shaking Locations for Robotic Cherry Harvesting Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Suraj Amatya

    2017-10-01

    Full Text Available Automation in cherry harvesting is essential to reduce the demand for seasonal labor for cherry picking and to reduce the cost of production. Mechanical shaking of tree branches is one of the widely studied and used techniques for harvesting small tree fruit crops like cherries. To automate the branch shaking operation, different methods of detecting branches and cherries in full-foliage canopies of the cherry tree have been developed previously. The next step in this process is the localization of shaking positions in the detected tree branches for mechanical shaking. In this study, a method of locating shaking positions for automated cherry harvesting was developed based on branch and cherry pixel locations determined using RGB images and 3D camera images. First, branch and cherry regions were located in 2D RGB images. Depth information provided by a 3D camera was then mapped onto the RGB images using a standard stereo calibration method. The overall root mean square error in estimating the distance to desired shaking points was 0.064 m. Cherry trees trained in two different canopy architectures, Y-trellis and vertical trellis systems, were used in this study. Harvesting testing was carried out by shaking tree branches at the locations selected by the algorithm. For the Y-trellis system, a maximum fruit removal efficiency of 92.9% was achieved using up to five shaking events per branch. The maximum fruit removal efficiency for the vertical trellis system was 86.6% with up to four shakings per branch. It was found, however, that only three shakings per branch would achieve fruit removal percentages of 92.3% and 86.4% in the Y and vertical trellis systems, respectively.

  16. Installation, care, and maintenance of wood shake and shingle siding

    Science.gov (United States)

    Jack Dwyer; Tony Bonura; Arnie Nebelsick; Sam Williams; Christopher G. Hunt

    2011-01-01

    This article gives general guidelines for selection, installation, finishing, and maintenance of wood shakes and shingles. The authors gathered information from a variety of sources: research publications on wood finishing, technical data sheets from paint manufacturers, installation instructions for shake and shingle siding, and interviews with experts having...

  17. The Great California ShakeOut: Science-Based Preparedness Advocacy

    Science.gov (United States)

    Benthien, M. L.

    2009-12-01

    The Great Southern California ShakeOut in November 2008 was the largest earthquake drill in U.S. history, involving over 5 million southern Californians through a broad-based outreach program, media partnerships, and public advocacy by hundreds of partners. The basis of the drill was a comprehensive scenario for a magnitude 7.8 earthquake on the southern San Andreas fault, which would cause broad devastation. In early 2009 the decision was made to hold the drill statewide on the third Thursday of October each year (October 15 in 2009). Results of the 2008 and 2009 drills will be shared in this session. In addition, prospects for early warning systems will be described; such systems will one day provide the needed seconds before strong shaking arrives in which critical systems can be shut down and people can do what they've been practicing in the ShakeOut drills: drop, cover, and hold on. A key aspect of the ShakeOut is the integration of a comprehensive earthquake scenario (incorporating earth science, engineering, policy, economics, public health, and other disciplines) and the lessons learned from decades of social science research about why people get prepared. The result is a “teachable moment” on par with having an actual earthquake (often followed by increased interest in getting ready for earthquakes). ShakeOut creates the sense of urgency that is needed for people, organizations, and communities to get prepared, to practice what to do to be safe, and to learn what plans need to be improved.

  18. Installation, care, and maintenance of wood shake and shingle roofs

    Science.gov (United States)

    Tony Bonura; Jack Dwyer; Arnie Nebelsick; Brent Stuart; R. Sam Williams; Christopher Hunt

    2011-01-01

    This article gives general guidelines for selection, installation, finishing, and maintenance of wood shake and shingle roofs. The authors have gathered information from a variety of sources: research publications on wood finishing, technical data sheets from paint manufacturers, installation instructions for shake and shingle roofs, and interviews with experts having...

  19. [A Case of Middle Cerebral Artery Stenosis Presented with Limb-Shaking TIA].

    Science.gov (United States)

    Uno, Junji; Mineta, Haruyuki; Ren, Nice; Takagishi, Sou; Nagaoka, Shintarou; Kameda, Katsuharu; Maeda, Kazushi; Ikai, Yoshiaki; Gi, Hidefuku

    2016-07-01

    Involuntary movement is a rare clinical manifestation of transient ischemic attack (TIA). However, limb-shaking TIA is a well-described presentation of carotid occlusive disease. We present the case of a patient who developed limb-shaking TIA associated with high-grade stenosis of the middle cerebral artery (M1), which was treated with percutaneous transluminal angioplasty (PTA). The procedure was performed successfully without complication and the symptoms disappeared immediately after the procedure. The patient remained free of symptoms at the 38-month follow-up. There was no tendency toward restenosis of M1. In this case, PTA was technically feasible and beneficial for limb-shaking TIA with M1 stenosis. Limb-shaking TIA can be a symptom of high-grade stenosis of M1.

  20. Preparation of edge states by shaking boundaries

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Z.C. [Department of Physics, Fuzhou University, Fuzhou 350002 (China); Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun 130024 (China); Hou, S.C. [Institute of Fluid Physics, China Academy of Engineering Physics, Mianyang, Sichuan (China); Wang, L.C. [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); Yi, X.X., E-mail: yixx@nenu.edu.cn [Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun 130024 (China)

    2016-10-15

    Preparing topological states of quantum matter, such as edge states, is one of the most important directions in condensed matter physics. In this work, we present a proposal to prepare edge states in the Aubry–André–Harper (AAH) model with open boundaries, which takes advantage of Lyapunov control to design the operations. We show that edge states can be obtained from almost arbitrary initial states. A numerical optimization of the control is performed, and the dependence of the control process on the system size is discussed. The merit of this proposal is that the shaking is exerted only on the boundaries of the model. As a by-product, a topologically entangled state is achieved by elaborately designing the shaking scheme.
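
    For context, a standard tight-binding form of the Aubry–André–Harper model with open boundaries (notation chosen here, not necessarily that of the paper) is

        $H = -J \sum_{n=1}^{N-1} \left( c_n^{\dagger} c_{n+1} + \mathrm{h.c.} \right) + \lambda \sum_{n=1}^{N} \cos(2\pi\beta n + \phi)\, c_n^{\dagger} c_n$,

    where $J$ is the hopping amplitude, $\lambda$ the strength of the incommensurate on-site modulation ($\beta$ irrational), and $\phi$ a phase offset; for suitable parameters, states localized at the open boundaries appear in the gaps of the spectrum as $\phi$ is varied.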

  1. Correlation researches of the outgoing directions 'shake-off' electron and positron at β+ - decay

    International Nuclear Information System (INIS)

    Mitrokhovich, N.F.; Kupryashkin, V.T.; Sidorenko, L.P.

    2012-01-01

    The correlation properties of 'shake-off' electrons at β+ decay are studied. The measurements were performed in comparison with the corresponding properties of 'shake-off' electrons at β- decay, in order to explain the mechanism responsible for the correlated motion of the 'shake-off' electron and the main particle (the electron in β- decay and the positron in β+ decay). The decay of 152 Eu was used for this purpose. The measurements were performed on a setup for coincidences of γ-quanta with electrons and low-energy electrons, including e 0 -electrons of secondary electron emission (γγee 0 -coincidences). The registration of 'shake-off' electrons was implemented via the e 0 -electrons they create. According to the data obtained, the spatial correlation of the 'shake-off' electron with the positron at β+ decay in the forward direction is much weaker than the corresponding correlation of the 'shake-off' electron at β- decay. 'Shake-off' electrons at β+ decay move predominantly at large solid angles relative to the positron. A mechanism responsible for this is proposed

  2. ShakeMapple : tapping laptop motion sensors to map the felt extents of an earthquake

    Science.gov (United States)

    Bossu, Remy; McGilvary, Gary; Kamb, Linus

    2010-05-01

    There is a significant pool of untapped sensor resources available in the motion sensors embedded in portable computers. Included primarily to detect sudden strong motion so that the disk heads can be parked to prevent damage in the event of a fall or other severe motion, these sensors may also be tapped for other uses. We have developed a system that takes advantage of the Apple Macintosh laptops' embedded Sudden Motion Sensors to record earthquake strong motion data and rapidly build maps of where, and to what extent, an earthquake has been felt. After an earthquake, it is vital to understand the damage caused, especially in urban environments, where large amounts of earthquake damage are often concentrated. Gathering as much information as possible about these impacts to determine which areas are likely to be most affected can aid in distributing emergency services effectively. The ShakeMapple system operates in the background, continuously saving the most recent data from the motion sensors. After an earthquake has occurred, the ShakeMapple system calculates the peak acceleration within a time window around the expected arrival and sends it to servers at the EMSC. A map plotting the felt responses is then generated and presented on the web. Because large-scale testing of such an application is inherently difficult, we propose to organize a broadly distributed "simulated event" test. The software will be available for download in April, after which we plan to organize a large-scale test by the summer. At a specified time, participating testers will be asked to create their own strong motion to be registered and submitted by the ShakeMapple client. From these responses, a felt map will be produced representing the broadly felt effects of the simulated event.
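
    A minimal sketch (made-up sensor trace, sampling rate and window, not the ShakeMapple code) of extracting the peak acceleration within a time window around an expected arrival:

        # Sketch: peak absolute acceleration in a window around an expected arrival.
        # The trace, sampling rate and window below are illustrative only.
        import numpy as np

        fs = 50.0                                   # samples per second
        t = np.arange(0, 120, 1.0 / fs)             # two minutes of data
        accel = 0.002 * np.random.default_rng(2).normal(size=t.size)   # background noise, g
        accel[int(60 * fs):int(65 * fs)] += 0.05    # injected "shaking" burst

        expected_arrival = 61.0                     # seconds after origin time
        half_window = 10.0                          # seconds either side of the arrival
        mask = (t >= expected_arrival - half_window) & (t <= expected_arrival + half_window)
        peak_accel = np.max(np.abs(accel[mask]))
        print(f"peak acceleration in window: {peak_accel:.3f} g")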

  3. Optimization of gold ore Sumbawa separation using gravity method: Shaking table

    Science.gov (United States)

    Ferdana, Achmad Dhaefi; Petrus, Himawan Tri Bayu Murti; Bendiyasa, I. Made; Prijambada, Irfan Dwidya; Hamada, Fumio; Sachiko, Takahi

    2018-04-01

    Most artisanal small-scale gold mining in Indonesia has been using the amalgamation method, which causes negative impacts on the environment around the ore processing area due to the use of mercury. One of the more environmentally friendly methods for gold processing is the gravity method. The shaking table is one piece of gravity-separation equipment, used to upgrade the concentrate based on differences in specific gravity. The optimum concentration result is influenced by several variables, such as shaking speed, particle size and deck slope. In this research, the shaking speed ranged from 100 rpm to 200 rpm, the particle size from -100 + 200 mesh to -200 + 300 mesh, and the deck slope from 3° to 7°. Gold concentration in the concentrate was measured by EDX. The result shows that the optimum condition is obtained at a shaking speed of 200 rpm, with a slope of 7° and a particle size of -100 + 200 mesh.

  4. GIS-based seismic shaking slope vulnerability map of Sicily (Central Mediterranean)

    Science.gov (United States)

    Nigro, Fabrizio; Arisco, Giuseppe; Perricone, Marcella; Renda, Pietro; Favara, Rocco

    2010-05-01

    permanent displacement potentially induced by a seismic scenario. Such methodologies are founded on the consideration that seismic stability and the post-seismic functionality of engineering structures are tightly related to the magnitude of the permanent deformations that an earthquake can induce. Among the existing simplified slope stability procedures, Newmark's model is often used to derive indications about slope instabilities due to earthquakes. In this way, we have evaluated the seismically induced landslide hazard in Sicily (Central Mediterranean) using a Newmark-like model. In order to determine the map distribution of the seismic ground acceleration from an earthquake scenario, the attenuation law of Sabetta & Pugliese has been used, analysing seismic recordings made in Italy. For evaluating permanent displacements, the correlation of Ambraseys & Menu has been adopted. The seismic shaking slope vulnerability map of Sicily has been produced using a GIS application, considering the distribution of peak seismic ground acceleration (in terms of exceedance probability for a fixed time), slope acclivity, and the cohesion/angle of internal friction of outcropping rocks, allowing the zoning of slopes that are unstable under seismic forces.
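
    A minimal sketch (synthetic accelerogram and an assumed critical acceleration, not the authors' GIS workflow) of the rigid-block Newmark displacement calculation on which such methods rely:

        # Sketch: Newmark rigid-block displacement by integrating the ground
        # acceleration in excess of the critical (yield) acceleration a_crit.
        # The accelerogram and a_crit below are illustrative only.
        import numpy as np

        dt = 0.01                                   # time step, s
        t = np.arange(0, 20, dt)
        g = 9.81
        a_ground = 0.3 * g * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.1 * t)  # synthetic record
        a_crit = 0.1 * g                            # assumed critical acceleration

        vel, disp = 0.0, 0.0
        for a in a_ground:
            # The block accelerates only while ground acceleration exceeds a_crit,
            # and keeps sliding until its relative velocity returns to zero.
            a_rel = a - a_crit if (a > a_crit or vel > 0.0) else 0.0
            vel = max(vel + a_rel * dt, 0.0)
            disp += vel * dt
        print(f"Newmark displacement: {disp * 100:.1f} cm")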

  5. Inducible limb-shaking transitory ischemic attacks

    DEFF Research Database (Denmark)

    Rosenbaum, Sverre; Ovesen, Christian; Futrell, Nancy

    2016-01-01

    with exercise-induced weakness associated with tremor in his right arm. His left internal carotid artery was occluded at the bifurcation. Administration of statin and antiplatelet did not relieve his symptoms, and his stereotypic, exercise-induced "limb-shaking" episodes persisted. He underwent successful...

  6. Calculating computer-generated optical elements to produce arbitrary intensity distributions

    International Nuclear Information System (INIS)

    Findlay, S.; Nugent, K.A.; Scholten, R.E.

    2000-01-01

    Full text: We describe a preliminary investigation into using a computer to generate optical elements (CGOEs) with phase-only variation that will produce an arbitrary intensity distribution in a given image plane. An iterative calculation cycles between the CGOE and the image plane and modifies each according to the appropriate constraints. We extend this to the calculation of defined intensity distributions in two separated planes by modifying both phase and intensity at the CGOE
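
    A minimal sketch (Gerchberg-Saxton-style iteration on synthetic arrays; the record does not name the specific algorithm) of the cycle between a phase-only element and the image plane:

        # Sketch: iterative calculation of a phase-only element that produces a
        # target intensity in the image plane.  Target pattern and grid are
        # illustrative only.
        import numpy as np

        n = 128
        target_amp = np.zeros((n, n))
        target_amp[48:80, 48:80] = 1.0              # made-up target (bright square)

        field = np.exp(2j * np.pi * np.random.default_rng(3).random((n, n)))  # unit amplitude
        for _ in range(100):
            image = np.fft.fft2(field)
            # Image-plane constraint: impose the target amplitude, keep the phase
            image = target_amp * np.exp(1j * np.angle(image))
            field = np.fft.ifft2(image)
            # Element-plane constraint: phase-only, unit amplitude
            field = np.exp(1j * np.angle(field))

        achieved = np.abs(np.fft.fft2(field)) ** 2
        print("correlation with target:",
              np.corrcoef(achieved.ravel(), (target_amp ** 2).ravel())[0, 1])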

  7. Enhancement of phototrophic hydrogen production by Rhodobacter sphaeroides ZX-5 using a novel strategy - shaking and extra-light supplementation approach

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xu; Wang, Yong-Hong; Zhang, Si-Liang; Chu, Ju; Zhang, Ming; Huang, Ming-Zhi; Zhuang, Ying-Ping [State Key Laboratory of Bioreactor Engineering, P.O. Box 329, East China University of Science and Technology, 130 Meilong Road, Shanghai 200237 (China)

    2009-12-15

    Biohydrogen has gained attention due to its potential as a sustainable alternative to conventional methods of hydrogen production. In this study, the effects of light intensity and cultivation method (standing- and shaking-culture) on the cell growth and hydrogen production of Rhodobacter sphaeroides ZX-5 were investigated in a 38-ml anaerobic photobioreactor with RCVBN medium. A novel shaking and extra-light supplementation (SELS) approach was thus developed to enhance phototrophic H{sub 2} production by R. sphaeroides ZX-5 using malate as the sole carbon source. The optimum illumination for shaking-culture of strain ZX-5 increased to 7000-8000 lux, markedly higher than that for standing-culture (4000-5000 lux). Under shaking and elevated illumination (7000-8000 lux), the culture was effective in promoting photo-H{sub 2} production, resulting in 59% and 56% increases of the maximum and average hydrogen production rates, respectively, in comparison with the culture under standing and 4000-5000 lux conditions. The highest hydrogen production rate of 165.9 ml H{sub 2}/l h was observed under the SELS approach. To our knowledge, this is currently the highest hydrogen production rate reported for non-immobilized purple non-sulphur (PNS) bacteria. The optimal performance of photo-H{sub 2} production using the SELS approach makes it a favorable, sustainable and economically feasible strategy to improve phototrophic H{sub 2} production efficiency. (author)

  8. Vortex shaking study of REBCO tape with consideration of anisotropic characteristics

    Science.gov (United States)

    Liang, Fei; Qu, Timing; Zhang, Zhenyu; Sheng, Jie; Yuan, Weijia; Iwasa, Yukikazu; Zhang, Min

    2017-09-01

    The second generation high temperature superconductor, specifically REBCO, has become a new research focus in the development of a new generation of high-field (>25 T) magnets. One of the main challenges in the application of these magnets is the screening-current problem. Previous research shows that for magnetized superconducting stacks and bulks the application of an AC field in plane with the circulating current leads to demagnetization due to vortex shaking, which provides a possible solution for removing the shielding current. This paper provides an in-depth study, both experimental and numerical, to unveil the vortex shaking mechanism of REBCO stacks. A new experiment was carried out to measure the demagnetization rate of REBCO stacks exposed to an in-plane AC magnetic field. Meanwhile, 2D finite element models, based on the E-J power law, were developed to simulate the vortex shaking effect of the AC magnetic field. Qualitative agreement was obtained between the experimental and simulation results. Our results show that the applied in-plane magnetic field leads to a sudden decay of the trapped magnetic field in the first half shaking cycle, which is caused by the magnetic field dependence of the critical current. Furthermore, the decline of the demagnetization rate with increasing tape number is mainly due to the cross magnetic field being screened by the top and bottom stacks during the shaking process, which leads to a lower demagnetization rate in the inner layers. We also demonstrate that the frequency of the applied AC magnetic field has little impact on the demagnetization process. Our modeling tool and findings refine vortex shaking theory and provide helpful guidance for eliminating screening currents in the new generation of REBCO magnets.
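
    For reference, the E-J power law on which such finite element models are commonly based can be written (notation chosen here) as

        $\mathbf{E} = E_0 \left( \frac{|\mathbf{J}|}{J_c(B)} \right)^{n} \frac{\mathbf{J}}{|\mathbf{J}|}$,

    where $E_0$ is the electric-field criterion (typically $10^{-4}$ V/m), $J_c(B)$ is the field-dependent critical current density, and $n$ is the power-law index; the field dependence of $J_c$ is what the record identifies as the cause of the sudden initial decay of the trapped field.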

  9. Assessment of liquefaction potential during earthquakes by arias intensity

    Science.gov (United States)

    Kayen, R.E.; Mitchell, J.K.

    1997-01-01

    An Arias intensity approach to assess the liquefaction potential of soil deposits during earthquakes is proposed, using an energy-based measure of the severity of earthquake-shaking recorded on seismograms of the two horizontal components of ground motion. Values representing the severity of strong motion at depth in the soil column are associated with the liquefaction resistance of that layer, as measured by in situ penetration testing (SPT, CPT). This association results in a magnitude-independent boundary that envelopes initial liquefaction of soil in Arias intensity-normalized penetration resistance space. The Arias intensity approach is simple to apply and has proven to be highly reliable in assessing liquefaction potential. The advantages of using Arias intensity as a measure of earthquake-shaking severity in liquefaction assessment are: Arias intensity is derived from integration of the entire seismogram wave form, incorporating both the amplitude and duration elements of ground motion; all frequencies of recorded motion are considered; and Arias intensity is an appropriate measure to use when evaluating field penetration test methodologies that are inherently energy-based. Predictor equations describing the attenuation of Arias intensity as a function of earthquake magnitude and source distance are presented for rock, deep-stiff alluvium, and soft soil sites.
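
    For reference, the Arias intensity of a single horizontal component is $I_a = \frac{\pi}{2g}\int_0^{T} a(t)^2\,dt$. A minimal sketch (synthetic accelerogram, illustrative values only) of computing it from a digitized record:

        # Sketch: Arias intensity from a digitized accelerogram (one component).
        # The record below is synthetic; acceleration in m/s^2 gives I_a in m/s.
        import numpy as np

        g = 9.81
        dt = 0.01
        t = np.arange(0, 40, dt)
        accel = 2.0 * np.exp(-0.15 * t) * np.sin(2 * np.pi * 2.0 * t)  # made-up record

        arias = (np.pi / (2.0 * g)) * np.sum(accel ** 2) * dt
        print(f"Arias intensity: {arias:.3f} m/s")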

  10. Spectral intensity distribution of trapped fermions

    Indian Academy of Sciences (India)

    Keywords: trapped fermions; local density approximation; spectral intensity distribution function. Cold atomic systems allow the spectral intensity distribution function (SIDF) of trapped fermions to be studied; in calculating the SIDF exactly, the energy eigenstates of the isotropic harmonic oscillator [26–28], with energies $\sum_{i=1}^{d}\left(n_i+\tfrac{1}{2}\right)\omega_0$, are used.

  11. Precipitation intensity probability distribution modelling for hydrological and construction design purposes

    International Nuclear Information System (INIS)

    Koshinchanov, Georgy; Dimitrov, Dobri

    2008-01-01

    The characteristics of rainfall intensity are important for many purposes, including the design of sewage and drainage systems, the tuning of flood warning procedures, etc. Those estimates are usually statistical estimates of the intensity of precipitation realized over a certain period of time (e.g. 5, 10 min, etc.) with different return periods (e.g. 20, 100 years, etc.). The traditional approach to evaluating the mentioned precipitation intensities is to process the pluviometer records and fit probability distributions to samples of intensities valid for certain locations or regions. Those estimates further become part of the state regulations to be used for various economic activities. Two problems occur with the mentioned approach: 1. Due to various factors, climate conditions change and the precipitation intensity estimates need regular updating; 2. Since the extremes of the probability distribution are of particular importance in practice, the distribution-fitting methodology needs to pay specific attention to those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intensive rainfalls and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria for analyzing intensive rainfalls and producing the relevant statistical estimates are: the method of maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration; the same as the above, but with separate modelling of the probability distribution for the middle- and high-probability quantiles; a method similar to the first one, but with an intensity threshold of 0.36 mm/min; another method, proposed by the Russian hydrologist G. A. Aleksiev for regionalization of estimates over a territory, improved and adapted by S. Gerasimov for Bulgaria; the next method considers only
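
    A minimal sketch (synthetic annual-maximum intensities, not the Bulgarian data) of fitting an extreme-value distribution and reading off a return-period intensity, in the spirit of the approaches listed above:

        # Sketch: fit a Gumbel distribution to annual-maximum rainfall intensities
        # and estimate the intensity with a 100-year return period.  Synthetic data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        annual_max = stats.gumbel_r.rvs(loc=1.0, scale=0.4, size=60, random_state=rng)  # mm/min

        loc, scale = stats.gumbel_r.fit(annual_max)
        return_period = 100.0
        intensity_100yr = stats.gumbel_r.ppf(1.0 - 1.0 / return_period, loc=loc, scale=scale)
        print(f"100-year intensity (illustrative): {intensity_100yr:.2f} mm/min")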

  12. Relative seismic shaking vulnerability microzonation using an ...

    Indian Academy of Sciences (India)

    the relative seismic shaking vulnerability for built structures of different height categories within adjacent ..... monitor for possible changes in the microzonation results over time ..... The vehicle's ... A Garmin GPS 12XL was used to determine the.

  13. Intensive Versus Distributed Aphasia Therapy: A Nonrandomized, Parallel-Group, Dosage-Controlled Study.

    Science.gov (United States)

    Dignam, Jade; Copland, David; McKinnon, Eril; Burfein, Penni; O'Brien, Kate; Farrell, Anna; Rodriguez, Amy D

    2015-08-01

    Most studies comparing different levels of aphasia treatment intensity have not controlled the dosage of therapy provided. Consequently, the true effect of treatment intensity in aphasia rehabilitation remains unknown. Aphasia Language Impairment and Functioning Therapy is an intensive, comprehensive aphasia program. We investigated the efficacy of a dosage-controlled trial of Aphasia Language Impairment and Functioning Therapy, when delivered in an intensive versus distributed therapy schedule, on communication outcomes in participants with chronic aphasia. Thirty-four adults with chronic, poststroke aphasia were recruited to participate in an intensive (n=16; 16 hours per week; 3 weeks) versus distributed (n=18; 6 hours per week; 8 weeks) therapy program. Treatment included 48 hours of impairment, functional, computer, and group-based aphasia therapy. Distributed therapy resulted in significantly greater improvements on the Boston Naming Test when compared with intensive therapy immediately post therapy (P=0.04) and at 1-month follow-up (P=0.002). We found comparable gains on measures of participants' communicative effectiveness, communication confidence, and communication-related quality of life for the intensive and distributed treatment conditions at post-therapy and 1-month follow-up. Aphasia Language Impairment and Functioning Therapy resulted in superior clinical outcomes on measures of language impairment when delivered in a distributed versus intensive schedule. The therapy program had a positive effect on participants' functional communication and communication-related quality of life, regardless of treatment intensity. These findings contribute to our understanding of the effect of treatment intensity in aphasia rehabilitation and have important clinical implications for service delivery models. © 2015 American Heart Association, Inc.

  14. Recovering from the ShakeOut earthquake

    Science.gov (United States)

    Wein, Anne; Johnson, Laurie; Bernknopf, Richard

    2011-01-01

    Recovery from an earthquake like the M7.8 ShakeOut Scenario will be a major endeavor taking many years to complete. Hundreds of Southern California municipalities will be affected; most lack recovery plans or previous disaster experience. To support recovery planning this paper 1) extends the regional ShakeOut Scenario analysis into the recovery period using a recovery model, 2) localizes analyses to identify longer-term impacts and issues in two communities, and 3) considers the regional context of local recovery. Key community insights about preparing for post-disaster recovery include the need to: geographically diversify city procurement, set earthquake mitigation priorities for critical infrastructure (e.g., airport), plan to replace mobile homes with earthquake safety measures, consider post-earthquake redevelopment opportunities ahead of time, and develop post-disaster recovery management and governance structures. This work also showed that communities with minor damages are still sensitive to regional infrastructure damages and their potential long-term impacts on community recovery. This highlights the importance of community and infrastructure resilience strategies as well.

  15. Shake-table testing of a self-centering precast reinforced concrete frame with shear walls

    Science.gov (United States)

    Lu, Xilin; Yang, Boya; Zhao, Bin

    2018-04-01

    The seismic performance of a self-centering precast reinforced concrete (RC) frame with shear walls was investigated in this paper. The lateral force resistance was provided by self-centering precast RC shear walls (SPCW), which utilize a combination of unbonded prestressed post-tensioned (PT) tendons and mild steel reinforcing bars for flexural resistance across base joints. The structures concentrated deformations at the bottom joints and the unbonded PT tendons provided the self-centering restoring force. A 1/3-scale model of a five-story self-centering RC frame with shear walls was designed and tested on a shake-table under a series of bi-directional earthquake excitations with increasing intensity. The acceleration response, roof displacement, inter-story drifts, residual drifts, shear force ratios, hysteresis curves, and local behaviour of the test specimen were analysed and evaluated. The results demonstrated that seismic performance of the test specimen was satisfactory in the plane of the shear wall; however, the structure sustained inter-story drift levels up to 2.45%. Negligible residual drifts were recorded after all applied earthquake excitations. Based on the shake-table test results, it is feasible to apply and popularize a self-centering precast RC frame with shear walls as a structural system in seismic regions.

  16. ShakeAlert—An earthquake early warning system for the United States west coast

    Science.gov (United States)

    Burkett, Erin R.; Given, Douglas D.; Jones, Lucile M.

    2014-08-29

    Earthquake early warning systems use earthquake science and the technology of monitoring systems to alert devices and people when shaking waves generated by an earthquake are expected to arrive at their location. The seconds to minutes of advance warning can allow people and systems to take actions to protect life and property from destructive shaking. The U.S. Geological Survey (USGS), in collaboration with several partners, has been working to develop an early warning system for the United States. ShakeAlert, a system currently under development, is designed to cover the West Coast States of California, Oregon, and Washington.

  17. Influence of trunk or bough shaking on the performance and costs of mechanical harvesting of olives

    OpenAIRE

    Peça, José; Dias, António; Pinheiro, Anacleto; Santos, Luís S.S. dos; Almeida, Arlindo; Lopes, João; Reynolds, Domingos

    2002-01-01

    Field trials carried out in Portugal showed the penalty to be paid, both in terms of work rate and costs, whenever, due to tree geometry and size, trees had to be bough shaken rather than trunk shaken as is normal. If an olive orchard with trees requiring two bough shakings could be adapted to an entirely trunk-shaking orchard, simulation shows an increment between 9% and 33% in the work rate at harvesting and a reduction between 4% and 22% in harvesting cost per kilogram of olives, assuming a ...

  18. The shaking signal of the honey bee informs workers to prepare for greater activity

    OpenAIRE

    Seeley, Thomas D.; Weidenmüller, Anja; Kühnholz, Susanne

    2010-01-01

    One of the most conspicuous activities of worker bees inside a hive is the shaking of other workers. This shaking has long been suspected to be a communication behavior, but its information content and function have until recently remained mysterious. Prior studies of the colony-level patterns of the production of the shaking signal suggest strongly that this signal serves to arouse workers to greater activity, such as at times of good foraging. Data from our observations of individual bees h...

  19. Analisis Permintaan Produk Nutrisi Shake Mix dari Herbalife di Kota Pekanbaru

    OpenAIRE

    Ekwarso, Hendro; Silitonga, William

    2015-01-01

    This research was conducted to determine the factors that influence the demand for the nutrition product Shake Mix from Herbalife among reproductive-age consumers in Pekanbaru city. The data used in this research are primary and secondary data, analyzed with descriptive methods. The population in this research is people of productive age who consume Shake Mix, from which 100 respondents were sampled using the Slovin formula. The results of this research showed that the factors that influence the demand of Sha...
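    The Slovin formula cited for sizing the sample is commonly written as below (assumed form, not given in the abstract); with a tolerated error of e = 0.1 it yields roughly 100 respondents for a large population N, consistent with the sample used.

```latex
n = \frac{N}{1 + N e^{2}}, \qquad
\lim_{N \to \infty} n = \frac{1}{e^{2}} = 100 \quad \text{for } e = 0.1 .
```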

  20. Economic resilience lessons from the ShakeOut earthquake scenario

    Science.gov (United States)

    Wein, A.; Rose, A.

    2011-01-01

    Following a damaging earthquake, “business interruption” (BI)—reduced production of goods and services—begins and continues long after the ground shaking stops. Economic resilience reduces BI losses by making the best use of the resources available at a given point in time (static resilience) or by speeding recovery through repair and reconstruction (dynamic resilience), in contrast to mitigation that prevents damage in the first place. Economic resilience is an important concept to incorporate into economic loss modeling and into recovery and contingency planning. The economic resilience framework includes the applicability of resilience strategies to production inputs and output, demand- and supply-side effects, inherent and adaptive abilities, and levels of the economy. We use our resilience framework to organize and share strategies that enhance economic resilience, identify overlooked resilience strategies, and present evidence and structure of resilience strategies for economic loss modelers. Numerous resilience strategies are compiled from stakeholder discussions about the ShakeOut Scenario (Jones et al., 2008). Modeled results of ShakeOut BI sector losses reveal variable effectiveness of resilience strategies for lengthy disruptions caused by fire-damaged buildings and water service outages. Resilience is a complement to mitigation and may, in fact, have cost and all-hazards advantages.

  1. Numerical simulations of rubber bearing tests and shaking table tests

    International Nuclear Information System (INIS)

    Hirata, K.; Matsuda, A.; Yabana, S.

    2002-01-01

    Test data concerning rubber bearing tests and shaking table tests of a base-isolated model conducted by CRIEPI are provided to the participants of the Coordinated Research Program (CRP) on 'Intercomparison of Analysis Methods for predicting the behaviour of Seismically Isolated Nuclear Structure', which is organized by the International Atomic Energy Agency (IAEA), for the comparison study of numerical simulation of base-isolated structures. In this paper, outlines of the test data provided and the numerical simulations of bearing tests and shaking table tests are described. Using the computer code ABAQUS, numerical simulations of rubber bearing tests are conducted for NRBs and LRBs (data provided by CRIEPI) and for HDRs (data provided by ENEA/ENEL and KAERI). Several strain energy functions are specified according to the rubber material test corresponding to each rubber bearing. As for the lead plug material in the LRB, mechanical characteristics are reevaluated and used. Simulation results for these rubber bearings show satisfactory agreement with the test results. The shaking table test conducted by CRIEPI is of a base-isolated rigid mass supported by LRBs. Acceleration time histories and displacement time histories of the isolators, as well as cyclic loading test data of the LRB used for the shaking table test, are provided to the participants of the CRP. Simulations of shaking table tests are conducted for this rigid mass, and also for the steel frame model tested by ENEL/ENEA. In the simulation of the rigid mass model test, where LRBs are used, isolators are modeled either by a bilinear model or a polylinear model. In both cases of modeling of isolators, simulation results show good agreement with the test results. In the case of the steel frame model, where HDRs are used as isolators, bilinear and polylinear models are also used for modeling isolators. The response of the model is simulated comparatively well in the low frequency range of the floor response, however, in

  2. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  3. Influence of temperature rise distribution in second harmonic generation crystal on intensity distributions of output second harmonic wave

    International Nuclear Information System (INIS)

    Li Wei; Feng Guoying; Li Gang; Huang Yu; Zhang Qiuhui

    2009-01-01

    Second-harmonic generation (SHG) of high-intensity laser with an SHG crystal for type I angle phase matching has been studied by the use of a split-step algorithm based on the fast Fourier transform and a fourth-order Runge-Kutta (R-K) integrator. The transverse walk-off effect, diffraction, the second-order and the third-order nonlinear effects have been taken into consideration. Influences of a temperature rise distribution of the SHG crystal on the refractive indices of ordinary wave and extraordinary wave have been discussed. The rules of phase mismatching quantity, intensity distribution of output beam and frequency conversion efficiency varying with the temperature rise distribution of the SHG crystal have been analyzed quantitatively. The calculated results indicate that in a high power frequency conversion system, the temperature rise distribution of SHG crystal would result in the phase mismatching of fundamental and harmonic waves, leading to the variation of intensity distribution of the output beam and the decrease of the conversion efficiency. (authors)
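    The split-step Fourier scheme mentioned above can be sketched in one transverse dimension for a single field; the real calculation couples fundamental and second-harmonic envelopes with walk-off, third-order nonlinearity, and a temperature-dependent phase mismatch, so everything below (field, parameter values, and an exact Kerr-type phase step in place of the RK4-integrated coupling) is an illustrative assumption rather than the authors' scheme.

```python
import numpy as np

# Minimal 1-D split-step Fourier sketch for a beam obeying
#   dA/dz = (i / 2k) d^2A/dx^2 + i * gamma * |A|^2 * A
# (diffraction handled in the Fourier domain, nonlinear phase in real space).
nx, dx, dz, nz = 1024, 1e-6, 1e-4, 200
k = 2 * np.pi / 1.064e-6           # wavenumber for an assumed 1064 nm fundamental
gamma = 1e-4                        # toy nonlinear coefficient
x = (np.arange(nx) - nx // 2) * dx
kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
A = np.exp(-(x / 50e-6) ** 2)       # Gaussian input field

lin = np.exp(-1j * kx ** 2 * dz / (2 * k))        # diffraction over one step
for _ in range(nz):
    A = np.fft.ifft(lin * np.fft.fft(A))          # linear (diffraction) part of the step
    A *= np.exp(1j * gamma * np.abs(A) ** 2 * dz) # nonlinear phase over the step

print("output peak intensity:", np.max(np.abs(A) ** 2))
```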

  4. Measurement of intensity distribution of CSR in LEBRA PXR beamline

    International Nuclear Information System (INIS)

    Nakao, Keisuke; Sakai, Takeshi; Hayakawa, Ken; Tanaka, Toshinari; Hayakawa, Yasushi; Nogami, Kyoko; Inagaki, Manabu; Sei, Norihiro

    2014-01-01

    Last year, the intensity of Coherent Synchrotron Radiation (CSR) in LEBRA PXR beamline was measured. As a result, it turned out that the intensity of CSR was stronger than anticipation. It is suggested that Coherent Edge Radiation (CER) is mixed with CSR. Then, in order to confirm whether CER is contained, the intensity distribution of CSR was measured. The result of the experiment is reported in this paper. (author)

  5. Resonant photoemission at core-level shake-up thresholds: Valence-band satellites in nickel

    International Nuclear Information System (INIS)

    Bjoerneholm, O.; Andersen, J.N.; Wigren, C.; Nilsson, A.; Nyholm, R.; Mårtensson, N.

    1990-01-01

    Three-hole satellites (3d⁷ final-state configuration) in the nickel valence-band photoelectron spectrum have been identified at 13 and 18 eV binding energy with use of synchrotron radiation from the MAX storage ring. The three-hole satellites show resonances at photon energies close to the threshold for excitation of 3p⁵3d⁹ core-hole shake-up states. The 13-eV satellite also shows a resonance directly at the 3p threshold. This is interpreted as an interference between the direct three-hole ionization and a shake-up transition in the Auger decay of the 3p hole. This shake-up process is also identified directly in the M₂,₃M₄,₅M₄,₅ Auger spectrum

  6. The ShakeOut Earthquake Scenario - A Story That Southern Californians Are Writing

    Science.gov (United States)

    Perry, Suzanne; Cox, Dale; Jones, Lucile; Bernknopf, Richard; Goltz, James; Hudnut, Kenneth; Mileti, Dennis; Ponti, Daniel; Porter, Keith; Reichle, Michael; Seligson, Hope; Shoaf, Kimberley; Treiman, Jerry; Wein, Anne

    2008-01-01

    The question is not if but when southern California will be hit by a major earthquake - one so damaging that it will permanently change lives and livelihoods in the region. How severe the changes will be depends on the actions that individuals, schools, businesses, organizations, communities, and governments take to get ready. To help prepare for this event, scientists of the U.S. Geological Survey (USGS) have changed the way that earthquake scenarios are done, uniting a multidisciplinary team that spans an unprecedented number of specialties. The team includes the California Geological Survey, Southern California Earthquake Center, and nearly 200 other partners in government, academia, emergency response, and industry, working to understand the long-term impacts of an enormous earthquake on the complicated social and economic interactions that sustain southern California society. This project, the ShakeOut Scenario, has applied the best current scientific understanding to identify what can be done now to avoid an earthquake catastrophe. More information on the science behind this project will be available in The ShakeOut Scenario (USGS Open-File Report 2008-1150; http://pubs.usgs.gov/of/2008/1150/). The 'what if?' earthquake modeled in the ShakeOut Scenario is a magnitude 7.8 on the southern San Andreas Fault. Geologists selected the details of this hypothetical earthquake by considering the amount of stored strain on that part of the fault with the greatest risk of imminent rupture. From this, seismologists and computer scientists modeled the ground shaking that would occur in this earthquake. Engineers and other professionals used the shaking to produce a realistic picture of this earthquake's damage to buildings, roads, pipelines, and other infrastructure. From these damages, social scientists projected casualties, emergency response, and the impact of the scenario earthquake on southern California's economy and society. The earthquake, its damages, and

  7. Intensity Maps Production Using Real-Time Joint Streaming Data Processing From Social and Physical Sensors

    Science.gov (United States)

    Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.

    2015-12-01

    Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that could be used to determine intensity level, which can be divided into two main categories. The first category is represented by social data sources, in which the intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions to the Modified Mercalli Scale. The second category of data sources is represented by observations from different physical sensors installed with the specific purpose of obtaining an instrumentally derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing separately and jointly from both social and physical sensors in order to produce near real-time intensity maps and compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 2014 South Napa, CA earthquake that occurred on August 24, 2014. The use of innovative streaming and pipelining computing paradigms through the IBM InfoSphere Streams platform made it possible to read input data in real time for low-latency computation of a combined intensity level and production of combined intensity maps in near real time. The results compare three types of intensity maps created based on physical, social and combined data sources. Here we correlate
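    The physical-sensor branch described above converts recorded peak ground motion into an instrumental intensity through empirical regressions; the sketch below uses a piecewise relationship of the form popularized by Wald et al. (1999), with coefficients quoted from memory and therefore to be treated as illustrative assumptions rather than the authors' exact choice.

```python
import math

def pga_to_mmi(pga_cm_s2):
    """Instrumental intensity from peak ground acceleration (cm/s^2).

    Piecewise log-linear regression of the Wald et al. (1999) form; the
    coefficients are assumptions for illustration, not a vetted implementation.
    """
    if pga_cm_s2 <= 0:
        return 1.0
    mmi = 3.66 * math.log10(pga_cm_s2) - 1.66
    if mmi < 5.0:                       # below roughly MMI V a gentler slope is usually used
        mmi = 2.20 * math.log10(pga_cm_s2) + 1.00
    return max(1.0, min(10.0, mmi))

for pga in (10, 50, 100, 300):          # cm/s^2
    print(pga, round(pga_to_mmi(pga), 1))
```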

  8. Force distribution is more important than its intensity!

    Directory of Open Access Journals (Sweden)

    Alberto Consolaro

    2014-01-01

    A common question about root resorption is raised in orthodontic practice: What is more important, the intensity of force or its distribution along the root, periodontal and alveolar structures? Diffuse distribution of forces applied to periodontal tissues during tooth movement tends to promote neither extensive areas of cell matrix hyalinization nor significant death of cementoblasts that lead to root resorption. However, focal distribution or concentration of forces within a restricted area - as it occurs in tipping movements, even with forces of lower intensity - tends to induce extensive areas of hyalinization and focal death of cementoblasts, which is commonly associated with root resorption. In tipping movements, the apical regions tend to concentrate more forces, in addition to wounding the cementoblasts, due to the smaller dimension of their root structure as well as their cone shape. For this reason, there is an increase in root resorption. In the cervical region, on the other hand, the large area resulting from a large diameter and bone crown deflection tends to reduce the effects of forces, even when they are more concentrated, thus rarely inducing death of cementoblasts and root resorption.

  9. ShakeNet: a portable wireless sensor network for instrumenting large civil structures

    Science.gov (United States)

    Kohler, Monica D.; Hao, Shuai; Mishra, Nilesh; Govindan, Ramesh; Nigbor, Robert

    2015-08-03

    We report our findings from a U.S. Geological Survey (USGS) National Earthquake Hazards Reduction Program-funded project to develop and test a wireless, portable, strong-motion network of up to 40 triaxial accelerometers for structural health monitoring. The overall goal of the project was to record ambient vibrations for several days from USGS-instrumented structures. Structural health monitoring has important applications in fields like civil engineering and the study of earthquakes. The emergence of wireless sensor networks provides a promising means to such applications. However, while most wireless sensor networks are still in the experimentation stage, very few take into consideration the realistic earthquake engineering application requirements. To collect comprehensive data for structural health monitoring for civil engineers, high-resolution vibration sensors and sufficient sampling rates should be adopted, which makes it challenging for current wireless sensor network technology in the following ways: processing capabilities, storage limit, and communication bandwidth. The wireless sensor network has to meet expectations set by wired sensor devices prevalent in the structural health monitoring community. For this project, we built and tested an application-realistic, commercially based, portable, wireless sensor network called ShakeNet for instrumentation of large civil structures, especially for buildings, bridges, or dams after earthquakes. Two to three people can deploy ShakeNet sensors within hours after an earthquake to measure the structural response of the building or bridge during aftershocks. ShakeNet involved the development of a new sensing platform (ShakeBox) running a software suite for networking, data collection, and monitoring. Deployments reported here on a tall building and a large dam were real-world tests of ShakeNet operation, and helped to refine both hardware and software. 

  10. Enhanced xanthan production process in shake flasks and pilot ...

    African Journals Online (AJOL)

    Enhanced xanthan production process in shake flasks and pilot scale bioreactors using industrial semidefined medium. ... by the type and concentration of the different carbon and nitrogen source as well as other medium components. The

  11. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    Science.gov (United States)

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time at four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing Lagrange multiplier method. According to the theoretical analysis and real-world experiment, it is shown that the total variance of the Stokes vector estimator can be significantly decreased about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improves the measurement accuracy of the polarimetric system.
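    The closed-form optimum is not reproduced in the abstract; as a hedged illustration of the Lagrange-multiplier step, assume the estimator variance contributed by the i-th intensity measurement scales as v_i / t_i for weights v_i fixed by the analysis matrix. Minimizing under a fixed total integration time T then gives a square-root allocation:

```latex
\min_{t_1,\dots,t_4} \sum_{i=1}^{4} \frac{v_i}{t_i}
\quad \text{s.t.} \quad \sum_{i=1}^{4} t_i = T
\;\Longrightarrow\;
\frac{\partial}{\partial t_i}\!\left(\sum_j \frac{v_j}{t_j} + \lambda \sum_j t_j\right) = -\frac{v_i}{t_i^{2}} + \lambda = 0
\;\Longrightarrow\;
t_i = T\,\frac{\sqrt{v_i}}{\sum_{j}\sqrt{v_j}} .
```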

  12. Laboratory shake flask batch tests can predict field biodegradation of aniline in the Rhine

    DEFF Research Database (Denmark)

    Toräng, Lars; Reuschenbach, P.; Müller, B.

    2001-01-01

    .7 °C, respectively. This field rate estimate was compared with results from 38 laboratory shake flask batch tests with Rhine water which averaged 1.5 day⁻¹ at 15 °C and 2.0 day⁻¹ at 20 °C. These results indicate that laboratory shake flask batch tests with low concentrations of test...

  13. Corroborating a new probabilistic seismic hazard assessment for greater Tokyo from historical intensity observations

    Science.gov (United States)

    Bozkurt, S.; Stein, R.; Toda, S.

    2006-12-01

    The long recorded history of earthquakes in Japan affords an opportunity to forecast seismic shaking exclusively from past observations of shaking. For this we analyzed 10,000 intensity observations recorded during AD 1600-2000 in a 350 x 350 km area centered on Tokyo in a Geographic Information System. A frequency-intensity curve is found for each 5 x 5 km cell, and from this the probability of exceeding any intensity level can be estimated. The principal benefits of this approach are that it builds the fewest possible assumptions into a probabilistic seismic forecast, it includes site and source effects without imposing this behavior, and we do not need to know the size or location of any earthquake or the location and slip rate of any fault. The cost is that we must abandon any attempt to make a time-dependent forecast, which could be quite different. We believe the method is suitable to many applications of probabilistic seismic hazard assessment, and to other regions. The two key assumptions are that the slope of the observed frequency-intensity relation at every site is the same, and that the 400-year record is long enough to encompass the full range of seismic behavior. Tests we conduct suggest that both assumptions are sound. The resulting 30-year probability of IJMA≥6 shaking (roughly equivalent to PGA≥0.9 g or MMI=IX-X) is 30-40% in Tokyo, Kawasaki, and Yokohama, and 10-15% in Chiba and Tsukuba, the range reflecting spatial variability and curve-fitting alternatives. The strongest shaking is forecast along the margins of Tokyo Bay, within the river sediments extending northwest from Tokyo, and at coastal sites near the plate boundary faults. We also produce long-term exceedance maps of peak ground acceleration for building code regulations, and short-term hazard maps associated with hypothetical catastrophe bonds. Our results for greater Tokyo resemble our independent Poisson probability developed from conventional seismic hazard analysis, as well as
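    A minimal sketch of the exceedance calculation implied above, assuming the fitted frequency-intensity curve of a cell yields an annual rate of IJMA ≥ 6 shaking and that occurrences follow a Poisson process; the rates below are illustrative, not values taken from the study.

```python
import math

def poisson_exceedance(rate_per_year, years=30.0):
    """Probability of at least one exceedance in `years`, assuming a Poisson process."""
    return 1.0 - math.exp(-rate_per_year * years)

# Hypothetical annual rates of IJMA >= 6 shaking for a few 5 km x 5 km cells
for rate in (0.017, 0.012, 0.005):
    print(f"rate {rate}/yr -> 30-yr probability {poisson_exceedance(rate):.0%}")
```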

  14. Polydisperse-particle-size-distribution function determined from intensity profile of angularly scattered light

    International Nuclear Information System (INIS)

    Alger, T.W.

    1979-01-01

    A new method for determining the particle-size-distribution function of a polydispersion of spherical particles is presented. The inversion technique for the particle-size-distribution function is based upon matching the measured intensity profile of angularly scattered light with a summation of the intensity contributions of a series of appropriately spaced, narrowband, size-distribution functions. A numerical optimization technique is used to determine the strengths of the individual bands that yield the best agreement with the measured scattered-light-intensity profile. Because Mie theory is used, the method is applicable to spherical particles of all sizes. Several numerical examples demonstrate the application of this inversion method
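    The following sketch illustrates the band-superposition inversion in spirit only: the kernel relating band-centre radius to angular intensity would come from Mie theory, but a placeholder kernel and synthetic data are used here so the non-negative least-squares step can be shown end to end.

```python
import numpy as np
from scipy.optimize import nnls

# Measured angular intensity profile y(theta) is matched by a non-negative
# combination of profiles produced by narrow size bands.  A real implementation
# would fill K from Mie theory; the kernel below is a placeholder so the example runs.
theta = np.linspace(1, 20, 60)            # scattering angles, degrees
radii = np.linspace(0.2, 5.0, 25)         # band-centre radii, micrometres
K = np.array([[np.exp(-(t * r / 10.0) ** 2) * r ** 2 for r in radii] for t in theta])

true_weights = np.exp(-0.5 * ((radii - 2.0) / 0.5) ** 2)   # synthetic "true" distribution
y = K @ true_weights + 1e-3 * np.random.default_rng(0).normal(size=theta.size)

weights, residual = nnls(K, y)            # band strengths best matching the profile
print("recovered peak radius ~", radii[np.argmax(weights)], "um")
```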

  15. Up-Streaming Process for Glucose Oxidase by Thermophilic Penicillium sp. in Shake Flask

    OpenAIRE

    Muhammad Mohsin JAVED; Aroosh SHABIR; Sana ZAHOOR; Ikram UL-HAQ

    2012-01-01

    The present study is concerned with the production of glucose oxidase (GOD) from thermophilic Penicillium sp. in 250 mL shake flask. Fourteen different strains of thermophilic Penicillium sp. were isolated from the soil and were screened for glucose oxidase production. IIBP-13 strain gave maximum extra-cellular glucose oxidase production as compared to other isolates. Effect of submerged fermentation in shaking and static conditions, different carbon sources and incubation period on the produ...

  16. Experimental/analytical approaches to modeling, calibrating and optimizing shaking table dynamics for structural dynamic applications

    Science.gov (United States)

    Trombetti, Tomaso

    This thesis presents an Experimental/Analytical approach to modeling and calibrating shaking tables for structural dynamic applications. This approach was successfully applied to the shaking table recently built in the structural laboratory of the Civil Engineering Department at Rice University. This shaking table is capable of reproducing model earthquake ground motions with a peak acceleration of 6 g's, a peak velocity of 40 inches per second, and a peak displacement of 3 inches, for a maximum payload of 1500 pounds. It has a frequency bandwidth of approximately 70 Hz and is designed to test structural specimens up to 1/5 scale. The rail/table system is mounted on a reaction mass of about 70,000 pounds consisting of three 12 ft x 12 ft x 1 ft reinforced concrete slabs, post-tensioned together and connected to the strong laboratory floor. The slip table is driven by a hydraulic actuator governed by a 407 MTS controller which employs a proportional-integral-derivative-feedforward-differential pressure algorithm to control the actuator displacement. Feedback signals are provided by two LVDTs (monitoring the slip table relative displacement and the servovalve main stage spool position) and by one differential pressure transducer (monitoring the actuator force). The dynamic actuator-foundation-specimen system is modeled and analyzed by combining linear control theory and linear structural dynamics. The analytical model developed accounts for the effects of actuator oil compressibility, oil leakage in the actuator, time delay in the response of the servovalve spool to a given electrical signal, foundation flexibility, and dynamic characteristics of multi-degree-of-freedom specimens. In order to study the actual dynamic behavior of the shaking table, the transfer function between target and actual table accelerations was identified using experimental results and spectral estimation techniques. The power spectral density of the system input and the cross power spectral
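    The transfer-function identification step mentioned above can be illustrated with standard spectral estimation; the sketch below uses synthetic command and table-acceleration records and the common estimate H(f) = Pxy(f)/Pxx(f), so the sampling rate, filter, and noise level are all assumptions for illustration.

```python
import numpy as np
from scipy import signal

# Estimate the transfer function between the target (command) signal and the
# measured table acceleration from cross- and auto-power spectral densities.
fs = 200.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
target = rng.normal(size=t.size)             # broadband command signal
b, a = signal.butter(2, 70 / (fs / 2))       # toy table dynamics: ~70 Hz low-pass
actual = signal.lfilter(b, a, target) + 0.05 * rng.normal(size=t.size)

f, Pxx = signal.welch(target, fs=fs, nperseg=1024)
_, Pxy = signal.csd(target, actual, fs=fs, nperseg=1024)
H = Pxy / Pxx                                # empirical transfer function estimate
print("gain at 10 Hz ~", abs(H[np.argmin(abs(f - 10))]))
```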

  17. MyShake: A smartphone seismic network for earthquake early warning and beyond.

    Science.gov (United States)

    Kong, Qingkai; Allen, Richard M; Schreier, Louis; Kwon, Young-Woo

    2016-02-01

    Large magnitude earthquakes in urban environments continue to kill and injure tens to hundreds of thousands of people, inflicting lasting societal and economic disasters. Earthquake early warning (EEW) provides seconds to minutes of warning, allowing people to move to safe zones and automated slowdown and shutdown of transit and other machinery. The handful of EEW systems operating around the world use traditional seismic and geodetic networks that exist only in a few nations. Smartphones are much more prevalent than traditional networks and contain accelerometers that can also be used to detect earthquakes. We report on the development of a new type of seismic system, MyShake, that harnesses personal/private smartphone sensors to collect data and analyze earthquakes. We show that smartphones can record magnitude 5 earthquakes at distances of 10 km or less and develop an on-phone detection capability to separate earthquakes from other everyday shakes. Our proof-of-concept system then collects earthquake data at a central site where a network detection algorithm confirms that an earthquake is under way and estimates the location and magnitude in real time. This information can then be used to issue an alert of forthcoming ground shaking. MyShake could be used to enhance EEW in regions with traditional networks and could provide the only EEW capability in regions without. In addition, the seismic waveforms recorded could be used to deliver rapid microseism maps, study impacts on buildings, and possibly image shallow earth structure and earthquake rupture kinematics.

  18. Limb Shaking as a Manifestation of Low-flow Transient Ischemic Attacks

    Directory of Open Access Journals (Sweden)

    Mohana P. Maddula

    2010-03-01

    Limb shaking presenting as rhythmic involuntary hyperkinetic movements may be a manifestation of severe bilateral occlusive carotid disease. This unusual form of transient ischemic attack is often misdiagnosed as focal motor seizures. However, careful assessment reveals a lack of usual seizure characteristics such as a Jacksonian march or facial involvement. The movements also appear to be precipitated by activities that lower blood pressure. We present two cases of patients with severe bilateral carotid stenosis leading to limb-shaking transient ischemic attacks. There was complete stenosis in the internal carotid artery (ICA) contralateral to the jerking limb, combined with significant stenosis in the ipsilateral ICA. Cerebral perfusion on the occluded ICA side was maintained through collateral circulation from the opposite ICA and the posterior circulation. When blood pressure was lowered orthostatically or by medication, the resulting cerebral hypoperfusion manifested as limb jerking. Recognition of limb shaking as a rare form of transient ischemic attack and differentiating it from focal motor epilepsy can facilitate early identification of critical carotid stenosis, allowing for appropriate interventions and thus reducing the risk of a disabling stroke. We recommend that clinicians should consider carotid disease in elderly patients presenting with orthostatic or episodic movement disorders.

  19. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    Science.gov (United States)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate especially with large intensity fluctuations and channel attenuation compared with prior results if the intensity fluctuations of different sources are correlated.

  20. Assessing a Tornado Climatology from Global Tornado Intensity Distributions

    OpenAIRE

    Feuerstein, B.; Dotzek, N.; Grieser, J.

    2005-01-01

    Recent work demonstrated that the shape of tornado intensity distributions from various regions worldwide is well described by Weibull functions. This statistical modeling revealed a strong correlation between the fit parameters c for shape and b for scale regardless of the data source. In the present work it is shown that the quality of the Weibull fits is optimized if only tornado reports of F1 and higher intensity are used and that the c–b correlation does indeed reflect a universal featur...
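    A hedged sketch of the Weibull fitting described above, using synthetic wind-speed intensities and an approximate F1 threshold; the restriction to F1-and-stronger reports follows the abstract, but the data, threshold value, and use of scipy's maximum-likelihood fit are illustrative choices, not the authors' procedure.

```python
import numpy as np
from scipy import stats

# Fit a two-parameter Weibull (shape c, scale b) to tornado intensities,
# keeping only reports of F1 and higher.  Wind speeds are synthetic stand-ins
# for a real report catalogue.
rng = np.random.default_rng(2)
wind_speeds = stats.weibull_min.rvs(1.6, scale=55.0, size=2000, random_state=rng)  # m/s
f1_and_up = wind_speeds[wind_speeds >= 33.0]          # crude F1 threshold (~33 m/s)

c, loc, b = stats.weibull_min.fit(f1_and_up, floc=0.0)
print(f"shape c = {c:.2f}, scale b = {b:.1f} m/s")
```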

  1. Formation of a laser beam with a doughnut intensity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Sukhanov, I I; Troitskii, IU V; Iakushkin, S V

    1986-02-01

    The conditions of the simultaneous generation of TEM01 and TEM10 modes forming a beam with a doughnut intensity distribution are investigated. In the case of a complete suppression of the TEM00 mode, the ratio of the intensity at the crest of the ring to the intensity at the ring axis reaches 200 and is limited by dispersion in the optical elements of the resonator. Operation with mutual locking of TEM01 and TEM10 modes has been achieved which is characterized by complete spatial coherence of the ring-shaped beam.

  2. Modeling continuous seismic velocity changes due to ground shaking in Chile

    Science.gov (United States)

    Gassenmeier, Martina; Richter, Tom; Sens-Schönfelder, Christoph; Korn, Michael; Tilmann, Frederik

    2015-04-01

    In order to investigate temporal seismic velocity changes due to earthquake related processes and environmental forcing, we analyze 8 years of ambient seismic noise recorded by the Integrated Plate Boundary Observatory Chile (IPOC) network in northern Chile between 18° and 25° S. The Mw 7.7 Tocopilla earthquake in 2007 and the Mw 8.1 Iquique earthquake in 2014 as well as numerous smaller events occurred in this area. By autocorrelation of the ambient seismic noise field, approximations of the Green's functions are retrieved. The recovered function represents backscattered or multiply scattered energy from the immediate neighborhood of the station. To detect relative changes of the seismic velocities we apply the stretching method, which compares individual autocorrelation functions to stretched or compressed versions of a long term averaged reference autocorrelation function. We use time windows in the coda of the autocorrelations, that contain scattered waves which are highly sensitive to minute changes in the velocity. At station PATCX we observe seasonal changes in seismic velocity as well as temporary velocity reductions in the frequency range of 4-6 Hz. The seasonal changes can be attributed to thermal stress changes in the subsurface related to variations of the atmospheric temperature. This effect can be modeled well by a sine curve and is subtracted for further analysis of short term variations. Temporary velocity reductions occur at the time of ground shaking usually caused by earthquakes and are followed by a recovery. We present an empirical model that describes the seismic velocity variations based on continuous observations of the local ground acceleration. Our hypothesis is that not only the shaking of earthquakes provokes velocity drops, but any small vibrations continuously induce minor velocity variations that are immediately compensated by healing in the steady state. We show that the shaking effect is accumulated over time and best described by
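    A minimal sketch of the stretching method named above, under the common assumption that a relative velocity change dv/v rescales the lag-time axis of the coda window; the sign convention, grid of stretch factors, and synthetic autocorrelations are illustrative assumptions, not the authors' processing parameters.

```python
import numpy as np

def stretching_dv_over_v(acf, reference, t, eps_grid=np.linspace(-0.01, 0.01, 201)):
    """Return (dv/v, correlation) from the stretching method.

    The long-term reference is resampled on a stretched lag-time axis and
    correlated with the current autocorrelation `acf` in the coda window `t`;
    the best stretch factor is mapped to dv/v with an assumed sign convention.
    """
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        stretched = np.interp(t, t * (1.0 + eps), reference)  # reference on stretched axis
        cc = np.corrcoef(acf, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps, best_cc

# Illustrative use with synthetic coda windows (a 0.2% velocity increase
# compresses arrival times of the current trace relative to the reference)
t = np.linspace(5.0, 15.0, 1000)                 # coda lag times, s
reference = np.sin(40 * t) * np.exp(-0.1 * t)
current = np.sin(40 * t * 1.002) * np.exp(-0.1 * t)
print(stretching_dv_over_v(current, reference, t))
```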

  3. Shake-up transitions in S 2p, S 2s and F 1s photoionization of the SF6 molecule

    International Nuclear Information System (INIS)

    Decleva, P; Fronzoni, G; Kivimaeki, A; Alvarez Ruiz, J; Svensson, S

    2009-01-01

    Shake-up transitions occurring upon core photoionization in the SF₆ molecule have been studied experimentally and theoretically. The S 2p, S 2s and F 1s shake-up satellite photoelectron spectra were measured using Al Kα radiation at 1487 eV photon energy. They have been interpreted with the aid of ab initio configuration interaction calculations in the sudden-limit approximation. For the S 2p spectrum, conjugate shake-up transitions were also calculated. Clear evidence of conjugate processes is observed in the S 2p shake-up spectrum measured at 230 eV photon energy. The experimental and theoretical S 2p and S 2s shake-up spectra show very similar structures mainly due to orbital relaxation involving S 3s and 3p participation. For the calculation of the F 1s shake-up spectrum, the symmetry lowering of the molecule in the final states was considered, resulting in a good agreement with the experiment.

  4. Shake table test of soil-pile groups-bridge structure interaction in liquefiable ground

    Science.gov (United States)

    Tang, Liang; Ling, Xianzhang; Xu, Pengju; Gao, Xia; Wang, Dongsheng

    2010-03-01

    This paper describes a shake table test study on the seismic response of low-cap pile groups and a bridge structure in liquefiable ground. The soil profile, contained in a large-scale laminar shear box, consisted of a horizontal saturated sand layer overlaid with a silty clay layer, with the simulated low-cap pile groups embedded. The container was excited by three El Centro earthquake events of different levels. Test results indicate that excess pore pressure (EPP) accumulated only slightly during slight shaking, and the accumulation mainly occurred during strong shaking. The EPP was gradually enhanced as the amplitude and duration of the input acceleration increased. The acceleration response of the sand was remarkably influenced by soil liquefaction. As soil liquefaction occurred, the peak sand displacement gradually lagged behind the input acceleration; meanwhile, the sand displacement exhibited an increasing effect on the bending moment of the pile, and acceleration responses of the pile and the sand layer gradually changed from decreasing to increasing in the vertical direction from the bottom to the top. A jump variation of the bending moment on the pile was observed near the soil interface in all three input earthquake events. It is thought that the shake table tests could provide the groundwork for further seismic performance studies of low-cap pile groups used in bridges located on liquefiable ground.

  5. SHAKING TABLE TESTS ON SEISMIC DEFORMATION OF PILE SUPPORTED PIER

    Science.gov (United States)

    Fujita, Daiki; Kohama, Eiji; Takenobu, Masahiro; Yoshida, Makoto; Kiku, Hiroyoshi

    The seismic deformation characteristics of a pile-supported pier were examined with shake table tests, especially focusing on the pier after its deformation during earthquakes. A model based on the similitude of the fully-plastic moment in piles was prepared to confirm the deformation and stress characteristics after reaching the fully-plastic moment. Moreover, assuming transportation of emergency supplies and the occurrence of aftershocks in the post-disaster period, the pile-supported pier was loaded with weight after reaching the fully-plastic moment and excited with the shaking table. As a result, it was found that the displacement of the pile-supported pier is comparatively small if the bending strength of the piles does not decrease after reaching the fully-plastic moment, owing to the non-occurrence of local buckling or to strain hardening.

  6. Shaking table test and verification of development of an ...

    Indian Academy of Sciences (India)

    A full-scale multiple degrees of freedom shaking table is tested to verify the energy dissipation of this proposed AIC, including the test building without control, with passive control added involving various stiffness ratios, and also with synchronic control added involving various stiffness ratios. Shock absorption of displacement ...

  7. Neuropeptide Y inhibits hippocampal seizures and wet dog shakes

    DEFF Research Database (Denmark)

    Woldbye, D P; Madsen, T M; Larsen, P J

    1996-01-01

    effects in the dentate gyrus and subiculum, but also in areas to which epileptiform EEG activity spreads before reverberating. In addition, NPY strongly reduced seizure-related 'wet dog shakes' (WDS). This is consistent with previous studies showing that the dentate gyrus is essential for the generation...

  8. Inverse identification of intensity distributions from multiple flux maps in concentrating solar applications

    International Nuclear Information System (INIS)

    Erickson, Ben; Petrasch, Jörg

    2012-01-01

    Radiative flux measurements at the focal plane of solar concentrators are typically performed using digital cameras in conjunction with Lambertian targets. To accurately predict flux distributions on arbitrary receiver geometries, directional information about the radiation is required. Currently, the directional characteristics of solar concentrating systems are predicted via ray tracing simulations. No direct experimental technique to determine intensities of concentrating solar systems is available. In the current paper, multiple parallel flux measurements at varying distances from the focal plane, together with a linear inverse method and Tikhonov regularization, are used to identify the directional and spatial intensity distribution at the solution plane. The directional binning feature of an in-house Monte Carlo ray tracing program is used to provide a reference solution. The method has been successfully applied to two-dimensional concentrators, namely parabolic troughs and elliptical troughs, using forward Monte Carlo ray tracing simulations that provide the flux maps as well as consistent, associated intensity distributions for validation. In the two-dimensional case, intensity distributions obtained from the inverse method approach the Monte Carlo forward solution. In contrast, the method has not been successful for three-dimensional, circularly symmetric concentrator geometries.
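    The linear inverse step with Tikhonov regularization can be sketched as a damped normal-equations solve; the forward operator below is a random placeholder standing in for the actual flux-map kernel, so the example only shows the regularized algebra, not the optical model.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares: minimize ||A x - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

# Toy stand-in for the flux-map inversion: A maps discretized intensities
# (direction x position bins at the solution plane) to flux samples taken at
# several planes; here A is random because the real kernel comes from the
# optical geometry / ray tracing.
rng = np.random.default_rng(3)
A = rng.normal(size=(120, 60))          # 120 flux samples, 60 intensity unknowns
x_true = np.clip(rng.normal(size=60), 0, None)
b = A @ x_true + 0.01 * rng.normal(size=120)

x_hat = tikhonov_solve(A, b, lam=0.5)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```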

  9. Shaking table test and simulation analysis on failure characteristics of seismic isolation system

    International Nuclear Information System (INIS)

    Fukushima, Yasuaki; Iizuka, Maao; Satoh, Nobuhisa; Yoshikawa, Kazuhide; Katoh, Asao; Tanimoto, Eisuke

    2000-01-01

    The seismic safety and dynamic characteristics at rubber bearing break of three types of base isolation systems for nuclear power plant facilities, natural rubber bearing + steel damper, lead rubber bearing, and high damping rubber bearing, were confirmed by shaking table tests. Simulation analyses of the shaking table tests were conducted up to the point at which the rubber broke. These results demonstrate that the dynamic behavior of the base isolation systems could be closely simulated, until the rubber broke, using a simple analytical model based on static tests. (author)

  10. Shaking Table Experiment of Trampoline Effect

    Science.gov (United States)

    Aoi, S.; Kunugi, T.; Fujiwara, H.

    2010-12-01

    It has been widely thought that soil response to ground shaking does not exhibit asymmetry in ground motion. An extreme vertical acceleration of nearly four times gravity was recorded during the 2008 Iwate-Miyagi earthquake at station IWTH25. This record is distinctly asymmetric in shape; the waveform envelope amplitude is about 1.6 times larger in the upward direction compared to the downward direction. To explain this phenomenon, Aoi et al. (2008) proposed a simple model of a mass bouncing on a trampoline. In this study we perform a shaking table experiment on a soil prototype to try to reproduce the asymmetric ground motion and to investigate the physics of this asymmetric behavior. A soil chamber made of an acrylic resin cylinder, 200 mm in diameter and 500 mm in height, was tightly anchored to the shaking table and vertically shaken. We used four different sample materials: Toyoura standard sand, glass beads (particle sizes of 0.1 and 0.4 mm), and sawdust. The sample was uniformly stacked to a depth of 450 mm and, to measure the vertical motions, accelerometers were installed inside the material (at depths of 50, 220, and 390 mm) and on the frame of the chamber. Pictures were taken from the side by a high-speed camera (1000 frames/sec) to capture the motions of particles. The chamber was shaken by sinusoidal waves (5, 10, and 20 Hz) with maximum amplitudes from 0.1 to 4.0 g. When the accelerations roughly exceeded gravity, for all samples, granular behavior of the sample materials became dominant and the asymmetric motions were successfully reproduced. Pictures taken by the high-speed camera showed that the motions of the particles are clearly different from the motion of the chamber, which is identical to the sinusoidal motion of the shaking table (input motion). Particles are rapidly flung up and freely pulled down by gravity, and the downward motion of the particles is slower than the upward motion. It was also observed that the timing difference of the falling motions

  11. Cosmic ray intensity distribution in the vertical direction to solar equator plane

    International Nuclear Information System (INIS)

    Nosaka, Toru; Mori, Satoru; Sagisaka, Shuji.

    1983-01-01

    Data on the annual variation of cosmic ray intensity measured by neutron detectors were used to study the distribution of cosmic ray intensity perpendicular to the solar equator plane and its long-term variation. The data used were obtained at Deep River, Kiel, Kerguelen Island, McMurdo, Ottawa, and Mt. Washington. All data showed annual variation. The patterns and degree of variation obtained in the northern and southern hemispheres were similar. The summation dial representation of the annual and semi-annual variation of cosmic rays was obtained. The inversion of the annual variation in 1958 - 1959 and 1968 - 1969 corresponded to the inversion of polarity of the solar polar magnetic field. The semi-annual variation showed a complex behavior. The helio-latitudinal distribution of cosmic ray intensity was obtained. An asymmetric distribution in relation to the solar equator was observed in the annual variation. A northward gradient of density in 1955 - 1958 and a southward gradient in 1959 - 1968 were seen. (Kato, T.)

  12. The ShakeOut earthquake source and ground motion simulations

    Science.gov (United States)

    Graves, R.W.; Houston, Douglas B.; Hudnut, K.W.

    2011-01-01

    The ShakeOut Scenario is premised upon the detailed description of a hypothetical Mw 7.8 earthquake on the southern San Andreas Fault and the associated simulated ground motions. The main features of the scenario, such as its endpoints, magnitude, and gross slip distribution, were defined through expert opinion and incorporated information from many previous studies. Slip at smaller length scales, rupture speed, and rise time were constrained using empirical relationships and experience gained from previous strong-motion modeling. Using this rupture description and a 3-D model of the crust, broadband ground motions were computed over a large region of Southern California. The largest simulated peak ground acceleration (PGA) and peak ground velocity (PGV) generally range from 0.5 to 1.0 g and 100 to 250 cm/s, respectively, with the waveforms exhibiting strong directivity and basin effects. Use of a slip-predictable model results in a high static stress drop event and produces ground motions somewhat higher than median level predictions from NGA ground motion prediction equations (GMPEs).

  13. Monetizing a Meme: YouTube, Content ID, and the Harlem Shake

    Directory of Open Access Journals (Sweden)

    Michael Soha

    2016-01-01

    This article analyzes the creation, evolution, and monetization of the Harlem Shake meme on YouTube to explore contemporary implementation of copyright and understanding of authorship in regard to monetization of works with distributed authorship. This article has three main findings: first, we highlight the collection of digital labor that comprises the “Harlem Shake” meme, its rise in popularity, and the subsequent rise in popularity of Baauer, the composer of the song which forms the backbone of the meme; second, we examine how YouTube’s “new bargain” of Content ID, as a departure from the site’s origins, creates coercive control mechanisms, shedding new light on the concept of and debate over “digital sharecropping.” Finally, we argue for a “Fair(er) use” system by exploring how memes might be understood outside of the contemporary copyright system, rethinking the rights of users engaged in collective production. The article is significant in that it challenges the current distribution of Content ID payments solely to copyright holders in an attempt to rethink a system that acknowledges the creative labor of memetic phenomena and collective authorship.

  14. COMPARISON OF THE SHAKE WEIGHT® MODALITY EXERCISES WHEN COMPARED TO TRADITIONAL DUMBBELLS

    Directory of Open Access Journals (Sweden)

    Jordan M. Glenn

    2012-12-01

    Individuals are continuously looking for faster, more efficient methods with which to develop physical fitness. This has led to the development of products and programs marketed towards increasing physical fitness in minimal time. The Shake Weight® (SW) has been advertised to increase muscular strength, among other factors, in less time than traditional weightlifting. The purpose of this study was to compare the electromyographic (EMG) muscle activity of the SW to a traditional dumbbell (DB) performing the same exercises. Twelve men (22.9 ± 1.6 years) and 13 women (23.0 ± 1.9 years) volunteered to participate in this study. Subjects performed the chest shake (CS), biceps shake (BS), and triceps shake (TS) using the SW and DB. Maximal voluntary isometric contractions (MVIC) were exhibited for all muscles. EMG activity was recorded for the pectoralis major (PM), triceps brachii (TB), biceps brachii (BB), anterior deltoid (AD), trapezius (TR), and rectus abdominis (RA) and compared to detect differences between modalities. EMG activity for each muscle group was reported as a percentage of each subject's individual MVIC. A repeated measures ANOVA revealed no significant differences between the SW and DB modalities during each exercise for all muscles except the BB (p < 0.05). During the CS exercise, muscle activity was significantly greater for DB in the BB muscle when compared to the SW mode (50.8 ± 28.9%; 35.8 ± 30.8%). The SW did not have any advantage over the DB for any exercise, nor for any muscle group. Further, no muscle group during any of the SW trials exhibited an MVIC over 60%, the level necessary to increase muscular strength

  15. Estimation of historical earthquake intensities and intensity-PGA relationship for wooden house damages

    International Nuclear Information System (INIS)

    Choi, In-Kil; Seo, Jeong-Moon

    2002-01-01

    A series of tests and dynamic analyses on Korean traditional wooden houses was performed for the intensity estimation of typical large historical earthquake records. Static and cyclic lateral load tests on the wooden frames were performed to assess the lateral load capacity of the wooden frames. Shaking table tests on two 1:4 scaled models of a Korean ancient commoner's house made of fresh pine lumber were performed. Typical earthquake time histories recorded on soil and rock sites were used as input for the tests. The prototypical wooden house was analyzed for multiple time histories that match Ohsaki's ground response spectra. The seismic analyses considered the aging of lumber and different soil conditions. A relationship between earthquake intensity and peak ground acceleration (PGA) is proposed for wooden house damage based on the results of this study. The intensity of major Korean historical earthquake records related to house collapses was quantitatively estimated to be MM VIII

  16. From Demonstration System to Prototype: ShakeAlert Beta Users Provide Feedback to Improve Alert Delivery

    Science.gov (United States)

    Strauss, J. A.; Vinci, M.; Steele, W. P.; Allen, R. M.; Hellweg, M.

    2013-12-01

    Earthquake Early Warning (EEW) is a system that can provide a few seconds to tens of seconds, or up to minutes, of warning prior to ground shaking at a given location. The goal and purpose of such a system is to reduce the damage, costs, and casualties resulting from an earthquake. A prototype earthquake early warning system (ShakeAlert) is in development by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, University of Washington, and the USGS. Events are published to the UserDisplay, ShakeAlert's Java-based graphical interface, which is being tested by a small group of beta users throughout California. The beta users receive earthquake alerts in real time and are providing feedback on their experiences. For early warning alerts to be useful, people, companies, and institutions must know beforehand what actions they will perform when they receive the information. Beta user interactions allow the ShakeAlert team to discern which alert delivery options are most effective, what changes would make the UserDisplay more useful in a pre-disaster situation, and, most importantly, what actions users plan to take for various scenarios. We also collect feedback detailing the costs of implementing actions and challenges within the beta user organizations, as well as anticipated benefits and savings, thus creating a blueprint for a fully operational system that will meet the needs of the public. New California users as well as the first group of Pacific Northwest users are slated to join the ShakeAlert beta test group in the fall of 2013.

  17. Liquid films on shake flask walls explain increasing maximum oxygen transfer capacities with elevating viscosity.

    Science.gov (United States)

    Giese, Heiner; Azizan, Amizon; Kümmel, Anne; Liao, Anping; Peter, Cyril P; Fonseca, João A; Hermann, Robert; Duarte, Tiago M; Büchs, Jochen

    2014-02-01

    In biotechnological screening and production, oxygen supply is a crucial parameter. Even though oxygen transfer is well documented for viscous cultivations in stirred tanks, little is known about gas/liquid oxygen transfer in shake flask cultures that become increasingly viscous during cultivation. In particular, the oxygen transfer into the liquid film adhering to the shake flask wall has not yet been described for such cultivations. In this study, the oxygen transfer of chemical and microbial model experiments was measured and the suitability of the widely applied film theory of Higbie was examined. With numerical simulations of Fick's law of diffusion, it was demonstrated that Higbie's film theory does not apply to cultivations at viscosities of up to 10 mPa s. For the first time, it was experimentally shown that the maximum oxygen transfer capacity OTRmax increases in shake flasks when viscosity is increased from 1 to 10 mPa s, leading to an improved oxygen supply for microorganisms. Additionally, the OTRmax does not fall significantly below the OTRmax at water-like viscosities, even at elevated viscosities of up to 80 mPa s. In this range, a shake flask is, to some extent, a self-regulating system with respect to oxygen supply. This is in contrast to stirred tanks, where the oxygen supply is steadily reduced to only 5% at 80 mPa s. Since the liquid film that forms on shake flask walls inherently promotes oxygen supply at moderate and elevated viscosities, these results have significant implications for scale-up. © 2013 Wiley Periodicals, Inc.
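
    To make the film-transport argument concrete, here is a minimal explicit finite-difference sketch of Fick's second law in a thin liquid film whose gas-side boundary is held at saturation; the diffusivity, film thickness, saturation concentration and contact time are illustrative assumptions, not parameters from the study.

        import numpy as np

        D = 2.0e-9              # oxygen diffusivity in a water-like liquid [m^2/s] (assumed)
        L = 100e-6              # film thickness [m] (assumed)
        n = 101
        dx = L / (n - 1)
        dt = 0.4 * dx**2 / D    # below the stability limit of the explicit scheme
        c = np.zeros(n)         # dissolved-oxygen concentration profile [mol/m^3]
        c_sat = 0.27            # saturation concentration at the gas side [mol/m^3] (assumed)

        t_contact = 0.1         # film renewal (contact) time [s] (assumed)
        for _ in range(int(t_contact / dt)):
            c[0] = c_sat        # gas/liquid interface held at saturation
            c[-1] = c[-2]       # no-flux condition at the flask wall
            c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])

        print(c.mean())         # mean oxygen concentration in the film after one contact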

  18. Transfrontier macroseismic data exchange in NW Europe: examples of non-circular intensity distributions

    Science.gov (United States)

    Van Noten, Koen; Lecocq, Thomas; Hinzen, Klaus-G.; Sira, Christophe; Camelbeeck, Thierry

    2016-04-01

    Macroseismic data acquisition has recently received a strong increase in interest due to public crowdsourcing through internet-based inquiries and real-time smartphone applications. Macroseismic analysis of felt earthquakes is important because the perception of people can be used to detect local/regional site effects in areas without instrumentation. We demonstrate how post-processing macroseismic data improves the quality of real-time intensity evaluation of new events. Instead of using the classic DYFI representation in which internet intensities are averaged per community, we first geocode all individual responses and divide the model area into 100 km² grid cells. Second, the average intensity of all answers within each grid cell is calculated. The resulting macroseismic grid cell distribution is less subjective and more homogeneous than the classical irregular community distribution and helps to improve the calculation of intensity attenuation functions. In this presentation, the 'Did You Feel It' (DYFI) macroseismic data of several M>4 earthquakes felt in Belgium, Germany, the Netherlands, France, Luxembourg and the UK, e.g. the 2002 ML 4.9 Alsdorf and 2011 ML 4.3 Goch (Germany) earthquakes and the 2015 ML 4.1 Ramsgate (UK) earthquake, are analysed. Integration of transfrontier DYFI data from the ROB-BNS, KNMI, BCSF and BGS networks results in a distinctly non-circular distribution of the macroseismic data, in which the felt area for all these examples extends significantly farther E-W than N-S. This intensity distribution cannot be explained by geometrical amplitude attenuation alone, but rather illustrates a low-pass filtering effect due to the south-to-north increasing thickness of cover sediments above the London-Brabant Massif. For the studied M4 to M5 earthquakes, the thick sediments attenuate seismic energy at higher frequencies and consequently fewer people feel the vibrations at the surface. This example of successful macroseismic data exchange
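
    The grid-cell averaging step described above could look roughly like the sketch below, which bins individually geocoded responses into roughly 10 km x 10 km (100 km²) cells and averages the reported intensities per cell; the coordinate conversion and the sample responses are simplified assumptions, not the authors' implementation.

        import numpy as np

        def grid_cell_intensities(lon, lat, intensity, cell_km=10.0):
            # Average individually geocoded intensity reports over ~100 km^2 cells,
            # using a fixed degrees-per-kilometre conversion (illustrative only).
            deg_lat = cell_km / 111.0
            deg_lon = cell_km / (111.0 * np.cos(np.radians(np.mean(lat))))
            ix = np.floor(np.asarray(lon) / deg_lon).astype(int)
            iy = np.floor(np.asarray(lat) / deg_lat).astype(int)
            cells = {}
            for i, j, val in zip(ix, iy, intensity):
                cells.setdefault((i, j), []).append(val)
            return {key: float(np.mean(vals)) for key, vals in cells.items()}

        # Hypothetical individual responses (longitude, latitude, reported intensity).
        lons = [6.10, 6.12, 6.30]
        lats = [50.77, 50.78, 50.95]
        ints = [4, 5, 3]
        print(grid_cell_intensities(lons, lats, ints))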

  19. Improve beam position stability of SSRF BL15U beamline by using beam intensity feedback

    International Nuclear Information System (INIS)

    Li Guoqiang; Liang Dongxu; Yan Fen; Li Aiguo; Yu Xiaohan

    2013-01-01

    Background: Shaking of the micro-focus spot in the vertical direction was found during energy scan experiments, such as XAFS scans; the vertical beam position changes noticeably with energy. Purpose: To keep the beam position shaking amplitude below 1/10 of the beam size. Methods: The beam position stability of the SSRF BL15U beamline is improved by using beam intensity feedback. The feedback system includes the beamline's beam intensity monitor and the fine adjustment mechanism of pitch 2 (the pitch angle of the second crystal of the double crystal monochromator). Feedback control of the beam position is realized by adjusting pitch 2 to hold the beam intensity at its maximum value. Results: The test results show that vertical beam vibration below 10 Hz is significantly reduced and that the beam position stability during photon energy scans is improved by more than a factor of 5. Conclusions: By adopting the new feedback system, the stability of the beam spot on the specimen stage was dramatically improved, which achieved the anticipated target. (authors)
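
    A minimal sketch of the kind of dither-and-compare feedback loop described above, nudging the pitch-2 angle toward the intensity maximum, is given below; the intensity response function, noise level, step size and angles are purely illustrative and do not represent the beamline's actual control code.

        import random

        def read_intensity(pitch):
            # Stand-in for the beamline intensity monitor: a peaked response in pitch
            # (maximum at 1.2e-3 rad) plus measurement noise; purely illustrative.
            return max(0.0, 1.0 - 1.0e6 * (pitch - 1.2e-3) ** 2) + random.gauss(0.0, 1e-3)

        def feedback_step(pitch, step=1e-5):
            # One dither-and-compare iteration: move pitch toward higher intensity.
            return pitch + step if read_intensity(pitch + step) > read_intensity(pitch - step) else pitch - step

        pitch = 1.0e-3                   # hypothetical starting pitch-2 angle [rad]
        for _ in range(300):
            pitch = feedback_step(pitch)
        print(pitch)                     # should settle near the intensity maximum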

  20. The Influence of Soil Moisture and Wind on Rainfall Distribution and Intensity in Florida

    Science.gov (United States)

    Baker, R. David; Lynn, Barry H.; Boone, Aaron; Tao, Wei-Kuo

    1998-01-01

    Land surface processes play a key role in the water and energy budgets of the hydrological cycle. For example, the distribution of soil moisture will affect sensible and latent heat fluxes, which in turn may dramatically influence the location and intensity of precipitation. However, mean wind conditions also strongly influence the distribution of precipitation. The relative importance of soil moisture and wind for rainfall location and intensity remains uncertain. Here, we examine the influence of the soil moisture distribution and the wind distribution on precipitation in the Florida peninsula using the 3-D Goddard Cumulus Ensemble (GCE) cloud model coupled with the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) land surface model. This study utilizes data collected on 27 July 1991 in central Florida during the Convection and Precipitation Electrification Experiment (CaPE). The idealized numerical experiments consider a block of land (the Florida peninsula) bordered on the east and on the west by ocean. The initial soil moisture distribution is derived from an offline PLACE simulation, and the initial environmental wind profile is determined from the CaPE sounding network. Using the factor separation technique, the precise contribution of soil moisture and wind to rainfall distribution and intensity is determined.
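
    The factor separation technique mentioned above attributes the simulated rainfall to each factor and to their synergy using four model runs; the sketch below uses placeholder rainfall totals rather than output from the study.

        # Rainfall totals [mm] from four hypothetical runs (placeholder values):
        r_00 = 4.0    # neither realistic soil moisture nor observed wind
        r_10 = 7.0    # realistic soil moisture only
        r_01 = 6.0    # observed wind only
        r_11 = 12.0   # both factors included

        soil_effect = r_10 - r_00
        wind_effect = r_01 - r_00
        synergy = r_11 - r_10 - r_01 + r_00   # joint contribution beyond the sum of the parts
        print(soil_effect, wind_effect, synergy)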

  1. Effect of spatial distribution of tastants on taste intensity, fluctuation of taste intensity and consumer preference of (semi-)solid food products

    NARCIS (Netherlands)

    Mosca, A.C.; Bult, J.H.F.; Stieger, M.A.

    2013-01-01

    Two sensory studies were carried out to compare the taste intensity, the perceived fluctuation of taste intensity and the consumer preference of food products with homogeneous and inhomogeneous distributions of tastants using 2-alternative forced choice tests. The first study evaluated pairs of

  2. Preparing a population for an earthquake like Chi-Chi: The Great Southern California ShakeOut

    Science.gov (United States)

    Jones, Lucile M.; ,

    2009-01-01

    The Great Southern California ShakeOut was a week of special events featuring the largest earthquake drill in United States history. On November 13, 2008, over 5 million southern Californians pretended that a magnitude-7.8 earthquake had occurred and practiced actions that could reduce its impact on their lives. The primary message of the ShakeOut is that what we do now, before a big earthquake, will determine what our lives will be like after. The drill was based on a scenario of the impacts and consequences of such an earthquake on the Southern San Andreas Fault, developed by over 300 experts led by the U.S. Geological Survey in partnership with the California Geological Survey, the Southern California Earthquake Center, Earthquake Engineering Research Institute, lifeline operators, emergency services and many other organizations. The ShakeOut campaign was designed and implemented by earthquake scientists, emergency managers, sociologists, art designers and community participants. The means of communication were developed using results from sociological research on what encouraged people to take action. This was structured around four objectives: 1) consistent messages – people are more inclined to believe something when they hear the same thing from multiple sources; 2) visual reinforcement – people are more inclined to do something they see other people doing; 3) encourage “milling” or discussing contemplated action – people need to discuss an action with others they care about before committing to undertaking it; and 4) focus on concrete actions – people are more likely to prepare for a set of concrete consequences of a particular hazard than for an abstract concept of risk. The goals of the ShakeOut were established in Spring 2008 and were: 1) to register 5 million people to participate in the drill; 2) to change the culture of earthquake preparedness in southern California; and 3) to reduce earthquake losses in southern California. All of these

  3. CISN Display Progress to Date - Reliable Delivery of Real-Time Earthquake Information, and ShakeMap to Critical End Users

    Science.gov (United States)

    Rico, H.; Hauksson, E.; Thomas, E.; Friberg, P.; Frechette, K.; Given, D.

    2003-12-01

    The California Integrated Seismic Network (CISN) has collaborated to develop a next-generation earthquake notification system that is nearing its first operations-ready release. The CISN Display actively alerts users to seismic data and vital earthquake hazard information following a significant event. It will primarily replace the Caltech/USGS Broadcast of Earthquakes (CUBE) and Rapid Earthquake Data Integration (REDI) Display as the principal means of delivering geographical seismic data to emergency operations centers, utility companies and media outlets. A subsequent goal is to provide automated access to the many Web products produced by regional seismic networks after an earthquake. Another aim is to create a highly configurable client, allowing user organizations to overlay infrastructure data critical to their roles as first responders or lifeline operators. And the final goal is to integrate these requirements into a package offering several layers of reliability to ensure delivery of services. Central to the CISN Display's role as a gateway to Web-based earthquake products is its comprehensive XML-messaging schema. The message model uses many of the same attributes as the CUBE format, but extends the old standard by provisioning additional elements for products currently available, and others yet to be considered. The client consumes these XML messages, sorts them through a resident Quake Data Merge filter, and posts updates that also include hyperlinks associated with specific event IDs on the display map. Earthquake products available for delivery to the CISN Display are ShakeMap, focal mechanisms, waveform data, felt reports, aftershock forecasts and earthquake commentaries. By design the XML-message schema can evolve as products and information needs change, without breaking existing applications that rely on it. The latest version of the CISN Display can also automatically download ShakeMaps and display shaking intensity within the GIS system. This

  4. A Method of Estimating Pressure and Intensity Distributions of Multielement Phased Array High Intensity Focused Ultrasonic Field at Full Power Using a Needle Hydrophone

    International Nuclear Information System (INIS)

    Yu Ying; Shen Guofeng; Bai Jingfeng; Chen Yazhu

    2011-01-01

    The pressure and intensity distributions of high intensity focused ultrasound (HIFU) fields at full power are critical for predicting heating patterns and ensuring the safety of the therapy. Because of the limitation on the maximum pressure at the hydrophone and the risk of damage from cavitation or thermal effects, it is hard to measure pressure and intensity directly when HIFU is at full power. HIFU phased arrays are usually composed of large numbers of small elements, and the sound power radiated from subsets of them at full power is measurable using a hydrophone. We therefore grouped the elements based on the maximum permissible pressure at the hydrophone and the characteristics of the element arrangement in the array. Sound field measurements of each group were then carried out at full power. Using the acoustic coherence principle, the pressure and intensity distribution of the array at full power can be calculated from the corresponding values for the groups. With this method, computer simulations and sound field measurements of a 65-element concentric distributed phased array were carried out. The simulation results demonstrate theoretically the feasibility of this method. Measurements on the 65-element phased array also verify the effectiveness of this method for estimating the pressure and intensity distribution of a phased array at full power using a needle hydrophone.
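
    A brief sketch of the coherent-summation step: if the complex pressure field of each element group is measured separately at full drive, the field of the whole array can be approximated by the complex sum of the group fields, from which intensity follows as |p|^2 / (2*rho*c). The array shapes, the random stand-in "measurements" and the medium properties below are illustrative assumptions only.

        import numpy as np

        n_groups, ny, nx = 5, 64, 64
        rng = np.random.default_rng(0)

        # Hypothetical measured complex pressure maps for each group at full drive [Pa].
        group_pressure = (rng.normal(size=(n_groups, ny, nx))
                          + 1j * rng.normal(size=(n_groups, ny, nx)))

        p_total = group_pressure.sum(axis=0)                 # coherent superposition
        rho, c = 1000.0, 1500.0                              # water-like medium (assumed)
        intensity = np.abs(p_total) ** 2 / (2 * rho * c)     # time-averaged intensity
        print(intensity.max())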

  5. New results in the relation between intensity distribution of reflected molecular beams and spatial distribution of elementary crystal cells in solids

    International Nuclear Information System (INIS)

    Nikuradse, A.; Weidner, J.

    1974-01-01

    Analytic expressions for the intensity distribution of a molecular beam reflected by a solid surface consisting of face-centered cubic elementary cells have been studied. The influence of the spatial distribution of the elementary crystal cells on the reflected intensity has also been examined. Several curves evaluated and plotted by computer are presented. The Kratzer interaction potential has been assumed throughout.

  6. MR imaging of the bone marrow using short TI IR, 1. Normal and pathological intensity distribution of the bone marrow

    Energy Technology Data Exchange (ETDEWEB)

    Ishizaka, Hiroshi; Kurihara, Mikiko; Tomioka, Kuniaki; Kobayashi, Kanako; Sato, Noriko; Nagai, Teruo; Heshiki, Atsuko; Amanuma, Makoto; Mizuno, Hitomi.

    1989-02-01

    Normal vertebral bone marrow intensity distribution and its alteration in various anemias were evaluated on short TI IR sequences. The material consists of 73 individuals: 48 normal subjects and 25 anemic patients, excluding neoplastic conditions. All normal and reactive hypercellular bone marrow showed a characteristic intensity distribution, with marginal high intensity and central low intensity, corresponding well to the normal distribution of red and yellow marrow and their physiological or reactive conversion. Aplastic anemia did not show this normal intensity distribution, presumably due to its autonomous condition.

  7. Compact 3D Camera for Shake-the-Box Particle Tracking

    Science.gov (United States)

    Hesseling, Christina; Michaelis, Dirk; Schneiders, Jan

    2017-11-01

    Time-resolved 3D-particle tracking usually requires the time-consuming optical setup and calibration of 3 to 4 cameras. Here, a compact four-camera housing has been developed. The performance of the system using Shake-the-Box processing (Schanz et al. 2016) is characterized. It is shown that the stereo-base is large enough for sensible 3D velocity measurements. Results from successful experiments in water flows using LED illumination are presented. For large-scale wind tunnel measurements, an even more compact version of the system is mounted on a robotic arm. Once calibrated for a specific measurement volume, the necessity for recalibration is eliminated even when the system moves around. Co-axial illumination is provided through an optical fiber in the middle of the housing, illuminating the full measurement volume from one viewing direction. Helium-filled soap bubbles are used to ensure sufficient particle image intensity. This way, the measurement probe can be moved around complex 3D-objects. By automatic scanning and stitching of recorded particle tracks, the detailed time-averaged flow field of a full volume of cubic meters in size is recorded and processed. Results from an experiment at TU-Delft of the flow field around a cyclist are shown.

  8. Terrestrial Planet Formation: Dynamical Shake-up and the Low Mass of Mars

    Science.gov (United States)

    Bromley, Benjamin C.; Kenyon, Scott J.

    2017-05-01

    We consider a dynamical shake-up model to explain the low mass of Mars and the lack of planets in the asteroid belt. In our scenario, a secular resonance with Jupiter sweeps through the inner solar system as the solar nebula depletes, pitting resonant excitation against collisional damping in the Sun’s protoplanetary disk. We report the outcome of extensive numerical calculations of planet formation from planetesimals in the terrestrial zone, with and without dynamical shake-up. If the Sun’s gas disk within the terrestrial zone depletes in roughly a million years, then the sweeping resonance inhibits planet formation in the asteroid belt and substantially limits the size of Mars. This phenomenon likely occurs around other stars with long-period massive planets, suggesting that asteroid belt analogs are common.

  9. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    Science.gov (United States)

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider a degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate the DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain a closed-form solution for the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
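
    The dependence of the estimator variance on the integration-time split can also be checked numerically; the Monte Carlo sketch below assumes Poisson shot noise and scans the fraction of time spent on the first measurement, as a simplified stand-in for the paper's closed-form Delta-method/Lagrange-multiplier result.

        import numpy as np

        rng = np.random.default_rng(1)

        def dolp_variance(split, total_time=1.0, flux1=1e5, flux2=3e4, n_trials=2000):
            # Monte Carlo variance of the DOLP estimator when a fraction `split` of the
            # total integration time is spent on the first (parallel) measurement.
            t1, t2 = split * total_time, (1 - split) * total_time
            i1 = rng.poisson(flux1 * t1, n_trials) / t1   # estimated intensities
            i2 = rng.poisson(flux2 * t2, n_trials) / t2
            dolp = (i1 - i2) / (i1 + i2)
            return dolp.var()

        splits = np.linspace(0.1, 0.9, 17)
        variances = [dolp_variance(s) for s in splits]
        print(splits[int(np.argmin(variances))])   # time fraction with the smallest variance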

  10. Future Earth: Reducing Loss By Automating Response to Earthquake Shaking

    Science.gov (United States)

    Allen, R. M.

    2014-12-01

    Earthquakes pose a significant threat to society in the U.S. and around the world. The risk is easily forgotten given the infrequent recurrence of major damaging events, yet the likelihood of a major earthquake in California in the next 30 years is greater than 99%. As our societal infrastructure becomes ever more interconnected, the potential impacts of these future events are difficult to predict. Yet, the same interconnected infrastructure also allows us to rapidly detect earthquakes as they begin, and provide seconds, tens of seconds, or a few minutes of warning. A demonstration earthquake early warning system is now operating in California and is being expanded to the west coast (www.ShakeAlert.org). In recent earthquakes in the Los Angeles region, alerts were generated that could have provided warning to the vast majority of Angelenos who experienced the shaking. Efforts are underway to build a public system. Smartphone technology will be used not only to issue the alerts, but could also be used to collect data and improve the warnings. The MyShake project at UC Berkeley is currently testing an app that attempts to turn millions of smartphones into earthquake detectors. As our development of the technology continues, we can anticipate ever more automated response to earthquake alerts. Already, the BART system in the San Francisco Bay Area automatically stops trains based on the alerts. In the future, elevators will stop, machinery will pause, hazardous materials will be isolated, and self-driving cars will pull over to the side of the road. In this presentation we will review the current status of the earthquake early warning system in the US. We will illustrate how smartphones can contribute to the system. Finally, we will review applications of the information to reduce future losses.

  11. Mapping the continuous reciprocal space intensity distribution of X-ray serial crystallography.

    Science.gov (United States)

    Yefanov, Oleksandr; Gati, Cornelius; Bourenkov, Gleb; Kirian, Richard A; White, Thomas A; Spence, John C H; Chapman, Henry N; Barty, Anton

    2014-07-17

    Serial crystallography using X-ray free-electron lasers enables the collection of tens of thousands of measurements from an equal number of individual crystals, each of which can be smaller than 1 µm in size. This manuscript describes an alternative way of handling diffraction data recorded by serial femtosecond crystallography, by mapping the diffracted intensities into three-dimensional reciprocal space rather than integrating each image in two dimensions as in the classical approach. We call this procedure 'three-dimensional merging'. This procedure retains information about asymmetry in Bragg peaks and diffracted intensities between Bragg spots. This intensity distribution can be used to extract reflection intensities for structure determination and opens up novel avenues for post-refinement, while observed intensity between Bragg peaks and peak asymmetry are of potential use in novel direct phasing strategies.
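
    Conceptually, three-dimensional merging maps each detector pixel onto the Ewald sphere and accumulates its intensity into a reciprocal-space grid in the crystal frame; the sketch below illustrates that mapping and binning with hypothetical detector geometry, units and grid size, and is not the authors' software.

        import numpy as np

        def pixel_to_q(x, y, det_dist, wavelength):
            # Detector coordinates (mm, beam centre at origin) to scattering vectors
            # q = (s_out - s_in) / wavelength, here in 1/Angstrom for wavelength in Angstrom.
            norm = np.sqrt(x**2 + y**2 + det_dist**2)
            s_out = np.stack([x / norm, y / norm, det_dist / norm], axis=-1)
            return (s_out - np.array([0.0, 0.0, 1.0])) / wavelength

        n, q_max = 65, 0.5                       # grid size and range [1/Angstrom], assumed
        grid_sum = np.zeros((n, n, n))
        grid_cnt = np.zeros((n, n, n))

        def merge_pattern(qvecs, intensities, rotation):
            # Rotate q into the crystal frame (rotation comes from indexing) and bin.
            q_cryst = qvecs @ rotation.T
            idx = np.floor((q_cryst + q_max) / (2 * q_max) * (n - 1)).astype(int)
            keep = np.all((idx >= 0) & (idx < n), axis=1)
            for (i, j, k), val in zip(idx[keep], intensities[keep]):
                grid_sum[i, j, k] += val
                grid_cnt[i, j, k] += 1

        # One hypothetical pattern: three pixels, identity orientation.
        x = np.array([10.0, -20.0, 30.0]); y = np.array([15.0, 0.0, -10.0])
        q = pixel_to_q(x, y, det_dist=100.0, wavelength=1.3)
        merge_pattern(q, np.array([120.0, 80.0, 45.0]), np.eye(3))
        merged = grid_sum / np.maximum(grid_cnt, 1)   # merged 3D intensity after all patterns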

  12. Using structural damage statistics to derive macroseismic intensity within the Kathmandu valley for the 2015 M7.8 Gorkha, Nepal earthquake

    Science.gov (United States)

    McGowan, S. M.; Jaiswal, K. S.; Wald, D. J.

    2017-09-01

    We make and analyze structural damage observations from within the Kathmandu valley following the 2015 M7.8 Gorkha, Nepal earthquake to derive macroseismic intensities at several locations including some located near ground motion recording sites. The macroseismic intensity estimates supplement the limited strong ground motion data in order to characterize the damage statistics. This augmentation allows for direct comparisons between ground motion amplitudes and structural damage characteristics and ultimately produces a more constrained ground shaking hazard map for the Gorkha earthquake. For systematic assessments, we focused on damage to three specific building categories: (a) low/mid-rise reinforced concrete frames with infill brick walls, (b) unreinforced brick masonry bearing walls with reinforced concrete slabs, and (c) unreinforced brick masonry bearing walls with partial timber framing. Evaluating dozens of photos of each construction type, assigning each building in the study sample to a European Macroseismic Scale (EMS)-98 Vulnerability Class based upon its structural characteristics, and then individually assigning an EMS-98 Damage Grade to each building allows a statistically derived estimate of macroseismic intensity for each of nine study areas in and around the Kathmandu valley. This analysis concludes that EMS-98 macroseismic intensities for the study areas from the Gorkha mainshock typically were in the VII-IX range. The intensity assignment process described is more rigorous than the informal approach of assigning intensities based upon anecdotal media or first-person accounts of felt-reports, shaking, and their interpretation of damage. Detailed EMS-98 macroseismic assessments in urban areas are critical for quantifying relations between shaking and damage as well as for calibrating loss estimates. We show that the macroseismic assignments made herein result in fatality estimates consistent with the overall and district-wide reported values.
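
    As a loose illustration of turning damage statistics into an intensity estimate, the sketch below compares an observed damage-grade histogram for one vulnerability class with assumed expected distributions per candidate intensity and picks the closest match; the expected distributions and counts are placeholders and do not reproduce the EMS-98 tables or the authors' procedure.

        import numpy as np

        # Placeholder expected damage-grade distributions (grades 0-5) at three
        # candidate intensities for one vulnerability class (NOT the EMS-98 tables).
        expected = {
            7: [0.35, 0.30, 0.20, 0.10, 0.04, 0.01],
            8: [0.15, 0.25, 0.25, 0.20, 0.10, 0.05],
            9: [0.05, 0.10, 0.20, 0.30, 0.20, 0.15],
        }

        observed = np.array([10.0, 18.0, 25.0, 27.0, 12.0, 8.0])   # building counts per grade
        observed = observed / observed.sum()

        def closest_intensity(obs, expected):
            # Pick the candidate intensity whose expected distribution is nearest (L2 distance).
            return min(expected, key=lambda i: float(np.sum((obs - np.array(expected[i])) ** 2)))

        print(closest_intensity(observed, expected))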

  13. A calculation model for primary intensity distributions from cylindrically symmetric x-ray lenses

    International Nuclear Information System (INIS)

    Hristov, Dimitre; Maltz, Jonathan

    2008-01-01

    A calculation model for the quantitative prediction of primary intensity fluence distributions obtained by the Bragg diffraction focusing of kilovoltage radiation by cylindrical x-ray lenses is presented. The mathematical formalism describes primary intensity distributions from cylindrically symmetric x-ray lenses, with a planar isotropic radiation source located in a plane perpendicular to the lens axis. The presence of an attenuating medium inserted between the lens and the lens focus is accounted for by energy-dependent attenuation. The influence of radiation scattered within the medium is ignored. Intensity patterns are modeled under the assumption that photons not interacting with the lens are blocked at any point of interest. The main characteristics of the proposed calculation procedure are that (i) the application of a vector formalism allows universal treatment of all cylindrical lenses without the need for explicit geometric constructs; (ii) intensity distributions resulting from x-ray diffraction are described by a 3D generalization of the mosaic spread concept; and (iii) the calculation model can be immediately coupled to x-ray diffraction simulation packages such as XOP and Shadow. Numerical simulations based on this model are intended to facilitate the design of focused orthovoltage treatment (FOT) systems employing cylindrical x-ray lenses, by providing insight into the influence of the x-ray source and lens parameters on quantities of dosimetric interest to radiation therapy.

  14. The Distribution of the Interval between Events of a Cox Process with Shot Noise Intensity

    Directory of Open Access Journals (Sweden)

    Angelos Dassios

    2008-01-01

    Full Text Available Applying the theory of piecewise deterministic Markov processes, the probability generating function of a Cox process with a shot noise process as the claim intensity is obtained. We also derive the Laplace transform of the distribution of the shot noise process at claim jump times, using the stationarity assumption of the shot noise process. Based on this Laplace transform and on the probability generating function of the Cox process with shot noise intensity, we obtain the distribution of the interval between events of a Cox process with shot noise intensity for insurance claims, together with its moments, that is, the mean and variance.
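
    The analytically derived interval distribution can be checked by simulation; the sketch below draws a shot-noise intensity (Poisson primary shocks with exponentially decaying jumps), thins a dominating Poisson process to obtain claim times, and reports the empirical mean and variance of the inter-claim intervals. All parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        def simulate_cox_shot_noise(t_end=200.0, rho=0.5, jump_mean=1.0, delta=0.3):
            # Primary shocks driving the intensity.
            n_shocks = rng.poisson(rho * t_end)
            shock_times = np.sort(rng.uniform(0.0, t_end, n_shocks))
            shock_sizes = rng.exponential(jump_mean, n_shocks)

            def intensity(t):
                past = shock_times <= t
                return np.sum(shock_sizes[past] * np.exp(-delta * (t - shock_times[past])))

            lam_max = shock_sizes.sum()          # crude upper bound for thinning
            t, claims = 0.0, []
            while t < t_end:
                t += rng.exponential(1.0 / lam_max)
                if t < t_end and rng.uniform() < intensity(t) / lam_max:
                    claims.append(t)
            return np.asarray(claims)

        intervals = np.diff(simulate_cox_shot_noise())
        print(intervals.mean(), intervals.var())   # empirical counterparts of the derived moments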

  15. Validation of the shake test for detecting freeze damage to adsorbed vaccines.

    Science.gov (United States)

    Kartoglu, Umit; Ozgüler, Nejat Kenan; Wolfson, Lara J; Kurzatkowski, Wiesław

    2010-08-01

    To determine the validity of the shake test for detecting freeze damage in aluminium-based, adsorbed, freeze-sensitive vaccines. A double-blind crossover design was used to compare the performance of the shake test conducted by trained health-care workers (HCWs) with that of phase contrast microscopy as a "gold standard". A total of 475 vials of 8 different types of World Health Organization prequalified freeze-sensitive vaccines from 10 different manufacturers were used. Vaccines were kept at 5 degrees C. Selected numbers of vials from each type were then exposed to -25 degrees C and -2 degrees C for 24-hour periods. There was complete concordance between HCWs and phase-contrast microscopy in identifying freeze-damaged vials and non-frozen samples. Non-frozen samples showed a fine-grain structure under phase contrast microscopy, but freeze-damaged samples showed large conglomerates of massed precipitates with amorphous, crystalline, solid and needle-like structures. Particles in the non-frozen samples measured from 1 microm (vaccines against diphtheria-tetanus-pertussis; Haemophilus influenzae type b; hepatitis B; diphtheria-tetanus-pertussis-hepatitis B) to 20 microm (diphtheria and tetanus vaccines, alone or in combination). By contrast, aggregates in the freeze-damaged samples measured up to 700 microm (diphtheria-tetanus-pertussis) and 350 microm on average. The shake test had 100% sensitivity, 100% specificity and 100% positive predictive value in this study, which confirms its validity for detecting freeze damage to aluminium-based freeze-sensitive vaccines.

  16. Sky-distribution of intensity of synchrotron radio emission of relativistic electrons trapped in Earth’s magnetic field

    Directory of Open Access Journals (Sweden)

    Klimenko V.V.

    2017-12-01

    Full Text Available This paper presents calculations of synchrotron radio emission intensity from the Van Allen belts with a Gaussian space distribution of electron density across L-shells of a dipole magnetic field, and with a Maxwellian relativistic electron energy distribution. The results of these calculations are in good agreement with measurements of the synchrotron emission intensity of the artificial radiation belt's electrons during the Starfish nuclear test. We have obtained two-dimensional distributions of radio brightness in azimuth-zenith angle coordinates for an observer on Earth's surface. The west-side and east-side intensity maxima exceed the maximum emission level in the meridian plane several times over. We have also constructed two-dimensional distributions of the radio emission intensity in decibels relative to the background galactic radio noise level. Isotropic fluxes of relativistic electrons (E ~ 1 MeV) should exceed 10^7 cm^-2 s^-1 for the synchrotron emission intensity in the meridian plane to exceed the cosmic noise level by 0.1 dB (the riometer sensitivity threshold).

  17. Real-time numerical shake prediction and updating for earthquake early warning

    Science.gov (United States)

    Wang, Tianyun; Jin, Xing; Wei, Yongxiang; Huang, Yandan

    2017-12-01

    Ground motion prediction is important for earthquake early warning systems, because the region's peak ground motion indicates the potential disaster. In order to predict the peak ground motion quickly and precisely with limited station wave records, we propose a real-time numerical shake prediction and updating method. Our method first predicts the ground motion based on a ground motion prediction equation after P-wave detection at several stations; this is denoted the initial prediction. In order to correct the error of the initial prediction, an updating scheme based on real-time simulation of wave propagation is designed. A data assimilation technique is incorporated to predict the distribution of seismic wave energy precisely. Radiative transfer theory and Monte Carlo simulation are used for modeling wave propagation in 2-D space, and the peak ground motion is calculated as quickly as possible. Our method has the potential to predict the shakemap, so that the potential disaster can be predicted before the real disaster happens. The 2008 MS 8.0 Wenchuan earthquake is studied as an example to show the validity of the proposed method.
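
    The "initial prediction" step amounts to evaluating a ground motion prediction equation on a grid around the source; the sketch below uses a generic attenuation form log10(PGA) = a + b*M - c*log10(R + d) with placeholder coefficients, not the equation or parameters used in the paper.

        import numpy as np

        def initial_pga(magnitude, dist_km, a=-2.5, b=0.5, c=1.3, d=10.0):
            # Generic attenuation form; the coefficients are placeholders.
            return 10.0 ** (a + b * magnitude - c * np.log10(dist_km + d))

        # Coarse prediction grid around a hypothetical epicentre (lon, lat).
        epi_lon, epi_lat = 103.3, 31.0
        lons, lats = np.meshgrid(np.linspace(102, 105, 31), np.linspace(30, 33, 31))
        dx = (lons - epi_lon) * np.cos(np.radians(lats))
        dist_km = 111.0 * np.hypot(dx, lats - epi_lat)

        pga = initial_pga(8.0, dist_km)            # initial PGA map [g]
        print(pga.max(), pga.mean())               # later corrected by the updating scheme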

  18. Communication during copulation in the sex-role reversed wolf spider Allocosa brasiliensis: Female shakes for soliciting new ejaculations?

    Science.gov (United States)

    Garcia Diaz, Virginia; Aisenberg, Anita; Peretti, Alfredo V

    2015-07-01

    Traditional studies on sexual communication have focused on the exchange of signals during courtship. However, communication between the sexes can also occur during or after copulation. Allocosa brasiliensis is a wolf spider that shows a reversal of typical sex roles and of the usual sexual size dimorphism expected for spiders. Females are smaller than males, and they are the roving sex that initiates courtship. Occasional previous observations suggested that females perform body shaking behaviors during copulation. Our objective was to analyze whether female body shaking is associated with male copulatory behavior in A. brasiliensis, and to determine whether this female behavior has a communicatory function in this species. For that purpose, we performed a fine-scaled analysis of fifteen copulations under laboratory conditions. We video-recorded all trials and looked for associations between female and male copulatory behaviors. The significant difference between the time to ejaculation before and after female shaking, in favor of a subsequent ejaculation, is analyzed. We discuss whether shaking could act as a signal to accelerate and motivate palpal insertion and ejaculation, and/or inhibit male cannibalistic tendencies in this species. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Building a Communication, Education, an Outreach Program for the ShakeAlert National Earthquake Early Warning Program - Recommendations for Public Alerts Via Cell Phones

    Science.gov (United States)

    DeGroot, R. M.; Long, K.; Strauss, J. A.

    2017-12-01

    The United States Geological Survey (USGS) and its partners are developing the ShakeAlert Earthquake Early Warning System for the West Coast of the United States. To be an integral part of successful implementation, ShakeAlert engagement programs and materials must integrate with and leverage broader earthquake risk programs. New methods and products for dissemination must be multidisciplinary, cost effective, and consistent with existing hazards education and communication efforts. The ShakeAlert Joint Committee for Communication, Education, and Outreach (JCCEO) is identifying, developing, and cultivating partnerships with ShakeAlert stakeholders including Federal, State, and academic partners, private companies, policy makers, and local organizations. Efforts include developing materials, methods for delivery, and reaching stakeholders with information on ShakeAlert, earthquake preparedness, and emergency protective actions. It is essential to develop standards to ensure that information communicated via the alerts is consistent across the public and private sectors and to achieve a common understanding of what actions users should take when they receive a ShakeAlert warning. In February 2017, the JCCEO convened the Warning Message Focus Group (WMFG) to provide findings and recommendations to the Alliance for Telecommunications Industry Solutions on the use of earthquake early warning message content standards for public alerts via cell phones. The WMFG represents communications, education, and outreach stakeholders from various sectors including ShakeAlert regional coordinators, industry, emergency managers, and subject matter experts from the social sciences. The group's knowledge was combined with an in-depth literature review to ensure that all groups who could receive the message were taken into account. The USGS and the participating states and agencies acknowledge that the implementation of ShakeAlert is a collective effort requiring the participation of hundreds of

  20. Experimental comparison of phase retrieval methods which use intensity distribution at different planes

    International Nuclear Information System (INIS)

    Shevkunov, I A; Petrov, N V

    2014-01-01

    The performance of three phase retrieval methods that use spatial intensity distributions was investigated for the task of reconstructing the amplitude characteristics of a test object. These methods differ both in their mathematical models and in the order in which the iterations are executed. The single-beam multiple-intensity reconstruction method showed the best performance in terms of reconstruction quality and time consumption.
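
    For orientation, a single-beam multiple-intensity style reconstruction typically cycles through the measured planes, propagating the current field and replacing its amplitude with the measured one at each plane while keeping the phase; the sketch below follows that pattern with angular-spectrum propagation and should be read as an assumed illustration, not the authors' exact algorithms or parameters.

        import numpy as np

        def angular_spectrum(field, dz, wavelength, pixel):
            # Propagate a complex field over a distance dz with the angular spectrum method.
            n = field.shape[0]
            fx = np.fft.fftfreq(n, d=pixel)
            FX, FY = np.meshgrid(fx, fx)
            kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / wavelength**2 - FX**2 - FY**2))
            return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

        def multi_plane_retrieval(amplitudes, dz, wavelength, pixel, iterations=50):
            # `amplitudes` is a list of sqrt(intensity) images recorded at planes spaced by dz.
            field = amplitudes[0].astype(complex)            # start with zero phase
            for _ in range(iterations):
                for k in range(1, len(amplitudes)):
                    field = angular_spectrum(field, dz, wavelength, pixel)
                    field = amplitudes[k] * np.exp(1j * np.angle(field))   # keep phase, enforce amplitude
                field = angular_spectrum(field, -(len(amplitudes) - 1) * dz, wavelength, pixel)
                field = amplitudes[0] * np.exp(1j * np.angle(field))
            return field

        # Example call with hypothetical parameters:
        # field0 = multi_plane_retrieval([np.sqrt(I0), np.sqrt(I1), np.sqrt(I2)],
        #                                dz=5e-3, wavelength=532e-9, pixel=6.5e-6)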

  1. Technical implementation plan for the ShakeAlert production system: an Earthquake Early Warning system for the West Coast of the United States

    Science.gov (United States)

    Given, Douglas D.; Cochran, Elizabeth S.; Heaton, Thomas; Hauksson, Egill; Allen, Richard; Hellweg, Peggy; Vidale, John; Bodin, Paul

    2014-01-01

    Earthquake Early Warning (EEW) systems can provide as much as tens of seconds of warning to people and automated systems before strong shaking arrives. The United States Geological Survey (USGS) and its partners are developing such an EEW system, called ShakeAlert, for the West Coast of the United States. This document describes the technical implementation of that system, which leverages existing stations and infrastructure of the Advanced National Seismic System (ANSS) regional networks to achieve this new capability. While significant progress has been made in developing the ShakeAlert early warning system, improved robustness of each component of the system and additional testing and certification are needed for the system to be reliable enough to issue public alerts. Major components of the system include dense networks of ground motion sensors, telecommunications from those sensors to central processing systems, algorithms for event detection and alert creation, and distribution systems to alert users. Capital investment costs for a West Coast EEW system are projected to be $38.3M, with additional annual maintenance and operations totaling $16.1M—in addition to current ANSS expenditures for earthquake monitoring. An EEW system is complementary to, but does not replace, other strategies to mitigate earthquake losses. The system has limitations: false and missed alerts are possible, and the area very near to an earthquake epicenter may receive little or no warning. However, such an EEW system would save lives, reduce injuries and damage, and improve community resilience by reducing longer-term economic losses for both public and private entities.

  2. Shaking of reinforced concrete structures subjected to transient dynamic analysis

    International Nuclear Information System (INIS)

    Rouzaud, Christophe

    2015-01-01

    In the design of nuclear engineering structures, security and safety are a crucial aspect. Civil engineering design and the qualification of materials for dynamic loads must consider the accelerations they undergo. These accelerations may combine seismic activity with the shaking induced by an aircraft impact, which has a higher cut-off frequency. Current methodologies for assessing this type of shock are based on transient analyses using the classical finite element method with explicit numerical schemes, or on projection onto a modal basis, which is often linear. In both cases, representing medium-frequency content in a meaningful way requires a mesh refinement that is hardly compatible with the size of civil engineering structural models. In order to extend the industrial methodologies in use and to allow a better representation of the structural behavior in the medium-frequency range, an approach has been used that couples a temporal, non-linear analysis of the impact area with a frequency-domain treatment of the shaking based on the VTCR (Variational Theory of Complex Rays). The aim is to exploit the computational efficiency of the implemented strategy, including the medium-frequency range, to describe the response of nuclear structures to aircraft impact. (author)

  3. Shaking Table Tests of Curved Bridge considering Bearing Friction Sliding Isolation

    Directory of Open Access Journals (Sweden)

    Lei Yan

    2016-01-01

    Full Text Available To address the severe damage to curved bridges in earthquakes caused by excessive forces in the fixed bearings and piers, a new seismic design method for curved bridges considering bearing friction sliding isolation is proposed in this paper. A seismic model bridge and an isolation model bridge with a similarity ratio of 1/20 were built, and comparative shaking table tests were conducted. The experimental results show that the isolation model curved bridge suffered less seismic damage than the seismic model curved bridge. The fundamental frequencies of the seismic model bridge and the isolation model bridge decreased, and the damping ratio increased, with increasing seismic intensity. Compared with the seismic curved bridge, the maximum reduction rates of peak acceleration along the radial and tangential directions at the top of the pier of the isolation model curved bridge were 47.3% and 55.5%, respectively, and the maximum reduction rate of the peak strain at the bottom of the pier of the isolation model curved bridge was 43.4%. For the isolation model curved bridge, the maximum reduction rate of peak acceleration at the top of the pier was 24.6% compared with that at the bottom of the pier. The study results can provide an experimental basis for the seismic design of curved bridges.

  4. An Overview of the Great Puerto Rico ShakeOut 2012

    Science.gov (United States)

    Gómez, G.; Soto-Cordero, L.; Huérfano-Moreno, V.; Ramos-Gómez, W.; De La Matta, M.

    2012-12-01

    With a population of 4 million, Puerto Rico will this year, for the first time, hold an island-wide earthquake drill following the Great California ShakeOut model. Most of our population has never experienced a large earthquake, since our last significant event occurred in 1918, and is not adequately prepared to respond to sudden ground movement. During the moderate-size earthquakes (M5.2-5.8) that have been felt in Puerto Rico since 2010, and despite the education efforts of the Puerto Rico Seismic Network, the general public's reaction was inappropriate, occasionally putting themselves and others at risk. Our overarching goal for the Great Puerto Rico ShakeOut is to help develop seismic awareness and preparedness in our communities. In addition, our main objectives include teaching the public to remain calm and act quickly and appropriately during a seismic event, identifying and correcting potential hazards that may cause injuries, and developing or updating mitigation plans for home, the workplace and/or school. We are also taking this opportunity to clarify misconceptions about other methods of protection (e.g. the triangle of life) and about warning equipment and systems that have no sound scientific or applicable basis for our country. We will present an overview of the accomplishments of our earthquake drill and the different strategies we are using, such as the internet, social media and collaboration with state government agencies and professional groups, to reach groups of diverse ages and educational levels and to promote their participation. One of our main target groups this year is school students, since their experience can have a direct and positive impact on their families. The drill webpage was developed in Spanish and English, as were our promotional and educational materials. As this is the first time a Spanish-speaking country has coordinated a ShakeOut exercise, we hope our experience and the materials we are developing could be of use and benefit to

  5. Curved-straight neutron guide system with uniform spatial intensity distribution

    International Nuclear Information System (INIS)

    Mildner, D.F.R.; Cook, J.C.

    2008-01-01

    The spatial intensity distribution of neutrons emerging from a curved guide is asymmetric, and straight guide sections are sometimes appended to curved guides to make the intensity distribution more nearly uniform. For idealized uniform illumination and in the perfect reflectivity approximation, the spatial-angular acceptance at the exit of the combination can be made exactly uniform for a range of long wavelengths by using a sufficiently long straight section, together with a curved guide whose outer wall coating has a critical angle slightly greater than those of the other guide walls. We refer to this as a 'phase space tailoring guide' where the coatings on the inner wall and straight section are used to define the required divergence at the end of the guide. Increasing the critical angle of the outer wall of the curved section reduces the characteristic wavelength of the curved guide as well as the wavelength at which ideal uniformity can be obtained. The outer wall coating need only be of sufficiently high critical angle to fill the transmittable phase space area of the straight guide uniformly to adequately short wavelength

  6. Ground motions from the 2015 Mw 7.8 Gorkha, Nepal, earthquake constrained by a detailed assessment of macroseismic data

    Science.gov (United States)

    Martin, Stacey; Hough, Susan E.; Hung, Charleen

    2015-01-01

    To augment limited instrumental recordings of the Mw 7.8 Gorkha, Nepal, earthquake on 25 April 2015 (Nepali calendar: 12 Baisakh 2072, Bikram Samvat), we collected 3831 detailed media and first-person accounts of macroseismic effects that include sufficiently detailed information to assign intensities. The resulting intensity map reveals the distribution of shaking within and outside of Nepal, with the key result that shaking intensities throughout the near-field region only exceeded intensity 8 on the 1998 European Macroseismic Scale (EMS-98) in rare instances. Within the Kathmandu Valley, intensities were generally 6–7 EMS. This surprising (and fortunate) result can be explained by the nature of the mainshock ground motions, which were dominated by energy at periods significantly longer than the resonant periods of vernacular structures throughout the Kathmandu Valley. Outside of the Kathmandu Valley, intensities were also generally lower than 8 EMS, but the earthquake took a heavy toll on a number of remote villages, where many especially vulnerable masonry houses collapsed catastrophically in 7–8 EMS shaking. We further reconsider intensities from the 1833 earthquake sequence and conclude that it occurred on the same fault segment as the Gorkha earthquake.

  7. Recovery of PET from packaging plastics mixtures by wet shaking table.

    Science.gov (United States)

    Carvalho, M T; Agante, E; Durão, F

    2007-01-01

    Recycling requires the separation of materials appearing in a mass of wastes of heterogeneous composition and characteristics, into single, almost pure, component/material flows. The separation of materials (e.g., some types of plastics) with similar physical properties (e.g., specific gravity) is often accomplished by human sorting. This is the case of the separation of packaging plastics in municipal solid wastes (MSW). The low cost of virgin plastics and low value of recycled plastics necessitate the utilization of low cost techniques and processes in the recycling of packaging plastics. An experimental study was conducted to evaluate the feasibility of production of a PET product, cleaned from PVC and PS, using a wet shaking table. The wet shaking table is an environmentally friendly process, widely used to separate minerals, which has low capital and operational costs. Some operational variables of the equipment, as well as different feed characteristics, were considered. The results show that the separation of these plastics is feasible although, similarly to the mineral field, in somewhat complex flow sheets.

  8. Shaking table tests of two different reinforcement techniques using polymeric grids on an asymmetric limestone full-scaled structure

    OpenAIRE

    Bairrão, R.

    2009-01-01

    This paper describes the shaking table tests, and their main results, of an asymmetric limestone masonry building, under different reinforcement conditions. The work was performed in the aim of the project “Enhancing Seismic Resistance and Durability of Natural Masonry Stone” for User Group 3 of the European Consortium of Laboratories for Earthquake and Dynamic Experimental Research (ECOLEADER). The experimental program was performed using the LNEC 3D shaking table. The design of the struc...

  9. Proposed shake table studies for NAPP containment

    International Nuclear Information System (INIS)

    Akolkar, P.M.; Khuddus, M.A.

    1975-01-01

    The proposal for shake table studies on a model of the containment structure of the Narora Atomic Power Project is discussed. The physical characteristics, such as the dimensions, the connection details between the containment and the internal structure, and the dynamic interaction between the two, are described. The dynamic scale factors obtained through similitude requirements and dimensional analysis are presented, and the modelling aspects and the choice of model material and scale are discussed. The proposed types of tests and the necessary measurements and instrumentation are outlined. The limitations imposed by similitude requirements on model studies are brought out, and the usefulness of the results of the proposed tests for the dynamic design of the containment is covered. (author)

  10. Patient Engagement: Time to Shake the Foundations.

    Science.gov (United States)

    Thompson, Leslee

    2015-01-01

    Something big is happening in healthcare. It's not the new Apple Watch, 3D printing or the advent of personalized medicine. It's people power. And, it is starting to shake up the very foundation on which healthcare systems around the world have been built. Healthcare professionals and hospitals are iconic features on a healthcare landscape that has been purpose-built with castles, moats and defence artillery. Turf protection, often under the guise of "patient protection," has become so ingrained in the way things are that few recognize what it has become. Fooks et al. step gently into this somewhat dangerous territory for "insiders" of the system to tread; yet in my view, they do not go far enough.

  11. The training intensity distribution among well-trained and elite endurance athletes

    Science.gov (United States)

    Stöggl, Thomas L.; Sperlich, Billy

    2015-01-01

    Researchers have retrospectively analyzed the training intensity distribution (TID) of nationally and internationally competitive athletes in different endurance disciplines to determine the optimal volume and intensity for maximal adaptation. The majority of studies present a “pyramidal” TID with a high proportion of high volume, low intensity training (HVLIT). Some world-class athletes appear to adopt a so-called “polarized” TID (i.e., significant % of HVLIT and high-intensity training) during certain phases of the season. However, emerging prospective randomized controlled studies have demonstrated superior responses of variables related to endurance when applying a polarized TID in well-trained and recreational individuals when compared with a TID that emphasizes HVLIT or threshold training. The aims of the present review are to: (1) summarize the main responses of retrospective and prospective studies exploring TID; (2) provide a systematic overview on TIDs during preparation, pre-competition, and competition phases in different endurance disciplines and performance levels; (3) address whether one TID has demonstrated greater efficacy than another; and (4) highlight research gaps in an effort to direct future scientific studies. PMID:26578968

  12. A reliable simultaneous representation of seismic hazard and of ground shaking recurrence

    Science.gov (United States)

    Peresan, A.; Panza, G. F.; Magrin, A.; Vaccari, F.

    2015-12-01

    Different earthquake hazard maps may be appropriate for different purposes - such as emergency management, insurance and engineering design. Accounting for the lower occurrence rate of larger sporadic earthquakes may make it possible to formulate cost-effective policies in some specific applications, provided that statistically sound recurrence estimates are used, which is not typically the case for PSHA (Probabilistic Seismic Hazard Assessment). We illustrate the procedure for associating the expected ground motions from Neo-deterministic Seismic Hazard Assessment (NDSHA) with an estimate of their recurrence. Neo-deterministic refers to a scenario-based approach, which allows for the construction of a broad range of earthquake scenarios via full waveform modeling. From the synthetic seismograms, estimates of peak ground acceleration, velocity and displacement, or any other parameter relevant to seismic engineering, can be extracted. NDSHA, in its standard form, defines the hazard computed from a wide set of scenario earthquakes (including the largest deterministically or historically defined credible earthquake, MCE) and does not supply the frequency of occurrence of the expected ground shaking. A recent enhanced variant of NDSHA that reliably accounts for recurrence has been developed and is applied to the Italian territory. The characterization of the frequency-magnitude relation can be performed by any statistically sound, data-supported method (e.g. a multi-scale seismicity model), so that a recurrence estimate is associated with each of the pertinent sources. In this way a standard NDSHA map of ground shaking is obtained simultaneously with the map of the corresponding recurrences. The introduction of recurrence estimates into NDSHA naturally allows for the generation of ground shaking maps at specified return periods. This permits a straightforward comparison between NDSHA and PSHA maps.

  13. Seismic Fragility Assessment of an Isolated Multipylon Cable-Stayed Bridge Using Shaking Table Tests

    Directory of Open Access Journals (Sweden)

    Yutao Pang

    2017-01-01

    Full Text Available In recent decades, cable-stayed bridges have been widely built around the world due to their appealing aesthetics and their efficient and fast mode of construction. Numerous studies have concluded that cable-stayed bridges are sensitive to earthquakes because they possess low damping characteristics and high flexibility. Moreover, cable-stayed bridges need to remain operable, especially in moderate-to-severe earthquakes. The provisions implemented in seismic codes allow adequate seismic performance to be obtained for the cable-stayed bridge components; nevertheless, they do not provide definite yet reliable rules to protect the bridge. To date, very few experimental tests have been carried out on the seismic fragility analysis of cable-stayed bridges, which is the basis of performance-based analyses. The present paper proposes a method to derive the seismic fragility curves of a multipylon cable-stayed bridge through shaking table tests. Toward this aim, a 1/20 scale three-dimensional model of a 22.5 m cable-stayed bridge in China is constructed and tested dynamically using the shaking table facility of Tongji University. The cable-stayed bridge contains three pylons and one side pier. The outcomes of the comprehensive shaking table tests carried out on the cable-stayed bridge have been utilized to derive fragility curves based on a systemic approach.

  14. Visualization of flow patterns in shaking vessels with various geometry; Shushu no kika keijo wo motsu yodo kakuhan sonai no ryudo jotai no kashika

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Y.; Hiraoka, S.; Tada, Y.; Ue, T. [Nagoya Institute of Technology, Nagoya (Japan)]; Koh, S. [Toyo Engineering Corp., Tokyo (Japan)]; Lee, Y. [Keimyung University (Korea, Republic of)]

    1996-03-10

    The flow patterns in shaking vessels with various geometries were visualized with a tracer method using aluminum powder. The spherical and conical vessels were as effective for shake mixing as the cylindrical vessel, because their circular cross sections develop the rotational flow. Neither a rectangular vessel nor a cylindrical vessel with baffles should be used for shake mixing, because rotational flows do not develop in these vessels. 2 refs., 6 figs.

  15. A Novel Intensive Distribution Logistics Network Design and Profit Allocation Problem considering Sharing Economy

    Directory of Open Access Journals (Sweden)

    Mi Gan

    2018-01-01

    Full Text Available The rapid growth of logistics distribution highlights several problems in China, including the imperfect infrastructure of the logistics distribution network, the serious shortage of distribution capacity of each individual enterprise, and the high cost of distribution. While the development of the sharing economy makes it possible to integrate society-wide logistics resources, big data technology can capture customers' logistics demand accurately by analyzing their logistics distribution preferences, which contributes to the integration and optimization of the whole set of logistics resources. This paper proposes an intensive distribution logistics network that takes the sharing economy into account, assuming that all social logistics suppliers form a strategic alliance and that idle individual logistics resources are also used to meet distribution needs. Customer shopping behavior is analyzed with big data technology to determine each customer's logistics preference, which is classified as high speed, low cost, or low pollution; the corresponding objective functions are then constructed for the different preferences. On this basis we obtain the intensive distribution logistics network model and solve it with a heuristic algorithm. Furthermore, this paper analyzes the mechanism of profit allocation among the participants in the distribution network and puts forward an improved interval Shapley value method considering both satisfaction and contribution, with a case study verifying the feasibility and effectiveness of the model. The results show that, compared with the traditional Shapley method, the allocation coefficients calculated by the improved model are fairer, improve stakeholder satisfaction, and promote the sustainable development of the alliance.
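
    The allocation scheme described above builds on the classical Shapley value, which averages each member's marginal contribution over all orders in which the alliance could form; the paper's interval and satisfaction adjustments are not reproduced here. A minimal sketch with a hypothetical three-carrier cost-saving game:

        from itertools import permutations

        def shapley_values(players, value):
            """Exact Shapley values: average each player's marginal contribution
            over all orderings; value() maps a frozenset coalition to its worth."""
            phi = {p: 0.0 for p in players}
            orders = list(permutations(players))
            for order in orders:
                coalition = frozenset()
                for p in order:
                    phi[p] += value(coalition | {p}) - value(coalition)
                    coalition = coalition | {p}
            n = len(orders)
            return {p: v / n for p, v in phi.items()}

        # hypothetical cost savings (arbitrary units) achieved by logistics alliances
        worth = {frozenset(): 0, frozenset('A'): 10, frozenset('B'): 12, frozenset('C'): 6,
                 frozenset('AB'): 30, frozenset('AC'): 20, frozenset('BC'): 22,
                 frozenset('ABC'): 42}
        print(shapley_values(['A', 'B', 'C'], lambda s: worth[frozenset(s)]))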

  16. X-ray absorption intensity at high-energy region

    International Nuclear Information System (INIS)

    Fujikawa, Takashi; Kaneko, Katsumi

    2012-01-01

    We theoretically discuss X-ray absorption intensity in the high-energy region far from the deepest core threshold to explain the morphology-dependent mass attenuation coefficient of some carbon systems: carbon nanotubes (CNTs), highly oriented pyrolytic graphite (HOPG) and fullerenes (C60). The present theoretical approach is based on many-body X-ray absorption theory including the intrinsic losses (shake-up losses). In the high-energy region the absorption coefficient has a correction term dependent on solid-state effects, given in terms of the polarization part of the screened Coulomb interaction W_p. We also discuss the tail of the valence-band X-ray absorption intensity. In the carbon systems, the C 2s contribution has some influence on the attenuation coefficient even in the high-energy region at 20 keV.

  17. Computed versus measured response of HDR reactor building in large scale shaking tests

    International Nuclear Information System (INIS)

    Werkle, H.; Waas, G.

    1987-01-01

    The earthquake-resistant design of NPP structures and their installations is commonly based on linear analysis methods. Nonlinear effects, which may occur during strong earthquakes, are approximately accounted for in the analysis by adjusting the structural damping values. Experimental investigations of nonlinear effects were performed with an extremely heavy shaker at the decommissioned HDR reactor building in West Germany. The tests were directed by KfK (Nuclear Research Center Karlsruhe, West Germany) and supported by several companies and institutes from West Germany, Switzerland and the USA. The objective was the dynamic response behaviour of the structure, piping and components under strong earthquake-like shaking, including nonlinear effects. This paper presents some results of safety analyses and measurements, which were performed prior to and during the test series. It was intended to shake the building up to a level at which only a marginal safety margin against global structural failure was left.

  18. Development of synchronized control method for shaking table with booster device. Verification of the capabilities based on both real facility and numerical simulator

    International Nuclear Information System (INIS)

    Kajii, Shin-ichirou; Yasuda, Chiaki; Yamashita, Toshio; Abe, Hiroshi; Kanki, Hiroshi

    2004-01-01

    In the seismic design of nuclear power plants, it has recently been considered to use probabilistic methods in addition to deterministic methods. The former approach is called Seismic Probabilistic Safety Assessment (Seismic PSA). A seismic PSA of some components of a nuclear power plant using a shaking table requires test conditions with acceleration levels as high as those of actual severe events. However, it may be difficult to achieve such conditions with a conventional hydraulic shaking table alone. Therefore, we have been planning a test method in which a conventional shaking table is combined with an additional shaking device called a booster. This paper describes the verification test of synchronized control between a conventional shaking table and a booster device. (author)

  19. Distributed modelling of shallow landslides triggered by intense rainfall

    Directory of Open Access Journals (Sweden)

    G. B. Crosta

    2003-01-01

    Full Text Available Hazard assessment of shallow landslides represents an important aspect of land management in mountainous areas. Among all the methods proposed in the literature, physically based methods are the only ones that explicitly include the dynamic factors that control landslide triggering (rainfall pattern, land use). For this reason, they allow forecasting both the temporal and the spatial distribution of shallow landslides. Physically based methods for shallow landslides couple the infinite slope stability analysis with hydrological models. Three different grid-based distributed hydrological models are presented in this paper: a steady-state model, a transient "piston-flow" wetting front model, and a transient diffusive model. A comparative test of these models was performed by simulating the landslides that occurred during a rainfall event (27–28 June 1997) that triggered hundreds of shallow landslides within Lecco province (central Southern Alps, Italy). In order to test the potential of a completely distributed model for rainfall-triggered landslides, radar-detected rainfall intensity was used. A new procedure for quantitative evaluation of distributed model performance is presented and used in this paper. The diffusive model proves to be the best for simulating shallow landslide triggering after a rainfall event like the one analysed here. Finally, the radar data available for the June 1997 event greatly improved the simulation; in particular, they helped explain the non-uniform distribution of landslides within the study area.
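
    All three hydrological models ultimately feed a pore-pressure value into the same grid-cell infinite slope stability check, so a minimal sketch of that common stability step is shown here (parameter values are illustrative; the hydrological models themselves are not reproduced):

        import numpy as np

        def infinite_slope_fs(c_eff, phi_deg, gamma, z, beta_deg, pore_pressure):
            """Factor of safety of an infinite slope (per-unit-area form).

            c_eff         : effective cohesion (kPa)
            phi_deg       : effective friction angle (deg)
            gamma         : soil unit weight (kN/m^3)
            z             : depth of the potential slip surface (m)
            beta_deg      : slope angle (deg)
            pore_pressure : pore-water pressure on the slip surface (kPa),
                            supplied by the chosen hydrological model
            """
            beta, phi = np.radians(beta_deg), np.radians(phi_deg)
            resisting = c_eff + (gamma * z * np.cos(beta) ** 2 - pore_pressure) * np.tan(phi)
            driving = gamma * z * np.sin(beta) * np.cos(beta)
            return resisting / driving

        # hypothetical cell: pore pressure from a perched water table of relative depth m
        m = 0.6
        u = 9.81 * m * 2.0 * np.cos(np.radians(35)) ** 2   # kPa
        print(infinite_slope_fs(c_eff=2.0, phi_deg=33.0, gamma=18.0, z=2.0,
                                beta_deg=35.0, pore_pressure=u))   # FS < 1 -> triggering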

  20. A study on seismic behavior of pile foundations of bridge abutment on liquefiable ground through shaking table tests

    Science.gov (United States)

    Nakata, Mitsuhiko; Tanimoto, Shunsuke; Ishida, Shuichi; Ohsumi, Michio; Hoshikuma, Jun-ichi

    2017-10-01

    There is a risk that bridge foundations will be damaged by liquefaction-induced lateral spreading of the ground. Once bridge foundations have been damaged, restoration takes a long time. Therefore, it is important to assess appropriately the seismic behavior of foundations on liquefiable ground. In this study, shaking table tests of models at a scale of 1/10 were conducted on the large-scale shaking table of the Public Works Research Institute, Japan, to investigate the seismic behavior of a pile-supported bridge abutment on liquefiable ground. The shaking table tests were conducted for three types of model. Two are models of an existing bridge that was built without design provisions for liquefaction, and the other is a model of a bridge designed according to the current Japanese design specifications for highway bridges. As a result, the bending strains of the piles of the abutment designed according to the current specifications were smaller than those of the existing bridge.

  1. Impact of distribution intensity on perceived quality, brand awareness and brand loyalty - structural model

    Directory of Open Access Journals (Sweden)

    Ivan-Damir Anić

    2008-12-01

    Full Text Available The purpose of this empirical study was to determine the impacts of distribution intensity on perceived quality and brand awareness, and to analyze the effects of perceived quality and brand awareness on brand loyalty. A structural equation model was used to identify the size and the direction of proposed relationships. The model was tested on a sample of 956 students using three brand categories in the manufacturing industry and three brand categories in the service industry. The proposed hypotheses were supported by the model. The results show that distribution intensity is positively related to perceived quality and brand awareness. Moreover, perceived quality and brand awareness were shown to be significant and positive predictors of brand loyalty. Managerial implications are discussed in the paper. The findings of this study could be of special interest to managers, professionals and those doing research in the field of distribution and brand management.

  2. A microfluidic platform for the rapid determination of distribution coefficients by gravity assisted droplet-based liquid-liquid extraction

    DEFF Research Database (Denmark)

    Poulsen, Carl Esben; Wootton, Robert C. R.; Wolff, Anders

    2015-01-01

    The determination of pharmacokinetic properties of drugs, such as the distribution coefficient, D, is a crucial measurement in pharmaceutical research. Surprisingly, the conventional (gold standard) technique used for D measurements, the shake-flask method, is antiquated and unsuitable...... for the testing of valuable and scarce drug candidates. Herein we present a simple microfluidic platform for the determination of distribution coefficients using droplet-based liquid-liquid extraction. For simplicity, this platform makes use of gravity to enable phase separation for analysis and is 48 times...... the apparent acid dissociation constant, pK', as a proxy for inter-system comparison. Our platform determines a pK' value of 7.24 ± 0.15, compared to 7.25 ± 0.58 for the shake-flask method in our hands and 7.21 for the shake-flask method in the literature. Devices are fabricated using injection moulding, the batch

  3. Analysis of distribution of PSL intensity recorded in imaging plate

    International Nuclear Information System (INIS)

    Oda, Keiji; Tsukahara, Kazutaka; Tada, Hidenori; Yamauchi, Tomoya

    2006-01-01

    Supplementary experiments and theoretical considerations have been performed regarding a new method for particle identification with an imaging plate, which was proposed in a previous paper. The imaging plate was exposed to 137Cs γ-rays, 2 MeV protons accelerated by a tandem Van de Graaff, and X-rays emitted from a tube operated at 20-70 kV, as well as α- and β-rays. The frequency distribution of PSL intensity in a pixel of 100 μm x 100 μm was measured and the standard deviation was obtained by fitting to a Gaussian. It was confirmed that the relative standard deviation decreased with the average PSL intensity for every radiation species and that the curves were roughly divided into four groups: α-rays, protons, β-rays and photons. In the second step, these data were analyzed by plotting the square of the relative standard deviation against the average PSL intensity on a full-log scale, where the relation should be expressed by a straight line with a slope of -1 provided that the deviation is dominated only by statistical fluctuation. The data for α- and β-rays deviated from a straight line and approached saturated values as the average PSL intensity increased. This saturation was considered to be caused by inhomogeneity in the source intensity. It was also pointed out that the value of the intercept on the full-log plot carries important information about the PSL reading efficiency, one of the characteristic parameters of the imaging plate. (author)
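
    The slope test described above is straightforward to reproduce: if the pixel-to-pixel spread were purely statistical, the squared relative standard deviation would fall on a line of slope -1 against the mean PSL on a log-log plot, and the intercept would reflect the reading efficiency. A minimal sketch with hypothetical values:

        import numpy as np

        # mean PSL per pixel and relative standard deviation for one radiation species
        # (hypothetical values for illustration only)
        psl_mean = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
        rel_sd = np.array([0.42, 0.30, 0.21, 0.14, 0.10, 0.072])

        # for pure counting statistics, (sigma/mean)^2 should scale as 1/PSL,
        # i.e. a straight line of slope -1 on a full-log plot
        slope, intercept = np.polyfit(np.log10(psl_mean), np.log10(rel_sd ** 2), 1)
        print(f"slope = {slope:.2f} (-1 expected for pure counting statistics)")
        print(f"intercept = {intercept:.2f} (related to the PSL reading efficiency)")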

  4. Estimation of Stresses in a Dry Sand Layer Tested on Shaking Table

    Science.gov (United States)

    Sawicki, Andrzej; Kulczykowski, Marek; Jankowski, Robert

    2012-12-01

    Theoretical analysis of shaking table experiments, simulating the earthquake response of a dry sand layer, is presented. The aim of such experiments is to study seismic-induced compaction of soil and the resulting settlements. In order to determine the soil compaction, the cyclic stresses and strains should be calculated first. These stresses are caused by the cyclic horizontal acceleration at the base of the soil layer, so it is important to determine the stress field as a function of the base acceleration. This is particularly important for a proper interpretation of shaking table tests, where the base acceleration is controlled but the stresses are hard to measure and can only be deduced. Preliminary experiments have shown that small accelerations do not lead to significant settlements, whilst large accelerations cause some phenomena typical of limit states, including a visible appearance of slip lines. All these problems should be well understood for rational planning of experiments. The analysis of these problems is presented in this paper. First, some heuristic considerations about the dynamics of the experimental system are presented. Then, the analysis of boundary conditions, expressed as resultants of the respective stresses, is shown. A particular form of boundary conditions has been chosen, which satisfies the macroscopic boundary conditions and the equilibrium equations. Then, some considerations are presented in order to obtain a statically admissible stress field, which does not exceed the Coulomb-Mohr yield conditions. Such an approach leads to the determination of the limit base accelerations that do not cause a plastic state in the soil. It is shown that larger accelerations lead to an increase in the lateral stresses, and a respective method, which may replace complex plasticity analyses, is proposed. It is shown that it is the lateral stress coefficient K0 that controls the statically admissible stress field during the shaking table experiments.

  5. Generation of initial kinetic distributions for simulation of long-pulse charged particle beams with high space-charge intensity

    Directory of Open Access Journals (Sweden)

    Steven M. Lund

    2009-11-01

    Full Text Available Self-consistent Vlasov-Poisson simulations of beams with high space-charge intensity often require specification of initial phase-space distributions that reflect properties of a beam that is well adapted to the transport channel—both in terms of low-order rms (envelope properties as well as the higher-order phase-space structure. Here, we first review broad classes of kinetic distributions commonly in use as initial Vlasov distributions in simulations of unbunched or weakly bunched beams with intense space-charge fields including the following: the Kapchinskij-Vladimirskij (KV equilibrium, continuous-focusing equilibria with specific detailed examples, and various nonequilibrium distributions, such as the semi-Gaussian distribution and distributions formed from specified functions of linear-field Courant-Snyder invariants. Important practical details necessary to specify these distributions in terms of standard accelerator inputs are presented in a unified format. Building on this presentation, a new class of approximate initial kinetic distributions are constructed using transformations that preserve linear focusing, single-particle Courant-Snyder invariants to map initial continuous-focusing equilibrium distributions to a form more appropriate for noncontinuous focusing channels. Self-consistent particle-in-cell simulations are employed to show that the approximate initial distributions generated in this manner are better adapted to the focusing channels for beams with high space-charge intensity. This improved capability enables simulations that more precisely probe intrinsic stability properties and machine performance.
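
    Of the distribution classes listed above, the semi-Gaussian is the simplest to construct: uniform charge density inside the transverse envelope ellipse and Gaussian velocity spreads. A minimal sampling sketch (the envelope radii and rms velocities below are placeholders that would normally come from a matched rms envelope solution for the focusing channel):

        import numpy as np

        def semi_gaussian(n, rx, ry, vx_rms, vy_rms, seed=0):
            """Sample a semi-Gaussian beam: uniform density inside the
            transverse ellipse (rx, ry), Gaussian transverse velocities."""
            rng = np.random.default_rng(seed)
            theta = rng.uniform(0.0, 2.0 * np.pi, n)
            r = np.sqrt(rng.uniform(0.0, 1.0, n))   # uniform areal density
            x, y = rx * r * np.cos(theta), ry * r * np.sin(theta)
            vx = rng.normal(0.0, vx_rms, n)
            vy = rng.normal(0.0, vy_rms, n)
            return x, y, vx, vy

        x, y, vx, vy = semi_gaussian(100_000, rx=5e-3, ry=5e-3, vx_rms=1e-5, vy_rms=1e-5)
        print(x.std(), vx.std())   # rms size ~ rx/2, rms velocity ~ vx_rms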

  6. System identification of timber masonry walls using shaking table test

    Science.gov (United States)

    Roy, Timir B.; Guerreiro, Luis; Bagchi, Ashutosh

    2017-04-01

    Dynamic studies are important for the design, repair and rehabilitation of structures. They have played an important role in characterizing the behavior of structures such as bridges, dams and high-rise buildings. There has been substantial development in this area over the last few decades, especially in the field of dynamic identification techniques for structural systems. Frequency Domain Decomposition (FDD) and Time Domain Decomposition are among the most commonly used methods to identify modal parameters such as natural frequency, modal damping and mode shape. The focus of the present research is to study the dynamic characteristics of typical timber masonry walls commonly used in Portugal. For that purpose, a multi-storey structural prototype of such a wall was tested on a seismic shake table at the National Laboratory for Civil Engineering, Portugal (LNEC). Signal processing was performed on the output response collected with accelerometers during the shaking table experiment on the prototype. In the present work, signal processing of the output response, given the input, was carried out in two ways: FDD and Stochastic Subspace Identification (SSI). In order to estimate the values of the modal parameters, algorithms for FDD were formulated and parametric functions for the SSI were computed. Finally, the values estimated by the two methods were compared to assess the accuracy of both techniques.
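
    The core of FDD is a singular value decomposition of the output cross-spectral density matrix: peaks of the first singular value indicate natural frequencies, and the corresponding singular vectors approximate the mode shapes. A minimal sketch (the two-channel synthetic record is purely illustrative, not the LNEC data):

        import numpy as np
        from scipy import signal

        def fdd(acc, fs, nperseg=1024):
            """Frequency Domain Decomposition of output-only accelerations.

            acc : (n_samples, n_channels) acceleration records
            fs  : sampling rate (Hz)
            Returns frequencies, the first singular value of the cross-spectral
            matrix (peaks mark natural frequencies) and mode-shape estimates.
            """
            n_ch = acc.shape[1]
            f, _ = signal.csd(acc[:, 0], acc[:, 0], fs=fs, nperseg=nperseg)
            G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
            for i in range(n_ch):
                for j in range(n_ch):
                    _, G[:, i, j] = signal.csd(acc[:, i], acc[:, j], fs=fs, nperseg=nperseg)
            s1 = np.zeros(len(f))
            shapes = np.zeros((len(f), n_ch), dtype=complex)
            for k in range(len(f)):
                u, s, _ = np.linalg.svd(G[k])
                s1[k], shapes[k] = s[0], u[:, 0]
            return f, s1, shapes

        # two-channel synthetic response dominated by a 3 Hz mode (illustration only)
        fs = 200.0
        t = np.arange(0, 60, 1 / fs)
        rng = np.random.default_rng(1)
        mode = np.sin(2 * np.pi * 3.0 * t)
        acc = np.column_stack([1.0 * mode, 0.6 * mode]) + 0.1 * rng.standard_normal((t.size, 2))
        f, s1, shapes = fdd(acc, fs)
        print(f[np.argmax(s1)])    # approximately 3 Hz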

  7. Calculation and measurement of the intensity distribution of 60Co radiation behind block filters

    International Nuclear Information System (INIS)

    Gerlach, R.; Kranepuhl, H.; Salewski, D.

    1987-01-01

    A method for determining the absorption length in block filters with non-focussing edges is described. It accounts for geometric parameters such as source diameter, source-to-surface distance and the position of the absorber relative to the central ray. The model was checked by intensity measurements. Behind the absorber as well as in the penumbra regions, good agreement between calculated and measured intensity distributions was observed. (author)

  8. Development of 1-D Shake Table Testing Facility for Liquefaction Studies

    Science.gov (United States)

    Unni, Kartha G.; Beena, K. S.; Mahesh, C.

    2018-04-01

    One of the major challenges researchers face in the field of earthquake geotechnical engineering in India is the high cost of laboratory infrastructure. Developing a reliable and low-cost experimental setup is attempted in this research. The paper details the design and development of a uniaxial shake table and the data acquisition system, with accelerometers and pore-water pressure sensors, which can be used for liquefaction studies.

  9. Enzyme controlled glucose auto-delivery for high cell density cultivations in microplates and shake flasks

    Directory of Open Access Journals (Sweden)

    Casteleijn Marco G

    2008-11-01

    Full Text Available Abstract Background Here we describe a novel cultivation method, called EnBase™, or enzyme-based substrate delivery, for the growth of microorganisms at millilitre and sub-millilitre scale, which yields 5 to 20 times higher cell densities compared to standard methods. The novel method can be directly applied in microwell plates and shake flasks without any requirement for additional sensors or liquid supply systems. EnBase is therefore readily applicable to many high-throughput applications, such as DNA production for genome sequencing, optimisation of protein expression, production of proteins for structural genomics, bioprocess development, and screening of enzyme and metagenomic libraries. Results High cell densities with EnBase are obtained by applying the concept of glucose-limited fed-batch cultivation which is commonly used in industrial processes. The major difference of the novel method is that no external glucose feed is required: glucose is released into the growth medium by enzymatic degradation of starch. To cope with the high levels of starch necessary for high cell density cultivation, starch is supplied to the growing culture suspension by continuous diffusion from a storage gel. Our results show that the controlled enzyme-based supply of glucose allows glucose-limited growth to high cell densities of OD600 = 20 to 30 (corresponding to 6 to 9 g l-1 cell dry weight) without the external feed of additional compounds in shake flasks and 96-well plates. The final cell density can be further increased by addition of extra nitrogen during the cultivation. Production of a heterologous triosephosphate isomerase in E. coli BL21(DE3) resulted in a 10 times higher volumetric product yield and a higher ratio of soluble to insoluble product when compared to the conventional production method. Conclusion The novel EnBase method is robust and simple to apply for high cell density cultivation in shake flasks and microwell plates. The

  10. Contribution of the cerebral SPECT in the field of evaluation of the hemodynamic cerebral vascular accident risk in the Limb shaking syndrome; Apport de la TEMP cerebrale dans le cadre de l'evaluation du risque d'AVC hemodynamique dans le Limb Shaking Syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Lauer, V.; Wolff, V.; Marescaux, C. [CHU de Strasbourg, Unite neurovasculaire, service de neurologie, 67 (France)]; Namer, I.J. [CHU de Strasbourg, service de biophysique et medecine nucleaire, 67 - Strasbourg (France)]

    2010-07-01

    Limb shaking syndrome (LSS) is characterized by uncontrollable shaking of the limbs, triggered by standing up or by hyperextension of the neck, occurring in patients with internal carotid stenosis. To investigate the pathophysiology of LSS, we used brain SPECT (ECD or HMPAO SPECT) to measure cerebral perfusion in the supine and standing positions in three patients. (N.C.)

  11. Development of Ultra-Light Composite Material to Build the Platform of a Shaking Table

    Directory of Open Access Journals (Sweden)

    Botero-Jaramillo Eduardo

    2013-10-01

    Full Text Available Based on the developments of the last decades in the area of ultra-light materials, their application in the construction of the platform of the new one-directional hydraulic shaking table, with a capacity of one ton and a frequency range from 0.4 Hz to 4.0 Hz, was proposed for the Geotechnical Laboratory of the Institute of Engineering, UNAM. The aim was to replace the heavy conventional steel platforms used in shaking tables with a composite material based on wood and Kevlar, hence reducing its weight and optimizing the hydraulic equipment capacity available in the laboratory. Accordingly, an experimental investigation was conducted to characterize the stress-strain behavior of the composite material under monotonically increasing load. This research involved the determination of the adequate proportions of the different constituent materials and the manufacturing techniques that best suit the needs and available resources.

  12. Use of deformed intensity distributions for on-line modification of image-guided IMRT to account for interfractional anatomic changes

    International Nuclear Information System (INIS)

    Mohan, Radhe; Zhang Xiaodong; Wang He; Kang Yixiu; Wang Xiaochun; Liu, Helen; Ang, K.; Kuban, Deborah; Dong Lei

    2005-01-01

    Purpose: Recent imaging studies have demonstrated that there can be significant changes in anatomy from day to day and over the course of radiotherapy as a result of daily positioning uncertainties and physiologic and clinical factors. There are a number of strategies to minimize such changes, reduce their impact, or correct for them. Measures to date have included improved immobilization of external and internal anatomy or adjustment of positions based on portal or ultrasound images. Perhaps the most accurate way is to use CT image-guided radiotherapy, for which the possibilities range from simple correction of setup based on daily CT images to on-line near real-time intensity modulated radiotherapy (IMRT) replanning. In addition, there are numerous intermediate possibilities. In this paper, we report the development of one such intermediate method that takes into account anatomic changes by deforming the intensity distributions of each beam based on deformations of anatomy as seen in the beam's-eye-view. Methods and materials: The intensity distribution deformations are computed based on anatomy deformations discerned from the changes in the current image relative to a reference image (e.g., the pretreatment CT scan). First, a reference IMRT plan is generated based on the reference CT image. A new CT image is acquired using an in-room CT for every fraction. The anatomic structure contours are obtained for the new image. (For this article, these contours were manually drawn. When image guided IMRT methods are implemented, anatomic structure contours on subsequent images will likely be obtained with automatic or semiautomatic means. This could be achieved by, for example, first deforming the original CT image to match today's image, and then using the same deformation transformation to map original contours to today's image.) The reference intensity distributions for each beam are then deformed so that the projected geometric relationship within the beam

  13. Magnitude and Spatial Distribution of Impact Intensity Under the Foot Relates to Initial Foot Contact Pattern.

    Science.gov (United States)

    Breine, Bastiaan; Malcolm, Philippe; Segers, Veerle; Gerlo, Joeri; Derie, Rud; Pataky, Todd; Frederick, Edward C; De Clercq, Dirk

    2017-12-01

    In running, foot contact patterns (rear-, mid-, or forefoot contact) influence impact intensity and initial ankle and foot kinematics. The aim of the study was to compare impact intensity and its spatial distribution under the foot between different foot contact patterns. Forty-nine subjects ran at 3.2 m·s-1 over a level runway while ground reaction forces (GRF) and shoe-surface pressures were recorded and the foot contact pattern was determined. A 4-zone foot mask (forefoot, midfoot, medial and lateral rearfoot) assessed the spatial distribution of the vertical GRF under the foot. We calculated the peak vertical instantaneous loading rate of the GRF (VILR) per foot zone as the impact intensity measure. Midfoot contact patterns were shown to have the lowest, and atypical rearfoot contact patterns the highest, impact intensities. The greatest local impact intensity was mainly situated under the rear- and midfoot for the typical rearfoot contact patterns, under the midfoot for the atypical rearfoot contact patterns, and under the mid- and forefoot for the midfoot contact patterns. These findings indicate that different foot contact patterns could benefit from cushioning in different shoe zones.
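
    VILR as used above is the maximum slope of the vertical ground-reaction force during the early stance phase. A minimal sketch of how that metric could be computed for one foot-zone force trace (the signal and 100 ms window below are hypothetical, not the study's data):

        import numpy as np

        def peak_vilr(force, fs, window=(0.0, 0.1)):
            """Peak vertical instantaneous loading rate (N/s): the maximum of the
            time derivative of the vertical GRF within the early stance window."""
            dt = 1.0 / fs
            rate = np.gradient(force, dt)
            i0, i1 = int(window[0] * fs), int(window[1] * fs)
            return rate[i0:i1].max()

        # hypothetical vertical GRF for one foot zone, sampled at 1 kHz
        fs = 1000.0
        t = np.arange(0, 0.25, 1 / fs)
        grf = 800.0 * np.sin(np.pi * t / 0.25) + 300.0 * np.exp(-((t - 0.03) / 0.012) ** 2)
        print(f"peak VILR = {peak_vilr(grf, fs):.0f} N/s")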

  14. Investigation of an He-Ne laser generating a beam with a ring-shaped intensity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Sukhanov, I I; Troitskii, IU V; Iakushkin, S V

    1987-02-01

    The paper examines an He-Ne laser regime with the simultaneous generation of TEM(01) and TEM(10) modes, forming a beam with a ring-shaped intensity distribution with total suppression of the TEM(00) mode. The ratio of the intensity at the ring crest to the intensity at the axis reached a value of 200 and was limited by scattering in the optical components of the resonator. A regime of mutual frequency locking of the TEM(01) and TEM(10) modes was achieved with total spatial coherence of the ring-shaped beam. 14 references.
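
    For reference, the ring-shaped profile arises because each of the two first-order Hermite-Gaussian modes vanishes along one transverse axis; their summed intensity is proportional to r² exp(-2r²/w²), which is zero on axis and peaks on a ring (the measured crest-to-axis ratio of 200 reflects residual scattered light rather than this ideal zero). A short numerical sketch with illustrative units:

        import numpy as np

        # Sum of TEM01 and TEM10 intensities: each amplitude ~ (x or y)*exp(-(x^2+y^2)/w^2),
        # so the total intensity ~ (x^2 + y^2) * exp(-2 r^2 / w^2), a doughnut profile.
        w = 1.0                                   # beam radius parameter (arb. units)
        x = np.linspace(-3, 3, 601)
        X, Y = np.meshgrid(x, x)
        R2 = X ** 2 + Y ** 2
        intensity = R2 * np.exp(-2.0 * R2 / w ** 2)

        r_peak = np.sqrt(R2.flatten()[np.argmax(intensity)])
        print(f"ring crest at r = {r_peak:.2f} w, on-axis intensity = {intensity[300, 300]:.3f}")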

  15. Shaking table testing of a HTGR reactor core, comparison with the results obtained using a nonlinear mathematical model

    International Nuclear Information System (INIS)

    Berriaud, C.; Cebe, E.; Livolant, M.; Buland, P.

    1975-01-01

    Two series of horizontal tests have been performed at Saclay on the VESUVE shaking table: sinusoidal tests and time-history response tests. The sinusoidal tests showed the strongly nonlinear dynamic behavior of the core: the resonant frequency of the core depends on the level of excitation. These phenomena were explained by a computer code based on a lumped-mass nonlinear model. The El Centro time-history displacement at the level of the PCRV was reproduced on the shaking table. The analytical model was applied to this excitation and good agreement was obtained for forces and velocities.

  16. Income Disparities and the Global Distribution of Intensively Farmed Chicken and Pigs.

    Directory of Open Access Journals (Sweden)

    Marius Gilbert

    Full Text Available The rapid transformation of the livestock sector in recent decades brought concerns on its impact on greenhouse gas emissions, disruptions to nitrogen and phosphorous cycles and on land use change, particularly deforestation for production of feed crops. Animal and human health are increasingly interlinked through emerging infectious diseases, zoonoses, and antimicrobial resistance. In many developing countries, the rapidity of change has also had social impacts with increased risk of marginalisation of smallholder farmers. However, both the impacts and benefits of livestock farming often differ between extensive (backyard farming mostly for home-consumption) and intensive, commercial production systems (larger herd or flock size, higher investments in inputs, a tendency towards market-orientation). A density of 10,000 chickens per km2 has different environmental, epidemiological and societal implications if these birds are raised by 1,000 individual households or in a single industrial unit. Here, we introduce a novel relationship that links the national proportion of extensively raised animals to the gross domestic product (GDP) per capita (in purchasing power parity). This relationship is modelled and used together with the global distribution of rural population to disaggregate existing 10 km resolution global maps of chicken and pig distributions into extensive and intensive systems. Our results highlight countries and regions where extensive and intensive chicken and pig production systems are most important. We discuss the sources of uncertainties, the modelling assumptions and ways in which this approach could be developed to forecast future trajectories of intensification.

  17. Income Disparities and the Global Distribution of Intensively Farmed Chicken and Pigs.

    Science.gov (United States)

    Gilbert, Marius; Conchedda, Giulia; Van Boeckel, Thomas P; Cinardi, Giuseppina; Linard, Catherine; Nicolas, Gaëlle; Thanapongtharm, Weerapong; D'Aietti, Laura; Wint, William; Newman, Scott H; Robinson, Timothy P

    2015-01-01

    The rapid transformation of the livestock sector in recent decades brought concerns on its impact on greenhouse gas emissions, disruptions to nitrogen and phosphorous cycles and on land use change, particularly deforestation for production of feed crops. Animal and human health are increasingly interlinked through emerging infectious diseases, zoonoses, and antimicrobial resistance. In many developing countries, the rapidity of change has also had social impacts with increased risk of marginalisation of smallholder farmers. However, both the impacts and benefits of livestock farming often differ between extensive (backyard farming mostly for home-consumption) and intensive, commercial production systems (larger herd or flock size, higher investments in inputs, a tendency towards market-orientation). A density of 10,000 chickens per km2 has different environmental, epidemiological and societal implications if these birds are raised by 1,000 individual households or in a single industrial unit. Here, we introduce a novel relationship that links the national proportion of extensively raised animals to the gross domestic product (GDP) per capita (in purchasing power parity). This relationship is modelled and used together with the global distribution of rural population to disaggregate existing 10 km resolution global maps of chicken and pig distributions into extensive and intensive systems. Our results highlight countries and regions where extensive and intensive chicken and pig production systems are most important. We discuss the sources of uncertainties, the modelling assumptions and ways in which this approach could be developed to forecast future trajectories of intensification.

  18. Income Disparities and the Global Distribution of Intensively Farmed Chicken and Pigs

    Science.gov (United States)

    Gilbert, Marius; Conchedda, Giulia; Van Boeckel, Thomas P.; Cinardi, Giuseppina; Linard, Catherine; Nicolas, Gaëlle; Thanapongtharm, Weerapong; D'Aietti, Laura; Wint, William; Newman, Scott H.; Robinson, Timothy P.

    2015-01-01

    The rapid transformation of the livestock sector in recent decades brought concerns on its impact on greenhouse gas emissions, disruptions to nitrogen and phosphorous cycles and on land use change, particularly deforestation for production of feed crops. Animal and human health are increasingly interlinked through emerging infectious diseases, zoonoses, and antimicrobial resistance. In many developing countries, the rapidity of change has also had social impacts with increased risk of marginalisation of smallholder farmers. However, both the impacts and benefits of livestock farming often differ between extensive (backyard farming mostly for home-consumption) and intensive, commercial production systems (larger herd or flock size, higher investments in inputs, a tendency towards market-orientation). A density of 10,000 chickens per km2 has different environmental, epidemiological and societal implications if these birds are raised by 1,000 individual households or in a single industrial unit. Here, we introduce a novel relationship that links the national proportion of extensively raised animals to the gross domestic product (GDP) per capita (in purchasing power parity). This relationship is modelled and used together with the global distribution of rural population to disaggregate existing 10 km resolution global maps of chicken and pig distributions into extensive and intensive systems. Our results highlight countries and regions where extensive and intensive chicken and pig production systems are most important. We discuss the sources of uncertainties, the modelling assumptions and ways in which this approach could be developed to forecast future trajectories of intensification. PMID:26230336
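
    The disaggregation step described in these records can be caricatured as: estimate the extensive share of the national stock from GDP per capita, allocate that share within each grid cell in proportion to its rural population, and treat the remainder as intensive. A heavily simplified sketch (the logistic form and all parameter values are placeholders, not the fitted relationship from the paper):

        import numpy as np

        def extensive_fraction(gdp_ppp, gdp_mid=8000.0, steepness=1.5):
            """Hypothetical logistic link between GDP per capita (PPP) and the
            national proportion of extensively raised animals: poorer countries
            keep most stock in backyard systems, richer ones in intensive units."""
            return 1.0 / (1.0 + (gdp_ppp / gdp_mid) ** steepness)

        def disaggregate(total_density, rural_fraction, gdp_ppp):
            """Split a mapped total animal density into extensive and intensive parts,
            placing the extensive share where the rural population lives."""
            p_ext = extensive_fraction(gdp_ppp)
            extensive = total_density * p_ext * rural_fraction
            intensive = total_density - extensive
            return extensive, intensive

        density = np.array([12000.0, 3000.0, 500.0])   # birds per km^2 in three cells
        rural = np.array([0.9, 0.5, 0.1])              # rural population fraction per cell
        ext, inten = disaggregate(density, rural, gdp_ppp=6000.0)
        print(ext, inten)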

  19. Intensity Distribution of the Three-Wave Diffraction from Dislocation Epitaxial Layers in the Reciprocal Space

    Science.gov (United States)

    Kyutt, R. N.

    2018-04-01

    The three-wave X-ray diffraction in strongly disordered epitaxial layers of GaN and ZnO is experimentally investigated. Charts of the intensity distribution in reciprocal space are plotted in the coordinates q_θ and q_φ for the most intense three-wave combination (1010)/(1011) by means of subsequent θ- and φ-scanning. A nontrivial shape of the θ-sections of these contours at a distance from the φ center of reflection is revealed; it differs from sample to sample. For the θ-curves at the center of reflection, we observed a common peak that may be approximated by a Voigt function with a power-law decrease in intensity at the wings; the exponent of this decrease (from -4.5 to -5.0) is found to be considerably greater in magnitude than that for the corresponding two-wave diffraction curves and does not depend on the dislocation density and distribution in the layers. In some films we observed a coarse block structure; in addition, it follows from the distribution in reciprocal space that these blocks are rotated with respect to each other about the normal to the surface, which suggests the existence of low-angle boundaries between them consisting exclusively of edge dislocations.

  20. Engineering geologic and geotechnical analysis of paleoseismic shaking using liquefaction effects: Field examples

    Science.gov (United States)

    Green, R.A.; Obermeier, S.F.; Olson, S.M.

    2005-01-01

    The greatest impediments to the widespread acceptance of back-calculated ground motion characteristics from paleoliquefaction studies typically stem from three uncertainties: (1) the significance of changes in the geotechnical properties of post-liquefied sediments (e.g., "aging" and density changes), (2) the selection of appropriate geotechnical soil indices from individual paleoliquefaction sites, and (3) the methodology for integration of back-calculated results of strength of shaking from individual paleoliquefaction sites into a regional assessment of paleoseismic strength of shaking. Presented herein are two case studies that illustrate the methods outlined by Olson et al. [Engineering Geology, this issue] for addressing these uncertainties. The first case study is for a site near Memphis, Tennessee, wherein cone penetration test data from side-by-side locations, one of liquefaction and the other of no liquefaction, are used to readily discern that the influence of post-liquefaction "aging" and density changes on the measured in situ soil indices is minimal. In the second case study, 12 sites that are at scattered locations in the Wabash Valley and that exhibit paleoliquefaction features are analyzed. The features are first provisionally attributed to the Vincennes Earthquake, which occurred around 6100 years BP, and are used to illustrate our proposed approach for selecting representative soil indices of the liquefied sediments. These indices are used in back-calculating the strength of shaking at the individual sites, the results from which are then incorporated into a regional assessment of the moment magnitude, M, of the Vincennes Earthquake. The regional assessment validated the provisional assumption that the paleoliquefaction features at the scattered sites were, in the main, induced by the Vincennes Earthquake, which was determined to have M ≈ 7.5. The uncertainties and assumptions used in the assessment are discussed in detail. © 2004 Elsevier B.V.

  1. Shaking table test study on seismic performance of dehydrogenation fan for nuclear power plants

    International Nuclear Information System (INIS)

    Liu Kaiyan; Shi Weixing; Cao Jialiang; Wang Yang

    2011-01-01

    The seismic performance of the dehydrogenation fan for nuclear power plants was evaluated through earthquake-simulation shaking table tests. Dynamic characteristics, including the orthogonal tri-axial fundamental frequencies and equivalent damping ratios, were measured by the white-noise scanning method. Artificial seismic waves were generated to match the floor acceleration response spectra for nuclear power plants. Furthermore, five OBE and one SSE shaking table tests of the dehydrogenation fan were performed using the artificial seismic waves as simultaneous seismic inputs along the orthogonal axes. The operating function of the dehydrogenation fan was monitored and observed during all seismic tests, and its performance indexes were compared before and after the tests. The results show that the structural integrity and operating function of the dehydrogenation fan were maintained during all seismic tests, that its performance indexes remained consistent before and after the tests, and that its seismic performance satisfies the relevant technical requirements. (authors)

  2. Recent applications for rapid estimation of earthquake shaking and losses with ELER Software

    International Nuclear Information System (INIS)

    Demircioglu, M.B.; Erdik, M.; Kamer, Y.; Sesetyan, K.; Tuzun, C.

    2012-01-01

    A methodology and software package entitled Earthquake Loss Estimation Routine (ELER) was developed for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region. The work was carried out under the Joint Research Activity-3 (JRA3) of the EC FP6 project entitled Network of Research Infrastructures for European Seismology (NERIES). The ELER methodology comprises: 1) determination of the most likely location of the source of the earthquake using a regional seismo-tectonic database; 2) estimation of the spatial distribution of selected ground motion parameters at engineering bedrock through region-specific ground motion prediction models, bias-correcting the ground motion estimates with strong ground motion data, if available; 3) estimation of the spatial distribution of site-corrected ground motion parameters using a regional geology database and appropriate amplification models; and 4) estimation of the losses and uncertainties at various orders of sophistication (buildings, casualties). The multi-level methodology developed for real-time estimation of losses is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships which are coded into ELER. The present paper provides brief information on the methodology of ELER and provides an example application with the recent major earthquake that hit the Van province in the east of Turkey on 23 October 2011 with a moment magnitude (Mw) of 7.2. For this earthquake, the Kandilli Observatory and Earthquake Research Institute (KOERI) provided near-real-time estimates of building damage and casualty distribution using ELER. (author)

  3. USGS earthquake hazards program (EHP) GPS use case : earthquake early warning (EEW) and shake alert

    Science.gov (United States)

    2017-03-30

    GPS Adjacent Band Workshop VI RTCA Inc., Washington D.C., 30 March 2017. USGS GPS receiver use case - Real-Time GPS for EEW -Continued: CRITICAL EFFECT - The GNSS component of the Shake Alert system augments the inertial sensors and is especial...

  4. Tennis Play Intensity Distribution and Relation with Aerobic Fitness in Competitive Players.

    Directory of Open Access Journals (Sweden)

    Ernest Baiget

    Full Text Available The aims of this study were (i) to describe the relative intensity of simulated tennis play based on the cumulative time spent in three metabolic intensity zones, and (ii) to determine the relationships between this play intensity distribution and the aerobic fitness of a group of competitive players. 20 male players of advanced to elite level (ITN) performed an incremental on-court specific endurance tennis test to exhaustion to determine maximal oxygen uptake (VO2max) and the first and second ventilatory thresholds (VT1, VT2). Ventilatory and gas exchange parameters were monitored using a telemetric portable gas analyser (K4 b2, Cosmed, Rome, Italy). Two weeks later the participants played a simulated tennis set against an opponent of similar level. Intensity zones (1: low, 2: moderate, and 3: high) were delimited by the individual VO2 values corresponding to VT1 and VT2, and expressed as percentages of maximum VO2 and heart rate. When expressed relative to VO2max, the percentage of playing time in zone 1 (77 ± 25%) was significantly higher (p < 0.001) than in zone 2 (20 ± 21%) and zone 3 (3 ± 5%). Moderate to high positive correlations were found between VT1, VT2 and VO2max, and the percentage of playing time spent in zone 1 (r = 0.68-0.75), as well as low to high inverse correlations between the metabolic variables and the percentage of time spent in zones 2 and 3 (r = -0.49 to -0.75). Players with better aerobic fitness play at relatively lower intensities. We conclude that players spent more than 75% of the time in their low-intensity zone, with less than 25% of the time spent at moderate to high intensities. Aerobic fitness appears to determine the metabolic intensity that players can sustain throughout the game.
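
    The time-in-zone calculation above is a simple classification of the VO2 time series against the two individual ventilatory thresholds. A minimal sketch (the breath-by-breath signal and threshold values below are hypothetical):

        import numpy as np

        def time_in_zones(vo2, vt1, vt2):
            """Percentage of playing time in three metabolic zones delimited by the
            individual VO2 values at the first and second ventilatory thresholds."""
            vo2 = np.asarray(vo2)
            low = np.mean(vo2 < vt1) * 100.0
            moderate = np.mean((vo2 >= vt1) & (vo2 < vt2)) * 100.0
            high = np.mean(vo2 >= vt2) * 100.0
            return low, moderate, high

        # hypothetical breath-by-breath VO2 (ml/kg/min) during a simulated set,
        # with VT1 = 32 and VT2 = 45 taken from the incremental on-court test
        rng = np.random.default_rng(2)
        vo2 = np.clip(rng.normal(27.0, 7.0, 1800), 10.0, 60.0)
        print(time_in_zones(vo2, vt1=32.0, vt2=45.0))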

  5. Memory intensive functional architecture for distributed computer control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures and one system implementation, which is a system for performing control and data acquisition for a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  6. Intensity and absorbed-power distribution in a cylindrical solar-pumped dye laser

    Science.gov (United States)

    Williams, M. D.

    1984-01-01

    The internal intensity and absorbed-power distribution of a simplified hypothetical dye laser of cylindrical geometry is calculated. Total absorbed power is also calculated and compared with laboratory measurements of lasing-threshold energy deposition in a dye cell to determine the suitability of solar radiation as a pump source or, alternatively, what modifications, if any, are necessary to the hypothetical system for solar pumping.

  7. Plasma Temperature Determination of Hydrogen Containing High-Frequency Electrodeless Lamps by Intensity Distribution Measurements of Hydrogen Molecular Band

    OpenAIRE

    Gavare, Zanda; Revalde, Gita; Skudra, Atis

    2010-01-01

    The goal of the present work was to investigate the possibility of using the intensity distribution of the Q-branch lines of the hydrogen Fulcher-α diagonal band (d³Πu⁻ → a³Σg⁺ electronic transition; Q-branch with v = v′ = 2) to determine the temperature of hydrogen-containing high-frequency electrodeless lamps (HFEDLs). The values of the rotational temperatures have been obtained from the relative intensity distributions for hydrogen-helium and hydrogen-argon HFEDLs depending on the applied curren...
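
    Rotational temperatures of this kind are usually extracted from a Boltzmann plot: line intensities, divided by the appropriate statistical-weight and line-strength factors, fall on a straight line against upper-state rotational energy with slope -1/(kT_rot). A minimal sketch with hypothetical reduced intensities (the energies and intensities below are placeholders, not the paper's data):

        import numpy as np

        k_B = 8.617e-5  # eV/K

        # Hypothetical reduced Q-branch line intensities of the Fulcher-alpha (2-2)
        # band: I_red = I / (statistical weight * line strength), so that
        # I_red ~ exp(-E_rot / (k T_rot)).  E_rot are upper-state rotational energies.
        E_rot = np.array([0.005, 0.015, 0.030, 0.050, 0.074])   # eV (illustrative)
        I_red = np.array([1.00, 0.72, 0.45, 0.26, 0.14])

        # slope of the Boltzmann plot ln(I_red) vs E_rot gives -1/(k T_rot)
        slope, _ = np.polyfit(E_rot, np.log(I_red), 1)
        T_rot = -1.0 / (k_B * slope)
        print(f"T_rot = {T_rot:.0f} K")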

  8. Effect of the consumption of a new symbiotic shake on glycemia and cholesterol levels in elderly people with type 2 diabetes mellitus.

    Science.gov (United States)

    Moroti, Camila; Souza Magri, Loyanne Francine; de Rezende Costa, Marcela; Cavallini, Daniela C U; Sivieri, Katia

    2012-02-22

    The consumption of foods containing probiotic and prebiotic ingredients is growing consistently every year, yet few studies have investigated their effects in the elderly. The objective of this study was therefore to evaluate the effect of the consumption of a symbiotic shake containing Lactobacillus acidophilus, Bifidobacterium bifidum and fructooligosaccharides on glycemia and cholesterol levels in elderly people. A randomized, double-blind, placebo-controlled study was conducted on twenty volunteers (ten in the placebo group and ten in the symbiotic group), aged 50 to 60 years. The criteria for inclusion in the study were: total cholesterol > 200 mg/dL; triglycerides > 200 mg/dL and glycemia > 110 mg/dL. Over a total test period of 30 days, 10 individuals (the symbiotic group) consumed a daily dose of 200 mL of a symbiotic shake containing 10^8 CFU/mL Lactobacillus acidophilus, 10^8 CFU/mL Bifidobacterium bifidum and 2 g oligofructose, while the 10 other volunteers (the placebo group) drank daily the same amount of a shake that did not contain any symbiotic bacteria. Blood samples were collected 15 days prior to the start of the experiment and at 10-day intervals after the beginning of shake intake. The standard lipid profile (total cholesterol, triglycerides and HDL cholesterol) and glycemia, or blood sugar levels, were evaluated by an enzymatic colorimetric assay. In the symbiotic group there was a non-significant reduction (P > 0.05) in total cholesterol and triglycerides, while daily consumption of the symbiotic shake resulted in a significant increase in HDL and a significant decrease in glycemia.

  9. Ion shaking in the 200 MeV XLS-ring

    International Nuclear Information System (INIS)

    Bozoki, E.; Kramer, S.L.

    1992-01-01

    It has been shown that ions trapped inside the beam's potential can be removed by the clearing electrodes when the amplitude of the ion oscillation is increased by vertically shaking the ions. We report on a similar experiment in the 200 MeV XLS ring. The design of the ion clearing system for the ring and the first results obtained were already reported. In the present series of experiments, an RF voltage was applied to a pair of vertical striplines. The frequency was scanned over the range of the ion bounce frequencies in the ring (1-10 MHz, covering species from H2 to CO2). The response of the beam size, vertical betatron tune and lifetime was studied.

  10. Effects of oxcarbazepine on monoamines content in hippocampus and head and body shakes and sleep patterns in kainic acid-treated rats.

    Science.gov (United States)

    Alfaro-Rodríguez, Alfonso; González-Piña, Rigoberto; Bueno-Nava, Antonio; Arch-Tirado, Emilio; Ávila-Luna, Alberto; Uribe-Escamilla, Rebeca; Vargas-Sánchez, Javier

    2011-09-01

    The aim of this work was to analyze the effect of oxcarbazepine (OXC) on sleep patterns, head and body shakes, and monoamine neurotransmitter levels in a model of kainic acid-induced seizures. Adult Wistar rats were administered kainic acid (KA), OXC or OXC + KA. A polysomnographic study showed that KA-treated animals stayed awake for the entire initial 10 h. OXC administration 30 min prior to KA diminished the effect of KA on the sleep parameters. As a measure of the effects of the drug treatments on behavior, head and body shakes were visually recorded for 4 h after administration of KA, OXC + KA or saline. The presence of OXC diminished the frequency of shakes. Four hours after drug application, the hippocampus was dissected out and the content of monoamines was analyzed. The presence of OXC further increased the KA-induced rises in serotonin, 5-hydroxyindoleacetic acid, dopamine, and homovanillic acid.

  11. Rapid exposure and loss estimates for the May 12, 2008 Mw 7.9 Wenchuan earthquake provided by the U.S. Geological Survey's PAGER system

    Science.gov (United States)

    Earle, P.S.; Wald, D.J.; Allen, T.I.; Jaiswal, K.S.; Porter, K.A.; Hearne, M.G.

    2008-01-01

    One half-hour after the May 12th Mw 7.9 Wenchuan, China earthquake, the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) system distributed an automatically generated alert stating that 1.2 million people were exposed to severe-to-extreme shaking (Modified Mercalli Intensity VIII or greater). It was immediately clear that a large-scale disaster had occurred. These alerts were widely distributed and referenced by the major media outlets and used by governments, scientific, and relief agencies to guide their responses. The PAGER alerts and Web pages included predictive ShakeMaps showing estimates of ground shaking, maps of population density, and a list of estimated intensities at impacted cities. Manual, revised alerts were issued in the following hours that included the dimensions of the fault rupture. Within a half-day, PAGER’s estimates of the population exposed to strong shaking levels stabilized at 5.2 million people. A coordinated research effort is underway to extend PAGER’s capability to include estimates of the number of casualties. We are pursuing loss models that will allow PAGER the flexibility to use detailed inventory and engineering results in regions where these data are available while also calculating loss estimates in regions where little is known about the type and strength of the built infrastructure. Prototype PAGER fatality estimates are currently implemented and can be manually triggered. In the hours following the Wenchuan earthquake, these models predicted fatalities in the tens of thousands.
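
    The headline number in such an alert is an exposure table: population summed within each shaking-intensity band of the ShakeMap grid. A minimal sketch of that aggregation step (toy grids, not PAGER's actual data or code):

        import numpy as np

        def exposure_by_intensity(mmi, population, bins=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11)):
            """Sum the population exposed at each Modified Mercalli Intensity level,
            given co-registered intensity and population grids."""
            mmi = np.asarray(mmi).ravel()
            pop = np.asarray(population).ravel()
            exposure = {}
            for lo, hi in zip(bins[:-1], bins[1:]):
                exposure[lo] = pop[(mmi >= lo) & (mmi < hi)].sum()
            return exposure

        # hypothetical 2x3 grids (intensity and inhabitants per cell)
        mmi = np.array([[8.4, 7.6, 6.2], [9.1, 5.5, 4.8]])
        pop = np.array([[120_000, 300_000, 80_000], [450_000, 60_000, 20_000]])
        exp_table = exposure_by_intensity(mmi, pop)
        severe = int(sum(v for k, v in exp_table.items() if k >= 8))
        print(f"people exposed to MMI VIII or greater: {severe:,}")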

  12. Ground-motion modeling of the 1906 San Francisco Earthquake, part II: Ground-motion estimates for the 1906 earthquake and scenario events

    Science.gov (United States)

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.

    2008-01-01

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  13. Ground motion modeling of the 1906 San Francisco earthquake II: Ground motion estimates for the 1906 earthquake and scenario events

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L

    2007-02-09

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model that combines the available geodetic and seismic observations and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to over predict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  14. Intensity attenuation for active crustal regions

    Science.gov (United States)

    Allen, Trevor I.; Wald, David J.; Worden, C. Bruce

    2012-07-01

    We develop globally applicable macroseismic intensity prediction equations (IPEs) for earthquakes of moment magnitude Mw 5.0-7.9 and intensities of degree II and greater for distances less than 300 km for active crustal regions. The IPEs are developed for two distance metrics: closest distance to rupture (Rrup) and hypocentral distance (Rhyp). The key objective for developing the model based on hypocentral distance—in addition to the more rigorous and standard measure Rrup—is to provide an IPE which can be used in near real-time earthquake response systems for earthquakes anywhere in the world, where information regarding the rupture dimensions of a fault may not be known in the immediate aftermath of the event. We observe that our models, particularly the model for the Rrup distance metric, generally have low median residuals with magnitude and distance. In particular, we address whether the direct use of IPEs leads to a reduction in overall uncertainties when compared with methods which use a combination of ground-motion prediction equations and ground motion to intensity conversion equations. Finally, using topographic gradient as a proxy and median model predictions, we derive intensity-based site amplification factors. These factors lead to a small reduction of residuals at shallow gradients at strong shaking levels. However, the overall effect on total median residuals is relatively small. This is in part due to the observation that the median site condition for intensity observations used to develop these IPEs is approximately near the National Earthquake Hazard Reduction Program C/D site-class boundary.
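
    To make the two distance metrics concrete, the sketch below evaluates a generic magnitude-and-distance IPE using either Rrup or, when the rupture geometry is unknown, Rhyp computed from the epicentral distance and focal depth. The functional form and coefficients are placeholders chosen for illustration only; they are not the Allen et al. (2012) model.

      import numpy as np

      def r_hyp(epi_dist_km, focal_depth_km):
          # Hypocentral distance from epicentral distance and focal depth.
          return np.sqrt(epi_dist_km**2 + focal_depth_km**2)

      def ipe_mmi(mag, dist_km, c0=3.0, c1=1.0, c2=-1.5, c3=-0.003):
          # Generic IPE form (hypothetical coefficients): magnitude scaling,
          # geometric spreading, and a linear anelastic attenuation term.
          return c0 + c1 * mag + c2 * np.log(dist_km + 10.0) + c3 * dist_km

      # Near-real-time use case: rupture dimensions unknown, fall back to Rhyp.
      mag, epi_km, depth_km = 6.5, 40.0, 12.0
      print(ipe_mmi(mag, r_hyp(epi_km, depth_km)))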

  15. Economic Impact Analyses of Interdisciplinary Multi-hazard Scenarios: ShakeOut and ARkStorm

    Science.gov (United States)

    Wein, A. M.; Rose, A.; Sue Wing, I.; Wei, D.

    2011-12-01

    U. S. Geological Survey (USGS) scientists are using an interdisciplinary strategy to develop and analyze multi-hazard scenarios to help communities enhance resilience to natural hazard disasters. Two such scenarios are the southern California ShakeOut earthquake and the California ARkStorm winter storm. Both scenarios are multi-hazard: ShakeOut ground motions trigger landslides and liquefaction and ARkStorm involves wind, flood, landslide, and coastal hazards. A collaborative scenario-process engages partners and stakeholders throughout the development and use of the scenarios. In doing so, community resilience is enhanced by educating communities about hazards and hazard interdependencies, building networks from scientists to decision makers, exercising emergency management strategies, identifying emergency management issues, and motivating solutions prior to an event. In addition, interdisciplinary scenarios stimulate research on the various steps of analysis (e.g., natural hazard processes, physical damages, societal consequences, and policy connections). In particular, USGS scientists have collaborated with economists to advance methods to estimate the economic impacts (business interruption losses) of disasters. Our economic impact analyses evolved from the economic module in the Federal Emergency Management Agency's loss-estimation tool, HAZUS-MH, to a more encompassing input-output analysis for ShakeOut, to a more sophisticated Computable General Equilibrium model for ARkStorm. The analyses depend on physical damage and restoration time estimates from engineers and geographic analyses of economic assets in hazard zones. Economic resilience strategies are incorporated to represent resourcefulness and ingenuity that avoids potential losses during and after an event. Such strategies operate at three levels of the economy: micro (e.g., ability to catch up on lost production time), meso (e.g., coordination within a sector to share resources), and macro (e

  16. The USGS "Did You Feel It?" Macroseismic Intensity Maps: Lessons Learned from a Decade of Citizen-Empowered Seismology

    Science.gov (United States)

    Wald, D. J.; Worden, C. B.; Quitoriano, V. R.; Dewey, J. W.

    2012-12-01

    The U.S. Geological Survey (USGS) "Did You Feel It?" (DYFI) system is an automated approach for rapidly collecting macroseismic intensity (MI) data from Internet users' shaking and damage reports and generating intensity maps immediately following earthquakes; it has been operating for over a decade (1999-2012). The internet-based interface allows for a two-way path of communication between seismic data providers (scientists) and earthquake information recipients (citizens) by swapping roles: users looking for information from the USGS become data providers to the USGS. This role-reversal presents opportunities for data collection, generation of good will, and further communication and education. In addition, online MI collecting systems like DYFI have greatly expanded the range of quantitative analyses possible with MI data and taken the field of MI in important new directions. The maps are made more quickly, usually provide more complete coverage at higher resolution, and allow data collection at rates and quantities never before considered. Scrutiny of the USGS DYFI data indicates that one-decimal precision is warranted, and web-based geocoding services now permit precise locations. The high-quality, high-resolution, densely sampled MI assignments allow for peak ground motion (PGM) versus MI analyses well beyond earlier studies. For instance, Worden et al. (2011) used large volumes of data to confirm low standard deviations for multiple, proximal DYFI reports near a site, and they used the DYFI observations with PGM data to develop bidirectional, ground motion-intensity conversion equations. Likewise, Atkinson and Wald (2007) and Allen et al. (2012) utilized DYFI data to derive intensity prediction equations directly without intermediate conversion of ground-motion prediction equation metrics to intensity. Both types of relations are important for robust historic and real-time ShakeMaps, among other uses. In turn, ShakeMap and DYFI afford ample opportunities to

  17. Ion energy distributions from laser-generated plasmas at two different intensities

    Science.gov (United States)

    Ceccio, Giovanni; Torrisi, Lorenzo; Okamura, Masahiro; Kanesue, Takeshi; Ikeda, Shunsuke

    2018-01-01

    Laser-generated non-equilibrium plasmas were analyzed at Brookhaven National Laboratory (NY, USA) and MIFT Messina University (Italy). Two laser intensities, 10^12 W/cm^2 and 10^9 W/cm^2, were employed to irradiate Al targets and Au-coated Al targets under high-vacuum conditions. Ion energy distributions were obtained using electrostatic analyzers coupled with ion collectors. Time-of-flight measurements were performed by changing the laser irradiation conditions. The study was carried out to provide optimal keV ion injection into post-acceleration systems. Possible applications will be presented.
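
    The time-of-flight measurements mentioned above map arrival times at the ion collector to ion kinetic energies through E = 1/2 m (L/t)^2. A minimal sketch of that conversion is given below; the flight path, ion mass, and arrival times are illustrative values, not parameters of the experiment.

      import numpy as np

      AMU = 1.66053906660e-27   # atomic mass unit, kg
      EV  = 1.602176634e-19     # electron volt, J

      def tof_to_energy_eV(t_s, flight_path_m, mass_amu):
          # Kinetic energy of an ion that covers the flight path in time t.
          v = flight_path_m / t_s
          return 0.5 * mass_amu * AMU * v**2 / EV

      # Illustrative numbers: 1 m flight path, Al ions (27 amu),
      # arrival times between 2 and 20 microseconds.
      times_s = np.linspace(2e-6, 20e-6, 10)
      print(tof_to_energy_eV(times_s, 1.0, 27.0))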

  18. Role of shake processes and inter-multiplet Auger transitions in production of multiply-charged ions upon cascade decay of resonantly excited 1s⁻¹4p state of the argon atom

    International Nuclear Information System (INIS)

    Kochur, A.G.; Dudenko, A.I.; Petrov, I.D.; Demekhin, V.F.

    2007-01-01

    The yields of Ar^(i+) ions upon the decay of the Ar 1s⁻¹4p state are calculated in the one-electron configuration-average approximation, considering shake-up, shake-down, and shake-off processes as well as the ejection of electrons in inter-multiplet Auger transitions. Our calculation underestimates the production of the higher-charged ions, which may indicate limitations of the one-electron approximation and of the step-by-step cascade model.

  19. Dynamical shake-up and the low mass of Mars

    Science.gov (United States)

    Bromley, Benjamin C.; Kenyon, Scott

    2017-10-01

    The low mass of Mars and the lack of planets in the asteroid belt are important constraints on theories of planet formation. We revisit the idea that sweeping secular resonances involving the gas giants and the Sun's dissipating protoplanetary disk can explain these features of our Solar System. To test this "dynamical shake-up" scenario, we perform an extensive suite of simulations to track terrestrial planet formation from planetesimals. We find that if the Sun's gas disk depletes in roughly a million years, then a sweeping resonance with Jupiter inhibits planet formation in the asteroid belt and substantially limits the mass of Mars. We explore how this phenomenon might lead to asteroid belt analogs around other stars with long-period, massive planets.

  20. Building a Communication, Education, and Outreach Program for the ShakeAlert National Earthquake Early Warning Program

    Science.gov (United States)

    DeGroot, R. M.; Strauss, J. A.; Given, D. D.; Cochran, E. S.; Burkett, E. R.; Long, K.

    2016-12-01

    Earthquake Early Warning (EEW) systems can provide as much as tens of seconds of warning to people and automated systems before strong shaking arrives. The United States Geological Survey (USGS) and its partners are developing an EEW system for the West Coast of the United States. To be an integral part of successful implementation, EEW engagement programs and materials must integrate with and leverage broader earthquake risk programs. New methods and products for dissemination must be multidisciplinary, cost effective, and consistent with existing hazards education efforts. Our presentation outlines how the USGS and its partners will approach this effort in the context of the EEW system through the work of a multistate and multiagency committee that participates in the design, implementation, and evaluation of a portfolio of programs and products. This committee, referred to as the ShakeAlert Joint Committee for Communication, Education, and Outreach (ShakeAlert CEO), is working to identify, develop, and cultivate partnerships with EEW stakeholders including Federal, State, and academic partners, private companies, policy makers, and local organizations. Efforts include developing materials, methods for delivery, and reaching stakeholders with information on EEW, earthquake preparedness, and emergency protective actions. It is essential to develop standards to ensure that information communicated via EEW alerts is consistent across the public and private sectors and to achieve a common understanding of the actions users should take when they receive an EEW warning. The USGS and the participating states and agencies acknowledge that the implementation of EEW is a collective effort requiring the participation of hundreds of stakeholders committed to ensuring public accessibility.

  1. Plasma Temperature Determination of Hydrogen-Containing High-Frequency Electrodeless Lamps by Intensity Distribution Measurements of the Hydrogen Molecular Band

    International Nuclear Information System (INIS)

    Gavare, Z.; Revalde, G.; Skudra, A.

    2011-01-01

    The goal of the present work was to investigate the possibility of using the intensity distribution of the Q-branch lines of the hydrogen Fulcher-α diagonal band (d³Πu → a³Σg⁺ electronic transition; Q-branch with ν′ = ν″ = 2) to determine the temperature of hydrogen-containing high-frequency electrodeless lamps (HFEDLs). The values of the rotational temperatures have been obtained from the relative intensity distributions for hydrogen-helium and hydrogen-argon HFEDLs depending on the applied current. The results have been compared with the method of temperature derivation from Doppler profiles of the He 667.8 nm and Ar 772.4 nm lines. The results of both methods are in good agreement, showing that the method of gas temperature determination from the intensity distribution in the hydrogen Fulcher-α (2-2)Q band can be used for hydrogen-containing HFEDLs. It was observed that the admixture of 10% hydrogen in the argon HFEDLs significantly reduces the gas temperature.
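
    Rotational temperatures of this kind are commonly extracted with a Boltzmann plot, in which ln(I/g) for each rotational line is fitted linearly against the upper-level rotational energy and the slope gives -1/(kB·Trot). The sketch below uses synthetic intensities and generic statistical weights, and it ignores line-strength and nuclear-spin factors that a full Fulcher-band analysis would include; it is not the authors' exact procedure.

      import numpy as np

      K_B = 8.617333262e-5   # Boltzmann constant, eV/K

      def rotational_temperature(intensities, g_factors, e_rot_eV):
          # Boltzmann plot: ln(I/g) is linear in E_rot with slope -1/(k_B * T_rot).
          y = np.log(np.asarray(intensities) / np.asarray(g_factors))
          slope, _ = np.polyfit(np.asarray(e_rot_eV), y, 1)
          return -1.0 / (K_B * slope)

      # Synthetic example (illustrative values, not measured lamp data).
      e_rot  = np.array([0.00, 0.02, 0.05, 0.09])   # upper-level rotational energies, eV
      g      = np.array([1.0, 3.0, 5.0, 7.0])       # statistical weights
      t_true = 700.0                                # K
      intens = g * np.exp(-e_rot / (K_B * t_true))
      print(rotational_temperature(intens, g, e_rot))   # ~700 K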

  2. Assessment of the signal intensity distribution pattern within the unruptured cerebral aneurysms using color-coded 3D MR angiography

    International Nuclear Information System (INIS)

    Satoh, Toru; Omi, Megumi; Ohsako, Chika

    2005-01-01

    To evaluate the relationship between the MR signal intensity distribution pattern and bleb formation/deformation of the aneurysmal dome, fifty cases of unruptured cerebral aneurysms were investigated with color-coded 3D MR angiography. Patterns were categorized into central-type, neck-type, and peripheral-type according to the distribution of MR signals with low-, moderate-, and high-signal-intensity areas. Imaging analysis revealed a significant relationship (P<0.02) of the peripheral-type aneurysms to bleb formation and deformation of the dome, compared with the central- and neck-type aneurysms. Additionally, the peripheral-type signal intensity distribution pattern was associated with aneurysms harboring a relatively large dome size and lateral-type growth, including internal carotid aneurysms. Prospective analysis of the intraaneurysmal flow pattern with color-coded 3D MR angiography may provide patient-specific analysis of intraaneurysmal flow status in relation to the morphological change of the corresponding aneurysmal dome in the management of unruptured cerebral aneurysms. (author)

  3. Variation of rain intensity and drop size distribution with General Weather Patterns (GWL)

    Science.gov (United States)

    Ghada, Wael; Buras, Allan; Lüpke, Marvin; Menzel, Annette

    2017-04-01

    Short-duration rainfall extremes may cause flash floods in certain catchments (e.g. cities or fast-responding watersheds) and pose a great risk to affected communities. In order to predict their occurrence under future climate change scenarios, their link to atmospheric circulation patterns needs to be well understood. We used a comprehensive data set of meteorological data (temperature, rain gauge precipitation) and precipitation spectra measured by a disdrometer (OTT PARSIVEL) between October 2008 and June 2010 at Freising, southern Germany. For the 21 months of the study period, we integrated the disdrometer spectra over intervals of 10 minutes to correspond to the temporal resolution of the weather station data and discarded measurements with air temperatures below 0°C. Daily General Weather Patterns ("Großwetterlagen", GWL) were downloaded from the website of the German Meteorological Service. Out of the 29 GWL, the 14 for which we had at least 12 rain events during our study period were included in the analysis. For the definition of a rain event, we tested different lengths of minimum inter-event times and chose 30 min as a good compromise between number and length of resulting events; rain events started when more than 0.001 mm/h (the sensitivity of the disdrometer) was recorded. The length of the rain events ranged between 10 min and 28 h (median 130 min), with the maximum rain intensity recorded being 134 mm/h on 24-07-2009. Seasonal differences were identified for rain event average intensities and maximum intensities per event. The influence of GWL on rain properties such as rain intensity and drop size distribution per time step and per event was investigated based on the above-mentioned rain event definition. Pairwise Wilcoxon tests revealed that higher rain intensity and larger drops were associated with the GWL "Low over the British Isles" (TB), whereas low rain intensities and fewer drops per interval were associated with the GWL "High over Central Europe
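
    The event definition used above (a 30-min minimum inter-event time and the 0.001 mm/h disdrometer sensitivity as threshold) amounts to a simple segmentation of the 10-min intensity series. A sketch of that segmentation on a hypothetical series is given below.

      import numpy as np

      def segment_rain_events(intensity_mm_h, step_min=10, min_gap_min=30, thresh=0.001):
          # Split a regularly sampled intensity series into events: an event ends
          # once the intensity stays at or below the threshold for min_gap_min.
          gap_steps = min_gap_min // step_min
          events, current, dry = [], [], 0
          for i, x in enumerate(intensity_mm_h):
              if x > thresh:
                  current.append(i)
                  dry = 0
              elif current:
                  dry += 1
                  if dry >= gap_steps:
                      events.append((current[0], current[-1]))
                      current, dry = [], 0
          if current:
              events.append((current[0], current[-1]))
          return events

      # Hypothetical 10-min series (mm/h): two bursts separated by a 40-min dry spell.
      series = [0, 2.5, 4.0, 0, 0, 0, 0, 1.2, 0.8, 0]
      print(segment_rain_events(series))   # [(1, 2), (7, 8)]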

  4. Reduced aliasing artifacts using shaking projection k-space sampling trajectory

    Science.gov (United States)

    Zhu, Yan-Chun; Du, Jiang; Yang, Wen-Chao; Duan, Chai-Jie; Wang, Hao-Yu; Gao, Song; Bao, Shang-Lian

    2014-03-01

    Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are two main sources which degrade radial imaging quality. For a given fixed number of k-space projections, data distributions along radial and angular directions will influence the level of aliasing and streaking artifacts. Conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory was proposed to reduce aliasing artifacts in MR images. SP sampling trajectory shifts the projection alternately along the k-space center, which separates k-space data in the azimuthal direction. Simulations based on conventional and SP sampling trajectories were compared with the same number of projections. A significant reduction of aliasing artifacts was observed using the SP sampling trajectory. These two trajectories were also compared with different sampling frequencies. An SP trajectory has the same aliasing character when using half the sampling frequency (or half the data) for reconstruction. SNR comparisons with different white noise levels show that these two trajectories have the same SNR character. In conclusion, the SP trajectory can reduce the aliasing artifact without decreasing SNR and also provide a way for undersampling reconstruction. Furthermore, this method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts.
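
    The shaking-projection idea, alternately shifting successive spokes about the k-space center, can be illustrated as a small modification of a conventional radial trajectory. In the sketch below the shift amplitude and the simple +/- alternation are assumptions made for illustration; they are not taken from the paper.

      import numpy as np

      def radial_trajectory(n_proj, n_samples, shake=0.0):
          # Conventional radial trajectory (shake=0) or a simple "shaking
          # projection" variant in which alternate spokes are shifted by
          # +/- shake (in units of delta-k) along the readout direction.
          k = np.linspace(-0.5, 0.5, n_samples)          # normalized readout samples
          angles = np.arange(n_proj) * np.pi / n_proj    # evenly spaced spoke angles
          traj = []
          for i, th in enumerate(angles):
              offset = shake * (1 if i % 2 == 0 else -1) / n_samples
              kx = (k + offset) * np.cos(th)
              ky = (k + offset) * np.sin(th)
              traj.append(np.stack([kx, ky], axis=-1))
          return np.array(traj)          # shape: (n_proj, n_samples, 2)

      conventional = radial_trajectory(64, 128, shake=0.0)
      shaking      = radial_trajectory(64, 128, shake=0.5)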

  5. Reduced aliasing artifacts using shaking projection k-space sampling trajectory

    International Nuclear Information System (INIS)

    Zhu Yan-Chun; Yang Wen-Chao; Wang Hao-Yu; Gao Song; Bao Shang-Lian; Du Jiang; Duan Chai-Jie

    2014-01-01

    Radial imaging techniques, such as projection-reconstruction (PR), are used in magnetic resonance imaging (MRI) for dynamic imaging, angiography, and short-T2 imaging. They are less sensitive to flow and motion artifacts, and support fast imaging with short echo times. However, aliasing and streaking artifacts are two main sources which degrade radial imaging quality. For a given fixed number of k-space projections, data distributions along radial and angular directions will influence the level of aliasing and streaking artifacts. Conventional radial k-space sampling trajectory introduces an aliasing artifact at the first principal ring of point spread function (PSF). In this paper, a shaking projection (SP) k-space sampling trajectory was proposed to reduce aliasing artifacts in MR images. SP sampling trajectory shifts the projection alternately along the k-space center, which separates k-space data in the azimuthal direction. Simulations based on conventional and SP sampling trajectories were compared with the same number of projections. A significant reduction of aliasing artifacts was observed using the SP sampling trajectory. These two trajectories were also compared with different sampling frequencies. An SP trajectory has the same aliasing character when using half the sampling frequency (or half the data) for reconstruction. SNR comparisons with different white noise levels show that these two trajectories have the same SNR character. In conclusion, the SP trajectory can reduce the aliasing artifact without decreasing SNR and also provide a way for undersampling reconstruction. Furthermore, this method can be applied to three-dimensional (3D) hybrid or spherical radial k-space sampling for a more efficient reduction of aliasing artifacts.

  6. MyShake: Building a smartphone seismic network

    Science.gov (United States)

    Kong, Q.; Allen, R. M.; Schreier, L.

    2014-12-01

    We are in the process of building up a smartphone seismic network. In order to build this network, we did shake table tests to evaluate the performance of smartphones as seismic recording instruments. We also conducted noise floor tests to find the minimum earthquake signal we can record using smartphones. We added phone noise to the strong motion data from past earthquakes and used these as an analogue dataset to test algorithms and to understand the differences between using the smartphone network and the traditional seismic network. We also built a prototype system to trigger the smartphones from our server to record signals which can be sent back to the server in near real time. The phones can also be triggered by our algorithm running locally on the phone; if an earthquake occurs and triggers the phones, the signals recorded by the phones will be sent back to the server. We expect to turn the prototype system into a real smartphone seismic network that works as a supplementary network to the existing traditional seismic network.

  7. Spatial Distributions of Tropical Cyclone Tornadoes by Intensity and Size Characteristics

    Directory of Open Access Journals (Sweden)

    Todd W. Moore

    2017-08-01

    Tropical cyclones that make landfall often spawn tornadoes. Previous studies have shown that these tornadoes are not uniformly distributed in the United States or in the tropical cyclone environment. They show that tornadoes tend to occur relatively close to the coastline and that they tend to cluster to the east-of-center in the tropical cyclone environment, particularly in the northeast and east-of-center quadrants. This study contributes to these studies by analyzing the spatial distributions of tropical cyclone tornadoes by intensity, path length, path width, and the damage potential index. The analyses confirm that most tornadoes occur relatively close to the coastline, but show that stronger tornadoes with larger paths are disproportionately common farther inland. They also confirm that the highest amount of activity is located within the northeast and east-of-center quadrants and show that the most potentially damaging tornadoes cluster in a subregion near the intersection of these two quadrants.

  8. Asymmetries in angular distributions of nucleon emission intensity in high energy hadron-nucleus collisions

    International Nuclear Information System (INIS)

    Strugalski, Z.

    1982-01-01

    Asymmetry in nucleon emission intensity angular distributions relative to the hadron deflection plane and to two planes normal to it and related to it uniquely is analyzed, using appropriate experimental data on pion-xenon nucleus collisions at 3.5 GeV/c momentum. Quantitative characteristics of the asymmetries found are presented in tables and figures.

  9. The quick convolution of galaxy profiles, with application to power-law intensity distributions

    International Nuclear Information System (INIS)

    Bailey, M.E.; Sparks, W.B.

    1983-01-01

    The two-dimensional convolution of a circularly symmetric galaxy model with a Gaussian point-spread function of dispersion σ reduces to a single integral. This is solved analytically for models with power-law intensity distributions and results are given which relate the apparent core radius to σ and the power-law index k. The convolution integral is also simplified for the case of a point-spread function corresponding to a circular aperture. Models of galactic nuclei with stellar density cusps can only be distinguished from alternatives with small core radii if both the brightness and seeing profiles are measured accurately. The results are applied to data on the light distribution at the Galactic Centre. (author)
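
    The reduction of the two-dimensional seeing convolution to a single radial integral is a standard result: for a circularly symmetric profile I(R) and a Gaussian PSF of dispersion sigma, the observed profile is (1/sigma^2) times the integral over R of I(R) exp(-(r^2+R^2)/(2 sigma^2)) I0(r R/sigma^2) R dR, with I0 the modified Bessel function. The sketch below evaluates this numerically for an assumed power-law profile with a small core; the profile parameters are illustrative, not those of the paper.

      import numpy as np
      from scipy.integrate import quad
      from scipy.special import i0e

      def convolve_circular(profile, r, sigma, r_max=50.0):
          # Seeing-convolve a circularly symmetric profile I(R) with a Gaussian
          # PSF of dispersion sigma; the 2-D convolution reduces to one integral.
          # i0e (exponentially scaled I0) keeps the integrand numerically stable:
          # exp(-(r^2+R^2)/(2 s^2)) * I0(rR/s^2) = i0e(rR/s^2) * exp(-(r-R)^2/(2 s^2)).
          def integrand(R):
              return (profile(R) * R / sigma**2 *
                      i0e(r * R / sigma**2) *
                      np.exp(-(r - R)**2 / (2.0 * sigma**2)))
          val, _ = quad(integrand, 0.0, r_max, limit=200)
          return val

      # Illustrative power-law profile with a small core radius (assumed values).
      power_law = lambda R, a=0.1, k=1.5: (R**2 + a**2) ** (-k / 2.0)
      print([convolve_circular(power_law, r, sigma=0.5) for r in (0.0, 0.5, 1.0, 2.0)])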

  10. Use of frozen stress in extracting stress intensity factor distributions in three dimensional cracked body problems

    Science.gov (United States)

    Smith, C. W.

    1992-01-01

    The adaptation of the frozen stress photoelastic method to the determination of the distribution of stress intensity factors in three dimensional problems is briefly reviewed. The method is then applied to several engineering problems of practical significance.

  11. A simple hand-held magnet array for efficient and reproducible SABRE hyperpolarisation using manual sample shaking.

    Science.gov (United States)

    Richardson, Peter M; Jackson, Scott; Parrott, Andrew J; Nordon, Alison; Duckett, Simon B; Halse, Meghan E

    2018-07-01

    Signal amplification by reversible exchange (SABRE) is a hyperpolarisation technique that catalytically transfers nuclear polarisation from parahydrogen, the singlet nuclear isomer of H 2 , to a substrate in solution. The SABRE exchange reaction is carried out in a polarisation transfer field (PTF) of tens of gauss before transfer to a stronger magnetic field for nuclear magnetic resonance (NMR) detection. In the simplest implementation, polarisation transfer is achieved by shaking the sample in the stray field of a superconducting NMR magnet. Although convenient, this method suffers from limited reproducibility and cannot be used with NMR spectrometers that do not have appreciable stray fields, such as benchtop instruments. Here, we use a simple hand-held permanent magnet array to provide the necessary PTF during sample shaking. We find that the use of this array provides a 25% increase in SABRE enhancement over the stray field approach, while also providing improved reproducibility. Arrays with a range of PTFs were tested, and the PTF-dependent SABRE enhancements were found to be in excellent agreement with comparable experiments carried out using an automated flow system where an electromagnet is used to generate the PTF. We anticipate that this approach will improve the efficiency and reproducibility of SABRE experiments carried out using manual shaking and will be particularly useful for benchtop NMR, where a suitable stray field is not readily accessible. The ability to construct arrays with a range of PTFs will also enable the rapid optimisation of SABRE enhancement as function of PTF for new substrate and catalyst systems. © 2017 The Authors Magnetic Resonance in Chemistry Published by John Wiley & Sons Ltd.

  12. Prompt Assessment of Global Earthquakes for Response (PAGER): A System for Rapidly Determining the Impact of Earthquakes Worldwide

    Science.gov (United States)

    Earle, Paul S.; Wald, David J.; Jaiswal, Kishor S.; Allen, Trevor I.; Hearne, Michael G.; Marano, Kristin D.; Hotovec, Alicia J.; Fee, Jeremy

    2009-01-01

    Within minutes of a significant earthquake anywhere on the globe, the U.S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system assesses its potential societal impact. PAGER automatically estimates the number of people exposed to severe ground shaking and the shaking intensity at affected cities. Accompanying maps of the epicentral region show the population distribution and estimated ground-shaking intensity. A regionally specific comment describes the inferred vulnerability of the regional building inventory and, when available, lists recent nearby earthquakes and their effects. PAGER's results are posted on the USGS Earthquake Program Web site (http://earthquake.usgs.gov/), consolidated in a concise one-page report, and sent in near real-time to emergency responders, government agencies, and the media. Both rapid and accurate results are obtained through manual and automatic updates of PAGER's content in the hours following significant earthquakes. These updates incorporate the most recent estimates of earthquake location, magnitude, faulting geometry, and first-hand accounts of shaking. PAGER relies on a rich set of earthquake analysis and assessment tools operated by the USGS and contributing Advanced National Seismic System (ANSS) regional networks. A focused research effort is underway to extend PAGER's near real-time capabilities beyond population exposure to quantitative estimates of fatalities, injuries, and displaced population.
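
    The core of the exposure estimate is a binning of population by shaking intensity on co-registered grids. The sketch below shows that aggregation on synthetic placeholder grids; it is not the PAGER implementation, and the intensity bands are simply illustrative MMI bins.

      import numpy as np

      def population_exposure(mmi_grid, pop_grid, bins=(4, 5, 6, 7, 8, 9, 10)):
          # Sum the population falling in each intensity band [lo, hi).
          exposure = {}
          for lo, hi in zip(bins[:-1], bins[1:]):
              mask = (mmi_grid >= lo) & (mmi_grid < hi)
              exposure[f"MMI {lo}-{hi}"] = float(pop_grid[mask].sum())
          return exposure

      # Synthetic, co-registered 100x100 grids (placeholders, not real data).
      rng = np.random.default_rng(0)
      mmi = np.clip(9.0 - 0.05 * np.add.outer(np.arange(100), np.arange(100)) / 2.0, 1, 9)
      pop = rng.integers(0, 500, size=(100, 100))
      print(population_exposure(mmi, pop))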

  13. James Parkinson and his essay on "shaking palsy", two hundred years later.

    Science.gov (United States)

    Palacios-Sánchez, Leonardo; Torres Nupan, Martha; Botero-Meneses, Juan Sebastián

    2017-09-01

    In 1817, British physician James Parkinson published a 66-page document entitled "Essay on the Shaking Palsy". This brief text became a classical and fundamental piece in the history of medicine and, in particular, of neurology. The authors of this article wish to pay tribute to this great pioneer of neurology, 200 years after the publication of his findings, which would, in turn, immortalize his name and give rise to the renaming of the entity in 1860 by Professor Jean Martin Charcot, father of neurology. It would be known henceforth as Parkinson's disease.

  14. In-Plane Strengthening Effect of Prefabricated Concrete Walls on Masonry Structures: Shaking Table Test

    OpenAIRE

    Li, Weiwei; Liu, Weiqing; Wang, Shuguang; Du, Dongsheng

    2017-01-01

    The improvement provided by a new strengthening strategy, installing prefabricated concrete walls on the outer facades, to the dynamic behavior of masonry structures is validated by the shaking table test presented in this paper. We carried out dynamic tests of two geometrically identical five-story reduced-scale models, an unstrengthened and a strengthened masonry model. The experimental analysis encompasses seismic performances such as cracking patterns, failure mechanisms, amplification f...

  15. Optimal Design and Hybrid Control for the Electro-Hydraulic Dual-Shaking Table System

    Directory of Open Access Journals (Sweden)

    Lianpeng Zhang

    2016-08-01

    This paper develops an optimal electro-hydraulic dual-shaking table system with high waveform replication precision. The parameters of the hydraulic cylinders, servo valves, hydraulic supply power, and gravity balance system are designed and optimized in detail. To improve synchronization and tracking control precision, a hybrid control strategy is proposed. Cross-coupled control using a novel sliding mode control based on an adaptive reaching law (ASMC), which can adaptively tune the parameters of the sliding mode control (SMC), is proposed to reduce the synchronization error. To improve the tracking performance, an observer-based inverse control scheme combining a feed-forward inverse model controller and a disturbance observer is proposed. The system model is identified applying the recursive least squares (RLS) algorithm and then the feed-forward inverse controller is designed based on the zero phase error tracking controller (ZPETC) technique. To compensate for disturbances and model errors, the disturbance observer is used in cooperation with the designed inverse controller. The combination of the novel ASMC cross-coupled controller and the proposed observer-based inverse controller can improve the control precision noticeably. The dual-shaking table experiment system is built and various experiments are performed. The experimental results indicate that the developed system with the proposed hybrid control strategy is feasible and efficient and can reduce the tracking errors to 25% and the synchronization error to 16% compared with traditional control schemes.
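
    The model identification step relies on recursive least squares (RLS). A textbook RLS update with a forgetting factor, applied to a simple first-order ARX model on simulated data, is sketched below; it illustrates the algorithm only and is not the authors' exact formulation or model structure.

      import numpy as np

      class RLS:
          # Textbook recursive least squares with forgetting factor lam.
          def __init__(self, n_params, lam=0.99, delta=1e3):
              self.theta = np.zeros(n_params)        # parameter estimate
              self.P = np.eye(n_params) * delta      # covariance (large initial value)
              self.lam = lam

          def update(self, phi, y):
              # phi: regressor vector, y: measured output at this step.
              Pphi = self.P @ phi
              gain = Pphi / (self.lam + phi @ Pphi)
              err = y - phi @ self.theta
              self.theta = self.theta + gain * err
              self.P = (self.P - np.outer(gain, Pphi)) / self.lam
              return self.theta

      # Identify y[k] = a*y[k-1] + b*u[k-1] from simulated data (a=0.8, b=0.5).
      rls, y, rng = RLS(2), 0.0, np.random.default_rng(1)
      for _ in range(200):
          u = rng.standard_normal()
          y_next = 0.8 * y + 0.5 * u + 0.01 * rng.standard_normal()
          rls.update(np.array([y, u]), y_next)
          y = y_next
      print(rls.theta)   # approaches [0.8, 0.5]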

  16. USGS "Did You Feel It?" internet-based macroseismic intensity maps

    Science.gov (United States)

    Wald, D.J.; Quitoriano, V.; Worden, B.; Hopper, M.; Dewey, J.W.

    2011-01-01

    The U.S. Geological Survey (USGS) "Did You Feel It?" (DYFI) system is an automated approach for rapidly collecting macroseismic intensity data from Internet users' shaking and damage reports and generating intensity maps immediately following earthquakes; it has been operating for over a decade (1999-2011). DYFI-based intensity maps made rapidly available through the DYFI system fundamentally depart from more traditional maps made available in the past. The maps are made more quickly, provide more complete coverage and higher resolution, provide for citizen input and interaction, and allow data collection at rates and quantities never before considered. These aspects of Internet data collection, in turn, allow for data analyses, graphics, and ways to communicate with the public, opportunities not possible with traditional data-collection approaches. Yet web-based contributions also pose considerable challenges, as discussed herein. After a decade of operational experience with the DYFI system and users, we document refinements to the processing and algorithmic procedures since DYFI was first conceived. We also describe a number of automatic post-processing tools, operations, applications, and research directions, all of which utilize the extensive DYFI intensity datasets now gathered in near-real time. DYFI can be found online at the website http://earthquake.usgs.gov/dyfi/. © 2011 by the Istituto Nazionale di Geofisica e Vulcanologia.

  17. Local influence of south-east France topography and land cover on the distribution and characteristics of intense rainfall cells

    Science.gov (United States)

    Renard, Florent

    2017-04-01

    The Greater Lyon area is strongly built up, grouping 58 communes and a population of 1.3 million in approximately 500 km2. The flood risk is high as the territory is crossed by two large watercourses and by streams with torrential flow. Floods may also occur in case of runoff after heavy rain or because of a rise in the groundwater level. The whole territory can therefore be affected, and it is necessary to possess in-depth knowledge of the depths, causes and consequences of rainfall to achieve better management of precipitation in urban areas and to reduce flood risk. This study is thus focused on the effects of topography and land cover on the occurrence, intensity and area of intense rainfall cells. They are identified by local radar meteorology (C-band) combined with a processing algorithm running in a geographic information system (GIS), which identified 109,979 weighted mean centres of intense rainfall cells in a sample composed of the five most intense rainfall events from 2001 to 2005. First, analysis of spatial distribution at an overall scale is performed, completed by study at a more detailed scale. The results show that the distribution of high-intensity rainfall cells is clustered. Subsequently, comparison of intense rainfall cells with the topography shows that cell density is closely linked with land slope but that, above all, urbanised zones feature nearly twice as many rainfall cells as farm land or forest, with higher intensities.

  18. Distribution measurement of radiation intensity with optical fiber at narrow space

    Energy Technology Data Exchange (ETDEWEB)

    Mori, Chizuo [Nagoya Univ. (Japan). School of Engineering

    1998-07-01

    Recently, in the field of radiation measurement, optical fibers and scintillating fibers have begun to be used. In order to investigate a new application of the optical fiber to radiation measurement, a lithium compound serving as a neutron converter and a ZnS(Ag) scintillator are kneaded with an epoxy-type adhesive, and a very small amount of the mixture is coated on the end of an optical fiber 1 to 2 mm in diameter, which is then overcoated with black paint or an aluminum cap for light shielding to produce a thermal neutron detector. The thermal neutron detector can measure the neutron flux distribution very rapidly and with high position resolution when moved automatically under computer control. The method can selectively measure the radiation of interest, such as thermal neutrons, fast neutrons, γ-rays, and so forth, by changing the neutron converter. The developed fiber method could be widely used for measuring neutron and γ-ray intensity distributions at fine intervals in nuclear radiation facilities, such as the vicinity of accelerator facilities and medical radiation facilities. (G.K.)

  19. Nanoscale shift of the intensity distribution of dipole radiation.

    Science.gov (United States)

    Shu, Jie; Li, Xin; Arnoldus, Henk F

    2009-02-01

    The energy flow lines (field lines of the Poynting vector) for radiation emitted by a dipole are in general curves, rather than straight lines. For a linear dipole the field lines are straight, but when the dipole moment of a source rotates, the field lines wind numerous times around an axis, which is perpendicular to the plane of rotation, before asymptotically approaching a straight line. We consider an elliptical dipole moment, representing the most general state of oscillation, and this includes the linear dipole as a special case. Due to the spiraling near the source, for the case of a rotating dipole moment, the field lines in the far field are displaced with respect to the outward radial direction, and this leads to a shift of the intensity distribution of the radiation in the far field. This shift is shown to be independent of the distance to the source and, although of nanoscale dimension, should be experimentally observable.

  20. Damage Assessment of a Full-Scale Six-Story Wood-Frame Building Following Triaxial Shake Table Tests

    Science.gov (United States)

    John W. van de Lindt; Rakesh Gupta; Shiling Pei; Kazuki Tachibana; Yasuhiro Araki; Douglas Rammer; Hiroshi Isoda

    2012-01-01

    In the summer of 2009, a full-scale midrise wood-frame building was tested under a series of simulated earthquakes on the world's largest shake table in Miki City, Japan. The objective of this series of tests was to validate a performance-based seismic design approach by qualitatively and quantitatively examining the building's seismic performance in terms of...

  1. Generation of intensity duration frequency curves and intensity temporal variability pattern of intense rainfall for Lages/SC

    Directory of Open Access Journals (Sweden)

    Célio Orli Cardoso

    2014-04-01

    The objective of this work was to analyze the frequency distribution and intensity temporal variability of intense rainfall for Lages/SC from daily pluviograph data. Data on annual series of maximum rainfall from rain gauges of the CAV-UDESC Weather Station in Lages/SC were used from 2000 to 2009. The Gumbel statistical distribution was applied in order to obtain the rainfall depth and intensity for the following return periods: 2, 5, 10, 15 and 20 years. Results provided intensity-duration-frequency (I-D-F) curves for those return periods, as well as the I-D-F equation i = 2050·Tr^0.20·(t + 30)^-0.89, where i is the intensity, Tr the return period and t the rainfall duration. For the temporal variability pattern of intensity along the rainfall duration, the convective (advanced) pattern was predominant, with larger rainfall depths in the first half of the duration. The same pattern occurred most often in spring and summer.
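
    The fitted I-D-F relation can be applied directly once the units are fixed; the abstract does not state them explicitly, so the sketch below assumes the conventional choices (i in mm/h, Tr in years, t in minutes).

      def idf_lages(tr_years, t_minutes):
          # I-D-F equation reported for Lages/SC: i = 2050 * Tr^0.20 * (t + 30)^-0.89
          return 2050.0 * tr_years**0.20 * (t_minutes + 30.0)**-0.89

      for tr in (2, 5, 10, 20):
          for t in (10, 30, 60):
              print(f"Tr={tr:>2} yr, t={t:>3} min: i = {idf_lages(tr, t):6.1f} mm/h")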

  2. MRI intensity inhomogeneity correction by combining intensity and spatial information

    International Nuclear Information System (INIS)

    Vovk, Uros; Pernus, Franjo; Likar, Bostjan

    2004-01-01

    We propose a novel fully automated method for retrospective correction of intensity inhomogeneity, which is an undesired phenomenon in many automatic image analysis tasks, especially if quantitative analysis is the final goal. Besides most commonly used intensity features, additional spatial image features are incorporated to improve inhomogeneity correction and to make it more dynamic, so that local intensity variations can be corrected more efficiently. The proposed method is a four-step iterative procedure in which a non-parametric inhomogeneity correction is conducted. First, the probability distribution of image intensities and corresponding second derivatives is obtained. Second, intensity correction forces, condensing the probability distribution along the intensity feature, are computed for each voxel. Third, the inhomogeneity correction field is estimated by regularization of all voxel forces, and fourth, the corresponding partial inhomogeneity correction is performed. The degree of inhomogeneity correction dynamics is determined by the size of regularization kernel. The method was qualitatively and quantitatively evaluated on simulated and real MR brain images. The obtained results show that the proposed method does not corrupt inhomogeneity-free images and successfully corrects intensity inhomogeneity artefacts even if these are more dynamic

  3. Generalized Kapchinskij-Vladimirskij Distribution and Envelope Equation for High-intensity Beams in a Coupled Transverse Focusing Lattice

    International Nuclear Information System (INIS)

    Qin, Hong; Chung, Moses; Davidson, Ronald C.

    2009-01-01

    In an uncoupled lattice, the Kapchinskij-Vladimirskij (KV) distribution function first analyzed in 1959 is the only known exact solution of the nonlinear Vlasov-Maxwell equations for high-intensity beams including self-fields in a self-consistent manner. The KV solution is generalized here to high-intensity beams in a coupled transverse lattice using the recently developed generalized Courant-Snyder invariant for coupled transverse dynamics. This solution projects to a rotating, pulsating elliptical beam in transverse configuration space, determined by the generalized matrix envelope equation.

  4. pH-metric solubility. 2: correlation between the acid-base titration and the saturation shake-flask solubility-pH methods.

    Science.gov (United States)

    Avdeef, A; Berger, C M; Brownell, C

    2000-01-01

    The objective of this study was to compare the results of a normal saturation shake-flask method to a new potentiometric acid-base titration method for determining the intrinsic solubility and the solubility-pH profiles of ionizable molecules, and to report the solubility constants determined by the latter technique. The solubility-pH profiles of twelve generic drugs (atenolol, diclofenac.Na, famotidine, flurbiprofen, furosemide, hydrochlorothiazide, ibuprofen, ketoprofen, labetolol.HCl, naproxen, phenytoin, and propranolol.HCl), with solubilities spanning over six orders of magnitude, were determined both by the new pH-metric method and by a traditional approach (24 hr shaking of saturated solutions, followed by filtration, then HPLC assaying with UV detection). The 212 separate saturation shake-flask solubility measurements and those derived from 65 potentiometric titrations agreed well. The analysis produced the correlation equation: log(1/S)titration = -0.063(+/- 0.032) + 1.025(+/- 0.011) log(1/S)shake-flask, s = 0.20, r2 = 0.978. The potentiometrically-derived intrinsic solubilities of the drugs were: atenolol 13.5 mg/mL, diclofenac.Na 0.82 microg/mL, famotidine 1.1 mg/ mL, flurbiprofen 10.6 microg/mL, furosemide 5.9 microg/mL, hydrochlorothiazide 0.70 mg/mL, ibuprofen 49 microg/mL, ketoprofen 118 microg/mL, labetolol.HCl 128 microg/mL, naproxen 14 microg/mL, phenytoin 19 microg/mL, and propranolol.HCl 70 microg/mL. The new potentiometric method was shown to be reliable for determining the solubility-pH profiles of uncharged ionizable drug substances. Its speed compared to conventional equilibrium measurements, its sound theoretical basis, its ability to generate the full solubility-pH profile from a single titration, and its dynamic range (currently estimated to be seven orders of magnitude) make the new pH-metric method an attractive addition to traditional approaches used by preformulation and development scientists. It may be useful even to discovery
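
    The reported regression can be used to translate a shake-flask solubility into the value expected from the pH-metric titration, provided S is expressed in the same (molar) units on both sides. A minimal sketch, with an illustrative input value, is given below.

      import math

      def predicted_titration_solubility(s_shake_flask_molar):
          # Reported correlation:
          #   log(1/S)_titration = -0.063 + 1.025 * log(1/S)_shake-flask
          log_inv_s = -0.063 + 1.025 * math.log10(1.0 / s_shake_flask_molar)
          return 10.0 ** (-log_inv_s)

      # Example: a shake-flask intrinsic solubility of 1e-4 M (illustrative value).
      print(predicted_titration_solubility(1e-4))   # about 9.2e-5 M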

  5. Synergistic Effects of Unintended Pregnancy and Young Motherhood on Shaking and Smothering of Infants among Caregivers in Nagoya City, Japan

    Directory of Open Access Journals (Sweden)

    Aya Isumi

    2017-09-01

    Background: Shaking and smothering in response to infant crying are forms of child abuse that often result in death. Unintended pregnancy and young motherhood are risk factors for such child maltreatment that are often comorbid, but few studies have examined their synergistic effect on shaking and smothering of infants. We examined the synergistic effects of unintended pregnancy and young motherhood on shaking and smothering among caregivers of infants in Japan. Methods: In this retrospective cohort study, a questionnaire was administered to caregivers enrolled for a health check for 3- to 4-month-old infants between October 2013 and February 2014 in Nagoya City, Japan. The questionnaire data were linked to those from pregnancy notification forms registered at municipalities and included information on women's age and feelings about their pregnancy (N = 4,159). Data were analyzed using logistic regression analysis in 2016. Results: Shaking and smothering of 3- to 4-month-old infants occurred at least once in the past month in 2.0 and 1.5% of cases, respectively. Of all participants, 24.8% reported unintended pregnancy while 7.3% were younger than 25 years old. Infants of young mothers (under 25 years old) with unintended pregnancy were 2.77 [95% confidence interval (CI): 1.15–6.68] and 5.61 (95% CI: 2.40–13.1) times more likely to be shaken and smothered, respectively, than those of older mothers with intended pregnancy. In addition, the odds ratio of young mothers with unintended pregnancy regarding smothering was significantly higher than that of older mothers with unintended pregnancy (odds ratio: 2.12; p = 0.02). Conclusion: Our findings suggest a synergistic effect of unintended pregnancy and young motherhood on smothering. Infants of young mothers with unintended pregnancy are at greater risk of abuse, especially smothering. Prevention strategies are required for young women with unintended pregnancies.
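
    The synergistic effect described above corresponds to an interaction term in the logistic regression. The sketch below fits such a model with statsmodels on synthetic data; the column names and effect sizes are hypothetical and are not the Nagoya City data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Synthetic data with hypothetical column names (NOT the study data):
      # young = mother under 25, unintended = unintended pregnancy,
      # smothering = reported smothering at least once in the past month.
      rng = np.random.default_rng(0)
      n = 4000
      young = rng.binomial(1, 0.07, n)
      unintended = rng.binomial(1, 0.25, n)
      logit_p = -4.0 + 0.3 * young + 0.5 * unintended + 0.8 * young * unintended
      smothering = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

      df = pd.DataFrame(dict(young=young, unintended=unintended, smothering=smothering))
      # "young * unintended" expands to the main effects plus their interaction term.
      model = smf.logit("smothering ~ young * unintended", data=df).fit(disp=False)
      print(np.exp(model.params))   # odds ratios, including the interaction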

  6. The O-mannosylation and production of recombinant APA (45/47 kDa) protein from Mycobacterium tuberculosis in Streptomyces lividans is affected by culture conditions in shake flasks

    Directory of Open Access Journals (Sweden)

    Gamboa-Suasnavart Ramsés A

    2011-12-01

    Background: The Ala-Pro-rich O-glycoprotein known as the 45/47 kDa or APA antigen from Mycobacterium tuberculosis is an immunodominant adhesin restricted to mycobacterium genus and has been proposed as an alternative candidate to generate a new vaccine against tuberculosis or for diagnosis kits. In this work, the recombinant O-glycoprotein APA was produced by the non-pathogenic filamentous bacteria Streptomyces lividans, evaluating three different culture conditions. This strain is known for its ability to produce heterologous proteins in a shorter time compared to M. tuberculosis. Results: Three different shake flask geometries were used to provide different shear and oxygenation conditions; and the impact of those conditions on the morphology of S. lividans and the production of rAPA was characterized and evaluated. Small unbranched free filaments and mycelial clumps were found in baffled and coiled shake flasks, but one order of magnitude larger pellets were found in conventional shake flasks. The production of rAPA is around 3 times higher in small mycelia than in larger pellets, most probably due to difficulties in mass transfer inside pellets. Moreover, there are four putative sites of O-mannosylation in native APA, one of which is located at the carboxy-terminal region. The carbohydrate composition of this site was determined for rAPA by mass spectrometry analysis, and was found to contain different glycoforms depending on culture conditions. Up to two mannose residues were found in cultures carried out in conventional shake flasks, and up to five mannose residues were determined in coiled and baffled shake flasks. Conclusions: The shear and/or oxygenation parameters determine the bacterial morphology, the productivity, and the O-mannosylation of rAPA in S. lividans. As demonstrated here, culture conditions have to be carefully controlled in order to obtain recombinant O-glycosylated proteins with similar "quality" in bacteria

  7. The O-mannosylation and production of recombinant APA (45/47 KDa) protein from Mycobacterium tuberculosis in Streptomyces lividans is affected by culture conditions in shake flasks.

    Science.gov (United States)

    Gamboa-Suasnavart, Ramsés A; Valdez-Cruz, Norma A; Cordova-Dávalos, Laura E; Martínez-Sotelo, José A; Servín-González, Luis; Espitia, Clara; Trujillo-Roldán, Mauricio A

    2011-12-20

    The Ala-Pro-rich O-glycoprotein known as the 45/47 kDa or APA antigen from Mycobacterium tuberculosis is an immunodominant adhesin restricted to mycobacterium genus and has been proposed as an alternative candidate to generate a new vaccine against tuberculosis or for diagnosis kits. In this work, the recombinant O-glycoprotein APA was produced by the non-pathogenic filamentous bacteria Streptomyces lividans, evaluating three different culture conditions. This strain is known for its ability to produce heterologous proteins in a shorter time compared to M. tuberculosis. Three different shake flask geometries were used to provide different shear and oxygenation conditions; and the impact of those conditions on the morphology of S. lividans and the production of rAPA was characterized and evaluated. Small unbranched free filaments and mycelial clumps were found in baffled and coiled shake flasks, but one order of magnitude larger pellets were found in conventional shake flasks. The production of rAPA is around 3 times higher in small mycelia than in larger pellets, most probably due to difficulties in mass transfer inside pellets. Moreover, there are four putative sites of O-mannosylation in native APA, one of which is located at the carboxy-terminal region. The carbohydrate composition of this site was determined for rAPA by mass spectrometry analysis, and was found to contain different glycoforms depending on culture conditions. Up to two mannose residues were found in cultures carried out in conventional shake flasks, and up to five mannose residues were determined in coiled and baffled shake flasks. The shear and/or oxygenation parameters determine the bacterial morphology, the productivity, and the O-mannosylation of rAPA in S. lividans. As demonstrated here, culture conditions have to be carefully controlled in order to obtain recombinant O-glycosylated proteins with similar "quality" in bacteria, particularly, if the protein activity depends on the

  8. Energy and intensity distributions of 0.279 MeV multiply Compton-scattered photons in soldering material

    International Nuclear Information System (INIS)

    Singh, Manpreet; Singh, Gurvinderjit; Singh, Bhajan; Sandhu, B.S.

    2007-01-01

    An inverse response matrix converts the observed pulse-height distribution of a NaI(Tl) scintillation detector to a photon spectrum. This also results in extraction of the intensity distribution of multiply scattered events originating from interactions of 0.279 MeV photons with thick targets of soldering material. The observed pulse-height distributions are a composite of singly and multiply scattered events in addition to bremsstrahlung- and Rayleigh-scattered events. To evaluate the contribution of multiply scattered events, the spectrum of singly scattered events contributing to the inelastic Compton peak is reconstructed analytically. The optimum thickness (saturation depth), at which the number of multiply scattered events saturates, has been measured. Monte Carlo calculations also support the present results.
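
    Applying an inverse response matrix is, in its simplest form, the solution of a linear system relating the incident photon spectrum to the measured pulse-height distribution. The sketch below uses a small synthetic response matrix and a least-squares solve as a stand-in for the regularized unfolding a real NaI(Tl) analysis would require; all numbers are illustrative.

      import numpy as np

      def unfold_spectrum(response, pulse_height):
          # Recover the incident photon spectrum from the measured pulse-height
          # distribution by solving  response @ spectrum = pulse_height.
          spectrum, *_ = np.linalg.lstsq(response, pulse_height, rcond=None)
          return spectrum

      # Synthetic 5-channel example: each column smears a monoenergetic line
      # into a full-energy peak (diagonal) plus a continuum in lower channels.
      R = np.array([[0.70, 0.15, 0.08, 0.05, 0.03],
                    [0.00, 0.70, 0.15, 0.08, 0.05],
                    [0.00, 0.00, 0.70, 0.15, 0.08],
                    [0.00, 0.00, 0.00, 0.70, 0.15],
                    [0.00, 0.00, 0.00, 0.00, 0.70]])
      true_spec = np.array([0.0, 50.0, 200.0, 80.0, 10.0])
      measured = R @ true_spec
      print(unfold_spectrum(R, measured))   # recovers true_spec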

  9. Potassium biphthalate buffer for pH control to optimize glycosyl hydrolase production in shake flasks using filamentous fungi

    Directory of Open Access Journals (Sweden)

    Patrícia dos Santos Costa

    The optimization of culture medium with statistical methods is widely used in filamentous fungi glycosyl hydrolase production. The implementation of such methodology in bioreactors is very expensive as it requires several pH-controlled systems operating in parallel in order to test a large number of culture media components. The objective of this study was to evaluate potassium biphthalate buffer for pH control, which allows the optimization studies to be performed in shake flasks. The results have shown that buffering the culture medium with 0.1 M potassium biphthalate allowed pH control, resulting in a decrease of the standard deviation of triplicates for pH and glycosyl hydrolase activity measurements. The use of this buffer allowed shake flask culture media optimization of enzyme production by Trichoderma harzianum, increasing the cellulase activity by more than 2 times compared to standard unbuffered culture medium. The same buffer can be used for culture media optimization of other fungi, such as Penicillium echinulatum.

  10. Combination of On-line pH and Oxygen Transfer Rate Measurement in Shake Flasks by Fiber Optical Technique and Respiration Activity MOnitoring System (RAMOS)

    Directory of Open Access Journals (Sweden)

    Jochen Büchs

    2007-12-01

    Shake flasks are commonly used for process development in the biotechnology industry. For this purpose a lot of information is required about the growth conditions during the fermentation experiments. Therefore, Anderlei et al. developed the RAMOS technology [1, 2], which provides on-line oxygen and carbon dioxide transfer rates in shake flasks. Besides oxygen consumption, the pH in the medium also plays an important role for the successful cultivation of micro-organisms and for process development. For online pH measurement, fiber optical methods based on fluorophores are available. Here a combination of the on-line Oxygen Transfer Rate (OTR) measurements in the RAMOS device with an on-line, fiber optical pH measurement is presented. To demonstrate the application of the combined measurement techniques, Escherichia coli cultivations were performed and on-line pH measurements were compared with off-line samples. The combination of on-line OTR and pH measurements gives a lot of information about the cultivation and, therefore, it is a powerful technique for monitoring shake flask experiments as well as for process development.

  11. The ShakeOut scenario: A hypothetical Mw7.8 earthquake on the Southern San Andreas Fault

    Science.gov (United States)

    Porter, K.; Jones, L.; Cox, D.; Goltz, J.; Hudnut, K.; Mileti, D.; Perry, S.; Ponti, D.; Reichle, M.; Rose, A.Z.; Scawthorn, C.R.; Seligson, H.A.; Shoaf, K.I.; Treiman, J.; Wein, A.

    2011-01-01

    In 2008, an earthquake-planning scenario document was released by the U.S. Geological Survey (USGS) and California Geological Survey that hypothesizes the occurrence and effects of a Mw7.8 earthquake on the southern San Andreas Fault. It was created by more than 300 scientists and engineers. Fault offsets reach 13 m and up to 8 m at lifeline crossings. Physics-based modeling was used to generate maps of shaking intensity, with peak ground velocities of 3 m/sec near the fault and exceeding 0.5 m/sec over 10,000 km2. A custom HAZUS-MH analysis and 18 special studies were performed to characterize the effects of the earthquake on the built environment. The scenario posits 1,800 deaths and 53,000 injuries requiring emergency room care. Approximately 1,600 fires are ignited, resulting in the destruction of 200 million square feet of the building stock, the equivalent of 133,000 single-family homes. Fire contributes $87 billion in property and business interruption loss, out of the total $191 billion in economic loss, with most of the rest coming from shaking-related building and content damage ($46 billion) and business interruption loss from water outages ($24 billion). Emergency response activities are depicted in detail, in an innovative grid showing activities versus time, a new format introduced in this study. © 2011, Earthquake Engineering Research Institute.

  12. Final-photon angular distributions in Compton double-ionization

    International Nuclear Information System (INIS)

    Kornberg, M.A.

    1999-01-01

    Angular distributions of the scattered photon in two-electron ionization of helium by Compton scattering are reported. Our calculations are performed as a direct integration over Compton profiles. We show that backward scattering is adequately described using an uncorrelated final-state approximation, as compared with impulse approximation (IA) results. The relation dσc²⁺/dΩ = Rc dσc⁺/dΩ is fulfilled within IA at high photon energies, with Rc the asymptotic shake-off ratio. (orig.)

  13. Scale-up from shake flasks to bioreactor, based on power input and Streptomyces lividans morphology, for the production of recombinant APA (45/47 kDa protein) from Mycobacterium tuberculosis.

    Science.gov (United States)

    Gamboa-Suasnavart, Ramsés A; Marín-Palacio, Luz D; Martínez-Sotelo, José A; Espitia, Clara; Servín-González, Luis; Valdez-Cruz, Norma A; Trujillo-Roldán, Mauricio A

    2013-08-01

    Culture conditions in shake flasks affect filamentous Streptomyces lividans morphology, as well as the productivity and O-mannosylation of the recombinant Ala-Pro-rich O-glycoprotein (known as the 45/47 kDa or APA antigen) from Mycobacterium tuberculosis. In order to scale up from previously reported shake flasks to a bioreactor, data from the literature on the effect of agitation on the morphology of Streptomyces strains were used to obtain gassed volumetric power input values that yield a morphology of S. lividans in the bioreactor similar to the morphology previously reported in coiled/baffled shake flasks by our group. Morphology of S. lividans was successfully scaled up, obtaining similar mycelial sizes at both scales, with diameters of 0.21 ± 0.09 mm in baffled and coiled shake flasks and 0.15 ± 0.01 mm in the bioreactor. Moreover, the specific growth rate was successfully scaled up (0.09 ± 0.02 and 0.12 ± 0.01 h(-1), for bioreactors and flasks, respectively), and the recombinant protein productivity, measured by densitometry, as well. More interestingly, the quality of the recombinant glycoprotein, measured as the amount of mannoses attached to the C-terminal of APA, was also scaled up, with up to five mannose residues in cultures carried out in shake flasks and six in the bioreactor. However, final biomass concentration was not similar, indicating that although the process can be scaled up using the power input, other factors such as oxygen transfer rate, tip speed, or the energy dissipation/circulation function can influence bacterial metabolism.
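
    As an aside, scale-up at matched gassed power input per volume is often sketched with the standard turbulent stirred-tank relation P = Np·ρ·N³·D⁵ plus a gassing correction. The snippet below is only a generic illustration of that calculation; the power number, gassing factor, vessel size, and target P/V are assumptions, not values from the study.

        RHO = 1000.0    # broth density, kg/m^3 (assumed)
        NP = 5.0        # impeller power number, typical of a Rushton turbine (assumed)
        K_GASSED = 0.6  # crude gassed/ungassed power ratio (assumed 0.5-0.7)

        def gassed_power(n_rps, d_imp):
            """Gassed power draw Pg ~= k * Np * rho * N^3 * D^5 (turbulent regime)."""
            return K_GASSED * NP * RHO * n_rps**3 * d_imp**5

        # Target: reproduce a shake-flask-equivalent power input of 1.0 kW/m^3 (assumed)
        target_pg_per_v = 1000.0   # W/m^3
        v_reactor = 0.005          # 5-L working volume, m^3
        d_imp = 0.06               # impeller diameter, m (assumed)

        # Solve Pg/V = target for the stirrer speed N
        n = (target_pg_per_v * v_reactor / (K_GASSED * NP * RHO * d_imp**5)) ** (1.0 / 3.0)
        print(f"required stirrer speed ~ {n:.1f} 1/s ({n * 60:.0f} rpm)")
        print(f"check: Pg/V ~ {gassed_power(n, d_imp) / v_reactor:.0f} W/m^3")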

  14. A Field-Shaking System to Reduce the Screening Current-Induced Field in the 800-MHz HTS Insert of the MIT 1.3-GHz LTS/HTS NMR Magnet: A Small-Model Study.

    Science.gov (United States)

    Lee, Jiho; Park, Dongkeun; Michael, Philip C; Noguchi, So; Bascuñán, Juan; Iwasa, Yukikazu

    2018-04-01

    In this paper, we present experimental results of a small-model study, from which we plan to develop and apply a full-scale field-shaking system to reduce the screening current-induced field (SCF) in the 800-MHz HTS Insert (H800) of the MIT 1.3-GHz LTS/HTS NMR magnet (1.3G) currently under construction; the H800 is composed of 3 nested coils, each a stack of no-insulation (NI) REBCO double-pancakes. In 1.3G, H800 is the chief source of a large error field generated by its own SCF. To study the effectiveness of the field-shaking technique, we used two NI REBCO double-pancakes, one from Coil 2 (HCoil2) and one from Coil 3 (HCoil3) of the 3 H800 coils, and placed them in the bore of a 5-T/300-mm room-temperature bore low-temperature superconducting (LTS) background magnet. The background magnet is used not only to induce the SCF in the double-pancakes but also to reduce it by the field-shaking technique. For each run, we induced the SCF in the double-pancakes at an axial location where the external radial field Br > 0, then, for the field-shaking, moved them to another location where the external axial field Bz ≫ Br. Due to the geometry of H800 and L500, the top double-pancakes of the 3 H800 coils will experience a considerable radial magnetic field perpendicular to the REBCO tape surface. To examine the effect of the field-shaking on the SCF, we tested each NI REBCO double-pancake in the absence or presence of a radial field. In this paper, we report 77-K experimental results, an analysis of the effect, and a few significant remarks on the field-shaking.

  15. Ground-motion modeling of the 1906 San Francisco earthquake, part I: Validation using the 1989 Loma Prieta earthquake

    Science.gov (United States)

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.

    2008-01-01

    We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitudes and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) unit (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI unit (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.
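
    The rough equivalence between the quoted peak-velocity underpredictions and fractions of an MMI unit can be sanity-checked with a generic PGV-to-intensity regression. The coefficients below follow the commonly cited Wald et al. (1999)-style relation MMI ≈ 3.47 log10(PGV) + 2.35 and are quoted from memory as an assumption, not as the relation used in the paper.

        import math

        def delta_mmi(pgv_ratio):
            """Change in MMI implied by a multiplicative change in PGV (assumed regression)."""
            return 3.47 * math.log10(pgv_ratio)

        for underprediction in (0.25, 0.35, 0.16):
            ratio = 1.0 / (1.0 - underprediction)   # observed / simulated peak velocity
            print(f"{underprediction:.0%} PGV underprediction -> ~{delta_mmi(ratio):.2f} MMI units")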

  16. Relationship Study on Land Use Spatial Distribution Structure and Energy-Related Carbon Emission Intensity in Different Land Use Types of Guangdong, China, 1996–2008

    Directory of Open Access Journals (Sweden)

    Yi Huang

    2013-01-01

    Full Text Available This study discusses the relationship between land use spatial distribution structure and energy-related carbon emission intensity in Guangdong during 1996–2008. We quantified the spatial distribution structure of five land use types, including agricultural land, industrial land, residential and commercial land, traffic land, and other land, by applying the spatial Lorenz curve and the Gini coefficient. The corresponding energy-related carbon emissions in each type of land were then calculated for the study period. By building suitable regression models, we found that the concentration degree of industrial land is negatively correlated with carbon emission intensity in the long term, whereas the concentration degree is positively correlated with carbon emission intensity for agricultural land, residential and commercial land, traffic land, and other land. The results also indicate that land use spatial distribution structure affects carbon emission intensity more strongly than energy efficiency and production efficiency do. These conclusions provide a valuable reference for developing comprehensive policies for energy conservation and carbon emission reduction from a new perspective.
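
    For readers unfamiliar with the concentration measure used here, the sketch below shows one common way to compute a spatial Gini coefficient from a Lorenz curve of land-use shares ranked against area shares. The district shares are invented for illustration and do not come from the study.

        import numpy as np

        def spatial_gini(land_use_shares, area_shares):
            """Gini coefficient of a land-use type's shares ranked against area shares."""
            land = np.asarray(land_use_shares, dtype=float)
            area = np.asarray(area_shares, dtype=float)
            order = np.argsort(land / area)                 # rank districts by concentration
            x = np.concatenate([[0.0], np.cumsum(area[order])])
            y = np.concatenate([[0.0], np.cumsum(land[order])])
            auc = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0   # area under the Lorenz curve
            return 1.0 - 2.0 * auc

        # Hypothetical example: industrial land concentrated in one of four equal-area districts
        area_shares = [0.25, 0.25, 0.25, 0.25]
        industrial_shares = [0.05, 0.10, 0.25, 0.60]
        print(f"spatial Gini coefficient ~ {spatial_gini(industrial_shares, area_shares):.2f}")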

  17. James Parkinson and his essay on “shaking palsy”, two hundred years later

    Directory of Open Access Journals (Sweden)

    Leonardo Palacios-Sánchez

    Full Text Available In 1817, British physician James Parkinson published a 66-page document entitled “Essay on the Shaking Palsy”. This brief text became a classical and fundamental piece in the history of medicine and, in particular, of neurology. The authors of this article wish to pay tribute to this great pioneer of neurology, 200 years after the publication of his findings, which would, in turn, immortalize his name and give rise to the renaming of the entity in 1860 by Professor Jean-Martin Charcot, father of neurology. It would be known henceforth as Parkinson’s disease.

  18. Mobility of solid vortex matter in 'shaking' ac magnetic fields of variable amplitude

    International Nuclear Information System (INIS)

    Moreno, A.J.; Valenzuela, S.O.; Pasquini, G.; Bekeris, V.

    2004-01-01

    The vortex solid in high temperature superconductors exhibits several regimes and dynamical behaviors. A temporally symmetric ac magnetic field (e.g. sinusoidal, square, triangular) can increase the vortex lattice mobility, and a temporally asymmetric one (e.g. sawtooth) can decrease it. In this work, we study the mobility of the vortex solid as a function of the amplitude of a symmetric 'shaking' ac field when it is applied to previously prepared high- and low-mobility configurations. This study was carried out in high quality twinned YBCO single crystals, and vortex mobility was studied through ac susceptibility measurements.

  19. Analysis on Two Typical Landslide Hazard Phenomena in The Wenchuan Earthquake by Field Investigations and Shaking Table Tests

    Directory of Open Access Journals (Sweden)

    Changwei Yang

    2015-08-01

    Full Text Available Based on our field investigations of landslide hazards in the Wenchuan earthquake, some findings can be reported: (1) multi-aspect terrain facing empty isolated mountains and thin ridges reacted intensely to the earthquake and was seriously damaged; (2) the slope angles of most landslides were larger than 45°. Considering the above disaster phenomena, the reasons are analyzed based on shaking table tests of one-sided, two-sided and four-sided slopes. The analysis results show that: (1) the amplification of the peak accelerations of four-sided slopes is stronger than that of two-sided slopes, while that of the one-sided slope is the weakest, which can indirectly explain why the damage to such terrain was most serious; (2) the amplification of the peak accelerations gradually increases as the slope angle increases, with two inflection points at slope angles of 45° and 50°, respectively, which can explain the seismic phenomenon whereby landslide hazards mainly occur on slopes steeper than 45°. The amplification along the slope strike direction is basically consistent, and the step is smooth.

  20. No Additional Benefits of Block- Over Evenly-Distributed High-Intensity Interval Training within a Polarized Microcycle.

    Science.gov (United States)

    McGawley, Kerry; Juudas, Elisabeth; Kazior, Zuzanna; Ström, Kristoffer; Blomstrand, Eva; Hansson, Ola; Holmberg, Hans-Christer

    2017-01-01

    Introduction: The current study aimed to investigate the responses to block- versus evenly-distributed high-intensity interval training (HIT) within a polarized microcycle. Methods: Twenty well-trained junior cross-country skiers (10 males, age 17.6 ± 1.5 and 10 females, age 17.3 ± 1.5) completed two, 3-week periods of training (EVEN and BLOCK) in a randomized, crossover-design study. In EVEN, 3 HIT sessions (5 × 4-min of diagonal-stride roller-skiing) were completed at a maximal sustainable intensity each week while low-intensity training (LIT) was distributed evenly around the HIT. In BLOCK, the same 9 HIT sessions were completed in the second week while only LIT was completed in the first and third weeks. Heart rate (HR), session ratings of perceived exertion (sRPE), and perceived recovery (pREC) were recorded for all HIT and LIT sessions, while distance covered was recorded for each HIT interval. The recovery-stress questionnaire for athletes (RESTQ-Sport) was completed weekly. Before and after EVEN and BLOCK, resting saliva and muscle samples were collected and an incremental test and 600-m time-trial (TT) were completed. Results: Pre- to post-testing revealed no significant differences between EVEN and BLOCK for changes in resting salivary cortisol, testosterone, or IgA, or for changes in muscle capillary density, fiber area, fiber composition, enzyme activity (CS, HAD, and PFK) or the protein content of VEGF or PGC-1α. Neither were any differences observed in the changes in skiing economy, [Formula: see text] or 600-m time-trial performance between interventions. These findings were coupled with no significant differences between EVEN and BLOCK for distance covered during HIT, summated HR zone scores, total sRPE training load, overall pREC or overall recovery-stress state. However, 600-m TT performance improved from pre- to post-training, irrespective of intervention ( P = 0.003), and a number of hormonal and muscle biopsy markers were also significantly

  1. Sociospatial distribution of access to facilities for moderate and vigorous intensity physical activity in Scotland by different modes of transport

    Directory of Open Access Journals (Sweden)

    Lamb Karen E

    2012-07-01

    Full Text Available Abstract Background People living in neighbourhoods of lower socioeconomic status have been shown to have higher rates of obesity and a lower likelihood of meeting physical activity recommendations than their more affluent counterparts. This study examines the sociospatial distribution of access to facilities for moderate or vigorous intensity physical activity in Scotland and whether such access differs by the mode of transport available and by Urban Rural Classification. Methods A database of all fixed physical activity facilities was obtained from the national agency for sport in Scotland. Facilities were categorised into light, moderate and vigorous intensity activity groupings before being mapped. Transport networks were created to assess the number of each type of facility accessible from the population weighted centroid of each small area in Scotland on foot, by bicycle, by car and by bus. Multilevel modelling was used to investigate the distribution of the number of accessible facilities by small area deprivation within urban, small town and rural areas separately, adjusting for population size and local authority. Results Prior to adjustment for Urban Rural Classification and local authority, the median number of accessible facilities for moderate or vigorous intensity activity increased with increasing deprivation from the most affluent or second most affluent quintile to the most deprived for all modes of transport. However, after adjustment, the modelling results suggest that those in more affluent areas have significantly higher access to moderate and vigorous intensity facilities by car than those living in more deprived areas. Conclusions The sociospatial distributions of access to facilities for both moderate intensity and vigorous intensity physical activity were similar. However, the results suggest that those living in the most affluent neighbourhoods have poorer access to facilities of either type that can be reached on foot

  2. Verification of SORD, and Application to the TeraShake Scenario

    Science.gov (United States)

    Ely, G. P.; Day, S.; Minster, J.

    2007-12-01

    The Support Operator Rupture Dynamics (SORD) code provides a highly scalable (up to billions of nodes) computational tool for modeling spontaneous rupture on a non-planar fault surface embedded in a heterogeneous medium with surface topography. SORD successfully performs the SCEC Rupture Dynamics Code Validation Project tests, and we have undertaken further dynamic rupture tests assessing the effects of distorted hexahedral meshes on code accuracy. We generate a family of distorted meshes by simple shearing (applied both parallel and normal to the fault plane) of an initially Cartesian mesh. For shearing normal to the fault, the shearing angle was varied, up to a maximum of 73 degrees. For SCEC Validation Problem 3, grid-induced errors increase with mesh-shear angle, with the logarithm of error approximately proportional to angle over the range tested. At 73 degrees, RMS misfits are about 10% for peak slip rate, and 0.5% for both rupture time and total slip, indicating that the method -- which up to now we have applied mainly to near-vertical strike-slip faulting -- is also capable of handling geometries appropriate to low-angle surface-rupturing thrust earthquakes. The SORD code was used to reexamine the TeraShake 2 dynamics simulations of a M7.7 earthquake on the southern San Andreas Fault. Relative to the original (Olsen et al., 2007) TeraShake 2 simulations, our spontaneous rupture models find decreased peak ground velocities in the Los Angeles basin, principally due to a shallower eastward connecting basin chain in the SCEC Velocity Model Version 4 (used in our simulations) compared to Version 3 (used by Olsen et al.). This is partially offset by including the effects of surface topography (which was not included in the Olsen et al. models) in the simulation, which increases PGV at some basin sites by as much as a factor of two. Some non-basin sites showed comparable decreases in PGV. These predicted topographic effects are quite large, so it is important to quantify

  3. Real-time hybrid simulation in a shaking table configuration for parametric studies of high-voltage equipment and IEEE693 development

    Energy Technology Data Exchange (ETDEWEB)

    Günay, Selim [nees@berkeley, UC Berkeley, Richmond, CA (United States); Mosalam, Khalid [Department of Civil and Environmental Engineering, UC Berkeley, Berkeley, CA (United States); Takhirov, Shakhzod, E-mail: takhirov@berkeley.edu [nees@berkeley, UC Berkeley, Richmond, CA (United States)

    2015-12-15

    Highlights: • A real-time hybrid simulation (RTHS) system for high-voltage (HV) equipment is developed. • The system is a cost effective and timely efficient approach for seismic testing and evaluation. • The coupled system of equipment and modeled support structure is tested/analyzed in real time. • The system is validated by comparing the RTHS test results with the shaking table results. • The effect of support structure on the equipment response is analyzed in a parametric study. - Abstract: This paper presents extensive discussion on seismic qualification of substation equipment in conventional shake table tests and its comparison to real-time hybrid simulation (RTHS). The hybrid simulation technique is based on a sub-structuring idea where a portion of a test specimen with well-predicted performance can be replaced by its finite element model. The rest of the test specimen is experimentally studied as part of the coupled system, where the test object and the mathematical model are interacting with each other in real time. The real-time hybrid simulation technique has a strong potential of complementing and in some cases replacing seismic qualification testing. In addition to that, it has a strong potential as a comprehensive and reliable tool for IEEE693 development, where code provisions can be developed from parametric hybrid simulation studies of actual pieces of substation equipment which are otherwise difficult to model. As a typical example of successful application of hybrid simulation, a comprehensive study related to RTHS of electrical disconnect switches is discussed in the paper. First, the RTHS system developed for this purpose is described and the results of a RTHS test are compared with a benchmark conventional shaking table test as a validation of the system. Second, effect of the support structures of the disconnect switches on the global and local responses of different insulator types is evaluated using the results of a series of

  4. Real-time hybrid simulation in a shaking table configuration for parametric studies of high-voltage equipment and IEEE693 development

    International Nuclear Information System (INIS)

    Günay, Selim; Mosalam, Khalid; Takhirov, Shakhzod

    2015-01-01

    Highlights: • A real-time hybrid simulation (RTHS) system for high-voltage (HV) equipment is developed. • The system is a cost effective and timely efficient approach for seismic testing and evaluation. • The coupled system of equipment and modeled support structure is tested/analyzed in real time. • The system is validated by comparing the RTHS test results with the shaking table results. • The effect of support structure on the equipment response is analyzed in a parametric study. - Abstract: This paper presents extensive discussion on seismic qualification of substation equipment in conventional shake table tests and its comparison to real-time hybrid simulation (RTHS). The hybrid simulation technique is based on a sub-structuring idea where a portion of a test specimen with well-predicted performance can be replaced by its finite element model. The rest of the test specimen is experimentally studied as part of the coupled system, where the test object and the mathematical model are interacting with each other in real time. The real-time hybrid simulation technique has a strong potential of complementing and in some cases replacing seismic qualification testing. In addition to that, it has a strong potential as a comprehensive and reliable tool for IEEE693 development, where code provisions can be developed from parametric hybrid simulation studies of actual pieces of substation equipment which are otherwise difficult to model. As a typical example of successful application of hybrid simulation, a comprehensive study related to RTHS of electrical disconnect switches is discussed in the paper. First, the RTHS system developed for this purpose is described and the results of a RTHS test are compared with a benchmark conventional shaking table test as a validation of the system. Second, effect of the support structures of the disconnect switches on the global and local responses of different insulator types is evaluated using the results of a series of

  5. Heart shaking transitions - A phenomenological-hermeneutic study of patients' experiences in cardiac rehabilitation

    DEFF Research Database (Denmark)

    Simonÿ, Charlotte; Dreyer, Pia; Pedersen, Birthe D.

    enrolled in the cardiac rehabilitation programme. The data underwent interpretation consisting of three phases: naïve reading, structural analysis and comprehensive interpretation. Results. The preliminary findings are that the patients go through a Heart Shaking Journey in Cardiac Rehabilitation. Three......-patient cardiac rehabilitation during 1-2 months is offered after the acute treatment. Knowledge of the patients’ experiences of cardiac problems when receiving the current standards of treatment is needed in order to develop sufficient care. Hence the aim was to investigate how patients with new onset unstable...

  6. Performance of rocking systems on shallow improved sand: Shaking table testing

    Directory of Open Access Journals (Sweden)

    Angelos Tsatsis

    2015-07-01

    Full Text Available Recent studies have highlighted the potential benefits of inelastic foundation response during seismic shaking. According to an emerging seismic design scheme, termed rocking isolation, the foundation is intentionally under-designed to promote rocking and limit the inertia transmitted to the structure. Such a reversal of capacity design may improve the seismic performance, drastically increasing the safety margins. However, the benefit comes at the expense of permanent settlement and rotation, which may threaten post-earthquake functionality. Such undesired deformation can be kept within tolerable limits, provided that the safety factor against vertical loading FSV is adequately large. In such a case, the response is uplifting-dominated and the accumulation of settlement can be limited. However, this is not always feasible as the soil properties may not be ideal. Shallow soil improvement may offer a viable solution and is therefore worth investigating. Its efficiency is related to the nature of rocking, which tends to mobilize a shallow stress bulb. To this end, a series of shaking table tests are conducted, using an idealized slender bridge pier as a conceptual prototype. Two systems are studied, both lying on a square foundation of width B. The first corresponds to a lightly loaded and the second to a heavily loaded structure. The two systems are first tested on poor and ideal soil conditions to demonstrate the necessity for soil improvement. Then, the efficiency of shallow soil improvement is studied by investigating their performance on soil crusts of depth z/B = 0.5 and 1. It is shown that a z/B = 1 dense sand crust is enough to achieve practically the same performance as the ideal case of dense sand. A shallower z/B = 0.5 improvement layer may also be considered, depending on design requirements. The efficiency of the soil improvement is enhanced with increasing rotation amplitude, and with the number of cycles of the

  7. C2H6 scattering from LiF(0 0 1): Influence of the molecular anisotropy on rainbow scattering in both intensity and speed distributions

    International Nuclear Information System (INIS)

    Kondo, Takahiro; Tomii, Takashi; Yamamoto, Shigehiko

    2006-01-01

    We have measured the angle-resolved intensity and speed distributions of C2H6 scattered from LiF(0 0 1) along the [1 0 0] azimuthal direction, the largest structural corrugation direction, to investigate the effect of the molecular anisotropy on the gas-surface interaction at the corrugated surface. While a clear rainbow feature is observed in the mean energy angular distribution, no rainbow feature is detected in the intensity angular distribution. From comparison of the obtained results with the calculated predictions based on the simple classical theory of the ellipsoid-washboard model, the effect of the molecular anisotropy is found to play a crucial role in the rainbow feature. With an increase in the extent of the molecular anisotropy, such as that of C2H6 as compared with rare gas atoms, the integration of the intensity angular distributions over the various molecular orientations results in the smearing of the rainbow feature on the corrugated surface. The rainbow scattering in the mean energy angular distribution, however, is not completely smeared out.

  8. Derivation from first principles of the statistical distribution of the mass peak intensities of MS data.

    Science.gov (United States)

    Ipsen, Andreas

    2015-02-03

    Despite the widespread use of mass spectrometry (MS) in a broad range of disciplines, the nature of MS data remains very poorly understood, and this places important constraints on the quality of MS data analysis as well as on the effectiveness of MS instrument design. In the following, a procedure for calculating the statistical distribution of the mass peak intensity for MS instruments that use analog-to-digital converters (ADCs) and electron multipliers is presented. It is demonstrated that the physical processes underlying the data-generation process, from the generation of the ions to the signal induced at the detector, and on to the digitization of the resulting voltage pulse, result in data that can be well-approximated by a Gaussian distribution whose mean and variance are determined by physically meaningful instrumental parameters. This allows for a very precise understanding of the signal-to-noise ratio of mass peak intensities and suggests novel ways of improving it. Moreover, it is a prerequisite for being able to address virtually all data analytical problems in downstream analyses in a statistically rigorous manner. The model is validated with experimental data.
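
    The data-generation chain sketched above lends itself to a toy Monte Carlo: Poisson ion arrivals, a random per-ion electron-multiplier gain, and a digitized sum. The parameter values and the exponential single-ion gain model below are arbitrary assumptions used only to illustrate why the summed peak intensity ends up close to Gaussian; they are not the paper's model.

        import numpy as np

        rng = np.random.default_rng(0)

        def peak_intensities(mean_ions=200, mean_gain=50.0, n_spectra=100_000):
            """Simulate digitized peak intensities for many repeat spectra."""
            n_ions = rng.poisson(mean_ions, size=n_spectra)            # ions contributing to the peak
            # total multiplier output: sum of n_ions exponentially distributed single-ion gains
            totals = rng.gamma(shape=n_ions.clip(min=1), scale=mean_gain) * (n_ions > 0)
            return np.round(totals)                                    # crude ADC quantization

        x = peak_intensities()
        skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3
        print(f"mean ~ {x.mean():.0f}, std ~ {x.std():.0f}, skewness ~ {skew:.3f} (small -> ~Gaussian)")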

  9. CISN ShakeAlert: Faster Warning Information Through Multiple Threshold Event Detection in the Virtual Seismologist (VS) Early Warning Algorithm

    Science.gov (United States)

    Cua, G. B.; Fischer, M.; Caprio, M.; Heaton, T. H.; Cisn Earthquake Early Warning Project Team

    2010-12-01

    The Virtual Seismologist (VS) earthquake early warning (EEW) algorithm is one of 3 EEW approaches being incorporated into the California Integrated Seismic Network (CISN) ShakeAlert system, a prototype EEW system that could potentially be implemented in California. The VS algorithm, implemented by the Swiss Seismological Service at ETH Zurich, is a Bayesian approach to EEW, wherein the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS codes have been running in real-time at the Southern California Seismic Network since July 2008, and at the Northern California Seismic Network since February 2009. We discuss recent enhancements to the VS EEW algorithm that are being integrated into CISN ShakeAlert. We developed and continue to test a multiple-threshold event detection scheme, which uses different association/location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to initiate an event declaration, with the goal of reducing false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and the requirement of at least 4 picks as implemented by the Binder Earthworm phase associator) into an on-site/regional approach capable of providing a continuously evolving stream of EEW information starting from the first P-detection. Real-time and offline analysis on Swiss and California waveform datasets indicate that the
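
    A toy sketch of the multiple-threshold idea described above is given below: a single very strong P pick is enough to declare an event, while weaker picks must accumulate across stations. The threshold, pick count, and data structure are invented for illustration and are not the actual ShakeAlert/VS implementation.

        from dataclasses import dataclass

        @dataclass
        class Pick:
            station: str
            peak_amplitude: float   # peak P-wave amplitude, arbitrary units (assumed)

        def declare_event(picks, high_amplitude=10.0, min_stations_low_amplitude=4):
            """Declare on one high-amplitude pick, otherwise wait for several stations."""
            if any(p.peak_amplitude >= high_amplitude for p in picks):
                return True, "single-station declaration (high amplitude)"
            if len({p.station for p in picks}) >= min_stations_low_amplitude:
                return True, "multi-station declaration (low amplitude)"
            return False, "waiting for more picks"

        print(declare_event([Pick("STA1", 12.3)]))
        print(declare_event([Pick("STA1", 0.8), Pick("STA2", 1.1)]))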

  10. GPS-seismograms reveal amplified shaking in California's San Joaquin Delta region

    Science.gov (United States)

    Johanson, I. A.

    2014-12-01

    On March 10, 2014, the Mw6.8 Ferndale earthquake occurred off the coast of Northern California, near the Mendocino Triple Junction. Aftershocks suggest a northeast-striking fault plane for the strike-slip earthquake, oriented such that the California coast is roughly perpendicular to the rupture plane. Consequently, large-amplitude Love waves were observed at seismic stations and continuous GPS stations throughout Northern California. While GPS is less sensitive than broadband instruments, in Northern California their station density is much higher, potentially providing valuable detail. A total of 269 GPS stations that have high-rate (1 sps) data available were used to generate GPS-seismograms. These include stations from the Bay Area Regional Deformation (BARD) network, the Plate Boundary Observatory (PBO, operated by UNAVCO), and the USGS, Menlo Park. The Track software package was used to generate relative displacements between pairs of stations, determined using Delaunay triangulation. This network-based approach allows for higher precision than absolute positioning, because common noise sources, in particular atmospheric noise, are cancelled out. A simple least-squares network adjustment with a stable centroid constraint is performed to transform the mesh of relative motions into absolute motions at individual GPS stations. This approach to generating GPS-seismograms is validated by the good agreement between time series records at 16 BARD stations that are co-located with broadband seismometers from the Berkeley Digital Seismic Network (BDSN). While the distribution of peak dynamic displacements is dominated in long periods by the radiation pattern, at shorter periods other patterns become visible. In particular, stations in the San Joaquin Delta (SJD) region show higher peak dynamic displacements than those in surrounding areas, as well as longer duration shaking. SJD stations also have higher dynamic displacements on the radial component than surrounding
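
    The least-squares network adjustment with a centroid constraint can be pictured with a minimal sketch: pairwise relative displacements are inverted for per-station displacements while forcing the network mean to zero. The station names and synthetic baseline values below are invented; the actual processing used Track solutions over a Delaunay-triangulated network.

        import numpy as np

        stations = ["P1", "P2", "P3", "P4"]
        idx = {s: i for i, s in enumerate(stations)}

        # (station_a, station_b, observed displacement of b relative to a), synthetic values
        baselines = [("P1", "P2", 4.0), ("P2", "P3", -1.0), ("P3", "P4", 2.0), ("P1", "P3", 3.1)]

        rows, obs = [], []
        for a, b, d in baselines:
            row = np.zeros(len(stations))
            row[idx[b]], row[idx[a]] = 1.0, -1.0
            rows.append(row)
            obs.append(d)

        # Centroid constraint: the station displacements sum to zero
        rows.append(np.ones(len(stations)))
        obs.append(0.0)

        x, *_ = np.linalg.lstsq(np.array(rows), np.array(obs), rcond=None)
        print(dict(zip(stations, np.round(x, 2))))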

  11. Investigation of dynamic response of HTR core and comparison with shaking table-tests

    International Nuclear Information System (INIS)

    Anderheggen, E.; Prater, E.G.; Kreis, A.

    1990-01-01

    The analytical studies and the shaking table tests have been performed with the aim of gaining a fundamental understanding of the dynamic behaviour of such core material and validating the numerical model. The dynamic analysis of a graphite pebble-bed core could be a fairly complex undertaking if all nonlinear effects were considered. However, to achieve a practicable solution the ensemble of spheres must be replaced by a statistically equivalent continuum. Based on the Hertz theories for regular configurations, the mechanical characteristics, at small shear strains, correspond to those of an isotropic nonlinear hypoelastic medium, in which the Lame constants are a function of volumetric strain. Thus, the initial modulus values depend on confining pressure, so that the medium is inhomogeneous with respect to depth. During seismic excitation the volumetric strain, and thus the moduli, will change with time. To simplify the analysis, however, a linearized form of the model has been adopted, as well as considerations concerning damping effects. The numerical simulations carried out thus far concern mainly the 1:6 rigid wall model (i.e. with a cylinder diameter of 1.5 m) investigated experimentally and take the form of a back-analysis. Subsequently, the walls were tested separately and finally the combined behaviour was investigated. To date only preliminary results for the modelling of the reflector walls have been obtained. The objectives of this paper are thus twofold. Firstly, to discuss the constitutive law and its implementation in a general purpose finite element program. Secondly, to present some preliminary results of the dynamic analysis and to compare these with data obtained from the shaking table tests. 5 refs, 2 figs, 1 tab

  12. Size distributions and failure initiation of submarine and subaerial landslides

    Science.gov (United States)

    ten Brink, Uri S.; Barkan, R.; Andrews, B.D.; Chaytor, J.D.

    2009-01-01

    Landslides are often viewed together with other natural hazards, such as earthquakes and fires, as phenomena whose size distribution obeys an inverse power law. Inverse power law distributions are the result of additive avalanche processes, in which the final size cannot be predicted at the onset of the disturbance. Volume and area distributions of submarine landslides along the U.S. Atlantic continental slope follow a lognormal distribution and not an inverse power law. Using Monte Carlo simulations, we generated area distributions of submarine landslides that show a characteristic size, with few smaller and larger areas, which can be described well by a lognormal distribution. To generate these distributions we assumed that the area of slope failure depends on earthquake magnitude, i.e., that failure occurs simultaneously over the area affected by horizontal ground shaking, and does not cascade from nucleating points. Furthermore, the downslope movement of displaced sediments does not entrain significant amounts of additional material. Our simulations fit well the area distribution of landslide sources along the Atlantic continental margin, if we assume that the slope has been subjected to earthquakes of magnitude ≥ 6.3. Regions of submarine landslides whose area distributions obey inverse power laws may be controlled by different generation mechanisms, such as the gradual development of fractures in the headwalls of cliffs. The observation of a large number of small subaerial landslides being triggered by a single earthquake is also compatible with the hypothesis that failure occurs simultaneously in many locations within the area affected by ground shaking. Unlike submarine landslides, which are found on large uniformly-dipping slopes, a single large landslide scarp cannot form on land because of the heterogeneous morphology and short slope distances of tectonically-active subaerial regions. However, for a given earthquake magnitude, the total area
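
    The Monte Carlo idea described above can be caricatured in a few lines: draw earthquake magnitudes from a truncated Gutenberg-Richter distribution, map each magnitude to a failed area through an assumed log-linear scaling with scatter, and inspect the resulting area distribution. The b-value, magnitude bounds, and area scaling below are generic assumptions, not the values used in the study.

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_magnitudes(n, b=1.0, m_min=5.0, m_max=7.3):
            """Draw magnitudes from a truncated Gutenberg-Richter (exponential) distribution."""
            u = rng.uniform(size=n)
            lo, hi = 10.0 ** (-b * m_min), 10.0 ** (-b * m_max)
            return -np.log10(lo - u * (lo - hi)) / b

        m = sample_magnitudes(50_000)
        # Assumed log-linear area scaling (km^2) with magnitude, plus lognormal scatter
        log_area = -3.5 + 0.8 * m + rng.normal(0.0, 0.3, size=m.size)
        area = 10.0 ** log_area

        print(f"median source area ~ {np.median(area):.1f} km^2")
        print(f"fraction of areas below 0.5 km^2 ~ {np.mean(area < 0.5):.3%}")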

  13. Transluminal color-coded three-dimensional magnetic resonance angiography for visualization of signal intensity distribution pattern within an unruptured cerebral aneurysm: preliminary assessment with anterior communicating artery aneurysms

    International Nuclear Information System (INIS)

    Satoh, T.; Ekino, C.; Ohsako, C.

    2004-01-01

    The natural history of unruptured cerebral aneurysms is not known; also unknown is the potential for growth and rupture of any individual aneurysm. The authors have developed transluminal color-coded three-dimensional magnetic resonance angiography (MRA), obtained by a time-of-flight sequence, to investigate the interaction between the intra-aneurysmal signal intensity distribution patterns and the configuration of unruptured cerebral aneurysms. Transluminal color-coded images were reconstructed from the volume data of the source magnetic resonance angiography by using a parallel volume-rendering algorithm with a transluminal imaging technique. By selecting a numerical threshold range from a signal intensity opacity chart of the three-dimensional volume-rendering dataset, several areas of signal intensity were depicted, assigned different colors, and visualized transparently through the walls of the parent arteries and the aneurysm. Patterns of signal intensity distribution were analyzed in three operated cases of an unruptured anterior communicating artery aneurysm and compared with the actual configurations observed at microneurosurgery. A small difference in the marginal features of an aneurysm was observed; however, the transluminal color-coded images visualized the complex signal intensity distribution within an aneurysm in conjunction with the aneurysmal geometry. Transluminal color-coded three-dimensional magnetic resonance angiography can thus provide numerical analysis of the interaction between spatial signal intensity distribution patterns and aneurysmal configurations and may offer an alternative and practical method to investigate the patient-specific natural history of individual unruptured cerebral aneurysms. (orig.)

  14. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    Science.gov (United States)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  15. Anticipating and Communicating Plausible Environmental and Health Concerns Associated with Future Disasters: The ShakeOut and ARkStorm Scenarios as Examples

    Science.gov (United States)

    Plumlee, G. S.; Morman, S. A.; Alpers, C. N.; Hoefen, T. M.; Meeker, G. P.

    2010-12-01

    Disasters commonly pose immediate threats to human safety, but can also produce hazardous materials (HM) that pose short- and long-term environmental-health threats. The U.S. Geological Survey (USGS) has helped assess potential environmental health characteristics of HM produced by various natural and anthropogenic disasters, such as the 2001 World Trade Center collapse, 2005 hurricanes Katrina and Rita, 2007-2009 southern California wildfires, various volcanic eruptions, and others. Building upon experience gained from these responses, we are now developing methods to anticipate plausible environmental and health implications of the 2008 Great Southern California ShakeOut scenario (which modeled the impacts of a 7.8 magnitude earthquake on the southern San Andreas fault, http://urbanearth.gps.caltech.edu/scenario08/), and the recent ARkStorm scenario (modeling the impacts of a major, weeks-long winter storm hitting nearly all of California, http://urbanearth.gps.caltech.edu/winter-storm/). Environmental-health impacts of various past earthquakes and extreme storms are first used to identify plausible impacts that could be associated with the disaster scenarios. Substantial insights can then be gleaned using a Geographic Information Systems (GIS) approach to link ShakeOut and ARkStorm effects maps with data extracted from diverse database sources containing geologic, hazards, and environmental information. This type of analysis helps constrain where potential geogenic (natural) and anthropogenic sources of HM (and their likely types of contaminants or pathogens) fall within areas of predicted ShakeOut-related shaking, firestorms, and landslides, and predicted ARkStorm-related precipitation, flooding, and winds. Because of uncertainties in the event models and many uncertainties in the databases used (e.g., incorrect location information, lack of detailed information on specific facilities, etc.) this approach should only be considered as the first of multiple steps

  16. In-Plane Strengthening Effect of Prefabricated Concrete Walls on Masonry Structures: Shaking Table Test

    Directory of Open Access Journals (Sweden)

    Weiwei Li

    2017-01-01

    Full Text Available The improvement of the dynamic behaviour of masonry structures achieved by a new strengthening strategy, installing prefabricated concrete walls on the outer facades, is validated by the shaking table test presented in this paper. We carried out dynamic tests of two geometrically identical five-story reduced-scale models: an unstrengthened and a strengthened masonry model. The experimental analysis encompasses seismic performance measures such as cracking patterns, failure mechanisms, amplification factors of acceleration, and displacements. The results show that the strengthened masonry structure exhibits considerably better seismic capacity than the unstrengthened one.

  17. Distributions of emissions intensity for individual beef cattle reared on pasture-based production systems.

    Science.gov (United States)

    McAuliffe, G A; Takahashi, T; Orr, R J; Harris, P; Lee, M R F

    2018-01-10

    Life Cycle Assessment (LCA) of livestock production systems is often based on inventory data for farms typical of a study region. As information on individual animals is often unavailable, livestock data may already be aggregated at the time of inventory analysis, both across individual animals and across seasons. Even though various computational tools exist to consider the effect of genetic and seasonal variabilities on livestock-originated emissions intensity, the degree to which these methods can address the bias suffered by representative-animal approaches is not well understood. Using detailed on-farm data collected on the North Wyke Farm Platform (NWFP) in Devon, UK, this paper proposes a novel approach to life cycle impact assessment that complements the existing LCA methodology. Field data, such as forage quality and animal performance, were measured at high spatial and temporal resolutions and directly transferred into LCA processes. This approach has enabled derivation of emissions intensity for each individual animal and, by extension, its intra-farm distribution, providing a step towards reducing the uncertainty related to agricultural production inherent in LCA studies for food. Depending on pasture management strategies, the total emissions intensity estimated by the proposed method was higher than the equivalent value recalculated using a representative-animal approach by 0.9-1.7 kg CO2-eq/kg liveweight gain, or up to 10% of system-wide emissions. This finding suggests that emissions intensity values derived by the latter technique may be underestimated due to insufficient consideration given to poorly performing animals, whose emissions become exponentially greater as average daily gain decreases. Strategies to mitigate life-cycle environmental impacts of pasture-based beef production systems are also discussed.
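
    One way the gap between the two approaches can arise is purely arithmetic: emissions intensity is emissions divided by liveweight gain, so averaging animals before dividing (a representative animal) gives a smaller number than averaging the per-animal intensities, especially when a few animals gain very little. The figures below are made up for illustration and are not NWFP data.

        import numpy as np

        emissions = np.array([3000.0, 3100.0, 2900.0, 3050.0])   # kg CO2-eq per animal (assumed)
        gain      = np.array([ 250.0,  300.0,  120.0,  280.0])   # kg liveweight gain per animal (assumed)

        per_animal_intensity = emissions / gain                   # distribution across individuals
        representative = emissions.sum() / gain.sum()             # "average animal" shortcut

        print(f"mean of per-animal intensities : {per_animal_intensity.mean():.1f} kg CO2-eq/kg LWG")
        print(f"representative-animal intensity: {representative:.1f} kg CO2-eq/kg LWG")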

  18. Quantum key distribution with an efficient countermeasure against correlated intensity fluctuations in optical pulses

    Science.gov (United States)

    Yoshino, Ken-ichiro; Fujiwara, Mikio; Nakata, Kensuke; Sumiya, Tatsuya; Sasaki, Toshihiko; Takeoka, Masahiro; Sasaki, Masahide; Tajima, Akio; Koashi, Masato; Tomita, Akihisa

    2018-03-01

    Quantum key distribution (QKD) allows two distant parties to share secret keys with proven security even in the presence of an eavesdropper with unbounded computational power. Recently, GHz-clock decoy QKD systems have been realized by employing ultrafast optical communication devices. However, the security loopholes of high-speed systems have not been fully explored yet. Here we point out a security loophole at the transmitter of the GHz-clock QKD, which is a common problem in high-speed QKD systems using practical band-width-limited devices. We experimentally observe the inter-pulse intensity correlation and modulation-pattern-dependent intensity deviation in a practical high-speed QKD system. Such correlation violates the assumption of most security theories. We also provide a countermeasure which does not require significant changes of hardware and can generate keys secure over 100 km of fiber transmission. Our countermeasure is simple, effective and applicable to a wide range of high-speed QKD systems, and thus paves the way to realize ultrafast and security-certified commercial QKD systems.

  19. Target micro-displacement measurement by a "comb" structure of intensity distribution in laser plasma propulsion

    Science.gov (United States)

    Zheng, Z. Y.; Zhang, S. Q.; Gao, L.; Gao, H.

    2015-05-01

    A "comb" structure of beam intensity distribution is designed and achieved to measure a target displacement of micrometer level in laser plasma propulsion. Base on the "comb" structure, the target displacement generated by nanosecond laser ablation solid target is measured and discussed. It is found that the "comb" structure is more suitable for a thin film target with a velocity lower than tens of millimeters per second. Combing with a light-electric monitor, the `comb' structure can be used to measure a large range velocity.

  20. Towards Coupling of Macroseismic Intensity with Structural Damage Indicators

    Science.gov (United States)

    Kouteva, Mihaela; Boshnakov, Krasimir

    2016-04-01

    Knowledge of ground motion acceleration time histories during earthquakes is essential to understanding the earthquake-resistant behaviour of structures. Peak and integral ground motion parameters such as peak ground motion values (acceleration, velocity and displacement), measures of the frequency content of ground motion, duration of strong shaking and various intensity measures play important roles in the seismic evaluation of existing facilities and the design of new systems. Macroseismic intensity is an earthquake measure related to seismic hazard and seismic risk description. Having a detailed idea of the correlations between earthquake damage potential and macroseismic intensity is an important issue in engineering seismology and earthquake engineering. Reliable earthquake hazard estimation is the major prerequisite for successful disaster risk management. The use of advanced earthquake engineering approaches for structural response modelling is essential for reliable evaluation of the damage accumulated in existing buildings and structures due to the history of seismic actions occurring during their lifetime. Full nonlinear analysis taking into account a single event or a series of earthquakes, together with the large set of elaborated damage indices, are suitable contemporary tools for this demanding task. This paper presents some results on the correlation between observational damage states, ground motion parameters and selected analytical damage indices. Damage indices are computed on the basis of nonlinear time-history analysis of a test reinforced structure, characteristic of the building stock of the Mediterranean region designed according to the earthquake-resistant requirements of the mid-20th century.

  1. The Macroseismic Intensity Distribution of the 30 October 2016 Earthquake in Central Italy (Mw 6.6): Seismotectonic Implications

    Science.gov (United States)

    Galli, Paolo; Castenetto, Sergio; Peronace, Edoardo

    2017-10-01

    The central Italy Apennines were rocked in 2016 by the strongest earthquakes of the past 35 years. Two main shocks (Mw 6.2 and Mw 6.6) between the end of August and October caused the death of almost 300 people and the destruction of 50 villages and small towns scattered along 40 km in the hanging wall of the N165° striking Mount Vettore fault system, that is, the structure responsible for the earthquakes. The 24 August southern earthquake, besides causing all the casualties, razed to the ground the small medieval town of Amatrice and dozens of hamlets around it. The 30 October main shock definitively crushed all the villages of the whole epicentral area (up to intensity degree 11), extending the level of destruction northward and inducing heavy damage even in the town of Camerino, 30 km away. The survey of the macroseismic effects started the same day as the first main shock and continued during the whole seismic sequence, even during and after the strong earthquakes at the end of October, allowing the definition of a detailed picture of the damage distribution, day by day. Here we present the results of the final survey in terms of Mercalli-Cancani-Sieberg intensity, which account for the cumulative effects of the whole 2016 sequence (465 intensity data points, besides 435 related to the 24 August and 54 to the 26 October events, respectively). The distribution of the highest intensity data points evidenced the lack of any possible overlap between the 2016 earthquakes and the strongest earthquakes of the region, making this sequence a unique case in the seismic history of Italy. In turn, the cross matching with published paleoseismic data provided some interesting insights concerning the seismogenic behavior of the Mount Vettore fault in comparison with other active normal faults of the region.

  2. Prediction of thermal coagulation from the instantaneous strain distribution induced by high-intensity focused ultrasound

    Science.gov (United States)

    Iwasaki, Ryosuke; Takagi, Ryo; Tomiyasu, Kentaro; Yoshizawa, Shin; Umemura, Shin-ichiro

    2017-07-01

    Targeting of the ultrasound beam and prediction of thermal lesion formation in advance are requirements for monitoring high-intensity focused ultrasound (HIFU) treatment safely and reproducibly. To visualize the HIFU focal zone, we utilized an acoustic radiation force impulse (ARFI) imaging-based method. After displacements were induced inside the tissue with pulsed HIFU, called the push pulse exposure, the distribution of axial displacements started expanding and moving. To acquire RF data immediately after and during the HIFU push pulse exposure and thereby improve prediction accuracy, we attempted methods using extrapolation estimation and HIFU noise elimination. The distributions obtained by going back in the time domain from the end of the push pulse exposure are in good agreement with tissue coagulation at the center. The results suggest that the proposed focal zone visualization, employing pulsed HIFU together with the high-speed ARFI imaging method, is useful for the prediction of thermal coagulation in advance.

  3. Beam pinging, sweeping, shaking, and electron/ion collecting, at the Proton Storage Ring

    International Nuclear Information System (INIS)

    Hardek, T.W.; Macek, R.J.; Plum, M.A.; Wang, T.S.F.

    1993-01-01

    We have built, installed and tested a pinger for use as a general diagnostic at the Los Alamos Proton Storage Ring (PSR). Two 4-m-long parallel-plate electrodes with a plate spacing of 10.2 cm provide kicks of up to 1.1 mrad. A pair of solid-state pulsers may be operated in a single-pulse mode for beam pinging (tune measurements) or in a burst mode at up to 700 kHz pulse rates for beam sweeping. During our 1992 operating period we used the pinger for beam sweeping, for beam shaking, for measuring the tune shift, and we have used it as an ion chamber. Using the pinger as an ion chamber during production conditions has yielded some surprising results

  4. Angular distributions of plasma edge velocity and integrated intensity: Update on specific impulse for Ablative Laser Propulsion

    Science.gov (United States)

    Lin, Jun; Pakhomov, Andrew V.

    2005-04-01

    This work concludes our discussion of the image processing technique developed earlier for determination of specific impulse (Isp) for Ablative Laser Propulsion (ALP). The plasma plumes are recorded with a time-resolved intensified charge-coupled device (ICCD) camera. The plasma was formed in vacuum (~3×10⁻³ Torr) by focusing output pulses of a laser system (100-ps pulsewidth at 532 nm wavelength and ~35 mJ energy) on surfaces of C (graphite), Al, Si, Fe, Cu, Zn, Sn, and Pb elements. Angular profiles for integrated intensity and plasma expansion velocity were determined for the tested elements. Such profiles were used further for assessment of specific impulse. Specific impulses derived from angular distributions of plasma expansion velocity and integral intensity appeared in excellent agreement with the data derived earlier from force measurements.
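
    Loosely, a specific impulse can be extracted from such profiles by forming an intensity-weighted average of the axial component of the plume-edge velocity and dividing by g0. The sketch below uses synthetic angular profiles and a simple weighting; the paper's own estimator may differ.

        import numpy as np

        g0 = 9.81                                        # m/s^2
        theta = np.radians(np.linspace(0.0, 80.0, 81))   # angle from the target normal

        v = 2.0e4 * np.cos(theta) ** 0.5    # assumed plume-edge velocity profile, m/s
        w = np.cos(theta) ** 4              # assumed integrated-intensity (mass) profile

        # axisymmetric plume: weight each ring of angles by sin(theta)
        weight = w * np.sin(theta)
        v_axial = np.sum(v * np.cos(theta) * weight) / np.sum(weight)

        print(f"effective axial velocity ~ {v_axial:.0f} m/s -> Isp ~ {v_axial / g0:.0f} s")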

  5. Angular distributions of plasma edge velocity and integrated intensity: Update on specific impulse for Ablative Laser Propulsion

    International Nuclear Information System (INIS)

    Lin Jun; Pakhomov, Andrew V.

    2005-01-01

    This work concludes our discussion of the image processing technique developed earlier for determination of specific impulse (Isp) for Ablative Laser Propulsion (ALP). The plasma plumes are recorded with a time-resolved intensified charge-coupled device (ICCD) camera. The plasma was formed in vacuum (∼3×10⁻³ Torr) by focusing output pulses of a laser system (100-ps pulsewidth at 532 nm wavelength and ∼35 mJ energy) on surfaces of C (graphite), Al, Si, Fe, Cu, Zn, Sn, and Pb elements. Angular profiles for integrated intensity and plasma expansion velocity were determined for the tested elements. Such profiles were used further for assessment of specific impulse. Specific impulses derived from angular distributions of plasma expansion velocity and integral intensity appeared in excellent agreement with the data derived earlier from force measurements.

  6. Seismic functional qualification of active mechanical and electrical components based on shaking table testing

    International Nuclear Information System (INIS)

    Jurukovski, D.

    1999-01-01

    The seismic testing for qualification of one sample of the NPP Kozloduy Control Panel type YKTC was carried out under Research Contract no: 8008/Rl, entitled: 'Seismic Functional Qualification of Active Mechanical and Electrical Components Based on Shaking Table Testing'. The tested specimen was selected by the Kozloduy NPP staff, Section 'TIA-2' (Technical Instrumentation and Automatics), however the seismic input parameters were selected by the NPP Kozloduy staff, Section HTS and SC (Hydro-Technical Systems and Engineering Structures). The applied methodology was developed by the Institute of Earthquake Engineering and Engineering Seismology staff. This report presents all relevant items related to the selected specimen seismic testing for seismic qualification such as: description of the tested specimen, mounting conditions on the shaking table, selection of seismic input parameters and creation of seismic excitations, description of the testing equipment, explanation of the applied methodology, 'on line' and 'off line' monitoring of the tested specimen, functioning capabilities, discussion of the results and their presentation and finally conclusions and recommendations. In this partial project report, two items are presented. The first item presents a review of the existing and used regulations for performing of the seismic and vibratory withstand testing of electro-mechanical equipment. The selection is made based on MEA, IEEE, IEC and former Soviet Union regulations. The second item presents the abstracts of all the tests performed at the Institute of Earthquake Engineering and Engineering Seismology in Skopje. The selected regulations, the experience of the Institute that has been gathered for the last seventeen years and some theoretical and experimental research will be the basis for further investigations for development of a synthesised methodology for seismic qualification of differently categorized equipment for nuclear power plants

  7. Bioconversion of mixed free fatty acids to poly-3-hydroxyalkanoates by Pseudomonas putida BET001 and modeling of its fermentation in shake flasks

    Directory of Open Access Journals (Sweden)

    Khalil Munawar Makhdum Munawar

    2016-01-01

    Conclusion: The findings of this study add to the literature on the key variables for achieving good microbial growth and mcl-PHA production in shake flask culture. In addition, a suitable kinetic model to describe cultivation in this system was also presented.

  8. A Time Difference Method for Measurement of Phase Shift between Distributed Feedback Laser Diode (DFB-LD Output Wavelength and Intensity

    Directory of Open Access Journals (Sweden)

    Yongning Liu

    2015-07-01

    A time difference method to conveniently measure the phase shift between the output wavelength and intensity of distributed feedback laser diodes (DFB-LDs) was proposed. This approach takes advantage of the asymmetric absorption positions at the same wavelength during the wavelength-increase and wavelength-decrease tuning processes in the intensity-time curve under current modulation. For its practical implementation, a measurement example of the phase shift was demonstrated by measuring the time difference between the first and second appearances of the same gas absorption line in the intensity-time curve during one sine or triangle modulation cycle. The phase shifts at modulation frequencies ranging from 50 Hz to 50 kHz were measured with a resolution of 0.001π. As the modulation frequency increased, the phase shift increased at a diminishing rate.
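
    As a small illustration of the final step in such a measurement, a time offset between the two appearances of the absorption line maps to a phase shift through the modulation frequency. The sketch below shows only that generic conversion (phi = 2π·f_mod·Δt), not the extraction of the time difference from the intensity-time curve itself; the function name and numbers are made up.

```python
import math

def phase_shift(dt_seconds, f_mod_hz):
    """Phase shift (rad) corresponding to a time offset dt at modulation frequency f_mod."""
    return 2.0 * math.pi * f_mod_hz * dt_seconds

# Example: a 2-microsecond offset at 50 kHz modulation corresponds to 0.2*pi rad.
print(phase_shift(2e-6, 50e3) / math.pi, "pi rad")
```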

  9. Eleven-Week Preparation Involving Polarized Intensity Distribution Is Not Superior to Pyramidal Distribution in National Elite Rowers

    Directory of Open Access Journals (Sweden)

    Gunnar Treff

    2017-08-01

    Polarized (POL) training intensity distribution (TID) emphasizes high-volume low-intensity exercise in zone 1 (Z1; below the first lactate threshold), with a greater proportion of high-intensity Z3 (above the second lactate threshold) compared to Z2 (between the first and second lactate thresholds). In highly trained rowers there is a lack of prospective controlled evidence on whether POL is superior to pyramidal (PYR; i.e., greater volume in Z1 vs. Z2 vs. Z3) TID. The aim of the study was to compare the effect of POL vs. PYR TID in rowers during an 11-wk preparation period. Fourteen national elite male rowers participated (age: 20 ± 2 years, maximal oxygen uptake (VO2max): 66 ± 5 mL/min/kg). The sample was split into PYR and POL by varying the percentage spent in Z2 and Z3 while Z1 was clamped to ~93% and matched for total and rowing volume. Actual TIDs were based on time within heart rate zones (Z1 and Z2) and duration of Z3 intervals. The main outcome variables were average power in the 2,000 m ergometer test (P2,000m), power associated with 4 mmol/L [blood lactate] (P4[BLa]), and VO2max. To quantify the level of polarization, we calculated a Polarization-Index as log (%Z1 × %Z3 / %Z2). PYR and POL did not significantly differ regarding rowing or total volume, but POL had a higher percentage of Z3 intensities (6 ± 3 vs. 2 ± 1%; p < 0.005) while Z2 was lower (1 ± 1 vs. 3 ± 2%; p < 0.05) and Z1 was similar (94 ± 3 vs. 93 ± 2%, p = 0.37). Consequently, the Polarization-Index was significantly higher in POL (3.0 ± 0.7 vs. 1.9 ± 0.4 a.u.; p < 0.01). P2,000m did not significantly change with PYR (1.5 ± 1.7%, p = 0.06) nor POL (1.5 ± 2.6%, p = 0.26). VO2max did not change (1.7 ± 5.6%, p = 0.52 or 0.6 ± 2.6%, p = 0.67) and a small increase in P4[BLa] was observed in PYR only (1.9 ± 4.8%, p = 0.37 or −0.5 ± 4.1%, p = 0.77). Changes from pre to post were not significantly different between groups in any performance measure. POL did not prove to be superior to PYR, possibly due to
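
    The Polarization-Index quoted above has a simple closed form, log(%Z1 × %Z3 / %Z2), which can be reproduced directly from the zone percentages. A minimal sketch (base-10 logarithm assumed; inputs are rounded group means from the abstract and the function name is illustrative):

```python
import math

def polarization_index(pct_z1, pct_z2, pct_z3):
    """Polarization-Index (a.u.) from the percentage of time in each intensity zone."""
    return math.log10(pct_z1 * pct_z3 / pct_z2)

# Values roughly matching the group means reported above:
print(round(polarization_index(93.0, 1.0, 6.0), 1))   # POL-like distribution -> ~2.7
print(round(polarization_index(94.0, 3.0, 2.0), 1))   # PYR-like distribution -> ~1.8
```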

  10. Generic method for deriving the general shaking force balance conditions of parallel manipulators with application to a redundant planar 4-RRR parallel manipulator

    NARCIS (Netherlands)

    van der Wijk, V.; Krut, S.; Pierrot, F.; Herder, Justus Laurens

    2011-01-01

    This paper proposes a generic method for deriving the general shaking force balance conditions of parallel manipulators. Instead of considering the balancing of a parallel manipulator link-by-link or leg-by-leg, the architecture is considered altogether. The first step is to write the linear

  11. Shake-flask test for determination of biodegradation rates of 14C-labelled chemicals at low concentrations in surface water systems

    DEFF Research Database (Denmark)

    Ingerslev, F.; Nyholm, Niels

    2000-01-01

    A simple shake-flask surface water biodegradability die away test with C-14-labeled chemicals added to microgram per liter concentrations (usually 1-100 mu g/L) is described and evaluated. The aim was to provide information on biodegradation behavior and kinetic rates at environmental (low...... regular reinoculation with freshly collected surface water could, however, overcome the problems of false-negative results. (C) 2000 Academic Press....

  12. Carbon K-shell photoionization of CO: Molecular frame angular distributions of normal and conjugate shakeup satellites

    International Nuclear Information System (INIS)

    Jahnke, T.; Titze, J.; Foucar, L.; Wallauer, R.; Osipov, T.; Benis, E.P.; Jagutzki, O.; Arnold, W.; Czasch, A.; Staudte, A.; Schoeffler, M.; Alnaser, A.; Weber, T.; Prior, M.H.; Schmidt-Boecking, H.; Doerner, R.

    2011-01-01

    We have measured the molecular frame angular distributions of photoelectrons emitted from the Carbon K-shell of fixed-in-space CO molecules for the case of simultaneous excitation of the remaining molecular ion. Normal and conjugate shakeup states are observed. Photoelectrons belonging to normal Σ-satellite lines show an angular distribution resembling that observed for the main photoline at the same electron energy. Surprisingly a similar shape is found for conjugate shakeup states with Π-symmetry. In our data we identify shake rather than electron scattering (PEVE) as the mechanism producing the conjugate lines. The angular distributions clearly show the presence of a Σ shape resonance for all of the satellite lines.

  13. Coefficients of distribution and accumulation of K, Rb, Cs and 137Cs in the intensive poultry breeding cycle

    International Nuclear Information System (INIS)

    Djuric, G.; Ajdacic, N.; Institut za Nuklearne Nauke Boris Kidric, Belgrade

    1984-01-01

    The concentrations of K, Rb and Cs and the activity level of Cs-137 in samples from the intensive poultry breeding cycle (feed, meat, eggs), under conditions of chronic alimentary contamination, are presented. Concentrations of Cs and Rb were determined by non-destructive neutron activation analysis, the concentration of K by atomic absorption flame photometry, and the activity of Cs-137 by gamma spectrometric analysis. On the basis of these results, coefficients of distribution and accumulation were calculated. The distribution coefficients of the analysed stable isotopes in meat have values close to 1, whereas for various parts of the egg these coefficients vary between 0.5 and 1.5. Significant differences in Cs-137 distribution in various parts of the egg were established. The values of the accumulation coefficients indicate that all analysed elements selectively accumulate in the meat of young birds (broilers), and that Cs-137 accumulates in the egg white as well. (orig.)

  14. Measuring the global distribution of intense convection over land with passive microwave radiometry

    Science.gov (United States)

    Spencer, R. W.; Santek, D. A.

    1985-01-01

    The global distribution of intense convective activity over land is shown to be measurable with satellite passive-microwave methods through a comparison of an empirical rain rate algorithm with a climatology of thunderstorm days for the months of June-August. With the 18 and 37 GHz channels of the Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR), the strong volume scattering effects of precipitation can be measured. Even though a single frequency (37 GHz) is responsive to the scattering signature, two frequencies are needed to remove most of the effect that variations in thermometric temperature and soil moisture have on the brightness temperatures. Because snow cover is also a volume scatterer of microwave energy at these wavelengths, a discrimination procedure involving four of the SMMR channels is employed to separate the rain and snow classes, based upon their differences in average thermometric temperature.

  15. Research on sorption behavior of radionuclides under shallow land environment. Mechanism and standard methodologies for measurement of distribution coefficients of radionuclides

    International Nuclear Information System (INIS)

    Sakamoto, Yoshiaki; Tanaka, Tadao; Takebe, Shinichi; Nagao, Seiya; Ogawa, Hiromichi; Komiya, Tomokazu; Hagiwara, Shigeru

    2001-01-01

    This study consists of two categories of research work. One is research on the sorption mechanisms of long-lived radionuclides (technetium-99, TRU elements and U-series radionuclides) on soils and rocks, including development of a database of distribution coefficients of radionuclides. The database on the distribution coefficients of radionuclides, with information about measurement conditions such as shaking method, soil characteristics and solution composition, has already been opened to the public (JAERI-DATABASE 20001003). The other is an investigation of a standard methodology for measuring the distribution coefficients of radionuclides on soils, rocks and engineering materials in Japan. (author)
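
    The batch (shaking) method referenced above typically derives a distribution coefficient from the drop in solution concentration after a known mass of solid is equilibrated with a known volume of solution. The sketch below shows that standard calculation with illustrative names and numbers; the exact protocol and definitions in the report may differ.

```python
def batch_kd(c_initial, c_final, solution_volume_ml, solid_mass_g):
    """Distribution coefficient Kd (mL/g) from a batch shaking experiment.

    c_initial, c_final : radionuclide concentration (or activity) in solution
                         before and after equilibration with the solid.
    """
    sorbed_per_gram = (c_initial - c_final) * solution_volume_ml / solid_mass_g
    return sorbed_per_gram / c_final

# Example: activity falls from 100 to 20 (arbitrary units) in 30 mL of solution
# shaken with 3 g of soil -> Kd = 40 mL/g.
print(batch_kd(100.0, 20.0, 30.0, 3.0))
```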

  16. White noise excited non-ideal elasto-plastic oscillator

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob

    1997-01-01

    Two sets of 50 samples of the displacement response of the top traverse relative to the second traverse of an experimental shear frame with three traverses, subject to white noise base shaking of two different intensities, have been recorded at the Institut für Allgemeine Mechanik in 1995, and are available on file for analysis. The column connection between the two top traverses was made of aluminum with a linear-elastic non-ideal plastic behavior, and the columns were therefore renewed after each experiment. The two other connections were made of steel with a purely linear-elastic behavior... on an oscillator of more than one degree of freedom. Applied to the experimental frame, the calculations give excellent predictions of the main distributional properties of the plastic displacement process.

  17. Shaking table test and analysis of embedded structure soil interaction considering input motion

    International Nuclear Information System (INIS)

    Matsushima, Y.; Mizuno, H.; Machida, N.; Sato, K.; Okano, H.

    1987-01-01

    The dynamic interaction between soil and structure is decomposed into inertial interaction (II) and kinematic interaction (KI). II denotes the interaction due to inertial force applied on foundations. KI denotes the interaction of massless foundations subjected to seismic waves. Forced vibration tests by exciters are not enough to evaluate the complete soil-structure interaction due to the lack of KI. To clarify the effects of KI on the seismic response of structure, the authors intended to carry out shaking table tests of the interaction between the soil and the embedded structure. A method to decompose II and KI is introduced which reveals the construction of embedment effects. Finally, the authors discuss the validity of three kinds of simulation analyses, that is, two-dimensional, approximate three-dimensional and rigorous three-dimensional analyses, comparing with the test results

  18. Fast Computation of Ground Motion Shaking Map base on the Modified Stochastic Finite Fault Modeling

    Science.gov (United States)

    Shen, W.; Zhong, Q.; Shi, B.

    2012-12-01

    Rapid regional MMI mapping soon after a moderate-large earthquake is crucial to loss estimation, emergency services and planning of emergency action by the government. Many countries pay different degrees of attention to the technology of rapid MMI estimation, and this technology has made significant progress in earthquake-prone countries. In recent years, numerical modeling of strong ground motion has been well developed with the advances of computation technology and earthquake science. The computational simulation of strong ground motion caused by earthquake faulting has become an efficient way to estimate the regional MMI distribution soon after an earthquake. In China, owing to the lack of strong-motion observations in areas where the network is sparse or entirely missing, the development of strong ground motion simulation methods has become an important means of quantitative estimation of strong-motion intensity. Among the many simulation models, the stochastic finite-fault model is preferred for rapid MMI estimation because of its time-effectiveness and accuracy. In the finite-fault model, a large fault is divided into N subfaults, and each subfault is considered as a small point source. The ground motions contributed by each subfault are calculated by the stochastic point-source method developed by Boore, and then summed at the observation point, with a proper time delay, to obtain the ground motion from the entire fault. Further, Motazedian and Atkinson proposed the concept of Dynamic Corner Frequency; with the new approach, the total radiated energy from the fault and the total seismic moment are conserved independent of subfault size over a wide range of subfault sizes. In the current study, the program EXSIM developed by Motazedian and Atkinson has been modified for local or regional computations of strong-motion parameters such as PGA, PGV and PGD, which are essential for MMI estimation. To make the results more reasonable, we consider the impact of V30 for the
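
    The summation step of the finite-fault approach described above can be sketched schematically: each subfault contributes a point-source time series that arrives at the site after a rupture-propagation delay plus a wave-travel delay, and the delayed traces are summed. The sketch below illustrates only that bookkeeping, with windowed white noise standing in for Boore-style stochastic point-source motions; it is not the EXSIM algorithm, and the function names, geometry and velocities are invented.

```python
import numpy as np

dt, n = 0.01, 4096                          # time step (s) and samples per trace
t = np.arange(n) * dt

def point_source_trace(seed, duration=5.0):
    """Placeholder stochastic subfault motion: exponentially windowed white noise."""
    noise = np.random.default_rng(seed).standard_normal(n)
    return noise * np.exp(-t / duration)

def finite_fault_sum(subfault_xy, hypocenter_xy, site_xy, v_rupture=2.8, v_shear=3.5):
    """Sum delayed subfault contributions at one site (coordinates in km, velocities in km/s)."""
    total = np.zeros(n)
    for k, xy in enumerate(subfault_xy):
        rupture_delay = np.linalg.norm(xy - hypocenter_xy) / v_rupture
        travel_delay = np.linalg.norm(xy - site_xy) / v_shear
        shift = int(round((rupture_delay + travel_delay) / dt))
        trace = point_source_trace(seed=k)
        total[shift:] += trace[:n - shift]
    return total

# Toy geometry: a 20 km fault split into 10 subfaults, hypocenter at one end,
# site 30 km off the fault trace.
subfaults = np.array([[x, 0.0] for x in np.linspace(0.0, 20.0, 10)])
motion = finite_fault_sum(subfaults, hypocenter_xy=np.array([0.0, 0.0]),
                          site_xy=np.array([10.0, 30.0]))
print("peak of summed toy motion:", round(float(np.abs(motion).max()), 2))
```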

  19. Intelligent agents in data-intensive computing

    CERN Document Server

    Correia, Luís; Molina, José

    2016-01-01

    This book presents new approaches that advance research in all aspects of agent-based models, technologies, simulations and implementations for data intensive applications. The nine chapters contain a review of recent cross-disciplinary approaches in cloud environments and multi-agent systems, and important formulations of data intensive problems in distributed computational environments together with the presentation of new agent-based tools to handle those problems and Big Data in general. This volume can serve as a reference for students, researchers and industry practitioners working in or interested in joining interdisciplinary work in the areas of data intensive computing and Big Data systems using emergent large-scale distributed computing paradigms. It will also allow newcomers to grasp key concepts and potential solutions on advanced topics of theory, models, technologies, system architectures and implementation of applications in Multi-Agent systems and data intensive computing.

  20. Strong-motion observations of the M 7.8 Gorkha, Nepal, earthquake sequence and development of the N-shake strong-motion network

    Science.gov (United States)

    Dixit, Amod; Ringler, Adam; Sumy, Danielle F.; Cochran, Elizabeth S.; Hough, Susan E.; Martin, Stacey; Gibbons, Steven; Luetgert, James H.; Galetzka, John; Shrestha, Surya; Rajaure, Sudhir; McNamara, Daniel E.

    2015-01-01

    We present and describe strong-motion data observations from the 2015 M 7.8 Gorkha, Nepal, earthquake sequence collected using existing and new Quake-Catcher Network (QCN) and U.S. Geological Survey NetQuakes sensors located in the Kathmandu Valley. A comparison of QCN data with waveforms recorded by a conventional strong-motion (NetQuakes) instrument validates the QCN data. We present preliminary analysis of spectral accelerations, and peak ground acceleration and velocity for earthquakes up to M 7.3 from the QCN stations, as well as preliminary analysis of the mainshock recording from the NetQuakes station. We show that mainshock peak accelerations were lower than expected and conclude the Kathmandu Valley experienced a pervasively nonlinear response during the mainshock. Phase picks from the QCN and NetQuakes data are also used to improve aftershock locations. This study confirms the utility of QCN instruments to contribute to ground-motion investigations and aftershock response in regions where conventional instrumentation and open-access seismic data are limited. Initial pilot installations of QCN instruments in 2014 are now being expanded to create the Nepal–Shaking Hazard Assessment for Kathmandu and its Environment (N-SHAKE) network.

  1. Short-term hunger intensity changes following ingestion of a meal replacement bar for weight control.

    Science.gov (United States)

    Rothacker, Dana Q; Watemberg, Salo

    2004-05-01

    Meal replacement products for weight loss are popular and safe for most unsupervised consumers desiring to lose weight. Previously we reported that the thickness of meal replacement diet shakes had a direct and significant effect on hunger intensity during the first 2 h and that hunger intensity scores for liquid meal replacements were significantly below baseline for 3 h following consumption (Mattes & Rothacker, 2001). This study uses the same protocol to investigate meal replacement bars designed for overweight consumers. Subjects were prescreened to include only those that normally ate breakfast and liked chocolate. The bar used in this study contained 250 calories (about 30 more than most liquid diet shakes), 4 g dietary fiber, 14 g protein and 8 g fat. Subjects were instructed to consume the entire bar with a glass of water following an overnight fast when they would normally consume their first meal of the day and to assess their hunger on a 1 (not hungry at all) to 9 (as hungry as I have ever felt) scale before consumption, immediately after, and hourly for 6 h (only on typical weekdays). Similar assessments were made for the perception of stomach fullness (1=empty, 9=extremely full), strength of the desire to eat (1=no desire, 9=extremely strong) and thirst (1=not at all thirsty, 9=extremely thirsty). One hundred and eight subjects (23 male and 85 female) completed the study. No gender differences in satiety were found. Hunger ratings and desire to eat remained significantly below baseline for 5 h following consumption. Stomach fullness scores were significantly above baseline for 5 h. Thirst scores were significantly below baseline for 3 h. In conclusion, although the meal replacement diet bars contained only 30 more calories than the liquids, they provided an additional 2 h of hunger suppression from baseline that may have an impact on overall weight-loss success. These results support superior short-term hunger control with solid meal replacements.

  2. Influence of scale-dependent fracture intensity on block size distribution and rock slope failure mechanisms in a DFN framework

    Science.gov (United States)

    Agliardi, Federico; Galletti, Laura; Riva, Federico; Zanchi, Andrea; Crosta, Giovanni B.

    2017-04-01

    An accurate characterization of the geometry and intensity of discontinuities in a rock mass is key to assessing block size distribution and degree of freedom. These are the main controls on the magnitude and mechanisms of rock slope instabilities (structurally-controlled, step-path or mass failures) and on rock mass strength and deformability. Nevertheless, the use of over-simplified discontinuity characterization approaches, unable to capture the stochastic nature of discontinuity features, often hampers a correct identification of the dominant rock mass behaviour. Discrete Fracture Network (DFN) modelling tools provide new opportunities to overcome these limitations. Nevertheless, their ability to provide a representative picture of reality strongly depends on the quality and scale of field data collection. Here we used DFN modelling with FracmanTM to investigate the influence of fracture intensity, characterized on different scales and with different techniques, on the geometry and size distribution of generated blocks, in a rock slope stability perspective. We focused on a test site near Lecco (Southern Alps, Italy), where 600 m high cliffs in thickly-bedded limestones, folded at the slope scale, tower above Lake Como. We characterized the 3D slope geometry by Structure-from-Motion photogrammetry (range: 150-1500 m; point cloud density > 50 pts/m2). Since the nature and attributes of discontinuities are controlled by brittle failure processes associated with large-scale folding, we performed a field characterization of meso-structural features (faults and related kinematics, vein and joint associations) in different fold domains. We characterized the discontinuity populations identified by structural geology on different spatial scales ranging from outcrops (field surveys and photo-mapping) to large slope sectors (point cloud and photo-mapping). For each sampling domain, we characterized discontinuity orientation statistics and performed fracture mapping and circular

  3. Training Intensity Distribution Over a Four-Year Cycle in Olympic Champion Rowers: Different Roads Lead to Rio.

    Science.gov (United States)

    Plews, Daniel J; Laursen, Paul B

    2017-09-27

    The purpose of this study was to compare the training intensity distribution (TID) of the undefeated world-champion New Zealand (kiwi) men's rowing pair over a four-year Olympic cycle, across training phases, training years, and between individuals. Training data, including heart rate and boat speed, were recorded for the two athletes, rowing in the same boat, between March 2013 and August 2016, ending with the Rio Olympics final. Progressive exercise tests assessed the first (LT1) and second (LT2) lactate thresholds and associated heart rates, to determine the percentage of training performed below, between and above these demarcation points. Training an average of only 12-15 h/wk throughout the Olympic cycle, the mean percent distribution of time (±SD) at each training intensity was 80.4 ± 5.5% LT2 for Rower A and 67.3 ± 9.0% LT2 for Rower B. Across the years 2014-2016, Rower A performed most likely more training between LT1-LT2. Training appeared to become more polarised, with greater amounts of time spent at low intensity as training duration increased (R = 0.38-0.43). Two of the world's best rowers, rowing together in the same boat with an undefeated record across an Olympic cycle, travelled markedly different "roads to Rio" within the context of their TID, with one rower displaying a polarised model of TID, and the other pyramidal. However, TID trended towards becoming more polarised in both rowers with increased training duration.

  4. An analysis of ground shaking and transmission loss from infra sound generated by the 2011 Tohoku earthquake

    International Nuclear Information System (INIS)

    Walker, Kristoffer T.; Le Pichon, Alexis; Tae Sung Kim; Il-Young Che; Groot-Hedlin, Catherine de; Garces, Milton

    2013-01-01

    The 2011 Mw 9.0 Tohoku earthquake generated infrasound that was recorded by nine infrasonic arrays. Most arrays recorded a back azimuth variation with time due to the expanse of the source region. We use ray tracing to predict group velocities and back azimuth wind corrections. A Japanese accelerometer network recorded ground shaking with unprecedented spatial resolution. We back-projected infrasound from arrays IS44 (Kamchatka) and IS30 (Tokyo) to the source region and compare these results with acceleration data. IS44 illuminates the complex geometry of land areas that experienced shaking. IS30 illuminates two volcanoes and a flat area around the city of Sendai, where the maximum accelerations occurred. The arrays and epicentral region define three source-receiver profiles. The observed broadband energy transmission loss (TL) follows an exponential decay law. The best fitting model, which has parameters that are interpreted to include the effects of geometric spreading, scattering, and the maximum ratio of the effective sound speed in the stratosphere to that at the ground (accounting for stratospheric wind speed), yields a 65% variance reduction relative to predictions from a traditional TL relationship. This model is a simplified version of the model of Le Pichon et al. (2012), which yields an 83% variance reduction for a single frequency, implying that fine-scale atmospheric structure is required to explain the TL for stratospheric upwind propagation. Our results show that infrasonic arrays are sensitive to ground acceleration in the source region of mega-thrust earthquakes. The TL results may improve infrasonic amplitude scaling laws for explosive yield. (authors)
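
    The exponential decay law mentioned above can be illustrated with a generic curve fit of transmission loss versus range. The model form, data and parameter values below are placeholders (the authors' parameterization also folds in the stratospheric-to-ground sound-speed ratio), so this is only a sketch of the fitting step, not their model.

```python
import numpy as np
from scipy.optimize import curve_fit

def tl_model(distance_km, amplitude, decay_km):
    """Placeholder exponential decay law for transmission loss versus range."""
    return amplitude * np.exp(-distance_km / decay_km)

# Synthetic observations: amplitude ratio versus range with multiplicative noise.
rng = np.random.default_rng(0)
d = np.linspace(500.0, 4000.0, 12)
obs = tl_model(d, amplitude=1.0, decay_km=1500.0) * (1 + 0.1 * rng.standard_normal(d.size))

params, _ = curve_fit(tl_model, d, obs, p0=(1.0, 1000.0))
print("fitted amplitude and decay length (km):", np.round(params, 2))
```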

  5. Tolerance to LSD and DOB induced shaking behaviour: differential adaptations of frontocortical 5-HT(2A) and glutamate receptor binding sites.

    Science.gov (United States)

    Buchborn, Tobias; Schröder, Helmut; Dieterich, Daniela C; Grecksch, Gisela; Höllt, Volker

    2015-03-15

    Serotonergic hallucinogens, such as lysergic acid diethylamide (LSD) and dimethoxy-bromoamphetamine (DOB), provoke stereotype-like shaking behaviour in rodents, which is hypothesised to engage frontocortical glutamate receptor activation secondary to serotonin2A (5-HT2A) related glutamate release. Challenging this hypothesis, we here investigate whether tolerance to LSD and DOB correlates with frontocortical adaptations of 5-HT2A and/or overall-glutamate binding sites. LSD and DOB (0.025 and 0.25 mg/kg, i.p.) induce a ketanserin-sensitive (0.5 mg/kg, i.p., 30-min pretreatment) increase in shaking behaviour (including head twitches and wet dog shakes), which with repeated application (7× in 4 ds) is undermined by tolerance. Tolerance to DOB, as indexed by DOB-sensitive [(3)H]spiroperidol and DOB induced [(35)S]GTP-gamma-S binding, is accompanied by a frontocortical decrease in 5-HT2A binding sites and 5-HT2 signalling, respectively; glutamate-sensitive [(3)H]glutamate binding sites, in contrast, remain unchanged. As to LSD, 5-HT2 signalling and 5-HT2A binding, respectively, are not or only marginally affected, yet [(3)H]glutamate binding is significantly decreased. Correlation analysis interrelates tolerance to DOB to the reduced 5-HT2A (r=.80) as well as the unchanged [(3)H]glutamate binding sites (r=.84); tolerance to LSD, as opposed, shares variance with the reduction in [(3)H]glutamate binding sites only (r=.86). Given that DOB and LSD both induce tolerance, one correlating with 5-HT2A, the other with glutamate receptor adaptations, it might be inferred that tolerance can arise at either level. That is, if a hallucinogen (like LSD in our study) fails to induce 5-HT2A (down-)regulation, glutamate receptors (activated postsynaptic to 5-HT2A related glutamate release) might instead adapt and thus prevent further overstimulation of the cortex. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Carrier Mediated Distribution System (CAMDIS): a new approach for the measurement of octanol/water distribution coefficients.

    Science.gov (United States)

    Wagner, Bjoern; Fischer, Holger; Kansy, Manfred; Seelig, Anna; Assmus, Frauke

    2015-02-20

    Here we present a miniaturized assay, referred to as the Carrier-Mediated Distribution System (CAMDIS), for fast and reliable measurement of octanol/water distribution coefficients, log D(oct). By introducing a filter support for octanol, phase separation from water is facilitated and the tendency of emulsion formation at the interface is reduced. A guideline for the best practice of CAMDIS is given, describing a strategy to manage drug adsorption at the filter-supported octanol/buffer interface. We validated the assay on a set of 52 structurally diverse drugs with known shake-flask log D(oct) values. Excellent agreement with literature data (r(2) = 0.996, standard error of estimate, SEE = 0.111), high reproducibility (standard deviation, SD < 0.1 log D(oct) units), minimal sample consumption (10 μL of 100 μM DMSO stock solution) and a broad analytical range (log D(oct) range = -0.5 to 4.2) make CAMDIS a valuable tool for the high-throughput assessment of log D(oct). Copyright © 2014 Elsevier B.V. All rights reserved.
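
    The quantity being measured reduces to the base-10 logarithm of the ratio of compound concentration in the octanol phase to that in the aqueous buffer at equilibrium. A minimal sketch with illustrative names and numbers (this shows the definition of log D(oct), not the CAMDIS workflow itself):

```python
import math

def log_d_oct(conc_octanol, conc_buffer):
    """log D(oct) from equilibrium concentrations of the compound in the two phases."""
    return math.log10(conc_octanol / conc_buffer)

# Example: 95 uM recovered in octanol vs. 5 uM remaining in buffer -> log D ~ 1.28.
print(round(log_d_oct(95.0, 5.0), 2))
```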

  7. New Measurements of the Particle Size Distribution of Apollo 11 Lunar Soil 10084

    Science.gov (United States)

    McKay, D.S.; Cooper, B.L.; Riofrio, L.M.

    2009-01-01

    We have initiated a major new program to determine the grain size distribution of nearly all lunar soils collected in the Apollo program. Following the return of Apollo soil and core samples, a number of investigators, including our own group, performed grain size distribution studies and published the results [1-11]. Nearly all of these studies were done by sieving the samples, usually with a working fluid such as Freon™ or water. We have measured the particle size distribution of lunar soil 10084,2005 in water, using a Microtrac™ laser diffraction instrument. Details of our own sieving technique and protocol (also used in [11]) are given in [4]. While sieving usually produces accurate and reproducible results, it has disadvantages. It is very labor intensive and requires hours to days to perform properly. Even using automated sieve shaking devices, four or five days may be needed to sieve each sample, although multiple sieve stacks increase productivity. Second, sieving is subject to loss of grains through handling and weighing operations, and these losses are concentrated in the finest grain sizes. Loss from handling becomes a more acute problem when smaller amounts of material are used. While we were able to quantitatively sieve into 6 or 8 size fractions using starting soil masses as low as 50 mg, attrition and handling problems limit the practicality of sieving smaller amounts. Third, sieving below 10 or 20 microns is not practical because of the problems of grain loss and of smaller grains sticking to coarser grains. Sieving is completely impractical below about 5-10 microns. Consequently, sieving gives no information on the size distribution below approximately 10 microns, which includes the important submicrometer and nanoparticle size ranges. Finally, sieving creates a limited number of size bins and may therefore miss fine structure of the distribution which would be revealed by other methods that produce many smaller size bins.

  8. Modified stress intensity factor as a crack growth parameter applicable under large scale yielding conditions

    International Nuclear Information System (INIS)

    Yasuoka, Tetsuo; Mizutani, Yoshihiro; Todoroki, Akira

    2014-01-01

    High-temperature water stress corrosion cracking has high tensile stress sensitivity, and its growth rate has been evaluated using the stress intensity factor, which is a linear fracture mechanics parameter. Stress corrosion cracking mainly occurs and propagates around welded metals or heat-affected zones. These regions have complex residual stress distributions and yield strength distributions because of heat input effects. The authors previously reported that the stress intensity factor becomes inapplicable when steep residual stress distributions or yield strength distributions occur along the crack propagation path, because small-scale yielding conditions are violated around those distributions. When the stress intensity factor is modified to account for these distributions, the modified stress intensity factor may be used for crack growth evaluation under large-scale yielding. The authors previously proposed a modified stress intensity factor incorporating the stress distribution or yield strength distribution in front of the crack, using the rate of change of the stress intensity factor and yield strength. However, the applicable range of the modified stress intensity factor for large-scale yielding was not clarified. In this study, the range was analytically investigated by comparison with the J-integral solution. A three-point bending specimen with a parallel surface crack was adopted as the analytical model, and the stress intensity factor, the modified stress intensity factor and the equivalent stress intensity factor derived from the J-integral were calculated and compared under large-scale yielding conditions. The modified stress intensity factor was closer to the equivalent stress intensity factor than the unmodified stress intensity factor was. If deviation from the J-integral solution is acceptable up to 2%, the modified stress intensity factor is applicable up to 30% of the J-integral limit, while the stress intensity factor is applicable up to 10%. These results showed that

  9. Mobile Phones and Social Media Empower the Citizen Seismologist

    Science.gov (United States)

    Bray, J.; Dashti, S.; Reilly, J.; Bayen, A. M.; Glaser, S. D.

    2014-12-01

    Emergency responders must "see" the effects of an earthquake clearly and rapidly for effective response. Mobile phone and information technology can be used to measure ground motion intensity parameters and relay that information to emergency responders. However, the phone sensor is an imperfect device and has a limited operational range. Thus, shake table tests were performed to evaluate their reliability as seismic monitoring instruments. Representative handheld devices, either rigidly connected to the table or free to move, measured shaking intensity parameters well. Bias in 5%-damped spectral accelerations measured by phones was less than 0.05 and 0.2 [log(g)] during one-dimensional (1-D) and three-dimensional (3-D) shaking, respectively, at frequencies ranging from 1 Hz to 10 Hz. They did tend to overestimate the Arias Intensity, but this error declined for stronger motions with larger signal-to-noise ratios. Additionally, much of the data about infrastructure performance and geotechnical effects of an earthquake are lost soon after an earthquake occurs as efforts move to the recovery phase. A better methodology for reliable and rapid collection of perishable hazards data will enhance scientific inquiry and accelerate the building of disaster-resilient cities. Post-earthquake reconnaissance efforts can be aided through the strategic collection and reuse of social media data and other remote sources of information. This is demonstrated through their use following the NSF-sponsored GEER response to the September 2013 flooding in Colorado. With these ubiquitous measurement devices in the hands of the citizen seismologist, a more accurate and rapid portrayal of the damage distribution during an earthquake may be provided to emergency responders and to the public.
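
    The Arias Intensity mentioned above is a standard integral measure of shaking, Ia = (π / 2g) ∫ a(t)² dt, and can be computed directly from an acceleration record. A minimal sketch with a synthetic record (the sampling rate, amplitudes and function name are invented for illustration):

```python
import numpy as np

def arias_intensity(acc, dt, g=9.81):
    """Arias Intensity (m/s) of an acceleration time series in m/s^2 sampled every dt seconds."""
    return np.pi / (2.0 * g) * np.trapz(acc**2, dx=dt)

# Example with a synthetic record: 20 s of Gaussian-modulated noise at 100 Hz.
rng = np.random.default_rng(0)
t = np.arange(0.0, 20.0, 0.01)
acc = 0.5 * 9.81 * np.exp(-((t - 8.0) / 4.0) ** 2) * rng.standard_normal(t.size)
print(round(arias_intensity(acc, dt=0.01), 3), "m/s")
```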

  10. Comparison of Soxhlet and Shake Extraction of Polycyclic Aromatic Hydrocarbons from Coal Tar Polluted Soils Sampled in the Field

    DEFF Research Database (Denmark)

    Lindhardt, Bo; Holst, Helle; Christensen, Thomas Højlund

    1994-01-01

    This study compares three extraction methods for PAHs in coal tar polluted soil: 3-times repeated shaking of the soil with dichloromethane-methanol (1:1), Soxhlet extraction with dichloromethane, and Soxhlet extraction with dichloromethane followed by Soxhlet extraction with methanol....... The extraction efficiencies were determined for ten selected PAHs in triplicate samples of six soils sampled at former gasworks sites. The samples covered a wide range of PAH concentrations, from 0.6 to 397 mg/kg soil. Soxhlet extraction with dichloromethane followed by Soxhlet extraction with methanol...

  11. Evolution of arbitrary moments of radiant intensity distribution for partially coherent general beams in atmospheric turbulence

    Science.gov (United States)

    Dan, Youquan; Xu, Yonggen

    2018-04-01

    The evolution law of arbitrary order moments of the Wigner distribution function, which can be applied to the different spatial power spectra, is obtained for partially coherent general beams propagating in atmospheric turbulence using the extended Huygens-Fresnel principle. A coupling coefficient of radiant intensity distribution (RID) in turbulence is introduced. Analytical expressions of the evolution of the first five-order moments, kurtosis parameter, coupling coefficient of RID for general beams in turbulence are derived, and the formulas are applied to Airy beams. Results show that there exist two types for general beams in turbulence. A larger value of kurtosis parameter for Airy beams also reveals that coupling effect due to turbulence is stronger. Both theoretical analysis and numerical results show that the maximum value of kurtosis parameter for an Airy beam in turbulence is independent of turbulence strength parameter and is only determined by inner scale of turbulence. Relative angular spread, kurtosis and coupling coefficient are less influenced by turbulence for Airy beams with a smaller decay factor and a smaller initial width of the first lobe.

  12. Nonsequential multiphoton double ionization of He in intense laser - a QED approach

    International Nuclear Information System (INIS)

    Bhattacharyya, S.; Mazumder, Mina; Chakrabarti, J.; Faisal, F.H.M.

    2010-01-01

    The non-sequential multiphoton double ionization (NSDI) of He in an intense laser field is not yet completely understood, even more so for spin-resolved currents. We are tempted to use QED and Feynman diagrams to obtain spin-polarized currents. The Hartree-Fock (HF) correlated ground-state wave function of the He atom is considered in a circularly polarized laser field. In the QED approach one of the electrons is directly ionized by photon absorption while the second electron is shaken off due to the change in the internal potential of the atom. In the He atom the two ionized electrons can only be in the singlet spin state. Spin-symmetric and spin-flip transitions are eventually possible for the direct and the shake-off electrons. In an ensemble of (HF-type) He atoms the ionized Volkov electrons may acquire four pairs of momenta, indicating e-e correlation in the final state. The Coulomb correction is taken care of through the Sommerfeld factor.

  13. Shake flask decolourization of direct dye solar golden yellow R by pleurotus ostreatus

    International Nuclear Information System (INIS)

    Jilani, K.; Asghar, M.; Bhatti, H.N.; Mushtaq, Z.

    2011-01-01

    Different on-site treatment technologies are in practice for industrial wastewaters, but bioremediation using white rot fungi is the most attractive option due to complete degradation of the pollutants to non-toxic end products. Three direct dyes (Solar golden yellow R, Solar brilliant red BA and Solar orange RSN) were decolourized using the white rot fungus (WRF) Pleurotus ostreatus. The best decolourized dye, Solar golden yellow R, was selected for subsequent optimization studies for decolourization. Under optimum conditions Pleurotus ostreatus caused 90.32% decolourization of a 0.01% Solar golden yellow R solution within two days of shake flask incubation at pH 3.5 and 30 °C in Kirk's basal nutrient medium with added 1% starch and 0.01% ammonium sulphate as carbon and nitrogen sources, respectively. Ligninolytic enzyme activities were correlated to dye decolourization, and a maximum laccase activity of 356.23 U/mL was noted in the maximally decolourized medium. (author)

  14. Intensity-Stabilized Fast-Scanned Direct Absorption Spectroscopy Instrumentation Based on a Distributed Feedback Laser with Detection Sensitivity down to 4 × 10−6

    Directory of Open Access Journals (Sweden)

    Gang Zhao

    2016-09-01

    A novel, intensity-stabilized, fast-scanned, direct absorption spectroscopy (IS-FS-DAS) instrumentation, based on a distributed feedback (DFB) diode laser, is developed. A fiber-coupled polarization rotator and a fiber-coupled polarizer are used to stabilize the intensity of the laser, which significantly reduces its relative intensity noise (RIN). The influence of white noise is reduced by fast scanning over the spectral feature (at 1 kHz), followed by averaging. By combining these two noise-reducing techniques, it is demonstrated that direct absorption spectroscopy (DAS) can be swiftly performed down to a limit of detection (LOD) (1σ) of 4 × 10−6, which opens up a number of new applications.
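
    The benefit of fast scanning followed by averaging can be illustrated with a toy model: white detector noise on repeated scans averages down roughly as 1/√N, eventually exposing an absorption dip of order 10^-6 to 10^-4 like the quoted LOD. All signal shapes, noise levels and names below are assumptions for illustration, not the instrument's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 500)                      # scan axis (arbitrary units)
true_signal = 1.0 - 1e-4 * np.exp(-(x / 0.1) ** 2)   # weak absorption dip of depth 1e-4

def averaged_scan(n_scans, noise_rms=1e-3):
    """Average n_scans noisy copies of the scan; white noise drops ~1/sqrt(n)."""
    scans = true_signal + noise_rms * rng.standard_normal((n_scans, x.size))
    return scans.mean(axis=0)

for n in (1, 100, 10000):
    residual = averaged_scan(n) - true_signal
    print(n, "scans -> residual noise rms ~", float(residual.std()))
```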

  15. Low-SNR Capacity of MIMO Optical Intensity Channels

    KAUST Repository

    Chaaban, Anas; Rezki, Zouheir; Alouini, Mohamed-Slim

    2017-01-01

    The capacity of the multiple-input multiple-output (MIMO) optical intensity channel is studied, under both average and peak intensity constraints. We focus on low SNR, which can be modeled as the scenario where both constraints proportionally vanish, or where the peak constraint is held constant while the average constraint vanishes. A capacity upper bound is derived, and is shown to be tight at low SNR under both scenarios. The capacity achieving input distribution at low SNR is shown to be a maximally-correlated vector-binary input distribution. Consequently, the low-SNR capacity of the channel is characterized. As a byproduct, it is shown that for a channel with peak intensity constraints only, or with peak intensity constraints and individual (per aperture) average intensity constraints, a simple scheme composed of coded on-off keying, spatial repetition, and maximum-ratio combining is optimal at low SNR.

  16. Low-SNR Capacity of MIMO Optical Intensity Channels

    KAUST Repository

    Chaaban, Anas

    2017-09-18

    The capacity of the multiple-input multiple-output (MIMO) optical intensity channel is studied, under both average and peak intensity constraints. We focus on low SNR, which can be modeled as the scenario where both constraints proportionally vanish, or where the peak constraint is held constant while the average constraint vanishes. A capacity upper bound is derived, and is shown to be tight at low SNR under both scenarios. The capacity achieving input distribution at low SNR is shown to be a maximally-correlated vector-binary input distribution. Consequently, the low-SNR capacity of the channel is characterized. As a byproduct, it is shown that for a channel with peak intensity constraints only, or with peak intensity constraints and individual (per aperture) average intensity constraints, a simple scheme composed of coded on-off keying, spatial repetition, and maximum-ratio combining is optimal at low SNR.
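
    The low-SNR scheme described as optimal above (coded on-off keying with spatial repetition and maximum-ratio combining) can be caricatured with a toy intensity-channel simulation. Gaussian receiver noise, fixed random gains and the uncoded bit-by-bit detector below are simplifying assumptions for illustration only; they are not the channel model or coding analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx, n_bits = 2, 2, 20000
A = 1.0                                            # peak intensity per aperture
H = rng.uniform(0.5, 1.5, size=(n_rx, n_tx))       # assumed intensity channel gains
sigma = 4.0                                        # noise std -> low-SNR regime

bits = rng.integers(0, 2, n_bits)
x = A * bits                                       # on-off keying, repeated on all tx apertures
h = H.sum(axis=1)                                  # total gain seen at each receive aperture
y = h[:, None] * x[None, :] + sigma * rng.standard_normal((n_rx, n_bits))

# Maximum-ratio combining: weight each receive aperture by its signal gain,
# then threshold midway between the combined "off" and "on" levels.
z = h @ y
threshold = 0.5 * A * float(h @ h)
bit_hat = (z > threshold).astype(int)
print("bit error rate at low SNR:", float(np.mean(bit_hat != bits)))
```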

  17. A method to study the characteristics of 3D dose distributions created by superposition of many intensity-modulated beams delivered via a slit aperture with multiple absorbing vanes

    International Nuclear Information System (INIS)

    Webb, S.; Oldham, M.

    1996-01-01

    Highly conformal dose distributions can be created by the superposition of many radiation fields from different directions, each with its intensity spatially modulated by the method known as tomotherapy. At the planning stage, the intensity of radiation of each beam element (or bixel) is determined by working out the effect of superposing the radiation through all bixels with the elemental dose distribution specified as that from a single bixel with all its neighbours closed (the 'independent-vane' (IV) model). However, at treatment-delivery stage, neighbouring bixels may not be closed. Instead the slit beam is delivered with parts of the beam closed for different periods of time to create the intensity modulation. As a result, the 3D dose distribution actually delivered will differ from that determined at the planning stage if the elemental beams do not obey the superposition principle. The purpose of this paper is to present a method to investigate and quantify the relation between planned and delivered 3D dose distributions. Two modes of inverse planning have been performed: (i) with a fit to the measured elemental dose distribution and (ii) with a 'stretched fit' obeying the superposition principle as in the PEACOCK 3D planning system. The actual delivery has been modelled as a series of component deliveries (CDs). The algorithm for determining the component intensities and the appropriate collimation conditions is specified. The elemental beam from the NOMOS MIMiC collimator is too narrow to obey the superposition principle although it can be 'stretched' and fitted to a superposition function. Hence there are differences between the IV plans made using modes (i) and (ii) and the raw and the stretched elemental beam, and also differences with CD delivery. This study shows that the differences between IV and CD dose distributions are smaller for mode (ii) inverse planning than for mode (i), somewhat justifying the way planning is done within PEACOCK. Using a

  18. CISN ShakeAlert Earthquake Early Warning System Monitoring Tools

    Science.gov (United States)

    Henson, I. H.; Allen, R. M.; Neuhauser, D. S.

    2015-12-01

    CISN ShakeAlert is a prototype earthquake early warning system being developed and tested by the California Integrated Seismic Network. The system has recently been expanded to support redundant data processing and communications. It now runs on six machines at three locations with ten Apache ActiveMQ message brokers linking together 18 waveform processors, 12 event association processes and 4 Decision Module alert processes. The system ingests waveform data from about 500 stations and generates many thousands of triggers per day, from which a small portion produce earthquake alerts. We have developed interactive web browser system-monitoring tools that display near real time state-of-health and performance information. This includes station availability, trigger statistics, communication and alert latencies. Connections to regional earthquake catalogs provide a rapid assessment of the Decision Module hypocenter accuracy. Historical performance can be evaluated, including statistics for hypocenter and origin time accuracy and alert time latencies for different time periods, magnitude ranges and geographic regions. For the ElarmS event associator, individual earthquake processing histories can be examined, including details of the transmission and processing latencies associated with individual P-wave triggers. Individual station trigger and latency statistics are available. Detailed information about the ElarmS trigger association process for both alerted events and rejected events is also available. The Google Web Toolkit and Map API have been used to develop interactive web pages that link tabular and geographic information. Statistical analysis is provided by the R-Statistics System linked to a PostgreSQL database.

  19. Innovation system and knowledge-intensive entrepreneurship

    DEFF Research Database (Denmark)

    Timmermans, Bram

    2011-01-01

    The goal of this deliverable is to investigate the properties and the nature of knowledge-intensive entrepreneurship as a largely distributed phenomenon at firm, sector and national levels in Denmark. Following the guidelines previously developed in Deliverable 2.2.1, "Innovation systems and knowledge-intensive entrepreneurship: Analytical framework and guidelines for case study research", I will investigate the interplay between national innovation systems and knowledge-intensive entrepreneurship by focusing on two main sectors: machine tools, and computer and related activities.

  20. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    Science.gov (United States)

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the
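
    The segmentation idea, classifying each pixel by its posterior probability under a two-component mixture of background and foreground intensities, can be sketched with a simplified stand-in. The paper fits a bivariate gamma (background) plus bivariate t (foreground) mixture by EM; the sketch below substitutes a plain Gaussian mixture on synthetic (R, G) intensities purely to illustrate the classification step, and all distributions and names are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.gamma(shape=4.0, scale=50.0, size=(4000, 2))     # dim, skewed (R, G) pixels
foreground = 800.0 + 100.0 * rng.standard_t(df=5, size=(1000, 2))  # bright, heavy-tailed spot pixels
pixels = np.vstack([background, foreground])

# Fit a two-component mixture and classify by posterior probability of the
# brighter (foreground) component, as a proxy for the gamma-t mixture EM step.
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
posterior = gmm.predict_proba(pixels)
fg_comp = int(np.argmax(gmm.means_.mean(axis=1)))
is_foreground = posterior[:, fg_comp] > 0.5
print("estimated foreground fraction:", round(float(is_foreground.mean()), 3))
```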

  1. Statistical fluctuations of electromagnetic transition intensities and electromagnetic moments in pf-shell nuclei

    International Nuclear Information System (INIS)

    Hamoudi, A.; Shahaliev, E.; Nazmitdinov, R. G.; Alhassid, Y.

    2002-01-01

    We study the fluctuation properties of ΔT=0 electromagnetic transition intensities and electromagnetic moments in A∼60 nuclei within the framework of the interacting shell model, using a realistic effective interaction for pf-shell nuclei with a 56Ni core. The distributions of the transition intensities and of the electromagnetic moments are well described by the Gaussian orthogonal ensemble of random matrices. In particular, the transition intensity distributions follow a Porter-Thomas distribution. When diagonal matrix elements (i.e., moments) are included in the analysis of transition intensities, the distributions remain Porter-Thomas except for the isoscalar M1. This deviation is explained in terms of the structure of the isoscalar M1 operator.
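
    The Porter-Thomas law referred to above is the statement that, in the GOE limit, normalized transition intensities y = |amplitude|²/⟨|amplitude|²⟩ follow a chi-squared distribution with one degree of freedom. A quick numerical check of that statement, independent of the shell-model calculation itself:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
amplitudes = rng.standard_normal(100000)                 # Gaussian amplitudes (GOE limit)
intensities = amplitudes**2 / np.mean(amplitudes**2)     # normalized intensities

# Compare the sampled intensities with the chi-squared(k=1) prediction,
# P(y) proportional to y^(-1/2) * exp(-y/2).
ks = stats.kstest(intensities, stats.chi2(df=1).cdf)
print("KS statistic vs chi2(1):", round(ks.statistic, 4))
```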

  2. Ground-Motion Simulations of Scenario Earthquakes on the Hayward Fault

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B; Graves, R; Larsen, S; Ma, S; Rodgers, A; Ponce, D; Schwartz, D; Simpson, R; Graymer, R

    2009-03-09

    We compute ground motions in the San Francisco Bay area for 35 Mw 6.7-7.2 scenario earthquake ruptures involving the Hayward fault. The modeled scenarios vary in rupture length, hypocenter, slip distribution, rupture speed, and rise time. This collaborative effort involves five modeling groups, using different wave propagation codes and domains of various sizes and resolutions, computing long-period (T > 1-2 s) or broadband (T > 0.1 s) synthetic ground motions for overlapping subsets of the suite of scenarios. The simulations incorporate 3-D geologic structure and illustrate the dramatic increase in intensity of shaking for Mw 7.05 ruptures of the entire Hayward fault compared with Mw 6.76 ruptures of the southern two-thirds of the fault. The area subjected to shaking stronger than MMI VII increases from about 10% of the San Francisco Bay urban area in the Mw 6.76 events to more than 40% of the urban area for the Mw 7.05 events. Similarly, combined rupture of the Hayward and Rodgers Creek faults in a Mw 7.2 event extends shaking stronger than MMI VII to nearly 50% of the urban area. For a given rupture length, the synthetic ground motions exhibit the greatest sensitivity to the slip distribution and location inside or near the edge of sedimentary basins. The hypocenter also exerts a strong influence on the amplitude of the shaking due to rupture directivity. The synthetic waveforms exhibit a weaker sensitivity to the rupture speed and are relatively insensitive to the rise time. The ground motions from the simulations are generally consistent with Next Generation Attenuation ground-motion prediction models but contain long-period effects, such as rupture directivity and amplification in shallow sedimentary basins that are not fully captured by the ground-motion prediction models.

  3. Shake, Rattle and Roll Horror Franchise and the Specter of Nation-Formation in the Philippines

    Directory of Open Access Journals (Sweden)

    Rolando B. Tolentino

    2016-06-01

    The paper looks into the most successful horror franchise in Philippine history. Shake, Rattle and Roll has had a successful 14-film run since its introduction in 1984; each film is composed of three segments, each tackling a horrific experience: ghosts and folk creatures in provincial and city settings. My paper maps out the narratives and the social and political contexts of the series. Specifically, the period beginning in 1984 marks a series of national transitions: the political crisis of the Marcoses, People Power 1, the rise of Corazon Aquino, the economic crises in 1997 and 2007, the ousting of Joseph Estrada, the rise of neoliberalism, the coming of Noynoy Aquino, and the incarceration of Gloria Arroyo. How might these films also be read as analogs of the anxieties of the nation?

  4. Earthquake Early Warning: User Education and Designing Effective Messages

    Science.gov (United States)

    Burkett, E. R.; Sellnow, D. D.; Jones, L.; Sellnow, T. L.

    2014-12-01

    The U.S. Geological Survey (USGS) and partners are transitioning from test-user trials of a demonstration earthquake early warning system (ShakeAlert) to deciding and preparing how to implement the release of earthquake early warning information, alert messages, and products to the public and other stakeholders. An earthquake early warning system uses seismic station networks to rapidly gather information about an occurring earthquake and send notifications to user devices ahead of the arrival of potentially damaging ground shaking at their locations. Earthquake early warning alerts can thereby allow time for actions to protect lives and property before arrival of damaging shaking, if users are properly educated on how to use and react to such notifications. A collaboration team of risk communications researchers and earth scientists is researching the effectiveness of a chosen subset of potential earthquake early warning interface designs and messages, which could be displayed on a device such as a smartphone. Preliminary results indicate, for instance, that users prefer alerts that include 1) a map to relate their location to the earthquake and 2) instructions for what to do in response to the expected level of shaking. A number of important factors must be considered to design a message that will promote appropriate self-protective behavior. While users prefer to see a map, how much information can be processed in limited time? Are graphical representations of wavefronts helpful or confusing? The most important factor to promote a helpful response is the predicted earthquake intensity, or how strong the expected shaking will be at the user's location. Unlike Japanese users of early warning, few Californians are familiar with the earthquake intensity scale, so we are exploring how differentiating instructions between intensity levels (e.g., "Be aware" for lower shaking levels and "Drop, cover, hold on" at high levels) can be paired with self-directed supplemental

  5. Two-dimensional distribution of electron temperature in ergodic layer of LHD measured from line intensity ratio of CIV and NeVIII

    International Nuclear Information System (INIS)

    Wang, Erhui; Morita, Shigeru; Goto, Motoshi; Murakami, Izumi; Oishi, Tetsutarou; Dong, Chunfeng

    2013-01-01

    Two-dimensional distributions of impurity lines emitted from the ergodic layer with stochastic magnetic field lines in the Large Helical Device (LHD) have been observed using a space-resolved extreme ultraviolet (EUV) spectrometer. The two-dimensional electron temperature distribution in the ergodic layer is successfully measured using the line intensity ratio of the Li-like NeVIII 2s-3p (2S1/2-2P3/2: 88.09 Å, 2S1/2-2P1/2: 88.13 Å) to 2p-3s (2P1/2-2S1/2: 102.91 Å, 2P3/2-2S1/2: 103.09 Å) transitions emitted from the radial location near the Last Closed Flux Surface (LCFS). The intensity ratio analyzed with the ADAS code shows no dependence on the electron density below 10^14 cm^-3. The result indicates a slightly higher temperature, i.e., 220 eV, at the poloidal location on the high-field side near the helical coils called the O-point, compared to the temperature near the X-point, i.e., 170 eV. The electron temperature profile is also measured at the edge boundary of the ergodic layer using the line intensity ratio of the Li-like CIV 2p-3d (2P1/2-2D3/2: 384.03 Å, 2P3/2-2D5/2: 384.18 Å) to 2p-3s (2P1/2-2S1/2: 419.53 Å, 2P3/2-2S1/2: 419.71 Å) transitions. The intensity ratios analyzed with the CHIANTI, ADAS and T. Kawachi codes show a slightly higher temperature near the O-point, i.e., 25 eV for CHIANTI, 21 eV for ADAS and 11 eV for T. Kawachi's code, compared to the temperature at the X-point, i.e., 15-21 eV for CHIANTI, 9-15 eV for ADAS and 6-9 eV for T. Kawachi's code. This suggests that the transport coefficient in the ergodic layer varies with the three-dimensional structure. (author)
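
    Line-ratio thermometry of this kind boils down to inverting a precomputed ratio-versus-Te curve (from ADAS, CHIANTI or a similar atomic code) at the measured ratio. The sketch below uses an invented, monotonic placeholder curve solely to show the interpolation step; real curves must come from the atomic codes, and all numbers here are illustrative.

```python
import numpy as np

# Placeholder ratio-versus-temperature table standing in for atomic-code output.
te_grid = np.array([5.0, 10.0, 20.0, 50.0, 100.0, 200.0, 400.0])    # eV
ratio_grid = np.array([0.40, 0.55, 0.75, 1.05, 1.30, 1.55, 1.80])    # I(2s-3p)/I(2p-3s)

def te_from_ratio(measured_ratio):
    """Interpolate electron temperature (eV) from a measured line intensity ratio."""
    return float(np.interp(measured_ratio, ratio_grid, te_grid))

print(te_from_ratio(1.5))   # a couple hundred eV for this made-up curve
```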

  6. A comparison of radiation treatment techniques for carcinomas of the larynx and hypopharynx using 3-D dose distributions and intensity modulation

    International Nuclear Information System (INIS)

    Morris, David; Miller, Elizabeth P.; Rosenman, Julian; Sailer, Scott; Tepper, Joel

    1997-01-01

    Purpose/Objective: Patients with carcinomas of the larynx and hypopharynx often cannot be treated effectively with a lateral/low anterior neck combination because the midline block will cover the tumor bed. Common alternatives to this approach often produce serious dose inhomogeneities. Our study sought to determine whether modern 3D treatment planning techniques with intensity modulation could overcome these dose inhomogeneities and also allow us to omit the problematic posterior neck electron boost, which often gives poor nodal coverage. Materials and Methods: Dose distribution studies were performed on patients who had received post-operative radiation following laryngectomy for advanced-stage cancer. The clinical tumor volume or CTV (surgical bed and at-risk nodal stations) was defined on planning CT images. Four commonly used alternative plans, the MGH 'mini-mantle', 'kicked-out' laterals, the University of Florida 3-field, and a standard 3-field with a lateral cord block, were evaluated using the Plan UNC (PLUNC) treatment planning software. New plans that might also preclude the use of posterior neck electrons were also evaluated. The plans were then intensity modulated to reduce the well-known cold spots as described previously in IJROBP August 1991, Vol. 21, No. 3. All dose distributions were evaluated for dose homogeneity, minimum and maximum CTV dose, and dose to normal critical structures. The inhomogeneities were determined using standard dose-volume histogram (DVH) techniques, but positional information was gathered by dividing the CTV into sensible anatomic regions and studying the DVH for each separately. Results: For the mini-mantle approach, intensity modulation substantially improved the dose inhomogeneities but did not affect the minimum CTV dose and had no effect on the cord dose. Intensity modulation decreased the maximum CTV dose (110% vs. 130%) but had the undesirable effect of lessening the dose difference between cord and CTV. For the kicked

  7. Three-dimensional cluster formation and structure in heterogeneous dose distribution of intensity modulated radiation therapy.

    Science.gov (United States)

    Chao, Ming; Wei, Jie; Narayanasamy, Ganesh; Yuan, Yading; Lo, Yeh-Chi; Peñagarícano, José A

    2018-05-01

    To investigate three-dimensional cluster structure and its correlation to clinical endpoint in heterogeneous dose distributions from intensity modulated radiation therapy. Twenty-five clinical plans from twenty-one head and neck (HN) patients were used for a phenomenological study of the cluster structure formed from the dose distributions of organs at risk (OARs) close to the planning target volumes (PTVs). Initially, OAR clusters were examined for pattern consistency among ten HN patients and among five clinically similar plans from another HN patient. Second, clusters of the esophagus from another ten HN patients were scrutinized to correlate their sizes to radiobiological parameters. Finally, an extensive Monte Carlo (MC) procedure was implemented to gain deeper insights into the behavioral properties of the cluster formation. Clinical studies showed that OAR clusters had drastic differences despite similar PTV coverage among different patients, and the radiobiological parameters failed to positively correlate with the cluster sizes. The MC study demonstrated the inverse relationship between the cluster size and the cluster connectivity, and the nonlinear changes in cluster size with dose thresholds. In addition, the clusters were insensitive to the shape of OARs. The results demonstrated that the cluster size could serve as an insightful index of normal tissue damage. The clinical outcome of plans with the same dose-volume metrics might therefore differ. Copyright © 2018 Elsevier B.V. All rights reserved.
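
    One plausible reading of "cluster" here is a connected component of above-threshold voxels in the OAR dose grid. The sketch below shows that construction with 6-connected labelling; the dose array and threshold are synthetic and purely illustrative, not the paper's data.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
dose = rng.normal(loc=40.0, scale=10.0, size=(40, 40, 40))  # synthetic OAR dose grid (Gy)
threshold = 50.0                                            # illustrative dose threshold (Gy)

mask = dose >= threshold
# 6-connected labelling of above-threshold voxels into clusters
structure = ndimage.generate_binary_structure(3, 1)
labels, n_clusters = ndimage.label(mask, structure=structure)

sizes = np.bincount(labels.ravel())[1:]          # voxel count per cluster (skip background label 0)
largest = sizes.max() if n_clusters else 0
print(n_clusters, largest)
```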

  8. Hydrogen Balmer alpha intensity distributions and line profiles from multiple scattering theory using realistic geocoronal models

    Science.gov (United States)

    Anderson, D. E., Jr.; Meier, R. R.; Hodges, R. R., Jr.; Tinsley, B. A.

    1987-01-01

    The H Balmer alpha nightglow is investigated by using Monte Carlo models of asymmetric geocoronal atomic hydrogen distributions as input to a radiative transfer model of solar Lyman-beta radiation in the thermosphere and atmosphere. It is shown that it is essential to include multiple scattering of Lyman-beta radiation in the interpretation of Balmer alpha airglow data. Observations of diurnal variation in the Balmer alpha airglow showing slightly greater intensities in the morning relative to evening are consistent with theory. No evidence is found for anything other than a single sinusoidal diurnal variation of exobase density. Dramatic changes in effective temperature derived from the observed Balmer alpha line profiles are expected on the basis of changing illumination conditions in the thermosphere and exosphere as different regions of the sky are scanned.

  9. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    Science.gov (United States)

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets at a 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/.
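
    A generic binomial-model score for a peptide-spectrum match can be written as the probability of seeing at least the observed number of fragment matches by chance. The sketch below is a textbook binomial tail, not ProVerB's actual scoring function; the inputs are illustrative.

```python
from scipy.stats import binom

def binomial_match_score(n_theoretical, k_matched, p_random):
    """Generic binomial PSM score: probability of >= k_matched matches by chance.

    This is a plain binomial survival function, not ProVerB's exact scoring function;
    n_theoretical, k_matched and p_random are illustrative inputs.
    """
    return binom.sf(k_matched - 1, n_theoretical, p_random)  # P(X >= k_matched)

# Example: 40 theoretical fragment ions, 12 matched, 5% chance of a random peak match.
print(binomial_match_score(40, 12, 0.05))
```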

  10. Production of a recombinant phospholipase A2 in Escherichia coli using resonant acoustic mixing that improves oxygen transfer in shake flasks.

    Science.gov (United States)

    Valdez-Cruz, Norma A; Reynoso-Cereceda, Greta I; Pérez-Rodriguez, Saumel; Restrepo-Pineda, Sara; González-Santana, Jesus; Olvera, Alejandro; Zavala, Guadalupe; Alagón, Alejandro; Trujillo-Roldán, Mauricio A

    2017-07-25

    Shake flasks are widely used during the development of bioprocesses for recombinant proteins. Cultures of recombinant Escherichia coli under orbital mixing (OM) suffer an oxygen limitation that negatively affects biomass growth and recombinant-protein production. With the aim of improving mixing and aeration in shake flask cultures, we compared cultures subjected to OM with the novel resonant acoustic mixing (RAM), which applies acoustic energy, using E. coli BL21-Gold (DE3): a producer of recombinant phospholipase A2 (rPLA2) from Micrurus laticollaris snake venom. Comparing OM with RAM (200 rpm vs. 7.5 g) at the same initial volumetric oxygen transfer coefficient (kLa ≈ 80 h⁻¹), ~69% less biomass was obtained with OM than with RAM. Two further conditions with agitation increased up to the maximal setting (12.5 and 20 g) yielded ~1.6- and ~1.4-fold more biomass than cultures at 7.5 g. Moreover, the specific growth rate was statistically similar in all cultures carried out in RAM, but ~1.5-fold higher than in cultures carried out under OM. Almost half of the glucose was consumed in OM, whereas between 80 and 100% of the glucose was consumed in RAM cultures, doubling the biomass-per-glucose yields. Differential organic acid production was observed, but acetate production was prevented at the maximal RAM setting (20 g). The amount of rPLA2 in both OM and RAM cultures represented 38 ± 5% of the insoluble protein. A smaller proportion of α-helices and β-sheets was observed by ATR-FTIR in inclusion bodies (IBs) purified from cultures carried out under OM than in those from RAM. At maximal agitation by RAM, the internal E. coli localization patterns of protein aggregation changed, as did IB proteolytic degradation, in conjunction with the formation of small external vesicles, although these changes did not significantly affect the cell survival response. In moderate-cell-density recombinant E. coli BL21-Gold (DE3) cultures, the agitation increases in

  11. Multi-exposure high dynamic range image synthesis with camera shake correction

    Science.gov (United States)

    Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image, such as for crack inspection, the algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake will result in a ghost effect, which blurs the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene. These assumptions limit their application. At present, widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time-consuming. In order to rapidly obtain a high-quality HDR image without ghost effect, we present an efficient Low Dynamic Range (LDR) image capture approach and propose a registration method based on Oriented FAST and Rotated BRIEF (ORB) features and histogram equalization, which can eliminate the illumination differences between the LDR images. The fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without ghost effect by registering and fusing four multi-exposure images.
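
    A minimal OpenCV sketch of the pipeline described (histogram equalization to reduce exposure differences, ORB feature matching, RANSAC homography, then fusion of the aligned frames) is given below. It is not the authors' implementation: the file names are hypothetical and Mertens exposure fusion is used as a stand-in for the paper's fusion step.

```python
import cv2
import numpy as np

def align_to_reference(ref_gray, mov_gray, mov_color):
    """Warp a moving frame onto the reference frame using ORB features.

    Histogram equalization is applied first so feature matching is less sensitive
    to the different exposures of the LDR frames.
    """
    ref_eq = cv2.equalizeHist(ref_gray)
    mov_eq = cv2.equalizeHist(mov_gray)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(ref_eq, None)
    kp2, des2 = orb.detectAndCompute(mov_eq, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:500]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = ref_gray.shape
    return cv2.warpPerspective(mov_color, H, (w, h))

# Hypothetical file names for four bracketed exposures; the first frame is the reference.
paths = ["exp_0.png", "exp_1.png", "exp_2.png", "exp_3.png"]
frames = [cv2.imread(p) for p in paths]
grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]

aligned = [frames[0]] + [align_to_reference(grays[0], g, f)
                         for g, f in zip(grays[1:], frames[1:])]

# Exposure fusion (Mertens), used here as a stand-in for the paper's fusion step.
fusion = cv2.createMergeMertens().process(aligned)        # float result roughly in [0, 1]
cv2.imwrite("fused.png", np.clip(fusion * 255, 0, 255).astype(np.uint8))
```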

  12. Myocardium tracking via matching distributions.

    Science.gov (United States)

    Ben Ayed, Ismail; Li, Shuo; Ross, Ian; Islam, Ali

    2009-01-01

    The goal of this study is to investigate automatic myocardium tracking in cardiac Magnetic Resonance (MR) sequences using global distribution matching via level-set curve evolution. Rather than relying on the pixelwise information as in existing approaches, distribution matching compares intensity distributions, and consequently, is well-suited to the myocardium tracking problem. Starting from a manual segmentation of the first frame, two curves are evolved in order to recover the endocardium (inner myocardium boundary) and the epicardium (outer myocardium boundary) in all the frames. For each curve, the evolution equation is sought following the maximization of a functional containing two terms: (1) a distribution matching term measuring the similarity between the non-parametric intensity distributions sampled from inside and outside the curve to the model distributions of the corresponding regions estimated from the previous frame; (2) a gradient term for smoothing the curve and biasing it toward high gradient of intensity. The Bhattacharyya coefficient is used as a similarity measure between distributions. The functional maximization is obtained by the Euler-Lagrange ascent equation of curve evolution, and efficiently implemented via level-set. The performance of the proposed distribution matching was quantitatively evaluated by comparisons with independent manual segmentations approved by an experienced cardiologist. The method was applied to ten 2D mid-cavity MR sequences corresponding to ten different subjects. Although neither shape prior knowledge nor curve coupling were used, quantitative evaluation demonstrated that the results were consistent with manual segmentations. The proposed method compares well with existing methods. The algorithm also yields a satisfying reproducibility. Distribution matching leads to a myocardium tracking which is more flexible and applicable than existing methods because the algorithm uses only the current data, i.e., does not
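
    The similarity term of the tracking functional is the Bhattacharyya coefficient between the intensity distribution sampled inside the evolving curve and the model distribution from the previous frame. The sketch below shows only that measure; the histogram binning and the toy pixel samples are assumptions for illustration.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two discrete distributions (1 = identical)."""
    p = p / p.sum()
    q = q / q.sum()
    return np.sum(np.sqrt(p * q))

def intensity_histogram(pixels, bins=64, value_range=(0.0, 1.0)):
    """Non-parametric intensity distribution from a pixel sample (bin count is illustrative)."""
    hist, _ = np.histogram(pixels, bins=bins, range=value_range)
    return hist.astype(float) + 1e-12  # avoid empty bins

# Toy example: region sampled in the current frame vs. model estimated from the previous frame.
rng = np.random.default_rng(1)
model_region = rng.normal(0.60, 0.05, 5000).clip(0, 1)     # e.g., myocardium model distribution
current_region = rng.normal(0.58, 0.06, 5000).clip(0, 1)   # pixels inside the evolving curve
print(bhattacharyya(intensity_histogram(model_region), intensity_histogram(current_region)))
```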

  13. Dynamic jump intensities and risk premiums

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Ornthanalai, Chayawat; Jacobs, Kris

    2012-01-01

    We build a new class of discrete-time models that are relatively easy to estimate using returns and/or options. The distribution of returns is driven by two factors: dynamic volatility and dynamic jump intensity. Each factor has its own risk premium. The models significantly outperform standard models without jumps when estimated on S&P500 returns. We find very strong support for time-varying jump intensities. Compared to the risk premium on dynamic volatility, the risk premium on the dynamic jump intensity has a much larger impact on option prices. We confirm these findings using joint...

  14. Beam monitoring system for intense neutron source

    International Nuclear Information System (INIS)

    Tron, A.M.

    2001-01-01

    A monitoring system realizing a novel principle of operation, capable of registering the two-dimensional beam current distribution within the entire aperture (100–200 mm) of the ion pipe on a nanosecond time scale, has been designed and built for beam control of the INR intense neutron source and for preventing thermo-mechanical damage to its first wall. The key unit of the system is a monitor of the two-dimensional beam current distribution, whose elements are highly resistant to heating by the beam and to radiation from the source. The system and the monitor are described. Implementation of the system for future sources with higher intensities is discussed. (author)

  15. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.

  16. Estimating economic losses from earthquakes using an empirical approach

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.

    2013-01-01

    We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquakes catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
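
    A very rough sketch of the kind of aggregation described (population exposure by shaking intensity, scaled per-capita GDP, and an intensity-dependent loss ratio) is shown below. The country factor alpha, the loss-ratio values, and the exposure numbers are placeholders, not PAGER's calibrated parameters.

```python
# Hypothetical exposure by shaking intensity (people exposed at MMI VI..IX), e.g. from a ShakeMap overlay.
population_by_mmi = {6: 2_000_000, 7: 600_000, 8: 120_000, 9: 15_000}

gdp_per_capita = 9_000.0   # USD, illustrative
alpha = 3.0                # placeholder country-specific multiplier (economic exposure vs. per-capita GDP)

# Placeholder mean loss ratios per intensity level; the calibrated PAGER values differ.
loss_ratio = {6: 0.001, 7: 0.01, 8: 0.05, 9: 0.15}

loss = sum(pop * gdp_per_capita * alpha * loss_ratio[mmi]
           for mmi, pop in population_by_mmi.items())
print(f"Rough economic loss estimate: {loss / 1e6:.0f} million USD")
```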

  17. Sequential attack with intensity modulation on the differential-phase-shift quantum-key-distribution protocol

    International Nuclear Information System (INIS)

    Tsurumaru, Toyohiro

    2007-01-01

    In this paper, we discuss the security of the differential-phase-shift quantum-key-distribution (DPSQKD) protocol by introducing an improved version of the so-called sequential attack, which was originally discussed by Waks et al. [Phys. Rev. A 73, 012344 (2006)]. Our attack differs from the original form of the sequential attack in that the attacker Eve modulates not only the phases but also the amplitude in the superposition of the single-photon states which she sends to the receiver. Concentrating especially on the 'discretized Gaussian' intensity modulation, we show that our attack is more effective than the individual attack, which had been the best attack up to present. As a result of this, the recent experiment with communication distance of 100 km reported by Diamanti et al. [Opt. Express 14, 13073 (2006)] turns out to be insecure. Moreover, it can be shown that in a practical experimental setup which is commonly used today, the communication distance achievable by the DPSQKD protocol is less than 95 km

  18. Shaking table test of a base isolated model in main control room of nuclear power plant using LRB (lead rubber bearing)

    International Nuclear Information System (INIS)

    Ham, K. W.; Lee, K. J.; Suh, Y. P.

    2005-01-01

    LRB (Lead Rubber Bearing) is a widely used isolation system which is installed between equipment and foundation to reduce seismic vibration from the ground. An LRB system consists of bearings which are resistant to lateral motion and torsion and have a high vertical stiffness. For that reason, several studies have been conducted to apply LRB to nuclear power plants. In this study, we designed two types of main control floor systems (type I, type II), and a number of shaking table tests with and without the isolation system were conducted to evaluate the floor isolation effectiveness of LRB

  19. Maternal Territoriality Achieved Through Shaking and Lunging: An Investigation of Patterns in Associated Behaviors and Substrate Vibrations in a Colonial Embiopteran, Antipaluria urichi

    Science.gov (United States)

    Dejan, Khaaliq A.; Fresquez, John M.; Meyer, Annika M.; Edgerly, Janice S.

    2013-01-01

    Substrate vibration communication is displayed by a variety of insects that rely on silk for shelter. Such signaling is often associated with territoriality and social interactions. The goal in this study was to explore the use of substrate vibration by subsocial insects of the little-studied order Embioptera (also known as Embiidina). Antipaluria urichi (Saussure) (Embioptera: Clothodidae) from Trinidad and Tobago, a large embiopteran, exhibits maternal care and facultatively colonial behavior. Previous observations suggested that they were aggressive while guarding eggs but gregarious when not. Egg-guarders in particular have been observed shaking and lunging their bodies, but to date these putative signals have not been recorded nor were their contexts known. Staged interactions were conducted in the laboratory using residents that had established silk domiciles enveloping piezo-electric film used to detect vibrations. Predictions from two competing hypotheses, the maternal territoriality hypothesis and the group cohesion hypothesis, were erected to explain the occurrence of signaling. Experiments pitted pre-reproductive and egg-guarding residents against female and male intruders, representing social partners that ranged from potentially threatening to innocuous or even helpful. Behavioral acts were identified and scored along with associated substrate vibrations, which were measured for associated body movements, duration, and frequency spectra. Signals, sorted by the distinct actions used to generate them, were lunge, shake, push up, and snapback. Egg-guarding females produced most signals in response to female intruders, a result that supported the maternal territoriality hypothesis. Female intruders generally responded to such signaling by moving away from egg-guarding residents. In contrast, pre-reproductive residents did not signal much, and intruders settled beside them. Theme software was used to analyze the behavioral event recordings to seek patterns

  20. Simultaneous reconstruction of 3D refractive index, temperature, and intensity distribution of combustion flame by double computed tomography technologies based on spatial phase-shifting method

    Science.gov (United States)

    Guo, Zhenyan; Song, Yang; Yuan, Qun; Wulan, Tuya; Chen, Lei

    2017-06-01

    In this paper, a transient multi-parameter three-dimensional (3D) reconstruction method is proposed to diagnose and visualize a combustion flow field. Emission and transmission tomography based on spatial phase-shifted technology are combined to reconstruct, simultaneously, the various physical parameter distributions of a propane flame. Two cameras triggered by the internal trigger mode capture the projection information of the emission and moiré tomography, respectively. A two-step spatial phase-shifting method is applied to extract the phase distribution in the moiré fringes. By using the filtered back-projection algorithm, we reconstruct the 3D refractive-index distribution of the combustion flow field. Finally, the 3D temperature distribution of the flame is obtained from the refractive index distribution using the Gladstone-Dale equation. Meanwhile, the 3D intensity distribution is reconstructed based on the radiation projections from the emission tomography. Therefore, the structure and edge information of the propane flame are well visualized.
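
    The step from the reconstructed refractive-index field to temperature goes through the Gladstone-Dale relation and the ideal-gas law, as sketched below. K is the Gladstone-Dale constant of the gas, M its molar mass, and treating the pressure as ambient (p ≈ p₀) is an assumption for illustration, not a statement of the authors' exact procedure.

```latex
% Gladstone-Dale relation linking refractive index n to gas density \rho:
%   n(x,y,z) - 1 = K \, \rho(x,y,z)
% With the ideal-gas law \rho = p M / (R T) and pressure assumed ambient (p \approx p_0),
% the temperature field follows from the reconstructed refractive-index field:
\[
  T(x,y,z) \;=\; \frac{p_0 \, M \, K}{R \,\bigl(n(x,y,z) - 1\bigr)}
\]
```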

  1. Detailed seismic intensity in Morioka area; Moriokashi ni okeru shosai shindo bunpu

    Energy Technology Data Exchange (ETDEWEB)

    Saito, T; Yamamoto, H; Settai, H [Iwate University, Iwate (Japan). Faculty of Engineering; Yamada, T [Obayashi Road Construction Co. Ltd., Tokyo (Japan)

    1996-10-01

    To reveal the seismic intensity distribution in individual areas, a large-scale detailed seismic intensity survey was conducted in Morioka City by questionnaire for the Hokkaido Toho-oki (HE) earthquake, which occurred on October 4, 1994 with a recorded seismic intensity of 4 at Morioka, and the Sanriku Haruka-oki (SH) earthquake, which occurred on December 28, 1994 with a recorded seismic intensity of 5 at Morioka. The relationship between the seismic intensity distribution and the properties of the shallow basement in Morioka City was also examined. The seismic intensity ranged from 2.9 to 4.6 (a difference of 1.7) in the case of the HE earthquake, and from 3.1 to 5.0 (a difference of 1.9) in the case of the SH earthquake. There were large differences in the seismic intensity at individual points. Morioka City has different geological structures in individual areas. S-wave velocities in the surface layer, measured using a plate-hammering seismic source at 76 sites in Morioka City, ranged from 150 to 600 m/sec. These properties of the surface layers were consistent with the seismic intensity distribution obtained from the questionnaire. Observations of short-period microtremors at about 490 points in the city showed that areas with large amplitudes (mean maximum vertical-component amplitudes of more than 0.1 mkine) were distributed in the north-western region and in a part of the southern region. 4 refs., 9 figs., 1 tab.

  2. Ionizing nightglow: sources, intensity, and spatial distribution

    International Nuclear Information System (INIS)

    Young, J.M.; Troy, B.E. Jr.; Johnson, C.Y.; Holmes, J.C.

    1975-01-01

    Photometers carried aboard an Aerobee rocket mapped the ultraviolet night sky at White Sands, New Mexico. Maps for five 300 Å passbands in the wavelength range 170 to 1400 Å reveal spatial radiation patterns unique to each spectral subregion. The major ultraviolet features seen in these maps are ascribed to a variety of sources: 1) solar Lyman α (1216 Å) and Lyman β (1026 Å), resonantly scattered by geocoronal hydrogen; 2) solar HeII (304 Å) resonantly scattered by ionized helium in the Earth's plasmasphere; 3) solar HeI (584 Å) resonantly scattered by neutral helium in the interstellar wind and Doppler shifted so that it penetrates the Earth's helium blanket; and 4) starlight in the 912 to 1400 Å band, primarily from early-type stars in the Orion region. Not explained is the presence of small, but measurable, albedo signals observed near the peak of flight. Intensities vary from several kilorayleighs for Lyman α to a few rayleighs for HeII. (auth)

  3. Intensity-modulated three-dimensional conformal radiotherapy

    International Nuclear Information System (INIS)

    Mohan, Radhe

    1996-01-01

    Optimized intensity-modulated treatments are one of the important advances in photon radiotherapy. Intensity modulation provides a greatly increased control over dose distributions. Such control can be maximally exploited to achieve significantly higher levels of conformation to the desired clinical objectives using sophisticated optimization techniques. Safe, rapid and efficient delivery of intensity-modulated treatments has become feasible using a dynamic multi-leaf collimator under computer control. The need for all other field shaping devices such as blocks, wedges and compensators is eliminated. Planning and delivery of intensity-modulated treatments is amenable to automation and development of class solutions for each treatment site and stage which can be implemented not only at major academic centers but on a wide scale. A typical treatment involving as many as 10 fields can be delivered in times shorter than much simpler conventional treatments. The main objective of the course is to give an overview of the current state of the art of planning and delivery methods of intensity-modulated treatments. Specifically, the following topics will be covered using representative optimized plans and treatments: 1. A typical procedure for planning and delivering an intensity-modulated treatment. 2. Quantitative definition of criteria (i.e., the objective function) of optimization of intensity-modulated treatments. Clinical relevance of objectives and the dependence of the quality of optimized intensity-modulated plans upon whether the objectives are stated purely in terms of simple dose or dose-volume criteria or whether they incorporate biological indices. 3. Importance of the lateral transport of radiation in the design of intensity-modulated treatments. Impact on dose homogeneity and the optimum choice of margins. 4. Use of intensity-modulated treatments in escalation of tumor dose for the same or lower normal tissue dose. Fractionation of intensity-modulated treatments

  4. Intensity-modulated three-dimensional conformal radiotherapy

    International Nuclear Information System (INIS)

    Mohan, Radhe

    1997-01-01

    Optimized intensity-modulated treatments are one of the important advances in photon radiotherapy. Intensity modulation provides a greatly increased control over dose distributions. Such control can be maximally exploited to achieve significantly higher levels of conformation to the desired clinical objectives using sophisticated optimization techniques. Safe, rapid and efficient delivery of intensity-modulated treatments has become feasible using a dynamic multi-leaf collimator under computer control. The need for all other field shaping devices such as blocks, wedges and compensators is eliminated. Planning and delivery of intensity-modulated treatments is amenable to automation and development of class solutions for each treatment site and stage which can be implemented not only at major academic centers but on a wide scale. A typical treatment involving as many as 10 fields can be delivered in times shorter than much simpler conventional treatments. The main objective of the course is to give an overview of the current state of the art of planning and delivery methods of intensity-modulated treatments. Specifically, the following topics will be covered using representative optimized plans and treatments: 1. A typical procedure for planning and delivering an intensity-modulated treatment. 2. Quantitative definition of criteria (i.e., the objective function) of optimization of intensity-modulated treatments. Clinical relevance of objectives and the dependence of the quality of optimized intensity-modulated plans upon whether the objectives are stated purely in terms of simple dose or dose-volume criteria or whether they incorporate biological indices. 3. Importance of the lateral transport of radiation in the design of intensity-modulated treatments. Impact on dose homogeneity and the optimum choice of margins. 4. Use of intensity-modulated treatments in escalation of tumor dose for the same or lower normal tissue dose. Fractionation of intensity-modulated treatments

  5. Influence of light intensity on the photodegradation of bisphenol A polycarbonate

    NARCIS (Netherlands)

    Diepens, M.; Gijsman, P.

    2009-01-01

    The influence of light intensity on the photodegradation rate and photodegradation mechanisms of an unstabilized BPA-PC film was studied by irradiating the BPA-PC samples with a wavelength distribution comparable to terrestrial sunlight and varying irradiation intensities. The highest intensity used

  6. Study on soil-pile-structure-TMD interaction system by shaking table model test

    Science.gov (United States)

    Lou, Menglin; Wang, Wenjian

    2004-06-01

    The success of the tuned mass damper (TMD) in reducing wind-induced structural vibrations has been well established. However, from most of the recent numerical studies, it appears that for a structure situated on very soft soil, soil-structure interaction (SSI) could render a damper on the structure totally ineffective. In order to experimentally verify the SSI effect on the seismic performance of TMD, a series of shaking table model tests have been conducted and the results are presented in this paper. It has been shown that the TMD is not as effective in controlling the seismic responses of structures built on soft soil sites due to the SSI effect. Some test results also show that a TMD device might have a negative impact if the SSI effect is neglected and the structure is built on a soft soil site. For structures constructed on a soil foundation, this research verifies that the SSI effect must be carefully understood before a TMD control system is designed to determine if the control is necessary and if the SSI effect must be considered when choosing the optimal parameters of the TMD device.
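
    For context, the classical fixed-base (Den Hartog) tuning formulas sketched below are the usual starting point for the "optimal parameters" mentioned at the end of the abstract. They are not from this paper; they assume a rigid foundation, which is precisely the assumption that the reported SSI results call into question.

```latex
% Classical Den Hartog tuning of a TMD on an undamped, fixed-base primary structure,
% with mass ratio \mu = m_{TMD} / m_{structure}:
\[
  f_{\mathrm{opt}} \;=\; \frac{1}{1+\mu},
  \qquad
  \zeta_{\mathrm{opt}} \;=\; \sqrt{\frac{3\mu}{8\,(1+\mu)^{3}}}
\]
% f_opt is the ratio of the TMD frequency to the structural frequency. Soil-structure
% interaction shifts the effective structural frequency and damping, so these
% fixed-base values can be far from optimal on a soft soil site.
```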

  7. The footprint of bottom trawling in European waters: distribution, intensity, and seabed integrity

    DEFF Research Database (Denmark)

    Eigaard, Ole Ritzau; Bastardie, Francois; Hinzen, N.T.

    2017-01-01

    for 2010-2012 at a grid cell resolution of 1 x 1 min longitude and latitude. Trawling intensity profiles with seabed impact at the surface and subsurface level are presented for 14 management areas in the North-east Atlantic, Baltic Sea and Mediterranean Sea. The footprint of the management areas ranged between 53-99% and 6-94% for the depth zone from 0 to 200 m (Shallow) and from 201 to 1000 m (Deep), respectively. The footprint was estimated as the total area of all grid cells that were trawled fully or partially. Excluding the untrawled proportions reduced the footprint estimates to 28-85% and 2-77%. Largest footprints per unit landings were observed off Portugal and in the Mediterranean Sea. Mean trawling intensity ranged between 0.5 and 8.5 times per year, but was less in the Deep zone with a maximum intensity of 6.4. Highest intensities were recorded in the Skagerrak-Kattegat, Iberian Portuguese...

  8. On the distribution of electrons in the double ionization of helium-like ions by Compton scattering

    Energy Technology Data Exchange (ETDEWEB)

    Amusia, M Ya [Racah Institute of Physics, Hebrew University, Jerusalem 91904 (Israel); Drukarev, E G [Petersburg Nuclear Physics Institute, Gatchina, St Petersburg 188300 (Russian Federation)

    2003-06-28

    The Compton scattering of a high energy photon by a helium-like ion, followed by the ionization of two electrons, is considered outside of the Bethe surface of Compton scattering with the knock-out of a single electron. The role of shake-off (SO), of final state interactions (FSI) and of the quasi-free mechanism (QFM) is analysed. The triple and double differential distributions are calculated. It is demonstrated for the first time that in certain kinematical regions the process is dominated by the FSI and by the QFM, while the SO contribution is much smaller.

  9. Beyond bixels: Generalizing the optimization parameters for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Markman, Jerry; Low, Daniel A.; Beavis, Andrew W.; Deasy, Joseph O.

    2002-01-01

    Intensity modulated radiation therapy (IMRT) treatment planning systems optimize fluence distributions by subdividing the fluence distribution into rectangular bixels. The algorithms typically optimize the fluence intensity directly, often leading to fluence distributions with sharp discontinuities. These discontinuities may yield difficulties in delivery of the fluence distribution, leading to inaccurate dose delivery. We have developed a method for decoupling the bixel intensities from the optimization parameters; either by introducing optimization control points from which the bixel intensities are interpolated or by parametrizing the fluence distribution using basis functions. In either case, the number of optimization search parameters is reduced from the direct bixel optimization method. To illustrate the concept, the technique is applied to two-dimensional idealized head and neck treatment plans. The interpolation algorithms investigated were nearest-neighbor, linear, and cubic spline, and radial basis functions served as the basis-function test. The interpolation and basis-function optimization techniques were compared against the direct bixel calculation. The number of optimization parameters was significantly reduced relative to the bixel optimization, which was evident in a reduction of computation time by as much as 58% compared with the full bixel optimization. The dose distributions obtained using the reduced optimization parameter sets were very similar to the full bixel optimization when examined by dose distributions, statistics, and dose-volume histograms. To evaluate the sensitivity of the fluence calculations to spatial misalignment caused either by delivery errors or patient motion, the doses were recomputed with a 1 mm shift in each beam and compared to the unshifted distributions. Except for the nearest-neighbor algorithm, the reduced optimization parameter dose distributions were generally less sensitive to spatial shifts than the bixel
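
    The decoupling idea, in one dimension, is simply to let the optimizer adjust a few control-point values and interpolate them onto the bixel grid. The sketch below uses a cubic spline; the numbers of control points and bixels, and the control values, are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

n_bixels = 40     # fluence resolution along one direction (illustrative)
n_controls = 8    # optimization parameters instead of 40 individual bixel weights

control_x = np.linspace(0.0, 1.0, n_controls)
control_intensity = np.array([0.1, 0.4, 0.9, 1.0, 0.8, 0.5, 0.3, 0.1])  # values an optimizer would adjust

bixel_x = np.linspace(0.0, 1.0, n_bixels)
bixel_intensity = CubicSpline(control_x, control_intensity)(bixel_x)
bixel_intensity = np.clip(bixel_intensity, 0.0, None)   # fluence cannot be negative

print(bixel_intensity.round(3))
```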

  10. Investigation for Strong Ground Shaking across the Taipei Basin during the MW 7.0 Eastern Taiwan Offshore Earthquake of 31 March 2002

    Directory of Open Access Journals (Sweden)

    Yi-Ling Huang

    2010-01-01

    According to reconstructed ground motion snapshots of the northern Taiwan area during the MW 7.0 eastern Taiwan offshore earthquake of 31 March 2002, the composite effects indicated complicated wave propagation behavior in the ground motion of the Taipei basin. A major low-frequency pulse with a duration of about 20 seconds arose after the S-wave, was observed across northern Taiwan, and dominated the radial direction. Observed waveforms of the low-frequency pulse show amplification as the seismic wave crossed the Taipei basin from its eastern edge to its western portion. This effect has been attributed to unusual source radiation, deep Moho reflection, or the basin bottom surface. In this study, recorded ground motions from a dense seismic network were analyzed using a frequency-wavenumber spectrum analysis for seismic wave propagation properties. We investigated temporal and spatial variations in strong shaking in different frequency bands. Results show that a simple incident pulse interacts strongly with the soft sediments inside the basin and the surrounding topography of the Taipei basin, which in turn extends its shaking duration. Evidence showed that seismic waves were reflected back from the western boundary of the basin with a dominant frequency near one Hz. Findings of this study have rarely been reported and may provide useful information to further constrain a three-dimensional numerical simulation of the basin response and velocity structure, and to predict ground motions of future large earthquakes.

  11. Shaking table test and dynamic response analysis of 3-D component base isolation system using multi-layer rubber bearings and coil springs

    Energy Technology Data Exchange (ETDEWEB)

    Tsutsumi, Hideaki; Yamada, Hiroyuki; Ebisawa, Katsumi; Shibata, Katsuyuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Fujimoto, Shigeru [Toshiba Corp., Tokyo (Japan)

    2001-06-01

    Introduction of the base isolation technique into the seismic design of nuclear power plant components as well as buildings has been expected as one of the effective countermeasures to reduce the seismic force applied to components. A research program on the base isolation of nuclear components has been carried out at the Japan Atomic Energy Research Institute (JAERI) since 1991. A methodology and a computer code (EBISA: Equipment Base Isolation System Analysis) for evaluating the failure frequency of nuclear components with base isolation were developed. In addition, a test program related to the above development, aiming at improvement of the failure frequency analysis models in the code, has been conducted since 1996 to investigate the dynamic behavior and to verify the effectiveness of component base isolation systems. Two base isolation test systems with different characteristics were fabricated, and their static and dynamic characteristics were measured by static loading and free vibration tests. One, consisting of ball bearings and air springs, was installed on a test bed to observe the dynamic response under natural earthquake motion. The effect of the base isolation system has been observed under several earthquakes. The three-dimensional response and base isolation effect of the other system, using multi-layer rubber bearings and coil springs, have been investigated under various large earthquake motions by shaking table tests. This report describes the results of the shaking table tests and dynamic response analysis. (author)

  12. A preliminary assessment of earthquake ground shaking hazard at Yucca Mountain, Nevada and implications to the Las Vegas region

    International Nuclear Information System (INIS)

    Wong, I.G.; Green, R.K.; Sun, J.I.; Pezzopane, S.K.; Abrahamson, N.A.; Quittmeyer, R.C.

    1996-01-01

    As part of early design studies for the potential Yucca Mountain nuclear waste repository, the authors have performed a preliminary probabilistic seismic hazard analysis of ground shaking. A total of 88 Quaternary faults within 100 km of the site were considered in the hazard analysis. They were characterized in terms of their probability of being seismogenic, and their geometry, maximum earthquake magnitude, recurrence model, and slip rate. Individual faults were characterized by maximum earthquakes that ranged from moment magnitude (Mw) 5.1 to 7.6. Fault slip rates ranged from a very low 0.00001 mm/yr to as much as 4 mm/yr. An areal source zone representing background earthquakes up to Mw 6¼ ± ¼ was also included in the analysis. Recurrence for these background events was based on the 1904-1994 historical record, which contains events up to Mw 5.6. Based on this analysis, the peak horizontal rock accelerations are 0.16, 0.21, 0.28, and 0.50 g for return periods of 500, 1,000, 2,000, and 10,000 years, respectively. In general, the dominant contributors to the ground shaking hazard at Yucca Mountain are background earthquakes, because of the low slip rates of the Basin and Range faults. A significant effect on the probabilistic ground motions is due to the inclusion of a new attenuation relation developed specifically for earthquakes in extensional tectonic regimes. This relation gives significantly lower peak accelerations than five other predominantly California-based relations used in the analysis, possibly due to the lower stress drops of extensional earthquakes compared to California events. Because Las Vegas is located within the same tectonic regime as Yucca Mountain, the seismic sources and path and site factors affecting the seismic hazard at Yucca Mountain also have implications for Las Vegas. These implications are discussed in this paper
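
    The return periods quoted above translate into Poisson probabilities of exceedance for any chosen exposure time. The sketch below uses the accelerations from the abstract; the 50-year exposure time is an illustrative choice, not a value stated in the abstract.

```python
import math

# Peak horizontal rock accelerations quoted in the abstract, keyed by return period (years).
pga_by_return_period = {500: 0.16, 1000: 0.21, 2000: 0.28, 10000: 0.50}

t_exposure = 50.0  # years; illustrative exposure time, not taken from the abstract

for T, pga in pga_by_return_period.items():
    p_exceed = 1.0 - math.exp(-t_exposure / T)   # Poisson probability of at least one exceedance in t years
    print(f"{pga:.2f} g (T = {T:5d} yr): P(exceedance in {t_exposure:.0f} yr) = {p_exceed:.1%}")
```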

  13. Analysis of the spectrum distribution of oscillation amplitudes of the concrete mix at shock vibration molding

    Directory of Open Access Journals (Sweden)

    Sharapov Rashid

    2017-01-01

    Shaking tables of various designs are widely used in the production of concrete structures. The effectiveness of vibroforming concrete items depends largely on the choice of rational vibration modes applied to the mixture being compacted. The article discusses the propagation of a wave packet in the concrete mixture under shock-vibration molding. Studies have shown that the spectrum of a wave packet contains a large number of harmonics. The main parameter influencing the amplitude-frequency spectrum is the stiffness of the elastic gaskets between the mold and the vibrating table of the forming machine. By varying the stiffness of the elastic gaskets, the spectrum of the oscillations propagating in the concrete mix can be changed over a wide range. Thus, it is possible to adjust the intensity of the vibroforming process.

  14. Strong ground motion in Port-au-Prince, Haiti, during the M7.0 12 January 2010 Haiti earthquake

    Science.gov (United States)

    Hough, Susan E; Given, Doug; Taniguchi, Tomoyo; Altidor, J.R.; Anglade, Dieuseul; Mildor, S-L.

    2011-01-01

    No strong motion records are available for the 12 January 2010 M7.0 Haiti earthquake. We use aftershock recordings as well as detailed considerations of damage to estimate the severity and distribution of mainshock shaking in Port-au-Prince. Relative to ground motions at a hard-rock reference site, peak accelerations are amplified by a factor of approximately 2 at sites on low-lying deposits in central Port-au-Prince and by a factor of 2.5-3.5 on a steep foothill ridge in the southern Port-au-Prince metropolitan region. The observed amplification along the ridge cannot be explained by sediment-induced amplification, but is consistent with predicted topographic amplification by a steep, narrow ridge. Although damage was largely a consequence of poor construction, the damage pattern inferred from analysis of remote sensing imagery provides evidence for a correspondence between small-scale (0.1-1.0 km) topographic relief and high damage. Mainshock shaking intensity can be estimated crudely from a consideration of macroseismic effects. We further present detailed, quantitative analysis of the marks left on a tile floor by an industrial battery rack displaced during the mainshock, at the location where we observed the highest weak-motion amplifications. Results of this analysis indicate that mainshock shaking was significantly higher at this location (~0.5 g, MMI VIII) relative to the shaking in parts of Port-au-Prince that experienced light damage. Our results further illustrate how observations of rigid-body horizontal displacement during earthquakes can be used to estimate peak ground accelerations in the absence of instrumental data.

  15. Uniformity transition for ray intensities in random media

    Science.gov (United States)

    Pradas, Marc; Pumir, Alain; Wilkinson, Michael

    2018-04-01

    This paper analyses a model for the distribution of intensity for rays propagating without absorption in a random medium. The random medium is modelled as a dynamical map. After N iterations, the intensity is modelled as a sum S of 𝒩 contributions from different trajectories, each of which is a product of N independent identically distributed random variables x_k, representing successive focussing or de-focussing events. The number of ray trajectories reaching a given point is assumed to proliferate exponentially: 𝒩 = Λ^N, for some Λ > 1. We investigate the probability distribution of S. We find a phase transition as parameters of the model are varied. There is a phase where the fluctuations of S are suppressed as N → ∞, and a phase where S has large fluctuations, for which we provide a large deviation analysis.
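
    A Monte Carlo sketch of the model as described (S summed over 𝒩 = Λ^N trajectories, each a product of N i.i.d. factors) is given below. The choice of lognormal factors and the parameter values are assumptions made only for illustration; the paper requires just i.i.d. focussing/de-focussing factors.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_S(N, Lam, sigma, n_realizations=200):
    """Monte Carlo realizations of S = sum over Lam**N trajectories of products of N i.i.d. factors.

    The factors x_k are taken lognormal with unit median (an assumption for illustration);
    sigma controls the strength of the per-step focussing/de-focussing fluctuations.
    """
    n_traj = int(round(Lam ** N))
    out = np.empty(n_realizations)
    for i in range(n_realizations):
        log_x = rng.normal(0.0, sigma, size=(n_traj, N))
        out[i] = np.exp(log_x.sum(axis=1)).sum()      # sum of products of the x_k
    return out

for sigma in (0.2, 1.0):                              # weak vs. strong per-step fluctuations
    S = sample_S(N=8, Lam=2.0, sigma=sigma)
    print(f"sigma={sigma}: mean={S.mean():.3g}, relative std={S.std() / S.mean():.3g}")
```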

  16. Noise distribution of an incubator with nebulizer at a neonatal intensive care unit in southern Taiwan.

    Science.gov (United States)

    Chen, H F; Chang, Y J

    2001-06-01

    The purpose of this study was to investigate the noise distribution and sources of peak noise inside an incubator with a nebulizer at a neonatal intensive care unit of a medical center in Southern Taiwan. Sound levels were monitored continuously with an electronic sound meter for 24 hours daily over a one-week period. Three working shifts (day, evening, and night) on a weekday and on the weekend (48 hours in total) were selected randomly from the one-week noise survey to observe peak noise at levels ≥ 65 dBA. Results revealed that 24.8% of the total monitoring period had sound levels at or = 70 dBA. Furthermore, a total of 947 peak noises ≥ 65 dBA were found within the 48 hours, of which 61.5% were in the range 65-69 dBA, 24% in 70-74 dBA, 9.8% in 75-79 dBA, and 4.8% ≥ 80 dBA. Human-related sources, at 79%, were the dominant peak noises. These noises included opening and closing doors, banging the incubator hood, conversation among staff, nursing activity inside the incubator, tearing and opening paper or bags, opening and closing trash can lids, and bumping metal carts or other apparatus. Nonhuman-related sources accounted for 21%, including alarms of monitors and running of the incubator motor. Results of this study showed that the noise distribution in the incubator with nebulizer was far above the protective limit of 58 dBA suggested by the American Academy of Pediatrics in 1974. However, most peak noises could be reduced by modification of staff behavior. Therefore, the determinations of noise distribution and sources of peak noise in this study are useful for further noise reduction programs.

  17. Neutron intensity of fast reactor spent fuel

    Energy Technology Data Exchange (ETDEWEB)

    Takamatsu, Misao; Aoyama, Takafumi [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1998-03-01

    Neutron intensity of spent fuel of the JOYO Mk-II core with a burnup of 62,500 MWd/t and a cooling time of 5.2 years was measured at the spent fuel storage pond. The measured data were compared with the calculated values based on the JOYO core management code system 'MAGI', and an average C/E of approximately 1.2 was obtained. It was found that the axial neutron intensity did not simply follow the burnup distribution, and the neutron intensity was locally increased at the bottom end of the fuel region due to an accumulation of ²⁴⁴Cm. (author)

  18. Volumetric intensity dependence on the formation of molecular and atomic ions within a high intensity laser focus.

    Science.gov (United States)

    Robson, Lynne; Ledingham, Kenneth W D; McKenna, Paul; McCanny, Thomas; Shimizu, Seiji; Yang, Jiamin M; Wahlström, Claes-Göran; Lopez-Martens, Rodrigo; Varju, Katalin; Johnsson, Per; Mauritsson, Johan

    2005-01-01

    The mechanism of atomic and molecular ionization in intense, ultra-short laser fields is a subject which continues to receive considerable attention. An inherent difficulty with techniques involving the tight focus of a laser beam is the continuous distribution of intensities contained within the focus, which can vary over several orders of magnitude. The present study adopts time-of-flight mass spectrometry coupled with a high-intensity (8 × 10¹⁵ W cm⁻²), ultra-short (20 fs) pulse laser in order to investigate the ionization and dissociation of the aromatic molecule benzene-d₁ (C₆H₅D) as a function of intensity within a focused laser beam, by scanning the laser focus in the direction of propagation while detecting ions produced only in a "thin" slice (400 and 800 μm) of the focus. The resultant TOF mass spectra vary significantly, highlighting the dependence on the range of specific intensities accessed and their volumetric weightings on the ionization/dissociation pathways accessed.

  19. Segmentation of 3-D High-Frequency Ultrasound Images of Human Lymph Nodes Using Graph Cut With Energy Functional Adapted to Local Intensity Distribution.

    Science.gov (United States)

    Kuo, Jen-Wei; Mamou, Jonathan; Wang, Yao; Saegusa-Beecroft, Emi; Machi, Junji; Feleppa, Ernest J

    2017-10-01

    Previous studies by our group have shown that 3-D high-frequency quantitative ultrasound (QUS) methods have the potential to differentiate metastatic lymph nodes (LNs) from cancer-free LNs dissected from human cancer patients. To successfully perform these methods inside the LN parenchyma (LNP), an automatic segmentation method is highly desired to exclude the surrounding thin layer of fat from QUS processing and accurately correct for ultrasound attenuation. In high-frequency ultrasound images of LNs, the intensity distribution of LNP and fat varies spatially because of acoustic attenuation and focusing effects. Thus, the intensity contrast between two object regions (e.g., LNP and fat) is also spatially varying. In our previous work, nested graph cut (GC) demonstrated its ability to simultaneously segment LNP, fat, and the outer phosphate-buffered saline bath even when some boundaries are lost because of acoustic attenuation and focusing effects. This paper describes a novel approach called GC with locally adaptive energy to further deal with spatially varying distributions of LNP and fat caused by inhomogeneous acoustic attenuation. The proposed method achieved Dice similarity coefficients of 0.937±0.035 when compared with expert manual segmentation on a representative data set consisting of 115 3-D LN images obtained from colorectal cancer patients.

  20. Rainfall intensity characteristics at coastal and high altitude stations ...

    Indian Academy of Sciences (India)

    a given amount of rain occurs is important because heavier rainfall leads to greater runoff, greater soil erosion and less infiltration into the water table. A knowledge of rainfall intensity therefore becomes. Keywords. Rainfall intensity; Kerala; cumulative distribution. J. Earth Syst. Sci. 116, No. 5, October 2007, pp. 451–463.

  1. Knowledge base about earthquakes as a tool to minimize strong events consequences

    Science.gov (United States)

    Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Alexander; Kijko, Andrzej

    2017-04-01

    The paper describes the structure and content of a knowledge base on the physical and socio-economic consequences of damaging earthquakes, which may be used for calibrating near real-time loss assessment systems based on simulation models of shaking intensity, damage to buildings, and casualties. Such calibration compensates for factors that influence the reliability of expected damage and loss assessments in "emergency" mode. The knowledge base contains descriptions of past earthquakes' consequences for the area under study. It also includes the distribution of the built environment and population at the time of event occurrence. Computer simulation of the events recorded in the knowledge base allows determination of sets of regional calibration coefficients, including ratings of seismological surveys, peculiarities of shaking-intensity attenuation, and changes in building stock and population distribution, in order to minimize the error of loss estimates for damaging earthquakes in "emergency" mode. References 1. Larionov, V., Frolova, N.: Peculiarities of seismic vulnerability estimations. In: Natural Hazards in Russia, volume 6: Natural Risks Assessment and Management, Publishing House "Kruk", Moscow, 120-131, 2003. 2. Frolova, N., Larionov, V., Bonnin, J.: Data Bases Used in Worldwide Systems for Earthquake Loss Estimation in Emergency Mode: Wenchuan Earthquake. In: Proc. TIEMS 2010 Conference, Beijing, China, 2010. 3. Frolova, N. I., Larionov, V. I., Bonnin, J., Sushchev, S. P., Ugarov, A. N., Kozlov, M. A.: Loss Caused by Earthquakes: Rapid Estimates. Natural Hazards, Journal of the International Society for the Prevention and Mitigation of Natural Hazards, vol. 84, ISSN 0921-030, DOI 10.1007/s11069-016-2653

  2. EU energy-intensive industries and emissions trading: losers becoming winners?

    Energy Technology Data Exchange (ETDEWEB)

    Wettestad, Joergen

    2008-11-15

    The EU Emissions Trading System (ETS) initially treated power producers and energy-intensive industries similarly, despite clear structural differences between these industries regarding pass-through of costs and vulnerability to global competition. Hence, the energy-intensive industries could be seen as losing out in the internal distribution. In the January 2008 proposal for a reformed ETS post-2012, a differentiated system was proposed in which the energy-intensive industries come out considerably better. What explains this change? Although power producers still have a dominant position in the system, the increasing consensus about windfall profits has weakened their standing. Conversely, the energy-intensive industries have become better organised and more active. This balance shift is first and foremost noticeable in several important EU-level stakeholder consultation processes. Energy-intensive industries have, however, also successfully utilised the national pathway to exert influence on Brussels policy-making. Finally, growing fear of lax global climate policies and related carbon leakage has strengthened the case of these industries further. The latter dimension indicates that although energy-intensive industries have managed to reduce internal distribution anomalies, external challenges remain. (author). 9 refs

  3. Low-intensity beam diagnostics with particle detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rovelli, A.; Ciavola, G.; Cuttone, G.; Finocchiaro, P.; Raia, G. [INFN-LNS, Via S. Sofia 44/A Catania, 95125 (Italy); De Martinis, C.; Giove, D. [INFN-LASA, Via F.lli Cervi 201 Segrate (MI), 20090 (Italy)

    1997-01-01

    The measurement of low-intensity beams at low to medium energy is one of the major challenges in beam diagnostics. This subject is of great interest for the design of accelerator-based medical and radioactive beam facilities. In this paper we discuss new developments in image-based devices to measure low-intensity beams. All the investigated devices must guarantee measurement of the total beam current and its transverse distribution. © 1997 American Institute of Physics.

  4. Low-intensity beam diagnostics with particle detectors

    International Nuclear Information System (INIS)

    Rovelli, A.; Ciavola, G.; Cuttone, G.; Finocchiaro, P.; Raia, G.; De Martinis, C.; Giove, D.

    1997-01-01

    The measurement of low-intensity beams at low to medium energy is one of the major challenges in beam diagnostics. This subject is of great interest for the design of accelerator-based medical and radioactive beam facilities. In this paper we discuss new developments in image-based devices to measure low-intensity beams. All the investigated devices must guarantee measurement of the total beam current and its transverse distribution. Copyright 1997 American Institute of Physics

  5. Intensity dependence of focused ultrasound lesion position

    Science.gov (United States)

    Meaney, Paul M.; Cahill, Mark D.; ter Haar, Gail R.

    1998-04-01

    Knowledge of the spatial distribution of intensity loss from an ultrasonic beam is critical to predicting lesion formation in focused ultrasound surgery. To date, most models have used linear propagation models to predict the intensity profiles needed to compute the temporally varying temperature distributions. These can be used to compute thermal dose contours that can in turn be used to predict the extent of thermal damage. However, these simulations fail to adequately describe the abnormal lesion formation behavior observed in in vitro experiments in cases where the transducer drive levels are varied over a wide range. In these experiments, the extent of thermal damage has been observed to move significantly closer to the transducer with increasing transducer drive levels than would be predicted using linear propagation models. The simulations described herein utilize the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear propagation model with the parabolic approximation for highly focused ultrasound waves to demonstrate that the positions of the peak intensity and the lesion do indeed move closer to the transducer. This illustrates that for accurate modeling of heating during FUS, nonlinear effects must be considered.

  6. Measurement of angular distributions of K x-ray intensity of Ti and Cu thick targets following impact of 10–25 keV electrons

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Bhupendra; Kumar, Sunil; Prajapati, Suman; Singh, Bhartendu K. [Atomic Physics Laboratory, Department of Physics, Institute of Science, Banaras Hindu University, Varanasi 221005 (India); Llovet, Xavier [Scientific and Technological Centers, Universitat de Barcelona, Lluís Solé i Sabarís 1-3, 08028 Barcelona (Spain); Shanker, R., E-mail: shankerorama@gmail.com [Atomic Physics Laboratory, Department of Physics, Institute of Science, Banaras Hindu University, Varanasi 221005 (India)

    2017-04-15

    Highlights: • New results on the angular distributions of relative intensities of K x-ray lines of Ti and Cu thick targets under electron bombardment are reported. • An increase of the relative intensity of the Kα and Kβ x-ray lines of about 60–70% has been found in the detection range θ = 105°–165°. • There is a slight impact-energy dependence of the Cu Kα x-ray line. • Reasonable agreement between the experimental results and PENELOPE Monte Carlo calculations is obtained. - Abstract: We present new results on angular distributions of the relative intensity of Kα and Kβ x-ray lines of thick targets of pure Ti (Z = 22) and Cu (Z = 29) following impact of 10–25 keV electrons. The angular measurements of the K x-radiation were accomplished by rotating the target surface with respect to the electron beam direction. The x-rays emerging from the target surface in reflection mode were detected by an energy-dispersive Si P-I-N photodiode detector. The resulting variation of the relative intensity of the characteristic lines as a function of detection angle and impact energy is found to be anisotropic; it is considered to arise from changes in the path lengths, at a given incidence angle α, of photons generated by direct as well as indirect K-shell ionization processes. The measured relative intensities of the Kα and Kβ x-ray lines of both targets are found to increase by about 60–70% in going from θ = 105° to 165° at a given impact energy; however, there is a slight indication of an impact-energy dependence of the Cu Kα x-ray line, as also noted by earlier workers. We compare the experimental results with those obtained by Monte Carlo simulations using PENELOPE; the agreement between experiment and theory is found to be satisfactory within the uncertainties involved in the measurements and the theoretical results.

  7. Finite element modeling of a shaking table test to evaluate the dynamic behaviour of a soil-foundation system

    International Nuclear Information System (INIS)

    Abate, G.; Massimino, M. R.; Maugeri, M.

    2008-01-01

    A thorough investigation of soil-foundation interaction behaviour during earthquakes is one of the key points for the proper seismic design of structures that are expected to perform well during earthquakes, avoiding dangerous conditions such as weak foundations supporting the superstructure. The paper presents the results of FEM modeling of a shaking table test involving a concrete shallow foundation resting on a Leighton Buzzard sand deposit. The numerical simulation is performed using a cap-hardening elasto-plastic constitutive model for the soil and specific soil-foundation contacts that allow slipping and uplift phenomena. The comparison between experimental and numerical results highlights both the capabilities and the limits of the proposed numerical model. Some aspects of dynamic soil-foundation interaction are also pointed out

  8. Developing empirical collapse fragility functions for global building types

    Science.gov (United States)

    Jaiswal, K.; Wald, D.; D'Ayala, D.

    2011-01-01

    Building collapse is the dominant cause of casualties during earthquakes. In order to better predict human fatalities, the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) program requires collapse fragility functions for global building types. The collapse fragility is expressed as the probability of collapse at discrete levels of the input hazard defined in terms of macroseismic intensity. This article provides a simple procedure for quantifying collapse fragility using vulnerability criteria based on the European Macroseismic Scale (1998) for selected European building types. In addition, the collapse fragility functions are developed for global building types by fitting the beta distribution to the multiple experts’ estimates for the same building type (obtained from EERI’s World Housing Encyclopedia (WHE)-PAGER survey). Finally, using the collapse probability distributions at each shaking intensity level as a prior and field-based collapse-rate observations as likelihood, it is possible to update the collapse fragility functions for global building types using the Bayesian procedure.
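    To make the Bayesian step above concrete, the sketch below updates a single fragility point with a beta prior and a binomial field observation. It is only an illustration of the general procedure described in the abstract: the prior parameters, building counts and intensity level are invented, and update_fragility is a hypothetical helper, not PAGER code.

```python
# Minimal sketch: Bayesian update of one collapse-fragility point (illustrative numbers).
# Prior: Beta(a, b) summarizing expert estimates of collapse probability at a single
# macroseismic intensity; likelihood: k observed collapses out of n exposed buildings.
from scipy import stats

def update_fragility(a_prior, b_prior, n_observed, k_collapsed):
    """Return posterior Beta parameters and the posterior mean collapse probability."""
    a_post = a_prior + k_collapsed
    b_post = b_prior + (n_observed - k_collapsed)
    return a_post, b_post, a_post / (a_post + b_post)

# Hypothetical expert-based prior at intensity IX (prior mean collapse probability = 0.10)
a0, b0 = 2.0, 18.0
a1, b1, p_mean = update_fragility(a0, b0, n_observed=200, k_collapsed=14)
print(f"posterior mean P(collapse | IX) = {p_mean:.3f}")
print("90% credible interval:", stats.beta.interval(0.90, a1, b1))
```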

  9. Rocking behavior of an instrumented unique building on the MIT campus identified from ambient shaking data

    Science.gov (United States)

    Çelebi, Mehmet; Toksöz, Nafi; Büyüköztürk, Oral

    2014-01-01

    A state-of-the-art seismic monitoring system comprising 36 accelerometers and a data-logger with real-time capability was recently installed at Building 54 on the Massachusetts Institute of Technology's (MIT) Cambridge, MA, campus. The system is designed to record translational, torsional, and rocking motions, and to facilitate the computation of drift between select pairs of floors. The cast-in-place, reinforced concrete building is rectangular in plan but has vertical irregularities. Heavy equipment is installed asymmetrically on the roof. Spectral analyses and system identification performed on five sets of low-amplitude ambient data reveal distinct and repeatable fundamental translational frequencies in the structural NS and EW directions (0.75 Hz and 0.68 Hz, respectively), a torsional frequency of 1.49 Hz, a rocking frequency of 0.75 Hz, and very low damping. Such results from low-amplitude data serve as a baseline against which to compare the behavior and performance of the building during stronger shaking caused by future earthquakes in the region.
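    The frequency identification described above can be illustrated with a simple peak-picking sketch on an ambient acceleration record. This is not the authors' system-identification procedure; the sampling rate, record length and the dominant_frequency helper are assumptions made for the example, and the signal is synthetic.

```python
# Minimal sketch (not the authors' code): pick the fundamental frequency of a building
# from an ambient acceleration record by locating the dominant Welch PSD peak.
import numpy as np
from scipy.signal import welch

def dominant_frequency(accel, fs, fmin=0.2, fmax=5.0):
    """Return the frequency (Hz) of the largest PSD peak within [fmin, fmax]."""
    f, pxx = welch(accel, fs=fs, nperseg=4096)
    band = (f >= fmin) & (f <= fmax)
    return f[band][np.argmax(pxx[band])]

# Synthetic ambient record: a 0.75 Hz mode buried in noise (illustrative only)
fs = 100.0
t = np.arange(0, 600, 1 / fs)
record = 0.01 * np.sin(2 * np.pi * 0.75 * t) + 0.05 * np.random.randn(t.size)
print(f"identified fundamental frequency ~ {dominant_frequency(record, fs):.2f} Hz")
```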

  10. Science Resulting from U.S. Geological Survey's "Did You Feel It?" Citizen Science Portal

    Science.gov (United States)

    Wald, D. J.; Dewey, J. W.; Atkinson, G. M.; Worden, C. B.; Quitoriano, V. P. R.

    2016-12-01

    The U.S. Geological Survey (USGS) "Did You Feel It?" (DYFI) system, in operation since 1999, is an automated approach for rapidly collecting macroseismic intensity data from internet users' shaking and damage reports and generating intensity maps immediately following earthquakes felt around the globe. As with any citizen science project, a significant component of the DYFI system is public awareness and participation in the immediate aftermath of any widely felt earthquake, allowing the public and the USGS to exchange valuable post-earthquake information. The data collected are remarkably robust and useful, as indicated by the range of peer-reviewed literature that rely on these citizen-science intensity reports. A Google Scholar search results in 14,700 articles citing DYFI, a number of which rely exclusively on these data. Though focused on topics of earthquake seismology (including shaking attenuation and relationships with damage), other studies cover social media use in disasters, human risk perception, earthquake-induced landslides, rapid impact assessment, emergency response, and science education. DYFI data have also been analyzed for non-earthquake events, including explosions, aircraft sonic booms, and even bolides and DYFI is now one of the best data sources from which to study induced earthquakes. Yet, DYFI was designed primarily as an operational system to rapidly assess the effects of earthquakes for situational awareness. Oftentimes, DYFI data are the only data available pertaining to shaking levels for much of the United States. As such, DYFI provides site-specific constraints of the shaking levels that feed directly into ShakeMap; thus, these data are readily available to emergency managers and responders, the media, and the general public. As an early adopter of web-based citizen science and having worked out many kinks in the process, DYFI developers have provided guidance on many other citizen-science endeavors across a wide range of

  11. Sheet beam model for intense space charge: Application to Debye screening and the distribution of particle oscillation frequencies in a thermal equilibrium beam

    Directory of Open Access Journals (Sweden)

    Steven M. Lund

    2011-05-01

    Full Text Available A one-dimensional Vlasov-Poisson model for sheet beams is reviewed and extended to provide a simple framework for analysis of space-charge effects. Centroid and rms envelope equations including image-charge effects are derived and reasonable parameter equivalences with commonly employed 2D transverse models of unbunched beams are established. This sheet-beam model is then applied to analyze several problems of fundamental interest. A sheet-beam thermal equilibrium distribution in a continuous focusing channel is constructed and shown to have analogous properties to two- and three-dimensional thermal equilibrium models in terms of the equilibrium structure and Debye screening properties. The simpler formulation for sheet beams is exploited to explicitly calculate the distribution of particle oscillation frequencies within a thermal equilibrium beam. It is shown that as space-charge intensity increases, the frequency distribution becomes broad, suggesting that beams with strong space-charge can have improved stability relative to beams with weak space-charge.

  12. Effects of topographic position and geology on shaking damage to residential wood-framed structures during the 2003 San Simeon earthquake, western San Luis Obispo County, California

    Science.gov (United States)

    McCrink, T.P.; Wills, C.J.; Real, C.R.; Manson, M.W.

    2010-01-01

    A statistical evaluation of shaking damage to wood-framed houses caused by the 2003 M6.5 San Simeon earthquake indicates that both the rate and severity of damage, independent of structure type, are significantly greater on hilltops compared to hill slopes when underlain by Cretaceous or Tertiary sedimentary rocks. This increase in damage is interpreted to be the result of topographic amplification. An increase in the damage rate is found for all structures built on Plio-Pleistocene rocks independent of topographic position, and this is interpreted to be the result of amplified shaking caused by geologic site response. Damage rate and severity to houses built on Tertiary rocks suggest that amplification due to both topographic position and geologic site response may be occurring in these rocks, but effects from other topographic parameters cannot be ruled out. For all geologic and topographic conditions, houses with raised foundations are more frequently damaged than those with slab foundations. However, the severity of damage to houses on raised foundations is only significantly greater for those on hill slopes underlain by Tertiary rocks. Structures with some damage-resistant characteristics experienced greater damage severity on hilltops, suggesting a spectral response to topographic amplification. © 2010, Earthquake Engineering Research Institute.

  13. X-rays diagnostics of the hot electron energy distribution in the intense laser interaction with metal targets

    Science.gov (United States)

    Kostenko, O. F.; Andreev, N. E.; Rosmej, O. N.

    2018-03-01

    A two-temperature hot electron energy distribution has been revealed by modeling of bremsstrahlung emission, measured by the radiation attenuation and half-shade methods, and Kα emission from a massive silver cylinder irradiated by a subpicosecond s-polarized laser pulse with a peak intensity of about 2 × 1019 W/cm2. To deduce parameters of the hot electron spectrum, we have developed semi-analytical models of generation and measurements of the x-rays. The models are based on analytical expressions and tabulated data on electron stopping power as well as cross-sections of generation and absorption of the x-rays. The Kα emission from thin silver foils deposited on low-Z substrates, both conducting and nonconducting, has been used to verify the developed models and obtained hot electron spectrum. The obtained temperatures of the colder and hotter electron components are in agreement with the values predicted by kinetic simulations of the cone-guided approach to fast ignition [Chrisman et al., Phys. Plasmas 15, 056309 (2008)]. The temperature of the low-energy component of the accelerated electron spectrum is well below the ponderomotive scaling and Beg's law. We have obtained relatively low conversion efficiency of laser energy into the energy of hot electrons propagating through the solid target of about 2%. It is demonstrated that the assumption about a single-temperature hot electron energy distribution with the slope temperature described by the ponderomotive scaling relationship, without detailed analysis of the hot electron spectrum, can lead to strong overestimation of the laser-to-electron energy-conversion efficiency, in particular, the conversion efficiency of laser energy into the high-temperature component of the hot electron distribution.

  14. The influence of fuel-air swirl intensity on flame structures of syngas swirl-stabilized diffusion flame

    Science.gov (United States)

    Shao, Weiwei; Xiong, Yan; Mu, Kejin; Zhang, Zhedian; Wang, Yue; Xiao, Yunhan

    2010-06-01

    Flame structures of a syngas swirl-stabilized diffusion flame in a model combustor were measured using the OH-PLIF method under different fuel and air swirl intensities. The flame operated under atmospheric pressure with air and a typical low heating-value syngas with a composition of 28.5% CO, 22.5% H2 and 49% N2 at a thermal power of 34 kW. Results indicate that, for the same fuel swirl intensity, increasing the air swirl intensity changed the flame structure little except for a small reduction in flame length; in contrast, for the same air swirl intensity, the fuel swirl intensity had a strong influence on flame shape, length and reaction-zone distribution. Compared with air swirl intensity, fuel swirl intensity therefore appears to have the key effect on flame structure in this model combustor. Instantaneous OH-PLIF images showed three distinct typical structures, with obvious differences in reaction-zone distribution, at low swirl intensity, whereas a much more compact flame structure with a single, stable and uniform reaction-zone distribution was found at large fuel-air swirl intensity. This means that larger swirl intensity leads to efficient, stable combustion of the syngas diffusion flame.

  15. Precision measurements with LPCTrap and technical developments at GANIL: β-ν angular correlation (a_βν) and shake-off probability in the ⁶He¹⁺ β decay, study of the production of new beams at SPIRAL

    International Nuclear Information System (INIS)

    Couratin, Claire

    2013-01-01

    The subject of this work is twofold. The main objective is to measure the β-ν angular correlation (a_βν) in the β decay of ⁶He⁺ at a 0.5% precision level, as well as the associated shake-off probability. Evidence of a discrepancy between the experimental value of a_βν and the Standard Model (SM) prediction would mean the existence of a tensor current in the weak interaction. Consequently, it would question the V-A structure used to describe the weak interaction in the SM. A Paul trap is used to confine ⁶He⁺ ions almost at rest in a very small volume in order to provide a decay source that is as well defined as possible. The emitted β particle and recoil ion (RI) are detected in coincidence by detectors surrounding the trap. The setup is sensitive to the different charge states of the RIs and also allows the measurement of the shake-off probability. The latter and a_βν are deduced from comparison with realistic Monte Carlo simulations of the time-of-flight spectra of the recoiling ions using Geant4 and Simion. The set of measured distributions makes it possible to control the main systematic effects. The second part of this work is dedicated to the development of a new target ion source system (TISS) using a FEBIAD (Forced Electron Beam Induced Arc Discharge) source and a SPIRAL1 target in the framework of its upgrade. Several experiments have been performed to test the reliability of the TISS and to measure the yields of expected new beams. These tests led to an improved design of the TISS to be tested in December 2013. Within the framework of this upgrade, new beams should be available in 2016. (author)

  16. Shake Table Study on the Effect of Mainshock-Aftershock Sequences on Structures with SFSI

    Directory of Open Access Journals (Sweden)

    Xiaoyang Qin

    2017-01-01

    Full Text Available Observations from recent earthquakes have emphasised the need for a better understanding of the effects of structure-footing-soil interaction on the response of structures. In order to incorporate the influences of soil, a laminar box can be used to contain the soil during experiments. The laminar box simulates field boundary conditions by allowing the soil to shear during shake table tests. A holistic response of a structure and supporting soil can thus be obtained by placing a model structure on the surface of the soil in the laminar box. This work reveals the response of a structure with SFSI under mainshock-aftershock earthquake sequences. A large (2 m by 2 m) laminar box, capable of simulating the behaviour of both dry and saturated soils, was constructed. A model structure was placed on dry sand in the laminar box. The setup was excited by a sequence of earthquake excitations. The first excitation was used to obtain the response of the model on sand under the mainshock of an earthquake. The second and third excitations represented the first and second aftershocks, respectively.

  17. Vertical distribution of bacteria and intensity of microbiological processes in two stratified gypsum Karst Lakes in Lithuania

    Directory of Open Access Journals (Sweden)

    Krevs A.

    2011-08-01

    Full Text Available Physical-chemical parameters and the vertical distribution of bacteria and organic matter production-destruction processes were studied during midsummer stratification in two karst lakes (Kirkilai and Ramunelis) located in northern Lithuania. The lakes were characterized by high sulfate concentrations (369–1248 mg·L⁻¹). The O2/H2S intersection zone formed at 2–3 m depth. In Lake Kirkilai, the highest bacterial densities (up to 8.7 × 10⁶ cell·mL⁻¹) occurred at the O2/H2S intersection zone, whereas in Lake Ramunelis the highest densities were observed in the anoxic hypolimnion (up to 11 × 10⁶ cell·mL⁻¹). Pigment analysis revealed that green sulfur bacteria dominated in the microaerobic–anaerobic water layers in both lakes. The most intensive development of sulfate-reducing bacteria was observed in the anaerobic layer. Photosynthetic production of organic matter was highest in the upper layer. Rates of sulfate reduction reached 0.23 mg S²⁻·dm⁻³·d⁻¹ in the microaerobic-anaerobic water layer and 1.97 mg S²⁻·dm⁻³·d⁻¹ in sediments. Karst lakes are very sensitive to organic pollution, because under such impact in the presence of high sulfate amounts, sulfate reduction may become very intensive and, consequently, the increase in hydrogen sulfide and development of sulfur cycle bacteria may reduce the variety of other hydrobionts.

  18. Early estimation of epicenter seismic intensities according to co-seismic deformation

    OpenAIRE

    Weidong, Li; Chaojun, Zhang; Dahui, Li; Jiayong, He; Huizhong, Chen; Lomnitz, Cinna

    2010-01-01

    The absolute fault displacement in co-seismic deformation is derived assuming that the location, depth, faulting mechanism and magnitude of the earthquake are known. The 2008 Wenchuan earthquake (M8.0) is used as an example to determine the distribution of seismic intensities using absolute displacement and a crustal model. We find that an early prediction of the distribution of seismic intensities after a large earthquake may be performed from the estimated absolute co-seismic displacements using...

  19. Three-dimensional inhomogeneous rain fields: implications for the distribution of intensity and polarization of the microwave thermal radiation.

    Science.gov (United States)

    Ilyushin, Yaroslaw; Kutuza, Boris

    Observation and mapping of the upwelling thermal radiation of the Earth is a very promising remote sensing technique for global monitoring of weather and precipitation. For reliable interpretation of the observational data, a numerical model of microwave radiative transfer in the precipitating atmosphere is necessary. In the present work, numerical simulations of thermal microwave radiation in rain have been performed at three wavelengths (3, 8 and 22 mm). Radiative properties of the rain have been simulated using publicly accessible T-matrix codes (Mishchenko, Moroz) for non-spherical particles of fixed orientation and realistic raindrop size distributions (Marshall-Palmer) within the range of rain intensity 1-100 mm/h. Thermal radiation of an infinite flat slab medium and of an isolated rain cell of kilometer size has been simulated with a finite-difference scheme for the vectorial radiative transfer equation (VRTE) in a dichroic scattering medium. The principal role of the cell structure of the rain field in the formation of the angular and spatial distribution of the intensity and polarization of the upwelling thermal radiation has been established. Possible approaches to the interpretation of satellite data are also discussed. It is necessary that the spatial resolution of microwave radiometers be less than the rain cell size. At present the resolution is approximately 15 km. It can be considerably improved, for example by a two-dimensional synthetic-aperture millimeter-wave radiometric interferometer measuring the full-component Stokes vector of emission from hydrometeors. Estimates show that in the millimeter band it is possible to develop such equipment with a spatial resolution of the order of 1-2 km, which is significantly less than the size of a rain cell, and with a sensitivity of 0.3-0.5 K. Under this condition the second Stokes parameter may be successfully measured and may be used for the investigation of precipitation regions. A Y-shaped phased-array antenna is the most promising to

  20. Estimation of monthly solar radiation distribution for solar energy system analysis

    International Nuclear Information System (INIS)

    Coskun, C.; Oktay, Z.; Dincer, I.

    2011-01-01

    The concept of probability density frequency, which is successfully used for analyses of wind speed and outdoor temperature distributions, is now modified and proposed for estimating solar radiation distributions for the design and analysis of solar energy systems. In this study, the global solar radiation distribution is comprehensively analyzed for photovoltaic (PV) panel and thermal collector systems. In this regard, a case study is conducted with actual global solar irradiation data of the last 15 years recorded by the Turkish State Meteorological Service. It is found that the intensity of global solar irradiance greatly affects energy and exergy efficiencies and hence the performance of collectors. -- Research highlights: → The first study to apply the global solar radiation distribution to solar energy system analyses. → The first study showing the global solar radiation distribution as a parameter of the solar irradiance intensity. → Time probability intensity frequency and probability power distribution do not have similar distribution patterns for each month. → There is no relation between the distribution of annual time lapse and solar energy with the intensity of solar irradiance.
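    The "probability density frequency" idea can be illustrated by binning measured irradiance values into intensity bands and reporting the fraction of daylight hours in each band. The sketch below uses synthetic hourly data and an arbitrary 100 W/m² bin width; it is not the authors' method or their Turkish dataset.

```python
# Minimal sketch: relative frequency of daylight hours per solar-irradiance band.
import numpy as np

def irradiance_distribution(irradiance_wm2, bin_width=100.0):
    """Return bin edges and the relative frequency of daylight hours in each bin."""
    daylight = irradiance_wm2[irradiance_wm2 > 0]          # ignore night hours
    edges = np.arange(0.0, daylight.max() + bin_width, bin_width)
    counts, edges = np.histogram(daylight, bins=edges)
    return edges, counts / counts.sum()

# Synthetic year of hourly global irradiance (illustrative only)
rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.0, scale=250.0, size=8760)
edges, freq = irradiance_distribution(sample)
for lo, hi, f in zip(edges[:-1], edges[1:], freq):
    print(f"{lo:5.0f}-{hi:5.0f} W/m^2 : {100 * f:5.1f}% of hours")
```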

  1. Matrix-exponential distributions in applied probability

    CERN Document Server

    Bladt, Mogens

    2017-01-01

    This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...
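    The core construction, replacing the scalar intensity of an exponential distribution by a sub-intensity matrix, can be illustrated numerically. The sketch below evaluates the density of a small phase-type distribution, f(x) = a·exp(Tx)·t with exit vector t = -T·1; the matrix and initial vector are arbitrary illustrative values, not an example taken from the book.

```python
# Minimal sketch of a phase-type (PH) density built from a 2-phase sub-intensity matrix.
import numpy as np
from scipy.linalg import expm

alpha = np.array([1.0, 0.0])                   # initial phase distribution
T = np.array([[-2.0, 1.0],
              [ 0.0, -3.0]])                   # sub-intensity matrix of the Markov jump process
t_exit = -T @ np.ones(2)                       # exit rates into the absorbing state

def ph_density(x):
    """Density f(x) = alpha * exp(T x) * t_exit."""
    return float(alpha @ expm(T * x) @ t_exit)

for x in (0.1, 0.5, 1.0, 2.0):
    print(f"f({x}) = {ph_density(x):.4f}")
```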

  2. Collimator setting optimization in intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Williams, M.; Hoban, P.

    2001-01-01

    Full text: The aim of this study was to investigate the role of collimator angle and bixel size settings in IMRT when using the step-and-shoot method of delivery. Of particular interest is minimisation of the total monitor units delivered. Beam intensity maps with bixel size 10 x 10 mm were segmented into MLC leaf sequences and the collimator angle optimised to minimise the total number of MUs. The monitor units were estimated from the maximum sum of positive-gradient intensity changes along the direction of leaf motion. To investigate the use of low-resolution maps at optimum collimator angles, several high-resolution maps with bixel size 5 x 5 mm were generated. These were resampled into bixel sizes 5 x 10 mm and 10 x 10 mm and the collimator angle optimised to minimise the RMS error between the original and resampled maps. Finally, a clinical IMRT case was investigated with the collimator angle optimised. Both the dose distribution and dose-volume histograms were compared between the standard IMRT plan and the optimised plan. For the 10 x 10 mm bixel maps there was a variation of 5%-40% in monitor units at the different collimator angles. The maps with a high degree of radial symmetry showed little variation. For the resampled 5 x 5 mm maps, a small RMS error was achievable with a 5 x 10 mm bixel size at particular collimator positions. This was most noticeable for maps with an elongated intensity distribution. A comparison between the 5 x 5 mm bixel plan and the 5 x 10 mm plan showed no significant difference in dose distribution. The monitor units required to deliver an intensity-modulated field can be reduced by rotating the collimator and aligning the direction of leaf motion with the axis of the fluence map that has the least intensity variation. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
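    The monitor-unit surrogate described above (the maximum over leaf tracks of the summed positive intensity increments along the leaf-motion direction) can be sketched as follows. The fluence map, angle grid and helper names are assumptions for illustration, and image rotation stands in for physically rotating the collimator; this is not the planning-system code.

```python
# Minimal sketch: MU surrogate for an intensity map and a brute-force collimator-angle search.
import numpy as np
from scipy.ndimage import rotate

def mu_surrogate(intensity_map):
    """Max over leaf tracks (rows) of the summed positive gradients along leaf motion."""
    increments = np.diff(intensity_map, axis=1)
    return np.max(np.sum(np.clip(increments, 0, None), axis=1))

def best_collimator_angle(intensity_map, angles=range(0, 180, 5)):
    """Return the angle minimising the MU surrogate, with all scores for inspection."""
    scores = {a: mu_surrogate(rotate(intensity_map, a, reshape=True, order=1))
              for a in angles}
    return min(scores, key=scores.get), scores

# Elongated illustrative fluence map (arbitrary values)
fluence = np.zeros((20, 20))
fluence[8:12, 2:18] = 1.0
angle, scores = best_collimator_angle(fluence)
print(f"collimator angle with lowest MU surrogate: {angle} deg")
```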

  3. The tracking performance of distributed recoverable flight control systems subject to high intensity radiated fields

    Science.gov (United States)

    Wang, Rui

    It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting nearly-simultaneously multiple fault containment regions, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus will be on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results for scenarios that could not be physically tested is also presented.

  4. Comparison of γ-ray intensity distribution around Hira fault with spatial pattern of major and/or sub fault system

    International Nuclear Information System (INIS)

    Nakanishi, Tatsuya; Mino, Kazuo; Ogasawara, Hiroshi; Katsura, Ikuo

    1999-01-01

    Major active faults generally consist of systems of fractures with various dimensions and contain a lot of ground water. Rn gas, moving with the underground water, tends to accumulate along faults and emits γ-rays as it decays to Pb through Bi. A number of studies have therefore shown that γ-ray intensity is generally high near the core of a major active fault, and the γ-ray survey is one of the effective methods for locating such a core. Around the tips of faults, however, complicated sub-fault systems and correspondingly complicated geological structures are often seen, and the relationship between the γ-ray intensity distribution and these fault systems has not been well investigated. In order to investigate this relationship near the tips of major faults, we carried out a γ-ray survey at about 1,100 sites in an area of about 2 km x 2 km that contains the tips of two major right-lateral faults with significant thrusting components. We also investigated the lineaments using the topographic map published in 1895, when artificial construction was seldom seen in the area and the natural topography can easily be recognised. In addition, we carried out a γ-ray survey in an area far from the fault tips to compare with the results in the area containing the fault tips. Then: (1) we reconfirmed that, along the middle portion of a major active fault, γ-ray intensity is high only in the limited area just adjacent to the core of the fault; (2) however, we found that, near the tip of a major active fault, high γ-ray intensity is seen in a much wider area with clear lineaments that are inferred to have developed in association with the movement of the major faults. (author)

  5. Effect of the use of gums on the viscosity and sensory characteristics of a green banana flour-based shake

    Directory of Open Access Journals (Sweden)

    Roberta Ribeiro Silva

    2018-01-01

    Full Text Available Abstract: This work aimed to test different proportions of gums in the preparation of a shake made with green banana flour and to evaluate its organoleptic properties. Six formulations were analysed with the following ingredients: green banana flour (GBF), whole milk powder, sucralose, cocoa, and different proportions of xanthan gum (XG) and guar gum (GG), in the proportions (XG:GG): F1 (1:0), F2 (0:1), F3 (1:1), F4 (1:3), F5 (3:1) and F6 without stabilizer. The shake formulations were submitted to viscosity testing, performed every three days over a period of 15 days. Sensory analysis was carried out with volunteers in individual booths. The acceptability of the attributes appearance, aroma, flavour, texture and overall impression was evaluated using a 9-point hedonic scale. Statistical analyses were performed using analysis of variance (ANOVA) and Tukey's test at 5% probability. The study was important in demonstrating that product viscosity influences the sensory characteristics and thus affects consumer choice.

  6. The spatial intensity distribution of selected emission lines for Herbig-Haro 1 - Comparison between theory and observations

    International Nuclear Information System (INIS)

    Noriega-Crespo, A.; Bohm, K.H.; Raga, A.C.

    1989-01-01

    In this paper, it is shown that most of the spatial intensity distribution of 11 selected emission lines for Herbig-Haro 1 (including the forbidden S II emission lines at 6731 A and 4069 A, the forbidden O III line at 5007 A, and the forbidden O II line at 3727 A) can be explained by a bow shock with a shock velocity of about 150-200 km/sec at the stagnation point, and under the assumption that the gas entering the shock is fully preionized. The results are based on three spectrograms (with a total exposure time of 180 min) obtained consecutively. Specifically, the ratios of each of the forbidden lines to H-alpha were studied, which permitted a critical test of the model. The agreement between the theoretical predictions and the observations was found to be remarkable, considering the complex geometry that a bow shock could have. 38 refs

  7. Computation of Ion Charge State Distributions After Inner-shell Ionization in Ne, Ar and Kr Atoms Using Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Ahmed Ghoneim, Adel Aly; Ghoneim, Adel A.; Al-Zanki, Jasem M.; El-Essawy, Ashraf H.

    2009-01-01

    Atomic reorganization starts by filling the initial inner-shell vacancy through a radiative transition (x-ray emission) or a non-radiative transition (Auger and Coster-Kronig processes). New vacancies created during this atomic reorganization may in turn be filled by further radiative and non-radiative transitions until all vacancies reach the outermost occupied shells. The production of an inner-shell vacancy in an atom and the de-excitation decays through radiative and non-radiative transitions may result in a change of the atomic potential; this change leads to the emission of an additional electron into the continuum (electron shake-off process). In the present work, the ion charge state distributions (CSD) and the mean charge of ions produced by inner-shell vacancy de-excitation are calculated for neutral Ne, Ar and Kr atoms. The calculations are carried out using a Monte Carlo (MC) technique to simulate the cascade development after primary vacancy production. The radiative and non-radiative transitions for each vacancy are calculated in the simulation. In addition, the changes of transition energies and transition rates due to the multiple vacancies produced in the atomic configurations during the cascade development are considered. It is found that taking into account the electron shake-off process and the closing of non-allowed non-radiative channels improves the results for both the charge state distributions (CSD) and the average charge state. To check the validity of the present calculations, the results obtained are compared with available theoretical and experimental data and are found to agree well with them. (author)
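    A highly simplified Monte Carlo sketch of the vacancy cascade idea is given below. It ignores shell-specific transition rates, Coster-Kronig channels and shake-off, and uses a single illustrative fluorescence yield, so it only demonstrates how a charge state distribution emerges from repeated cascade trials; it is not the calculation of the paper.

```python
# Toy Monte Carlo cascade: a K-shell vacancy relaxes either radiatively (vacancy moves one
# shell out, no charge change) or by an Auger transition (one electron ejected, charge +1,
# two vacancies one shell further out). Shell count and fluorescence yield are illustrative.
import random
from collections import Counter

def cascade_charge(n_shells=4, fluorescence_yield=0.15):
    charge, vacancies = 1, [1]             # initial ionization leaves the atom at q = +1
    while vacancies:
        shell = vacancies.pop()
        if shell >= n_shells:              # vacancy reached the outermost shell: cascade ends
            continue
        if random.random() < fluorescence_yield:
            vacancies.append(shell + 1)    # radiative transition (x-ray emission)
        else:
            charge += 1                    # Auger electron emitted
            vacancies.extend([shell + 1, shell + 1])
    return charge

counts = Counter(cascade_charge() for _ in range(100_000))
total = sum(counts.values())
for q in sorted(counts):
    print(f"charge state {q}+ : {counts[q] / total:.3f}")
```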

  8. Ultrasonic-energy enhance the ionic liquid-based dual microextraction to preconcentrate the lead in ground and stored rain water samples as compared to conventional shaking method.

    Science.gov (United States)

    Nizamani, Sooraj; Kazi, Tasneem G; Afridi, Hassan I

    2018-01-01

    An efficient preconcentration technique based on an ultrasonic-assisted ionic liquid-based dual microextraction (UA-ILDµE) method has been developed to preconcentrate lead (Pb²⁺) in ground and stored rain water. In the proposed method, Pb²⁺ was complexed with a chelating agent (dithizone), and an ionic liquid (1-butyl-3-methylimidazolium hexafluorophosphate) was used for extraction. Ultrasonic irradiation and an electrical shaking system were applied to enhance the dispersion and extraction of the Pb²⁺ complex in aqueous samples. In the second, dual-microextraction (DµE) phase, the Pb²⁺ complex enriched in the ionic liquid was back-extracted into an acidic aqueous solution and finally determined by flame atomic absorption spectrometry. The major analytical parameters that influence the extraction efficiency of the developed method, such as pH, ligand concentration, volumes of ionic liquid and sample, shaking time in the thermostatic electrical shaker and the ultrasonic bath, volume of back-extracting HNO₃, matrix effects, and centrifugation time and rate, were optimized. For a sample volume of 25 mL, the calculated preconcentration factor was 62.2. The limit of detection of the proposed procedure for Pb²⁺ ions was found to be 0.54 μg L⁻¹. The developed method was validated by analysis of the certified water sample SRM 1643e and by the standard addition method in a real water sample. The extraction recovery of Pb²⁺ was enhanced by ≥2% with a shaking time of 80 s in the ultrasonic bath compared to the thermostatic electrical shaker, for which up to 10 min was required for optimum recovery. The developed procedure was successfully used for the enrichment of Pb²⁺ in ground and stored rain water (surface water) samples from an endemic region of Pakistan. The resulting data indicated that the ground water samples were highly contaminated with Pb²⁺, while some of the surface water samples also had values of Pb²⁺ higher than permissible limit of

  9. A preliminary assessment of earthquake ground shaking hazard at Yucca Mountain, Nevada and implications to the Las Vegas region

    Energy Technology Data Exchange (ETDEWEB)

    Wong, I.G.; Green, R.K.; Sun, J.I. [Woodward-Clyde Federal Services, Oakland, CA (United States); Pezzopane, S.K. [Geological Survey, Denver, CO (United States); Abrahamson, N.A. [Abrahamson (Norm A.), Piedmont, CA (United States); Quittmeyer, R.C. [Woodward-Clyde Federal Services, Las Vegas, NV (United States)

    1996-12-31

    As part of early design studies for the potential Yucca Mountain nuclear waste repository, the authors have performed a preliminary probabilistic seismic hazard analysis of ground shaking. A total of 88 Quaternary faults within 100 km of the site were considered in the hazard analysis. They were characterized in terms of their probability of being seismogenic and their geometry, maximum earthquake magnitude, recurrence model, and slip rate. Individual faults were characterized by maximum earthquakes that ranged from moment magnitude (Mw) 5.1 to 7.6. Fault slip rates ranged from a very low 0.00001 mm/yr to as much as 4 mm/yr. An areal source zone representing background earthquakes up to Mw 6 1/4 was also included in the analysis. Recurrence for these background events was based on the 1904-1994 historical record, which contains events up to Mw 5.6. Based on this analysis, the peak horizontal rock accelerations are 0.16, 0.21, 0.28, and 0.50 g for return periods of 500, 1,000, 2,000, and 10,000 years, respectively. In general, the dominant contributor to the ground shaking hazard at Yucca Mountain is background earthquakes, because of the low slip rates of the Basin and Range faults. A significant effect on the probabilistic ground motions is due to the inclusion of a new attenuation relation developed specifically for earthquakes in extensional tectonic regimes. This relation gives significantly lower peak accelerations than the five other predominantly California-based relations used in the analysis, possibly due to the lower stress drops of extensional earthquakes compared to California events. Because Las Vegas is located within the same tectonic regime as Yucca Mountain, the seismic sources and the path and site factors affecting the seismic hazard at Yucca Mountain also have implications for Las Vegas. These implications are discussed in this paper.
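    The hazard computation summarized above combines source recurrence rates with a ground-motion attenuation relation. The sketch below shows the basic bookkeeping for a site hazard curve under a lognormal ground-motion model; the three sources, their rates, median accelerations and sigma values are invented for illustration and are not the Yucca Mountain source model.

```python
# Minimal sketch of a probabilistic seismic hazard curve: sum the annual rates at which
# each source produces a PGA exceeding a given level, assuming lognormal ground motion.
import numpy as np
from scipy.stats import norm

sources = [                 # (annual rate, median PGA at the site [g], ln-sigma) - illustrative
    (1e-4, 0.35, 0.6),      # nearby low-slip-rate fault
    (5e-4, 0.15, 0.6),      # more distant fault
    (2e-2, 0.05, 0.7),      # background seismicity
]

def annual_exceedance_rate(pga):
    """Total annual rate of exceeding the given PGA level, summed over sources."""
    return sum(rate * norm.sf((np.log(pga) - np.log(med)) / sig)
               for rate, med, sig in sources)

for pga in (0.1, 0.2, 0.3, 0.5):
    lam = annual_exceedance_rate(pga)
    print(f"PGA > {pga:.1f} g: rate = {lam:.2e}/yr, return period ~ {1.0 / lam:,.0f} yr")
```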

  10. Spatial variability and trends of the rain intensity over Greece

    Science.gov (United States)

    Kambezidis, H. D.; Larissi, I. K.; Nastos, P. T.; Paliatsos, A. G.

    2010-07-01

    In this study, the spatial and temporal variability of the mean annual rain intensity in Greece is examined for a 41-year period (1962-2002). The meteorological datasets comprise monthly rain amounts (mm) and the respective monthly durations (h) recorded at thirty-two meteorological stations of the Hellenic National Meteorological Service, which are uniformly distributed over Greek territory, from which the mean monthly rain intensity is calculated. All the rain time series used in the analysis were tested by application of the short-cut Bartlett test of homogeneity. The spatial distribution of the mean annual rain intensity is studied using the Kriging interpolation method, while the temporal variability, concerning the mean annual rain intensity trends and their significance (Mann-Kendall test), is analysed. The findings show that statistically significant negative trends (95% confidence level) appear mainly in the western sub-regions of Greece, while statistically significant positive trends (95% confidence level) appear in the wider area of Athens and the Cyclades island complex. Further analysis of the seasonal rain intensity is needed because there are different seasonal patterns, given that convective rain in Greece occurs mainly within the summer season.
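    The trend significance referred to above is commonly assessed with the Mann-Kendall test. A minimal sketch (without the tie correction) applied to a synthetic annual rain-intensity series is shown below; the data and the declining trend are illustrative only, not the Greek station records.

```python
# Minimal sketch of the Mann-Kendall trend test on an annual rain-intensity series.
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Return the S statistic, the Z score and the two-sided p-value (no tie correction)."""
    x = np.asarray(series, dtype=float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0   # continuity correction
    p = 2.0 * norm.sf(abs(z))
    return s, z, p

rng = np.random.default_rng(1)
years = np.arange(1962, 2003)
intensity = 2.0 - 0.01 * (years - 1962) + rng.normal(0, 0.2, years.size)   # mm/h, synthetic
s, z, p = mann_kendall(intensity)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.3f} ->",
      "significant trend" if p < 0.05 else "no significant trend")
```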

  11. [Micro-simulation of firms' heterogeneity on pollution intensity and regional characteristics].

    Science.gov (United States)

    Zhao, Nan; Liu, Yi; Chen, Ji-Ning

    2009-11-01

    Within the same industrial sector, pollution intensity is heterogeneous among firms. Errors arise if a sector's average pollution intensity, calculated from the limited number of firms in the environmental statistics database, is used to represent the sector's regional economic-environmental status. Based on a production function that includes environmental depletion as an input, a micro-simulation model of firms' operational decision-making is proposed, so that the heterogeneity of firms' pollution intensity can be described mechanistically. Taking the mechanical manufacturing sector in Deyang city in 2005 as a case, the model's parameters were estimated, and the actual COD emission intensities of the firms in the environmental statistics database were properly matched by the simulation. The model's results also show that the regional average COD emission intensity calculated from the environmental statistics firms (0.002 6 t per 10 000 yuan fixed asset, 0.001 5 t per 10 000 yuan production value) is lower than the regional average intensity calculated from all the firms in the region (0.003 0 t per 10 000 yuan fixed asset, 0.002 3 t per 10 000 yuan production value). The differences among average intensities in the six counties are significant as well. These regional characteristics of pollution intensity are attributable to the sector's inner structure (firms' scale distribution, technology distribution) and its spatial deviation.

  12. Ultrafast photoionization dynamics at high laser intensities in the xuv regime

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, B.; Vagov, A.; Axt, V. M.; Pietsch, U. [Institut fuer Theoretische Physik III, Universitaet Bayreuth, D-95440 Bayreuth (Germany); Institut fuer Festkoerperphysik, Universitaet Siegen, D-57068 Siegen (Germany)

    2011-10-15

    We study the ionization dynamics in the soft-x-ray regime for high intensities and short pulses for excitations near the ionization threshold. Using a one-dimensional helium atom model, we compare exact numerical solutions with time-dependent Hartree-Fock results in order to identify the role of electron-electron correlations. At moderate intensities but still in the x-ray and short-pulse regime, we find that the Hartree-Fock theory reproduces well the dynamics of the ground-state occupation, while at high intensities strong correlation effects occur for excitations close to the threshold. From their characteristic momentum distributions, we can identify contributions to the double ionization from sequential three-photon and nonsequential or sequential two-photon processes. At elevated intensities these contributions deviate from their usual intensity scaling due to saturation effects, even though the total double-ionization probability stays below 10%. Furthermore, analysis of the time evolution of the momentum distribution reveals signatures of the energy-time uncertainty which indicate a coherent regime of the dynamics.

  13. Thirty-eight years of training distribution in olympic speed skaters.

    NARCIS (Netherlands)

    Orie, J.N.M.; Hofman, N.; de Koning, J.J.; Foster Jr., C.C.

    2014-01-01

    During the last decade discussion about training-intensity distribution has been an important issue in sports science. Training-intensity distribution has not been adequately investigated in speed skating, a unique activity requiring both high power and high endurance. Purpose: To quantify the

  14. Lithium formate EPR dosimetry for verifications of planned dose distributions prior to intensity-modulated radiation therapy

    Science.gov (United States)

    Gustafsson, H.; Lund, E.; Olsson, S.

    2008-09-01

    The objective of the present investigation was to evaluate lithium formate electron paramagnetic resonance (EPR) dosimetry for measurement of dose distributions in phantoms prior to intensity-modulated radiation therapy (IMRT). Lithium formate monohydrate tablets were carefully prepared, and blind tests were performed in clinically relevant situations in order to determine the precision and accuracy of the method. Further experiments confirmed that within the accuracy of the current method, the dosimeter response was independent of beam energies and dose rates used for IMRT treatments. The method was applied to IMRT treatment plans, and the dose determinations were compared to ionization chamber measurements. The experiments showed that absorbed doses above 3 Gy could be measured with an uncertainty of less than 2.5% of the dose (coverage factor k = 1.96). Measurement time was about 15 min using a well-calibrated dosimeter batch. The conclusion drawn from the investigation was that lithium formate EPR dosimetry is a promising new tool for absorbed dose measurements in external beam radiation therapy, especially for doses above 3 Gy.

  15. Lithium formate EPR dosimetry for verifications of planned dose distributions prior to intensity-modulated radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Gustafsson, H; Lund, E [Department of Medical and Health Sciences, Radiation Physics, Faculty of Health Sciences, Linkoeping University, S-581 85 Linkoeping (Sweden); Olsson, S [Division of Radiation Physics, Linkoeping University Hospital, S-581 85 Linkoeping (Sweden)], E-mail: hakgu@imv.liu.se

    2008-09-07

    The objective of the present investigation was to evaluate lithium formate electron paramagnetic resonance (EPR) dosimetry for measurement of dose distributions in phantoms prior to intensity-modulated radiation therapy (IMRT). Lithium formate monohydrate tablets were carefully prepared, and blind tests were performed in clinically relevant situations in order to determine the precision and accuracy of the method. Further experiments confirmed that within the accuracy of the current method, the dosimeter response was independent of beam energies and dose rates used for IMRT treatments. The method was applied to IMRT treatment plans, and the dose determinations were compared to ionization chamber measurements. The experiments showed that absorbed doses above 3 Gy could be measured with an uncertainty of less than 2.5% of the dose (coverage factor k = 1.96). Measurement time was about 15 min using a well-calibrated dosimeter batch. The conclusion drawn from the investigation was that lithium formate EPR dosimetry is a promising new tool for absorbed dose measurements in external beam radiation therapy, especially for doses above 3 Gy.

  16. Improvement of dose distributions in abutment regions of intensity modulated radiation therapy and electron fields

    International Nuclear Information System (INIS)

    Dogan, Nesrin; Leybovich, Leonid B.; Sethi, Anil; Emami, Bahman

    2002-01-01

    In recent years, intensity modulated radiation therapy (IMRT) is used to radiate tumors that are in close proximity to vital organs. Targets consisting of a deep-seated region followed by a superficial one may be treated with abutting photon and electron fields. However, no systematic study regarding matching of IMRT and electron beams was reported. In this work, a study of dose distributions in the abutment region between tomographic and step-and-shoot IMRT and electron fields was carried out. A method that significantly improves dose homogeneity between abutting tomographic IMRT and electron fields was developed and tested. In this method, a target region that is covered by IMRT was extended into the superficial target area by ∼2.0 cm. The length and shape of IMRT target extension was chosen such that high isodose lines bent away from the region treated by the electrons. This reduced the magnitude of hot spots caused by the 'bulging effect' of electron field penumbra. To account for the uncertainties in positioning of the IMRT and electron fields, electron field penumbra was modified using conventional (photon) multileaf collimator (MLC). The electron beam was delivered in two steps: half of the dose delivered with MLCs in retracted position and another half with MLCs extended to the edge of electron field that abuts tomographic IMRT field. The experimental testing of this method using film dosimetry has demonstrated that the magnitude of the hot spots was reduced from ∼45% to ∼5% of the prescription dose. When an error of ±1.5 mm in field positioning was introduced, the dose inhomogeneity in the abutment region did not exceed ±15% of the prescription dose. With step-and-shoot IMRT, the most homogeneous dose distribution was achieved when there was a 3 mm gap between the IMRT and electron fields

  17. Reduction of intensity variations on a photovoltaic array with compound parabolic concentrators

    Science.gov (United States)

    Greenman, P.; Ogallagher, J.; Winston, R.; Costogue, E.

    1979-01-01

    The reduction of nonuniformities in the intensity distribution of light focused on a photovoltaic array by a compound parabolic concentrator is investigated. The introduction of small distortions into the surfaces of the reflector in order to diffuse the incident collimated light to fill the angular acceptance of the concentrator is calculated by means of ray tracing to decrease the irradiance nonuniformity at the cost of a lowered effective concentration of the concentrator. Measurements of the intensity distribution on a scale test model in terrestrial sunlight with corrugated aluminized mylar reflectors are shown to be in good agreement with the ray tracing results. A two-stage concentrator consisting of a focusing primary and a nonimaging secondary is also shown to result in a fairly uniform intensity distribution except in the case of a 4-deg incidence angle, which may be corrected by the introduction of distortions into one or both concentration stages.

  18. Testing of components on the shaking table facilities of AEP and contribution to full scale dynamic testing of Kozloduy NPP. Final report

    International Nuclear Information System (INIS)

    Ambriashvili, Y.

    1996-01-01

    This final report summarizes the results of component testing on the shaking table facilities of 'Atomenergoproject', which is considered a contribution to the full-scale dynamic testing of Kozloduy nuclear power plant Units 5 and 6. The testing was designed for 1.0 g on the basis of calculations using accelerograms that included both artificial records and known recordings of real earthquakes. The maximum accelerations of the design spectrum and of the newly recommended spectrum lie within the frequency range 2.5-20 Hz. The reactor and the primary loop, as well as the equipment tested by 'Atomenergoproject', are seismically stable.

  19. Quantization analysis of speckle intensity measurements for phase retrieval

    DEFF Research Database (Denmark)

    Maallo, Anne Margarette S.; Almoro, Percival F.; Hanson, Steen Grüner

    2010-01-01

    Speckle intensity measurements utilized for phase retrieval (PR) are sequentially taken with a digital camera, which introduces quantization error that diminishes the signal quality. The influences of quantization on the speckle intensity distribution and on PR are investigated numerically and experimentally in a static wavefront sensing setup. Results show that 3 to 4 bits are adequate to represent the speckle intensities and yield acceptable reconstructions at relatively fast convergence rates. Computer memory requirements may be reduced by a factor of 2.4 if a 4-bit instead of an 8-bit camera is used...
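    The effect of bit depth can be reproduced qualitatively with a small numerical sketch: quantize a simulated fully developed speckle pattern (exponentially distributed intensity) to a given number of bits and look at the residual error. The pattern size, bit depths and error metric are assumptions for illustration, not the experimental configuration.

```python
# Minimal sketch: n-bit quantization of a simulated speckle intensity pattern.
import numpy as np

def quantize(intensity, n_bits):
    """Quantize a normalized intensity pattern to 2**n_bits - 1 levels."""
    levels = 2 ** n_bits - 1
    scaled = intensity / intensity.max()
    return np.round(scaled * levels) / levels

rng = np.random.default_rng(2)
speckle = rng.exponential(scale=1.0, size=(256, 256))   # fully developed speckle statistics
reference = speckle / speckle.max()
for bits in (2, 3, 4, 8):
    rmse = np.sqrt(np.mean((quantize(speckle, bits) - reference) ** 2))
    print(f"{bits} bits: RMS quantization error = {rmse:.4f}")
```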

  20. Physical Activity and Abdominal Fat Distribution in Greenland.

    Science.gov (United States)

    Dahl-Petersen, Inger Katrine; Brage, Søren; Bjerregaard, Peter; Tolstrup, Janne Schurmann; Jørgensen, Marit Eika

    2017-10-01

    We examined how total volume of physical activity and reallocation of time spent at various objectively measured intensities of physical activity (PA) were associated with overall and abdominal fat distribution in adult Inuit in Greenland. Data were collected as part of a countrywide cross-sectional health survey in Greenland. A combined accelerometer and HR monitor measured total physical activity energy expenditure (PAEE) and intensities of PA (N = 1536). Visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) were assessed by ultrasonography. Isotemporal substitution modeling was used to analyze the association between substituting 1 h of sedentary time with light- or moderate-intensity PA, and 1 h of light-intensity PA with moderate- or vigorous-intensity PA, in relation to body mass index (BMI), waist circumference (WC), SAT, and VAT. A negative linear association was found between total PAEE and BMI, WC, VAT, and SAT. Exchanging 1 h of sedentary time with light-intensity PA was associated with lower WC (-0.6 cm, P = 0.01), SAT (-0.08 cm, P ...) ... with overall and abdominal fat distribution. Physical activity energy expenditure is associated with lower BMI, WC, and abdominal fat among Greenland Inuit. The findings highlight the importance of promoting an upward shift of the whole PA intensity distribution, and of encouraging even short bouts of moderate-to-vigorous PA (MVPA), to limit excessive accumulation of SAT and VAT.

  1. Beam intensity scanner system for three dimensional dose verification of IMRT

    International Nuclear Information System (INIS)

    Vahc, Young W.; Kwon, Ohyun; Park, Kwangyl; Park, Kyung R.; Yi, Byung Y.; Kim, Keun M.

    2003-01-01

    Patient dose verification is clinically one of the most important parts of treatment delivery in radiation therapy. Three-dimensional (3D) reconstruction of the dose distribution delivered to the target volume helps to verify patient dose and to determine the physical characteristics of the beams used in IMRT. Here we present the beam intensity scanner (BInS) system for pre-treatment dosimetric verification of two-dimensional photon intensity. The BInS is a radiation detector with custom-made software for dose conversion of fluorescence signals from a scintillator. The scintillator produces fluorescence under irradiation with 6 MV photons on a Varian Clinac 21EX. The digitized fluoroscopic signals obtained by the digital video camera-based scintillator (DVCS) are processed by our custom-made software to reproduce the 3D relative dose distribution. For the intensity-modulated beam (IMB), the BInS calculates absorbed dose from the absolute beam fluence, which is used for the patient dose distribution. Using the BInS, we performed various measurements related to IMRT and found the following: (1) The 3D dose profiles of the IMBs measured by the BInS demonstrate good agreement with radiographic film, a pin-type ionization chamber, and Monte Carlo simulation. (2) The delivered beam intensity is altered by the mechanical and dosimetric properties of the collimation of the dynamic and/or step MLC system. This is mostly due to leaf transmission, leaf penumbra, photons scattered from the rounded leaf ends, and leaf geometry. (3) The delivered dose depends on the operational details of how the multileaf openings are formed. These phenomena result in a fluence distribution that can be substantially different from the initial, calculated intensity modulation and therefore should be taken into account by the treatment planning system for accurate calculation of the dose delivered to the target volume in IMRT. (author)

  2. Real-Time Science on Social Media: The Example of Twitter in the Minutes, Hours, Days after the 2015 M7.8 Nepal Earthquake

    Science.gov (United States)

    Lomax, A.; Bossu, R.; Mazet-Roux, G.

    2015-12-01

    Scientific information on disasters such as earthquakes typically comes firstly from official organizations, news reports and interviews with experts, and later from scientific presentations and peer-reviewed articles. With the advent of the Internet and social media, this information is available in real-time from automated systems and within a dynamic, collaborative interaction between scientific experts, responders and the public. After the 2015 M7.8 Nepal earthquake, Twitter Tweets from earth scientists* included information, analysis, commentary and discussion on earthquake parameters (location, size, mechanism, rupture extent, high-frequency radiation, …), earthquake effects (distribution of felt shaking and damage, triggered seismicity, landslides, …), earthquake rumors (e.g. the imminence of a larger event) and other earthquake information and observations (aftershock forecasts, statistics and maps, source and regional tectonics, seismograms, GPS, InSAR, photos/videos, …). In the future (while taking into account security, false or erroneous information and identity verification), collaborative, real-time science on social media after a disaster will give earlier and better scientific understanding and dissemination of public information, and enable improved emergency response and disaster management. *A sample of scientific Tweets after the 2015 Nepal earthquake: In the first minutes: "mb5.9 Mwp7.4 earthquake Nepal 2015.04.25-06:11:25UTC", "Major earthquake shakes Nepal 8 min ago", "Epicenter between Pokhara and Kathmandu", "Major earthquake shakes Nepal 18 min ago. Effects derived from witnesses' reports". In the first hour: "shallow thrust faulting to North under Himalayas", "a very large and shallow event ... Mw7.6-7.7", "aftershocks extend east and south of Kathmandu, so likely ruptured beneath city", "Valley-blocking landslides must be a very real worry". In the first day: "M7.8 earthquake in Nepal 2hr ago: destructive in Kathmandu Valley and

  3. Distribution of congenital anomalies in a neonatal intensive care unit in Turkey.

    Science.gov (United States)

    Dursun, Arzu; Zenciroglu, Ayşegul; Hakan, Nilay; Karadag, Nilgun; Karagol, Belma Saygili; Aydin, Banu; Dilli, Dilek; Okumus, Nurullah; Beken, Serdar

    2014-07-01

    Congenital anomalies are one of the important causes of mortality and morbidity in newborns. The aim of this study is to determine the incidence, distribution and mortality of congenital anomalies in a single neonatal intensive care unit (NICU) in Turkey. A retrospective analysis was performed between 2005 and 2012 in the NICU using a computerized database. Variables including the type of anomaly, antenatal and postnatal history, gestational age, birth weight, consanguinity and other demographic, clinical and related laboratory variables were extracted from the computerized database using ICD-10 codes. Congenital anomalies were classified according to the involved organ systems and also classified as single or multiple anomalies. A total of 1024 newborns with a congenital anomaly (CA) (13.7%) were identified among the 7450 newborns hospitalized in the NICU. The most affected system was the cardiovascular system (68.8%). Most of the anomalies (67.1%) were single anomalies. Of all, 59.4% had a single major, 7.7% a single minor, 9% a single major plus a single minor, 18.4% multiple major and 2% multiple minor anomalies. In addition, 96.3, 1.9, 0.1 and 1.7% of the newborns had a malformation, deformation, disruption and dysplasia, respectively. Chromosomal analysis was performed in only 24.8% of the newborns with CA, and 65.3% of these analyses were within normal limits. The most frequently detected chromosomal abnormality was trisomy 21. The overall mortality rate was 15.5% among the newborns with CA. In conclusion, the most common and most lethal CAs in our hospital were cardiovascular malformations. The overall prevalence of cardiovascular malformations among the newborns was higher than in previously reported studies from Turkey. Further studies with larger sample sizes are needed to characterize CAs in Turkey.

  4. Tropical Cyclone Lightning Distribution and Its Relationship to Convection and Intensity Change

    Science.gov (United States)

    Rodgers, Edward; Wienman, James; Pierce, Harold; Olson, William

    2000-01-01

    The long-range National Lightning Detection Network (NLDN) was used to monitor the distribution of lightning strokes in various 1998 and 1999 western North Atlantic tropical cyclones. These ground-based lightning observations, together with convective rain rates derived from the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I) and the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), were used to monitor the propagation of electrically charged convective rain bands and to qualitatively estimate intensification. An example of the lightning analysis was performed for Hurricane George between 25-28 September 1998, when the system left Key West and moved towards the Louisiana coast. During this period, George's maximum winds increased from 38 to 45 meters per second on 25 September and then remained steady until landfall. Time-radius displays of the lightning strokes indicated that the greatest number of strokes occurred within the outer core region (greater than 165 km), with little or no lightning at radii less than 165 km. The number of strokes decreased as George moved into the Gulf of Mexico and showed no inward propagation. The lack of inward-propagating lightning strokes over time indicated no evidence that an eyewall replacement was occurring that could alter George's intensity. Since George was steady state at this time, this result is not surprising. Time-azimuth displays of lightning strokes in an annulus whose inner and outer radii were, respectively, 222 and 333 km from George's center were also constructed. This analysis indicated that the maximum number of strokes occurred in the forward and rear right quadrants when George was over the Gulf of Mexico. This result is consistent with aircraft and satellite observations of maximum rainfall.
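    The time-radius displays mentioned above are essentially two-dimensional histograms of stroke counts in time and storm-relative radius. A minimal sketch of how such a display could be built from stroke times and radial distances is shown below; the stroke data, bin edges, and the inner-core cut are illustrative assumptions rather than the authors' processing chain.

      # Illustrative Python sketch: binning lightning strokes into a time-radius grid (assumed data)
      import numpy as np

      def time_radius_counts(times_h, radii_km, t_edges, r_edges):
          """Count strokes per (time, radius) cell for a time-radius display."""
          counts, _, _ = np.histogram2d(times_h, radii_km, bins=[t_edges, r_edges])
          return counts

      rng = np.random.default_rng(2)
      times = rng.uniform(0, 72, 5000)                      # hours since first fix (hypothetical)
      radii = rng.gamma(shape=4.0, scale=60.0, size=5000)   # km from storm center, mostly outer core

      grid = time_radius_counts(times, radii,
                                t_edges=np.arange(0, 73, 6),
                                r_edges=np.arange(0, 361, 30))
      inner_core = grid[:, :5].sum(axis=1)                  # strokes inside ~150 km, per time bin
      print(grid.shape, inner_core)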

  5. The Association between Cigarette Smoking and Acne Intensity

    Directory of Open Access Journals (Sweden)

    Taheri Ramin

    2009-06-01

    Full Text Available Background: Acne vulgaris is a common chronic inflammatory disease of the pilosebaceous unit. Different factors have been suggested to influence acne, including diet, menstruation and occupation, and the role of some of these factors in acne intensity is confirmed. An effect of cigarette smoking on acne intensity has also been suggested. In this research, we evaluated the association between cigarette smoking and acne intensity. Materials and Methods: This cross-sectional study was performed on 278 smoker and 277 non-smoker males referred to the dermatology clinics of Semnan during 2006-2007. Dermatologists interviewing the patients completed questionnaires based on the clinical diagnosis and intensity of acne. Data analysis was performed using t-test, Mann-Whitney, chi-square and Spearman coefficient tests. P-values less than 0.05 were considered significant. Results: Severe acne was observed in 16.6% of non-smokers and 22.7% of smokers. The difference in the distribution of acne intensity between the two groups was significant (P=0.023). The association between duration of cigarette smoking and acne intensity was also significant (P<0.001), as was the association between the amount of cigarette smoking and acne intensity (P<0.001). Conclusion: The significant association between cigarette smoking and acne intensity suggests that smoking cessation may be helpful for reducing acne intensity.

  6. Physical Activity and Abdominal Fat Distribution in Greenland

    DEFF Research Database (Denmark)

    Dahl-Petersen, Inger Katrine; Brage, Søren; Bjerregaard, Peter

    2017-01-01

    PURPOSE: We examined how total volume of physical activity and reallocation of time spent at various objectively measured intensities of physical activity (PA) were associated with overall and abdominal fat distribution in adult Inuit in Greenland. METHODS: Data were collected as part ... of a countrywide cross-sectional health survey in Greenland. A combined accelerometer and HR monitor measured total physical activity energy expenditure (PAEE) and intensities of PA (N = 1536). Visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) were assessed by ultrasonography. Isotemporal ... with overall and abdominal fat distribution. CONCLUSION: Physical activity energy expenditure is associated with lower BMI, WC, and abdominal fat among Greenland Inuit. The importance of promoting an upward shift of the whole PA intensity distribution, and of encouraging even short bouts of MVPA, to limit excessive ...

  17. New characteristics of intensity assessment of the Sichuan Lushan "4.20" Ms 7.0 earthquake

    Science.gov (United States)

    Sun, Baitao; Yan, Peilei; Chen, Xiangzhao

    2014-08-01

    The rapid and accurate post-earthquake assessment of the macroscopic influence of seismic ground motion is of great significance for earthquake emergency relief, post-earthquake reconstruction and scientific research. The seismic intensity distribution map released by the Lushan earthquake field team of the China Earthquake Administration (CEA) five days after the strong earthquake (M7.0) that occurred in Lushan County, Ya'an City, Sichuan Province, at 8:02 on April 20, 2013 provides a scientific basis for emergency relief, economic loss assessment and post-earthquake reconstruction. In this paper, the means of blind estimation of macroscopic intensity, field estimation of macroscopic intensity, and intensity review, as well as the corresponding problems, are discussed in detail, and the intensity distribution characteristics of the Lushan "4.20" M7.0 earthquake and its influencing factors are analyzed, providing a reference for future seismic intensity assessments.

  8. 3-D simulations of M9 earthquakes on the Cascadia Megathrust: Key parameters and uncertainty

    Science.gov (United States)

    Wirth, Erin; Frankel, Arthur; Vidale, John; Marafi, Nasser A.; Stephenson, William J.

    2017-01-01

    Geologic and historical records indicate that the Cascadia subduction zone is capable of generating large, megathrust earthquakes up to magnitude 9. The last great Cascadia earthquake occurred in 1700, and thus there is no direct measure of the intensity of ground shaking or of specific rupture parameters from seismic recordings. We use 3-D numerical simulations to generate broadband (0-10 Hz) synthetic seismograms for 50 M9 rupture scenarios on the Cascadia megathrust. Slip consists of multiple high-stress-drop subevents (~M8) with short rise times on the deeper portion of the fault, superimposed on a background slip distribution with longer rise times. We find a >4x variation in the intensity of ground shaking depending upon several key parameters, including the down-dip limit of rupture, the slip distribution and location of strong-motion-generating subevents, and the hypocenter location. We find that extending the down-dip limit of rupture to the top of the non-volcanic tremor zone results in a ~2-3x increase in peak ground acceleration for the inland city of Seattle, Washington, compared to a completely offshore rupture. However, our simulations show that allowing the rupture to extend to the up-dip limit of tremor (i.e., the deepest rupture extent in the National Seismic Hazard Maps), even when tapering the slip to zero at the down-dip edge, results in multiple areas of coseismic coastal uplift. This is inconsistent with coastal geologic evidence (e.g., buried soils, submerged forests), which suggests predominantly coastal subsidence for the 1700 earthquake and previous events. Defining the down-dip limit of rupture as the 1 cm/yr locking contour (i.e., mostly offshore) results in primarily coseismic subsidence at coastal sites. We also find that the presence of deep subevents can produce along-strike variations in subsidence and ground shaking along the coast. Our results demonstrate the wide range of possible ground motions from an M9 megathrust earthquake in

  9. Representation of chromatic distribution for lighting system

    Science.gov (United States)

    Rossi, Maurizio; Musante, Fulvio

    2015-01-01

    For the luminaire manufacturer, measurement of the luminous intensity distribution (LID) emitted by a lighting fixture is based on photometry. Light is thus measured as an achromatic intensity value, and there is no possibility of discriminating between white and colored light. At the Laboratorio Luce of Politecnico di Milano a new instrument for measuring the spectral radiant intensity distribution of lighting systems has been built: the gonio-spectroradiometer. This new measuring tool is based on a traditional mirror gonio-photometer with a CCD spectroradiometer controlled by a PC. Besides the traditional representation of the photometric distribution, we have introduced a new representation in which, in addition to the information about the distribution of luminous intensity in space, new details about the chromaticity characteristics of the light sources are provided. Some of the results of this research have been applied in developing and testing a new line of lighting systems, "My White Light" (the research project "Light, Environment and Humans", funded within the Italian Lombardy region Metadistretti Design Research Program and involving Politecnico di Milano, Artemide, Danese, and other SMEs of the lighting design district), giving scientific and applicative support to the assumption that colored light sources can be used to realize interior luminaires that, besides having low power consumption and long life, may positively affect people's mood.

  10. Excitation-energy dependence of the resonant Auger transitions to the 4p^4(^1D)np (n=5,6) states across the 3d_{3/2}^{-1}5p and 3d_{5/2}^{-1}6p resonances in Kr

    International Nuclear Information System (INIS)

    Sankari, A.; Alitalo, S.; Nikkinen, J.; Kivimaeki, A.; Aksela, S.; Aksela, H.; Fritzsche, S.

    2007-01-01

    The energy dependencies of the intensities and angular distribution parameters β of the resonant Auger final states 4p^4(^1D)np (n=5,6) of Kr were determined experimentally in the excitation-energy region of the overlapping 3d_{3/2}^{-1}5p and 3d_{5/2}^{-1}6p resonances. The experimental results were compared with the outcome of multiconfiguration Dirac-Fock calculations. Combining experimental and calculated results allowed us to study interference effects between the direct channel and several resonant channels that populate the 4p^4(^1D)np states. The inclusion of the direct channel was crucial in order to reproduce the observed energy behavior of the angular distribution parameters. It was also important to take into account experimentally observed shake transitions.
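    For reference, the angular distribution parameter β in such measurements conventionally enters through the standard electric-dipole form of the electron angular distribution for linearly polarized light, dσ/dΩ ∝ 1 + β·P2(cosθ). The short sketch below simply evaluates this form; the β values and angles are illustrative, not the measured Kr results.

      # Illustrative Python sketch: standard dipole angular distribution, 1 + beta * P2(cos theta)
      import numpy as np

      def angular_distribution(theta_rad, beta):
          """Relative emission intensity vs. angle theta from the polarization axis."""
          p2 = 0.5 * (3.0 * np.cos(theta_rad) ** 2 - 1.0)   # second Legendre polynomial
          return 1.0 + beta * p2

      theta = np.deg2rad([0.0, 54.7, 90.0])   # 54.7 deg is the "magic angle" where P2 = 0
      for beta in (-1.0, 0.0, 2.0):           # beta is bounded by [-1, 2] in the dipole approximation
          print(beta, np.round(angular_distribution(theta, beta), 3))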

  11. Fan beam intensity modulated proton therapy

    Science.gov (United States)

    Hill, Patrick M.

    A fan beam proton therapy system is developed which delivers intensity-modulated proton therapy using distal edge tracking. The system may be retrofitted onto existing proton therapy gantries without alterations to infrastructure in order to improve treatments through intensity modulation. A novel range and intensity modulation system is designed using acrylic leaves that are inserted into or retracted from subsections of the fan beam. Leaf thicknesses are chosen in a base-2 system and actuated in a binary manner. Dose spots from individual beam channels range between 1 and 5 cm. Integrated collimators intended to limit crosstalk among beam channels are investigated, but found to be inferior to uncollimated beam channel modulators. A treatment planning system performing data manipulation in MATLAB and dose calculation in MCNPX is developed. Beamlet dose is calculated on patient CT data, and a fan beam source is manually defined to produce accurate results. An energy deposition tally follows the CT grid, allowing straightforward registration of dose and image data. Simulations of beam channels assume that a beam channel either delivers dose to a distal edge spot or is intensity modulated. A final calculation is performed separately to determine the deliverable dose accounting for all sources of scatter. Treatment plans investigate the effects that varying system parameters have on dose distributions. Beam channel apertures may be as large as 20 mm because the sharp distal falloff characteristic of proton dose provides sufficient intensity modulation to meet dose objectives, even in the presence of coarse lateral resolution. Dose conformity suffers only when treatments are delivered from fewer than 10 angles. Jaw widths of 1-2 cm produce comparable dose distributions, but a jaw width of 4 cm produces unacceptable target coverage when maintaining critical structure avoidance. Treatment time for a prostate delivery is estimated to be on the order of 10 minutes. Neutron production
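    A base-2 leaf set means that a small number of leaves can realize any integer range shift up to their summed thickness, by treating each leaf as one bit. The sketch below shows that selection logic only; the leaf thicknesses and the target value are hypothetical and not taken from the thesis.

      # Illustrative Python sketch: choosing binary (base-2) leaves to build up a desired thickness
      def leaf_selection(target_mm, leaf_set_mm=(1, 2, 4, 8, 16)):
          """Return which leaves to insert so the inserted acrylic totals target_mm."""
          chosen = []
          for thickness in sorted(leaf_set_mm, reverse=True):
              if target_mm >= thickness:
                  chosen.append(thickness)
                  target_mm -= thickness
          if target_mm != 0:
              raise ValueError("target not representable by this leaf set")
          return chosen

      print(leaf_selection(13))   # -> [8, 4, 1], i.e. binary 01101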

  12. Intense diagnostic neutral beam development for ITER

    International Nuclear Information System (INIS)

    Rej, D.J.; Henins, I.; Fonck, R.J.; Kim, Y.J.

    1992-01-01

    For the next-generation, burning tokamak plasmas such as ITER, diagnostic neutral beams and beam spectroscopy will continue to be used to determine a variety of plasma parameters such as ion temperature, rotation, fluctuations, impurity content, current density profile, and confined alpha particle density and energy distribution. Present-day low-current, long-pulse beam technology will be unable to provide the required signal intensities because of the higher beam attenuation and background bremsstrahlung radiation in these larger, higher-density plasmas. To address this problem, we are developing a short-pulse, intense diagnostic neutral beam. Protons or deuterons are accelerated using magnetically insulated ion-diode technology and neutralized in a transient gas cell. A prototype 25-kA, 100-kV, 1-μs accelerator is under construction at Los Alamos. Initial experiments will focus on ITER-related issues of beam energy distribution, current density, pulse length, divergence, propagation, impurity content, reproducibility, and maintenance.

  13. Does intense monitoring matter? A quantile regression approach

    Directory of Open Access Journals (Sweden)

    Fekri Ali Shawtari

    2017-06-01

    Full Text Available Corporate governance has become a centre of attention in corporate management at both micro and macro levels due to the adverse consequences and repercussions of insufficient accountability. In this study, we use the Malaysian stock market as a sample to explore the impact of intense monitoring on the relationship between intellectual capital performance and market valuation. The objectives of the paper are threefold: (i) to investigate whether intense monitoring affects the intellectual capital performance of listed companies; (ii) to explore the impact of intense monitoring on firm value; (iii) to examine the extent to which directors serving on more than two board committees affect the linkage between intellectual capital performance and firm value. We employ two approaches, namely the ordinary least squares (OLS) approach and the quantile regression approach. The purpose of the latter is to estimate and draw inferences about conditional quantile functions. This method is useful when the conditional distribution does not have a standard shape, such as an asymmetric, fat-tailed, or truncated distribution. In terms of variables, intellectual capital is measured using the value added intellectual coefficient (VAIC), while market valuation is proxied by the firm's market capitalization. The findings of the quantile regression show that some of the results do not coincide with the results of OLS. We find that the intensity of monitoring does not influence the intellectual capital of all firms. It is also evident that the intensity of monitoring does not influence market valuation. However, to some extent, it moderates the relationship between intellectual capital performance and market valuation. This paper contributes to the existing literature by presenting new empirical evidence on the moderating effects of the intensity of monitoring by board committees on the relationship between performance and intellectual capital.
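    The OLS-versus-quantile-regression comparison described above can be reproduced on synthetic data with standard tooling. The sketch below uses statsmodels; the variable names (mv, vaic) and the generated data are hypothetical stand-ins, not the study's Malaysian dataset, and only illustrate how slopes can differ across quantiles when the conditional distribution is asymmetric or fat-tailed.

      # Illustrative Python sketch: OLS vs. quantile regression on synthetic, fat-tailed data
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      vaic = rng.uniform(1.0, 10.0, 500)                                   # hypothetical IC score
      mv = 2.0 + 0.8 * vaic + rng.standard_t(3, 500) * (0.5 + 0.3 * vaic)  # heteroscedastic, fat-tailed
      df = pd.DataFrame({"mv": mv, "vaic": vaic})

      print("OLS slope:", smf.ols("mv ~ vaic", df).fit().params["vaic"])
      for q in (0.1, 0.5, 0.9):
          slope = smf.quantreg("mv ~ vaic", df).fit(q=q).params["vaic"]
          print(f"quantile {q}: slope {slope:.2f}")   # slopes differ across the conditional distribution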

  14. Workload modelling for data-intensive systems

    CERN Document Server

    Lassnig, Mario

    This thesis presents a comprehensive study built upon the requirements of a global data-intensive system developed for the ATLAS Experiment at CERN's Large Hadron Collider. First, a scalable method is described to capture distributed data management operations in a non-intrusive way. These operations are collected into a globally synchronised sequence of events, the workload. A comparative analysis of this new data-intensive workload against existing computational workloads is conducted, leading to the discovery of the importance of descriptive attributes in the operations. Existing computational workload models only consider the arrival rates of operations; in data-intensive systems, however, the correlations between attributes play a central role. Furthermore, the detrimental effect of rapid correlated arrivals, so-called bursts, is assessed. A model is proposed that can learn burst behaviour from captured workload and, in turn, forecast potential future bursts. To help with the creation of a full representative...

  15. Earthquake early warning system using real-time signal processing

    Energy Technology Data Exchange (ETDEWEB)

    Leach, R.R. Jr.; Dowla, F.U.

    1996-02-01

    An earthquake warning system has been developed to provide a time series profile from which vital parameters, such as the time until strong shaking begins, the intensity of the shaking, and the duration of the shaking, can be derived. Interaction of different types of ground motion and changes in the elastic properties of geological media throughout the propagation path result in a highly nonlinear function. We use neural networks to model these nonlinearities and develop learning techniques for the analysis of temporal precursors occurring in the emerging earthquake seismic signal. The warning system is designed to analyze the first arrival from the three components of an earthquake signal and instantaneously provide a profile of impending ground motion, in as little as 0.3 sec after first ground motion is felt at the sensors. For each new data sample, at a rate of 25 samples per second, the complete profile of the earthquake is updated. The profile consists of a magnitude-related estimate as well as an estimate of the envelope of the complete earthquake signal. The envelope provides estimates of damage parameters, such as time until peak ground acceleration (PGA) and duration. The neural network based system is trained using seismogram data from more than 400 earthquakes recorded in southern California. The system has been implemented in hardware using silicon accelerometers and a standard microprocessor. The proposed warning units can be used for site-specific applications, distributed networks, or to enhance existing distributed networks. By producing accurate and informative warnings, the system has the potential to significantly minimize the hazards of catastrophic ground motion. Detailed system design and performance issues, including error measurement in a simple warning scenario, are discussed in detail.

  16. Valence one-electron and shake-up ionization bands of fluorene, carbazole and dibenzofuran

    International Nuclear Information System (INIS)

    Reza Shojaei, S.H.; Morini, Filippo; Deleuze, Michael S.

    2013-01-01

    Highlights: • The photoelectron spectra of the title compounds are assigned in details. • Shake-up lines are found to severely contaminate both π- and σ-ionization bands. • σ-ionization onsets are subject to severe vibronic coupling complications. • We compare the results of OVGF, ADC(3) and TDDFT calculations. - Abstract: A comprehensive study of the He (I) ultra-violet photoelectron spectra of fluorene, carbazole and dibenzofuran is presented with the aid of one-particle Green’s Function calculations employing the outer-valence Green’s Function (OVGF) approach and the third-order algebraic diagrammatic construction [ADC(3)] scheme, along with Dunning’s correlation consistent basis sets of double and triple zeta quality (cc-pVDZ, cc-pVTZ). Extrapolations of the ADC(3) results for the outermost one-electron π-ionization energies to the cc-pVTZ basis set enable theoretical insights into He (I) measurements within ∼0.15 eV accuracy, up to the σ-ionization onset. The lower ionization energy of carbazole is the combined result of mesomeric and electronic relaxation effects. OVGF/cc-pVDZ or OVGF/cc-pVTZ pole strengths smaller than 0.85 systematically corroborate a breakdown of the orbital picture of ionization at the ADC(3) level. Comparison is made with calculations of the lowest doublet–doublet excitation energies of the radical cation of fluorene, by means of time-dependent density functional theory (TDDFT)

  17. Comparison of Kodak EDR2 and Gafchromic EBT film for intensity-modulated radiation therapy dose distribution verification.

    Science.gov (United States)

    Sankar, A; Ayyangar, Komanduri M; Nehru, R Mothilal; Kurup, P G Gopalakrishna; Murali, V; Enke, Charles A; Velmurugan, J

    2006-01-01

    The quantitative dose validation of intensity-modulated radiation therapy (IMRT) plans requires 2-dimensional (2D) high-resolution dosimetry systems with a uniform response over their sensitive region. The present work deals with the clinical use of a commercially available self-developing radiochromic film, Gafchromic EBT film, for IMRT dose verification. Dose response curves were generated for the films using a VXR-16 film scanner. The results obtained with EBT films were compared with the results of Kodak extended dose range 2 (EDR2) films. The EBT film had a linear response over the dose range of 0 to 600 cGy. The dose-related characteristics of the EBT film, such as post-irradiation color growth with time, film uniformity, and the effect of scanning orientation, were studied. There was an increase of up to 8.6% in the color density between 2 and 40 hours after irradiation. There was considerable variation, up to 8.5%, in the film uniformity over its sensitive region. The quantitative differences between calculated and measured dose distributions were analyzed using the distance-to-agreement (DTA) and gamma index with tolerances of 3% dose difference and 3 mm distance to agreement. The EDR2 films showed consistent results with the calculated dose distributions, whereas the results obtained using EBT were inconsistent. The variation in film uniformity limits the use of EBT film for conventional large-field IMRT verification. For IMRT of smaller field sizes (4.5 x 4.5 cm), the results obtained with EBT were comparable with those of EDR2 films.
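    The 3%/3 mm gamma comparison used above combines a dose-difference criterion and a distance-to-agreement criterion into a single pass/fail index. A minimal one-dimensional sketch of the standard (global) gamma index is given below; the profiles are synthetic stand-ins for the calculated and film-measured distributions, not data from this study.

      # Illustrative Python sketch: 1-D global gamma index with 3% / 3 mm criteria (synthetic profiles)
      import numpy as np

      def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd_frac=0.03, dta_mm=3.0):
          """For each evaluated point, minimum combined dose/distance metric over the reference profile."""
          d_norm = dd_frac * d_ref.max()          # global dose-difference criterion
          gam = np.empty(x_eval.size)
          for i, (xe, de) in enumerate(zip(x_eval, d_eval)):
              g2 = ((x_ref - xe) / dta_mm) ** 2 + ((d_ref - de) / d_norm) ** 2
              gam[i] = np.sqrt(g2.min())
          return gam

      x = np.arange(0.0, 100.0, 1.0)                        # positions in mm
      calc = 200.0 * np.exp(-((x - 50.0) / 20.0) ** 2)      # hypothetical calculated profile (cGy)
      meas = calc * (1.0 + 0.02 * np.random.default_rng(3).standard_normal(x.size))
      g = gamma_1d(x, calc, x, meas)
      print("gamma pass rate:", np.mean(g <= 1.0))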

  18. Comparison of Kodak EDR2 and Gafchromic EBT film for intensity-modulated radiation therapy dose distribution verification

    International Nuclear Information System (INIS)

    Sankar, A.; Ayyangar, Komanduri M.; Nehru, R. Mothilal; Gopalakrishna Kurup, P.G.; Murali, V.; Enke, Charles A.; Velmurugan, J.

    2006-01-01

    The quantitative dose validation of intensity-modulated radiation therapy (IMRT) plans requires 2-dimensional (2D) high-resolution dosimetry systems with a uniform response over their sensitive region. The present work deals with the clinical use of a commercially available self-developing radiochromic film, Gafchromic EBT film, for IMRT dose verification. Dose response curves were generated for the films using a VXR-16 film scanner. The results obtained with EBT films were compared with the results of Kodak extended dose range 2 (EDR2) films. The EBT film had a linear response over the dose range of 0 to 600 cGy. The dose-related characteristics of the EBT film, such as post-irradiation color growth with time, film uniformity, and the effect of scanning orientation, were studied. There was an increase of up to 8.6% in the color density between 2 and 40 hours after irradiation. There was considerable variation, up to 8.5%, in the film uniformity over its sensitive region. The quantitative differences between calculated and measured dose distributions were analyzed using the distance-to-agreement (DTA) and gamma index with tolerances of 3% dose difference and 3 mm distance to agreement. The EDR2 films showed consistent results with the calculated dose distributions, whereas the results obtained using EBT were inconsistent. The variation in film uniformity limits the use of EBT film for conventional large-field IMRT verification. For IMRT of smaller field sizes (4.5 x 4.5 cm), the results obtained with EBT were comparable with those of EDR2 films.

  19. Factors Contributing to the Catastrophe in Mexico City During the Earthquake of September 19, 1985

    OpenAIRE

    Beck, James L.; Hall, John F.

    1986-01-01

    The extensive damage to high-rise buildings in Mexico City during the September 19, 1985 earthquake is primarily due to the intensity of the ground shaking exceeding what was previously considered credible for the city by Mexican engineers. There were two major factors contributing to the catastrophe: resonance in the sediments of an ancient lake that once existed in the Valley of Mexico, and the long duration of shaking compared with other coastal earthquakes in the last 50 years. Both of th...

  20. Intensity maps of MeV electrons and protons below the radiation belt

    International Nuclear Information System (INIS)

    Kohno, T.; Munakata, K.; Murakami, H.; Nakamoto, A.; Hasebe, N.; Kikuchi, J.; Doke, T.

    1988-01-01

    The global distributions of energetic electrons (0.19-3.2 MeV) and protons (0.64-35 MeV) are shown in the form of contour maps. The data were obtained by two sets of energetic particle telescopes on board the satellite OHZORA. The observed altitude range is 350-850 km. Ten-degree meshes in longitude and latitude were used to obtain the intensity contours. A pitch angle distribution of J(α) = J(90)·sin^n(α) with n = 5 is assumed to obtain the average intensity in each mesh. (author)
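    Given the assumed pitch-angle form J(α) = J(90)·sin^n(α), the mesh-averaged intensity can be obtained by averaging over the pitch-angle sphere. The sketch below does this numerically with solid-angle (sin α) weighting; the weighting choice and the J(90) value are illustrative assumptions, since the record does not spell out the exact averaging used.

      # Illustrative Python sketch: averaging J(alpha) = J(90) * sin(alpha)**n over pitch angle (n = 5)
      import numpy as np

      def mesh_average_intensity(j90, n=5, samples=1801):
          alpha = np.linspace(0.0, np.pi, samples)        # pitch angle grid
          j = j90 * np.sin(alpha) ** n                    # assumed pitch-angle distribution
          weight = np.sin(alpha)                          # solid-angle weighting (assumption)
          return np.sum(j * weight) / np.sum(weight)

      print(mesh_average_intensity(j90=100.0))            # hypothetical J(90) in arbitrary units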

  1. Coupling biophysical processes and water rights to simulate spatially distributed water use in an intensively managed hydrologic system

    Science.gov (United States)

    Han, Bangshuai; Benner, Shawn G.; Bolte, John P.; Vache, Kellie B.; Flores, Alejandro N.

    2017-07-01

    Humans have significantly altered the redistribution of water in intensively managed hydrologic systems, shifting the spatiotemporal patterns of surface water. Evaluating water availability requires integration of hydrologic processes and associated human influences. In this study, we summarize the development and evaluation of an extensible hydrologic model that explicitly integrates water rights to spatially distribute irrigation waters in a semi-arid agricultural region in the western US, using the Envision integrated modeling platform. The model captures both human and biophysical systems, particularly the diversion of water from the Boise River, which is the main water source that supports irrigated agriculture in this region. In agricultural areas, water demand is estimated as a function of crop type and local environmental conditions. Surface water to meet crop demand is diverted from the stream reaches, constrained by the amount of water available in the stream, the water-rights-appropriated amount, and the priority dates associated with particular places of use. Results, measured by flow rates at gaged stream and canal locations within the study area, suggest that the impacts of irrigation activities on the magnitude and timing of flows through this intensively managed system are well captured. The multi-year averaged diverted water from the Boise River matches observations well, reflecting the appropriation of water according to the water rights database. Because of the spatially explicit implementation of surface water diversion, the model can help diagnose places and times where water resources are likely insufficient to meet agricultural water demands, and inform future water management decisions.
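    The diversion logic described above (demand limited by streamflow, by the appropriated amount, and by priority date) is essentially prior-appropriation ("first in time, first in right") allocation. The sketch below shows one minimal way such an allocation step could look; the place names, dates, and flow numbers are hypothetical, and this is not the Envision implementation.

      # Illustrative Python sketch: priority-date (prior appropriation) water allocation (assumed data)
      from dataclasses import dataclass

      @dataclass
      class WaterRight:
          place_of_use: str
          priority_year: int      # earlier year = more senior right
          appropriated: float     # maximum allowed diversion (m^3/s)
          demand: float           # crop water demand this time step (m^3/s)

      def allocate(available, rights):
          """Serve senior rights first; each diversion is capped by demand, appropriation, and remaining flow."""
          allocation = {}
          for r in sorted(rights, key=lambda r: r.priority_year):
              take = min(r.demand, r.appropriated, available)
              allocation[r.place_of_use] = take
              available -= take
          return allocation

      rights = [WaterRight("Farm A", 1890, 2.0, 1.5),
                WaterRight("Farm B", 1925, 1.0, 1.0),
                WaterRight("Farm C", 1968, 1.5, 1.2)]
      print(allocate(available=3.0, rights=rights))   # junior rights are curtailed first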

  2. Seismic hazard, risk, and design for South America

    Science.gov (United States)

    Petersen, Mark D.; Harmsen, Stephen; Jaiswal, Kishor; Rukstales, Kenneth S.; Luco, Nicolas; Haller, Kathleen; Mueller, Charles; Shumway, Allison

    2018-01-01

    We calculate seismic hazard, risk, and design criteria across South America using the latest data, models, and methods to support public officials, scientists, and engineers in earthquake risk mitigation efforts. Updated continental-scale seismic hazard models are based on a new seismicity catalog, seismicity rate models, evaluation of earthquake sizes, fault geometry and rate parameters, and ground-motion models. Resulting probabilistic seismic hazard maps show peak ground acceleration, modified Mercalli intensity, and spectral accelerations at 0.2 and 1 s periods for 2%, 10%, and 50% probabilities of exceedance in 50 yrs. Ground shaking soil amplification at each site is calculated by considering uniform soil, as applied in modern building codes, or by applying site-specific factors based on VS30 shear-wave velocities determined through a simple topographic proxy technique. We use these hazard models in conjunction with the Prompt Assessment of Global Earthquakes for Response (PAGER) model to calculate economic and casualty risk. Risk is computed by incorporating the new hazard values amplified by soil, PAGER fragility/vulnerability equations, and LandScan 2012 estimates of population exposure. We also calculate building design values using the guidelines established in the building code provisions. The resulting hazard and associated risk are high along the northern and western coasts of South America, reaching damaging levels of ground shaking in Chile, western Argentina, western Bolivia, Peru, Ecuador, Colombia, Venezuela, and in localized areas distributed across the rest of the continent where historical earthquakes have occurred. Constructing buildings and other structures to account for strong shaking in these regions of high hazard and risk should mitigate losses and reduce casualties from the effects of future earthquake strong ground shaking. National models should be developed by scientists and engineers in each country using the best
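    The 2%, 10%, and 50%-in-50-year exceedance levels quoted above map one-to-one to annual exceedance rates (equivalently, return periods) if exceedances are treated as a Poisson process. A small sketch of that standard conversion follows; it is a generic relation, not a calculation taken from the paper.

      # Illustrative Python sketch: Poisson conversion between exceedance probability and return period
      import math

      def prob_of_exceedance(annual_rate, years=50):
          """Probability of at least one exceedance in `years` under a Poisson assumption."""
          return 1.0 - math.exp(-annual_rate * years)

      def annual_rate_from_prob(p, years=50):
          """Annual exceedance rate implied by probability p in `years`."""
          return -math.log(1.0 - p) / years

      for p in (0.02, 0.10, 0.50):
          rate = annual_rate_from_prob(p)
          print(f"{p:.0%} in 50 yr  ->  return period ~ {1.0 / rate:.0f} yr")
      # 2% -> ~2475 yr, 10% -> ~475 yr, 50% -> ~72 yr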

  3. Geometrical theory of nonlinear phase distortion of intense laser beams

    International Nuclear Information System (INIS)

    Glaze, J.A.; Hunt, J.T.; Speck, D.R.

    1975-01-01

    Phase distortion arising from whole beam self-focusing of intense laser pulses with arbitrary spatial profiles is treated in the limit of geometrical optics. The constant shape approximation is used to obtain the phase and angular distribution of the geometrical rays in the near field. Conditions for the validity of this approximation are discussed. Geometrical focusing of the aberrated beam is treated for the special case of a beam with axial symmetry. Equations are derived that show both the shift of the focus and the distortion of the intensity distribution that are caused by the nonlinear index of refraction of the optical medium. An illustrative example treats the case of beam distortion in a Nd:Glass amplifier

  4. On the Capacity of the Intensity-Modulation Direct-Detection Optical Broadcast Channel

    KAUST Repository

    Chaaban, Anas

    2016-01-12

    The capacity of the intensity-modulation direct-detection optical broadcast channel (OBC) is investigated, under both average and peak intensity constraints. An outer bound on the capacity region is derived by adapting Bergmans' approach to the OBC. Inner bounds are derived by using superposition coding with either truncated-Gaussian (TG) distributions or discrete distributions. While the discrete distribution achieves higher rates, the TG distribution leads to a simpler representation of the achievable rate region. At high signal-to-noise ratio (SNR), it is shown that the TG distribution is nearly optimal. It achieves the symmetric capacity within a constant gap (independent of SNR), which approaches half a bit as the number of users grows. It also achieves the capacity region within a constant gap. At low SNR, it is shown that on-off keying (OOK) with time-division multiple-access (TDMA) is optimal. This is interesting in practice since both OOK and TDMA have low complexity. At moderate SNR (typically [0,8] dB), a discrete distribution with a small alphabet size achieves fairly good performance.

  5. On the Capacity of the Intensity-Modulation Direct-Detection Optical Broadcast Channel

    KAUST Repository

    Chaaban, Anas; Rezki, Zouheir; Alouini, Mohamed-Slim

    2016-01-01

    The capacity of the intensity-modulation direct-detection optical broadcast channel (OBC) is investigated, under both average and peak intensity constraints. An outer bound on the capacity region is derived by adapting Bergmans' approach to the OBC. Inner bounds are derived by using superposition coding with either truncated-Gaussian (TG) distributions or discrete distributions. While the discrete distribution achieves higher rates, the TG distribution leads to a simpler representation of the achievable rate region. At high signal-to-noise ratio (SNR), it is shown that the TG distribution is nearly optimal. It achieves the symmetric capacity within a constant gap (independent of SNR), which approaches half a bit as the number of users grows. It also achieves the capacity region within a constant gap. At low SNR, it is shown that on-off keying (OOK) with time-division multiple-access (TDMA) is optimal. This is interesting in practice since both OOK and TDMA have low complexity. At moderate SNR (typically [0,8] dB), a discrete distribution with a small alphabet size achieves fairly good performance.

  6. Characterization of the fast electrons distribution produced in a high intensity laser target interaction

    Energy Technology Data Exchange (ETDEWEB)

    Westover, B. [Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, California 92093 (United States); Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Chen, C. D.; Patel, P. K.; McLean, H. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Beg, F. N., E-mail: fbeg@ucsd.edu [Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, California 92093 (United States)

    2014-03-15

    Experiments on the Titan laser (∼150 J, 0.7 ps, 2 × 10^20 W cm^-2) at the Lawrence Livermore National Laboratory were carried out in order to study the properties of fast electrons produced by a high-intensity, short-pulse laser interacting with matter under conditions relevant to Fast Ignition. Bremsstrahlung x-rays produced by these fast electrons were measured by a set of compact filter-stack based x-ray detectors placed at three angles with respect to the target. The measured bremsstrahlung signal allows a characterization of the fast electron beam spectrum, the conversion efficiency of laser energy into fast electron kinetic energy, and the angular distribution. The Monte Carlo code Integrated Tiger Series was used to model the bremsstrahlung signal and infer a laser to fast electron conversion efficiency of 30%, an electron slope temperature of about 2.2 MeV, and a mean divergence angle of 39°. Simulations were also performed with the hybrid transport code ZUMA, which includes fields in the target. In this case, a conversion efficiency of laser energy to fast electron energy of 34% and a slope temperature between 1.5 MeV and 4 MeV, depending on the angle between the target normal direction and the measuring spectrometer, are found. The observed temperature of the bremsstrahlung spectrum, and therefore the inferred electron spectrum, are found to be angle dependent.

  7. Combining spray nozzle simulators with meshes: characterization of rainfall intensity and drop properties

    Science.gov (United States)

    Carvalho, Sílvia C. P.; de Lima, João L. M. P.; de Lima, M. Isabel P.

    2013-04-01

    Rainfall simulators can be a powerful tool to increase our understanding of hydrological and geomorphological processes. Nevertheless, the design and operation of rainfall simulators can be rather demanding when specific rainfall intensity distributions and drop characteristics must be achieved. Pressurized simulators have some advantages over non-pressurized simulators: drops do not rely on gravity to reach terminal velocity but are sprayed out under pressure, and pressurized simulators also yield a broad range of drop sizes in comparison with drop-former simulators. The main purpose of this study was to explore in the laboratory the potential of combining spray nozzle simulators with meshes in order to change rainfall characteristics (rainfall intensity, drop diameters, and drop fall speeds). Different types of spray nozzles were tested, such as single full-cone and multiple full-cone nozzles. The impact of the meshes on the simulated rain was studied by testing different materials (i.e. plastic and steel meshes), square apertures and wire thicknesses, and different vertical distances between the nozzle and the meshes underneath. The diameter and fall speed of the rain drops were measured using a Laser Precipitation Monitor (Thies Clima). The rainfall intensity range, the coefficients of uniformity of the sprays, and the drop size distribution, fall speed and kinetic energy were analysed. Results show that when meshes intercept drop trajectories, the spatial distribution of rainfall intensity and the drop size distribution are affected. As the spray nozzles typically generate small drop sizes and narrow drop size distributions, meshes can be used to promote the formation of bigger drops and to randomize their landing positions.
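    The drop-level quantities reported by such a disdrometer (diameter and fall speed) are what a kinetic-energy analysis like the one above is built from: each drop's mass follows from its diameter, and its kinetic energy from mass and fall speed. The sketch below shows that calculation on a small hypothetical sample; the diameters, speeds, and counts are invented for illustration.

      # Illustrative Python sketch: kinetic-energy flux from drop diameters and fall speeds (assumed sample)
      import numpy as np

      RHO_WATER = 1000.0  # kg/m^3

      def drop_kinetic_energy(diameter_mm, fall_speed_m_s):
          """Kinetic energy (J) of a single spherical water drop."""
          d = diameter_mm / 1000.0
          mass = RHO_WATER * np.pi * d ** 3 / 6.0
          return 0.5 * mass * fall_speed_m_s ** 2

      diam = np.array([0.5, 1.0, 2.0, 3.0, 4.0])        # mm
      speed = np.array([2.1, 4.0, 6.5, 8.1, 8.8])       # m/s
      counts = np.array([400, 150, 40, 10, 2])          # drops per m^2 per minute (hypothetical)
      ke_flux = np.sum(counts * drop_kinetic_energy(diam, speed))
      print(f"kinetic-energy flux ~ {ke_flux:.3f} J m^-2 min^-1")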

  8. A Scheduling Algorithm for the Distributed Student Registration System in Transaction-Intensive Environment

    Science.gov (United States)

    Li, Wenhao

    2011-01-01

    Distributed workflow technology has been widely used in modern education and e-business systems. Distributed web applications have shown cross-domain and cooperative characteristics to meet the needs of current distributed workflow applications. In this paper, the author proposes a dynamic and adaptive scheduling algorithm PCSA (Pre-Calculated…

  9. Moment Distributions of Phase Type

    DEFF Research Database (Denmark)

    Bladt, Mogens; Nielsen, Bo Friis

    In this paper we prove that the class of distributions on the positive reals with a rational Laplace transform, also known as matrix-exponential distributions, is closed under formation of moment distributions. In particular, the results are hence valid for the well-known class of phase-type dist... alternative representation in terms of sub-intensity matrices. Finally, we are able to find explicit expressions for both the Lorenz curve and the Gini index.

  10. Norm-Minimized Scattering Data from Intensity Spectra

    Directory of Open Access Journals (Sweden)

    Alexander Seel

    2016-01-01

    Full Text Available We apply the l1 minimizing technique of compressive sensing (CS to nonlinear quadratic observations. For the example of coherent X-ray scattering we provide the formulas for a Kalman filter approach to quadratic CS and show how to reconstruct the scattering data from their spatial intensity distribution.

  11. Some aspects of the design of intensity modulated beams for breast radiotherapy

    International Nuclear Information System (INIS)

    Evans, PM; Hansen, VN; Swindell, W

    1995-01-01

    An electronic portal imaging system has been used to design intensity-modulated beams to achieve compensation for missing tissue and tissue heterogeneity in tangential irradiation of the breast. A portal image of the breast is calibrated for radiological thickness, and an estimate of the outline of lung and soft tissue is made. This is used with the desired dose prescription to design intensity-modulated beams, IMBs. The practical implementation of the IMBs may be achieved using a multileaf collimator (MLC). The leaves of the MLC may be scanned dynamically or a set of multiple static fields may be used. We have compared the uniformity of the achievable dose distribution for both cases. In the static case, the effects of varying the number of fields and their relative intensities have been investigated. The use of scanning leaves yields a dose distribution which is close to optimal. Multiple static fields produce results close to optimal if a large number, typically 30, are used. However, even for the more practicable case of 5 fields, the hot and cold spots are significantly reduced compared to a simple wedge. When studying the optimum intensity distribution for the set of static fields, it was found that having the first field with a large intensity irradiating the whole target volume, plus a set of 'top-up' fields of equal magnitude, was best. This study suggests that an MLC may indeed be used to deliver IMBs for radiotherapy of the breast. We can presently deliver the multiple static field technique. For the small number of beams which are presently deliverable, an improvement in dosimetry over the use of a simple wedge is indicated. In the future, with the scanning-leaves technique, dose distributions with greatly reduced dose inhomogeneities should be achievable.

  12. Nonlinear Delta-f Particle Simulations of Collective Effects in High-Intensity Bunched Beams

    CERN Document Server

    Qin, Hong; Hudson, Stuart R; Startsev, Edward

    2005-01-01

    The collective effects in high-intensity 3D bunched beams are described self-consistently by the nonlinear Vlasov-Maxwell equations.* The nonlinear delta-f method,** a particle simulation method for solving the nonlinear Vlasov-Maxwell equations, is being used to study the collective effects in high-intensity 3D bunched beams. The delta-f method, as a nonlinear perturbative scheme, splits the distribution function into equilibrium and perturbed parts. The perturbed distribution function is represented as a weighted summation over discrete particles, where the particle orbits are advanced by equations of motion in the focusing field and self-consistent fields, and the particle weights are advanced by the coupling between the perturbed fields and the zero-order distribution function. The nonlinear delta-f method exhibits minimal noise and accuracy problems in comparison with standard particle-in-cell simulations. A self-consistent 3D kinetic equilibrium is first established for high intensity bunched beams. The...

  13. Modelos da distribuição temporal de chuvas intensas em Piracicaba, SP Time distribution models of intense rainfall in Piracicaba, SP, Brazil

    Directory of Open Access Journals (Sweden)

    Décio E. Cruciani

    2002-04-01

    Full Text Available The study of the temporal variation of intense rainfall is of great importance in hydrology for the analysis and prediction of extreme events, which are needed in engineering control projects. With this purpose, pluviograph data from the city of Piracicaba, SP, Brazil, for the period 1966 to 2000 were analysed to determine the time distribution of intense rainfall of 60 and 120 min duration during the rainy season from October through March. The 60 min rains were subdivided into three equal, successive intervals of 20 min each, while the 120 min rains were subdivided into four equal, successive intervals of 30 min each. The prevailing precipitation distribution model for both the 60 and 120 min rains was of the negative exponential type, in 85.7 and 50.7% of the cases, respectively. For the 60 min rains, with a mean rainfall depth of 20.7 mm, the distribution was 72.3, 21.4 and 6.2% of the total precipitation in the three successive 20 min intervals, respectively. For the 120 min rains, with a mean rainfall depth of 33.3 mm, the result was 60.1, 25.2, 11.1 and 3.6%, respectively, in the four successive 30 min intervals. The time distribution model of these rains was not modified by the total precipitation or by the rainfall duration, within the intervals considered.

  14. Predicting the distribution of intensive poultry farming in Thailand

    OpenAIRE

    Van Boeckel, Thomas P; Thanapongtharm, Weerapong; Robinson, Timothy; D’Aietti, Laura; Gilbert, Marius

    2012-01-01

    Intensification of animal production can be an important factor in the emergence of infectious diseases because changes in production structure influence disease transmission patterns. In 2004 and 2005, Thailand was subject to two highly pathogenic avian influenza epidemic waves and large surveys were conducted of the poultry sector, providing detailed spatial data on various poultry types. This study analysed these data with the aim of establishing the distributions of extensive and intensiv...

  15. Two-stage stochastic day-ahead optimal resource scheduling in a distribution network with intensive use of distributed energy resources

    DEFF Research Database (Denmark)

    Sousa, Tiago; Ghazvini, Mohammad Ali Fotouhi; Morais, Hugo

    2015-01-01

    The integration of renewable sources and electric vehicles will introduce new uncertainties into optimal resource scheduling, namely at the distribution level. These uncertainties originate mainly from the power generated by renewable sources and from the electric vehicles' charging requirements... This paper proposes a two-stage stochastic programming approach to solve the day-ahead optimal resource scheduling problem. The case study considers a 33-bus distribution network with 66 distributed generation units and 1000 electric vehicles.

  16. Skinning of argon clusters by Coulomb explosion induced with an intense femtosecond laser pulse

    International Nuclear Information System (INIS)

    Sakabe, S.; Shirai, K.; Hashida, M.; Shimizu, S.; Masuno, S.

    2006-01-01

    The energy distributions of ions emitted from argon clusters Coulomb-exploded with an intense femtosecond laser at an intensity of 10^17 W/cm^2 have been studied experimentally. The power m in the ion energy distribution (dN/dE ∼ E^m) is expected to be 1/2 for spherical ion clusters, but it in fact falls below 1/2 as the laser intensity is decreased. This reduction can be well interpreted as resulting from the instantaneous ionization of the surface of the cluster. The validity of this interpretation was confirmed by experiments with double-pulse irradiation. A cluster irradiated by the first pulse survives as a skinned cluster, and the remaining core part is Coulomb exploded by the second pulse. It is shown that a cluster can be skinned by an intense short laser pulse, and the laser-intensity dependence of the skinned-layer thickness can be reasonably explained by the laser-induced space-charge field created in the cluster.

  17. Optimal margin and edge-enhanced intensity maps in the presence of motion and uncertainty

    International Nuclear Information System (INIS)

    Chan, Timothy C Y; Tsitsiklis, John N; Bortfeld, Thomas

    2010-01-01

    In radiation therapy, intensity maps involving margins have long been used to counteract the effects of dose blurring arising from motion. More recently, intensity maps with increased intensity near the edge of the tumour (edge enhancements) have been studied to evaluate their ability to offset similar effects that affect tumour coverage. In this paper, we present a mathematical methodology to derive margin and edge-enhanced intensity maps that aim to provide tumour coverage while delivering minimum total dose. We show that if the tumour is at most about twice as large as the standard deviation of the blurring distribution, the optimal intensity map is a pure scaling increase of the static intensity map without any margins or edge enhancements. Otherwise, if the tumour size is roughly twice (or more) the standard deviation of motion, then margins and edge enhancements are preferred, and we present formulae to calculate the exact dimensions of these intensity maps. Furthermore, we extend our analysis to include scenarios where the parameters of the motion distribution are not known with certainty, but rather can take any value in some range. In these cases, we derive a similar threshold to determine the structure of an optimal margin intensity map.
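    The threshold described above can be written as a simple decision rule: compare the tumour extent with roughly twice the standard deviation of the blurring (motion) distribution. The sketch below encodes only that qualitative rule; the exact margin and edge-enhancement dimensions come from formulae in the paper that are not reproduced here, and the numbers used are hypothetical.

      # Illustrative Python sketch: choosing the intensity-map structure from tumour size vs. motion sigma
      def intensity_map_strategy(tumour_width_mm, motion_sigma_mm):
          """Qualitative rule from the abstract: small tumours (<~ 2*sigma) favour pure scaling."""
          if tumour_width_mm <= 2.0 * motion_sigma_mm:
              return "scale the static intensity map (no margin, no edge enhancement)"
          return "add margins / edge enhancement (dimensions from the paper's formulae)"

      print(intensity_map_strategy(tumour_width_mm=8.0, motion_sigma_mm=5.0))
      print(intensity_map_strategy(tumour_width_mm=30.0, motion_sigma_mm=5.0))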

  18. High-yield production of biologically active recombinant protein in shake flask culture by combination of enzyme-based glucose delivery and increased oxygen transfer

    Directory of Open Access Journals (Sweden)

    Ukkonen Kaisa

    2011-12-01

    Full Text Available Abstract This report describes the combined use of an enzyme-based glucose release system (EnBase® and high-aeration shake flask (Ultra Yield Flask™. The benefit of this combination is demonstrated by over 100-fold improvement in the active yield of recombinant alcohol dehydrogenase expressed in E. coli. Compared to Terrific Broth and ZYM-5052 autoinduction medium, the EnBase system improved yield mainly through increased productivity per cell. Four-fold increase in oxygen transfer by the Ultra Yield Flask contributed to higher cell density with EnBase but not with the other tested media, and consequently the product yield per ml of EnBase culture was further improved.

  19. Emission spectrochemical determination of boron in steels with pulse height distribution analyzer

    International Nuclear Information System (INIS)

    Ito, Minao; Sato, Shoki; Fushida, Hiroshi; Narita, Masanao

    1983-01-01

    The method for rapid determination of total, acid-soluble and insoluble boron was established by using an emission spectrochemical apparatus equipped with a pulse height distribution analyzer. With the analyzer, the emission intensity can be expressed as intensities at different levels of the pulse height distribution. It was made clear that soluble and insoluble boron contribute to each intensity level to a different degree, and that this degree of contribution varies with pre-spark. Accurate determination of boron therefore requires that the contribution be corrected by using two intensities whose contribution degrees differ. With this two-intensity method it was found that total and soluble boron corresponded well to the 50 % intensities at zero pre-spark and at 2000 pre-spark, and that insoluble boron corresponded well to the 70 % intensity at zero pre-spark and the 50 % intensity at 2000 pre-spark. (author)
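
    A hedged sketch of the two-intensity correction described above: two intensities with different (calibrated) contribution coefficients give two linear equations in the soluble and insoluble fractions. The coefficient and intensity values below are placeholders, not the paper's calibration.

        import numpy as np

        # I1, I2: intensities read at two (pulse-height level, pre-spark) settings;
        # a*, b*: contribution coefficients of soluble / insoluble boron (illustrative).
        a1, b1 = 0.9, 0.3
        a2, b2 = 0.4, 0.8
        I1, I2 = 1.20, 0.95

        A = np.array([[a1, b1], [a2, b2]])
        soluble, insoluble = np.linalg.solve(A, np.array([I1, I2]))
        total = soluble + insoluble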

  20. Data intensive high energy physics analysis in a distributed cloud

    International Nuclear Information System (INIS)

    Charbonneau, A; Impey, R; Podaima, W; Agarwal, A; Anderson, M; Armstrong, P; Fransham, K; Gable, I; Harris, D; Leavett-Brown, C; Paterson, M; Sobie, R J; Vliet, M

    2012-01-01

    We show that distributed Infrastructure-as-a-Service (IaaS) compute clouds can be effectively used for the analysis of high energy physics data. We have designed a distributed cloud system that works with any application using large input data sets requiring a high throughput computing environment. The system uses IaaS-enabled science and commercial clusters in Canada and the United States. We describe the process in which a user prepares an analysis virtual machine (VM) and submits batch jobs to a central scheduler. The system boots the user-specific VM on one of the IaaS clouds, runs the jobs and returns the output to the user. The user application accesses a central database for calibration data during the execution of the application. Similarly, the data is located in a central location and streamed by the running application. The system can easily run one hundred simultaneous jobs in an efficient manner and should scale to many hundreds and possibly thousands of user jobs.

  1. Data intensive high energy physics analysis in a distributed cloud

    Science.gov (United States)

    Charbonneau, A.; Agarwal, A.; Anderson, M.; Armstrong, P.; Fransham, K.; Gable, I.; Harris, D.; Impey, R.; Leavett-Brown, C.; Paterson, M.; Podaima, W.; Sobie, R. J.; Vliet, M.

    2012-02-01

    We show that distributed Infrastructure-as-a-Service (IaaS) compute clouds can be effectively used for the analysis of high energy physics data. We have designed a distributed cloud system that works with any application using large input data sets requiring a high throughput computing environment. The system uses IaaS-enabled science and commercial clusters in Canada and the United States. We describe the process in which a user prepares an analysis virtual machine (VM) and submits batch jobs to a central scheduler. The system boots the user-specific VM on one of the IaaS clouds, runs the jobs and returns the output to the user. The user application accesses a central database for calibration data during the execution of the application. Similarly, the data is located in a central location and streamed by the running application. The system can easily run one hundred simultaneous jobs in an efficient manner and should scale to many hundreds and possibly thousands of user jobs.

  2. Wavelength dependence of momentum-space images of low-energy electrons generated by short intense laser pulses at high intensities

    International Nuclear Information System (INIS)

    Maharjan, C M; Alnaser, A S; Litvinyuk, I; Ranitovic, P; Cocke, C L

    2006-01-01

    We have measured momentum-space images of low-energy electrons generated by the interaction of short intense laser pulses with argon atoms at high intensities. We have done this over a wavelength range from 400 to 800 nm. The spectra show considerable structure in both the energy and angular distributions of the electrons. Some, but not all, energy features can be identified as multi-photon resonances. The angular structure shows a regularity which transcends the resonant structure and may be due instead to diffraction. The complexity of the results defies easy model-dependent interpretations and invites full solutions to Schroedinger's equation for these systems

  3. Analysis and comparison model for measuring tropospheric scintillation intensity for Ku-band frequency in Malaysia

    Directory of Open Access Journals (Sweden)

    Mandeep JS

    2011-06-01

    Full Text Available This study has been based on understanding local propagation signal data distribution characteristics and identifying and predicting the overall impact of significant attenuating factors regarding the propagation path, such as impaired propagation for a signal being transmitted. Predicting propagation impairment is important for accurate link budgeting, thereby leading to better communication network system design. This study thus used one year of sample data concerning beacon satellite operation in Malaysia, from April 2008 to April 2009. Data concerning 12 GHz frequency (Ku-band) and 40° elevation angle were collected and analysed, obtaining the average signal amplitude value and the standard deviation σ (normally measured in dB) to obtain the long-term scintillation intensity distribution. This analysis showed that scintillation intensity followed a Gaussian distribution for the long-term data. A prediction model was then selected based on the above; the Karasawa, ITU-R, Van de Kamp and Otung models were compared to obtain the best prediction model performance for the selected data regarding specific meteorological conditions. This study showed that the Karasawa model had the best performance for predicting scintillation intensity for the selected data.
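
    A minimal sketch of how a long-term scintillation intensity is commonly extracted from such beacon recordings (the detrending window and names are assumptions, not the paper's processing chain):

        import numpy as np

        def scintillation_sigma_db(signal_db, window=600):
            # Remove the slow (clear-air/rain) trend with a moving average, keep the
            # fast fluctuations, and report their standard deviation in dB.
            kernel = np.ones(window) / window
            trend = np.convolve(signal_db, kernel, mode="same")
            return np.std(signal_db - trend)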

  4. Temperature distributions of a conductively heated filament

    International Nuclear Information System (INIS)

    Tamura, Koji; Ohba, Hironori; Shibata, Takemasa

    1999-07-01

    Temperature distributions of a heated filament were measured. A W-Re(5%) filament (0.25 mm in diameter, 24.7 mm in length) was conductively heated by currents between 5 A and 7 A with a DC power supply, and the surface of the filament was imaged with a charge coupled device (CCD) camera through a monochromatic filter. The spectral radiation intensity in the filament center region was almost uniform. Since the temperature distribution there was also uniform and the energy loss by thermal conduction was negligible, the temperature in this region was determined from the energy balance between applied power and radiation loss. The temperature distribution along the filament was then determined from Planck's law of radiation, using the spectral radiation intensity ratio over the filament surface with the obtained temperature as a reference. It was found that the temperature distribution of a filament is easily measured by this method. (author)
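
    A hedged sketch of the two steps described above (the emissivity, filter wavelength and the Wien-approximation ratio are illustrative assumptions, not the values or exact relations used in the report):

        import numpy as np

        SIGMA = 5.670374419e-8                   # Stefan-Boltzmann constant, W m^-2 K^-4
        H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

        def reference_temperature(power_w, area_m2, emissivity=0.3):
            # Energy balance in the uniform center region: P = eps * sigma * A * T^4
            # (thermal conduction neglected, as stated in the abstract).
            return (power_w / (emissivity * SIGMA * area_m2)) ** 0.25

        def temperature_from_ratio(intensity_ratio, t_ref, wavelength=650e-9):
            # Wien-approximation ratio of spectral radiances at the filter wavelength:
            # I/I_ref ~ exp[(hc/(lambda k))(1/T_ref - 1/T)], solved for T.
            a = H * C / (wavelength * KB)
            return 1.0 / (1.0 / t_ref - np.log(intensity_ratio) / a)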

  5. Observations of a free-energy source for intense electrostatic waves. [in upper atmosphere near upper hybrid resonance frequency

    Science.gov (United States)

    Kurth, W. S.; Frank, L. A.; Gurnett, D. A.; Burek, B. G.; Ashour-Abdalla, M.

    1980-01-01

    Significant progress has been made in understanding intense electrostatic waves near the upper hybrid resonance frequency in terms of the theory of multiharmonic cyclotron emission using a classical loss-cone distribution function as a model. Recent observations by Hawkeye 1 and GEOS 1 have verified the existence of loss-cone distributions in association with the intense electrostatic wave events; however, other observations by Hawkeye and ISEE have indicated that loss cones are not always observable during the wave events, and in fact other forms of free energy may also be responsible for the instability. Now, for the first time, a positively sloped feature in the perpendicular distribution function has been uniquely identified with intense electrostatic wave activity. Correspondingly, we suggest that the theory is flexible under substantial modifications of the model distribution function.

  6. A Model-Based Approach for Joint Analysis of Pain Intensity and Opioid Consumption in Postoperative Pain

    DEFF Research Database (Denmark)

    Juul, Rasmus V; Knøsgaard, Katrine R; Olesen, Anne E

    2016-01-01

    Joint analysis of pain intensity and opioid consumption is encouraged in trials of postoperative pain. However, previous approaches have not appropriately addressed the complexity of their interrelation in time. In this study, we applied a non-linear mixed effects model to simultaneously study pain...... intensity and opioid consumption in a 4-h postoperative period for 44 patients undergoing percutaneous kidney stone surgery. Analysis was based on 748 Numerical Rating Scale (NRS) scores of pain intensity and 51 observed morphine and oxycodone dosing events. A joint model was developed to describe...... the recurrent pattern of four key phases determining the development of pain intensity and opioid consumption in time; (A) Distribution of pain intensity scores which followed a truncated Poisson distribution with time-dependent mean score ranging from 0.93 to 2.45; (B) Probability of transition to threshold...
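
    As a hedged illustration of model component (A), a truncated Poisson for NRS scores might look as follows (the truncation at the top of the 0-10 NRS range is an assumption; the time-dependent mean is simply passed in):

        import numpy as np
        from scipy.stats import poisson

        def truncated_poisson_pmf(k, mean, k_max=10):
            # Poisson distribution truncated to the score range 0..k_max and renormalized;
            # in the study the mean varied with time between 0.93 and 2.45.
            support = np.arange(k_max + 1)
            return poisson.pmf(k, mean) / poisson.pmf(support, mean).sum()

        probs = [truncated_poisson_pmf(k, mean=2.45) for k in range(11)]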

  7. Incidence and distribution of transplantable organs from donors after circulatory determination of death in U.S. intensive care units.

    Science.gov (United States)

    Halpern, Scott D; Hasz, Richard D; Abt, Peter L

    2013-04-01

    All U.S. acute care hospitals must maintain protocols for recovering organs from donors after circulatory determination of death (DCDD), but the numbers, types, and whereabouts of available organs are unknown. To assess the maximal potential supply and distribution of DCDD organs in U.S. intensive care units, we conducted a population-based cohort study among a randomly selected sample of 50 acute care hospitals in the highest-volume donor service area in the United States. We identified all potentially eligible donors dying within 90 minutes of the withdrawal of life-sustaining therapy from July 1, 2008 to June 30, 2009. Using prespecified criteria, potential donors were categorized as optimal, suboptimal, or ineligible to donate their lungs, kidneys, pancreas, or liver. If only optimal DCDD organs were used, the deceased donor supplies of these organs could increase by up to 22.7, 8.9, 7.4, and 3.3%, respectively. If optimal and suboptimal DCDD organs were used, the corresponding supply increases could be up to 50.0, 19.7, 18.5, and 10.9%. Three-quarters of DCDD organs could be recovered from the 17.2% of hospitals with the highest annual donor volumes, typically those with trauma centers and more than 20 intensive care unit beds. Universal identification and referral of DCDD could increase the supply of transplantable lungs by up to one-half, and would not increase any other organ supply by more than one-fifth. The marked clustering of DCDD among a small number of identifiable hospitals could guide targeted interventions to improve DCDD identification, referral, and management.

  8. Effect of Machine Smoking Intensity and Filter Ventilation Level on Gas-Phase Temperature Distribution Inside a Burning Cigarette

    Directory of Open Access Journals (Sweden)

    Li Bin

    2015-01-01

    Full Text Available Accurate measurements of cigarette coal temperature are essential to understand the thermophysical and thermo-chemical processes in a burning cigarette. The last systematic studies of cigarette burning temperature measurements were conducted in the mid-1970s. Contemporary cigarettes have evolved in design features and multiple standard machine-smoking regimes have also become available, hence there is a need to re-examine cigarette combustion. In this work, we performed systematic measurements of the gas-phase temperature of burning cigarettes using an improved fine thermocouple technique. The effects of machine-smoking parameters (puff volume and puff duration) and filter ventilation levels were studied with high spatial and time resolutions during single puffs. The experimental results were presented in a number of different ways to highlight the dynamic and complex thermal processes inside a burning coal. A mathematical distribution equation was used to fit the experimental temperature data. Extracting and plotting the distribution parameters against puffing time revealed complex temperature profiles of the coal volume as a function of puffing intensity or filter ventilation level. By dividing the coal volume prior to puffing into three temperature ranges (low-temperature from 200 to 400 °C, medium-temperature from 400 to 600 °C, and high-temperature volume above 600 °C) and by following their development at different smoking regimes, useful mechanistic details were obtained. Finally, direct visualisation of the gas-phase temperature through detailed temperature and temperature gradient contour maps provided further insights into the complex thermo-physics of the burning coal. [Beitr. Tabakforsch. Int. 26 (2014) 191-203]

  9. X-ray polarization measurements at relativistic laser intensities

    International Nuclear Information System (INIS)

    Beiersdorfer, P.; Shepherd, R.; Mancini, R.C.

    2004-01-01

    An effort has been started to measure the short pulse laser absorption and energy partition at relativistic laser intensities up to 10²¹ W/cm². Plasma polarization spectroscopy is expected to play an important role in determining fast electron generation and measuring the electron distribution function. (author)

  10. Foundations of data-intensive science: Technology and practice for high throughput, widely distributed, data management and analysis systems

    Science.gov (United States)

    Johnston, William; Ernst, M.; Dart, E.; Tierney, B.

    2014-04-01

    Today's large-scale science projects involve world-wide collaborations that depend on moving massive amounts of data from an instrument to potentially thousands of computing and storage systems at hundreds of collaborating institutions to accomplish their science. This is true for ATLAS and CMS at the LHC, and it is true for the climate sciences, Belle-II at the KEK collider, genome sciences, the SKA radio telescope, and ITER, the international fusion energy experiment. DOE's Office of Science has been collecting science discipline and instrument requirements for network-based data management and analysis for more than a decade. As a result, certain key issues are seen across essentially all science disciplines that rely on the network for significant data transfer, even if the data quantities are modest compared to projects like the LHC experiments. These issues are what this talk will address; to wit: 1. Optical signal transport advances enabling 100 Gb/s circuits that span the globe on optical fiber, with each fiber carrying 100 such channels; 2. Network router and switch requirements to support high-speed international data transfer; 3. Data transport (TCP is still the norm) requirements to support high-speed international data transfer (e.g. error-free transmission); 4. Network monitoring and testing techniques and infrastructure to maintain the required error-free operation of the many R&E networks involved in international collaborations; 5. Operating system evolution to support very high-speed network I/O; 6. New network architectures and services in the LAN (campus) and WAN networks to support data-intensive science; 7. Data movement and management techniques and software that can maximize the throughput on the network connections between distributed data handling systems; and 8. New approaches to widely distributed workflow systems that can support the data movement and analysis required by the science. All of these areas must be addressed to enable large

  11. From Intensity Profile to Surface Normal: Photometric Stereo for Unknown Light Sources and Isotropic Reflectances.

    Science.gov (United States)

    Lu, Feng; Matsushita, Yasuyuki; Sato, Imari; Okabe, Takahiro; Sato, Yoichi

    2015-10-01

    We propose an uncalibrated photometric stereo method that works with general and unknown isotropic reflectances. Our method uses a pixel intensity profile, which is a sequence of radiance intensities recorded at a pixel under unknown varying directional illumination. We show that for general isotropic materials and uniformly distributed light directions, the geodesic distance between intensity profiles is linearly related to the angular difference of their corresponding surface normals, and that the intensity distribution of the intensity profile reveals reflectance properties. Based on these observations, we develop two methods for surface normal estimation; one for a general setting that uses only the recorded intensity profiles, the other for the case where a BRDF database is available while the exact BRDF of the target scene is still unknown. Quantitative and qualitative evaluations are conducted using both synthetic and real-world scenes, which show state-of-the-art accuracy of less than 10 degrees without using reference data and 5 degrees with reference data for all 100 materials in the MERL database.

  12. Effective charge model in the theory of infrared intensities and its application for study of charge distribution in the molecules of organometallic compounds

    International Nuclear Information System (INIS)

    Aleksanyan, V.T.; Samvelyan, S.Kh.

    1984-01-01

    General principles of constructing the parametric theory of IR spectrum intensities of polyatomic molecules are outlined. The development of the effective charges model in this theory is considered and the mathematical formalism of the first approximation of the method of effective atom charges is described in detail. The results of calculations of charge distribution in Mo(CO)₆, W(CO)₆, Cp₂V, Cp₂Ru and others (Cp = cyclopentadienyl), performed in the framework of the outlined scheme, are presented. It is shown that in the investigated carbonyls the effective charge on the oxygen and metal atoms is negative and on the carbon atom positive. In dicyclopentadienyl complexes the effective charge on the metal atom is positive and does not exceed 0.6e; charge values on hydrogen and carbon atoms do not exceed 0.10-0.15e. The notions of ''electrovalence'' of the coordination bond and charge distribution in the case of metallocenes are not correlated

  13. The Puerto Rico Seismic Network Broadcast System: A user friendly GUI to broadcast earthquake messages, to generate shakemaps and to update catalogues

    Science.gov (United States)

    Velez, J.; Huerfano, V.; von Hillebrandt, C.

    2007-12-01

    The Puerto Rico Seismic Network (PRSN) has historically provided locations and magnitudes for earthquakes in the Puerto Rico and Virgin Islands (PRVI) region. PRSN is the reporting authority for the region bounded by latitudes 17.0N to 20.0N, and longitudes 63.5W to 69.0W. The main objective of the PRSN is to record, process, analyze, provide information on and research local, regional and teleseismic earthquakes, providing high quality data and information to be able to respond to the needs of the emergency management, academic and research communities, and the general public. The PRSN runs Earthworm software (Johnson et al, 1995) to acquire and write waveforms to disk for permanent archival. Automatic locations and alerts are generated for events in Puerto Rico, the Intra America Seas, and the Atlantic by the EarlyBird system (Whitmore and Sokolowski, 2002), which monitors PRSN stations as well as some 40 additional stations run by networks operating in North, Central and South America and other sites in the Caribbean. PRDANIS (Puerto Rico Data Analysis and Information System) software, developed by PRSN, supports manual locations and analyst review of automatic locations of events within the PRSN area of responsibility (AOR), using all the broadband, strong-motion and short-period waveforms. Rapidly available information regarding the geographic distribution of ground shaking in relation to the population and infrastructure at risk can assist emergency response communities in efficient and optimized allocation of resources following a large earthquake. The ShakeMap system developed by the USGS provides near real-time maps of instrumental ground motions and shaking intensity and has proven effective in rapid assessment of the extent of shaking and potential damage after significant earthquakes (Wald, 2004). In Northern and Southern California, the Pacific Northwest, and the states of Utah and Nevada, ShakeMaps are used for emergency planning and response, loss

  14. Latter-day Mother Irelands: The Role of Women in Michael Collins and The Wind that Shakes the Barley

    Directory of Open Access Journals (Sweden)

    Pilar Villar-Argáiz

    2007-03-01

    Full Text Available Despite the experimental and subversive work of Irish feminist filmmakers such as Pat Murphy and Margo Harkin in the 1980s, as Gerardine Meaney has contended, “the image of woman as Ireland, Ireland as woman, remains powerful and pervasive in the new Irish cinema” (1998: 250). The cinematic convention of representing Ireland through female characters becomes particularly relevant in two recent Irish historical films: Michael Collins (1996), directed and written by Irish Neil Jordan, and The Wind that Shakes the Barley (2006), written by Scottish Paul Laverty and directed by English Ken Loach. In their dealing with themes such as military occupation, colonisation and the heated debate about the Treaty, both films maintain the nationalist rhetoric that represents Ireland as a woman/mother in a direct manner. Over the course of this essay, I shall try to chart the implications of both films’ representations of women, with a view to demonstrating how, even at present, the trope of Mother Ireland continues to be deep in the national unconscious.

  15. [Effects of precipitation intensity on soil organic carbon fractions and their distribution under subtropical forests of South China].

    Science.gov (United States)

    Chen, Xiao-mei; Liu, Ju-xiu; Deng, Qi; Chu, Guo-wei; Zhou, Guo-yi; Zhang, De-qiang

    2010-05-01

    From December 2006 to June 2008, a field experiment was conducted to study the effects of natural precipitation, doubled precipitation, and no precipitation on the soil organic carbon fractions and their distribution under a successional series of monsoon evergreen broad-leaf forest, pine and broad-leaf mixed forest, and pine forest in Dinghushan Mountain of Southern China. Different precipitation treatments had no significant effects on the total organic carbon (TOC) concentration in the same soil layer under the same forest type (P > 0.05). In the no-precipitation treatment, particulate organic carbon (POC) and light fraction organic carbon (LFOC) were mainly accumulated in the surface soil layer (0-10 cm); but in the natural and doubled precipitation treatments, the two fractions were infiltrated to deeper soil layers. Under pine forest, soil readily oxidizable organic carbon (ROC) was significantly higher in the no-precipitation treatment than in the natural and doubled precipitation treatments (P < 0.05) ... organic carbon storage. Precipitation intensity had little effect on TOC, but had greater effects on the labile components POC, ROC, and LFOC.

  16. Demonstration of two-electron (shake-up) photoionization and population inversions in the visible and VUV

    International Nuclear Information System (INIS)

    Silfvast, W.T.; Wood, O.R. II; Al-Salameh, D.Y.

    1986-01-01

    The two-electron (shake-up) photoionization process has been shown to be an effective mechanism for producing large population inversions in He⁺ with gain at 164 nm and in Ar⁺ with gain at 428 and 477 nm, and for observing the first autoionizing states in Cd⁺. Such a mechanism was recently proposed as an excitation mechanism for a VUV laser in lithium. In each species the rapid excitation and detection, using broadband emission from a 30-mJ, 100-ps duration laser-produced plasma and a detection system with subnanosecond time resolution, were essential in observing these effects. In He, gains of up to 0.8 cm⁻¹ for durations of 2-4 ns at 164.0 nm on the hydrogen-like (n = 3-2) transition in He⁺ were measured by comparing the plasma emission from a well-defined volume with and without the presence of a mirror of known reflectivity. The n = 3 upper laser level is pumped not only directly via two-electron photoionization from the neutral ground state but also indirectly (in times of the order of 1-2 ns) via electron collisions from photoionization-pumped higher-lying levels. The decay rate of the photoionization-pumped, radiation-trapped lower laser level is increased by a unique process involving absorption of radiation via photoionization of ground state neutral helium atoms

  17. SHAKING TABLE TEST AND EFFECTIVE STRESS ANALYSIS ON SEISMIC PERFORMANCE WITH SEISMIC ISOLATION RUBBER TO THE INTERMEDIATE PART OF PILE FOUNDATION IN LIQUEFACTION

    Science.gov (United States)

    Uno, Kunihiko; Otsuka, Hisanori; Mitou, Masaaki

    Pile foundations are heavily damaged during an earthquake at the boundary between ground types, liquefied and non-liquefied ground, and there is a possibility of collapse of the piles. In this study, we conduct a shaking table test and an effective stress analysis of the influence of soil liquefaction and the seismic inertial force exerted on the pile foundation. When the intermediate part of the pile, located at this boundary, is subjected to section forces, these forces in certain instances become larger than those at the pile head. Further, we develop a seismic resistance method for a pile foundation in liquefaction using seismic isolation rubber, and it is shown that the middle-part seismic isolation system is very effective.

  18. Combining multiple earthquake models in real time for earthquake early warning

    Science.gov (United States)

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
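
    A minimal sketch of one way such a combination can be realized, a precision-weighted Gaussian update of the forecast shaking at a single site (an illustration only, not the article's full Bayesian framework or notation):

        import numpy as np

        def combine_forecasts(means, sigmas):
            # Combine independent Gaussian shaking forecasts (e.g. in log ground motion)
            # by precision weighting; returns the combined mean and standard deviation.
            w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
            mu = np.sum(w * np.asarray(means, dtype=float)) / np.sum(w)
            return mu, np.sqrt(1.0 / np.sum(w))

        # e.g. a point-source and a finite-fault algorithm predicting log10(PGA):
        mu, sigma = combine_forecasts(means=[-0.8, -0.6], sigmas=[0.35, 0.25])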

  19. Unusual patch-matrix organization in the retrosplenial cortex of the reeler mouse and Shaking rat Kawasaki.

    Science.gov (United States)

    Ichinohe, Noritaka; Knight, Adrian; Ogawa, Masaharu; Ohshima, Toshio; Mikoshiba, Katsuhiko; Yoshihara, Yoshihiro; Terashima, Toshio; Rockland, Kathleen S

    2008-05-01

    The rat granular retrosplenial cortex (GRS) is a simplified cortex, with distinct stratification and, in the uppermost layers, distinct modularity. Thalamic and cortical inputs are segregated by layers and in layer 1 colocalize, respectively, with apical dendritic bundles originating from neurons in layers 2 or 5. To further investigate this organization, we turned to reelin-deficient reeler mouse and Shaking rat Kawasaki. We found that the disrupted lamination, evident in Nissl stains in these rodents, is in fact a patch-matrix mosaic of segregated afferents and dendrites. Patches consist of thalamocortical connections, visualized by vesicular glutamate transporter 2 (VGluT2) or AChE. The surrounding matrix consists of corticocortical terminations, visualized by VGluT1 or zinc. Dendrites concentrate in the matrix or patches, depending on whether they are OCAM positive (matrix) or negative (patches). In wild-type rodents and, presumably, mutants, OCAM(+) structures originate from layer 5 neurons. By double labeling for dendrites (filled by Lucifer yellow in fixed slice) and OCAM immunofluorescence, we ascertained 2 populations in reeler: dendritic branches either preferred (putative layer 5 neurons) or avoided (putative supragranular neurons) the OCAM(+) matrix. We conclude that input-target relationships are largely preserved in the mutant GRS and that dendrite-dendrite interactions involving OCAM influence the formation of the mosaic configuration.

  20. Study on seismic design margin based upon inelastic shaking test of the piping and support system

    International Nuclear Information System (INIS)

    Ishiguro, Takami; Eto, Kazutoshi; Ikeda, Kazutoyo; Yoshii, Toshiaki; Kondo, Masami; Tai, Koichi

    2009-01-01

    In Japan, according to the revised Regulatory Guide for Aseismic Design of Nuclear Power Reactor Facilities of September 2006, the criteria for design basis earthquakes of nuclear power reactor facilities have become more severe. Evaluating the seismic design margin has therefore taken on great importance and has been discussed in depth. Since seismic safety is one of the major key issues of nuclear power plant safety, various durability test reports for piping in ultimate conditions have demonstrated that nuclear piping systems possess large safety margins. Though knowledge of the safety margin has been accumulated from these reports, there still remain some technical uncertainties about the phenomena when both piping and support structures show inelastic behavior at extremely high seismic excitation levels. In order to determine the influence of inelastic behavior of the support structures on the response of the whole piping system when both piping and support structures show inelastic behavior, we carried out seismic proving tests and conducted simulation analyses of the piping system, focusing on the effect of the inelastic behavior of the support on the whole piping system response. This paper introduces the major results of the seismic shaking tests of the piping and support system and the simulation analyses of these tests. (author)

  1. Intensity modulated radiotherapy (IMRT) with compensators

    International Nuclear Information System (INIS)

    Salz, H.; Wiezorek, T.; Scheithauer, M.; Kleen, W.; Schwedas, M.; Wendt, T.G.

    2002-01-01

    The irradiation with intensity-modulated fields is possible with static as well as dynamic methods. In our university hospital, the intensity-modulated radiotherapy (IMRT) with compensators was prepared and used for the first time for patient irradiation in July 2001. The compensators consist of a mixture of tin granulate and wax, which is filled in a milled negative mould. The treatment planning is performed with Helax-TMS (MDS Nordion). An additional software is used for editing the modulation matrix ('Modifix'). Before irradiation of the first patient, extensive measurements have been carried out in terms of quality assurance of treatment planning and production of compensators. The results of the verification measurements have shown that IMRT with compensators possesses high spatial and dosimetric exactness. The calculated dose distributions are applied correctly. The accuracy of the calculated monitor units is normally better than 3%; in small volumes, further dosimetric inaccuracies between calculated and measured dose distributions are mostly less than 3%. Therefore, the compensators contribute to the achievement of high-level IMRT even when apparatuses without MLC are used. This paper describes the use of the IMRT with compensators, presents the limits of this technology, and discusses the first practical experiences. (orig.) [de

  2. Optimal design of planar slider-crank mechanism using teaching-learning-based optimization algorithm

    International Nuclear Information System (INIS)

    Chaudhary, Kailash; Chaudhary, Himanshu

    2015-01-01

    In this paper, a two-stage optimization technique is presented for optimum design of a planar slider-crank mechanism. The slider-crank mechanism needs to be dynamically balanced to reduce vibrations and noise in the engine and to improve the vehicle performance. For dynamic balancing, minimization of the shaking force and the shaking moment is achieved by finding the optimum mass distribution of the crank and connecting rod using the equimomental system of point-masses in the first stage of the optimization. In the second stage, their shapes are synthesized systematically by a closed parametric curve, i.e., a cubic B-spline curve, corresponding to the optimum inertial parameters found in the first stage. The multi-objective optimization problem to minimize both the shaking force and the shaking moment is solved using the Teaching-learning-based optimization algorithm (TLBO) and its computational performance is compared with the Genetic algorithm (GA).
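
    For reference, a set of point masses is equimomental with a link when it preserves the link's mass, centre of mass and inertia. A hedged sketch of that check for the planar case (function and argument names are illustrative, not taken from the paper):

        import numpy as np

        def is_equimomental(points, masses, m, com, inertia_about_com, tol=1e-9):
            # points: (x, y) locations of the point masses in the link frame;
            # com / inertia_about_com: the link's centre of mass and centroidal inertia.
            pts, ms = np.asarray(points, float), np.asarray(masses, float)
            same_mass = abs(ms.sum() - m) < tol
            same_com = np.allclose(ms @ pts, m * np.asarray(com, float), atol=tol)
            i_points = np.sum(ms * (pts ** 2).sum(axis=1))
            i_link = inertia_about_com + m * float(np.dot(com, com))   # parallel-axis term
            return same_mass and same_com and abs(i_points - i_link) < tol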

  3. Optimal design of planar slider-crank mechanism using teaching-learning-based optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chaudhary, Kailash; Chaudhary, Himanshu [Malaviya National Institute of Technology, Jaipur (India)]

    2015-11-15

    In this paper, a two-stage optimization technique is presented for optimum design of a planar slider-crank mechanism. The slider-crank mechanism needs to be dynamically balanced to reduce vibrations and noise in the engine and to improve the vehicle performance. For dynamic balancing, minimization of the shaking force and the shaking moment is achieved by finding the optimum mass distribution of the crank and connecting rod using the equimomental system of point-masses in the first stage of the optimization. In the second stage, their shapes are synthesized systematically by a closed parametric curve, i.e., a cubic B-spline curve, corresponding to the optimum inertial parameters found in the first stage. The multi-objective optimization problem to minimize both the shaking force and the shaking moment is solved using the Teaching-learning-based optimization algorithm (TLBO) and its computational performance is compared with the Genetic algorithm (GA).

  4. Bayesian Inference of Nonstationary Precipitation Intensity-Duration-Frequency Curves for Infrastructure Design

    Science.gov (United States)

    2016-03-01

    each IDF curve and subsequently used to force a calibrated and validated precipitation-runoff model. Probability-based, risk-informed hydrologic ... based means by which to develop local precipitation Intensity-Duration-Frequency (IDF) curves using historical rainfall time series data collected for

  5. Rapid estimation of the economic consequences of global earthquakes

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.

    2011-01-01

    to reduce this time gap to more rapidly and effectively mobilize response. We present here a procedure to rapidly and approximately ascertain the economic impact immediately following a large earthquake anywhere in the world. In principle, the approach presented is similar to the empirical fatality estimation methodology proposed and implemented by Jaiswal and others (2009). In order to estimate economic losses, we need an assessment of the economic exposure at various levels of shaking intensity. The economic value of all the physical assets exposed at different locations in a given area is generally not known and extremely difficult to compile at a global scale. In the absence of such a dataset, we first estimate the total Gross Domestic Product (GDP) exposed at each shaking intensity by multiplying the per-capita GDP of the country by the total population exposed at that shaking intensity level. We then scale the total GDP estimated at each intensity by an exposure correction factor, which is a multiplying factor to account for the disparity between wealth and/or economic assets to the annual GDP. The economic exposure obtained using this procedure is thus a proxy estimate for the economic value of the actual inventory that is exposed to the earthquake. The economic loss ratio, defined in terms of a country-specific lognormal cumulative distribution function of shaking intensity, is derived and calibrated against the losses from past earthquakes. This report describes the development of a country or region-specific economic loss ratio model using economic loss data available for global earthquakes from 1980 to 2007. The proposed model is a potential candidate for directly estimating economic losses within the currently-operating PAGER system. PAGER's other loss models use indirect methods that require substantially more data (such as building/asset inventories, vulnerabilities, and the asset values exposed at the time of earthquake) to implement on a global basis
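
    A hedged sketch of the exposure-times-loss-ratio calculation described above (the parameter names, the intensity discretization and the placeholder values are illustrative, not PAGER's calibrated coefficients):

        import numpy as np
        from scipy.stats import norm

        def economic_loss(pop_by_mmi, per_capita_gdp, alpha, theta, beta):
            # pop_by_mmi: {shaking intensity: exposed population};
            # alpha: exposure correction factor (wealth/assets relative to annual GDP);
            # loss ratio modelled as a lognormal CDF of intensity with parameters
            # theta (median) and beta (log standard deviation).
            loss = 0.0
            for mmi, pop in pop_by_mmi.items():
                exposure = alpha * per_capita_gdp * pop
                loss += norm.cdf(np.log(mmi / theta) / beta) * exposure
            return loss

        loss = economic_loss({6: 2.0e6, 7: 8.0e5, 8: 1.5e5}, per_capita_gdp=5.0e3,
                             alpha=3.0, theta=9.0, beta=0.2)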

  6. A tesselation-based model for intensity estimation and laser plasma interactions calculations in three dimensions

    Science.gov (United States)

    Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.

    2018-03-01

    A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tesselation-based estimator and a relaxation scheme are used to estimate the intensity distribution in plasma from geometrical optics rays. Comparisons with reference solutions show that this approach is well-suited to reproduce realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in hohlraums, which is of importance for determining the capsule implosion shape and risks of laser-plasma instabilities such as hot electron generation and backscatter in multi-beam configurations.

  7. A high-intensity plasma-sputter heavy negative ion source

    International Nuclear Information System (INIS)

    Alton, G.D.; Mori, Y.; Takagi, A.; Ueno, A.; Fukumoto, S.

    1989-01-01

    A multicusp magnetic field plasma surface ion source, normally used for H⁻ ion beam formation, has been modified for the generation of high-intensity, pulsed, heavy negative ion beams suitable for a variety of uses. To date, the source has been utilized to produce mA intensity pulsed beams of more than 24 species. A brief description of the source, and basic pulsed-mode operational data (e.g., intensity versus cesium oven temperature, sputter probe voltage, and discharge pressure), are given. In addition, illustrative examples of intensity versus time and the mass distributions of ion beams extracted from a number of samples, along with emittance data, are also presented. Preliminary results obtained during dc operation of the source under low discharge power conditions suggest that sources of this type may also be used to produce high-intensity (mA) dc beams. The results of these investigations are given, as well, and the technical issues that must be addressed for this mode of operation are discussed. 15 refs., 10 figs., 2 tabs

  8. h-Adaptive Mesh Generation using Electric Field Intensity Value as a Criterion (in Japanese)

    OpenAIRE

    Toyonaga, Kiyomi; Cingoski, Vlatko; Kaneda, Kazufumi; Yamashita, Hideo

    1994-01-01

    Fine mesh divisions are essential to obtain an accurate solution in two-dimensional electric field analysis, but generating suitable fine mesh divisions requires technical knowledge. In electric field problems, analysts are usually interested in the electric field intensity and its distribution. In order to obtain the electric field intensity with high accuracy, we have developed an adaptive mesh generator that uses the electric field intensity value as a criterion.
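
    A hedged sketch of such an h-adaptive loop (solve_field() and refine() stand in for the finite element solver and mesher of the actual system; their names and the marking fraction are assumptions):

        def adapt(mesh, solve_field, refine, frac=0.5, passes=4):
            # Repeatedly solve, mark the elements whose field intensity exceeds a
            # fraction of the current maximum, and h-refine the marked elements.
            for _ in range(passes):
                e_mag = solve_field(mesh)                      # {element: |E|}
                threshold = frac * max(e_mag.values())
                marked = [el for el, e in e_mag.items() if e >= threshold]
                mesh = refine(mesh, marked)
            return mesh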

  9. Cueing properties of the decrease of white noise intensity for avoidance conditioning in cats.

    Science.gov (United States)

    Zieliński, K

    1979-01-01

    In the main experiment two groups of 6 cats each were trained in active bar-pressing avoidance to a CS consisting of either a 10 dB or a 20 dB decrease of the background white noise of 70 dB intensity. The two groups did not differ in rapidity of learning; however, cats trained to the greater change in background noise performed avoidance responses with shorter latencies than did cats trained to the smaller change. Within-group comparisons of cumulative distributions of response latencies for consecutive Vincentized fifths of avoidance acquisition showed the greatest changes in the region of latencies longer than the median latency of instrumental responses. On the other hand, the effects of CS intensity found in between-group comparisons were located in the region of latencies shorter than the median latency of either group. Comparisons with data obtained in a complementary experiment employing an additional 17 cats showed that subjects trained to stimuli less intense than the background noise level were marked by an exceptionally low level of avoidance responding with latencies shorter than 1.1 s, lower than expected from the probability of intertrial responses for this period of time. Due to this property of stimuli less intense than the background, the distributions of response latencies were moved to the right; in effect, prefrontal lesions influenced a greater part of the latency distributions than in cats trained to stimuli more intense than the background.

  10. Determining the most suitable frequency and shaking time for olive harvesting by a pneumatic branch shaker

    Directory of Open Access Journals (Sweden)

    A Rezaei

    2016-09-01

    Full Text Available Introduction Olive (Olea europaea) includes about 20 species of small trees from the Oleaceae family. It should be noted that Iran holds only a small share of the world market for its olive products in spite of having high production potential: about 23 provinces of the country can produce olive. Mechanizing olive production and encouraging the development of the olive trade are therefore among the effective methods for developing this market. On the basis of the IOOC report, the production of olive oil in 2008-2009 in Iran and all over the world was 3 and 2866.5 thousand tons, respectively. Currently, harvesting of the olive crop in Iran is done by hand. The high cost of labour and the difficulty of providing the needed workers are considered the biggest problems in olive harvesting. When harvesting tall trees, the workers beat the branches with wooden sticks, which damages the fruit and decreases its quality. The harvesting method, which affects the quality and quantity of the final olive products, and the high cost of harvesting by hand are considered the two important factors in developing the mechanical harvesting of olive. For this purpose, mechanized harvesting of olive should be considered for producing olive conserve and olive oil and for decreasing harvesting costs. Considering the conducted studies on the one hand and the shortage of information resources in the country on the other, a research project was designed and performed with the following purposes: designing and fabricating a portable pneumatic branch shaking system, and determining the best frequency and oscillation duration for harvesting olive with the constructed system. Materials and Methods The branch shaking system is made of two general parts: (a) the branch-shaker driving unit, and (b) the portable vibration arm. For constructing the vibrating arm, two experiments on the "elasticity and inflection" of tree branches were

  11. High resolution measurement of earthquake impacts on rock slope stability and damage using pre- and post-earthquake terrestrial laser scans

    Science.gov (United States)

    Hutchinson, Lauren; Stead, Doug; Rosser, Nick

    2017-04-01

    Understanding the behaviour of rock slopes in response to earthquake shaking is instrumental in response and relief efforts following large earthquakes as well as to ongoing risk management in earthquake affected areas. Assessment of the effects of seismic shaking on rock slope kinematics requires detailed surveys of the pre- and post-earthquake condition of the slope; however, at present, there is a lack of high resolution monitoring data from before and after earthquakes to facilitate characterization of seismically induced slope damage and validate models used to back-analyze rock slope behaviour during and following earthquake shaking. Therefore, there is a need for additional research where pre- and post-earthquake monitoring data is available. This paper presents the results of a direct comparison between terrestrial laser scans (TLS) collected in 2014, the year prior to the 2015 earthquake sequence, with those collected 18 months after the earthquakes and two monsoon cycles. The two datasets were collected using Riegl VZ-1000 and VZ-4000 full waveform laser scanners with high resolution (c. 0.1 m point spacing as a minimum). The scans cover the full landslide affected slope from the toe to the crest. The slope is located in Sindhupalchok District, Central Nepal which experienced some of the highest co-seismic and post-seismic landslide intensities across Nepal due to the proximity to the epicenters (<20 km) of both of the main aftershocks on April 26, 2015 (M 6.7) and May 12, 2015 (M7.3). During the 2015 earthquakes and subsequent 2015 and 2016 monsoons, the slope experienced rockfall and debris flows which are evident in satellite imagery and field photographs. Fracturing of the rock mass associated with the seismic shaking is also evident at scales not accessible through satellite and field observations. The results of change detection between the TLS datasets with an emphasis on quantification of seismically-induced slope damage is presented. Patterns in the

  12. A theoretical study of the possibilities for localization of anomalous density distribution in rock by means of underground cosmic ray muon intensity measurements

    International Nuclear Information System (INIS)

    Jacobsson, L.; Joensson, G.; Kristiansson, K.; Malmqvist, L.

    1977-05-01

    The possibilities for in situ rock density determinations by means of sub-surface cosmic ray muon intensity measurements have been studied. The calculations are based on an hypothetical scintillation counter telescope intended for registration in a gallery. It is shown that fairly accurate density measurements are possible and that a certain spatial resolution can be achieved. The measurements are only influenced by the density distribution in the forward direction which can make the muon technique valuable in connection with gravity measurements. Different prospecting situations have been studied. It is found that in certain prospecting situations the accuracy needed for the indication of a massive ore body can be reached within an acceptable registration period. (Auth.)

  13. Intra-pulse transition between ion acceleration mechanisms in intense laser-foil interactions

    Energy Technology Data Exchange (ETDEWEB)

    Padda, H.; King, M.; Gray, R. J.; Powell, H. W.; Gonzalez-Izquierdo, B.; Wilson, R.; Dance, R. J.; MacLellan, D. A.; Butler, N. M. H.; Capdessus, R.; McKenna, P., E-mail: paul.mckenna@strath.ac.uk [SUPA Department of Physics, University of Strathclyde, Glasgow G4 0NG (United Kingdom); Stockhausen, L. C. [Centro de Laseres Pulsados (CLPU), Parque Cientifico, Calle del Adaja s/n. 37185 Villamayor, Salamanca (Spain); Carroll, D. C. [Central Laser Facility, STFC Rutherford Appleton Laboratory, Oxfordshire OX11 0QX (United Kingdom); Yuan, X. H. [Key Laboratory for Laser Plasmas (Ministry of Education) and Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240 (China); Collaborative Innovation Center of IFSA (CICIFSA), Shanghai Jiao Tong University, Shanghai 200240 (China); Borghesi, M. [Centre for Plasma Physics, Queens University Belfast, Belfast BT7 1NN (United Kingdom); Neely, D. [SUPA Department of Physics, University of Strathclyde, Glasgow G4 0NG (United Kingdom); Central Laser Facility, STFC Rutherford Appleton Laboratory, Oxfordshire OX11 0QX (United Kingdom)

    2016-06-15

    Multiple ion acceleration mechanisms can occur when an ultrathin foil is irradiated with an intense laser pulse, with the dominant mechanism changing over the course of the interaction. Measurement of the spatial-intensity distribution of the beam of energetic protons is used to investigate the transition from radiation pressure acceleration to transparency-driven processes. It is shown numerically that radiation pressure drives an increased expansion of the target ions within the spatial extent of the laser focal spot, which induces a radial deflection of relatively low energy sheath-accelerated protons to form an annular distribution. Through variation of the target foil thickness, the opening angle of the ring is shown to be correlated to the point in time transparency occurs during the interaction and is maximized when it occurs at the peak of the laser intensity profile. Corresponding experimental measurements of the ring size variation with target thickness exhibit the same trends and provide insight into the intra-pulse laser-plasma evolution.

  14. Wii Fit U intensity and enjoyment in adults.

    Science.gov (United States)

    Tripette, Julien; Murakami, Haruka; Ando, Takafumi; Kawakami, Ryoko; Tanaka, Noriko; Tanaka, Shigeho; Miyachi, Motohiko

    2014-08-26

    The Wii Fit series (Nintendo Inc., Japan) provides active video games (AVGs) to help adults to maintain a sufficient level of daily physical activity (PA). The second generation of home AVG consoles is now emerging with new game modalities (including a portable mini screen in the case of the new Wii U). The present study was performed to investigate the intensity and enjoyment of Wii Fit U games among adults. Metabolic equivalents (METs, i.e., intensity) of the Wii Fit U activities were evaluated using metabolic chambers in 16 sedentary adults (8 women and 8 men). A short version of the physical activity enjoyment scale was completed for each activity. Wii Fit U activities were distributed over a range from 2.2 ± 0.4 METs (Hula dance) to 4.7 ± 1.2 (Hip-hop dance). Seven activities were classified as light-intensity PA (<3 METs) ... game modality does not induce higher METs. Men exercised at higher intensities than women. There was no correlation between enjoyment and MET values in women or men. More and more moderate-intensity activities are available through video gaming, but the average intensity (3.2 ± 0.6 METs) is still low. Users should be aware that AVGs alone cannot fulfill the recommendations for PA, and the video games industry still must innovate further to enhance gaming intensity and make the tool more attractive to health and fitness professionals.

  15. China’s Energy Intensity, Determinants and Spatial Effects

    Directory of Open Access Journals (Sweden)

    Lei Jiang

    2016-06-01

    Full Text Available In the shadow of the energy crisis and environmental degradation, energy intensity is a hot topic in academic circles in China. The energy intensity distribution map of China indicates fairly large geographic disparities overall and local clustering in some areas, ascending from the southeast regions to the northwest provinces. Although energy intensity and its determinants vary from place to place, few studies have been made from the spatial perspective. Determinants of energy intensity and spatial spillover effects should be taken into consideration. Controlling for seven exogenous variables (per capita GDP, the share of the secondary sector, foreign direct investment, international trade, energy price, the share of coal, and the transport sector) and their spatial lags, we apply a spatial Durbin model to test for spatial spillover effects among energy intensity and the exogenous variables in a panel of 29 Chinese provinces over 1998 to 2014. We find that per capita GDP has an insignificant and negative direct and indirect effect, but has a significant and negative total effect on energy intensity. The share of the secondary sector and the share of coal are found to have significant and positive direct and indirect effects on energy intensity. Foreign Direct Investment (FDI) and Trade have significant and negative direct and indirect effects on energy intensity. The direct effect of energy price is found to be significantly positive while the indirect effect is negative. Only the direct effect of the Transport variable is significant and positive. The results of this study offer some theoretical evidence for differentiated, localized policy making related to reduction in energy intensity.
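
    For reference, the spatial Durbin specification underlying such an analysis is commonly written as (standard textbook form, not the paper's exact notation)

        \[
        y \;=\; \rho W y \;+\; X\beta \;+\; W X\theta \;+\; \varepsilon,
        \qquad \varepsilon \sim N(0,\,\sigma^{2} I),
        \]

    where W is the spatial weight matrix and \rho the spatial autoregressive coefficient; the direct and indirect (spillover) effects of regressor k are read from the diagonal and off-diagonal parts of (I - \rho W)^{-1}(I\beta_{k} + W\theta_{k}).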

  16. Standing Wave Field Distribution in Graded-Index Antireflection Coatings

    Directory of Open Access Journals (Sweden)

    Hongxiang Deng

    2018-01-01

    Full Text Available Standing wave field distributions in three classic types of graded-index antireflection coatings are studied. These graded-index antireflection coatings are designed at wavelengths from 200 nm to 1200 nm, which is the working wavelength range of high energy laser systems for inertial-fusion research. The standing wave field distributions in these coatings are obtained by numerical solution of the electromagnetic wave equation. We find that the standing wave field distributions in these three graded-index antireflection coatings are quite different. For the coating with a linear index distribution, the intensity of the standing wave field decreases periodically from surface to substrate with a narrow oscillation range, and the period is proportional to the incident wavelength. For the coating with an exponential index distribution, the intensity of the standing wave field decreases periodically from surface to substrate with a large oscillation range, and the period is also proportional to the incident wavelength. Finally, for the coating with a polynomial index, the intensity of the standing wave field falls off rapidly from surface to substrate without an obvious oscillation. We find that the intensity of the standing wave field at the interface between coating and substrate for the linear, exponential and polynomial index profiles is about 0.7, 0.9 and 0.7, respectively. Our results indicate that the distributions of the standing wave field in the linear index coating and the polynomial index coating are better than that in the exponential index coating for application in high energy laser systems. Moreover, we find that the transmittance of the linear index coating and the polynomial index coating is also better than that of the exponential index coating over the designed wavelength range. The present simulation results are useful for the design and application of graded-index antireflection coatings in high energy laser systems.
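
    A hedged sketch of how such standing-wave profiles can be computed: slice the graded profile into thin homogeneous sublayers and propagate the forward/backward plane-wave amplitudes from the substrate to the surface (normal incidence, non-magnetic media; the discretization and normalization details below are illustrative, not the paper's numerical scheme):

        import numpy as np

        def standing_wave_intensity(n_profile, thickness, n_sub, wavelength, n_inc=1.0, n_pts=2000):
            # z runs from the coating surface (0) to the coating-substrate interface.
            z = np.linspace(0.0, thickness, n_pts)
            n_z = n_profile(z / thickness)            # refractive index vs normalized depth
            dz = z[1] - z[0]
            k0 = 2 * np.pi / wavelength

            # Only a transmitted (forward) wave exists in the substrate: A = 1, B = 0.
            A, B = 1.0 + 0j, 0.0 + 0j
            n_prev = n_sub
            intensity = np.zeros(n_pts)
            for j in range(n_pts - 1, -1, -1):
                n_cur = n_z[j]
                # Interface conditions: continuity of E (A+B) and of H (proportional to n(A-B))
                A, B = (0.5 * (A * (1 + n_prev / n_cur) + B * (1 - n_prev / n_cur)),
                        0.5 * (A * (1 - n_prev / n_cur) + B * (1 + n_prev / n_cur)))
                # Re-reference the amplitudes one sublayer closer to the surface
                phase = np.exp(-1j * k0 * n_cur * dz)
                A, B = A * phase, B / phase
                intensity[j] = abs(A + B) ** 2
                n_prev = n_cur
            # Match to the incident medium to recover the incident amplitude and normalize
            A_inc = 0.5 * (A * (1 + n_prev / n_inc) + B * (1 - n_prev / n_inc))
            return z, intensity / abs(A_inc) ** 2

        # e.g. linear grading from n = 1.05 at the surface to a substrate index of 1.5:
        z, I = standing_wave_intensity(lambda u: 1.05 + 0.45 * u, 600e-9, 1.5, 532e-9)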

  17. Long-Range Coulomb Effect in Intense Laser-Driven Photoelectron Dynamics.

    Science.gov (United States)

    Quan, Wei; Hao, XiaoLei; Chen, YongJu; Yu, ShaoGang; Xu, SongPo; Wang, YanLan; Sun, RenPing; Lai, XuanYang; Wu, ChengYin; Gong, QiHuang; He, XianTu; Liu, XiaoJun; Chen, Jing

    2016-06-03

    In the strong-field atomic physics community, the long-range Coulomb interaction has long been overlooked, and its significant role in intense laser-driven photoelectron dynamics has eluded experimental observation. Here we report an experimental investigation of the effect of the long-range Coulomb potential on the dynamics of near-zero-momentum photoelectrons produced in the photoionization of noble gas atoms by intense midinfrared laser pulses. By exploring the dependence of the photoelectron distributions near zero momentum on laser intensity and wavelength, we unambiguously demonstrate that the long-range tail of the Coulomb potential (i.e., out to several hundred atomic units) plays an important role in determining the photoelectron dynamics after the pulse ends.

  18. Communication infrastructure and data management for operating smart distribution systems

    NARCIS (Netherlands)

    Brunner, C.; Buchholz, B.M.; Gelfand, A.; Kamphuis, I.G.; Naumann, A.

    2012-01-01

    The enhancement of distribution networks into smart grids is based on three pillars: • Energy management on distribution level, • Distribution system automation, • Involvement of the consumers into the market for electricity by smart metering All these functions require an intensive exchange of

  19. Intensity modulated radiation therapy: Analysis of patient specific quality control results, experience of Rene-Gauducheau Centre

    International Nuclear Information System (INIS)

    Chiavassa, S.; Brunet, G.; Gaudaire, S.; Munos-Llagostera, C.; Delpon, G.; Lisbona, A.

    2011-01-01

    Purpose. - Systematic verification of patient-specific intensity-modulated radiation treatments is usually performed with absolute and relative measurements. The results constitute a database which allows the identification of potential systematic errors. Material and methods. - We analyzed 1270 beams distributed over 232 treatment plans. Step-and-shoot intensity-modulated radiation treatments were performed with a Clinac (6 and 23 MV) and sliding-window intensity-modulated radiation treatments with a Novalis (6 MV). Results. - The distributions obtained show no systematic error and all the controls meet the specified tolerances. Conclusion. - These results allow us to reduce patient-specific controls for treatments performed under identical conditions (location, optimization and segmentation parameters of the treatment planning system, etc.). (authors)

  20. Stress intensity factor analyses of surface cracks in three-dimensional structures

    International Nuclear Information System (INIS)

    Miyazaki, Noriyuki; Shibata, Katsuyuki; Watanabe, Takayuki; Tagata, Kazunori.

    1983-11-01

    The stress intensity factor analyses of surface cracks in various three-dimensional structures were performed using the finite element computer program EPAS-J1. The results obtained by EPAS-J1 were compared with other finite element solutions or results obtained by the simplified estimation methods. Among the simplified estimation methods, the equations proposed by Newman and Raju give the distributions of the stress intensity factor along a crack front, which were compared with the result obtained by EPAS-J1. It was confirmed by comparing the results that EPAS-J1 gives reasonable stress intensity factors of surface cracks in three-dimensional structures. (author)

  1. Impact of anatomical changes on dose distribution of intensity-modulated radiotherapy for nasopharyngeal carcinoma

    International Nuclear Information System (INIS)

    Huang Shaomin; Deng Xiaowu; Zhao Chong; Han Fei; Gao Xingwang; Lu Taixiang; Wang Shi

    2010-01-01

    Objective: To observe the physical and anatomical changes in patients with nasopharyngeal carcinoma (NPC) during intensity-modulated radiotherapy (IMRT), using repeated CT images and a deformable registration technique, and to analyze their impact on the delivered dose distribution. Methods: Ten NPC patients were randomly selected from those who had received IMRT. The gross tumor volume of the nasopharynx (GTVnx), the GTV of metastatic lymph nodes (GTVnd), the clinical target volume (CTV) and normal tissues or organs (OAR) were re-contoured on the in-course repeated CT images using deformable registration and auto-segmentation software, according to the original planning contours. Changes in the volumes of the treatment targets and organs at risk were evaluated and the trends were then analyzed. Dose distributions were recalculated on the repeated CT images and compared to the original plans. Results: The volume of GTVnx decreased by 6.44%, 10.23% and 9.72% (F=1.34, P=0.278) in the 2nd, 4th and 6th week of IMRT compared with before IMRT, with 6.59%, 30.98% and 35.13% (F=9.22, P=0.000) in GTVnd, 0.73%, 1.86% and 1.41% (F=0.33, P=0.722) in CTV1, -1.78%, -6.47% and -9.34% (F=16.89, P=0.000) in CTV2, 13.96%, 32.97% and 37.77% (F=17.17, P=0.000) in the left parotid, and 3.56%, 29.57% and 35.63% (F=13.49, P=0.000) in the right parotid. The mean dose change rates of GTVnx were -0.39%, 0.08% and 0.32% (F=0.15, P=0.860) in the 2nd, 4th and 6th week of IMRT compared with the planned fraction dose, with 0.53%, 1.19% and 0.69% (F=0.81, P=0.455) in GTVnd, 1.95%, 2.70% and 3.78% (F=0.61, P=0.552) in the spinal cord, 0.32%, 0.81% and 0.62% (F=0.03, P=0.975) in the brain stem, 4.50%, 4.66% and 7.20% (F=0.33, P=0.725) in the left parotid, and 2.20%, 7.17% and 7.12% (F=1.24, P=0.306) in the right parotid. Conclusions: The GTVnd, CTV2 and the parotids shrink markedly over the course of IMRT for NPC patients. Although changes in the fraction dose of GTV, CTV, spinal

  2. Spatial distribution and implications to sources of halogenated flame retardants in riverine sediments of Taizhou, an intense e-waste recycling area in eastern China.

    Science.gov (United States)

    Zhou, Shanshan; Fu, Jie; He, Huan; Fu, Jianjie; Tang, Qiaozhi; Dong, Minfeng; Pan, Yongqiang; Li, An; Liu, Weiping; Zhang, Limin

    2017-10-01

    Concentrations and the spatial distribution pattern of organohalogen flame retardants were investigated in riverine surface sediments from Taizhou, an intensive e-waste recycling region in China. The analytes were syn- and anti-Dechlorane Plus (DP), Dechloranes 602, 603, and 604, a DP monoadduct, two dechlorinated DPs and 8 congeners of polybrominated diphenyl ethers (PBDEs). The concentrations of Σ8PBDEs, ΣDP, ΣDec600s, and ΣDP-degradates ranged from recycling facilities. Such patterns were largely shared by Dec602 and dechlorinated DP, although their concentration levels were much lower. These major flame retardants significantly correlate with each other, and cluster together in the loading plot of a principal component analysis. In contrast, most non-deca PBDE congeners do not correlate with DPs. Dec604 stood out as having a distinctly different spatial distribution pattern, which could be linked to historical use of mirex. The organic matter content of the sediment was not the dominant factor in determining the spatial pattern of pollution by halogenated flame retardants in the rivers of this study. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Fan-beam intensity modulated proton therapy.

    Science.gov (United States)

    Hill, Patrick; Westerly, David; Mackie, Thomas

    2013-11-01

    falloff of a proton depth-dose distribution was found to provide sufficient control over the dose distribution to meet objectives, even with coarse lateral resolution and channel widths as large as 2 cm. Treatment plans on both phantom and patient data show that dose conformity suffers when treatments are delivered from less than approximately ten angles. Treatment time for a sample prostate delivery is estimated to be on the order of 10 min, and neutron production is estimated to be comparable to that found for existing collimated systems. Fan beam proton therapy is a method of delivering intensity modulated proton therapy which may be employed as an alternative to magnetic scanning systems. A fan beam of protons can be created by a set of quadrupole magnets and modified by a dual-purpose range and intensity modulator. This can be used to deliver inversely planned treatments, with spot intensities optimized to meet user defined dose objectives. Additionally, the ability of a fan beam delivery system to effectively treat multiple beam spots simultaneously may provide advantages as compared to spot scanning deliveries.

  4. Variations in plasma wave intensity with distance along the electron foreshock boundary at Venus

    Science.gov (United States)

    Crawford, G. K.; Strangeway, R. J.; Russell, C. T.

    1991-01-01

    Plasma waves are observed in the solar wind upstream of the Venus bow shock by the Pioneer Venus Orbiter. These wave signatures occur during periods when the interplanetary magnetic field through the spacecraft position intersects the bow shock, thereby placing the spacecraft in the foreshock region. Wave intensity is analyzed as a function of distance along the electron foreshock boundary. It is found that the peak wave intensity may increase along the foreshock boundary from the tangent point to a maximum value at several Venus radii, then decrease in intensity with subsequent increase in distance. These observations could be associated with the instability process: the instability of the distribution function increasing with distance from the tangent point to saturation at the peak. Thermalization of the beam for distances beyond this point could reduce the distribution function instability resulting in weaker wave signatures.

  5. Radiation-induced crosslinking of aromatic polymers with cardo group

    International Nuclear Information System (INIS)

    Xu Jun; Zhang Wanxi

    1991-01-01

    The effects of irradiation on aromatic polymers with a cardo group, such as polyetherketone with cardo group (PEK-C) and polyethersulfone with cardo group (PES-C), were studied. It was found that PEK-C and PES-C can be crosslinked by irradiation under vacuum. Moreover, it was also found that the intensity of the X-ray photoelectron spectroscopy (XPS) shake-up peak for PEK-C and PES-C varies with irradiation dose. Gelation doses (Rg) of PEK-C and PES-C were estimated from the XPS shake-up peaks. (author) 6 refs.; 8 figs.; 3 tabs

  6. Efficient Production Process for Food Grade Acetic Acid by Acetobacter aceti in Shake Flask and in Bioreactor Cultures

    Directory of Open Access Journals (Sweden)

    Hassan M. Awad

    2012-01-01

    Full Text Available Acetic acid is an important weak acid with a long history in the chemical industries. This weak organic acid has been widely used as a key intermediate in many chemical, detergent, wood and food industries. The production of this acid is mainly carried out using a submerged fermentation system and the standard strain Acetobacter aceti. In the present work, six different media were chosen from the literature and tested for acetic acid production. The highest acetic acid production was obtained in a medium composed of glucose, yeast extract and peptone. The composition of this medium was optimized by changing the concentrations of the medium components. The optimized medium was composed of (g/L): glucose, 100; yeast extract, 12; and peptone, 5, and yielded 53 g/L acetic acid in shake flasks after 144 h of fermentation. Further optimization of the production process was achieved by transferring the process to a semi-industrial-scale 16-L stirred tank bioreactor and cultivating under controlled-pH conditions. Under fully aerobic conditions, the production of acetic acid reached maximal concentrations of about 76 g/L and 51 g/L for the uncontrolled- and controlled-pH cultures, respectively.

  7. Evaluation of intense rainfall parameters interpolation methods for the Espírito Santo State

    Directory of Open Access Journals (Sweden)

    José Eduardo Macedo Pezzopane

    2009-12-01

    Full Text Available Intense rainfalls are often responsible for the occurrence of undesirable processes in agricultural and forest areas, such as surface runoff, soil erosion and flooding. Knowledge of the spatial distribution of intense rainfall is important for agricultural watershed management, soil conservation and the design of hydraulic structures. The present paper evaluated methods of spatial interpolation of the intense rainfall parameters (“K”, “a”, “b” and “c”) for the Espírito Santo State, Brazil. Real intense rainfall rates were compared with those calculated from the interpolated intense rainfall parameters, considering different durations and return periods. Inverse distance weighting to the 5th power (IPD5) was the spatial interpolation method with the best performance for spatially interpolating the intense rainfall parameters.
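
    For readers unfamiliar with the interpolator named above, the following Python sketch implements plain inverse-distance weighting with an exponent of 5 (the IPD5 scheme the study found to perform best); the station coordinates and the "K" parameter values are invented for illustration.

    import numpy as np

    def idw(xy_known, values, xy_query, power=5):
        """Inverse-distance-weighted interpolation with the given exponent."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)            # avoid division by zero at station locations
        w = 1.0 / d ** power
        return (w @ values) / w.sum(axis=1)

    # Hypothetical rain-gauge coordinates (km) and values of one intense-rainfall parameter
    stations = np.array([[0.0, 0.0], [10.0, 5.0], [3.0, 12.0], [8.0, 8.0]])
    k_param = np.array([1200.0, 950.0, 1100.0, 1010.0])
    grid_points = np.array([[5.0, 5.0], [2.0, 2.0]])
    print(idw(stations, k_param, grid_points, power=5))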

  8. Shakespeare revisité, entre fidélité et parodie : de La Nuit des Rois à Shake de Dan Jemmett / Shakespeare Revisited, Between Fidelity and Parody: From Twelfth Night to Shake by Dan Jemmett

    Directory of Open Access Journals (Sweden)

    Isabelle Schwartz-Gastine

    2009-11-01

    Full Text Available William Shakespeare himself was a master of re-writing older material, a technique he used abundantly, and which was totally justified in the Renaissance, to compose his poems and plays from various sources, whether of literary (prose or verse), historical, or any other—and sometimes most unusual—background. The play I am considering in this paper is a very recent re-writing in English by Dan Jemmett (Peter Brook's son-in-law), but performed in Marie-Paul Remo's French translation at the Vidy Theatre in Lausanne during the 2001 season. It is called Shake, with a modest sub-title, « around Twelfth Night », which is in fact at the heart of the matter. Through the exploration of three themes—symmetry (of situations, of twin binarities), love's misunderstanding, and music—I will argue that this comedy, whose title is a mix between the name of the Bard and the etymological meaning of the verb “to shake” as far as traditions are concerned, is faithful to the spirit (rather than the letter) of the Shakespearean original in a very healthy comic vein. There is no need to wonder whether the spectators fully understood the meaning of this comedy, in which the four actors change roles all the time: their frequent bursts of laughter clearly showed that they enjoyed the spirit of the comedy, whether they knew Twelfth Night or not.

  9. January 1995 Hanshin-Awaji (Kobe), Japan Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — On the morning of January 17, 1995 (January 16 at 20:46 GMT), a major earthquake occurred near the City of Kobe, Japan. The greatest intensity of shaking for the 6.9...

  10. Escaping Electrons from Intense Laser-Solid Interactions as a Function of Laser Spot Size

    Directory of Open Access Journals (Sweden)

    Rusby Dean

    2018-01-01

    Full Text Available The interaction of a high-intensity laser with a solid target produces an energetic distribution of electrons that pass into the target. These electrons reach the rear surface of the target, creating strong electric potentials that act to restrict the further escape of additional electrons. Measurement of the angle, flux and spectra of the electrons that do escape gives insight into the initial interaction. Here, the escaping electrons have been measured using a differentially filtered image plate stack, from interactions with intensities from the mid-10^20 down to 10^17 W/cm^2, where the intensity has been reduced by defocussing to increase the size of the focal spot. An increase in electron flux is initially observed as the intensity is reduced from 4x10^20 to 6x10^18 W/cm^2. The temperature of the electron distribution is also measured and found to be relatively constant. 2D particle-in-cell modelling is used to demonstrate the importance of pre-plasma conditions in understanding these observations.

  11. Determination of beam intensity in a single step for IMRT inverse planning

    International Nuclear Information System (INIS)

    Chuang, Keh-Shih; Chen, Tzong-Jer; Kuo, Shan-Chi; Jan, Meei-Ling; Hwang, Ing-Ming; Chen, Sharon; Lin, Ying-Chuan; Wu, Jay

    2003-01-01

    In intensity modulated radiotherapy (IMRT), targets are treated by multiple beams at different orientations each with spatially-modulated beam intensities. This approach spreads the normal tissue dose to a greater volume and produces a higher dose conformation to the target. In general, inverse planning is used for IMRT treatment planning. The inverse planning requires iterative calculation of dose distribution in order to optimize the intensity profile for each beam and is very computation intensive. In this paper, we propose a single-step method utilizing a figure of merit (FoM) to estimate the beam intensities for IMRT treatment planning. The FoM of a ray is defined as the ratio between the delivered tumour dose and normal tissue dose and is a good index for the dose efficacy of the ray. To maximize the beam utility, it is natural to irradiate the tumour with intensity of each ray proportional to the value of the FoM. The nonuniform beam intensity profiles are then fixed and the weights of the beam are determined iteratively in order to yield a uniform tumour dose. In this study, beams are employed at equispaced angles around the patient. Each beam with its field size that just covers the tumour is divided into a fixed number of beamlets. The FoM is calculated for each beamlet and this value is assigned to be the beam intensity. Various weighting factors are incorporated in the FoM computation to accommodate different clinical considerations. Two clinical datasets are used to test the feasibility of the algorithm. The resultant dose-volume histograms of this method are presented and compared to that of conformal therapy. Preliminary results indicate that this method reduces the critical organ doses at a small expense of uniformity in tumour dose distribution. This method estimates the beam intensity in one single step and the computation time is extremely fast and can be finished in less than one minute using a regular PC
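
    The single-step idea lends itself to a compact sketch: with a toy dose-deposition matrix D[v, r] (dose to voxel v per unit intensity of beamlet r), the FoM of each beamlet is the ratio of the dose it delivers to the tumour to the dose it delivers to normal tissue, and the beamlet intensity is set proportional to that ratio. All arrays, voxel labels and the prescription value below are invented; clinical weighting factors would be folded into the FoM as described in the abstract.

    import numpy as np

    rng = np.random.default_rng(1)
    n_voxels, n_beamlets = 500, 40
    D = rng.random((n_voxels, n_beamlets)) * 0.01    # toy dose-deposition matrix
    is_tumour = np.zeros(n_voxels, dtype=bool)
    is_tumour[:80] = True                            # first 80 voxels form the "tumour"

    tumour_dose = D[is_tumour].sum(axis=0)           # target dose per unit beamlet intensity
    normal_dose = D[~is_tumour].sum(axis=0)          # normal-tissue dose per unit intensity
    fom = tumour_dose / np.maximum(normal_dose, 1e-9)

    # Intensity proportional to FoM; the remaining freedom is a global (per-beam) weight
    # chosen so that the mean tumour dose matches the prescribed fraction dose (2 Gy here).
    intensity = fom / fom.max()
    intensity *= 2.0 / (D[is_tumour] @ intensity).mean()
    print(np.round(intensity[:5], 3))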

  12. Sleep quality and covariates as predictors of pain intensity among the general population in rural China.

    Science.gov (United States)

    Liu, Xiao-Kun; Xiao, Shui-Yuan; Zhou, Liang; Hu, Mi; Zhou, Wei; Liu, Hui-Ming

    2018-01-01

    The aims of this study were to investigate the distribution of sleep quality and its relationship with the prevalence of pain among rural Chinese people and to explore the association between sleep quality and pain intensity among the general population in real-life settings. This cross-sectional survey included a total of 2052 adults from rural areas in Liuyang, Hunan Province, recruited through random multistage sampling. The distributions of sleep quality and pain prevalence among the participants over a 4-week period were described. Because of multicollinearity among variables, the influence of self-rated sleep quality and psychosocial covariates on pain intensity was explored using a ridge regression model. The data showed that participants reporting all categories of sleep quality experienced some degree of pain. Sleep quality, along with physical and mental health, was a negative predictor of pain intensity among the general population. Symptoms of depression positively predicted pain intensity. Poor sleep quality increased pain intensity among the participants. Both previous research and the present data suggest that improving sleep quality may significantly decrease pain intensity in the general population. The relationship between sleep and pain may be bidirectional. This finding also suggests that treatment for sleep disorders and insomnia should be addressed in future efforts to alleviate pain intensity.
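
    As a small, hedged illustration of the statistical step mentioned above (ridge regression used because the predictors are collinear), the sketch below simulates correlated sleep-quality, depression and physical-health scores and fits an L2-penalised model with scikit-learn; the variables, effect sizes and penalty are invented and do not reproduce the survey data.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    n = 300
    sleep = rng.normal(size=n)                                  # higher = better sleep quality
    depression = -0.6 * sleep + rng.normal(scale=0.8, size=n)   # collinear with sleep quality
    physical = rng.normal(size=n)
    pain = 1.5 - 0.8 * sleep + 0.5 * depression - 0.4 * physical + rng.normal(scale=0.5, size=n)

    X = StandardScaler().fit_transform(np.column_stack([sleep, depression, physical]))
    model = Ridge(alpha=1.0).fit(X, pain)     # the L2 penalty stabilises collinear coefficients
    print(dict(zip(["sleep", "depression", "physical"], np.round(model.coef_, 2))))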

  13. CISN Display - Reliable Delivery of Real-time Earthquake Information, Including Rapid Notification and ShakeMap to Critical End Users

    Science.gov (United States)

    Rico, H.; Hauksson, E.; Thomas, E.; Friberg, P.; Given, D.

    2002-12-01

    The California Integrated Seismic Network (CISN) Display is part of a Web-enabled earthquake notification system alerting users in near real-time of seismicity, and also valuable geophysical information following a large earthquake. It will replace the Caltech/USGS Broadcast of Earthquakes (CUBE) and Rapid Earthquake Data Integration (REDI) Display as the principal means of delivering graphical earthquake information to users at emergency operations centers, and other organizations. Features distinguishing the CISN Display from other GUI tools are a state-full client/server relationship, a scalable message format supporting automated hyperlink creation, and a configurable platform-independent client with a GIS mapping tool; supporting the decision-making activities of critical users. The CISN Display is the front-end of a client/server architecture known as the QuakeWatch system. It is comprised of the CISN Display (and other potential clients), message queues, server, server "feeder" modules, and messaging middleware, schema and generators. It is written in Java, making it platform-independent, and offering the latest in Internet technologies. QuakeWatch's object-oriented design allows components to be easily upgraded through a well-defined set of application programming interfaces (APIs). Central to the CISN Display's role as a gateway to other earthquake products is its comprehensive XML-schema. The message model starts with the CUBE message format, but extends it by provisioning additional attributes for currently available products, and those yet to be considered. The supporting metadata in the XML-message provides the data necessary for the client to create a hyperlink and associate it with a unique event ID. Earthquake products deliverable to the CISN Display are ShakeMap, Ground Displacement, Focal Mechanisms, Rapid Notifications, OES Reports, and Earthquake Commentaries. Leveraging the power of the XML-format, the CISN Display provides prompt access to

  14. Intensity Of Agricultural Labour Use By Gender In Rural Households ...

    African Journals Online (AJOL)

    ... the intensity of agricultural labour use by gender and its determinants in rural households of Imo State. Data were collected with structured questionnaire from 60 male and 60 female headed households, and analysed using means, frequency distribution, percentages and ordinary least squares multiple regression model.

  15. Estimation of neutron energy distributions from prompt gamma emissions

    Science.gov (United States)

    Panikkath, Priyada; Udupi, Ashwini; Sarkar, P. K.

    2017-11-01

    A technique for estimating the incident neutron energy distribution from the prompt gamma intensities emitted by a system exposed to neutrons is presented. The emitted prompt gamma intensities, or the measured photopeaks in a gamma detector, are related to the incident neutron energy distribution through a convolution with the response of the prompt-gamma-generating system to mono-energetic neutrons. Presently, the system studied is a cylinder of high-density polyethylene (HDPE) placed inside another cylinder of borated HDPE (BHDPE) with an outer Pb cover and exposed to neutrons. The five emitted prompt gamma peaks from hydrogen, boron, carbon and lead can be utilized to unfold the incident neutron energy distribution as an under-determined deconvolution problem. Such an under-determined set of equations is solved using the genetic-algorithm-based Monte Carlo deconvolution code GAMCD. The feasibility of the proposed technique is demonstrated theoretically using the Monte Carlo calculated response matrix and the intensities of prompt gammas emitted from the Pb-covered BHDPE-HDPE system for several incident neutron spectra spanning different energy ranges.
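
    The unfolding problem described above can be written as I = R @ phi, with I the measured peak intensities, R the response of the assembly to mono-energetic neutrons and phi the sought spectrum. The toy sketch below uses a non-negative least-squares fit purely as a stand-in for the genetic-algorithm code GAMCD used in the study; the response matrix, binning and "measured" intensities are all invented.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    n_peaks, n_bins = 5, 8                      # five prompt-gamma peaks, eight energy bins
    R = rng.random((n_peaks, n_bins))           # toy response: counts per unit fluence per bin
    phi_true = np.array([0.1, 0.4, 0.8, 1.0, 0.7, 0.3, 0.1, 0.05])
    I_meas = R @ phi_true * rng.normal(1.0, 0.02, n_peaks)   # noisy "measured" intensities

    phi_est, residual = nnls(R, I_meas)         # non-negative solution of the under-determined system
    print(np.round(phi_est, 3), round(residual, 4))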

  16. A Design Method for Graded Insulation of Transformers by Transient Electric Field Intensity Analysis

    OpenAIRE

    Yamashita, Hideo; Cingoski, Vlatko; Namera, Akihiro; Nakamae, Eihachiro; Kitamura, Hideo

    2000-01-01

    In this paper, a calculation method for transient electric field distribution inside a transformer impressed with voltage is proposed: The concentrated electric network for the transformer is constructed by dividing transformer windings into several blocks, and the transient voltage and electric field intensity distributions inside the transformer are calculated by using the axisymmetrical finite element method. Moreover, an animated display of the distributions is realized: The visualization...

  17. Minimum intensity projection technique in the evaluation of pulmonary emphysema

    International Nuclear Information System (INIS)

    Ishii, Chikako; Tada, Shinpei; Fukuda, Kunihiko; Hayashi, Naganobu

    2000-01-01

    Thirty patients with clinically diagnosed pulmonary emphysema were evaluated with helical CT. From a helical CT data set acquired with 10 mm slice thickness and 10 mm/sec table speed, minimum intensity projection (Min-IP) images were generated. Min-IP coronal images demonstrated the distribution and degree of emphysema well. Compared to high-resolution CT images (2 mm thickness), the Min-IP images evaluated the disease equally well. The Min-IP technique seems useful for evaluating the distribution and degree of pulmonary emphysema. (author)
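
    The projection itself is a one-line operation once the CT volume is in memory: keep the lowest attenuation value along each ray, which is what makes the low-density emphysematous regions stand out. The sketch below uses a random toy volume indexed (slice, row, column); real use would load the reconstructed helical CT data instead.

    import numpy as np

    volume = np.random.default_rng(4).integers(-1000, 100, size=(40, 256, 256)).astype(np.int16)
    minip_coronal = volume.min(axis=1)   # minimum along the anterior-posterior axis -> coronal Min-IP
    print(minip_coronal.shape)           # (40, 256): one minimum HU value per (slice, column) ray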

  18. Beam halo in high-intensity hadron linacs

    Energy Technology Data Exchange (ETDEWEB)

    Gerigk, F

    2006-12-21

    This document aims to cover the most relevant mechanisms for the development of beam halo in high-intensity hadron linacs. The introduction outlines the various applications of high-intensity linacs and it will explain why, in the case of the CERN Superconducting Proton Linac (SPL) study a linac was chosen to provide a high-power beam, rather than a different kind of accelerator. The basic equations, needed for the understanding of halo development are derived and employed to study the effects of initial and distributed mismatch on high-current beams. The basic concepts of the particle-core model, envelope modes, parametric resonances, the free-energy approach, and the idea of core-core resonances are introduced and extended to study beams in realistic linac lattices. The approach taken is to study the behavior of beams not only in simplified theoretical focusing structures but to highlight the beam dynamics in realistic accelerators. All effects which are described and derived with simplified analytic models, are tested in realistic lattices and are thus related to observable effects in linear accelerators. This approach involves the use of high-performance particle tracking codes, which are needed to simulate the behavior of the outermost particles in distributions of up to 100 million macro particles. In the end a set of design rules are established and their impact on the design of a typical high-intensity machine, the CERN SPL, is shown. The examples given in this document refer to two different design evolutions of the SPL study: the first conceptual design report (SPL I) and the second conceptual design report (SPL II). (orig.)
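
    The particle-core model referred to above can be illustrated in a few lines: the rms-equivalent envelope of a mismatched beam in a uniform focusing channel is integrated together with a test particle that sees a linear space-charge force inside the core and a 1/x force outside it. The focusing strength, perveance, emittance and mismatch below are illustrative numbers, not SPL parameters.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    k0, K, eps = 1.0, 0.5, 0.2      # focusing [rad/m], generalised perveance, emittance (made up)

    def rhs(s, y):
        a, ap, x, xp = y
        app = -k0**2 * a + K / a + eps**2 / a**3          # axisymmetric KV envelope equation
        f_sc = K * x / a**2 if abs(x) <= a else K / x     # space-charge force on the test particle
        return [ap, app, xp, -k0**2 * x + f_sc]

    # Matched envelope radius, then launch the core with a 30% mismatch
    a_match = brentq(lambda a: -k0**2 * a + K / a + eps**2 / a**3, 1e-3, 10.0)
    sol = solve_ivp(rhs, (0.0, 200.0), [1.3 * a_match, 0.0, 0.9 * a_match, 0.0],
                    max_step=0.05, rtol=1e-8)
    print("matched radius:", round(a_match, 3),
          "| max particle excursion / matched radius:", round(np.max(np.abs(sol.y[2])) / a_match, 2))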

  19. Intensive care bereavement practices across New Zealand and Australian intensive care units: a qualitative content analysis.

    Science.gov (United States)

    Coombs, Maureen; Mitchell, Marion; James, Stephen; Wetzig, Krista

    2017-10-01

    End-of-life and bereavement care is an important consideration in intensive care. This study describes the type of bereavement care provided in intensive care units across Australia and New Zealand. Inductive qualitative content analysis was conducted on free-text responses to a web-based survey exploring unit-based bereavement practice distributed to nurse managers in 229 intensive care units in New Zealand and Australia. A total of 153 (67%) surveys were returned with 68 respondents making free-text responses. Respondents were mainly Australian (n = 54, 85·3%), from the public sector (n = 51, 75%) and holding Nurse Unit Managers/Charge Nurse roles (n = 39, 52·9%). From the 124 free-text responses, a total of 187 individual codes were identified focussing on bereavement care practices (n = 145, 77·5%), educational provision to support staff (n = 15, 8%) and organisational challenges (n = 27, 14·4%). Bereavement care practices described use of memory boxes, cultural specificity, annual memorial services and use of community support services. Educational provision identified local in-service programmes, and national bereavement courses for specialist bereavement nurse coordinators. Organisational challenges focussed on lack of funding, especially for provision of bereavement follow-up. This is the first Australasian-wide survey, and one of the few international studies, describing bereavement practices within intensive care, an important aspect of nursing practice. However, with funding for new bereavement services and education for staff lacking, there are continued challenges in developing bereavement care. Given knowledge about the impact of these areas of care on bereaved family members, this requires review. Nurses remain committed to supporting bereaved families during and following death in intensive care. With limited resource to support bereavement care, intensive care nurses undertake a range of bereavement care practices at time of death

  20. Atomic motion in a high-intensity standing wave laser field

    International Nuclear Information System (INIS)

    Saez Ramdohr, L.F.

    1987-01-01

    This work discusses the effect of a high-intensity standing wave laser field on the motion of neutral atoms moving with a relatively high velocity. The analysis involves a detailed calculation of the force acting on the atoms and the calculation of the diffusion tensor associated with the fluctuations of the quantum force operator. The high-intensity laser field limit corresponds to a Rabi frequency much greater than the natural rate of the atom. The general results are valid for any atomic velocity. Results are then specialized to the case of slow and fast atoms where the Doppler shift of the laser frequency due to the atomic motion is either smaller or larger than the natural decay rate of the atom. The results obtained for the force and diffusion tensor are applied to a particular ideal experiment that studies the evolution of a fast atomic beam crossing a high-intensity laser beam. The theories developed previously, for a similar laser configuration, discuss only the low atomic velocities case and not the more realistic case of fast atoms. Here, an approximate solution of the equation for the distribution is obtained. Starting from the approximate distribution function, the deflection angle and dispersion angle for the atomic beam with respect to the free motion are calculated

  1. The intensity of precipitation during extratropical cyclones in global warming simulations: a link to cyclone intensity?

    Energy Technology Data Exchange (ETDEWEB)

    Watterson, I.G. [CSIRO Atmospheric Research, Aspendale (Australia)

    2006-01-01

    Simulations of global warming over the coming century from two CSIRO GCMs are analysed to assess changes in the intensity of extratropical cyclones, and the potential role of increased latent heating associated with precipitation during cyclones. A simple surface cyclone detection scheme is applied to a four-member ensemble of simulations from the Mark 2 GCM, under rising greenhouse gas concentrations. The seasonal distribution of cyclones appears broadly realistic during 1961-1990. By 2071-2100, with 3 K global warming, numbers over 20 deg N to 70 deg N decrease by 6% in winter and 2% annually, with similar results for the south. The average intensity of cyclones, from relative central pressure and other measures, is largely unchanged however. 30-yr extremes of dynamic intensity also show little clear change, including values averaged over continents. Mean rain rates at cyclone centres are typically at least double rates from all days. Rates during cyclones increase by an average 14% in the northern winter under global warming. Rates over adjacent grid squares and during the previous day increase similarly, as do extreme rates. Results from simulations of the higher-resolution (1.8 deg grid) Mark 3 GCM are similar, with widespread increases in rain rates but not in cyclone intensity. The analyses suggest that latent heating during storms increases, as anticipated due to the increased moisture capacity of the warmer atmosphere. However, any role for enhanced heating in storm development in the GCMs is apparently masked by other factors. An exception is a 5% increase in extreme intensity around 55 deg S in Mark 3, despite decreased numbers of lows, a factor assessed using extreme value theory. Further studies with yet higher-resolution models may be needed to examine the potential realism of these results, particularly with regard to extremes at smaller scale.

  2. Momentum distributions of selected rare-gas atoms probed by intense femtosecond laser pulses

    DEFF Research Database (Denmark)

    Abu-Samha, Mahmoud; Madsen, Lars Bojer

    2011-01-01

    We provide a direct comparison between numerical and experimental (Rudenko et al 2004 J. Phys. B: At. Mol. Opt. Phys. 37 L407) photoelectron momentum distributions in strong-field ionization of selected rare-gas atoms (He, Ne and Ar), probed by femtosecond linearly polarized laser pulses. The calculations are performed by solving the time-dependent Schrödinger equation within the single-active-electron approximation, and focal-volume effects are taken into account by appropriately averaging the results. The resulting momentum distributions are in quantitative agreement with the experimental...

  3. Reinvestigation of the charge density distribution in arc discharge fusion system

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Lin Horng; Yee, Lee Kim; Nan, Phua Yeong; Thung, Yong Yun; Khok, Yong Thian; Rahman, Faidz Abd [Centre of Photonics and Advance Material, Universiti Tunku Abdul Rahman Kuala Lumpur (Malaysia)

    2015-04-24

    A continual arc discharge system has been setup and the light intensity of arc discharge has been profiled. The mathematical model of local energy density distribution in arc discharge fusion has been simulated which is in good qualitative agreement with light intensity profile of arc discharge in the experiments. Eventually, the local energy density distribution of arc discharge system is able to be precisely manipulated to act as heat source in the fabrication of fused fiber devices.

  4. Reinvestigation of the charge density distribution in arc discharge fusion system

    International Nuclear Information System (INIS)

    Sheng, Lin Horng; Yee, Lee Kim; Nan, Phua Yeong; Thung, Yong Yun; Khok, Yong Thian; Rahman, Faidz Abd

    2015-01-01

    A continual arc discharge system has been setup and the light intensity of arc discharge has been profiled. The mathematical model of local energy density distribution in arc discharge fusion has been simulated which is in good qualitative agreement with light intensity profile of arc discharge in the experiments. Eventually, the local energy density distribution of arc discharge system is able to be precisely manipulated to act as heat source in the fabrication of fused fiber devices

  5. Development of distributed target

    CERN Document Server

    Yu Hai Jun; Li Qin; Zhou Fu Xin; Shi Jin Shui; Ma Bing; Chen Nan; Jing Xiao Bing

    2002-01-01

    A linear induction accelerator is expected to generate small-diameter, high-intensity X-ray spots. The interaction of the electron beam with plasmas generated at the X-ray converter makes the spot on the target grow with time and degrades the X-ray dose and the imaging resolving power. A distributed target has been developed which has about 24 thin 0.05 mm tantalum films distributed over 1 cm. Because the target material is spread over a large volume, this structure decreases the energy deposition per unit volume and hence the target surface temperature, which in turn reduces the initial plasma formation and its expansion velocity. A comparison and analysis of the two kinds of target structures is presented using numerical calculations and experiments; the results show that the X-ray dose and normalized angular distribution of the two are basically the same, while the surface of the distributed target is not destroyed as it is for the previous block target.

  6. Comparison of pressure transient response in intensely and sparsely fractured reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Johns, R.T.

    1989-04-01

    A comprehensive analytical model is presented to study the pressure transient behavior of a naturally fractured reservoir with a continuous matrix block size distribution. Geologically realistic probability density functions of matrix block size are used to represent reservoirs of varying fracture intensity and uniformity. Transient interporosity flow is assumed and interporosity skin is incorporated. Drawdown and interference pressure transient tests are investigated. The results show distinctions in the pressure response from intensely and sparsely fractured reservoirs in the absence of interporosity skin. Also, uniformly and nonuniformly fractured reservoirs exhibit distinct responses, irrespective of the degree of fracture intensity. The pressure response in a nonuniformly fractured reservoir with large block size variability, approaches a nonfractured (homogeneous) reservoir response. Type curves are developed to estimate matrix block size variability and the degree of fracture intensity from drawdown and interference well tests.

  7. Statistics of spatially integrated speckle intensity difference

    DEFF Research Database (Denmark)

    Hanson, Steen Grüner; Yura, Harold

    2009-01-01

    We consider the statistics of the spatially integrated speckle intensity difference obtained from two separated finite collecting apertures. For fully developed speckle, closed-form analytic solutions for both the probability density function and the cumulative distribution function are derived...... here for both arbitrary values of the mean number of speckles contained within an aperture and the degree of coherence of the optical field. Additionally, closed-form expressions are obtained for the corresponding nth statistical moments....

  8. Development of the U.S. Geological Survey's PAGER system (Prompt Assessment of Global Earthquakes for Response)

    Science.gov (United States)

    Wald, D.J.; Earle, P.S.; Allen, T.I.; Jaiswal, K.; Porter, K.; Hearne, M.

    2008-01-01

    The Prompt Assessment of Global Earthquakes for Response (PAGER) System plays a primary alerting role for global earthquake disasters as part of the U.S. Geological Survey’s (USGS) response protocol. We provide an overview of the PAGER system, both of its current capabilities and our ongoing research and development. PAGER monitors the USGS’s near real-time U.S. and global earthquake origins and automatically identifies events that are of societal importance, well in advance of ground-truth or news accounts. Current PAGER notifications and Web pages estimate the population exposed to each seismic intensity level. In addition to being a useful indicator of potential impact, PAGER’s intensity/exposure display provides a new standard in the dissemination of rapid earthquake information. We are currently developing and testing a more comprehensive alert system that will include casualty estimates. This is motivated by the idea that an estimated range of possible number of deaths will aid in decisions regarding humanitarian response. Underlying the PAGER exposure and loss models are global earthquake ShakeMap shaking estimates, constrained as quickly as possible by finite-fault modeling and observed ground motions and intensities, when available. Loss modeling is being developed comprehensively with a suite of candidate models that range from fully empirical to largely analytical approaches. Which of these models is most appropriate for use in a particular earthquake depends on how much is known about local building stocks and their vulnerabilities. A first-order country-specific global building inventory has been developed, as have corresponding vulnerability functions. For calibrating PAGER loss models, we have systematically generated an Atlas of 5,000 ShakeMaps for significant global earthquakes during the last 36 years. For many of these, auxiliary earthquake source and shaking intensity data are also available. Refinements to the loss models are ongoing

  9. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

    This paper establishes a remarkable result regarding Palm distributions for a log Gaussian Cox process: the reduced Palm distribution for a log Gaussian Cox process is itself a log Gaussian Cox process that only differs from the original log Gaussian Cox process in the intensity function. This new result is used to study functional summaries for log Gaussian Cox processes.

  10. Large airplane crash on a nuclear plant: Design study against excessive shaking of components

    International Nuclear Information System (INIS)

    Petrangeli, Gianni

    2010-01-01

    The problem of the strong shaking of structures and of components in case of an aircraft impact is the subject of this study. This problem is solved in some designs by protecting the external Nuclear Island block (N.I.) by an external thick wall, capable to withstand the aircraft impact. This wall is connected to the rest of the N.I. by the common foundation slab only. The first part of this study consists of the evaluation of the order of magnitude of the vibration attenuation which can be obtained by this design scheme. Should the attenuation obtained be not sufficient for some parts of the internal structures, some additional design provision could be adopted. In order to solve this problem, a specific design solution is here suggested. It essentially consists in connecting critical parts of structures to the common foundation slab with restraints having an adequate degree of deformability, so that the transmission of high frequency impact forces from other parts of the whole structure is minimized. In a previous paper, the structural protection of the reactor dome and of connected structures of a modern nuclear plant is dealt with. In the present paper, the protection of internal parts of the plant (the internal containment is chosen) in case of strong impact on lateral walls is studied. The indicative result of this study is that the enhancement of attenuation in the transmission of acceleration from the impact point to some representative point in the inner structure is of the order of 75. This result cannot be generalized, as it depends on many parameters of the structure and of the soil.

  11. Large airplane crash on a nuclear plant: Design study against excessive shaking of components

    Energy Technology Data Exchange (ETDEWEB)

    Petrangeli, Gianni, E-mail: g.petrangeli@gmail.i [University of Pisa, Via C. Maes 53, 00162 Roma (Italy)

    2010-12-15

    The problem of the strong shaking of structures and of components in case of an aircraft impact is the subject of this study. This problem is solved in some designs by protecting the external Nuclear Island block (N.I.) by an external thick wall, capable to withstand the aircraft impact. This wall is connected to the rest of the N.I. by the common foundation slab only. The first part of this study consists of the evaluation of the order of magnitude of the vibration attenuation which can be obtained by this design scheme. Should the attenuation obtained be not sufficient for some parts of the internal structures, some additional design provision could be adopted. In order to solve this problem, a specific design solution is here suggested. It essentially consists in connecting critical parts of structures to the common foundation slab with restraints having an adequate degree of deformability, so that the transmission of high frequency impact forces from other parts of the whole structure is minimized. In a previous paper, the structural protection of the reactor dome and of connected structures of a modern nuclear plant is dealt with. In the present paper, the protection of internal parts of the plant (the internal containment is chosen) in case of strong impact on lateral walls is studied. The indicative result of this study is that the enhancement of attenuation in the transmission of acceleration from the impact point to some representative point in the inner structure is of the order of 75. This result cannot be generalized, as it depends on many parameters of the structure and of the soil.

  12. Peer-Assisted Content Distribution with Random Linear Network Coding

    DEFF Research Database (Denmark)

    Hundebøll, Martin; Ledet-Pedersen, Jeppe; Sluyterman, Georg

    2014-01-01

    Peer-to-peer networks constitute a widely used, cost-effective and scalable technology to distribute bandwidth-intensive content. The technology forms a great platform to build distributed cloud storage without the need for a central provider. However, the majority of today's peer-to-peer systems
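
    A toy sketch of the coding idea (not the authors' implementation): a generation of k source packets is mixed into coded packets with random coefficients, here over GF(2) for brevity (practical systems often use GF(2^8)), and a receiver can decode once the collected coefficient vectors reach full rank. Packet sizes and generation size are invented.

    import numpy as np

    rng = np.random.default_rng(5)
    k, packet_len = 4, 16
    source = rng.integers(0, 2, size=(k, packet_len), dtype=np.uint8)   # payload bits

    def encode(n_coded):
        coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
        return coeffs, coeffs @ source % 2          # XOR-combinations of the source packets

    def gf2_rank(m):
        m, rank = m.copy() % 2, 0
        for col in range(m.shape[1]):
            pivot = np.nonzero(m[rank:, col])[0]
            if pivot.size == 0:
                continue
            m[[rank, rank + pivot[0]]] = m[[rank + pivot[0], rank]]        # move pivot row up
            m[(m[:, col] == 1) & (np.arange(len(m)) != rank)] ^= m[rank]   # clear the column
            rank += 1
            if rank == m.shape[0]:
                break
        return rank

    coeffs, coded = encode(6)                       # a little redundancy on top of k packets
    print("decodable:", gf2_rank(coeffs) == k)      # full rank -> Gaussian elimination recovers sources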

  13. Rapid post-earthquake modelling of coseismic landslide intensity and distribution for emergency response decision support

    Directory of Open Access Journals (Sweden)

    T. R. Robinson

    2017-09-01

    Full Text Available Current methods to identify coseismic landslides immediately after an earthquake using optical imagery are too slow to effectively inform emergency response activities. Issues with cloud cover, data collection and processing, and manual landslide identification mean even the most rapid mapping exercises are often incomplete when the emergency response ends. In this study, we demonstrate how traditional empirical methods for modelling the total distribution and relative intensity (in terms of point density) of coseismic landsliding can be successfully undertaken in the hours and days immediately after an earthquake, allowing the results to effectively inform stakeholders during the response. The method uses fuzzy logic in a GIS (Geographic Information System) to quickly assess and identify the location-specific relationships between predisposing factors and landslide occurrence during the earthquake, based on small initial samples of identified landslides. We show that this approach can accurately model both the spatial pattern and the number density of landsliding from the event based on just several hundred mapped landslides, provided they have sufficiently wide spatial coverage, improving upon previous methods. This suggests that systematic high-fidelity mapping of landslides following an earthquake is not necessary for informing rapid modelling attempts. Instead, mapping should focus on rapid sampling from the entire affected area to generate results that can inform the modelling. This method is therefore suited to conditions in which imagery is affected by partial cloud cover or in which the total number of landslides is so large that mapping requires significant time to complete. The method therefore has the potential to provide a quick assessment of landslide hazard after an earthquake and may therefore inform emergency operations more effectively compared to current practice.
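
    A schematic Python version of the workflow described above: fuzzy memberships for each predisposing factor are derived from the landslide frequency of the factor classes in a small initial sample, and the factor layers are combined with a fuzzy gamma overlay to give a relative landslide intensity surface. The rasters, class breaks and gamma value are all invented; the actual study works on real factor layers inside a GIS.

    import numpy as np

    rng = np.random.default_rng(6)
    shape = (200, 200)
    slope = rng.uniform(0, 60, shape)                 # degrees (toy raster)
    shaking = rng.uniform(0.1, 1.2, shape)            # e.g. PGA in g (toy raster)
    sampled = rng.random(shape) < 0.01 * slope / 60 * shaking   # synthetic "mapped" landslide sample

    def membership(factor, events, bins):
        """Fuzzy membership per cell: normalised landslide frequency of the cell's factor class."""
        idx = np.digitize(factor, bins)
        freq = np.array([events[idx == i].mean() if np.any(idx == i) else 0.0
                         for i in range(len(bins) + 1)])
        if freq.max() > 0:
            freq = freq / freq.max()
        return freq[idx]

    mu_slope = membership(slope, sampled, bins=np.array([10.0, 20.0, 30.0, 40.0, 50.0]))
    mu_shake = membership(shaking, sampled, bins=np.array([0.3, 0.6, 0.9]))

    gamma = 0.9
    product = mu_slope * mu_shake
    algebraic_sum = 1 - (1 - mu_slope) * (1 - mu_shake)
    susceptibility = algebraic_sum**gamma * product**(1 - gamma)   # fuzzy gamma operator
    print("relative intensity, 99th percentile threshold:", round(np.quantile(susceptibility, 0.99), 3))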

  14. Preface

    Indian Academy of Sciences (India)

    earthquake of 26 January 2001, held at New Delhi during October 3–5, ... the stresses and modes of failure of this mid-plate region. ... effects of flexure in imposing a stress system ... masonry structures to intense ground shaking of the area ...

  15. On dose distribution comparison

    International Nuclear Information System (INIS)

    Jiang, Steve B; Sharp, Greg C; Neicu, Toni; Berbeco, Ross I; Flampouri, Stella; Bortfeld, Thomas

    2006-01-01

    In radiotherapy practice, one often needs to compare two dose distributions. Especially with the wide clinical implementation of intensity-modulated radiation therapy, software tools for quantitative dose (or fluence) distribution comparison are required for patient-specific quality assurance. Dose distribution comparison is not a trivial task since it has to be performed in both dose and spatial domains in order to be clinically relevant. Each of the existing comparison methods has its own strengths and weaknesses and there is room for improvement. In this work, we developed a general framework for comparing dose distributions. Using a new concept called maximum allowed dose difference (MADD), the comparison in both dose and spatial domains can be performed entirely in the dose domain. Formulae for calculating MADD values for various comparison methods, such as composite analysis and gamma index, have been derived. For convenience in clinical practice, a new measure called normalized dose difference (NDD) has also been proposed, which is the dose difference at a point scaled by the ratio of MADD to the predetermined dose acceptance tolerance. Unlike the simple dose difference test, NDD works in both low and high dose gradient regions because it considers both dose and spatial acceptance tolerances through MADD. The new method has been applied to a test case and a clinical example. It was found that the new method combines the merits of the existing methods (accurate, simple, clinically intuitive and insensitive to dose grid size) and can easily be implemented into any dose/intensity comparison tool
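
    For context, the sketch below implements the conventional gamma evaluation mentioned above (not the paper's new MADD/NDD formulation) for two one-dimensional profiles with a 3%/3 mm criterion; the profiles, spacing and tolerances are invented test values.

    import numpy as np

    def gamma_index_1d(dose_ref, dose_eval, spacing_mm, dose_tol=0.03, dta_mm=3.0):
        """Global 1D gamma: minimum combined dose-difference / distance metric per reference point."""
        x = np.arange(len(dose_ref)) * spacing_mm
        dd = (dose_eval[None, :] - dose_ref[:, None]) / (dose_tol * dose_ref.max())
        dr = (x[None, :] - x[:, None]) / dta_mm
        return np.sqrt(dd**2 + dr**2).min(axis=1)

    x = np.linspace(-30, 30, 121)                        # mm, 0.5 mm spacing
    dose_ref = np.exp(-(x / 15.0) ** 4)                  # made-up flat-topped reference profile
    dose_eval = 1.02 * np.exp(-((x - 0.8) / 15.0) ** 4)  # 2% hotter and shifted by 0.8 mm
    gamma = gamma_index_1d(dose_ref, dose_eval, spacing_mm=0.5)
    print("gamma pass rate (gamma <= 1):", round(float(np.mean(gamma <= 1.0)), 3))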

  16. Urban MEMS based seismic network for post-earthquakes rapid disaster assessment

    Science.gov (United States)

    D'Alessandro, Antonino; Luzio, Dario; D'Anna, Giuseppe

    2014-05-01

    Life losses following a disastrous earthquake depend mainly on building vulnerability, the intensity of shaking and the timeliness of rescue operations. In recent decades, the increase in population and industrial density has significantly increased the exposure of urban areas to earthquakes. The potential impact of a strong earthquake on a town center can be reduced by timely and correct actions of the emergency management centers. A real-time urban seismic network can drastically reduce casualties immediately following a strong earthquake by promptly providing information about the distribution of the ground shaking level. Emergency management centers, with functions in the immediate post-earthquake period, could use this information to allocate and prioritize resources to minimize the loss of human life. However, due to the high cost of seismological instrumentation, such an urban seismic network, which could reduce the rate of fatalities, has not yet been realized. Recent developments in MEMS (Micro Electro-Mechanical Systems) technology could today allow the realization of a high-density urban seismic network for rapid post-earthquake disaster assessment, suitable for mitigating earthquake effects. In the 1990s, MEMS accelerometers revolutionized the automotive-airbag industry and are today widely used in laptops, game controllers and mobile phones. Owing to their great commercial success, research into and development of MEMS accelerometers are actively pursued around the world. Nowadays, the sensitivity and dynamic range of these sensors are sufficient to allow accurate recording of earthquakes of moderate to strong magnitude. Due to their low cost and small size, MEMS accelerometers may be employed for the realization of high-density seismic networks. The MEMS accelerometers could be installed inside sensitive places (high vulnerability and exposure), such as schools, hospitals, public buildings and places of

  17. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    Science.gov (United States)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).
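
    A minimal sketch of what a "uniform CHD" means in practice: the hypocenter probability density equals the normalised slip (moment-release) density, so simulated hypocenters are simply drawn from the slip distribution. The slip model below is a made-up Gaussian asperity; a centroid-biased CHD, as the compiled D-values suggest, would concentrate the draws closer to the slip centroid than this uniform case does.

    import numpy as np

    rng = np.random.default_rng(7)
    nx, nz = 60, 30                                   # along-strike and down-dip subfaults
    xs, zs = np.meshgrid(np.arange(nx), np.arange(nz), indexing="ij")
    slip = np.exp(-((xs - 20) ** 2 / 150.0 + (zs - 15) ** 2 / 60.0))   # hypothetical slip model

    p = (slip / slip.sum()).ravel()                   # uniform CHD: P(hypocenter) proportional to slip
    draws = rng.choice(p.size, size=1000, p=p)
    hypo_x, hypo_z = np.unravel_index(draws, slip.shape)

    centroid_x = (slip * xs).sum() / slip.sum()
    print("mean |hypocenter - centroid| along strike (subfaults):",
          round(float(np.abs(hypo_x - centroid_x).mean()), 2))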

  18. Computationally intensive econometrics using a distributed matrix-programming language.

    Science.gov (United States)

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.

  19. Study of steady-state heat transfer with various large beam intensities in ADS windowless spallation target

    International Nuclear Information System (INIS)

    Liu, Jie; Gao, Lei; Tong, Jian-fei; Mehmood, Irfan; Lu, Wen-qiang

    2015-01-01

    Thermal hydraulics of spallation target, which is regarded as the ‘heart’ of the accelerator driven system (ADS), is very complicated due to the flow of the heavy liquid metal, spallation reaction and the coupling. In this paper, the steady-state temperature distribution, based on the flow pattern and the heat deposition, in the windowless spallation target with various large beam intensities from 10 mA to 40 mA is obtained to be in line with the development of ADS in China. The results show that the shape of temperature distribution is the same as broken wing of the butterfly but the temperature gradient and the maximum temperature vary in proportion with beam intensity. The variation of temperature gradient in different zones is also used to figure out the effect of large beam intensity. It has been found that large radial and axial temperature gradient leads to large temperature gradient on the wall. This may cause extremely large thermal stresses which leads to structural material damage. The results may be applied to the future design and optimization of ADS in China. - Highlights: • Shape of temperature distribution is the same but temperature gradient and maximum temperature vary with intensity. • The variation of temperature gradient in different zones reveals the effect of large beam intensity. • Large radial and axial temperature gradient leads to large temperature gradient on the wall

  20. VS30 – A site-characterization parameter for use in building Codes, simplified earthquake resistant design, GMPEs, and ShakeMaps

    Science.gov (United States)

    Borcherdt, Roger D.

    2012-01-01

    VS30, defined as the average seismic shear-wave velocity from the surface to a depth of 30 meters, has found widespread use as a parameter to characterize site response for simplified earthquake resistant design as implemented in building codes worldwide. VS30, as initially introduced by the author for the US 1994 NEHRP Building Code, provides unambiguous definitions of site classes and site coefficients for site-dependent response spectra based on correlations derived from extensive borehole logging and comparative ground-motion measurement programs in California. Subsequent use of VS30 for development of strong ground motion prediction equations (GMPEs) and measurement of extensive sets of VS borehole data have confirmed the previous empirical correlations and established correlations of VS30 with VSZ at other depths. These correlations provide closed-form expressions to predict VS30 at a large number of additional sites and further justify VS30 as a parameter to characterize site response for simplified building codes, GMPEs, ShakeMap, and seismic hazard mapping.
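
    As a quick worked example of the definition (the travel-time average of VS over the top 30 meters, VS30 = 30 / sum(d_i / VS_i)), the sketch below computes VS30 for a hypothetical four-layer profile; the thicknesses and velocities are invented.

    import numpy as np

    depths_m = np.array([3.0, 7.0, 10.0, 15.0])      # layer thicknesses (total >= 30 m)
    vs_mps = np.array([150.0, 250.0, 400.0, 700.0])  # shear-wave velocity of each layer

    cum = np.cumsum(depths_m)                         # truncate the profile at exactly 30 m
    thick = np.clip(np.minimum(cum, 30.0) - np.concatenate(([0.0], cum[:-1])), 0.0, None)
    vs30 = 30.0 / np.sum(thick / vs_mps)
    print(f"VS30 = {vs30:.0f} m/s")                   # about 344 m/s -> NEHRP site class D here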