WorldWideScience

Sample records for maximum uhi intensity

  1. Seasonal and Spatial Characteristics of Urban Heat Islands (UHIs) in Northern West Siberian Cities

    Directory of Open Access Journals (Sweden)

    Victoria Miles

    2017-09-01

    Anthropogenic heat and modified landscapes raise air and surface temperatures in urbanized areas around the globe. This phenomenon is widely known as an urban heat island (UHI). Previous UHI studies, and specifically those based on remote sensing data, have not included cities north of 60°N. A few in situ studies have indicated that even relatively small cities at high latitudes may exhibit significantly amplified UHIs. The UHI characteristics and the factors controlling its intensity at high latitudes remain largely unknown. This study attempts to close this knowledge gap for 28 cities in northern West Siberia (NWS). NWS cities are convenient for urban intercomparison studies as they have relatively similar cold continental climates and flat, rather homogeneous landscapes. We investigated the UHI in NWS cities using the Moderate Resolution Imaging Spectroradiometer (MODIS) MOD11A2 land surface temperature (LST) product in 8-day composites. The analysis reveals that all 28 NWS cities exhibit a persistent UHI in summer and winter. The LST analysis found differences between summer and winter in the UHI effect, and supports the hypothesis of seasonal differences in the causes of UHI formation. Correlation analysis found the strongest relationship between the UHI and population (log P). Regression models using log P alone could explain 65–67% of the variability of UHIs in the region. Additional explanatory power, at least in summer, is provided by the surrounding background temperatures, which themselves are strongly correlated with latitude. The regression analysis thus confirms the important role of the surrounding temperature in explaining spatial–temporal variation of UHI intensity. These findings suggest a climatological basis for these phenomena, an aspect that, given the importance of climatic warming, deserves further study.
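
    The record's regression of UHI intensity on log-population can be illustrated with a minimal sketch; the city populations and intensity values below are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: regress UHI intensity on log10(population), mirroring the
# abstract's "log P" model. All numbers below are hypothetical placeholders.
import numpy as np

population = np.array([35e3, 60e3, 110e3, 180e3, 300e3, 600e3])   # hypothetical
uhi_intensity = np.array([0.9, 1.1, 1.4, 1.6, 1.9, 2.3])          # hypothetical, deg C

log_p = np.log10(population)
slope, intercept = np.polyfit(log_p, uhi_intensity, 1)

predicted = intercept + slope * log_p
ss_res = np.sum((uhi_intensity - predicted) ** 2)
ss_tot = np.sum((uhi_intensity - uhi_intensity.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot   # share of UHI variability explained by log P

print(f"UHI = {intercept:.2f} + {slope:.2f} * log10(P),  R^2 = {r_squared:.2f}")
```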

  2. Global Urban Heat Island (UHI) Data Set, 2013

    Data.gov (United States)

    National Aeronautics and Space Administration — The Urban Heat Island (UHI) effect represents the relatively higher temperatures found in urban areas compared to surrounding rural areas owing to higher proportions...

  3. Maximum intensity projection MR angiography using shifted image data

    International Nuclear Information System (INIS)

    Machida, Yoshio; Ichinose, Nobuyasu; Hatanaka, Masahiko; Goro, Takehiko; Kitake, Shinichi; Hatta, Junicchi.

    1992-01-01

    The quality of MR angiograms has been significantly improved in the past several years. Spatial resolution, however, is not yet sufficient for clinical use. On the other hand, MR image data can be interpolated at arbitrary positions using the Fourier shift theorem, and the quality of multi-planar reformatted images has been reported to improve remarkably when such 'shifted data' are used. In this paper, we clarify the benefit of 'shifted data' for maximum intensity projection MR angiography. Our experimental studies and theoretical considerations showed that the quality of MR angiograms is significantly improved using 'shifted data', as follows: 1) remarkable reduction of mosaic artifact, 2) improved spatial continuity of the blood vessels, and 3) reduced variance of the signal intensity along the blood vessels. In other words, the angiograms look much 'finer' than conventional ones, although the spatial resolution is not improved theoretically. Furthermore, we found that the quality of MR angiograms does not improve significantly with 'shifted data' more than twice as dense as the original data. (author)
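
    A minimal sketch of the idea behind 'shifted data', assuming it amounts to Fourier-domain zero-padding interpolation applied before the projection (an interpretation of the abstract, not the authors' exact implementation); the volume is synthetic, not MR data.

```python
# Sketch: interpolate image data onto a finer grid via Fourier zero-padding
# (equivalent to sub-pixel shifts by the Fourier shift theorem), then take the
# maximum intensity projection. The volume below is a synthetic placeholder.
import numpy as np

def fourier_upsample(volume, factor=2):
    """Zero-pad the spectrum of a 3D array to upsample it by `factor`."""
    spectrum = np.fft.fftshift(np.fft.fftn(volume))
    pad = [((s * (factor - 1)) // 2,) * 2 for s in volume.shape]
    padded = np.pad(spectrum, pad)
    upsampled = np.fft.ifftn(np.fft.ifftshift(padded))
    return np.abs(upsampled) * factor ** volume.ndim   # restore intensity scale

rng = np.random.default_rng(0)
vol = rng.random((16, 32, 32))                   # placeholder "angiographic" volume

mip_plain = vol.max(axis=0)                      # conventional MIP
mip_shifted = fourier_upsample(vol).max(axis=0)  # MIP of the interpolated data

print(mip_plain.shape, mip_shifted.shape)        # (32, 32) vs (64, 64)
```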

  4. Maximum Historical Seismic Intensity Map of S. Miguel Island (Azores)

    Science.gov (United States)

    Silveira, D.; Gaspar, J. L.; Ferreira, T.; Queiroz, G.

    The Azores archipelago is situated in the Atlantic Ocean where the American, African and Eurasian lithospheric plates meet. Its geological setting is dominated by the so-called Azores Triple Junction, located in the area where the Terceira Rift, a NW-SE to WNW-ESE fault system with a dextral component, intersects the Mid-Atlantic Ridge, which runs approximately N-S. S. Miguel Island is located in the eastern segment of the Terceira Rift and shows a high diversity of volcanic and tectonic structures. It is the largest Azorean island and includes three active trachytic central volcanoes with calderas (Sete Cidades, Fogo and Furnas) placed at the intersection of the NW-SE Terceira Rift regional faults with an E-W deep fault system thought to be a relic of a Mid-Atlantic Ridge transform fault. N-S and NE-SW faults also occur in this context. Basaltic cinder cones emplaced along NW-SE fractures link these major volcanic structures. The easternmost part of the island comprises an inactive trachytic central volcano (Povoação) and an old basaltic volcanic complex (Nordeste). Since the settlement of the island, early in the XV century, several destructive earthquakes have occurred in the Azores region. At least 11 events hit S. Miguel Island with high intensity, some of which caused several deaths and significant damage. The analysis of historical documents allowed the history and impact of all those earthquakes to be reconstructed, and new intensity maps using the 1998 European Macroseismic Scale were produced for each event. The data were then integrated to obtain the maximum historical seismic intensity map of S. Miguel. This map is regarded as an important document for hazard assessment and risk mitigation, taking into account that it indicates the location of dangerous seismogenic zones and provides a comprehensive set of data to be applied in land-use planning, emergency planning and building construction.

  5. The Influence of Creatine Monohydrate on Strength and Endurance After Doing Physical Exercise With Maximum Intensity

    Directory of Open Access Journals (Sweden)

    Asrofi Shicas Nabawi

    2017-11-01

    The purpose of this study was: (1) to analyze the effect of creatine monohydrate on strength and on endurance after physical exercise with maximum intensity; (2) to analyze the effect of non-creatine supplementation on strength and on endurance after physical exercise with maximum intensity; (3) to analyze the difference between creatine and non-creatine administration with respect to strength and endurance after exercise with maximum intensity. The research was quantitative, using quasi-experimental methods. The study used a pretest-posttest control group design, and data were analyzed with a paired-sample t-test. Data were collected with a leg muscle strength test using a back-and-leg dynamometer, a 1-minute sit-up test, a 30-second push-up test, and a VO2max test on a Cosmed Quark CPET during the pretest and posttest. The data were analyzed using SPSS 22.0. The results showed: (1) there was an effect of creatine administration on strength after exercise with maximum intensity; (2) there was an effect of creatine administration on endurance after exercise with maximum intensity; (3) there was an effect of non-creatine administration on strength after exercise with maximum intensity; (4) there was an effect of non-creatine administration on endurance after exercise with maximum intensity; (5) there was a significant difference between the creatine and non-creatine groups, with the creatine group showing a larger increase in strength and endurance after exercise with maximum intensity. Based on the above analysis, it can be concluded that strength and endurance increased in each group after the exercise intervention.

  6. Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific

    OpenAIRE

    Choi, Jae-Won; Cha, Yumi; Kim, Hae-Dong; Kang, Sung-Dae

    2016-01-01

    This study obtained the latitude where tropical cyclones (TCs) show maximum intensity and applied statistical change-point analysis to the time series of the annual average values. The analysis found that the latitude of TC maximum intensity increased from 1999. To investigate the reason behind this phenomenon, the difference between the average latitude over 1999–2013 and the average over 1977–1998 was analyzed. In a difference of 500 hPa streamline between the two ...

  7. Local Times of Galactic Cosmic Ray Intensity Maximum and Minimum in the Diurnal Variation

    Directory of Open Access Journals (Sweden)

    Su Yeon Oh

    2006-06-01

    The diurnal variation of the galactic cosmic ray (GCR) flux intensity observed by ground neutron monitors (NM) shows a sinusoidal pattern with an amplitude of about 1∼2% of the daily mean. We carried out a statistical study of the tendencies of the local times of the GCR intensity daily maximum and minimum. To test the influences of solar activity and location (cut-off rigidity) on the distribution of the local times of maximum and minimum GCR intensity, we examined the data of 1996 (solar minimum) and 2000 (solar maximum) at the low-latitude Haleakala (latitude: 20.72 N, cut-off rigidity: 12.91 GeV) and the high-latitude Oulu (latitude: 65.05 N, cut-off rigidity: 0.81 GeV) NM stations. The most frequent local times of the GCR intensity daily maximum and minimum occur about 2∼3 hours later in the solar activity maximum year 2000 than in the solar activity minimum year 1996. The Oulu NM station, whose cut-off rigidity is smaller, has its most frequent local times of GCR intensity maximum and minimum 2∼3 hours later than those of the Haleakala station. This feature is more evident at solar maximum. The phase of the daily variation in GCR depends upon the interplanetary magnetic field, which varies with solar activity, and upon the cut-off rigidity, which varies with geographic latitude.
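
    A minimal sketch of how the local times of daily maximum and minimum intensity can be extracted from hourly neutron-monitor counts by fitting a 24-hour sinusoid; the counts and the fitting approach are illustrative assumptions, not the study's actual procedure.

```python
# Sketch: fit counts ~ a*cos(wt) + b*sin(wt) + c with a 24-hour period and
# read off the phase to get the local times of maximum and minimum.
# The hourly counts below are synthetic placeholders.
import numpy as np

hours = np.arange(24)
counts = 100 + 1.5 * np.cos(2 * np.pi * (hours - 15) / 24)   # synthetic diurnal wave

w = 2 * np.pi / 24
design = np.column_stack([np.cos(w * hours), np.sin(w * hours),
                          np.ones_like(hours, dtype=float)])
(a, b, c), *_ = np.linalg.lstsq(design, counts, rcond=None)

amplitude = np.hypot(a, b)
phi = np.arctan2(b, a)            # a*cos(wt) + b*sin(wt) = R*cos(w*t - phi)
t_max = (phi / w) % 24            # local time of intensity maximum
t_min = (t_max + 12) % 24         # sinusoid minimum is half a day later

print(f"amplitude ~ {amplitude:.2f}, max near {t_max:.1f} h, min near {t_min:.1f} h")
```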

  8. Effect of a High-intensity Interval Training method on maximum oxygen consumption in Chilean schoolchildren

    Directory of Open Access Journals (Sweden)

    Sergio Galdames-Maliqueo

    2017-12-01

    Introduction: The low levels of maximum oxygen consumption (VO2max) measured in Chilean schoolchildren suggest the need for training that improves aerobic capacity. Objective: To analyze the effect of a high-intensity interval training method on maximum oxygen consumption in Chilean schoolchildren. Materials and methods: Thirty-two eighth-grade high school students took part in the study and were divided into two groups (experimental group = 16 students; control group = 16 students). The main variable analyzed was maximum oxygen consumption, assessed through the Course Navette test. A high-intensity interval training method was applied based on the maximal aerobic speed obtained from the test. A mixed ANOVA was used for statistical analysis. Results: The experimental group showed a significant increase in maximum oxygen consumption between the pretest and posttest when compared with the control group (p < 0.0001). Conclusion: The results showed a positive effect of high-intensity interval training on maximum oxygen consumption. It is concluded that high-intensity interval training is a suitable training stimulus for Chilean schoolchildren.

  9. Exploring the Urban Heat Island (UHI) Effect in Port Louis, Mauritius

    African Journals Online (AJOL)

    2012

    2014-10-13

    Oct 13, 2014 ... namely: environmental contamination stemming from traffic congestion, the ... problem of UHI may become a more important issue than global warming because the rate of ..... MIGRATION, population distribution and development in the world. ... Urban Heat Island and Climate Change: An Assessment of.

  10. Urban Heat Islands (UHI) and the influence of city parks within the urban environment.

    Science.gov (United States)

    Garcia, W.; Shandas, V.; Voelkel, J.; Espinoza, D.

    2016-12-01

    As cities grow outward and their populations increase, the Urban Heat Island (UHI) phenomenon becomes an ever more important topic for reducing environmental stressors. When UHI combines with human sensitivities such as pre-existing health conditions and other vulnerabilities, finding an effective way to cool our cities is a matter of life and death. One way to cool an area is to introduce vegetation, which is abundant in city parks. This study measures the cooling effect and temperature gradient of city parks, characterizing the relationship between the cooling effects within parks and surrounding neighborhoods. Past studies of the UHI are largely based on satellite images and, more recently, car traverses that describe ambient temperatures. The present project aims to understand the effects of parks on the UHI by asking two research questions: (1) how do the physical characteristics and designs of city parks impact the variation in ambient temperatures? and (2) what effect does a park have on cooling the surrounding neighborhoods? We address these questions by using a bicycle mounted with a temperature probe and a series of geospatial analytics. The bicycle collects temperature data every second, and the traverse intervals are an hour long to limit the influence of normal daily temperature fluctuations. Preliminary analysis shows that there is a temperature gradient within the parks (Figure 1). Further, the average temperature of an urban park could cool the surrounding area by upwards of 2°C, depending on the physical characteristics of the park and neighborhood. Our results suggest that smaller parks and their design can reduce heat stress, particularly among vulnerable populations. These results can help urban planners make informed decisions when developing future city infrastructure.

  11. Estimate of annual daily maximum rainfall and intense rain equation for the Formiga municipality, MG, Brazil

    Directory of Open Access Journals (Sweden)

    Giovana Mara Rodrigues Borges

    2016-11-01

    Knowledge of the probabilistic behavior of rainfall is extremely important for the design of drainage systems, dam spillways, and other hydraulic projects. This study therefore examined statistical models to predict annual daily maximum rainfall as well as intense-rain models for the city of Formiga - MG. To do this, annual maximum daily rainfall data were ranked in decreasing order to identify the statistical distribution that best describes the exceedance probability. A daily rainfall disaggregation methodology was used for the intense-rain model studies, adjusted with Intensity-Duration-Frequency (IDF) and exponential models. The study found that the Gumbel model adhered best to the observed frequencies, as indicated by the chi-squared test, and that the exponential model best conforms to the observed data for predicting intense rains.
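
    A minimal sketch of the kind of analysis described: fitting a Gumbel distribution to annual daily-maximum rainfall and deriving a return-period depth; the rainfall record and the 1-hour disaggregation ratio are hypothetical assumptions, not values from the study.

```python
# Sketch: fit a Gumbel distribution to annual daily-maximum rainfall, estimate
# the depth for a chosen return period, and derive a sub-daily intensity via a
# simple (hypothetical) disaggregation ratio.
import numpy as np
from scipy import stats

annual_max_rain_mm = np.array([62, 78, 55, 91, 70, 84, 66, 102, 73, 88],
                              dtype=float)                      # hypothetical record, mm/day

loc, scale = stats.gumbel_r.fit(annual_max_rain_mm)

return_period_yr = 50
p_non_exceedance = 1 - 1 / return_period_yr
depth_50yr = stats.gumbel_r.ppf(p_non_exceedance, loc, scale)   # mm in 24 h

# Hypothetical disaggregation: assume the 1-hour depth is ~42% of the 24-hour
# depth; this ratio must come from local studies.
depth_1h = 0.42 * depth_50yr
intensity_1h = depth_1h / 1.0                                   # mm/h

print(f"50-yr daily maximum ~ {depth_50yr:.0f} mm, 1-h intensity ~ {intensity_1h:.0f} mm/h")
```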

  12. Prediction of maximum earthquake intensities for the San Francisco Bay region

    Science.gov (United States)

    Borcherdt, Roger D.; Gibbs, James F.

    1975-01-01

    The intensity data for the California earthquake of April 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan Formation is: Intensity = 2.69 - 1.90 log (Distance) (km). For sites on other geologic units, intensity increments, derived with respect to this empirical relation, correlate strongly with the Average Horizontal Spectral Amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is: Intensity Increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan Formation, 0.64 for the Great Valley Sequence, 0.82 for Santa Clara Formation, 1.34 for alluvium, 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.
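
    The two empirical relations quoted above can be combined into a simple predictor. Treating predicted intensity as the distance term plus the geologic-unit increment, and assuming base-10 logarithms, is an illustrative reading of the abstract rather than the authors' exact procedure.

```python
# Sketch: predicted 1906-type intensity = distance-attenuation term (fit on
# Franciscan sites) + average geologic-unit increment quoted in the abstract.
import math

AVG_INCREMENT = {              # average intensity increments from the abstract
    "granite": -0.29,
    "Franciscan Formation": 0.19,
    "Great Valley Sequence": 0.64,
    "Santa Clara Formation": 0.82,
    "alluvium": 1.34,
    "bay mud": 2.43,
}

def predicted_intensity(distance_km, geologic_unit):
    base = 2.69 - 1.90 * math.log10(distance_km)   # assuming log is base 10
    return base + AVG_INCREMENT[geologic_unit]

for unit in ("granite", "alluvium", "bay mud"):
    print(unit, round(predicted_intensity(10.0, unit), 2))
```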

  13. Prediction of maximum earthquake intensities for the San Francisco Bay region

    Energy Technology Data Exchange (ETDEWEB)

    Borcherdt, R.D.; Gibbs, J.F.

    1975-01-01

    The intensity data for the California earthquake of Apr 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan formation is intensity = 2.69 - 1.90 log (distance) (km). For sites on other geologic units, intensity increments, derived with respect to this empirical relation, correlate strongly with the average horizontal spectral amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is intensity increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan formation, 0.64 for the Great Valley sequence, 0.82 for Santa Clara formation, 1.34 for alluvium, and 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.

  14. The maximum possible stress intensity factor for a crack in an unknown residual stress field

    International Nuclear Information System (INIS)

    Coules, H.E.; Smith, D.J.

    2015-01-01

    Residual and thermal stress fields in engineering components can act on cracks and structural flaws, promoting or inhibiting fracture. However, these stresses are limited in magnitude by the ability of materials to sustain them elastically. As a consequence, the stress intensity factor which can be applied to a given defect by a self-equilibrating stress field is also limited. We propose a simple weight function method for determining the maximum stress intensity factor which can occur for a given crack or defect in a one-dimensional self-equilibrating stress field, i.e. an upper bound for the residual stress contribution to K_I. This can be used for analysing structures containing defects and subject to residual stress without any information about the actual stress field which exists in the structure being analysed. A number of examples are given, including long radial cracks and fully-circumferential cracks in thick-walled hollow cylinders containing self-equilibrating stresses. - Highlights: • An upper limit to the contribution of residual stress to stress intensity factor. • The maximum K_I for self-equilibrating stresses in several geometries is calculated. • A weight function method can determine this maximum for 1-dimensional stress fields. • Simple MATLAB scripts for calculating maximum K_I provided as supplementary material.
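
    One way to read the underlying optimisation is as a linear program: maximise the weighted integral of stress subject to a yield bound and equilibrium constraints. The sketch below poses that problem numerically; the weight function and the specific constraints are illustrative placeholders and not the paper's formulation (the authors supply their own MATLAB scripts).

```python
# Sketch: find the self-equilibrating stress field, bounded by +/- yield,
# that maximises K_I = integral h(x)*sigma(x) dx, posed as a linear program.
import numpy as np
from scipy.optimize import linprog

W, a, sigma_y, n = 50.0, 10.0, 300.0, 500   # section width, crack depth (mm), MPa, nodes
x = np.linspace(0.0, W, n)
dx = x[1] - x[0]

# Placeholder weight function: near-tip edge-crack form over the crack, zero elsewhere.
h = np.where(x < a, 2.0 / np.sqrt(np.pi * np.maximum(a - x, 1e-6)), 0.0)

# Maximise sum(h*sigma)*dx == minimise -h.sigma, subject to |sigma| <= sigma_y,
# force balance sum(sigma)*dx = 0 and moment balance sum(sigma*x)*dx = 0.
A_eq = np.vstack([np.full(n, dx), x * dx])
res = linprog(c=-h * dx, A_eq=A_eq, b_eq=[0.0, 0.0],
              bounds=[(-sigma_y, sigma_y)] * n, method="highs")

k_max = -res.fun                            # upper bound on K_I (MPa*sqrt(mm))
print(f"maximum residual-stress K_I ~ {k_max:.0f} MPa*sqrt(mm)")
```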

  15. Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific

    Directory of Open Access Journals (Sweden)

    Jae-Won Choi

    2016-01-01

    This study obtained the latitude where tropical cyclones (TCs) show maximum intensity and applied statistical change-point analysis to the time series of the annual average values. The analysis found that the latitude of TC maximum intensity increased from 1999. To investigate the reason behind this phenomenon, the difference between the average over 1999–2013 and the average over 1977–1998 was analyzed. In the difference of the 500 hPa streamlines between the two periods, anomalous anticyclonic circulations were strong at 30°–50°N, while an anomalous monsoon trough was located to the north of the South China Sea. This anomalous monsoon trough extended eastward to 145°E. The middle-latitude region of East Asia is affected by anomalous southeasterlies due to these anomalous anticyclonic circulations and the anomalous monsoon trough. These anomalous southeasterlies act as anomalous steering flows that direct TCs toward the middle-latitude region of East Asia. As a result, TCs during 1999–2013 reached their maximum intensity at higher latitudes than TCs during 1977–1998.
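
    A minimal sketch of a simple change-point test of the kind described, applied to an annual latitude series; the series is synthetic, and the t-statistic scan is a generic method rather than necessarily the authors' exact procedure.

```python
# Sketch: for each candidate year, compare the mean latitude before and after
# with a two-sample t statistic and pick the year with the largest |t|.
# The latitude series below is synthetic, not best-track data.
import numpy as np
from scipy import stats

years = np.arange(1977, 2014)
rng = np.random.default_rng(1)
latitude = np.where(years < 1999, 22.0, 24.0) + rng.normal(0, 1.0, years.size)

best_year, best_t = None, 0.0
for i in range(5, years.size - 5):          # require a few years on each side
    t, _ = stats.ttest_ind(latitude[:i], latitude[i:], equal_var=False)
    if abs(t) > abs(best_t):
        best_year, best_t = years[i], t

print(f"estimated change point: {best_year} (|t| = {abs(best_t):.1f})")
```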

  16. Spatial and Temporal Trends in the Location of the Lifetime Maximum Intensity of Tropical Cyclones

    Directory of Open Access Journals (Sweden)

    Sarah A. Tennille

    2017-10-01

    The climatology of tropical cyclones is an immediate research need, specifically to better understand their long-term patterns and elucidate their future in a changing climate. One important pattern that has recently been detected is the poleward shift of the lifetime maximum intensity (LMI) of tropical cyclones. This study further assessed the recent (1977–2015) spatial changes in the LMI of tropical cyclones, specifically those of tropical storm strength or stronger in the North Atlantic and northern West Pacific basins. Analyses of moving decadal means suggested that LMI locations migrated south in the North Atlantic and north in the West Pacific. In addition to a linear trend, there is a cyclical migration of LMI that is especially apparent in the West Pacific. Relationships between LMI migration and intensity were explored, as well as LMI location relative to landfall. The southerly trend of LMI in the North Atlantic was most prevalent in the strongest storms, resulting in these storms reaching their LMI farther from land. The relationship between intensity and LMI migration in the West Pacific was not as clear, but the most intense storms have been reaching LMI closer to their eventual landfall location. This work adds to those emphasizing the importance of understanding the climatology of the most intense hurricanes and shows there are potential human impacts resulting from any migration of LMI.

  17. Underwater Hyperspectral Imaging (UHI) for Assessing the Coverage of Drill Cuttings on Benthic Habitats

    Science.gov (United States)

    Erdal, I.; Sandvik Aas, L. M.; Cochrane, S.; Ekehaug, S.; Hansen, I. M.

    2016-02-01

    Larger-scale mapping of seabed areas requires improved methods in order to achieve effective and sound marine management. The current state of the art for visual surveys involves video transects, which is a proven yet time-consuming and subjective method. Underwater hyperspectral imaging (UHI) uses the colour-sensitive information in the visible light reflected from objects on the seafloor to automatically identify seabed organisms and other objects of interest (OOIs). A spectral library containing optical fingerprints of a range of OOIs is used in the classification. The UHI is a push-broom hyperspectral camera with a state-of-the-art CMOS sensor ensuring high sensitivity and low noise levels. Dedicated lamps illuminate the imaged area of the seafloor. Specialized software is used both for processing raw data and for geo-localization and OOI identification. The processed hyperspectral images are used as a reference when extracting new spectral data for OOIs for the spectral library. By using the spectral library in classification algorithms, large seafloor areas can be classified automatically. Recent advances in UHI classification include mapping of areas affected by drill cuttings. Tools for automated classification of seabed that has a different bottom composition than adjacent baseline areas are under development. Tests have been applied to a transect along a gradient from the drilling hole to baseline seabed. Some areas along the transect were identified as different from the baseline seabed. This finding was supported by results from traditional seabed mapping methods. We propose that this can be a useful tool for tomorrow's environmental mapping and monitoring of drill sites.
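
    A minimal sketch of spectral-library classification using the spectral angle between each pixel spectrum and the library entries; the spectra are synthetic placeholders, and the spectral-angle criterion is an assumption, not necessarily the algorithm used in the UHI software.

```python
# Sketch: assign each pixel spectrum to the library entry with the smallest
# spectral angle. Library spectra and pixels below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_bands = 60
library = {                                   # hypothetical optical fingerprints
    "baseline sediment": rng.random(n_bands),
    "drill cuttings": rng.random(n_bands),
    "coral": rng.random(n_bands),
}
pixels = rng.random((1000, n_bands))          # placeholder image, flattened

def spectral_angle(a, b):
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

names = list(library)
angles = np.array([[spectral_angle(p, library[n]) for n in names] for p in pixels])
labels = np.array(names)[angles.argmin(axis=1)]

print({n: int((labels == n).sum()) for n in names})
```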

  18. [Polish guidelines of 2001 for maximum admissible intensities in high frequency EMF versus European Union recommendations].

    Science.gov (United States)

    Aniołczyk, Halina

    2003-01-01

    In 1999, a draft of amendments to the maximum admissible intensities (MAI) of electromagnetic fields (0 Hz-300 GHz) was prepared by Professor H. Korniewicz of the Central Institute for Labour Protection, Warsaw, in cooperation with the Nofer Institute of Occupational Medicine, Łódź (radio- and microwaves) and the Military Institute of Hygiene and Epidemiology, Warsaw (pulse radiation). Before 2000, the development of the national MAI guidelines for the frequency range of 0.1 MHz-300 GHz was based on the knowledge of biological and health effects of EMF exposure available at the turn of the 1960s. The current basis for establishing international MAI standards is the well-documented thermal effect, measured by the value of the specific absorption rate (SAR), whereas the effects of resonant absorption impose the nature of the functional dependency on EMF frequency. The Russian standards, already thoroughly analyzed, still take so-called non-thermal effects and the concept of the energetic load for a work shift with its progressive averaging (see the hazardous zone in the Polish guidelines) as a basis for setting maximum admissible intensities. The World Health Organization recommends harmonization of the EMF protection guidelines existing in different countries with the guidelines of the International Commission on Non-Ionizing Radiation Protection (ICNIRP), and its position is supported by the European Union.

  19. Pulmonary nodules: sensitivity of maximum intensity projection versus that of volume rendering of 3D multidetector CT data

    NARCIS (Netherlands)

    Peloschek, Philipp; Sailer, Johannes; Weber, Michael; Herold, Christian J.; Prokop, Mathias; Schaefer-Prokop, Cornelia

    2007-01-01

    PURPOSE: To prospectively compare maximum intensity projection (MIP) and volume rendering (VR) of multidetector computed tomographic (CT) data for the detection of small intrapulmonary nodules. MATERIALS AND METHODS: This institutional review board-approved prospective study included 20 oncology

  20. Detection of pulmonary nodules at paediatric CT: maximum intensity projections and axial source images are complementary

    International Nuclear Information System (INIS)

    Kilburn-Toppin, Fleur; Arthurs, Owen J.; Tasker, Angela D.; Set, Patricia A.K.

    2013-01-01

    Maximum intensity projection (MIP) images might be useful in helping to differentiate small pulmonary nodules from adjacent vessels on thoracic multidetector CT (MDCT). The aim was to evaluate the benefits of axial MIP images over axial source images for the paediatric chest in an interobserver variability study. We included 46 children with extra-pulmonary solid organ malignancy who had undergone thoracic MDCT. Three radiologists independently read 2-mm axial and 10-mm MIP image datasets, recording the number of nodules, size and location, overall time taken and confidence. There were 83 nodules (249 total reads among three readers) in 46 children (mean age 10.4 ± 4.98 years, range 0.3-15.9 years; 24 boys). Consensus read was used as the reference standard. Overall, three readers recorded significantly more nodules on MIP images (228 vs. 174; P < 0.05), improving sensitivity from 67% to 77.5% (P < 0.05) but with lower positive predictive value (96% vs. 85%, P < 0.005). MIP images took significantly less time to read (71.6 ± 43.7 s vs. 92.9 ± 48.7 s; P < 0.005) but did not improve confidence levels. Using 10-mm axial MIP images for nodule detection in the paediatric chest enhances diagnostic performance, improving sensitivity and reducing reading time when compared with conventional axial thin-slice images. Axial MIP and axial source images are complementary in thoracic nodule detection. (orig.)
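
    A minimal sketch of building 10-mm MIP images from 2-mm axial source slices by taking the maximum over groups of five contiguous slices; the volume is a synthetic placeholder, and whether the study used contiguous or overlapping slabs is not stated in the abstract.

```python
# Sketch: 10-mm slab MIPs from 2-mm source slices (5 slices per slab).
import numpy as np

rng = np.random.default_rng(3)
volume = rng.random((100, 256, 256))      # 100 axial slices, 2 mm each (placeholder)

slab = 5                                  # 5 x 2 mm = 10 mm slab thickness
n_slabs = volume.shape[0] // slab
mip = volume[:n_slabs * slab].reshape(n_slabs, slab, 256, 256).max(axis=1)

print(volume.shape, "->", mip.shape)      # (100, 256, 256) -> (20, 256, 256)
```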

  1. Comparison of maximum intensity projection and digitally reconstructed radiographic projection for carotid artery stenosis measurement

    International Nuclear Information System (INIS)

    Hyde, Derek E.; Habets, Damiaan F.; Fox, Allan J.; Gulka, Irene; Kalapos, Paul; Lee, Don H.; Pelz, David M.; Holdsworth, David W.

    2007-01-01

    Digital subtraction angiography is being supplanted by three-dimensional imaging techniques in many clinical applications, leading to extensive use of maximum intensity projection (MIP) images to depict volumetric vascular data. The MIP algorithm produces intensity profiles that are different than conventional angiograms, and can also increase the vessel-to-tissue contrast-to-noise ratio. We evaluated the effect of the MIP algorithm in a clinical application where quantitative vessel measurement is important: internal carotid artery stenosis grading. Three-dimensional computed rotational angiography (CRA) was performed on 26 consecutive symptomatic patients to verify an internal carotid artery stenosis originally found using duplex ultrasound. These volumes of data were visualized using two different postprocessing projection techniques: MIP and digitally reconstructed radiographic (DRR) projection. A DRR is a radiographic image simulating a conventional digitally subtracted angiogram, but it is derived computationally from the same CRA dataset as the MIP. By visualizing a single volume with two different projection techniques, the postprocessing effect of the MIP algorithm is isolated. Vessel measurements were made, according to the NASCET guidelines, and percentage stenosis grades were calculated. The paired t-test was used to determine if the measurement difference between the two techniques was statistically significant. The CRA technique provided an isotropic voxel spacing of 0.38 mm. The MIPs and DRRs had a mean signal-difference-to-noise-ratio of 30:1 and 26:1, respectively. Vessel measurements from MIPs were, on average, 0.17 mm larger than those from DRRs (P<0.0001). The NASCET-type stenosis grades tended to be underestimated on average by 2.4% with the MIP algorithm, although this was not statistically significant (P=0.09). The mean interobserver variability (standard deviation) of both the MIP and DRR images was 0.35 mm. It was concluded that the MIP

  2. Changes of Physiological Tremor Following Maximum Intensity Exercise in Male and Female Young Swimmers

    Directory of Open Access Journals (Sweden)

    Gajewski Jan

    2015-12-01

    Purpose. The aim of this study was to determine the changes in postural physiological tremor following maximum-intensity effort performed on an arm ergometer by young male and female swimmers. Methods. Ten female and nine male young swimmers served as subjects. Forearm tremor was measured accelerometrically in the sitting position before the 30-second Wingate Anaerobic Test on an arm ergometer and then 5, 15 and 30 minutes post-test. Results. Low-frequency tremor log-amplitude (L1−5) increased (repeated factor: p < 0.05) from −7.92 ± 0.45 to −7.44 ± 0.45 and from −6.81 ± 0.52 to −6.35 ± 0.58 in women and men, respectively (gender: p < 0.05), 5 minutes post-test. Tremor log-amplitude (L15−20) increased (repeated factor: p < 0.001) from −9.26 ± 0.70 to −8.59 ± 0.61 and from −8.79 ± 0.65 to −8.39 ± 0.79 in women and men, respectively, 5 minutes post-test. No effect of gender was found for the high-frequency range. Increased tremor amplitude was observed even 30 minutes post-exercise. The mean frequency of the tremor spectra gradually decreased post-exercise (p < 0.001). Conclusions. Exercise-induced changes in tremor were similar in males and females. Fatigue produced a decrement in the mean frequency of tremor, which suggests decreased muscle stiffness post-exercise. Such changes in tremor after exercise may be used as an indicator of fatigue in the nervous system.

  3. Use of Maximum Intensity Projections (MIPs) for target outlining in 4DCT radiotherapy planning.

    Science.gov (United States)

    Muirhead, Rebecca; McNee, Stuart G; Featherstone, Carrie; Moore, Karen; Muscat, Sarah

    2008-12-01

    Four-dimensional computed tomography (4DCT) is currently being introduced to radiotherapy centers worldwide, for use in radical radiotherapy planning for non-small cell lung cancer (NSCLC). A significant drawback is the time required to delineate 10 individual CT scans for each patient. Every department will hence ask whether the single Maximum Intensity Projection (MIP) scan can be used as an alternative. Although the problems regarding the use of the MIP in node-positive disease have been discussed in the literature, a comprehensive study assessing its use has not been published. We compared an internal target volume (ITV) created using the MIP to an ITV created from the composite volume of 10 clinical target volumes (CTVs) delineated on the 10 phases of the 4DCT. 4DCT data were collected from 14 patients with NSCLC. In each patient, the ITV was delineated on the MIP image (ITV_MIP) and a composite ITV created from the 10 CTVs delineated on each of the 10 scans in the dataset. The structures were compared by assessment of volumes of overlap and exclusion. There was a median of 19.0% (range, 5.5-35.4%) of the volume of ITV_10phase not enclosed by the ITV_MIP, demonstrating that the use of the MIP could result in under-treatment of disease. In contrast, only a very small amount of the ITV_MIP was not enclosed by the ITV_10phase (median of 2.3%, range, 0.4-9.8%), indicating the ITV_10phase covers almost all of the tumor tissue as identified by MIP. Although there were only two Stage I patients, both demonstrated very similar ITV_10phase and ITV_MIP volumes. These findings suggest that Stage I NSCLC tumors could be outlined on the MIP alone. In Stage II and III tumors the ITV_10phase would be more reliable. To prevent under-treatment of disease, the MIP image can only be used for delineation in Stage I tumors.
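
    A minimal sketch of the volume comparison described above: the union of the ten per-phase CTV masks forms ITV_10phase, which is then compared with ITV_MIP by the fraction of each volume not enclosed by the other. The masks are random placeholders, so the printed numbers only demonstrate the computation.

```python
# Sketch: overlap/exclusion metrics between ITV_10phase (union of 10 CTVs)
# and ITV_MIP, on binary masks. All masks below are random placeholders.
import numpy as np

rng = np.random.default_rng(4)
ctv_phases = rng.random((10, 64, 64, 64)) > 0.97   # ten placeholder CTV masks
itv_mip = rng.random((64, 64, 64)) > 0.97          # placeholder MIP-based ITV

itv_10phase = ctv_phases.any(axis=0)               # union over the ten phases

missed = (itv_10phase & ~itv_mip).sum() / itv_10phase.sum() * 100
excess = (itv_mip & ~itv_10phase).sum() / itv_mip.sum() * 100

print(f"{missed:.1f}% of ITV_10phase outside ITV_MIP, "
      f"{excess:.1f}% of ITV_MIP outside ITV_10phase")
```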

  4. Is the poleward migration of tropical cyclone maximum intensity associated with a poleward migration of tropical cyclone genesis?

    Science.gov (United States)

    Daloz, Anne Sophie; Camargo, Suzana J.

    2018-01-01

    A recent study showed that the global average latitude where tropical cyclones achieve their lifetime-maximum intensity has been migrating poleward at a rate of about one-half degree of latitude per decade over the last 30 years in each hemisphere. However, it does not answer a critical question: is the poleward migration of tropical cyclone lifetime-maximum intensity associated with a poleward migration of tropical cyclone genesis? In this study we will examine this question. First we analyze changes in the environmental variables associated with tropical cyclone genesis, namely entropy deficit, potential intensity, vertical wind shear, vorticity, skin temperature and specific humidity at 500 hPa in reanalysis datasets between 1980 and 2013. Then, a selection of these variables is combined into two tropical cyclone genesis indices that empirically relate tropical cyclone genesis to large-scale variables. We find a shift toward greater (smaller) average potential number of genesis at higher (lower) latitudes over most regions of the Pacific Ocean, which is consistent with a migration of tropical cyclone genesis towards higher latitudes. We then examine the global best track archive and find coherent and significant poleward shifts in mean genesis position over the Pacific Ocean basins.

  5. SU-E-T-174: Evaluation of the Optimal Intensity Modulated Radiation Therapy Plans Done On the Maximum and Average Intensity Projection CTs

    Energy Technology Data Exchange (ETDEWEB)

    Jurkovic, I [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States); Stathakis, S; Li, Y; Patel, A; Vincent, J; Papanikolaou, N; Mavroidis, P [Cancer Therapy and Research Center University of Texas Health Sciences Center at San Antonio, San Antonio, TX (United States)

    2014-06-01

    Purpose: To determine the difference in coverage between plans done on average intensity projection and maximum intensity projection CT data sets for lung patients and to establish correlations between different factors influencing the coverage. Methods: For six lung cancer patients, 10 phases of equal duration through the respiratory cycle and the maximum and average intensity projections (MIP and AIP) were obtained from their 4DCT datasets. The MIP and AIP datasets had three GTVs delineated (GTVaip, delineated on AIP; GTVmip, delineated on MIP; and GTVfus, delineated on each of the 10 phases and summed up). From each GTV, planning target volumes (PTV) were then created by adding additional margins. For each of the PTVs an IMRT plan was developed on the AIP dataset. The plans were then copied to the MIP data set and recalculated. Results: The effective depths in AIP cases were significantly smaller than in MIP (p < 0.001). The Pearson correlation coefficient of r = 0.839 indicates a strong positive linear relationship between the average percentage difference in effective depths and average PTV coverage on the MIP data set. The V20Gy of the involved lung depends on the PTV coverage. The relationship between the PTVaip mean CT number difference and PTVaip coverage on the MIP data set gives r = 0.830. When the plans are produced on MIP and copied to AIP, r equals −0.756. Conclusion: The correlation between the AIP and MIP data sets indicates that the selection of the data set for developing the treatment plan affects the final outcome (cases with a high average percentage difference in effective depths between AIP and MIP should be calculated on AIP). The percentage of the lung volume receiving a higher dose depends on how well the PTV is covered, regardless of which data set the plan is developed on.

  6. Validation of new 3D post processing algorithm for improved maximum intensity projections of MR angiography acquisitions in the brain

    Energy Technology Data Exchange (ETDEWEB)

    Bosmans, H; Verbeeck, R; Vandermeulen, D; Suetens, P; Wilms, G; Maaly, M; Marchal, G; Baert, A L [Louvain Univ. (Belgium)

    1995-12-01

    The objective of this study was to validate a new post processing algorithm for improved maximum intensity projections (mip) of intracranial MR angiography acquisitions. The core of the post processing procedure is a new brain segmentation algorithm. Two seed areas, background and brain, are automatically detected. A 3D region grower then grows both regions towards each other and this preferentially towards white regions. In this way, the skin gets included into the final 'background region' whereas cortical blood vessels and all brain tissues are included in the 'brain region'. The latter region is then used for mip. The algorithm runs less than 30 minutes on a full dataset on a Unix workstation. Images from different acquisition strategies including multiple overlapping thin slab acquisition, magnetization transfer (MT) MRA, Gd-DTPA enhanced MRA, normal and high resolution acquisitions and acquisitions from mid field and high field systems were filtered. A series of contrast enhanced MRA acquisitions obtained with identical parameters was filtered to study the robustness of the filter parameters. In all cases, only a minimal manual interaction was necessary to segment the brain. The quality of the mip was significantly improved, especially in post Gd-DTPA acquisitions or using MT, due to the absence of high intensity signals of skin, sinuses and eyes that otherwise superimpose on the angiograms. It is concluded that the filter is a robust technique to improve the quality of MR angiograms.

  7. Validation of new 3D post processing algorithm for improved maximum intensity projections of MR angiography acquisitions in the brain

    International Nuclear Information System (INIS)

    Bosmans, H.; Verbeeck, R.; Vandermeulen, D.; Suetens, P.; Wilms, G.; Maaly, M.; Marchal, G.; Baert, A.L.

    1995-01-01

    The objective of this study was to validate a new post processing algorithm for improved maximum intensity projections (mip) of intracranial MR angiography acquisitions. The core of the post processing procedure is a new brain segmentation algorithm. Two seed areas, background and brain, are automatically detected. A 3D region grower then grows both regions towards each other and this preferentially towards white regions. In this way, the skin gets included into the final 'background region' whereas cortical blood vessels and all brain tissues are included in the 'brain region'. The latter region is then used for mip. The algorithm runs less than 30 minutes on a full dataset on a Unix workstation. Images from different acquisition strategies including multiple overlapping thin slab acquisition, magnetization transfer (MT) MRA, Gd-DTPA enhanced MRA, normal and high resolution acquisitions and acquisitions from mid field and high field systems were filtered. A series of contrast enhanced MRA acquisitions obtained with identical parameters was filtered to study the robustness of the filter parameters. In all cases, only a minimal manual interaction was necessary to segment the brain. The quality of the mip was significantly improved, especially in post Gd-DTPA acquisitions or using MT, due to the absence of high intensity signals of skin, sinuses and eyes that otherwise superimpose on the angiograms. It is concluded that the filter is a robust technique to improve the quality of MR angiograms

  8. The extent and intensity of the urban heat island in Iași city, Romania

    Science.gov (United States)

    Sfîcă, Lucian; Ichim, Pavel; Apostol, Liviu; Ursu, Adrian

    2017-10-01

    The study underlines the characteristics of the urban heat island of Iași (Iași's UHI) on the basis of 3 years of air temperature measurements obtained from fixed-point observations. We focus on the identification of UHI development and intensity as expressed by the temperature differences between the city centre and the rural surroundings. Annual, seasonal and daily characteristics of Iași's UHI are investigated at the level of the classical weather observation times. In brief, a UHI intensity of 0.8 °C and a spatial extent corresponding to the densely built area of the city were delineated. The Iași UHI is stronger during calm summer nights, when the inner city is 2.5-3 °C warmer than the surroundings, and is weaker during windy spring days. The specific features of Iași's UHI bear a profound connection to the specificity of the urban structure, the high atmospheric stability in the region and the local topography. Also, the effects of Iași's UHI on some environmental aspects are presented as case studies. For instance, under the direct influence of the UHI, we have observed that in the city centre the apricot tree blossoms earlier (by up to 4 days) and the depth of the snow cover is significantly lower (by up to 10 cm for a rural snow depth of 30 cm) than in the surrounding areas.

  9. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm(2)). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect.
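
    A minimal sketch of an MLEM update of the kind described, recovering a 1D source profile from a measured fluence profile; the Gaussian system matrix stands in for the ray-tracing through the collimation, and all parameter values are illustrative assumptions.

```python
# Sketch: multiplicative MLEM updates s <- s * A^T(m / (A s)) / A^T 1 for a
# 1D source profile, with a Gaussian blur as a placeholder system matrix.
import numpy as np

n = 101
x = np.linspace(-5.0, 5.0, n)                      # mm across the source plane

true_source = np.exp(-x**2 / (2 * 0.7**2))         # hypothetical ~1.6 mm FWHM source
A = np.exp(-(x[:, None] - x[None, :])**2 / (2 * 0.5**2))   # placeholder system matrix
A /= A.sum(axis=1, keepdims=True)
measured = A @ true_source                         # noiseless "film" fluence profile

estimate = np.ones(n)                              # flat initial guess
sensitivity = A.sum(axis=0)
for _ in range(200):                               # MLEM iterations
    projection = A @ estimate
    estimate *= (A.T @ (measured / np.maximum(projection, 1e-12))) / sensitivity

fwhm = np.ptp(x[estimate >= estimate.max() / 2])   # crude FWHM of the estimate
print(f"reconstructed FWHM ~ {fwhm:.2f} mm")
```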

  10. Summertime heat island intensities in three high-rise housing quarters in inner-city Shanghai China: Building layout, density and greenery

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Feng; Lau, Stephen S.Y. [Department of Architecture, The University of Hong Kong, Pokfulam Road, Hong Kong SAR (China); Qian, Feng [College of Architecture and Urban Planning (CAUP), Tongji University, 1239 Siping Road, Shanghai, 200092 (China)

    2010-01-15

    Shanghai, as the largest city in China, has been suffering from an ever-worsening thermal environment due to its explosive urbanization rate. As an indication of urbanization impact, urban heat islands (UHI) can give rise to a variety of problems. This paper reports the results of an empirical study on the summertime UHI patterns in three high-rise residential quarters in inner-city Shanghai. Site means of UHI intensity are compared; case studies are carried out on strategically located measurement points; and regression analysis is used to examine the significance of the on-site design variables in relation to UHI intensity. It is found that site characteristics in plot layout, density and greenery have different impacts on UHI-day and UHI-night patterns. Day-time UHI is closely related to the site shading factor. Total site factor (TSF), as an integrated measure of solar admittance, shows a higher explanatory power for UHI-day than sky view factor (SVF) does under a partially cloudy sky condition. Night-time UHI cannot be statistically well explained by the on-site variables in use, indicating influences from anthropogenic heat and other sources. Evaporative cooling by vegetation plays a more important role at night than during the day. Considered diurnally, the semi-enclosed plot layout with a fairly high density and tree cover has the best outdoor thermal conditions. Design implications based on the findings, with consideration of other important environmental design issues, are briefly discussed. (author)

  11. MR tractography; Visualization of structure of nerve fiber system from diffusion weighted images with maximum intensity projection method

    Energy Technology Data Exchange (ETDEWEB)

    Kinosada, Yasutomi; Okuda, Yasuyuki (Mie Univ., Tsu (Japan). School of Medicine); Ono, Mototsugu (and others)

    1993-02-01

    We developed a new noninvasive technique to visualize the anatomical structure of the nerve fiber system in vivo, and named this technique magnetic resonance (MR) tractography and the acquired image an MR tractogram. MR tractography has two steps. One is to obtain diffusion-weighted images sensitized along axes appropriate for depicting the intended nerve fibers with anisotropic water diffusion MR imaging. The other is to extract the anatomical structure of the nerve fiber system from a series of diffusion-weighted images by the maximum intensity projection method. To examine the clinical usefulness of the proposed technique, many contiguous, thin (3 mm) coronal two-dimensional sections of the brain were acquired sequentially in normal volunteers and selected patients with paralyses, on a 1.5 Tesla MR system (Signa, GE) with an ECG-gated Stejskal-Tanner pulse sequence. The structure of the nerve fiber system of normal volunteers was almost the same as the anatomy. The tractograms of patients with paralyses clearly showed the degeneration of nerve fibers and were correlated with clinical symptoms. MR tractography showed great promise for the study of neuroanatomy and neuroradiology. (author).

  12. Validation of a 4D-PET Maximum Intensity Projection for Delineation of an Internal Target Volume

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, Jason, E-mail: jason.callahan@petermac.org [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Kron, Tomas [Department of Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia); Schneider-Kolsky, Michal [Department of Medical Imaging and Radiation Science, Monash University, Clayton, Victoria (Australia); Dunn, Leon [Department of Applied Physics, RMIT University, Melbourne (Australia); Thompson, Mick [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Siva, Shankar [Department of Radiation Oncology, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Aarons, Yolanda [Department of Radiation Oncology, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia); Binns, David [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Hicks, Rodney J. [Centre for Molecular Imaging, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne (Australia)

    2013-07-15

    Purpose: The delineation of internal target volumes (ITVs) in radiation therapy of lung tumors is currently performed by use of either free-breathing (FB) 18F-fluorodeoxyglucose-positron emission tomography-computed tomography (FDG-PET/CT) or 4-dimensional (4D)-CT maximum intensity projection (MIP). In this report we validate the use of 4D-PET-MIP for the delineation of target volumes in both a phantom and in patients. Methods and Materials: A phantom with 3 hollow spheres was prepared surrounded by air then water. The spheres and water background were filled with a mixture of 18F and radiographic contrast medium. A 4D-PET/CT scan was performed of the phantom while moving in 4 different breathing patterns using a programmable motion device. Nine patients with an FDG-avid lung tumor who underwent FB and 4D-PET/CT and >5 mm of tumor motion were included for analysis. The 3 spheres and patient lesions were contoured by 2 contouring methods (40% of maximum and PET edge) on the FB-PET, FB-CT, 4D-PET, 4D-PET-MIP, and 4D-CT-MIP. The concordance between the different contoured volumes was calculated using a Dice coefficient (DC). The difference in lung tumor volumes between FB-PET and 4D-PET volumes was also measured. Results: The average DC in the phantom using 40% and PET edge, respectively, was lowest for FB-PET/CT (DCAir = 0.72/0.67, DCBackground = 0.63/0.62) and highest for 4D-PET/CT-MIP (DCAir = 0.84/0.83, DCBackground = 0.78/0.73). The average DC in the 9 patients using 40% and PET edge, respectively, was also lowest for FB-PET/CT (DC = 0.45/0.44) and highest for 4D-PET/CT-MIP (DC = 0.72/0.73). In the 9 lesions, the target volumes of the FB-PET using 40% and PET edge, respectively, were on average 40% and 45% smaller than the 4D-PET-MIP. Conclusion: A 4D-PET-MIP produces volumes with the highest concordance with 4D-CT-MIP across multiple breathing patterns and lesion sizes in both a phantom and among patients. Freebreathing PET/CT consistently
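
    A minimal sketch of the Dice coefficient used above to compare contoured volumes, computed on binary masks; the masks are random placeholders.

```python
# Sketch: Dice coefficient DC = 2|A∩B| / (|A| + |B|) between two binary masks.
import numpy as np

def dice(mask_a, mask_b):
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

rng = np.random.default_rng(5)
itv_4d_pet_mip = rng.random((64, 64, 64)) > 0.95    # placeholder contour masks
itv_4d_ct_mip = rng.random((64, 64, 64)) > 0.95

print(f"DC = {dice(itv_4d_pet_mip, itv_4d_ct_mip):.2f}")
```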

  13. Validation of a 4D-PET Maximum Intensity Projection for Delineation of an Internal Target Volume

    International Nuclear Information System (INIS)

    Callahan, Jason; Kron, Tomas; Schneider-Kolsky, Michal; Dunn, Leon; Thompson, Mick; Siva, Shankar; Aarons, Yolanda; Binns, David; Hicks, Rodney J.

    2013-01-01

    Purpose: The delineation of internal target volumes (ITVs) in radiation therapy of lung tumors is currently performed by use of either free-breathing (FB) 18 F-fluorodeoxyglucose-positron emission tomography-computed tomography (FDG-PET/CT) or 4-dimensional (4D)-CT maximum intensity projection (MIP). In this report we validate the use of 4D-PET-MIP for the delineation of target volumes in both a phantom and in patients. Methods and Materials: A phantom with 3 hollow spheres was prepared surrounded by air then water. The spheres and water background were filled with a mixture of 18 F and radiographic contrast medium. A 4D-PET/CT scan was performed of the phantom while moving in 4 different breathing patterns using a programmable motion device. Nine patients with an FDG-avid lung tumor who underwent FB and 4D-PET/CT and >5 mm of tumor motion were included for analysis. The 3 spheres and patient lesions were contoured by 2 contouring methods (40% of maximum and PET edge) on the FB-PET, FB-CT, 4D-PET, 4D-PET-MIP, and 4D-CT-MIP. The concordance between the different contoured volumes was calculated using a Dice coefficient (DC). The difference in lung tumor volumes between FB-PET and 4D-PET volumes was also measured. Results: The average DC in the phantom using 40% and PET edge, respectively, was lowest for FB-PET/CT (DCAir = 0.72/0.67, DCBackground 0.63/0.62) and highest for 4D-PET/CT-MIP (DCAir = 0.84/0.83, DCBackground = 0.78/0.73). The average DC in the 9 patients using 40% and PET edge, respectively, was also lowest for FB-PET/CT (DC = 0.45/0.44) and highest for 4D-PET/CT-MIP (DC = 0.72/0.73). In the 9 lesions, the target volumes of the FB-PET using 40% and PET edge, respectively, were on average 40% and 45% smaller than the 4D-PET-MIP. Conclusion: A 4D-PET-MIP produces volumes with the highest concordance with 4D-CT-MIP across multiple breathing patterns and lesion sizes in both a phantom and among patients. Freebreathing PET/CT consistently underestimates ITV

  14. Phantom and Clinical Study of Differences in Cone Beam Computed Tomographic Registration When Aligned to Maximum and Average Intensity Projection

    Energy Technology Data Exchange (ETDEWEB)

    Shirai, Kiyonori [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Nishiyama, Kinji, E-mail: sirai-ki@mc.pref.osaka.jp [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Katsuda, Toshizo [Department of Radiology, National Cerebral and Cardiovascular Center, Osaka (Japan); Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan)

    2014-01-01

    Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging 1.7 mm to the superior side and 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error.

  15. How diffusivity, thermocline and incident light intensity modulate the dynamics of deep chlorophyll maximum in Tyrrhenian Sea.

    Directory of Open Access Journals (Sweden)

    Davide Valenti

    Full Text Available During the last few years theoretical works have shed new light and proposed new hypotheses on the mechanisms which regulate the spatio-temporal behaviour of phytoplankton communities in marine pelagic ecosystems. Despite this, relevant physical and biological issues, such as the effects of time-dependent mixing in the upper layer, competition between groups, and the dynamics of non-stationary deep chlorophyll maxima, are still open questions. In this work, we analyze the spatio-temporal behaviour of five phytoplankton populations in a real marine ecosystem by using a one-dimensional reaction-diffusion-taxis model. The study is performed taking into account the seasonal variations of environmental variables, such as light intensity, thickness of the upper mixed layer and profiles of vertical turbulent diffusivity, obtained starting from experimental findings. Theoretical distributions of phytoplankton cell concentration were converted into chlorophyll concentration and compared with the experimental profiles measured at a site in the Tyrrhenian Sea at four different times (seasons) of the year, during four different oceanographic cruises. As a result we find a good agreement between theoretical and experimental distributions of chlorophyll concentration. In particular, theoretical results reveal that the seasonal changes of environmental variables play a key role in the phytoplankton distribution and determine the properties of the deep chlorophyll maximum. This study could be extended to other marine ecosystems to predict future changes in the phytoplankton biomass due to global warming, in view of devising strategies to prevent the decline of primary production and the consequent decrease of fish species.

  16. Phantom and Clinical Study of Differences in Cone Beam Computed Tomographic Registration When Aligned to Maximum and Average Intensity Projection

    International Nuclear Information System (INIS)

    Shirai, Kiyonori; Nishiyama, Kinji; Katsuda, Toshizo; Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo

    2014-01-01

    Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as the stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position shifted significantly in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; range, 1.7 mm superior to 3.5 mm inferior; P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error.

  17. Diagnosis of the annual cycle and ENSO effects on maximum rainfall intensities for durations between 1 and 24 hours in the Andes of Colombia

    International Nuclear Information System (INIS)

    Poveda, German; Mesa, Oscar; Toro, Vladimir; Agudelo, Paula; Alvarez, Juan F; Arias, Paola; Moreno, Hernan; Salazar, Luis; Vieira, Sara

    2002-01-01

    We study the distribution of maximum rainfall events during the annual cycle, for storms ranging from 1 to 24 hours in duration, using records from 51 rain gauges located in the Colombian Andes. The effects of both phases of ENSO (El Niño and La Niña) are also quantified. We found that maximum rainfall intensity events occur during the rainy periods of March-May and September-November. There is a strong similarity between the annual cycle of mean total rainfall and that of the maximum intensities of rainfall over the tropical Andes. This result is quite consistent throughout the three ranges of the Colombian Andes. At interannual timescales, we found that both phases of ENSO are associated with disturbances of maximum rainfall events, with more intense precipitation events during La Niña than during El Niño. Overall, for durations longer than 3 hours, rainfall intensity is reduced by one order of magnitude with respect to shorter durations (1-3 hours). The most extreme recorded rainfall events are apparently not associated with the annual and interannual large-scale forcing and appear to be randomly generated by the important role of land surface-atmosphere interactions in the genesis and dynamics of intense storms over central Colombia.

  18. Effects of light intensity on growth, anatomy and forage quality of two tropical grasses (Brachiaria brizantha and Panicum maximum var. trichoglume).

    NARCIS (Netherlands)

    Deinum, B.; Sulastri, R.D.; Zeinab, M.H.J.; Maassen, A.

    1996-01-01

    Effects of light intensity on growth, histology and anatomy, and nutritive value were studied in seedlings of two shade tolerant species: Brachiaria brizantha and Panicum maximum var. trichoglume. They were studied under greenhouse conditions in pots with sandy soil and sufficient N and cut after a

  19. Effect of a traditional Chinese medicine anti-fatigue prescription on serum testosterone and cortisol concentrations in male rats under the stress of maximum-intensity training

    International Nuclear Information System (INIS)

    Dong Ling; Si Xulan

    2008-01-01

    Objective: To study the effect of a traditional Chinese medicine anti-fatigue prescription on the serum concentrations of testosterone (T) and cortisol (C) in male rats under the stress of maximum-intensity training. Methods: Wistar male rat models of stress under maximum-intensity training were established (n=40) and half of them were treated with the Chinese traditional medicine anti-fatigue prescription; twenty undisturbed rats served as controls. Serum testosterone and cortisol levels were determined with RIA at the end of the seven weeks' experiment. Results: Maximum-intensity training lowered the serum testosterone level, elevated the cortisol concentration and reduced the T/C ratio. The serum T levels and T/C ratio were significantly lower and cortisol levels significantly higher in the untreated models than in the treated models and controls (P<0.01). The levels of the two hormones were markedly corrected in the treated models, with no significant differences from those in the controls. However, the T/C ratio was still significantly lower than that in the controls (P<0.05) due to a relatively slightly greater degree of reduction of T levels. Conclusion: The anti-fatigue prescription can not only promote recovery from fatigue after maximum-intensity training but also strengthen the anabolism of the rats. (authors)

  20. Reduced Urban Heat Island intensity under warmer conditions

    Science.gov (United States)

    Scott, Anna A.; Waugh, Darryn W.; Zaitchik, Ben F.

    2018-06-01

    The Urban Heat Island (UHI), the tendency for urban areas to be hotter than rural regions, represents a significant health concern in summer as urban populations are exposed to elevated temperatures. A number of studies suggest that the UHI increases during warmer conditions; however, there has been no investigation of this for a large ensemble of cities. Here we compare urban and rural temperatures in 54 US cities for 2000–2015 and show that the intensity of the Urban Heat Island, measured here as the difference in daily-minimum or daily-maximum temperatures between urban and rural stations, or ΔT, in fact tends to decrease with increasing temperature in most cities (38/54). This holds when investigating daily variability, heat extremes, and variability across climate zones, and is primarily driven by changes in rural areas. We relate this change to large-scale or synoptic weather conditions, and find that the lowest ΔT nights occur during moist weather conditions. We also find that warming cities have not experienced an increasing Urban Heat Island effect.
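
    As a rough illustration of the ΔT metric defined above (urban minus rural daily-minimum or daily-maximum temperature), a minimal pandas sketch on hypothetical hourly station series; the station data and numbers are invented:

        import numpy as np
        import pandas as pd

        # Hypothetical hourly temperatures (°C) for one urban and one rural station
        idx = pd.date_range("2015-07-01", periods=24 * 30, freq="h")
        rng = np.random.default_rng(1)
        diurnal = 10 * np.sin(2 * np.pi * (idx.hour - 9) / 24)
        rural = pd.Series(22 + diurnal + rng.normal(0, 1, len(idx)), index=idx)
        urban = pd.Series(23 + 0.8 * diurnal + rng.normal(0, 1, len(idx)), index=idx)

        # UHI intensity as the urban-rural difference in daily minimum and maximum temperature
        delta_t_min = urban.resample("D").min() - rural.resample("D").min()
        delta_t_max = urban.resample("D").max() - rural.resample("D").max()
        print(round(float(delta_t_min.mean()), 2), round(float(delta_t_max.mean()), 2))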

  1. Performance and nematode infection of ewe lambs on intensive rotational grazing with two different cultivars of Panicum maximum.

    Science.gov (United States)

    Costa, R L D; Bueno, M S; Veríssimo, C J; Cunha, E A; Santos, L E; Oliveira, S M; Spósito Filha, E; Otsuk, I P

    2007-05-01

    The daily live weight gain (DLWG), faecal nematode egg counts (FEC), and packed cell volume (PCV) of Suffolk, Ile de France and Santa Inês ewe lambs were evaluated fortnightly for 56 days in the dry season (winter) and 64 days in the rainy season (summer) of 2001-2002. The animals were distributed in two similar groups, one located on Aruana and the other on Tanzania grass (Panicum maximum), in a rotational grazing system at the Instituto de Zootecnia, in Nova Odessa city (SP), Brazil. In the dry season, 24 one-year-old ewe lambs were used, eight of each breed, and there was no difference (p > 0.05) between grasses for DLWG (100 g/day), although the Suffolk had higher values (p < 0.05) than the other breeds. In the rainy season, with 33 six-month-old ewe lambs, nine Suffolk, eight Ile de France and 16 Santa Inês, the DLWG was not affected by breed, but it was twice as great (71 g/day, p < 0.05) on Aruana as on Tanzania grass (30 g/day). The Santa Inês ewe lambs had the lowest FEC (p < 0.05) and the highest PCV (p < 0.05), confirming their higher resistance to Haemonchus contortus, the prevalent nematode in the rainy season. It was concluded that the better performance of ewe lambs on Aruana pastures in the rainy season is probably explained by their lower nematode infection, owing to the higher protein content of this grass (mean contents of 11.2% crude protein in Aruana grass and 8.7% in Tanzania grass, p < 0.05), which may have improved the immune system, consistent with the highest PCV (p < 0.05) observed in those animals.

  2. Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture from Small Angle X-Ray Scattering Intensity Measurements

    Science.gov (United States)

    Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee

    2015-01-01

    In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
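
    As a hedged sketch of the type of estimator described above: under a Gaussian-noise assumption, the MLE of the mixture coefficients reduces to least squares constrained to the probability simplex (non-negative weights summing to one). The basis intensities below are synthetic, not real SAXS profiles, and this is not the authors' implementation:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        n_q, n_conf = 200, 5                           # scattering-vector samples, conformations
        basis = rng.lognormal(size=(n_q, n_conf))      # synthetic basis intensities I_k(q)
        w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])   # true relative abundances
        measured = basis @ w_true + rng.normal(0.0, 0.05, n_q)

        def neg_log_likelihood(w):
            # Gaussian noise model: NLL is proportional to the sum of squared residuals
            r = measured - basis @ w
            return 0.5 * np.dot(r, r)

        w0 = np.full(n_conf, 1.0 / n_conf)
        res = minimize(
            neg_log_likelihood, w0, method="SLSQP",
            bounds=[(0.0, 1.0)] * n_conf,                                  # abundances are non-negative
            constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # and sum to one
        )
        print(np.round(res.x, 3))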

  3. Maximum-intensity-projection CT angiography for evaluating head and neck tumors. Usefulness of helical CT and auto bone masking method

    International Nuclear Information System (INIS)

    Sakai, Osamu; Nakashima, Noriko; Ogawa, Chiaki; Shen, Yun; Takata, Yasunori; Azemoto, Shougo.

    1994-01-01

    Angiographic images of 10 adult patients with head and neck tumors were obtained by helical computed tomography (CT) using maximum intensity projection (MIP). In all cases, the vasculature of the head and neck region was directly demonstrated. In the head and neck, bone masking is a more important problem than in other regions. We developed an effective automatic bone masking method (ABM) using 2D/3D connectivity. Helical CT angiography with MIP and ABM provided accurate anatomic depiction, and was considered to be helpful in preoperative evaluation of head and neck tumors. (author)

  4. Gross rainfall amount and maximum rainfall intensity in 60-minute influence on interception loss of shrubs: a 10-year observation in the Tengger Desert.

    Science.gov (United States)

    Zhang, Zhi-Shan; Zhao, Yang; Li, Xin-Rong; Huang, Lei; Tan, Hui-Juan

    2016-05-17

    In water-limited regions, rainfall interception is influenced by rainfall properties and crown characteristics. Rainfall properties beyond gross rainfall amount and duration (GR and RD), such as maximum rainfall intensity and rainless gap (RG) within rain events, may heavily affect throughfall and interception by plants. From 2004 to 2014 (except for 2007), individual shrubs of Caragana korshinskii and Artemisia ordosica were selected to measure throughfall during 210 rain events. Various rainfall properties were auto-measured, and crown characteristics of the two shrubs, i.e., height, branch and leaf area index, crown area and volume, were also measured. The relative interceptions of C. korshinskii and A. ordosica were 29.1% and 17.1%, respectively. Rainfall properties contributed more than crown characteristics to throughfall and interception of the shrubs. Throughfall and interception of the shrubs can be explained by GR, RI60 (maximum rainfall intensity during 60 min), RD and RG, in decreasing order of importance. However, relative throughfall and interception of the two shrubs responded differently to rainfall properties and crown characteristics: those of C. korshinskii were closely related to rainfall properties, while those of A. ordosica were more dependent on crown characteristics. We highlight that long-term monitoring is necessary to determine the relationships of throughfall and interception with crown characteristics.

  5. Assessment of the intensity and spatial variability of urban heat islands over the Indian cities for Regional Climate Analysis

    Science.gov (United States)

    Sultana, S.; Satyanarayana, A. N. V.

    2016-12-01

    The urban heat island (UHI), which generally develops over cities due to drastic changes in land use and land cover (LULC), has a profound impact on atmospheric circulation patterns through changes in the energy transport mechanism, which in turn affects the regional climate. In this study, an attempt has been made to quantify the intensity of the UHI and to identify UHI pockets during the last decade over fast-developing cosmopolitan Indian cities such as New Delhi, Mumbai and Kolkata. For this purpose, Landsat TM and ETM+ images from the winter period, at roughly 5-year intervals from 2002 to 2013, were selected to retrieve brightness temperatures and land use/cover, from which land surface temperature (LST) was estimated using the Normalized Difference Vegetation Index (NDVI). The Normalized Difference Built-up Index (NDBI) and Normalized Difference Bareness Index (NDBaI) were estimated to extract built-up areas and bare land from the satellite images in order to identify the UHI pockets over the study area. Image processing and GIS tools were employed for this purpose. Results reveal a significant increase in the intensity of the UHI and in its area of influence over all three cities. An increase of 2 to 2.5 °C in UHI intensity over the study regions has been noticed. The increase in UHI intensity is larger over New Delhi than over Mumbai and Kolkata, which are more or less the same. The number of UHI hotspot pockets has also increased, as seen from the spatial distribution of LST, NDVI and NDBI. These results signify that rapid urbanization and infrastructural development have a direct consequence in modulating the regional climate over the Indian cities.
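
    For illustration only, a minimal NumPy sketch of the spectral indices named above (NDVI and NDBI) computed from hypothetical Landsat reflectance bands; the threshold rule for built-up pixels is illustrative and not taken from the study:

        import numpy as np

        def normalized_difference(band_a, band_b, eps=1e-6):
            """Generic normalized difference index (a - b) / (a + b)."""
            a = band_a.astype("float64")
            b = band_b.astype("float64")
            return (a - b) / (a + b + eps)

        # Hypothetical Landsat TM reflectance bands on a common grid
        rng = np.random.default_rng(3)
        red, nir, swir1 = (rng.uniform(0.02, 0.5, (100, 100)) for _ in range(3))

        ndvi = normalized_difference(nir, red)     # vegetation: NIR vs. red
        ndbi = normalized_difference(swir1, nir)   # built-up surfaces: SWIR vs. NIR

        built_up_mask = (ndbi > 0.1) & (ndvi < 0.2)   # illustrative threshold rule
        print(float(ndvi.mean()), float(ndbi.mean()), int(built_up_mask.sum()))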

  6. Dual-energy CT angiography in peripheral arterial occlusive disease - accuracy of maximum intensity projections in clinical routine and subgroup analysis

    International Nuclear Information System (INIS)

    Kau, Thomas; Eicher, Wolfgang; Reiterer, Christian; Niedermayer, Martin; Rabitsch, Egon; Hausegger, Klaus A.; Senft, Birgit

    2011-01-01

    To evaluate the accuracy of dual-energy CT angiography (DE-CTA) maximum intensity projections (MIPs) in symptomatic peripheral arterial occlusive disease (PAOD). In 58 patients, DE-CTA of the lower extremities was performed on dual-source CT. In a maximum of 35 arterial segments, the severity of the most stenotic lesion was graded (<10%, 10-49%, 50-99% luminal narrowing, or occlusion) independently by two radiologists, with DSA serving as the reference standard. In DSA, 52.3% of segments were significantly stenosed or occluded. Agreement of DE-CTA MIPs with DSA was good in the aorto-iliac and femoro-popliteal regions (κ = 0.72; κ = 0.66), moderate in the crural region (κ = 0.55), slight in pedal arteries (κ = 0.10) and very good in bypass segments (κ = 0.81). Accuracy was 88%, 78%, 74%, 55% and 82% for the respective territories and moderate (75%) overall, with good sensitivity (84%) and moderate specificity (67%). Sensitivity and specificity were 82% and 76% in claudicants and 84% and 61% in patients with critical limb ischaemia. While correlating well with DSA above the knee, the accuracy of DE-CTA MIPs appeared to be moderate in the calf and largely insufficient in calcified pedal arteries, especially in patients with critical limb ischaemia. (orig.)

   7. Seed production of Guinea grass (Panicum maximum Jacq.) in an intensive cattle fattening system

    Directory of Open Access Journals (Sweden)

    G Oquend

    2008-09-01

    Full Text Available On a sialitic Brown soil of the calcic Cambisol subtype, located at the «Calixto García» Livestock Production Enterprise in the Holguín province, seed production of Guinea grass (Panicum maximum Jacq.) was studied in an intensive cattle fattening system under irrigation. The treatments were five varieties of Guinea grass: A) Common; B) Likoni; C) Mombasa; D) Tanzania; and E) Tobiatá. The following planting methods were considered, in turn, as sub-treatments: 1) seeding with gamic seed; 2) planting with tillers; and 3) transplanting. The stocking rate was kept adjusted at 2 UGM/ha. In seed production there were favorable interactions between the planting methods and the varieties: gamic seed with Guinea grass Likoni; tillers with Guinea grass Mombasa, Tanzania and Tobiatá; and transplanting with common Guinea grass. In the whole production system, an additional contribution of more than $1 000/ha was obtained from seed production, without affecting animal production, in which gains higher than 800 g/animal/day and average productions of 46 212 t of live-weight beef per fattening cycle were obtained. Seed production of Guinea grass in intensive cattle fattening systems is considered feasible.

  8. Diagnostic performance of three-dimensional MR maximum intensity projection for the assessment of synovitis of the hand and wrist in rheumatoid arthritis: A pilot study

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xubin, E-mail: lixb@bjmu.edu.cn [Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin, Key Laboratory of Cancer Prevention and Therapy, Tianjin 300060 (China); Liu, Xia; Du, Xiangke [Department of Radiology, Peking University People's Hospital, Beijing 100044 (China); Ye, Zhaoxiang [Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin, Key Laboratory of Cancer Prevention and Therapy, Tianjin 300060 (China)

    2014-05-15

    Purpose: To evaluate the diagnostic performance of three-dimensional (3D) MR maximum intensity projection (MIP) in the assessment of synovitis of the hand and wrist in rheumatoid arthritis (RA), compared with 3D contrast-enhanced magnetic resonance imaging (CE-MRI). Materials and methods: Twenty-five patients with RA underwent MR examinations. 3D MR MIP images were derived from the enhanced images. MR images were reviewed by two radiologists for the presence and location of synovitis of the hand and wrist. The diagnostic sensitivity, specificity and accuracy of 3D MIP were calculated with 3D CE-MRI as the reference standard. Results: In all subjects, 3D MIP images directly and clearly showed the presence and location of synovitis in a single image. Synovitis demonstrated high signal intensity on MIP images. The k-values for the detection of articular synovitis indicated excellent interobserver agreement for 3D MIP images (k = 0.87) and CE-MR images (k = 0.91), respectively. 3D MIP demonstrated a sensitivity, specificity and accuracy of 91.07%, 98.57% and 96.0%, respectively, for the detection of synovitis. Conclusion: 3D MIP can provide a whole overview of lesion locations and reliable diagnostic performance in the assessment of articular synovitis of the hand and wrist in patients with RA, which has potential value in clinical practice.
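
    As a reference for the reported measures, a minimal sketch (not the study's code) of sensitivity, specificity, accuracy and Cohen's kappa computed from paired binary reading results; the label arrays are hypothetical:

        import numpy as np

        def diagnostic_performance(test, reference):
            """Sensitivity, specificity and accuracy of binary test labels vs. a reference."""
            test = np.asarray(test, dtype=bool)
            reference = np.asarray(reference, dtype=bool)
            tp = np.sum(test & reference)
            tn = np.sum(~test & ~reference)
            fp = np.sum(test & ~reference)
            fn = np.sum(~test & reference)
            return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(test)

        def cohens_kappa(rater1, rater2):
            """Cohen's kappa for two binary raters (interobserver agreement)."""
            r1 = np.asarray(rater1, dtype=bool)
            r2 = np.asarray(rater2, dtype=bool)
            po = np.mean(r1 == r2)                              # observed agreement
            pe = r1.mean() * r2.mean() + (1 - r1.mean()) * (1 - r2.mean())  # chance agreement
            return (po - pe) / (1 - pe)

        # Hypothetical per-joint synovitis calls (1 = present) for a MIP reading vs. a CE-MRI reference
        mip = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0]
        ref = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0]
        print(diagnostic_performance(mip, ref), round(cohens_kappa(mip, ref), 2))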

  9. Audio-Visual Biofeedback Does Not Improve the Reliability of Target Delineation Using Maximum Intensity Projection in 4-Dimensional Computed Tomography Radiation Therapy Planning

    International Nuclear Information System (INIS)

    Lu, Wei; Neuner, Geoffrey A.; George, Rohini; Wang, Zhendong; Sasor, Sarah; Huang, Xuan; Regine, William F.; Feigenberg, Steven J.; D'Souza, Warren D.

    2014-01-01

    Purpose: To investigate whether coaching patients' breathing would improve the match between ITVMIP (internal target volume generated by contouring in the maximum intensity projection scan) and ITV10 (generated by combining the gross tumor volumes contoured in 10 phases of a 4-dimensional CT [4DCT] scan). Methods and Materials: Eight patients with a thoracic tumor and 5 patients with an abdominal tumor were included in an institutional review board-approved prospective study. Patients underwent 3 4DCT scans with: (1) free breathing (FB); (2) coaching using audio-visual (AV) biofeedback via the Real-Time Position Management system; and (3) coaching via a spirometer system (Active Breathing Coordinator or ABC). One physician contoured all scans to generate the ITV10 and ITVMIP. The match between ITVMIP and ITV10 was quantitatively assessed with volume ratio, centroid distance, root mean squared distance, and overlap/Dice coefficient. We investigated whether coaching (AV or ABC) or uniform expansions (1, 2, 3, or 5 mm) of ITVMIP improved the match. Results: Although both AV and ABC coaching techniques improved frequency reproducibility and ABC improved displacement regularity, neither improved the match between ITVMIP and ITV10 over FB. On average, ITVMIP underestimated ITV10 by 19%, 19%, and 21%, with centroid distance of 1.9, 2.3, and 1.7 mm and Dice coefficient of 0.87, 0.86, and 0.88 for FB, AV, and ABC, respectively. Separate analyses indicated a better match for lung cancers or tumors not adjacent to high-intensity tissues. Uniform expansions of ITVMIP did not correct for the mismatch between ITVMIP and ITV10. Conclusions: In this pilot study, audio-visual biofeedback did not improve the match between ITVMIP and ITV10. In general, ITVMIP should be limited to lung cancers, and modification of ITVMIP in each phase of the 4DCT data set is recommended.

  10. Audio-Visual Biofeedback Does Not Improve the Reliability of Target Delineation Using Maximum Intensity Projection in 4-Dimensional Computed Tomography Radiation Therapy Planning

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Wei, E-mail: wlu@umm.edu [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States); Neuner, Geoffrey A.; George, Rohini; Wang, Zhendong; Sasor, Sarah [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States); Huang, Xuan [Research and Development, Care Management Department, Johns Hopkins HealthCare LLC, Glen Burnie, Maryland (United States); Regine, William F.; Feigenberg, Steven J.; D'Souza, Warren D. [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States)

    2014-01-01

    Purpose: To investigate whether coaching patients' breathing would improve the match between ITVMIP (internal target volume generated by contouring in the maximum intensity projection scan) and ITV10 (generated by combining the gross tumor volumes contoured in 10 phases of a 4-dimensional CT [4DCT] scan). Methods and Materials: Eight patients with a thoracic tumor and 5 patients with an abdominal tumor were included in an institutional review board-approved prospective study. Patients underwent 3 4DCT scans with: (1) free breathing (FB); (2) coaching using audio-visual (AV) biofeedback via the Real-Time Position Management system; and (3) coaching via a spirometer system (Active Breathing Coordinator or ABC). One physician contoured all scans to generate the ITV10 and ITVMIP. The match between ITVMIP and ITV10 was quantitatively assessed with volume ratio, centroid distance, root mean squared distance, and overlap/Dice coefficient. We investigated whether coaching (AV or ABC) or uniform expansions (1, 2, 3, or 5 mm) of ITVMIP improved the match. Results: Although both AV and ABC coaching techniques improved frequency reproducibility and ABC improved displacement regularity, neither improved the match between ITVMIP and ITV10 over FB. On average, ITVMIP underestimated ITV10 by 19%, 19%, and 21%, with centroid distance of 1.9, 2.3, and 1.7 mm and Dice coefficient of 0.87, 0.86, and 0.88 for FB, AV, and ABC, respectively. Separate analyses indicated a better match for lung cancers or tumors not adjacent to high-intensity tissues. Uniform expansions of ITVMIP did not correct for the mismatch between ITVMIP and ITV10. Conclusions: In this pilot study, audio-visual biofeedback did not improve the match between ITVMIP and ITV10. In general, ITVMIP should be limited to lung cancers, and modification of ITVMIP in each phase of the 4DCT data set is recommended.

  11. Three-dimensional display of peripheral nerves in the wrist region based on MR diffusion tensor imaging and maximum intensity projection post-processing

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Wen Quan, E-mail: dingwenquan1982@163.com [Department of Hand Surgery, Hand Surgery Research Center, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China); Zhou, Xue Jun, E-mail: zxj0925101@sina.com [Department of Radiology, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China); Tang, Jin Bo, E-mail: jinbotang@yahoo.com [Department of Hand Surgery, Hand Surgery Research Center, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China); Gu, Jian Hui, E-mail: gujianhuint@163.com [Department of Hand Surgery, Hand Surgery Research Center, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China); Jin, Dong Sheng, E-mail: jindongshengnj@aliyun.com [Department of Radiology, Jiangsu Province Official Hospital, Nanjing, Jiangsu (China)

    2015-06-15

    Highlights: • 3D displays of peripheral nerves can be achieved by 2 MIP post-processing methods. • The median nerves' FA and ADC values can be accurately measured by using DTI6 data. • Adopting a 6-direction DTI scan and MIP can evaluate peripheral nerves efficiently. - Abstract: Objectives: To achieve 3-dimensional (3D) display of peripheral nerves in the wrist region by using maximum intensity projection (MIP) post-processing methods to reconstruct raw images acquired by a diffusion tensor imaging (DTI) scan, and to explore its clinical applications. Methods: We performed DTI scans in 6 (DTI6) and 25 (DTI25) diffusion directions on 20 wrists of 10 healthy young volunteers, 6 wrists of 5 patients with carpal tunnel syndrome, 6 wrists of 6 patients with nerve lacerations, and one patient with neurofibroma. The MIP post-processing methods employed 2 types of DTI raw images: (1) single-direction and (2) T2-weighted trace. The fractional anisotropy (FA) and apparent diffusion coefficient (ADC) values of the median and ulnar nerves were measured at multiple testing sites. Two radiologists used custom evaluation scales to assess the 3D nerve imaging quality independently. Results: In both DTI6 and DTI25, nerves in the wrist region could be displayed clearly by the 2 MIP post-processing methods. The FA and ADC values were not significantly different between DTI6 and DTI25, except for the FA values of the ulnar nerves at the level of the pisiform bone (p = 0.03). As to the imaging quality of each MIP post-processing method, there were no significant differences between DTI6 and DTI25 (p > 0.05). The imaging quality of single-direction MIP post-processing was better than that from T2-weighted traces (p < 0.05) because of the higher nerve signal intensity. Conclusions: Three-dimensional displays of peripheral nerves in the wrist region can be achieved by MIP post-processing of single-direction images and T2-weighted trace images for both DTI6 and DTI25.

  12. Sensitivity study of the UHI in the city of Szeged (Hungary) to different offline simulation set-ups using SURFEX/TEB

    Science.gov (United States)

    Zsebeházi, Gabriella; Hamdi, Rafiq; Szépszó, Gabriella

    2015-04-01

    Urbanised areas modify the local climate due to the physical properties and morphology of urban surfaces. The urban effect on the local climate and regional climate change interact, resulting in more serious climate change impacts (e.g., more heatwave events) over cities. The majority of people now live in cities and are thus affected by these enhanced changes. Therefore, targeted adaptation and mitigation strategies in cities are of high importance. Regional climate models (RCMs) are suitable tools for estimating the future climate change of an area in detail, although most of them cannot represent urban climate characteristics, because their spatial resolution is too coarse (in general 10-50 km) and they do not use a specific urban parametrization over urbanized areas. To describe the interactions between the urban surface and the atmosphere on a spatial scale of a few km, we use the externalised SURFEX land surface scheme, including the TEB urban canopy model, in offline mode (i.e. the interaction is only one-way). The driving atmospheric conditions highly influence the impact results; thus the good quality of these data is particularly essential. The overall aim of our research is to understand the behaviour of the impact model and its interaction with the forcing coming from the atmospheric model in order to reduce the biases, which can lead to qualified impact studies of climate change over urban areas. As a preliminary test, several short (few-day) 1 km resolution simulations are carried out over a domain covering the Hungarian town of Szeged, which is located in the flat southern part of Hungary. The atmospheric forcing is provided by ALARO (a new version of the limited-area model of the ARPEGE-IFS system running at the Royal Meteorological Institute of Belgium) applied over Hungary. The focal point of our investigations is the ability of SURFEX to simulate the diurnal evolution and spatial pattern of the urban heat island (UHI). Different offline simulation set-ups have

  13. A new method for estimating the probable maximum hail loss of a building portfolio based on hailfall intensity determined by radar measurements

    Science.gov (United States)

    Aller, D.; Hohl, R.; Mair, F.; Schiesser, H.-H.

    2003-04-01

    Extreme hailfall can cause massive damage to building structures. For the insurance and reinsurance industry it is essential to estimate the probable maximum hail loss of their portfolio. The probable maximum loss (PML) is usually defined with a return period of 1 in 250 years. Statistical extrapolation has a number of critical points, as historical hail loss data are usually only available from some events while insurance portfolios change over the years. At the moment, footprints are derived from historical hail damage data. These footprints (mean damage patterns) are then moved over a portfolio of interest to create scenario losses. However, damage patterns of past events are based on the specific portfolio that was damaged during that event and can be considerably different from the current spread of risks. A new method for estimating the probable maximum hail loss to a building portfolio is presented. It is shown that footprints derived from historical damages are different to footprints of hail kinetic energy calculated from radar reflectivity measurements. Based on the relationship between radar-derived hail kinetic energy and hail damage to buildings, scenario losses can be calculated. A systematic motion of the hail kinetic energy footprints over the underlying portfolio creates a loss set. It is difficult to estimate the return period of losses calculated with footprints derived from historical damages being moved around. To determine the return periods of the hail kinetic energy footprints over Switzerland, 15 years of radar measurements and 53 years of agricultural hail losses are available. Based on these data, return periods of several types of hailstorms were derived for different regions in Switzerland. The loss set is combined with the return periods of the event set to obtain an exceeding frequency curve, which can be used to derive the PML.
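
    As a hedged sketch of the final step described above, turning a scenario loss set with annual event frequencies into a loss exceedance curve and reading off the loss at a chosen return period; the losses and frequencies below are invented, and the study's actual method is more involved:

        import numpy as np

        # Hypothetical scenario losses (million CHF) and annual occurrence frequencies (1/yr)
        losses = np.array([5.0, 12.0, 30.0, 55.0, 90.0, 140.0, 220.0])
        freqs = np.array([0.50, 0.20, 0.10, 0.04, 0.02, 0.008, 0.003])

        # Exceedance frequency: annual frequency of events with loss >= a given threshold
        order = np.argsort(losses)[::-1]
        loss_sorted = losses[order]                 # descending losses
        exceed_freq = np.cumsum(freqs[order])       # increasing exceedance frequency

        def pml(return_period_years: float) -> float:
            """Interpolate the loss whose annual exceedance frequency equals 1/return_period."""
            target = 1.0 / return_period_years
            return float(np.interp(target, exceed_freq, loss_sorted))

        print(round(pml(250.0), 1))   # e.g. a 1-in-250-year probable maximum loss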

  14. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: I. general description

    Energy Technology Data Exchange (ETDEWEB)

    Kaganovich, Igor D.; Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex

    2012-06-21

    Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the

  15. A comparison of conventional maximum intensity projection with a new depth-specific topographic mapping technique in the CT analysis of proximal tibial subchondral bone density

    International Nuclear Information System (INIS)

    Johnston, James D.; Kontulainen, Saija A.; Masri, Bassam A.; Wilson, David R.

    2010-01-01

    The objective was to identify subchondral bone density differences between normal and osteoarthritic (OA) proximal tibiae using computed tomography osteoabsorptiometry (CT-OAM) and computed tomography topographic mapping of subchondral density (CT-TOMASD). Sixteen intact cadaver knees from ten donors (8 male:2 female; mean age:77.8, SD:7.4 years) were categorized as normal (n = 10) or OA (n = 6) based upon CT reconstructions. CT-OAM assessed maximum subchondral bone mineral density (BMD). CT-TOMASD assessed average subchondral BMD across three layers (0-2.5, 2.5-5 and 5-10 mm) measured in relation to depth from the subchondral surface. Regional analyses of CT-OAM and CT-TOMASD included: medial BMD, lateral BMD, and average BMD of a 10-mm diameter area that searched each medial and lateral plateau for the highest "focal" density present within each knee. Compared with normal knees, both CT-OAM and CT-TOMASD demonstrated an average of 17% greater whole medial compartment density in OA knees (p 0.05). CT-TOMASD focal region analyses revealed an average of 24% greater density in the 0- to 2.5-mm layer (p = 0.003) and 36% greater density in the 2.5- to 5-mm layer (p = 0.034) in OA knees. Both CT-OAM and TOMASD identified higher medial compartment density in OA tibiae compared with normal tibiae. In addition, CT-TOMASD indicated greater focal density differences between normal and OA knees with increased depth from the subchondral surface. Depth-specific density analyses may help identify and quantify small changes in subchondral BMD associated with OA disease onset and progression. (orig.)

  16. Spatiotemporal Variation in Surface Urban Heat Island Intensity and Associated Determinants across Major Chinese Cities

    Directory of Open Access Journals (Sweden)

    Juan Wang

    2015-03-01

    Full Text Available Urban heat islands (UHIs) created through urbanization can have negative impacts on the lives of people living in cities. They may also vary spatially and temporally over a city. There is, thus, a need for greater understanding of these patterns and their causes. While previous UHI studies focused on only a few cities and/or several explanatory variables, this research provides a comprehensive and comparative characterization of the diurnal and seasonal variation in surface UHI intensities (SUHIIs) across 67 major Chinese cities. The factors associated with the SUHII were assessed by considering a variety of related social, economic and natural factors using a regression tree model. Obvious seasonal variation was observed for the daytime SUHII, and the diurnal variation in SUHII varied seasonally across China. Interestingly, the SUHII varied significantly in character between northern and southern China. Southern China experienced more intense daytime SUHIIs, while the opposite was true for nighttime SUHIIs. Vegetation had the greatest effect in the daytime in northern China. In southern China, annual electricity consumption and the number of public buses were found to be important. These results have important theoretical significance and may be of use to mitigate UHI effects.
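
    For illustration of the regression tree analysis mentioned above, a hedged scikit-learn sketch relating city-level SUHII values to a few candidate explanatory variables; the variable names, data and model settings are invented, not those of the study:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(5)
        n_cities = 67
        # Hypothetical city-level predictors: vegetation, electricity use, buses, population
        X = np.column_stack([
            rng.uniform(0.1, 0.7, n_cities),        # urban-rural vegetation index difference
            rng.lognormal(3.0, 0.5, n_cities),      # annual electricity consumption
            rng.integers(200, 5000, n_cities),      # number of public buses
            rng.lognormal(1.5, 0.6, n_cities),      # population (millions)
        ])
        suhii = 2.0 - 3.0 * X[:, 0] + 0.0002 * X[:, 1] + rng.normal(0, 0.3, n_cities)

        tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, suhii)
        print(dict(zip(["vegetation", "electricity", "buses", "population"],
                       np.round(tree.feature_importances_, 2))))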

  17. Urban Heat Island and Park Cool Island Intensities in the Coastal City of Aracaju, North-Eastern Brazil

    Directory of Open Access Journals (Sweden)

    Max Anjos

    2017-08-01

    Full Text Available In this study, an evaluation of the Urban Heat Island (UHI) and Park Cool Island (PCI) intensities in Aracaju, North-Eastern Brazil, was performed. The basis of our evaluation is a 2-year dataset from the urban climatological network installed according to the principles and concepts defined for urban areas related to climatic scales, siting and exposure, urban morphology, and metadata. The current findings update UHI intensities in Aracaju, refuting the trend registered in previous studies. On average, the UHI was more intense in the cool season (1.3 °C) than in the hot season (0.5 °C), which was caused by a decrease in wind speed. In relation to the PCI, mitigation of high air temperatures of 1.5–2 °C on average was registered in the city. However, the urban park is not always cooler than the surrounding built environment. Consistent long-term monitoring in cities is very important to provide more accurate climatic information about the UHI and PCI to be applied properly in urban planning, e.g., to provide pleasant thermal comfort in urban spaces.

  18. Multidetector computed tomography of the head in acute stroke: predictive value of different patterns of the dense artery sign revealed by maximum intensity projection reformations for location and extent of the infarcted area

    Energy Technology Data Exchange (ETDEWEB)

    Gadda, Davide; Vannucchi, Letizia; Niccolai, Franco; Neri, Anna T.; Carmignani, Luca; Pacini, Patrizio [Ospedale del Ceppo, U.O. Radiodiagnostica, Pistoia (Italy)

    2005-12-01

    Maximum intensity projection reconstructions from 2.5-mm unenhanced multidetector computed tomography axial slices were obtained from 49 patients within the first 6 h of anterior-circulation cerebral stroke to identify different patterns of the dense artery sign and their prognostic implications for the location and extent of the infarcted areas. The dense artery sign was found in 67.3% of cases. Increased density of the whole M1 segment with extension to M2 of the middle cerebral artery was associated with a wider extension of cerebral infarcts in comparison to the M1 segment alone or distal M1 and M2. A dense sylvian branch of the middle cerebral artery was associated with a more restricted extension of the infarct territory. We found that 62.5% of patients without a demonstrable dense artery had a limited peripheral cortical or capsulonuclear lesion. In patients with 7-10 points on the Alberta Stroke Programme Early CT Score and a dense proximal MCA in the first hours of ictus, the mean decrease in the score between baseline and follow-up was 5.09±1.92 points. In conclusion, maximum intensity projections from thin-slice images can be quickly obtained from standard computed tomography datasets using a multidetector scanner and are useful in identifying and correctly localizing the dense artery sign, with prognostic implications for the extent of cerebral damage. (orig.)

  19. Measuring Physical Activity Intensity

    Medline Plus


  20. Erosivity under two durations of maximum rain intensities in Pelotas/RS

    Directory of Open Access Journals (Sweden)

    Jacira Porto dos Santos

    2012-04-01

    Full Text Available In the Universal Soil Loss Equation (USLE), erosivity is the factor related to rainfall and expresses its potential to cause soil erosion; computing it requires the kinetic energy of the rain and the maximum rainfall intensity over a 30-min duration. Thus, the aim of this study was to verify and quantify the impact of the rain duration considered, 15 or 30 min, on the USLE erosivity factor. To achieve this, 863 rain gauge records from the period 1983 to 1998 in the city of Pelotas, RS, were used, obtained from the Agrometeorological Station - Covenant EMBRAPA/UFPel, INMET (31°51'S; 52°21'W; altitude of 13.2 m). From these records, erosivity values were estimated from the maximum rainfall intensities in the durations considered. The average annual values of erosivity were 2551.3 MJ ha-1 h-1 yr-1 and 1406.1 MJ ha-1 h-1 yr-1, for average intensities of 6.40 mm h-1 and 3.74 mm h-1, for durations of 15 and 30 min, respectively. The results of this study show that the percentage of erosive rainfall in relation to total precipitation was 91.0%, and that the erosivity was influenced by the duration of the maximum rainfall intensity.
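
    As a hedged sketch of the erosivity computation discussed above, an EI-type storm erosivity from a fixed-step hyetograph: a Wischmeier-type unit kinetic energy relation is summed over the storm and multiplied by the maximum intensity over a 30-min (or 15-min) window. The coefficients, units and data below are illustrative, and the cited study's exact formulation may differ:

        import numpy as np

        def storm_erosivity(intensity_mm_h, minutes_per_step=5, max_duration_min=30):
            """EI-style erosivity for one storm from a fixed-step hyetograph.

            intensity_mm_h: rainfall intensity in each time step (mm/h).
            Returns erosivity in MJ mm ha-1 h-1 (Wischmeier-type kinetic energy relation).
            """
            i = np.asarray(intensity_mm_h, dtype=float)
            depth_mm = i * minutes_per_step / 60.0                   # rain depth per step
            # Unit kinetic energy (MJ ha-1 mm-1), a commonly cited metric form, capped at 0.283
            e_unit = np.where(i > 0, 0.119 + 0.0873 * np.log10(np.maximum(i, 0.1)), 0.0)
            e_unit = np.minimum(e_unit, 0.283)
            total_energy = np.sum(e_unit * depth_mm)                 # MJ ha-1

            # Maximum intensity over a moving window of max_duration_min (e.g. 30 or 15 min)
            steps = max_duration_min // minutes_per_step
            window_depth = np.convolve(depth_mm, np.ones(steps), mode="valid")
            i_max = window_depth.max() * 60.0 / max_duration_min     # mm h-1

            return total_energy * i_max

        # Hypothetical 5-minute hyetograph (mm/h) of a single storm
        storm = [2, 6, 20, 45, 80, 60, 25, 10, 4, 1]
        print(round(storm_erosivity(storm, max_duration_min=30), 1))
        print(round(storm_erosivity(storm, max_duration_min=15), 1))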

  1. Geometrical differences in target volumes based on 18F-fluorodeoxyglucose positron emission tomography/computed tomography and four-dimensional computed tomography maximum intensity projection images of primary thoracic esophageal cancer.

    Science.gov (United States)

    Guo, Y; Li, J; Wang, W; Zhang, Y; Wang, J; Duan, Y; Shang, D; Fu, Z

    2014-01-01

    The objective of the study was to compare geometrical differences of target volumes based on four-dimensional computed tomography (4DCT) maximum intensity projection (MIP) and 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) images of primary thoracic esophageal cancer for radiation treatment. Twenty-one patients with thoracic esophageal cancer sequentially underwent contrast-enhanced three-dimensional computed tomography (3DCT), 4DCT, and 18F-FDG PET/CT thoracic simulation scans during normal free breathing. The internal gross target volume, defined as IGTVMIP, was obtained by contouring on MIP images. The gross target volumes based on PET/CT images (GTVPET) were determined with nine different standardized uptake value (SUV) thresholds and manual contouring: SUV ≥ 2.0, 2.5, 3.0, 3.5 (SUVn); ≥20%, 25%, 30%, 35%, 40% of the maximum SUV (percentages of SUVmax, SUVn%). The differences in volume ratio (VR), conformity index (CI), and degree of inclusion (DI) between IGTVMIP and GTVPET were investigated. The mean centroid distance between GTVPET and IGTVMIP ranged from 4.98 mm to 6.53 mm. The VR ranged from 0.37 to 1.34, being significantly (P<0.05) closest to 1 at SUV2.5 (0.94), SUV20% (1.07), or manual contouring (1.10). The mean CI ranged from 0.34 to 0.58, being significantly closest to 1 (P<0.05) at SUV2.0 (0.55), SUV2.5 (0.56), SUV20% (0.56), SUV25% (0.53), or manual contouring (0.58). The mean DI of GTVPET in IGTVMIP ranged from 0.61 to 0.91, and the mean DI of IGTVMIP in GTVPET ranged from 0.34 to 0.86. The SUV threshold setting of SUV2.5, SUV20%, or manual contouring yields the best tumor VR and CI with the internal gross target volume contoured on the MIP of the 4DCT dataset, but 3D PET/CT and 4DCT MIP could not replace each other for motion-encompassing target volume delineation for radiation treatment. © 2014 International Society for Diseases of the Esophagus.
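
    For illustration of the two threshold families described above (absolute SUV cut-offs and percentages of SUVmax), a minimal NumPy sketch applied to a hypothetical SUV volume; this is not the scanner's auto-contouring algorithm:

        import numpy as np

        def gtv_mask_absolute(suv: np.ndarray, threshold: float) -> np.ndarray:
            """GTV as all voxels with SUV >= an absolute threshold (e.g. 2.0, 2.5, 3.0, 3.5)."""
            return suv >= threshold

        def gtv_mask_percent_max(suv: np.ndarray, percent: float) -> np.ndarray:
            """GTV as all voxels with SUV >= a percentage of SUVmax (e.g. 20%, 25%, ..., 40%)."""
            return suv >= (percent / 100.0) * suv.max()

        # Hypothetical SUV volume with a hot lesion on a low-uptake background
        rng = np.random.default_rng(4)
        suv = rng.gamma(2.0, 0.3, size=(32, 64, 64))
        suv[14:20, 30:38, 30:38] += 8.0

        for thr in (2.0, 2.5, 3.0, 3.5):
            print(f"SUV>={thr}: {int(gtv_mask_absolute(suv, thr).sum())} voxels")
        for pct in (20, 25, 30, 35, 40):
            print(f"{pct}% of SUVmax: {int(gtv_mask_percent_max(suv, pct).sum())} voxels")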

  2. Detection of urban heat island in Ankara, Turkey

    International Nuclear Information System (INIS)

    Cicek, I.; Dogan, U.

    2006-01-01

    Ankara is the second largest city in Turkey after Istanbul, and its rates of population increase and urbanization are quite high. In this study, the effects of urbanization on temperature variation in Ankara were investigated. The intensities of the urban heat island (UHI) were analyzed for the long and short term. Analysis of both long- and short-term data revealed that there was a significant increase in the intensity of the UHI (ΔT(u-r)) in winter during the period analyzed. Analysis of data collected for the period October 2001-September 2002 shows that the maximum UHI intensity occurs in February. In this month, a positive UHI was observed on 26 nights, and on all these days the wind speed was less than 0.5 m s-1. The UHI is positive in all seasons, and the frequency and intensity of the UHI in winter are higher than in the other seasons. This characteristic makes Ankara different from other temperate-latitude cities.

  3. Comparison of primary tumour volumes delineated on four-dimensional computed tomography maximum intensity projection and 18F-fluorodeoxyglucose positron emission tomography computed tomography images of non-small cell lung cancer

    International Nuclear Information System (INIS)

    Duan, Yili; Li, Jianbin; Zhang, Yingjie; Wang, Wei; Fan, Tingyong; Shao, Qian; Xu, Min; Guo, Yanluan; Sun, Xiaorong; Shang, Dongping

    2015-01-01

    The study aims to compare the positional and volumetric differences of tumour volumes based on the maximum intensity projection (MIP) of four-dimensional CT (4DCT) and 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography CT (PET/CT) images for the primary tumour of non-small cell lung cancer (NSCLC). Ten patients with NSCLC underwent 4DCT and 18F-FDG PET/CT scans of the thorax on the same day. Internal gross target volumes (IGTVs) of the primary tumours were contoured on the MIP images of 4DCT to generate IGTVMIP. Gross target volumes (GTVs) based on PET (GTVPET) were determined with nine different threshold methods using the auto-contouring function. The differences in the volume, position, matching index (MI) and degree of inclusion (DI) of the GTVPET and IGTVMIP were investigated. In volume terms, GTVPET2.0 and GTVPET20% approximated closely to IGTVMIP, with mean volume ratios of 0.93 ± 0.45 and 1.06 ± 0.43, respectively. The best MI was between IGTVMIP and GTVPET20% (0.45 ± 0.23). The best DI of IGTVMIP in GTVPET was that of IGTVMIP in GTVPET20% (0.61 ± 0.26). In 3D PET images, the GTVPET contoured by a standardised uptake value (SUV) of 2.0 or 20% of the maximal SUV (SUVmax) approximates closely to the IGTVMIP in target size, while the spatial mismatch between them is apparent. Therefore, neither of them could replace IGTVMIP in spatial position and form. The advent of 4D PET/CT may improve the accuracy of contouring the perimeter for moving targets.

  4. The Impact of the Urban Heat Island during an Intense Heat Wave in Oklahoma City

    Directory of Open Access Journals (Sweden)

    Jeffrey B. Basara

    2010-01-01

    Full Text Available During late July and early August 2008, an intense heat wave occurred in Oklahoma City. To quantify the impact of the urban heat island (UHI) in Oklahoma City on observed and apparent temperature conditions during the heat wave event, this study used observations from 46 locations in and around Oklahoma City. The methodology utilized composite values of atmospheric conditions for three primary categories defined by population and general land use: rural, suburban, and urban. The results of the analyses demonstrated that a consistent UHI existed during the study period whereby the composite temperature values within the urban core were approximately 0.5 °C warmer during the day than the rural areas and over 2 °C warmer at night. Further, when the warmer temperatures were combined with ambient humidity conditions, the composite values consistently revealed even warmer heat-related variables within the urban environment as compared with the rural zone.

   5. Morphological responses of irrigated Tanzania grass (Panicum maximum Jacq. cv. Tanzania-1) to grazing intensity under rotational stocking

    Directory of Open Access Journals (Sweden)

    Alexandre Carneiro Leão de Mello

    2004-04-01

    Full Text Available With the objective of quantifying morphological responses of Tanzania grass (Panicum maximum Jacq. cv. Tanzania-1) swards under three grazing intensities, rotational stocking and irrigation, an experiment was carried out in a randomized complete block design with four replications. Treatments consisted of three grazing intensities, represented by the post-graze residual green dry mass (T1=1,000; T2=2,500 and T3=4,000 kg GDM/ha). During eight grazing cycles (33-day regrowths after three days of grazing in each cycle), mean sward height, leaf area index (LAI), light interception (LI) and mean foliage angles were evaluated on four days within the regrowth period (1, 11, 22 and 33 days after the animals left the paddocks). Partial correlation analysis indicated correlations between height and LI, as well as between LAI and LI. As the grazing season progressed from spring-summer to autumn-winter, mean LAI values decreased. Mean critical LAI values (95% LI) of 3.6 (T1), 4.0 (T2) and 4.5 (T3) were reached around day 22 of regrowth. The highest grazing intensity (lowest residue) altered sward structure with respect to canopy architecture, as evidenced by the reduction in mean foliage angles (more horizontal leaves) over the seasons, with plants intercepting more light per unit of leaf area. The measured critical LAIs suggest the need for rest periods shorter than 33 days in Tanzania grass pastures under intensive grazing with rotational stocking and irrigation.

  6. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
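
    As background to the maximum parsimony (MP) criterion discussed above, a minimal sketch of the Fitch small-parsimony score of one character on a single fixed topology; this illustrates the scoring criterion only and is not the Steiner-tree approximation of the cited paper:

        def fitch_parsimony(tree, leaf_states):
            """Fitch small-parsimony score of one character on a fixed rooted binary tree.

            tree: nested tuples of leaf names, e.g. (("A", "B"), ("C", "D")).
            leaf_states: dict mapping leaf name -> character state.
            Returns (candidate state set at the root, number of state changes).
            """
            if isinstance(tree, str):                      # leaf
                return {leaf_states[tree]}, 0
            left, right = tree
            set_l, cost_l = fitch_parsimony(left, leaf_states)
            set_r, cost_r = fitch_parsimony(right, leaf_states)
            inter = set_l & set_r
            if inter:
                return inter, cost_l + cost_r              # agreement: no extra change
            return set_l | set_r, cost_l + cost_r + 1      # disagreement: one change

        topology = ((("A", "B"), "C"), ("D", "E"))
        states = {"A": "G", "B": "G", "C": "T", "D": "T", "E": "A"}
        print(fitch_parsimony(topology, states))           # yields a score of 2 changes here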

  7. Efficacy on maximum intensity projection of contrast-enhanced 3D spin echo imaging with improved motion-sensitized driven-equilibrium preparation in the detection of brain metastases

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Yun Jung; Choi, Byung Se; Yoon, Yeon Hong; Woo, Leonard Sun; Jung, Cheol Kyu; Kim, Jae Hyoung [Dept. of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Lee, Kyung Mi [Dept. of Radiology, Kyung Hee University College of Medicine, Kyung Hee University Hospital, Seoul (Korea, Republic of)

    2017-08-01

    To evaluate the diagnostic benefits of 5-mm maximum intensity projection of improved motion-sensitized driven-equilibrium prepared contrast-enhanced 3D T1-weighted turbo-spin echo imaging (MIP iMSDE-TSE) in the detection of brain metastases. The imaging technique was compared with 1-mm images of iMSDE-TSE (non-MIP iMSDE-TSE), 1-mm contrast-enhanced 3D T1-weighted gradient-echo imaging (non-MIP 3D-GRE), and 5-mm MIP 3D-GRE. From October 2014 to July 2015, 30 patients with 460 enhancing brain metastases (size > 3 mm, n = 150; size ≤ 3 mm, n = 310) were scanned with non-MIP iMSDE-TSE and non-MIP 3D-GRE. We then performed 5-mm MIP reconstruction of these images. Two independent neuroradiologists reviewed these four sequences. Their diagnostic performance was compared using the following parameters: sensitivity, reading time, and figure of merit (FOM) derived by jackknife alternative free-response receiver operating characteristic analysis. Interobserver agreement was also tested. The mean FOM (all lesions, 0.984; lesions ≤ 3 mm, 0.980) and sensitivity ([reader 1: all lesions, 97.3%; lesions ≤ 3 mm, 96.2%], [reader 2: all lesions, 97.0%; lesions ≤ 3 mm, 95.8%]) of MIP iMSDE-TSE was comparable to the mean FOM (0.985, 0.977) and sensitivity ([reader 1: 96.7, 99.0%], [reader 2: 97, 95.3%]) of non-MIP iMSDE-TSE, but they were superior to those of non-MIP and MIP 3D-GREs (all, p < 0.001). The reading time of MIP iMSDE-TSE (reader 1: 47.7 ± 35.9 seconds; reader 2: 44.7 ± 23.6 seconds) was significantly shorter than that of non-MIP iMSDE-TSE (reader 1: 78.8 ± 43.7 seconds, p = 0.01; reader 2: 82.9 ± 39.9 seconds, p < 0.001). Interobserver agreement was excellent (κ > 0.75) for all lesions in both sequences. MIP iMSDE-TSE showed high detectability of brain metastases. Its detectability was comparable to that of non-MIP iMSDE-TSE, but it was superior to the detectability of non-MIP/MIP 3D-GREs. With a shorter reading time, the false-positive results of MIP i
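
    For readers unfamiliar with the reconstruction step, a 5-mm MIP of 1-mm slices simply takes the voxel-wise maximum over slabs of five consecutive slices. The sketch below illustrates this in generic terms; it is not the scanner or workstation pipeline used in the study, and the synthetic volume is purely illustrative.

```python
import numpy as np

def slab_mip(volume: np.ndarray, slab_thickness: int = 5, step: int = 5) -> np.ndarray:
    """Maximum intensity projection over slabs along the slice axis.

    volume: 3D array of shape (n_slices, ny, nx), e.g. 1-mm contrast-enhanced slices.
    slab_thickness: number of consecutive slices combined into one MIP image
                    (five 1-mm slices give roughly a 5-mm MIP).
    step: distance between slab starts; step == slab_thickness gives
          non-overlapping slabs.
    Returns an array of shape (n_slabs, ny, nx).
    """
    n = volume.shape[0]
    starts = range(0, n - slab_thickness + 1, step)
    return np.stack([volume[s:s + slab_thickness].max(axis=0) for s in starts])

# Synthetic example: a bright 2-voxel "lesion" stays visible in the 5-mm MIP.
vol = np.random.normal(100, 5, size=(30, 64, 64))
vol[12:14, 40, 40] += 60
mip = slab_mip(vol, slab_thickness=5, step=5)
print(mip.shape, mip[:, 40, 40].round(1))
```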

  8. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  9. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the output of the accelerometer rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, and avoids the cost of playback and analysis of data recorded in magnetic or electronic memory devices. The circuit is used, for example, to record accelerations to which commodities are subjected during transportation on trucks.

  10. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  11. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charge for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  12. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    International Nuclear Information System (INIS)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex

    2012-01-01

    Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
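
    Expressed as a formula, the quoted scaling can be written as follows (the notation is introduced here only to restate the sentence above and is not copied from the paper):

```latex
% Restated scaling for the maximum compression ratio in the limit of large
% voltage errors (\delta U \gg \Delta E_b); the symbols are illustrative labels only.
C_{\max} \;\propto\; \left(\frac{\delta U}{U_0}\cdot\frac{\Delta E_b}{E_b}\right)^{-1/2},
\qquad \delta U \gg \Delta E_b .
```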

  13. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  14. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  15. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a highly efficient power electronic converter, for converting the output voltage of a solar panel, or wind generator, to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small-rating Remote Area Power Supply systems. The advantages at larger temperature variations and larger power rated systems are much higher. Other advantages include optimal sizing and system monitoring and control
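
    The hill-climbing (perturb-and-observe) strategy mentioned above can be sketched in a few lines: perturb the converter duty cycle, observe whether the charging current rises or falls, and reverse direction when it falls. The toy photovoltaic model and the parameter values below are assumptions for illustration, not the hardware described in the paper.

```python
# Minimal sketch of hill-climbing MPPT that maximizes output current into a
# battery at a roughly constant bus voltage.
def pv_current(duty: float, irradiance: float = 1.0) -> float:
    """Toy battery-charging current vs. converter duty cycle (single peak)."""
    peak_duty = 0.55 * irradiance + 0.2
    return max(0.0, 8.0 * irradiance - 40.0 * (duty - peak_duty) ** 2)

def mppt_hill_climb(steps: int = 50, duty: float = 0.4, delta: float = 0.01) -> float:
    prev_current = pv_current(duty)
    direction = +1
    for _ in range(steps):
        duty += direction * delta            # perturb the operating point
        current = pv_current(duty)
        if current < prev_current:           # got worse: reverse the perturbation
            direction = -direction
        prev_current = current               # observe and repeat
    return duty

print(f"duty cycle after tracking: {mppt_hill_climb():.3f}")
```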

  16. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... Measuring Intensity: Target Heart Rate and Estimated Maximum Heart Rate; Perceived Exertion (Borg Rating of Perceived Exertion Scale)
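
    The page above is built around the common rule-of-thumb estimate that maximum heart rate is roughly 220 minus age, with a target zone defined as a percentage band of that maximum. A small illustrative calculation is shown below; the band limits are example values and may differ from the exact percentages defined on the linked page.

```python
def target_heart_rate_zone(age: int, low: float = 0.64, high: float = 0.76) -> tuple:
    """Return an illustrative (lower, upper) target zone in beats per minute."""
    est_max_hr = 220 - age        # common rule-of-thumb estimate of maximum heart rate
    return round(est_max_hr * low), round(est_max_hr * high)

age = 50
print(f"Estimated max HR: {220 - age} bpm; "
      f"moderate-intensity zone: {target_heart_rate_zone(age)} bpm")
```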

  17. P3: An installation for high-energy density plasma physics and ultra-high intensity laser–matter interaction at ELI-Beamlines

    Directory of Open Access Journals (Sweden)

    S. Weber

    2017-07-01

    Full Text Available ELI-Beamlines (ELI-BL), one of the three pillars of the Extreme Light Infrastructure endeavour, will be in a unique position to perform research in high-energy-density physics (HEDP), plasma physics and ultra-high intensity (UHI) (>10²² W/cm²) laser–plasma interaction. Recently the need for HED laboratory physics was identified, and the P3 (plasma physics platform) installation under construction in ELI-BL will be an answer. The ELI-BL 10 PW laser makes possible fundamental research topics from high-field physics to new extreme states of matter such as radiation-dominated ones, high-pressure quantum ones, warm dense matter (WDM) and ultra-relativistic plasmas. HEDP is of fundamental importance for research in the field of laboratory astrophysics and inertial confinement fusion (ICF). Reaching such extreme states of matter now and in the future will depend on the use of plasma optics for amplifying and focusing laser pulses. This article will present the relevant technological infrastructure being built in ELI-BL for HEDP and UHI, and gives a brief overview of some research under way in the fields of UHI, laboratory astrophysics, ICF, WDM, and plasma optics.

  18. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  19. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ∼14.5 ka.

  20. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
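
    For the Mean Energy Model mentioned above, the maximum entropy distribution under a mean-energy constraint takes the familiar Gibbs form p_i ∝ exp(-β E_i). The sketch below finds the multiplier β by bisection for a toy set of energies; the numbers are illustrative and the code is not taken from the paper.

```python
import numpy as np

def max_entropy_given_mean_energy(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Gibbs distribution matching a mean-energy constraint (toy bisection on beta)."""
    E = np.asarray(energies, dtype=float)

    def mean_energy(beta):
        w = np.exp(-beta * (E - E.min()))     # shift energies for numerical stability
        p = w / w.sum()
        return float(p @ E), p

    for _ in range(iters):                    # mean energy decreases monotonically in beta
        mid = 0.5 * (lo + hi)
        m, p = mean_energy(mid)
        if m > target_mean:
            lo = mid
        else:
            hi = mid
    return p

E = [0.0, 1.0, 2.0, 3.0]
p = max_entropy_given_mean_energy(E, target_mean=1.2)
print(p.round(4), "mean energy:", round(float(p @ np.asarray(E)), 3))
```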

  1. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility

  2. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  3. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  4. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  5. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA in [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between ... Conclusions. MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially ...

  6. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  7. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  8. Stochastic conditional intensity processes

    DEFF Research Database (Denmark)

    Bauwens, Luc; Hautsch, Nikolaus

    2006-01-01

    In this article, we introduce the so-called stochastic conditional intensity (SCI) model by extending Russell's (1999) autoregressive conditional intensity (ACI) model by a latent common dynamic factor that jointly drives the individual intensity components. We show by simulations that the proposed model allows for a wide range of (cross-)autocorrelation structures in multivariate point processes. The model is estimated by simulated maximum likelihood (SML) using the efficient importance sampling (EIS) technique. By modeling price intensities based on NYSE trading, we provide significant evidence for a joint latent factor and show that its inclusion allows for an improved and more parsimonious specification of the multivariate intensity process...

  9. Maximum intensity of rarefaction shock waves for dense gases

    NARCIS (Netherlands)

    Guardone, A.; Zamfirescu, C.; Colonna, P.

    2009-01-01

    Modern thermodynamic models indicate that fluids consisting of complex molecules may display non-classical gasdynamic phenomena such as rarefaction shock waves (RSWs) in the vapour phase. Since the thermodynamic region in which non-classical phenomena are physically admissible is finite in terms of

  10. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  11. Parâmetros para equações mensais de estimativas de precipitação de intensidade máxima para o estado de São Paulo: fase I / Parameters for monthly equations of maximum intensity estimates of rain for the São Paulo state: phase I

    Directory of Open Access Journals (Sweden)

    José Carlos Ferreira

    2005-12-01

    Full Text Available In this phase of the work, the objective was to estimate parameters for monthly equations of maximum-intensity precipitation for durations of 5, 10, 15, 20, 25, 30 and 60 minutes at 165 locations in São Paulo State. From monthly data of 31-year historical series of maximum "one-day" precipitation, the Gumbel probability distribution was used to calculate the probability of occurrence of extreme values in each month. Using the methodology proposed by Occhipinti & Santos (1966), the maximum "one-day" rainfalls were disaggregated into maximum-intensity precipitation over 24 hours and over the seven durations described above, for each of the 165 locations and in each month. The alpha and beta parameters were calculated for each of the seven rainfall durations, with F(x) = 90%, at each of the 165 proposed locations. The series of maximum "one-day" precipitation were subjected to the Kolmogorov-Smirnov test, confirming a good fit to the Gumbel distribution. The methodology performed well, considering that the relative percentage differences between the maximum precipitations obtained with the alpha and beta parameters for 25 locations and those obtained with the Occhipinti methodology were generally smaller than 0.5%.
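
    The Gumbel step described above can be sketched as follows: fit location and scale parameters to a series of maximum one-day precipitation values and evaluate the quantile with non-exceedance probability F(x) = 0.90. The sample values and the disaggregation coefficient in this sketch are placeholders, not the Occhipinti & Santos (1966) coefficients or the São Paulo data.

```python
import numpy as np

def gumbel_quantile(sample, prob=0.90):
    """Gumbel quantile from a method-of-moments fit to annual/monthly maxima."""
    x = np.asarray(sample, dtype=float)
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi      # scale parameter
    mu = x.mean() - 0.5772 * beta                    # location (Euler-Mascheroni const.)
    return mu - beta * np.log(-np.log(prob))         # inverse Gumbel CDF

one_day_max_mm = [62.0, 48.5, 71.2, 55.0, 80.3, 66.1, 58.7, 74.9, 51.3, 69.0]
p90_one_day = gumbel_quantile(one_day_max_mm, prob=0.90)

# Illustrative disaggregation: convert the "one-day" maximum to a 24-h maximum
# with a multiplicative coefficient (value assumed here for demonstration only).
k_24h = 1.14
print(f"P90 one-day maximum: {p90_one_day:.1f} mm; "
      f"assumed 24-h maximum: {p90_one_day * k_24h:.1f} mm")
```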

  12. The spatial variability of air temperature and nocturnal urban heat island intensity in the city of Brno, Czech Republic

    Directory of Open Access Journals (Sweden)

    Dobrovolný Petr

    2015-09-01

    Full Text Available This study seeks to quantify the effects of a number of factors on the nocturnal air temperature field in a medium-sized central European city located in complex terrain. The main data sources consist of mobile air temperature measurements and a geographical database. Temperature measurements were taken along several profiles through the city centre and were made under a clear sky with no advection. Altogether nine sets of detailed measurements, in all seasons, were assembled. Altitude, quantity of vegetation, density of buildings and the structure of the transportation (road) system were considered as explanatory variables. The result is that the normalized difference vegetation index (NDVI) and the density of buildings were the most important factors, each of them explaining a substantial part (more than 50%) of overall air temperature variability. Mobile measurements with NDVI values as a covariate were used for interpolation of air temperature for the entire study area. The spatial variability of nocturnal air temperature and UHI intensity in Brno is the main output presented. Air temperatures interpolated from mobile measurements and NDVI values indicate that the mean urban heat island (UHI) intensity in the early night in summer is at its highest (approximately 5 °C) in the city centre and decreases towards the suburban areas.

  13. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  14. Image coding based on maximum entropy partitioning for identifying ...

    Indian Academy of Sciences (India)

    A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization ...

  15. The Urban Heat Island Effect and the Role of Vegetation to Address the Negative Impacts of Local Climate Changes in a Small Brazilian City

    Directory of Open Access Journals (Sweden)

    Elis Dener Lima Alves

    2017-02-01

    Full Text Available This study analyzes the influence of urban-geographical variables on determining heat islands and proposes a model to estimate and spatialize the maximum intensity of urban heat islands (UHI). Simulations of the UHI based on the increase of the normalized difference vegetation index (NDVI), using multiple linear regression, in Iporá (Brazil) are also presented. The results showed that the UHI intensity of this small city tended to be lower than that of bigger cities. Urban geometry and vegetation (UI and NDVI) were the variables that contributed the most to explain the variability of the maximum UHI intensity. It was observed that areas located in valleys had lower thermal values, suggesting a cool island effect. With the increase in NDVI in the central area of a maximum UHI, there was a significant decrease in its intensity and size (a 45% area reduction). It is noteworthy that it was possible to spatialize the UHI to the whole urban area by using multiple linear regression, providing an analysis of the urban set from urban-geographical variables and thus performing prognostic simulations that can be adapted to other small tropical cities.
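
    A minimal sketch of the regression idea above follows: fit maximum UHI intensity as a linear function of NDVI and an urban geometry index, then re-evaluate the model with a higher NDVI to mimic a greening scenario. All data and coefficients are synthetic and purely illustrative; the variable names are assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
ndvi = rng.uniform(0.1, 0.7, n)          # vegetation index at each measurement point
ui = rng.uniform(0.0, 1.0, n)            # hypothetical urban geometry index
uhi_max = 4.0 - 3.5 * ndvi + 2.0 * ui + rng.normal(0, 0.3, n)   # synthetic "observations"

# Ordinary least squares fit: UHImax = b0 + b_ndvi*NDVI + b_ui*UI
X = np.column_stack([np.ones(n), ndvi, ui])
coef, *_ = np.linalg.lstsq(X, uhi_max, rcond=None)
b0, b_ndvi, b_ui = coef
print(f"fit: UHImax = {b0:.2f} {b_ndvi:+.2f}*NDVI {b_ui:+.2f}*UI")

# Prognostic simulation: raise NDVI by 0.2 in the central area and predict UHImax.
central = {"ndvi": 0.15, "ui": 0.9}
for scenario_ndvi in (central["ndvi"], central["ndvi"] + 0.2):
    pred = b0 + b_ndvi * scenario_ndvi + b_ui * central["ui"]
    print(f"NDVI={scenario_ndvi:.2f} -> predicted UHImax ≈ {pred:.2f} °C")
```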

  16. Mitigating the UHI: Considerations for Southern African cities

    CSIR Research Space (South Africa)

    Naidoo, Sasha

    2016-12-01

    Full Text Available Urbanisation in South Africa is expected to increase to 71.3 % in 2030 and reach nearly 80% by 2050, with the City of Johannesburg projected to surpass the 10 million population mark and emerge as a megacity by 2030. Urbanised cities generally have...

  17. Mitigating the UHI: Considerations for Southern African cities

    CSIR Research Space (South Africa)

    Naidoo, Sasha

    2016-12-01

    Full Text Available replaced natural land surfaces with materials that retain heat, as well as have waste heat from buildings, motor vehicles and industries. In altering the local environment, this can result in local environmental stresses. Rapid urbanisation coupled...

  18. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  19. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W Giacobbe, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.

  20. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  1. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore

  2. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by the lamp switching out. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system.

  3. Força muscular respiratória, postura corporal, intensidade vocal e tempos máximos de fonação na Doença de Parkinson / Respiratory muscle strength, body posture, vocal intensity and maximum phonation times in Parkinson Disease

    Directory of Open Access Journals (Sweden)

    Fernanda Vargas Ferreira

    2012-04-01

    Full Text Available PURPOSE: To verify the findings on respiratory muscle strength (RMS), body posture (BP), vocal intensity (VI) and maximum phonation times (MPT) in individuals with Parkinson Disease (PD) and control cases, according to gender, PD stage and level of physical activity (PA). METHODS: three men and two women with PD, between 36 and 63 years old (study cases - SC), and five subjects without neurological diseases, matched for age, gender and PA level (control cases - CC). RMS, BP, VI and MPT were evaluated. RESULTS: men: a more pronounced decrease of MPT, VI and RMS in the Parkinson patients, and more postural alterations in the elderly; women with and without PD: similar postural alterations, and a positive relation between PD stage, PA level and the other measures. CONCLUSIONS: In women with PD, VI was impaired; in men with PD there were deficits in MPT, VI and RMS. Further studies with an interdisciplinary approach are suggested.

  4. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always brings a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  5. Recruiting intensity

    OpenAIRE

    R. Jason Faberman

    2014-01-01

    To hire new workers, employers use a variety of recruiting methods in addition to posting a vacancy announcement. The intensity with which employers use these alternative methods can vary widely with a firm’s performance and with the business cycle. In fact, persistently low recruiting intensity helps to explain the sluggish pace of US job growth following the Great Recession.

  6. Intensity of Urban Heat Islands in Tropical and Temperate Climates

    Directory of Open Access Journals (Sweden)

    Margarete Cristiane de Costa Trindade Amorim

    2017-12-01

    Full Text Available Nowadays, most of the Earth's population lives in urban areas. The replacement of vegetation by buildings and the general soil sealing, associated with human activity, lead to a rise in city temperatures, resulting in the formation of urban heat islands. This article aims to evaluate the intensity and the hourly persistence of the atmospheric heat islands in two climates: one tropical (Presidente Prudente, Brazil) and one temperate (Rennes, France), throughout 2016. For this, air temperature and hourly averages were measured and calculated using HOBO dataloggers (U23-002), protected under the same RS3-brand shield, and Davis Vantage PRO 2 weather stations. The daily evolution of the heat islands presented characteristics that varied according to the hours and seasons of the year. For both Rennes and Presidente Prudente, the largest magnitudes occurred overnight, being more pronounced in the tropical environment and during the driest months (winter in the tropical city and summer in the temperate one). The variability of synoptic conditions from one month to another also leads to a great heterogeneity of UHI intensity throughout the year.

  7. Occurrence and Impact of Insects in Maximum Growth Plantations

    Energy Technology Data Exchange (ETDEWEB)

    Nowak, J.T.; Berisford, C.W.

    2001-01-01

    This study investigated the relationships between intensive management practices and insect infestation using maximum growth potential studies of loblolly pine constructed over five years with a hierarchy of cultural treatments, monitoring differences in growth and insect infestation levels related to the increasing management intensities. The study shows that tree fertilization can increase coneworm infestation and demonstrated that tip moth management improved tree growth, at least initially.

  8. Assessing the effect of wind speed/direction changes on urban heat island intensity of Istanbul.

    Science.gov (United States)

    Perim Temizoz, Huriye; Unal, Yurdanur S.

    2017-04-01

    Assessing the effect of wind speed/direction changes on urban heat island intensity of Istanbul. Perim Temizöz, Deniz H. Diren, Cemre Yürük and Yurdanur S. Ünal, Istanbul Technical University, Department of Meteorological Engineering, Maslak, Istanbul, Turkey. City and metropolitan areas are significantly warmer than the outlying rural areas since the urban fabrics and artificial surfaces, which have different radiative, thermal and aerodynamic features, alter the surface energy balance, interact with the regional circulation and introduce anthropogenic sensible heat and moisture into the atmosphere. The temperature contrast between urban and rural areas is most prominent during nighttime since heat is absorbed by day and emitted by night. The intensity of the urban heat island (UHI) varies considerably depending on the prevailing meteorological conditions and the characteristics of the region. Even though urban areas cover a small fraction of Earth, their climate has a disproportionately large impact on the world's population. Over half of the world's population lives in cities, and this share is expected to rise within the coming decades. Today almost one fifth of Turkey's population resides in Istanbul, with the percentage expected to increase due to the greater job opportunities compared to other cities. Its population has increased from 2 million to 14 million since the 1960s. Consequently, the city has expanded tremendously within the last half century, shifting the landscape from vegetation to built-up areas. The observations of the last fifty years over Istanbul show that the UHI is most pronounced during the summer season. The seasonal temperature differences between urban and suburban sites reach up to 3 K, and an increase of roughly half a degree in UHI intensity is observed after 2000. In this study, we explore the possible range of heat load and distribution over Istanbul for different prevailing wind conditions by using the non-hydrostatic MUKLIMO3 model developed by DWD

  9. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data, that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  10. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determining the maximum water hammer is one of the most important technical and economic considerations for engineers and designers of pumping stations and conveyance pipelines. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...

  11. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.

  12. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  13. Multiple timescale analysis of the urban heat island effect based on the Community Land Model: a case study of the city of Xi'an, China.

    Science.gov (United States)

    Gao, Meiling; Shen, Huanfeng; Han, Xujun; Li, Huifang; Zhang, Liangpei

    2017-12-06

    Urban heat islands (UHIs) are the phenomenon of urban regions usually being warmer than rural regions, which significantly impacts both the regional ecosystem and societal activities. Numerical simulation can provide spatially and temporally continuous datasets for UHI analysis. In this study, a spatially and temporally continuous ground temperature dataset of Xi'an, China was obtained through numerical simulation based on the Community Land Model version 4.5 (CLM4.5), at a temporal resolution of 30 min and a spatial resolution of 0.05° × 0.05°. Based on the ground temperature, the seasonal average UHI intensity (UHII) was calculated and the seasonal variation of the UHI effect was analyzed. The monthly variation tendency of the urban heat stress was also investigated. Based on the diurnal cycle of ground temperature and the UHI effect in each season, the variation tendencies of the maximum, minimum, and average UHII were analyzed. The results show that the urban heat stress in summer is the strongest among all four seasons. The heat stress in urban areas is very significant in July, and the UHII is the weakest in January. Regarding the diurnal cycle of UHII, the maximum always appears at 06:30 UTC to 07:30 UTC, while the minimum intensity of the UHI effect occurs at different times in the different seasons. The results of this study could provide a reference for policymakers about how to reduce the damage caused by heat stress.
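
    One common way to compute UHII from a gridded temperature field, consistent with the urban-minus-rural definition used above, is to difference the means over urban and rural grid cells at each time step. The sketch below uses a synthetic mask and a toy diurnal cycle; it is not CLM4.5 output.

```python
import numpy as np

def uhii(temp_grid: np.ndarray, urban_mask: np.ndarray) -> float:
    """UHI intensity: mean over urban cells minus mean over the remaining (rural) cells."""
    return float(temp_grid[urban_mask].mean() - temp_grid[~urban_mask].mean())

ny, nx = 20, 20
urban_mask = np.zeros((ny, nx), dtype=bool)
urban_mask[8:12, 8:12] = True                       # a small hypothetical urban core

rng = np.random.default_rng(1)
hours = np.arange(0, 24, 0.5)                       # 30-min resolution, as in the study
series = []
for h in hours:
    rural_t = 20 + 8 * np.sin((h - 9) / 24 * 2 * np.pi)            # toy diurnal cycle
    grid = rural_t + rng.normal(0, 0.2, (ny, nx))
    grid[urban_mask] += 1.5 + 1.0 * np.cos((h - 15) / 24 * 2 * np.pi)  # toy urban excess
    series.append(uhii(grid, urban_mask))

series = np.array(series)
print(f"max UHII {series.max():.2f} K at {hours[series.argmax()]:.1f} h, "
      f"min UHII {series.min():.2f} K at {hours[series.argmin()]:.1f} h")
```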

  14. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  15. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  16. MAXIMUM RUNOFF OF THE FLOOD ON WADIS OF NORTHERN ...

    African Journals Online (AJOL)

    lanez

    The technique for computing the maximum flood runoff for the rivers of the northern part of Algeria is based on the theory of ... north to south: 1) the coastal Tell – a fertile, highly cultivated and sown zone; 2) the territory of the Atlas Mountains ... In the first case, the empirical dependence between the maximum intensity of precipitation for some calculation ...

  17. Gamma-ray spectra deconvolution by maximum-entropy methods

    International Nuclear Information System (INIS)

    Los Arcos, J.M.

    1996-01-01

    A maximum-entropy method which includes the response of detectors and the statistical fluctuations of spectra is described and applied to the deconvolution of γ-ray spectra. Resolution enhancement of 25% can be reached for experimental peaks and up to 50% for simulated ones, while the intensities are conserved within 1-2%. (orig.)

  18. Maximum power flux of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Benson, R.F.; Fainberg, J.

    1991-01-01

    The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 × 10⁻¹³ W m⁻² Hz⁻¹ at 360 kHz normalized to a radial distance r of 25 R_E assuming the power falls off as r⁻². A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3

  19. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based ...

  20. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
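
    The energy-balance reasoning summarized above can be made concrete by solving a simplified balance for the surface temperature. The sketch below assumes illustrative values for the turbulent exchange coefficient, ground heat flux, and incoming longwave radiation; these are not the values used in the cited study.

```python
# Solve eps*sigma*Ts^4 + h*(Ts - Ta) + G = SW_abs + LW_down for Ts by bisection.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(absorbed_sw=1000.0, lw_down=400.0, t_air=328.0,
                        h_coeff=15.0, ground_flux=50.0, emissivity=0.95):
    def residual(ts):
        return (emissivity * SIGMA * ts**4 + h_coeff * (ts - t_air)
                + ground_flux - absorbed_sw - lw_down)
    lo, hi = t_air, t_air + 100.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return mid

ts = surface_temperature()
print(f"Surface temperature ≈ {ts - 273.15:.1f} °C")
```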

  1. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected is dependent on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  2. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c⁵/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.

  3. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  4. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  5. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  6. Human Influence on Tropical Cyclone Intensity

    Science.gov (United States)

    Sobel, Adam H.; Camargo, Suzana J.; Hall, Timothy M.; Lee, Chia-Ying; Tippett, Michael K.; Wing, Allison A.

    2016-01-01

    Recent assessments agree that tropical cyclone intensity should increase as the climate warms. Less agreement exists on the detection of recent historical trends in tropical cyclone intensity. We interpret future and recent historical trends by using the theory of potential intensity, which predicts the maximum intensity achievable by a tropical cyclone in a given local environment. Although greenhouse gas-driven warming increases potential intensity, climate model simulations suggest that aerosol cooling has largely canceled that effect over the historical record. Large natural variability complicates analysis of trends, as do poleward shifts in the latitude of maximum intensity. In the absence of strong reductions in greenhouse gas emissions, future greenhouse gas forcing of potential intensity will increasingly dominate over aerosol forcing, leading to substantially larger increases in tropical cyclone intensities.

  7. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  8. Intensive mobilities

    DEFF Research Database (Denmark)

    Vannini, Phillip; Bissell, David; Jensen, Ole B.

    This paper explores the intensities of long distance commuting journeys as a way of exploring how bodily sensibilities are being changed by the mobilities that they undertake. The context of this paper is that many people are travelling further to work than ever before owing to a variety of factors which relate to transport, housing and employment. Yet we argue that the experiential dimensions of long distance mobilities have not received the attention that they deserve within geographical research on mobilities. This paper combines ideas from mobilities research and contemporary social theory with fieldwork conducted in Canada, Denmark and Australia to develop our understanding of the experiential politics of long distance workers. Rather than focusing on the extensive dimensions of mobilities that are implicated in patterns and trends, our paper turns to the intensive dimensions of this experience...

  9. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
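
    For orientation, the sketch below fits the classical entropy-maximizing, doubly constrained trip distribution by iterative balancing; it is the standard constrained formulation referred to above, not the authors' dependence formulation, and the origin/destination totals, cost matrix and deterrence parameter are made-up values.

```python
# Sketch: classic entropy-maximizing doubly-constrained trip distribution,
# T_ij = A_i O_i B_j D_j exp(-beta * c_ij), solved by iterative balancing.
# Illustrative data only; not the paper's constraint-free dependence model.
import numpy as np

O = np.array([400.0, 300.0, 300.0])       # trips produced at each origin (assumed)
D = np.array([350.0, 400.0, 250.0])       # trips attracted to each destination (assumed)
cost = np.array([[2.0, 5.0, 8.0],
                 [5.0, 3.0, 4.0],
                 [8.0, 4.0, 2.0]])         # travel cost matrix (assumed)
beta = 0.4                                 # deterrence parameter (assumed)

F = np.exp(-beta * cost)                   # deterrence function
A = np.ones(len(O))
for _ in range(200):                       # balance row and column factors
    B = 1.0 / (F.T @ (A * O))
    A = 1.0 / (F @ (B * D))

T = (A * O)[:, None] * (B * D)[None, :] * F
print(np.round(T, 1))
print("row sums:", np.round(T.sum(axis=1), 1))   # should reproduce O
print("col sums:", np.round(T.sum(axis=0), 1))   # should reproduce D
```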

  10. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the research on the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. This method has been validated in experiments, where it provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1 % of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of significant frequencies.
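
    A bare-bones illustration of time-delay estimation from the peak of a cross-correlation function is sketched below; the maximum likelihood window developed in the paper is not reproduced, and the sampling rate, delay and noise level are synthetic assumptions.

```python
# Sketch: estimate the delay between two noisy sensor signals from the peak
# of their cross-correlation. The ML frequency weighting discussed in the
# paper is omitted; this is the plain correlation estimate on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                        # sampling rate, Hz (assumed)
n = 4096
true_delay = 0.0123                # seconds (assumed)

leak = rng.standard_normal(n + 200)          # broadband "leak noise"
t_shift = int(round(true_delay * fs))
s1 = leak[200:200 + n] + 0.3 * rng.standard_normal(n)
s2 = leak[200 - t_shift:200 - t_shift + n] + 0.3 * rng.standard_normal(n)

corr = np.correlate(s2, s1, mode="full")     # lag axis: -(n-1) ... (n-1)
lags = np.arange(-(n - 1), n)
delay_est = lags[np.argmax(corr)] / fs
print(f"true delay {true_delay*1000:.2f} ms, estimated {delay_est*1000:.2f} ms")
```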

  11. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10^-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10^-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4-2) and cations (Na+, Mg+2, Ca+2, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4-2/Cl- and Mg+2/Na+, and 0.4% for Ca+2/Na+, and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3-2. Apparent partial molar densities in seawater were

  12. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
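
    For readers unfamiliar with the tree case that these network algorithms extend, here is a compact sketch of Fitch's parsimony count for one character on a fixed binary tree with uniform substitution costs; the tree and leaf states are hypothetical examples, and the network extension itself is not shown.

```python
# Sketch: Fitch's algorithm for the parsimony score of one character on a
# rooted binary tree with uniform substitution costs. Tree and leaf states
# are made-up examples; the paper's network extension is not reproduced.
def fitch(node, leaf_states):
    """Return (state set, substitution count) for the subtree rooted at node."""
    if isinstance(node, str):                      # leaf: its name indexes a state
        return {leaf_states[node]}, 0
    left, right = node
    s_left, c_left = fitch(left, leaf_states)
    s_right, c_right = fitch(right, leaf_states)
    common = s_left & s_right
    if common:                                     # intersection: no extra change
        return common, c_left + c_right
    return s_left | s_right, c_left + c_right + 1  # union: one substitution

# ((A,B),(C,D)) with one character state per leaf
tree = (("A", "B"), ("C", "D"))
states = {"A": "G", "B": "T", "C": "T", "D": "T"}
root_set, score = fitch(tree, states)
print("parsimony score:", score, "root state set:", root_set)
```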

  13. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
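
    A small numerical sketch of the maximum entropy method of moments follows: on a discretized axis, Lagrange multipliers are fitted so that the first two moments of p(x) proportional to exp(-lam1*x - lam2*x^2) match target values. The grid, target moments and optimizer are illustrative assumptions.

```python
# Sketch: maximum-entropy density from moment constraints on a 1-D grid.
# p(x) ~ exp(-lam1*x - lam2*x^2); the multipliers minimize the convex dual
# log Z(lam) + lam . mu, which enforces the moment constraints at the optimum.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-6.0, 6.0, 601)
dx = x[1] - x[0]
target = np.array([0.5, 1.5])              # target <x>, <x^2> (assumed)
powers = np.vstack([x, x**2])              # moment functions on the grid

def dual(lam):
    logw = -(lam @ powers)                 # log of the unnormalized maxent density
    m = logw.max()                         # numerical stabilization
    logZ = m + np.log(np.sum(np.exp(logw - m)) * dx)
    return logZ + lam @ target

res = minimize(dual, x0=np.zeros(2), method="BFGS")
p = np.exp(-(res.x @ powers))
p /= p.sum() * dx                          # normalized maximum-entropy density
fitted = [float(np.sum(x**k * p) * dx) for k in (1, 2)]
print("lambda:", np.round(res.x, 3), "fitted moments:", np.round(fitted, 3))
```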

  14. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  15. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-prediction filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum-entropy extrapolation of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.

  16. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
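
    A compact sketch of the underlying calculation: with a single-diode current-voltage model, the power P(V) = V*I(V) is evaluated and its maximum located (numerically here, rather than by symbolic differentiation). The diode parameters are generic assumptions, not those of the panel in the article.

```python
# Sketch: locate the maximum power point of a single-diode PV model by
# scanning P(V) = V * I(V). Model parameters are generic assumptions.
import numpy as np

I_ph = 5.0        # photocurrent, A (assumed)
I_0 = 5e-7        # diode saturation current, A (assumed)
n, k, q, T = 1.3, 1.380649e-23, 1.602176634e-19, 298.15
N_s = 36          # cells in series (assumed)
V_t = n * k * T / q * N_s              # thermal voltage of the cell string

V = np.linspace(0.0, 22.0, 2000)
I = I_ph - I_0 * (np.exp(V / V_t) - 1.0)
I = np.clip(I, 0.0, None)              # ignore the reverse branch
P = V * I

i_mpp = np.argmax(P)
print(f"V_mpp = {V[i_mpp]:.2f} V, I_mpp = {I[i_mpp]:.2f} A, P_max = {P[i_mpp]:.1f} W")
```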

  17. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tape. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage, resistance and temperature is examined. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.

  18. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  19. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  20. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... For more help with what counts as aerobic activity, watch this video. ... ways to understand and measure the intensity of aerobic activity: relative intensity and absolute intensity. Relative Intensity ...

  1. Analysis of the effect of local heat island in Seoul using LANDSAT image

    Science.gov (United States)

    Lee, K. I.; Ryu, J.; Jeon, S. W.

    2017-12-01

    The increase in the rate of industrialization due to urbanization has caused the Urban Heat Island phenomenon, in which the temperature of a city is higher than that of the surrounding area, and its intensity is increasing with climate change. Among the cities where the heat island phenomenon occurs, Seoul shows different degrees of urbanization, green area ratio, energy consumption, and population density in each district. As a result, the strength of the heat island phenomenon also differs among districts. The average maximum temperature may differ by more than 3 °C between districts, a difference larger than that between Seoul and its suburbs, which means that analysis of the UHI effect by regional unit is needed. Therefore, this study extracts the UHI intensity of the regional units of the Seoul Metropolitan City from satellite imagery, analyzes the differences in intensity among the regional units, and performs linear regression analysis with variables from three categories (regional meteorological conditions, anthropogenic heat generation, and land use factors). As a result, the UHI intensity values of the Gu units differ significantly from the UHI intensity distribution of the Dong units. The variable with the greatest positive correlation with UHI intensity was the NDBI (Normalized Difference Built-up Index), which reflects the distribution of built-up area; the urban area ratio also has a high correlation. UHI intensity was negatively correlated with mean wind speed, but there was no significant correlation with population density or power consumption. This study identifies the regional differences in UHI intensity and the factors inducing the heat island phenomenon, and it is expected to provide direction for urban thermal environment design and policy development in the future.
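
    As an illustration of the regression step described above, the sketch below fits UHI intensity against a few district-level predictors by ordinary least squares; the data are synthetic placeholders, not the Seoul measurements.

```python
# Sketch: ordinary least squares of UHI intensity on district-level predictors
# (NDBI, urban area ratio, mean wind speed). Synthetic placeholder data only.
import numpy as np

rng = np.random.default_rng(1)
n_districts = 25
ndbi = rng.uniform(-0.1, 0.4, n_districts)        # built-up index (assumed)
urban_ratio = rng.uniform(0.2, 0.9, n_districts)  # fraction of urban land (assumed)
wind = rng.uniform(1.0, 4.0, n_districts)         # mean wind speed, m/s (assumed)

# synthetic "truth": positive effect of NDBI and urban ratio, negative of wind
uhi = 1.0 + 4.0 * ndbi + 1.5 * urban_ratio - 0.3 * wind \
      + 0.2 * rng.standard_normal(n_districts)

X = np.column_stack([np.ones(n_districts), ndbi, urban_ratio, wind])
coef, *_ = np.linalg.lstsq(X, uhi, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((uhi - pred) ** 2) / np.sum((uhi - uhi.mean()) ** 2)
print("coefficients [intercept, NDBI, urban ratio, wind]:", np.round(coef, 2))
print("R^2 =", round(float(r2), 3))
```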

  2. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Full Text Available Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and degrades the pixel intensities. In fetal ultrasound images, edges and local fine details are particularly important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter must therefore be devised that proficiently suppresses speckle noise while preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical tools as tuning parameters and using different shapes of quadrilateral kernels to estimate the noise-free pixel from its neighborhood. The performance of various filters, namely the Median, Kuwahara, Frost, Homogeneous mask and Rayleigh maximum likelihood filters, is compared with the proposed filter in terms of PSNR and image profile. The proposed filter surpasses the conventional filters.

  3. Design and Implementation of Photovoltaic Maximum Power Point Tracking Controller

    Directory of Open Access Journals (Sweden)

    Fawaz S. Abdullah

    2018-03-01

    Full Text Available The power supplied by any solar array depends upon environmental conditions, such as the weather (temperature and radiation intensity), and the incident angle of the radiant source. This work studies maximum power tracking schemes used to compare system performance without and with different types of controllers. The maximum power points of the solar panel under test were studied and compared for two controller types. The first controller is of the proportional-integral-derivative type and the second is a perturbation and observation (P&O) algorithm controller. The associated converter system is microcontroller based, and the maximum power point results of the photovoltaic panel under the two controllers were studied and compared. The experimental test results are compared with simulation results to verify accurate performance.
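
    A minimal sketch of the perturbation and observation (P&O) idea mentioned above: the operating voltage is repeatedly nudged and kept moving in whichever direction increases the measured power. The PV model, step size and iteration count are illustrative assumptions.

```python
# Sketch: perturb-and-observe MPPT on a simple single-diode PV model.
# Panel parameters and step size are illustrative assumptions.
import numpy as np

def panel_current(v, i_ph=5.0, i_0=5e-7, v_t=1.2):
    """Single-diode approximation of panel current at voltage v."""
    return max(0.0, i_ph - i_0 * (np.exp(v / v_t) - 1.0))

def perturb_and_observe(v0=10.0, step=0.1, iterations=300):
    v, direction = v0, +1.0
    p_prev = v * panel_current(v)
    for _ in range(iterations):
        v += direction * step                 # perturb the operating voltage
        p = v * panel_current(v)
        if p < p_prev:                        # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"P&O settles near V = {v_mpp:.2f} V, P = {p_mpp:.1f} W")
```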

  4. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
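
    In the same spirit, though far simpler than MXLKID itself, the sketch below identifies a single unknown parameter of a dynamic model by maximizing a Gaussian likelihood over noisy measurements; the model, noise level and optimizer are assumptions chosen for illustration.

```python
# Sketch: maximum likelihood identification of a decay-rate parameter in
# x(t) = x0 * exp(-a * t) from noisy measurements (illustrative setup only).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 50)
a_true, x0, sigma = 0.8, 2.0, 0.05
y = x0 * np.exp(-a_true * t) + sigma * rng.standard_normal(t.size)

def neg_log_likelihood(a):
    """Gaussian noise: the NLL reduces to the sum of squared residuals / (2 sigma^2)."""
    residuals = y - x0 * np.exp(-a * t)
    return 0.5 * np.sum(residuals**2) / sigma**2

result = minimize_scalar(neg_log_likelihood, bounds=(0.0, 5.0), method="bounded")
print(f"true a = {a_true}, maximum likelihood estimate = {result.x:.3f}")
```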

  5. Maximum likelihood sequence estimation for optical complex direct modulation.

    Science.gov (United States)

    Che, Di; Yuan, Feng; Shieh, William

    2017-04-17

    Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.

  6. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with the thermal limitations. This paper shows that the problem can be solved by applying the calculus of variations, i.e. by using Pontryagin's maximum principle. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view; the maximum principle theory applied here is thus well suited to the problem. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  7. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations and the additional constraints applied to resolve the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions which are applied to resolve the motion redundancy

  8. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  9. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  10. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*; but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range:0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .

  11. Radiation pressure acceleration: The factors limiting maximum attainable ion energy

    Energy Technology Data Exchange (ETDEWEB)

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)

    2016-05-15

    Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. The tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in the experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent for radiation and effectively terminating the acceleration. The off-normal incidence of the laser on the target, due either to the experimental setup, or to the deformation of the target, will also lead to establishing a limit on maximum ion energy.

  12. High-intensity laser physics

    International Nuclear Information System (INIS)

    Mohideen, U.

    1993-01-01

    This thesis is a study of the effect of high-intensity lasers on atoms, free electrons and the generation of X-rays from solid-density plasmas. The laser produced 50 mJ, 180 fs pulses at 5 Hz, which translates to a maximum intensity of 5 x 10^18 W/cm^2. At such high fields the AC Stark shifts of atoms placed at the focus are much greater than the ionization energy. The characteristics of multiphoton ionization of atoms in intense laser fields were studied by angle-resolved photoelectron spectroscopy. Free electrons placed in high-intensity laser fields lead to harmonic generation; this phenomenon of nonlinear Compton scattering was investigated theoretically. Also, when these high-intensity pulses are focused on solids a hot plasma is created, which is a bright source of short X-ray pulses. The pulse width of X-rays from these solid-density plasmas was measured by time-resolved X-ray spectroscopy

  13. Maximum likelihood as a common computational framework in tomotherapy

    International Nuclear Information System (INIS)

    Olivera, G.H.; Shepard, D.M.; Reckwerdt, P.J.; Ruchala, K.; Zachman, J.; Fitchard, E.E.; Mackie, T.R.

    1998-01-01

    Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. (author)
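
    To make the shared computational core concrete, here is a toy maximum likelihood expectation maximization (MLEM) update of the kind used in emission tomography, run on a tiny made-up system matrix; it is a schematic of the estimator referred to above, not the tomotherapy implementation.

```python
# Sketch: MLEM iterations x <- x * A^T(y / Ax) / A^T 1 on a tiny toy system.
# System matrix, true image and counts are made-up for illustration.
import numpy as np

rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, size=(12, 4))      # detector response / projection matrix
x_true = np.array([1.0, 4.0, 2.0, 0.5])      # "activity" in 4 voxels
y = rng.poisson(A @ x_true)                  # measured counts

x = np.ones(4)                               # non-negative starting image
sensitivity = A.T @ np.ones(A.shape[0])
for _ in range(200):
    ratio = y / np.clip(A @ x, 1e-12, None)  # compare data with forward projection
    x = x * (A.T @ ratio) / sensitivity      # multiplicative MLEM update

print("true: ", x_true)
print("MLEM: ", np.round(x, 2))
```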

  14. Laser-matter interaction at high intensity and high temporal contrast

    International Nuclear Information System (INIS)

    Doumy, G.

    2006-01-01

    The continuous progress in the development of laser installations has already led to ultra-short pulses capable of achieving very high focused intensities (I > 10^18 W/cm^2). At these intensities, matter exhibits new non-linear behaviours because the electrons are accelerated to relativistic speeds. Experimental access to this interaction regime on solid targets has long been prevented by the presence, alongside the femtosecond pulse, of a pedestal (mainly due to the amplified spontaneous emission (ASE) which occurs in the laser chain) intense enough to modify the state of the target. In this thesis, we first characterized, both experimentally and theoretically, a device which improves the temporal contrast of the pulse: the plasma mirror. It consists in adjusting the focusing of the pulse on a dielectric target so that the pedestal is mainly transmitted, while the main pulse is reflected by the overcritical plasma that it forms at the surface. The implementation of such a device on the UHI 10 laser facility (CEA Saclay, 10 TW, 60 fs) then allowed us to study the interaction of ultra-intense, high-contrast pulses with solid targets. In a first part, we generated and characterized dense plasmas resulting directly from the interaction between the main pulse and very thin foils (100 nm). This characterization was carried out using an XUV source obtained by high-order harmonic generation in a rare-gas jet. In a second part, we studied experimentally the phenomenon of high-order harmonic generation on solid targets, which is still poorly understood but could potentially lead to a new kind of energetic ultra-short XUV source. (author)

  15. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... relative intensity and absolute intensity. Relative Intensity The level of effort required by a person to do ...

  16. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ2, ζ3, and ζ4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.

  17. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky....

  18. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
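
    Since the abstract names First-Fit-Increasing and First-Fit-Decreasing, a compact sketch of first-fit packing with both orderings is given below, using unit-capacity bins and made-up item sizes; the maximum-resource objective itself is not reproduced here.

```python
# Sketch: First-Fit with items sorted increasing vs. decreasing, unit-capacity
# bins. Item sizes are made-up; only the classical packing rule is shown.
def first_fit(items, capacity=1.0):
    bins = []
    for size in items:
        for b in bins:                      # place in the first bin that fits
            if sum(b) + size <= capacity + 1e-12:
                b.append(size)
                break
        else:                               # no bin fits: open a new one
            bins.append([size])
    return bins

items = [0.42, 0.17, 0.61, 0.35, 0.76, 0.28, 0.53, 0.09]
ffi = first_fit(sorted(items))                 # First-Fit-Increasing
ffd = first_fit(sorted(items, reverse=True))   # First-Fit-Decreasing
print("FFI uses", len(ffi), "bins:", ffi)
print("FFD uses", len(ffd), "bins:", ffd)
```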

  19. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector is described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower max detector in the test beam are shown. (authors). 4 refs., 5 figs

  20. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  1. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  2. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis entropy, in deriving power laws.

  3. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  4. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  5. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....

  6. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.

  7. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
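
    As a rough order-of-magnitude check of the quoted relation v_h ~ T_BBN^2/(M_pl y_e^5), the sketch below plugs in textbook values (T_BBN ~ 1 MeV, M_pl ~ 1.22e19 GeV, y_e ~ 2.9e-6); these inputs are assumptions made here, not numbers taken from the paper.

```python
# Sketch: order-of-magnitude evaluation of v_h ~ T_BBN^2 / (M_pl * y_e^5).
# Input values are rough textbook numbers chosen for illustration.
T_BBN = 1e-3                   # ~1 MeV expressed in GeV
M_pl = 1.22e19                 # Planck mass in GeV
m_e, v_ew = 0.000511, 246.0
y_e = 2**0.5 * m_e / v_ew      # electron Yukawa coupling, ~2.9e-6

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"y_e ~ {y_e:.2e},  v_h ~ {v_h:.0f} GeV")   # a few hundred GeV
```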

  8. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others:DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : maximum-entropy method, * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.558, year: 2003

  9. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  10. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  11. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson's thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation (http://dx.doi.org/10.1103/PhysRevD.75.084003) of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  12. Correlation analysis of the urban heat island effect and the spatial and temporal distribution of atmospheric particulates using TM images in Beijing

    International Nuclear Information System (INIS)

    Xu, L.Y.; Xie, X.D.; Li, S.

    2013-01-01

    This study combines the methods of observation statistics and remote sensing retrieval, using remote sensing information including the urban heat island (UHI) intensity index, the normalized difference vegetation index (NDVI), the normalized difference water index (NDWI), and the difference vegetation index (DVI) to analyze the correlation between the urban heat island effect and the spatial and temporal concentration distributions of atmospheric particulates in Beijing. The analysis establishes (1) a direct correlation between UHI and DVI; (2) an indirect correlation among UHI, NDWI and DVI; and (3) an indirect correlation among UHI, NDVI, and DVI. The results proved the existence of three correlation types with regional and seasonal effects and revealed an interesting correlation between UHI and DVI, that is, if UHI is below 0.1, then DVI increases with the increase in UHI, and vice versa. Also, DVI changes more with UHI in the two middle zones of Beijing. -- Highlights: • We analyze the correlation from the spatial and temporal views. • We present correlation analyses among UHI, NDWI, NDVI, and DVI from three perspectives. • Three correlations are proven to exist with regional and seasonal effects. • If UHI is below 0.1, then DVI increases with the increase in UHI, and vice versa. • The DVI changes more with UHI in the two middle zones of Beijing. -- Generally, if UHI is below 0.1 in the weak heat island or green island range, then DVI increases with the increase in UHI, and vice versa.

  13. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or mutate the genome. (VT) [de]

  14. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  15. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
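
    The single-constraint construction described above can be written out in a few lines; the following is a standard maximum-entropy calculation (a sketch, not quoted from the paper) showing how fixing the average of ln x yields a power law.

```latex
\[
\max_{p}\; S[p] = -\int p(x)\,\ln p(x)\,\mathrm{d}x
\quad\text{subject to}\quad
\int p(x)\,\mathrm{d}x = 1,
\qquad
\int p(x)\,\ln x\,\mathrm{d}x = \chi .
\]
\[
\frac{\delta}{\delta p}\!\left[S[p] - \mu\!\left(\int p\,\mathrm{d}x - 1\right)
 - \lambda\!\left(\int p\,\ln x\,\mathrm{d}x - \chi\right)\right] = 0
\;\Longrightarrow\;
-\ln p(x) - 1 - \mu - \lambda\ln x = 0
\;\Longrightarrow\;
p(x)\;\propto\;x^{-\lambda}.
\]
```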

  16. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
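
    To give a concrete feel for one member of this class, the sketch below simulates a two-dimensional Ornstein-Uhlenbeck position process with its exact discrete-time transition; the home-range centre, time scale and variance are arbitrary assumptions.

```python
# Sketch: exact discrete-time simulation of a 2-D Ornstein-Uhlenbeck position
# process (one of the movement models in the maximum-entropy class above).
# Parameter values are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(4)
tau = 5.0                     # autocorrelation time scale (assumed)
sigma = 1.0                   # stationary standard deviation of position (assumed)
mu = np.array([0.0, 0.0])     # home-range centre (assumed)
dt, n_steps = 0.1, 20_000

rho = np.exp(-dt / tau)                    # one-step autocorrelation
noise_sd = sigma * np.sqrt(1.0 - rho**2)   # exact transition noise

x = np.empty((n_steps, 2))
x[0] = mu
for k in range(1, n_steps):
    x[k] = mu + rho * (x[k - 1] - mu) + noise_sd * rng.standard_normal(2)

print("empirical std per axis:", np.round(x.std(axis=0), 2), "(target", sigma, ")")
```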

  17. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  18. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works ...

  19. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius that a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  20. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
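
    The following toy sketch (tree, taxa and character states are invented here) shows the bottom-up pass of Fitch's algorithm that the abstract refers to: each internal node receives the intersection of its children's state sets when that intersection is non-empty, and otherwise their union at the cost of one extra change. Restricting the analysis to a subset of taxa simply means running the same pass on the induced smaller tree.

```python
# Fitch's bottom-up pass on a small rooted binary tree; leaves are taxon names.
def fitch(node, leaf_states):
    if isinstance(node, str):                      # leaf
        return {leaf_states[node]}, 0
    left, right = node
    s_left, c_left = fitch(left, leaf_states)
    s_right, c_right = fitch(right, leaf_states)
    common = s_left & s_right
    if common:
        return common, c_left + c_right
    return s_left | s_right, c_left + c_right + 1  # union costs one extra change

tree = (("t1", "t2"), ("t3", ("t4", "t5")))
states = {"t1": "a", "t2": "a", "t3": "a", "t4": "a", "t5": "b"}
root_set, changes = fitch(tree, states)
print(root_set, changes)   # {'a'} and 1 change: MP estimates 'a' at the root
```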

  1. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of the structure factor, S(q), to the pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  2. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  3. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  4. Maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in a higher neutron flux. A common disadvantage of all these solutions is that the choice of the best one starts from anticipated spatial distributions of fuel elements; the weakness of such approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux thus becomes a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form, so the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself.

  5. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Bilsky, A V; Lozhkin, V A; Markovich, D M; Tokarev, M P

    2013-01-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART. (paper)

  6. Analytical approach for evaluating temperature field of thermal modified asphalt pavement and urban heat island effect

    International Nuclear Information System (INIS)

    Chen, Jiaqi; Wang, Hao; Zhu, Hongzhou

    2017-01-01

    Highlights: • Derive an analytical approach to predict temperature fields of multi-layered asphalt pavement based on Green’s function. • Analyze the effects of thermal modifications on heat output from pavement to near-surface environment. • Evaluate pavement solutions for reducing urban heat island (UHI) effect. - Abstract: This paper aims to present an analytical approach to predict temperature fields in asphalt pavement and evaluate the effects of thermal modification on near-surface environment for urban heat island (UHI) effect. The analytical solution of temperature fields in the multi-layered pavement structure was derived with the Green’s function method, using climatic factors including solar radiation, wind velocity, and air temperature as input parameters. The temperature solutions were validated with an outdoor field experiment. By using the proposed analytical solution, temperature fields in the pavement with different pavement surface albedo, thermal conductivity, and layer combinations were analyzed. Heat output from pavement surface to the near-surface environment was studied as an indicator of pavement contribution to UHI effect. The analysis results show that increasing pavement surface albedo could decrease pavement temperature at various depths, and increase heat output intensity in the daytime but decrease heat output intensity in the nighttime. Using reflective pavement to mitigate UHI may be effective for an open street but become ineffective for the street surrounded by high buildings. On the other hand, high-conductivity pavement could alleviate the UHI effect in the daytime for both the open street and the street surrounded by high buildings. Among different combinations of thermal-modified asphalt mixtures, the layer combination of high-conductivity surface course and base course could reduce the maximum heat output intensity and alleviate the UHI effect most.

  7. Biogeochemistry of the MAximum TURbidity Zone of Estuaries (MATURE): some conclusions

    NARCIS (Netherlands)

    Herman, P.M.J.; Heip, C.H.R.

    1999-01-01

    In this paper, we give a short overview of the activities and main results of the MAximum TURbidity Zone of Estuaries (MATURE) project. Three estuaries (Elbe, Schelde and Gironde) have been sampled intensively during a joint 1-week campaign in both 1993 and 1994. We introduce the publicly available

  8. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
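
    A rough sketch of the flavour of such an entropy-regularized unmixing, with a made-up three-gas example (the cracking-pattern matrix, noise level and regularization weight are assumptions, and this is not the GME formulation of the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Recover non-negative concentrations c from a noisy spectrum d = A @ c,
# where A holds known cracking patterns, by trading off data misfit against
# the entropy of the (normalized) concentration vector.
A = np.array([[0.9, 0.1, 0.3],
              [0.1, 0.8, 0.2],
              [0.0, 0.1, 0.5]])
c_true = np.array([0.5, 0.3, 0.2])
rng = np.random.default_rng(4)
d = A @ c_true + 0.005 * rng.standard_normal(3)

def objective(c, alpha=0.01):
    p = c / c.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return np.sum((A @ c - d) ** 2) - alpha * entropy   # misfit minus entropy bonus

res = minimize(objective, x0=np.full(3, 1.0 / 3), method="SLSQP",
               bounds=[(1e-9, 1.0)] * 3,
               constraints={"type": "eq", "fun": lambda c: c.sum() - 1.0})
print("estimated concentrations:", np.round(res.x, 3), " true:", c_true)
```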

  9. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  10. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of simple iterative algorithm suggested originally by Collins. A number of distributions has been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  11. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of the drawdown and maximum drawdown movements of index prices. The analysis of drawdown duration is also performed and is the core of the risk measure estimated here.
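
    For readers unfamiliar with the quantity, the sketch below computes the drawdown and maximum drawdown of a synthetic price series; the random series stands in for the index data analyzed in the paper.

```python
import numpy as np

# Drawdown at time t is the relative drop from the running maximum; the
# maximum drawdown is the worst such drop over the whole series.
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(0.0005 + 0.01 * rng.standard_normal(1000)))

running_max = np.maximum.accumulate(prices)
drawdowns = (running_max - prices) / running_max
print(f"maximum drawdown: {drawdowns.max():.2%}")
```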

  12. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  13. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  14. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  15. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  16. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  17. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  18. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  19. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  20. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  1. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... using relative intensity, people pay attention to how physical activity affects their heart rate and breathing. The talk test is a simple way to measure relative intensity. ...

  2. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... For more help with what counts as aerobic activity, watch this video. ... The table below lists examples of activities classified as moderate-intensity or vigorous-intensity based upon the ...

  3. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... for a breath. Absolute Intensity The amount of energy used by the body per minute of activity. ... or vigorous-intensity based upon the amount of energy used by the body while doing the activity. ...

  4. Iowa Intensive Archaeological Survey

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — This shape file contains intensive level archaeological survey areas for the state of Iowa. All intensive Phase I surveys that are submitted to the State Historic...

  5. Rainfed intensive crop systems

    DEFF Research Database (Denmark)

    Olesen, Jørgen E

    2014-01-01

    This chapter focuses on the importance of intensive cropping systems in contributing to the world supply of food and feed. The impact of climate change on intensive crop production systems is also discussed.

  6. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.

  7. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  8. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
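
    A hedged numerical sketch of the balance described above, with invented parameter values: supply through the xylem is the product of a cavitation-limited conductivity and the soil-to-leaf water potential difference, and the maximum sustainable transpiration is the peak of that product over leaf water potential.

```python
import numpy as np

# Vulnerability curve: conductivity declines as leaf water potential becomes
# more negative; the supply flux is conductivity times the driving force.
k_max = 5.0       # maximum hydraulic conductivity (arbitrary units)
psi_50 = -2.0     # water potential at 50% loss of conductivity (MPa)
shape = 3.0       # steepness of the vulnerability curve
psi_soil = -0.5   # soil water potential (MPa)

psi_leaf = np.linspace(psi_soil - 0.01, -8.0, 2000)
conductivity = k_max / (1.0 + (psi_leaf / psi_50) ** shape)
flux = conductivity * (psi_soil - psi_leaf)        # driving force pulls water up

i_best = np.argmax(flux)
print(f"optimal leaf water potential ≈ {psi_leaf[i_best]:.2f} MPa, "
      f"maximum transpiration ≈ {flux[i_best]:.2f}")
```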

  9. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum is taken not over all matrices, but only over matrices of rank one. Examples are given.

  10. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calcuated from the lake polygon...

  11. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  12. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.

  13. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  14. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + theta/(lambda+theta) exp[-((1/lambda)+(1/theta))t] with t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
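
    As a minimal illustration of the estimator described above (synthetic data, with lambda and theta read as the mean time-to-failure and mean time-to-repair so that the quoted formula for A(t) applies; the numerical values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
lam_true, theta_true = 120.0, 8.0        # assumed true mean up-time and repair time
X = rng.exponential(lam_true, n)         # observed times to failure from n cycles
Y = rng.exponential(theta_true, n)       # observed times to repair from n cycles

lam_hat, theta_hat = X.mean(), Y.mean()  # maximum likelihood estimates of the means

def A(t):
    rate = 1.0 / lam_hat + 1.0 / theta_hat
    return lam_hat / (lam_hat + theta_hat) + theta_hat / (lam_hat + theta_hat) * np.exp(-rate * t)

print("estimated steady-state availability:", round(lam_hat / (lam_hat + theta_hat), 4))
print("estimated A(10):", round(float(A(10.0)), 4))
```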

  15. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system based on the maximum current searching methods has been designed and implemented. Based on the characteristics of voltage-current and theoretical analysis of SPE, it can be shown that the tracking of the maximum current output of DC-DC converter in SPE side will track the MPPT of photovoltaic panel simultaneously. This method uses a proportional integrator controller to control the duty factor of DC-DC converter with pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
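
    The sketch below is only a generic hill-climbing (perturb-and-observe) illustration of maximum-current tracking; the paper itself uses a proportional-integral controller acting on the PWM duty factor, and the read_current/set_duty functions here are hypothetical stand-ins for hardware I/O.

```python
# Perturb the DC-DC converter duty factor and keep the direction that raises
# the measured electrolyser current; by the argument above this also tracks
# the photovoltaic maximum power point.
def track_max_current(read_current, set_duty, duty=0.5, step=0.01, iters=200):
    last_i = read_current()
    direction = +1
    for _ in range(iters):
        duty = min(max(duty + direction * step, 0.05), 0.95)
        set_duty(duty)
        i_now = read_current()
        if i_now < last_i:           # current dropped, so reverse the perturbation
            direction = -direction
        last_i = i_now
    return duty

# Hypothetical simulated plant so the sketch runs: current peaks near duty = 0.6.
_state = {"duty": 0.5}
def set_duty(d): _state["duty"] = d
def read_current(): return 10.0 - 40.0 * (_state["duty"] - 0.6) ** 2

print("converged duty factor:", round(track_max_current(read_current, set_duty), 3))
```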

  16. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

    We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10^13 G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10^13 G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)

  17. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fireweather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  18. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  19. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for the classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of the available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One of the recently proposed algorithms is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most of the existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms perform two-class classification, which cannot be used for the classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using the Alternative Optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results of the proposed algorithm show that it gives acceptable results for hyperspectral data clustering.
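
    A rough sketch of the alternating idea (not the authors' algorithm, and omitting the class-balance constraint that real MMC formulations need): guess a labelling, fit a large-margin classifier, relabel the points with its predictions, and repeat until the labels stabilize.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC

X, _ = make_blobs(n_samples=200, centers=2, random_state=0)
labels = (X[:, 0] > np.median(X[:, 0])).astype(int)   # crude initial split

for _ in range(20):
    clf = LinearSVC(C=1.0).fit(X, labels)             # large-margin step
    new_labels = clf.predict(X)                       # label-assignment step
    if len(np.unique(new_labels)) < 2:                # guard against collapse
        break
    if np.array_equal(new_labels, labels):
        break
    labels = new_labels

print("cluster sizes:", np.bincount(labels))
```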

  20. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  1. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.

  2. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... For more help with what ...

  3. AGS intensity upgrades

    International Nuclear Information System (INIS)

    Roser, T.

    1995-01-01

    After the successful completion of the AGS Booster and several upgrades of the AGS, a new intensity record of 6.3 x 10^13 protons per pulse accelerated to 24 GeV was achieved. The high intensity slow-extracted beam program at the AGS typically serves about five production targets and about eight experiments including three rare Kaon decay experiments. Further intensity upgrades are being discussed that could increase the average delivered beam intensity by up to a factor of four.

  4. Solar cycle variations in IMF intensity

    International Nuclear Information System (INIS)

    King, J.H.

    1979-01-01

    Annual averages of logarithms of hourly interplanetary magnetic field (IMF) intensities, obtained from geocentric spacecraft between November 1963 and December 1977, reveal the following solar cycle variation. For 2--3 years at each solar minimum period, the IMF intensity is depressed by 10--15% relative to its mean value realized during a broad 9-year period centered at solar maximum. No systematic variations occur during this 9-year period. The solar minimum decrease, although small in relation to variations in some other solar wind parameters, is both statistically and physically significant

  5. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  6. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  7. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    ... addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of ...

  8. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.

  9. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  10. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  11. Metabonomics and Intensive Care

    OpenAIRE

    Antcliffe, D; Gordon, AC

    2016-01-01

    This article is one of ten reviews selected from the Annual Update in Intensive Care and Emergency medicine 2016. Other selected articles can be found online at http://www.biomedcentral.com/collections/annualupdate2016. Further information about the Annual Update in Intensive Care and Emergency Medicine is available from http://www.springer.com/series/8901.

  12. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  13. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent from the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using the semi-automated methods, such as Euler deconvolution or depth-from-extreme-points method (DEXP). The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  14. Generation of intensity duration frequency curves and intensity temporal variability pattern of intense rainfall for Lages/SC

    Directory of Open Access Journals (Sweden)

    Célio Orli Cardoso

    2014-04-01

    Full Text Available The objective of this work was to analyze the frequency distribution and the temporal variability of intense rainfall for Lages/SC from daily pluviograph data. Annual series of maximum rainfalls recorded by the rain gauges of the CAV-UDESC Weather Station in Lages/SC were used from 2000 to 2009. The Gumbel statistical distribution was applied in order to obtain the rainfall depth and intensity for the following return periods: 2, 5, 10, 15 and 20 years. Results yielded intensity-duration-frequency (I-D-F) curves for those return periods, as well as the I-D-F equation i = 2050·Tr^0.20·(t + 30)^-0.89, where i is the rainfall intensity, Tr the return period and t the rainfall duration. Regarding the temporal variability pattern of intensity over the rainfall duration, the convective (advanced) pattern was predominant, with the larger share of the rainfall falling in the first half of the duration. This pattern occurred most frequently in spring and summer.
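
    The fitted I-D-F relation quoted above is straightforward to evaluate directly; the minimal sketch below assumes the usual units for such equations (intensity in mm/h, return period Tr in years, duration t in minutes), which the abstract does not state explicitly.

      def rainfall_intensity(tr_years, duration_min):
          # i = 2050 * Tr**0.20 * (t + 30)**-0.89, as quoted in the abstract
          return 2050.0 * tr_years**0.20 * (duration_min + 30.0)**-0.89

      for tr in (2, 5, 10, 15, 20):
          print(f"Tr = {tr:2d} yr, t = 60 min -> i = {rainfall_intensity(tr, 60):6.1f} mm/h")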

  15. On the maximum-entropy method for kinetic equation of radiation, particle and gas

    International Nuclear Information System (INIS)

    El-Wakil, S.A.; Madkour, M.A.; Degheidy, A.R.; Machali, H.M.

    1995-01-01

    The maximum-entropy approach is used to treat several problems in radiative transfer and reactor physics, such as the escape probability and the emergent and transmitted intensities for a finite slab, as well as the emergent intensity for a semi-infinite medium. It is also employed to solve problems involving spherical geometry, such as luminosity (the total energy emitted by a sphere), the neutron capture probability and the albedo problem. The technique is further employed in the kinetic theory of gases to calculate the Poiseuille flow and thermal creep of a rarefied gas between two plates. Numerical calculations are carried out and compared with the published data. The comparisons demonstrate that the maximum-entropy results are in good agreement with the exact ones. (orig.)

  16. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  17. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  18. Proton energy dependence of slow neutron intensity

    International Nuclear Information System (INIS)

    Teshigawara, Makoto; Harada, Masahide; Watanabe, Noboru; Kai, Tetsuya; Sakata, Hideaki; Ikeda, Yujiro

    2001-01-01

    The choice of the proton energy is an important issue for the design of an intense pulsed spallation source. The optimal proton beam energy is fairly well defined from the viewpoint of the leakage neutron intensity, but not yet clear from the slow-neutron intensity viewpoint; it also depends on the accelerator type. It is therefore important to know the proton energy dependence of slow neutrons from the moderators in a realistic target-moderator-reflector assembly (TMRA). We studied the TMRA proposed for the Japan Spallation Neutron Source. The slow-neutron intensities from the moderators per unit proton beam power (MW) exhibit a maximum at about 1-2 GeV. At higher proton energies the intensity per MW goes down; at 3 and 50 GeV it is about 0.91 and 0.47 times that at 1 GeV, respectively. The proton energy dependence of the slow-neutron intensities was found to be almost the same as that of the total neutron yield (leakage neutrons) from the same bare target. It was also found that the proton energy dependence was almost the same for the coupled and decoupled moderators, regardless of the different moderator types, geometries and coupling schemes. (author)

  19. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... miles per hour Tennis (doubles) Ballroom dancing General gardening Vigorous Intensity Race walking, jogging, or running Swimming ... miles per hour or faster Jumping rope Heavy gardening (continuous digging or hoeing) Hiking uphill or with ...

  20. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... The amount of energy used by the body per minute of activity. The table below lists examples ... of Page Moderate Intensity Walking briskly (3 miles per hour or faster, but not race-walking) Water ...

  1. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... If you're doing vigorous-intensity activity, you will not be able to say more than a ...

  2. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... level of effort required by a person to do an activity. When using relative intensity, people pay ...

  3. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... an activity. When using relative intensity, people pay attention to how physical activity affects their heart rate ...

  4. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... Measuring Physical Activity Intensity ... For more help with what ...

  5. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... Intensity The amount of energy used by the body per minute of activity. The table below lists ... upon the amount of energy used by the body while doing the activity. Top of Page Moderate ...

  6. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... to Your Life Activities for Children Activities for Older Adults Overcoming Barriers ... required by a person to do an activity. When using relative intensity, people pay attention to how physical activity affects their ...

  7. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... Adults Need More Physical Activity MMWR Data Highlights State Indicator Report on Physical Activity, 2014 Recommendations & Guidelines ... Activity Overweight & Obesity Healthy Weight Breastfeeding Micronutrient Malnutrition State and Local Programs Measuring Physical Activity Intensity Recommend ...

  8. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... 10 miles per hour or faster Jumping rope Heavy gardening (continuous digging or hoeing) Hiking uphill or with a heavy backpack Other Methods of Measuring Intensity Target Heart ...

  9. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... Adults Needs for Children What Counts Needs for Older Adults Needs for Pregnant or Postpartum Women Physical Activity & ... to Your Life Activities for Children Activities for Older Adults Overcoming Barriers Measuring Physical Activity Intensity Target Heart ...

  10. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... Hiking uphill or with a heavy backpack Other Methods of Measuring Intensity Target Heart Rate and Estimated ...

  11. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... Hiking uphill or with a heavy backpack Other Methods of Measuring Intensity Target Heart Rate and Estimated ...

  12. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... be able to say more than a few words without pausing for a breath. Absolute Intensity The ...

  13. [Intensive medicine in Spain].

    Science.gov (United States)

    2011-03-01

    Intensive care medicine is a medical specialty that was officially established in our country in 1978, with a 5-year training program including two years of common core training followed by three years of specific training in an intensive care unit accredited for training. During this 32-year period, intensive care medicine has carried out an intense and varied activity, which has allowed it to establish itself as an attractive specialty with a future in the hospital setting. This document summarizes the history of the specialty, its current situation, the key role played in the organ donation and transplantation programs of the National Transplant Organization (after more than 20 years of mutual collaboration), its training activities such as the development of the National Plan of Cardiopulmonary Resuscitation, with a trajectory of more than 25 years, and its interest in providing care based on quality and safety programs for the severely ill patient. It also describes the development of reference registries, driven by the need for reliable data on the care process for the most prevalent diseases, such as ischemic heart disease or ICU-acquired infections, based on long-term experience (more than 15 years), which results in the availability of epidemiological information and characteristics of care that may affect practical patient care. Moreover, features of its scientific society (SEMICYUC) are reported, an organization that brings together the interests of more than 280 ICUs and more than 2700 intensivists, with reference to the journal Medicina Intensiva, the official journal of the society and of the Panamerican and Iberian Federation of Critical Medicine and Intensive Care Societies. Medicina Intensiva is indexed in the Thomson Reuters products Science Citation Index Expanded (Scisearch(®)) and Journal Citation Reports, Science Edition. The important contribution of Spanish intensive care medicine to the scientific community is also analyzed, and in relation to

  14. Data-intensive science

    CERN Document Server

    Critchlow, Terence

    2013-01-01

    Data-intensive science has the potential to transform scientific research and quickly translate scientific progress into complete solutions, policies, and economic success. But this collaborative science is still lacking the effective access and exchange of knowledge among scientists, researchers, and policy makers across a range of disciplines. Bringing together leaders from multiple scientific disciplines, Data-Intensive Science shows how a comprehensive integration of various techniques and technological advances can effectively harness the vast amount of data being generated and significan

  15. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  16. Maximum power per VA control of vector controlled interior ...

    Indian Academy of Sciences (India)

    Thakur Sumeet Singh

    2018-04-11

    Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum-utilization of the drive-system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...

  17. Electron density distribution in Si and Ge using multipole, maximum ...

    Indian Academy of Sciences (India)

    Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.

  18. Towards higher intensities

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Over the past 2 weeks, commissioning of the machine protection system has advanced significantly, opening up the possibility of higher intensity collisions at 3.5 TeV. The intensity has been increased from 2 bunches of 10^10 protons to 6 bunches of 2x10^10 protons. Luminosities of 6x10^28 cm^-2 s^-1 have been achieved at the start of fills, a factor of 60 higher than those provided for the first collisions on 30 March.   The recent increase in LHC luminosity as recorded by the experiments. (Graph courtesy of the experiments and M. Ferro-Luzzi) To increase the luminosity further, the commissioning crews are now trying to push up the intensity of the individual proton bunches. After the successful injection of nominal intensity bunches containing 1.1x10^11 protons, collisions were subsequently achieved at 450 GeV with these intensities. However, half-way through the first ramping of these nominal intensity bunches to 3.5 TeV on 15 May, a beam instability was observed, leading to partial beam loss...

  19. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  20. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  1. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  2. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  3. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width-parameter that is de ned on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  4. CO2 maximum in the oxygen minimum zone (OMZ

    Directory of Open Access Journals (Sweden)

    V. Garçon

    2011-02-01

    Full Text Available Oxygen minimum zones (OMZs, known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to significantly contribute to the oceanic production of N2O, a greenhouse gas (GHG more efficient than CO2. However, the contribution of the OMZs on the oceanic sources and sinks budget of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC structure, associated locally with the Chilean OMZ and globally with the main most intense OMZs (O2−1 in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000–2002 and a monthly monitoring (2000–2001 in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg−1, up to 2350 μmol kg−1 have been reported over the whole OMZ thickness, allowing the definition for all studied OMZs a Carbon Maximum Zone (CMZ. Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%, meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios. This "carbon deficit" would be related to regional thermal mechanisms affecting faster O2 than DIC (due to the carbonate buffer effect and occurring upstream in warm waters (e.g., in the Equatorial Divergence

  5. CO2 maximum in the oxygen minimum zone (OMZ)

    Science.gov (United States)

    Paulmier, A.; Ruiz-Pino, D.; Garçon, V.

    2011-02-01

    Oxygen minimum zones (OMZs), known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to significantly contribute to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs on the oceanic sources and sinks budget of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure, associated locally with the Chilean OMZ and globally with the main most intense OMZs in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000-2002) and a monthly monitoring (2000-2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg-1, up to 2350 μmol kg-1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting faster O2 than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence), where the CMZ-OMZ core originates. The "carbon deficit" in the CMZ core would be mainly compensated locally at the

  6. Intensity Conserving Spectral Fitting

    Science.gov (United States)

    Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.

    2015-01-01

    The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
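
    A minimal sketch of the intensity-conserving idea (not the authors' ICSI code): iteratively adjust the node values handed to a cubic spline until the spline's average over each wavelength bin reproduces the observed bin-averaged intensity. The toy Gaussian line profile and the fixed iteration count are illustrative assumptions.

      import numpy as np
      from scipy.interpolate import CubicSpline

      def intensity_conserving_spline(wl_centers, observed, bin_width, n_iter=20):
          nodes = np.asarray(observed, dtype=float).copy()
          for _ in range(n_iter):
              spline = CubicSpline(wl_centers, nodes)
              antider = spline.antiderivative()
              lo = wl_centers - bin_width / 2.0
              hi = wl_centers + bin_width / 2.0
              bin_avg = (antider(hi) - antider(lo)) / bin_width   # spline average over each bin
              nodes += observed - bin_avg                         # push bin averages toward the data
          return CubicSpline(wl_centers, nodes)

      # toy Gaussian line profile observed in coarse bins
      wl = np.linspace(-1.0, 1.0, 11)
      data = np.exp(-wl**2 / (2 * 0.3**2))
      profile = intensity_conserving_spline(wl, data, bin_width=wl[1] - wl[0])

    The update is a simple fixed-point relaxation and, for smooth profiles, typically brings the bin averages into agreement with the data after only a few iterations.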

  7. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  8. Long-term Modulation of Cosmic Ray Intensity in relation to Sunspot ...

    Indian Academy of Sciences (India)

    it should be more closely connected with cosmic ray modulation than with other solar characteristics (sunspot numbers or coronal emission intensity). The intensity of galactic cosmic rays varies inversely with sunspot numbers, having their maximum intensity at the minimum of the 11-year sunspot cycle (Forbush 1954, 1958) ...

  9. The intense neutron generator

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, W B

    1966-07-01

    The presentation discusses both the economic and research contexts that would be served by producing neutrons in gram quantities at high intensities by electrical means without uranium-235. The revenue from producing radioisotopes is attractive. The array of techniques introduced by the multipurpose 65 megawatt Intense Neutron Generator project includes liquid metal cooling, superconducting magnets for beam bending and focussing, super-conductors for low-loss high-power radiofrequency systems, efficient devices for producing radiofrequency power, plasma physics developments for producing and accelerating hydrogen, ions at high intensity that are still far out from established practice, a multimegawatt high voltage D.C. generating machine that could have several applications. The research fields served relate principally to materials science through neutron-phonon and other quantum interactions as well as through neutron diffraction. Nuclear physics is served through {mu}-, {pi}- and K-meson production. Isotope production enters many fields of applied research. (author)

  10. The intense neutron generator

    International Nuclear Information System (INIS)

    Lewis, W.B.

    1966-01-01

    The presentation discusses both the economic and research contexts that would be served by producing neutrons in gram quantities at high intensities by electrical means without uranium-235. The revenue from producing radioisotopes is attractive. The array of techniques introduced by the multipurpose 65 megawatt Intense Neutron Generator project includes liquid metal cooling, superconducting magnets for beam bending and focussing, super-conductors for low-loss high-power radiofrequency systems, efficient devices for producing radiofrequency power, plasma physics developments for producing and accelerating hydrogen, ions at high intensity that are still far out from established practice, a multimegawatt high voltage D.C. generating machine that could have several applications. The research fields served relate principally to materials science through neutron-phonon and other quantum interactions as well as through neutron diffraction. Nuclear physics is served through μ-, π- and K-meson production. Isotope production enters many fields of applied research. (author)

  11. Evaluation of surface air temperature and urban effects in Japan simulated by non-hydrostatic regional climate model

    Science.gov (United States)

    Murata, A.; Sasaki, H.; Hanafusa, M.; Kurihara, K.

    2012-12-01

    We evaluated the performance of a well-developed nonhydrostatic regional climate model (NHRCM) with a spatial resolution of 5 km with respect to temperature in the present-day climate of Japan, and estimated urban heat island (UHI) intensity by comparing the model results and observations. The magnitudes of root mean square error (RMSE) and systematic error (bias) for the annual average of daily mean (Ta), maximum (Tx), and minimum (Tn) temperatures are within 1.5 K, demonstrating that the temperatures of the present-day climate are reproduced well by NHRCM. These small errors indicate that temperature variability produced by local-scale phenomena is represented well by the model with a higher spatial resolution. It is also found that the magnitudes of RMSE and bias in the annually-average Tx are relatively large compared with those in Ta and Tn. The horizontal distributions of the error, defined as the difference between simulated and observed temperatures (simulated minus observed), illustrate negative errors in the annually-averaged Tn in three major metropolitan areas: Tokyo, Osaka, and Nagoya. These negative errors in urban areas affect the cold bias in the annually-averaged Tx. The relation between the underestimation of temperature and degree of urbanization is therefore examined quantitatively using National Land Numerical Information provided by the Ministry of Land, Infrastructure, Transport, and Tourism. The annually-averaged Ta, Tx, and Tn are all underestimated in the areas where the degree of urbanization is relatively high. The underestimations in these areas are attributed to the treatment of urban areas in NHRCM, where the effects of urbanization, such as waste heat and artificial structures, are not included. In contrast, in rural areas, the simulated Tx is underestimated and Tn is overestimated although the errors in Ta are small. This indicates that the simulated diurnal temperature range is underestimated. The reason for the relatively large
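
    For reference, the two verification scores used above (bias and RMSE, with error defined as simulated minus observed) are straightforward to compute; the sketch below uses invented numbers, not the NHRCM results.

      import numpy as np

      def bias_and_rmse(simulated, observed):
          error = np.asarray(simulated) - np.asarray(observed)   # simulated minus observed
          return error.mean(), np.sqrt((error**2).mean())

      obs = np.array([15.2, 16.1, 14.8, 15.9])   # hypothetical annual-mean temperatures (deg C)
      sim = np.array([14.6, 15.8, 14.1, 15.5])
      b, rmse = bias_and_rmse(sim, obs)
      print(f"bias = {b:+.2f} deg C, RMSE = {rmse:.2f} deg C")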

  12. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.

  13. Solar Maximum Mission Experiment - Ultraviolet Spectroscopy and Polarimetry on the Solar Maximum Mission

    Science.gov (United States)

    Tandberg-Hanssen, E.; Cheng, C. C.; Woodgate, B. E.; Brandt, J. C.; Chapman, R. D.; Athay, R. G.; Beckers, J. M.; Bruner, E. C.; Gurman, J. B.; Hyder, C. L.

    1981-01-01

    The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission spacecraft is described. It is pointed out that the instrument, which operates in the wavelength range 1150-3600 A, has a spatial resolution of 2-3 arcsec and a spectral resolution of 0.02 A FWHM in second order. A Gregorian telescope, with a focal length of 1.8 m, feeds a 1 m Ebert-Fastie spectrometer. A polarimeter comprising rotating Mg F2 waveplates can be inserted behind the spectrometer entrance slit; it permits all four Stokes parameters to be determined. Among the observing modes are rasters, spectral scans, velocity measurements, and polarimetry. Examples of initial observations made since launch are presented.

  14. Strongly intensive quantities

    International Nuclear Information System (INIS)

    Gorenstein, M. I.; Gazdzicki, M.

    2011-01-01

    Analysis of fluctuations of hadron production properties in collisions of relativistic particles profits from use of measurable intensive quantities which are independent of system size variations. The first family of such quantities was proposed in 1992; another is introduced in this paper. Furthermore we present a proof of independence of volume fluctuations for quantities from both families within the framework of the grand canonical ensemble. These quantities are referred to as strongly intensive ones. Influence of conservation laws and resonance decays is also discussed.

  15. High intensity hadron accelerators

    International Nuclear Information System (INIS)

    Teng, L.C.

    1989-05-01

    This rapporteur report consists mainly of two parts. Part I is an abridged review of the status of all High Intensity Hadron Accelerator projects in the world in semi-tabulated form for quick reference and comparison. Part II is a brief discussion of the salient features of the different technologies involved. The discussion is based mainly on my personal experiences and opinions, tempered, I hope, by the discussions I participated in in the various parallel sessions of the workshop. In addition, appended at the end is my evaluation and expression of the merits of high intensity hadron accelerators as research facilities for nuclear and particle physics

  16. Steganography based on pixel intensity value decomposition

    Science.gov (United States)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
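
    The general idea of decomposing a pixel value over a non-binary basis can be illustrated with the Fibonacci scheme mentioned above (the paper's own 16-plane decomposition is not reproduced here); flipping a bit in the lowest "virtual bit-plane" changes the grey level by at most one.

      FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]   # weights sufficient to represent 0..255

      def decompose(value):
          # greedy Zeckendorf-style decomposition into 0/1 virtual bit-planes
          planes = [0] * len(FIB)
          for i in reversed(range(len(FIB))):
              if FIB[i] <= value:
                  planes[i], value = 1, value - FIB[i]
          return planes

      def recompose(planes):
          return sum(w * b for w, b in zip(FIB, planes))

      pixel = 200
      planes = decompose(pixel)
      planes[0] ^= 1                              # embed one secret bit in the lowest plane
      print(pixel, "->", recompose(planes))       # grey level changes by at most 1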

  17. Shifting the urban heat island clock in a megacity: a case study of Hong Kong

    Science.gov (United States)

    Chen, Xuan; Jeong, Su-Jong

    2018-01-01

    With increasing levels of urbanization in the near future, understanding the impact of urbanization on urban heat islands (UHIs) is critical to adapting to regional climate and environmental changes. However, our understanding of the UHI effect relies mainly on its intensity or magnitude. The present study evaluates the impact of urbanization on UHI duration changes by comparing three stations with different rates of urbanization, including highly developed and developing urban areas throughout Hong Kong, from 1990-2015. Results show that the 26 year average UHI intensity in highly urbanized regions is much higher than that in developing areas, and the 26 year average of UHI duration is similar. Over the past 25 years, however, UHI duration has increased only in developing urban areas, from 13.59-17.47 hours. Both earlier UHI starting and later UHI ending times concurrently contribute to the UHI effect being experienced for a longer duration. The differences in UHI duration change between the two areas are supported by population and by night light changes from space. Increasing night light, which suggests enhancements in the economic infrastructure, occurred only in the developing urban areas. Our results suggest that changes in UHI duration should be included in an assessment of regional climate change as well as in urban planning in a megacity.

  18. Laser-matter interaction at high intensity and high temporal contrast; Interaction laser matiere a haut flux et fort contraste temporel

    Energy Technology Data Exchange (ETDEWEB)

    Doumy, G

    2006-01-15

    The continuous progress in the development of laser installations has already led to ultra-short pulses capable of achieving very high focused intensities (I > 10^18 W/cm^2). At these intensities, matter presents new non-linear behaviours, due to the fact that the electrons are accelerated to relativistic speeds. Experimental access to this interaction regime on solid targets has long been precluded by the presence, alongside the femtosecond pulse, of a pedestal (mainly due to the amplified spontaneous emission (ASE) which occurs in the laser chain) intense enough to modify the state of the target. In this thesis, we first characterized, both experimentally and theoretically, a device which allows an improvement of the temporal contrast of the pulse: the Plasma Mirror. It consists of adjusting the focusing of the pulse on a dielectric target, so that the pedestal is mainly transmitted, while the main pulse is reflected by the overcritical plasma that it forms at the surface. The implementation of such a device on the UHI 10 laser facility (CEA Saclay - 10 TW - 60 fs) then allowed us to study the interaction of ultra-intense, high-contrast pulses with solid targets. In the first part, we managed to generate and characterize dense plasmas resulting directly from the interaction between the main pulse and very thin foils (100 nm). This characterization was realized using an XUV source obtained by high-order harmonic generation in a rare gas jet. In the second part, we studied experimentally the phenomenon of high-order harmonic generation on solid targets, which is still poorly understood, but could potentially lead to a new kind of energetic ultra-short XUV source. (author)

  19. Measuring Physical Activity Intensity

    Medline Plus

    Full Text Available ... Intensity Walking briskly (3 miles per hour or faster, but not race-walking) Water aerobics Bicycling slower ...

  20. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    International Nuclear Information System (INIS)

    Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie

    2009-01-01

    There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the

  1. Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application

    International Nuclear Information System (INIS)

    Jiya, J. D.; Tahirou, G.

    2002-01-01

    This paper presents a microprocessor-controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks to improve the obtained power by varying the duty cycle.
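
    The tracking loop described above is essentially a perturb-and-observe scheme; a minimal sketch is given below, with read_voltage, read_current and set_duty standing in as hypothetical hardware-access callbacks rather than functions from the paper.

      def perturb_and_observe(read_voltage, read_current, set_duty,
                              duty=0.5, step=0.01, iterations=100):
          # measure P = V*I, nudge the duty cycle, and keep the direction that raises power
          last_power = read_voltage() * read_current()
          direction = 1
          for _ in range(iterations):
              duty = min(max(duty + direction * step, 0.0), 1.0)
              set_duty(duty)
              power = read_voltage() * read_current()
              if power < last_power:     # power dropped: reverse the perturbation
                  direction = -direction
              last_power = power
          return duty

    On real hardware the step size trades tracking speed against steady-state oscillation around the maximum power point.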

  2. Analysis of the maximum discharge of karst springs

    Science.gov (United States)

    Bonacci, Ognjen

    2001-07-01

    Analyses are presented of the conditions that limit the discharge of some karst springs. The large number of springs studied show that, under conditions of extremely intense precipitation, a maximum value exists for the discharge of the main springs in a catchment, independent of catchment size and the amount of precipitation. Outflow modelling of karst-spring discharge is not easily generalized and schematized due to numerous specific characteristics of karst-flow systems. A detailed examination of the published data on four karst springs identified the possible reasons for the limitation on the maximum flow rate: (1) limited size of the karst conduit; (2) pressure flow; (3) intercatchment overflow; (4) overflow from the main spring-flow system to intermittent springs within the same catchment; (5) water storage in the zone above the karst aquifer or epikarstic zone of the catchment; and (6) factors such as climate, soil and vegetation cover, and altitude and geology of the catchment area. The phenomenon of limited maximum-discharge capacity of karst springs is not included in rainfall-runoff process modelling, which is probably one of the main reasons for the present poor quality of karst hydrological modelling.

  3. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    Science.gov (United States)

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  4. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation

    International Nuclear Information System (INIS)

    Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.

    2006-01-01

    We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and the superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by the discrete dose sampling under a 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a near mono-energetic (of width about 1% the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in water medium. By monodirection, we mean that the proton particles are in the same direction before entering the water medium and the various scattering prior to entrance to water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity for proton therapy is either an infinitesimal or finite sized beamlet. Since a finite sized beamlet is the superposition of infinitesimal pencil beams, the result of the maximum acceptable grid size obtained with infinitesimal pencil beam also applies to finite sized beamlet. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
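
    A rough numerical illustration of the Nyquist-type argument (not the paper's derivation): sample a Gaussian lateral dose profile finely, inspect its Fourier power spectrum, and take the largest grid spacing whose Nyquist frequency still covers the band containing essentially all of the spectral power. The 5 mm lateral spread and the 99.98% band threshold are arbitrary illustrative choices, not values from the paper.

      import numpy as np

      sigma_mm = 5.0                              # assumed lateral spread of the pencil beam
      x = np.arange(-100.0, 100.0, 0.1)           # fine reference grid (mm)
      dose = np.exp(-x**2 / (2 * sigma_mm**2))

      freqs = np.fft.rfftfreq(x.size, d=0.1)      # spatial frequencies (cycles/mm)
      power = np.abs(np.fft.rfft(dose))**2
      cum = np.cumsum(power) / np.sum(power)
      f_band = freqs[np.searchsorted(cum, 0.9998)]    # band holding ~99.98% of the power
      print(f"maximum acceptable grid size ~ {1.0 / (2.0 * f_band):.1f} mm for sigma = {sigma_mm} mm")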

  5. Proton Fluxes Measured by the PAMELA Experiment from the Minimum to the Maximum Solar Activity for Solar Cycle 24

    Science.gov (United States)

    Martucci, M.; Munini, R.; Boezio, M.; Di Felice, V.; Adriani, O.; Barbarino, G. C.; Bazilevskaya, G. A.; Bellotti, R.; Bongi, M.; Bonvicini, V.; Bottai, S.; Bruno, A.; Cafagna, F.; Campana, D.; Carlson, P.; Casolino, M.; Castellini, G.; De Santis, C.; Galper, A. M.; Karelin, A. V.; Koldashov, S. V.; Koldobskiy, S.; Krutkov, S. Y.; Kvashnin, A. N.; Leonov, A.; Malakhov, V.; Marcelli, L.; Marcelli, N.; Mayorov, A. G.; Menn, W.; Mergè, M.; Mikhailov, V. V.; Mocchiutti, E.; Monaco, A.; Mori, N.; Osteria, G.; Panico, B.; Papini, P.; Pearce, M.; Picozza, P.; Ricci, M.; Ricciarini, S. B.; Simon, M.; Sparvoli, R.; Spillantini, P.; Stozhkov, Y. I.; Vacchi, A.; Vannuccini, E.; Vasilyev, G.; Voronov, S. A.; Yurkin, Y. T.; Zampa, G.; Zampa, N.; Potgieter, M. S.; Raath, J. L.

    2018-02-01

    Precise measurements of the time-dependent intensity of low-energy galactic cosmic rays (GCRs) over different solar activity periods, i.e., from minimum to maximum, are needed to achieve a comprehensive understanding of such physical phenomena. The minimum phase between solar cycles 23 and 24 was peculiarly long, extending up to the beginning of 2010 and followed by the maximum phase, reached during early 2014. In this Letter, we present proton differential spectra measured from 2010 January to 2014 February by the PAMELA experiment. For the first time the GCR proton intensity was studied over a wide energy range (0.08–50 GeV) by a single apparatus from a minimum to a maximum period of solar activity. The large statistics allowed the time variation to be investigated on a nearly monthly basis. Data were compared and interpreted in the context of a state-of-the-art three-dimensional model describing GCR propagation through the heliosphere.

  6. The role of one large greenspace in mitigating London's nocturnal urban heat island.

    Science.gov (United States)

    Doick, Kieron J; Peace, Andrew; Hutchings, Tony R

    2014-09-15

    The term urban heat island (UHI) describes a phenomenon where cities are on average warmer than the surrounding rural area. Trees and greenspaces are recognised for their strong potential to regulate urban air temperatures and combat the UHI. Empirical data is required in the UK to inform predictions on cooling by urban greenspaces and guide planning to maximise cooling of urban populations. We describe a 5-month study to measure the temperature profile of one of central London's large greenspaces and also in an adjacent street to determine the extent to which the greenspace reduced night-time UHI intensity. Statistical modelling displayed an exponential decay in the extent of cooling with increased distance from the greenspace. The extent of cooling ranged from an estimated 20 m on some nights to 440 m on other nights. The mean temperature reduction over these distances was 1.1 °C in the summer months, with a maximum of 4 °C cooling observed on some nights. Results suggest that calculation of London's UHI using Met Stations close to urban greenspace can underestimate 'urban' heat island intensity due to the cooling effect of the greenspace and values could be in the region of 45% higher. Our results lend support to claims that urban greenspace is an important component of UHI mitigation strategies. Lack of certainty over the variables that govern the extent of the greenspace cooling influence indicates that the multifaceted roles of trees and greenspaces in the UK's urban environment merit further consideration. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
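
    The exponential-decay model described above can be fitted in a few lines; the distance/cooling pairs below are invented purely for illustration and are not the study's data.

      import numpy as np
      from scipy.optimize import curve_fit

      def cooling(distance_m, amplitude, length_scale):
          return amplitude * np.exp(-distance_m / length_scale)

      distance = np.array([0.0, 50.0, 100.0, 200.0, 300.0, 440.0])   # m from greenspace edge
      delta_t = np.array([1.9, 1.3, 0.9, 0.45, 0.2, 0.05])           # deg C of night-time cooling

      (amp, scale), _ = curve_fit(cooling, distance, delta_t, p0=(2.0, 100.0))
      print(f"cooling ~ {amp:.2f} deg C * exp(-d / {scale:.0f} m)")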

  7. AGS intensity record

    International Nuclear Information System (INIS)

    Bleser, Ed

    1994-01-01

    As flashed in the September issue, this summer the Brookhaven Alternating Gradient Synchrotron (AGS) reached a proton beam intensity of 4.05 x 10^13 protons per pulse, claimed as the highest intensity ever achieved in a proton synchrotron. It is, however, only two-thirds of the way to its final goal of 6 x 10^13. The achievement is the result of many years of effort. The Report of the AGS II Task Force, issued in February 1984, laid out a comprehensive programme largely based on a careful analysis of the PS experience at CERN. The AGS plan had two essential components: the construction of a new booster, and major upgrades to the AGS itself.

  8. Intensities of Mobility

    DEFF Research Database (Denmark)

    Bissell, David; Vannini, Phillip; Jensen, Ole B.

    2017-01-01

    This paper explores the intensities of long-distance commuting journeys in order to understand how bodily sensibilities become attuned to the regular mobilities which they undertake. More people are travelling farther to and from work than ever before, owing to a variety of factors which relate...... to complex social and geographical dynamics of transport, housing, lifestyle, and employment. Yet, the experiential dimensions of long-distance commuting have not received the attention that they deserve within research on mobilities. Drawing from fieldwork conducted in Australia, Canada, and Denmark...... this paper aims to further develop our collective understanding of the experiential particulars of long-distance workers or ‘supercommuters’. Rather than focusing on the extensive dimensions of mobilities that are implicated in broad social patterns and trends, our paper turns to the intensive dimensions...

  9. Intensive culture”

    DEFF Research Database (Denmark)

    Michelsen, Anders Ib

    2012-01-01

    In the book Intensive Culture, Scott Lash argues for a turn from the "extensive" to the "intensive" in contemporary globalization. The book's point of departure is an ever more extensive and pervasive globalization of culture and of forms of consumption and commodities, "contemporary culture, today's capitalism – our global...", patterns of living together, etc.; "the sheer pace of life in the streets of today's mega-city would seem somehow to be intensive".

  10. Intense ion beam generator

    International Nuclear Information System (INIS)

    Humphries, S. Jr.; Sudan, R.N.

    1977-01-01

    Methods and apparatus for producing intense megavolt ion beams are disclosed. In one embodiment, a reflex triode-type pulsed ion accelerator is described which produces ion pulses of more than 5 kiloamperes current with a peak energy of 3 MeV. In other embodiments, the device is constructed so as to focus the beam of ions for high concentration and ease of extraction, and magnetic insulation is provided to increase the efficiency of operation

  11. Intense fusion neutron sources

    International Nuclear Information System (INIS)

    Kuteev, B. V.; Goncharov, P. R.; Sergeev, V. Yu.; Khripunov, V. I.

    2010-01-01

    The review describes physical principles underlying efficient production of free neutrons, up-to-date possibilities and prospects of creating fission and fusion neutron sources with intensities of 10^15-10^21 neutrons/s, and schemes of production and application of neutrons in fusion-fission hybrid systems. The physical processes and parameters of high-temperature plasmas are considered at which optimal conditions for producing the largest number of fusion neutrons in systems with magnetic and inertial plasma confinement are achieved. The proposed plasma methods for neutron production are compared with other methods based on fusion reactions in nonplasma media, fission reactions, spallation, and muon catalysis. At present, intense neutron fluxes are mainly used in nanotechnology, biotechnology, material science, and military and fundamental research. In the near future (10-20 years), it will be possible to apply high-power neutron sources in fusion-fission hybrid systems for producing hydrogen, electric power, and technological heat, as well as for manufacturing synthetic nuclear fuel and closing the nuclear fuel cycle. Neutron sources with intensities approaching 10^20 neutrons/s may radically change the structure of power industry and considerably influence the fundamental and applied science and innovation technologies. Along with utilizing the energy produced in fusion reactions, the achievement of such high neutron intensities may stimulate wide application of subcritical fast nuclear reactors controlled by neutron sources. Superpower neutron sources will allow one to solve many problems of neutron diagnostics, monitor nano- and biological objects, and carry out radiation testing and modification of volumetric properties of materials at the industrial level. Such sources will considerably (up to 100 times) improve the accuracy of neutron physics experiments and will provide a better understanding of the structure of matter, including that of the neutron itself.

  12. Intense fusion neutron sources

    Science.gov (United States)

    Kuteev, B. V.; Goncharov, P. R.; Sergeev, V. Yu.; Khripunov, V. I.

    2010-04-01

    The review describes physical principles underlying efficient production of free neutrons, up-to-date possibilities and prospects of creating fission and fusion neutron sources with intensities of 10^15-10^21 neutrons/s, and schemes of production and application of neutrons in fusion-fission hybrid systems. The physical processes and parameters of high-temperature plasmas are considered at which optimal conditions for producing the largest number of fusion neutrons in systems with magnetic and inertial plasma confinement are achieved. The proposed plasma methods for neutron production are compared with other methods based on fusion reactions in nonplasma media, fission reactions, spallation, and muon catalysis. At present, intense neutron fluxes are mainly used in nanotechnology, biotechnology, material science, and military and fundamental research. In the near future (10-20 years), it will be possible to apply high-power neutron sources in fusion-fission hybrid systems for producing hydrogen, electric power, and technological heat, as well as for manufacturing synthetic nuclear fuel and closing the nuclear fuel cycle. Neutron sources with intensities approaching 10^20 neutrons/s may radically change the structure of power industry and considerably influence the fundamental and applied science and innovation technologies. Along with utilizing the energy produced in fusion reactions, the achievement of such high neutron intensities may stimulate wide application of subcritical fast nuclear reactors controlled by neutron sources. Superpower neutron sources will allow one to solve many problems of neutron diagnostics, monitor nano- and biological objects, and carry out radiation testing and modification of volumetric properties of materials at the industrial level. Such sources will considerably (up to 100 times) improve the accuracy of neutron physics experiments and will provide a better understanding of the structure of matter, including that of the neutron itself.

  13. Very high intensity reaction chamber design

    International Nuclear Information System (INIS)

    Devaney, J.J.

    1975-09-01

    The problem of achieving very high intensity irradiation by light in minimal regions was studied. Three types of irradiation chamber are suggested: the common laser-reaction chamber, the folded concentric or near-concentric resonator, and the asymmetric confocal resonator. In all designs the ratio of high-intensity illuminated volume to other volume is highly dependent (to the 3/2 power) on the power and fluence tolerances of optical elements, primarily mirrors. Optimization of energy coupling is discussed for the common cavity. For the concentric cavities, optimization for both coherent and incoherent beams is treated. Formulae and numerical examples give the size of chambers, aspect ratios, maximum pass number, image sizes, fluences, and the like. Similarly for the asymmetric confocal chamber, formulae and numerical examples for fluences, dimensions, losses, and totally contained pass numbers are given.

  14. Geometrical prediction of maximum power point for photovoltaics

    International Nuclear Information System (INIS)

    Kumar, Gaurav; Panchal, Ashish K.

    2014-01-01

    Highlights: • Direct MPP finding by parallelogram constructed from geometry of I–V curve of cell. • Exact values of V and P at MPP obtained by Lagrangian interpolation exploration. • Extensive use of Lagrangian interpolation for implementation of proposed method. • Method programming on C platform with minimum computational burden. - Abstract: It is important to drive a solar photovoltaic (PV) system to its utmost capacity using maximum power point (MPP) tracking algorithms. This paper presents a direct MPP prediction method for a PV system considering the geometry of the I–V characteristic of a solar cell and a module. In the first step, known as parallelogram exploration (PGE), the MPP is determined from a parallelogram constructed using the open circuit (OC) and the short circuit (SC) points of the I–V characteristic and Lagrangian interpolation. In the second step, accurate values of voltage and power at the MPP, defined as V_mp and P_mp respectively, are determined by the Lagrangian interpolation formula, known as the Lagrangian interpolation exploration (LIE). Specifically, this method works with a few (V, I) data points, whereas most MPP algorithms work with (P, V) data points. The performance of the method is examined for several PV technologies including silicon, copper indium gallium selenide (CIGS), copper zinc tin sulphide selenide (CZTSSe), organic, dye sensitized solar cell (DSSC) and organic tandem cells’ data previously reported in the literature. The effectiveness of the method is tested experimentally for a few silicon cells’ I–V characteristics considering variation in the light intensity and the temperature. Finally, the method is also employed for a 10 W silicon module tested in the field. To verify the precision of the method, the absolute value of the derivative of power (P) with respect to voltage (V), defined as (dP/dV), is evaluated and plotted against V. The method estimates the MPP parameters with high accuracy for any
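
    As a hedged illustration of the interpolation idea described above (not the authors' PGE/LIE code), the Python sketch below builds the Lagrange interpolating polynomial of the current through a handful of assumed (V, I) samples and locates the maximum of P = V·I on a fine voltage grid; the sample values are hypothetical.

        # Minimal sketch (assumed data, not the paper's implementation): estimate the
        # maximum power point from a few (V, I) samples via Lagrange interpolation.
        import numpy as np

        def lagrange_eval(x, xk, yk):
            """Evaluate the Lagrange interpolating polynomial through (xk, yk) at x."""
            total = np.zeros_like(np.asarray(x, dtype=float))
            for j in range(len(xk)):
                basis = np.ones_like(total)
                for m in range(len(xk)):
                    if m != j:
                        basis *= (x - xk[m]) / (xk[j] - xk[m])
                total += yk[j] * basis
            return total

        # Hypothetical samples between the short-circuit and open-circuit points.
        V = np.array([0.0, 0.2, 0.4, 0.5, 0.6])       # volts
        I = np.array([3.00, 2.95, 2.70, 2.20, 0.00])  # amperes

        v_grid = np.linspace(V.min(), V.max(), 1001)
        p_grid = v_grid * lagrange_eval(v_grid, V, I)
        k = np.argmax(p_grid)
        print(f"estimated V_mp = {v_grid[k]:.3f} V, P_mp = {p_grid[k]:.3f} W")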

  15. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...

  16. 78 FR 49370 - Inflation Adjustment of Maximum Forfeiture Penalties

    Science.gov (United States)

    2013-08-14

    ... ``civil monetary penalties provided by law'' at least once every four years. DATES: Effective September 13... increases the maximum civil monetary forfeiture penalties available to the Commission under its rules... maximum civil penalties established in that section to account for inflation since the last adjustment to...

  17. 22 CFR 201.67 - Maximum freight charges.

    Science.gov (United States)

    2010-04-01

    ..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...

  18. Maximum penetration level of distributed generation without violating voltage limits

    NARCIS (Netherlands)

    Morren, J.; Haan, de S.W.H.

    2009-01-01

    Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a
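
    The record is truncated, but the mechanism it describes (a local voltage rise caused by DG infeed, partly offset by reactive power compensation) can be illustrated with the usual first-order feeder approximation dV ≈ (R·P + X·Q)/Vn. That formula and all the numbers below are assumptions for illustration, not values taken from the paper.

        # Rough sketch (assumed model, not from the paper): first-order voltage rise at
        # the end of a feeder with series impedance R + jX when a DG unit injects P and
        # Q; Q < 0 means the DG absorbs reactive power, which reduces the rise.
        def voltage_rise(p_w, q_var, r_ohm, x_ohm, v_nom):
            return (r_ohm * p_w + x_ohm * q_var) / v_nom

        R, X, VN = 0.10, 0.10, 400.0                 # hypothetical LV feeder values
        P = 50e3                                      # 50 kW of DG injection
        print(voltage_rise(P, 0.0, R, X, VN))         # no reactive compensation
        print(voltage_rise(P, -20e3, R, X, VN))       # absorbing 20 kvar lowers the rise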

  19. Particle Swarm Optimization Based of the Maximum Photovoltaic ...

    African Journals Online (AJOL)

    Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power since the peak power point depends on the temperature and the irradiation level. A maximum peak power point tracking is then necessary for maximum efficiency. In this work, a Particle Swarm ...
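
    As a rough sketch of how particle swarm optimization can track the peak of a P–V curve (the record is truncated, so this is not the authors' controller), the snippet below lets a small swarm search the voltage axis of a toy single-peak P–V model; the curve shape and the PSO coefficients are assumptions.

        # Illustrative sketch only: a tiny particle swarm searching for the voltage
        # that maximizes the power of a hypothetical P-V curve.
        import numpy as np

        rng = np.random.default_rng(0)

        def pv_power(v, v_oc=36.0, i_sc=8.0):
            # Toy P-V curve (assumption): current falls off exponentially near V_oc.
            i = i_sc * (1.0 - np.exp((v - v_oc) / 2.5))
            return np.clip(v * i, 0.0, None)

        n, iters = 10, 40
        pos = rng.uniform(0.0, 36.0, n)               # particle positions (voltages)
        vel = np.zeros(n)
        pbest, pbest_val = pos.copy(), pv_power(pos)
        gbest = pbest[np.argmax(pbest_val)]

        for _ in range(iters):
            r1, r2 = rng.random(n), rng.random(n)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, 36.0)
            val = pv_power(pos)
            better = val > pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[np.argmax(pbest_val)]

        print(f"PSO estimate: V_mp ~ {gbest:.2f} V, P_mp ~ {pv_power(gbest):.1f} W")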

  20. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
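
    A commonly used form of maximum-entropy (soft) clustering assigns Boltzmann-weighted memberships and updates centres as weighted means, approaching hard C-means as the inverse temperature grows. The sketch below follows that generic form and is only an assumption about the algorithm summarised above, not the paper's exact scheme.

        # Hedged sketch of a maximum-entropy ("soft") clustering iteration.
        import numpy as np

        def maxent_cluster(X, k, beta=5.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), k, replace=False)]
            for _ in range(iters):
                d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (n, k)
                w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))    # stabilised
                w /= w.sum(axis=1, keepdims=True)                           # soft memberships
                centers = (w[:, :, None] * X[:, None, :]).sum(0) / w.sum(0)[:, None]
            return centers, w

        X = np.vstack([np.random.default_rng(1).normal(m, 0.3, (50, 2)) for m in (0.0, 3.0)])
        centers, memberships = maxent_cluster(X, k=2)
        print(np.round(centers, 2))   # large beta -> memberships approach hard C-means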

  1. Application of maximum entropy to neutron tunneling spectroscopy

    International Nuclear Information System (INIS)

    Mukhopadhyay, R.; Silver, R.N.

    1990-01-01

    We demonstrate the maximum entropy method for the deconvolution of high resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor of three improvement in resolution. 7 refs., 4 figs

  2. The regulation of starch accumulation in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    ... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...

  3. 32 CFR 842.35 - Depreciation and maximum allowances.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide” to...

  4. The maximum significant wave height in the Southern North Sea

    NARCIS (Netherlands)

    Bouws, E.; Tolman, H.L.; Holthuijsen, L.H.; Eldeberky, Y.; Booij, N.; Ferier, P.

    1995-01-01

    The maximum possible wave conditions along the Dutch coast, which seem to be dominated by the limited water depth, have been estimated in the present study with numerical simulations. Discussions with meteorologists suggest that the maximum possible sustained wind speed in North Sea conditions is

  5. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  6. 5 CFR 838.711 - Maximum former spouse survivor annuity.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the amount...

  7. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determin...... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy....... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing...

  8. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x′ (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit features similar to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of the maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
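
    For a concrete feel for the lattice equation above, the sketch below integrates the Nagumo case f(u) = u(1-u)(u-a) with explicit Euler steps and checks that a solution launched from data in [0, 1] stays in [0, 1] for a small time step. The parameters are assumptions, and the precise conditions under which such principles hold are exactly what the paper analyses.

        # Illustration only (assumed parameters): explicit Euler stepping of the lattice
        # Nagumo equation u_x' = k(u_{x-1} - 2u_x + u_{x+1}) + u_x(1-u_x)(u_x-a).
        import numpy as np

        k, a, dt, steps = 1.0, 0.3, 0.01, 2000
        u = np.where(np.arange(200) < 100, 1.0, 0.0)       # step initial datum in [0, 1]

        for _ in range(steps):
            lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)  # periodic lattice Laplacian
            u = u + dt * (k * lap + u * (1.0 - u) * (u - a))

        print(u.min(), u.max())   # stays within [0, 1] for this step size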

  9. The radial distribution of cosmic rays in the heliosphere at solar maximum

    Science.gov (United States)

    McDonald, F. B.; Fujii, Z.; Heikkila, B.; Lal, N.

    2003-08-01

    To obtain a more detailed profile of the radial distribution of galactic (GCRs) and anomalous (ACRs) cosmic rays, a unique time in the 11-year solar activity cycle has been selected - that of solar maximum. At this time of minimum cosmic ray intensity a simple, straightforward normalization technique has been found that allows the cosmic ray data from IMP 8, Pioneer 10 (P-10) and Voyagers 1 and 2 (V1, V2) to be combined for the solar maxima of cycles 21, 22 and 23. This combined distribution reveals a functional form of the radial gradient that varies as G_0/r with G_0 being constant and relatively small in the inner heliosphere. After a transition region between ˜10 and 20 AU, G_0 increases to a much larger value that remains constant between ˜25 and 82 AU. This implies that at solar maximum the changes that produce the 11-year modulation cycle are mainly occurring in the outer heliosphere between ˜15 AU and the termination shock. These observations are not inconsistent with the concept that Global Merged Interaction Regions (GMIRs) are the principal agent of modulation between solar minimum and solar maximum. There does not appear to be a significant change in the amount of heliosheath modulation occurring between the 1997 solar minimum and the cycle 23 solar maximum.

  10. Comparison of candidate solar array maximum power utilization approaches. [for spacecraft propulsion

    Science.gov (United States)

    Costogue, E. N.; Lindena, S.

    1976-01-01

    A study was made of five potential approaches that can be utilized to detect the maximum power point of a solar array while sustaining operations at or near maximum power and without endangering stability or causing array voltage collapse. The approaches studied included: (1) dynamic impedance comparator, (2) reference array measurement, (3) onset of solar array voltage collapse detection, (4) parallel tracker, and (5) direct measurement. The study analyzed the feasibility and adaptability of these approaches to a future solar electric propulsion (SEP) mission, and, specifically, to a comet rendezvous mission. Such missions presented the most challenging requirements to a spacecraft power subsystem in terms of power management over large solar intensity ranges of 1.0 to 3.5 AU. The dynamic impedance approach was found to have the highest figure of merit, and the reference array approach followed closely behind. The results are applicable to terrestrial solar power systems as well as to other than SEP space missions.

  11. Effects of light intensity on cylindrospermopsin production in the ...

    African Journals Online (AJOL)

    The role of light intensity on growth and the production of the hepatotoxin cylindrospermopsin (CYN) in the cyanobacterial harmful algal bloom species Cylindrospermopsis raciborskii was investigated using cultured isolates grown in N-free media under a series of neutral density screens. Maximum growth as indicated by ...

  12. Trends in Intense Typhoon Minimum Sea Level Pressure

    Directory of Open Access Journals (Sweden)

    Stephen L. Durden

    2012-01-01

    A number of recent publications have examined trends in the maximum wind speed of tropical cyclones in various basins. In this communication, the author focuses on typhoons in the western North Pacific. Rather than maximum wind speed, the intensity of the storms is measured by their lifetime minimum sea level pressure (MSLP). Quantile regression is used to test for trends in storms of extreme intensity. The results indicate that there is a trend of decreasing intensity in the most intense storms as measured by MSLP over the period 1951–2010. However, when the data are broken into intervals 1951–1987 and 1987–2010, neither interval has a significant trend, but the intensity quantiles for the two periods differ. Reasons for this are discussed, including the cessation of aircraft reconnaissance in 1987. The author also finds that the average typhoon intensity is greater in El Niño years, while the intensity of the strongest typhoons shows no significant relation to El Niño Southern Oscillation.
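
    To make the quantile-regression trend test concrete, the sketch below fits a linear trend in the 10th percentile of lifetime MSLP by minimising the pinball (check) loss; the data are synthetic stand-ins, not the best-track record analysed in the paper.

        # Sketch on synthetic data: quantile-regression trend via the pinball loss.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(42)
        year = rng.integers(1951, 2011, 600)
        mslp = 960.0 + 0.05 * (year - 1980) + rng.normal(0.0, 25.0, year.size)  # toy data

        def pinball(beta, q):
            resid = mslp - (beta[0] + beta[1] * (year - 1980))
            return np.mean(np.where(resid >= 0, q * resid, (q - 1.0) * resid))

        res = minimize(pinball, x0=np.array([960.0, 0.0]), args=(0.1,), method="Nelder-Mead")
        print("10th-percentile trend (hPa per year):", round(res.x[1], 3))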

  13. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  14. Comparison of helical, maximum intensity projection (MIP), and averaged intensity (AI) 4D CT imaging for stereotactic body radiation therapy (SBRT) planning in lung cancer

    International Nuclear Information System (INIS)

    Bradley, Jeffrey D.; Nofal, Ahmed N.; El Naqa, Issam M.; Lu, Wei; Liu, Jubei; Hubenschmidt, James; Low, Daniel A.; Drzymala, Robert E.; Khullar, Divya

    2006-01-01

    Background and Purpose: To compare helical, MIP and AI 4D CT imaging, for the purpose of determining the best CT-based volume definition method for encompassing the mobile gross tumor volume (mGTV) within the planning target volume (PTV) for stereotactic body radiation therapy (SBRT) in stage I lung cancer. Materials and methods: Twenty patients with medically inoperable peripheral stage I lung cancer were planned for SBRT. Free-breathing helical and 4D image datasets were obtained for each patient. Two composite images, the MIP and AI, were automatically generated from the 4D image datasets. The mGTV contours were delineated for the MIP, AI and helical image datasets for each patient. The volume for each was calculated and compared using analysis of variance and the Wilcoxon rank test. A spatial analysis for comparing center of mass (COM) (i.e. isocenter) coordinates for each imaging method was also performed using multivariate analysis of variance. Results: The MIP-defined mGTVs were significantly larger than both the helical- (p < 0.001) and AI-defined mGTVs (p = 0.012). A comparison of COM coordinates demonstrated no significant spatial difference in the x-, y-, and z-coordinates for each tumor as determined by helical, MIP, or AI imaging methods. Conclusions: In order to incorporate the extent of tumor motion from breathing during SBRT, MIP is superior to either helical or AI images for defining the mGTV. The spatial isocenter coordinates for each tumor were not altered significantly by the imaging methods.
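
    The two composite images compared above are simple voxel-wise reductions of the 4D phase stack, which the toy NumPy lines below make explicit (random numbers stand in for the CT volumes): the MIP takes the maximum over respiratory phases and the AI takes the mean.

        # Toy illustration of the MIP and AI composites formed from a 4D CT phase stack.
        import numpy as np

        phases = np.random.default_rng(0).normal(-800.0, 30.0, (10, 64, 64, 32))  # 10 phases (HU)
        mip = phases.max(axis=0)    # maximum intensity projection over the phase axis
        ai = phases.mean(axis=0)    # averaged intensity image

        print(mip.shape, ai.shape)            # both (64, 64, 32)
        print(float(mip.mean() - ai.mean()))  # the MIP is systematically brighter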

  15. Longitudinal and transverse space charge limitations on transport of maximum power beams

    International Nuclear Information System (INIS)

    Khoe, T.K.; Martin, R.L.

    1977-01-01

    The maximum transportable beam power is a critical issue in selecting the most favorable approach to generating ignition pulses for inertial fusion with high energy accelerators. Maschke and Courant have put forward expressions for the limits on transport power for quadrupole and solenoidal channels. Included in a more general way is the self-consistent effect of space charge defocusing on the power limit. The results show that no limits on transmitted power exist in principle. In general, quadrupole transport magnets appear superior to solenoids except for transport of very low energy and highly charged particles. Longitudinal space charge effects are very significant for transport of intense beams.

  16. Analysis of positron lifetime spectra using quantified maximum entropy and a general linear filter

    International Nuclear Information System (INIS)

    Shukla, A.; Peter, M.; Hoffmann, L.

    1993-01-01

    Two new approaches are used to analyze positron annihilation lifetime spectra. A general linear filter is designed to filter the noise from lifetime data. The quantified maximum entropy method is used to solve the inverse problem of finding the lifetimes and intensities present in data. We determine optimal values of parameters needed for fitting using Bayesian methods. Estimates of errors are provided. We present results on simulated and experimental data with extensive tests to show the utility of this method and compare it with other existing methods. (orig.)

  17. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation

    International Nuclear Information System (INIS)

    Bardsley, Johnathan M; Goldes, John

    2009-01-01

    In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness

  18. Orchestrating intensities and rhythms

    DEFF Research Database (Denmark)

    Staunæs, Dorthe; Juelskjær, Malou

    2016-01-01

    environmentality and learning-centered governance standards has dramatic and performative effects for the production of (educational) subjectivities. This implies a shift from governing identities, categories and structures towards orchestrating affective intensities and rhythms. Finally, the article discusses...... and the making of subjects have held sway for many years; and it is also well known that schools have been some of the most regular purchasers of psychological methods, tests and classifications. Following but also elaborating upon governmentality studies, it is suggested that a current shift towards...

  19. Low-SNR Capacity of MIMO Optical Intensity Channels

    KAUST Repository

    Chaaban, Anas

    2017-09-18

    The capacity of the multiple-input multiple-output (MIMO) optical intensity channel is studied, under both average and peak intensity constraints. We focus on low SNR, which can be modeled as the scenario where both constraints proportionally vanish, or where the peak constraint is held constant while the average constraint vanishes. A capacity upper bound is derived, and is shown to be tight at low SNR under both scenarios. The capacity achieving input distribution at low SNR is shown to be a maximally-correlated vector-binary input distribution. Consequently, the low-SNR capacity of the channel is characterized. As a byproduct, it is shown that for a channel with peak intensity constraints only, or with peak intensity constraints and individual (per aperture) average intensity constraints, a simple scheme composed of coded on-off keying, spatial repetition, and maximum-ratio combining is optimal at low SNR.
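
    As a hedged illustration of the low-SNR scheme summarised above (coded on-off keying, spatial repetition across transmit apertures, and maximum-ratio combining at the receiver), the snippet below simulates uncoded OOK over an assumed 4x2 intensity channel with additive Gaussian noise; the gains, noise level and threshold detector are illustrative assumptions, not the paper's model parameters.

        # Hedged sketch: OOK with spatial repetition and maximum-ratio combining.
        import numpy as np

        rng = np.random.default_rng(1)
        H = rng.uniform(0.2, 1.0, (4, 2))            # 4 receive x 2 transmit intensity gains
        bits = rng.integers(0, 2, 10000)
        A = 1.0                                      # peak intensity
        X = A * np.outer(np.ones(2), bits)           # spatial repetition of the OOK symbol
        Y = H @ X + 0.5 * rng.normal(size=(4, bits.size))   # additive Gaussian noise

        h_eff = H.sum(axis=1)                        # effective SISO gain after repetition
        z = h_eff @ Y / (h_eff @ h_eff)              # maximum-ratio combining
        decisions = (z > A / 2).astype(int)          # simple threshold detector
        print("bit error rate:", np.mean(decisions != bits))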

  20. Low-SNR Capacity of MIMO Optical Intensity Channels

    KAUST Repository

    Chaaban, Anas; Rezki, Zouheir; Alouini, Mohamed-Slim

    2017-01-01

    The capacity of the multiple-input multiple-output (MIMO) optical intensity channel is studied, under both average and peak intensity constraints. We focus on low SNR, which can be modeled as the scenario where both constraints proportionally vanish, or where the peak constraint is held constant while the average constraint vanishes. A capacity upper bound is derived, and is shown to be tight at low SNR under both scenarios. The capacity achieving input distribution at low SNR is shown to be a maximally-correlated vector-binary input distribution. Consequently, the low-SNR capacity of the channel is characterized. As a byproduct, it is shown that for a channel with peak intensity constraints only, or with peak intensity constraints and individual (per aperture) average intensity constraints, a simple scheme composed of coded on-off keying, spatial repetition, and maximum-ratio combining is optimal at low SNR.

  1. SALIVARY CORTISOL RESPONSES AND PERCEIVED EXERTION DURING HIGH INTENSITY AND LOW INTENSITY BOUTS OF RESISTANCE EXERCISE

    Directory of Open Access Journals (Sweden)

    Alison D. Egan

    2004-03-01

    The purpose of this study was to measure the salivary cortisol response to different intensities of resistance exercise. In addition, we wanted to determine the reliability of the session rating of perceived exertion (RPE) scale to monitor resistance exercise intensity. Subjects (8 men, 9 women) completed 2 trials of acute resistance training bouts in a counterbalanced design. The high intensity resistance exercise protocol consisted of six, ten-repetition sets using 75% of one repetition maximum (RM) on a Smith machine squat and bench press exercise (12 sets total). The low intensity resistance exercise protocol consisted of three, ten-repetition sets at 30% of 1RM of the same exercises as the high intensity protocol. Both exercise bouts were performed with 2 minutes of rest between each exercise and sessions were repeated to test reliability of the measures. The order of the exercise bouts was randomized with at least 72 hours between each session. Saliva samples were obtained immediately before, immediately after and 30 mins following each resistance exercise bout. RPE measures were obtained using Borg's CR-10 scale following each set. Also, the session RPE for the entire exercise session was obtained 30 minutes following completion of the session. There was a significant 97% increase in the level of salivary cortisol immediately following the high intensity exercise session (P<0.05). There was also a significant difference in salivary cortisol of 145% between the low intensity and high intensity exercise session immediately post-exercise (P<0.05). The low intensity exercise did not result in any significant changes in cortisol levels. There was also a significant difference between the session RPE values for the different intensity levels (high intensity 7.1 vs. low intensity 1.9; P<0.05). The intraclass correlation coefficient for the session RPE measure was 0.95. It was concluded that the session RPE method is a valid and reliable method of

  2. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.

  3. Parameters determining maximum wind velocity in a tropical cyclone

    International Nuclear Information System (INIS)

    Choudhury, A.M.

    1984-09-01

    The spiral structure of a tropical cyclone was earlier explained by a tangential velocity distribution which varies inversely as the distance from the cyclone centre outside the circle of maximum wind speed. The case has been extended in the present paper by adding a radial velocity. It has been found that a suitable combination of radial and tangential velocities can account for the spiral structure of a cyclone. This enables parametrization of the cyclone. Finally a formula has been derived relating maximum velocity in a tropical cyclone with angular momentum, radius of maximum wind speed and the spiral angle. The shapes of the spirals have been computed for various spiral angles. (author)

  4. A subjective supply–demand model: the maximum Boltzmann/Shannon entropy solution

    International Nuclear Information System (INIS)

    Piotrowski, Edward W; Sładkowski, Jan

    2009-01-01

    The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a

  5. A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution

    Science.gov (United States)

    Piotrowski, Edward W.; Sładkowski, Jan

    2009-03-01

    The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a

  6. French intensive truck garden

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, T D

    1983-01-01

    The French Intensive approach to truck gardening has the potential to provide substantially higher yields and lower per acre costs than do conventional farming techniques. It was the intent of this grant to show that there is the potential to accomplish the gains that the French Intensive method has to offer. It is obvious that locally grown food can greatly reduce transportation energy costs but when there is the consideration of higher efficiencies there will also be energy cost reductions due to lower fertilizer and pesticide useage. As with any farming technique, there is a substantial time interval for complete soil recovery after there have been made substantial soil modifications. There were major crop improvements even though there was such a short time since the soil had been greatly disturbed. It was also the intent of this grant to accomplish two other major objectives: first, the garden was managed under organic techniques which meant that there were no chemical fertilizers or synthetic pesticides to be used. Second, the garden was constructed so that a handicapped person in a wheelchair could manage and have a higher degree of self sufficiency with the garden. As an overall result, I would say that the garden has taken the first step of success and each year should become better.

  7. Fixed-head star tracker magnitude calibration on the solar maximum mission

    Science.gov (United States)

    Pitone, Daniel S.; Twambly, B. J.; Eudell, A. H.; Roberts, D. A.

    1990-01-01

    The sensitivity of the fixed-head star trackers (FHSTs) on the Solar Maximum Mission (SMM) is defined as the accuracy of the electronic response to the magnitude of a star in the sensor field-of-view, which is measured as intensity in volts. To identify stars during attitude determination and control processes, a transformation equation is required to convert from star intensity in volts to units of magnitude and vice versa. To maintain high accuracy standards, this transformation is calibrated frequently. A sensitivity index is defined as the observed intensity in volts divided by the predicted intensity in volts; thus, the sensitivity index is a measure of the accuracy of the calibration. Using the sensitivity index, analysis is presented that compares the strengths and weaknesses of two possible transformation equations. The effect on the transformation equations of variables, such as position in the sensor field-of-view, star color, and star magnitude, is investigated. In addition, results are given that evaluate the aging process of each sensor. The results in this work can be used by future missions as an aid to employing data from star cameras as effectively as possible.

  8. Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)

    Data.gov (United States)

    NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...

  9. Probabilistic maximum-value wind prediction for offshore environments

    DEFF Research Database (Denmark)

    Staid, Andrea; Pinson, Pierre; Guikema, Seth D.

    2015-01-01

    statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed......, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop...... the full probabilistic distribution of maximum wind speed. Knowledge of the maximum wind speed for an offshore location within a given period can inform decision-making regarding turbine operations, planned maintenance operations and power grid scheduling in order to improve safety and reliability...

  10. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results....... Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges....

  11. Parametric optimization of thermoelectric elements footprint for maximum power generation

    DEFF Research Database (Denmark)

    Rezania, A.; Rosendahl, Lasse; Yin, Hao

    2014-01-01

    The development studies in thermoelectric generator (TEG) systems are mostly disconnected to parametric optimization of the module components. In this study, optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost......-performance, and variation of efficiency in the uni-couple over a wide range of the heat transfer coefficient on the cold junction. The three-dimensional (3D) governing equations of the thermoelectricity and the heat transfer are solved using the finite element method (FEM) for temperature dependent properties of TE...... materials. The results, which are in good agreement with the previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at An/Ap

  12. Ethylene Production Maximum Achievable Control Technology (MACT) Compliance Manual

    Science.gov (United States)

    This July 2006 document is intended to help owners and operators of ethylene processes understand and comply with EPA's maximum achievable control technology standards promulgated on July 12, 2002, as amended on April 13, 2005 and April 20, 2006.

  13. ORIGINAL ARTICLES Surgical practice in a maximum security prison

    African Journals Online (AJOL)

    Prison Clinic, Mangaung Maximum Security Prison, Bloemfontein. F Kleinhans, BA (Cur) .... HIV positivity rate and the use of the rectum to store foreign objects. ... fruit in sunlight. Other positive health-promoting factors may also play a role.

  14. A technique for estimating maximum harvesting effort in a stochastic ...

    Indian Academy of Sciences (India)

    Unknown

    Estimation of maximum harvesting effort has a great impact on the ... fluctuating environment has been developed in a two-species competitive system, which shows that under realistic .... The existence and local stability properties of the equi-.

  15. Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...

  16. Post optimization paradigm in maximum 3-satisfiability logic programming

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of finding the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network to accelerate Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post-optimization techniques are investigated: the Elliot symmetric activation function, the Gaussian activation function, the Wavelet activation function and the Hyperbolic tangent activation function. The performance of these post-optimization techniques in accelerating MAX-3SAT logic programming is discussed in terms of the ratio of maximum satisfied clauses, the Hamming distance and the computation time. Dev-C++ was used as the platform for training, testing and validating our proposed techniques. The results show that the Hyperbolic tangent activation function and the Elliot symmetric activation function can be used in doing MAX-3SAT logic programming.
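
    For reference, commonly used textbook forms of the four activation functions named above are sketched below; the exact parameterisations in the paper may differ, and the wavelet form shown (a Gaussian-modulated cosine) is an assumption.

        # Common forms of the activation functions named in the abstract.
        import numpy as np

        def elliot_symmetric(x):        # x / (1 + |x|), a cheap tanh-like squashing
            return x / (1.0 + np.abs(x))

        def gaussian(x):
            return np.exp(-x ** 2)

        def wavelet(x, w=5.0):          # assumed Morlet-style Gaussian-modulated cosine
            return np.cos(w * x) * np.exp(-0.5 * x ** 2)

        def hyperbolic_tangent(x):
            return np.tanh(x)

        x = np.linspace(-3, 3, 7)
        for f in (elliot_symmetric, gaussian, wavelet, hyperbolic_tangent):
            print(f.__name__, np.round(f(x), 2))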

  17. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
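
    A two-component normal mixture can be fitted by maximum likelihood with the EM algorithm; the minimal sketch below does so on synthetic data, which stands in for the stock-market and rubber price series that are not reproduced here.

        # Minimal EM sketch for a two-component normal mixture (synthetic data).
        import numpy as np

        rng = np.random.default_rng(3)
        x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])

        pi, mu, sigma = 0.5, np.array([-2.0, 3.0]), np.array([1.0, 1.0])
        for _ in range(200):                                   # EM iterations
            # E-step: responsibilities of each component for each observation
            pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            w = pdf * np.array([pi, 1.0 - pi])
            r = w / w.sum(axis=1, keepdims=True)
            # M-step: update the mixing weight, means and standard deviations
            nk = r.sum(axis=0)
            pi = nk[0] / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

        print(round(pi, 2), np.round(mu, 2), np.round(sigma, 2))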

  18. Encoding Strategy for Maximum Noise Tolerance Bidirectional Associative Memory

    National Research Council Canada - National Science Library

    Shen, Dan

    2003-01-01

    In this paper, the Basic Bidirectional Associative Memory (BAM) is extended by choosing weights in the correlation matrix, for a given set of training pairs, which result in a maximum noise tolerance set for BAM...

  19. Narrow band interference cancelation in OFDM: Astructured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  20. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
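
    One of the comparison methods mentioned above, maximum likelihood expectation maximisation for Poisson data, has a compact classical form (the Richardson-Lucy update). The sketch below shows that generic update on a simulated low-count 1-D profile; it is not the thesis's maximum entropy or Bayesian implementation, and the blur kernel and counts are assumptions.

        # Classical MLEM (Richardson-Lucy) update for Poisson-distributed counts.
        import numpy as np

        rng = np.random.default_rng(0)
        psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])           # assumed system blur
        truth = np.zeros(64); truth[20] = 200.0; truth[40] = 120.0
        data = rng.poisson(np.convolve(truth, psf, mode="same")).astype(float)

        estimate = np.full_like(data, data.mean())               # flat initial estimate
        for _ in range(50):
            forward = np.convolve(estimate, psf, mode="same")
            ratio = data / np.maximum(forward, 1e-12)
            estimate *= np.convolve(ratio, psf[::-1], mode="same")

        print(np.round(estimate[[20, 40]], 1))                   # peaks re-sharpened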

  1. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
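
    A worked example of the deterministic cap described above: with the upper bound on cumulative seismic moment taken as the product of shear modulus and net injected volume, and the standard Hanks-Kanamori conversion to moment magnitude, the numbers below (a typical crustal shear modulus and an assumed injected volume) give an illustrative ceiling.

        # Worked example of a McGarr-type bound; the numbers are illustrative assumptions.
        import math

        G = 3.0e10          # shear modulus, Pa (typical crustal value)
        dV = 1.0e5          # net injected volume, m^3 (assumed)

        M0_max = G * dV                                   # cap on seismic moment, N*m
        Mw_max = (2.0 / 3.0) * (math.log10(M0_max) - 9.1) # Hanks-Kanamori relation
        print(f"M0_max = {M0_max:.2e} N*m  ->  Mw_max ~ {Mw_max:.1f}")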

  2. Maximum organic carbon limits at different melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    This report documents the results of a study to assess the impact of varying melter feed rates on the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed. Topics discussed include: carbon content; feed rate; feed composition; melter vapor space temperature; combustion and dilution air; off-gas surges; earlier work on maximum TOC; overview of models; and the results of the work completed

  3. A tropospheric ozone maximum over the equatorial Southern Indian Ocean

    Directory of Open Access Journals (Sweden)

    L. Zhang

    2012-05-01

    We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Mapping Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O3 maximum is dominated by the O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. The O3 production in the lightning outflow from Central Africa and that from South America both peak in May and are directly responsible for the O3 maximum over the western ESIO. The lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones followed by northward transport to the ESIO.

  4. Dinosaur Metabolism and the Allometry of Maximum Growth Rate

    OpenAIRE

    Myhrvold, Nathan P.

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth...

  5. MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY

    Directory of Open Access Journals (Sweden)

    B. Sizykh Grigory

    2017-01-01

    The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary and only on the boundary of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered area of the flow the pressure is a function of density. For an ideal and perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density if the entropy is constant in the entire considered area of the flow. An example is shown of a stationary subsonic irrotational flow in which the entropy has different values on different streamlines and the pressure is not a function of density. Applying the maximum principle for subsonic flow to such a flow would be unjustified. This example shows the relevance of the question about the location of the points of maximum velocity when the entropy is not constant. To clarify the regularities of the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed. This proof does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow is true for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.

  6. On semidefinite programming relaxations of maximum k-section

    NARCIS (Netherlands)

    de Klerk, E.; Pasechnik, D.V.; Sotirov, R.; Dobre, C.

    2012-01-01

    We derive a new semidefinite programming bound for the maximum k-section problem. For k=2 (i.e. for maximum bisection), the new bound is at least as strong as a well-known bound by Poljak and Rendl (SIAM J Optim 5(3):467–487, 1995). For k ≥ 3 the new bound dominates a bound of Karisch and Rendl

  7. Direct maximum parsimony phylogeny reconstruction from genotype data

    OpenAIRE

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-01-01

    Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of ge...

  8. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as a power source in photovoltaic power systems, and an intelligent power management system is needed to obtain the maximum power from the limited solar panels. With the changing of the sun illumination due to variation of the angle of incidence of solar radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controls the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These MPPT techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to the degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method requires only linguistic control rules for the maximum power point; the mathematical model is not required, and therefore this control method is easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the Microchip microcontroller unit control card and

  9. Collimator setting optimization in intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Williams, M.; Hoban, P.

    2001-01-01

    Full text: The aim of this study was to investigate the role of collimator angle and bixel size settings in IMRT when using the step and shoot method of delivery. Of particular interest is minimisation of the total monitor units delivered. Beam intensity maps with bixel size 10 x 10 mm were segmented into MLC leaf sequences and the collimator angle optimised to minimise the total number of MU's. The monitor units were estimated from the maximum sum of positive-gradient intensity changes along the direction of leaf motion. To investigate the use of low resolution maps at optimum collimator angles, several high resolution maps with bixel size 5 x 5 mm were generated. These were resampled into bixel sizes, 5 x 10 mm and 10 x 10 mm and the collimator angle optimised to minimise the RMS error between the original and resampled map. Finally, a clinical IMRT case was investigated with the collimator angle optimised. Both the dose distribution and dose-volume histograms were compared between the standard IMRT plan and the optimised plan. For the 10 x 10 mm bixel maps there was a variation of 5% - 40% in monitor units at the different collimator angles. The maps with a high degree of radial symmetry showed little variation. For the resampled 5 x 5 mm maps, a small RMS error was achievable with a 5 x 10 mm bixel size at particular collimator positions. This was most noticeable for maps with an elongated intensity distribution. A comparison between the 5 x 5 mm bixel plan and the 5 x 10 mm showed no significant difference in dose distribution. The monitor units required to deliver an intensity modulated field can be reduced by rotating the collimator and aligning the direction of leaf motion with the axis of the fluence map that has the least intensity. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
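
    The monitor-unit estimate described above (the maximum over leaf pairs of the summed positive-gradient intensity changes along the leaf-travel direction) is easy to evaluate as a function of collimator angle; the sketch below does so for a random toy fluence map, using scipy.ndimage.rotate to stand in for the collimator rotation. Both the map and that rotation choice are assumptions, not the study's planning-system calculation.

        # Sketch of the monitor-unit proxy versus collimator angle for a toy fluence map.
        import numpy as np
        from scipy.ndimage import rotate

        def mu_proxy(fluence):
            padded = np.pad(fluence, ((0, 0), (1, 0)))        # leaves start outside the field
            steps = np.diff(padded, axis=1)                   # changes along leaf travel
            return np.max(np.sum(np.clip(steps, 0.0, None), axis=1))

        rng = np.random.default_rng(0)
        fmap = np.clip(rng.normal(1.0, 0.4, (20, 20)), 0.0, None)   # toy intensity map

        for angle in range(0, 180, 15):
            rotated = np.clip(rotate(fmap, angle, reshape=False, order=1), 0.0, None)
            print(angle, round(float(mu_proxy(rotated)), 2))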

  10. Maximum spectral demands in the near-fault region

    Science.gov (United States)

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.
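
    To make the distinction concrete, the sketch below computes a maximum-direction spectral demand and a simple geometric mean for one oscillator period from the two as-recorded horizontal response histories. It is only an illustration: GMRotI50 as used in the NGA relationships involves a period-independent rotation and a 50th-percentile selection, and the response arrays here (resp_x, resp_y) are assumed to be precomputed single-oscillator responses.

```python
import numpy as np

def maximum_direction_demand(resp_x, resp_y, n_angles=180):
    """Peak response over all horizontal orientations for one oscillator period,
    given the response histories of the two as-recorded components."""
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    rotated = np.outer(np.cos(thetas), resp_x) + np.outer(np.sin(thetas), resp_y)
    return np.abs(rotated).max()

def geometric_mean_demand(resp_x, resp_y):
    """Geometric mean of the two as-recorded peak responses (not GMRotI50 itself)."""
    return np.sqrt(np.abs(resp_x).max() * np.abs(resp_y).max())

# ratio = maximum_direction_demand(rx, ry) / geometric_mean_demand(rx, ry)
```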

  11. Compton scattering at high intensities

    Energy Technology Data Exchange (ETDEWEB)

    Heinzl, Thomas, E-mail: thomas.heinzl@plymouth.ac.u [University of Plymouth, School of Mathematics and Statistics, Drake Circus, Plymouth PL4 8AA (United Kingdom)

    2009-12-01

    High-intensity Compton scattering takes place when an electron beam is brought into collision with a high power laser. We briefly review the main intensity signatures using the formalism of strong-field quantum electrodynamics.

  12. Turbulence Intensity Scaling: A Fugue

    OpenAIRE

    Basse, Nils T.

    2018-01-01

    We study streamwise turbulence intensity definitions using smooth- and rough-wall pipe flow measurements made in the Princeton Superpipe. Scaling of turbulence intensity with the bulk (and friction) Reynolds number is provided for the definitions. The turbulence intensity is proportional to the square root of the friction factor with the same proportionality constant for smooth- and rough-wall pipe flow. Turbulence intensity definitions providing the best description of the measurements are i...

  13. Modifications of the urban heat island characteristics under exceptionally hot weather - A case study

    Science.gov (United States)

    Founda, Dimitra; Pierros, Fragiskos; Santamouris, Mathew

    2016-04-01

    Considerable recent research suggests that heat waves will become more frequent, more intense and longer in the future. Heat waves are characterised by the dominance of prolonged abnormally hot conditions related to synoptic-scale anomalies, and thus affect extensive geographical areas. Heat waves (HW) have a profound impact on humans and have been proven to increase mortality. Urban areas are known to be hotter than the surrounding rural areas due to the well-documented urban heat island (UHI) phenomenon. Urban areas face increased risk under heat waves, due to the added heat from the urban heat island and increased population density. Given that urban populations keep increasing, citizens are exposed to significant heat-related risk. Mitigation and adaptation strategies require a deep understanding of the response of urban heat islands under extremely hot conditions. The response of the urban heat island under selected episodes of heat waves is examined in the city of Athens, from the comparison between stations of different characteristics (urban, suburban, coastal and rural). Two distinct episodes of heat waves occurring during summer 2000 were selected. Daily maximum air temperature at the urban station of the National Observatory of Athens (NOA) exceeded 40 °C for at least three consecutive days for both episodes. The intensity of UHI during heat waves was compared to the intensity under 'normal' conditions, represented by a period 'before' and 'after' the heat wave. Striking differences in UHI features between HW and non-HW cases were observed, depending on the time of the day and the type of station. The comparison between the urban and the coastal station showed an increase of the order of 3 °C in the intensity of UHI during the HW days, as regards both daytime and nighttime conditions. The comparison between the urban and a suburban (inland) station revealed somewhat different behaviour during HWs, with increases of the order of 3 °C in the nocturnal

  14. High intensity circular proton accelerators

    International Nuclear Information System (INIS)

    Craddock, M.K.

    1987-12-01

    Circular machines suitable for the acceleration of high intensity proton beams include cyclotrons, FFAG accelerators, and strong-focusing synchrotrons. This paper discusses considerations affecting the design of such machines for high intensity, especially space charge effects and the role of beam brightness in multistage accelerators. Current plans for building a new generation of high intensity 'kaon factories' are reviewed. 47 refs

  15. River flooding due to intense precipitation

    International Nuclear Information System (INIS)

    Lin, James C.

    2014-01-01

    River stage can rise and cause site flooding due to local intense precipitation (LIP), dam failures, snow melt in conjunction with precipitation or dam failures, etc. As part of the re-evaluation of the design basis as well as the PRA analysis of other external events, the likelihood and consequence of river flooding leading to the site flooding need to be examined more rigorously. To evaluate the effects of intense precipitation on site structures, the site watershed hydrology and pond storage are calculated. To determine if river flooding can cause damage to risk-significant systems, structures, and components (SSC), water surface elevations are analyzed. Typically, the amount and rate of the input water is determined first. For intense precipitation, the fraction of the rainfall in the watershed drainage area not infiltrated into the ground is collected in the river and contributes to the rise of river water elevation. For design basis analysis, the Probable Maximum Flood (PMF) is evaluated using the Probable Maximum Precipitation (PMP) based on the site topography/configuration. The peak runoff flow rate and water surface elevations resulting from the precipitation induced flooding can then be estimated. The runoff flow hydrograph and peak discharge flows can be developed using the synthetic hydrograph method. The standard step method can then be used to determine the water surface elevations along the river channel. Thus, the flood water from the local intense precipitation storm and excess runoff from the nearby river can be evaluated to calculate the water surface elevations, which can be compared with the station grade floor elevation to determine the effects of site flooding on risk-significant SSCs. The analysis needs to consider any possible diversion flow and the effects of changes to the site configurations. Typically, the analysis is performed based on conservative peak rainfall intensity and the assumptions of failure of the site drainage facilities
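
    As a rough illustration of the last step, the sketch below marches a water-surface profile upstream through a prismatic rectangular channel with the standard step method (an energy balance using a Manning friction slope). The channel geometry, roughness, discharge and the subcritical-flow assumption are all hypothetical; a real analysis would use surveyed cross sections and a dedicated hydraulics tool.

```python
import math

G = 9.81  # m/s^2

def friction_slope(Q, y, b, n):
    """Manning friction slope for a rectangular channel (SI units)."""
    A = b * y
    R = A / (b + 2.0 * y)
    return (n * Q / (A * R ** (2.0 / 3.0))) ** 2

def step_upstream(y_ds, Q, b, n, s0, dx, tol=1e-5):
    """Depth dx upstream of a section with known depth y_ds (subcritical flow)."""
    E_ds = y_ds + (Q / (b * y_ds)) ** 2 / (2.0 * G)
    y = y_ds                                     # initial guess
    for _ in range(200):
        E_us = y + (Q / (b * y)) ** 2 / (2.0 * G)
        sf = 0.5 * (friction_slope(Q, y, b, n) + friction_slope(Q, y_ds, b, n))
        residual = E_us - E_ds - (sf - s0) * dx  # energy balance error
        if abs(residual) < tol:
            break
        y -= 0.5 * residual                      # damped correction
    return y

# Hypothetical reach: Q = 250 m3/s, 40 m wide, n = 0.035, bed slope 0.0005,
# known downstream depth 4.0 m, ten 200 m steps upstream.
depths = [4.0]
for _ in range(10):
    depths.append(step_upstream(depths[-1], Q=250.0, b=40.0, n=0.035, s0=0.0005, dx=200.0))
```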

  16. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.
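
    The first of the two models described (maximum cabin temperature as a function of maximum ambient air temperature and average daily solar radiation) has the form of an ordinary least-squares fit. The sketch below shows that form only; the coefficients come from whatever data are supplied, not from the published study.

```python
import numpy as np

def fit_cabin_model(t_air_max, solar_rad, t_cabin_max):
    """Fit T_cabin_max ~ a + b * T_air_max + c * solar_rad by least squares."""
    t_air_max = np.asarray(t_air_max, dtype=float)
    solar_rad = np.asarray(solar_rad, dtype=float)
    X = np.column_stack([np.ones_like(t_air_max), t_air_max, solar_rad])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(t_cabin_max, dtype=float), rcond=None)
    return coeffs  # [a, b, c]

def predict_cabin_max(coeffs, t_air_max, solar_rad):
    a, b, c = coeffs
    return a + b * t_air_max + c * solar_rad
```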

  17. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

    Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data analysis Center) and then estimated cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. By using this relation we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the solar activity increasing phase. The successful prediction is proven by a good correlation (r=0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
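
    Higuchi's method estimates the fractal dimension directly from the time series of daily sunspot numbers. A minimal, unoptimised version is sketched below; the k_max choice is an assumption, and the published analysis may differ in detail.

```python
import numpy as np

def higuchi_fractal_dimension(x, k_max=10):
    """Higuchi's fractal dimension of a 1-D series: slope of log L(k) vs log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    curve_lengths = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # mean absolute increment of the subsampled series, rescaled to n-1 points
            l_mk = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k) / k
            lengths.append(l_mk)
        curve_lengths.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(curve_lengths), 1)
    return slope
```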

  18. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.; Ito, N.

    2013-01-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size is located between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.

  19. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.

    2013-10-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size is located between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.

  20. How long do centenarians survive? Life expectancy and maximum lifespan.

    Science.gov (United States)

    Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A

    2017-08-01

    The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  1. Strong Solar Control of Infrared Aurora on Jupiter: Correlation Since the Last Solar Maximum

    Science.gov (United States)

    Kostiuk, T.; Livengood, T. A.; Hewagama, T.

    2009-01-01

    Polar aurorae in Jupiter's atmosphere radiate throughout the electromagnetic spectrum from X ray through mid-infrared (mid-IR, 5 - 20 micron wavelength). Voyager IRIS data and ground-based spectroscopic measurements of Jupiter's northern mid-IR aurora, acquired since 1982, reveal a correlation between auroral brightness and solar activity that has not been observed in Jovian aurora at other wavelengths. Over nearly three solar cycles, Jupiter auroral ethane emission brightness and solar 10.7 cm radio flux and sunspot number are positively correlated with high confidence. Ethane line emission intensity varies over tenfold between low and high solar activity periods. Detailed measurements have been made using the GSFC HIPWAC spectrometer at the NASA IRTF since the last solar maximum, following the mid-IR emission through the declining phase toward solar minimum. An even more convincing correlation with solar activity is evident in these data. Current analyses of these results will be described, including planned measurements on polar ethane line emission scheduled through the rise of the next solar maximum beginning in 2009, with a steep gradient to a maximum in 2012. This work is relevant to the Juno mission and to the development of the Europa Jupiter System Mission. Results of observations at the Infrared Telescope Facility (IRTF) operated by the University of Hawaii under Cooperative Agreement no. NCC5-538 with the National Aeronautics and Space Administration, Science Mission Directorate, Planetary Astronomy Program. This work was supported by the NASA Planetary Astronomy Program.

  2. Intensity modulated conformal radiotherapy

    International Nuclear Information System (INIS)

    Noel, Georges; Moty-Monnereau, Celine; Meyer, Aurelia; David, Pauline; Pages, Frederique; Muller, Felix; Lee-Robin, Sun Hae; David, Denis Jean

    2006-12-01

    This publication reports the assessment of intensity-modulated conformal radiotherapy (IMCR). The assessment is based on a literature survey focussed on indications, short-term efficacy and safety, the long-term risk of radiation-induced cancer, the role of IMCR in the therapeutic strategy, the conditions of execution, the impact on morbidity, mortality and quality of life, and the impact on the health system and on public health policies and programmes. It is also based on the opinion of a group of experts regarding the technical benefit of IMCR, its indications depending on the cancer type, safety in terms of radiation-induced cancers, and conditions of execution. On this basis, the report indicates the indications for which the evidence supporting the use of IMCR can be considered sufficient or remains undetermined. It also provides a technical description of IMCR and helical tomotherapy, discusses the use of the technique for various pathologies and tumours, analyses the present situation of care in France, and comments on how the technique is identified in foreign classifications.

  3. Intensive Care Unit Delirium

    Directory of Open Access Journals (Sweden)

    Yongsuk Kim

    2015-05-01

    Full Text Available Delirium is described as a manifestation of acute brain injury and recognized as one of the most common complications in intensive care unit (ICU patients. Although the causes of delirium vary widely among patients, delirium increases the risk of longer ICU and hospital length of stay, death, cost of care, and post-ICU cognitive impairment. Prevention and early detection are therefore crucial. However, the clinical approach toward delirium is not sufficiently aggressive, despite the condition’s high incidence and prevalence in the ICU setting. While the underlying pathophysiology of delirium is not fully understood, many risk factors have been suggested. As a way to improve delirium-related clinical outcome, high-risk patients can be identified. A valid and reliable bedside screening tool is also needed to detect the symptoms of delirium early. Delirium is commonly treated with medications, and haloperidol and atypical antipsychotics are commonly used as standard treatment options for ICU patients although their efficacy and safety have not been established. The approaches for the treatment of delirium should focus on identifying the underlying causes and reducing modifiable risk factors to promote early mobilization.

  4. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.

  5. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  6. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    DEFF Research Database (Denmark)

    Gao, Junling; Chen, Min

    2013-01-01

    Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated from the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy.
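
    For the ideal constant-parameter model, the estimate referred to above is simply P_max = V_oc * I_sc / 4 (maximum power transfer into a matched load); the record's point is that the two switch modes can make the measured quantities disagree by roughly 10%. The readings in the example below are hypothetical.

```python
def max_power_estimate(v_oc, i_sc):
    """Ideal linear-model estimate: P_max = V_oc * I_sc / 4 (matched load)."""
    return 0.25 * v_oc * i_sc

# Hypothetical readings from the two switch modes of the same module:
p_open_to_short = max_power_estimate(4.2, 1.9)   # ~2.0 W
p_short_to_open = max_power_estimate(4.0, 1.8)   # ~1.8 W, about 10% lower
```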

  7. Mass mortality of the vermetid gastropod Ceraesignum maximum

    Science.gov (United States)

    Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.

    2016-09-01

    Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.

  8. Stationary neutrino radiation transport by maximum entropy closure

    International Nuclear Information System (INIS)

    Bludman, S.A.

    1994-11-01

    The authors obtain the angular distributions that maximize the entropy functional for Maxwell-Boltzmann (classical), Bose-Einstein, and Fermi-Dirac radiation. In the low and high occupancy limits, the maximum entropy closure is bounded by previously known variable Eddington factors that depend only on the flux. For intermediate occupancy, the maximum entropy closure depends on both the occupation density and the flux. The Fermi-Dirac maximum entropy variable Eddington factor shows a scale invariance, which leads to a simple, exact analytic closure for fermions. This two-dimensional variable Eddington factor gives results that agree well with exact (Monte Carlo) neutrino transport calculations out of a collapse residue during early phases of hydrostatic neutron star formation

  9. Effects of intensive forest management practices on insect infestation levels and loblolly pine growth

    Science.gov (United States)

    John T. Nowak; C. Wayne Berisford

    2000-01-01

    Intensive forest management practices have been shown to increase tree growth and shorten rotation time. However, they may also lead to an increased need for insect pest management because of higher infestation levels and lower action thresholds. To investigate the relationship between intensive management practices and insect infestation, maximum growth potential...

  10. Spatio-temporal observations of the tertiary ozone maximum

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2009-07-01

    Full Text Available We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed observational spatial and temporal distributions of the night-time ozone mixing ratio in the mesosphere to be obtained for the first time.

    The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can also be observed at very high latitudes, not only in the beginning and at the end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.

  11. Estimating the maximum potential revenue for grid connected electricity storage :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
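
    A hedged sketch of the arbitrage-only piece of such a calculation is shown below: a linear program that schedules hourly charging and discharging against a known price vector subject to power and energy limits. The efficiency, ratings and state-of-charge bookkeeping are illustrative assumptions, and the regulation-market offers treated in the paper are omitted.

```python
import numpy as np
from scipy.optimize import linprog

def max_arbitrage_revenue(prices, p_max=1.0, e_max=4.0, eta=0.85, soc0=2.0):
    """Upper bound on arbitrage revenue: maximise sum_t price_t*(discharge_t - charge_t)."""
    prices = np.asarray(prices, dtype=float)
    T = len(prices)
    # decision variables x = [charge_1..T, discharge_1..T] (MWh per hour)
    c = np.concatenate([prices, -prices])        # linprog minimises, so negate revenue
    L = np.tril(np.ones((T, T)))                 # running-sum operator
    soc_coeff = np.hstack([eta * L, -L / eta])   # soc_t = soc0 + soc_coeff @ x
    A_ub = np.vstack([soc_coeff, -soc_coeff])    # enforce soc_t <= e_max and soc_t >= 0
    b_ub = np.concatenate([np.full(T, e_max - soc0), np.full(T, soc0)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, p_max)] * (2 * T), method="highs")
    return -res.fun, res.x[:T], res.x[T:]        # revenue, charge schedule, discharge schedule
```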

  12. Discontinuity of maximum entropy inference and quantum phase transitions

    International Nuclear Information System (INIS)

    Chen, Jianxin; Ji, Zhengfeng; Yu, Nengkun; Zeng, Bei; Li, Chi-Kwong; Poon, Yiu-Tung; Shen, Yi; Zhou, Duanlu

    2015-01-01

    In this paper, we discuss the connection between two genuinely quantum phenomena—the discontinuity of quantum maximum entropy inference and quantum phase transitions at zero temperature. It is shown that the discontinuity of the maximum entropy inference of local observable measurements signals the non-local type of transitions, where local density matrices of the ground state change smoothly at the transition point. We then propose to use the quantum conditional mutual information of the ground state as an indicator to detect the discontinuity and the non-local type of quantum phase transitions in the thermodynamic limit. (paper)

  13. On an Objective Basis for the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    David J. Miller

    2015-01-01

    Full Text Available In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has stronger objective basis, rooted in results from information theory, than their alternative proposed solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.

  14. The maximum economic depth of groundwater abstraction for irrigation

    Science.gov (United States)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of
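
    The economic comparison described here can be illustrated with a toy calculation: the deepest water level at which yearly irrigation revenue still covers yearly pumping-energy costs plus annualised well costs. Every number and the cost model itself are assumptions for illustration, not the parameters of the study.

```python
RHO_G = 1000.0 * 9.81        # J per m3 of water per metre of lift

def max_economic_depth(annual_revenue, pumped_volume_m3, energy_price_per_kwh,
                       pump_efficiency=0.5, drill_cost_per_m=100.0,
                       amortisation_years=20.0, depth_limit=1000.0, step=1.0):
    """Deepest static head (m) at which yearly revenue still exceeds yearly costs."""
    depth = step
    while depth <= depth_limit:
        energy_kwh = RHO_G * pumped_volume_m3 * depth / pump_efficiency / 3.6e6
        yearly_cost = (energy_kwh * energy_price_per_kwh
                       + drill_cost_per_m * depth / amortisation_years)
        if yearly_cost > annual_revenue:
            return depth - step              # last profitable depth
        depth += step
    return depth_limit
```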

  15. Efficiency of autonomous soft nanomachines at maximum power.

    Science.gov (United States)

    Seifert, Udo

    2011-01-14

    We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.

  16. A comparison of methods of predicting maximum oxygen uptake.

    OpenAIRE

    Grant, S; Corbett, K; Amjad, A M; Wilson, J; Aitchison, T

    1995-01-01

    The aim of this study was to compare the results from a Cooper walk/run test, a multistage shuttle run test, and a submaximal cycle test with the direct measurement of maximum oxygen uptake on a treadmill. Three predictive tests of maximum oxygen uptake--linear extrapolation of heart rate versus VO2 collected from a submaximal cycle ergometer test (predicted L/E), the Cooper 12 min walk/run test, and a multi-stage progressive shuttle run test (MST)--were performed by 22 young healthy males (mean...

  17. Maximum length scale in density based topology optimization

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Wang, Fengwen

    2017-01-01

    The focus of this work is on two new techniques for imposing maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low-pass filter applied to the design parametrization. The main idea...

  18. A Maximum Entropy Method for a Robust Portfolio Problem

    Directory of Open Access Journals (Sweden)

    Yingying Xu

    2014-06-01

    Full Text Available We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for a market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.

  19. Dynamics of triacylglycerol and EPA production in Phaeodactylum tricornutum under nitrogen starvation at different light intensities.

    Directory of Open Access Journals (Sweden)

    Ilse M Remmers

    Full Text Available Lipid production in microalgae is highly dependent on the applied light intensity. However, for the EPA-producing model diatom Phaeodactylum tricornutum, clear consensus on the impact of incident light intensity on lipid productivity is still lacking. This study quantifies the impact of different incident light intensities on the biomass, TAG and EPA yield on light in nitrogen-starved batch cultures of P. tricornutum. The maximum biomass concentration and maximum TAG and EPA contents were found to be independent of the applied light intensity. The lipid yield on light was reduced at elevated light intensities (>100 μmol m-2 s-1). The highest TAG yield on light (112 mg TAG molph-1) was found at the lowest light intensity tested (60 μmol m-2 s-1), which is still relatively low compared to values reported in literature for other algae. Furthermore, mass balance analysis showed that the EPA fraction in TAG may originate from photosynthetic membrane lipids.

  20. The Canadian intense neutron generator

    Energy Technology Data Exchange (ETDEWEB)

    Tunnicliffe, P R

    1967-07-01

    Atomic Energy of Canada Ltd. has proposed construction of an Intense Neutron-Generator. The generator would produce uniquely-intense beams of thermal neutrons for solid-state and low-energy nuclear studies and would yield significant quantities of radioisotopes of both research and commercial value; it would also produce copious sources of mesons and energetic nucleons for use in intermediate-energy nuclear physics and in nuclear-structure studies. The primary neutron source of 10{sup 19}/sec would be generated by bombarding a heavy-element target with a continuous beam of 65 mA of 1 GeV protons. The target of circulating and cooled Pb-Bi eutectic would be surrounded by a tank of heavy water moderator yielding a maximum useful flux of 10{sup 16} thermal neutrons/cm{sup 2}/sec in the region where neutron beams can be extracted. This high-energy spallation process for producing neutrons is nearly four times more efficient in producing neutrons per unit of thermal energy released in the neutron source compared with a fission reactor. Nevertheless, if energy costs for producing the 65 MW proton beam are to be within reason, the machine producing the beam must be efficient. A D.C. machine is in principle ideal but practical achievement of 1 GV is not likely within the time desired. An accelerator where the protons gain energy from radio-frequency fields is the most likely prospect. We have selected a linear accelerator as our reference design and detailed theoretical and experimental studies are in progress. The machine is based on the Los Alamos Meson Physics Facility design reoptimized for continuous rather than pulsed operation. It is approximately one mile long and is expected to achieve nearly 50 percent overall efficiency. There are two major portions, an 'Alvarez' Section operating at 200 MHz accelerating the beam to about 150 MeV, followed by a 'Waveguide' section operating at 800 MHz. Protons are initially injected by an 0.75 MV D.C. accelerator. The Alvarez

  1. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano; Basili, Roberto; Meroni, Fabrizio; Musacchio, Gemma; Mai, Paul Martin; Valensise, Gianluca

    2012-01-01

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i. e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of M w 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.

  2. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano

    2012-03-17

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i. e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of M w 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.

  3. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.

  4. Scientific substantination of maximum allowable concentration of fluopicolide in water

    Directory of Open Access Journals (Sweden)

    Pelo I.М.

    2014-03-01

    Full Text Available Research was carried out in order to substantiate the maximum allowable concentration of fluopicolide in the water of water reservoirs. Methods of study: a laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The results of the influence of fluopicolide on the organoleptic properties of water and on the sanitary regimen of reservoirs for household purposes are given, and its subthreshold concentration in water by the sanitary-toxicological hazard index was calculated. The threshold concentration of the substance by the main hazard criteria was established and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the fluopicolide threshold concentration in water by the organoleptic hazard index (limiting criterion – the smell) is 0.15 mg/dm3; by the general sanitary hazard index (limiting criteria – impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification) it is 0.015 mg/dm3; the maximum non-effective concentration is 0.14 mg/dm3; and the maximum allowable concentration is 0.015 mg/dm3.

  5. Computing the maximum volume inscribed ellipsoid of a polytopic projection

    NARCIS (Netherlands)

    Zhen, Jianzhe; den Hertog, Dick

    We introduce a novel scheme based on a blending of Fourier-Motzkin elimination (FME) and adjustable robust optimization techniques to compute the maximum volume inscribed ellipsoid (MVE) in a polytopic projection. It is well-known that deriving an explicit description of a projected polytope is

  6. Computing the Maximum Volume Inscribed Ellipsoid of a Polytopic Projection

    NARCIS (Netherlands)

    Zhen, J.; den Hertog, D.

    2015-01-01

    We introduce a novel scheme based on a blending of Fourier-Motzkin elimination (FME) and adjustable robust optimization techniques to compute the maximum volume inscribed ellipsoid (MVE) in a polytopic projection. It is well-known that deriving an explicit description of a projected polytope is

  7. Maximum super angle optimization method for array antenna pattern synthesis

    DEFF Research Database (Denmark)

    Wu, Ji; Roederer, A. G

    1991-01-01

    Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criteria and vector space representation, a simple and efficient optimization method is presented for array and array fed reflector power pattern synthesis. A sector pattern synthesized by a 2...

  8. correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...

  9. Molecular markers linked to apomixis in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    Panicum maximum Jacq. is an important forage grass of African origin largely used in the tropics. The genetic breeding of this species is based on the hybridization of sexual and apomictic genotypes and selection of apomictic F1 hybrids. The objective of this work was to identify molecular markers linked to apomixis in P.

  10. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  11. On a Weak Discrete Maximum Principle for hp-FEM

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Vejchodský, Tomáš

    -, č. 209 (2007), s. 54-65 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA102/05/0629 Institutional research plan: CEZ:AV0Z20570509; CEZ:AV0Z10190503 Keywords : discrete maximum principle * hp-FEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.943, year: 2007

  12. Modeling maximum daily temperature using a varying coefficient regression model

    Science.gov (United States)

    Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  13. Maximum Interconnectedness and Availability for Directional Airborne Range Extension Networks

    Science.gov (United States)

    2016-08-29

    Tactical military networks both on land and at sea often have restricted transmission... a standard definition in graph theoretic and networking literature that is related to, but different from, the metric we consider.

  14. Maximum of difference assessment of typical semitrailers: a global study

    CSIR Research Space (South Africa)

    Kienhofer, F

    2016-11-01

    Full Text Available the maximum allowable width and frontal overhang as stipulated by legislation from Australia, the European Union, Canada, the United States and South Africa. The majority of the Australian, EU and Canadian semitrailer combinations and all of the South African...

  15. The constraint rule of the maximum entropy principle

    NARCIS (Netherlands)

    Uffink, J.

    1995-01-01

    The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability

  16. 24 CFR 232.565 - Maximum loan amount.

    Science.gov (United States)

    2010-04-01

    ... URBAN DEVELOPMENT MORTGAGE AND LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES MORTGAGE INSURANCE FOR NURSING HOMES, INTERMEDIATE CARE FACILITIES, BOARD AND CARE HOMES, AND ASSISTED... Fire Safety Equipment Eligible Security Instruments § 232.565 Maximum loan amount. The principal amount...

  17. 5 CFR 531.221 - Maximum payable rate rule.

    Science.gov (United States)

    2010-01-01

    ... before the reassignment. (ii) If the rate resulting from the geographic conversion under paragraph (c)(2... previous rate (i.e., the former special rate after the geographic conversion) with the rates on the current... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Maximum payable rate rule. 531.221...

  18. Effects of bruxism on the maximum bite force

    Directory of Open Access Journals (Sweden)

    Todić Jelena T.

    2017-01-01

    Full Text Available Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups were submitted to the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying the maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism influences the increase in the maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
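
    The measurement chain is a simple product: maximal bite force = maximal bite pressure from the pressure-sensitive film times the occlusal contact area. A one-line sketch follows; the unit convention (MPa times mm2 gives newtons) and the example numbers are assumptions.

```python
def maximal_bite_force(max_pressure_mpa, contact_area_mm2):
    """Maximal bite force in newtons (1 MPa * 1 mm^2 = 1 N)."""
    return max_pressure_mpa * contact_area_mm2

# Hypothetical film reading: 45 MPa over 18 mm^2 -> 810 N
force_n = maximal_bite_force(45.0, 18.0)
```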

  19. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  20. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...

  1. Handelman's hierarchy for the maximum stable set problem

    NARCIS (Netherlands)

    Laurent, M.; Sun, Z.

    2014-01-01

    The maximum stable set problem is a well-known NP-hard problem in combinatorial optimization, which can be formulated as the maximization of a quadratic square-free polynomial over the (Boolean) hypercube. We investigate a hierarchy of linear programming relaxations for this problem, based on a

  2. New shower maximum trigger for electrons and photons at CDF

    International Nuclear Information System (INIS)

    Amidei, D.; Burkett, K.; Gerdes, D.; Miao, C.; Wolinski, D.

    1994-01-01

    For the 1994 Tevatron collider run, CDF has upgraded the electron and photon trigger hardware to make use of shower position and size information from the central shower maximum detector. For electrons, the upgrade has resulted in a 50% reduction in backgrounds while retaining approximately 90% of the signal. The new trigger also eliminates the background to photon triggers from single-phototube spikes

  3. New shower maximum trigger for electrons and photons at CDF

    International Nuclear Information System (INIS)

    Gerdes, D.

    1994-08-01

    For the 1994 Tevatron collider run, CDF has upgraded the electron and photon trigger hardware to make use of shower position and size information from the central shower maximum detector. For electrons, the upgrade has resulted in a 50% reduction in backgrounds while retaining approximately 90% of the signal. The new trigger also eliminates the background to photon triggers from single-phototube discharge

  4. Maximum drawdown and the allocation to real estate

    NARCIS (Netherlands)

    Hamelink, F.; Hoesli, M.

    2004-01-01

    The role of real estate in a mixed-asset portfolio is investigated when the maximum drawdown (hereafter MaxDD), rather than the standard deviation, is used as the measure of risk. In particular, it is analysed whether the discrepancy between the optimal allocation to real estate and the actual

  5. A Family of Maximum SNR Filters for Noise Reduction

    DEFF Research Database (Denmark)

    Huang, Gongping; Benesty, Jacob; Long, Tao

    2014-01-01

    significantly increase the SNR but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR...

  6. 5 CFR 581.402 - Maximum garnishment limitations.

    Science.gov (United States)

    2010-01-01

    ... PROCESSING GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Consumer Credit Protection Act Restrictions..., pursuant to section 1673(b)(2) (A) and (B) of title 15 of the United States Code (the Consumer Credit... local law, the maximum part of the aggregate disposable earnings subject to garnishment to enforce any...

  7. Distribution of phytoplankton groups within the deep chlorophyll maximum

    KAUST Repository

    Latasa, Mikel; Cabello, Ana María; Moran, Xose Anxelu G.; Massana, Ramon; Scharek, Renate

    2016-01-01

    and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer

  8. 44 CFR 208.12 - Maximum Pay Rate Table.

    Science.gov (United States)

    2010-10-01

    ...) Physicians. DHS uses the latest Special Salary Rate Table Number 0290 for Medical Officers (Clinical... Personnel, in which case the Maximum Pay Rate Table would not apply. (3) Compensation for Sponsoring Agency... organizations, e.g., HMOs or medical or engineering professional associations, under the revised definition of...

  9. Anti-nutrient components of guinea grass ( Panicum maximum ...

    African Journals Online (AJOL)

    Yomi

    2012-01-31

    Jan 31, 2012 ... A true measure of forage quality is animal ... The anti-nutritional contents of a pasture could be ... nutrient factors in P. maximum; (2) assess the effect of nitrogen ..... 3. http://www.clemson.edu/Fairfield/local/news/quality.

  10. Simulation of new simple fuzzy logic maximum power ...

    African Journals Online (AJOL)

    2010-06-30

    Jun 30, 2010 ... Basic structure photovoltaic system Solar array mathematic ... The equivalent circuit model of a solar cell consists of a current generator and a diode ... control of boost converter (tracker) such that maximum power is achieved at the output of the solar panel.

  11. Sur les estimateurs du maximum de vraisemblance dans les modèles ...

    African Journals Online (AJOL)

    Abstract. We are interested in the existence and uniqueness of maximum likelihood estimators of parameters in the two multiplicative regression models, with Poisson or negative binomial probability distributions. Following his work on the multiplicative Poisson model with two factors without repeated measures, Haberman ...

  12. Gravitational Waves and the Maximum Spin Frequency of Neutron Stars

    NARCIS (Netherlands)

    Patruno, A.; Haskell, B.; D'Angelo, C.

    2012-01-01

    In this paper, we re-examine the idea that gravitational waves are required as a braking mechanism to explain the observed maximum spin frequency of neutron stars. We show that for millisecond X-ray pulsars, the existence of spin equilibrium as set by the disk/magnetosphere interaction is sufficient

  13. Applications of the Maximum Entropy Method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš

    2004-01-01

    Vol. 305 (2004), pp. 57-62, ISSN 0015-0193 Grant - others: DFG and FCI (DE) Institutional research plan: CEZ:AV0Z1010914 Keywords: Maximum Entropy Method * modulated structures * charge density Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 0.517, year: 2004

  14. Phytophthora stricta isolated from Rhododendron maximum in Pennsylvania

    Science.gov (United States)

    During a survey in October 2013, in the Michaux State Forest in Pennsylvania, necrotic Rhododendron maximum leaves were noticed on mature plants alongside a stream. Symptoms were nondescript necrotic lesions at the tips of mature leaves. Colonies resembling a Phytophthora sp. were observed from c...

  15. Transversals and independence in linear hypergraphs with maximum degree two

    DEFF Research Database (Denmark)

    Henning, Michael A.; Yeo, Anders

    2017-01-01

    , k-uniform hypergraphs with maximum degree 2. It is known [European J. Combin. 36 (2014), 231–236] that if H ∈ Hk, then (k + 1)τ(H) ≤ n + m, and there are only two hypergraphs that achieve equality in the bound. In this paper, we prove a much more powerful result, and establish tight upper bounds...

  16. A comparison of optimum and maximum reproduction using the rat ...

    African Journals Online (AJOL)

    of pigs to increase reproduction rate of sows (te Brake, 1978; Walker et al., 1979; Kemm et al., 1980). However, no experimental evidence exists that this strategy would in fact improve biological efficiency. In this pilot experiment, an attempt was made to compare systems of optimum or maximum reproduction using the rat.

  17. Revision of regional maximum flood (RMF) estimation in Namibia ...

    African Journals Online (AJOL)

    Extreme flood hydrology in Namibia for the past 30 years has largely been based on the South African Department of Water Affairs Technical Report 137 (TR 137) of 1988. This report proposes an empirically established upper limit of flood peaks for regions called the regional maximum flood (RMF), which could be ...

  18. Maximum entropy estimation via Gauss-LP quadratures

    NARCIS (Netherlands)

    Thély, Maxime; Sutter, Tobias; Mohajerin Esfahani, P.; Lygeros, John; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    We present an approximation method to a class of parametric integration problems that naturally appear when solving the dual of the maximum entropy estimation problem. Our method builds up on a recent generalization of Gauss quadratures via an infinite-dimensional linear program, and utilizes a

  19. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
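
    A minimal numerical sketch of the point made above, in one dimension: a zero-truncated normal and a log-normal that share the same mean and standard deviation are nevertheless different distributions, so the choice of sampling distribution matters. The parameters below are hypothetical and scipy is used only for illustration.

```python
# Illustrative sketch (not from the paper): for an inherently positive 1-D
# quantity, compare a zero-truncated normal with a log-normal that matches
# the *same* mean and standard deviation.  The two distributions differ,
# which is the practical point behind choosing a sampling distribution.
import numpy as np
from scipy import stats

loc, scale = 1.0, 0.6                  # hypothetical parameters of the underlying normal
a = (0.0 - loc) / scale                # truncation point (zero) in standardized units
trunc = stats.truncnorm(a, np.inf, loc=loc, scale=scale)

m, s = trunc.mean(), trunc.std()       # moments of the truncated normal

# Log-normal with the same mean and standard deviation (moment matching).
sigma2 = np.log(1.0 + (s / m) ** 2)
mu = np.log(m) - 0.5 * sigma2
logn = stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

print(f"shared moments: mean={m:.4f}, std={s:.4f}")
for p in (0.05, 0.5, 0.95):
    print(f"quantile {p:.2f}: truncated-normal={trunc.ppf(p):.4f}  log-normal={logn.ppf(p):.4f}")
```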

  20. Current opinion about maximum entropy methods in Moessbauer spectroscopy

    International Nuclear Information System (INIS)

    Szymanski, K

    2009-01-01

    Current opinion about Maximum Entropy Methods in Moessbauer Spectroscopy is presented. The most important advantage offered by the method is correct data processing under circumstances of incomplete information. A disadvantage is the sophisticated algorithm and its application to specific problems.

  1. The maximum number of minimal codewords in long codes

    DEFF Research Database (Denmark)

    Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.

    2013-01-01

    Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981 by...

  2. Inverse feasibility problems of the inverse maximum flow problems

    Indian Academy of Sciences (India)

    Inverse feasibility problems of the inverse maximum flow problems. Adrian Deaconu and Eleonor Ciurea. Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Brasov, Iuliu Maniu st. 50, Brasov, Romania.

  3. Maximum Permissible Concentrations and Negligible Concentrations for pesticides

    NARCIS (Netherlands)

    Crommentuijn T; Kalf DF; Polder MD; Posthumus R; Plassche EJ van de; CSR

    1997-01-01

    Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) derived for a series of pesticides are presented in this report. These MPCs and NCs are used by the Ministry of Housing, Spatial Planning and the Environment (VROM) to set Environmental Quality Objectives. For some of the

  4. Maximum Safety Regenerative Power Tracking for DC Traction Power Systems

    Directory of Open Access Journals (Sweden)

    Guifu Du

    2017-02-01

    Full Text Available Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential voltage known as “rail potential” is generated between the rails and ground. Currently, abnormal rises of rail potential exist in many railway lines during the operation of railway systems. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of maximum absolute rail potential and energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized based on an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively and guarantee the safety in energy saving of DC traction power systems.

  5. Maximum Mass of Hybrid Stars in the Quark Bag Model

    Science.gov (United States)

    Alaverdyan, G. B.; Vartanyan, Yu. L.

    2017-12-01

    The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.

  6. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
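
    The contrast between maximum-likelihood and maximum-entropy (finite-temperature) decoding can be sketched by exhaustive enumeration on a toy Ising model; this is not the annealer pipeline of the record above, and the couplings, fields and temperature are all invented.

```python
# Minimal sketch (not the annealer pipeline): for a tiny random Ising model
# in a field, compare maximum-likelihood decoding (sign of the ground state)
# with maximum-entropy decoding (sign of each spin's thermal average over
# the Boltzmann distribution).  All couplings here are hypothetical.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 8
J = np.triu(rng.normal(size=(n, n)), 1)          # random couplings (upper triangle)
h = rng.normal(size=n)                            # random local fields
beta = 1.0                                        # inverse temperature

states = np.array(list(itertools.product([-1, 1], repeat=n)))   # all 2^n states
energies = np.array([-s @ J @ s - h @ s for s in states])

# Maximum-likelihood decoding: the single lowest-energy configuration.
ml_bits = states[np.argmin(energies)]

# Maximum-entropy decoding: Boltzmann-weighted spin averages, then take signs.
weights = np.exp(-beta * (energies - energies.min()))
weights /= weights.sum()
magnetisation = weights @ states
me_bits = np.sign(magnetisation)

print("ML decoding:    ", ml_bits)
print("MaxEnt decoding:", me_bits.astype(int))
```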

  7. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  8. Multilevel maximum likelihood estimation with application to covariance matrices

    Czech Academy of Sciences Publication Activity Database

    Turčičová, Marie; Mandel, J.; Eben, Kryštof

    Published online: 23 January (2018) ISSN 0361-0926 R&D Projects: GA ČR GA13-34856S Institutional support: RVO:67985807 Keywords: Fisher information * High dimension * Hierarchical maximum likelihood * Nested parameter spaces * Spectral diagonal covariance model * Sparse inverse covariance model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.311, year: 2016

  9. Heat Convection at the Density Maximum Point of Water

    Science.gov (United States)

    Balta, Nuri; Korganci, Nuri

    2018-01-01

    Water exhibits a maximum in density at normal pressure at around 4 °C. This paper demonstrates that during cooling, at around 4 °C, the temperature remains constant for a while because of heat exchange associated with convective currents inside the water. A superficial approach presents this as a new anomaly of water, but actually it…

  10. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results...

  11. Optimal item discrimination and maximum information for logistic IRT models

    NARCIS (Netherlands)

    Veerkamp, W.J.J.; Veerkamp, Wim J.J.; Berger, Martijn P.F.; Berger, Martijn

    1999-01-01

    Items with the highest discrimination parameter values in a logistic item response theory model do not necessarily give maximum information. This paper derives discrimination parameter values, as functions of the guessing parameter and distances between person parameters and item difficulty, that

  12. Effect of Training Frequency on Maximum Expiratory Pressure

    Science.gov (United States)

    Anand, Supraja; El-Bashiti, Nour; Sapienza, Christine

    2012-01-01

    Purpose: To determine the effects of expiratory muscle strength training (EMST) frequency on maximum expiratory pressure (MEP). Method: We assigned 12 healthy participants to 2 groups of training frequency (3 days per week and 5 days per week). They completed a 4-week training program on an EMST trainer (Aspire Products, LLC). MEP was the primary…

  13. Assessment of the phytoremediation potential of Panicum maximum ...

    African Journals Online (AJOL)

    Obvious signs of phyto-toxicity however appeared in plants exposed to 120 ppm Pb2+ and Cd2+ at day twenty-three, suggesting that P. maximum may be a moderate metal accumulator. Keywords: phytoremediation, heavy metals, uptake, tissues, accumulator. African Journal of Biotechnology, Vol 13(19), 1979-1984 ...

  14. Atlantic Meridional Overturning Circulation During the Last Glacial Maximum.

    NARCIS (Netherlands)

    Lynch-Stieglitz, J.; Adkins, J.F.; Curry, W.B.; Dokken, T.; Hall, I.R.; Herguera, J.C.; Hirschi, J.J.-M.; Ivanova, E.V.; Kissel, C.; Marchal, O.; Marchitto, T.M.; McCave, I.N.; McManus, J.F.; Mulitza, S.; Ninnemann, U.; Peeters, F.J.C.; Yu, E.-F.; Zahn, R.

    2007-01-01

    The circulation of the deep Atlantic Ocean during the height of the last ice age appears to have been quite different from today. We review observations implying that Atlantic meridional overturning circulation during the Last Glacial Maximum was neither extremely sluggish nor an enhanced version of

  15. Modelling information flow along the human connectome using maximum flow.

    Science.gov (United States)

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
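
    The core idea, treating connection strengths as capacities and computing a maximum flow rather than a shortest path, can be illustrated on a hypothetical four-node graph with networkx; the weights below are invented.

```python
# Toy sketch of the idea (hypothetical 4-node "connectome"): treat connection
# strengths as capacities and compute the maximum flow between two regions,
# which accounts for all parallel paths rather than only the shortest one.
import networkx as nx

G = nx.DiGraph()
edges = [                               # hypothetical connection strengths
    ("A", "B", 3.0), ("B", "D", 2.0),   # a strong two-hop route
    ("A", "C", 1.5), ("C", "D", 1.5),   # a weaker parallel route
    ("A", "D", 0.5),                    # a weak direct connection
]
for u, v, w in edges:                   # model undirected connections as two arcs
    G.add_edge(u, v, capacity=w)
    G.add_edge(v, u, capacity=w)

flow_value, _ = nx.maximum_flow(G, "A", "D")
print("max flow A -> D:", flow_value)                   # 0.5 + 1.5 + 2.0 = 4.0
print("shortest path:", nx.shortest_path(G, "A", "D"))  # the direct, weakest edge
```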

  16. Data intensive ATLAS workflows in the Cloud

    CERN Document Server

    Rzehorz, Gerhard Ferdinand; The ATLAS collaboration

    2016-01-01

    This contribution reports on the feasibility of executing data intensive workflows on Cloud infrastructures. In order to assess this, the metric ETC = Events/Time/Cost is formed, which quantifies the different workflow and infrastructure configurations that are tested against each other. In these tests ATLAS reconstruction Jobs are run, examining the effects of overcommitting (more parallel processes running than CPU cores available), scheduling (staggered execution) and scaling (number of cores). The desirability of commissioning storage in the cloud is evaluated, in conjunction with a simple analytical model of the system, and correlated with questions about the network bandwidth, caches and what kind of storage to utilise. In the end a cost/benefit evaluation of different infrastructure configurations and workflows is undertaken, with the goal to find the maximum of the ETC value

  17. Data intensive ATLAS workflows in the Cloud

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00396985; The ATLAS collaboration; Keeble, Oliver; Quadt, Arnulf; Kawamura, Gen

    2017-01-01

    This contribution reports on the feasibility of executing data intensive workflows on Cloud infrastructures. In order to assess this, the metric ETC = Events/Time/Cost is formed, which quantifies the different workflow and infrastructure configurations that are tested against each other. In these tests ATLAS reconstruction Jobs are run, examining the effects of overcommitting (more parallel processes running than CPU cores available), scheduling (staggered execution) and scaling (number of cores). The desirability of commissioning storage in the Cloud is evaluated, in conjunction with a simple analytical model of the system, and correlated with questions about the network bandwidth, caches and what kind of storage to utilise. In the end a cost/benefit evaluation of different infrastructure configurations and workflows is undertaken, with the goal to find the maximum of the ETC value.
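
    The ETC figure of merit described in the two records above is simple to compute once events, wall time and cost are known for a configuration; the sketch below uses invented numbers purely to show how configurations would be ranked.

```python
# Minimal sketch of the ETC = Events / Time / Cost figure of merit used to
# compare workflow/infrastructure configurations; the numbers are invented.
from dataclasses import dataclass

@dataclass
class Configuration:
    name: str
    events: int          # events processed
    wall_time_h: float   # wall-clock time in hours
    cost: float          # cost of the resources in some currency unit

    def etc(self) -> float:
        return self.events / self.wall_time_h / self.cost

configs = [
    Configuration("8 cores, no overcommit", events=40_000, wall_time_h=10.0, cost=8.0),
    Configuration("8 cores, 2x overcommit", events=52_000, wall_time_h=10.0, cost=8.0),
]
best = max(configs, key=Configuration.etc)
print({c.name: round(c.etc(), 1) for c in configs}, "->", best.name)
```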

  18. Effects of Irradiation on bacterial atp luminous intensity of cooled pork and chicken

    International Nuclear Information System (INIS)

    Ju Hua

    2010-01-01

    The effect of irradiation on cooled pork and chicken was examined with the ATP luminous intensity method. The influences of other factors on ATP luminous intensity were also discussed. There was a positive correlation between ATP standard concentration and ATP luminous intensity, and a negative correlation between irradiation dosage and ATP luminous intensity. The ATP luminous intensity of cooled pork and chicken after irradiation followed an inverse S-shaped trend, with the maximum ATP luminous intensity appearing at 6.0 kGy and the minimum at 4.0 and 8.0 kGy. Sterilized water and sterilized pork did not interfere with the ATP luminous intensity of the samples. There was a significant positive correlation between E. coli 10003 concentration and ATP luminous intensity; the correlation coefficient was 0.9437. (authors)

  19. Simultaneous reconstruction, segmentation, and edge enhancement of relatively piecewise continuous images with intensity-level information

    International Nuclear Information System (INIS)

    Liang, Z.; Jaszczak, R.; Coleman, R.; Johnson, V.

    1991-01-01

    A multinomial image model is proposed which uses intensity-level information for reconstruction of contiguous image regions. The intensity-level information assumes that image intensities are relatively constant within contiguous regions over the image-pixel array and that intensity levels of these regions are determined either empirically or theoretically by information criteria. These conditions may be valid, for example, for cardiac blood-pool imaging, where the intensity levels (or radionuclide activities) of myocardium, blood-pool, and background regions are distinct and the activities within each region of muscle, blood, or background are relatively uniform. To test the model, a mathematical phantom over a 64x64 array was constructed. The phantom had three contiguous regions. Each region had a different intensity level. Measurements from the phantom were simulated using an emission-tomography geometry. Fifty projections were generated over 180 degree, with 64 equally spaced parallel rays per projection. Projection data were randomized to contain Poisson noise. Image reconstructions were performed using an iterative maximum a posteriori probability procedure. The contiguous regions corresponding to the three intensity levels were automatically segmented. Simultaneously, the edges of the regions were sharpened. Noise in the reconstructed images was significantly suppressed. Convergence of the iterative procedure to the phantom was observed. Compared with maximum likelihood and filtered-backprojection approaches, the results obtained using the maximum a posteriori probability with the intensity-level information demonstrated qualitative and quantitative improvement in localizing the regions of varying intensities

  20. Individuals underestimate moderate and vigorous intensity physical activity.

    Directory of Open Access Journals (Sweden)

    Karissa L Canning

    Full Text Available BACKGROUND: It is unclear whether the common physical activity (PA) intensity descriptors used in PA guidelines worldwide align with the associated percent heart rate maximum method used for prescribing relative PA intensities consistently between sexes, ethnicities, age categories and across body mass index (BMI) classifications. OBJECTIVES: The objectives of this study were to determine whether individuals properly select light, moderate and vigorous intensity PA using the intensity descriptions in PA guidelines and to determine if there are differences in estimation across sex, ethnicity, age and BMI classifications. METHODS: 129 adults were instructed to walk/jog at a "light," "moderate" and "vigorous effort" in a randomized order. The PA intensities were categorized as being below, at or above the following %HRmax ranges: 50-63% for light, 64-76% for moderate and 77-93% for vigorous effort. RESULTS: On average, people correctly estimated light effort as 51.5±8.3%HRmax but underestimated moderate effort as 58.7±10.7%HRmax and vigorous effort as 69.9±11.9%HRmax. Participants walked at a light intensity (57.4±10.5%HRmax) when asked to walk at a pace that provided health benefits, wherein 52% of participants walked at a light effort pace, 19% walked at a moderate effort and 5% walked at a vigorous effort pace. These results did not differ by sex, ethnicity or BMI class. However, younger adults underestimated moderate and vigorous intensity more so than middle-aged adults (P<0.05). CONCLUSION: When the common PA guideline descriptors were aligned with the associated %HRmax ranges, the majority of participants underestimated the intensity of PA that is needed to obtain health benefits. Thus, new subjective descriptions for moderate and vigorous intensity may be warranted to aid individuals in correctly interpreting PA intensities.
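
    A small helper can band a measured heart rate against the %HRmax ranges quoted above (50-63% light, 64-76% moderate, 77-93% vigorous). The age-predicted formula HRmax = 220 - age is an assumption added for illustration; the study itself may have used measured HRmax.

```python
# Sketch of the %HRmax banding used in the study (50-63% light, 64-76%
# moderate, 77-93% vigorous); the helper simply reports whether a measured
# heart rate falls below, inside, or above the intended band.
def percent_hrmax(heart_rate: float, age_years: float) -> float:
    hr_max = 220.0 - age_years              # common age-predicted HRmax estimate (assumption)
    return 100.0 * heart_rate / hr_max

BANDS = {"light": (50, 63), "moderate": (64, 76), "vigorous": (77, 93)}

def rate_effort(heart_rate: float, age_years: float, intended: str) -> str:
    pct = percent_hrmax(heart_rate, age_years)
    low, high = BANDS[intended]
    if pct < low:
        return f"{pct:.0f}%HRmax: below the {intended} band ({low}-{high}%)"
    if pct > high:
        return f"{pct:.0f}%HRmax: above the {intended} band ({low}-{high}%)"
    return f"{pct:.0f}%HRmax: within the {intended} band ({low}-{high}%)"

# Example: a 30-year-old walking at 112 bpm who intends "moderate" effort.
print(rate_effort(112, 30, "moderate"))     # about 59%HRmax -> below the moderate band
```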

  1. Analysis of low-intensity scintillation spectra

    International Nuclear Information System (INIS)

    Muravsky, V.; Tolstov, S.A.

    2002-01-01

    Maximum likelihood algorithms for estimating nuclide activities from low-intensity scintillation γ-ray spectra have been developed. The algorithms treat full energy peaks and Compton parts of spectra, and they are more effective than least squares estimators. The factors that could lead to bias of the activity estimates are taken into account. A theoretical analysis of the problem of choosing the optimal set of initial spectra for the spectrum model, so as to minimize the errors of the activity estimation, has been carried out for the general case of N components with Gaussian or Poisson statistics. The obtained criterion allows superfluous initial spectra of nuclides to be excluded from the model. A special calibration procedure for scintillation γ-spectrometers has been developed. This procedure is required for application of the maximum likelihood activity estimators processing all the channels of the scintillation γ-spectrum, including the Compton part. It allows one to take into account the influence of variation in the sample mass density. An algorithm for testing the adequacy of the spectrum model to the processed scintillation spectrum has also been developed. The algorithms are implemented in Borland Pascal 7 as a library of procedures and functions. The library is compatible with Delphi 1.0 and higher versions. It can be used as the algorithmic basis for highly sensitive scintillation γ- and β-spectrometric devices. (author)

  2. Windiness spells in SW Europe since the last glacial maximum

    NARCIS (Netherlands)

    Costas, S.; Naugthon, P.; Goble, R.; Renssen, H.

    2016-01-01

    Dunefields have a great potential to unravel past regimes of atmospheric circulation as they record direct traces of this component of the climate system. Along the Portuguese coast, transgressive dunefields represent relict features originated by intense and frequent westerly winds that largely

  3. MRI intensity inhomogeneity correction by combining intensity and spatial information

    International Nuclear Information System (INIS)

    Vovk, Uros; Pernus, Franjo; Likar, Bostjan

    2004-01-01

    We propose a novel fully automated method for retrospective correction of intensity inhomogeneity, which is an undesired phenomenon in many automatic image analysis tasks, especially if quantitative analysis is the final goal. Besides the most commonly used intensity features, additional spatial image features are incorporated to improve inhomogeneity correction and to make it more dynamic, so that local intensity variations can be corrected more efficiently. The proposed method is a four-step iterative procedure in which a non-parametric inhomogeneity correction is conducted. First, the probability distribution of image intensities and corresponding second derivatives is obtained. Second, intensity correction forces, condensing the probability distribution along the intensity feature, are computed for each voxel. Third, the inhomogeneity correction field is estimated by regularization of all voxel forces, and fourth, the corresponding partial inhomogeneity correction is performed. The degree of inhomogeneity correction dynamics is determined by the size of the regularization kernel. The method was qualitatively and quantitatively evaluated on simulated and real MR brain images. The obtained results show that the proposed method does not corrupt inhomogeneity-free images and successfully corrects intensity inhomogeneity artefacts even if these are more dynamic

  4. Novel TPPO Based Maximum Power Point Method for Photovoltaic System

    Directory of Open Access Journals (Sweden)

    ABBASI, M. A.

    2017-08-01

    Full Text Available Photovoltaic (PV) systems have great potential, and they are being installed more than other renewable energy sources nowadays. However, a PV system cannot perform optimally due to its strong dependence on climate conditions. Because of this dependency, a PV system does not operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe method (P&O), which is the most popular due to its simplicity, low cost and fast tracking. However, it deviates from the MPP in continuously changing weather conditions, especially in rapidly changing irradiance conditions. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), has been proposed to improve PV system performance in changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of a PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
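
    For reference, the baseline perturb and observe (P&O) logic that the record compares against can be sketched in a few lines; this is the generic textbook rule, not the proposed TPPO method, and the step size and sample values are arbitrary.

```python
# Minimal perturb-and-observe (P&O) step, the baseline method the abstract
# compares against; this is a generic textbook sketch, not the proposed TPPO.
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """Return the next reference voltage for the converter.

    v, p           -- present panel voltage and power
    v_prev, p_prev -- values at the previous sampling instant
    step           -- fixed perturbation size (volts)
    """
    dp, dv = p - p_prev, v - v_prev
    if dp == 0:
        return v                        # on the MPP: hold
    # If the last perturbation increased power, keep moving the same way,
    # otherwise reverse direction.
    if (dp > 0) == (dv > 0):
        return v + step
    return v - step

# One iteration: power rose after increasing the voltage, so keep increasing.
print(perturb_and_observe(v=30.5, p=205.0, v_prev=30.0, p_prev=200.0))  # 31.0
```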

  5. Maximum power point tracker for photovoltaic power plants

    Science.gov (United States)

    Arcidiacono, V.; Corsi, S.; Lambri, L.

    The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.

  6. On the maximum of wave surface of sea waves

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1980-01-01

    This article treats the wave surface as a normal stationary random process in order to estimate the maximum of the wave surface in a given time interval by means of theoretical results from probability theory. The results are represented by formulas (13) to (19) in this article. It is proved that, as the time interval approaches infinity, the formulas (3) and (6) for E(ηmax) that were derived in the references (Cartwright, Longuet-Higgins) can also be derived from the asymptotic distribution of the maximum of the wave surface provided by this article. The advantage of the results obtained from this point of view as compared with the results obtained from the references is discussed.

  7. Einstein-Dirac theory in spin maximum I

    International Nuclear Information System (INIS)

    Crumeyrolle, A.

    1975-01-01

    A unitary Einstein-Dirac theory, first in spin maximum 1, is constructed. An original feature of this article is that it is written without any tetrad techniques; only basic notions and existence conditions for spinor structures on pseudo-Riemannian fibre bundles are used. A coupling of the gravitational and electromagnetic fields is pointed out, in the geometric setting of the tangent bundle over space-time. Generalized Maxwell equations for inductive media in the presence of a gravitational field are obtained. The enlarged Einstein-Schroedinger theory gives a particular case of this E.D. theory; E.S. theory is a truncated E.D. theory in spin maximum 1. A close relation between the torsion-vector and Schroedinger's potential exists, and the nullity of the torsion-vector has a spinor meaning. Finally, the Petiau-Duffin-Kemmer theory is incorporated in this geometric setting [fr

  8. Thermoelectric cooler concepts and the limit for maximum cooling

    International Nuclear Information System (INIS)

    Seifert, W; Hinsche, N F; Pluschke, V

    2014-01-01

    The conventional analysis of a Peltier cooler approximates the material properties as independent of temperature using a constant properties model (CPM). Alternative concepts have been published by Bian and Shakouri (2006 Appl. Phys. Lett. 89 212101), Bian (et al 2007 Phys. Rev. B 75 245208) and Snyder et al (2012 Phys. Rev. B 86 045202). While Snyder's Thomson cooler concept results from a consideration of compatibility, the method of Bian et al focuses on the redistribution of heat. Thus, both approaches are based on different principles. In this paper we compare the new concepts to CPM and we reconsider the limit for maximum cooling. The results provide a new perspective on maximum cooling. (paper)

  9. Peyronie's Reconstruction for Maximum Length and Girth Gain: Geometrical Principles

    Directory of Open Access Journals (Sweden)

    Paulo H. Egydio

    2008-01-01

    Full Text Available Peyronie's disease has been associated with penile shortening and some degree of erectile dysfunction. Surgical reconstruction should be aimed at restoring a functional penis, that is, straightening the penis with enough rigidity for sexual intercourse. The procedure should be discussed preoperatively in terms of length and girth reconstruction in order to improve patient satisfaction. Tunical reconstruction for maximum penile length and girth restoration should be based on the maximum possible length of the dissected neurovascular bundle and the application of geometrical principles to define the precise site and size of the tunical incision and grafting procedure. As penile rectification and rigidity are required to achieve complete functional restoration of the penis, and 20 to 54% of patients experience associated erectile dysfunction, penile straightening alone may not be enough to provide complete functional restoration. Therefore, phosphodiesterase inhibitors, self-injection, or a penile prosthesis may need to be added in some cases.

  10. On the maximum Q in feedback controlled subignited plasmas

    International Nuclear Information System (INIS)

    Anderson, D.; Hamnen, H.; Lisak, M.

    1990-01-01

    High Q operation in a feedback controlled subignited fusion plasma requires the operating temperature to be close to the ignition temperature. In the present work we discuss technological and physical effects which may restrict this temperature difference. The investigation is based on a simplified, but still accurate, 0-D analytical analysis of the maximum Q of a subignited system. Particular emphasis is given to sawtooth oscillations, which complicate the interpretation of diagnostic neutron emission data in terms of plasma temperatures and may imply an inherent lower bound on the temperature deviation from the ignition point. The estimated maximum Q is found to be marginal (Q = 10-20) from the point of view of a fusion reactor. (authors)

  11. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    Energy Technology Data Exchange (ETDEWEB)

    Lowell, A. W.; Boggs, S. E; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C. [Space Sciences Laboratory, University of California, Berkeley (United States); Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y. [Institute of Astronomy, National Tsing Hua University, Taiwan (China); Jean, P.; Ballmoos, P. von [IRAP Toulouse (France); Lin, C.-H. [Institute of Physics, Academia Sinica, Taiwan (China); Amman, M. [Lawrence Berkeley National Laboratory (United States)

    2017-10-20

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
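
    The "standard approach" mentioned above, fitting a sinusoidal modulation curve to a histogram of azimuthal scattering angles, can be sketched on synthetic data as follows; the modulation fraction and polarization angle are invented, and the maximum likelihood method itself is not reproduced here.

```python
# Sketch of the standard (non-MLM) approach: fit a modulation curve to a
# histogram of azimuthal scattering angles.  Synthetic data only; the
# modulation amplitude and angle are invented.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
true_mu, true_phi0 = 0.3, 0.8                      # hypothetical modulation, pol. angle

# Draw azimuthal scattering angles from p(phi) ~ 1 - mu*cos(2*(phi - phi0))
phi = rng.uniform(0, 2 * np.pi, 200_000)
keep = rng.uniform(0, 1 + true_mu, phi.size) < (1 - true_mu * np.cos(2 * (phi - true_phi0)))
counts, edges = np.histogram(phi[keep], bins=36, range=(0, 2 * np.pi))
centers = 0.5 * (edges[:-1] + edges[1:])

def modulation(phi, amplitude, mu, phi0):
    return amplitude * (1 - mu * np.cos(2 * (phi - phi0)))

popt, _ = curve_fit(modulation, centers, counts, p0=[counts.mean(), 0.1, 0.0])
# Note: the sign of mu and the angle carry a 90-degree ambiguity.
print("fitted |mu| = %.3f, angle = %.3f rad" % (abs(popt[1]), popt[2] % np.pi))
```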

  12. Optimal Portfolio Strategy under Rolling Economic Maximum Drawdown Constraints

    Directory of Open Access Journals (Sweden)

    Xiaojian Yu

    2014-01-01

    Full Text Available This paper deals with the problem of optimal portfolio strategy under constraints on the rolling economic maximum drawdown. A more practical strategy is developed by using the rolling Sharpe ratio to compute the allocation proportion, in contrast to existing models. Besides, another novel strategy named the “REDP strategy” is proposed, which replaces the rolling economic drawdown of the portfolio with the rolling economic drawdown of the risky asset. Simulation tests show that the REDP strategy ensures that the portfolio satisfies the drawdown constraint and outperforms other strategies significantly. An empirical comparison of the performances of the different strategies is carried out using 23 years of monthly data on SPTR, DJUBS, and the 3-month T-bill. The investment cases of a single risky asset and of two risky assets are both studied in this paper. Empirical results indicate that the REDP strategy successfully controls the maximum drawdown within the given limit and performs best in both return and risk.
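
    The underlying maximum drawdown quantity, and its rolling variant, can be computed in a few lines; the wealth series and the 12-period window below are synthetic placeholders, not the SPTR/DJUBS data used in the paper.

```python
# Sketch of a (rolling) maximum drawdown computation on a wealth series; the
# series is synthetic and the 12-period window is an arbitrary choice.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
wealth = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.005, 0.04, 120))))

def max_drawdown(series: pd.Series) -> float:
    running_peak = series.cummax()
    drawdown = series / running_peak - 1.0       # <= 0 by construction
    return drawdown.min()

overall_mdd = max_drawdown(wealth)
rolling_mdd = wealth.rolling(window=12).apply(lambda w: max_drawdown(pd.Series(w)))

print(f"overall maximum drawdown: {overall_mdd:.1%}")
print(f"worst 12-period rolling drawdown: {rolling_mdd.min():.1%}")
```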

  13. Optimum detection for extracting maximum information from symmetric qubit sets

    International Nuclear Information System (INIS)

    Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.

    2002-01-01

    We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies have been predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that the detections with at least three outputs suffice for optimum extraction of information regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit

  14. Effect of current on the maximum possible reward.

    Science.gov (United States)

    Gallistel, C R; Leon, M; Waraczynski, M; Hanau, M S

    1991-12-01

    Using a 2-lever choice paradigm with concurrent variable interval schedules of reward, it was found that when pulse frequency is increased, the preference-determining rewarding effect of 0.5-s trains of brief cathodal pulses delivered to the medial forebrain bundle of the rat saturates (stops increasing) at values ranging from 200 to 631 pulses/s (pps). Raising the current lowered the saturation frequency, which confirms earlier, more extensive findings showing that the rewarding effect of short trains saturates at pulse frequencies that vary from less than 100 pps to more than 800 pps, depending on the current. It was also found that the maximum possible reward--the magnitude of the reward at or beyond the saturation pulse frequency--increases with increasing current. Thus, increasing the current reduces the saturation frequency but increases the subjective magnitude of the maximum possible reward.

  15. Jarzynski equality in the context of maximum path entropy

    Science.gov (United States)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy - also known as Maximum Caliber principle -, this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy differences between two equilibrium thermodynamic states with the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality will be performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social systems, financial and ecological systems.
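
    For reference, Jarzynski's equality in its standard form (as usually written; the record above derives it from the maximum path entropy formalism):

```latex
% Jarzynski's equality: the exponential average of the work W over repeated
% realizations of a nonequilibrium protocol equals the equilibrium
% free-energy difference between the end states.
\[
  \left\langle e^{-\beta W} \right\rangle = e^{-\beta \,\Delta F},
  \qquad \beta = \frac{1}{k_B T}
\]
```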

  16. Maximum mass-particle velocities in Kantor's information mechanics

    International Nuclear Information System (INIS)

    Sverdlik, D.I.

    1989-01-01

    Kantor's information mechanics links phenomena previously regarded as not treatable by a single theory. It is used here to calculate the maximum velocities υ_m of single particles. For the electron, υ_m/c ∼ 1 − 1.253814 × 10^(−77). The maximum υ_m corresponds to υ_m/c ∼ 1 − 1.097864 × 10^(−122) for a single mass particle with a rest mass of 3.078496 × 10^(−5) g. This is the fastest that matter can move. Either information mechanics or classical mechanics can be used to show that υ_m is less for heavier particles. That υ_m is less for lighter particles can be deduced from an information mechanics argument alone

  17. Maximum field capability of energy saver superconducting magnets

    International Nuclear Information System (INIS)

    Turkot, F.; Cooper, W.E.; Hanft, R.; McInturff, A.

    1983-01-01

    At an energy of 1 TeV the superconducting cable in the Energy Saver dipole magnets will be operating at ca. 96% of its nominal short sample limit; the corresponding number in the quadrupole magnets will be 81%. All magnets for the Saver are individually tested for maximum current capability under two modes of operation; some 900 dipoles and 275 quadrupoles have now been measured. The dipole winding is composed of four individually wound coils which in general come from four different reels of cable. As part of the magnet fabrication quality control a short piece of cable from both ends of each reel has its critical current measured at 5T and 4.3K. In this paper the authors describe and present the statistical results of the maximum field tests (including quench and cycle) on Saver dipole and quadrupole magnets and explore the correlation of these tests with cable critical current

  18. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.

  19. Optimal Control of Polymer Flooding Based on Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

    Full Text Available Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR. In this paper, an optimal control model of distributed parameter systems (DPSs for polymer injection strategies is established, which involves the performance index as maximum of the profit, the governing equations as the fluid flow equations of polymer flooding, and the inequality constraint as the polymer concentration limitation. To cope with the optimal control problem (OCP of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin’s weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.

  20. Maximum heat flux in boiling in a large volume

    International Nuclear Information System (INIS)

    Bergmans, Dzh.

    1976-01-01

    Relationships are derived for the maximum heat flux q_max without relying on the assumptions of a critical vapor velocity corresponding to zero growth rate and of a planar interface. A Helmholtz instability analysis of the vapor column has been made to this end. The results of this examination have been used to find the maximum heat flux for spherical, cylindrical and flat plate heaters. The conventional hydrodynamic theory was found to be incapable of producing a satisfactory explanation of q_max for small heaters. The occurrence of q_max in the present case can be explained by inadequate removal of vapor from the heater (by the force of gravity for cylindrical heaters and by surface tension for spherical ones). In the case of the flat plate heater, the q_max value can be explained with the help of the hydrodynamic theory

  1. A Maximum Principle for SDEs of Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Daniel, E-mail: danieand@math.kth.se; Djehiche, Boualem, E-mail: boualem@math.kth.se [Royal Institute of Technology, Department of Mathematics (Sweden)

    2011-06-15

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  2. A Maximum Principle for SDEs of Mean-Field Type

    International Nuclear Information System (INIS)

    Andersson, Daniel; Djehiche, Boualem

    2011-01-01

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  3. Rumor Identification with Maximum Entropy in MicroNet

    Directory of Open Access Journals (Sweden)

    Suisheng Yu

    2017-01-01

    Full Text Available The widely used applications of Microblog, WeChat, and other social networking platforms (that we call MicroNet) shorten the period of information dissemination and expand the range of information dissemination, which allows rumors to cause greater harm and have more influence. A hot topic in the information dissemination field is how to identify and block rumors. Based on the maximum entropy model, this paper constructs a recognition mechanism for rumor information in the micronetwork environment. First, based on information entropy theory, we obtained the characteristics of rumor information using the maximum entropy model. Next, we optimized the original classifier training set and the feature function to divide the information into rumors and nonrumors. Finally, the experimental simulation results show that the rumor identification results of this method are better than those of the original classifier and other related classification methods.
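
    A maximum entropy classifier is equivalent to (multinomial) logistic regression, so the general shape of such a rumor detector can be sketched with scikit-learn; the tiny training set below is invented and the feature design of the record above is not reproduced.

```python
# Generic sketch of a maximum-entropy (logistic regression) text classifier
# for rumor vs. non-rumor posts; the training set is invented and stands in
# for real labelled microblog data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "breaking!! forward this now, they are hiding the truth",
    "unconfirmed report says the bridge has collapsed, share before deleted",
    "official statement released by the city government this afternoon",
    "the match ended 2-1, highlights are on the broadcaster's site",
]
labels = ["rumor", "rumor", "nonrumor", "nonrumor"]

maxent = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                       LogisticRegression(max_iter=1000))
maxent.fit(posts, labels)
print(maxent.predict(["share this now before they delete it!!"]))
```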

  4. Maximum Power Point Tracking Based on Sliding Mode Control

    Directory of Open Access Journals (Sweden)

    Nimrod Vázquez

    2015-01-01

    Full Text Available Solar panels, which have become a good choice, are used to generate and supply electricity in commercial and residential applications. The generated power starts with the solar cells, which have a complex relationship between solar irradiation, temperature, and output power. For this reason, tracking of the maximum power point is required. Traditionally, this has been done by considering only the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper the voltage, current, and temperature in the PV system are considered to be part of a sliding surface for the proposed maximum power point tracking; this means a sliding mode controller is applied. The obtained results gave a good dynamic response, in contrast to traditional schemes, which are only based on computational algorithms. A traditional MPPT-based algorithm was added in order to assure a low steady-state error.

  5. Intense electron and ion beams

    CERN Document Server

    Molokovsky, Sergey Ivanovich

    2005-01-01

    Intense Ion and Electron Beams treats intense charged-particle beams used in vacuum tubes, particle beam technology and experimental installations such as free electron lasers and accelerators. It addresses, among other things, the physics and basic theory of intense charged-particle beams; computation and design of charged-particle guns and focusing systems; multiple-beam charged-particle systems; and experimental methods for investigating intense particle beams. The coverage is carefully balanced between the physics of intense charged-particle beams and the design of optical systems for their formation and focusing. It can be recommended to all scientists studying or applying vacuum electronics and charged-particle beam technology, including students, engineers and researchers.

  6. Macroseismic intensity attenuation in Iran

    Science.gov (United States)

    Yaghmaei-Sabegh, Saman

    2018-01-01

    Macroseismic intensity data play an important role in seismic hazard analysis as well as in the development of reliable earthquake loss models. This paper presents a physical-based model to predict macroseismic intensity attenuation based on 560 intensity data points obtained in Iran in the period 1975-2013. The geometric spreading and energy absorption of seismic waves have been considered in the proposed model. The proposed easy-to-implement relation describes the intensity simply as a function of moment magnitude, source-to-site distance and focal depth. The prediction capability of the proposed model is assessed by means of residual analysis. Prediction results have been compared with those of other intensity prediction models for Italy, Turkey, Iran and central Asia. The results indicate a higher attenuation rate for the study area at distances of less than 70 km.

  7. Maximum credible accident analysis for TR-2 reactor conceptual design

    International Nuclear Information System (INIS)

    Manopulo, E.

    1981-01-01

A new 5 MW reactor, TR-2, designed in cooperation with CEN/GRENOBLE, is under construction in the open pool of the 1 MW TR-1 reactor set up by AMF Atomics at the Cekmece Nuclear Research and Training Center. In this report the fission product inventory and the doses released after the maximum credible accident have been studied. The diffusion of gaseous fission products to the environment and the potential radiation risks to the population have been evaluated

  8. Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains

    Directory of Open Access Journals (Sweden)

    Erik Van der Straeten

    2009-11-01

    Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
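As a toy illustration of the general idea (not the paper's spin-chain examples), the sketch below estimates the transition probabilities of a two-state reversible Markov chain by maximizing its entropy rate subject to detailed balance with an assumed stationary distribution; both the distribution and the use of entropy rate as the objective are assumptions made for this demo.

```python
# Illustrative sketch only: maximum-entropy style estimation of a two-state
# reversible chain. Detailed balance with an assumed stationary distribution pi
# fixes p10 once p01 is chosen; p01 is picked to maximize the entropy rate.
import numpy as np
from scipy.optimize import minimize_scalar

pi = np.array([0.7, 0.3])                      # assumed stationary distribution

def entropy_rate(p01):
    p10 = pi[0] * p01 / pi[1]                  # detailed balance: pi0*p01 = pi1*p10
    P = np.array([[1.0 - p01, p01], [p10, 1.0 - p10]])
    return -sum(pi[i] * P[i, j] * np.log(P[i, j])
                for i in range(2) for j in range(2))

# Keep both off-diagonal probabilities strictly inside (0, 1).
upper = min(1.0, pi[1] / pi[0]) - 1e-6
res = minimize_scalar(lambda x: -entropy_rate(x), bounds=(1e-6, upper),
                      method="bounded")
print("p01 =", round(res.x, 4), "entropy rate =", round(entropy_rate(res.x), 4))
```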

  9. Precise charge density studies by maximum entropy method

    CERN Document Server

    Takata, M

    2003-01-01

For the production, research and development of nanomaterials, structural information is indispensable. Recently, a sophisticated analytical method based on information theory, the Maximum Entropy Method (MEM), using synchrotron radiation powder data, has been successfully applied to determine precise charge densities of metallofullerenes and nanochannel microporous compounds. The results revealed various endohedral natures of metallofullerenes and the one-dimensional array formation of adsorbed gas molecules in nanochannel microporous compounds. The concept of MEM analysis is also described briefly. (author)

  10. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than baseline and represent the highest F-score for the fine-grained English All-Words subtask.

  11. Bayesian interpretation of Generalized empirical likelihood by maximum entropy

    OpenAIRE

    Rochet , Paul

    2011-01-01

    We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...

  12. The calculation of maximum permissible exposure levels for laser radiation

    International Nuclear Information System (INIS)

    Tozer, B.A.

    1979-01-01

    The maximum permissible exposure data of the revised standard BS 4803 are presented as a set of decision charts which ensure that the user automatically takes into account such details as pulse length and pulse pattern, limiting angular subtense, combinations of multiple wavelength and/or multiple pulse lengths, etc. The two decision charts given are for the calculation of radiation hazards to skin and eye respectively. (author)

  13. The discrete maximum principle for Galerkin solutions of elliptic problems

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2012-01-01

    Roč. 10, č. 1 (2012), s. 25-43 ISSN 1895-1074 R&D Projects: GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : discrete maximum principle * monotone methods * Galerkin solution Subject RIV: BA - General Mathematics Impact factor: 0.405, year: 2012 http://www.springerlink.com/content/x73624wm23x4wj26

  14. ON A GENERALIZATION OF THE MAXIMUM ENTROPY THEOREM OF BURG

    Directory of Open Access Journals (Sweden)

    JOSÉ MARCANO

    2017-01-01

Full Text Available In this article we introduce some matrix manipulations that allow us to obtain a version of the original Christoffel-Darboux formula, which is of interest in many applications of linear algebra. Using these matrix developments and Jensen's inequality, we obtain the main result of this proposal, which is a generalization of Burg's maximum entropy theorem to multivariate processes.

  15. Determing and monitoring of maximum permissible power for HWRR-3

    International Nuclear Information System (INIS)

    Jia Zhanli; Xiao Shigang; Jin Huajin; Lu Changshen

    1987-01-01

The operating power of a reactor is an important parameter to be monitored. This report briefly describes the determination and monitoring of the maximum permissible power for HWRR-3. The calculation method is described, and the results of the calculation and the error analysis are also given. On-line calculation and real-time monitoring have been implemented at the heavy water reactor, providing real-time and reliable supervision. This makes operation convenient and increases reliability

  16. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  17. A maximum modulus theorem for the Oseen problem

    Czech Academy of Sciences Publication Activity Database

    Kračmar, S.; Medková, Dagmar; Nečasová, Šárka; Varnhorn, W.

    2013-01-01

    Roč. 192, č. 6 (2013), s. 1059-1076 ISSN 0373-3114 R&D Projects: GA ČR(CZ) GAP201/11/1304; GA MŠk LC06052 Institutional research plan: CEZ:AV0Z10190503 Keywords : Oseen problem * maximum modulus theorem * Oseen potentials Subject RIV: BA - General Mathematics Impact factor: 0.909, year: 2013 http://link.springer.com/article/10.1007%2Fs10231-012-0258-x

  18. Seeking the epoch of maximum luminosity for dusty quasars

    International Nuclear Information System (INIS)

    Vardanyan, Valeri; Weedman, Daniel; Sargsyan, Lusine

    2014-01-01

Infrared luminosities νLν(7.8 μm) arising from dust reradiation are determined for Sloan Digital Sky Survey (SDSS) quasars with 1.4 < z < 5. The infrared luminosities show no maximum at any redshift z < 5, reaching a plateau for z ≳ 3 with maximum luminosity νLν(7.8 μm) ≳ 10^47 erg s^-1; luminosity functions show one quasar Gpc^-3 having νLν(7.8 μm) > 10^46.6 erg s^-1 for all 2 < z < 5, so an epoch of maximum luminosity has not yet been identified at any redshift below 5. The most ultraviolet-luminous quasars, defined by rest-frame νLν(0.25 μm), have the largest values of the ratio νLν(0.25 μm)/νLν(7.8 μm), with a maximum ratio at z = 2.9. From these results, we conclude that the quasars most luminous in the ultraviolet have the smallest dust content and appear luminous primarily because of lessened extinction. Observed ultraviolet/infrared luminosity ratios are used to define 'obscured' quasars as those having >5 mag of ultraviolet extinction. We present a new summary of obscured quasars discovered with the Spitzer Infrared Spectrograph and determine the infrared luminosity function of these obscured quasars at z ∼ 2.1. This is compared with infrared luminosity functions of optically discovered, unobscured quasars in the SDSS and in the AGN and Galaxy Evolution Survey. The comparison indicates comparable numbers of obscured and unobscured quasars at z ∼ 2.1, with a possible excess of obscured quasars at fainter luminosities.

  19. Industry guidelines for the calibration of maximum anemometers

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, B.H. [AWS Scientific, Inc., Albany, NY (United States)

    1996-12-31

    The purpose of this paper is to report on a framework of guidelines for the calibration of the Maximum Type 40 anemometer. This anemometer model is the wind speed sensor of choice in the majority of wind resource assessment programs in the U.S. These guidelines were established by the Utility Wind Resource Assessment Program. In addition to providing guidelines for anemometers, the appropriate use of non-calibrated anemometers is also discussed. 14 refs., 1 tab.

  20. Max '91: Flare research at the next solar maximum

    Science.gov (United States)

    Dennis, Brian; Canfield, Richard; Bruner, Marilyn; Emslie, Gordon; Hildner, Ernest; Hudson, Hugh; Hurford, Gordon; Lin, Robert; Novick, Robert; Tarbell, Ted

    1988-01-01

    To address the central scientific questions surrounding solar flares, coordinated observations of electromagnetic radiation and energetic particles must be made from spacecraft, balloons, rockets, and ground-based observatories. A program to enhance capabilities in these areas in preparation for the next solar maximum in 1991 is recommended. The major scientific issues are described, and required observations and coordination of observations and analyses are detailed. A program plan and conceptual budgets are provided.

  1. Max '91: flare research at the next solar maximum

    International Nuclear Information System (INIS)

    Dennis, B.; Canfield, R.; Bruner, M.

    1988-01-01

    To address the central scientific questions surrounding solar flares, coordinated observations of electromagnetic radiation and energetic particles must be made from spacecraft, balloons, rockets, and ground-based observatories. A program to enhance capabilities in these areas in preparation for the next solar maximum in 1991 is recommended. The major scientific issues are described, and required observations and coordination of observations and analyses are detailed. A program plan and conceptual budgets are provided

  2. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  3. Maximum total organic carbon limit for DWPF melter feed

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed, which can be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit determined in this study was about 24,000 ppm on an aqueous slurry basis. At TOC levels below this, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650°C and 220 lb/hr, respectively. Appropriate interlocks should discontinue the feeding when one or both of these conditions are not met. Both the magnitude and the duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the necessary calculations to determine the TOC limit

  4. The Maximum Entropy Principle and the Modern Portfolio Theory

    Directory of Open Access Journals (Sweden)

    Ailton Cassetari

    2003-12-01

Full Text Available In this work, a capital allocation methodology based on the Principle of Maximum Entropy was developed. Shannon's entropy is used as the allocation measure, and its connections with Modern Portfolio Theory are also discussed. In particular, the methodology is tested in a systematic comparison with: (1) the mean-variance (Markowitz) approach and (2) the mean-VaR approach (capital allocations based on the Value at Risk concept). In principle, such comparisons show the plausibility and effectiveness of the developed method.
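As a hedged, minimal sketch of one way such an entropy-based allocation can be set up (not necessarily the author's formulation), the code below chooses portfolio weights that maximize Shannon entropy subject to full investment and a target expected return; the expected returns and the target are made-up numbers.

```python
# Entropy-maximizing allocation sketch under assumed inputs.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.05, 0.10])   # assumed expected asset returns
target = 0.09                              # assumed target portfolio return

def neg_entropy(w, eps=1e-12):
    return np.sum(w * np.log(w + eps))     # minimizing this maximizes Shannon entropy

cons = (
    {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},     # full investment
    {"type": "eq", "fun": lambda w: w @ mu - target},      # hit the return target
)
bounds = [(0.0, 1.0)] * len(mu)
w0 = np.full(len(mu), 1.0 / len(mu))

res = minimize(neg_entropy, w0, bounds=bounds, constraints=cons, method="SLSQP")
print(np.round(res.x, 4), "expected return:", round(float(res.x @ mu), 4))
```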

  5. Statistical Bias in Maximum Likelihood Estimators of Item Parameters.

    Science.gov (United States)

    1982-04-01

Examines the statistical bias in the maximum likelihood estimators of item parameters.

  6. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  7. Investigation on maximum transition temperature of phonon mediated superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Fusui, L; Yi, S; Yinlong, S [Physics Department, Beijing University (CN)

    1989-05-01

Three model effective phonon spectra are proposed to obtain plots of Tc versus ω and λ versus ω. It can be concluded that there is no maximum limit of Tc in phonon-mediated superconductivity for reasonable values of λ. The importance of the high-frequency LO phonon is also emphasized. Some discussions on high Tc are given.

  8. Study of forecasting maximum demand of electric power

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, B.C.; Hwang, Y.J. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1997-08-01

As far as the past performance of power supply and demand in Korea is concerned, one of the striking phenomena is the repeated periodic surplus and shortage of power generation facilities. Precise estimation and prediction of power demand is the basis for establishing a supply plan and carrying out the right policy, since facilities investment in the power generation industry requires a tremendous amount of capital and a long construction period. The purpose of this study is to develop a model for the inference and prediction of a more precise maximum demand against this background. The non-parametric model considered in this study pays attention to meteorological factors such as temperature and humidity, which do not have a simple proportional relationship with maximum power demand but affect it through complicated nonlinear interactions. The non-parametric inference technique introduces these meteorological effects without imposing any prior assumption on the interaction of temperature and humidity. According to the analysis results, the non-parametric model that introduces the number of tropical nights, which captures the persistence of the meteorological effect, has better predictive power than the linear model. The non-parametric model that considers both the number of tropical nights and the number of cooling days at the same time is the preferred model for predicting maximum demand. 7 refs., 6 figs., 9 tabs.
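To illustrate the flavour of such a non-parametric approach (an assumption-laden sketch, not the study's model), the code below performs Nadaraya-Watson kernel regression of peak demand on temperature, humidity and a tropical-nights count; all data values and bandwidths are invented.

```python
import numpy as np

# Rows: [max temperature (deg C), relative humidity (%), tropical nights so far]
X = np.array([[30.0, 70.0, 0.0],
              [33.0, 75.0, 2.0],
              [35.0, 80.0, 4.0],
              [28.0, 65.0, 0.0],
              [34.0, 78.0, 3.0]])
y = np.array([52.0, 58.0, 63.0, 49.0, 61.0])     # observed peak demand (GW, toy values)

bandwidth = np.array([2.0, 5.0, 1.5])            # one bandwidth per feature (assumed)

def kernel_predict(x_new):
    d2 = np.sum(((X - x_new) / bandwidth) ** 2, axis=1)
    w = np.exp(-0.5 * d2)                        # Gaussian kernel weights
    return float(np.sum(w * y) / np.sum(w))      # weighted average of observed demands

print(round(kernel_predict(np.array([34.0, 77.0, 3.0])), 2))
```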

  9. Maximum Work of Free-Piston Stirling Engine Generators

    Science.gov (United States)

    Kojima, Shinji

    2017-04-01

    Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.

  10. Erich Regener and the ionisation maximum of the atmosphere

    Science.gov (United States)

    Carlson, P.; Watson, A. A.

    2014-12-01

    In the 1930s the German physicist Erich Regener (1881-1955) did important work on the measurement of the rate of production of ionisation deep under water and in the atmosphere. Along with one of his students, Georg Pfotzer, he discovered the altitude at which the production of ionisation in the atmosphere reaches a maximum, often, but misleadingly, called the Pfotzer maximum. Regener was one of the first to estimate the energy density of cosmic rays, an estimate that was used by Baade and Zwicky to bolster their postulate that supernovae might be their source. Yet Regener's name is less recognised by present-day cosmic ray physicists than it should be, largely because in 1937 he was forced to take early retirement by the National Socialists as his wife had Jewish ancestors. In this paper we briefly review his work on cosmic rays and recommend an alternative naming of the ionisation maximum. The influence that Regener had on the field through his son, his son-in-law, his grandsons and his students, and through his links with Rutherford's group in Cambridge, is discussed in an appendix. Regener was nominated for the Nobel Prize in Physics by Schrödinger in 1938. He died in 1955 at the age of 73.

  11. Mid-depth temperature maximum in an estuarine lake

    Science.gov (United States)

    Stepanenko, V. M.; Repina, I. A.; Artamonov, A. Yu; Gorin, S. L.; Lykossov, V. N.; Kulyamin, D. V.

    2018-03-01

The mid-depth temperature maximum (TeM) was measured in the estuarine Bol'shoi Vilyui Lake (Kamchatka peninsula, Russia) in summer 2015. We applied the 1D k-ε model LAKE to the case and found that it successfully simulates the phenomenon. We argue that the main prerequisite for mid-depth TeM development is a salinity increase below the freshwater mixed layer, sharp enough that the temperature increase with depth does not cause convective mixing or double diffusion there. Given that this condition is satisfied, the TeM magnitude is controlled by physical factors which we identify as: radiation absorption below the mixed layer, mixed-layer temperature dynamics, vertical heat conduction and water-sediment heat exchange. In addition to these, we formulate the mechanism of temperature maximum 'pumping', resulting from the phase shift between the diurnal cycles of mixed-layer depth and temperature maximum magnitude. Based on the LAKE model results we quantify the contribution of the above-listed mechanisms and find their individual significance highly sensitive to water turbidity. Relying on the physical mechanisms identified, we define the environmental conditions favouring summertime TeM development in salinity-stratified lakes as a shallow mixed layer, weak wind and cloudless weather. We exemplify the effect of mixed-layer depth on TeM with a set of selected lakes.

  12. An Efficient Algorithm for the Maximum Distance Problem

    Directory of Open Access Journals (Sweden)

    Gabrielle Assunta Grün

    2001-12-01

Full Text Available Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central to many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events, and they begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely the problem of finding the maximum "distance" between two vertices in a chain; this problem arises in real-world applications such as process control and crew scheduling. We describe an O(n) time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
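In the simplest reading of the chain case (ignoring the metagraph machinery), counting strict edges between two chain vertices can be done with an O(n) prefix-sum preprocessing step and O(1) queries, as in this hypothetical sketch; the edge labels are illustrative and this is not the paper's actual algorithm.

```python
# Sketch of the plain-chain case: O(n) preprocessing, O(1) queries
# for the number of strict (<) edges between two vertices.
def preprocess(edge_labels):
    """edge_labels[i] is '<' or '<=' for the edge between vertex i and i+1."""
    prefix = [0]
    for lab in edge_labels:
        prefix.append(prefix[-1] + (1 if lab == '<' else 0))
    return prefix

def strict_edge_count(prefix, i, j):
    """Number of strict edges on the chain between vertices i <= j."""
    return prefix[j] - prefix[i]

prefix = preprocess(['<', '<=', '<', '<', '<='])
print(strict_edge_count(prefix, 1, 5))   # strict edges between vertex 1 and 5 -> 2
```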

  13. Maximum Aerobic Capacity of Underground Coal Miners in India

    Directory of Open Access Journals (Sweden)

    Ratnadeep Saha

    2011-01-01

Full Text Available Miners' fitness was assessed by determining maximum aerobic capacity with an indirect method following a standard step-test protocol before the miners went down the mine, taking into consideration the heart rates (telemetric recording) and oxygen consumption (Oxylog-II) of the subjects during exercise at different working rates. Maximal heart rate was derived as 220−age. Coal miners showed a maximum aerobic capacity within a range of 35–38.3 mL/kg/min. The oldest miners (50–59 yrs) had the lowest maximal oxygen uptake (34.2±3.38 mL/kg/min) compared to the youngest group (20–29 yrs; 42.4±2.03 mL/kg/min). Maximum aerobic capacity was negatively correlated with age (r=−0.55 and −0.33 for the younger and older groups, respectively) and directly associated with the body weight of the subjects (r=0.57–0.68, P≤0.001). Carriers showed the highest cardiorespiratory capacity compared to other miners. Indian miners' VO2max was found to be lower than that of their mining counterparts abroad and of various other non-mining occupational groups in India.

  14. Exact parallel maximum clique algorithm for general and protein graphs.

    Science.gov (United States)

    Depolli, Matjaž; Konc, Janez; Rozman, Kati; Trobec, Roman; Janežič, Dušanka

    2013-09-23

A new exact parallel maximum clique algorithm, MaxCliquePara, which finds the maximum clique (the fully connected subgraph) in undirected general and protein graphs, is presented. First, a new branch-and-bound algorithm for finding a maximum clique on a single computer core is implemented, building on ideas presented in two published state-of-the-art sequential algorithms. The new sequential MaxCliqueSeq algorithm is faster than the reference algorithms on both the DIMACS benchmark graphs and the protein-derived product graphs used for protein structural comparisons. Next, the MaxCliqueSeq algorithm is parallelized by splitting the branch-and-bound search tree across multiple cores, resulting in the MaxCliquePara algorithm. The ability to exploit all cores efficiently makes the new parallel MaxCliquePara algorithm markedly superior to the other tested algorithms. On a 12-core computer, the parallelization provides up to 2 orders of magnitude faster execution on the large DIMACS benchmark graphs and up to an order of magnitude faster execution on protein product graphs. The algorithms are freely accessible on http://commsys.ijs.si/~matjaz/maxclique.
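For context, the sketch below is a compact sequential branch-and-bound maximum-clique search on a toy graph. It is not MaxCliquePara or MaxCliqueSeq, has no parallelization or colouring-based bound, and is only meant to show the kind of search tree such algorithms split across cores.

```python
# Simple sequential branch-and-bound maximum-clique search (toy graphs only).
def max_clique(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    best = []

    def expand(clique, candidates):
        nonlocal best
        if len(clique) > len(best):
            best = clique[:]                      # record the largest clique so far
        if len(clique) + len(candidates) <= len(best):
            return                                # bound: this branch cannot beat best
        for v in sorted(candidates):
            expand(clique + [v], candidates & adj[v])
            candidates = candidates - {v}         # avoid revisiting subsets containing v

    expand([], set(adj))
    return best

# Triangle {1, 2, 3} with a pendant vertex 4 attached to vertex 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(max_clique(adj))   # -> [1, 2, 3]
```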

  15. Maximum-confidence discrimination among symmetric qudit states

    International Nuclear Information System (INIS)

    Jimenez, O.; Solis-Prosser, M. A.; Delgado, A.; Neves, L.

    2011-01-01

    We study the maximum-confidence (MC) measurement strategy for discriminating among nonorthogonal symmetric qudit states. Restricting to linearly dependent and equally likely pure states, we find the optimal positive operator valued measure (POVM) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results. The physical realization of this POVM is completely determined and it is shown that after an inconclusive outcome, the input states may be mapped into a new set of equiprobable symmetric states, restricted, however, to a subspace of the original qudit Hilbert space. By applying the MC measurement again onto this new set, we can still gain some information about the input states, although with less confidence than before. This leads us to introduce the concept of sequential maximum-confidence (SMC) measurements, where the optimized MC strategy is iterated in as many stages as allowed by the input set, until no further information can be extracted from an inconclusive result. Within each stage of this measurement our confidence in identifying the input states is the highest possible, although it decreases from one stage to the next. In addition, the more stages we accomplish within the maximum allowed, the higher will be the probability of correct identification. We will discuss an explicit example of the optimal SMC measurement applied in the discrimination among four symmetric qutrit states and propose an optical network to implement it.

  16. Maximum nondiffracting propagation distance of aperture-truncated Airy beams

    Science.gov (United States)

    Chu, Xingchun; Zhao, Shanghong; Fang, Yingwu

    2018-05-01

Airy beams have attracted the attention of many researchers due to their non-diffracting, self-healing and transverse accelerating properties. A key issue in research on Airy beams and their applications is how to evaluate their nondiffracting propagation distance. In this paper, the critical transverse extent of physically realizable Airy beams is analyzed using the local spatial frequency methodology. The maximum nondiffracting propagation distance of aperture-truncated Airy beams is formulated and analyzed based on their local spatial frequency. The validity of the formula is verified by comparing the maximum nondiffracting propagation distances of an aperture-truncated ideal Airy beam, an aperture-truncated exponentially decaying Airy beam, and an exponentially decaying Airy beam. Results show that the formula can be used to accurately evaluate the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam. It can therefore guide the selection of appropriate parameters to generate Airy beams with a long nondiffracting propagation distance, which have potential applications in the fields of laser weapons and optical communications.

  17. Extending the maximum operation time of the MNSR reactor.

    Science.gov (United States)

    Dawahra, S; Khattab, K; Saba, G

    2016-09-01

An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR) and thereby enhance the utilization of the reactor has been tested using the MCNP4C code. This modification consisted of manually inserting into each of the reactor's inner irradiation tubes a chain of three polyethylene-connected containers filled with water. The total height of the chain was 11.5 cm. Replacement of the existing cadmium absorber with a 10B absorber was needed as well. The rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρex) of the old and modified cores were 3.954 and 6.241 mk, respectively. The maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. Therefore, a 139% increase in the maximum reactor operation time was obtained for the modified core. This increase enhances the utilization of the MNSR for long irradiations of unknown samples using the NAA technique and increases the amount of radioisotope production in the reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
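As background for what a parsimony search must repeatedly evaluate, here is a small sketch of standard Fitch small-parsimony scoring on a fixed toy tree; it is not PTree's pattern-based method, and the tree and sequences are invented for illustration.

```python
# Fitch small-parsimony scoring: minimum number of mutations needed to
# explain the leaf sequences on a fixed, rooted binary tree.
def fitch_score(tree, leaf_seqs):
    """tree: nested 2-tuples with leaf-name strings at the tips."""
    seq_len = len(next(iter(leaf_seqs.values())))
    total = 0

    def post_order(node, pos):
        nonlocal total
        if isinstance(node, str):                 # leaf: its observed state at this site
            return {leaf_seqs[node][pos]}
        left = post_order(node[0], pos)
        right = post_order(node[1], pos)
        common = left & right
        if common:
            return common
        total += 1                                # a mutation is unavoidable at this node
        return left | right

    for pos in range(seq_len):                    # score each site independently
        post_order(tree, pos)
    return total

# Toy example: tree ((A,B),(C,D)) with two-character sequences.
tree = (("A", "B"), ("C", "D"))
seqs = {"A": "AC", "B": "AT", "C": "GT", "D": "GT"}
print(fitch_score(tree, seqs))   # expected parsimony score: 2
```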

  19. Cases in which ancestral maximum likelihood will be confusingly misleading.

    Science.gov (United States)

    Handelman, Tomer; Chor, Benny

    2017-05-07

Ancestral maximum likelihood (AML) is a phylogenetic tree reconstruction criterion that "lies between" maximum parsimony (MP) and maximum likelihood (ML). ML has long been known to be statistically consistent. On the other hand, Felsenstein (1978) showed that MP is statistically inconsistent, and even positively misleading: there are cases where the parsimony criterion, applied to data generated according to one tree topology, will be optimized on a different tree topology. The question of whether AML is statistically consistent or not has been open for a long time. Mossel et al. (2009) showed that AML can "shrink" short tree edges, resulting in a star tree with no internal resolution, which yields a better AML score than the original (resolved) model. This result implies that AML is statistically inconsistent, but not that it is positively misleading, because the star tree is compatible with any other topology. We show that AML is confusingly misleading: for some simple four-taxon (resolved) tree, the ancestral likelihood optimization criterion is maximized on an incorrect (resolved) tree topology, as well as on a star tree (both with specific edge lengths), while the tree with the original, correct topology has strictly lower ancestral likelihood. Interestingly, the two short edges in the incorrect, resolved tree topology are of length zero and are not adjacent, so this resolved tree is in fact a simple path. While for MP the underlying phenomenon can be described as long edge attraction, it turns out that here we have long edge repulsion. Copyright © 2017. Published by Elsevier Ltd.

  20. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.