Changes in the spatial scale of Beijing UHI and urban development
YU Shuqiu; BIAN Lingen; LIN Xuechun
2005-01-01
The seasonal and interannual variations of the Beijing urban heat island (UHI) are investigated using temperature data from 1960 to 2000 at 20 meteorological stations in the Beijing region; the relationship between the intensity and spatial scale of the UHI and Beijing urbanization indices is then analyzed and discussed. The main conclusions are as follows. First, the Beijing UHI shows obvious seasonal variation: it is strongest in winter, intermediate in spring and autumn, and weakest in summer. The seasonal variation of the UHI occurs mainly in the urban area. The UHI intensity at the center of Beijing exceeds 0.8℃ in winter but is only 0.5℃ in summer. Second, the intensity of the Beijing UHI exhibits a clear interannual warming trend, with a mean growth rate (MGR) of 0.3088℃/10 a. The MGR of the UHI is largest in winter, intermediate in spring and autumn, and smallest in summer, and the urban temperature increase makes the major contribution to the growth of UHI intensity. Third, since the Reform and Opening-up, the urbanization indices have grown by several tens or even a hundred times, the intensity of the UHI has increased dramatically, and its spatial scale has also expanded distinctly along with the expansion of urban architectural complexes. Fourth, the interannual variation of the urbanization indices closely resembles that of UHI intensity, and their linear correlation coefficients are significant at better than the 0.001 confidence level.
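The mean growth rate quoted above is a decadal linear-trend statistic. A minimal sketch of how such a rate is computed from an annual series by ordinary least squares (the data below are synthetic illustration values, not the Beijing record):

```python
# Least-squares estimate of a decadal trend, the kind of "mean growth
# rate" (MGR, degrees C per 10 years) used in the abstract above.
# The series here is synthetic, rising by exactly 0.03 deg C per year.

def decadal_trend(years, values):
    """Return the linear trend in units of value-change per 10 years."""
    n = len(years)
    mean_y = sum(years) / n
    mean_v = sum(values) / n
    num = sum((y - mean_y) * (v - mean_v) for y, v in zip(years, values))
    den = sum((y - mean_y) ** 2 for y in years)
    return (num / den) * 10.0  # slope per year -> slope per decade

years = list(range(1960, 2001))
uhi = [0.5 + 0.03 * (y - 1960) for y in years]
print(round(decadal_trend(years, uhi), 3))  # -> 0.3
```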
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
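The base operation that the pyramid scheme above progressively refines is the maximum intensity projection itself: projecting a 3-D volume to 2-D by taking the maximum voxel value along the viewing axis. A minimal plain-Python sketch on a tiny synthetic volume:

```python
# Maximum intensity projection (MIP): for each pixel of the output image,
# take the maximum voxel value along the depth (viewing) axis.

def mip(volume):
    """volume: list of 2-D slices (depth x rows x cols); project along depth."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(sl[r][c] for sl in volume) for c in range(cols)]
            for r in range(rows)]

vol = [
    [[1, 2], [3, 4]],
    [[5, 0], [1, 9]],
]
print(mip(vol))  # -> [[5, 2], [3, 9]]
```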
Maximum host survival at intermediate parasite infection intensities.
Martin Stjernman
BACKGROUND: Although parasitism has been acknowledged as an important selective force in the evolution of host life histories, studies of the fitness effects of parasites in wild populations have yielded mixed results. One reason for this may be that most studies only test for a linear relationship between infection intensity and host fitness. If resistance to parasites is costly, however, fitness may be reduced both for hosts with low infection intensities (cost of resistance) and for hosts with high infection intensities (cost of parasitism), such that individuals with intermediate infection intensities have the highest fitness. Under this scenario one would expect a non-linear relationship between infection intensity and fitness. METHODOLOGY/PRINCIPAL FINDINGS: Using data from blue tits (Cyanistes caeruleus) in southern Sweden, we investigated the relationship between the intensity of infection by its blood parasite (Haemoproteus majoris) and host survival to the following winter. Presence and intensity of parasite infections were determined by microscopy and confirmed using PCR of a 480 bp section of the cytochrome b gene. While a linear model suggested no relationship between parasite intensity and survival (F = 0.01, p = 0.94), a non-linear model showed a significant negative quadratic effect (quadratic parasite intensity: F = 4.65, p = 0.032; linear parasite intensity: F = 4.47, p = 0.035). Visualization using the cubic spline technique showed maximum survival at intermediate parasite intensities. CONCLUSIONS/SIGNIFICANCE: Our results indicate that failing to recognize the potential for a non-linear relationship between parasite infection intensity and host fitness may lead to the potentially erroneous conclusion that the parasite is harmless to its host. Here we show that high parasite intensities indeed reduced survival, but this effect was masked by reduced survival among birds heavily suppressing their parasite intensities. Reduced survival among hosts with low
Rainfall Maximum Intensities for Urban Hydrological Design in Mexican Republic
Campos–Aranda D.F.
2010-04-01
Firstly, through the urban hydrosystem concept and urbanization, the difficulties and approach of urban flood estimation are established, based on Intensity-Duration-Frequency (IDF) curves. Next, at 10 recording gauges located in very different geographic zones, a procedure for estimating IDF curves is tested, which uses the Chen formula and the information available in the Mexican Republic on intensity isohyets and annual maximum daily rainfall. Then, having verified its capacity to reproduce the IDF curves, the procedure was applied at 45 important locations of the country, and the results are shown. Lastly, conclusions are formulated, which point out the approximation and simplicity of the proposed procedure.
Spatio-Temporal Analysis of UHI using Geo-Spatial Techniques: A case study of Ahmedabad
Vyas, A.; Shastri, B.; Joshi, Y.
2014-11-01
vulnerabilities to human health; the marginal population is affected most, as the natural environment is their only home or main shelter. Elderly people are also strongly affected because of their weakening immune systems. Major effects of UHI on the environment include: a) air quality, b) energy consumption and c) human health. To study the causes and effects of UHI in any urban area, the first step is to demarcate the spatial distribution of UHI and its intensity over different periods of the day, as well as the temperature difference between the urban area and the surrounding rural areas. Second, a study of land use/land cover change in the area also helps identify the causes of heat accumulation in a particular region. After mapping the intensity, different zones can be analyzed to understand the relationship between UHI and urban morphological features, which in turn suggests how to plan urban centers so as to mitigate the UHI effect. There are mainly two approaches to demarcating UHI: field data collection and observations, and remote sensing data analysis. Observations from the interior of the city outwards can be analyzed by climatic methods, observing many days, and many times of day, continuously to establish the daily variation of the heat island effect. Since a developing city may cover hundreds of square kilometers, ground observation data cannot provide enough detail about the urban heat island distribution characteristics. The most precise method is satellite remote sensing: the UHI phenomenon can be analyzed using thermal infrared data obtained from meteorological satellite sensors, with atmospheric attenuation corrected using meteorological soundings and ground observation data. The heat island effect of one city is never the same as that of another.
Satellite images from AVHRR
Influence of maximum decking charge on intensity of blasting vibration
Anonymous
2006-01-01
Based on the character of short-time non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. Firstly, the characteristics of the wavelet transform and wavelet packet analysis were described. Secondly, the blasting vibration signals were analyzed by wavelet packet in MATLAB, and the changes of the energy distribution curves at different frequency bands were obtained. Finally, the law by which the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with the increase of decking charge, the ratio of high-frequency energy to total energy decreases and the dominant frequency bands of blasting vibration signals tend towards low frequency, while blasting vibration does not depend on the maximum decking charge.
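The energy-distribution idea above can be illustrated at its simplest with a one-level Haar split of a signal into a low- and a high-frequency band and the resulting energy ratio. This is only a stand-in for the multi-level wavelet-packet analysis the paper performs in MATLAB:

```python
import math

# One-level Haar decomposition into low/high frequency bands, and the
# fraction of total energy in the high band. A minimal sketch of the
# band-energy bookkeeping behind wavelet-packet energy distributions;
# the actual study used multi-level wavelet packets.

def haar_band_energies(signal):
    low = [(signal[i] + signal[i + 1]) / math.sqrt(2)
           for i in range(0, len(signal) - 1, 2)]
    high = [(signal[i] - signal[i + 1]) / math.sqrt(2)
            for i in range(0, len(signal) - 1, 2)]
    e_low = sum(v * v for v in low)
    e_high = sum(v * v for v in high)
    return e_low, e_high

sig = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]  # purely high-frequency
e_low, e_high = haar_band_energies(sig)
print(e_high / (e_low + e_high))  # -> 1.0
```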
Maximum Likelihood Localization of Radiation Sources with unknown Source Intensity
Baidoo-Williams, Henry E
2016-01-01
In this paper, we consider a novel and robust maximum likelihood approach to localizing radiation sources with unknown statistics of the source signal strength. The result utilizes the smallest number of sensors theoretically required to localize the source. It is shown that, should the source lie in the open convex hull of the sensors, precisely $N+1$ sensors are required in $\mathbb{R}^N$, $N \in \{1,\cdots,3\}$. It is further shown that the region of interest, the open convex hull of the sensors, is entirely devoid of false stationary points. An augmented gradient ascent algorithm, with random projections applied should an estimate escape the convex hull, is presented.
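A toy version of such a localization in R^2 with N+1 = 3 sensors can be sketched as follows. The modeling choices here are assumptions, not the paper's: a noiseless inverse-square intensity model, the unknown amplitude profiled out in closed form, and plain gradient descent on the least-squares residual (equivalently, ascent on a Gaussian log-likelihood) without the random-projection step:

```python
# Toy source localization: measurements m_i = A / ||x - s_i||^2 with unknown
# amplitude A. For a candidate position, A is profiled out by least squares;
# the position is refined by numerical gradient descent with backtracking.

def localize(sensors, meas, x0, iters=1000):
    def cost(x):
        g = [1.0 / ((x[0] - sx) ** 2 + (x[1] - sy) ** 2) for sx, sy in sensors]
        a = sum(m * gi for m, gi in zip(meas, g)) / sum(gi * gi for gi in g)
        return sum((m - a * gi) ** 2 for m, gi in zip(meas, g))

    x, step, h = list(x0), 0.1, 1e-6
    for _ in range(iters):
        c0 = cost(x)
        grad = [(cost([x[0] + h, x[1]]) - c0) / h,
                (cost([x[0], x[1] + h]) - c0) / h]
        while step > 1e-12:                      # backtracking line search
            cand = [x[0] - step * grad[0], x[1] - step * grad[1]]
            if cost(cand) < c0:
                x, step = cand, step * 1.5
                break
            step *= 0.5
    return x

sensors = [(0.0, 0.0), (4.0, 0.0), (2.0, 4.0)]   # 3 sensors for R^2
true_src, amp = (2.0, 1.0), 10.0                 # source inside the hull
meas = [amp / ((true_src[0] - sx) ** 2 + (true_src[1] - sy) ** 2)
        for sx, sy in sensors]
est = localize(sensors, meas, x0=(2.0, 2.0))
print(round(est[0], 2), round(est[1], 2))
```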
Exploring the Urban Heat Island (UHI) Effect in Port Louis, Mauritius
2014-10-13
Oct 13, 2014 ... temperature due to artificial land cover and anthropogenic heat (Y. ... Exploring the Urban Heat Island (UHI) Effect in Port Louis, Mauritius. 140 ..... and intelligence in both design and technology in order to create sustainable.
The analysis and kinetic energy balance of an upper-level wind maximum during intense convection
Fuelberg, H. E.; Jedlovec, G. J.
1982-01-01
The purpose of this paper is to analyze the formation and maintenance of the upper-level wind maximum which formed between 1800 and 2100 GMT, April 10, 1979, during the AVE-SESAME I period, when intense storms and tornadoes were experienced (the Red River Valley tornado outbreak). Radiosonde stations participating in AVE-SESAME I are plotted (centered on Oklahoma). National Meteorological Center radar summaries near the times of maximum convective activity are mapped, and height and isotach plots are given, where the formation of an upper-level wind maximum over Oklahoma is the most significant feature at 300 mb. The energy balance of the storm region is seen to change dramatically as the wind maximum forms. During much of its lifetime, the upper-level wind maximum is maintained by ageostrophic flow that produces cross-contour generation of kinetic energy and by the upward transport of midtropospheric energy. Two possible mechanisms for the ageostrophic flow are considered.
Giovana Mara Rodrigues Borges
2016-11-01
Knowledge of the probabilistic behavior of rainfall is extremely important to the design of drainage systems, dam spillways, and other hydraulic projects. This study therefore examined statistical models to predict annual maximum daily rainfall, as well as intense-rainfall models, for the city of Formiga - MG. To do this, annual maximum daily rainfall data were ranked in decreasing order to identify, by exceedance probability, the statistical distribution that best describes them. A daily rainfall disaggregation methodology was used for the intense-rain model studies, adjusted with Intensity-Duration-Frequency (IDF) and exponential models. The study found that the Gumbel model adhered best to the data in terms of observed frequency, as indicated by the chi-squared test, and that the exponential model best conforms to the observed data for predicting intense rains.
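The Gumbel (extreme-value type I) fit named above can be sketched with a method-of-moments estimate and the resulting T-year return level. The rainfall values below are illustrative, not the Formiga record:

```python
import math

# Method-of-moments fit of a Gumbel (EV-I) distribution to annual maximum
# daily rainfall, and the depth exceeded on average once every T years.

def gumbel_fit(sample):
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale parameter
    mu = mean - 0.5772 * beta               # location (Euler-Mascheroni const.)
    return mu, beta

def return_level(mu, beta, t_years):
    p = 1.0 - 1.0 / t_years                 # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

rain = [62.0, 75.0, 81.0, 90.0, 68.0, 110.0, 95.0, 72.0, 84.0, 101.0]
mu, beta = gumbel_fit(rain)
print(round(return_level(mu, beta, 50), 1))  # 50-year daily maximum estimate
```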
GCR intensity during the sunspot maximum phase and the inversion of the heliospheric magnetic field
Krainev, M; Kalinin, M; Svirzhevskaya, A; Svirzhevsky, N
2015-01-01
The maximum phase of the solar cycle is characterized by several interesting features in the solar activity, heliospheric characteristics and the galactic cosmic ray (GCR) intensity. Recently the maximum phase of the current solar cycle (SC) 24, anomalous in many respects when compared with the solar cycles of the second half of the 20th century, came to an end. The corresponding phase in the GCR intensity cycle is also in progress. In this paper we study different aspects of the sunspot, heliospheric and GCR behavior around this phase. Our main conclusions are as follows: 1) The maximum phase of sunspot SC 24 ended in June 2014, the development of the sunspot cycle being similar to those of SC 14 and 15 (the Gleissberg minimum). The maximum phase of SC 24 in the GCR intensity is still in progress. 2) The inversion of the heliospheric magnetic field consists of three stages, characterized by the appearance of the global heliospheric current sheet (HCS) connecting all longitudes. In two transition dipole stages ...
SEBA SUSAN; NANDINI AGGARWAL; SHEFALI CHAND; AYUSH GUPTA
2016-12-01
In this paper we investigate information-theoretic image coding techniques that assign longer codes to improbable, imprecise and non-distinct intensities in the image. The variable-length coding techniques, when applied to cropped facial images of subjects with different facial expressions, highlight the set of low-probability intensities that characterize the facial expression, such as the creases in the forehead, the widening of the eyes and the opening and closing of the mouth. A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization experiments.
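The information-theoretic intuition behind assigning longer codes to improbable intensities can be shown with a plain Shannon code, where each intensity gets a code length of ceil(-log2 p). This is a generic sketch of the principle, not the paper's maximum-entropy partitioning scheme:

```python
import math
from collections import Counter

# Shannon code lengths: rare intensities (e.g. expression creases) receive
# the longest codes, frequent background intensities the shortest.

def shannon_code_lengths(pixels):
    counts = Counter(pixels)
    total = len(pixels)
    return {v: math.ceil(-math.log2(c / total)) for v, c in counts.items()}

pixels = [128] * 12 + [130] * 3 + [255] * 1   # 255 is the rare intensity
lengths = shannon_code_lengths(pixels)
print(lengths[128], lengths[255])  # -> 1 4
```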
Camarrone, Flavio; Ivanova, Anna; Decoster, Wivine; de Jong, Felix; van Hulle, Marc M
2015-01-01
To examine whether the minimum as well as the maximum voice intensity (i.e. sound pressure level, SPL) curves of a voice range profile (VRP) are required when discovering different voice groups based on a clustering analysis. In this approach, no a priori labeling of voice types is used. VRPs of 194 (84 male and 110 female) professional singers were registered and processed. Cluster analysis was performed with the use of features related to (1) both the maximum and minimum SPL curves and (2) the maximum SPL curve only. Features related to the maximum as well as the minimum SPL curves showed three clusters in both male and female voices. These clusters, or voice groups, are based on voice types with similar VRP features. However, when using features related only to the maximum SPL curve, the clusters became less obvious. Features related to the maximum and minimum SPL curves of a VRP are both needed in order to identify the three voice clusters. © 2016 S. Karger AG, Basel.
Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific
Jae-Won Choi
2016-01-01
This study obtained the latitude at which tropical cyclones (TCs) show maximum intensity and applied statistical change-point analysis to the time series of annual average values. The analysis found that the latitude of TC maximum intensity increased from 1999. To investigate the reason for this phenomenon, the difference between the 1999-2013 average and the 1977-1998 average was analyzed. In the difference of the 500 hPa streamlines between the two periods, anomalous anticyclonic circulations were strong in 30°-50°N, while an anomalous monsoon trough was located to the north of the South China Sea and extended eastward to 145°E. The middle-latitude region of East Asia is affected by anomalous southeasterlies due to these anomalous anticyclonic circulations and the anomalous monsoon trough. These anomalous southeasterlies act as anomalous steering flows that direct TCs toward the middle latitudes of East Asia. As a result, TCs during 1999-2013 reached their maximum intensity at higher latitudes than TCs during 1977-1998.
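A minimal change-point search of the kind applied above can be sketched as choosing the split that minimizes the total within-segment sum of squared deviations. The latitude series below is synthetic; the actual study used formal change-point tests on annual TC records:

```python
# Single change-point detection by least squares: try every split of the
# series into two segments and keep the one minimizing the summed
# within-segment sum of squared errors (SSE).

def change_point(series):
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)
    best_k, best_cost = None, float("inf")
    for k in range(1, len(series)):
        cost = sse(series[:k]) + sse(series[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

lat = [18.0, 18.4, 17.9, 18.2, 21.1, 20.8, 21.3, 21.0]  # shift after index 4
print(change_point(lat))  # -> 4
```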
di Sabatino, S.; Leo, L. S.; Hedquist, B. C.; Carter, W.; Fernando, H. J. S.
2009-04-01
This paper reports on the analysis of results from a large urban heat island (UHI) experiment performed in Phoenix (AZ) in April 2008. From 1960 to 2000, the city of Phoenix experienced a minimum-temperature rise of 0.47 °C per decade, one of the highest rates in the world for a city of this size (Golden, 2004). Contemporaneously, the city has expanded rapidly, and large portions of land and desert vegetation have been replaced by buildings, asphalt and concrete (Brazel et al., 2007; Emmanuel and Fernando, 2007). Moreover, model predictions show that minimum air temperatures for the Phoenix metropolitan area in future years might exceed 38 °C. In order to derive general statements and mitigation strategies for the UHI phenomenon in Phoenix and other cities in hot arid climates, a one-day intensive experiment was conducted on 4-5 April 2008 to collect surface and ambient temperatures within various landscapes in Central Phoenix. Inter alia, infrared thermography (IRT) was used for UHI mapping. The aim was to investigate UHI modifications within the city of Phoenix at three spatial scales, i.e. the local (Central Business District, CBD), the neighborhood and the city scales. This was achieved by combining IRT measurements taken at ground level by mobile equipment (automobile-mounted and pedicab) and at high elevation by helicopter. At the local scale, detailed thermographic images of about twenty building façades and several street canyons were collected. In total, about two thousand images were taken during the 24-hour campaign. Image analysis provides detailed information on building-surface and pavement temperatures at fine resolution (Hedquist et al., 2009; Di Sabatino et al., 2009). This unique dataset enables several investigations of the dependence of local air temperature on albedo, building thermal inertia, building shape and orientation, and sky view factors. In addition, the mosaic of building façade temperatures is being analyzed
H. Ijadi
2012-09-01
In this paper, a method to track the maximum power point of solar panels based on fuzzy logic is presented. The proposed method is based on the relationship between radiation intensity and the voltage of the maximum power operating point. With this relationship, at any time, by measuring the light intensity, the voltage at the maximum power point can be calculated using a fuzzy approximation function. To verify the proposed method, simulation results are presented.
Data concurrency is required for estimating urban heat island intensity.
Zhao, Shuqing; Zhou, Decheng; Liu, Shuguang
2016-01-01
Urban heat island (UHI) can generate profound impacts on socioeconomics, human life, and the environment. Most previous studies have estimated UHI intensity using outdated urban extent maps to define urban and its surrounding areas, and the impacts of urban boundary expansion have never been quantified. Here, we assess the possible biases in UHI intensity estimates induced by outdated urban boundary maps using MODIS Land surface temperature (LST) data from 2009 to 2011 for China's 32 major cities, in combination with the urban boundaries generated from urban extent maps of the years 2000, 2005 and 2010. Our results suggest that it is critical to use concurrent urban extent and LST maps to estimate UHI at the city and national levels. Specific definition of UHI matters for the direction and magnitude of potential biases in estimating UHI intensity using outdated urban extent maps.
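The UHI intensity discussed above is typically the mean urban LST minus the mean rural LST under a given urban mask. A toy 1-D illustration of the concurrency issue, with hypothetical pixel values: an outdated (smaller) urban mask leaves a newly urbanized, warm pixel in the "rural" set and biases the estimate:

```python
# UHI intensity from an LST map and a binary urban mask:
# mean urban LST minus mean rural LST.

def uhi_intensity(lst, urban_mask):
    urban = [t for t, u in zip(lst, urban_mask) if u]
    rural = [t for t, u in zip(lst, urban_mask) if not u]
    return sum(urban) / len(urban) - sum(rural) / len(rural)

lst = [35.0, 34.0, 33.0, 30.0, 30.0, 30.0]   # deg C per pixel
mask_concurrent = [1, 1, 1, 0, 0, 0]         # matches the LST epoch
mask_outdated = [1, 1, 0, 0, 0, 0]           # misses the newest urban pixel
print(uhi_intensity(lst, mask_concurrent))   # -> 4.0
print(uhi_intensity(lst, mask_outdated))     # -> 3.75 (biased)
```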
The poleward migration of the location of tropical cyclone maximum intensity.
Kossin, James P; Emanuel, Kerry A; Vecchi, Gabriel A
2014-05-15
Temporally inconsistent and potentially unreliable global historical data hinder the detection of trends in tropical cyclone activity. This limits our confidence in evaluating proposed linkages between observed trends in tropical cyclones and in the environment. Here we mitigate this difficulty by focusing on a metric that is comparatively insensitive to past data uncertainty, and identify a pronounced poleward migration in the average latitude at which tropical cyclones have achieved their lifetime-maximum intensity over the past 30 years. The poleward trends are evident in the global historical data in both the Northern and the Southern hemispheres, with rates of 53 and 62 kilometres per decade, respectively, and are statistically significant. When considered together, the trends in each hemisphere depict a global-average migration of tropical cyclone activity away from the tropics at a rate of about one degree of latitude per decade, which lies within the range of estimates of the observed expansion of the tropics over the same period. The global migration remains evident and statistically significant under a formal data homogenization procedure, and is unlikely to be a data artefact. The migration away from the tropics is apparently linked to marked changes in the mean meridional structure of environmental vertical wind shear and potential intensity, and can plausibly be linked to tropical expansion, which is thought to have anthropogenic contributions.
Dendritic tree extraction from noisy maximum intensity projection images in C. elegans.
Greenblum, Ayala; Sznitman, Raphael; Fua, Pascal; Arratia, Paulo E; Oren, Meital; Podbilewicz, Benjamin; Sznitman, Josué
2014-06-12
Maximum Intensity Projections (MIP) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. Extracting dendritic trees from noisy images remains, however, a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentations of dendritic trees following a statistical learning framework. Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated by the imaging process and that from the aggregation of information in the MIP images. These noise models are then used within a probabilistic, or Bayesian, framework to provide a coarse 2D dendritic tree segmentation. Finally, some post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. Following a Leave-One-Out Cross-Validation (LOOCV) method on an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentations over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from Receiver Operating Characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples, including the extraction of skeletonized structures, and compare our method to a state-of-the-art dendritic tree tracing software. Overall, our DTE method allows for robust dendritic tree segmentations in noisy MIPs, outperforming traditional intensity-based methods. Such an approach provides a
Comparison of different UHI mitigation strategies: the street- versus roof-level implementation
Li, X.; Georgescu, M.; Norford, L. K.
2015-12-01
Many mitigation approaches have been proposed to ameliorate the deleterious effects of urbanization on climate, with special focus on the notorious urban heat island (UHI) effect. Of these approaches, high-reflectance roofs (cool roofs) and pavements (cool pavements) and green roofs or greenery are the most commonly used and widely studied. However, the debate regarding the better implementation of cool and green technology is still ongoing. In this study, numerical sensitivity tests are carried out to evaluate the mitigation effect of the cool and green implementations at the city scale. The effects of roof-level and street-level implementations are compared in the context of a tropical urban environment.
Verdon-Kidd, D. C.; Kiem, A. S.
2015-12-01
Rainfall intensity-frequency-duration (IFD) relationships are commonly required for the design and planning of water supply and management systems around the world. Currently, IFD information is based on the "stationary climate assumption" that weather at any point in time will vary randomly and that the underlying climate statistics (including both averages and extremes) will remain constant irrespective of the period of record. However, the validity of this assumption has been questioned over the last 15 years, particularly in Australia, following an improved understanding of the significant impact of climate variability and change occurring on interannual to multidecadal timescales. This paper provides evidence of regime shifts in annual maximum rainfall time series (between 1913-2010) using 96 daily rainfall stations and 66 sub-daily rainfall stations across Australia. Furthermore, the effect of these regime shifts on the resulting IFD estimates are explored for three long-term (1913-2010) sub-daily rainfall records (Brisbane, Sydney, and Melbourne) utilizing insights into multidecadal climate variability. It is demonstrated that IFD relationships may under- or over-estimate the design rainfall depending on the length and time period spanned by the rainfall data used to develop the IFD information. It is recommended that regime shifts in annual maximum rainfall be explicitly considered and appropriately treated in the ongoing revisions of the Engineers Australia guide to estimating and utilizing IFD information, Australian Rainfall and Runoff (ARR), and that clear guidance needs to be provided on how to deal with the issue of regime shifts in extreme events (irrespective of whether this is due to natural or anthropogenic climate change). The findings of our study also have important implications for other regions of the world that exhibit considerable hydroclimatic variability and where IFD information is based on relatively short data sets.
Dave, Jaydev K; Forsberg, Flemming
2009-09-01
The aim of this study was to develop a novel automated motion compensation algorithm for producing cumulative maximum intensity (CMI) images from subharmonic imaging (SHI) of breast lesions. SHI is a nonlinear contrast-specific ultrasound imaging technique in which pulses are received at half the frequency of the transmitted pulses. A Logiq 9 scanner (GE Healthcare, Milwaukee, WI, USA) was modified to operate in grayscale SHI mode (transmitting/receiving at 4.4/2.2 MHz) and used to scan 14 women with 16 breast lesions. Manual CMI images were reconstructed by temporal maximum-intensity projection of pixels traced from the first frame to the last. In the new automated technique, the user selects a kernel in the first frame and the algorithm then uses the sum of absolute differences (SAD) technique to identify motion-induced displacements in the remaining frames. A reliability parameter, based on the ratio of the minimum SAD to the average SAD, was used to estimate the accuracy of the motion tracking. Two thresholds (the mean and 85% of the mean reliability parameter) were used to eliminate images plagued by excessive motion and/or noise. The automated algorithm was compared with the manual technique for computational time, correction of motion artifacts, removal of noisy frames and quality of the final image. The automated algorithm compensated for motion artifacts and noisy frames. The computational time was 2 min, compared with 60-90 min for the manual method. The quality of the motion-compensated CMI-SHI images generated by the automated technique was comparable to that of the manual method and provided a snapshot of the microvasculature showing interconnections between vessels, which was less evident in the original data. In conclusion, an automated algorithm for producing CMI-SHI images has been developed. It eliminates the need for manual processing and yields reproducible images, thereby increasing the throughput and efficiency of reconstructing CMI-SHI images. The
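The SAD tracking step described above can be sketched minimally: slide a kernel from one frame over a search window in the next, keep the displacement with the smallest sum of absolute differences, and report the min-SAD / mean-SAD reliability ratio. The tiny frames below are hypothetical, not ultrasound data:

```python
# Sum-of-absolute-differences (SAD) block matching between two frames,
# plus a reliability ratio (min SAD over mean SAD across the search window).

def sad(block_a, block_b):
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def track(frame0, frame1, top, left, size, search):
    """Return (dy, dx) minimizing SAD, and the reliability ratio."""
    kernel = [row[left:left + size] for row in frame0[top:top + size]]
    scores = {}
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            cand = [row[c:c + size] for row in frame1[r:r + size]]
            scores[(dy, dx)] = sad(kernel, cand)
    best = min(scores, key=scores.get)
    mean_sad = sum(scores.values()) / len(scores)
    reliability = scores[best] / mean_sad if mean_sad else 0.0
    return best, reliability

f0 = [[0] * 6 for _ in range(6)]
f0[2][2] = 9          # bright feature in frame 0
f1 = [[0] * 6 for _ in range(6)]
f1[3][3] = 9          # same feature, moved by (+1, +1)
(dy, dx), rel = track(f0, f1, top=1, left=1, size=3, search=1)
print(dy, dx)  # -> 1 1
```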
Fishman, Elliot K; Ney, Derek R; Heath, David G; Corl, Frank M; Horton, Karen M; Johnson, Pamela T
2006-01-01
The introduction and widespread availability of 16-section multi-detector row computed tomographic (CT) technology and, more recently, 64-section scanners, has greatly advanced the role of CT angiography in clinical practice. CT angiography has become a key component of state-of-the-art imaging, with applications ranging from oncology (eg, staging of pancreatic or renal cancer) to classic vascular imaging (eg, evaluation of aortic aneurysms and renal artery stenoses) as well as newer techniques such as coronary artery imaging and peripheral runoff studies. With an average of 400-1000 images in each volume data set, three-dimensional postprocessing is crucial to volume visualization. Radiologists now have workstations that provide capabilities for evaluation of these data sets by using a range of software programs and processing tools. Although different systems have unique capabilities and functionality, all provide the options of volume rendering and maximum intensity projection for image display and analysis. These two postprocessing techniques have different advantages and disadvantages when used in clinical practice, and it is important that radiologists understand when and how each technique should be used. Copyright RSNA, 2006.
CT-maximum intensity projection is a clinically useful modality for the detection of gastric varices
Toru Ishikawa; Tomoteru Kamimura; Takashi Ushiki; Ken-ichi Mizuno; Tadayuki Togashi; Kouji Watanabe; Kei-ichi Seki; Hironobu Ohta; Toshiaki Yoshida; Keiko Takeda
2005-01-01
AIM: To evaluate the efficacy of CT-maximum intensity projection (CT-MIP) in the detection of gastric varices and their inflowing and outflowing vessels in patients with gastric varices scheduled to undergo balloon-occluded retrograde transvenous obliteration (B-RTO). METHODS: Sixteen patients with endoscopically confirmed gastric varices were included in this study. All patients were evaluated with CT-MIP using three-dimensional reconstructions, before and after B-RTO. RESULTS: CT-MIP clearly depicted gastric varices in 16 patients (100%), the left gastric vein in 6 (37.5%), the posterior gastric vein in 12 (75.0%), the short gastric veins in 13 (81.3%), gastrorenal shunts in 16 (100%), the hemiazygos vein (HAZV) in 4 (25.0%), the pericardiophrenic vein (PCPV) in 9 (56.3%), and the left inferior phrenic vein in 9 patients (56.3%). Although flow direction itself cannot be determined from CT-MIP, this modality provided clear images of the inflowing and outflowing vessels. Moreover, in one patient, short gastric veins were not seen on conventional angiographic portography images of the spleen, but were clearly revealed on CT-MIP. CONCLUSION: We suggest that CT-MIP should be considered as a routine method for detecting and diagnosing collateral veins in patients with gastric varices scheduled for B-RTO. Furthermore, CT-MIP is more useful than endoscopy in verifying the early therapeutic effects of B-RTO.
Ertas, Gokhan; Gulcur, H Ozcan; Tunaci, Mehtap
2008-05-01
The effectiveness of morphological descriptors based on normalized maximum intensity-time ratio (nMITR) maps, generated using a 3 x 3 pixel moving mask on dynamic contrast-enhanced magnetic resonance (DCE-MR) mammograms, is studied for assessment of malignancy. After a rough indication of the volume of interest on the nMITR maps, lesions are automatically segmented. Two-dimensional (2D) convexity, normalized complexity, extent, and eccentricity, as well as three-dimensional (3D) versions of these descriptors and the contact surface area ratio, are computed. On a data set of DCE-MR mammograms from 51 women containing 26 benign and 32 malignant lesions, 3D convexity, complexity, and extent are found to reflect aggressiveness of malignancy better than the 2D descriptors. The contact surface area ratio, which is easily adaptable to different imaging resolutions, is found to be the most significant and accurate descriptor (75% sensitivity, 88% specificity, 89% positive predictive value, and 74% negative predictive value).
Kilburn-Toppin, Fleur; Arthurs, Owen J.; Tasker, Angela D.; Set, Patricia A.K. [Addenbrooke's Hospital, Cambridge University Teaching Hospitals NHS Foundation Trust, Department of Radiology, Box 219, Cambridge (United Kingdom)]
2013-07-15
Maximum intensity projection (MIP) images might be useful in helping to differentiate small pulmonary nodules from adjacent vessels on thoracic multidetector CT (MDCT). The aim was to evaluate the benefits of axial MIP images over axial source images for the paediatric chest in an interobserver variability study. We included 46 children with extra-pulmonary solid organ malignancy who had undergone thoracic MDCT. Three radiologists independently read 2-mm axial and 10-mm MIP image datasets, recording the number of nodules, size and location, overall time taken and confidence. There were 83 nodules (249 total reads among three readers) in 46 children (mean age 10.4 ± 4.98 years, range 0.3-15.9 years; 24 boys). Consensus read was used as the reference standard. Overall, three readers recorded significantly more nodules on MIP images (228 vs. 174; P < 0.05), improving sensitivity from 67% to 77.5% (P < 0.05) but with lower positive predictive value (96% vs. 85%, P < 0.005). MIP images took significantly less time to read (71.6 ± 43.7 s vs. 92.9 ± 48.7 s; P < 0.005) but did not improve confidence levels. Using 10-mm axial MIP images for nodule detection in the paediatric chest enhances diagnostic performance, improving sensitivity and reducing reading time when compared with conventional axial thin-slice images. Axial MIP and axial source images are complementary in thoracic nodule detection. (orig.)
Fallmann, Joachim; Suppan, Peter; Emeis, Stefan
2013-04-01
Cities are warmer than their surroundings, a phenomenon known as the urban heat island (UHI). UHIs influence urban atmospheric circulation, air quality, and ecological conditions. The UHI leads to upward motion and a compensating near-surface inflow from the surroundings, which imports rural trace substances. Chemical and aerosol formation processes are modified by increased temperature, reduced humidity and modified urban-rural trace substance mixtures. UHIs produce enhanced heat stress for humans, animals and plants, less water availability and modified air quality. Growing cities and climate change will aggravate the UHI and its effects, urgently requiring adaptation and mitigation strategies. Prior to this, UHI properties must be assessed by surface observations, ground- and satellite-based vertical remote sensing and numerical modelling. The Weather Research and Forecasting model (WRF) is an instrument to simulate and assess this phenomenon based on boundary conditions from observations and global climate models. Three urbanization schemes are available with WRF; these are tested in this study for different weather conditions in central Europe and will be enhanced if necessary. High-resolution land use maps are used for this modelling effort. In situ measurements and Landsat thermal images are employed for validation of the results. The study focuses on the city of Stuttgart, located in the south-western part of Germany in a caldera-like orographic feature. This municipality has a long tradition in urban climate research and thus is well equipped with climatological measurement stations. By using Geographical Information Systems (GIS), it is possible to simulate several scenarios for different surface properties. By increasing the albedo of roof and wall layers in the urban canopy model or by replacing urban land use with natural vegetation, simple urban planning strategies can be tested and the effect on urban heat island formation and air quality can be
Gajewski Jan
2015-12-01
Purpose. The aim of this study was to determine the changes in postural physiological tremor following maximum-intensity effort performed on an arm ergometer by young male and female swimmers. Methods. Ten female and nine male young swimmers served as subjects in the study. Forearm tremor was measured accelerometrically in the sitting position before the 30-second Wingate Anaerobic Test on an arm ergometer and then 5, 15 and 30 minutes post-test. Results. Low-frequency tremor log-amplitude (L1-5) increased (repeated factor: p < 0.05) from −7.92 ± 0.45 to −7.44 ± 0.45 and from −6.81 ± 0.52 to −6.35 ± 0.58 in women and men, respectively (gender: p < 0.05), 5 minutes post-test. Tremor log-amplitude (L15-20) increased (repeated factor: p < 0.001) from −9.26 ± 0.70 to −8.59 ± 0.61 and from −8.79 ± 0.65 to −8.39 ± 0.79 in women and men, respectively, 5 minutes post-test. No effect of gender was found for the high-frequency range. The increased tremor amplitude was observed even 30 minutes post-exercise. The mean frequency of the tremor spectra gradually decreased post-exercise (p < 0.001). Conclusions. Exercise-induced changes in tremor were similar in males and females. Fatigue produced a decrement in the mean frequency of tremor, suggesting decreased muscle stiffness post-exercise. Such changes in tremor after exercise may be used as an indicator of fatigue in the nervous system.
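A band-limited log-amplitude such as L1-5 or L15-20 can be derived from an accelerometer record by summing spectral power over the band. The sketch below is a hypothetical illustration (a naive DFT, not the authors' processing pipeline):

```python
import math

def band_log_amplitude(signal, fs, f_lo, f_hi):
    """log10 of the mean spectral power of `signal` (sampled at fs Hz)
    over the frequency band [f_lo, f_hi]; a naive DFT keeps it dependency-free."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / n ** 2
            count += 1
    if count == 0:
        return float("-inf")               # no DFT bins fall in the band
    return math.log10(total / count + 1e-300)  # guard against log(0)
```

A 3-Hz oscillation, for example, yields a much larger log-amplitude in the 1-5 Hz band than in the 15-20 Hz band.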
Pysz, Marybeth A.; Foygel, Kira; Panje, Cedric M.; Needles, Andrew; Tian, Lu; Willmann, Jürgen K.
2015-01-01
Objectives: Contrast-enhanced ultrasound imaging is increasingly being used in the clinic for assessment of tissue vascularity. The purpose of our study was to evaluate the effect of different contrast administration parameters on the in vivo ultrasound imaging signal in tumor-bearing mice using a maximum intensity persistence (MIP) algorithm, and to evaluate the reliability of in vivo MIP imaging in assessing tumor vascularity. The potential of in vivo MIP imaging for monitoring tumor vascularity during antiangiogenic cancer treatment was further evaluated. Materials and Methods: In intraindividual experiments, varying contrast microbubble concentrations (5 × 10⁵, 5 × 10⁶, 5 × 10⁷, 5 × 10⁸ microbubbles in 100 µL saline) and contrast injection rates (0.6, 1.2, and 2.4 mL/min) were applied in subcutaneous tumor-bearing mice, and their effects on in vivo contrast-enhanced ultrasound MIP imaging plateau values were obtained using a dedicated small-animal ultrasound imaging system (40 MHz). Reliability of MIP ultrasound imaging was tested following 2 injections of the same microbubble concentration (5 × 10⁷ microbubbles at 1.2 mL/min) in the same tumors. In mice with subcutaneous human colon cancer xenografts, longitudinal contrast-enhanced ultrasound MIP imaging plateau values (baseline and at 48 hours) were compared between mice with and without antiangiogenic treatment (anti-vascular endothelial growth factor antibody). Ex vivo CD31 immunostaining of tumor tissue was used to correlate in vivo MIP imaging plateau values with microvessel density analysis. Results: In vivo MIP imaging plateau values correlated significantly (P = 0.001) with contrast microbubble doses. At 3 different injection rates of 0.6, 1.2, and 2.4 mL/min, MIP imaging plateau values did not change significantly (P = 0.61). Following 2 injections with the same microbubble dose and injection rate, MIP imaging plateau values were obtained with high reliability with an intraclass correlation
Analysis of simulated fluorescence intensity decays by a new maximum entropy method algorithm.
Esposito, Rosario; Altucci, Carlo; Velotta, Raffaele
2013-01-01
A new algorithm for the Maximum Entropy Method (MEM) is proposed for recovering the lifetime distribution in time-resolved fluorescence decays. The procedure is based on seeking the distribution that maximizes the Skilling entropy function subject to the chi-squared constraint χ² ≈ 1, through iterative linear approximations, LU decomposition of the Hessian matrix of the Lagrangian problem, and the Golden Section Search for backtracking. The accuracy of this algorithm has been investigated through comparisons with simulated fluorescence decays of both narrow and broad lifetime distributions. The proposed approach can analyse datasets of up to 4,096 points with a discretization ranging from 100 to 1,000 lifetimes. Good agreement with nonlinear fitting estimates has been observed when the method has been applied to multi-exponential decays. Remarkable results have also been obtained for broad lifetime distributions, where the position is recovered with high accuracy and the distribution width is estimated within 3%. These results indicate that the proposed procedure generates MEM lifetime distributions that can be used to quantify the real heterogeneity of lifetimes in a sample.
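The objective the algorithm optimizes can be stated compactly: the Skilling entropy of a trial lifetime distribution f relative to a default model m, and the chi-squared of the decay that f predicts. The sketch below evaluates both terms; it is a hypothetical illustration, not the authors' solver, and the matrix A of exponential basis functions is supplied by the caller:

```python
import math

def skilling_entropy(f, m):
    """Skilling entropy S(f) = sum_i (f_i - m_i - f_i * log(f_i / m_i));
    S = 0 when f equals the default model m, and S < 0 otherwise."""
    return sum(fi - mi - fi * math.log(fi / mi) for fi, mi in zip(f, m))

def reduced_chi2(f, A, data, sigma):
    """Reduced chi-squared of the decay predicted by amplitudes f, where
    A[k][i] holds the basis value exp(-t_k / tau_i) at time t_k."""
    pred = [sum(a * fi for a, fi in zip(row, f)) for row in A]
    return sum(((d - p) / sigma) ** 2 for d, p in zip(data, pred)) / len(data)
```

The MEM iteration then searches for the f with maximal S among those satisfying the reduced chi-squared constraint.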
Siegler, Jason C; Marshall, Paul W M; Raftry, Sean; Brooks, Cristy; Dowswell, Ben; Romero, Rick; Green, Simon
2013-12-01
The purpose of this investigation was to assess the influence of sodium bicarbonate supplementation on maximal force production, rate of force development (RFD), and muscle recruitment during repeated bouts of high-intensity cycling. Ten male and female subjects (n = 10) completed two fixed-cadence, high-intensity cycling trials. Each trial consisted of a series of 30-s efforts at 120% peak power output (maximum graded test) that were interspersed with 30-s recovery periods until task failure. Prior to each trial, subjects consumed 0.3 g/kg sodium bicarbonate (ALK) or placebo (PLA). Maximal voluntary contractions were performed immediately after each 30-s effort. Maximal force (Fmax) was calculated as the greatest force recorded over a 25-ms period throughout the entire contraction duration, while maximal RFD (RFDmax) was calculated as the greatest 10-ms average slope throughout that same contraction. Fmax declined similarly in both the ALK and PLA conditions, with baseline values (ALK: 1,226 ± 393 N; PLA: 1,222 ± 369 N) declining nearly 295 ± 54 N [95% confidence interval (CI) = 84-508 N; P force vs. maximum rate of force development during a whole body fatiguing task.
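The two windowed quantities are easy to state in code. Here is a hypothetical sketch for a uniformly sampled force trace (fs in Hz), reading Fmax as the greatest 25-ms windowed mean and RFDmax as the greatest 10-ms average slope:

```python
def fmax_rfdmax(force, fs, fmax_win_ms=25, rfd_win_ms=10):
    """Greatest 25-ms mean force (N) and greatest 10-ms average slope (N/s)
    of a uniformly sampled force trace `force` at sample rate `fs` Hz."""
    fw = max(1, round(fmax_win_ms * fs / 1000))   # samples per force window
    rw = max(1, round(rfd_win_ms * fs / 1000))    # samples per slope window
    fmax = max(sum(force[i:i + fw]) / fw for i in range(len(force) - fw + 1))
    rfdmax = max((force[i + rw] - force[i]) * fs / rw
                 for i in range(len(force) - rw))
    return fmax, rfdmax
```

For a linear 100-ms ramp to 100 N at 1 kHz sampling, the sketch returns a plateau Fmax of 100 N and an RFDmax of 1000 N/s.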
Jeng, K-S; Huang, C-C; Lin, C-K; Lin, C-C; Chen, K-H
2013-06-01
Early detection of Budd-Chiari syndrome (BCS), so that appropriate therapy can be given in time, is crucial. Angiography remains the gold standard for diagnosing BCS. However, establishing the diagnosis of BCS in complicated cirrhotic patients remains a challenge. We used maximum intensity projection (Max IP) and minimum intensity projection (Min IP) from computed tomographic (CT) images to detect this syndrome in such a patient. A 55-year-old man with a history of chronic hepatitis B infection and alcoholism had previously undergone a left lateral segmentectomy for hepatic epitheloid angiomyolipoma (4.6 × 3.5 × 3.3 cm) with a concomitant splenectomy. Liver decompensation with intractable ascites and jaundice occurred 4 months later. The reformatted venous-phase enhanced CT images with Max IP and Min IP showed middle hepatic vein thrombosis. He then underwent a living-related donor liver transplantation with a right liver graft from his daughter. Intraoperatively, we noted thrombosis of his middle hepatic vein protruding into the inferior vena cava. The postoperative course was uneventful. Microscopic findings revealed micronodular cirrhosis with mixed inflammation in the portal areas. Some liver lobules exhibited congestion and sinusoidal dilation, compatible with venous occlusion clinically. We recommend Max IP and Min IP of CT images as simple and effective techniques to establish the diagnosis of BCS, especially in complicated cirrhotic patients, thereby avoiding invasive interventional procedures. Copyright © 2013 Elsevier Inc. All rights reserved.
Bosmans, H.; Verbeeck, R.; Vandermeulen, D.; Suetens, P.; Wilms, G.; Maaly, M.; Marchal, G.; Baert, A.L. [Louvain Univ. (Belgium)]
1995-12-01
The objective of this study was to validate a new post-processing algorithm for improved maximum intensity projections (MIP) of intracranial MR angiography acquisitions. The core of the post-processing procedure is a new brain segmentation algorithm. Two seed areas, background and brain, are automatically detected. A 3D region grower then grows both regions towards each other, preferentially towards bright (white) regions. In this way, the skin is included in the final "background region", whereas cortical blood vessels and all brain tissues are included in the "brain region". The latter region is then used for the MIP. The algorithm runs in less than 30 minutes on a full dataset on a Unix workstation. Images from different acquisition strategies, including multiple overlapping thin-slab acquisition, magnetization transfer (MT) MRA, Gd-DTPA-enhanced MRA, normal and high-resolution acquisitions, and acquisitions from mid-field and high-field systems, were filtered. A series of contrast-enhanced MRA acquisitions obtained with identical parameters was filtered to study the robustness of the filter parameters. In all cases, only minimal manual interaction was necessary to segment the brain. The quality of the MIP was significantly improved, especially in post-Gd-DTPA acquisitions or using MT, due to the absence of high-intensity signals from skin, sinuses and eyes that otherwise superimpose on the angiograms. It is concluded that the filter is a robust technique to improve the quality of MR angiograms.
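The two competing growers can be sketched with a priority queue: both seed regions expand at once, and the brightest unclaimed frontier pixel is always claimed next, so growth proceeds preferentially through bright voxels and the regions meet along dark boundaries. This is a simplified 2-D illustration, not the paper's implementation:

```python
import heapq

def competitive_grow(img, seeds):
    """Grow labelled seed regions over a 2-D image (list of rows of
    intensities); the brightest unclaimed frontier pixel is claimed next."""
    h, w = len(img), len(img[0])
    label = [[0] * w for _ in range(h)]
    heap = []

    def push_neighbours(r, c, lab):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and label[nr][nc] == 0:
                # negate intensity: heapq pops the brightest candidate first
                heapq.heappush(heap, (-img[nr][nc], nr, nc, lab))

    for lab, (r, c) in seeds.items():
        label[r][c] = lab
        push_neighbours(r, c, lab)
    while heap:
        _, r, c, lab = heapq.heappop(heap)
        if label[r][c]:
            continue                     # already claimed by some region
        label[r][c] = lab
        push_neighbours(r, c, lab)
    return label
```

On a bright-dark-bright row with a seed at each end, each region sweeps up its bright side before either crosses the dark gap.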
Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J
2016-02-07
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small-field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem using small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air, using a high-density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
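The iterative predict-and-correct loop at the heart of MLEM can be illustrated in one dimension. The sketch below is a hypothetical Richardson-Lucy-style toy (a known, normalized PSF blurring a 1-D source), not the paper's ray-tracing implementation: each iteration multiplies the current estimate by the back-projected ratio of measured to predicted fluence.

```python
def mlem_deconvolve(measured, psf, n_iter=300):
    """1-D MLEM estimate of a non-negative source blurred by a known,
    normalized PSF (the Richardson-Lucy form of the EM update)."""
    n, pad = len(measured), len(psf) // 2

    def forward(src):
        # predicted fluence: convolution of the source with the PSF
        out = []
        for i in range(n):
            s = 0.0
            for j, p in enumerate(psf):
                t = i + j - pad
                if 0 <= t < n:
                    s += p * src[t]
            out.append(s)
        return out

    est = [1.0] * n                       # flat, positive starting estimate
    for _ in range(n_iter):
        pred = forward(est)
        ratio = [m / p if p > 1e-12 else 0.0 for m, p in zip(measured, pred)]
        # back-project the ratio (adjoint of the forward model) and update
        for t in range(n):
            c = 0.0
            for j, p in enumerate(psf):
                i = t - j + pad
                if 0 <= i < n:
                    c += p * ratio[i]
            est[t] *= c
    return est
```

Blurring a point source and running the loop concentrates the estimate back onto the original position, mirroring how the paper's reconstruction sharpens toward the true focal-spot distribution.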
Valenti, Davide; Denaro, Giovanni; Spagnolo, Bernardo; Conversano, Fabio; Brunet, Christophe
2015-01-01
During the last few years, theoretical works have shed new light on, and proposed new hypotheses about, the mechanisms which regulate the spatio-temporal behaviour of phytoplankton communities in marine pelagic ecosystems. Despite this, relevant physical and biological issues, such as the effects of time-dependent mixing in the upper layer, competition between groups, and the dynamics of non-stationary deep chlorophyll maxima, are still open questions. In this work, we analyze the spatio-temporal behaviour of five phytoplankton populations in a real marine ecosystem by using a one-dimensional reaction-diffusion-taxis model. The study takes into account the seasonal variations of environmental variables, such as light intensity, thickness of the upper mixed layer and profiles of vertical turbulent diffusivity, obtained starting from experimental findings. Theoretical distributions of phytoplankton cell concentration were converted into chlorophyll concentration and compared with the experimental profiles measured at a site in the Tyrrhenian Sea at four different times (seasons) of the year, during four different oceanographic cruises. As a result, we find good agreement between theoretical and experimental distributions of chlorophyll concentration. In particular, the theoretical results reveal that the seasonal changes of environmental variables play a key role in the phytoplankton distribution and determine the properties of the deep chlorophyll maximum. This study could be extended to other marine ecosystems to predict future changes in phytoplankton biomass due to global warming, in view of devising strategies to prevent the decline of primary production and the consequent decrease of fish species.
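A one-dimensional water-column model of this general kind can be sketched with explicit finite differences. The toy below uses hypothetical parameters, not the paper's calibrated reaction-diffusion-taxis model: it combines vertical turbulent diffusion with light-limited growth that decays exponentially with depth and a constant mortality.

```python
import math

def simulate_column(nz=50, dz=1.0, dt=0.1, steps=2000, D=0.5,
                    growth=0.1, light_decay=0.1, mortality=0.03):
    """Explicit FTCS scheme for dc/dt = D d2c/dz2 + (g*I(z) - m)*c,
    with light I(z) = exp(-light_decay * z) and no-flux boundaries.
    Stable here because D*dt/dz**2 = 0.05 < 0.5."""
    light = [math.exp(-light_decay * z * dz) for z in range(nz)]
    c = [0.1] * nz                        # uniform initial concentration
    for _ in range(steps):
        new = []
        for z in range(nz):
            up = c[z - 1] if z > 0 else c[z]        # no-flux top boundary
            dn = c[z + 1] if z < nz - 1 else c[z]   # no-flux bottom boundary
            diff = D * (up - 2 * c[z] + dn) / dz ** 2
            new.append(c[z] + dt * (diff + (growth * light[z] - mortality) * c[z]))
        c = new
    return c
```

With these illustrative parameters the net growth rate is positive near the lit surface and negative at depth, so the profile develops a near-surface maximum; a realistic deep chlorophyll maximum additionally needs nutrient limitation, as in the full model.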
Kinosada, Yasutomi; Okuda, Yasuyuki (Mie Univ., Tsu (Japan). School of Medicine); Ono, Mototsugu (and others)
1993-02-01
We developed a new noninvasive technique to visualize the anatomical structure of the nerve fiber system in vivo, naming the technique magnetic resonance (MR) tractography and the acquired image an MR tractogram. MR tractography has two steps. One is to obtain diffusion-weighted images sensitized along axes appropriate for depicting the intended nerve fibers with anisotropic water diffusion MR imaging. The other is to extract the anatomical structure of the nerve fiber system from a series of diffusion-weighted images by the maximum intensity projection method. To examine the clinical usefulness of the proposed technique, many contiguous, thin (3 mm) coronal two-dimensional sections of the brain were acquired sequentially in normal volunteers and selected patients with paralyses, on a 1.5 Tesla MR system (Signa, GE) with an ECG-gated Stejskal-Tanner pulse sequence. The structure of the nerve fiber system in normal volunteers was almost identical to the known anatomy. The tractograms of patients with paralyses clearly showed the degeneration of nerve fibers and correlated with clinical symptoms. MR tractography shows great promise for the study of neuroanatomy and neuroradiology. (author).
Akai, Takanori; Taniguchi, Daigo; Oda, Ryo; Asada, Maki; Toyama, Shogo; Tokunaga, Daisaku; Seno, Takahiro; Kawahito, Yutaka; Fujii, Yosuke; Ito, Hirotoshi; Fujiwara, Hiroyoshi; Kubo, Toshikazu
2016-04-01
Contrast-enhanced magnetic resonance imaging with maximum intensity projection (MRI-MIP) is an easy, useful imaging method to evaluate synovitis in rheumatoid hands. However, the prognosis of synovitis-positive joints on MRI-MIP has not been clarified. The aim of this study was to evaluate the relationship between synovitis visualized by MRI-MIP and joint destruction on X-rays in rheumatoid hands. The wrists, metacarpophalangeal (MP) joints, and proximal interphalangeal (PIP) joints of both hands (500 joints in total) were evaluated in 25 rheumatoid arthritis (RA) patients. Synovitis was scored from grade 0 to 2 on the MRI-MIP images. The Sharp/van der Heijde score and Larsen grade were used for radiographic evaluation. The relationships between the MIP score and the progression of radiographic scores and between the MIP score and bone marrow edema on MRI were analyzed using the trend test. As the MIP score increased, the Sharp/van der Heijde score and Larsen grade progressed severely. The rate of bone marrow edema-positive joints also increased with higher MIP scores. MRI-MIP imaging of RA hands is a clinically useful method that allows semi-quantitative evaluation of synovitis with ease and can be used to predict joint destruction.
Groszko, Marian
2003-01-01
Electric and magnetic fields of 50 Hz from electric power devices affect not only workers but also the general population, as these devices are also located in populated areas; hence the duality of regulations on maximum admissible intensities. This paper presents these regulations and discusses in detail the changes introduced in 2001. Based on the Polish regulations, a hygienic evaluation of electric power devices has been attempted. The Polish regulations on 50 Hz electromagnetic fields were compared with the relevant international regulations of CENELEC and the European Union recommendations. The Polish maximum admissible intensities have been found to conform to the international standards.
Meyfroidt, S.; Hulscher, M.; Cock, D. De; Elst, K. van; Joly, J.; Westhovens, R.; Verschueren, P.
2015-01-01
The objectives of the study were to determine the relative importance of barriers related to the provision of intensive combination treatment strategies with glucocorticoids (ICTS-GCs) in early rheumatoid arthritis (ERA) from the rheumatologists' perspective and to explore the relation between
Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel
2016-11-01
The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
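Under simple scaling, the moments of annual-maximum intensity at any duration follow from those at a reference duration by a single power law, and a frequency factor k (Hershfield's k_m in the PMP case) sets the return level. The numerical sketch below is hypothetical; the exponent and moments are illustrative values, not the Retiro results:

```python
def scaled_intensity(d, d_ref, mean_ref, std_ref, eta, k):
    """I(d) = [mean(d_ref) + k * std(d_ref)] * (d / d_ref)**(-eta):
    under simple scaling, both moments of annual-maximum intensity
    rescale with duration by the same power law."""
    return (mean_ref + k * std_ref) * (d / d_ref) ** (-eta)
```

Shorter durations yield higher intensities (positive eta); for a PMP estimate, k would be read from an envelope curve of the frequency factor such as the one developed in the study.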
Chappell, Mark; Odell, Jason
2004-01-01
We measured maximal oxygen consumption (VO2max) and burst speed in populations of Trinidadian guppies (Poecilia reticulata) from contrasting high- and low-predation habitats but reared in "common garden" conditions. We tested two hypotheses: first, that predation, which causes rapid life-history evolution in guppies, also impacts locomotor physiology, and second, that trade-offs would occur between burst and aerobic performance. VO2max was higher than predicted from allometry, and resting VO2 was lower than predicted. There were small interdrainage differences in male VO2max, but predation did not affect VO2max in either sex. Maximum burst speed was correlated with size; absolute burst speed was higher in females, but size-adjusted speed was greater in males. For both sexes, burst speed conformed to allometric predictions. There were differences in burst speed between drainages in females, but predation regime did not affect burst speed in either sex. We did not find a significant correlation between burst speed and VO2max, suggesting no trade-off between these traits. These results indicate that predation-mediated evolution of guppy life history does not produce concomitant evolution in aerobic capacity and maximum burst speed. However, other aspects of swimming performance (response latencies or acceleration) might show adaptive divergence in contrasting predation regimes.
Maximum Intensity Projection Based on Visual Perception Enhancement
周志光; 陶煜波; 林海
2013-01-01
This paper proposes a maximum intensity projection method that enhances the depth and shape perception of internal maximum-intensity features without a sophisticated or time-consuming transfer function specification. Building on traditional maximum intensity projection, the method first searches for the boundary sample with a similar intensity value and the optimal normal in front of the maximum-intensity feature, by comparing intensity and gradient magnitude. Next, the local illumination coefficients are updated according to the depth of the boundary structures; the resulting depth-based shading largely enhances the depth and shape perception of internal structures. A two-threshold region-growing scheme is designed to further highlight the features of interest: the seed is selected interactively by the user on the rendered image, and the growing process depends on the intensity values and 3D spatial distances of the boundary samples with optimal normals. Comparison results show that the proposed method provides more depth cues and shape information about the maximum-intensity features than traditional methods and has practical applications in medical and engineering fields.
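Stripped of the shading enhancements, the core projection is simple: each viewing ray keeps its maximum sample, and recording the depth at which that maximum occurs is what depth-based shading can build on. A minimal nested-list sketch (projection along the first axis; illustrative only, not the paper's renderer):

```python
def mip_with_depth(volume):
    """Maximum intensity projection of volume[i][j][k] along axis i,
    also returning the depth index of each ray's maximum (for shading)."""
    depth, rows, cols = len(volume), len(volume[0]), len(volume[0][0])
    mip = [[volume[0][j][k] for k in range(cols)] for j in range(rows)]
    dep = [[0] * cols for _ in range(rows)]
    for i in range(1, depth):
        for j in range(rows):
            for k in range(cols):
                if volume[i][j][k] > mip[j][k]:
                    mip[j][k] = volume[i][j][k]   # brighter sample wins the ray
                    dep[j][k] = i                 # remember where it came from
    return mip, dep
```

The depth map can then drive a per-pixel darkening of distant maxima, which is the kind of cue the proposed method refines with boundary normals and local illumination.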
Costa, R L D; Bueno, M S; Veríssimo, C J; Cunha, E A; Santos, L E; Oliveira, S M; Spósito Filha, E; Otsuk, I P
2007-05-01
The daily live weight gain (DLWG), faecal nematode egg counts (FEC), and packed cell volume (PCV) of Suffolk, Ile de France and Santa Inês ewe lambs were evaluated fortnightly for 56 days in the dry season (winter) and 64 days in the rainy season (summer) of 2001-2002. The animals were distributed in two similar groups, one on Aruana and the other on Tanzania grass (Panicum maximum), in a rotational grazing system at the Instituto de Zootecnia, in Nova Odessa (SP), Brazil. In the dry season, 24 one-year-old ewe lambs were used, eight of each breed, and there was no difference (p > 0.05) between grasses for DLWG (100 g/day), although the Suffolk had higher values (p < 0.05) than the other breeds. In the rainy season, with 33 six-month-old ewe lambs (nine Suffolk, eight Ile de France and 16 Santa Inês), the DLWG was not affected by breed, but it was twice as great (71 g/day, p < 0.05) on Aruana as on Tanzania grass (30 g/day). The Santa Inês ewe lambs had the lowest FEC (p < 0.05) and the highest PCV (p < 0.05), confirming their higher resistance to Haemonchus contortus, the prevalent nematode in the rainy season. It was concluded that the better performance of ewe lambs on Aruana pastures in the rainy season is probably explained by their lower nematode infection, owing to the better protein content of this grass (mean contents of 11.2% crude protein in Aruana grass and 8.7% in Tanzania grass, p < 0.05), which may have strengthened the immune system, consistent with the higher PCV (p < 0.05) observed in those animals.
Zhang, Zhi-Shan; Zhao, Yang; Li, Xin-Rong; Huang, Lei; Tan, Hui-Juan
2016-05-01
In water-limited regions, rainfall interception is influenced by rainfall properties and crown characteristics. Beyond gross rainfall amount and duration (GR and RD), rainfall properties within rain events, such as maximum rainfall intensity and rainless gap (RG), may heavily affect throughfall and interception by plants. From 2004 to 2014 (except 2007), individual shrubs of Caragana korshinskii and Artemisia ordosica were selected to measure throughfall during 210 rain events. Various rainfall properties were automatically measured, and the crown characteristics of the two shrubs, i.e., height, branch and leaf area index, crown area and volume, were also measured. The relative interception of C. korshinskii and A. ordosica was 29.1% and 17.1%, respectively. Rainfall properties contribute more than crown characteristics to the throughfall and interception of shrubs. Throughfall and interception can be explained by GR, RI60 (maximum rainfall intensity during 60 min), RD and RG, in decreasing order of importance. However, the relative throughfall and interception of the two shrubs responded differently to rainfall properties and crown characteristics: those of C. korshinskii were closely related to rainfall properties, while those of A. ordosica were more dependent on crown characteristics. We highlight that long-term monitoring is necessary to determine the relationships of throughfall and interception with crown characteristics.
Sultana, S.; Satyanarayana, A. N. V.
2016-12-01
The urban heat island (UHI) that generally develops over cities, due to drastic changes in land use and land cover (LULC), has a profound impact on atmospheric circulation patterns through changes in the energy transport mechanism, which in turn affect the regional climate. In this study, an attempt has been made to quantify the intensity of the UHI and to identify UHI pockets over fast-developing cosmopolitan Indian cities such as New Delhi, Mumbai and Kolkata during the last decade. For this purpose, Landsat TM and ETM+ images from the winter period, at roughly 5-year intervals from 2002 to 2013, have been selected to retrieve brightness temperatures and land use/cover, from which Land Surface Temperature (LST) has been estimated using the Normalized Difference Vegetation Index (NDVI). The Normalized Difference Built-up Index (NDBI) and Normalized Difference Bareness Index (NDBaI) are estimated to extract built-up areas and bare land from the satellite images and to identify UHI pockets over the study area. Image processing and GIS tools were employed for this purpose. Results reveal a significant increase in the intensity of the UHI and in its area of influence over all three cities. An increase of 2 to 2.5 °C in UHI intensity over the study regions has been noticed. The increase in UHI intensity is larger over New Delhi than over Mumbai and Kolkata, which are more or less the same. The number of UHI hotspot pockets has also increased, as seen from the spatial distribution of LST, NDVI and NDBI. This result signifies that rapid urbanization and infrastructural development have a direct consequence in modulating the regional climate over Indian cities.
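The spectral indices named here are simple normalized band differences; for Landsat TM/ETM+, red is band 3, near-infrared (NIR) band 4, shortwave-infrared (SWIR) band 5 and thermal band 6. A per-pixel sketch, assuming calibrated band values (the NDBaI form below is one common TM formulation, not necessarily the exact variant used in the study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: high over vegetation."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: positive over built-up surfaces."""
    return (swir - nir) / (swir + nir) if (swir + nir) else 0.0

def ndbai(swir, tir):
    """Normalized Difference Bareness Index (a common TM formulation
    from SWIR band 5 and thermal band 6): positive over bare land."""
    return (swir - tir) / (swir + tir) if (swir + tir) else 0.0
```

Thresholding NDBI and NDBaI per pixel is what separates built-up and bare-land areas, which can then be overlaid on the LST field to locate UHI pockets.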
Topp, Robert; Ng, Alex; Cybulski, Alyson; Skelton, Katalin; Papanek, Paula
2014-07-01
The purpose of this study was to compare the vascular responses in the brachial artery and the perceived intensity of two different formulas of topical menthol gels prior to and following a bout of maximum voluntary muscular contractions (MVMCs). Eighteen adults completed the same protocol on different days using blinded topical menthol gels (Old Formula and New Formula). Heart rate, brachial artery blood flow (mL/min), vessel diameter and reported intensity of sensation were measured at baseline (T1), 5 min after application of the gel to the upper arm (T2), and immediately following five MVMC hand grips (T3). The New Formula exhibited a significant decline in blood flow (-22.6%) between T1 and T2, which was not different from the nonsignificant decline under the Old Formula (-21.8%). Both formulas resulted in a significant increase in perceived intensity of sensation between T1 and T2. Blood flow increased significantly with the New Formula (488%) between T2 and T3 and nonsignificantly with the Old Formula (355%).
Kau, Thomas [Klinikum Klagenfurt, General Hospital of Klagenfurt, Institute of Diagnostic and Interventional Radiology, Klagenfurt (Austria); Klinikum Klagenfurt am Worthersee, Radiologie, Klagenfurt (Austria)]; Eicher, Wolfgang; Reiterer, Christian; Niedermayer, Martin; Rabitsch, Egon; Hausegger, Klaus A. [Klinikum Klagenfurt, General Hospital of Klagenfurt, Institute of Diagnostic and Interventional Radiology, Klagenfurt (Austria)]; Senft, Birgit [Section of Statistics, Reha Clinic for Mental Health, Klagenfurt (Austria)]
2011-08-15
To evaluate the accuracy of dual-energy CT angiography (DE-CTA) maximum intensity projections (MIPs) in symptomatic peripheral arterial occlusive disease (PAOD). In 58 patients, DE-CTA of the lower extremities was performed on dual-source CT. In a maximum of 35 arterial segments, the severity of the most stenotic lesion was graded (<10%, 10-49% and 50-99% luminal narrowing, or occlusion) independently by two radiologists, with DSA serving as the reference standard. In DSA, 52.3% of segments were significantly stenosed or occluded. Agreement of DE-CTA MIPs with DSA was good in the aorto-iliac and femoro-popliteal regions (κ = 0.72; κ = 0.66), moderate in the crural region (κ = 0.55), slight in pedal arteries (κ = 0.10) and very good in bypass segments (κ = 0.81). Accuracy was 88%, 78%, 74%, 55% and 82% for the respective territories and moderate (75%) overall, with good sensitivity (84%) and moderate specificity (67%). Sensitivity and specificity were 82% and 76% in claudicants and 84% and 61% in patients with critical limb ischaemia. While correlating well with DSA above the knee, the accuracy of DE-CTA MIPs appeared to be moderate in the calf and largely insufficient in calcified pedal arteries, especially in patients with critical limb ischaemia. (orig.)
Li, Xubin, E-mail: lixb@bjmu.edu.cn [Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin, Key Laboratory of Cancer Prevention and Therapy, Tianjin 300060 (China); Liu, Xia; Du, Xiangke [Department of Radiology, Peking University People's Hospital, Beijing 100044 (China); Ye, Zhaoxiang [Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin, Key Laboratory of Cancer Prevention and Therapy, Tianjin 300060 (China)
2014-05-15
Purpose: To evaluate the diagnostic performance of three-dimensional (3D) MR maximum intensity projection (MIP) in the assessment of synovitis of the hand and wrist in rheumatoid arthritis (RA), compared to 3D contrast-enhanced magnetic resonance imaging (CE-MRI). Materials and methods: Twenty-five patients with RA underwent MR examinations. 3D MR MIP images were derived from the enhanced images. MR images were reviewed by two radiologists for the presence and location of synovitis of the hand and wrist. The diagnostic sensitivity, specificity and accuracy of 3D MIP were calculated with 3D CE-MRI as the reference standard. Results: In all subjects, 3D MIP images directly and clearly showed the presence and location of synovitis with just one image. Synovitis demonstrated high signal intensity on MIP images. The κ-values for the detection of articular synovitis indicated excellent interobserver agreement using 3D MIP images (κ = 0.87) and CE-MR images (κ = 0.91), respectively. 3D MIP demonstrated a sensitivity, specificity and accuracy of 91.07%, 98.57% and 96.0%, respectively, for the detection of synovitis. Conclusion: 3D MIP can provide a whole overview of lesion locations and reliable diagnostic performance in the assessment of articular synovitis of the hand and wrist in patients with RA, which has potential value in clinical practice.
G Oquend
2008-09-01
On a sialitic Brown soil of the calcic Cambisol subtype, located at the «Calixto García» Livestock Production Enterprise in the Holguín province, seed production of Guinea grass (Panicum maximum Jacq.) was studied in an intensive cattle-fattening system under irrigation. The treatments were five varieties of Guinea grass: (A) Common; (B) Likoni; (C) Mombasa; (D) Tanzania; and (E) Tobiatá. The following methods were considered, in turn, as sub-treatments: (1) seeding with gamic seed; (2) planting with tillers; and (3) transplanting. The stocking rate was kept at 2 livestock units/ha. In seed production there were favorable interactions between the planting methods and the varieties: gamic seed with Guinea grass Likoni; tillers with Guinea grass Mombasa, Tanzania and Tobiatá; transplanting with common Guinea grass. Over the whole production system, seed production yielded an additional return of more than $1,000/ha without affecting animal production, which achieved gains above 800 g/animal/day and average outputs of 46 212 t of live-weight meat per fattening cycle. Guinea grass seed production in intensive cattle-fattening systems is considered feasible.
Półrolniczak, Marek; Kolendowicz, Leszek; Majkowska, Agnieszka; Czernecki, Bartosz
2017-02-01
The study analyzes the influence of atmospheric circulation on the urban heat island (UHI) and urban cold island (UCI) in Poznań. The analysis was conducted on the basis of temperature data from two measurement points, one in the city center and one at the Ławica airport (reference station), and on air-circulation data (Niedźwiedź's calendar of circulation types and National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalyses). Cases with a UHI constitute about 85 % of all data, while UCI phenomena appear with a frequency of 14 % a year. The UHI is more intense under anticyclonic circulation types: averaged over the year, UHI intensity is 1.2 °C under anticyclonic circulation but only 0.8 °C under cyclonic circulation. The UHI can occur in all seasons and at all hours of the day, usually under anticyclonic circulation types, and the cases of highest UHI intensity are related mostly to nighttime. UCI phenomena occurred almost exclusively in the daytime, most frequently in the colder part of the year under cyclonic circulation. The reanalysis data indicate that days with large UHI intensity (above 4, 5, and 6 °C) are related to anticyclonic circulation, which also promotes the formation of the strongest UCIs. Results based on both the reanalyses and the circulation-type data (Niedźwiedź's calendar) confirm that cases with the strongest UHI and UCI during the same day occur in a strong high-pressure system centered over Poland or central Europe.
Moon, Il-Ju; Kim, Sung-Hun; Klotzbach, Phil; Chan, Johnny C. L.
2016-06-01
Recently a pronounced global poleward shift in the latitude at which the maximum intensities of tropical cyclones (TCs) occur has been identified. Moon et al (2015 Environ. Res. Lett. 10 104004) reported that the poleward migration is significantly influenced by changes in interbasin frequency. These frequency changes are a larger contributor to the poleward shift than the intrabasin migration component. The strong role of interbasin frequency changes in the poleward migration also suggests that the poleward trend could reverse to an equatorward trend in the future due to multi-decadal variability that significantly impacts Northern Hemisphere TC frequency. In the accompanying comment, Kossin et al (2016 Environ. Res. Lett. 11 068001) questioned the novelty and robustness of our results by raising issues associated with subsampling, contributions from some basins to poleward migration, and data dependency. Here, we explain the originality and importance of our main findings, which are different from those of Kossin et al (2014 Nature 509 349-52), and reaffirm that our conclusions hold regardless of the issues that were raised.
Lu, Wei, E-mail: wlu@umm.edu [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States); Neuner, Geoffrey A.; George, Rohini; Wang, Zhendong; Sasor, Sarah [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States); Huang, Xuan [Research and Development, Care Management Department, Johns Hopkins HealthCare LLC, Glen Burnie, Maryland (United States); Regine, William F.; Feigenberg, Steven J.; D' Souza, Warren D. [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States)
2014-01-01
Purpose: To investigate whether coaching patients' breathing would improve the match between ITV_MIP (the internal target volume generated by contouring in the maximum intensity projection scan) and ITV_10 (generated by combining the gross tumor volumes contoured in the 10 phases of a 4-dimensional CT [4DCT] scan). Methods and Materials: Eight patients with a thoracic tumor and 5 patients with an abdominal tumor were included in an institutional review board-approved prospective study. Patients underwent 3 4DCT scans with: (1) free breathing (FB); (2) coaching using audio-visual (AV) biofeedback via the Real-Time Position Management system; and (3) coaching via a spirometer system (Active Breathing Coordinator or ABC). One physician contoured all scans to generate the ITV_10 and ITV_MIP. The match between ITV_MIP and ITV_10 was quantitatively assessed with volume ratio, centroid distance, root mean squared distance, and overlap/Dice coefficient. We investigated whether coaching (AV or ABC) or uniform expansions (1, 2, 3, or 5 mm) of ITV_MIP improved the match. Results: Although both AV and ABC coaching techniques improved frequency reproducibility and ABC improved displacement regularity, neither improved the match between ITV_MIP and ITV_10 over FB. On average, ITV_MIP underestimated ITV_10 by 19%, 19%, and 21%, with centroid distances of 1.9, 2.3, and 1.7 mm and Dice coefficients of 0.87, 0.86, and 0.88 for FB, AV, and ABC, respectively. Separate analyses indicated a better match for lung cancers or tumors not adjacent to high-intensity tissues. Uniform expansions of ITV_MIP did not correct for the mismatch between ITV_MIP and ITV_10. Conclusions: In this pilot study, audio-visual biofeedback did not improve the match between ITV_MIP and ITV_10. In general, ITV_MIP should be limited to lung cancers, and modification of ITV_MIP in each phase of the 4DCT data set is recommended.
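The volume ratio and Dice coefficient used above are simple set operations on the contoured voxels. A toy sketch on 1-D "volumes" (the voxel sets are invented to mimic the ~19% underestimation the study reports, not taken from it):

```python
def dice(mask_a, mask_b):
    """Dice overlap between two sets of voxel indices."""
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

def volume_ratio(mask_a, mask_b):
    """Ratio of the two contoured volumes (voxel counts)."""
    return len(mask_a) / len(mask_b)

# Toy voxel sets: the MIP-based ITV misses the margins of the reference ITV
itv10  = set(range(0, 100))    # reference ITV from all 10 phases
itvmip = set(range(10, 91))    # MIP-based ITV, 19% smaller
print(round(volume_ratio(itvmip, itv10), 2))  # → 0.81
print(round(dice(itvmip, itv10), 2))          # → 0.9
```

Real ITVs are 3-D binary masks, but the metrics reduce to exactly these intersection and cardinality counts.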
Ding, Wen Quan, E-mail: dingwenquan1982@163.com [Department of Hand Surgery, Hand Surgery Research Center, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China); Zhou, Xue Jun, E-mail: zxj0925101@sina.com [Department of Radiology, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China); Tang, Jin Bo, E-mail: jinbotang@yahoo.com [Department of Hand Surgery, Hand Surgery Research Center, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China); Gu, Jian Hui, E-mail: gujianhuint@163.com [Department of Hand Surgery, Hand Surgery Research Center, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China); Jin, Dong Sheng, E-mail: jindongshengnj@aliyun.com [Department of Radiology, Jiangsu Province Official Hospital, Nanjing, Jiangsu (China)
2015-06-15
Highlights: • 3D displays of peripheral nerves can be achieved by 2 MIP post-processing methods. • The median nerves' FA and ADC values can be accurately measured by using DTI6 data. • Adopting a 6-direction DTI scan and MIP can evaluate peripheral nerves efficiently. - Abstract: Objectives: To achieve 3-dimensional (3D) display of peripheral nerves in the wrist region by using maximum intensity projection (MIP) post-processing methods to reconstruct raw images acquired by a diffusion tensor imaging (DTI) scan, and to explore its clinical applications. Methods: We performed DTI scans in 6 (DTI6) and 25 (DTI25) diffusion directions on 20 wrists of 10 healthy young volunteers, 6 wrists of 5 patients with carpal tunnel syndrome, 6 wrists of 6 patients with nerve lacerations, and one patient with neurofibroma. The MIP post-processing methods employed 2 types of DTI raw images: (1) single-direction and (2) T2-weighted trace. The fractional anisotropy (FA) and apparent diffusion coefficient (ADC) values of the median and ulnar nerves were measured at multiple testing sites. Two radiologists used custom evaluation scales to assess the 3D nerve imaging quality independently. Results: In both DTI6 and DTI25, nerves in the wrist region could be displayed clearly by the 2 MIP post-processing methods. The FA and ADC values were not significantly different between DTI6 and DTI25, except for the FA values of the ulnar nerves at the level of the pisiform bone (p = 0.03). As to the imaging quality of each MIP post-processing method, there were no significant differences between DTI6 and DTI25 (p > 0.05). The imaging quality of single-direction MIP post-processing was better than that from T2-weighted traces (p < 0.05) because of the higher nerve signal intensity. Conclusions: Three-dimensional displays of peripheral nerves in the wrist region can be achieved by MIP post-processing of single-direction images and T2-weighted trace images for both DTI6 and DTI25.
Investigation of Urban Heat Island Intensity in Istanbul
Irem Bilgen, Simge; Unal, Yurdanur S.; Yuruk, Cemre; Goktepe, Nur; Diren, Deniz; Topcu, Sema; Mentes, Sibel; Incecik, Selahattin; Guney, Caner; Ozgur Dogru, Ahmet
2016-04-01
An urban heat island (UHI) is defined as the temperature difference between an urbanized area and its surroundings, arising from the local surface energy balance, since urban materials and built-up structures modify the heating and cooling rates of the ambient air. Istanbul is the largest city of Turkey, with over 14 million inhabitants, and has expanded drastically since 1965 as its population grew from 2 million to 14 million. In this study we investigate the impacts of urban expansion on meteorological variables in relation to the UHI effect in Istanbul. To estimate the strength of the UHI, temperature differences between urban and suburban stations are calculated using temperature observations from 6 stations for 1960-2013 and 34 stations for 2007-2012. The results show that the UHI intensity is stronger during the summer season and that Kartal experiences the intensified UHI effect more than the other stations. The daytime (nighttime) UHI intensity defined with respect to Şile (suburban) varies between 0.41 °C and 3.0 °C (1.02 °C and 2.18 °C). The atmospheric UHI usually reaches its highest intensity on summer nights, under calm air and a cloudless sky. Therefore, a total of 127 dry days with cloudiness less than 2/8 and wind speed less than 2 m/s were selected to estimate the strength of the UHI in Istanbul. The hourly temperature differences between a selected urban station (Pendik) and a rural station (Terkos) reach 5 °C in the daytime and 8 °C at night. Urbanization worsens the heat stress of urban areas, so it is important to investigate what types of changes in the urban landscape affect the near-surface climate and elevate the intensity of the UHI in the city. The relationship between urbanization and long-term modification of the urban climate of Istanbul is investigated by modeling the present-day spatial distribution of the urban heat load. Geographical data of the Istanbul Metropolitan Municipality
Liu Jin
2012-01-01
Background: To evaluate the accuracy of combined maximum and minimum intensity projection-based internal target volume (ITV) delineation in 4-dimensional (4D) CT scans for liver malignancies. Methods: 4D CT data with synchronized IV contrast were acquired from 15 liver cancer patients (4 hepatocellular carcinomas; 11 hepatic metastases). We used five approaches to determine ITVs: (1) ITVAllPhases: contouring the gross tumor volume (GTV) on each of the 10 respiratory phases of the 4D CT data set and combining these GTVs; (2) ITV2Phase: contouring the GTV on CT of the peak inhale phase (0%) and the peak exhale phase (50%) and then combining the two; (3) ITVMIP: contouring the GTV on the MIP, with modifications based on the physician's visual verification of contours in each respiratory phase; (4) ITVMinIP: contouring the GTV on the MinIP, with modification by the physician; (5) ITV2M: combining ITVMIP and ITVMinIP. ITVAllPhases was taken as the reference ITV, and the metrics used for comparison were the matching index (MI) and the under- and over-estimated volumes (Vunder and Vover). Results: 4D CT images were successfully acquired from all 15 patients and tumor margins were clearly discernable in all of them. Nine lesions appeared as low density on CT images and 6 as mixed. After comparison of the metrics, ITV2M was the most appropriate tool to contour the ITV for liver malignancies, with the highest MI (0.93 ± 0.04) and the lowest proportion of Vunder (0.07 ± 0.04). Moreover, tumor volume, three-dimensional target motion and the ratio of tumor vertical diameter to tumor motion magnitude in the cranio-caudal direction did not significantly influence the MI or the proportion of Vunder. Conclusion: ITV2M is recommended as a reliable method for generating ITVs from 4D CT data sets in liver cancer.
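The MIP and MinIP referred to above are voxelwise maximum and minimum projections across the respiratory phases, so the MIP captures the envelope of tumor motion while the MinIP captures the region the tumor occupies in every phase. A toy 1-D sketch with invented intensity profiles (tumor = 5, background = 0):

```python
def project(phases, op):
    """Voxelwise projection across respiratory phases (op = max or min)."""
    return [op(col) for col in zip(*phases)]

# Toy 1-D intensity profiles of a tumor shifting across 3 phases
phase_imgs = [
    [5, 5, 5, 0, 0],   # phase 1: tumor at the left
    [0, 5, 5, 5, 0],   # phase 2: mid-position
    [0, 0, 5, 5, 5],   # phase 3: tumor at the right
]
mip   = project(phase_imgs, max)   # envelope of the motion
minip = project(phase_imgs, min)   # region occupied in every phase
print(mip)    # → [5, 5, 5, 5, 5]
print(minip)  # → [0, 0, 5, 0, 0]
```

Combining contours drawn on both projections (the study's ITV2M) recovers information a single projection misses, which is why it matched the all-phases reference best.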
Kye, Heewon; Sohn, Bong-Soo; Lee, Jeongjin
2012-07-01
Maximum intensity projection (MIP) is an important visualization method that has been widely used for the diagnosis of enhanced vessels or bones by rotating or zooming MIP images. With the rapid spread of multidetector-row computed tomography (MDCT) scanners, MDCT scans of a patient generate a large data set. However, previous acceleration methods for MIP rendering of such data sets failed to generate MIP images at interactive rates. In this paper, we propose novel culling methods in both object and image space for interactive MIP rendering of large medical data sets. In object space, for the visibility test of a block, we propose an initial occluder derived from the preceding image to exploit temporal coherence and substantially increase the block culling ratio. In addition, we propose a hole-filling method using mesh generation and rendering to improve the culling performance during generation of the initial occluder. In image space, we find that there is a trade-off between the block culling ratio in object space and the culling efficiency in image space. We therefore classify the visible blocks into two types by their visibility and propose a balanced culling method that applies a different image-space culling algorithm to each type, exploiting this trade-off to improve the rendering speed. Experimental results on twenty CT data sets showed that our method achieved an average 3.85-fold speedup over the conventional bricking method without any loss of image quality. Using our visibility culling method, we achieved interactive GPU-based MIP rendering of large medical data sets.
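The core of object-space culling for MIP is that a block can be skipped entirely whenever its precomputed maximum intensity cannot raise any pixel the image already holds. A deliberately simplified 1-D sketch; the block layout, the whole-image `min` test, and the sample values are illustrative stand-ins for the paper's per-pixel occluder machinery:

```python
def mip_with_culling(blocks):
    """
    Toy MIP with object-space culling. `blocks` is a list of
    (precomputed_max, per_pixel_samples); all blocks cover every pixel.
    Returns the MIP image and how many blocks were actually visited.
    """
    image = [0.0] * len(blocks[0][1])
    visited = 0
    for block_max, samples in blocks:
        # Cull: the block cannot exceed any pixel already accumulated
        if block_max <= min(image):
            continue
        visited += 1
        for i, v in enumerate(samples):
            if v > image[i]:
                image[i] = v
    return image, visited

# Processing high-maximum blocks first makes later blocks easier to cull
blocks = [(9, [9, 1, 2]), (5, [5, 5, 5]), (3, [3, 3, 3])]
img, visited = mip_with_culling(blocks)
print(img, visited)   # → [9, 5, 5] 2  (the third block was culled)
```

The paper's contribution is a far sharper visibility test (an initial occluder carried over from the previous frame) plus an image-space counterpart, but the skip condition above is the underlying principle.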
Taniguchi, Daigo; Tokunaga, Daisaku; Oda, Ryo; Fujiwara, Hiroyoshi; Ikeda, Takumi; Ikoma, Kazuya; Kishida, Aiko; Yamasaki, Tetsuro; Kawahito, Yutaka; Seno, Takahiro; Ito, Hirotoshi; Kubo, Toshikazu
2014-07-01
Magnetic resonance imaging (MRI) with maximum intensity projection (MIP) is used to evaluate the hand in rheumatoid arthritis (RA). MIP yields clear visualization of synovitis over the entirety of the bilateral hands with a single image. In this study, we assessed synovitis with MIP images, clinical findings, and power Doppler (PD) findings to examine the clinical usefulness of MIP images for RA in the hand. Thirty RA patients were assessed for swelling and tenderness in the joints included in the DAS28, and both contrast-enhanced MRI for bilateral hands and ultrasonography for bilateral wrist and metacarpophalangeal (MCP) joints were performed. Articular synovitis was scored in MIP images, and the scores were compared with those for PD. The agreement on synovitis between MIP and conventional MR images was excellent. Palpation showed low sensitivity and high specificity compared with both MIP and PD images. There were joints that were positive in MIP images only, but there were no joints that were positive in PD images only. A statistically significant correlation between the scores of MIP and PD images was found. Furthermore, the agreement between grade 2 on MIP images and positive on PD images was 0.87 (κ = 0.73) for the wrist and 0.92 (κ = 0.57) for MCP joints. Using MIP images together with palpation makes detailed evaluation of synovitis of the hand in RA easy. MIP images may predict further joint damage, since they allow semiquantitative estimation of the degree of thickening of the synovial membrane.
Zhou, Decheng; Zhang, Liangxia; Hao, Lu; Sun, Ge; Liu, Yongqiang; Zhu, Chao
2016-02-15
Urban heat island (UHI) represents a major anthropogenic modification to the Earth system, and its relationship with urban development is poorly understood at a regional scale. Using Aqua MODIS data and Landsat TM/ETM+ images, we examined the spatiotemporal trends of the UHI effect (ΔT, relative to the rural reference) along the urban development intensity (UDI) gradient in 32 major Chinese cities from 2003 to 2012. We found that the daytime and nighttime ΔT increased significantly along the UDI gradient. These results highlight the urbanization effects on local climate across China and point to limitations on how such methods should be used to quantify UHI intensity over large areas. Furthermore, the impacts of urbanization on climate are complex, so future research should focus more on direct observation and physically based modeling to make credible predictions of the effects.
Sarapultseva, E I; Igolkina, J V; Litovchenko, A V
2009-04-01
Electromagnetic radiation at the mobile communication frequency (1 GHz), at the maximum energy flux density permitted in Russia (10 microW/cm(2)), causes serious functional disorders in the studied unicellular hydrobiont, the infusorian Spirostomum ambiguum: a reduction of its spontaneous motor activity. The form of the biological reaction is unusual: the effect is threshold-like and general, and does not depend on the duration of microwave exposure.
Gomez-Paccard, Miriam; Osete, Maria Luisa; Chauvin, Annick; Pérez-Asensio, Manuel; Jimenez-Castillo, Pedro
2014-05-01
Available European data indicate that during the past 2500 years there have been periods of rapid geomagnetic intensity fluctuations interspersed with periods of little change. The challenge now is to precisely describe these rapid changes. Owing to the difficulty of obtaining precisely dated heated materials for a high-resolution description of past geomagnetic field intensity changes, new high-quality archeomagnetic data from archeological heated materials found in well-defined superposed stratigraphic units are particularly valuable. In this work we report the archeomagnetic study of several groups of ceramic fragments from southeastern Spain that belong to 14 superposed stratigraphic levels corresponding to a surface no bigger than 3 m by 7 m. Between four and eight ceramic fragments were selected per stratigraphic unit. The ages of the pottery fragments range from the second half of the 7th to the 11th centuries. The dates were established by three radiocarbon dates and by archeological/historical constraints including typological comparisons and well-controlled stratigraphic constraints. Between two and four specimens per pottery fragment were studied. The classical Thellier and Thellier method, including pTRM checks and TRM anisotropy and cooling rate corrections, was used to estimate paleointensities at the specimen level. All accepted results correspond to well-defined single components of magnetization going toward the origin and to high-quality paleointensity determinations. From these experiments nine new high-quality mean intensities have been obtained. The new data provide an improved description of the sharp abrupt intensity changes that took place in this region between the 7th and the 11th centuries. The results confirm that several rapid intensity changes (of about ~15-20 µT/century) took place in Western Europe during the recent history of the Earth.
Kaganovich, Igor D., E-mail: ikaganov@pppl.gov [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Vay, Jean-Luc [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Friedman, Alex [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA 94550 (United States)
2012-06-21
Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the
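The scaling claimed above, that the compression ratio is set by the relative error in the velocity tilt (one-percent errors capping compression near a factor of one hundred), can be illustrated with a toy slice model. The drift geometry, error distribution, and all numbers below are illustrative assumptions, not the paper's configuration:

```python
import random

def compression_ratio(n=2000, tilt_err=0.01, seed=1):
    """
    Toy drift-compression model: a beam slice starting at z0 in [0, 1]
    gets the ideal linear velocity tilt v = (z_f - z0) / T that would
    focus every slice at z_f after drift time T, perturbed by a uniform
    relative error of magnitude `tilt_err`. The compression ratio is
    the initial pulse length (1.0) over the focused pulse length.
    """
    rng = random.Random(seed)
    z_f, drift_time = 2.0, 1.0
    finals = []
    for _ in range(n):
        z0 = rng.uniform(0.0, 1.0)
        v_ideal = (z_f - z0) / drift_time
        v = v_ideal * (1 + rng.uniform(-tilt_err, tilt_err))
        finals.append(z0 + v * drift_time)      # position at the focal time
    spread = max(finals) - min(finals)          # compressed pulse length
    return 1.0 / spread

print(compression_ratio(tilt_err=0.01) < 100)                               # → True
print(compression_ratio(tilt_err=0.001) > compression_ratio(tilt_err=0.01)) # → True
```

With perfect tilt every slice arrives at z_f simultaneously; the residual spread is proportional to the tilt error, so halving the error roughly doubles the achievable compression, consistent with the scaling the abstract describes (thermal spread, neglected here, sets the ultimate limit).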
L. Ocola
2008-01-01
Post-disaster reconstruction management of urban areas requires timely information on the ground-response microzonation to strong levels of ground shaking, to minimize the vulnerability of the rebuilt environment to future earthquakes. In this paper, a procedure is proposed to quantitatively estimate the severity of ground response in terms of peak ground acceleration, computed from macroseismic rating data, soil properties (acoustic impedance), and the predominant frequency of shear waves at a site. The basic mathematical relationships are derived from the properties of wave propagation in a homogeneous and isotropic medium. We define a Macroseismic Intensity Scale I_{MS} as the logarithm of the quantity of seismic energy that flows through a unit area normal to the direction of wave propagation in unit time. The derived constants that relate the I_{MS} scale and peak acceleration agree well with coefficients derived from a linear regression between MSK macroseismic ratings and peak ground acceleration for historical earthquakes recorded at a strong motion station at IGP's former headquarters since 1954. The procedure was applied to the 3 October 1974 Lima macroseismic intensity data at places where geotechnical data and predominant ground frequency information were available. The observed and computed peak acceleration values at nearby sites agree well.
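The chain of reasoning in the definition above can be sketched numerically: if I_MS is the base-10 logarithm of the energy flux, then for a plane wave the flux is on the order of Z·v², with Z the acoustic impedance and v the peak particle velocity, and harmonic motion at the predominant frequency f gives a_peak ≈ 2πf·v. All constants, units, and the plane-wave simplification below are assumptions for illustration, not the paper's calibrated coefficients:

```python
import math

def peak_acceleration(i_ms, impedance, freq_hz):
    """
    Illustrative inversion of an intensity scale defined as
    I_MS = log10(energy flux): recover the flux, convert to peak
    particle velocity via F ~ Z * v^2, then to peak acceleration
    via a ~ 2*pi*f * v (harmonic motion at the predominant frequency).
    """
    flux = 10.0 ** i_ms                      # energy flux per unit area/time
    v_peak = math.sqrt(flux / impedance)     # plane-wave particle velocity
    return 2 * math.pi * freq_hz * v_peak

# Hypothetical site: impedance 1e6 (SI), predominant frequency 2 Hz
print(peak_acceleration(5.0, 1e6, 2.0))
```

One unit of intensity multiplies the flux by 10 and therefore the predicted acceleration by √10 ≈ 3.16, which is the kind of log-linear relation the paper's regression against MSK ratings exploits.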
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Kuzuha, Yasuhisa; Sivapalan, Murugesu; Tomosugi, Kunio; Kishii, Tokuo; Komatsu, Yosuke
2006-04-01
Eagleson's classical regional flood frequency model is investigated. Our intention was not to improve the model, but to reveal previously unidentified important and dominant hydrological processes in it. The change of the coefficient of variation (CV) of annual maximum discharge with catchment area can be viewed as representing the spatial variance of floods in a homogeneous region. Several researchers have reported that the CV decreases as the catchment area increases, at least for large areas. On the other hand, Eagleson's classical studies have been known as pioneer efforts that combine the concept of similarity analysis (scaling) with the derived flood frequency approach. As we have shown, the classical model can reproduce the empirical relationship between the mean annual maximum discharge and catchment area, but it cannot reproduce the empirical decreasing CV-catchment area curve. Therefore, we postulate that previously unidentified hydrological processes would be revealed if the classical model were improved to reproduce the decreasing of CV with catchment area. First, we attempted to improve the classical model by introducing a channel network, but this was ineffective. However, the classical model was improved by introducing a two-parameter gamma distribution for rainfall intensity. What is important is not the gamma distribution itself, but those characteristics of spatial variability of rainfall intensity whose CV decreases with increasing catchment area. Introducing the variability of rainfall intensity into the hydrological simulations explains how the CV of rainfall intensity decreases with increasing catchment area. It is difficult to reflect the rainfall-runoff processes in the model while neglecting the characteristics of rainfall intensity from the viewpoint of annual flood discharge variances.
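The mechanism the authors introduce, a CV of rainfall intensity that decreases with catchment area, can be sketched by averaging independent two-parameter gamma draws over more and more sub-areas: the mean of n independent cells keeps the same expectation while its standard deviation shrinks roughly as 1/√n. The cell counts, gamma parameters, and independence assumption below are arbitrary illustrations:

```python
import random
import statistics

def cv_of_areal_mean(n_cells, n_years=2000, shape=2.0, scale=10.0, seed=0):
    """
    Coefficient of variation of catchment-average rainfall intensity
    when each of `n_cells` sub-areas draws an independent two-parameter
    gamma intensity each "year".
    """
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gammavariate(shape, scale)
                              for _ in range(n_cells))
             for _ in range(n_years)]
    return statistics.stdev(means) / statistics.fmean(means)

# Averaging over a larger "catchment" lowers the CV (roughly as 1/sqrt(n))
print(cv_of_areal_mean(1) > cv_of_areal_mean(4) > cv_of_areal_mean(16))  # → True
```

A single gamma(shape=2) cell has CV = 1/√2 ≈ 0.71; spatial correlation in real rainfall would slow, but not eliminate, this decline with area, which is the behavior the improved model needed to reproduce.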
Johnston, James D. [University of Saskatchewan, Department of Mechanical Engineering, Saskatoon, SK (Canada); University of British Columbia, Department of Mechanical Engineering, Vancouver, BC (Canada); Kontulainen, Saija A. [University of Saskatchewan, College of Kinesiology, Saskatoon, SK (Canada); Masri, Bassam A.; Wilson, David R. [University of British Columbia, Department of Orthopaedics, Vancouver, BC (Canada)
2010-09-15
The objective was to identify subchondral bone density differences between normal and osteoarthritic (OA) proximal tibiae using computed tomography osteoabsorptiometry (CT-OAM) and computed tomography topographic mapping of subchondral density (CT-TOMASD). Sixteen intact cadaver knees from ten donors (8 male: 2 female; mean age: 77.8, SD: 7.4 years) were categorized as normal (n = 10) or OA (n = 6) based upon CT reconstructions. CT-OAM assessed maximum subchondral bone mineral density (BMD). CT-TOMASD assessed average subchondral BMD across three layers (0-2.5, 2.5-5 and 5-10 mm) measured in relation to depth from the subchondral surface. Regional analyses of CT-OAM and CT-TOMASD included: medial BMD, lateral BMD, and average BMD of a 10-mm diameter area that searched each medial and lateral plateau for the highest "focal" density present within each knee. Compared with normal knees, both CT-OAM and CT-TOMASD demonstrated an average of 17% greater whole medial compartment density in OA knees (p < 0.016). CT-OAM did not distinguish focal density differences between OA and normal knees (p > 0.05). CT-TOMASD focal region analyses revealed an average of 24% greater density in the 0- to 2.5-mm layer (p = 0.003) and 36% greater density in the 2.5- to 5-mm layer (p = 0.034) in OA knees. Both CT-OAM and TOMASD identified higher medial compartment density in OA tibiae compared with normal tibiae. In addition, CT-TOMASD indicated greater focal density differences between normal and OA knees with increased depth from the subchondral surface. Depth-specific density analyses may help identify and quantify small changes in subchondral BMD associated with OA disease onset and progression. (orig.)
Juan Wang
2015-03-01
Urban heat islands (UHIs) created through urbanization can have negative impacts on the lives of people living in cities. They may also vary spatially and temporally over a city. There is, thus, a need for greater understanding of these patterns and their causes. While previous UHI studies focused on only a few cities and/or several explanatory variables, this research provides a comprehensive and comparative characterization of the diurnal and seasonal variation in surface UHI intensities (SUHIIs) across 67 major Chinese cities. The factors associated with the SUHII were assessed by considering a variety of related social, economic and natural factors using a regression tree model. Obvious seasonal variation was observed for the daytime SUHII, and the diurnal variation in SUHII varied seasonally across China. Interestingly, the SUHII varied significantly in character between northern and southern China. Southern China experienced more intense daytime SUHIIs, while the opposite was true for nighttime SUHIIs. Vegetation had the greatest effect in the daytime in northern China. In southern China, annual electricity consumption and the number of public buses were found to be important. These results have important theoretical significance and may be of use to mitigate UHI effects.
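The regression-tree analysis mentioned above can be illustrated with a toy single-split example. All numbers below (a hypothetical "vegetation cover" explanatory variable and daytime SUHII values) are invented for illustration and are not data from the study; this is only a sketch of the core mechanism of a regression tree, namely choosing the split that minimizes within-branch squared error.

```python
# Toy regression-tree split: find the threshold on one explanatory
# variable that best separates cities by SUHII (lowest summed squared
# error around per-branch means). Data are hypothetical.

def best_split(x, y):
    """Return (threshold, sse) for the single split minimizing the
    summed squared error of per-branch means."""
    pairs = sorted(zip(x, y))
    best = (None, float("inf"))
    for i in range(1, len(pairs)):
        left = [v for _, v in pairs[:i]]
        right = [v for _, v in pairs[i:]]
        sse = sum((v - sum(left) / len(left)) ** 2 for v in left) \
            + sum((v - sum(right) / len(right)) ** 2 for v in right)
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        if sse < best[1]:
            best = (thr, sse)
    return best

veg = [0.1, 0.2, 0.3, 0.6, 0.7, 0.8]    # hypothetical vegetation cover
suhii = [3.0, 2.8, 2.9, 1.2, 1.0, 1.1]  # hypothetical daytime SUHII (deg C)
thr, sse = best_split(veg, suhii)
print(thr)  # split falls between the low- and high-vegetation cities
```

A full regression tree applies this split search recursively over many candidate variables (electricity consumption, public buses, etc.), which is how variable importance rankings like those reported in the abstract are obtained.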
Gadda, Davide; Vannucchi, Letizia; Niccolai, Franco; Neri, Anna T.; Carmignani, Luca; Pacini, Patrizio [Ospedale del Ceppo, U.O. Radiodiagnostica, Pistoia (Italy)
2005-12-01
Maximum intensity projection reconstructions from 2.5 mm unenhanced multidetector computed tomography axial slices were obtained from 49 patients within the first 6 h of anterior-circulation cerebral strokes to identify different patterns of the dense artery sign and their prognostic implications for location and extent of the infarcted areas. The dense artery sign was found in 67.3% of cases. Increased density of the whole M1 segment with extension to M2 of the middle cerebral artery was associated with a wider extension of cerebral infarcts in comparison to the M1 segment alone or distal M1 and M2. A dense sylvian branch of the middle cerebral artery pattern was associated with a more restricted extension of infarct territory. We found 62.5% of patients without a demonstrable dense artery to have a limited peripheral cortical or capsulonuclear lesion. In patients with 7-10 points on the Alberta Stroke Programme Early Computed Tomography Score and a dense proximal MCA in the first hours of ictus, the mean decrease in the score between baseline and follow-up was 5.09 ± 1.92 points. In conclusion, maximum intensity projections from thin-slice images can be quickly obtained from standard computed tomography datasets using a multidetector scanner and are useful in identifying and correctly localizing the dense artery sign, with prognostic implications for the extent of cerebral damage. (orig.)
Measuring Physical Activity Intensity
张玖霞; 方杰
2011-01-01
Taking the notable results achieved by Meihekou City in intensive, large-scale farmland management as its starting point, this paper analyzes how they were obtained: government guidance promoting land transfer, preferential support policies creating conditions for large-scale operation, and accelerated transfer of rural labor opening space for expanding the scale of operation. At the same time, in view of the many non-standard practices in Meihekou's land-transfer process, and from the perspective of maximizing the efficiency of land use, measures are proposed for how Meihekou can carry out large-scale land operation well.
Jacira Porto dos Santos
2012-04-01
In the Universal Soil Loss Equation (USLE), erosivity is the factor related to rain and expresses its potential to cause soil erosion; computing it requires the rain's kinetic energy and the maximum rainfall intensities over a 30-min duration. Thus, the aim of this study was to verify and quantify the impact of the rain duration, considering 15 and 30 min, on the USLE erosivity factor. To achieve this, 863 rain gauge records were used, covering the period from 1983 to 1998 in the city of Pelotas, RS, obtained from the Agrometeorological Station - Covenant EMBRAPA/UFPel, INMET (31°51′S; 52°21′W; altitude of 13.2 m). With the records, the erosivity values were estimated from the maximum rainfall intensities during the period evaluated. The average annual values of erosivity were 2551.3 MJ ha-1 h-1 yr-1 and 1406.1 MJ ha-1 h-1 yr-1, for the average intensities of 6.40 mm h-1 and 3.74 mm h-1, in durations of 15 and 30 min, respectively. The results of this study showed that the percentage of erosive rainfalls in relation to the total precipitation was 91.0%, and that the erosivity was influenced by the duration of the maximum intensity of rain.
Intensity and Pattern of Land Surface Temperature in Hat Yai City, Thailand
Poonyanuch RUTHIRAKO
2014-07-01
Land Surface Temperature (LST) is an important factor in global climate. LST is governed by surface heat fluxes, which are affected by urbanization. In order to understand urban climate, LST needs to be examined. This study aimed to investigate the intensity and pattern of LST and examine the relationships between LST and the characteristics of urban land use, indices, and population density in Hat Yai City. Landsat 5 TM images were used for interpretation of land use characteristics and derivation of LST, the normalized difference built-up index (NDBI) and the normalized difference vegetation index (NDVI). The characteristics of land use were classified into 4 types: commercial/high density residential, medium density residential, minimum density residential, and vegetation cover/park. The average, maximum and minimum LST derived from Landsat 5 TM were 25.9, 33.7 and 15.8 °C, respectively. The areas with high LST were located principally in central built-up areas, slightly northwest-southeast of the study area, including the commercial center and the newly expanded residential areas. The LST pattern was well related to land use types and population density. The relationship between LST and NDVI, however, portrayed a negative correlation, while that between LST and NDBI highlighted a positive correlation. It is concluded that NDVI and NDBI can be used to evaluate the risk of Urban Heat Island (UHI) formation and may help city managers better prepare for possible impacts of climate change.
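The two indices used above are simple band ratios. For Landsat 5 TM, band 3 is red, band 4 is near-infrared (NIR) and band 5 is shortwave infrared (SWIR); the pixel reflectance values below are illustrative only, not from the study:

```python
# NDVI and NDBI as normalized band-difference ratios from Landsat 5 TM
# surface reflectances. Reflectance values are made up for illustration.

def ndvi(nir, red):
    """Normalized difference vegetation index: high over vegetation."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """Normalized difference built-up index: positive over built-up areas."""
    return (swir - nir) / (swir + nir)

vegetated = dict(red=0.05, nir=0.40, swir=0.15)  # hypothetical park pixel
built_up = dict(red=0.20, nir=0.25, swir=0.35)   # hypothetical urban pixel

print(round(ndvi(vegetated["nir"], vegetated["red"]), 2))  # 0.78
print(round(ndbi(built_up["swir"], built_up["nir"]), 2))   # 0.17
```

The opposite signs of the two indices over vegetated versus built-up pixels are what produce the negative LST-NDVI and positive LST-NDBI correlations reported in the abstract.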
Lief, Aram Parrish
In 2005, Hurricane Katrina's diverse impacts on the Greater New Orleans area included damaged and destroyed trees and other despoiled vegetation, which also increased the exposure of artificial and bare surfaces, known contributors to the climatic phenomenon known as the urban heat island (UHI). This is an investigation of the UHI in the aftermath of Hurricane Katrina, entailing the analysis of pre- and post-Katrina thermal imagery of the study area, including changes to surface heat patterns and vegetative cover. Imagery from Landsat TM was used to show changes to the pattern and intensity of the UHI effect caused by an extreme weather event. Using remote sensing visualization methods, in situ data, and local knowledge, the author found there was a measurable change in the pattern and intensity of the New Orleans UHI effect, as well as concomitant changes to vegetative land cover. This finding may be relevant for urban planners and citizens, especially in the context of the recovery of a coastal city from a large-scale disaster, regarding future weather events and other natural and human impacts.
Fuad Julardžija
2014-04-01
Introduction: Magnetic resonance cholangiopancreatography (MRCP) is a method that allows noninvasive visualization of the pancreatobiliary tree and does not require contrast application. It is a modern method based on heavily T2-weighted imaging (hydrography), which uses bile and pancreatic secretions as a natural contrast medium. Certain weaknesses in the quality of demonstration of the pancreatobiliary tract can be observed in addition to its good characteristics. Our aim was to compare 3D maximum intensity projection (MIP) reconstruction and the 2D T2 Half-Fourier Acquisition Single-Shot Turbo Spin-Echo (HASTE) sequence in magnetic resonance cholangiopancreatography. Methods: During a period of one year, 51 patients underwent MRCP on a 3T "Trio" system. Patients of different sex and age structure were included, both outpatient and hospitalized. 3D MIP reconstruction and the 2D T2 HASTE sequence were used according to standard scanning protocols. Results: There were 45.1% (n = 23) male and 54.9% (n = 28) female patients, with an age range from 17 to 81 years. The 2D T2 HASTE sequence was more susceptible to the presence of respiratory artifacts, seen in 64% of patients, compared to 3D MIP reconstruction, with standard error 0.09, result significance indication p = 0.129 and confidence interval 0.46 to 0.81. The 2D T2 HASTE sequence is more sensitive and superior for pancreatic duct demonstration compared to 3D MIP reconstruction, with standard error 0.07, result significance indication p = 0.01 and confidence interval 0.59 to 0.87. Conclusion: In order to make a qualitative demonstration and analysis of the hepatobiliary and pancreatic system on MR, both the 2D T2 HASTE sequence in the transversal plane and 3D MIP reconstruction are required.
The Impact of the Urban Heat Island during an Intense Heat Wave in Oklahoma City
Jeffrey B. Basara
2010-01-01
During late July and early August 2008, an intense heat wave occurred in Oklahoma City. To quantify the impact of the urban heat island (UHI) in Oklahoma City on observed and apparent temperature conditions during the heat wave event, this study used observations from 46 locations in and around Oklahoma City. The methodology utilized composite values of atmospheric conditions for three primary categories defined by population and general land use: rural, suburban, and urban. The results of the analyses demonstrated that a consistent UHI existed during the study period whereby the composite temperature values within the urban core were approximately 0.5 °C warmer during the day than the rural areas and over 2 °C warmer at night. Further, when the warmer temperatures were combined with ambient humidity conditions, the composite values consistently revealed even warmer heat-related variables within the urban environment as compared with the rural zone.
Weekly cycles in peak time temperatures and urban heat island intensity
Earl, Nick; Simmonds, Ian; Tapper, Nigel
2016-07-01
Regular diurnal and weekly cycles (WCs) in temperature provide valuable insights into the consequences of anthropogenic activity on the urban environment. Different locations experience a range of identified WCs with very different structures. Two important sources of urban heat are those associated with the effect of large urban structures on the radiation budget and energy storage, and those from the heat generated as a consequence of anthropogenic activity. The former forcing remains relatively constant, but a WC appears in the latter. WCs for specific times of day and for the urban heat island (UHI) have not been analysed heretofore. We use three-hourly surface (2 m) temperature data to analyse the WCs of seven major Australian cities at different times of day and to determine to what extent the UHI of one of our major cities (Melbourne) exhibits a WC. We show that the WC of temperature in major cities differs according to the time of day and that the UHI intensity of Melbourne varies on a weekly cycle. This provides crucial information that can contribute toward the push for healthier urban environments in the face of a more extreme climate.
Bae, Yun Jung; Choi, Byung Se; Yoon, Yeon Hong; Woo, Leonard Sun; Jung, Cheol Kyu; Kim, Jae Hyoung [Dept. of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)]; Lee, Kyung Mi [Dept. of Radiology, Kyung Hee University College of Medicine, Kyung Hee University Hospital, Seoul (Korea, Republic of)]
2017-08-01
To evaluate the diagnostic benefits of 5-mm maximum intensity projection of improved motion-sensitized driven-equilibrium prepared contrast-enhanced 3D T1-weighted turbo-spin echo imaging (MIP iMSDE-TSE) in the detection of brain metastases. The imaging technique was compared with 1-mm images of iMSDE-TSE (non-MIP iMSDE-TSE), 1-mm contrast-enhanced 3D T1-weighted gradient-echo imaging (non-MIP 3D-GRE), and 5-mm MIP 3D-GRE. From October 2014 to July 2015, 30 patients with 460 enhancing brain metastases (size > 3 mm, n = 150; size ≤ 3 mm, n = 310) were scanned with non-MIP iMSDE-TSE and non-MIP 3D-GRE. We then performed 5-mm MIP reconstruction of these images. Two independent neuroradiologists reviewed these four sequences. Their diagnostic performance was compared using the following parameters: sensitivity, reading time, and figure of merit (FOM) derived by jackknife alternative free-response receiver operating characteristic analysis. Interobserver agreement was also tested. The mean FOM (all lesions, 0.984; lesions ≤ 3 mm, 0.980) and sensitivity ([reader 1: all lesions, 97.3%; lesions ≤ 3 mm, 96.2%], [reader 2: all lesions, 97.0%; lesions ≤ 3 mm, 95.8%]) of MIP iMSDE-TSE was comparable to the mean FOM (0.985, 0.977) and sensitivity ([reader 1: 96.7, 99.0%], [reader 2: 97, 95.3%]) of non-MIP iMSDE-TSE, but they were superior to those of non-MIP and MIP 3D-GREs (all, p < 0.001). The reading time of MIP iMSDE-TSE (reader 1: 47.7 ± 35.9 seconds; reader 2: 44.7 ± 23.6 seconds) was significantly shorter than that of non-MIP iMSDE-TSE (reader 1: 78.8 ± 43.7 seconds, p = 0.01; reader 2: 82.9 ± 39.9 seconds, p < 0.001). Interobserver agreement was excellent (κ > 0.75) for all lesions in both sequences. MIP iMSDE-TSE showed high detectability of brain metastases. Its detectability was comparable to that of non-MIP iMSDE-TSE, but it was superior to the detectability of non-MIP/MIP 3D-GREs. With a shorter reading time, the false-positive results of MIP i
Mohan, Manju; Kikegawa, Yukihiro; Gurjar, B. R.; Bhati, Shweta; Kolli, Narendra Reddy
2013-05-01
Urban heat island (UHI) intensities have been assessed based on in situ measurements and satellite-derived observations for the megacity Delhi during a selected period in March 2010. A network of micrometeorological observational stations was set up across the city. Site selection for stations was based on dominant land use-land cover (LULC) classification. Observed UHI intensities could be classified into high, medium and low categories, which overall correlated well with the LULC categories, viz. dense built-up, medium dense built-up and green/open areas, respectively. Dense urban areas and highly commercial areas were observed to have the highest UHI, with the maximum hourly magnitude peaking at 10.7 °C and the average daily maximum UHI reaching 8.3 °C. UHI obtained in the study was also compared with satellite-derived land surface temperatures (LST). UHI based on in situ ambient temperatures and satellite-derived land surface temperatures compare reasonably during nighttime in terms of UHI magnitude and hotspots. However, the relation was found to be poor during daytime. Further, MODIS-derived LSTs showed overestimation during daytime and underestimation during nighttime when compared with in situ skin temperature measurements. The impact of LULC was also reflected in the difference between ambient temperature and skin temperature at the observation stations, as built-up canopies reported the largest gradient between air and skin temperature. Also, a comparison of UHI based on intra-city spatial temperature variations vis-à-vis UHI based on a reference rural site temperature indicated that UHI can be computed with respect to the station measuring the lowest temperature within the urban area in the absence of a reference station in a rural area close to the study area. Comparison with the maximum and average UHI of other cities revealed that the UHI in Delhi is comparable to that of other major cities of the world such as London, Tokyo and Beijing, and calls for mitigation action.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex
2012-06-01
Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔΕb. In the presence of large voltage errors, δU≫ΔEb, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
Massidda, Scott [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Kaganovich, Igor D., E-mail: ikaganov@pppl.gov [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Startsev, Edward A.; Davidson, Ronald C. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Lidia, Steven M.; Seidl, Peter [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Friedman, Alex [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA 94550 (United States)
2012-06-21
Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
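The scaling stated in the abstract, a maximum compression ratio inversely proportional to the geometric mean of the relative velocity-modulation error and the relative intrinsic energy spread, can be written as a one-line numeric sketch. The proportionality constant and the example numbers below are illustrative, not values from the NDCX-I study:

```python
import math

# Scaling sketch: C_max ~ k / sqrt(eps_v * eps_E), where eps_v is the
# relative velocity-modulation error and eps_E the relative intrinsic
# energy spread. k = 1 here purely for illustration.

def max_compression(rel_voltage_error, rel_energy_spread, k=1.0):
    return k / math.sqrt(rel_voltage_error * rel_energy_spread)

# Illustrative numbers only: a 1% voltage error and a 0.01% energy spread.
print(round(max_compression(1e-2, 1e-4), 1))  # 1000.0
```

The geometric-mean form means halving either error source alone improves the bound only by a factor of about 1.4; both must shrink together for large gains.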
Carlos Augusto de Miranda Gomide
2002-11-01
defoliation, T2 - total defoliation, T3 - removal of the last expanded leaf and T4 - removal of the two lowest expanded leaves. In all cases primary tillers were completely defoliated by cutting at 8 cm height from soil level. The variables assessed were: leaf expansion rate, root growth, relative growth rate (RGR), net assimilation rate (NAR) and leaf area ratio (LAR) at the ages of 2, 5, 9 and 16 days of regrowth, stem and root total non-structural carbohydrate content, and maximum leaf photosynthesis rate at the ages of 2, 6 and 13 days of regrowth. There were five replications, according to a completely randomized design. There was no difference in the photosynthetic rate among the 3 expanded leaves considered at defoliation time. The photosynthetic rate of any leaf remaining after defoliation increased initially, but had declined by the 13th day of regrowth. Stem total non-structural carbohydrate content dropped in response to leaf removal, mainly in the total defoliation treatment, which also brought about reduced root growth. Still, the completely defoliated plants had restored their RGR by the 16th day of regrowth due to a high leaf area expansion rate and leaf area ratio.
S. Weber
2017-07-01
ELI-Beamlines (ELI-BL), one of the three pillars of the Extreme Light Infrastructure endeavour, will be in a unique position to perform research in high-energy-density physics (HEDP), plasma physics and ultra-high-intensity (UHI) (>10²² W/cm²) laser-plasma interaction. Recently the need for HED laboratory physics was identified, and the P3 (plasma physics platform) installation under construction in ELI-BL will be an answer. The ELI-BL 10 PW laser makes possible fundamental research topics from high-field physics to new extreme states of matter such as radiation-dominated ones, high-pressure quantum ones, warm dense matter (WDM) and ultra-relativistic plasmas. HEDP is of fundamental importance for research in the field of laboratory astrophysics and inertial confinement fusion (ICF). Reaching such extreme states of matter now and in the future will depend on the use of plasma optics for amplifying and focusing laser pulses. This article will present the relevant technological infrastructure being built in ELI-BL for HEDP and UHI, and gives a brief overview of some research under way in the fields of UHI, laboratory astrophysics, ICF, WDM, and plasma optics.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Maximum intensity of rarefaction shock waves for dense gases
Guardone, A.; Zamfirescu, C.; Colonna, P.
2009-01-01
Modern thermodynamic models indicate that fluids consisting of complex molecules may display non-classical gasdynamic phenomena such as rarefaction shock waves (RSWs) in the vapour phase. Since the thermodynamic region in which non-classical phenomena are physically admissible is finite in terms of
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
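The Mean Energy Model mentioned above has a well-known closed form: over a finite alphabet with "energies" E_i, the maximum-entropy distribution subject to a fixed mean energy is the Gibbs form p_i proportional to exp(-beta E_i), with beta chosen to hit the mean. The sketch below (energies and target mean are illustrative) solves for beta by bisection, exploiting the fact that the mean energy is monotone decreasing in beta:

```python
import math

# Maximum entropy under a mean-"energy" constraint: the Gibbs form
# p_i ∝ exp(-beta * E_i), with beta solved by bisection.

def gibbs(energies, beta):
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def solve_beta(energies, target_mean, lo=-50.0, hi=50.0):
    # The mean energy decreases monotonically in beta, so bisect.
    for _ in range(200):
        mid = (lo + hi) / 2
        p = gibbs(energies, mid)
        m = sum(pi * e for pi, e in zip(p, energies))
        if m > target_mean:
            lo = mid   # mean too high -> increase beta
        else:
            hi = mid
    return (lo + hi) / 2

E = [0.0, 1.0, 2.0]
beta = solve_beta(E, 1.0)        # mean at the midpoint of symmetric energies
p = gibbs(E, beta)
print([round(x, 3) for x in p])  # ≈ [0.333, 0.333, 0.333]
```

With the symmetric energies and midpoint mean above, the constraint is uninformative and MaxEnt recovers the uniform distribution (beta ≈ 0); lowering the target mean pushes beta positive and the mass toward low energies, the usual Boltzmann behavior.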
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
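The key property motivating the MCC objective above is that a Gaussian kernel between predictions and labels saturates for large errors, so outlying labels barely move the objective, unlike squared loss. A minimal sketch of the correntropy measure itself (kernel width and data are illustrative; this is not the paper's full predictor-learning algorithm):

```python
import math

# Correntropy between predictions and labels: the mean of a Gaussian
# kernel of the residuals. Outliers are down-weighted because the
# kernel saturates near zero for large residuals.

def correntropy(pred, true, sigma=1.0):
    return sum(math.exp(-(p - t) ** 2 / (2 * sigma ** 2))
               for p, t in zip(pred, true)) / len(pred)

clean = [0.1, -0.2, 0.0]
noisy = [0.1, -0.2, 5.0]   # one outlying prediction error
target = [0.0, 0.0, 0.0]
print(correntropy(clean, target) > correntropy(noisy, target))  # True
```

Maximizing this quantity (plus a regularizer on the predictor parameters, as the abstract describes) therefore fits the bulk of the data while effectively ignoring grossly mislabeled samples, whereas a squared-loss fit would be dragged toward them.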
José Carlos Ferreira
2005-12-01
The objective of this phase of the work was to estimate parameters for monthly equations for estimating maximum-intensity precipitation over intervals of 5, 10, 15, 20, 25, 30 and 60 minutes for 165 locations in the State of São Paulo. Starting from monthly data of 31-year historical series of maximum "one-day" precipitation, the Gumbel probability distribution was used to calculate the probability of occurrence of extreme values in each month. Using the methodology proposed by Occhipinti & Santos (1966), the maximum "one-day" rains were disaggregated into maximum-intensity precipitation over 24 hours and over the seven time intervals described above, for each of the 165 locations and for each month. The alpha and beta parameters were calculated for each of the seven rainfall-duration intervals, with F(x) = 90%, for each of the 165 locations. The maximum "one-day" precipitation series were submitted to the Kolmogorov-Smirnov test, confirming a good fit to the Gumbel distribution. The methodology showed good performance, considering that the relative percentage differences between the maximum precipitations obtained with the alpha and beta parameters for 25 locations and those obtained with the Occhipinti methodology were in general smaller than 0.5%.
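The Gumbel step used above can be sketched briefly. Given location and scale parameters estimated from an annual- or monthly-maximum series, the value not exceeded with probability F is x_F = mu - scale * ln(-ln F). The sample below is synthetic and the method-of-moments estimators are a common textbook choice, not necessarily the fitting method of the paper:

```python
import math

# Gumbel quantile for extreme precipitation: x_F = mu - scale * ln(-ln F).
# The small annual-maximum sample (mm) is invented for illustration.

def gumbel_quantile(mu, scale, F):
    return mu - scale * math.log(-math.log(F))

sample = [48.0, 55.0, 62.0, 51.0, 70.0, 58.0, 65.0, 53.0]
n = len(sample)
mean = sum(sample) / n
sd = (sum((x - mean) ** 2 for x in sample) / (n - 1)) ** 0.5
scale = sd * 6 ** 0.5 / math.pi   # method-of-moments scale estimate
mu = mean - 0.5772 * scale        # Euler-Mascheroni constant correction
x90 = gumbel_quantile(mu, scale, 0.90)
print(x90 > mean)  # the 90% non-exceedance quantile exceeds the mean
```

With F(x) = 90% as in the study, the quantile sits well above the sample mean, which is the point of fitting an extreme-value distribution rather than using averages.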
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that mitigates intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
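The classic MaxEnt "function of the constraint values" can be illustrated with the standard dice example (an assumed illustrative case, not taken from the paper): given a die whose mean face value is constrained, solve for the Lagrange multiplier and obtain the point probabilities. The generalized approach would place a density on `mean_target` and propagate it through this function:

```python
import math

def maxent_die(mean_target, lo=-10.0, hi=10.0, tol=1e-12):
    """Classic MaxEnt point probabilities for a six-sided die with a
    constrained mean, found by bisection on the Lagrange multiplier."""
    faces = range(1, 7)

    def mean_of(lam):
        ws = [math.exp(lam * i) for i in faces]
        z = sum(ws)
        return sum(i * w for i, w in zip(faces, ws)) / z

    # mean_of is monotonically increasing in lam, so bisection converges.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_of(mid) < mean_target:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    ws = [math.exp(lam * i) for i in faces]
    z = sum(ws)
    return [w / z for w in ws]

p = maxent_die(4.5)   # constrained mean of 4.5 instead of the fair 3.5
```

With the mean pushed above 3.5, the resulting distribution tilts exponentially toward the higher faces, which is exactly the point-probability output the paper contrasts with its density-valued generalization.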
Stochastic conditional intensity processes
Bauwens, Luc; Hautsch, Nikolaus
2006-01-01
model allows for a wide range of (cross-)autocorrelation structures in multivariate point processes. The model is estimated by simulated maximum likelihood (SML) using the efficient importance sampling (EIS) technique. By modeling price intensities based on NYSE trading, we provide significant evidence...
Elis Dener Lima Alves
2017-02-01
This study analyzes the influence of urban-geographical variables on determining heat islands and proposes a model to estimate and spatialize the maximum intensity of urban heat islands (UHI). Simulations of the UHI based on an increase in the normalized difference vegetation index (NDVI), using multiple linear regression, in Iporá (Brazil) are also presented. The results showed that the UHI intensity of this small city tended to be lower than that of bigger cities. Urban geometry and vegetation (UI and NDVI) were the variables that contributed the most to explaining the variability of the maximum UHI intensity. Areas located in valleys had lower thermal values, suggesting a cool island effect. With the increase in NDVI in the central area of maximum UHI, there was a significant decrease in its intensity and size (a 45% area reduction). Notably, it was possible to spatialize the UHI over the whole urban area by using multiple linear regression, providing an analysis of the urban area from urban-geographical variables and thus performing prognostic simulations that can be adapted to other small tropical cities.
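The regression step can be sketched as an ordinary least-squares fit of maximum UHI intensity on urban-geographical predictors. All values below are synthetic, chosen so the fit is exact; they only mimic the abstract's setup (a positive urbanization effect, a negative vegetation effect):

```python
import numpy as np

# Hypothetical predictors: columns = [urbanization index UI, NDVI];
# response y = maximum UHI intensity (deg C). Synthetic, exactly linear data:
# y = 0.5 + 3*UI - 2*NDVI.
X = np.array([[0.9, 0.2], [0.7, 0.5], [0.5, 0.3],
              [0.3, 0.6], [0.8, 0.4], [0.4, 0.2]])
y = np.array([2.8, 1.6, 1.4, 0.2, 2.1, 1.3])

# Multiple linear regression via least squares, with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, b_ui, b_ndvi = coef
# b_ndvi < 0: raising NDVI lowers the estimated maximum UHI intensity,
# matching the simulated NDVI-increase experiment in the abstract.
```

Applying the fitted equation to a grid of UI and NDVI values is what "spatializing" the UHI amounts to in this sketch.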
Gawuć, Lech
2017-04-01
The Urban Heat Island (UHI) is a direct consequence of the altered energy balance in urban areas (Oke 1982). Significant effort has been put into understanding air temperature variability in urban areas and its underlying mechanisms (Arnfield 2003, Grimmond 2006, Stewart 2011, Barlow 2014). However, studies concerned with surface temperature are less frequent. Voogt & Oke (2003) therefore proposed the term "Surface Urban Heat Island (SUHI)", which is analogous to the UHI and is defined as the difference in land surface temperature (LST) between urban and rural areas. The SUHI is a phenomenon of high spatial as well as high temporal variability (Weng and Fu 2014). Although satellite remote sensing techniques give a full spatial pattern over a vast area, such measurements are strictly limited to cloudless conditions during a satellite overpass (Sobrino et al., 2012). This significantly reduces the availability and applicability of satellite LST observations, especially over areas and seasons with frequent cloudiness. The surface temperature is also influenced by synoptic conditions (e.g., wind and humidity) (Gawuc & Struzewska 2016). Hence, single observations are not sufficient to obtain a full picture of the spatiotemporal variability of urban LST and SUHI intensity (Gawuc & Struzewska 2016). One possible solution is the utilisation of time series of LST data, which could be used to monitor the UHI growth of individual cities and thus to reveal the impact of urbanisation on local climate (Tran et al., 2006). The relationship between the UHI and synoptic conditions has been summarised by Arnfield (2003); however, similar analyses for urban LST and SUHI are lacking. We will present analyses of the relationship between time series of remotely sensed LST and SUHI intensity and in-situ meteorological observations collected by a road weather station network, namely: road surface
The diurnal evolution of the urban heat island of Paris: a model-based case study during Summer 2006
H. Wouters
2012-10-01
The urban heat island (UHI) over Paris during summer 2006 was simulated using the Advanced Regional Prediction System (ARPS), updated with a simple urban parametrization, at a horizontal resolution of 1 km. Two integrations were performed, one with the urban land cover of Paris and another in which Paris was replaced by cropland. The focus is on a five-day clear-sky period for which the UHI intensity reaches its maximum. The diurnal evolution of the UHI intensity was found to be adequately simulated for this five-day period. The maximum difference at night in 2-m temperature between urban and rural areas stemming from the urban heating is reproduced with a relative error of less than 10%. The UHI has an ellipsoidal shape and stretches along the prevailing wind direction. The maximum UHI intensity of 6.1 K occurs at 23:00 UTC, located 6 km downstream of the city centre, and largely persists during the whole night. An idealized one-column model study demonstrates that the nocturnal differential sensible heat flux, even though much smaller than its daytime value, is mainly responsible for the maximum UHI intensity. The reason for this nighttime maximum is that the additional heat affects only a shallow layer of 150 m. A further idealized study shows that the orography around the city of Paris induces an uplift. This leads to considerable nocturnal adiabatic cooling over cropland; in contrast, the uplift has little effect on the mixed-layer temperature over the city. About twenty percent of the total maximum UHI intensity is estimated to be caused by this uplift.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Green Roof Technology - Mitigate Urban Heat Island (UHI) Effect
Odli Z.S. M.
2016-01-01
Alterations of land surfaces caused by human activities, especially in cities, have many implications for the ecosystem. The increase of buildings in cities, reflecting the growth of human activities, results in significantly higher temperatures and a warmer pattern in the urban area than in the surrounding countryside. This phenomenon is defined as the urban heat island. This study investigates the application and efficiency of the green roof as an approach to mitigating the urban heat island and reducing indoor temperature in a building. Two roof models, a vegetative roof and a non-vegetative roof, were built to investigate the efficiency of the vegetated roof in reducing indoor temperature compared to the non-vegetated roof. The outdoor and indoor temperature and humidity of each roof model were monitored with an RH520 Thermo Hygrometer. Data were collected three times a week for 9 weeks, from 9:00 am to 5:00 pm. The indoor average temperature under the vegetative roof was 2.4°C below the outdoor average temperature, compared with 0.8°C for the non-vegetative roof. The greater temperature reduction for the vegetative roof indicates that a green roof is highly efficient in reducing indoor temperature and mitigating the urban heat island impact.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Fernanda Vargas Ferreira
2012-04-01
PURPOSE: To check the findings on respiratory muscular strength (RMS), body posture (BP), vocal intensity (VI) and maximum phonation time (MPT) in patients with Parkinson's disease (PD) and control cases, according to gender, PD stage and level of physical activity (PA). METHODS: three men and two women with PD, between 36 and 63 years old (study cases, SC), and five subjects without neurological diseases, matched in age, gender and PA level (control cases, CC). RMS, BP, VI and MPT were evaluated. RESULTS: men: a more pronounced decrease of MPT, VI and RMS in the Parkinson patients, plus more postural alterations in the elderly; women with and without PD: similar postural alterations, with a positive relation between stage, PA level and the other measures. CONCLUSIONS: women with PD showed impaired VI; men with PD showed deficits in MPT, VI and RMS. Further studies under an interdisciplinary bias are suggested.
Crocker, M.J.; Jacobsen, Finn
1997-01-01
This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.
Crocker, Malcolm J.; Jacobsen, Finn
1998-01-01
This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.
Assessing the effect of wind speed/direction changes on urban heat island intensity of Istanbul.
Perim Temizoz, Huriye; Unal, Yurdanur S.
2017-04-01
Assessing the effect of wind speed/direction changes on the urban heat island intensity of Istanbul. Perim Temizöz, Deniz H. Diren, Cemre Yürük and Yurdanur S. Ünal, Istanbul Technical University, Department of Meteorological Engineering, Maslak, Istanbul, Turkey. City or metropolitan areas are significantly warmer than the outlying rural areas, since urban fabrics and artificial surfaces, which have different radiative, thermal and aerodynamic features, alter the surface energy balance, interact with the regional circulation, and introduce anthropogenic sensible heat and moisture into the atmosphere. The temperature contrast between urban and rural areas is most prominent during nighttime, since heat is absorbed by day and emitted by night. The intensity of the urban heat island (UHI) varies considerably depending on the prevalent meteorological conditions and the characteristics of the region. Even though urban areas cover a small fraction of the Earth, their climate has a great impact on the world's population: over half of the world's population lives in cities, and this share is expected to rise within the coming decades. Today almost one fifth of Turkey's population resides in Istanbul, with the percentage expected to increase due to the greater job opportunities compared to other cities. Its population has increased from 2 million to 14 million since the 1960s. Consequently, the city has expanded tremendously within the last half century, shifting the landscape from vegetation to built-up areas. Observations of the last fifty years over Istanbul show that the UHI is most pronounced during the summer season. The seasonal temperature differences between urban and suburban sites reach up to 3 K, and roughly a half-degree increase in UHI intensity has been observed after 2000. In this study, we explore the possible range of heat load and its distribution over Istanbul for different prevailing wind conditions by using the non-hydrostatic MUKLIMO3 model developed by DWD
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying-constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying-constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
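For a sense of scale, the conjectured maximum force can be evaluated directly from the two physical constants in the formula:

```python
# Numerical value of the conjectured maximum force F_max = c^4 / (4G).
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
F_max = c**4 / (4 * G)   # about 3.0e43 N
```

The enormous magnitude explains why the bound is only relevant in strong-gravity regimes such as black hole horizons, which is where the entropic-force arguments in these papers operate.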
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. For a 3-regular graph, however, the two conjectures are equivalent. In this paper, a characterization is given of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we describe the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease their influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, whereas MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps: the first step solves the optimization problem without considering the equal margin posteriors from the two views, and the second step then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Human influence on tropical cyclone intensity
Sobel, Adam H.; Camargo, Suzana J.; Hall, Timothy M.; Lee, Chia-Ying; Tippett, Michael K.; Wing, Allison A.
2016-07-01
Recent assessments agree that tropical cyclone intensity should increase as the climate warms. Less agreement exists on the detection of recent historical trends in tropical cyclone intensity. We interpret future and recent historical trends by using the theory of potential intensity, which predicts the maximum intensity achievable by a tropical cyclone in a given local environment. Although greenhouse gas-driven warming increases potential intensity, climate model simulations suggest that aerosol cooling has largely canceled that effect over the historical record. Large natural variability complicates analysis of trends, as do poleward shifts in the latitude of maximum intensity. In the absence of strong reductions in greenhouse gas emissions, future greenhouse gas forcing of potential intensity will increasingly dominate over aerosol forcing, leading to substantially larger increases in tropical cyclone intensities.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
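The Kirchhoff index is easy to compute from the graph Laplacian: it equals $n$ times the sum of reciprocals of the nonzero Laplacian eigenvalues, which is equivalent to summing resistance distances over all vertex pairs. A small sketch, checked on the 4-cycle (the simplest cactus with one cycle):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum of reciprocals of the nonzero Laplacian eigenvalues
    (equivalently, the sum of resistance distances over all vertex pairs)."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
    eig = np.linalg.eigvalsh(L)
    nonzero = eig[eig > 1e-9]             # drop the zero eigenvalue
    return n * np.sum(1.0 / nonzero)

# Cycle C4: resistance distances are 3/4 (adjacent pairs, 1 ohm in parallel
# with 3 ohms) and 1 (opposite pairs, 2 || 2), so Kf = 4*(3/4) + 2*1 = 5.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
kf = kirchhoff_index(C4)   # 5.0
```

Comparing `kirchhoff_index` across members of $Cat(n;t)$ is then a direct way to check extremal characterizations like the one in this paper on small cases.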
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow, and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency of systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Urban effects and summer thunderstorms in a tropical cyclone affected situation over Guangzhou city
MENG; WeiGuang; YAN; JingHua; HU; HaiBo
2007-01-01
Using data mainly from the Guangzhou mesonet of Automatic Weather Stations (AWS), Guangzhou Doppler radar, and satellite TBB data, the characteristics and evolution of the urban heat island (UHI) over Guangzhou City were analyzed for a tropical-cyclone-affected situation in early August 2005. In particular, two thunderstorms occurring during this period, on the night of 4 August and in the afternoon of 7 August, were investigated to study the relationship between thunderstorm development and the UHI. Results showed that both thunderstorms were associated with UHI effects: the UHI induced local air convergence and initiated the thunderstorm convection. Both cases showed general agreement in time and space among the locations of maximum UHI, convergence, convection, and precipitation. Convection was more likely to develop at times and locations with stronger UHI intensity. Analysis also showed that, due to urban effects, both thunderstorms strengthened when moving over Guangzhou City, with maximum radar echoes observed right over the urban area and precipitation located within the city. All these features reveal that the two thunderstorms were urban-induced storms.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
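The quantity being maximized can be illustrated in its simplest discrete form: empirical mutual information between true labels and classification responses, estimated from co-occurrence counts. This is only a count-based illustration of the idea, not the paper's continuous entropy estimator:

```python
import math
from collections import Counter

def mutual_information(labels_true, labels_pred):
    """Empirical mutual information I(Y; Y_hat) between true labels and
    classification responses, estimated from discrete counts."""
    n = len(labels_true)
    pxy = Counter(zip(labels_true, labels_pred))   # joint counts
    px = Counter(labels_true)                      # marginal counts
    py = Counter(labels_pred)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # log of p(x,y) / (p(x) * p(y)), written with counts.
        mi += p_joint * math.log(p_joint * n * n / (px[x] * py[y]))
    return mi

perfect = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])   # = log 2
useless = mutual_information([0, 0, 1, 1], [0, 1, 0, 1])   # = 0
```

A perfectly informative response attains I = H(Y), while a response independent of the label gives I = 0; the regularizer pushes the learned classifier toward the former.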
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Theeuwes, N.E.; Steeneveld, G.J.; Ronda, R.J.; Holtslag, A.A.M.
2016-01-01
The urban heat island (UHI) effect, defined as the air temperature difference between the urban canyon and the nearby rural area, is investigated. Because not all cities around the world are equipped with an extensive measurement network, a need exists for a relatively straightforward equation for t
Refreshing the role of open water surfaces on mitigating the maximum urban heat island effect
Steeneveld, G.J.; Koopmans, S.; Heusinkveld, B.G.; Theeuwes, N.E.
2014-01-01
During warm summer episodes citizens in urban areas are subject to reduced human thermal comfort and negative health effects. To mitigate these adverse effects, land use planners and urban designers have used the evaporative power of water bodies as a tool to limit the urban heat island effect (UHI)
Vannini, Phillip; Bissell, David; Jensen, Ole B.
This paper explores the intensities of long-distance commuting journeys as a way of exploring how bodily sensibilities are being changed by the mobilities that they undertake. The context of this paper is that many people are travelling further to work than ever before owing to a variety of factors which relate to transport, housing and employment. Yet we argue that the experiential dimensions of long-distance mobilities have not received the attention that they deserve within geographical research on mobilities. This paper combines ideas from mobilities research and contemporary social theory with fieldwork conducted in Canada, Denmark and Australia to develop our understanding of the experiential politics of long-distance workers. Rather than focusing on the extensive dimensions of mobilities that are implicated in patterns and trends, our paper turns to the intensive dimensions of this experience. Exploring how the experiences of long-distance workers become constituted by a range of different material forces enables us to more sensitively consider the practical, technical, and political implications of this increasingly prevalent yet underexplored regime of work.
Exercise Intensity: How to Measure It
... 50 to about 70 percent of your maximum heart rate Vigorous exercise intensity: 70 to about 85 percent of your ... numbers are your training zone heart rate. Your heart rate during exercise should be between these two numbers. For example, ...
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10³⁰ kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10³⁰ kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.
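The core numerical step named in this abstract, solving the Toeplitz normal equations with the Levinson recursion to obtain an error-predicting filter, can be sketched as follows. This is a generic Levinson-Durbin implementation under my own assumptions about inputs (an autocorrelation sequence), not the authors' specific receiver-function code; note how the reflection coefficient computed at each step stays below 1 in magnitude for a valid autocorrelation, which is exactly the stability property the abstract mentions.

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter.

    r     : autocorrelation sequence r[0..order] (r[0] > 0)
    order : filter order
    Returns (a, e): filter coefficients [1, a1, ..., a_order] and the
    final prediction-error power e.
    """
    a = [0.0] * (order + 1)
    a[0] = 1.0
    e = r[0]
    for m in range(1, order + 1):
        # Correlation of the current filter with the data
        acc = r[m]
        for k in range(1, m):
            acc += a[k] * r[m - k]
        k_m = -acc / e            # reflection coefficient; |k_m| < 1 keeps it stable
        new_a = a[:]
        for k in range(1, m):
            new_a[k] = a[k] + k_m * a[m - k]
        new_a[m] = k_m
        a = new_a
        e *= (1.0 - k_m * k_m)    # prediction-error power shrinks each step
    return a, e

# Example: an AR(1)-like autocorrelation r[k] = 0.5**k
coeffs, err = levinson_durbin([1.0, 0.5, 0.25], 2)
print(coeffs, err)
```

For this AR(1)-shaped input the recursion recovers the one-tap predictor (second coefficient -0.5, third essentially 0), illustrating that the recursion stops improving once the model order matches the data.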
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
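The "maximize P = V·I by differentiation" procedure described above can be sketched numerically. The single-diode cell model and all parameter values below are illustrative assumptions of mine, not taken from the project; the sketch simply locates the voltage at which the power curve peaks, which is what setting dP/dV = 0 achieves analytically.

```python
import math

# Hypothetical single-diode cell parameters (illustrative, not from the paper)
I_SC = 5.0      # short-circuit current (A)
I_0 = 1e-9      # diode saturation current (A)
V_T = 0.0257    # thermal voltage near 25 degrees C (V)
N = 1.3         # diode ideality factor

def current(v):
    """I-V curve of an idealized cell: photocurrent minus the diode loss."""
    return I_SC - I_0 * (math.exp(v / (N * V_T)) - 1.0)

def power(v):
    return v * current(v)

def maximum_power(v_lo=0.0, v_hi=0.8, steps=100000):
    """Scan P(V) for its maximum, a numerical stand-in for solving dP/dV = 0."""
    h = (v_hi - v_lo) / steps
    best_v, best_p = v_lo, power(v_lo)
    for i in range(steps + 1):
        v = v_lo + i * h
        p = power(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

v_mp, p_mp = maximum_power()
print(f"V_mp = {v_mp:.3f} V, I_mp = {current(v_mp):.3f} A, P_mp = {p_mp:.3f} W")
```

Run over the course of a day with time-varying photocurrent, the same scan would reproduce the paper's plots of voltage, current, and power at maximum power versus time of day.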
Measuring Physical Activity Intensity
... aerobic activity: relative intensity and absolute intensity. Relative Intensity The level of effort required by a person to do an activity. When using relative intensity, people pay attention to how physical activity affects ...
Measuring Physical Activity Intensity
Full Text Available ... For more help with what counts as aerobic activity, watch this video. ... ways to understand and measure the intensity of aerobic activity: relative intensity and absolute intensity. Relative Intensity ...
[The future of intensive medicine].
Palencia Herrejón, E; González Díaz, G; Mancebo Cortés, J
2011-05-01
Although Intensive Care Medicine is a young specialty compared with other medical disciplines, it currently plays a key role in the process of care for many patients. Experience has shown that professionals with specific training in Intensive Care Medicine are needed to provide high quality care to critically ill patients. In Europe, important steps have been taken towards the standardization of training programs of the different member states. However, it is now necessary to take one more step forward, that is, the creation of a primary specialty in Intensive Care Medicine. Care of the critically ill needs to be led by specialists who have received specific and complete training and who have the necessary professional competences to provide maximum quality care to their patients. The future of the specialty presents challenges that must be faced with determination, with the main objective of meeting the needs of the population.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
GROWTH ANALYSIS AND ASSESSMENT OF PIG’S BIOLOGICAL MAXIMUM
Dragutin Vincek
2010-06-01
Full Text Available The aim of this study was to determine a mathematical model which can be used to describe the growth of domestic animals in an attempt to predict the optimal time of slaughter/weight or the development of body parts or tissues and estimate the biological maximum. The study was conducted on 60 pigs (30 barrows and 30 gilts) in the interval between the age of 49 and 215 days. By applying the generalized logistic function, the growth of live weight and tissues was described. The observed gilts reached the inflection point in approximately 121 days (I = 70.7 kg). The point at which the interval of intensive growth starts was at the age of approximately 42 days (TB = 17.35 kg), and the saturation point the pigs reached at the age of 200.5 days (TC = 126.74 kg). The estimated biological maximum weight of gilts was 179.79 kg. The barrows reached the inflection point in approximately 149 days (I = 92.2 kg). The point at which the intensive interval of growth starts was estimated at the age of approximately 52 days (TB = 22.93 kg), and the saturation point the barrows reached at the age of 245 days (TC = 164.8 kg). The estimated biological maximum weight of barrows was 233.25 kg. Muscle tissue of gilts reached the inflection point (I = 28.46 kg) in approximately 110 days. The point at which the interval of intensive growth of muscle tissue starts (TB = 6.06 kg) was estimated at approximately 53 days, and the saturation point of growth (TC = 52.25 kg) the muscle tissue of gilts reached at the age of 162 days. The estimated maximum biological growth of muscle tissue in gilts was 75.79 kg. The muscle tissue of barrows reached the inflection point (I = 28.78 kg) in approximately 118 days, the point at which the interval of intensive growth starts (TB = 6.36 kg) at the age of approximately 35 days. The saturation point of muscle tissue growth in barrows (TC = 52.51 kg) was reached at the age of 202 days. The estimated maximum biological growth of muscle tissue in barrows was 75.74 kg. The
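The logistic-growth machinery behind these estimates can be illustrated with a small sketch. The abstract uses a generalized logistic function, which permits an asymmetric inflection point (70.7 kg for gilts rather than half the asymptote); the symmetric special case below, with an invented rate constant, is only meant to show how the asymptote plays the role of the biological maximum and how the inflection point marks the peak daily gain.

```python
import math

# Symmetric logistic curve as an illustration; the rate constant K is an
# assumed value, while A and T_I are taken from the abstract (gilts).
A = 179.79   # asymptote: estimated biological maximum weight (kg)
K = 0.035    # growth rate constant (1/day), assumed for illustration
T_I = 121    # inflection age (days)

def weight(t):
    """Live weight (kg) at age t (days) under the symmetric logistic model."""
    return A / (1.0 + math.exp(-K * (t - T_I)))

def growth_rate(t, h=1e-4):
    """Daily gain dW/dt, estimated by a central difference."""
    return (weight(t + h) - weight(t - h)) / (2 * h)

# In the symmetric case the inflection weight is exactly A/2 and the
# daily gain is maximal there, then declines toward the asymptote.
print(weight(T_I))
print(growth_rate(T_I), growth_rate(T_I + 50))
```

Fitting such a curve to the observed weights (as the paper does with the generalized form) yields the asymptote A directly, which is how a "biological maximum" can be estimated from data collected only up to 215 days.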
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is changed and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CCs used in the design of an SFCL can be determined.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS
S. Sridevi
2013-02-01
Full Text Available Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and degrades the pixel intensities. In fetal ultrasound images, edges and local fine details are especially important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter therefore has to be devised that proficiently suppresses speckle noise while simultaneously preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical tools as tuning parameters and uses differently shaped quadrilateral kernels to estimate the noise-free pixel from its neighborhood. The performance of several filters, namely the Median, Kuwahara, Frost, homogeneous mask, and Rayleigh maximum likelihood filters, is compared with that of the proposed filter in terms of PSNR and image profile. The proposed filter surpasses the conventional filters.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
MB Distribution and its application using maximum entropy approach
Bhadra Suman
2016-01-01
Full Text Available The Maxwell-Boltzmann distribution with a maximum entropy approach has been used to study the variation of political temperature and heat in a locality. We have observed that the political temperature rises without generating any political heat when political parties increase their attractiveness by intense publicity, but voters do not shift their loyalties. It has also been shown that political heat is generated and political entropy increases with political temperature remaining constant when parties do not change their attractiveness, but voters shift their loyalties (to more attractive parties).
Penalized maximum likelihood estimation for generalized linear point processes
Hansen, Niels Richard
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces, we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper by extensions to multivariate and additive model specifications. The methods are implemented in the R-package ppstat.
Radiation Pressure Acceleration: the factors limiting maximum attainable ion energy
Bulanov, S S; Schroeder, C B; Bulanov, S V; Esirkepov, T Zh; Kando, M; Pegoraro, F; Leemans, W P
2016-01-01
Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near-complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it trans...
Maximum likelihood sequence estimation for optical complex direct modulation.
Che, Di; Yuan, Feng; Shieh, William
2017-04-17
Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
The role of one large greenspace in mitigating London's nocturnal urban heat island.
Doick, Kieron J; Peace, Andrew; Hutchings, Tony R
2014-09-15
The term urban heat island (UHI) describes a phenomenon where cities are on average warmer than the surrounding rural area. Trees and greenspaces are recognised for their strong potential to regulate urban air temperatures and combat the UHI. Empirical data is required in the UK to inform predictions on cooling by urban greenspaces and guide planning to maximise cooling of urban populations. We describe a 5-month study to measure the temperature profile of one of central London's large greenspaces and also in an adjacent street to determine the extent to which the greenspace reduced night-time UHI intensity. Statistical modelling displayed an exponential decay in the extent of cooling with increased distance from the greenspace. The extent of cooling ranged from an estimated 20 m on some nights to 440 m on other nights. The mean temperature reduction over these distances was 1.1 °C in the summer months, with a maximum of 4 °C cooling observed on some nights. Results suggest that calculation of London's UHI using Met Stations close to urban greenspace can underestimate 'urban' heat island intensity due to the cooling effect of the greenspace and values could be in the region of 45% higher. Our results lend support to claims that urban greenspace is an important component of UHI mitigation strategies. Lack of certainty over the variables that govern the extent of the greenspace cooling influence indicates that the multifaceted roles of trees and greenspaces in the UK's urban environment merit further consideration.
ZHANG Ning; ZHU Lianfang; ZHU Yan
2011-01-01
A strong urban heat island (UHI) appeared in a hot weather episode in Suzhou City during the period from 25 July to 1 August 2007. This paper analyzes the urban heat island characteristics of Suzhou City under this hot weather episode. Both meteorological station observations and MODIS satellite observations show a strong urban heat island in this area. The maximum UHI intensity in this hot weather episode is 2.2℃, which is much greater than the summer average of 1.0℃ in this year and the 37-year (from 1970 to 2006) average of 0.35℃. The Weather Research and Forecasting (WRF) model simulation results demonstrate that the rapid urbanization processes in this area will enhance the UHI in intensity, horizontal distribution, and vertical extension. The UHI spatial distribution expands as the urban size increases. The vertical extension of UHI in the afternoon increases about 50 m higher under the year 2006 urban land cover than that under the 1986 urban land cover. The conversion from rural land use to urban land type also strengthens the local lake-land breeze circulations in this area and modifies the vertical wind speed field.
Whistler intensities above thunderstorms
J. Fiser
2010-01-01
Full Text Available We report a study of the penetration of VLF electromagnetic waves induced by lightning into the ionosphere. We compare the fractional hop whistlers recorded by the ICE experiment onboard the DEMETER satellite with lightning detected by the EUCLID detection network. To identify the fractional hop whistlers, we have developed software for automatic detection of the fractional-hop whistlers in the VLF spectrograms. This software provides the detection times of the fractional hop whistlers and the average amplitudes of these whistlers. Matching the lightning and whistler data, we find the pairs of causative lightning and corresponding whistler. Processing data from ~200 DEMETER passes over the European region we obtain a map of mean amplitudes of the whistler electric field as a function of the latitudinal and longitudinal difference between the location of the causative lightning and the satellite magnetic footprint. We find that the mean whistler amplitude monotonically decreases with horizontal distance up to ~1000 km from the lightning source. At larger distances, the mean whistler amplitude usually merges into the background noise and the whistlers become undetectable. The maximum of whistler intensities is shifted from the satellite magnetic footprint by ~1° owing to the oblique propagation. The average amplitude of whistlers increases with the lightning current. At nighttime (late evening), the average amplitude of whistlers is about three times higher than during the daytime (late morning) for the same lightning current.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the requirement of maximum entropy, the characteristics of the system, and the connection conditions, which allows it to be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols, in an alphabet of size M, transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The receiver structures are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, whose structures depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female, and 453.35 mm and 420.44 mm for left male and female, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm were definitely female; for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
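The demarking-point rule described above is a simple threshold classifier. As a minimal sketch (the function and its name are our illustration; the thresholds, in mm, are the values reported in the abstract):

```python
# Demarking-point (D.P.) thresholds in mm, as reported in the abstract.
DP = {
    "right": {"male_min": 476.70, "female_max": 379.99},
    "left":  {"male_min": 484.49, "female_max": 385.73},
}

def classify_femur(max_length_mm, side):
    """Return 'male', 'female', or 'indeterminate' for a femoral maximum length."""
    dp = DP[side]
    if max_length_mm > dp["male_min"]:
        return "male"
    if max_length_mm < dp["female_max"]:
        return "female"
    return "indeterminate"

print(classify_femur(480.0, "right"))  # above the right-side male D.P. -> male
print(classify_femur(370.0, "right"))  # below the right-side female D.P. -> female
print(classify_femur(450.0, "left"))   # overlap zone -> indeterminate
```

The low identification percentages quoted in the abstract reflect exactly this overlap zone: most femora fall between the two demarking points and remain unclassified by length alone.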
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation contributed by a disc is on average 63% of the galaxy's observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. Therefore a general relation has been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for the samples, the whole length of the CCs used in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets (rooted binary trees on three leaves) or quartets (unrooted binary trees on four leaves). We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
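The stated bound (maximum seismic moment equal to the modulus of rigidity times the total injected volume) is straightforward to evaluate. A minimal sketch, with illustrative assumed values for the rigidity and the injected volume, converting the moment to a moment magnitude via the standard Hanks-Kanamori relation:

```python
import math

# McGarr's bound: maximum seismic moment = modulus of rigidity * injected volume.
# The numerical values below are illustrative assumptions, not from the paper.
G = 3.0e10               # modulus of rigidity of crustal rock, Pa (assumed)
injected_volume = 1.0e6  # total injected fluid volume, m^3 (assumed)

M0_max = G * injected_volume  # upper bound on seismic moment, N*m

# Standard Hanks-Kanamori conversion from moment (N*m) to moment magnitude.
Mw_max = (2.0 / 3.0) * (math.log10(M0_max) - 9.1)
print(f"M0_max = {M0_max:.2e} N*m, Mw_max = {Mw_max:.1f}")
```

For a million cubic meters of injected fluid this gives a bound near magnitude 4.9, in line with the abstract's observation that wastewater-disposal operations sometimes induce maximum magnitudes exceeding 5.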
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used, or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used… algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find…
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
Hardness/intensity correlations among BATSE bursts
Paciesas, William S.; Pendleton, Geoffrey N.; Kouveliotou, Chryssa; Fishman, Gerald J.; Meegan, Charles A.; Wilson, Robert B.
1992-01-01
Conclusions about the nature of gamma-ray bursts derived from the size-frequency distribution may be altered if a significant correlation exists between burst intensity and spectral shape. Moreover, if gamma-ray bursts have a cosmological origin, such a correlation may be expected to result from the expansion of the universe. We have performed a rudimentary search of the BATSE bursts for hardness/intensity correlations. The range of spectral shapes was determined for each burst by computing the ratio of the intensity in the range 100-300 keV to that in 55-300 keV. We find weak evidence for the existence of a correlation, the strongest effect being present when comparing the maximum hardness ratio for each burst with its maximum rate.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving the satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. Detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then MPPT effectiveness of various converter systems is verified through simulations. Proposed theory and analysis is validated through experimental investigations.
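The load/duty-ratio condition the abstract refers to can be illustrated with a toy model. The sketch below (our illustration, not the paper's analysis) uses a simple exponential-diode PV characteristic, scans the i-v curve for the maximum power point (MPP), and then computes the duty ratio at which an ideal buck-boost converter in continuous conduction presents the MPP resistance to the array; all parameter values are assumptions.

```python
import math

# Assumed PV model parameters: short-circuit current (A), diode saturation
# current (A), and a lumped thermal voltage (V) for the whole string.
I_SC, I_0, V_T = 5.0, 1e-9, 1.0

def pv_current(v):
    """Single-diode PV characteristic i(v) = Isc - I0*(exp(v/Vt) - 1)."""
    return I_SC - I_0 * math.expm1(v / V_T)

# Brute-force scan of the i-v curve for the maximum power point.
v_oc = V_T * math.log(I_SC / I_0 + 1.0)  # open-circuit voltage
v_mpp, p_mpp = max(
    ((v, v * pv_current(v)) for v in (i * v_oc / 10000 for i in range(10001))),
    key=lambda t: t[1],
)
r_mpp = v_mpp / pv_current(v_mpp)  # source resistance seen at the MPP

# Ideal buck-boost in CCM reflects the load as R_in = R_load * ((1-D)/D)^2.
# Setting R_in = R_mpp and solving for the duty ratio D:
r_load = 10.0  # assumed load resistance, ohms
duty = 1.0 / (1.0 + math.sqrt(r_mpp / r_load))
print(f"V_mpp = {v_mpp:.2f} V, P_mpp = {p_mpp:.1f} W, duty = {duty:.3f}")
```

Because the buck-boost can reflect the load resistance both up and down, a matching duty ratio exists for any positive load; for buck or boost topologies the reflected-resistance range is one-sided, which is why certain load values fall outside the optimal range discussed in the abstract.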
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
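The quoted relation v_h ~ T_BBN^2 / (M_pl y_e^5) is easy to check at the back-of-envelope level. The sketch below (our illustration; the input values are standard ballpark numbers in GeV, assumed here rather than taken from the paper) indeed lands at a few hundred GeV:

```python
import math

# Rough numerical check of v_h ~ T_BBN^2 / (M_pl * y_e^5). All quantities in GeV.
T_BBN = 1.0e-3                          # BBN temperature ~ 1 MeV (assumed)
M_pl = 1.22e19                          # Planck mass (assumed)
y_e = math.sqrt(2) * 0.511e-3 / 246.0   # electron Yukawa from m_e and the Higgs vev

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"v_h ~ {v_h:.0f} GeV")  # comes out at a few hundred GeV
```

The strong y_e^5 dependence is why the electron Yukawa must be held fixed for the estimate to single out the observed weak scale.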
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of each subject's five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
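The reported gains from averaging more trials (0.939 for one trial rising to 0.987 for five) are consistent with the classical Spearman-Brown prophecy formula; the check below is our illustration, not a computation from the paper.

```python
def spearman_brown(r_single, k):
    """Reliability of the mean of k parallel measurements, given single-measurement reliability."""
    return k * r_single / (1.0 + (k - 1) * r_single)

r_one_trial = 0.939  # per-day reliability of a single trial, from the abstract
print(round(spearman_brown(r_one_trial, 5), 3))  # 0.987, matching the reported 5-trial value
print(round(spearman_brown(0.836, 2), 3))        # 0.911, matching the reported 2-day value
```

The same formula applied across days (starting from 0.836 for one day) reproduces the 2-day figure almost exactly, suggesting the trials and days behave approximately as parallel measurements.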
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
We use an EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi-Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
New downshifted maximum in stimulated electromagnetic emission spectra
Sergeev, Evgeny; Grach, Savely
A new spectral maximum in spectra of stimulated electromagnetic emission of the ionosphere (SEE, [1]) was detected in experiments at the SURA facility in 2008 for pump frequencies f0 ≈ 4.4-4.5 MHz, and most stably for f0 = 4.3 MHz, the lowest possible pump frequency at the SURA facility. The new maximum is situated at frequency shifts ∆f ≈ -6 kHz from the pump wave frequency f0 (∆f = fSEE - f0), somewhat closer to f0 than the well-known [2,3] Downshifted Maximum (DM) in the SEE spectrum at ∆f ≈ -9 kHz. The detection and detailed study of the new feature (which we tentatively call the New Downshifted Maximum, NDM) became possible due to high frequency resolution in the spectral analysis. The following properties of the NDM are established. (i) The NDM appears in the SEE spectra simultaneously with the DM and UM features after the pump turn-on (recall that the less intensive Upshifted Maximum, UM, is situated at ∆f ≈ +(6-8) kHz [2,3]). The NDM cannot be attributed to the 1 DM [4] or Narrow Continuum Maximum (NCM, 2 [5]) SEE features, nor to the splitted DM near gyroharmonics [2]. (ii) The NDM is observed as a prominent feature for the maximum pump power of the SURA facility, P ≈ 120 MW ERP, for which the DM is almost covered by the Broad Continuum SEE feature [2,3]. For P ∼ 30-60 MW ERP the DM and NDM have comparable intensities. For lower pump power the DM prevails in the SEE spectrum, while the NDM becomes invisible, being covered by the thermal Narrow Continuum feature [2]. (iii) The NDM is exactly symmetrical to the UM relative to f0 when the latter is observed, although the UM frequency offset increases up to ∆fUM ≈ +9 kHz with a decrease of the pump power down to P ≈ 4 MW ERP. The DM formation in the SEE spectrum is attributed to a three-wave interaction between the upper and lower hybrid waves in the ionosphere, and the lower hybrid frequency (≈ 7 kHz) determines the frequency offset of the DM high-frequency flank [2,6]. The detection of the NDM with
Iowa Intensive Archaeological Survey
Iowa State University GIS Support and Research Facility — This shape file contains intensive level archaeological survey areas for the state of Iowa. All intensive Phase I surveys that are submitted to the State Historic...
Measuring Physical Activity Intensity
Absolute intensity: the amount of energy used by the body per minute of activity. Activities can be classified as moderate- or vigorous-intensity based upon the amount of energy used by the body while doing the activity.
Rainfed intensive crop systems
Olesen, Jørgen E
2014-01-01
This chapter focuses on the importance of intensive cropping systems in contributing to the world supply of food and feed. The impact of climate change on intensive crop production systems is also discussed.
A Maximum-Entropy Method for Estimating the Spectrum
Anonymous
2007-01-01
Based on the maximum-entropy (ME) principle, a new power spectral estimator for random waves is derived in the form Ŝ(ω) = (a/8) H̄² (2π)^(d+1) ω^(-(d+2)) exp[-b(2π/ω)^n], by solving a variational problem subject to some quite general constraints. This robust method is comprehensive enough to describe wave spectra even in extreme wave conditions, and is superior to the periodogram method, which is not suitable for processing comparatively short or intensively unsteady signals because of its tremendous boundary effect and some inherent defects of the FFT. The newly derived method for spectral estimation works fairly well even when the sample data sets are very short and unsteady, and the reliability and efficiency of this spectral estimator have been preliminarily demonstrated.
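The quoted spectral form is straightforward to evaluate numerically. In the sketch below (our illustration; the parameter choices a = b = 1, d = 2, n = 4, H̄ = 2 m are arbitrary assumptions), the interior peak can also be found analytically from (2π/ω)^n = (d+2)/(bn), which for these values puts the peak at ω = 2π:

```python
import math

# Evaluate the quoted ME spectral form
#   S(w) = (a/8) * Hbar^2 * (2*pi)^(d+1) * w^(-(d+2)) * exp(-b * (2*pi/w)^n)
# with arbitrary illustrative parameters.
a, b, d, n = 1.0, 1.0, 2.0, 4.0
H_bar = 2.0  # characteristic wave height, m (assumed)

def me_spectrum(w):
    return (a / 8.0) * H_bar**2 * (2 * math.pi) ** (d + 1) \
        * w ** (-(d + 2)) * math.exp(-b * (2 * math.pi / w) ** n)

# Locate the single interior peak by a coarse scan over w > 0.
ws = [0.01 * i for i in range(1, 5001)]
w_peak = max(ws, key=me_spectrum)
print(f"spectral peak near w = {w_peak:.2f} rad/s")
```

The exponential cutoff at low ω and the power-law decay at high ω give the single-peaked shape typical of ocean wave spectra, which is what makes the form usable even for short, unsteady records.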
Analytical maximum likelihood estimation of stellar magnetic fields
González, M J Martínez; Ramos, A Asensio; Belluzzi, L
2011-01-01
The polarised spectrum of stellar radiation encodes valuable information on the conditions of stellar atmospheres and the magnetic fields that permeate them. In this paper, we give explicit expressions to estimate the magnetic field vector and its associated error from the observed Stokes parameters. We study the solar case, where specific intensities are observed, and then the stellar case, where we receive the polarised flux. In this second case, we concentrate on the explicit expression for the case of a slow rotator with a dipolar magnetic field geometry. Moreover, we also give explicit formulae to retrieve the magnetic field vector from the LSD profiles without assuming mean values for the LSD artificial spectral line. The formulae have been obtained assuming that the spectral lines can be described in the weak-field regime and using a maximum likelihood approach. The errors are recovered by means of the Hermitian matrix. The bias of the estimators is analysed in depth.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
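Several of the processes named above are easy to sketch numerically. As an illustration only (parameter values are arbitrary, not from the paper), here is a seeded Euler-Maruyama simulation of the Ornstein-Uhlenbeck process, whose stationary variance sigma^2/(2*theta) can be checked against the sample:

```python
import math
import random

def simulate_ou(theta, sigma, dt, n, seed=42):
    """Euler-Maruyama simulation of dX = -theta*X dt + sigma dW, X(0) = 0."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = simulate_ou(theta=1.0, sigma=1.0, dt=0.01, n=200000)
sample_var = sum(v * v for v in path) / len(path)
# For an OU process the stationary variance is sigma**2 / (2 * theta) = 0.5
```

The fluctuation-dissipation constraint mentioned in the abstract is what ties the noise amplitude sigma to the damping theta when such a Langevin equation is required to realize a maximum-entropy state.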
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We consider the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
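For intuition on why item order matters, here is a minimal First-Fit sketch run in increasing and decreasing order on a toy instance (the instance and unit capacity are my choices; the paper's maximum resource setting additionally constrains how bins may be opened):

```python
def first_fit(items, capacity=1.0):
    """Place each item into the first bin with room; open a new bin if none fits."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

items = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6]
ffi = first_fit(sorted(items))                 # First-Fit-Increasing: 5 bins
ffd = first_fit(sorted(items, reverse=True))   # First-Fit-Decreasing: 4 bins
```

On this instance the increasing order wastes more space per bin and therefore opens more bins, which is the desirable direction when the objective is to *maximize* the number of bins used.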
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
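The single-constraint construction the abstract describes can be written out in a few lines (a standard Jaynes-style Lagrange multiplier derivation, reconstructed here rather than quoted from the paper):

```latex
\begin{align}
  &\text{maximize } S = -\sum_n p_n \ln p_n
  \quad\text{subject to}\quad \sum_n p_n = 1, \qquad \sum_n p_n \ln n = \chi, \\
  &\mathcal{L} = S - \mu\Big(\sum_n p_n - 1\Big) - \alpha\Big(\sum_n p_n \ln n - \chi\Big), \\
  &\frac{\partial \mathcal{L}}{\partial p_n} = -\ln p_n - 1 - \mu - \alpha \ln n = 0
  \;\;\Longrightarrow\;\; p_n \propto e^{-\alpha \ln n} = n^{-\alpha}.
\end{align}
```

That is, constraining only the mean of the logarithm of the observable yields a pure power law, with Zipf's law as the special case alpha near 1.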
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 M_Earth), the overall maximum radius a planet can have varies between 1.8 and 2.3 R_Earth. This radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
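Since $EE(G)=\sum_i e^{\lambda_i}$ equals the trace of the matrix exponential of the adjacency matrix, the index can be computed without an eigensolver by truncating the Taylor series of $\mathrm{tr}(e^A)$. A minimal sketch (the example graph and truncation depth are illustrative choices):

```python
import math

def estrada_index(adj, terms=40):
    """EE(G) = tr(exp(A)), computed via the Taylor series sum_k tr(A^k)/k!."""
    n = len(adj)
    power = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # A^0
    ee, fact = float(n), 1.0  # k = 0 term contributes tr(I) = n
    for k in range(1, terms):
        power = [[sum(power[i][m] * adj[m][j] for m in range(n))
                  for j in range(n)] for i in range(n)]
        fact *= k
        ee += sum(power[i][i] for i in range(n)) / fact
    return ee

# Single edge (path on two vertices): eigenvalues +1 and -1, so EE = e + 1/e
P2 = [[0.0, 1.0], [1.0, 0.0]]
```

For the determination of extremal bicyclic graphs the paper works with the eigenvalues directly; this trace formulation is just a convenient numerical check.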
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max − X_0 = (0.59 ± 0.02) · Y_X/P · C.
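The fitted relation can be applied directly once the yield and the MIC are measured. A sketch with made-up numbers (the input values and units below are hypothetical; only the 0.59 coefficient comes from the abstract):

```python
def predict_max_biomass(x0, y_xp, mic_lactate, coeff=0.59):
    """X_max = X_0 + coeff * Y_X/P * C (coefficient 0.59 from the abstract)."""
    return x0 + coeff * y_xp * mic_lactate

# Hypothetical inputs: initial biomass 0.1 g/L, yield 0.05 g biomass per g
# lactate produced, and MIC of lactate 200 g/L. Illustrative, not measured.
x_max = predict_max_biomass(x0=0.1, y_xp=0.05, mic_lactate=200.0)
```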
Estimating landscape carrying capacity through maximum clique analysis.
Donovan, Therese M; Warrington, Gregory S; Schwenk, W Scott; Dinitz, Jeffrey H
2012-12-01
Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting: information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in an 1153-km² study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m² HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be
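The reduction the authors describe can be sketched on a toy graph: compatible pseudo-home ranges become edges, and carrying capacity is the maximum clique size (brute force here, sufficient for tiny graphs; the paper uses the specialized solver Cliquer):

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Brute-force maximum clique; adequate only for tiny graphs."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for k in range(len(vertices), 0, -1):
        for cand in combinations(vertices, k):
            if all(b in adj[a] for a, b in combinations(cand, 2)):
                return set(cand)
    return set()

# Toy data: vertices are candidate home-range centers; an edge joins two
# centers whose pseudo-home ranges can coexist without violating territories.
centers = [1, 2, 3, 4, 5]
compatible = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
carrying_capacity = len(max_clique(centers, compatible))
```

Here the triangle {1, 2, 3} is the largest set of mutually compatible territories, so the estimated carrying capacity of this toy landscape is 3.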
Maximum power point tracking for photovoltaic system using model predictive control
Ma, Chao; Li, Ning; Li, Shaoyuan [Shanghai Jiao Tong Univ., Shanghai (China). Key Lab. of System Control and Information Processing
2013-07-01
In this paper, a T-G-P model is built to find the maximum power point according to light intensity and temperature, making it easier and clearer for a photovoltaic system to track the MPP. A predictive controller considering constraints for safe operation is designed. The simulation results show that the system can track the MPP quickly, accurately, and effectively.
Biogeochemistry of the MAximum TURbidity Zone of Estuaries (MATURE): some conclusions
Herman, P.M.J.; Heip, C.H.R.
1999-01-01
In this paper, we give a short overview of the activities and main results of the MAximum TURbidity Zone of Estuaries (MATURE) project. Three estuaries (Elbe, Schelde and Gironde) have been sampled intensively during a joint 1-week campaign in both 1993 and 1994. We introduce the publicly available
Beer Foam: Effect of Pressure on Gushing Intensity.
Bandy, D.; Poštulková, M. (Michaela); Zítková, K.; Růžička, M.; Stanovský, P. (Petr); Brányik, T.
2016-01-01
The extract of hydrophobin HFBII showed large gushing activity, much stronger than the BSA solution. With HFBII, the gushing intensity increased monotonically with pressure over the range tested (0-6 bar). In contrast, with BSA, the gushing intensity showed a maximum at about 5 bar, for which we presently have no explanation.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
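The estimator-correlator idea, score each hypothesized signal by likelihood times prior and pick the largest, can be illustrated with a toy additive-Gaussian-noise stand-in (this is the generic MAP decision rule, not the patented MAP phase estimator itself):

```python
import math

def map_decode(sample, signals, priors, sigma):
    """Choose the hypothesized signal maximizing log-likelihood + log-prior
    under additive white Gaussian noise."""
    def log_post(name):
        sse = sum((s - m) ** 2 for s, m in zip(sample, signals[name]))
        return -sse / (2 * sigma ** 2) + math.log(priors[name])
    return max(signals, key=log_post)

# Two hypothesized transmissions and a noisy observation of the first one.
signals = {"s0": [0.0, 1.0, 0.0, -1.0], "s1": [1.0, 0.0, -1.0, 0.0]}
priors = {"s0": 0.5, "s1": 0.5}
received = [0.1, 0.9, -0.2, -0.8]
decision = map_decode(received, signals, priors, sigma=0.5)
```

The patent's contribution is in how the phase estimates feeding this comparison are formed under random phase perturbations; the final decision step has the argmax-of-posterior shape above.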
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is the combined effect of the increase in a ship's draft and trim due to ship motion under restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between these squat formulas to examine the differences between them and determine which one provides the most satisfactory results. To this end, a cargo ship at different speeds was considered as a model for maximum squat calculations under canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. ... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
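An estimator of this kind can be sketched as a grid search. Assuming Gaussian noise with a separate unknown variance per channel (one of the abstract's "different conditions"), concentrating the likelihood over the per-channel amplitudes, phases, and noise powers leaves a cost of the form Σ_c log RSS_c(f0). The code below is a simplified sketch under that assumption, not the authors' implementation.

```python
import numpy as np

def ml_pitch(channels, fs, f0_grid, n_harm=5):
    """Grid-search ML pitch for channels sharing one fundamental f0:
    per channel, fit a harmonic cos/sin basis by least squares and
    accumulate log residual sums of squares; minimize over f0."""
    N = len(channels[0])
    t = np.arange(N) / fs
    best_cost, best_f0 = np.inf, None
    for f0 in f0_grid:
        # harmonic basis at h*f0, h = 1..n_harm
        Z = np.column_stack([fn(2 * np.pi * f0 * h * t)
                             for h in range(1, n_harm + 1)
                             for fn in (np.cos, np.sin)])
        cost = 0.0
        for x in channels:
            amp = np.linalg.lstsq(Z, x, rcond=None)[0]
            cost += np.log(np.sum((x - Z @ amp) ** 2))
        if cost < best_cost:
            best_cost, best_f0 = cost, f0
    return best_f0

# two synthetic channels, same 220 Hz fundamental, different amplitudes,
# phases and noise levels (hypothetical test signal, not from the paper)
fs = 8000
t = np.arange(800) / fs
rng = np.random.default_rng(1)
ch1 = (np.cos(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
       + 0.4 * np.cos(2 * np.pi * 660 * t) + 0.05 * rng.standard_normal(t.size))
ch2 = (0.3 * np.sin(2 * np.pi * 220 * t + 1.0)
       + 0.2 * np.cos(2 * np.pi * 660 * t) + 0.2 * rng.standard_normal(t.size))
f0_hat = ml_pitch([ch1, ch2], fs, np.arange(100.0, 400.0, 1.0))
```

Because each channel contributes only through its own residual, channels with poor signal-to-noise ratios are automatically down-weighted relative to a naive pooled fit.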
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
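The sampling property mentioned above has a very simple special case. If the feature is z = T(x) = ‖x‖ and no reference hypothesis is involved, the maximum entropy p(x) consistent with a given p(z) is uniform on each level set {‖x‖ = z}, so sampling reduces to drawing z and then a uniform direction. This toy sketch assumes that specific feature, not the general construction of the paper.

```python
import numpy as np

def sample_pdf_projection(n_dim, sample_z, n_samples, rng):
    """Toy MaxEnt PDF projection for z = T(x) = ||x||: draw z ~ p(z),
    then a uniform random direction on the unit sphere (obtained by
    normalizing an isotropic Gaussian), and scale."""
    z = sample_z(n_samples, rng)                    # feature samples
    d = rng.standard_normal((n_samples, n_dim))     # isotropic Gaussian
    d /= np.linalg.norm(d, axis=1, keepdims=True)   # uniform direction
    return d * z[:, None]

rng = np.random.default_rng(0)
# hypothetical feature law: z uniform on [1, 2]
xs = sample_pdf_projection(3, lambda n, r: r.uniform(1.0, 2.0, n), 20000, rng)
```

By construction the norms of the samples follow the prescribed p(z) exactly, while the directions carry no information, which is what maximizing entropy demands.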
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
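The fixed-point idea for line fluxes can be illustrated for a single line of known profile on a known background: setting the derivative of the Poisson log-likelihood to zero yields a multiplicative update for the amplitude. The sketch below is a hypothetical reconstruction in that spirit, not CORA itself.

```python
import numpy as np

def fit_line_flux(counts, profile, background, n_iter=200):
    """ML flux A for Poisson counts n_i with model m_i = A*p_i + b_i.
    dlogL/dA = sum_i p_i (n_i/m_i - 1) = 0 gives the fixed point
    A <- A * sum(n*p/m) / sum(p), iterated to convergence."""
    A = max(float(counts.sum() - background.sum()), 1.0)   # crude start
    for _ in range(n_iter):
        m = A * profile + background
        A *= np.sum(counts * profile / m) / np.sum(profile)
    return A

# synthetic spectrum: unit-flux Gaussian line profile on a flat background
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 101)
profile = np.exp(-0.5 * x ** 2)
profile /= profile.sum()                  # unit total flux
background = np.full_like(x, 2.0)
flux_hat = fit_line_flux(rng.poisson(300.0 * profile + background),
                         profile, background)
```

Unlike a Gaussian (chi-square) fit, this estimate remains well behaved in bins with very few counts, which is the low-count regime the package targets.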
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes structure and measurement results of the system detecting present maximum temperature on the surface of an integrated circuit. The system consists of the set of proportional to absolute temperature sensors, temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit where designed with full-custom technique. The system is a part of temperature-controlled oscillator circuit - a power management system based on dynamic frequency scaling method. The oscillator cooperates with microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Global Optimization Using Diffusion Perturbations with Large Noise Intensity
G. Yin; K. Yin
2006-01-01
This work develops an algorithm for global optimization. The algorithm is of gradient ascent type and uses random perturbations. In contrast to annealing type procedures, the perturbation noise intensity is large. We demonstrate that by properly varying the noise intensity, approximations to the global maximum can be achieved. We also show that the expected time to reach the domain of attraction of the global maximum, which can be approximated by the solution of a boundary value problem, is finite. Discrete-time algorithms are proposed; recursive algorithms with occasional perturbations involving large noise intensity are developed. Numerical examples are provided for illustration.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T^(-1)(I - A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Beam intensity upgrade at Fermilab
Marchionni, A.; /Fermilab
2006-07-01
The performance of the Fermilab proton accelerator complex is reviewed. The coming into operation of the NuMI neutrino line and the implementation of slip-stacking to increase the anti-proton production rate have pushed the total beam intensity in the Main Injector up to ~3 × 10^13 protons/pulse. A maximum beam power of 270 kW has been delivered on the NuMI target during the first year of operation. A plan is in place to increase it to 350 kW, in parallel with the operation of the Collider program. As more machines of the Fermilab complex become available with the termination of the Collider operation, a set of upgrades are being planned to reach first 700 kW and then 1.2 MW by reducing the Main Injector cycle time and by implementing proton stacking.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Macintire, D K
1999-07-01
To provide optimal care, a veterinarian in a pediatric intensive care situation for a puppy or kitten should be familiar with normal and abnormal vital signs, nursing care and monitoring considerations, and probable diseases. This article is a brief discussion of the pediatric intensive care commonly required to treat puppies or kittens in emergency situations and for canine parvovirus type 2 enteritis.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
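For reference, the specification and the elegant linear-time solution look like this in plain (non-monadic) Python; the linear version is the classic scan usually attributed to Kadane:

```python
def max_segment_sum(xs):
    """Linear-time scan (Kadane): cur is the best sum of a segment ending
    at the current element (the empty segment is allowed), best is the
    best seen anywhere so far."""
    best = cur = 0
    for x in xs:
        cur = max(0, cur + x)
        best = max(best, cur)
    return best

def max_segment_sum_spec(xs):
    """Cubic-time transcription of the specification, for cross-checking:
    maximize sum(xs[i:j]) over all 0 <= i <= j <= len(xs)."""
    n = len(xs)
    return max(sum(xs[i:j]) for i in range(n + 1) for j in range(i, n + 1))
```

On Bentley's classic test array both agree on 187; on an all-negative list both return 0, the sum of the empty segment.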
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + √2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)}, with μ_0 < μ_1. The optimal control takes the maximal value μ_1 when X_t > g_*(S_t), where s ↦ g_*(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index; deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
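The spectral calculation behind such numbers is straightforward to approximate: weight a source spectrum by the photopic sensitivity and the 683 lm/W peak. In the sketch below the CIE photopic curve is replaced by a Gaussian centered at 555 nm (an assumption; the real curve is tabulated and slightly asymmetric), so the result is only indicative of the range quoted above.

```python
import numpy as np

def luminous_efficacy(T=5600.0, band=(400e-9, 700e-9)):
    """Approximate spectral luminous efficacy (lm/W) of a blackbody
    spectrum truncated to a visible bandpass:
    K = 683 * integral(V*B) / integral(B) over the band, with the
    photopic curve V approximated by a Gaussian (sigma ~ 45 nm)."""
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    lam = np.linspace(band[0], band[1], 2000)
    B = (2 * h * c**2 / lam**5) / (np.exp(h * c / (lam * kB * T)) - 1.0)
    V = np.exp(-0.5 * ((lam - 555e-9) / 45e-9) ** 2)  # photopic approx
    return 683.0 * np.sum(V * B) / np.sum(B)          # uniform grid: dλ cancels
```

For a 5600 K blackbody truncated to 400-700 nm this lands in the 250-300 lm/W region, consistent with the lower end of the range quoted in the abstract; narrowing the bandpass pushes the value higher.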
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
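A pairwise maximum entropy model of this kind can be fitted by moment matching for a handful of binary units (recession/expansion states): gradient ascent on the concave log-likelihood drives the model means and pairwise correlations to the data values. The sketch below uses exact enumeration over all 2^n states, which is feasible for G7-sized systems; it is a toy illustration, not the paper's fitting procedure.

```python
import itertools
import numpy as np

def fit_pairwise_maxent(mu, C, n_iter=3000, lr=0.2):
    """Fit h, J of the pairwise maximum entropy (Ising-like) model
    P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), s_i in {-1,+1},
    by moment matching: ascend the log-likelihood until the model
    means/correlations equal the data values mu_i, C_ij (C given as
    the upper-triangular vector, i < j)."""
    n = len(mu)
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    iu, ju = np.triu_indices(n, 1)
    pair = states[:, iu] * states[:, ju]      # s_i s_j features, i < j
    h, J = np.zeros(n), np.zeros(len(iu))
    for _ in range(n_iter):
        logp = states @ h + pair @ J
        p = np.exp(logp - logp.max())
        p /= p.sum()
        h += lr * (mu - p @ states)           # match first moments
        J += lr * (C - p @ pair)              # match pair correlations
    return h, J

# sanity check: recover known parameters from exact model moments
n = 4
h_true = np.array([0.2, -0.1, 0.1, 0.0])
J_true = np.array([0.3, -0.2, 0.1, 0.2, 0.0, -0.1])
states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
iu, ju = np.triu_indices(n, 1)
pair = states[:, iu] * states[:, ju]
w = np.exp(states @ h_true + pair @ J_true)
w /= w.sum()
h_hat, J_hat = fit_pairwise_maxent(w @ states, w @ pair)
```

Because the log-likelihood of an exponential family is concave in (h, J), this simple ascent converges to the unique moment-matching solution.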
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches only focus on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layers segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
Okamoto, T; Masuhara, M; Ikuta, K
2013-05-01
Although high-intensity resistance training increases arterial stiffness, low-intensity resistance training reduces arterial stiffness. The present study investigates the effect of low-intensity resistance training performed before or after high-intensity resistance training on arterial stiffness. 30 young healthy subjects were randomly assigned to a group that performed low-intensity resistance training before high-intensity resistance training (BLRT, n=10), a group that performed low-intensity resistance training after high-intensity resistance training (ALRT, n=10) and a sedentary control group (n=10). The BLRT and ALRT groups performed resistance training at 80% and 50% of one repetition maximum twice each week for 10 wk. Arterial stiffness was measured using carotid-femoral and femoral-ankle pulse wave velocity (PWV). One-repetition maximum strength in both the ALRT and BLRT groups significantly increased after the intervention. Carotid-femoral PWV after combined training in the ALRT group did not change from before training. In contrast, carotid-femoral PWV after combined training in the BLRT group increased from before training. Femoral-ankle PWV after training in both the BLRT and ALRT groups did not change from before training. These results suggest that although arterial stiffness is increased by low-intensity resistance training before high-intensity resistance training, performing low-intensity resistance training thereafter can prevent the increase of arterial stiffness. © Georg Thieme Verlag KG Stuttgart · New York.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated to investigate the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained with grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower cutting-force-to-maximum-strength ratios in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Xu, L Y; Xie, X D; Li, S
2013-07-01
This study combines the methods of observation statistics and remote sensing retrieval, using remote sensing information including the urban heat island (UHI) intensity index, the normalized difference vegetation index (NDVI), the normalized difference water index (NDWI), and the difference vegetation index (DVI) to analyze the correlation between the urban heat island effect and the spatial and temporal concentration distributions of atmospheric particulates in Beijing. The analysis establishes (1) a direct correlation between UHI and DVI; (2) an indirect correlation among UHI, NDWI and DVI; and (3) an indirect correlation among UHI, NDVI, and DVI. The results proved the existence of three correlation types with regional and seasonal effects and revealed an interesting correlation between UHI and DVI, that is, if UHI is below 0.1, then DVI increases with the increase in UHI, and vice versa. Also, DVI changes more with UHI in the two middle zones of Beijing.
Intensity Biased PSP Measurement
Subramanian, Chelakara S.; Amer, Tahani R.; Oglesby, Donald M.; Burkett, Cecil G., Jr.
2000-01-01
The current pressure sensitive paint (PSP) technique assumes a linear relationship (Stern-Volmer Equation) between intensity ratio (I(sub o)/I) and pressure ratio (P/P(sub o)) over a wide range of pressures (vacuum to ambient or higher). Although this may be valid for some PSPs, in most PSPs the relationship is nonlinear, particularly at low pressures (less than 0.2 psia when the oxygen level is low). This non-linearity can be attributed to variations in the oxygen quenching (de-activation) rates (which otherwise are assumed constant) at these pressures. Other studies suggest that some paints also have non-linear calibrations at high pressures, because of heterogeneous (non-uniform) oxygen diffusion and quenching. Moreover, pressure sensitive paints require correction of the output intensity due to light intensity variation, paint coating variation, model dynamics, wind-off reference pressure variation, and temperature sensitivity. Therefore, to minimize the measurement uncertainties due to these causes, an in-situ intensity correction method was developed. A non-oxygen quenched paint (which provides a constant intensity at all pressures, called non-pressure sensitive paint, NPSP) was used for the reference intensity (I(sub NPSP)) with respect to which all the PSP intensities (I) were measured. The results of this study show that in order to fully reap the benefits of this technique, a totally oxygen impermeable NPSP must be available.
Stretching Effects: High-intensity & Moderate-duration vs. Low-intensity & Long-duration.
Freitas, S R; Vaz, J R; Bruno, P M; Andrade, R; Mil-Homens, P
2016-03-01
This study examined whether a high-intensity, moderate-duration bout of stretching would produce the same acute effects as a low-intensity, long-duration bout of stretching. 17 volunteers performed 2 knee-flexor stretching protocols: a high-intensity stretch (i.e., 100% of maximum tolerable passive torque) with a moderate duration (243.5 ± 69.5 s); and a low-intensity stretch (50% of tolerable passive torque) with a long duration (900 s). Passive torque at a given sub-maximal angle, peak passive torque, maximal range of motion (ROM), and muscle activity were assessed before and after each stretching protocol (at intervals of 1, 30 and 60 min). The maximal ROM and tolerable passive torque increased at all time points following the high-intensity stretching, but not following the low-intensity stretching. 1 min post-stretching, the passive torque decreased in both protocols, but to a greater extent in the low-intensity protocol. 30 min post-test, torque returned to baseline for the low-intensity protocol and had increased above the baseline for the high-intensity stretches. The following can be concluded: 1) high-intensity stretching increases the maximal ROM and peak passive torque compared to low-intensity stretching; 2) low-intensity, long-duration stretching is the best way to acutely decrease passive torque; and 3) high-intensity, moderate-duration stretching increases passive torque above the baseline 30 min after stretching. © Georg Thieme Verlag KG Stuttgart · New York.
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 Employees' Benefits, Creditable Railroad Compensation, § 211.14 Maximum creditable compensation. ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 Transportation, Allowable Stress, § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic; there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, both important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
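The core idea of the record above, maximizing a Poisson likelihood for a parametric line model on low-count data, can be sketched as follows. This is an illustrative toy, not the CORA code: the Gaussian line shape, wavelength grid, starting point, and parameter values are all hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def line_model(params, x):
    # Gaussian emission line on a flat background (all values hypothetical)
    amp, center, sigma, bkg = params
    return bkg + amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def neg_log_likelihood(params, x, counts):
    mu = line_model(params, x)
    if np.any(mu <= 0):
        return np.inf  # Poisson rates must be positive
    # Poisson log-likelihood up to a data-only constant: sum(k log mu - mu)
    return -np.sum(counts * np.log(mu) - mu)

rng = np.random.default_rng(0)
x = np.linspace(13.3, 13.7, 80)         # wavelength grid (Angstrom)
true_params = (12.0, 13.5, 0.02, 1.0)   # amplitude, center, width, background
counts = rng.poisson(line_model(true_params, x))  # simulated low-count spectrum

fit = minimize(neg_log_likelihood, x0=(5.0, 13.45, 0.05, 0.5),
               args=(x, counts), method="Nelder-Mead")
amp, center, sigma, bkg = fit.x
```

A chi-square fit would be biased in this low-count regime; maximizing the Poisson likelihood directly, as above, is the statistically correct choice and is the point of the paper's approach.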
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is given as an upper bound for the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of space is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Proton energy dependence of slow neutron intensity
Teshigawara, Makoto; Harada, Masahide; Watanabe, Noboru; Kai, Tetsuya; Sakata, Hideaki; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ooi, Motoki [Hokkaido Univ., Sapporo (Japan)
2001-03-01
The choice of the proton energy is an important issue for the design of an intense pulsed spallation source. The optimal proton beam energy is rather unique from the viewpoint of the leakage neutron intensity, but not yet clear from the slow-neutron intensity viewpoint; it also depends on the accelerator type. It is therefore important to know the proton energy dependence of slow neutrons from the moderators in a realistic target-moderator-reflector assembly (TMRA). We studied the TMRA proposed for the Japan Spallation Neutron Source. The slow-neutron intensities from the moderators per unit proton beam power (MW) exhibit a maximum at about 1-2 GeV. At higher proton energies the intensity per MW goes down; at 3 and 50 GeV it is about 0.91 and 0.47 times that at 1 GeV. The proton energy dependence of slow-neutron intensities was found to be almost the same as that of the total neutron yield (leakage neutrons) from the same bare target. It was also found that the proton energy dependence was almost the same for the coupled and decoupled moderators, regardless of the different moderator type, geometry and coupling scheme. (author)
Measuring Physical Activity Intensity
The talk test is a simple way to measure relative intensity: during vigorous-intensity activity, you will not be able to say more than a few words without pausing for a breath. Absolute intensity is the amount of energy used by the body while doing the activity. Moderate-intensity activities include walking briskly (3 miles per hour or faster, but not race-walking), water aerobics, bicycling slower than 10 miles per hour, tennis (doubles), ballroom dancing, and general gardening. Vigorous-intensity activities include race walking, jogging, or running, and swimming...
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
[Intensive medicine in Spain].
2011-03-01
Intensive care medicine is a medical specialty that was officially established in our country in 1978, with a 5-year training program including two years of common core training followed by three years of specific training in an intensive care unit accredited for training. During this 32-year period, intensive care medicine has carried out an intense and varied activity, which has positioned it as an attractive specialty with a future in the hospital setting. This document summarizes the history of the specialty, its current situation, the key role played in the programs of organ donation and transplantation of the National Transplant Organization (after more than 20 years of mutual collaboration), its training activities with the development of the National Plan of Cardiopulmonary Resuscitation, with a trajectory of more than 25 years, and its interest in providing care based on quality and safety programs for the severely ill patient. It also describes the development of reference registries, motivated by the need for reliable data on the care process for the most prevalent diseases, such as ischemic heart disease or ICU-acquired infections, based on long-term experience (more than 15 years), which results in the availability of epidemiological information and characteristics of care that may affect practical patient care. Moreover, features of its scientific society (SEMICYUC) are reported, an organization that brings together the interests of more than 280 ICUs and more than 2700 intensivists, with reference to the journal Medicina Intensiva, the official journal of the society and of the Panamerican and Iberian Federation of Critical Medicine and Intensive Care Societies. Medicina Intensiva is indexed in the Thomson Reuters products Science Citation Index Expanded (Scisearch®) and Journal Citation Reports, Science Edition. The important contribution of Spanish intensive care medicine to the scientific community is also analyzed, and in relation to
Critchlow, Terence
2013-01-01
Data-intensive science has the potential to transform scientific research and quickly translate scientific progress into complete solutions, policies, and economic success. But this collaborative science is still lacking the effective access and exchange of knowledge among scientists, researchers, and policy makers across a range of disciplines. Bringing together leaders from multiple scientific disciplines, Data-Intensive Science shows how a comprehensive integration of various techniques and technological advances can effectively harness the vast amount of data being generated and significan
CERN Bulletin
2010-01-01
Over the past 2 weeks, commissioning of the machine protection system has advanced significantly, opening up the possibility of higher intensity collisions at 3.5 TeV. The intensity has been increased from 2 bunches of 10^10 protons to 6 bunches of 2x10^10 protons. Luminosities of 6x10^28 cm^-2 s^-1 have been achieved at the start of fills, a factor of 60 higher than those provided for the first collisions on 30 March. The recent increase in LHC luminosity as recorded by the experiments. (Graph courtesy of the experiments and M. Ferro-Luzzi) To increase the luminosity further, the commissioning crews are now trying to push up the intensity of the individual proton bunches. After the successful injection of nominal intensity bunches containing 1.1x10^11 protons, collisions were subsequently achieved at 450 GeV with these intensities. However, half-way through the first ramping of these nominal intensity bunches to 3.5 TeV on 15 May, a beam instability was observed, leading to partial beam loss...
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Joint modelling of annual maximum drought severity and corresponding duration
Tosunoglu, Fatih; Kisi, Ozgur
2016-12-01
In recent years, the joint distribution properties of drought characteristics (e.g. severity, duration and intensity) have been widely evaluated using copulas. However, history of copulas in modelling drought characteristics obtained from streamflow data is still short, especially in semi-arid regions, such as Turkey. In this study, unlike previous studies, drought events are characterized by annual maximum severity (AMS) and corresponding duration (CD) which are extracted from daily streamflow of the seven gauge stations located in Çoruh Basin, Turkey. On evaluation of the various univariate distributions, the Exponential, Weibull and Logistic distributions are identified as marginal distributions for the AMS and CD series. Archimedean copulas, namely Ali-Mikhail-Haq, Clayton, Frank and Gumbel-Hougaard, are then employed to model joint distribution of the AMS and CD series. With respect to the Anderson Darling and Cramér-von Mises statistical tests and the tail dependence assessment, Gumbel-Hougaard copula is identified as the most suitable model for joint modelling of the AMS and CD series at each station. Furthermore, the developed Gumbel-Hougaard copulas are used to derive the conditional and joint return periods of the AMS and CD series which can be useful for designing and management of reservoirs in the basin.
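As a concrete illustration of how a fitted copula of this kind is used, here is a minimal sketch of the Gumbel-Hougaard copula and the "AND" joint return period (severity and duration both exceeded). The parameter θ and the marginal quantiles below are hypothetical, not values fitted to the Çoruh Basin data.

```python
import math

def gumbel_hougaard(u, v, theta):
    # C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1;
    # theta = 1 gives independence, larger theta gives upper-tail dependence.
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def joint_and_return_period(u, v, theta, mu=1.0):
    # P(S > s AND D > d) = 1 - u - v + C(u, v); mu is the mean interarrival
    # time between events (1 year for an annual-maximum series).
    return mu / (1.0 - u - v + gumbel_hougaard(u, v, theta))

# Hypothetical example: both marginals at their 0.9 quantile, theta = 2
T_and = joint_and_return_period(0.9, 0.9, theta=2.0)
```

Under independence (θ = 1) the same pair of 10-year marginal events would have a 100-year joint return period; positive dependence between severity and duration makes the joint event much more frequent, which is precisely why the copula matters for reservoir design.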
Maximum Likelihood Sequence Detection Receivers for Nonlinear Optical Channels
Gabriel N. Maggio
2015-01-01
The space-time whitened matched filter (ST-WMF) maximum likelihood sequence detection (MLSD) architecture has been recently proposed (Maggio et al., 2014). Its objective is reducing implementation complexity in transmissions over nonlinear dispersive channels. The ST-WMF-MLSD receiver (i) drastically reduces the number of states of the Viterbi decoder (VD) and (ii) offers a smooth trade-off between performance and complexity. In this work the ST-WMF-MLSD receiver is investigated in detail. We show that the space compression of the nonlinear channel is an instrumental property of the ST-WMF-MLSD which results in a major reduction of the implementation complexity in intensity modulation and direct detection (IM/DD) fiber optic systems. Moreover, we assess the performance of ST-WMF-MLSD in IM/DD optical systems with chromatic dispersion (CD) and polarization mode dispersion (PMD). Numerical results for a 10 Gb/s, 700 km, IM/DD fiber-optic link with 50 ps differential group delay (DGD) show that the number of states of the VD in ST-WMF-MLSD can be reduced ~4 times compared to an oversampled MLSD. Finally, we analyze the impact of imperfect channel estimation on the performance of the ST-WMF-MLSD. Our results show that the performance degradation caused by channel estimation inaccuracies is low and similar to that achieved by existing MLSD schemes (~0.2 dB).
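For readers unfamiliar with MLSD, the Viterbi recursion it relies on can be sketched for a toy linear 2-tap intersymbol-interference channel. This is generic MLSD, not the ST-WMF receiver; the channel taps, alphabet, and known initial symbol are hypothetical. The trellis state is the previous symbol, so a channel with memory L over an M-ary alphabet needs M^L states, and it is exactly this state count that the ST-WMF front end reduces.

```python
import math

h = [1.0, 0.5]          # hypothetical channel taps (memory of one symbol)
symbols = [-1.0, 1.0]   # BPSK alphabet

def viterbi_mlsd(y, known_first_prev=-1.0):
    # State = previously transmitted symbol; branch metric = squared error.
    metrics = {s: (0.0 if s == known_first_prev else math.inf) for s in symbols}
    paths = {s: [] for s in symbols}
    for r in y:
        new_metrics, new_paths = {}, {}
        for cur in symbols:
            # Survivor selection over predecessor states for transition prev -> cur
            best_prev = min(
                symbols,
                key=lambda p: metrics[p] + (r - (h[0] * cur + h[1] * p)) ** 2)
            new_metrics[cur] = (metrics[best_prev]
                                + (r - (h[0] * cur + h[1] * best_prev)) ** 2)
            new_paths[cur] = paths[best_prev] + [cur]
        metrics, paths = new_metrics, new_paths
    return paths[min(metrics, key=metrics.get)]

tx = [1.0, -1.0, -1.0, 1.0]
prev = [-1.0] + tx[:-1]
y = [h[0] * a + h[1] * b for a, b in zip(tx, prev)]  # noiseless received samples
est = viterbi_mlsd(y)
```

In the noiseless case the zero-metric path coincides with the transmitted sequence; with noise, the same recursion returns the maximum likelihood sequence under Gaussian statistics.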
The maximum contribution to reionization from metal-free stars
Rozas, J M; Salvador-Solé, E; Rozas, Jose M.; Miralda-Escude, Jordi; Salvador-Sole, Eduard
2005-01-01
We estimate the maximum contribution to reionization from the first generation of massive stars, with zero metallicity, under the assumption that one of these stars forms with a fixed mass in every collapsed halo in which metal-free gas is able to cool. We assume that any halo that has already had stars formed in one of its halo progenitors will form only stars with metals, which are assigned an emissivity of ionizing radiation equal to that determined at z=4 from the measured intensity of the ionizing background. We examine the impact of molecular hydrogen photodissociation (which tends to reduce cooling when a photodissociating background is produced by the first stars) and X-ray photoheating (which heats the atomic medium, raising the entropy of the gas before it collapses into halos). We find that in the $\Lambda$CDM model supported by present observations, and even assuming no negative feedbacks for the formation of metal-free stars, a reionized mass fraction of 50% is not reached until reds...
On the maximum-entropy method for kinetic equation of radiation, particle and gas
El-Wakil, S.A. [Mansoura Univ. (Egypt). Phys. Dept.; Madkour, M.A. [Mansoura Univ. (Egypt). Phys. Dept.; Degheidy, A.R. [Mansoura Univ. (Egypt). Phys. Dept.; Machali, H.M. [Mansoura Univ. (Egypt). Phys. Dept.
1995-02-01
The maximum-entropy approach is used to calculate some problems in radiative transfer and reactor physics such as the escape probability, the emergent and transmitted intensities for a finite slab, as well as the emergent intensity for a semi-infinite medium. Also, it is employed to solve problems involving spherical geometry, such as luminosity (the total energy emitted by a sphere), neutron capture probability and the albedo problem. The technique is also employed in the kinetic theory of gases to calculate the Poiseuille flow and thermal creep of a rarefied gas between two plates. Numerical calculations are achieved and compared with the published data. The comparisons demonstrate that the maximum-entropy results are in good agreement with the exact ones. (orig.)
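The flavor of the maximum-entropy method can be shown with a toy angular-distribution problem (illustrative only, not the paper's transfer calculations; the target mean below is hypothetical). Among intensity distributions p(μ) on the direction cosine μ ∈ [-1, 1] with a prescribed mean, entropy is maximized by the exponential form p(μ) ∝ exp(λμ), and the Lagrange multiplier λ is fixed by the moment constraint:

```python
import numpy as np
from scipy.optimize import brentq

mu = np.linspace(-1.0, 1.0, 2001)   # direction cosine grid

def mean_of(lam):
    # Mean of the maximum-entropy density p(mu) proportional to exp(lam * mu);
    # normalization cancels in the ratio on a uniform grid.
    w = np.exp(lam * mu)
    return float((mu * w).sum() / w.sum())

target_mean = 0.3   # hypothetical constraint, e.g. a prescribed net flux
lam = brentq(lambda l: mean_of(l) - target_mean, -50.0, 50.0)
```

Analytically this mean is the Langevin function coth(λ) - 1/λ, so the root-find simply inverts it; higher moments would add further multipliers in the exponent in the same way.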
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
Anonymous
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Silver, Jeremy D; Ritchie, Matthew E; Smyth, Gordon K
2009-01-01
is developed for exact maximum likelihood estimation (MLE) using high-quality optimization software and using the saddle-point estimates as starting values. "MLE" is shown to outperform heuristic estimators proposed by other authors, both in terms of estimation accuracy and in terms of performance on real data. The saddle-point approximation is an adequate replacement in most practical situations. The performance of normexp for assessing differential expression is improved by adding a small offset to the corrected intensities.
Lewis, W.B
1966-07-01
The presentation discusses both the economic and research contexts that would be served by producing neutrons in gram quantities at high intensities by electrical means without uranium-235. The revenue from producing radioisotopes is attractive. The array of techniques introduced by the multipurpose 65 megawatt Intense Neutron Generator project includes liquid metal cooling, superconducting magnets for beam bending and focussing, superconductors for low-loss high-power radiofrequency systems, efficient devices for producing radiofrequency power, plasma physics developments for producing and accelerating hydrogen ions at high intensity that are still far beyond established practice, and a multimegawatt high-voltage D.C. generating machine that could have several applications. The research fields served relate principally to materials science through neutron-phonon and other quantum interactions as well as through neutron diffraction. Nuclear physics is served through μ-, π- and K-meson production. Isotope production enters many fields of applied research. (author)
2014-12-12
Journal article (dates covered: 01 Oct 2014 - 30 Nov 2014). Title: Estimate of Solar Maximum Using the 1–8 Å... The aim is to predict the intensity and date of the solar maximum of the current solar cycle. The solar cycle 24 prediction panel (Biesecker & Prediction Panel 2007)... The statement of the solar cycle 24 prediction panel is available at http://www.swpc.noaa.gov/SolarCycle/SC24/.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Intensity contrast of the average supergranule
Langfellner, J; Gizon, L
2016-01-01
While the velocity fluctuations of supergranulation dominate the spectrum of solar convection at the solar surface, very little is known about the fluctuations in other physical quantities like temperature or density at supergranulation scale. Using SDO/HMI observations, we characterize the intensity contrast of solar supergranulation at the solar surface. We identify the positions of ${\\sim}10^4$ outflow and inflow regions at supergranulation scales, from which we construct average flow maps and co-aligned intensity and magnetic field maps. In the average outflow center, the maximum intensity contrast is $(7.8\\pm0.6)\\times10^{-4}$ (there is no corresponding feature in the line-of-sight magnetic field). This corresponds to a temperature perturbation of about $1.1\\pm0.1$ K, in agreement with previous studies. We discover an east-west anisotropy, with a slightly deeper intensity minimum east of the outflow center. The evolution is asymmetric in time: the intensity excess is larger 8 hours before the reference t...
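The quoted temperature perturbation can be checked against the contrast with a one-line blackbody estimate. The assumptions here are mine, not the paper's: emission scales as T^4 (Stefan-Boltzmann), so dI/I ≈ 4 dT/T, and the quiet-Sun effective temperature is taken as 5778 K.

```python
# Blackbody check: I ∝ T^4  =>  dI/I ≈ 4 dT/T (Stefan-Boltzmann law)
T_eff = 5778.0      # quiet-Sun effective temperature in kelvin (assumed)
contrast = 7.8e-4   # maximum intensity contrast quoted in the abstract
dT = T_eff * contrast / 4.0
```

This gives roughly 1.1 K, consistent with the 1.1 ± 0.1 K quoted above; the HMI continuum band is not strictly bolometric, so this is only an order-of-magnitude consistency check.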
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
An Ocean-Based Potential Intensity Index for Tropical Cyclones
Lin, I. I.; Black, P. G.; Price, J. F.; Yang, C.; Chen, S. S.; Chi, N.; Harr, P.; Lien, C.; D'Asaro, E. A.; Wu, C.
2012-12-01
Improvement in tropical cyclone intensity prediction is an important ongoing effort. Cooling of the ocean by storm mixing reduces storm intensity by reducing the air-sea enthalpy flux. Here, we modify the widely used Sea Surface Temperature Potential Intensity (SST_PI) index by including information from the upper subsurface ocean to form a new Ocean Cooling Potential Intensity index, OC_PI. Applied to a 14-year (1998-2011) Western Pacific typhoon archive, the correlation coefficient between the predicted maximum intensity and the observed peak intensity increased from 0.08 to 0.31. For the subgroup of slow-moving TCs that has the strongest interaction with the subsurface ocean, r2 increases to 0.56. OC_PI thus improves on the existing PI by incorporating the ocean's subsurface information.
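The r and r^2 statistics quoted above are ordinary Pearson correlations between predicted and observed peak intensities. A minimal sketch of that computation, on made-up illustrative numbers rather than the paper's typhoon archive:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Illustrative (made-up) predicted vs. observed peak intensities, in knots:
predicted = [120, 135, 150, 110, 140, 125]
observed  = [115, 130, 155, 100, 150, 120]
r = pearson_r(predicted, observed)
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")
```
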
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10–100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 (2010-04-01) Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 (2010-07-01) Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 (2010-01-01) Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO2max) and one-repetition maximum (1RM)) to determine...
CO2 maximum in the oxygen minimum zone (OMZ)
Paulmier, A.; Ruiz-Pino, D.; Garçon, V.
2011-02-01
Oxygen minimum zones (OMZs), known as suboxic layers mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to contribute significantly to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the oceanic sources and sinks budget of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure, associated locally with the Chilean OMZ and globally with the main most intense OMZs in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000–2002) and a monthly monitoring (2000–2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg−1, up to 2350 μmol kg−1) have been reported over the whole OMZ thickness, allowing the definition of a Carbon Maximum Zone (CMZ) for all studied OMZs. Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence), where the CMZ-OMZ core originates. The "carbon deficit" in the CMZ core would be mainly compensated locally at the oxycline by a "carbon excess" induced by specific remineralization. Indeed, a possible co-existence of bacterial heterotrophic and autotrophic processes usually occurring at different depths could
Doumy, G
2006-01-15
The continuous progress in the development of laser installations has already led to ultra-short pulses capable of achieving very high focused intensities (I > 10^18 W/cm^2). At these intensities, matter exhibits new non-linear behaviours, because the electrons are accelerated to relativistic speeds. Experimental access to this interaction regime on solid targets was long precluded by the presence, alongside the femtosecond pulse, of a pedestal (mainly due to the amplified spontaneous emission (ASE) occurring in the laser chain) intense enough to modify the state of the target. In this thesis, we first characterized, both experimentally and theoretically, a device that improves the temporal contrast of the pulse: the plasma mirror. It consists in adjusting the focusing of the pulse on a dielectric target so that the pedestal is mainly transmitted, while the main pulse is reflected by the overcritical plasma it forms at the surface. The implementation of such a device on the UHI 10 laser facility (CEA Saclay - 10 TW - 60 fs) then allowed us to study the interaction of ultra-intense, high-contrast pulses with solid targets. In a first part, we managed to generate and characterize dense plasmas resulting directly from the interaction between the main pulse and very thin foils (100 nm). This characterization was realized using an XUV source obtained by high-order harmonic generation in a rare gas jet. In a second part, we studied experimentally the phenomenon of high-order harmonic generation on solid targets, which is still poorly understood but could potentially lead to a new kind of energetic ultra-short XUV source. (author)
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively the maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics, such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees of Life. We provide two linear time algorithms to check the isomorphism, respectively compatibility, of a set of trees, or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
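The isomorphism test at the heart of MAST can be illustrated, in a much-simplified form, by computing an AHU-style canonical string for rooted trees with labeled leaves; two such trees are isomorphic (with matching leaf labels) exactly when their canonical forms agree. The nested-tuple encoding below is an assumption for illustration, not the paper's data structure:

```python
def canonical(tree):
    """AHU-style canonical form of a rooted tree given as nested tuples whose
    leaves are taxon labels (strings); two trees are isomorphic with matching
    leaf labels iff their canonical strings are equal."""
    if isinstance(tree, str):  # leaf: a taxon label
        return tree
    # Sort the children's canonical forms so child order does not matter.
    return "(" + ",".join(sorted(canonical(c) for c in tree)) + ")"

t1 = (("a", "b"), ("c", ("d", "e")))
t2 = ((("e", "d"), "c"), ("b", "a"))
print(canonical(t1) == canonical(t2))  # True: same tree up to child reordering
```
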
Murata, A.; Sasaki, H.; Hanafusa, M.; Kurihara, K.
2012-12-01
We evaluated the performance of a well-developed nonhydrostatic regional climate model (NHRCM) with a spatial resolution of 5 km with respect to temperature in the present-day climate of Japan, and estimated urban heat island (UHI) intensity by comparing the model results and observations. The magnitudes of root mean square error (RMSE) and systematic error (bias) for the annual averages of daily mean (Ta), maximum (Tx), and minimum (Tn) temperatures are within 1.5 K, demonstrating that the temperatures of the present-day climate are reproduced well by NHRCM. These small errors indicate that temperature variability produced by local-scale phenomena is represented well by the model with a higher spatial resolution. It is also found that the magnitudes of RMSE and bias in the annually-averaged Tx are relatively large compared with those in Ta and Tn. The horizontal distributions of the error, defined as the difference between simulated and observed temperatures (simulated minus observed), illustrate negative errors in the annually-averaged Tn in three major metropolitan areas: Tokyo, Osaka, and Nagoya. These negative errors in urban areas affect the cold bias in the annually-averaged Tx. The relation between the underestimation of temperature and the degree of urbanization is therefore examined quantitatively using National Land Numerical Information provided by the Ministry of Land, Infrastructure, Transport, and Tourism. The annually-averaged Ta, Tx, and Tn are all underestimated in areas where the degree of urbanization is relatively high. The underestimations in these areas are attributed to the treatment of urban areas in NHRCM, where the effects of urbanization, such as waste heat and artificial structures, are not included. In contrast, in rural areas, the simulated Tx is underestimated and Tn is overestimated, although the errors in Ta are small. This indicates that the simulated diurnal temperature range is underestimated. The reason for the relatively large
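The bias and RMSE scores used above are standard error summaries of simulated-minus-observed differences. A minimal sketch, on made-up station values rather than the NHRCM output:

```python
import math

def bias_and_rmse(simulated, observed):
    """Systematic error (bias = mean of sim - obs) and RMSE between two series."""
    errors = [s - o for s, o in zip(simulated, observed)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse

# Illustrative (made-up) annual-mean temperatures in K at a few stations:
sim = [288.1, 285.4, 290.2, 287.0]
obs = [288.9, 285.0, 291.5, 287.4]
b, r = bias_and_rmse(sim, obs)
print(f"bias = {b:+.2f} K, RMSE = {r:.2f} K")
```

A negative bias, as here, corresponds to the "cold bias" the abstract describes.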
Michelsen, Anders Ib
2012-01-01
In the book Intensive Culture, Scott Lash argues for a turn from the "extensive" to the "intensive" in contemporary globalization. The book's starting point is an ever more extensive and pervasive globalization of culture and of forms of consumption and commodities: "contemporary culture, today's capitalism – our global information society – is ever more extensive". This, however, entails a paradox, because the extensive culture tips over into intensive cultural forms: "Given this growing extensification of contemporary culture, on another level and at the same time, we seem to be experiencing a parallel phenomenon whose colours...", patterns of cohabitation, etc.; "the sheer pace of life in the streets of today's mega-city would seem somehow to be intensive".
Bissell, David; Vannini, Phillip; Jensen, Ole B.
2016-01-01
This paper explores the intensities of long-distance commuting journeys in order to understand how bodily sensibilities become attuned to the regular mobilities which they undertake. More people are travelling farther to and from work than ever before, owing to a variety of factors which relate to complex social and geographical dynamics of transport, housing, lifestyle, and employment. Yet the experiential dimensions of long-distance commuting have not received the attention that they deserve within research on mobilities. Drawing from fieldwork conducted in Australia, Canada, and Denmark, this paper aims to further develop our collective understanding of the experiential particulars of long-distance workers or 'supercommuters'. Rather than focusing on the extensive dimensions of mobilities that are implicated in broad social patterns and trends, our paper turns to the intensive dimensions...
Seipt, D; Marklund, M; Bulanov, S S
2016-01-01
The interaction of charged particles and photons with intense electromagnetic fields gives rise to multi-photon Compton and Breit-Wheeler processes. These are usually described in the framework of the external field approximation, where the electromagnetic field is assumed to have infinite energy. However, the multi-photon nature of these processes implies the absorption of a significant number of photons, which scales as the external field amplitude cubed. As a result, the interaction of a highly charged electron bunch with an intense laser pulse can lead to significant depletion of the laser pulse energy, thus rendering the external field approximation invalid. We provide relevant estimates for this depletion and find it to become important in the interaction between fields of amplitude $a_0 \\sim 10^3$ and electron bunches with charges of the order of nC.
Seipt, D.; Heinzl, T.; Marklund, M.; Bulanov, S. S.
2017-04-01
The interaction of charged particles and photons with intense electromagnetic fields gives rise to multiphoton Compton and Breit-Wheeler processes. These are usually described in the framework of the external field approximation, where the electromagnetic field is assumed to have infinite energy. However, the multiphoton nature of these processes implies the absorption of a significant number of photons, which scales as the external field amplitude cubed. As a result, the interaction of a highly charged electron bunch with an intense laser pulse can lead to significant depletion of the laser pulse energy, thus rendering the external field approximation invalid. We provide relevant estimates for this depletion and find it to become important in the interaction between fields of amplitude a0 ~ 10^3 and electron bunches with charges of the order of 10 nC.
Intense fusion neutron sources
Kuteev, B. V.; Goncharov, P. R.; Sergeev, V. Yu.; Khripunov, V. I.
2010-04-01
The review describes physical principles underlying efficient production of free neutrons, up-to-date possibilities and prospects of creating fission and fusion neutron sources with intensities of 10^15-10^21 neutrons/s, and schemes of production and application of neutrons in fusion-fission hybrid systems. The physical processes and parameters of high-temperature plasmas are considered at which optimal conditions for producing the largest number of fusion neutrons in systems with magnetic and inertial plasma confinement are achieved. The proposed plasma methods for neutron production are compared with other methods based on fusion reactions in nonplasma media, fission reactions, spallation, and muon catalysis. At present, intense neutron fluxes are mainly used in nanotechnology, biotechnology, material science, and military and fundamental research. In the near future (10-20 years), it will be possible to apply high-power neutron sources in fusion-fission hybrid systems for producing hydrogen, electric power, and technological heat, as well as for manufacturing synthetic nuclear fuel and closing the nuclear fuel cycle. Neutron sources with intensities approaching 10^20 neutrons/s may radically change the structure of the power industry and considerably influence fundamental and applied science and innovation technologies. Along with utilizing the energy produced in fusion reactions, the achievement of such high neutron intensities may stimulate wide application of subcritical fast nuclear reactors controlled by neutron sources. Superpower neutron sources will allow one to solve many problems of neutron diagnostics, monitor nano- and biological objects, and carry out radiation testing and modification of volumetric properties of materials at the industrial level. Such sources will considerably (up to 100 times) improve the accuracy of neutron physics experiments and will provide a better understanding of the structure of matter, including that of the neutron itself.
BS Denadai
2007-06-01
OBJECTIVE: The objective of this study was to analyze the effects of prolonged continuous running performed at the intensity corresponding to the onset of blood lactate accumulation (OBLA) on the peak torque of the knee extensors, analyzed in relation to different types of contraction and movement velocities in active individuals. METHOD: Eight men (23.4 ± 2.1 years; 75.8 ± 8.7 kg; 171.1 ± 4.5 cm) participated in this study. First, the subjects performed an incremental test until volitional exhaustion to determine the velocity corresponding to OBLA. Then, the subjects returned to the laboratory on two occasions, separated by at least seven days, to perform five maximal isokinetic contractions of the knee extensors at two angular velocities (60 and 180°·s-1) under eccentric and concentric conditions. Eccentric peak torque (EPT) and concentric peak torque (CPT) were measured at each velocity. One session was performed after a standardized warm-up period (5 min at 50% VO2max). The other session was performed after continuous running at OBLA until volitional exhaustion. These sessions were conducted in random order. RESULTS: There was a significant reduction in CPT only at 60°·s-1 (259.0 ± 46.4 and 244.0 ± 41.4 N·m). However, the reduction in EPT was significant at 60°·s-1 (337.3 ± 43.2 and 321.7 ± 60.0 N·m) and 180°·s-1 (346.1 ± 38.0 and 319.7 ± 43.6 N·m). The relative strength losses after the running exercise were significantly different between contraction types only at 180°·s-1. CONCLUSION: We can conclude that, in active individuals, the reduction in peak torque after prolonged continuous running at OBLA may be dependent on the type of contraction and angular velocity.
Alvah C. Stahlnecker IV
2008-12-01
A percentage of either measured or predicted maximum heart rate is commonly used to prescribe and measure exercise intensity. However, maximum heart rate in athletes may be greater during competition or training than during laboratory exercise testing. Thus, the aim of the present investigation was to determine if endurance-trained runners train and compete at or above laboratory measures of 'maximum' heart rate. Maximum heart rates were measured utilising a treadmill graded exercise test (GXT) in a laboratory setting using 10 female and 10 male National Collegiate Athletic Association (NCAA) division 2 cross-country and distance event track athletes. Maximum training and competition heart rates were measured during a high-intensity interval training day (TR HR) and during competition (COMP HR) at an NCAA meet. TR HR (207 ± 5.0 b·min-1; means ± SEM) and COMP HR (206 ± 4 b·min-1) were significantly (p < 0.05) higher than maximum heart rates obtained during the GXT (194 ± 2 b·min-1). The heart rate at the ventilatory threshold measured in the laboratory occurred at 83.3 ± 2.5% of the heart rate at VO2max, with no differences between the men and women. However, the heart rate at the ventilatory threshold measured in the laboratory was only 77% of the maximal COMP HR or TR HR. In order to optimize training-induced adaptation, training intensity for NCAA division 2 distance event runners should not be based on laboratory assessment of maximum heart rate, but instead on maximum heart rate obtained either during training or during competition.
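The percentages in this abstract are simple ratios of heart rates to a chosen maximum. A small sketch of that arithmetic, using the group means quoted above (per-subject data are not available here, so the last printed figure will only approximate the abstract's 77%):

```python
def percent_of_max(hr: float, hr_max: float) -> float:
    """Heart rate expressed as a percentage of a chosen maximum."""
    return 100.0 * hr / hr_max

# Group means quoted in the abstract:
gxt_max, tr_max = 194.0, 207.0      # b/min: lab GXT max vs. training max
vt_hr = 0.833 * gxt_max             # ventilatory threshold at 83.3% of GXT max

print(f"VT = {percent_of_max(vt_hr, gxt_max):.0f}% of GXT max")
print(f"VT = {percent_of_max(vt_hr, tr_max):.0f}% of training max")
```
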
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
Transcranial magnetic stimulation intensities in cognitive paradigms.
Jakob A Kaminski
BACKGROUND: Transcranial magnetic stimulation (TMS) has become an important experimental tool for exploring the brain's functional anatomy. As TMS interferes with neural activity, the hypothetical function of the stimulated area can thus be tested. One unresolved methodological issue in TMS experiments is the question of how to adequately calibrate stimulation intensities. The motor threshold (MT) is often taken as a reference for individually adapted stimulation intensities in TMS experiments, even if they do not involve the motor system. The aim of the present study was to evaluate whether it is reasonable to adjust stimulation intensities in each subject to the individual MT if prefrontal regions are stimulated prior to the performance of a cognitive paradigm. METHODS AND FINDINGS: Repetitive TMS (rTMS) was applied prior to a working memory task, either at the 'fixed' intensity of 40% maximum stimulator output (MSO), or individually adapted at 90% of the subject's MT. Stimulation was applied to a target region in the left posterior middle frontal gyrus (pMFG), as indicated by a functional magnetic resonance imaging (fMRI) localizer acquired beforehand, or to a control site (vertex). Results show that MT predicted the effect size after stimulating subjects with the fixed intensity (i.e., subjects with a low MT showed a greater behavioral effect). Nevertheless, the individual adaptation of intensities did not lead to stable effects. CONCLUSION: Therefore, we suggest assessing MT and accounting for it as a measure of general cortical TMS susceptibility, even if TMS is applied outside the motor domain.
Effect of exercise intensity on exercise and post exercise energy ...
Effect of exercise intensity on exercise and post exercise energy expenditure in ... treadmill at 57% of maximum heart rate, as well as for 4 hours post exercise. ... in order to increase energy expenditure as well as enhance the oxidation of fat ...
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two cases: the subset D of the problem is either an independent set or a circuit of the matroid. It was proved that, in each case, the collection of k-limited bases satisfies the base axioms. A new matroid is thereby determined, and the k-limited maximum base problem is transformed into the maximum base problem of this new matroid. For these two special cases, two algorithms, in essence greedy algorithms on the original matroid, are presented. They are proved to be correct and, in terms of algorithmic complexity, more efficient than the algorithm presented by Ma Zhongfan.
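The matroid greedy algorithm that the paper builds on can be illustrated with the graphic matroid, where a maximum-weight base is a maximum-weight spanning forest and independence is tested with union-find. This is a generic textbook instance, not the paper's k-limited algorithm:

```python
def max_weight_base(n, edges):
    """Greedy maximum-weight base of the graphic matroid on n vertices.
    edges: list of (weight, u, v). Returns the chosen edges, heaviest first."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    base = []
    for w, u, v in sorted(edges, reverse=True):  # consider heaviest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                             # independence test: no cycle
            parent[ru] = rv
            base.append((w, u, v))
    return base

edges = [(4, 0, 1), (3, 1, 2), (2, 0, 2), (5, 2, 3)]
print(max_weight_base(4, edges))  # [(5, 2, 3), (4, 0, 1), (3, 1, 2)]
```
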
A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation.
Li, Haisen S; Romeijn, H Edwin; Dempsey, James F
2006-09-01
We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by discrete dose sampling under the 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem, as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a near-monoenergetic (of width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in water medium. By monodirectional, we mean that the protons travel in the same direction before entering the water medium; the various scattering prior to entrance into the water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite sized beamlet. Since a finite sized beamlet is the superposition of infinitesimal pencil beams, the result for the maximum acceptable grid size obtained with the infinitesimal pencil beam also applies to finite sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
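The Shannon-Nyquist reasoning described above can be sketched for the Gaussian lateral profile: the spectrum of a Gaussian of width σ is itself Gaussian, proportional to exp(-2π²σ²f²); taking the cutoff f_c where the spectrum falls to 2% of its peak and sampling at twice f_c bounds the grid spacing. The 2% spectral cutoff is a simplifying assumption for illustration, not the paper's exact reconstruction criterion:

```python
import math

def max_grid_size(sigma_mm: float, tol: float = 0.02) -> float:
    """Nyquist-style bound on grid spacing for a Gaussian lateral profile.
    A Gaussian exp(-x^2 / (2 sigma^2)) has spectrum exp(-2 pi^2 sigma^2 f^2);
    f_c is where the spectrum falls to `tol` of its peak, and the grid must
    sample at a rate of at least 2 * f_c, i.e. spacing <= 1 / (2 f_c)."""
    f_c = math.sqrt(math.log(1.0 / tol) / 2.0) / (math.pi * sigma_mm)
    return 1.0 / (2.0 * f_c)

print(f"sigma = 5 mm -> max grid ≈ {max_grid_size(5.0):.1f} mm")
```

As expected, narrower profiles (smaller σ, i.e. higher spatial frequencies) demand a finer grid.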
Alison D. Egan
2004-03-01
The purpose of this study was to measure the salivary cortisol response to different intensities of resistance exercise. In addition, we wanted to determine the reliability of the session rating of perceived exertion (RPE) scale to monitor resistance exercise intensity. Subjects (8 men, 9 women) completed 2 trials of acute resistance training bouts in a counterbalanced design. The high intensity resistance exercise protocol consisted of six ten-repetition sets using 75% of one repetition maximum (1RM) on a Smith machine squat and bench press exercise (12 sets total). The low intensity resistance exercise protocol consisted of three ten-repetition sets at 30% of 1RM of the same exercises as the high intensity protocol. Both exercise bouts were performed with 2 minutes of rest between each exercise, and sessions were repeated to test reliability of the measures. The order of the exercise bouts was randomized with at least 72 hours between each session. Saliva samples were obtained immediately before, immediately after, and 30 min following each resistance exercise bout. RPE measures were obtained using Borg's CR-10 scale following each set. Also, the session RPE for the entire exercise session was obtained 30 minutes following completion of the session. There was a significant 97% increase in the level of salivary cortisol immediately following the high intensity exercise session (P < 0.05). There was also a significant difference in salivary cortisol of 145% between the low intensity and high intensity exercise sessions immediately post-exercise (P < 0.05). The low intensity exercise did not result in any significant changes in cortisol levels. There was also a significant difference between the session RPE values for the different intensity levels (high intensity 7.1 vs. low intensity 1.9; P < 0.05). The intraclass correlation coefficient for the session RPE measure was 0.95. It was concluded that the session RPE method is a valid and reliable method of monitoring resistance exercise intensity.
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
Using the ideas of the maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide region deletion test rules, and design an interval maximum entropy algorithm for the quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
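The maximum entropy function referenced here is commonly defined as F_p(x) = (1/p) ln Σ_i exp(p g_i(x)), a smooth upper approximation of max_i g_i(x) that tightens as p grows. A minimal numerical sketch of that approximation (the constraint values and parameter choices are illustrative assumptions, not taken from the paper):

```python
import math

def max_entropy_approx(values, p):
    """Smooth approximation of max(values): (1/p) * ln(sum exp(p*v)).
    Shifted by the true max for numerical stability; satisfies
    max(v) <= F_p(v) <= max(v) + ln(n)/p."""
    m = max(values)
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p

# Hypothetical constraint values g_i(x) at a trial point x of a QP.
g = [0.3, -1.2, 0.05]

for p in (1.0, 10.0, 100.0):
    print(p, max_entropy_approx(g, p))
# The approximation decreases toward max(g) = 0.3 as p grows.
```

In penalty-type schemes this lets the pointwise (non-smooth) maximum of the constraints be replaced by a differentiable surrogate, so an unconstrained smooth optimizer can be applied.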
Low-SNR Capacity of MIMO Optical Intensity Channels
Chaaban, Anas
2017-09-18
The capacity of the multiple-input multiple-output (MIMO) optical intensity channel is studied, under both average and peak intensity constraints. We focus on low SNR, which can be modeled as the scenario where both constraints proportionally vanish, or where the peak constraint is held constant while the average constraint vanishes. A capacity upper bound is derived, and is shown to be tight at low SNR under both scenarios. The capacity achieving input distribution at low SNR is shown to be a maximally-correlated vector-binary input distribution. Consequently, the low-SNR capacity of the channel is characterized. As a byproduct, it is shown that for a channel with peak intensity constraints only, or with peak intensity constraints and individual (per aperture) average intensity constraints, a simple scheme composed of coded on-off keying, spatial repetition, and maximum-ratio combining is optimal at low SNR.
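The low-SNR-optimal scheme named at the end of the abstract (coded on-off keying, spatial repetition, maximum-ratio combining) can be illustrated with a toy simulation; the channel gains, noise level, and uncoded decision rule below are illustrative assumptions, not details from the paper:

```python
import random

def transmit_ook_mrc(bits, h, sigma, amplitude=1.0):
    """On-off keying with spatial repetition (the same intensity on every
    aperture); the receiver forms the maximum-ratio-combined statistic
    sum_j h_j * y_j and thresholds it at the midpoint."""
    decoded = []
    threshold = 0.5 * amplitude * sum(hj * hj for hj in h)
    for b in bits:
        x = amplitude if b else 0.0                        # OOK symbol
        y = [hj * x + random.gauss(0.0, sigma) for hj in h]
        stat = sum(hj * yj for hj, yj in zip(h, y))        # MRC
        decoded.append(1 if stat > threshold else 0)
    return decoded

random.seed(0)
bits = [random.randint(0, 1) for _ in range(2000)]
h = [0.9, 1.1, 0.7]                                        # assumed intensity gains
out = transmit_ook_mrc(bits, h, sigma=0.2)
errors = sum(b != o for b, o in zip(bits, out))
print("bit errors:", errors)
```

MRC maximizes the post-combining SNR for a repeated symbol, which is why this simple front end suffices in the vanishing-SNR regime the paper analyzes.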
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
Analysis of the maximum discharge of karst springs
Bonacci, Ognjen
2001-07-01
Analyses are presented of the conditions that limit the discharge of some karst springs. The large number of springs studied show that, under conditions of extremely intense precipitation, a maximum value exists for the discharge of the main springs in a catchment, independent of catchment size and the amount of precipitation. Outflow modelling of karst-spring discharge is not easily generalized and schematized due to numerous specific characteristics of karst-flow systems. A detailed examination of the published data on four karst springs identified the possible reasons for the limitation on the maximum flow rate: (1) limited size of the karst conduit; (2) pressure flow; (3) intercatchment overflow; (4) overflow from the main spring-flow system to intermittent springs within the same catchment; (5) water storage in the zone above the karst aquifer or epikarstic zone of the catchment; and (6) factors such as climate, soil and vegetation cover, and altitude and geology of the catchment area. The phenomenon of limited maximum-discharge capacity of karst springs is not included in rainfall-runoff process modelling, which is probably one of the main reasons for the present poor quality of karst hydrological modelling.
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique or maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve on earlier results, were designed in this paper. The programming model for the maximum independent set then follows as a corollary of the main results. These two models can be easily implemented in computer algorithms and software, and are suitable for graphs of any scale. Finally, the models are presented as Lingo algorithms, and are verified and compared on several examples.
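The abstract does not reproduce the paper's ILP formulations, but a standard baseline is: maximize Σ x_v subject to x_u + x_v ≤ 1 for every non-adjacent pair (u, v), x_v ∈ {0,1}. A brute-force check of the same objective on a hypothetical tiny graph (for illustration only; real instances need the ILP or a specialized solver):

```python
from itertools import combinations

def is_clique(nodes, edges):
    """True if every pair of the given nodes is adjacent."""
    return all((u, v) in edges or (v, u) in edges for u, v in combinations(nodes, 2))

def maximum_clique(vertices, edges):
    """Exhaustive search, largest subsets first; exponential, so only
    suitable for tiny graphs."""
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            if is_clique(subset, edges):
                return set(subset)
    return set()

V = [1, 2, 3, 4, 5]
E = {(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)}   # a triangle {1,2,3} plus a path
print(maximum_clique(V, E))                     # -> {1, 2, 3}
```

The corollary mentioned in the abstract follows because a maximum independent set of G is exactly a maximum clique of the complement graph of G.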
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we survey the development of the maximum-entropy clustering algorithm, point out that the maximum-entropy clustering algorithm is not new in essence, and construct two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may not converge to a local minimum of its objective function, but to a saddle point. Based on these results, our paper shows that the convergence theorem of the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general.
Hostrup, Morten; Bangsbo, Jens
2016-01-01
Mechanisms underlying fatigue development and limitations for performance during intense exercise have been intensively studied during the past couple of decades. Fatigue development may involve several interacting factors and depends on type of exercise undertaken and training level...... and power output during intense exercise. Regular speed endurance training (SET), i.e. exercise performed at intensities above that corresponding to maximum oxygen consumption (VO2max ), enhances intense exercise performance. However, most of the studies that have provided mechanistic insight...
Volumetric Concentration Maximum of Cohesive Sediment in Waters: A Numerical Study
Jisun Byun
2014-12-01
Cohesive sediment has different characteristics compared to non-cohesive sediment. The density and size of a cohesive sediment aggregate (a so-called floc) change continuously through the flocculation process. The variation of floc size and density can cause a change of volumetric concentration under the condition of constant mass concentration. This study investigates how the volumetric concentration is affected by different conditions such as flow velocity, water depth, and sediment suspension. A previously verified, one-dimensional vertical numerical model is utilized here. The flocculation process is also considered through a floc-growth-type flocculation model. Idealized conditions are assumed in this study for the numerical experiments. The simulation results show that the volumetric concentration profile of cohesive sediment is different from the Rouse profile. The volumetric concentration decreases near the bed, showing an elevated maximum in the cases of both current and oscillatory flow. The density and size of flocs show minimum and maximum values, respectively, near the elevation of the volumetric concentration maximum. This study also shows that the flow velocity and the critical shear stress have significant effects on the elevated maximum of volumetric concentration. As mechanisms of the elevated maximum, strong turbulence intensity and increased mass concentration are considered because they enhance the flocculation process. This study uses numerical experiments. To the best of our knowledge, no laboratory or field experiments on the elevated maximum have been carried out until now. It is of great necessity to conduct well-controlled laboratory experiments in the near future.
Peterson, Jeffrey B; Ansari, Reza; Bandura, Kevin; Bond, Dick; Bunton, John; Carlson, Kermit; Chang, Tzu-Ching; DeJongh, Fritz; Dobbs, Matt; Dodelson, Scott; Darhmaoui, Hassane; Gnedin, Nick; Halpern, Mark; Hogan, Craig; Goff, Jean-Marc Le; Liu, Tiehui Ted; Legrouri, Ahmed; Loeb, Avi; Loudiyi, Khalid; Magneville, Christophe; Marriner, John; McGinnis, David P; McWilliams, Bruce; Moniez, Marc; Palanque-Delabruille, Nathalie; Pasquinelli, Ralph J; Pen, Ue-Li; Rich, Jim; Scarpine, Vic; Seo, Hee-Jong; Sigurdson, Kris; Seljak, Uros; Stebbins, Albert; Steffen, Jason H; Stoughton, Chris; Timbie, Peter T; Vallinotto, Alberto; Wyithe, Stuart; Yeche, Christophe
2009-01-01
Using the 21 cm line, observed all-sky and across the redshift range from 0 to 5, the large scale structure of the Universe can be mapped in three dimensions. This can be accomplished by studying specific intensity with resolution ~ 10 Mpc, rather than via the usual galaxy redshift survey. The data set can be analyzed to determine Baryon Acoustic Oscillation wavelengths, in order to address the question: 'What is the nature of Dark Energy?' In addition, the study of Large Scale Structure across this range addresses the questions: 'How does Gravity effect very large objects?' and 'What is the composition our Universe?' The same data set can be used to search for and catalog time variable and transient radio sources.
Cooray, Asantha; Burgarella, Denis; Chary, Ranga; Chang, Tzu-Ching; Doré, Olivier; Fazio, Giovanni; Ferrara, Andrea; Gong, Yan; Santos, Mario; Silva, Marta; Zemcov, Michael
2016-01-01
Cosmic Dawn Intensity Mapper is a "Probe Class" mission concept for reionization studies of the universe. It will be capable of spectroscopic imaging observations between 0.7 to 6-7 microns in the near-Infrared. The primary observational objective is pioneering observations of spectral emission lines of interest throughout the cosmic history, but especially from the first generation of distant, faint galaxies when the universe was less than 800 million years old. With spectro-imaging capabilities, using a set of linear variable filters (LVFs), CDIM will produce a three-dimensional tomographic view of the epoch of reionization (EoR). CDIM will also study galaxy formation over more than 90% of the cosmic history and will move the astronomical community from broad-band astronomical imaging to low-resolution (R=200-300) spectro-imaging of the universe.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
The development studies in thermoelectric generator (TEG) systems are mostly disconnected to parametric optimization of the module components. In this study, optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
30 CFR 56.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 56.19066 Section 56.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 56.19066 Maximum riders in a conveyance. In shafts inclined over 45...
30 CFR 57.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 57.19066 Section 57.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 57.19066 Maximum riders in a conveyance. In shafts inclined over 45...
Maximum Atmospheric Entry Angle for Specified Retrofire Impulse
T. N. Srivastava
1969-07-01
Maximum atmospheric entry angles for vehicles initially moving in elliptic orbits are investigated, and it is shown that a tangential retrofire impulse at the apogee results in the maximum entry angle. The equivalence of maximizing the entry angle and minimizing the retrofire impulse is also established.
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the...
46 CFR 151.45-6 - Maximum amount of cargo.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Maximum amount of cargo. 151.45-6 Section 151.45-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES BARGES CARRYING BULK LIQUID HAZARDOUS MATERIAL CARGOES Operations § 151.45-6 Maximum amount of cargo. (a)...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52...
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a...
Maximum-entropy clustering algorithm and its global convergence analysis
(no author listed)
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
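A minimal sketch of the soft-assignment iteration that maximum-entropy clustering uses, where memberships are Gibbs weights exp(-β d²) and centers are membership-weighted means; in the β → ∞ limit this reduces to hard C-means, matching the "soft generalization" noted above. The 1-D data, initial centers, and β are illustrative assumptions:

```python
import math

def me_cluster(data, centers, beta, iters=50):
    """Maximum-entropy clustering iteration: soft memberships are
    normalized Gibbs weights exp(-beta * d^2); each center is the
    membership-weighted mean of the data."""
    for _ in range(iters):
        new_centers = []
        for j in range(len(centers)):
            num = den = 0.0
            for x in data:
                weights = [math.exp(-beta * (x - c) ** 2) for c in centers]
                p = weights[j] / sum(weights)   # soft membership of x in cluster j
                num += p * x
                den += p
            new_centers.append(num / den)
        centers = new_centers
    return centers

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
final = me_cluster(data, centers=[1.0, 4.0], beta=5.0)
print(final)   # centers settle near the two group means (~0.1 and ~5.1)
```

The counterexamples in the entry above concern exactly this kind of iteration: the fixed point it reaches need not be a local minimum of the underlying objective.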
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
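For the special case H = 1/2 (standard Brownian motion), the maximum loss over [0, t], sup over s ≤ u of (X_s − X_u), is easy to estimate by Monte Carlo, which gives a concrete feel for the tail behavior the paper bounds. The drift, horizon, and discretization below are illustrative assumptions:

```python
import math
import random

def max_loss_bm(drift, t, n_steps, rng):
    """Maximum loss (largest peak-to-trough drawdown) of one Brownian
    path with drift, simulated as a Gaussian random walk."""
    dt = t / n_steps
    x = 0.0
    running_max = 0.0
    loss = 0.0
    for _ in range(n_steps):
        x += drift * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        running_max = max(running_max, x)
        loss = max(loss, running_max - x)
    return loss

rng = random.Random(42)
samples = [max_loss_bm(drift=0.5, t=1.0, n_steps=1000, rng=rng) for _ in range(500)]
mean_loss = sum(samples) / len(samples)
tail = sum(s > 2.0 for s in samples) / len(samples)   # crude tail-probability estimate
print(mean_loss, tail)
```

For H ≠ 1/2 the increments are correlated, so the path would instead have to be generated with a fractional-Brownian-motion sampler (e.g. Cholesky or circulant embedding); the drawdown bookkeeping is unchanged.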
48 CFR 436.575 - Maximum workweek-construction schedule.
2010-10-01
...-construction schedule. 436.575 Section 436.575 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE... Maximum workweek-construction schedule. The contracting officer shall insert the clause at 452.236-75, Maximum Workweek-Construction Schedule, if the clause at FAR 52.236-15 is used and the contractor's...
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum permissible concentration. 57.5039... Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings. ...
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...
Enhancement of the maximum proton energy by funnel-geometry target in laser-plasma interactions
Yang, Peng; Fan, Dapeng; Li, Yuxiao
2016-09-01
Enhancement of the maximum proton energy using a funnel-geometry target is demonstrated through particle simulations of laser-plasma interactions. When an intense short-pulse laser illuminates a thin foil target, the foil electrons are pushed by the laser ponderomotive force and form an electron cloud at the target rear surface. The electron cloud generates a strong electrostatic field, which accelerates the protons to high energies. If there is a hole in the rear of the target, the shape of the electron cloud and the distribution of the protons will be affected by the protuberant part of the hole. In this paper, a funnel-geometry target is proposed to improve the maximum proton energy. Using two-dimensional particle-in-cell simulations, the transverse electric fields generated by the side walls of four different holes are calculated, and the protons inside the holes are restricted to specific shapes by these fields. In the funnel-geometry target, more protons are confined near the center of the longitudinal accelerating electric field, so the protons experience a longer accelerating time and distance in the sheath field than in a traditional cylindrical hole target. Accordingly, more protons, with higher energies, are produced from the funnel-geometry target. The maximum proton energy is improved by about 4 MeV compared with a traditional cylindrical hole target. The funnel-geometry target thus serves as a new method to improve the maximum proton energy in laser-plasma interactions.
Osterloh, Frank E
2014-10-02
The Shockley-Queisser analysis provides a theoretical limit for the maximum energy conversion efficiency of single junction photovoltaic cells. But besides the semiconductor bandgap no other semiconductor properties are considered in the analysis. Here, we show that the maximum conversion efficiency is limited further by the excited state entropy of the semiconductors. The entropy loss can be estimated with the modified Sackur-Tetrode equation as a function of the curvature of the bands, the degeneracy of states near the band edges, the illumination intensity, the temperature, and the band gap. The application of the second law of thermodynamics to semiconductors provides a simple explanation for the observed high performance of group IV, III-V, and II-VI materials with strong covalent bonding and for the lower efficiency of transition metal oxides containing weakly interacting metal d orbitals. The model also predicts efficient energy conversion with quantum confined and molecular structures in the presence of a light harvesting mechanism.
J.M. Groen (Jan); R. van Mastrigt (Ron); J.L.H.R. Bosch (Ruud)
1995-01-01
The course of micturition depends on bladder contractility and urethral resistance. The former is determined by geometrical, muscular and neurogenic factors. The muscular aspects of bladder contractility can be characterized by the parameters Pisv, the isovolumetric detrusor pressure, an
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: u_x' (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit features similar to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of the maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
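The lattice equation above, with the bistable Nagumo nonlinearity f(u) = λu(1−u)(u−a), can be integrated by explicit Euler; a weak maximum principle then predicts that solutions starting in [0, 1] stay in [0, 1] provided the time step is small enough. A minimal sketch with illustrative parameters (λ, a, k, dt, and the initial datum are assumptions, not values from the paper):

```python
def step(u, k, lam, a, dt):
    """One explicit Euler step of Δ_t u_x = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x)
    with Nagumo nonlinearity f(u) = lam*u*(1-u)*(u-a) and zero-flux ends."""
    n = len(u)
    new = []
    for x in range(n):
        left = u[x - 1] if x > 0 else u[x]        # Neumann (zero-flux) boundary
        right = u[x + 1] if x < n - 1 else u[x]
        diffusion = k * (left - 2 * u[x] + right)
        reaction = lam * u[x] * (1 - u[x]) * (u[x] - a)
        new.append(u[x] + dt * (diffusion + reaction))
    return new

u = [1.0 if x < 10 else 0.0 for x in range(40)]   # step initial datum in [0, 1]
for _ in range(2000):
    u = step(u, k=1.0, lam=4.0, a=0.3, dt=0.1)

print(min(u), max(u))   # both stay within [0, 1] for this small dt
```

With dt = 0.1 the update is monotone in each argument (1 − 2k·dt > 0 and dt is below the reaction's Lipschitz threshold), which is exactly the kind of time-step restriction on which the discrete maximum principle's validity depends.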
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values as expected, while the fourth lies close to, and slightly higher than, the estimated maximum possible PPV. The comparison results show that the predicted PPVs from the proposed prediction model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints due to the application of a two-dimensional compressional wave at the boundary of a tunnel or a borehole.
ZHENG Yongguang; CHEN Jiong; TAO Zuyu
2014-01-01
To address the deficiency of climatological research on tropical cyclones (TCs) influencing China, we analyze the distributions of TCs with different intensities in the region, based on the best-track TC data for 1949-2011 provided by the Shanghai Typhoon Institute. We also present the distributions of 50- and 100-yr return-period TCs with different intensities using the Gumbel probability distribution. The results show that TCs with different intensities exert distinctive effects on various regions of China and its surrounding waters. The extreme intensity distributions of TCs over these different regions also differ. Super and severe typhoons mainly influence Taiwan Island and the coastal areas of Fujian and Zhejiang provinces, while typhoons and TCs with lower intensities influence South China most frequently. The probable maximum TC intensity (PMTI) with 50- and 100-yr return periods influencing Taiwan Island is below 890 hPa; the PMTI with a 50-yr return period influencing the coastal areas of Fujian and Zhejiang provinces is less than 910 hPa, and that with a 100-yr return period is less than 900 hPa; the PMTI with a 50-yr return period influencing the coastal areas of Hainan, Guangdong, and the northern part of the South China Sea is lower than 930 hPa, and that with a 100-yr return period is less than 920 hPa. The results provide a useful reference for the estimation of extreme TC intensities over different regions of China.
The Danish Intensive Care Database
Christiansen, Christian Fynbo; Møller, Morten Hylander; Nielsen, Henrik
2016-01-01
AIM OF DATABASE: The aim of this database is to improve the quality of care in Danish intensive care units (ICUs) by monitoring key domains of intensive care and to compare these with predefined standards. STUDY POPULATION: The Danish Intensive Care Database (DID) was established in 2007...
The Taxonomy of Intervention Intensity
Fuchs, Lynn S.; Fuchs, Douglas; Malone, Amelia S.
2016-01-01
The purpose of this article is to describe the Taxonomy of Intervention Intensity, which articulates 7 dimensions for evaluating and building intervention intensity. We explain the Taxonomy's dimensions of intensity. In explaining the Taxonomy, we rely on a case study to illustrate how the Taxonomy can systematize the process by which special…
Dynamics of a multi-mode maximum entangled coherent state over an amplitude damping channel
A. El Allati; Y. Hassouni; N. Metwally
2011-01-01
The dynamics of the maximum entangled coherent state traveling through an amplitude damping channel is investigated. For small values of the transmissivity rate, the traveling state is very fragile to this noise channel, where it suffers from the phase flip error with high probability. The entanglement decays smoothly for larger values of the transmissivity rate and speedily for smaller values of this rate. As the number of modes increases, the traveling state over this noise channel quickly loses its entanglement. The odd and even states vanish at the same value of field intensity.
Dynamics of multi-modes maximum entangled coherent state over amplitude damping channel
Allati, A El; Metwally, N
2012-01-01
The dynamics of a maximum entangled coherent state traveling through an amplitude damping channel is investigated. For small values of the transmissivity rate, the traveling state is very fragile to this noise channel, where it suffers from the phase flip error with high probability. The entanglement decays smoothly for larger values of the transmissivity rate and speedily for smaller values of this rate. As the number of modes increases, the traveling state over this noise channel quickly loses its entanglement. The odd and even states vanish at the same value of the field intensity.
Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing
Srivastava, Praveen Ranjan; Pareek, Deepak
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Defining the end of software testing is a crucial feature of any software development project. A premature release involves risks such as undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization would want to achieve the maximum possible benefits from software testing with minimum resources. Testing time and cost need to be optimized for achieving a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of the software components. This schema serves as an extension to the Non-Homogeneous Poisson Process based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.
2010-07-01
... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...
A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution
Piotrowski, Edward W.; Sładkowski, Jan
2009-03-01
The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a
Wenchao Cui
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) estimation and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
Emotionally Intense Science Activities
King, Donna; Ritchie, Stephen; Sandhu, Maryam; Henderson, Senka
2015-08-01
Science activities that evoke positive emotional responses make a difference to students' emotional experience of science. In this study, we explored 8th Grade students' discrete emotions expressed during science activities in a unit on Energy. Multiple data sources including classroom videos, interviews and emotion diaries completed at the end of each lesson were analysed to identify individual students' emotions. Results from two representative students are presented as case studies. Using a theoretical perspective drawn from theories of emotions founded in sociology, two assertions emerged. First, during the demonstration activity, students experienced the emotions of wonder and surprise; second, during a laboratory activity, students experienced the intense positive emotions of happiness/joy. Characteristics of these activities that contributed to students' positive experiences are highlighted. The study found that activities evoking strong positive emotional experiences focused students' attention on the phenomenon they were learning and were recalled positively. Furthermore, such positive experiences may contribute to students' interest and engagement in science and longer-term memorability. Finally, implications for science teachers and pre-service teacher education are suggested.
Intensity Frontier Instrumentation
Kettell S.; Rameika, R.; Tshirhart, B.
2013-09-24
The fundamental origin of flavor in the Standard Model (SM) remains a mystery. Despite the roughly eighty years since Rabi asked “Who ordered that?” upon learning of the discovery of the muon, we have not understood the reason that there are three generations or, more recently, why the quark and neutrino mixing matrices and masses are so different. The solution to the flavor problem would give profound insights into physics beyond the Standard Model (BSM) and tell us about the couplings and the mass scale at which the next level of insight can be found. The SM fails to explain all observed phenomena: new interactions and yet unseen particles must exist. They may manifest themselves by causing SM reactions to differ from often very precise predictions. The Intensity Frontier (1) explores these fundamental questions by searching for new physics in extremely rare processes or those forbidden in the SM. This often requires massive and/or extremely finely tuned detectors.
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
(author not listed)
2002-01-01
By taking the subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation is of higher approximating precision to the true parameters than the least square methods.
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
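The quoted cut-off can be sanity-checked against the electron gyrofrequency relation f_ce = eB/(2π m_e): a 38-40 MHz cut-off implies a maximum field strength of roughly 14 gauss. A minimal check (the 38-40 MHz figures come from the abstract; the constants are standard CODATA values):

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def gyro_field(f_hz):
    # Field strength B whose electron gyrofrequency f_ce = e B / (2 pi m_e)
    # equals f_hz; returns tesla.
    return 2.0 * math.pi * M_ELECTRON * f_hz / E_CHARGE

b_gauss = gyro_field(40e6) * 1e4  # 40 MHz cut-off, converted to gauss
```

For 40 MHz this gives about 14.3 G, consistent with the ~14 G surface field inferred for Jupiter's emission region.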
Takara, K. T.
2015-12-01
This paper describes a non-parametric frequency analysis method for hydrological extreme-value samples with a size larger than 100, verifying the estimation accuracy with computer-intensive statistics (CIS) resampling such as the bootstrap. Probable maximum values are also incorporated into the analysis for extreme events larger than a design level of flood control. Traditional parametric frequency analysis methods for extreme values include the following steps: Step 1: collecting and checking extreme-value data; Step 2: enumerating probability distributions that would fit the data well; Step 3: parameter estimation; Step 4: testing goodness of fit; Step 5: checking the variability of quantile (T-year event) estimates by the jackknife resampling method; and Step 6: selection of the best distribution (final model). The non-parametric method (NPM) proposed here can skip Steps 2, 3, 4 and 6. Comparing traditional parametric methods (PM) with the NPM, this paper shows that PM often underestimates 100-year quantiles for annual maximum rainfall samples with records of more than 100 years. Overestimation examples are also demonstrated. The bootstrap resampling can correct the bias of the NPM and can also give the estimation accuracy as the bootstrap standard error. The NPM has the advantage of avoiding various difficulties in the above-mentioned steps of the traditional PM. Probable maximum events are also incorporated into the NPM as an upper bound of the hydrological variable. Probable maximum precipitation (PMP) and probable maximum flood (PMF) can be a new parameter value combined with the NPM. An idea of how to incorporate these values into frequency analysis is proposed for better management of disasters that exceed the design level. The idea stimulates a more integrated approach by geoscientists and statisticians, as well as encouraging practitioners to consider the worst cases of disasters in their disaster management planning and practices.
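The core NPM step (an empirical T-year quantile with a bootstrap standard error replacing the parametric fitting steps) can be sketched as follows. This is a minimal illustration on a synthetic annual-maximum record, not the author's implementation; the plotting-position rank used here is one common choice among several:

```python
import random

def nonparametric_quantile(sample, t_years):
    # Empirical T-year quantile: the order statistic whose non-exceedance
    # probability is 1 - 1/T (Weibull plotting position, one common choice).
    s = sorted(sample)
    n = len(s)
    rank = int(round((1.0 - 1.0 / t_years) * (n + 1))) - 1
    return s[min(n - 1, max(0, rank))]

def bootstrap_se(sample, t_years, n_boot=1000, seed=42):
    # Bootstrap standard error of the T-year quantile estimate: resample the
    # record with replacement and take the spread of the re-estimates.
    rng = random.Random(seed)
    n = len(sample)
    est = []
    for _ in range(n_boot):
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        est.append(nonparametric_quantile(resample, t_years))
    mean = sum(est) / n_boot
    return (sum((e - mean) ** 2 for e in est) / (n_boot - 1)) ** 0.5
```

With a record of 100+ annual maxima, `nonparametric_quantile(data, 100)` gives the 100-year event directly from the data, and `bootstrap_se` quantifies its sampling uncertainty without choosing a distribution.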
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-01-01
In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products supply chain...
Maximum Principle for Nonlinear Cooperative Elliptic Systems on ℝ^N
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for having a maximum principle for a cooperative elliptic system on the whole of ℝ^N. Moreover, we prove the existence of solutions for the considered system by an approximation method.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (i.e., point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation that was compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)
U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights. We describe the maximum entropy procedure in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where it has recently provided new insight. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
16 CFR 1505.8 - Maximum acceptable material temperatures.
2010-01-01
... Association, 155 East 44th Street, New York, NY 10017. [Table of maximum acceptable material temperatures in degrees C and degrees F omitted.] ... capacitor has no marked temperature limit, the maximum acceptable temperature will be assumed to be 65...
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC GIS Inventory (aka Ramona) — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation. ... data sets that differ considerably in magnitude.
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
Solar Panel Maximum Power Point Tracker for Power Utilities
Sandeep Banik,
2014-01-01
"Solar Panel Maximum Power Point Tracker for Power Utilities": as the name implies, it is a photovoltaic system that uses the photovoltaic array as a source of electrical power supply. Every photovoltaic (PV) array has an optimum operating point, called the maximum power point, which varies depending on the insolation level and array voltage, so a maximum power point tracker (MPPT) is needed to operate the PV array at its maximum power point. The objective of this thesis project is to build a PV array of 121.6 V DC (6 cells, each 20 V, 100 W) and convert the DC voltage to single-phase 120 V, 50 Hz AC voltage by switch-mode power converters and inverters.
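An MPPT of the kind described is commonly implemented with a perturb-and-observe loop. The sketch below uses a toy single-peak power curve; the `pv_power` shape and its parameters are illustrative assumptions, not the thesis design:

```python
def pv_power(v):
    # Toy single-peak PV curve (illustrative only): current falls off
    # steeply near open-circuit voltage; peak power sits near 92 V.
    i = max(0.0, 5.0 * (1.0 - (v / 121.6) ** 8))
    return v * i

def p_and_o_mppt(power_at, v_start=60.0, step=0.5, iters=200):
    # Perturb-and-observe: nudge the operating voltage by a fixed step and
    # keep moving in whichever direction last increased the measured power.
    v = v_start
    p_prev = power_at(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = power_at(v)
        if p < p_prev:
            direction = -direction  # power fell, so reverse the perturbation
        p_prev = p
    return v
```

With a fixed step the operating point ends up oscillating within a step or two of the maximum power point, which is why practical trackers often shrink the step as they converge.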
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao;
2014-01-01
This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction, both in the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR ... This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is applied to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show that there is a negative effect between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
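Maximum likelihood fitting of a two-component normal mixture is usually done with the EM algorithm. A minimal univariate sketch on synthetic data follows; the initialization and iteration count are illustrative choices, not the authors':

```python
import math
import random

def em_two_normals(x, iters=100):
    # EM for a two-component univariate Gaussian mixture (maximum likelihood).
    n = len(x)
    xs = sorted(x)
    mu = [xs[n // 4], xs[3 * n // 4]]  # crude quartile initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for xi in x:
            p = [w[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2.0 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate mixture weights, means and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = max(sum(r[k] * (xi - mu[k]) ** 2
                             for r, xi in zip(resp, x)) / nk, 1e-6)
    return w, mu, var

# Demo on synthetic data from two well-separated normals (illustrative only)
_rng = random.Random(3)
data = ([_rng.gauss(0.0, 1.0) for _ in range(300)]
        + [_rng.gauss(10.0, 1.0) for _ in range(300)])
weights, means, variances = em_two_normals(data)
```

Each EM iteration is guaranteed not to decrease the likelihood, which is what makes it the standard workhorse for mixture-model MLE.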
Founda, Dimitra; Pierros, Fragiskos; Santamouris, Mathew
2016-04-01
Considerable recent research suggests that heat waves are becoming more frequent, more intense and longer in the future. Heat waves are characterised by the dominance of prolonged abnormally hot conditions related to synoptic scale anomalies, thus they affect extensive geographical areas. Heat waves (HW) have a profound impact on humans and they have been proven to increase mortality. Urban areas are known to be hotter than the surrounding rural areas due to the well documented urban heat island (UHI) phenomenon. Urban areas face increased risk under heat waves, due to the added heat from the urban heat island and increased population density. Given that urban populations keep increasing, citizens are exposed to significant heat related risk. Mitigation and adaptation strategies require a deep understanding of the response of the urban heat islands under extremely hot conditions. The response of the urban heat island under selected episodes of heat waves is examined in the city of Athens, from the comparison between stations of different characteristics (urban, suburban, coastal and rural). Two distinct episodes of heat waves occurring during summer 2000 were selected. Daily maximum air temperature at the urban station of the National Observatory of Athens (NOA) exceeded 40 °C for at least three consecutive days for both episodes. The intensity of UHI during heat waves was compared to the intensity under 'normal' conditions, represented by a period 'before' and 'after' the heat wave. Striking differences of UHI features between HW and no HW cases were observed, depending on the time of the day and the type of station. The comparison between the urban and the coastal station showed an increase of the order of 3 °C in the intensity of UHI during the HW days, as regards both daytime and nighttime conditions. The comparison between urban and a suburban (inland) station revealed some different behaviour during HWs, with increases of the order of 3 °C in the nocturnal
On the maximum sufficient range of interstellar vessels
Cartin, Daniel
2011-01-01
This paper considers the likely maximum range of space vessels providing the basis of a mature interstellar transportation network. Using the principle of sufficiency, it is argued that this range will be less than three parsecs for the average interstellar vessel. This maximum range provides access from the Solar System to a large majority of nearby stellar systems, with total travel distances within the network not excessively greater than actual physical distance.
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system, with respect to the single motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Gzyl, Henryk
2007-01-01
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a large level of noise. The method is developed in the context of a concrete example: estimation of the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of applications of maximum entropy production to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O3 maximum is dominated by O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find that the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. O3 production in the lightning outflow from Central Africa and South America peaks in May in both regions and is directly responsible for the O3 maximum over the western ESIO. The lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones and then transported northward to the ESIO.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
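For the single complex tone case mentioned above, the ML frequency estimator is the maximizer of the periodogram; the multimodality the abstract refers to is the periodogram's many sidelobe peaks, which is why a global (grid) search is needed before any local refinement. A brute-force sketch (grid size and test signal are illustrative):

```python
import cmath
import math
import random

def ml_tone_frequency(x, grid=2048):
    # ML frequency estimate for a single complex tone in white Gaussian
    # noise: the maximizer of the periodogram |sum_m x[m] e^{-j 2 pi f m}|^2.
    n = len(x)
    best_f, best_p = 0.0, -1.0
    for k in range(grid):
        f = k / grid  # normalized frequency in [0, 1)
        s = sum(x[m] * cmath.exp(-2j * math.pi * f * m) for m in range(n))
        p = abs(s) ** 2
        if p > best_p:
            best_f, best_p = f, p
    return best_f

# Demo: length-64 unit tone at normalized frequency 0.2 plus light noise
_rng = random.Random(1)
signal = [cmath.exp(2j * math.pi * 0.2 * m)
          + 0.05 * complex(_rng.gauss(0, 1), _rng.gauss(0, 1))
          for m in range(64)]
f_hat = ml_tone_frequency(signal)
```

In practice the grid search would use an FFT and be followed by a local refinement step; the convex relaxation studied in the paper is an alternative route around the same multimodal cost.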
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed on the basis of the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximum solution of the model was exactly the frequency distribution at which a population reaches Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg genetic equilibrium when the genotype entropy of the population reaches the maximal possible value, and that the frequency distribution of maximum entropy is equivalent to the distribution of the Hardy-Weinberg equilibrium law at one locus. They further assumed that the frequency distribution of maximum entropy was equivalent to all genetic equilibrium distributions. This is incorrect, however. The frequency distribution of maximum entropy is only equivalent to the distribution of Hardy-Weinberg equilibrium with respect to one locus or a limited number of loci. The case of a limited number of loci is proved in this paper. Finally, we also discuss an example where the maximum entropy principle is not equivalent to other genetic equilibria.
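The one-locus equivalence can be checked numerically. In the sketch below the entropy is taken over the four ordered allele pairs (an assumption made here so that the constrained maximizer lands on the Hardy-Weinberg proportions p², 2pq, q²); a grid search over genotype distributions with fixed allele frequency p recovers P(AA) = p²:

```python
import math

def ordered_genotype_entropy(t, p):
    # Genotype distribution constrained to allele frequency p:
    # P(AA) = t, P(Aa) = 2(p - t), P(aa) = 1 - 2p + t.  The heterozygote
    # mass is split over its two ordered pairs (Aa, aA) before taking entropy.
    u = 2.0 * (p - t)
    v = 1.0 - 2.0 * p + t
    probs = [t, u / 2.0, u / 2.0, v]
    return -sum(q * math.log(q) for q in probs if q > 0.0)

def max_entropy_homozygote(p, grid=20000):
    # Grid search for the P(AA) that maximizes the constrained entropy.
    lo = max(0.0, 2.0 * p - 1.0)  # feasibility bound on t
    best_t, best_h = lo, float("-inf")
    for k in range(grid + 1):
        t = lo + (p - lo) * k / grid
        h = ordered_genotype_entropy(t, p)
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

For p = 0.3 the maximizer comes out at t ≈ 0.09 = p², i.e. the Hardy-Weinberg homozygote frequency, matching the one-locus claim.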
Bornstein, R. D.; Lebassi, B.; Gonzalez, J.
2010-12-01
The study evaluated long-term (1948-2005) air temperatures at over 300 urban and rural sites in California (CA) during summer (June-August, JJA). The aggregate CA results showed asymmetric warming, as daily min temperatures increased faster than daily max temperatures. The spatial distributions of daily max temperatures in the heavily urbanized South Coast and San Francisco Bay Area air basins, however, exhibited a complex pattern, with cooling at low-elevation (mainly urban) coastal areas and warming at (mainly rural) inland areas. Previous studies have suggested that cooling summer max temperatures in CA were due to increased irrigation, coastal upwelling, or cloud cover. The current hypothesis, however, is that this temperature pattern arises from a “reverse-reaction” to greenhouse gas (GHG) induced global warming. In this hypothesis, the global warming of inland areas resulted in increased (cooling) sea breeze activity in coastal areas. That daytime summer coastal cooling was seen in coastal urban areas implies that urban heat island (UHI) warming was weaker than the reverse-reaction sea breeze cooling; if there were no UHI effect, the cooling would have been even stronger. Analysis of daytime summer max temperatures at four adjacent pairs of urban and rural sites near the inland cooling-warming boundary, however, showed that the rural sites experienced cooling, while the urban sites showed warming due to UHI development. The rate of heat island growth was estimated as the sum of each urban warming rate and the absolute magnitude of the concurrent adjacent rural cooling rate. Values ranged from 0.12 to 0.55 K decade⁻¹, and were proportional to changes in urban population and urban extent. As Sacramento, Modesto, Stockton, and San José have grown in areal extent (21 to 59%) and population (40 to 118%), part of the observed increased JJA max values could be due to increased daytime UHI intensity. Without UHI effects, the currently observed JJA SFBA
Acute plasma volume change with high-intensity sprint exercise.
Bloomer, Richard J; Farney, Tyler M
2013-10-01
When exercise is of long duration or of moderate to high intensity, a decrease in plasma volume can be observed. This has been noted for both aerobic and resistance exercise, but few data are available with regard to high-intensity sprint exercise. We measured plasma volume before and after 3 different bouts of acute exercise of varying intensity and/or duration. On different days, men (n = 12; 21-35 years) performed aerobic cycle exercise (60 minutes at 70% heart rate reserve) and 2 different bouts of cycle sprints (five 60-second sprints at 100% maximum wattage obtained during graded exercise testing (GXT) and ten 15-second sprints at 200% maximum wattage obtained during GXT). Blood was collected before and 0, 30, and 60 minutes postexercise and analyzed for hematocrit and hemoglobin, and plasma volume was calculated. Plasma volume decreased significantly for all exercise bouts, with a greater decrease for the sprint bouts (∼19%) compared with the aerobic exercise bout (∼11%). By 30 minutes postexercise, plasma volume approached pre-exercise values. We conclude that acute bouts of exercise, in particular high-intensity sprint exercise, significantly decrease plasma volume during the immediate postexercise period. It is unknown what, if any, negative implications these transient changes may have on exercise performance. Strength and conditioning professionals may aim to rehydrate athletes appropriately after high-intensity exercise bouts.
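The abstract does not state how plasma volume change was calculated from hemoglobin and hematocrit; a common choice for this kind of study is the Dill and Costill (1974) correction, sketched here as an assumption with hypothetical pre/post values:

```python
def plasma_volume_change(hb_pre, hb_post, hct_pre, hct_post):
    # Dill & Costill (1974): percentage change in plasma volume from
    # pre/post hemoglobin (g/dL) and hematocrit (as decimal fractions).
    ratio = (hb_pre / hb_post) * (1.0 - hct_post) / (1.0 - hct_pre)
    return 100.0 * (ratio - 1.0)

# Hypothetical values showing hemoconcentration after a sprint bout
dpv = plasma_volume_change(15.0, 16.0, 0.45, 0.48)
```

A post-exercise rise in both hemoglobin and hematocrit, as in the hypothetical numbers above, yields a negative percentage, i.e. the plasma volume decrease the study reports.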
The language of pain: intensity.
Bailey, C A; Davidson, P O
1976-09-01
Thirty-nine adjectives which may be used to describe a pain experience were rated on an "intensity" continuum by 93 subjects, and in a second study by an additional 90 subjects. In each study these ratings were intercorrelated and factor-analyzed. The first 6 factors extracted were rotated to a simple structure criterion. The first factor was identified as an "intensity" factor. Examination of the adjectives indicated that intensity relates to "affective-evaluative" adjectives rather than "sensory" ones. The implications of these findings for the language a patient may use to communicate the intensity of a pain are discussed.
Handbook of data intensive computing
Furht, Borko
2011-01-01
Data Intensive Computing refers to capturing, managing, analyzing, and understanding data at volumes and rates that push the frontiers of current technologies. The challenge of data intensive computing is to provide the hardware architectures and related software systems and techniques which are capable of transforming ultra-large data into valuable knowledge. Handbook of Data Intensive Computing is written by leading international experts in the field. Experts from academia, research laboratories and private industry address both theory and application. Data intensive computing demands a fund
Data intensive ATLAS workflows in the Cloud
Rzehorz, Gerhard Ferdinand; The ATLAS collaboration
2016-01-01
This contribution reports on the feasibility of executing data intensive workflows on Cloud infrastructures. In order to assess this, the metric ETC = Events/Time/Cost is formed, which quantifies the different workflow and infrastructure configurations that are tested against each other. In these tests ATLAS reconstruction Jobs are run, examining the effects of overcommitting (more parallel processes running than CPU cores available), scheduling (staggered execution) and scaling (number of cores). The desirability of commissioning storage in the cloud is evaluated, in conjunction with a simple analytical model of the system, and correlated with questions about the network bandwidth, caches and what kind of storage to utilise. In the end a cost/benefit evaluation of different infrastructure configurations and workflows is undertaken, with the goal to find the maximum of the ETC value
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds. Part 1: Determination of the maximum explosion pressure p_max of dust clouds, and STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds. Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)_max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The explosion tests showed that the maximum pressure, 7.95 bar, was reached at a concentration of 450 g/m3; the fastest pressure rise, 68 bar/s, was also observed at 450 g/m3.
Strong Solar Control of Infrared Aurora on Jupiter: Correlation Since the Last Solar Maximum
Kostiuk, T.; Livengood, T. A.; Hewagama, T.
2009-01-01
Polar aurorae in Jupiter's atmosphere radiate throughout the electromagnetic spectrum from X ray through mid-infrared (mid-IR, 5 - 20 micron wavelength). Voyager IRIS data and ground-based spectroscopic measurements of Jupiter's northern mid-IR aurora, acquired since 1982, reveal a correlation between auroral brightness and solar activity that has not been observed in Jovian aurora at other wavelengths. Over nearly three solar cycles, Jupiter auroral ethane emission brightness and solar 10.7 cm radio flux and sunspot number are positively correlated with high confidence. Ethane line emission intensity varies over tenfold between low and high solar activity periods. Detailed measurements have been made using the GSFC HIPWAC spectrometer at the NASA IRTF since the last solar maximum, following the mid-IR emission through the declining phase toward solar minimum. An even more convincing correlation with solar activity is evident in these data. Current analyses of these results will be described, including planned measurements on polar ethane line emission scheduled through the rise of the next solar maximum beginning in 2009, with a steep gradient to a maximum in 2012. This work is relevant to the Juno mission and to the development of the Europa Jupiter System Mission. Results of observations at the Infrared Telescope Facility (IRTF) operated by the University of Hawaii under Cooperative Agreement no. NCC5-538 with the National Aeronautics and Space Administration, Science Mission Directorate, Planetary Astronomy Program. This work was supported by the NASA Planetary Astronomy Program.
Penalized maximum likelihood estimation for generalized linear point processes
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces we...
Patellar tendon adaptation in relation to load-intensity and contraction type
Malliaras, Peter; Kamal, Beenish; Nowell, Alastair
2013-01-01
BACKGROUND: Loading leads to tendon adaptation but the influence of load-intensity and contraction type is unclear. Clinicians need to be aware of the type and intensity of loading required for tendon adaptation when prescribing exercise. The aim of this study was to investigate the influence of contraction type and load-intensity on patellar tendon mechanical properties. METHOD: Load intensity was determined using the 1 repetition maximum (RM) on a resistance exercise device at baseline and fortnightly intervals in four randomly allocated groups of healthy, young males: (1) control (no training); (2) ... maximum torque, patellar tendon CSA and length were measured with dynamometry and ultrasound imaging. Patellar tendon force, stress and strain were calculated at 25%, 50%, 75% and 100% of maximum torque during isometric knee extension contractions, and stiffness and modulus at torque intervals of 50 ...
Individual Module Maximum Power Point Tracking for Thermoelectric Generator Systems
Vadstrup, Casper; Schaltz, Erik; Chen, Min
2013-07-01
In a thermoelectric generator (TEG) system the DC/DC converter is under the control of a maximum power point tracker which ensures that the TEG system outputs the maximum possible power to the load. However, if the conditions, e.g., temperature, health, etc., of the TEG modules are different, each TEG module will not produce its maximum power. If each TEG module is controlled individually, each TEG module can be operated at its maximum power point and the TEG system output power will therefore be higher. In this work a power converter based on noninverting buck-boost converters capable of handling four TEG modules is presented. It is shown that, when each module in the TEG system is operated under individual maximum power point tracking, the system output power for this specific application can be increased by up to 8.4% relative to the situation when the modules are connected in series and 16.7% relative to the situation when the modules are connected in parallel.
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport region. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences / Società Italiana di Fisica / Springer-Verlag 2013.
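For orientation, the universal upper bound referred to in the abstract is, in the low-dissipation analysis of Esposito et al., η_C/(2−η_C), with η_C/2 as the corresponding lower bound; the Curzon-Ahlborn value 1−√(T_c/T_h) falls between them. A quick numerical check (a sketch for the reader, not part of the paper's simulation):

```python
import math

def carnot(tc, th):
    """Carnot efficiency for reservoir temperatures tc < th (in kelvin)."""
    return 1.0 - tc / th

def curzon_ahlborn(tc, th):
    """Efficiency at maximum power for the symmetric-dissipation case."""
    return 1.0 - math.sqrt(tc / th)

tc, th = 300.0, 600.0
eta_c = carnot(tc, th)
lower = eta_c / 2.0              # low-dissipation lower bound
upper = eta_c / (2.0 - eta_c)    # low-dissipation upper bound (Esposito et al.)
eta_ca = curzon_ahlborn(tc, th)

# The Curzon-Ahlborn value sits strictly inside the low-dissipation bounds.
assert lower < eta_ca < upper
print(round(lower, 4), round(eta_ca, 4), round(upper, 4))
```

With T_c/T_h = 0.5 this gives η_C = 0.5, bounds 0.25 and 1/3, and η_CA ≈ 0.293 in between, which is the value the simulated efficiency is compared against.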
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables, we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R2 = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R2 = 0.53) are also possible when using only the simplest and most commonly measured traits: dispersal syndrome and growth form, together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r2=0.88). Despite exceeding unity, the normalized moments were consistent across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
Individuals underestimate moderate and vigorous intensity physical activity.
Karissa L Canning
BACKGROUND: It is unclear whether the common physical activity (PA) intensity descriptors used in PA guidelines worldwide align with the associated percent heart rate maximum (%HRmax) method used for prescribing relative PA intensities consistently between sexes, ethnicities, age categories and across body mass index (BMI) classifications. OBJECTIVES: The objectives of this study were to determine whether individuals properly select light, moderate and vigorous intensity PA using the intensity descriptions in PA guidelines, and to determine if there are differences in estimation across sex, ethnicity, age and BMI classifications. METHODS: 129 adults were instructed to walk/jog at a "light," "moderate" and "vigorous effort" in a randomized order. The PA intensities were categorized as being below, at or above the following %HRmax ranges: 50-63% for light, 64-76% for moderate and 77-93% for vigorous effort. RESULTS: On average, people correctly estimated light effort as 51.5±8.3%HRmax but underestimated moderate effort as 58.7±10.7%HRmax and vigorous effort as 69.9±11.9%HRmax. Participants walked at a light intensity (57.4±10.5%HRmax) when asked to walk at a pace that provided health benefits, wherein 52% of participants walked at a light effort pace, 19% walked at a moderate effort and 5% walked at a vigorous effort pace. These results did not differ by sex, ethnicity or BMI class. However, younger adults underestimated moderate and vigorous intensity more so than middle-aged adults (P<0.05). CONCLUSION: When the common PA guideline descriptors were aligned with the associated %HRmax ranges, the majority of participants underestimated the intensity of PA that is needed to obtain health benefits. Thus, new subjective descriptions for moderate and vigorous intensity may be warranted to aid individuals in correctly interpreting PA intensities.
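The %HRmax bands quoted in the abstract map directly to a small classifier. The sketch below takes its band edges from the abstract (how to label values in the gaps between bands is my own assumption) and reproduces the study's central observation: a mean "moderate effort" of 58.7 %HRmax actually falls in the light band:

```python
def classify_intensity(pct_hr_max: float) -> str:
    """Map %HRmax to the bands in the abstract: 50-63% light,
    64-76% moderate, 77-93% vigorous. Values between or outside the
    published bands are labelled conservatively (my assumption)."""
    if pct_hr_max < 50:
        return "below light"
    if pct_hr_max < 64:
        return "light"
    if pct_hr_max < 77:
        return "moderate"
    if pct_hr_max <= 93:
        return "vigorous"
    return "above vigorous"

# Mean intensities participants produced when *asked* for each effort level:
print(classify_intensity(51.5))  # asked for light    -> light
print(classify_intensity(58.7))  # asked for moderate -> light (underestimated)
print(classify_intensity(69.9))  # asked for vigorous -> moderate (underestimated)
```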
Catfish production using intensive aeration
For the last 3 years, researchers at UAPB and NWAC have been monitoring and verifying production yields in intensively aerated catfish ponds with aeration rates greater than 6 hp/acre. We now have three years of data on commercial catfish production in intensively aerated ponds. With stocking densi...
The Dynamics of Intensive Cultivation
Christian Bidard
2008-01-01
An increase in the demand for agricultural goods leads to the use of more intensive cultivation methods. Though Ricardo sees no difficulties in the intensification process, their existence is revealed by the possible occurrence of multiple equilibria. A general theory of intensive rent is based on a formal parallel with single-product systems without land.
Traffic light intensity meter, TIM®
Leden, N. van der; Varkevisser, J.; Vroom, J. de; Oijen, T van
2005-01-01
The intensity of traffic lights decreases over time as a result of pollution and ageing. The Dutch Traffic Research Centre of the Ministry of Transport, Public Works and Water Management is searching for a convenient method for measuring the luminous intensity of traffic lights on the road, in order
Influence of mesoscale topography on vortex intensity
Anonymous
2008-01-01
The effect of mesoscale topography on multi-vortex self-organization is investigated numerically in this paper using a barotropic primitive equation model with a topographic term. In the initial field there is one DeMaria major vortex with the maximum wind radius r_m of 80 km at the center of the computational domain, and four meso-β vortices in the vicinity of r_m to the east of the major vortex center. When there is no topography present, the initial vortices self-organize into a quasi-final-state flow pattern, i.e., a quasi-axisymmetric vortex whose intensity is close to that of the initial major vortex. However, when a mesoscale topography is incorporated, the spatial scale of the quasi-final-state vortex is reduced, and the relative vorticity at the center of the vortex and the local maximum wind speed increase remarkably. The possible mechanism for the enhancement of the quasi-final-state vortex might be that the lump of negative relative vorticity, generated above the mesoscale topography because of the constraint of absolute vorticity conservation, squeezes the center of positive vorticity towards the mountain slope area, and thus reduces the spatial range of the major vortex. Meanwhile, because the total kinetic energy is essentially conserved, the squeezing directly leads to the concentration of the energy in a smaller area, i.e., the strengthening of the vortex.
MRI intensity inhomogeneity correction by combining intensity and spatial information
Vovk, Uros; Pernus, Franjo; Likar, Bostjan [Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, 1000 Ljubljana (Slovenia)
2004-09-07
We propose a novel fully automated method for retrospective correction of intensity inhomogeneity, which is an undesired phenomenon in many automatic image analysis tasks, especially if quantitative analysis is the final goal. Besides most commonly used intensity features, additional spatial image features are incorporated to improve inhomogeneity correction and to make it more dynamic, so that local intensity variations can be corrected more efficiently. The proposed method is a four-step iterative procedure in which a non-parametric inhomogeneity correction is conducted. First, the probability distribution of image intensities and corresponding second derivatives is obtained. Second, intensity correction forces, condensing the probability distribution along the intensity feature, are computed for each voxel. Third, the inhomogeneity correction field is estimated by regularization of all voxel forces, and fourth, the corresponding partial inhomogeneity correction is performed. The degree of inhomogeneity correction dynamics is determined by the size of regularization kernel. The method was qualitatively and quantitatively evaluated on simulated and real MR brain images. The obtained results show that the proposed method does not corrupt inhomogeneity-free images and successfully corrects intensity inhomogeneity artefacts even if these are more dynamic.
Predicting Maximum Sunspot Number in Solar Cycle 24
Nipa J Bhatt; Rajmal Jain; Malini Aggarwal
2009-03-01
A few prediction methods have been developed based on the precursor technique which is found to be successful for forecasting the solar activity. Considering the geomagnetic activity aa indices during the descending phase of the preceding solar cycle as the precursor, we predict the maximum amplitude of annual mean sunspot number in cycle 24 to be 111 ± 21. This suggests that the maximum amplitude of the upcoming cycle 24 will be less than cycles 21–22. Further, we have estimated the annual mean geomagnetic activity aa index for the solar maximum year in cycle 24 to be 20.6 ± 4.7 and the average of the annual mean sunspot number during the descending phase of cycle 24 is estimated to be 48 ± 16.8.
Construction and enumeration of Boolean functions with maximum algebraic immunity
ZHANG WenYing; WU ChuanKun; LIU XiangZhong
2009-01-01
Algebraic immunity is a new cryptographic criterion proposed against algebraic attacks. In order to resist algebraic attacks, Boolean functions used in many stream ciphers should possess high algebraic immunity. This paper presents two main results on finding balanced Boolean functions with maximum algebraic immunity. Through swapping the values of two bits, and then generalizing the result to swap some pairs of bits of the symmetric Boolean function constructed by Dalai, a new class of Boolean functions with maximum algebraic immunity is constructed. Enumeration of such functions is also given. For a given function p(x) with deg(p(x)) < [n/2], we give a method to construct functions of the form p(x)+q(x) which achieve the maximum algebraic immunity, where every term with nonzero coefficient in the ANF of q(x) has degree no less than [n/2].
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
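The Burg recursion at the heart of the MEM estimate is compact enough to sketch. The toy implementation below (NumPy, my own simplification, not the authors' code) fits AR coefficients by minimizing the summed forward and backward prediction error, then evaluates the resulting all-pole spectrum to locate the line of a test sinusoid:

```python
import numpy as np

def burg(x, order):
    """Burg maximum-entropy AR fit: returns (a, e) with A(z) = sum a_k z^-k."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    f = x.copy()            # forward prediction errors
    b = x.copy()            # backward prediction errors
    a = np.array([1.0])
    e = np.dot(x, x) / n    # prediction error power
    for m in range(order):
        fm, bm = f[m + 1:], b[m:n - 1]
        k = -2.0 * np.dot(fm, bm) / (np.dot(fm, fm) + np.dot(bm, bm))
        f[m + 1:], b[m + 1:] = fm + k * bm, bm + k * fm
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]           # Levinson-style coefficient update
        e *= 1.0 - k * k
    return a, e

def mem_psd(a, e, freqs):
    """All-pole power spectral density E / |A(e^{j 2 pi f})|^2."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return e / np.abs(z @ a) ** 2

# Single spectral line at normalized frequency 0.2, plus a little noise.
t = np.arange(256)
x = np.cos(2 * np.pi * 0.2 * t) + 0.01 * np.random.default_rng(0).standard_normal(256)
a, e = burg(x, order=2)
freqs = np.linspace(0.0, 0.5, 501)
f_peak = freqs[np.argmax(mem_psd(a, e, freqs))]
```

The point the abstract makes, that MEM resolves lines from far shorter records than the FFT, follows from the model-based form of the estimate: resolution comes from the fitted poles, not from the record length alone.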
Mass mortality of the vermetid gastropod Ceraesignum maximum
Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.
2016-09-01
Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., that filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data is processed with the optimal polarimetric matched filter.
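The eigenvalue formulation described in the abstract is a generalized Rayleigh-quotient problem: maximize (w^H C_t w)/(w^H C_c w) over the filter w, where C_t and C_c are Hermitian positive-definite covariance matrices of the two scattering classes. A small NumPy sketch with toy matrices (not radar data):

```python
import numpy as np

def optimal_contrast_filter(c_target, c_clutter):
    """Filter maximizing the contrast ratio (w^H C_t w) / (w^H C_c w).

    Solves the generalized eigenproblem C_clutter^{-1} C_target w = lambda w
    and returns the eigenvector of the largest eigenvalue plus the ratio.
    """
    vals, vecs = np.linalg.eig(np.linalg.solve(c_clutter, c_target))
    i = int(np.argmax(vals.real))
    w = vecs[:, i]
    return w / np.linalg.norm(w), float(vals[i].real)

# Toy 2x2 covariances: target power concentrated in channel 0, white clutter.
c_t = np.array([[4.0, 0.0], [0.0, 1.0]])
c_c = np.eye(2)
w, contrast = optimal_contrast_filter(c_t, c_c)
```

For these toy matrices the best filter is the first channel alone, with contrast ratio 4; with correlated clutter the eigenvector mixes channels, which is exactly the behavior the paper exploits.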
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
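A maximum length (m-) sequence is generated by an LFSR with primitive feedback taps, and the hybrid-sum sequence analyzed above is the bitwise modulo-two sum of several such sequences. The sketch below (my own minimal construction, not the paper's formulation) builds two short m-sequences and their sum, and checks the classic period and balance properties:

```python
def lfsr(taps, state, n):
    """Fibonacci LFSR: `taps` are 0-based state indices XORed for feedback.

    The last stage is output each tick. With primitive feedback taps the
    output is a maximum length sequence of period 2**len(state) - 1.
    """
    s = list(state)
    out = []
    for _ in range(n):
        out.append(s[-1])
        fb = 0
        for t in taps:
            fb ^= s[t]
        s = [fb] + s[:-1]  # shift in the feedback bit
    return out

# Primitive taps giving maximum length sequences of period 15 and 7.
m4 = lfsr(taps=(3, 2), state=(1, 0, 0, 0), n=30)
m3 = lfsr(taps=(2, 1), state=(1, 0, 0), n=30)

# Hybrid-sum sequence: modulo-two sum of the two m-sequences.
hybrid = [a ^ b for a, b in zip(m4, m3)]
```

The degree-4 sequence repeats with period 15 and contains 8 ones per period (the balance property); the hybrid sum has the longer period lcm(15, 7) = 105, which is the efficiency argument the paper makes for forming large sequence families this way.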
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been a growing interest in using energy harvesting techniques for powering wireless sensor networks. The reason for utilizing this technology can be explained by the sensors' limited operation time, which results from the finite capacity of batteries, and the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multisource energy harvesting platforms to increase the amount of harvested energy and to mitigate the issue concerning the intermittent nature of ambient sources. In the context of solar energy harvesting, it is possible to develop algorithms for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
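The most common maximum power point tracking technique in this literature is perturb-and-observe: nudge the operating voltage, keep the direction if power rose, reverse it otherwise. A toy sketch against a made-up concave PV curve (the panel model and all numbers are illustrative assumptions, not from the article):

```python
def pv_power(v, i_sc=5.0, v_oc=40.0, m=3.0):
    """Toy PV curve: current falls off as (v/v_oc)**m (illustrative model)."""
    return i_sc * v * (1.0 - (v / v_oc) ** m)

def perturb_and_observe(v=15.0, step=0.5, iters=100):
    """Classic P&O hill climbing: reverse the perturbation when power drops."""
    p_prev = pv_power(v)
    dv = step
    for _ in range(iters):
        v += dv
        p = pv_power(v)
        if p < p_prev:
            dv = -dv  # overshot the peak: walk back
        p_prev = p
    return v

v_mpp = perturb_and_observe()
# Analytic optimum of this toy curve: v* = v_oc * (1/(m+1))**(1/m), about 25.2 V
```

Once near the peak, the tracker oscillates within a step or two of the true maximum power point, which is the well-known steady-state ripple of P&O and one motivation for the refinements the review surveys.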
Proscriptive Bayesian Programming and Maximum Entropy: a Preliminary Study
Koike, Carla Cavalcante
2008-11-01
Some problems found in robotics systems, such as avoiding obstacles, can be better described using proscriptive commands, where only prohibited actions are indicated, in contrast to prescriptive situations, which demand that a specific command be specified. An interesting question arises regarding the possibility of learning automatically whether proscriptive commands are suitable and which parametric function could best be applied. Lately, a great variety of problems in the robotics domain have been the object of research using probabilistic methods, including the use of maximum entropy in automatic learning for robot control systems. This work presents a preliminary study on automatic learning of proscriptive robot control using maximum entropy and Bayesian Programming. It is verified whether maximum entropy and related methods can favour proscriptive commands in an obstacle avoidance task executed by a mobile robot.
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems arising in differential geometry, for example the minimal submanifolds problem and the harmonic maps problem, are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 underlines some science domains where multitime optimal control problems appear. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach to multitime variational calculus. Section 5 (Section 6) proves that the minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to those of classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also showed reduced bias in the estimates of the subjective value of time and consumer surplus.
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows that exist between the traditional continuum regime and the free-molecular regime has proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy that is vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Kenneth W. K. Lui
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semidefinite relaxation methods with the iterative quadratic maximum likelihood technique as well as the Cramér-Rao lower bound.
Remarks on the strong maximum principle for nonlocal operators
Jerome Coville
2008-05-01
In this note, we study the existence of a strong maximum principle for the nonlocal operator $$\mathcal{M}[u](x) := \int_{G} J(g)\,u(x * g^{-1})\,d\mu(g) - u(x),$$ where $G$ is a topological group acting continuously on a Hausdorff space $X$ and $u \in C(X)$. First we investigate the general situation and derive a pre-maximum principle. Then we restrict our analysis to the case of homogeneous spaces (i.e., $X = G/H$). For such Hausdorff spaces, depending on the topology, we give a condition on $J$ such that a strong maximum principle holds for $\mathcal{M}$. We also revisit the classical case of the convolution operator (i.e., $G = (\mathbb{R}^n, +)$, $X = \mathbb{R}^n$, $d\mu = dy$).
Resource-constrained maximum network throughput on space networks
Yanling Xing; Ning Ge; Youzheng Wang
2015-01-01
This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, it proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear program (MILP) is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shed light on routing and transport protocol design in space communication, and can be used to obtain the optimal network throughput.
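As a hedged illustration of the kind of optimization involved (a toy two-hop store-and-forward network with invented capacities, not the paper's MILP formulation):

```python
from scipy.optimize import linprog

# Toy store-and-forward throughput LP: relay B receives from A during
# contact window 1 and forwards to C during window 2.
# Variables: x1 = volume A->B, x2 = volume B->C; maximise delivered
# volume x2 under contact capacities and B's storage limit.
cap_ab, cap_bc, storage_b = 5.0, 6.0, 4.0
c = [0.0, -1.0]                    # linprog minimises, so maximise x2 via -x2
A_ub = [[-1.0, 1.0]]               # x2 - x1 <= 0: only stored data is forwarded
b_ub = [0.0]
bounds = [(0.0, min(cap_ab, storage_b)),   # x1 limited by link and storage
          (0.0, cap_bc)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
throughput = -res.fun              # storage (4.0) is the binding constraint
```

In this toy instance the relay's storage is the bottleneck, which mirrors the paper's observation that throughput depends jointly on storage and delay constraints rather than on link capacity alone.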
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Quality, precision and accuracy of the Maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
The maximum force in a column under constant speed compression
Kuzkin, Vitaly A
2015-01-01
Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters where the maximum load supported by a column exceeds the Euler static force is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results, we answer the following question: "How slowly should the column be compressed in order to measure the static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotoni...
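For reference, the static benchmark against which the dynamic maximum load is compared is the Euler buckling force, P_E = π²EI/L² for a pinned-pinned column. A minimal sketch with illustrative values (not taken from the paper):

```python
import math

# Euler buckling force of a pinned-pinned column, P_E = pi^2 * E * I / L^2.
# Material and geometry below are illustrative only.
E = 200e9              # Young's modulus of steel, Pa
b = h = 0.01           # square cross-section side, m
L = 1.0                # column length, m
I = b * h ** 3 / 12    # second moment of area, m^4
P_euler = math.pi ** 2 * E * I / L ** 2   # roughly 1.6 kN for these values
```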
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
Intense electron and ion beams
Molokovsky, Sergey Ivanovich
2005-01-01
Intense Ion and Electron Beams treats intense charged-particle beams used in vacuum tubes, particle beam technology and experimental installations such as free electron lasers and accelerators. It addresses, among other things, the physics and basic theory of intense charged-particle beams; computation and design of charged-particle guns and focusing systems; multiple-beam charged-particle systems; and experimental methods for investigating intense particle beams. The coverage is carefully balanced between the physics of intense charged-particle beams and the design of optical systems for their formation and focusing. It can be recommended to all scientists studying or applying vacuum electronics and charged-particle beam technology, including students, engineers and researchers.
The economics of intense exercise.
Meltzer, David O; Jena, Anupam B
2010-05-01
Despite the well-known benefits of exercise, the time required for exercise is widely understood as a major reason for low levels of exercise in the US. Intensity of exercise can change the time required for a given amount of total exercise but has never been studied from an economic perspective. We present a simple model of exercise behavior which suggests that the intensity of exercise should increase relative to time spent exercising as wages increase, holding other determinants of exercise constant. Our empirical results identify an association between income and exercise intensity that is consistent with the hypothesis that people respond to increased time costs of exercise by increasing intensity. More generally, our results suggest that time costs may be an important determinant of exercise patterns and that factors that can influence the time costs of exercise, such as intensity, may be important concerns in designing interventions to promote exercise.
Maximum available flux of charged particles from the laser ablation plasma
Sakai, Yasuo; Itagaki, Tomonobu; Horioka, Kazuhiko
2016-12-01
The laser ablation plasma was characterized for high-flux sources of ion and electron beams. An ablation plasma was biased to a positive or a negative high voltage, and the fluxes of charged particles through a pair of extraction electrodes were measured as a function of the laser intensity IL. Maximum available fluxes and the ratios of electron to ion beam currents Je/Ji were evaluated as a function of the laser irradiance. The ion and electron fluxes increased with laser intensity, and the current ratio was around 40 at IL = 1.3 × 10⁸ W/cm², decreasing monotonically with further increase of the laser intensity. The current ratios Je/Ji were correlated with the parameters of the ablation plasma measured by electrostatic probes. The results showed that the ion fluxes are basically enhanced by supersonically drifting ions in the plasma, and the electron fluxes are also enhanced by the drift motion together with a reduction of the sheath potential due to the enhanced ion flux to the surrounding wall.
McKibben, R. B.; Connell, J. J.; Lopate, C.; Zhang, M.; Anglin, J.D.; Balogh, A.; Dalla, S.; Sanderson, T. R.; Marsden, R. G.; Hofer, M. Y.; Kunow, H.; Posner, A.; Heber, B.
2003-01-01
In 2000–2001 Ulysses passed from the south to the north polar regions of the Sun in the inner heliosphere, providing a snapshot of the latitudinal structure of cosmic ray modulation and solar energetic particle populations during a period near solar maximum. Observations from the COSPIN suite of energetic charged particle telescopes show that latitude variations in the cosmic ray intensity in the inner heliosphere are nearly non-existent near solar maximum, whereas small but ...
Estimating the maximum potential revenue for grid connected electricity storage :
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and as a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
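A hedged sketch of the arbitrage-only linear program described above, with four invented hourly prices rather than CAISO data and losses ignored for brevity:

```python
import numpy as np
from scipy.optimize import linprog

# Arbitrage-only revenue LP.  For each hour t choose charge c_t and
# discharge d_t (MWh); revenue = sum_t p_t * (d_t - c_t), with the state
# of charge kept in [0, E_max] and hourly power limited to P_max.
p = np.array([20.0, 15.0, 40.0, 50.0])   # $/MWh, invented prices
T, E_max, P_max = len(p), 2.0, 1.0       # 2 MWh storage, 1 MW power limit

cost = np.concatenate([p, -p])           # minimise -(revenue); x = [c, d]
Ltri = np.tril(np.ones((T, T)))          # cumulative-sum operator
# State-of-charge constraints 0 <= cumsum(c - d) <= E_max as two blocks:
A_ub = np.block([[Ltri, -Ltri], [-Ltri, Ltri]])
b_ub = np.concatenate([np.full(T, E_max), np.zeros(T)])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, P_max)] * (2 * T))
revenue = -res.fun                       # buy the cheap hours, sell the dear
```

Here the optimum charges fully in the two cheap hours and discharges in the two expensive ones; the same state-of-charge bookkeeping generalizes to the combined arbitrage-plus-regulation problem the paper solves.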
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which vary from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by theory), TOM can also be observed at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HO_{x} concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HO_{x} enhancement from the increased ionization.
Competence of nurses in the intensive cardiac care unit
Nobahar, Monir
2016-01-01
Introduction Competence of nurses is a complex combination of knowledge, function, skills, attitudes, and values. Delivering care for patients in the Intensive Cardiac Care Unit (ICCU) requires nurses’ competences. This study aimed to explain nurses’ competence in the ICCU. Methods This was a qualitative study in which purposive sampling with maximum variation was used. Data were collected through semi-structured interviews with 23 participants during 2012–2013. Interviews were recorded, tran...
Intensive insulin therapy in the intensive cardiac care unit.
Hasin, Tal; Eldor, Roy; Hammerman, Haim
2006-01-01
Treatment in the intensive cardiac care unit (ICCU) enables rigorous control of vital parameters such as heart rate, blood pressure, body temperature, oxygen saturation, serum electrolyte levels, urine output and many others. The importance of controlling the metabolic status of the acute cardiac patient, and specifically the level of serum glucose, was recently brought into focus but is still under-appreciated. This review aims to explain the rationale for providing intensive control of serum glucose levels in the ICCU, especially using intensive insulin therapy, and summarizes the available clinical evidence suggesting its effectiveness.
Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules
Gao, Junling; Chen, Min
2013-01-01
Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated from the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...
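For a generator with constant (linear) parameters, the estimate in question is P_max = V_oc·I_sc/4, the power into a matched load. The sketch below uses invented readings to show how a spread in the short-circuit current measured under the two switch modes propagates directly into the power estimate:

```python
# Linear-source maximum-power estimate, P_max = V_oc * I_sc / 4.
# The two readings are invented to mimic the ~10% spread between the
# open-to-short and short-to-open switch modes; they are not measured data.
def p_max(v_oc, i_sc):
    """Maximum power of a linear source from open-circuit V and short-circuit I."""
    return v_oc * i_sc / 4.0

open_to_short = p_max(4.2, 1.00)   # switch mode 1 reading
short_to_open = p_max(4.2, 0.90)   # switch mode 2 reading, lower I_sc
rel_diff = (open_to_short - short_to_open) / open_to_short
```

Because the estimate is linear in I_sc, a 10% disagreement in the current reading becomes a 10% disagreement in estimated maximum power, which is why the paper resorts to a nonlinear model to explain the discrepancy.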
Microcanonical origin of the maximum entropy principle for open systems.
Lee, Julian; Pressé, Steve
2012-10-01
There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.
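The equivalence can be checked numerically on a toy discrete system: maximizing Shannon entropy subject to normalization and a fixed mean energy reproduces the canonical Boltzmann distribution. A sketch with assumed energy levels:

```python
import numpy as np
from scipy.optimize import minimize

# Maximum entropy at fixed mean energy vs. the Boltzmann distribution.
E = np.array([0.0, 1.0, 2.0, 3.0])             # assumed energy levels
U = 1.2                                        # target mean energy, assumed

def neg_entropy(p):                            # minimise -S = sum p ln p
    return float(np.sum(p * np.log(p)))

cons = [{'type': 'eq', 'fun': lambda p: p.sum() - 1.0},
        {'type': 'eq', 'fun': lambda p: p @ E - U}]
res = minimize(neg_entropy, np.full(4, 0.25), constraints=cons,
               bounds=[(1e-9, 1.0)] * 4)
p_maxent = res.x

# Independent construction: Boltzmann weights exp(-beta*E), with beta
# chosen by bisection so the mean energy matches the same U.
lo, hi = 1e-6, 50.0
for _ in range(100):
    beta = 0.5 * (lo + hi)
    w = np.exp(-beta * E)
    lo, hi = (beta, hi) if (w @ E) / w.sum() > U else (lo, beta)
p_boltz = w / w.sum()
```

The constrained entropy maximizer and the Boltzmann construction agree to numerical precision, illustrating the equivalence the paper establishes analytically for an open system plus bath.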
Information Entropy Production of Spatio-Temporal Maximum Entropy Distributions
Cofre, Rodrigo
2015-01-01
Spiking activity from populations of neurons displays causal interactions and memory effects. Therefore, it is expected to show some degree of irreversibility in time. Motivated by spike train statistics, in this paper we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. Our approach is based on the transfer matrix technique, which enables us to find a homogeneous irreducible Markov chain that shares the same maximum entropy measure. We provide relevant examples in the context of spike train statistics.
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The illustration of the example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.
Maximum length scale in density based topology optimization
Lazarov, Boyan Stefanov; Wang, Fengwen
2017-01-01
The focus of this work is on two new techniques for imposing a maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low-pass filter applied to the design parametrization. The main idea...
On the Effect of Mortgages of Maximum Amount
Yang Zongping
2005-01-01
Since the enactment of the PRC Guarantee Law, mortgages of maximum amount have won wide application in a variety of business occupations, particularly in banking. Compared with the rich content of the 21-clause statute on mortgages of maximum amount in Japan's Civil Law, the Chinese law has only four principled clauses. Its lack of operability, plus its legislative gaps and defects, has a severe impact on the positive effectiveness of the law. The core issue is the question of effectiveness. Because the principles stipulated in the Law run counter to the diversity of its actual practices,
A Maximum Entropy Method for a Robust Portfolio Problem
Yingying Xu
2014-06-01
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for a market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
On the maximum grain size entrained by photoevaporative winds
Hutchison, Mark A; Maddison, Sarah T
2016-01-01
We model the behaviour of dust grains entrained by photoevaporation-driven winds from protoplanetary discs assuming a non-rotating, plane-parallel disc. We obtain an analytic expression for the maximum entrainable grain size in extreme-UV radiation-driven winds, which we demonstrate to be proportional to the mass loss rate of the disc. When compared with our hydrodynamic simulations, the model reproduces almost all of the wind properties for the gas and dust. In typical turbulent discs, the entrained grain sizes in the wind are smaller than the theoretical maximum everywhere but the inner disc due to dust settling.
Modified maximum likelihood registration based on information fusion
Yongqing Qi; Zhongliang Jing; Shiqiang Hu
2007-01-01
The bias estimation of passive sensors is considered based on information fusion in multi-platform multisensor tracking system. The unobservable problem of bearing-only tracking in blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than maximum likelihood method. It is statistically efficient since the standard deviation of bias estimation errors meets the theoretical lower bounds.
Maximum-entropy distributions of correlated variables with prespecified marginals.
Larralde, Hernán
2012-12-01
The problem of determining the joint probability distributions for correlated random variables with prespecified marginals is considered. When the joint distribution satisfying all the required conditions is not unique, the "most unbiased" choice corresponds to the distribution of maximum entropy. The calculation of the maximum-entropy distribution requires the solution of rather complicated nonlinear coupled integral equations, exact solutions to which are obtained for the case of Gaussian marginals; otherwise, the solution can be expressed as a perturbation around the product of the marginals if the marginal moments exist.
A discussion on maximum entropy production and information theory
Bruers, Stijn [Instituut voor Theoretische Fysica, Celestijnenlaan 200D, Katholieke Universiteit Leuven, B-3001 Leuven (Belgium)
2007-07-06
We will discuss the maximum entropy production (MaxEP) principle based on Jaynes' information theoretical arguments, as was done by Dewar (2003 J. Phys. A: Math. Gen. 36 631-41, 2005 J. Phys. A: Math. Gen. 38 371-81). With the help of a simple mathematical model of a non-equilibrium system, we will show how to derive minimum and maximum entropy production. Furthermore, the model will help us to clarify some confusing points and to see differences between some MaxEP studies in the literature.
Generalized Relativistic Wave Equations with Intrinsic Maximum Momentum
Ching, Chee Leong
2013-01-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wavefunctions of the one-dimensional Klein-Gordon and Dirac equations with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of the scalar potential is stronger than the vector potential. The energy spectrum of the systems studied is bounded from above, whereby classical characteristics are observed in the uncertainties of the position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
Generalized relativistic wave equations with intrinsic maximum momentum
Ching, Chee Leong; Ng, Wei Khim
2014-05-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wave functions of one-dimensional Klein-Gordon and Dirac equation with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of scalar potential is stronger than vector potential. The energy spectrum of the systems studied is bounded from above, whereby classical characteristics are observed in the uncertainties of position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimating parameter values and confidence regions by maximum likelihood and Fisher efficient scores, starting from Poisson probabilities, are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives, called minimum chi-squared, because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
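A minimal sketch of the Poisson-likelihood fitting advocated here, using simulated counts and an invented one-parameter spectral model rather than the paper's spectra:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Fit the normalisation 'a' of a known spectral shape to Poisson counts by
# minimising the Poisson negative log-likelihood (the Cash statistic up to
# an a-independent constant), avoiding chi-squared approximations.
rng = np.random.default_rng(1)
shape = np.array([5.0, 4.0, 3.0, 2.0, 1.0])    # assumed model shape per bin
a_true = 10.0
counts = rng.poisson(a_true * shape)           # simulated Poisson data

def nll(a):
    mu = a * shape                             # model prediction per bin
    return float(np.sum(mu - counts * np.log(mu)))

a_hat = minimize_scalar(nll, bounds=(0.1, 100.0), method='bounded').x
# For this linear model the ML solution is analytic: sum(counts)/sum(shape)
```

For low-count bins this likelihood remains exact where the Gaussian approximation behind minimum chi-squared breaks down, which is the advantage the abstract argues for.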
A maximum in the strength of nanocrystalline copper
Schiøtz, Jakob; Jacobsen, Karsten Wedel
2003-01-01
We used molecular dynamics simulations with system sizes up to 100 million atoms to simulate plastic deformation of nanocrystalline copper. By varying the grain size between 5 and 50 nanometers, we show that the flow stress, and thus the strength, exhibits a maximum at a grain size of 10 to 15 nanometers. This maximum is due to a shift in the microscopic deformation mechanism from dislocation-mediated plasticity in the coarse-grained material to grain-boundary sliding in the nanocrystalline region. The simulations allow us to observe the mechanisms behind the grain-size dependence...
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
ADRIANA MIHAELA PORCUȚAN
2016-11-01
The Suceava river basin gets its tributaries from the eastern slopes of the northern group of the Eastern Romanian Carpathians, situated under the influence of Baltic air masses, which bring rainfall and cold weather, felt in the water flow regime of the rivers in the region studied. This flow regime varies from month to month, with the maximum flow playing the most important role in the restoration of underground water reserves. This study examines the temporal (frequency, duration and intensity) and quantitative (volume and flow) parameters of periods with maximum flow, at monthly and seasonal level, revealing the differences in the flow regime induced by the relief's morphometric particularities (altitude, degree of fragmentation, exposure). To evaluate the mentioned parameters, the TML 2.1 extension from the HydroOffice software package was used, which applies quantitative thresholds that determine the appearance and disappearance of periods with maximum flow.
High-frequency maximum observable shaking map of Italy from fault sources
Zonno, Gaetano
2012-03-17
We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.
Yasso, B; Li, Y; Alexander, A; Mel'nikova, N B; Mukhina, I V
2014-01-01
The relative bioavailability and intensity of penetration of glucosamine sulfate were compared for oral, injection and topical administration of the dosage form Hondroxid Maximum, a cream containing a micellar system for transdermal delivery of glucosamine, in an experiment on Sprague-Dawley rats. Based on the pharmacokinetic profiles of glucosamine in rat blood plasma after administration of the cream Hondroxid Maximum (400 mg/kg, three times a day for 1 week) and a single injection of 4% glucosamine sulfate solution (400 mg/kg), the relative bioavailability was found to be 61.6%. The calculated penetration rate of glucosamine into the plasma through rat skin over 4 hours was 26.9 μg/cm²·h, and the penetration of glucosamine through the skin into the plasma after a single dose of cream over 4 hours was 4.12%. Comparative analysis of literature and experimental data, and calculations based on them, suggests that Hondroxid Maximum, a cream with a transdermal glucosamine complex, used in accordance with the instructions, can provide an average glucosamine concentration in the synovial fluid of an inflamed joint in the range 0.7–1.5 μg/ml, much higher than the concentration of endogenous glucosamine in human synovial joint fluid (0.02–0.07 μg/ml). Theoretical calculations taking experimental data into account show that Hondroxid Maximum can reach the bioavailability level of modern injection forms and exceed that of modern oral forms of glucosamine by up to 2 times.
Ticha Sethapakdi
2012-01-01
A small steel ball was dropped onto the center of a bongo drum. The relationship between impact energy and the maximum amplitude of the sound produced upon impact was determined using release heights ranging from 5 to 70 cm. It was found that there was a power relation between the impact energy and the maximum amplitude of the sound, indicating that the partitioning of energy in the system is dependent on impact energy.
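A power-law fit of the kind reported above can be sketched by least squares on log-transformed data. Everything below is illustrative, not from the study: the ball mass, the true exponent and the noise level are made-up values used only to show the fitting procedure.

```python
import numpy as np

# Hypothetical sketch: recovering a power-law relation A = c * E**b between
# impact energy E and maximum sound amplitude A via a log-log linear fit.
rng = np.random.default_rng(0)
g, m = 9.81, 0.008                      # assumed ball mass (kg), not from the paper
heights = np.linspace(0.05, 0.70, 14)   # release heights spanning 5-70 cm
energy = m * g * heights                # impact energy E = m*g*h (J)
true_b, true_c = 0.75, 2.0              # assumed exponent and prefactor
amplitude = true_c * energy**true_b * rng.lognormal(0.0, 0.02, heights.size)

# Fit log A = b * log E + log c; slope b is the power-law exponent.
b, log_c = np.polyfit(np.log(energy), np.log(amplitude), 1)
print(f"fitted exponent b = {b:.3f}, prefactor c = {np.exp(log_c):.3f}")
```

With small multiplicative noise the fitted slope lands close to the assumed exponent, which is the signature of a power relation on a log-log plot.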
Intensity measures for seismic liquefaction hazard evaluation of sloping site
陈志雄; 程印; 肖杨; 卢谅; 阳洋
2015-01-01
This work investigates the correlation between a large number of widely used ground motion intensity measures (IMs) and the corresponding liquefaction potential of a soil deposit during earthquake loading. To this end, 32 sloping liquefiable site models consisting of layered cohesionless soil were subjected to 139 earthquake ground motions. Two sets of ground motions, consisting of 80 ordinary records and 59 pulse-like near-fault records, are used in the dynamic analyses. The liquefaction potential of the site is expressed in terms of the mean pore pressure ratio, the maximum ground settlement, the maximum ground horizontal displacement and the maximum ground horizontal acceleration. For each individual accelerogram, the values of the aforementioned liquefaction potential measures are determined. Then, the correlation between the liquefaction potential measures and the IMs is evaluated. The results reveal that the velocity spectrum intensity (VSI) shows the strongest correlation with the liquefaction potential of a sloping site. VSI is also proven to be a sufficient intensity measure with respect to earthquake magnitude and source-to-site distance, and has good predictability, making it a prime candidate for seismic liquefaction hazard evaluation.
Influence of Noise on Time Evolution of Intensity Correlation Function
Anonymous
2005-01-01
Using the linear approximation method, we have studied how the correlation function C(t) of the laser intensity changes with time in the loss-noise model of the single-mode laser driven by colored pump noise with signal modulation and quantum noise with cross-correlation between the real and imaginary parts. We have found that when the pump noise self-correlation time τ changes, in the case of τ > 1 the curve only exhibits periodic surging with a descending envelope. When τ < 1 and τ does not change, with the increase of the pump noise intensity P the curve goes through a repeated sequence of changes: from monotonic descent to the appearance of a maximum, then to monotonic rise, and finally to the appearance of a maximum again. With the increase of the quantum noise intensity Q, the curve changes from monotonic rise to the appearance of a maximum, and finally to monotonic descent. Increasing the quantum noise cross-correlation between the real and imaginary parts lowers the whole curve but does not affect the form of the time evolution of C(t).
A Brooks type theorem for the maximum local edge connectivity
Stiebitz, Michael; Toft, Bjarne
2017-01-01
For a graph $G$, let $\cn(G)$ and $\la(G)$ denote the chromatic number of $G$ and the maximum local edge connectivity of $G$, respectively. A result of Dirac \cite{Dirac53} implies that every graph $G$ satisfies $\cn(G)\leq \la(G)+1$. In this paper we characterize the graphs $G$ for which $\cn(G)...
Prediction of Maximum Oxygen Consumption from Walking, Jogging, or Running.
Larsen, Gary E.; George, James D.; Alexander, Jeffrey L.; Fellingham, Gilbert W.; Aldana, Steve G.; Parcell, Allen C.
2002-01-01
Developed a cardiorespiratory endurance test that retained the inherent advantages of submaximal testing while eliminating reliance on heart rate measurement in predicting maximum oxygen uptake (VO2max). College students completed three exercise tests. The 1.5-mile endurance test predicted VO2max from submaximal exercise without requiring heart…
On the maximum backscattering cross section of passive linear arrays
Solymar, L.; Appel-Hansen, Jørgen
1974-01-01
The maximum backscattering cross section of an equispaced linear array connected to a reactive network and consisting of isotropic radiators is calculated forn = 2, 3, and 4 elements as a function of the incident angle and of the distance between the elements. On the basis of the results obtained...
Scientific substantiation of the maximum allowable concentration of fluopicolide in water
Pelo I.M.
2014-03-01
Research was carried out in order to substantiate the maximum allowable concentration of fluopicolide in the water of water reservoirs. Methods of study: a laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The influence of fluopicolide on the organoleptic properties of water and on the sanitary regimen of reservoirs for household purposes was determined, and its subthreshold concentration in water by the sanitary-toxicological hazard index was calculated. The threshold concentration of the substance by the main hazard criteria was established, and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the threshold concentration of fluopicolide in water by the organoleptic hazard index (limiting criterion: the smell) is 0.15 mg/dm3; by the general sanitary hazard index (limiting criteria: impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification), 0.015 mg/dm3; the maximum non-effective concentration is 0.14 mg/dm3; and the maximum allowable concentration is 0.015 mg/dm3.
Effects of bruxism on the maximum bite force
Todić Jelena T.
2017-01-01
Background/Aim. Bruxism is a parafunctional activity of the masticatory system characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism was registered using a specific clinical questionnaire on bruxism and a physical examination. The subjects in both groups underwent measurement of maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying the maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism than in those without bruxism (p < 0.01). Maximal bite force was significantly higher in males than in females in all segments of the research. Conclusion. The presence of bruxism increases maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
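The force computation in the Methods (maximal bite force = maximal bite pressure × occlusal contact area) is a simple unit-consistent product, since 1 MPa equals 1 N/mm². A minimal sketch with illustrative numbers, not measurements from the study:

```python
# Maximal bite force as the product of maximal bite pressure and occlusal
# contact area, as described in the study's Methods. The example values
# below are made up for illustration only.
def bite_force_newtons(max_pressure_mpa: float, contact_area_mm2: float) -> float:
    """Pressure in MPa (= N/mm^2) times area in mm^2 gives force in newtons."""
    return max_pressure_mpa * contact_area_mm2

print(bite_force_newtons(40.0, 15.0))  # -> 600.0 (newtons)
```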
A Unified Maximum Likelihood Approach to Document Retrieval.
Bodoff, David; Enache, Daniel; Kambil, Ajit; Simon, Gary; Yukhimets, Alex
2001-01-01
Addresses the query- versus document-oriented dichotomy in information retrieval. Introduces a maximum likelihood approach to utilizing feedback data that can be used to construct a concrete object function that estimates both document and query parameters in accordance with all available feedback data. (AEF)
Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon
Fischer, Paul
1997-01-01
such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time...
34 CFR 682.204 - Maximum loan amounts.
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Maximum loan amounts. 682.204 Section 682.204 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION FEDERAL FAMILY EDUCATION LOAN (FFEL) PROGRAM General Provisions § 682.204...
Triadic conceptual structure of the maximum entropy approach to evolution.
Herrmann-Pillath, Carsten; Salthe, Stanley N
2011-03-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real-valued) continuous random variables and for integer-valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed-form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer-valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer-valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
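For the p = 1 case the closed form is classical: with a fixed first absolute moment E|X| = m, the Laplace density attains the maximum differential entropy h = 1 + ln(2m), which is linear in ln m as the abstract notes. A small numerical sanity check, assuming this standard result rather than code from the paper:

```python
import math

# Closed-form maximum differential entropy under an L1-norm constraint:
# among continuous densities with E|X| = m, the Laplace(0, m) density is
# optimal, with entropy h = 1 + ln(2m), linear in ln m.
m = 1.0                                  # fixed L1 norm (first absolute moment)
h_laplace = 1.0 + math.log(2.0 * m)

# A zero-mean Gaussian with the same E|X| has sigma = m * sqrt(pi/2);
# its differential entropy is 0.5 * ln(2*pi*e*sigma^2).
sigma = m * math.sqrt(math.pi / 2.0)
h_gauss = 0.5 * math.log(2.0 * math.pi * math.e * sigma**2)

print(h_laplace, h_gauss)                # Laplace entropy is strictly larger
```

The Gaussian, optimal under a second-moment constraint, is strictly suboptimal here, as the maximizer depends on which Lp norm is held fixed.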
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
Cooper, William S.
1983-01-01
Presents an information retrieval design approach in which queries of a computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs are ranked by probability of usefulness estimated by the "maximum entropy principle." Boolean and weighted request systems are discussed.…
The constraint rule of the maximum entropy principle
Uffink, J.
2001-01-01
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions…
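A standard concrete instance of the constraint-rule setup (Jaynes' die, not an example taken from the paper): among distributions on {1, …, 6} with a prescribed mean, the entropy maximizer has exponential form p_k ∝ exp(λk), and the multiplier λ can be found by bisection on the monotone mean map:

```python
import math

# Maximum entropy distribution on a die face {1,...,6} subject to a mean
# constraint. The maximizer is p_k proportional to exp(lam * k); we solve
# for lam by bisection. The target mean 4.5 is the usual textbook value.
def die_maxent(mean, lo=-10.0, hi=10.0):
    def mean_of(lam):
        w = [math.exp(lam * k) for k in range(1, 7)]
        z = sum(w)
        return sum(k * wk for k, wk in zip(range(1, 7), w)) / z
    for _ in range(200):                  # bisection: mean_of is increasing in lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean_of(mid) < mean else (lo, mid)
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * k) for k in range(1, 7)]
    z = sum(w)
    return [wk / z for wk in w]

p = die_maxent(4.5)
print([round(x, 4) for x in p])           # probabilities increase with the face value
```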
On the maximum entropy principle in non-extensive thermostatistics
Naudts, Jan
2004-01-01
It is possible to derive the maximum entropy principle from thermodynamic stability requirements. Using as a starting point the equilibrium probability distribution, currently used in non-extensive thermostatistics, it turns out that the relevant entropy function is Renyi's alpha-entropy, and not Tsallis' entropy.
MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR
SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the estimation…
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
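The contrast between ground-state (maximum likelihood) decoding and finite-temperature maximum entropy decoding can be sketched on a toy Ising model in a field. The couplings, fields and temperature below are made up for illustration; they are not the annealer's decoding instance:

```python
import itertools
import math

# Toy Ising model in a field: couplings J, local fields h (illustrative).
J = {(0, 1): 1.0, (1, 2): -0.5}
h = [0.2, -0.1, 0.3]
beta = 1.0                                # inverse temperature

def energy(s):
    e = -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e - sum(hi * si for hi, si in zip(h, s))

states = list(itertools.product([-1, 1], repeat=3))
weights = [math.exp(-beta * energy(s)) for s in states]
Z = sum(weights)

# ML decoding: the sign pattern of the minimum-energy (ground) state.
ml = min(states, key=energy)
# Maximum entropy decoding: sign of each spin's Boltzmann-weighted average,
# which also draws information from the excited states.
marg = [sum(w * s[i] for s, w in zip(states, weights)) / Z for i in range(3)]
me = tuple(1 if mi >= 0 else -1 for mi in marg)
print(ml, me)
```

At low noise the two decoders agree; the abstract's point is that at finite temperature the thermal marginals can correct bits the ground state alone would get wrong.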
40 CFR 35.145 - Maximum federal share.
2010-07-01
... STATE AND LOCAL ASSISTANCE Environmental Program Grants Air Pollution Control (section 105) § 35.145 Maximum federal share. (a) The Regional Administrator may provide air pollution control agencies, as... programs for the prevention and control of air pollution or implementing national primary and...
Maximum Safety Regenerative Power Tracking for DC Traction Power Systems
Guifu Du
2017-02-01
Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential voltage known as “rail potential” is generated between the rails and ground. Currently, abnormal rises of rail potential occur in many railway lines during the operation of railway systems. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of maximum absolute rail potential and energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized based on an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively while guaranteeing safety during energy saving in DC traction power systems.
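A minimal PSO loop of the kind used for the dwell-time and trigger-voltage optimization might look like the following sketch. The toy quadratic cost merely stands in for the paper's rail-potential objective; particle count, inertia and acceleration coefficients are common textbook defaults, not values from the study:

```python
import random

# Minimal 1-D particle swarm optimisation. cost: objective to minimise;
# [lo, hi]: search interval; w: inertia; c1/c2: cognitive/social weights.
def pso(cost, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]   # particle positions
    vs = [0.0] * n                                 # particle velocities
    pbest = xs[:]                                  # personal best positions
    gbest = min(pbest, key=cost)                   # global best position
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clamp to the interval
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest + [gbest], key=cost)
    return gbest

# Toy stand-in for the rail-potential cost: minimum at x = 3.
best = pso(lambda x: (x - 3.0) ** 2, 0.0, 10.0)
print(best)
```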
A MAXIMUM ENTROPY METHOD FOR CONSTRAINED SEMI-INFINITE PROGRAMMING PROBLEMS
ZHOU Guanglu; WANG Changyu; SHI Zhenjun; SUN Qingying
1999-01-01
This paper presents a new method, called the maximum entropy method, for solving semi-infinite programming problems, in which the semi-infinite programming problem is approximated by one with a single constraint. The convergence properties of this method are discussed. Numerical examples are given to show the high efficiency of the algorithm.
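The single smoothed constraint in methods of this family is typically the entropic (log-sum-exp) aggregate of the constraint family: max_i g_i(x) ≤ 0 is replaced by G_p(x) = (1/p) ln Σ_i exp(p g_i(x)) ≤ 0, which tends to the maximum as p → ∞. A sketch with a made-up finite family of constraint values (the paper's setting is semi-infinite, so this is only the finite-sample idea):

```python
import math

# Entropic aggregate of a family of constraint values: as the smoothing
# parameter p grows, aggregate(gs, p) converges to max(gs) from above.
def aggregate(gs, p):
    m = max(gs)                            # shift for numerical stability
    return m + math.log(sum(math.exp(p * (g - m)) for g in gs)) / p

gs = [0.3, -1.0, 0.1]                      # illustrative constraint values
for p in (10.0, 100.0, 1000.0):
    print(p, aggregate(gs, p))             # approaches max(gs) = 0.3
```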
Closed form maximum likelihood estimator of conditional random fields
Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter M.G.; Wombacher, Andreas
2013-01-01
Training Conditional Random Fields (CRFs) can be very slow for big data. In this paper, we present a new training method for CRFs called {\em Empirical Training}, which is motivated by the concept of co-occurrence rate. We show that the standard training (unregularized) can have many maximum likelihood…
Heteroscedastic one-factor models and marginal maximum likelihood estimation
Hessen, D.J.; Dolan, C.V.
2009-01-01
In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimation…
Silica structures in the grass Panicum maximum
Pedro Fontana Junior
1957-05-01
The silica structures of the grass Panicum maximum were studied by electron and phase-contrast microscopy. Several interesting kinds of silica formations (spikelets) were found. The most interesting structures resemble the "Schaumstruktur" found by HELMCKE in diatoms. Another interesting structure was described in the "silica cells", and a detailed study of the morphology of some different kinds of spikelets was made.
Bias Correction for Alternating Iterative Maximum Likelihood Estimators
Gang YU; Wei GAO; Ningzhong SHI
2013-01-01
In this paper, we give a definition of the alternating iterative maximum likelihood estimator (AIMLE), which is a biased estimator. Furthermore, we adjust the AIMLE to obtain asymptotically unbiased and consistent estimators by using a bootstrap iterative bias correction method as in Kuk (1995). Two examples and reported simulation results illustrate the performance of the bias correction for the AIMLE.
21 CFR 801.415 - Maximum acceptable level of ozone.
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Maximum acceptable level of ozone. 801.415 Section... level of ozone. (a) Ozone is a toxic gas with no known useful medical application in specific, adjunctive, or preventive therapy. In order for ozone to be effective as a germicide, it must be present in...
The maximum number of minimal codewords in long codes
Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.;
2013-01-01
Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981...
A Monte Carlo Evaluation of Maximum Likelihood Multidimensional Scaling Methods
Bijmolt, T.H.A.; Wedel, M.
1996-01-01
We compare three alternative Maximum Likelihood Multidimensional Scaling methods for pairwise dissimilarity ratings, namely MULTISCALE, MAXSCAL, and PROSCAL, in a Monte Carlo study. The three MLMDS methods recover the true configurations very well. The recovery of the true dimensionality depends on the…
Quantum-dot Carnot engine at maximum power.
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; Van den Broeck, Christian
2010-04-01
We evaluate the efficiency at maximum power of a quantum-dot Carnot heat engine. The universal values of the coefficients at the linear and quadratic order in the temperature gradient are reproduced. Curzon-Ahlborn efficiency is recovered in the limit of weak dissipation.
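The two efficiencies mentioned are simple closed forms: the Carnot bound 1 − Tc/Th and the Curzon-Ahlborn efficiency at maximum power, 1 − √(Tc/Th). A quick comparison with illustrative reservoir temperatures:

```python
import math

# Carnot (reversible) efficiency versus Curzon-Ahlborn efficiency at
# maximum power for a heat engine between reservoirs at Tc and Th (K).
def carnot(tc: float, th: float) -> float:
    return 1.0 - tc / th

def curzon_ahlborn(tc: float, th: float) -> float:
    return 1.0 - math.sqrt(tc / th)

tc, th = 300.0, 600.0                      # illustrative temperatures
print(carnot(tc, th), curzon_ahlborn(tc, th))   # 0.5 vs ~0.2929
```

The efficiency at maximum power is always below the Carnot bound, which is attained only in the reversible (zero-power) limit.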
Maximum likelihood estimation of phase-type distributions
Esparza, Luz Judith R
This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions ...
Mechanical Sun-Tracking Technique Implemented for Maximum ...
The solar panel is allowed to move from east to west and back with a maximum allowable angle of 180°. Its movement is in only one axis. The prototype built carries the panel from eastward to westward, tracking the sun's movement from ...
24 CFR 941.306 - Maximum project cost.
2010-04-01
...) project costs that are subject to the TDC limit (i.e., Housing Construction Costs and Community Renewal Costs); and (2) project costs that are not subject to the TDC limit (i.e., Additional Project Costs... expended for the project, and this becomes the maximum project cost for purposes of the ACC. (b) TDC...
Adaptive Statistical Language Modeling; A Maximum Entropy Approach
1994-04-19
recognition systems were built that could recognize vowels or digits, but they could not be successfully extended to handle more realistic language... maximum likelihood of generating the training data. The identity of the ML and ME solutions, apart from being aesthetically pleasing, is extremely
33 CFR 401.3 - Maximum vessel dimensions.
2010-07-01
..., and having dimensions that do not exceed the limits set out in the block diagram in appendix I of this... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Maximum vessel dimensions. 401.3 Section 401.3 Navigation and Navigable Waters SAINT LAWRENCE SEAWAY DEVELOPMENT CORPORATION, DEPARTMENT...
A relationship between maximum packing of particles and particle size
Fedors, R. F.
1979-01-01
Experimental data indicate that the volume fraction of particles in a packed bed (i.e. maximum packing) depends on particle size. One explanation for this is based on the idea that particle adhesion is the primary factor. In this paper, however, it is shown that entrainment and immobilization of liquid by the particles can also account for the facts.