The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as the 2011 Tohoku earthquake) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine whether there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt to test such estimates is futile, because the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. We suggest that, going forward, PSHA modelers either be forthright about the uncertainty of M estimates or find a way to decrease their influence on the estimated hazard.
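As a toy illustration of the extreme-value construction described above: if events above a threshold m0 follow a Poisson process with Gutenberg-Richter magnitudes, the distribution of the maximum magnitude in a future interval T has a simple closed form. All parameter values below are hypothetical, and this is not the paper's exact construction.

```python
import math

def prob_max_below(m, rate_above_m0, b, m0, t_years):
    """P(largest magnitude in the next t_years is <= m), assuming events
    above m0 occur as a Poisson process whose magnitudes follow an
    untruncated Gutenberg-Richter law. Illustrative toy model only."""
    rate_above_m = rate_above_m0 * 10.0 ** (-b * (m - m0))  # G-R rate above m
    return math.exp(-rate_above_m * t_years)                # P(zero events > m)

# chance that no event exceeds M7 in 50 years, given 10 events/yr above M4:
p = prob_max_below(7.0, rate_above_m0=10.0, b=1.0, m0=4.0, t_years=50.0)
```

The steep dependence of this probability on the assumed rate and b-value is precisely what makes the resulting distribution hard to test against a short observation window.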
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
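The moment bound stated in this abstract (maximum seismic moment limited by the modulus of rigidity times the injected volume) is easy to sketch numerically. The shear-modulus value and the Hanks-Kanamori moment-magnitude conversion used below are standard crustal assumptions, not values taken from the paper.

```python
import math

def mcgarr_max_magnitude(injected_volume_m3, shear_modulus_pa=3.0e10):
    """Upper bound on seismic moment (N*m) and moment magnitude for a
    fluid-injection project, following the bound M0_max = G * dV
    described in the abstract. G = 3e10 Pa is a typical crustal value."""
    m0_max = shear_modulus_pa * injected_volume_m3        # seismic moment, N*m
    mw_max = (2.0 / 3.0) * (math.log10(m0_max) - 9.1)     # Hanks-Kanamori
    return m0_max, mw_max

# e.g. 1e5 m^3 of injected wastewater:
m0, mw = mcgarr_max_magnitude(1.0e5)
```

As the abstract cautions, this is a statistical bound under idealized assumptions (full saturation, b = 1), not an absolute physical limit.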
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction Zone
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum, if it exists. We therefore introduced the alternative concept of mp(T), the probable maximum magnitude within a time interval T. The mp(T) can be obtained from theoretical magnitude-frequency distributions such as the Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs, with an additional constraint from the tectonic moment rate. Here, we integrate paleoseismic data from the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000-year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time intervals between events, the rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
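A minimal sketch of how a "probable maximum magnitude within time T" can be read off a Tapered Gutenberg-Richter law, assuming a Poisson occurrence model and a hypothetical M5 completeness threshold; the authors' actual procedure differs in detail.

```python
import math

def tgr_survival(m, beta, m_corner):
    """Fraction of events with magnitude > m under the Tapered
    Gutenberg-Richter law, written in seismic-moment space.
    The M5 lower threshold is an assumption for this sketch."""
    moment = lambda mag: 10.0 ** (1.5 * mag + 9.1)        # N*m
    x, xc, x0 = moment(m), moment(m_corner), moment(5.0)
    return (x0 / x) ** beta * math.exp((x0 - x) / xc)

def probable_max_magnitude(rate_above_m5, beta, m_corner, t_years):
    """Magnitude with a 50% chance of being exceeded at least once in
    t_years (one simple reading of m_p(T)), found by bisection."""
    target = -math.log(0.5)               # Poisson: P(>=1 event) = 0.5
    lo, hi = 5.0, 11.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if rate_above_m5 * t_years * tgr_survival(mid, beta, m_corner) > target:
            lo = mid                      # exceedance still likely: go higher
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With one M5+ event per year, beta = 0.67 and a corner magnitude of 9, this gives an mp(100 yr) near M7.1; unlike an absolute mx, the estimate is finite and testable against a century of observation.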
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line model using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability that a model of the emission line spectrum represents the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived that allows line fluxes to be obtained efficiently. CORA has been applied to an X-ray spectrum obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data-reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line-fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has been used successfully in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability that a model of the emission line spectrum represents the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived that allows line fluxes to be obtained efficiently. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory, choosing the analysis of the Ne IX triplet around 13.5 Å.
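The fixed-point idea mentioned in the abstract can be illustrated for the simple model mu_i = A*p_i + b_i (a normalized line profile p_i on top of a known background b_i). This is a generic sketch of the approach, not CORA's actual implementation.

```python
import math

def poisson_loglike(counts, model):
    """log L = sum_i (n_i ln mu_i - mu_i), dropping the ln(n_i!) constant."""
    return sum(n * math.log(mu) - mu for n, mu in zip(counts, model))

def fit_line_amplitude(counts, profile, background, n_iter=200):
    """ML estimate of the line amplitude A in mu_i = A*p_i + b_i for Poisson
    counts, via the multiplicative fixed-point iteration
        A <- A * sum_i n_i p_i / (A p_i + b_i),
    whose fixed point satisfies the Poisson ML condition when the profile
    p_i is normalized to sum to 1. Sketch under those assumptions."""
    A = max(sum(counts) - sum(background), 1e-9)   # crude starting guess
    for _ in range(n_iter):
        A = A * sum(n * p / (A * p + b)
                    for n, p, b in zip(counts, profile, background))
    return A
```

Setting the derivative of the Poisson log-likelihood to zero gives sum_i n_i p_i / (A p_i + b_i) = sum_i p_i = 1, which is exactly the fixed point of the update above; with zero background the estimate reduces to the total number of counts, as it should.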
Physics-based estimates of maximum magnitude of induced earthquakes
Ampuero, Jean-Paul; Galis, Martin; Mai, P. Martin
2016-04-01
In this study, we present new findings from integrating earthquake physics and rupture dynamics into estimates of the maximum magnitude of induced seismicity (Mmax). Existing empirical relations for Mmax lack a physics-based link between earthquake size and the characteristics of the triggering stress perturbation. To fill this gap, we extend our recent work on the nucleation and arrest of dynamic ruptures derived from fracture mechanics theory. There, we derived theoretical relations between the area and overstress of an overstressed asperity and the ability of ruptures to either stop spontaneously (sub-critical ruptures) or run away (super-critical ruptures). These relations were verified by comparison with simulation and laboratory results, namely 3D dynamic rupture simulations on faults governed by slip-weakening friction, and laboratory experiments of frictional sliding nucleated by localized stresses. Here, we apply and extend these results to situations representative of the induced-seismicity environment. We present physics-based predictions of Mmax on a fault intersecting a cylindrical reservoir. We investigate the dependence of Mmax on pore-pressure variations (by varying reservoir parameters), frictional parameters, and the stress conditions of the fault. We also derive Mmax as a function of injected volume. Our approach yields results that are consistent with observations but suggests a different scaling with injected volume than the empirical relation of McGarr (2014).
Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions
Peacock, Sheila; Douglas, Alan; Bowers, David
2017-08-01
Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, in which signals at stations that do not report an arrival are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion was carried out to derive magnitudes for additional explosions with three or more arrivals. Ninety-three more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
A viable method for goodness-of-fit test in maximum likelihood fit
ZHANG Feng; GAO Yuan-Ning; HUO Lei
2011-01-01
A test statistic is proposed to perform the goodness-of-fit test in the unbinned maximum likelihood fit. Without using a detailed expression of the efficiency function, the test statistic is found to be strongly correlated with the maximum likelihood function if the efficiency function varies smoothly. We point out that the correlation coefficient can be estimated by the Monte Carlo technique. With the established method, two examples are given to illustrate the performance of the test statistic.
Maximum-likelihood fits to histograms for improved parameter estimation
Fowler, Joseph W
2013-01-01
Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
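The bias mechanism described here can be seen in a one-parameter toy example: for a constant model fitted to Poisson bin counts, minimizing Neyman's chi^2 (with 1/n_i weights) returns the harmonic mean of the counts, which is biased low, while the Poisson maximum-likelihood estimate is the unbiased arithmetic mean. This illustrates the mechanism only, not the paper's microcalorimeter analysis.

```python
def neyman_chi2_estimate(counts):
    """Minimizing chi^2 = sum_i (n_i - mu)^2 / n_i over a constant model mu
    gives the harmonic mean of the counts -- biased low for Poisson data."""
    return len(counts) / sum(1.0 / n for n in counts)

def poisson_ml_estimate(counts):
    """Maximizing the Poisson likelihood for the same constant model gives
    the arithmetic mean -- unbiased."""
    return sum(counts) / len(counts)

counts = [8, 12, 10, 9, 11]   # toy histogram with true mean 10
```

Here the chi^2 estimate comes out near 9.8 rather than 10; the discrepancy does not vanish as more bins are added, which is why the choice of statistic matters even for large samples.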
On the magnitude of temperature decrease in the equatorial regions during the Last Glacial Maximum
Wang, Ninglian; Yao, Tandong; Shi, Yafeng; Thompson, L. G.; Cole-Dai, J.; Lin, P.-N.; Davis, M. E.
1999-01-01
Based on temperature changes revealed by various palaeothermometric proxy indices, it is found that the magnitude of temperature decrease in the equatorial regions during the Last Glacial Maximum increased with altitude. The direct cause of this phenomenon was a change in the temperature lapse rate, which was about (0.1±0.05)°C/100 m steeper in the equatorial regions during the Last Glacial Maximum than at present. Moreover, the analyses show that CLIMAP possibly underestimated the sea-surface temperature decrease in the equatorial regions during the Last Glacial Maximum.
Maximum credible earthquake (MCE) magnitude of structures affecting the Ujung Lemahabang site
Soerjodibroto, M. [National Atomic Energy Agency, Jakarta (Indonesia)]
1997-03-01
This report analyses the geological structures in and around the Muria Peninsula that might generate a potential earthquake hazard at the site selected for an NPP, Ujung Lemahabang (ULA). Analysis was focused on the Lasem fault and the AF-1/AF-4 offshore faults, which are considered the determinant structures affecting the seismicity of ULA (Nira, 1979; Newjec, 1994). Methods for estimating the MCE of the structures include the maximum historical earthquake and relationships between fault length and the magnitude of earthquakes originating from the known structure (Tocher, Iida, Matsuda, Wells and Coppersmith). The MCE magnitudes estimated by these methods for earthquakes originating along the Lasem and AF-1/AF-4 faults vary from M 2.1 to M 7.0. Comparison between the results from historical data and the fault length-magnitude relationships, however, suggests an MCE magnitude of Ms = 7.0 for both fault zones. (author)
Prediction of maximum magnitude and origin time of reservoir induced seismicity
(no author listed)
2001-01-01
This paper deals with the prediction of the potential maximum magnitude and origin time of reservoir-induced seismicity (RIS). The seismological and geological factors and indicators of RIS are studied, and the information quantity they provide for the magnitude of induced seismicity is calculated. In terms of information quantity, the largest possible magnitude of RIS is determined. The changes of seismic frequency with time are studied using a grey-model method, and the time of the largest change rate is taken as the origin time of the main shock. The feasibility of these methods for predicting magnitude and time is tested on the reservoir-induced seismicity of the Xinfengjiang reservoir, China, and the Koyna reservoir, India.
Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert
2017-03-01
The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Because the data are sparse, the role of uncertainties becomes crucial. In this work, we determine the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this purpose, we use the original catalog, to which no declustering method was applied, as well as a declustered version of the catalog. As shown by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required; in this case, no information is gained from the data. We therefore elaborate for which settings finite confidence intervals are obtained. Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although the confidence intervals in the Central Iran and Zagros zones are reasonably informative at meaningful levels of confidence, the results in Kopet Dagh, Alborz, Azerbaijan and Makran are much less promising. The results indicate that estimating m_max from an earthquake catalog alone, at reasonable levels of confidence, is almost impossible.
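One simple construction in the spirit of this abstract (not the authors' exact estimator): the upper end of a one-sided confidence interval for m_max under a doubly truncated Gutenberg-Richter model, which comes out unbounded exactly when the data are too sparse for the requested confidence level.

```python
def truncated_gr_cdf(m, b, m_min, m_max):
    """CDF of the doubly truncated Gutenberg-Richter distribution."""
    g = lambda x: 1.0 - 10.0 ** (-b * (x - m_min))
    return g(m) / g(m_max)

def mmax_upper_bound(mu, n, b, m_min, conf, hi=12.0):
    """Upper end of a one-sided confidence interval for m_max: the largest
    m_max still consistent, at level `conf`, with having observed no event
    above mu among n samples. Returns None when the interval is unbounded,
    mirroring the behavior discussed by Holschneider et al. (2011)."""
    alpha = 1.0 - conf
    # as m_max -> infinity, the CDF at mu tends to its untruncated value;
    # if even that limit is consistent with the data, no finite bound exists
    if (1.0 - 10.0 ** (-b * (mu - m_min))) ** n > alpha:
        return None
    lo = mu
    for _ in range(80):                  # bisection on m_max
        mid = 0.5 * (lo + hi)
        if truncated_gr_cdf(mu, b, m_min, mid) ** n > alpha:
            lo = mid                     # mid still consistent: bound is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With a thousand events the bound is tight (just above the observed maximum), while with only ten events the same construction returns an unbounded interval at 95% confidence, which is the phenomenon the abstract reports for several Iranian zones.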
Last, Mark; Rabinowitz, Nitzan; Leonard, Gideon
2016-01-01
This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data were obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data were cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of the maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006-2010) are kept for testing while the previous annual records are used for training. The predictive features are based on the Gutenberg-Richter ratio as well as on some new seismic indicators based on moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake-prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year.
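The moving-average count indicators mentioned above can be sketched as follows; this is a hypothetical minimal version, and the paper's actual feature set is more elaborate.

```python
def yearly_features(counts_by_year, window=3):
    """Moving-average earthquake-count indicator: the feature for year t is
    the mean annual count over the preceding `window` years, so each
    prediction uses only information available before that year."""
    feats = []
    for t in range(window, len(counts_by_year)):
        feats.append(sum(counts_by_year[t - window:t]) / window)
    return feats
```

Such lagged aggregates are what allow a standard classifier to be trained on strictly past data, matching the paper's train-on-early-years, test-on-2006-2010 split.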
Brazilian Cardiorespiratory Fitness Classification Based on Maximum Oxygen Consumption
Herdy, Artur Haddad; Caixeta, Ananda
2016-01-01
Background: Cardiopulmonary exercise testing (CPET) is the most complete tool available to assess functional aerobic capacity (FAC). Maximum oxygen consumption (VO2 max), an important biomarker, reflects the real FAC. Objective: To develop a cardiorespiratory fitness (CRF) classification based on VO2 max in a Brazilian sample of healthy and physically active individuals of both sexes. Methods: We selected 2837 CPETs from 2837 individuals aged 15 to 74 years, distributed as follows: G1 (15 to 24); G2 (25 to 34); G3 (35 to 44); G4 (45 to 54); G5 (55 to 64); and G6 (65 to 74). Good CRF was defined as the mean VO2 max obtained for each group, generating subclassifications ranging from Very Low (VL) up to VO2 max above 105% of the group mean. Results: Mean VO2 max by group. Men: G1 53.13; G2 49.77; G3 47.67; G4 42.52; G5 37.06; G6 31.50. Women: G1 40.85; G2 40.01; G3 34.09; G4 32.66; G5 30.04; G6 26.36. Conclusions: This chart stratifies VO2 max measured on a treadmill in a robust Brazilian sample and can be used as an alternative for the real functional evaluation of physically active and healthy individuals stratified by age and sex. PMID:27305285
Variable selection for modeling the absolute magnitude at maximum of Type Ia supernovae
Uemura, Makoto; Kawabata, Koji S.; Ikeda, Shiro; Maeda, Keiichi
2015-06-01
We discuss what is an appropriate set of explanatory variables in order to predict the absolute magnitude at the maximum of Type Ia supernovae. In order to have a good prediction, the error for future data, which is called the "generalization error," should be small. We use cross-validation in order to control the generalization error and a LASSO-type estimator in order to choose the set of variables. This approach can be used even in the case that the number of samples is smaller than the number of candidate variables. We studied the Berkeley supernova database with our approach. Candidates for the explanatory variables include normalized spectral data, variables about lines, and previously proposed flux ratios, as well as the color and light-curve widths. As a result, we confirmed the past understanding about Type Ia supernovae: (i) The absolute magnitude at maximum depends on the color and light-curve width. (ii) The light-curve width depends on the strength of Si II. Recent studies have suggested adding more variables in order to explain the absolute magnitude. However, our analysis does not support adding any other variables in order to have a better generalization error.
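A generic sketch of the kind of L1-penalized estimation described above, using plain NumPy iterative soft-thresholding on synthetic data. The authors' estimator, data, and cross-validation setup differ; in practice the penalty `lam` would be chosen by cross-validation to control the generalization error.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=5000):
    """LASSO by iterative soft-thresholding (ISTA): minimizes
    (1/2n)||y - Xw||^2 + lam*||w||_1. A generic sketch of an L1-type
    variable-selection estimator, not the paper's exact method."""
    n, p = X.shape
    w = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n               # gradient of the quadratic
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # prox step
    return w

# synthetic data: only columns 0 and 2 of ten candidates actually matter
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 1.5 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=200)
w = lasso_ista(X, y, lam=0.1)
selected = np.flatnonzero(np.abs(w) > 1e-3)
```

The soft-thresholding step zeroes out the irrelevant coefficients exactly, which is the property that makes such estimators usable even when the number of candidate variables exceeds the sample size.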
Hoechner, Andreas; Babeyko, Andrey Y.; Zamora, Natalia
2016-06-01
Despite having been rather seismically quiescent for the last decades, the Makran subduction zone is capable of hosting destructive earthquakes and tsunamis. In particular, the well-known 1945 thrust event (Balochistan earthquake) led to about 4000 casualties. Nowadays, the coastal regions are more densely populated and vulnerable to similar events. Furthermore, some recent publications discuss rare but significantly larger events at the Makran subduction zone as possible scenarios. We analyze the instrumental and historical seismicity at the subduction plate interface and generate various synthetic earthquake catalogs spanning 300,000 years with varying magnitude-frequency relations. For every event in the catalogs we compute estimated tsunami heights and present the resulting tsunami hazard along the coasts of Pakistan, Iran and Oman in the form of probabilistic tsunami hazard curves. We show how the hazard results depend on variation of the Gutenberg-Richter parameters and especially on the assumed maximum magnitude.
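The basic ingredient of such synthetic catalogs, drawing magnitudes from a doubly truncated Gutenberg-Richter law by inverse-transform sampling, can be sketched as follows; the parameter values are illustrative only.

```python
import numpy as np

def sample_truncated_gr(n_events, b, m_min, m_max, rng):
    """Draw magnitudes from a doubly truncated Gutenberg-Richter law by
    inverting its CDF: F(m) = (1 - 10^(-b(m - m_min))) / c, with
    c = 1 - 10^(-b(m_max - m_min)). A building block for synthetic
    catalogs; the paper's catalogs involve more than this."""
    u = rng.random(n_events)
    c = 1.0 - 10.0 ** (-b * (m_max - m_min))
    return m_min - np.log10(1.0 - u * c) / b

rng = np.random.default_rng(1)
catalog = sample_truncated_gr(100_000, b=1.0, m_min=5.0, m_max=9.0, rng=rng)
```

Varying `b` and `m_max` across many such catalogs and propagating each event through a tsunami model is what produces the hazard curves, and the sensitivity to the assumed maximum magnitude falls straight out of the truncation parameter.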
Mukuhira, Yusuke; Asanuma, Hiroshi; Ito, Takatoshi; Häring, Markus
2016-04-01
The occurrence of large-magnitude induced seismicity is a critical environmental issue associated with fluid injection for shale gas/oil extraction, wastewater disposal, carbon capture and storage, and engineered geothermal systems (EGS). Studies on the prediction of hazardous seismicity and the risk assessment of induced seismicity have been active recently. Many of these studies are based on seismological statistics, and their models use information on occurrence time and event magnitude. We have developed an original physics-based model, named the "possible seismic moment model," to evaluate seismic activity and assess the seismic moment that is ready to be released. This model is based entirely on microseismic information: occurrence time, hypocenter location, and magnitude (seismic moment). The model assumes the existence of a physically meaningful parameter, the releasable seismic moment per unit rock volume (seismic moment density), for a given field. The seismic moment density is estimated from the microseismic distribution and the events' seismic moments. In addition, the stimulated rock volume is inferred from the extent of the microseismic cloud at a given time; this quantity can be interpreted as the rock volume that can release seismic energy owing to the weakening of normal stress by the injected fluid. The product of these two parameters (equation (1)) provides the possible seismic moment that can be released from the currently stimulated zone as the model output. The difference between the model output and the observed cumulative seismic moment corresponds to the seismic moment that will be released in the future under the current stimulation conditions. This value can be translated into the possible maximum magnitude of future induced seismicity. In this way, the possible seismic moment can be used to provide real-time feedback to the hydraulic stimulation operation as an index that can be interpreted easily and intuitively. The possible seismic moment is defined as equation (1), where D
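The structure of the model, possible moment as seismic moment density times stimulated volume, compared against the moment already released, can be sketched as follows. All numerical values here are placeholders, not field estimates.

```python
import math

def possible_max_magnitude(moment_density, stimulated_volume, released_moment):
    """'Possible seismic moment' as the product of a field-specific seismic
    moment density (N*m per m^3) and the currently stimulated rock volume,
    minus the moment already released, converted to a moment magnitude.
    A sketch of the model's structure only; the paper's equation (1) and
    parameter estimation are more involved."""
    remaining = moment_density * stimulated_volume - released_moment
    if remaining <= 0:
        return None          # stimulated zone has already released its budget
    return (2.0 / 3.0) * (math.log10(remaining) - 9.1)   # Hanks-Kanamori
```

Monitoring this quantity as the microseismic cloud grows is what would provide the real-time feedback to the stimulation operation described in the abstract.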
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Schaefer, Andreas; Wenzel, Friedemann
2017-04-01
Subduction zones are generally the sources of the earthquakes with the highest magnitudes. Not only in Japan or Chile, but also in Pakistan, the Solomon Islands and the Lesser Antilles, subduction zones pose a significant hazard to the population. To understand the behavior of subduction zones, and especially to identify their capability to produce maximum-magnitude earthquakes, various physical models have been developed, leading to a large number of datasets, e.g. from geodesy, geomagnetics, structural geology, etc. Various studies have used these data to compile databases of subduction zone parameters, but they mostly concentrate on only the major zones. Here, we compile the largest dataset of subduction zone parameters to date, both in parameter diversity and in the number of subduction zones considered. In total, more than 70 individual sources have been assessed; the aforementioned parametric data have been combined with seismological data, and many more sources have been compiled, leading to more than 60 individual parameters. Not all parameters have been resolved for each zone, since completeness depends on the data availability and quality for each source. In addition, the 3D down-dip geometry of a majority of the subduction zones has been resolved using historical earthquake hypocenter data and centroid moment tensors where available, and compared with and verified against results from previous studies. With such a database, a statistical study has been undertaken to identify not only correlations between the parameters, providing a parameter-driven way to estimate the potential for maximum possible magnitudes, but also similarities between the sources themselves. This identification of similarities leads to a classification system for subduction zones: if two sources share enough common characteristics, it can be expected that other characteristics of interest may be similar as well. This concept
Type Ia Supernova Intrinsic Magnitude Dispersion and the Fitting of Cosmological Parameters
Kim, Alex
2011-01-01
I present an analysis for fitting cosmological parameters from a Hubble Diagram of a standard candle with unknown intrinsic magnitude dispersion. The dispersion is determined from the data themselves, simultaneously with the cosmological parameters. This contrasts with the strategies used to date. The advantages of the presented analysis are that it is done in a single fit (it is not iterative), it provides a statistically founded and unbiased estimate of the intrinsic dispersion, and its cosmological-parameter uncertainties account for the intrinsic dispersion uncertainty. Applied to Type Ia supernovae, my strategy provides a statistical measure to test for sub-types and assess the significance of any magnitude corrections applied to the calibrated candle. Parameter bias and differences between likelihood distributions produced by the presented and currently-used fitters are negligibly small for existing and projected supernova data sets.
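The simultaneous fit described in this abstract can be illustrated with a minimal sketch: an intrinsic scatter term is added in quadrature to each object's measurement variance and estimated in one maximum-likelihood fit together with the mean. All numbers below are invented for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy standard-candle residuals: per-object measurement errors sigma_i plus
# an intrinsic dispersion of 0.12 mag (all values are assumptions).
n = 200
sigma_i = rng.uniform(0.05, 0.2, n)
obs = rng.normal(0.0, np.sqrt(sigma_i**2 + 0.12**2))

def negloglik(theta):
    mu, log_sint = theta
    # Intrinsic dispersion enters the per-object variance, so it is fitted
    # simultaneously with the mean rather than iterated to chi2/dof = 1.
    var = sigma_i**2 + np.exp(2 * log_sint)
    return 0.5 * np.sum((obs - mu)**2 / var + np.log(var))

res = minimize(negloglik, x0=[0.1, np.log(0.2)])
mu_hat, sint_hat = res.x[0], np.exp(res.x[1])
```

Because both parameters are free in the same likelihood, the uncertainty on the mean automatically reflects the uncertainty of the intrinsic dispersion, which is the point the abstract makes.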
Waltemeyer, Scott D.
2008-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges and culverts, for open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimating peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution, with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 140 of the 293 gaging stations; this application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges with recurrence intervals of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for temporal and spatial sampling errors, were then applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent.
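The log-Pearson Type III fitting step can be sketched with SciPy's `pearson3` distribution applied to log-transformed peaks. The discharge values below are made up, and the report's low-discharge-threshold and skew adjustments are not reproduced.

```python
import numpy as np
from scipy import stats

# Hypothetical annual peak discharges (cfs); not data from the report.
peaks = np.array([120., 340., 95., 560., 210., 180., 430., 75., 300., 250.,
                  140., 610., 190., 88., 270.])
log_q = np.log10(peaks)

# Log-Pearson Type III: fit a Pearson Type III distribution to log10(Q).
skew, loc, scale = stats.pearson3.fit(log_q)

# 100-year peak discharge = quantile with 1% annual exceedance probability.
q100 = 10 ** stats.pearson3.ppf(0.99, skew, loc=loc, scale=scale)
```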
Wang, Dong; Lu, Kaiyuan; Rasmussen, Peter Omand
2015-01-01
The conventional high-frequency signal injection method superimposes a high-frequency voltage signal onto the commanded stator voltage before space vector modulation; the magnitude of the voltage available for machine torque production is therefore limited. In this paper, a new high-frequency injection method is proposed, in which the high-frequency signal is generated by shifting the duty cycle between two neighboring switching periods. This method allows injecting a high-frequency signal at half of the switching frequency without having to sacrifice the machine fundamental voltage amplitude. This may be utilized to develop new position estimation algorithms that do not involve the inductance in the medium- to high-speed range. As an application example, an inductance-independent position estimation algorithm developed using the proposed high-frequency injection method is applied to drive …
Halil Karahan
2013-03-01
Knowing the properties of precipitation, the primary input of water resources (amount, duration, intensity, spatial and temporal variation, etc.), is required for planning, design, construction, and operation studies in sectors such as water resources, agriculture, urbanization, drainage, flood control, and transportation. For executing these practices, reliable and realistic estimations based on existing observations should be made, and the first step of a reliable estimation is to test the reliability of the existing observations. In this study, Kolmogorov-Smirnov, Anderson-Darling, and Chi-square goodness-of-fit tests were applied to determine which distribution the measured standard-duration maximum precipitation values (for the years 1929-2005) fit at the meteorological stations operated by the Turkish State Meteorological Service (DMİ) in the city and town centers of the Aegean Region. While all observations fit the GEV distribution according to the Anderson-Darling test, short-, mid-, and long-duration precipitation observations generally fit the GEV, Gamma, and Log-normal distributions according to the Kolmogorov-Smirnov and Chi-square tests. To determine the parameters of the chosen probability distribution, maximum likelihood (LN2, LN3, EXP2, Gamma3), probability-weighted moments (LP3, Gamma2), L-moments (GEV), and least-squares (Weibull2) methods were used according to the different distributions.
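The distribution-fit testing described above can be sketched with SciPy: fit a GEV to a sample of annual maxima and apply a Kolmogorov-Smirnov test to the fitted distribution. The sample below is synthetic, not the DMİ station data, and all parameter values are assumptions.

```python
import numpy as np
from scipy import stats

# Synthetic "annual maximum precipitation" sample (mm); illustrative only.
sample = stats.genextreme.rvs(c=-0.1, loc=30.0, scale=8.0, size=77,
                              random_state=np.random.default_rng(0))

# Fit a GEV, then test the goodness of fit with Kolmogorov-Smirnov.
c, loc, scale = stats.genextreme.fit(sample)
ks = stats.kstest(sample, 'genextreme', args=(c, loc, scale))
# A small KS statistic / large p-value gives no evidence against the GEV fit.
```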
Wheeler, Russell L.
2014-01-01
Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.
Smith, Francesca A.; Wing, Scott L.; Freeman, Katherine H.
2007-10-01
Carbon-isotope measurements (δ13C) of leaf-wax n-alkanes from the Paleocene-Eocene Thermal Maximum (PETM) in the Bighorn Basin, Wyoming, reveal a negative carbon isotope excursion (CIE) of 4-5‰, which is 1-2‰ larger than that observed in marine carbonate δ13C records. Reconciling these records requires either that marine carbonates fail to record the full magnitude of the CIE or that the CIE in plants has been amplified relative to the marine. Amplification of the CIE has been proposed to result from an increase in available moisture that allowed terrestrial plants to increase 13C-discrimination during the PETM. Leaf physiognomy, paleopedology and hydrogen isotope ratios of leaf-wax lipids from the Bighorn Basin, however, all suggest that rather than a simple increase in available moisture, climate alternated between wet and dry during the PETM. Here we consider two other explanations and test them quantitatively with the carbon isotopic record of plant lipids. The "marine modification" hypothesis is that the marine carbonate record was modified by chemical changes at the PETM and that plant lipids record the true magnitude of the CIE. Using atmospheric CO2 δ13C values estimated from the lipid record, and equilibrium fractionation between CO2 and carbonate, we estimate the expected CIE for planktonic foraminifera to be 6‰. Instead, the largest excursion observed is about 4‰. No mechanism for altering marine carbonate by 2‰ has been identified and we thus reject this explanation. The "plant community change" hypothesis is that major changes in floral composition during the PETM amplified the CIE observed in n-alkanes by 1-2‰ relative to marine carbonate. This effect could have been caused by a rapid transition from a mixed angiosperm/conifer flora to a purely angiosperm flora. The plant community change hypothesis is consistent with both the magnitude and pattern of CIE amplification among the different n-alkanes, and with data from fossil plants.
Otto-Bliesner, Bette L.; Brady, Esther C.
2010-01-01
Proxy records indicate that the locations and magnitudes of freshwater forcing to the Atlantic Ocean basin as iceberg discharges into the high-latitude North Atlantic, Laurentide meltwater input to the Gulf of Mexico, or meltwater diversion to the North Atlantic via the St. Lawrence River and other eastern outlets may have influenced the North Atlantic thermohaline circulation and global climate. We have performed Last Glacial Maximum (LGM) simulations with the NCAR Community Climate System Model (CCSM3) in which the magnitude of the freshwater forcing has been varied from 0.1 to 1 Sv and inserted either into the subpolar North Atlantic Ocean or the Gulf of Mexico. In these glacial freshening experiments, the less dense freshwater provides a lid on the ocean water below, suppressing ocean convection and interaction with the atmosphere above and reducing the Atlantic Meridional Overturning Circulation (AMOC). This is the case whether the freshwater is added directly to the area of convection south of Greenland or transported there by the subtropical and subpolar gyres when added to the Gulf of Mexico. The AMOC reduction is less for the smaller freshwater forcings, but is not linear with the size of the freshwater perturbation. The recovery of the AMOC from a "slow" state is ˜200 years for the 0.1 Sv experiment and ˜500 years for the 1 Sv experiment. For glacial climates, with large Northern Hemisphere ice sheets and reduced greenhouse gases, the cold subpolar North Atlantic is primed to respond rapidly and dramatically to freshwater that is either directly dumped into this region or after being advected from the Gulf of Mexico. Greenland temperatures cool by 6-8 °C in all the experiments, with little sensitivity to the magnitude, location or duration of the freshwater forcing, but exhibiting large seasonality. Sea ice is important for explaining the responses. The Northern Hemisphere high latitudes are slow to recover. Antarctica and the Southern Ocean show a
A.F.C. Infantosi
2006-12-01
The present study proposes to apply magnitude-squared coherence (MSC) to the somatosensory evoked potential for identifying the maximum driving response band. EEG signals, leads [Fpz'-Cz'] and [C3'-C4'], were collected from two groups of normal volunteers, stimulated at rates of 4.91 Hz (G1: 26 volunteers) and 5.13 Hz (G2: 18 volunteers). About 1400 stimuli were applied to the right tibial nerve at the motor threshold level. After applying the anti-aliasing filter, the signals were digitized and then further low-pass filtered (200 Hz, 6th-order Butterworth, zero-phase). Based on rejection of the null hypothesis of response absence (MSC(f) > 0.0060 with 500 epochs and the level of significance set at α = 0.05), the beta and gamma bands, 15-66 Hz, were identified as the maximum driving response band. Taking both leads together ("logical-OR detector", with a false-alarm rate of α = 0.05, and hence α = 0.0253 for each derivation), the detection exceeded 70% for all multiples of the stimulation frequency within this range. Similar performance was achieved for the MSC of both leads, but at 15, 25, 35, and 40 Hz. Moreover, the response was detected in [C3'-C4'] at 35.9 Hz and in [Fpz'-Cz'] at 46.2 Hz for all members of G2. Using the "logical-OR detector" procedure, the response was detected at the 7th multiple of the stimulation frequency for the series as a whole (considering both groups). Based on these findings, the MSC technique may be used for monitoring purposes.
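Magnitude-squared coherence itself can be computed with `scipy.signal.coherence`. The sketch below uses two synthetic noisy channels sharing a 35 Hz component; the sampling rate, amplitudes, and segment length are assumptions, not the study's recording parameters.

```python
import numpy as np
from scipy import signal

fs = 400.0                       # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

stim = np.sin(2 * np.pi * 35 * t)          # shared 35 Hz "response" component
x = stim + rng.normal(0, 1, t.size)        # channel 1: signal + noise
y = 0.8 * stim + rng.normal(0, 1, t.size)  # channel 2: attenuated signal + noise

# Magnitude-squared coherence between the two channels, averaged over segments.
f, Cxy = signal.coherence(x, y, fs=fs, nperseg=512)
peak_freq = f[np.argmax(Cxy)]  # coherence peaks at the bin nearest 35 Hz
```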
Franke, Kristin; Heitmann, Nadja; Tobner, Anne; Fischer, Klaus
2014-04-01
Plastic responses to changes in environmental conditions are ubiquitous and typically highly effective, but are predicted to incur costs. Here we investigate the effects of different frequencies and magnitudes of temperature change in the tropical butterfly Bicyclus anynana, considering developmental (Experiment 1) and adult-stage plasticity (Experiment 2). We predicted negative effects of more frequent temperature changes on development, immune function and/or reproduction. Results from Experiment 1 showed that repeated temperature changes during development, if involving large amplitudes, negatively affect larval development time, larval growth rate and pupal mass, while adult traits remained unaffected. However, results from treatment groups with smaller temperature amplitudes yielded no clear patterns. In Experiment 2, prolonged but not repeated exposure to 39°C increased heat tolerance, potentially reflecting costs of repeatedly activating emergency responses. At the same time, fecundity was more strongly reduced in the group with prolonged heat stress, suggesting a trade-off between heat tolerance and reproduction. Clear effects were restricted to conditions involving large temperature amplitudes or high temperatures.
Maximum Likelihood Fitting of Tidal Streams With Application to the Sagittarius Dwarf Tidal Tails
Cole, Nathan; Magdon-Ismail, Malik; Desell, Travis; Dawsey, Kristopher; Hayashi, Warren; Liu, Xinyang; Purnell, Jonathan; Szymanski, Boleslaw; Varela, Carlos; Willett, Benjamin; Wisniewski, James
2008-01-01
We present a maximum likelihood method for determining the spatial properties of tidal debris and of the Galactic spheroid. With this method we characterize Sagittarius debris using stars with the colors of blue F turnoff stars in SDSS stripe 82. The debris is located at (alpha, delta, R) = (31.37 deg +/- 0.26 deg, 0.0 deg, 29.22 +/- 0.20 kpc), with a (spatial) direction given by the unit vector , in Galactocentric Cartesian coordinates, and with FWHM = 6.74 +/- 0.06 kpc. This 2.5 degree-wide stripe contains 0.892% as many F turnoff stars as the current Sagittarius dwarf galaxy. Over small spatial extent, the debris is modeled as a cylinder with a density that falls off as a Gaussian with distance from the axis, while the smooth component of the spheroid is modeled with a Hernquist profile. We assume that the absolute magnitude of F turnoff stars is distributed as a Gaussian, which is an improvement over previous methods, which fixed the absolute magnitude at Mg0 = 4.2. The effectiveness and correctness of the …
Bo You
2015-01-01
In order to predict the pressing quality of precision press-fit assembly, press-fit curves and the maximum press-mounting force of press-fit assemblies were investigated by finite element analysis (FEA). The analysis was based on a 3D SolidWorks model using the real dimensions of the microparts and a subsequent FEA model built in ANSYS Workbench. The press-fit process could thus be simulated on the basis of static structural analysis. To verify the FEA results, experiments were carried out using a press-mounting apparatus. The results show that the press-fit curves obtained by FEA agree closely with the curves obtained experimentally. In addition, the maximum press-mounting force calculated by FEA agrees with that obtained experimentally, with a maximum deviation of 4.6%, a value that can be tolerated. The comparison shows that the press-fit curve and maximum press-mounting force calculated by FEA can be used for predicting the pressing quality of precision press-fit assembly.
Wheeler, Russell L.
2014-01-01
Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.
Weiser, Deborah Anne
Induced seismicity is occurring at increasing rates around the country. Brodsky and Lajoie (2013) and others have recognized anthropogenic quakes at a few geothermal fields in California. I use three techniques to assess if there are induced earthquakes in California geothermal fields; there are three sites with clear induced seismicity: Brawley, The Geysers, and Salton Sea. Moderate to strong evidence is found at Casa Diablo, Coso, East Mesa, and Susanville. Little to no evidence is found for Heber and Wendel. I develop a set of tools to reduce or cope with the risk imposed by these earthquakes, and also to address uncertainties through simulations. I test if an earthquake catalog may be bounded by an upper magnitude limit. I address whether the earthquake record during pumping time is consistent with the past earthquake record, or if injection can explain all or some of the earthquakes. I also present ways to assess the probability of future earthquake occurrence based on past records. I summarize current legislation for eight states where induced earthquakes are of concern. Unlike tectonic earthquakes, the hazard from induced earthquakes has the potential to be modified. I discuss direct and indirect mitigation practices. I present a framework with scientific and communication techniques for assessing uncertainty, ultimately allowing more informed decisions to be made.
Wheeler, Russell L.
2016-01-01
Probabilistic seismic‐hazard assessment (PSHA) requires an estimate of Mmax, the moment magnitude M of the largest earthquake that could occur within a specified area. Sparse seismicity hinders Mmax estimation in the central and eastern United States (CEUS) and tectonically similar regions worldwide (stable continental regions [SCRs]). A new global catalog of moderate‐to‐large SCR earthquakes is analyzed with minimal assumptions about enigmatic geologic controls on SCR Mmax. An earlier observation that SCR earthquakes of M 7.0 and larger occur in young (250–23 Ma) passive continental margins and associated rifts but not in cratons is not strongly supported by the new catalog. SCR earthquakes of M 7.5 and larger are slightly more numerous and reach slightly higher M in young passive margins and rifts than in cratons. However, overall histograms of M from young margins and rifts and from cratons are statistically indistinguishable. This conclusion is robust under uncertainties in M, the locations of SCR boundaries, and which of two available global SCR catalogs is used. The conclusion stems largely from recent findings that (1) large southeast Asian earthquakes once thought to be SCR were in actively deforming crust and (2) long escarpments in cratonic Australia were formed by prehistoric faulting. The 2014 seismic‐hazard model of the U.S. Geological Survey represents CEUS Mmax as four‐point probability distributions. The distributions have weighted averages of M 7.0 in cratons and M 7.4 in passive margins and rifts. These weighted averages are consistent with Mmax estimates of other SCR PSHAs of the CEUS, southeastern Canada, Australia, and India.
Brogaard, K.; VandenBerg, D. A.; Bedin, L. R.
2017-01-01
…a completely self-consistent isochrone fitting method applied to ground-based and HST cluster colour-magnitude diagrams and the eclipsing binary member V69. The analysis suggests that the composition of V69, and by extension one of the populations of 47 Tuc, is given by [Fe/H] ~ -0.70, [O/Fe] ~ +0…
Thanassoulas, C
2008-01-01
Any seismogenic area in the lithosphere is considered an open physical system. Following the energy balance analysis presented earlier (Part I; Thanassoulas, 2008), the specific case in which the seismogenic area is under normal seismogenic conditions (input energy equals released energy) is studied. In this case, the cumulative seismic energy release is a linear function of time. Starting from this linear function, a method is postulated for determining the maximum expected magnitude of a future earthquake. The proposed method has been tested a posteriori on real earthquakes from Greek territory and the USA and on data obtained from the seismological literature. The obtained results validate the methodology, and an analysis is presented that justifies the high degree of accuracy obtained, compared with the corresponding earthquake magnitudes calculated with seismological methods.
Zamora, N.; Hoechner, A.; Babeyko, A. Y.
2014-12-01
Iran and Pakistan are countries frequently affected by destructive earthquakes, such as the magnitude 6.6 Bam earthquake of 2003 in Iran, with about 30,000 casualties, or the magnitude 7.6 Kashmir earthquake of 2005 in Pakistan, with about 80,000 casualties. Both events took place inland, but in terms of magnitude, even significantly larger events can be expected to happen offshore, at the Makran subduction zone. This small subduction zone is seismically rather quiescent; nevertheless, a tsunami caused by a thrust event in 1945 (the Balochistan earthquake) led to about 4000 casualties. Nowadays, the coastal regions are more densely populated and vulnerable to similar events. Furthermore, some recent publications discuss the possibility of rather rare, huge magnitude 9 events at the Makran subduction zone. We analyze the seismicity at the subduction plate interface and generate various synthetic earthquake catalogs spanning 100,000 years. All the events are projected onto the plate interface using scaling relations, and a tsunami model is run for every scenario. The tsunami hazard along the coast is computed and presented in the form of annual probability of exceedance, probabilistic tsunami height for different time periods, and other measures. We show how the hazard responds to variation of the Gutenberg-Richter parameters and maximum magnitudes. We model the historic Balochistan event and its effect in terms of coastal wave heights. Finally, we show how effective tsunami early warning could be achieved by using an array of high-precision real-time GNSS (Global Navigation Satellite System) receivers along the coast, applying this to the 1945 event and performing a sensitivity analysis.
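Synthetic catalogs with a Gutenberg-Richter magnitude distribution truncated at a maximum magnitude can be drawn by inverse-transform sampling, as in this sketch; the b-value, magnitude bounds, and catalog size are illustrative, not the paper's settings.

```python
import numpy as np

def sample_gr_magnitudes(n, b=1.0, m_min=5.0, m_max=9.0, rng=None):
    """Draw magnitudes from a truncated Gutenberg-Richter distribution.

    CDF: F(m) = (1 - exp(-beta (m - m_min))) / (1 - exp(-beta (m_max - m_min))),
    with beta = b ln 10; inverting F gives the sampler below.
    """
    rng = rng or np.random.default_rng()
    beta = b * np.log(10)
    c = 1 - np.exp(-beta * (m_max - m_min))  # normalization of the truncated CDF
    u = rng.uniform(size=n)
    return m_min - np.log(1 - u * c) / beta

mags = sample_gr_magnitudes(100_000, rng=np.random.default_rng(4))
```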
Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun
2012-01-01
A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax). The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root.
Christopher J Pappas
2011-07-01
Borrelia burgdorferi, the spirochetal agent of Lyme disease, is a vector-borne pathogen that cycles between a mammalian host and tick vector. This complex life cycle requires that the spirochete modulate its gene expression program to facilitate growth and maintenance in these diverse milieus. B. burgdorferi contains an operon that is predicted to encode proteins that would mediate the uptake and conversion of glycerol to dihydroxyacetone phosphate. Previous studies indicated that expression of the operon is elevated at 23°C and is repressed in the presence of the alternative sigma factor RpoS, suggesting that glycerol utilization may play an important role during the tick phase. This possibility was further explored in the current study by expression analysis and mutagenesis of glpD, a gene predicted to encode glycerol 3-phosphate dehydrogenase. Transcript levels for glpD were significantly lower in mouse joints relative to their levels in ticks. Expression of GlpD protein was repressed in an RpoS-dependent manner during growth of spirochetes within dialysis membrane chambers implanted in rat peritoneal cavities. In medium supplemented with glycerol as the principal carbohydrate, wild-type B. burgdorferi grew to a significantly higher cell density than glpD mutant spirochetes during growth in vitro at 25°C. glpD mutant spirochetes were fully infectious in mice by either needle or tick inoculation. In contrast, glpD mutants grew to significantly lower densities than wild-type B. burgdorferi in nymphal ticks and displayed a replication defect in feeding nymphs. The findings suggest that B. burgdorferi undergoes a switch in carbohydrate utilization during the mammal to tick transition. Further, the results demonstrate that the ability to utilize glycerol as a carbohydrate source for glycolysis during the tick phase of the infectious cycle is critical for maximal B. burgdorferi fitness.
Metz, Johan A Jacob; Staňková, Kateřina; Johansson, Jacob
2016-03-01
This paper should be read as addendum to Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67: 509-533, 2013). Our goal is, using little more than high-school calculus, to (1) exhibit the form of the canonical equation of adaptive dynamics for classical life history problems, where the examples in Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67: 509-533, 2013) are chosen such that they avoid a number of the problems that one gets in this most relevant of applications, (2) derive the fitness gradient occurring in the CE from simple fitness return arguments, (3) show explicitly that setting said fitness gradient equal to zero results in the classical marginal value principle from evolutionary ecology, (4) show that the latter in turn is equivalent to Pontryagin's maximum principle, a well known equivalence that however in the literature is given either ex cathedra or is proven with more advanced tools, (5) connect the classical optimisation arguments of life history theory a little better to real biology (Mendelian populations with separate sexes subject to an environmental feedback loop), (6) make a minor improvement to the form of the CE for the examples in Dieckmann et al. and Parvinen et al.
Uchiyama, Takanori; Minamitani, Haruyuki; Sakata, Makoto
1990-01-01
The complex maximum entropy method and complex autoregressive model fitting with the singular value decomposition method (SVD) were applied to the free induction decay signal data obtained with a Fourier transform nuclear magnetic resonance spectrometer to estimate superresolved NMR spectra. The practical estimation of superresolved NMR spectra are shown on the data of phosphorus-31 nuclear magnetic resonance spectra. These methods provide sharp peaks and high signal-to-noise ratio compared with conventional fast Fourier transform. The SVD method was more suitable for estimating superresolved NMR spectra than the MEM because the SVD method allowed high-order estimation without spurious peaks, and it was easy to determine the order and the rank.
Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P. [ITB, Faculty of Earth Sciences and Technology (Indonesia); BMKG (Indonesia)]
2012-06-20
A new approach to determining the magnitude from the displacement amplitude (A), epicenter distance (Δ), and duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses teleseismic surface waves with periods greater than 200 seconds, or the moment magnitude of the P wave from teleseismic seismograms in the 10-60 second range. In this research, a new approach has been developed to determine the displacement amplitude and the duration of high-frequency radiation from near earthquakes. The duration of high-frequency radiation is determined from half the period of the P waves on the displacement seismograms. This is necessary because of the very complex rupture process in near earthquakes: the P-wave data mix with other waves (S waves) before the duration runs out, so it is difficult to separate them or determine the end of the P wave. Applied to 68 earthquakes recorded by the CISI station, Garut, West Java, the following relationship is obtained: Mw = 0.78 log(A) + 0.83 log(Δ) + 0.69 log(t) + 6.46, with A in m, Δ in km, and t in seconds. The moment magnitude from this new approach is quite reliable, and its faster processing makes it useful for early warning.
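The reported empirical relation can be evaluated directly; the amplitude, distance, and duration in the call below are hypothetical values, not measurements from the study.

```python
import math

def moment_magnitude(A_m, delta_km, t_s):
    """Empirical Mw from the relation reported for the CISI station data:
    Mw = 0.78 log10(A) + 0.83 log10(delta) + 0.69 log10(t) + 6.46,
    with A in metres, delta in km, and t in seconds."""
    return (0.78 * math.log10(A_m)
            + 0.83 * math.log10(delta_km)
            + 0.69 * math.log10(t_s)
            + 6.46)

# Hypothetical reading: 0.1 mm displacement, 50 km away, 8 s duration.
print(round(moment_magnitude(1e-4, 50.0, 8.0), 2))  # prints 5.37
```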
Riddell, A.E.; Britcher, A.R. (British Nuclear Fuels plc, Sellafield (United Kingdom))
1994-01-01
The PLUTO software package was developed at Sellafield to make optimum use of the analysis data from plutonium-in-urine samples in arriving at the best estimate of intake/uptake. The program prompts the assessor to enter the assessment parameters required to fit the data to the excretion function using the maximum likelihood method. A critical appraisal is given of the relative strengths and weaknesses of this assessment package.
Youhua Chen
2016-09-01
In this report, a maximum likelihood model is developed to incorporate data uncertainty in both response and explanatory variables when fitting power-law bivariate relationships in ecology and evolution. This simple likelihood model is applied to an empirical data set on the allometric relationship between body mass and length of Sciuridae species worldwide. The results show that the parameter values estimated by the proposed likelihood model are substantially different from those fitted by the nonlinear least-squares (NLOS) method. Accordingly, the power-law models fitted by the two methods have different curvilinear shapes. These discrepancies are caused by the integration of measurement errors into the proposed likelihood model, which the NLOS method fails to do. Because the current likelihood model and the NLOS method can show different results, the inclusion of measurement errors may offer new insights into the interpretation of scaling or power laws in ecology and evolution.
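The idea of carrying measurement error in the explanatory variable through the likelihood can be sketched as follows. The allometric constants, noise levels, and the assumption that the x-error SD is known are all invented for illustration, not taken from the report.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Synthetic allometry: mass = a * length**b, with noise in BOTH variables.
a_true, b_true = 0.05, 2.5
length = rng.uniform(10, 40, 60)
mass = a_true * length**b_true * rng.lognormal(0, 0.1, 60)
x = np.log(length) + rng.normal(0, 0.05, 60)  # measured log-length (with error)
y = np.log(mass)                              # measured log-mass

def negloglik(theta):
    log_a, b, log_sig = theta
    # Total SD combines response scatter with the propagated x-measurement
    # error (x-error SD of 0.05 assumed known here).
    sd = np.sqrt(np.exp(2 * log_sig) + (b * 0.05)**2)
    return -norm.logpdf(y - (log_a + b * x), scale=sd).sum()

res = minimize(negloglik, x0=[0.0, 2.0, np.log(0.1)])
b_hat = res.x[1]  # exponent estimate, accounting for x-error
```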
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: how concordant is this distribution with the observed data? (3) Uncertainty: how concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi…
Magnitude-frequency of sea cliff instabilities
F. M. S. F. Marques
2008-10-01
The magnitude-frequency relationship of sea cliff failures in strong, low-retreat-rate cliffs was studied using systematic historical inventories carried out on the coasts of Portugal and Morocco, in different geological and geomorphological settings, covering a wide size range, from small to comparatively large rockslides, topples and rockfalls, at different temporal and spatial scales. The magnitude-frequency, expressed in terms of volume displaced and of horizontal area lost at the cliff top, was well fitted by inverse power laws of the type p = a·x^(−b), with a values from 0.2 to 0.3 and exponents b close to 1.0, similar to those proposed for rockfall inventories. The proposed power laws describe the magnitude-frequency of sea cliff failures, an important component of hazard assessment, to be completed with adequate models for the space and time hazard components. Maximum local retreat at the cliff top was acceptably fitted by an inverse power law only for failures wider than 2 m, with a = 4.0 and exponent b = 2.3, which may be useful for assessing the cliff retreat hazard for areas located near the cliff top.
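An inverse power law of this type is conventionally fitted by least squares in log-log space. A minimal sketch, assuming that approach (the abstract does not state the exact fitting procedure, and the data below are synthetic):

```python
import math

def fit_power_law(x, p):
    """Least-squares fit of p = a * x**(-b) in log-log space.

    Illustrative only: function and variable names are assumptions,
    not the paper's notation beyond a and b."""
    lx = [math.log(v) for v in x]
    lp = [math.log(v) for v in p]
    n = len(x)
    mx, mp = sum(lx) / n, sum(lp) / n
    slope = (sum((u - mx) * (v - mp) for u, v in zip(lx, lp))
             / sum((u - mx) ** 2 for u in lx))
    a = math.exp(mp - slope * mx)
    return a, -slope  # b > 0 for a decaying law

# synthetic check: data drawn exactly from p = 0.25 * x**(-1.0)
xs = [1.0, 2.0, 5.0, 10.0, 50.0]
a, b = fit_power_law(xs, [0.25 * v ** -1.0 for v in xs])
```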
Lin, C. H.; Jan, J. C.; Pu, H. C.; Tu, Y.; Chen, C. C.; Wu, Y. M.
2015-11-01
Landslides have become one of the deadliest natural disasters on earth, due not only to a significant increase in extreme weather driven by climate change, but also to rapid economic development in areas of high topographic relief. How to detect landslides with a real-time system has become an important question for reducing their impact on human society. However, traditional detection of landslides, either through direct surveys in the field or remote-sensing images obtained via aircraft or satellites, is highly time-consuming. Here we analyze very-long-period seismic signals (20-50 s) generated by large landslides, such as those triggered by Typhoon Morakot, which passed through Taiwan in August 2009. In addition to successfully locating 109 large landslides, we define a landslide seismic magnitude based on an empirical formula: Lm = log (A) + 0.55 log (Δ) + 2.44, where A is the maximum displacement (μm) recorded at one seismic station and Δ is its distance (km) from the landslide. We conclude that both the location and seismic magnitude of large landslides can be rapidly estimated from broadband seismic networks for both academic and applied purposes, similar to earthquake monitoring. We suggest that a real-time algorithm be set up for routine monitoring of landslides in places where they pose a frequent threat.
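The empirical formula quoted above is straightforward to evaluate; a small sketch (the function name and the example amplitude-distance pair are my own, not from the paper):

```python
import math

def landslide_magnitude(A_um, dist_km):
    """Landslide seismic magnitude from the paper's empirical formula:
    Lm = log10(A) + 0.55*log10(dist) + 2.44, with A the maximum
    displacement (micrometres) at one station and dist its distance (km)."""
    return math.log10(A_um) + 0.55 * math.log10(dist_km) + 2.44

# e.g. a 10 um displacement recorded 100 km from the landslide
Lm = landslide_magnitude(10.0, 100.0)  # 1 + 0.55*2 + 2.44 = 4.54
```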
Extreme value distribution of earthquake magnitude
Zi, Jun Gan; Tung, C. C.
1983-07-01
Probability distribution of maximum earthquake magnitude is first derived for an unspecified probability distribution of earthquake magnitude. A model for energy release of large earthquakes, similar to that of Adler-Lomnitz and Lomnitz, is introduced from which the probability distribution of earthquake magnitude is obtained. An extensive set of world data for shallow earthquakes, covering the period from 1904 to 1980, is used to determine the parameters of the probability distribution of maximum earthquake magnitude. Because of the special form of probability distribution of earthquake magnitude, a simple iterative scheme is devised to facilitate the estimation of these parameters by the method of least-squares. The agreement between the empirical and derived probability distributions of maximum earthquake magnitude is excellent.
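The first step above rests on the standard extreme-value identity: if n magnitudes are i.i.d. with parent CDF F, then P(max ≤ m) = F(m)^n. The sketch below illustrates it with a doubly truncated Gutenberg-Richter CDF as a stand-in; the paper's own magnitude distribution has a different, special form, and the parameter values here are arbitrary:

```python
import math

def gr_cdf(m, b=1.0, m_min=5.0, m_max=9.5):
    """Doubly truncated Gutenberg-Richter CDF (a stand-in assumption;
    not the special distribution derived in the paper)."""
    beta = b * math.log(10.0)
    num = 1.0 - math.exp(-beta * (m - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return min(max(num / den, 0.0), 1.0)

def max_mag_cdf(m, n_events, **kw):
    """P(maximum of n i.i.d. magnitudes <= m) = F(m)**n."""
    return gr_cdf(m, **kw) ** n_events

# probability that the largest of 100 events stays below magnitude 7
p = max_mag_cdf(7.0, n_events=100)
```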
Study on High-Speed Magnitude Approximation for Complex Vectors
陈建春; 杨万海; 许少英
2003-01-01
High-speed magnitude approximation algorithms for complex vectors are discussed intensively. The performance and convergence speed of these approximation algorithms are analyzed. For the polygon-fitting algorithms, the approximation formula under the least-mean-square-error criterion is derived. For the iterative algorithms, a modified CORDIC (coordinate rotation digital computer) algorithm is developed; it is shown to have a maximum relative error of about half that of the original CORDIC algorithm. Finally, the effects of finite register length on these algorithms are examined, showing that 9- to 12-bit coefficients are sufficient for practical applications.
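For context, the simplest member of the polygon-fitting family is the classic "alpha-max plus beta-min" approximation. The sketch below uses the textbook coefficients (1, 0.5), which are an illustrative assumption, not the least-mean-square-optimal values the paper derives:

```python
import math

def mag_approx(re, im, alpha=1.0, beta=0.5):
    """'Alpha-max plus beta-min' magnitude approximation for a complex
    vector: |z| is approximated by alpha*max(|re|,|im|) + beta*min(|re|,|im|),
    avoiding the square root of the exact formula."""
    a, b = abs(re), abs(im)
    return alpha * max(a, b) + beta * min(a, b)

# sweep unit vectors over a quarter circle to find the worst-case error;
# for (1, 0.5) the maximum absolute (= relative) error is about 11.8%
err = max(
    abs(mag_approx(math.cos(k * math.pi / 2000),
                   math.sin(k * math.pi / 2000)) - 1.0)
    for k in range(1001)
)
```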
Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A; Vaswani, Namrata; Petrich, Jacob W
2016-03-10
The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as "residual minimization" (RM) and "maximum likelihood" (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of "photon counts" was approximately 20, 200, 1000, 3000, and 6000 and there were about 2-200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson's weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. The robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
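For a pure exponential decay with no instrument response, the maximum-likelihood lifetime estimator has a closed form: the sample mean of the photon arrival times. A sketch under that simplifying assumption (the paper always convolves with a measured instrument response function, which is omitted here, and the simulated data are not the paper's):

```python
import random

def ml_lifetime(arrival_times):
    """ML lifetime estimate for a pure exponential decay: for i.i.d.
    exponential arrival times the likelihood is maximised by the sample
    mean. (No IRF convolution, unlike the fits in the paper.)"""
    return sum(arrival_times) / len(arrival_times)

random.seed(0)
true_tau = 530e-12  # ~rose bengal in methanol, seconds
photons = [random.expovariate(1.0 / true_tau) for _ in range(6000)]
tau_hat = ml_lifetime(photons)
```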
Calibration of the absolute magnitude.
Gómez, A. E.; Mennessier, M. O.
The parallaxes measured by Hipparcos will yield individual absolute magnitudes better than ±0.4 mag for stars within a volume of radius roughly 150 pc around the Sun. The algorithms developed for the exploitation of the Hipparcos data, based on the maximum-likelihood method, make it possible not only to estimate the mean absolute magnitude of a physically homogeneous group of stars, its kinematic behaviour and its spatial distribution, but also to estimate an individual absolute magnitude for each star in the sample considered.
Teleseismic magnitude relations
Markus Båth
2010-02-01
Using available sets of magnitude determinations, primarily from the Uppsala seismological bulletin, various extensions are made to the Zurich magnitude recommendations of 1967. Thus, body-wave magnitude (m) and surface-wave magnitude (M) are related to each other for 12 different earthquake regions as well as worldwide. Depth corrections for M are derived for all focal depths. Formulas are developed which permit calculation of M also from vertical-component long-period seismographs. Body-wave magnitudes from broad-band and narrow-band short-period seismographs are compared and relations deduced. Applications are made both to underground nuclear explosions and to earthquakes. The possibilities of explosion-earthquake discrimination on the basis of magnitudes are examined, as well as the determination of explosive yield from magnitudes. For earthquakes, relations between the magnitudes of main earthquakes and their largest aftershocks are investigated. A world-wide station network for more homogeneous magnitude determinations is suggested in order to provide the necessary reference system.
Telescopic limiting magnitudes
Schaefer, Bradley E.
1990-01-01
The prediction of the magnitude of the faintest star visible through a telescope by a visual observer is a difficult problem in physiology. Many prediction formulas have been advanced over the years, but most do not even consider the magnification used. Here, the prediction-algorithm problem is attacked with two complementary approaches: (1) a theoretical algorithm was developed based on physiological data for the sensitivity of the eye. This algorithm also accounts for the transmission of the atmosphere and the telescope, the brightness of the sky, the color of the star, the age of the observer, the aperture, and the magnification. (2) 314 observed values for the limiting magnitude were collected as a test of the formula. It is found that the formula accurately predicts the average observed limiting magnitudes under all conditions.
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Regional Frequency Analysis of Annual Maximum Rainfall in Monsoon Region of Pakistan using L-moments
Amina Shahzadi; Ahmad Saeed Akhter; Betul Saf
2013-01-01
The estimation of the magnitude and frequency of extreme rainfall is of immense importance for decisions about hydraulic structures such as spillways, dikes and dams. The main objective of this study is to find the best-fit distributions for annual maximum rainfall data on a regional basis in order to estimate extreme rainfall events (quantiles) for various return periods. The study is carried out with the index-flood method using L-moments (Hosking and Wallis, 1997). The study is based on 23 ...
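Sample L-moments are computed from probability weighted moments of the ordered data, following Hosking and Wallis (1997). A sketch with hypothetical rainfall values, not data from the study:

```python
def sample_l_moments(data):
    """First three sample L-moments (l1, l2, l3) via unbiased probability
    weighted moments b0, b1, b2 of the ascending order statistics
    (Hosking and Wallis, 1997)."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((j - 1) / (n - 1) * x[j - 1] for j in range(2, n + 1)) / n
    b2 = sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x[j - 1]
             for j in range(3, n + 1)) / n
    l1 = b0                     # L-location (the mean)
    l2 = 2 * b1 - b0            # L-scale
    l3 = 6 * b2 - 6 * b1 + b0   # third L-moment (l3/l2 = L-skewness)
    return l1, l2, l3

# hypothetical annual-maximum rainfall series (mm), for illustration only
l1, l2, l3 = sample_l_moments([96, 121, 83, 140, 105, 77, 132, 118, 90, 150])
```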
Number Games, Magnitude Representation, and Basic Number Skills in Preschoolers
Whyte, Jemma Catherine; Bull, Rebecca
2008-01-01
The effect of 3 intervention board games (linear number, linear color, and nonlinear number) on young children's (mean age = 3.8 years) counting abilities, number naming, magnitude comprehension, accuracy in number-to-position estimation tasks, and best-fit numerical magnitude representations was examined. Pre- and posttest performance was…
Fitness club
2011-01-01
General fitness Classes Enrolments are open for general fitness classes at CERN taking place on Monday, Wednesday, and Friday lunchtimes in the Pump Hall (building 216). There are shower facilities for both men and women. It is possible to pay for 1, 2 or 3 classes per week for a minimum of 1 month and up to 6 months. Check out our rates and enrol at: http://cern.ch/club-fitness Hope to see you among us! CERN Fitness Club fitness.club@cern.ch
Magnitude M w in metropolitan France
Cara, Michel; Denieul, Marylin; Sèbe, Olivier; Delouis, Bertrand; Cansi, Yves; Schlupp, Antoine
2016-12-01
The recent seismicity catalogue of metropolitan France, Sismicité Instrumentale de l'Hexagone (SI-Hex), covers the period 1962-2009. It is the outcome of a multipartner project conducted between 2010 and 2013. In this catalogue, moment magnitudes (M w) are mainly determined from short-period velocimetric records, the same records as those used by the Laboratoire de Détection Géophysique (LDG) for issuing local magnitudes (M L) since 1962. Two distinct procedures are used, depending on whether M L-LDG is larger or smaller than 4. For M L-LDG > 4, M w is computed by fitting the coda-wave amplitude on the raw records; station corrections and regional properties of coda-wave attenuation are taken into account in the computations. For M L-LDG ≤ 4, M w is converted from M L-LDG through linear regression rules. In the smallest magnitude range, local magnitudes from other French networks or the LDG duration magnitude (M D) are first converted into M L-LDG before applying the conversion rules. This paper shows how the different sources of information and the different magnitude ranges are combined in order to determine an unbiased set of M w for all 38,027 events of the catalogue.
Local magnitude estimate at Mt. Etna
V. Maiolino
2005-06-01
In order to verify the duration magnitude MD, we calculated local magnitude ML values for 288 earthquakes occurring from October 2002 to April 2003 at Mt. Etna. The analysis was performed at three digital stations of the permanent seismic network of the Istituto Nazionale di Geofisica e Vulcanologia of Catania, using the relationship ML = log A + a log Δ − b, where A is the maximum half-amplitude of the horizontal component of the seismic recording, measured in mm, and the term «+ a log Δ − b» takes the place of the term «− log A0» of the Richter relationship; in particular, a = 0.15 and b = 0.16 for Δ < 200 km. Duration magnitude MD values, moment magnitude MW values and other local magnitude values were compared. Differences between ML and MD were obtained for the strong seismic swarms occurring on October 27, during the onset of the 2002-2003 Mt. Etna eruption, which was characterized by a high earthquake rate, very strong events (seismograms clipped in amplitude on the drum-recorder traces) and a high level of volcanic tremor, which did not permit us to estimate the duration of the earthquakes correctly. The ML and MD scales were regressed against each other, and a new relationship for MD is therefore proposed. The cumulative strain release calculated after the eruption using ML values is about 1.75E+06 J^(1/2) higher than that calculated using MD values.
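The local-magnitude relationship quoted above is simple to evaluate; the sketch below uses the a and b values given in the abstract and an invented amplitude-distance pair:

```python
import math

def local_magnitude(A_mm, delta_km, a=0.15, b=0.16):
    """ML = log10(A) + a*log10(delta) - b, with A the maximum half-amplitude
    (mm, horizontal component) and delta the distance (km); a and b are the
    abstract's values for delta < 200 km. Example inputs are hypothetical."""
    return math.log10(A_mm) + a * math.log10(delta_km) - b

ML = local_magnitude(A_mm=10.0, delta_km=100.0)  # 1 + 0.3 - 0.16 = 1.14
```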
Determination of the Limiting Magnitude
Kingery, Aaron; Blaauw, Rhiannon
2017-01-01
The limiting magnitude of an optical camera system is an important property to understand, since it is used to find the completeness limit of observations. Limiting magnitude depends on the hardware and software of the system, the current weather conditions, and the angular speed of the objects observed. If an object exhibits a substantial angular rate during the exposure, its light spreads out over more pixels than that of the stationary stars. This spreading causes the limiting magnitude for moving objects to be brighter than the stellar limiting magnitude. The effect begins to become important when the object moves a full width at half maximum during a single exposure or video frame. For targets with high angular speeds, or camera systems with a narrow field of view or long exposures, this correction can be significant, up to several magnitudes. The stars in an image are often used to measure the limiting magnitude since they are stationary, have known brightness, and are present in large numbers, making the determination of the limiting magnitude fairly simple. In order to transform the stellar limiting magnitude to the object limiting magnitude, a correction must be applied accounting for the angular velocity. This technique is adopted in meteor and other fast-moving-object observations, as the lack of a statistically significant sample of targets makes it virtually impossible to determine the limiting magnitude before the weather conditions change. While the weather is the dominant factor when observing satellites, the limiting magnitude for meteors also changes throughout the night due to the motion of a meteor shower or sporadic-source radiant across the sky. This paper presents methods for determining the stellar limiting magnitude and the conversion to the target limiting magnitude.
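A first-order model of the trailing effect described above: once the trail length ω·t exceeds the stellar FWHM, the peak per-pixel signal falls roughly as FWHM/(ω·t), brightening the limiting magnitude by about 2.5·log10(ω·t/FWHM). This simplified model is an assumption for illustration, not the paper's method:

```python
import math

def trailing_loss_mag(omega_deg_per_s, exposure_s, fwhm_arcsec):
    """Approximate trailing loss in magnitudes (assumed first-order model):
    zero until the trail exceeds the stellar FWHM, then
    2.5*log10(trail/FWHM) as the light is spread over more pixels."""
    trail_arcsec = omega_deg_per_s * 3600.0 * exposure_s
    if trail_arcsec <= fwhm_arcsec:
        return 0.0
    return 2.5 * math.log10(trail_arcsec / fwhm_arcsec)

# meteor-like target: 10 deg/s over a 1/30 s video frame, 3-arcsec FWHM;
# the loss reaches several magnitudes, as the abstract notes
loss = trailing_loss_mag(10.0, 1.0 / 30.0, 3.0)
```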
Fitness Club
2011-01-01
The CERN Fitness Club is organising Zumba Classes on the first Wednesday of each month, starting 7 September (19.00 – 20.00). What is Zumba®? It’s an exhilarating, effective, easy-to-follow, Latin-inspired, calorie-burning dance fitness-party™ that’s moving millions of people toward joy and health. Above all it’s great fun and an excellent work out. Price: 22 CHF/person Sign-up via the following form: https://espace.cern.ch/club-fitness/Lists/Zumba%20Subscription/NewForm.aspx For more info: fitness.club@cern.ch
Relationship between the magnitude of singular value and nonlinear stability
穆穆; 郭欢; 王佳峰; 李勇
2001-01-01
The relationship between the magnitude of the singular value and the nonlinear stability or instability of the basic flow is investigated. The results show a close correspondence between them: the magnitude of the singular value decreases as the stability (or instability) of the basic flow increases (or decreases). In the stable case, the magnitude of the maximum singular value is much smaller than in the unstable case.
Asteroid absolute magnitudes and slope parameters
Tedesco, Edward F.
1991-01-01
A new listing of absolute magnitudes (H) and slope parameters (G) has been created and published in the Minor Planet Circulars; the same listing will appear in the 1992 Ephemerides of Minor Planets. Unlike previous listings, the values of the current list were derived from fits of data at the V band. All observations were reduced in the same fashion using, where appropriate, a single default value of 0.15 for the slope parameter. Distances and phase angles were computed for each observation. The data for 113 asteroids were of sufficiently high quality to permit derivation of their H and G. These improved absolute magnitudes and slope parameters will be used to deduce the most reliable bias-corrected asteroid size-frequency distribution yet made.
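For reference, H and G enter apparent-magnitude predictions through the standard two-parameter (H, G) phase function (Bowell et al. 1989). The sketch below uses the commonly published constants; it is an illustration of that system, not the reduction pipeline used for the listing:

```python
import math

def hg_phase_magnitude(H, G, alpha_deg, r_au, delta_au):
    """Predicted V magnitude in the (H, G) system:
    V = H + 5*log10(r*delta) - 2.5*log10((1-G)*Phi1 + G*Phi2),
    with the standard single-exponential Phi1, Phi2 approximations."""
    a = math.radians(alpha_deg)
    phi1 = math.exp(-3.33 * math.tan(a / 2) ** 0.63)
    phi2 = math.exp(-1.87 * math.tan(a / 2) ** 1.22)
    return (H + 5 * math.log10(r_au * delta_au)
            - 2.5 * math.log10((1 - G) * phi1 + G * phi2))

# at zero phase angle and unit distances, V reduces to H by construction
v = hg_phase_magnitude(H=15.0, G=0.15, alpha_deg=0.0, r_au=1.0, delta_au=1.0)
```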
Integrated Circuit Stellar Magnitude Simulator
Blackburn, James A.
1978-01-01
Describes an electronic circuit which can be used to demonstrate the stellar magnitude scale. Six rectangular light-emitting diodes with independently adjustable duty cycles represent stars of magnitudes 1 through 6. Experimentally verifies the logarithmic response of the eye. (Author/GA)
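The duty cycles for such a circuit follow directly from the magnitude scale: each magnitude step is a brightness factor of 100^(1/5) ≈ 2.512. A sketch (normalising to magnitude 1 is my choice, not necessarily the circuit's):

```python
def relative_brightness(m, m_ref=1.0):
    """Brightness of a magnitude-m star relative to a magnitude-m_ref star:
    the magnitude scale is defined so that 5 magnitudes = a factor of 100."""
    return 100.0 ** ((m_ref - m) / 5.0)

# LED duty cycles for stars of magnitudes 1..6, normalised to magnitude 1;
# magnitude 6 ends up at 1% of the magnitude-1 brightness
duty = [relative_brightness(m) for m in range(1, 7)]
```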
Sabarish, R. Mani; Narasimhan, R.; Chandhru, A. R.; Suribabu, C. R.; Sudharsan, J.; Nithiyanantham, S.
2017-05-01
In the design of irrigation and other hydraulic structures, evaluating the magnitude of extreme rainfall for a specific probability of occurrence is of much importance. The capacity of such structures is usually designed to cater to the probability of occurrence of extreme rainfall during their lifetime. In this study, an extreme value analysis of rainfall for Tiruchirapalli City in Tamil Nadu was carried out using 100 years of rainfall data. Statistical methods were used in the analysis. The best-fit probability distribution was evaluated for 1, 2, 3, 4 and 5 days of continuous maximum rainfall. The goodness of fit was evaluated using the Chi-square test. The results of the goodness-of-fit tests indicate that the log-Pearson type III distribution is the overall best fit for the 1-day maximum rainfall and the consecutive 2-, 3-, 4-, 5- and 6-day maximum rainfall series of Tiruchirapalli. As a check on reliability, the forecast maximum rainfalls for the selected return periods are compared with plotting-position estimates.
Fitness Club
2012-01-01
Open to All: http://cern.ch/club-fitness fitness.club@cern.ch Boxing Your supervisor makes your life too tough! You really need to release the pressure you've been building up! Come and join the fit-boxers. We train three times a week in Bd 216, classes for beginners and advanced available. Visit our website cern.ch/Boxing General Fitness Escape from your desk with our general fitness classes, to strengthen your heart, muscles and bones, improve your stamina, balance and flexibility, achieve new goals, be more productive and experience a sense of well-being, every Monday, Wednesday and Friday lunchtime, Tuesday mornings before work and Thursday evenings after work – join us for one of our monthly fitness workshops. Nordic Walking Enjoy the great outdoors; Nordic Walking is a great way to get your whole body moving and to significantly improve the condition of your muscles, heart and lungs. It will boost your energy levels no end. Pilates A body-conditioning technique de...
Magnitude Sensitive Competitive Neural Networks
Pelayo Campillos, Enrique; Buldain Pérez, David; Orrite Uruñuela, Carlos
2014-01-01
This thesis presents a family of neural networks called Magnitude Sensitive Competitive Neural Networks (MSCNNs). They are competitive-learning algorithms that include a magnitude term as a modulating factor of the distance used in the competition. Like other competitive methods, MSCNNs perform vector quantization of the data, but the magnitude term guides the training of the centroids so that they are represented with high...
Loveday, J; Baldry, I K; Bland-Hawthorn, J; Brough, S; Brown, M J I; Driver, S P; Kelvin, L S; Phillipps, S
2015-01-01
We describe modifications to the joint stepwise maximum likelihood method of Cole (2011) in order to simultaneously fit the GAMA-II galaxy luminosity function (LF), corrected for radial density variations, and its evolution with redshift. The whole sample is reasonably well-fit with luminosity (Qe) and density (Pe) evolution parameters Qe, Pe = 1.0, 1.0 but with significant degeneracies characterized by Qe = 1.4 - 0.4Pe. Blue galaxies exhibit larger luminosity density evolution than red galaxies, as expected. We present the evolution-corrected r-band LF for the whole sample and for blue and red sub-samples, using both Petrosian and Sersic magnitudes. Petrosian magnitudes miss a substantial fraction of the flux of de Vaucouleurs profile galaxies: the Sersic LF is substantially higher than the Petrosian LF at the bright end.
EOP Current Magnitude and Direction
National Oceanic and Atmospheric Administration, Department of Commerce — These data contain shipboard current magnitudes and directions collected in the Pacific, both pelagic and near shore environments. Data is collected using an RD...
Bidirectional Modulation of Numerical Magnitude.
Arshad, Qadeer; Nigmatullina, Yuliya; Nigmatullin, Ramil; Asavarut, Paladd; Goga, Usman; Khan, Sarah; Sander, Kaija; Siddiqui, Shuaib; Roberts, R E; Cohen Kadosh, Roi; Bronstein, Adolfo M; Malhotra, Paresh A
2016-05-01
Numerical cognition is critical for modern life; however, the precise neural mechanisms underpinning numerical magnitude allocation in humans remain obscure. Based upon previous reports demonstrating the close behavioral and neuro-anatomical relationship between number allocation and spatial attention, we hypothesized that these systems would be subject to similar control mechanisms, namely dynamic interhemispheric competition. We employed a physiological paradigm, combining visual and vestibular stimulation, to induce interhemispheric conflict and subsequent unihemispheric inhibition, as confirmed by transcranial direct current stimulation (tDCS). This allowed us to demonstrate the first systematic bidirectional modulation of numerical magnitude toward either higher or lower numbers, independently of either eye movements or spatial attention mediated biases. We incorporated both our findings and those from the most widely accepted theoretical framework for numerical cognition to present a novel unifying computational model that describes how numerical magnitude allocation is subject to dynamic interhemispheric competition. That is, numerical allocation is continually updated in a contextual manner based upon relative magnitude, with the right hemisphere responsible for smaller magnitudes and the left hemisphere for larger magnitudes.
Wang, L; Aldering, G; Perlmutter, S; Wang, Lifan; Goldhaber, Gerson; Aldering, Greg; Perlmutter, Saul
2003-01-01
We show empirically that fits to the color-magnitude relation of Type Ia supernovae after optical maximum can provide accurate relative extragalactic distances. We report the discovery of an empirical color relation for Type Ia light curves: during much of the first month past maximum, the magnitudes of Type Ia supernovae defined at a given value of color index have a very small magnitude dispersion; moreover, during this period the relation between B magnitude and B−V color (or B−R or B−I color) is strikingly linear, to the accuracy of existing well-measured data. These linear relations can provide robust distance estimates, in particular by using the magnitudes when the supernova reaches a given color. After correction for light-curve stretch factor or decline rate, the dispersion of the magnitudes taken at the intercept of the linear color-magnitude relation is found to be around 0.08 mag for the sub-sample of supernovae with B_max − V_max ≤ 0.05 mag, and around 0.11 mag for the sub-sample with B_max − V_max ≤ ...
Nielsen, Karen L.; Pedersen, Thomas M.; Udekwu, Klas I.
2012-01-01
Denmark and several other countries experienced the first epidemic of methicillin-resistant Staphylococcus aureus (MRSA) during the period 1965-75, which was caused by multiresistant isolates of phage complex 83A. In Denmark these MRSA isolates disappeared almost completely, being replaced by other... The fitness of each isolate was determined in a growth competition assay with a reference isolate. Significant fitness costs were determined for the MRSA isolates studied. There was a significant negative correlation between the number of antibiotic resistances and relative fitness. Multiple regression analysis... similar to that seen in Denmark. We propose a significant fitness cost of resistance as the main bacteriological explanation for the disappearance of the multiresistant complex 83A MRSA in Denmark following a reduction in antibiotic usage.
Kim, Leonard; Narra, Venkat; Yue, Ning
2013-07-01
Recent studies have reported potentially clinically meaningful dose differences when heterogeneity correction is used in breast balloon brachytherapy. In this study, we report on the relationship between heterogeneity-corrected and -uncorrected doses for 2 commonly used plan evaluation metrics: maximum point dose to skin surface and maximum point dose to ribs. Maximum point doses to skin surface and ribs were calculated using TG-43 and Varian Acuros for 20 patients treated with breast balloon brachytherapy. The results were plotted against each other and fit with a zero-intercept line. Max skin dose (Acuros) = max skin dose (TG-43) × 0.930 (R^2 = 0.995). The average magnitude of difference from this relationship was 1.1% (max 2.8%). Max rib dose (Acuros) = max rib dose (TG-43) × 0.955 (R^2 = 0.9995). The average magnitude of difference from this relationship was 0.7% (max 1.6%). Heterogeneity-corrected maximum point doses to the skin surface and ribs were proportional to TG-43-calculated doses. The average deviation from proportionality was 1%. The proportional relationship suggests that a different metric other than maximum point dose may be needed to obtain a clinical advantage from heterogeneity correction. Alternatively, if maximum point dose continues to be used in recommended limits while incorporating heterogeneity correction, institutions without this capability may be able to accurately estimate these doses by use of a scaling factor.
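The scaling-factor estimate suggested in the final sentence amounts to multiplying the TG-43 point doses by the fitted proportionality constants (0.930 for skin, 0.955 for ribs); the study reports ~1% average deviation from these fits. A trivial sketch (function name and example doses are my own):

```python
def estimate_acuros_from_tg43(skin_tg43, rib_tg43):
    """Estimate heterogeneity-corrected (Acuros) maximum point doses from
    TG-43 doses via the study's zero-intercept fits:
    skin x 0.930, rib x 0.955 (average deviation ~1% in the paper)."""
    return 0.930 * skin_tg43, 0.955 * rib_tg43

# example TG-43 doses in arbitrary units
skin, rib = estimate_acuros_from_tg43(100.0, 80.0)
```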
Fitness Club
2012-01-01
Get in Shape for Summer with the CERN Fitness Club Saturday 23 June 2012 from 14:30 to 16.30 (doors open at 14.00) Germana’s Fitness Workshop. Build strength and stamina, sculpt and tone your body and get your heart pumping with Germana’s workout mixture of Cardio Attack, Power Pump, Power Step, Cardio Combat and Cross-Training. Where: 216 (Pump room – equipped with changing rooms and showers). What to wear: comfortable clothes and indoor sports shoes + bring a drink! How much: 15 chf Sign up here: https://espace.cern.ch/club-fitness/Lists/Test_Subscription/NewForm.aspx? Join the Party and dance yourself into shape at Marco + Marials Zumba Masterclass. Saturday 30 June 2012 from 15:00 to 16:30 Marco + Mariel’s Zumba Masterclass Where: 216 (Pump room – equipped with changing rooms and showers). What to wear: comfortable clothes and indoor sports shoes + bring a drink! How much: 25 chf Sign up here: https://espace.cern.ch/club-fitness/Lists/Zumba%20...
Fitness Club
2012-01-01
The CERN Fitness Club is pleased to announce its new early morning class which will be taking place on: Tuesdays from 24th April 07:30 to 08:15 216 (Pump Hall, close to entrance C) – Facilities include changing rooms and showers. The Classes: The early morning classes will focus on workouts which will help you build not only strength and stamina, but will also improve your balance, and coordination. Our qualified instructor Germana will accompany you throughout the workout to ensure you stay motivated so you achieve the best results. Sign up and discover the best way to start your working day full of energy! How to subscribe? We invite you along to a FREE trial session, if you enjoy the activity, please sign up via our website: https://espace.cern.ch/club-fitness/Activities/SUBSCRIBE.aspx. * * * * * * * * Saturday 28th April Get in shape for the summer at our fitness workshop and zumba dance party: Fitness workshop with Germana 13:00 to 14:30 - 216 (Pump Hall) Price...
Fitness club
2013-01-01
Nordic Walking Classes Come join the Nordic walking classes and outings offered by the CERN Fitness Club starting September 2013. Our licensed instructor Christine offers classes for people who’ve never tried Nordic Walking and would like to learn the technique, and outings for people who have completed the classes and enjoy going out as a group. Course 1: Tuesdays 12:30 - 13:30 24 September, 1 October, 8 October, 15 October Course 2: Tuesdays 12:30 - 13:30 5 November, 12 November, 19 November, 26 November Outings will take place on Thursdays (12:30 to 13:30) from 12 September 2013. We meet at the CERN Club Barracks car park (close to Entrance A) 10 minutes before departure. Prices: 50 CHF for 4 classes, including the 10 CHF Club membership. Payments are made directly to the instructor. Renting Poles: Poles can be rented from Christine at 5 CHF / hour. Subscription: Please subscribe at: http://cern.ch/club-fitness Looking forward to seeing you among us! Fitness Club FitnessClub@c...
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Gilkey, Roderick; Kilts, Clint
2007-11-01
Recent neuroscientific research shows that the health of your brain isn't, as experts once thought, just the product of childhood experiences and genetics; it reflects your adult choices and experiences as well. Professors Gilkey and Kilts of Emory University's medical and business schools explain how you can strengthen your brain's anatomy, neural networks, and cognitive abilities, and prevent functions such as memory from deteriorating as you age. The brain's alertness is the result of what the authors call cognitive fitness - a state of optimized ability to reason, remember, learn, plan, and adapt. Certain attitudes, lifestyle choices, and exercises enhance cognitive fitness. Mental workouts are the key. Brain-imaging studies indicate that acquiring expertise in areas as diverse as playing a cello, juggling, speaking a foreign language, and driving a taxicab expands your neural systems and makes them more communicative. In other words, you can alter the physical makeup of your brain by learning new skills. The more cognitively fit you are, the better equipped you are to make decisions, solve problems, and deal with stress and change. Cognitive fitness will help you be more open to new ideas and alternative perspectives. It will give you the capacity to change your behavior and realize your goals. You can delay senescence for years and even enjoy a second career. Drawing from the rapidly expanding body of neuroscience research as well as from well-established research in psychology and other mental health fields, the authors have identified four steps you can take to become cognitively fit: understand how experience makes the brain grow, work hard at play, search for patterns, and seek novelty and innovation. Together these steps capture some of the key opportunities for maintaining an engaged, creative brain.
Subsidence crack closure: rate, magnitude and sequence
De Graff, J.V.; Romesburg, H.C.
1981-06-01
Tension cracks are a major surface disturbance resulting from subsidence and differential settlement above underground coal mines. Recent engineering studies of subsidence indicate that cracks may close where tensile stresses causing the cracks are reduced or relaxed. This stress reduction occurs as mining in the area is completed. Crack closure was confirmed by a study in the Wasatch Plateau coal field of central Utah. Cracks occurred in both exposed bedrock and regolith in an area with maximum subsidence of 3 m. Mean closure rate was 0.3 cm per week with individual crack closure rates between 0.2 cm and 1.0 cm per week. The mean crack closure magnitude was 80% with closure magnitudes varying between 31% and 100%. Actual magnitude values ranged from 0.6 cm to 6.5 cm with a mean value of 3.8 cm. Statistical analysis compared width change status among cracks over time. It was found that: 1) a 41% probability existed that a crack would exhibit decreasing width per weekly measurement, 2) closure state sequences seem random over time, and 3) real differences in closure state sequence existed among different cracks.
Local magnitude scale for earthquakes in Turkey
Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.
2017-01-01
Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale saturates beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
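The recommended amplitude correction above is a one-line calculation. A sketch (the function name is hypothetical; the factor 1.8 is the average horizontal-to-vertical amplitude ratio reported in the abstract):

```python
import math

H_OVER_V = 1.8  # average horizontal/vertical amplitude ratio from the study

def horizontal_log_amplitude(log10_vertical_amp):
    """Convert a log10 amplitude measured on the vertical channel into an
    estimate of the log10 maximum horizontal amplitude by adding log10(1.8)."""
    return log10_vertical_amp + math.log10(H_OVER_V)
```

Since Ml is defined on a log10 amplitude, adding log10(1.8) ≈ 0.26 to the vertical-channel term raises the resulting magnitude by about a quarter of a unit.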
Algorithms for l2 and l-infinity transfer function curve fitting
Spanos, John T.
1991-01-01
In this paper, algorithms for fitting transfer functions to frequency response data are developed. Given a complex vector representing the measured frequency response of a physical system, a transfer function of specified order is determined that minimizes either of two criteria: (1) the sum of the squared magnitudes of the frequency response errors, or (2) the magnitude of the maximum error. Both criteria are nonlinear in the coefficients of the unknown transfer function, and iterative minimization algorithms are proposed. A numerical example demonstrates the effectiveness of the proposed algorithms.
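For criterion (1), a linearized least-squares fit in the style of Levy's classical method gives a non-iterative starting point. The sketch below is illustrative, not the paper's algorithm: it fits a first-order model H(s) = k/(s + p) to noise-free synthetic data by rewriting k - p·H(jω) = jω·H(jω), which is linear in the unknowns (k, p):

```python
import numpy as np

# synthetic measured frequency response of H(s) = 2 / (s + 3)
omega = np.linspace(0.1, 10.0, 50)
H = 2.0 / (1j * omega + 3.0)

# linear system: k - p*H(jw) = jw*H(jw), unknowns x = [k, p]
A = np.column_stack([np.ones_like(H), -H])
rhs = 1j * omega * H

# stack real and imaginary parts to solve with a real least-squares problem
A_real = np.vstack([A.real, A.imag])
rhs_real = np.concatenate([rhs.real, rhs.imag])
(k, p), *_ = np.linalg.lstsq(A_real, rhs_real, rcond=None)

# criterion (2), the l-infinity error, evaluated a posteriori
l_inf = np.max(np.abs(k / (1j * omega + p) - H))
```

On noise-free data this recovers k ≈ 2 and p ≈ 3 with negligible maximum error; with real measurements, the linearized solution would typically seed the iterative l2 or l-infinity minimization the abstract describes.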
Understanding Magnitudes to Understand Fractions
Gabriel, Florence
2016-01-01
Fractions are known to be difficult to learn and difficult to teach, yet they are vital for students to have access to further mathematical concepts. This article uses evidence to support teachers employing teaching methods that focus on the conceptual understanding of the magnitude of fractions.
Fitness Club
2012-01-01
Nordic Walking Classes Sessions of four classes of one hour each are held on Tuesdays. RDV barracks parking at Entrance A, 10 minutes before class time. Session 1 = 11.09 / 18.09 / 25.09 / 02.10, 18:15 - 19:15 Session 2 = 25.09 / 02.10 / 09.10 / 16.10, 12:30 - 13:30 Session 3 = 23.10 / 30.10 / 06.11 / 13.11, 12:30 - 13:30 Session 4 = 20.11 / 27.11 / 04.12 / 11.12, 12:30 - 13:30 Prices 40 CHF per session + 10 CHF club membership 5 CHF/hour pole rental Check out our schedule and enroll at http://cern.ch/club-fitness Hope to see you among us! fitness.club@cern.ch Spring 2012 brought long-awaited progress to the CERN Fitness Club. We officially opened Powerlifting @ CERN, and membership of the new section has been growing ever since, reaching 70+ people in less than 4 months. Powerlifting is a strength sport that is as simple as 1-2-3 and efficient. The "1-2-3" are the three basic lifts (bench press...
Singular statistics to model the distribution of large and small magnitude earthquakes
Maslov, Lev A
2014-01-01
The solution of the Generalized Logistic Equation is obtained to study earthquake statistics for large and small magnitudes. It is shown that the same solution fits the distributions of both large and small magnitude earthquakes, qualitatively and quantitatively. The Gutenberg-Richter cumulative frequency-magnitude empirical formula is derived from the solution of this equation.
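For context, the Gutenberg-Richter law referenced above states log10 N(≥m) = a - b·m, which implies that magnitudes above a completeness threshold are exponentially distributed. A sketch of the standard maximum-likelihood b-value estimator (Aki's formula, not taken from this paper):

```python
import numpy as np

def b_value_mle(magnitudes, m_min):
    """Aki's maximum-likelihood estimate: b = log10(e) / (mean(m) - m_min).
    Valid for a catalog complete above magnitude m_min."""
    m = np.asarray(magnitudes, float)
    return np.log10(np.e) / (m.mean() - m_min)

# synthetic Gutenberg-Richter catalog with b = 1.0 above m_min = 3.0:
# the G-R law makes excess magnitudes exponential with mean log10(e)/b
rng = np.random.default_rng(42)
m = 3.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=50000)
b_hat = b_value_mle(m, 3.0)  # close to the true b = 1.0
```

The estimator recovers the catalog's b-value to within sampling error, illustrating the frequency-magnitude statistics the paper derives from its equation.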
Tectonic stress - Models and magnitudes
Solomon, S. C.; Bergman, E. A.; Richardson, R. M.
1980-01-01
It is shown that global data on directions of principal stresses in plate interiors can serve as a test of possible plate tectonic force models. Such tests performed to date favor force models in which ridge pushing forces play a significant role. For such models the general magnitude of regional deviatoric stresses is comparable to the 200-300 bar compressive stress exerted by spreading ridges. An alternative approach to estimating magnitudes of regional deviatoric stresses from stress orientations is to seek regions of local stress either demonstrably smaller than or larger than the regional stresses. The regional stresses in oceanic intraplate regions are larger than the 100-bar compression exerted by the Ninetyeast Ridge and less than the bending stresses (not less than 1 kbar) beneath Hawaii.
Subject position affects EEG magnitudes.
Rice, Justin K; Rorden, Christopher; Little, Jessica S; Parra, Lucas C
2013-01-01
EEG (electroencephalography) has been used for decades in thousands of research studies and is today a routine clinical tool despite the small magnitude of measured scalp potentials. It is widely accepted that the currents originating in the brain are strongly influenced by the high resistivity of skull bone, but it is less well known that the thin layer of CSF (cerebrospinal fluid) has perhaps an even more important effect on EEG scalp magnitude by spatially blurring the signals. Here it is shown that brain shift and the resulting small changes in CSF layer thickness, induced by changing the subject's position, have a significant effect on EEG signal magnitudes in several standard visual paradigms. For spatially incoherent high-frequency activity the effect produced by switching from prone to supine can be dramatic, increasing occipital signal power by several times for some subjects (on average 80%). MRI measurements showed that the occipital CSF layer between the brain and skull decreases by approximately 30% in thickness when a subject moves from prone to supine position. A multiple dipole model demonstrated that this can indeed lead to occipital EEG signal power increases in the same direction and order of magnitude as those observed here. These results suggest that future EEG studies should control for subjects' posture, and that some studies may consider placing their subjects into the most favorable position for the experiment. These findings also imply that special consideration should be given to EEG measurements from subjects with brain atrophy due to normal aging or neurodegenerative diseases, since the resulting increase in CSF layer thickness could profoundly decrease scalp potential measurements.
Fitness club
2013-01-01
Nordic Walking Classes A new session of 4 classes of 1 hour each will be held on Tuesdays in May 2013. Meet at the CERN barracks parking at Entrance A, 10 minutes before class time. Dates and time: 07.05, 14.05, 21.05 and 28.05, from 12:30 to 13:30. Prices: 40 CHF per session + 10 CHF club membership – 5 CHF / hour pole rental. Check out our schedule and enroll at http://cern.ch/club-fitness Hope to see you among us!
Regression between earthquake magnitudes having errors with known variances
Pujol, Jose
2016-07-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = ax + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the estimates of x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that were discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
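For the homoscedastic case with equal error variances in both magnitudes, the slope of the best-fit line has the closed form of Deming (orthogonal) regression. A sketch under that assumption (this is the classical textbook formula with variance ratio 1, not the paper's new least-squares derivation):

```python
import numpy as np

def orthogonal_fit(X, Y):
    """Deming regression with error-variance ratio 1 (orthogonal regression):
    closed-form slope a and intercept b of the best-fit line y = a*x + b."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    sxx = np.var(X)                                    # variance of X
    syy = np.var(Y)                                    # variance of Y
    sxy = np.mean((X - X.mean()) * (Y - Y.mean()))     # covariance
    a = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4.0 * sxy ** 2)) / (2.0 * sxy)
    b = Y.mean() - a * X.mean()
    return a, b

# magnitudes lying exactly on y = 2x + 1 are recovered exactly
a, b = orthogonal_fit([1, 2, 3, 4, 5], [3, 5, 7, 9, 11])
```

Unlike ordinary least squares, this slope treats X and Y symmetrically, which is the setting the abstract describes when both magnitudes carry error.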
The color-magnitude distribution of small Jupiter Trojans
Wong, Ian
2015-01-01
We present an analysis of survey observations targeting the leading L4 Jupiter Trojan cloud near opposition using the wide-field Suprime-Cam CCD camera on the 8.2 m Subaru Telescope. The survey covered about 38 deg$^2$ of sky and imaged 147 fields spread across a wide region of the L4 cloud. Each field was imaged in both the $g'$ and the $i'$ band, allowing for the measurement of $g-i$ color. We detected 557 Trojans in the observed fields, ranging in absolute magnitude from $H=10.0$ to $H = 20.3$. We fit the total magnitude distribution to a broken power law and show that the power-law slope rolls over from $0.45\\pm 0.05$ to $0.36^{+0.05}_{-0.09}$ at a break magnitude of $H_{b}=14.93^{+0.73}_{-0.88}$. Combining the best-fit magnitude distribution of faint objects from our survey with an analysis of the magnitude distribution of bright objects listed in the Minor Planet Center catalog, we obtain the absolute magnitude distribution of Trojans over the entire range from $H=7.2$ to $H=16.4$. We show that the $g-i...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used, or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used … algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find …
Strong motion duration and earthquake magnitude relationships
Salmon, M.W.; Short, S.A. [EQE International, Inc., San Francisco, CA (United States); Kennedy, R.P. [RPK Structural Mechanics Consulting, Yorba Linda, CA (United States)
1992-06-01
Earthquake duration is the total time of ground shaking from the arrival of seismic waves until the return to ambient conditions. Much of this time is at relatively low shaking levels which have little effect on seismic structural response and on earthquake damage potential. As a result, a parameter termed "strong motion duration" has been defined by a number of investigators to be used for the purpose of evaluating seismic response and assessing the potential for structural damage due to earthquakes. This report presents methods for determining strong motion duration and a time history envelope function appropriate for various evaluation purposes, for earthquake magnitude and distance, and for site soil properties. There are numerous definitions of strong motion duration. For most of these definitions, empirical studies have been completed which relate duration to earthquake magnitude and distance and to site soil properties. Each of these definitions recognizes that only the portion of an earthquake record which has sufficiently high acceleration amplitude, energy content, or some other parameter significantly affects seismic response. Studies have been performed which indicate that the portion of an earthquake record in which the power (average rate of energy input) is maximum correlates most closely with potential damage to stiff nuclear power plant structures. Hence, this report will concentrate on energy-based strong motion duration definitions.
Zhang Hongzhi; Diao Guiling; Zhao Mingchun; Wang Qincai; Zhang Xiao; Huang Yuan
2008-01-01
Based on the earthquake catalog reported by the Chinese digital seismic network in recent years, we select earthquakes with both a surface wave magnitude and a local magnitude and fit a relationship between the two magnitudes. A systematic difference is found relative to the formula that has been used for 30 years. Because of the large dynamic range and wide frequency range of the current digital observation system, in addition to the larger number of stations and earthquakes used compared to before, the relation obtained in this paper seems more reliable. Our calculation shows that there is no significant difference before and after magnitude conversion, so we suggest abandoning magnitude conversion. The site response of a station consists of amplification at different frequencies. The amplification is approximately 1 and changes little with frequency at stations located on basement rock; at stations located on sediment layers it is greater than 1 at low frequencies and less than 1 at high frequencies. The difference between magnitudes from a single station located on a sediment layer and the average magnitude from the whole network increases from negative to positive with period. It seems that there is no fixed station correction factor, and the station correction method does not improve the accuracy of magnitude estimates.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Solar Variability Magnitudes and Timescales
Kopp, Greg
2015-08-01
The Sun’s net radiative output varies on timescales of minutes to many millennia. The former are directly observed as part of the on-going 37-year long total solar irradiance climate data record, while the latter are inferred from solar proxy and stellar evolution models. Since the Sun provides nearly all the energy driving the Earth’s climate system, changes in the sunlight reaching our planet can have - and have had - significant impacts on life and civilizations.Total solar irradiance has been measured from space since 1978 by a series of overlapping instruments. These have shown changes in the spatially- and spectrally-integrated radiant energy at the top of the Earth’s atmosphere from timescales as short as minutes to as long as a solar cycle. The Sun’s ~0.01% variations over a few minutes are caused by the superposition of convection and oscillations, and even occasionally by a large flare. Over days to weeks, changing surface activity affects solar brightness at the ~0.1% level. The 11-year solar cycle has comparable irradiance variations with peaks near solar maxima.Secular variations are harder to discern, being limited by instrument stability and the relatively short duration of the space-borne record. Proxy models of the Sun based on cosmogenic isotope records and inferred from Earth climate signatures indicate solar brightness changes over decades to millennia, although the magnitude of these variations depends on many assumptions. Stellar evolution affects yet longer timescales and is responsible for the greatest solar variabilities.In this talk I will summarize the Sun’s variability magnitudes over different temporal ranges, showing examples relevant for climate studies as well as detections of exo-solar planets transiting Sun-like stars.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Beyond maximum entropy: Fractal Pixon-based image reconstruction
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
Astronomical Limiting Magnitude at Langkawi Observatory
Zainuddin, Mohd. Zambri; Loon, Chin Wei; Harun, Saedah
2010-07-01
Astronomical limiting magnitude is an indicator used by astronomers when planning measurements at a particular site: it tells them what magnitude of celestial object can be measured there. Langkawi National Observatory (LNO) is situated at Bukit Malut, at latitude 6°18' 25'' North and longitude 99°46' 52'' East, on Langkawi Island. Sky brightness measurement has been performed at this site using the standard astronomical technique. The limiting magnitude measured is V = 18.6 +/- 1.0 mag. This indicates that astronomical measurements at Langkawi Observatory can only be made for celestial objects brighter than V = 18.6 mag.
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses … where Pd max is the laboratory maximum dry … Addis-Jinima Road Rehabilitation … data sets that differ considerably in magnitude.
A finer view of the conditional galaxy luminosity function and magnitude-gap statistics
Trevisan, M.; Mamon, G. A.
2017-10-01
The gap between first- and second-ranked galaxy magnitudes in groups is often considered a tracer of their merger histories, which in turn may affect galaxy properties, and also serves to test galaxy luminosity functions (LFs). We remeasure the conditional luminosity function (CLF) of the Main Galaxy Sample of the SDSS in an appropriately cleaned subsample of groups from the Yang catalogue. We find that, at low group masses, our best-fitting CLF has steeper satellite high ends, yet higher ratios of characteristic satellite to central luminosities in comparison with the CLF of Yang et al. The observed fractions of groups with large and small magnitude gaps as well as the Tremaine & Richstone statistics are not compatible with either a single Schechter LF or with a Schechter-like satellite plus lognormal central LF. These gap statistics, which naturally depend on the size of the subsamples, and also on the maximum projected radius, Rmax, for defining the second brightest galaxy, can only be reproduced with two-component CLFs if we allow small gap groups to preferentially have two central galaxies, as expected when groups merge. Finally, we find that the trend of higher gap for higher group velocity dispersion, σv, at a given richness, discovered by Hearin et al., is strongly reduced when we consider σv in bins of richness, and virtually disappears when we use group mass instead of σv. This limits the applicability of gaps in refining cosmographic studies based on cluster counts.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Comparison of magnetic probe calibration at nano and millitesla magnitudes.
Pahl, Ryan A; Rovey, Joshua L; Pommerenke, David J
2014-01-01
mounted probe and 12.0% for the hand-wound probe. The maximum difference between relevant and low magnitude tests was 21.5%.
Effect of Bend Radius on Magnitude and Location of Erosion in S-Bend
Quamrul H. Mazumder
2015-01-01
Solid particle erosion is a mechanical process that removes material by the impact of solid particles entrained in the flow. Erosion is a leading cause of failure of oil and gas pipelines and fittings in fluid handling industries. Different approaches have been used to control or minimize damage caused by erosion in particulate gas-solid or liquid-solid flows. S-bend geometry is widely used in different fluid handling equipment that may be susceptible to erosion damage. The results of a computational fluid dynamics (CFD) simulation of dilute gas-solid and liquid-solid flows in an S-bend are presented in this paper. In addition to particle impact velocity, the bend radius may have significant influence on the magnitude and location of erosion. CFD analysis was performed at three different air velocities (15.24 m/s–45.72 m/s) and three different water velocities (0.1 m/s–10 m/s) with entrained solid particles. The particle sizes used in the analysis range between 50 and 300 microns. The maximum erosion magnitude was observed in water at 10 m/s with 250-micron particles and a ratio of 3.5; the location of maximum erosion was observed in water at 10 m/s with 300-micron particles and a ratio of 3.5. Comparison of CFD results with available literature data showed reasonable agreement.
AKLSQF - LEAST SQUARES CURVE FITTING
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit of up to a 100-degree polynomial. All computations in the program are carried out in double precision for real numbers and in long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
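AKLSQF's tolerance-driven mode, raising the polynomial degree until the least-squares error meets the user's criterion, can be mimicked in a few lines of NumPy. A sketch (not the original program's code; `numpy.polyfit` stands in for the orthogonal-polynomial machinery):

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=20):
    """Increase the polynomial degree from 1 until the RMS least-squares
    fit error drops to tol, echoing AKLSQF's error-tolerance mode."""
    for degree in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        err = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
        if err <= tol:
            return coeffs, degree, err
    return coeffs, degree, err  # tolerance not met within max_degree

x = np.linspace(0.0, 1.0, 50)
y = x ** 3 - 2.0 * x          # data from an exact cubic
coeffs, degree, err = fit_to_tolerance(x, y, tol=1e-10)
```

For data sampled from an exact cubic, the loop stops at degree 3 with an error at machine-precision level, mirroring the program's behavior of printing the fit error at each degree before stopping.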
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
The absolute infrared magnitudes of type Ia supernovae
Meikle, W P S
2000-01-01
The absolute luminosities and homogeneity of early-time infrared (IR) light curves of type Ia supernovae are examined. Eight supernovae are considered. These are selected to have accurately known epochs of maximum blue light as well as having reliable distance estimates and/or good light curve coverage. Two approaches to extinction correction are considered. Owing to the low extinction in the IR, the differences in the corrections via the two methods are small. Absolute magnitude light curves in the J, H and K bands are derived. Six of the events, including five established "Branch-normal" supernovae, show similar coeval magnitudes. Two of these, SNe 1989B and 1998bu, were observed near maximum infrared light. This occurs about 5 days before maximum blue light. Absolute peak magnitudes of about -19.0, -18.7 and -18.8 in J, H and K, respectively, were obtained. The two spectroscopically peculiar supernovae in the sample, SNe 1986G and 1991T, also show atypical IR behaviour. The light curves of the six s...
The discovery and comparison of symbolic magnitudes.
Chen, Dawn; Lu, Hongjing; Holyoak, Keith J
2014-06-01
Humans and other primates are able to make relative magnitude comparisons, both with perceptual stimuli and with symbolic inputs that convey magnitude information. Although numerous models of magnitude comparison have been proposed, the basic question of how symbolic magnitudes (e.g., size or intelligence of animals) are derived and represented in memory has received little attention. We argue that symbolic magnitudes often will not correspond directly to elementary features of individual concepts. Rather, magnitudes may be formed in working memory based on computations over more basic features stored in long-term memory. We present a model of how magnitudes can be acquired and compared based on BARTlet, a representationally simpler version of Bayesian Analogy with Relational Transformations (BART; Lu, Chen, & Holyoak, 2012). BARTlet operates on distributions of magnitude variables created by applying dimension-specific weights (learned with the aid of empirical priors derived from pre-categorical comparisons) to more primitive features of objects. The resulting magnitude distributions, formed and maintained in working memory, are sensitive to contextual influences such as the range of stimuli and polarity of the question. By incorporating psychological reference points that control the precision of magnitudes in working memory and applying the tools of signal detection theory, BARTlet is able to account for a wide range of empirical phenomena involving magnitude comparisons, including the symbolic distance effect and the semantic congruity effect. We discuss the role of reference points in cognitive and social decision-making, and implications for the evolution of relational representations.
Determination of the meteor limiting magnitude
Kingery, A.; Blaauw, R. C.
2017-09-01
We present our method to calculate the meteor limiting magnitude. The limiting meteor magnitude defines the faintest magnitude at which all meteors are still detected by a given system. An accurate measurement of the limiting magnitude is important in order to calculate the meteoroid flux from a meteor shower or sporadic source. Since meteor brightness is linked to meteor mass, the limiting magnitude is needed to calculate the limiting mass of the meteor flux measurement. The mass distribution of meteoroids is thought to follow a power law, so even a small error in the limiting magnitude can have a significant effect on the measured flux. Sky conditions can change on fairly short timescales; therefore one must monitor the meteor limiting magnitude at regular intervals throughout the night, rather than measuring it only once. We use the stellar limiting magnitude as a proxy for the meteor limiting magnitude. Our method for determining the stellar limiting magnitude, and how we transform it into the meteor limiting magnitude, is presented. These methods are currently applied to NASA's wide-field meteor camera network to determine nightly fluxes, but they are applicable to other camera networks.
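The sensitivity of the flux to a limiting-magnitude error can be illustrated numerically. Assuming cumulative meteor counts follow N(<m) ∝ r**m for a population index r (the value r = 2.5 below is illustrative, not taken from this abstract), an error Δm in the limiting magnitude biases the counted meteors, and hence the flux, by a factor r**Δm:

```python
# Sensitivity of a meteor flux estimate to limiting-magnitude error.
# Cumulative counts assumed to follow N(<m) proportional to r**m.
r = 2.5  # illustrative population index
for dm in (0.1, 0.5, 1.0):  # error in the limiting magnitude
    factor = r ** dm
    print(f"delta_m = {dm}: flux biased by factor {factor:.2f}")
```

Even a half-magnitude error changes the inferred flux by roughly 60% under this assumption, which is why the limiting magnitude must be monitored through the night.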
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, to maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used, analyzing the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound on the item sizes, for some integer k.
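For reference, the First-Fit rule that both algorithms build on can be sketched in a few lines. The sketch below shows the classical First-Fit-Decreasing heuristic on synthetic integer item sizes; the maximum resource variant studied in the paper inverts the objective (more bins is better) but uses the same placement rule with items in increasing or decreasing order:

```python
def first_fit_decreasing(items, capacity):
    """Place each item, largest first, into the first bin with room,
    opening a new bin when no existing bin fits it."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no bin had room: open a new one
    return bins

bins = first_fit_decreasing([5, 7, 5, 2, 4, 2, 5, 1], capacity=10)
print(len(bins), bins)
```

On this input the total size is 31, so at least four bins of capacity 10 are needed, and First-Fit-Decreasing attains that bound.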
Fit for purpose: Australia's National Fitness Campaign.
Collins, Julie A; Lekkas, Peter
2011-12-19
During a time of war, the federal government passed the National Fitness Act 1941 to improve the fitness of the youth of Australia and better prepare them for roles in the armed services and industry. Implementation of the National Fitness Act made federal funds available at a local level through state-based national fitness councils, which coordinated promotional campaigns, programs, education and infrastructure for physical fitness, with volunteers undertaking most of the work. Specifically focused on children and youth, national fitness councils supported the provision of children's playgrounds, youth clubs and school camping programs, as well as the development of physical education in schools and its teaching and research in universities. By the time the Act was repealed in 1994, fitness had become associated with leisure and recreation rather than being seen as equipping people for everyday life and work. The emergence of the Australian National Preventive Health Agency Act 2010 offers the opportunity to reflect on synergies with its historic precedent.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
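The two survival expressions contrasted above can be sketched as follows (the notation is assumed here for illustration, not reproduced from the paper): maximizing the Boltzmann-Gibbs entropy under a mean-effect constraint yields the classical exponential survival factor, while the Tsallis entropy with a cutoff yields a power-law form that vanishes at a finite dose.

```latex
% Classical (Boltzmann--Gibbs) maximum entropy: exponential survival
S(D) = e^{-\alpha D}
% Tsallis entropy with an assumed cutoff dose D_0: power-law survival
S(D) = \left(1 - \frac{D}{D_0}\right)^{\gamma}, \qquad 0 \le D < D_0,
% with S(D) = 0 for D \ge D_0.
```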
2011-03-24
Program. As shown earlier, VO2max is a good indicator of cardiorespiratory fitness. A maximum effort test on a treadmill is one of the most accepted and accurate methods to determine VO2max, but the test is lengthy, complicated, and comes with risks. To avoid these drawbacks, the Air Force adopted a sub-maximal cycle ergometry test (SCET) designed to estimate VO2max. These tests were accurate within the range of ten to twenty percent.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
THE RELATION BETWEEN MAGNITUDE AND FREQUENCY IN ROCKBURST AND ITS APPLICATION IN FORECASTING
潘一山; 刘建军; 仲玉; 徐方军; 徐秉业; 关杰
1998-01-01
The rockburst data of the Huafeng and Da'anshan mines were analyzed in this paper. The statistical results show that the relation between magnitude and frequency in rockbursts is linear. Using this relation, the maximum magnitude and tendency of rockbursts can be predicted. The theory can improve the quantitative prediction level for rockbursts.
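A linear magnitude-frequency relation of the Gutenberg-Richter form log10 N(M) = a - bM can be fitted by least squares, and the magnitude at which the expected count falls to one gives a crude maximum-magnitude estimate. The data below are synthetic, purely for illustration of the fitting step; the paper's mine-specific values are not reproduced here:

```python
import numpy as np

# Synthetic cumulative counts N(>=M) for magnitude bins M (illustrative)
M = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
N = np.array([1000, 320, 100, 32, 10])

# Least-squares fit of log10 N = a - b*M (Gutenberg-Richter form)
slope, a = np.polyfit(M, np.log10(N), 1)
b = -slope

# Magnitude at which the fitted count drops to 1 (crude upper bound)
M_max = a / b
print(f"a = {a:.2f}, b = {b:.2f}, M_max ~ {M_max:.2f}")
```

On these data the fit gives b near 1 and an extrapolated maximum magnitude near 4.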
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
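The Mean Energy Model mentioned above has a well-known solution: the maximum-entropy distribution under a mean-"energy" constraint is the Gibbs form p_i proportional to exp(-beta E_i), with beta chosen to match the constraint. A minimal numerical sketch (energies and the target mean are synthetic), solving for beta by bisection:

```python
import math

def maxent_mean_energy(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution with a given mean 'energy'.

    The solution is p_i proportional to exp(-beta * E_i); since the mean
    energy is decreasing in beta, beta is found by bisection.
    """
    def mean_at(beta):
        weights = [math.exp(-beta * e) for e in energies]
        z = sum(weights)
        return sum(w * e for w, e in zip(weights, energies)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_at(mid) > target_mean:
            lo = mid  # mean too high: increase beta
        else:
            hi = mid
    beta = (lo + hi) / 2
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return beta, [w / z for w in weights]

beta, p = maxent_mean_energy([0.0, 1.0, 2.0], target_mean=1.0)
print(beta, p)
```

With symmetric energies and the target mean at their midpoint, the constraint is uninformative, so beta converges to zero and the distribution to uniform, as the principle requires.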
ProFit: Bayesian galaxy fitting tool
Robotham, A. S. G.; Taranu, D.; Tobar, R.
2016-12-01
ProFit is a Bayesian galaxy fitting tool that uses the fast C++ image generation library libprofit (ascl:1612.003) and a flexible R interface to a large number of likelihood samplers. It offers a fully featured Bayesian interface to galaxy model fitting (also called profiling), using mostly the same standard inputs as other popular codes (e.g. GALFIT ascl:1104.010), but it is also able to use complex priors and a number of likelihoods.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
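The robustness argument rests on the shape of correntropy itself: each sample's contribution is a bounded Gaussian-kernel similarity, so an outlier cannot dominate the objective the way it does under squared loss. A minimal sketch (the kernel width sigma and the label values are assumptions for illustration, not the paper's settings):

```python
import math

def correntropy(y_true, y_pred, sigma=1.0):
    """Empirical correntropy: mean Gaussian-kernel similarity between
    paired values. Each sample's contribution is bounded by 1, so a
    single outlier contributes almost nothing instead of blowing up."""
    return sum(math.exp(-(t - p) ** 2 / (2 * sigma ** 2))
               for t, p in zip(y_true, y_pred)) / len(y_true)

clean = correntropy([1, -1, 1, -1], [0.9, -1.1, 1.0, -0.8])
noisy = correntropy([1, -1, 1, -1], [0.9, -1.1, 1.0, 50.0])  # one outlier
print(clean, noisy)
```

The outlier reduces the objective by at most one sample's bounded share, which is the property MCC-based learning exploits.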
Erickson, Tim
2008-01-01
We often look for a best-fit function to a set of data. This article describes how a "pretty good" fit might be better than a "best" fit when it comes to promoting conceptual understanding of functions. In a pretty good fit, students design the function themselves rather than choosing it from a menu; they use appropriate variable names; and they…
Jensen, Jens-Ole
2003-01-01
The article describes the prevalence of fitness training among young people and discusses how working out at fitness centres has become so popular.
Magnitude Anomalies and Propagation of Local Phases
1983-01-31
statistically significant variation of magnitude anomalies versus one of the above parameters. By contrast, we observed a significant dependence between...enough to demand a more detailed analysis. III - Local dependence of magnitude anomalies. A smoothing of our data on all quakes originating in the same
Reward Magnitude Effects on Temporal Discrimination
Galtress, Tiffany; Kirkpatrick, Kimberly
2010-01-01
Changes in reward magnitude or value have been reported to produce effects on timing behavior, which have been attributed to changes in the speed of an internal pacemaker in some instances and to attentional factors in other cases. The present experiments therefore aimed to clarify the effects of reward magnitude on timing processes. In Experiment…
Local magnitudes of small contained explosions.
Chael, Eric Paul
2009-12-01
The relationship between explosive yield and seismic magnitude has been extensively studied for underground nuclear tests larger than about 1 kt. For monitoring smaller tests over local ranges (within 200 km), we need to know whether the available formulas can be extrapolated to much lower yields. Here, we review published information on amplitude decay with distance and on the seismic magnitudes of industrial blasts and refraction explosions in the western U.S. Next, we measure the magnitudes of some similar shots in the northeast. We find that local magnitudes ML of small, contained explosions are reasonably consistent with the magnitude-yield formulas developed for nuclear tests. These results are useful for estimating the detection performance of proposed local seismic networks.
Fitness World - Fremtidig overlevelse
Rice, Kasper; Klink, Nikolaj; Nielsen, Mie; Carlson, Andre; Boy, Mikkel; Hansen, Alexander
2015-01-01
Our project is a case study with Fitness World as a baseline. It examines how Fitness World can strengthen its current position on the market. Our empirical material includes both qualitative and quantitative methodological approaches, through an expert interview and a questionnaire survey. These methods contribute and generate general knowledge about the fitness culture in Denmark and the customers in the fitness industry. We have stated a possible strategic opportunity for Fitness Worl...
Zero Magnitude Effect for the Productivity of Triggered Tsunami Sources
Geist, E. L.
2013-12-01
The Epidemic Type Aftershock Sequence (ETAS) model is applied to tsunami events to explain previously observed temporal clustering of tsunami sources. Tsunami events are defined by the National Geophysical Data Center (NGDC) tsunami database. For the ETAS analysis, the earthquake magnitude associated with each tsunami event in the NGDC database is replaced by the primary magnitude listed in the Centennial catalog up until 1976 and in the Global CMT catalog from 1976 through 2010. Tsunamis with a submarine landslide or volcanic component are included if they are accompanied by an earthquake, which is most often the case. Tsunami size is used as a mark for determining a tsunami-generating event, according to a minimum completeness level. The tsunami catalog is estimated to be complete for tsunami sizes greater than 1 m since 1900 and greater than 0.1 m since 1960. Of the five parameters in the temporal ETAS model (Ogata, 1988), the parameter that scales the magnitude dependence in the productivity of triggered events is the one that is most different from ETAS parameters derived from similar earthquake catalogs. Maximum likelihood estimates of this magnitude effect parameter are essentially zero, within 95% confidence, for both the 0.1 m and 1.0 m tsunami completeness levels. To explain this result, parameter estimates are determined for the Global CMT catalog under three tsunamigenic conditions: (1) M≥7 and focal depth ≤50 km, (2) submarine location, and (3) dominant component of dip slip. Successive subcatalogs are formed from the Global CMT catalog according to each of these conditions. The high magnitude threshold for tsunamigenesis alone (subcatalog 1) does not explain the zero magnitude effect. The zero magnitude effect also does not appear to be caused by the smaller number of tsunamigenic events analyzed in comparison to earthquake catalogs with a similar magnitude threshold. ETAS parameter estimates from subcatalog (3) with all three tsunamigenic conditions
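The "magnitude effect" parameter in question is the exponent alpha in the ETAS productivity term, which in the temporal ETAS model (Ogata, 1988) gives the expected number of directly triggered events as an exponential function of the parent magnitude. A small sketch with illustrative parameter values (A, alpha, and the magnitudes below are assumptions, not estimates from this study):

```python
import math

def etas_productivity(m, m_c, A=1.0, alpha=1.0):
    """Expected number of directly triggered events for a parent of
    magnitude m above a reference magnitude m_c in the temporal ETAS
    model: k(m) = A * exp(alpha * (m - m_c)). Values are illustrative."""
    return A * math.exp(alpha * (m - m_c))

# With alpha ~ 1, as typical for earthquake catalogs, an M8 parent
# triggers e^2 times more events than an M6 parent; with alpha = 0, the
# zero magnitude effect found above, productivity is magnitude-independent.
print(etas_productivity(8.0, 6.0, alpha=1.0) / etas_productivity(6.0, 6.0, alpha=1.0))
print(etas_productivity(8.0, 6.0, alpha=0.0) / etas_productivity(6.0, 6.0, alpha=0.0))
```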
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative effect between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
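The maximum likelihood fit of a two-component normal mixture is usually computed with the EM algorithm. The sketch below fits a univariate two-component mixture to synthetic data (the data and initialisation are illustrative; the paper's economic series are not reproduced here):

```python
import numpy as np

def em_two_normals(x, iters=200):
    """EM for a two-component univariate normal mixture (maximum
    likelihood). Normalising constants cancel in the responsibilities,
    so the 1/sqrt(2*pi) factor is omitted."""
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialisation
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return w, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
w, mu, sigma = em_two_normals(x)
print(np.round(mu, 1), np.round(w, 2))
```

On this well-separated synthetic sample the component means recover -3 and 3 and the weights recover the 50/50 split.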
Vang, Jakob Rabjerg; Zhou, Fan; Andreasen, Søren Juhl;
2015-01-01
A high temperature PEM (HTPEM) fuel cell model capable of simulating both steady state and dynamic operation is presented. The purpose is to enable extraction of unknown parameters from sets of impedance spectra and polarisation curves. The model is fitted to two polarisation curves and four impedance spectra measured on a Dapozol 77 MEA, and it achieves good agreement with the recorded curves. Except at OCV, where the voltage is overpredicted, the simulated polarisation curves deviate by at most 3.0% from the measurements; the impedance spectra deviate by at most 3.7%. The fitted parameter values are within the range reported in the literature. The only exception is the catalyst layer acid content, which is an order of magnitude lower; this may derive from acid migration. The model is used to illustrate the effect of reactant dynamics on the impedance spectrum. The model can aid...
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Discounting Behaviour and the Magnitude Effect
Andersen, Steffen; Harrison, Glenn W.; Lau, Morten Igel
2013-01-01
We evaluate the claim that individuals exhibit a magnitude effect in their discounting behaviour, where higher discount rates are inferred from choices made with lower principals, all else being equal. If the magnitude effect is quantitatively significant, it is not appropriate to use one discount rate that is independent of the scale of the project for cost-benefit analysis and capital budgeting. Using data from a field experiment in Denmark, we find statistically significant evidence of a magnitude effect that is much smaller than is claimed. This evidence surfaces only if one controls...
The Model Characteristics of Physical Fitness in CrossFit
Vasilii V. Volkov
2014-06-01
The aim of the study is to work out the model characteristics of the physical fitness of CrossFit athletes based on laboratory functional testing (n=10). The analysis of body composition was conducted using the dual-energy absorptiometry method. The morpho-functional characteristics of the heart were explored using a high-resolution ultrasound scanner. Oxygen consumption at the aerobic-anaerobic threshold and maximum oxygen consumption were determined in a step test on arm and leg cycle ergometers using a gas analyzer. The level of physical fitness of the leg muscles in the males and females who took part in the study was satisfactory, though considerably higher than the norm for untrained people. The level of physical fitness of the arm muscles was higher than average and matched the Master of Sport of International Class standards. The productivity of the cardiovascular system was much higher than in healthy males and females who do not work out, and comparable to the standards for advanced soccer players.
Chrastina, M.; Zejda, M.; Mikulášek, Z.
2010-12-01
The FITS standard allows arbitrary use of the keyword name-space, apart from some reserved keywords. The result of this freedom is that several keywords can have the same meaning. A similar problem is that the values of equivalent keywords may be given in different physical units. These facts complicate automated data processing and the creation of FITS file archives with a simple structure. MUNI-FITS-Utils is a package of Python scripts developed with PyFITS, a Python FITS module. The scripts are user-friendly and allow manipulating FITS headers into a uniform shape. Further functions will be added soon.
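The kind of header normalisation such scripts perform can be sketched without any FITS library: map synonym keywords to one canonical name and convert values to a canonical unit. The synonym table and unit factors below are assumptions for illustration only; the actual MUNI-FITS-Utils scripts operate on real headers via PyFITS (whose functionality now lives in astropy.io.fits):

```python
# Hypothetical synonym table: several keywords meaning "exposure time"
SYNONYMS = {"EXPTIME": "EXPOSURE", "EXP_TIME": "EXPOSURE", "ITIME": "EXPOSURE"}
# Hypothetical unit fixes, e.g. a camera that writes milliseconds
TO_SECONDS = {"EXP_TIME": 0.001}

def normalise_header(header):
    """Return a header dict with canonical keyword names and units."""
    out = {}
    for key, value in header.items():
        canonical = SYNONYMS.get(key, key)
        if canonical == "EXPOSURE":
            value = value * TO_SECONDS.get(key, 1.0)
        out[canonical] = value
    return out

h = normalise_header({"EXP_TIME": 30000, "OBJECT": "M31"})
print(h)
```

After normalisation, archive queries and pipelines only need to know one keyword per physical quantity, which is the "uniform shape" the abstract refers to.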
FIT3D: Fitting optical spectra
Sánchez, S. F.; Pérez, E.; Sánchez-Blázquez, P.; González, J. J.; Rosales-Ortega, F. F.; Cano-Díaz, M.; López-Cobá, C.; Marino, R. A.; Gil de Paz, A.; Mollá, M.; López-Sánchez, A. R.; Ascasibar, Y.; Barrera-Ballesteros, J.
2016-09-01
FIT3D fits optical spectra to deblend the underlying stellar population and the ionized gas, and extract physical information from each component. FIT3D is focused on the analysis of Integral Field Spectroscopy data, but is not restricted to it, and is the basis of Pipe3D, a pipeline used in the analysis of datasets like CALIFA, MaNGA, and SAMI. It can run iteratively or in an automatic way to derive the parameters of a large set of spectra.
Design of Second Order Recursive Digital Integrators with Matching Phase and Magnitude Response
K. Garg
2017-04-01
The locations of poles and zeros greatly affect the phase response and magnitude response of a system. Recently, pole-zero optimization has emerged as an effective approach to approximately match the magnitude response of a system with that of an ideal one. In this brief, a methodology for the design of linear phase integrators, and of integrators with a constant phase of -90 degrees, is proposed. The aim of this method is to simultaneously attain the multiple objectives of magnitude and phase optimization. In this method, the magnitude response error is minimized under the constraint that the maximum passband phase-response error stays below a prescribed level. Examples are included to illustrate the proposed design technique.
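The magnitude-matching objective can be illustrated with the simplest recursive integrator, the first-order trapezoidal (bilinear) integrator H(z) = (T/2)(1 + z^-1)/(1 - z^-1). Its phase is exactly -90 degrees at all frequencies, but its magnitude departs from the ideal 1/omega toward the Nyquist frequency, which is the error the paper's second-order designs are optimized to reduce. (This first-order example only illustrates the comparison; it is not one of the paper's designs.)

```python
import numpy as np

T = 1.0
w = np.linspace(0.1, np.pi - 0.1, 200)   # digital frequency, rad/sample
z = np.exp(1j * w)

# Trapezoidal integrator: phase is exactly -90 degrees everywhere,
# magnitude equals (T/2) * cot(w/2)
H = (T / 2) * (1 + 1 / z) / (1 - 1 / z)

# Deviation from the ideal integrator magnitude 1/w
error = np.abs(np.abs(H) - 1.0 / w)
print(f"max magnitude error on the band: {error.max():.4f}")
```

The error is negligible at low frequencies but grows to about 0.3 near Nyquist, motivating the constrained pole-zero optimization described above.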
Seismic hazard in Greece. I. Magnitude recurrence
Makropoulos, Kostas C.; Burton, Paul W.
1985-08-01
Two different methods are applied to the earthquake catalogue for Greece (Makropoulos and Burton, 1981), the MB catalogue, to evaluate Greek seismic hazard in terms of magnitude: earthquake strain energy release and Gumbel's third asymptotic distribution of extreme values. It is found that there is a close relationship between the results from the two methods. In places where the cumulative strain energy release graphs include at least one well-defined cycle of periodicity of strain release, the parameters of the third-type asymptote are well defined, with small uncertainties. In almost all cases the magnitude distribution shows remarkably good third-type asymptotic behaviour. The results are presented in the form of graphs and contour maps of annual and 80-year modes, and of magnitudes with 70% probability of not being exceeded in the next 50 and 100 years. For six of the most heavily industrial and highly populated centres of Greece, magnitude hazard parameters are also derived and examined in more detail, thereby illustrating the direct applicability of the methods in terms of zoning. The close agreement between observed and predicted extreme magnitudes shows that the sample period considered (1900-1978) is long enough to obtain statistically stable estimates. For Athens the upper bound magnitude is found to be 6.7 ± 0.3 (within 100 km) and 6.8 ± 0.4 (100 km) from the two methods respectively, whereas for Corinth an earthquake of magnitude 6.5 has a mean return period of 43 years. Greece as a whole has an upper bound magnitude of 8.7 ± 0.6, and earthquakes of a size similar to the 1903 Kithira event (M ≈ 8.0) have a mean return period of about 200 years. The significantly different maps contouring magnitudes of the annual and 80-year modes result from the fact that each place has its own distribution curvature for magnitude occurrence, and thus they are not a linear extrapolation of each other. However, as longer return periods are considered, these differences
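The hazard statements in the abstract ("70% probability of not being exceeded in the next 50 years", "a mean return period of 43 years") are linked by generic extreme-value bookkeeping, sketched below using the Corinth figure as input. The Gumbel III parameters themselves are not reproduced; assuming only independence between years:

```python
def prob_not_exceeded(annual_p, years):
    """Probability of no exceedance over `years` independent years,
    given the annual non-exceedance probability."""
    return annual_p ** years

def return_period(annual_p):
    """Mean return period (years) of the exceedance event."""
    return 1.0 / (1.0 - annual_p)

# An event with a 43-year return period (the Corinth M6.5 example above)
p = 1.0 - 1.0 / 43.0
print(f"50-year non-exceedance probability: {prob_not_exceeded(p, 50):.2f}")
```

A 43-year return period thus corresponds to only about a 31% chance of not seeing the event in 50 years, which is why multi-decade modes and fixed non-exceedance probabilities convey different information than return periods alone.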
Maximum precision closed-form solution for localizing diffraction-limited spots in noisy images.
Larkin, Joshua D; Cook, Peter R
2012-07-30
Super-resolution techniques like PALM and STORM require accurate localization of single fluorophores detected using a CCD. Popular localization algorithms inefficiently assume each photon registered by a pixel can only come from an area in the specimen corresponding to that pixel (not from neighboring areas), before iteratively (slowly) fitting a Gaussian to pixel intensity; they fail with noisy images. We present an alternative; a probability distribution extending over many pixels is assigned to each photon, and independent distributions are joined to describe emitter location. We compare algorithms, and recommend which serves best under different conditions. At low signal-to-noise ratios, ours is 2-fold more precise than others, and 2 orders of magnitude faster; at high ratios, it closely approximates the maximum likelihood estimate.
Grosse, Susan J.
2009-01-01
This article discusses how families can increase family togetherness and improve physical fitness. The author provides easy ways to implement family friendly activities for improving and maintaining physical health. These activities include: walking, backyard games, and fitness challenges.
The Absolute Magnitude Distribution of Kuiper Belt Objects
Fraser, Wesley C; Morbidelli, Alessandro; Parker, Alex; Batygin, Konstantin
2014-01-01
Here we measure the absolute magnitude distributions (H-distribution) of the dynamically excited and quiescent (hot and cold) Kuiper Belt objects (KBOs), and test if they share the same H-distribution as the Jupiter Trojans. From a compilation of all useable ecliptic surveys, we find that the KBO H-distributions are well described by broken power-laws. The cold population has a bright-end slope, $\alpha_{1}=1.5_{-0.2}^{+0.4}$, and break magnitude, $H_{B}=6.9_{-0.2}^{+0.1}$ (r'-band). The hot population has a shallower bright-end slope of $\alpha_{1}=0.87_{-0.2}^{+0.07}$, and break magnitude $H_{B}=7.7_{-0.5}^{+1.0}$. Both populations share similar faint-end slopes of $\alpha_2\sim0.2$. We estimate that the masses of the hot and cold populations are $\sim0.01$ and $\sim3\times10^{-4}$ $M_{\oplus}$, respectively. The broken power-law fit to the Trojan H-distribution has $\alpha_{1}=1.0\pm0.2$, $\alpha_{2}=0.36\pm0.01$, and $H_{B}=8.3$. The KS test reveals that...
Regional Variations of the ω-upper Bound Magnitude of GIII Distribution in the Iranian Plateau
Mohammadi, Hiwa; Bayrak, Yusuf
2016-08-01
The Iranian Plateau does not appear to be a single crustal block, but an assemblage of zones comprising the Alborz-Azerbaijan, Zagros, Kopeh-Dagh, Makran, and Central and East Iran. Gumbel's third asymptotic distribution method (GIII) and the maximum magnitude expected by the Kijko-Sellevoll method are applied in order to check the potential of each seismogenic zone in the Iranian Plateau for the future occurrence of the maximum magnitude (M max). For this purpose, a homogeneous and complete seismicity database for the instrumental period 1900-2012 is used in 29 seismogenic zones of the examined region. The spatial mapping of the hazard parameters (the upper bound magnitude ω, the most probable earthquake magnitude in the next 100 years, M 100, and the maximum magnitude expected by the Kijko-Sellevoll method, M max K-S) reveals that Central and East Iran, Alborz and Azerbaijan, Kopeh-Dagh and SE Zagros are dangerous places for the next occurrence of a large earthquake.
Quasispecies on Fitness Landscapes.
Schuster, Peter
2016-01-01
Selection-mutation dynamics is studied as adaptation and neutral drift on abstract fitness landscapes. Various models of fitness landscapes are introduced and analyzed with respect to the stationary mutant distributions adopted by populations upon them. The concept of quasispecies is introduced, and the error threshold phenomenon is analyzed. Complex fitness landscapes with large scatter of fitness values are shown to sustain error thresholds. The phenomenological theory of the quasispecies introduced in 1971 by Eigen is compared to approximation-free numerical computations. The concept of strong quasispecies, understood as mutant distributions that are especially stable against changes in mutation rates, is presented. The role of fitness-neutral genotypes in quasispecies is discussed.
Representations of the magnitudes of fractions.
Schneider, Michael; Siegler, Robert S
2010-10-01
We tested whether adults can use integrated, analog, magnitude representations to compare the values of fractions. The only previous study on this question concluded that even college students cannot form such representations and instead compare fraction magnitudes by representing numerators and denominators as separate whole numbers. However, atypical characteristics of the presented fractions might have provoked the use of atypical comparison strategies in that study. In our 3 experiments, university and community college students compared more balanced sets of single-digit and multi-digit fractions and consistently exhibited a logarithmic distance effect. Thus, adults used integrated, analog representations, akin to a mental number line, to compare fraction magnitudes. We interpret differences between the past and present findings in terms of different stimuli eliciting different solution strategies.
Achieving continuity: a story of stellar magnitude
Evans, Michael S.
2010-03-01
Scientists tell a story of 2,000 years of stellar magnitude research that traces back to Hipparchus. This story of continuity in practices serves an important role in scientific education and outreach. STS scholars point out many ways that stories of continuity, like many narratives about science, are disconnected from practices. Yet the story of continuity in stellar magnitude is a powerful scientific achievement precisely because of its connection to practice. The historical development of star catalogues shows how specific recording practices connected past and present in a useful way. The narrative of continuity in stellar magnitude, however else it might be subject to STS critique of narrative, maintains its power because of its connection to practice. I suggest that more attention be paid to connections between practice and narrative in STS, and in particular to the ways that historical practices sustain narratives by connecting past and present.
Magnitudes and timescales of total solar irradiance variability
Kopp, Greg
2016-07-01
The Sun's net radiative output varies on timescales of minutes to gigayears. Direct measurements of the total solar irradiance (TSI) show changes in the spatially- and spectrally-integrated radiant energy on timescales as short as minutes to as long as a solar cycle. Variations of ~0.01% over a few minutes are caused by the ever-present superposition of convection and oscillations with very large solar flares on rare occasion causing slightly-larger measurable signals. On timescales of days to weeks, changing photospheric magnetic activity affects solar brightness at the ~0.1% level. The 11-year solar cycle shows variations of comparable magnitude with irradiances peaking near solar maximum. Secular variations are more difficult to discern, being limited by instrument stability and the relatively short duration of the space-borne record. Historical reconstructions of the Sun's irradiance based on indicators of solar-surface magnetic activity, such as sunspots, faculae, and cosmogenic isotope records, suggest solar brightness changes over decades to millennia, although the magnitudes of these variations have high uncertainties due to the indirect historical records on which they rely. Stellar evolution affects yet longer timescales and is responsible for the greatest solar variabilities. In this manuscript I summarize the Sun's variability magnitudes over different temporal regimes and discuss the irradiance record's relevance for solar and climate studies as well as for detections of exo-solar planets transiting Sun-like stars.
Saccadic compression of symbolic numerical magnitude.
Paola Binda
Stimuli flashed briefly around the time of saccadic eye movements are subject to complex distortions: compression of space and time, and underestimation of numerosity. Here we show that saccadic distortions extend to abstract quantities, affecting the representation of symbolic numerical magnitude. Subjects consistently underestimated the results of rapidly computed mental additions and subtractions when the operands were briefly displayed before a saccade. However, the recognition of the number symbols was unimpaired. These results are consistent with the hypothesis of a common, abstract metric encoding magnitude along multiple dimensions. They suggest that a surprising link exists between the preparation of action and the representation of abstract quantities.
Axiomatic approaches to Stevens' magnitude scaling
Zimmer, Karin; Ellermeier, Wolfgang
2006-01-01
In 1996, Narens showed that Stevens' methods of magnitude scaling are based on but a few qualitative assumptions that are straightforward to evaluate empirically. Two crucial assumptions are commutativity (the outcome of a sequence of two assessments does not depend on their order) and multiplica…
Argument on the magnitude-frequency relation
陈时军; 王丽凤; 马丽; 张红军
2002-01-01
The complexity of seismicity and the relation between magnitude and frequency are discussed in this paper on the basis of nonlinear dynamics and multifractal theory. We argue that seismically active systems normally have multifractal characteristics, both in the spatial-temporal distribution and in the intensity distribution of events. The nonlinear characteristics of the magnitude-frequency relation are discussed from the viewpoint of multifractal theory, and the formulation is revised. As an example, the variance of b_q estimated from the recent New Zealand catalogue is presented.
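The standard (monofractal) Gutenberg-Richter b-value that such multifractal formulations generalize is commonly estimated by maximum likelihood. A minimal sketch on a synthetic exponential catalogue; the completeness magnitude `mc` and the catalogue itself are invented for illustration:

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value estimate, with the usual
    half-bin correction for magnitudes binned at width dm."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalogue with a true b-value of 1.0:
rng = np.random.default_rng(42)
true_b, mc = 1.0, 2.0
mags = mc + rng.exponential(scale=np.log10(np.e) / true_b, size=20000)
b_est = b_value_mle(mags, mc, dm=0.0)  # dm=0: continuous synthetic magnitudes
print(f"estimated b ~ {b_est:.3f}")
```

With magnitudes above `mc` exponentially distributed, the estimator recovers the generating b-value to within sampling error.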
Absolute-Magnitude Distributions of Supernovae
Richardson, Dean; Wright, John; Maddox, Larry
2014-01-01
The absolute-magnitude distributions of seven supernova types are presented. The data used here were primarily taken from the Asiago Supernova Catalogue, but were supplemented with additional data. We accounted for both foreground and host-galaxy extinction. A bootstrap method was used to correct the samples for Malmquist bias. Separately, we generated volume-limited samples, restricted to events within 100 Mpc. We find that the subluminous events (M_B > -15) make up about 3%. The normal Ia distribution was the brightest, with a mean absolute blue magnitude of -19.25. The IIP distribution was the dimmest, at -16.75.
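Absolute magnitudes in such volume-limited samples follow from the distance modulus. A quick sketch (the apparent magnitude here is illustrative, not a value from the paper):

```python
import math

def absolute_magnitude(apparent_mag, distance_mpc):
    """Distance modulus: M = m - 5*log10(d / 10 pc)."""
    d_pc = distance_mpc * 1.0e6
    return apparent_mag - 5.0 * math.log10(d_pc / 10.0)

# A hypothetical supernova observed at m_B = 16.25 at the 100 Mpc volume limit:
M = absolute_magnitude(16.25, 100.0)
print(M)  # -18.75
```

At 100 Mpc the distance modulus is exactly 35 magnitudes, which is why the 100 Mpc cut translates directly into an absolute-magnitude completeness limit for a given survey depth.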
Determinants of Magnitude of Pseudohyperkalemia in Thrombocytosis
Kim, Ho-Jung; Chung, Choon-Hae; Moon, Chul-Oong; Park, Chan-Gook; Hong, Soon-Pyo; Oh, Man Seok; Carroll, Hugh J.
1990-01-01
The release of potassium from platelets is a well-known cause of pseudohyperkalemia in thrombocytosis. In predicting the magnitude of pseudohyperkalemia associated with thrombocytosis, previous investigations considered only the amount of potassium released from platelets during blood clotting, although the increment in serum potassium during blood clotting depends on the quantity of potassium released from platelets as well as on the volume of distribution of the released potassium, which is inversely proportionate to the hematocrit. The present study proposes a new mathematical formula to predict the magnitude of the increase in serum potassium during blood clotting, and the accuracy of this formula has been tested in a patient with thrombocytosis. PMID:2098099
Improving Children's Knowledge of Fraction Magnitudes
Fazio, Lisa K.; Kennedy, Casey A.; Siegler, Robert S.
2016-01-01
We examined whether playing a computerized fraction game, based on the integrated theory of numerical development and on the Common Core State Standards' suggestions for teaching fractions, would improve children's fraction magnitude understanding. Fourth and fifth-graders were given brief instruction about unit fractions and played "Catch…
Gonzalez J, F. [UNAM, Facultad de Ciencias, Ciudad Universitaria, 04510 Mexico D. F. (Mexico); Alvarez R, J. T., E-mail: trinidad.alvarez@inin.gob.mx [ININ, Departamento de Metrologia de Radiaciones Ionizantes, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)
2014-10-15
In this work a historical review of the development of the exposure quantity and its unit, the roentgen (1905-2011), is presented, noting that it originated in the electrical methods used to detect ionizing radiation in the period 1895-1937. However, ionization is not what best characterizes the physical, chemical and biological effects of ionizing radiation; rather, it is the energy deposited by the radiation in the bodies of interest. This led historically to the development of dosimetric quantities defined in terms of energy, namely the absorbed dose D (1950), the kerma K (1958) and the dose equivalent H (1962). These dosimetric quantities culminated in the definition of the effective dose equivalent, or effective dose, which is not measurable and must therefore be considered together with the ICRU operational quantities, the ambient dose equivalent and/or the directional dose equivalent, which can be determined by applying a conversion coefficient to the exposure, the air kerma, the fluence, etc. (Author)
Schmeltz, Line
2017-01-01
Companies experience increasing legal and societal pressure from a number of different publics to communicate about their corporate social responsibility (CSR) engagements. One very important group is that of young consumers, who are predicted to be the most important and influential consumer group in the near future. From a value-theoretical base, this article empirically explores the role and applicability of 'fit' in strategic CSR communication targeted at young consumers. Point of departure is taken in the well-known strategic fit (a logical link between a company's CSR commitment and its core values), which is further developed by introducing two additional fits, the CSR-Consumer fit and the CSR-Consumer-Company fit (Triple Fit). Through a sequential design, the three fits are empirically tested and their potential for meeting young consumers' expectations for corporate CSR messaging…
Fast Regional Magnitude Determination at INGV
Michelini, A.; Lomax, A.; Bono, A.; Amato, A.
2006-12-01
The recent, very large earthquakes in the Indian Ocean and Indonesia have shown the importance of rapid magnitude determination for tsunami warning. In the Mediterranean region, destructive tsunamis have occurred repeatedly in the past; however, because of the proximity of the tsunami sources to populated coasts, very rapid analysis is necessary for effective warning. Reliable estimates of the earthquake location and size should be available within tens of seconds after the first arriving P-waves are recorded at local and regional distances. Currently in Europe there is no centralized agency, such as the PTWC for the Pacific Ocean, dedicated to issuing tsunami warnings, though recent initiatives, such as the NEAMTWS (North-East Atlantic and Mediterranean Tsunami Warning System), aim toward the establishment of such an agency. Thus, established seismic monitoring centers, such as INGV, Rome, are currently relied upon for rapid earthquake analysis and information dissemination. In this study, we describe the recent, experimental implementation at the INGV seismic center of a procedure for rapid magnitude determination at regional distances based on the Mwp methodology (Tsuboi et al., 1995), which exploits information in the P-wave train. For our Mwp determinations, we have implemented an automatic procedure that windows the relevant part of the seismograms and picks the amplitudes of the first two largest peaks, providing within seconds after each P arrival an estimate of earthquake size. Manual revision is completed using interactive software that presents an analysis with the seismograms, amplitude picks and magnitude estimates. We have compared our Mwp magnitudes for recent earthquakes within the Mediterranean region with Mw determined through the Harvard CMT procedure. For the majority of the events, the Mwp and Mw magnitudes agree closely, indicating that the rapid Mwp estimates form a useful tool for effective tsunami warning on a regional scale.
2015-09-01
Navy enlisted classifications (NECs) denote special skills beyond those associated with a rating. They are used in defining manpower requirements and in managing personnel by tracking sailors who have acquired these skills. NEC Fit measures more than the crew's total skill sets; it also accounts for how these sailors are used by crediting an NEC… NEC Fit is one of two primary metrics that Navy leadership…
Justo Alpanes, J. L. de; Carrasco Romero, R.; Martin Martin, A. J.
1999-08-01
A usual procedure in seismic hazard studies is to associate the potential sources of earthquakes with relatively broad zones, called seismogenic zones, with homogeneous seismic and tectonic characteristics. The process of earthquake generation is, in each zone, homogeneous in space and time. Seventeen zones are assumed to affect the Andalusian region, associated with the main tectonic structures of the Ibero-Maghrebian region. Both log-linear and log-quadratic magnitude-frequency laws have been considered. The latter relationship usually fits the magnitude data better, although in some zones a linear relationship is clearly established. (Author) 9 refs.
Corneal topography and soft contact lens fit.
Young, Graeme; Schnider, Cristina; Hunt, Chris; Efron, Suzanne
2010-05-01
To determine which ocular topography variables affect soft contact lens fit. Fifty subjects each wore three pairs of soft lenses in random succession (Vistakon Acuvue 2, Vistakon Acuvue Advance, Ciba Vision Night & Day), and various aspects of lens fit were evaluated. The steeper base curves of each type were worn in one eye and the flatter base curves in the other eye. Corneal topography data were collected using a Medmont E300 corneal topographer (Camberwell, Australia). Corneal curvature, shape factor (SF), and corneal height were measured over a 10 mm chord and also over the maximum measurable diameter. These were measured in the horizontal, vertical, steepest, and flattest meridians. With each lens type, the steeper base curve provided the best fit on the greatest proportion of eyes and the significant differences in various aspects of fit were noted between base curves. For each lens type, there was no significant difference in mean K-reading between those eyes best fit with the steeper base curve and those eyes best fit with the flatter base curve. Two of the lenses showed a positive correlation between centration and horizontal corneal height (maximum), whereas one lens showed a negative correlation between centration and horizontal SF (SF = e). Several lenses showed a positive correlation between post-blink movement and horizontal or vertical corneal SF. The measurement of corneal topography using current Placido disc instrumentation allows a better prediction of soft lens fit than by keratometry, but it is not reliable enough to enable accurate selection of the best fitting base curve. Some correlations are evident between corneal measurements; however, trial fitting remains the method of choice for selection of soft lens base curve.
Leibovich, Tali; Katzin, Naama; Harel, Maayan; Henik, Avishai
2016-08-17
In this review, we pit two theories against each other: the more accepted 'number sense' theory, suggesting that a sense of number is innate and that non-symbolic numerosity is processed independently of continuous magnitudes (e.g., size, area, density); and the newly emerging theory suggesting that (1) both numerosities and continuous magnitudes are processed holistically when comparing numerosities, and (2) a sense of number might not be innate. In the first part of this review, we discuss the 'number sense' theory. Against this background, we demonstrate how the natural correlation between numerosities and continuous magnitudes makes it nearly impossible to study non-symbolic numerosity processing in isolation from continuous magnitudes, and therefore the results of behavioral and imaging studies with infants, adults and animals can be explained, at least in part, by relying on continuous magnitudes. In the second part, we explain the 'sense of magnitude' theory and review studies that directly demonstrate that continuous magnitudes are more automatic and basic than numerosities. Finally, we present outstanding questions. Our conclusion is that there is no longer enough convincing evidence to support the number sense theory. Therefore, we encourage researchers not to assume that number sense is simply innate, but to put this hypothesis to the test, and to consider whether such an assumption is even testable in light of the correlation of numerosity and continuous magnitudes.
The Ml Magnitude Scale In Italy
Gasperini, P.; Lolli, B.; Filippucci, M.; de Simoni, B.
To improve the reliability of Ml magnitude estimates in Italy, we have updated the database of real Wood-Anderson (WA) and of simulated Wood-Anderson (SWA) amplitudes recently revised by Gasperini (2002). This was done by re-reading original WA seismograms, made available by the SISMOS Project of the Istituto Nazionale di Geofisica (INGV), as well as by analyzing further Very Broad Band (VBB) recordings of the MEDNET network of INGV for the period from 1996 to 1998. The full operability, in the last five years, of a VBB station located exactly at the same site (TRI) as a former WA instrument allowed us to reliably infer a new attenuation function from the joint WA and SWA dataset. We found a significant deviation of the attenuation law from the standard Richter table at distances larger than 400 km, where the latter overestimates the magnitude by up to about 0.3 units. We also computed regionalized attenuation functions accounting for the differences in the propagation properties of seismic waves between the Adriatic (less attenuating) and Tyrrhenian (more attenuating) sides of the Italian peninsula. Using this improved Ml magnitude database we were also able to further improve the computation of duration (Md) and amplitude (Ma) magnitudes from short-period vertical seismometers of the INGV, as well as to analyze the time variation of the station calibrations. We found that the absolute amplification of INGV stations has been underestimated by almost exactly a factor of 2 since the entry into operation of the digital acquisition system at INGV in mid-1984.
Toward Order-of-Magnitude Cascade Prediction
Guo, Ruocheng; Shaabani, Elham; Bhatnagar, Abhinav; Shakarian, Paulo
2015-01-01
When a piece of information (microblog, photograph, video, link, etc.) starts to spread in a social network, an important question arises: will it spread to "viral" proportions -- where "viral" is defined as an order-of-magnitude increase. However, several previous studies have established that cascade size and frequency are related through a power-law - which leads to a severe imbalance in this classification problem. In this paper, we devise a suite of measurements based on "structural dive...
Giardino, P. P.; Kannike, K.; Masina, I.
2014-01-01
Higgs models, models with extra Higgs doublets, supersymmetry, extra particles in the loops, anomalous top couplings, and invisible Higgs decays into Dark Matter. Best fit regions lie around the Standard Model predictions and are well approximated by our 'universal' fit. Latest data exclude the dilaton...
Maiorano, Joseph J.
2001-01-01
Fit 2-B FATHERS is a parenting-skills education program for incarcerated adult males. The goals of this program are for participants to have reduced recidivism rates and a reduced risk of their children acquiring criminal records. These goals are accomplished by helping participants become physically, practically, and socially fit for the demands…
2004-01-01
The mineralogy of 'Bounce' rock was determined by fitting spectra from a library of laboratory minerals to the spectrum of Bounce taken by the Mars Exploration Rover Opportunity's miniature thermal emission spectrometer. The minerals that give the best fit include pyroxene, plagioclase and olivine -- minerals commonly found in basaltic volcanic rocks -- and typical martian dust produced by the rover's rock abrasion tool.
Karen; Clark
2005-01-01
Summer is a time to exercise and keep fit. Ask yourself these quick questions and check your score below. How fit are you? 1. What is your pulse [脉搏]? Find your pulse in your wrist [手腕], count the number of beats [跳动] in one minute. Now…
Vertiz, Virginia C.; Downey, Carolyn J.
This paper proposes a two-pronged approach for examining an educational program's "quality of fit." The American Association of School Administrators' (AASA's) Curriculum Management Audit for quality indicators is reviewed, using the Downey Quality Fit Framework and Deming's 4 areas of profound knowledge and 14 points. The purpose is to…
Log-scaling magnitude modulated watermarking scheme
LING HeFei; YUAN WuGang; ZOU FuHao; LU ZhengDing
2007-01-01
A real-time watermarking scheme with high robustness and security is proposed, based on modulating the log-scaling magnitudes of DCT coefficients; it is most suitable for JPEG images and MPEG streams. Each watermark bit is encoded as the sign of the difference between the log-scaling magnitude of an individual group-region and the average over all group-regions. The log-scaling magnitude can be modulated by modifying the low- and middle-frequency DCT coefficients imperceptibly. The robustness of the scheme depends not only on the largest coefficients, but also on the other coefficients in the same proportion. It can embed 512 bits into an image of size 512×512, which satisfies the payload requirement of most video watermarking applications. Moreover, watermark embedding requires only one-sixth of the time consumed during normal playing of the video, and watermark detection only one-twelfth, which meets the real-time requirements of most video watermarking applications. Furthermore, the experimental results show that the presented scheme is transparent and robust to significant valumetric distortions (including additive noise, low-pass filtering, lossy compression and valumetric scaling) and to some geometric distortions. It performs much better than the EMW algorithm in resisting all kinds of distortions except Gaussian noise with a large deviation.
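The sign-based encoding rule can be sketched without the DCT and perceptual details of the actual scheme. The following is an illustrative simplification, not the paper's algorithm: the `embed_bit`/`extract_bit` names and the `strength` parameter are invented, and plain random numbers stand in for DCT coefficients:

```python
import numpy as np

def embed_bit(coeffs, group, bit, strength=0.2):
    """Illustrative sketch only: encode one bit as the sign of
    (group mean log-magnitude - global mean log-magnitude)."""
    logmag = np.log(np.abs(coeffs) + 1e-12)
    target = logmag.mean() + (strength if bit else -strength)
    shift = target - logmag[group].mean()
    out = coeffs.copy()
    out[group] *= np.exp(shift)  # multiplying scales the log-magnitude by `shift`
    return out

def extract_bit(coeffs, group):
    logmag = np.log(np.abs(coeffs) + 1e-12)
    return int(logmag[group].mean() > logmag.mean())

rng = np.random.default_rng(1)
c = rng.normal(size=256)   # stand-in for DCT coefficients
grp = np.arange(32)        # one group-region
bit1 = extract_bit(embed_bit(c, grp, 1), grp)
bit0 = extract_bit(embed_bit(c, grp, 0), grp)
print(bit1, bit0)  # 1 0
```

Because the embedded difference is a relative sign rather than an absolute value, uniform scaling of all coefficients leaves the extracted bit unchanged, which is the intuition behind the scheme's robustness to valumetric scaling.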
Bethmann, F.
2011-03-22
Theoretical considerations and empirical regressions show that, in the magnitude range between 3 and 5, local magnitude, ML, and moment magnitude, Mw, scale 1:1. Previous studies suggest that for smaller magnitudes this 1:1 scaling breaks down. However, the scatter between ML and Mw at small magnitudes is usually large and the resulting scaling relations are therefore uncertain. In an attempt to reduce these uncertainties, we first analyze the ML versus Mw relation based on 195 events, induced by the stimulation of a geothermal reservoir below the city of Basel, Switzerland. Values of ML range from 0.7 to 3.4. From these data we derive a scaling of ML ~ 1.5 Mw over the given magnitude range. We then compare peak Wood-Anderson amplitudes to the low-frequency plateau of the displacement spectra for six sequences of similar earthquakes in Switzerland in the range of 0.5 ≤ ML ≤ 4.1. Because effects due to the radiation pattern and to the propagation path between source and receiver are nearly identical at a particular station for all events in a given sequence, the scatter in the data is substantially reduced. Again we obtain a scaling equivalent to ML ~ 1.5 Mw. Based on simulations using synthetic source time functions for different magnitudes and Q values estimated from spectral ratios between downhole and surface recordings, we conclude that the observed scaling can be explained by attenuation and scattering along the path. Other effects that could explain the observed magnitude scaling, such as a possible systematic increase of stress drop or rupture velocity with moment magnitude, are masked by attenuation along the path.
Regional Frequency Analysis of Annual Maximum Rainfall in Monsoon Region of Pakistan using L-moments
Amina Shahzadi
2013-02-01
The estimation of the magnitude and frequency of extreme rainfall is of immense importance for decisions about hydraulic structures such as spillways, dikes and dams. The main objective of this study is to find the best-fit distributions for annual maximum rainfall data on a regional basis, in order to estimate extreme rainfall events (quantiles) for various return periods. The study is carried out using the index-flood method with L-moments of Hosking and Wallis (1997), and is based on 23 rainfall sites divided into three homogeneous regions. The collective results of the L-moment ratio diagram, the Z-statistic and the AWD values show GLO, GEV and GNO to be the best fits for all three regions, with PE3 in addition for region 3. On the basis of relative RMSE, for regions 1 and 2, GLO, GEV and GNO produce approximately the same relative RMSE for return periods up to 100, while GNO produces a lower relative RMSE for the large return periods of 500 and 1000; so for large return periods GNO could be the best distribution. For region 3, GLO, GEV, GNO and PE3 have approximately the same relative RMSE for return periods up to 100, while for the large return periods of 500 and 1000, PE3 could be best on the basis of its lower relative RMSE.
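The sample L-moments on which the Hosking-Wallis method rests are computed from probability-weighted moments. A minimal sketch on synthetic data (the uniform sample is purely illustrative):

```python
import numpy as np

def l_moments(x):
    """First three sample L-moments via unbiased probability-weighted
    moments, following the Hosking-Wallis formulation."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0                      # L-location (mean)
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0    # third L-moment
    return l1, l2, l3 / l2       # returns (mean, L-scale, L-skewness t3)

# For a symmetric (uniform) sample, L-skewness should be near 0:
rng = np.random.default_rng(7)
l1, l2, t3 = l_moments(rng.uniform(0, 100, size=5000))
print(round(l1), round(t3, 2))
```

Ratios such as t3 are what the L-moment ratio diagram in the abstract plots when discriminating between candidate distributions like GLO, GEV, GNO and PE3.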
Limitations of inclusive fitness.
Allen, Benjamin; Nowak, Martin A; Wilson, Edward O
2013-12-10
Until recently, inclusive fitness has been widely accepted as a general method to explain the evolution of social behavior. Affirming and expanding earlier criticism, we demonstrate that inclusive fitness is instead a limited concept, which exists only for a small subset of evolutionary processes. Inclusive fitness assumes that personal fitness is the sum of additive components caused by individual actions. This assumption does not hold for the majority of evolutionary processes or scenarios. To sidestep this limitation, inclusive fitness theorists have proposed a method using linear regression. On the basis of this method, it is claimed that inclusive fitness theory (i) predicts the direction of allele frequency changes, (ii) reveals the reasons for these changes, (iii) is as general as natural selection, and (iv) provides a universal design principle for evolution. In this paper we evaluate these claims, and show that all of them are unfounded. If the objective is to analyze whether mutations that modify social behavior are favored or opposed by natural selection, then no aspect of inclusive fitness theory is needed.
Apparent magnitude of earthshine: a simple calculation
Agrawal, Dulli Chandra
2016-05-01
The Sun illuminates both the Moon and the Earth with practically the same luminous fluxes which are in turn reflected by them. The Moon provides a dim light to the Earth whereas the Earth illuminates the Moon with somewhat brighter light which can be seen from the Earth and is called earthshine. As the amount of light reflected from the Earth depends on part of the Earth and the cloud cover, the strength of earthshine varies throughout the year. The measure of the earthshine light is luminance, which is defined in photometry as the total luminous flux of light hitting or passing through a surface. The expression for the earthshine light in terms of the apparent magnitude has been derived for the first time and evaluated for two extreme cases; firstly, when the Sun’s rays are reflected by the water of the oceans and secondly when the reflector is either thick clouds or snow. The corresponding values are -1.30 and -3.69, respectively. The earthshine value -3.22 reported by Jackson lies within these apparent magnitudes. This paper will motivate the students and teachers of physics to look for the illuminated Moon by earthlight during the waning or waxing crescent phase of the Moon and to reproduce the expressions derived here by making use of the inverse-square law of radiation, Planck’s expression for the power in electromagnetic radiation, photopic spectral luminous efficiency function and expression for the apparent magnitude of a body in terms of luminous fluxes.
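The two earthshine magnitudes quoted in the abstract are related to luminous flux through the Pogson relation, m1 − m2 = −2.5 log10(F1/F2); a quick numerical check:

```python
import math

def magnitude_difference(flux_ratio):
    """Pogson relation: m1 - m2 = -2.5 * log10(F1 / F2)."""
    return -2.5 * math.log10(flux_ratio)

# A source 100x brighter is 5 magnitudes "brighter" (more negative):
print(magnitude_difference(100.0))  # -5.0

# Flux ratio between the cloud/snow case (-3.69) and the ocean case (-1.30)
# from the abstract:
ratio = 10 ** ((-1.30 - (-3.69)) / 2.5)
print(round(ratio, 1))  # 9.0
```

So the cloud- or snow-reflected earthshine is roughly nine times more luminous than the ocean-reflected case, even though the two magnitudes differ by only about 2.4.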
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Blanquart, François; Bataillon, Thomas
2016-06-01
The fitness landscape defines the relationship between genotypes and fitness in a given environment and underlies fundamental quantities such as the distribution of selection coefficients and the magnitude and type of epistasis. A better understanding of variation in landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher's model) to 26 empirical landscapes representing nine diverse biological systems. Despite uncertainty owing to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness-of-fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is plausible in only three of nine biological systems. More precisely, although Fisher's model was able to explain several statistical properties of the landscapes, including the mean and SD of selection and epistasis coefficients, it was often unable to explain the full structure of fitness landscapes.
Estimating station noise thresholds for seismic magnitude bias elimination
Peacock, Sheila
2014-05-01
To eliminate the upward bias of seismic magnitude caused by censoring of signal hidden by noise, the noise level at each station in a network must be estimated. Where noise levels are not measured directly, the method of Kelly and Lacoss (1969) has been used to infer them from bulletin data (Lilwall and Douglas 1984). To verify this estimate of noise level, noise thresholds of International Monitoring System (IMS) stations inferred from the International Data Centre (IDC) Reviewed Event Bulletin (REB) by the Kelly and Lacoss method for 2005-2013 are compared with direct measurements on (i) noise preceding first arrivals in filtered (0.8-4.5 Hz) IMS seismic data, and (ii) noise preceding the expected time of arrival of signals from events where the signal was not actually seen (values gathered by the IDC for maximum-likelihood magnitude calculation). For most stations the direct pre-signal noise measurements are ~0.25 units of log A/T lower than the Kelly and Lacoss thresholds, because the IDC automatic system declares a detection only when the short-term-average-to-long-term-average ratio exceeds a threshold that varies with station and frequency band between about 3 and 6. The noise values at expected times of non-observed signal arrivals are ~0.15 units lower than the Kelly and Lacoss thresholds. Exceptions are caused by faulty channels being used for the direct noise or body-wave magnitude (mb) measurements or, for station ARCES and possibly FINES, SPITS and HFS, by the wider filter used for signal amplitude than for signal detection admitting noise that swamped the signal. Abrupt changes in thresholds might reveal mis-documented sensor sensitivity changes at individual stations.
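The censoring bias that motivates this method is easy to reproduce in a toy simulation (the amplitude distribution and threshold below are illustrative assumptions, not IMS values): averaging only the detected signals overestimates the true mean log(A/T).

```python
import random, statistics

random.seed(7)

# Hypothetical station: true log(A/T) amplitudes of incoming signals, and a
# fixed noise threshold below which signals are hidden (censored) by noise.
true_logAT = [random.gauss(0.0, 0.4) for _ in range(50000)]
noise_threshold = 0.1

detected = [x for x in true_logAT if x > noise_threshold]

true_mean = statistics.mean(true_logAT)
naive_mean = statistics.mean(detected)  # biased upward: the weak signals are missing

print(round(true_mean, 3), round(naive_mean, 3))
```

The naive network-average magnitude is pulled upward by the missing weak observations, which is exactly the bias that maximum-likelihood magnitude estimation with station noise thresholds removes.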
Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.
2011-07-01
Structural parameters are normally extracted from observed galaxies by fitting analytic light profiles to the observations. Obtaining accurate fits to high-resolution images is a computationally expensive task, requiring many model evaluations and convolutions with the imaging point spread function. While these algorithms contain high degrees of parallelism, current implementations do not exploit this property. With ever-growing volumes of observational data, an inability to make use of advances in computing power can act as a constraint on scientific outcomes. This is the motivation behind our work, which aims to implement the model-fitting procedure on a graphics processing unit (GPU). We begin by analysing the algorithms involved in model evaluation with respect to their suitability for modern many-core computing architectures like GPUs, finding them to be well-placed to take advantage of the high memory bandwidth offered by this hardware. Following our analysis, we briefly describe a preliminary implementation of the model fitting procedure using freely-available GPU libraries. Early results suggest a speed-up of around 10× over a CPU implementation. We discuss the opportunities such a speed-up could provide, including the ability to use more computationally expensive but better-performing fitting routines to increase the quality and robustness of fits.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
A Monte Carlo Method for Making the SDSS u-Band Magnitude More Accurate
Gu, Jiayin; Du, Cuihua; Zuo, Wenbo; Jing, Yingjie; Wu, Zhenyu; Ma, Jun; Zhou, Xu
2016-10-01
We develop a new Monte Carlo-based method to convert Sloan Digital Sky Survey (SDSS) u-band magnitudes to South Galactic Cap u-band Sky Survey (SCUSS) u-band magnitudes. Owing to the higher accuracy of the SCUSS u-band measurements, the converted u-band magnitude is more accurate than the original SDSS u-band magnitude, in particular at the faint end. The average u-magnitude error (for both SDSS and SCUSS) of numerous main-sequence stars with 0.2 < g − r < 0.8 increases as the g-band magnitude becomes fainter. When g = 19.5, the average magnitude error of the SDSS u is 0.11. When g = 20.5, the average SDSS u error rises to 0.22. However, at this magnitude, the average magnitude error of the SCUSS u is just half as much as that of the SDSS u. The SDSS u-band magnitudes of main-sequence stars with 0.2 < g − r < 0.8 and 18.5 < g < 20.5 are converted, so that the maximum average error of the converted u-band magnitudes is 0.11. A potential application of this conversion is to derive a more accurate photometric metallicity calibration from SDSS observations, especially for more distant stars. Thus, we can explore stellar metallicity distributions in the Galactic halo or in stream stars.
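A minimal sketch of Monte Carlo error propagation through a magnitude conversion; the linear conversion coefficients below are illustrative placeholders, not the authors' calibration:

```python
import random, statistics

random.seed(3)

# Hypothetical linear conversion u_SCUSS = a * u_SDSS + b, with Monte Carlo
# propagation of the per-star magnitude error (a, b are NOT the paper's values).
a, b = 0.98, 0.35

def convert(u_sdss, u_err, n=10000):
    """Draw noisy realizations of the input magnitude and push each one
    through the conversion; return the mean and spread of the results."""
    draws = [a * random.gauss(u_sdss, u_err) + b for _ in range(n)]
    return statistics.mean(draws), statistics.stdev(draws)

mean, err = convert(20.0, 0.22)  # a faint star with the quoted SDSS u error
print(round(mean, 2), round(err, 2))
```

For a linear conversion the propagated error is simply a times the input error; the Monte Carlo approach generalizes to nonlinear or empirically tabulated conversions where no closed form exists.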
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has been used not only as a physics law, but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Earthquake rate and magnitude distributions of great earthquakes for use in global forecasts
Kagan, Yan Y.; Jackson, David D.
2016-07-01
We have obtained new results in the statistical analysis of global earthquake catalogues, with special attention to the largest earthquakes, and we have examined the statistical behaviour of earthquake rate variations. These results can serve as input for updating our recent earthquake forecast, known as the 'Global Earthquake Activity Rate 1' model (GEAR1), which is based on past earthquakes and geodetic strain rates. The GEAR1 forecast is expressed as the rate density of all earthquakes above magnitude 5.8 within 70 km of sea level everywhere on Earth at 0.1 × 0.1 degree resolution, and it is currently being tested by the Collaboratory for the Study of Earthquake Predictability. The seismic component of the present model is based on a smoothed version of the Global Centroid Moment Tensor (GCMT) catalogue from 1977 through 2013. The tectonic component is based on the Global Strain Rate Map, a 'General Earthquake Model' (GEM) product. The forecast was optimized to fit the GCMT data from 2005 through 2012, but it also fits well the earthquake locations from 1918 to 1976 reported in the International Seismological Centre-Global Earthquake Model (ISC-GEM) global catalogue of instrumental and pre-instrumental magnitude determinations. We have improved the recent forecast by optimizing the treatment of larger magnitudes and by including a longer-duration (1918-2011) ISC-GEM catalogue of large earthquakes to estimate smoothed seismicity. We revised our estimates of upper magnitude limits, described as corner magnitudes, based on the massive earthquakes since 2004 and the seismic moment conservation principle. The new corner magnitude estimates are somewhat larger than, but consistent with, our previous estimates. For major subduction zones we find the best estimates of corner magnitude to be in the range 8.9 to 9.6, consistent with a uniform average of 9.35. Statistical estimates tend to grow with time as larger earthquakes occur. However, by using the moment conservation
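The corner magnitude enters through a tapered Gutenberg-Richter distribution; a minimal sketch, assuming the common moment-space taper (the exact parameterization used by the authors may differ):

```python
import math

def moment_from_magnitude(m):
    """Scalar seismic moment (N·m) from moment magnitude."""
    return 10 ** (1.5 * m + 9.05)

def tapered_gr_survival(m, m_t=5.8, m_corner=9.35, beta=2/3):
    """Fraction of events at or above magnitude m for the tapered
    Gutenberg-Richter form: Phi(M) = (M_t/M)^beta * exp((M_t - M)/M_c),
    valid for M >= M_t (threshold magnitude m_t, corner magnitude m_corner)."""
    M, Mt, Mc = (moment_from_magnitude(x) for x in (m, m_t, m_corner))
    return (Mt / M) ** beta * math.exp((Mt - M) / Mc)

# The survival probability follows a power law at moderate magnitudes and
# falls off exponentially fast above the corner magnitude.
for m in (6.0, 8.0, 9.0, 9.5):
    print(m, tapered_gr_survival(m))
```

The corner magnitude thus acts as a soft upper limit: events above it are not forbidden, merely exponentially rare, which is why estimates of it grow as larger earthquakes are observed.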
Magnitude of Interfractional Vaginal Cuff Movement: Implications for External Irradiation
Ma, Daniel J. [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO (United States); Michaletz-Lorenz, Martha [Department of Education and Training, Elekta, Maryland Heights, MO (United States); Goddu, S. Murty [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO (United States); Grigsby, Perry W., E-mail: pgrigsby@wustl.edu [Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO (United States); Division of Nuclear Medicine, Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO (United States); Department of Obstetrics and Gynecology, Washington University School of Medicine, St. Louis, MO (United States)
2012-03-15
Purpose: To quantify the extent of interfractional vaginal cuff movement in patients receiving postoperative irradiation for cervical or endometrial cancer in the absence of bowel/bladder instruction. Methods and Materials: Eleven consecutive patients with cervical or endometrial cancer underwent placement of three gold seed fiducial markers in the vaginal cuff apex as part of standard of care before simulation. Patients subsequently underwent external irradiation and brachytherapy treatment based on institutional guidelines. Daily megavoltage CT imaging was performed during each external radiation treatment fraction. The daily positions of the vaginal apex fiducial markers were subsequently compared with the original position of the fiducial markers on the simulation CT. Composite dose-volume histograms were also created by summing daily target positions. Results: The average (± standard deviation) vaginal cuff movement throughout daily pelvic external radiotherapy when referenced to the simulation position was 16.2 ± 8.3 mm. The maximum vaginal cuff movement for any patient during treatment was 34.5 mm. In the axial plane the mean vaginal cuff movement was 12.9 ± 6.7 mm. The maximum vaginal cuff axial movement was 30.7 mm. In the craniocaudal axis the mean movement was 10.3 ± 7.6 mm, with a maximum movement of 27.0 mm. Probability of cuff excursion outside of the clinical target volume steadily dropped as margin size increased (53%, 26%, 4.2%, and 1.4% for 1.0, 1.5, 2.0, and 2.5 cm margins, respectively). However, rectal and bladder doses steadily increased with larger margin sizes. Conclusions: The magnitude of vaginal cuff movement is highly patient specific and can impact target coverage in patients without bowel/bladder instructions at simulation. The use of vaginal cuff fiducials can help identify patients at risk for target volume excursion.
Evaluation of the magnitude of EBT Gafchromic film polarization effects.
Butson, M J; Cheung, T; Yu, P K N
2009-03-01
Gafchromic EBT film has become one of the main dosimetric tools for quantitative evaluation of radiation doses in radiation therapy applications. One aspect of variability in using EBT Gafchromic film is the magnitude of the orientation effect when analysing the film in landscape or portrait mode. This work utilized a >99% plane-polarized light source and a non-polarized diffuse light source to investigate the absolute magnitude of EBT Gafchromic film's polarization, or orientation, effects. Results show that using a non-polarized light source produces a negligible orientation effect for EBT Gafchromic film, and thus the angle of orientation is not important. However, the film exhibits a significant variation in transmitted optical density with angle of orientation to polarized light, producing more than a 100% increase, or over a doubling, of measured OD for films irradiated with x-rays up to dose levels of 5 Gy. The maximum optical density was found to be in a plane at an angle of 14 ± 7 degrees (2 SD) when the polarizing sheet is turned clockwise with respect to the film. As the magnitude of the orientation effect follows a sinusoidal shape, alignment accuracy of the film with respect to the polarizing direction becomes more critical in the anticlockwise direction, as this places the alignment of the polarizing axes on the steeper gradient section of the sinusoidal pattern. An average change of 4.5% per 5 degrees is seen for an anticlockwise polarizer rotation, whereas the effect is 1.2% per 5 degrees for a clockwise polarizer rotation. This may have consequences for the positional accuracy of placement of the EBT Gafchromic film on a scanner, as even a 1 degree alignment error can cause an approximately 1% error in analysis. The magnitude of the orientation effect is therefore dependent on the degree of polarization of the scanning light source and can range from negligible (diffuse LED light source) through to more than 100% or doubling of OD variation
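The asymmetry between clockwise and anticlockwise rotations follows directly from a sinusoid whose peak sits off-axis; a toy model (the amplitude and baseline below are illustrative, not a fit to the film data):

```python
import math

# Toy model (assumption, not the paper's measurement): transmitted OD varies
# sinusoidally with polarizer angle, peaking at +14 degrees (clockwise).
PEAK_DEG = 14.0

def od(theta_deg, od0=1.0, amp=0.5):
    """Sinusoidal orientation effect, 180-degree periodic in polarizer angle."""
    return od0 + amp * math.cos(math.radians(2 * (theta_deg - PEAK_DEG)))

# Change over a 5-degree rotation starting from film alignment (0 degrees):
clockwise = abs(od(5) - od(0))       # rotating toward the peak: shallow gradient
anticlockwise = abs(od(-5) - od(0))  # rotating away from the peak: steeper gradient
print(round(clockwise, 4), round(anticlockwise, 4))
```

Because the alignment point sits on the flank of the sinusoid rather than at its peak, equal rotations in the two directions sample different local gradients, reproducing the reported clockwise/anticlockwise asymmetry qualitatively.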
Violence against women: global scope and magnitude.
Watts, Charlotte; Zimmerman, Cathy
2002-04-06
An increasing amount of research is beginning to offer a global overview of the extent of violence against women. In this paper we discuss the magnitude of some of the most common and most severe forms of violence against women: intimate partner violence; sexual abuse by non-intimate partners; trafficking, forced prostitution, exploitation of labour, and debt bondage of women and girls; physical and sexual violence against prostitutes; sex selective abortion, female infanticide, and the deliberate neglect of girls; and rape in war. There are many potential perpetrators, including spouses and partners, parents, other family members, neighbours, and men in positions of power or influence. Most forms of violence are not unique incidents but are ongoing, and can even continue for decades. Because of the sensitivity of the subject, violence is almost universally under-reported. Nevertheless, the prevalence of such violence suggests that globally, millions of women are experiencing violence or living with its consequences.
Nonlinear susceptibility magnitude imaging of magnetic nanoparticles
Ficko, Bradley W., E-mail: Bradley.W.Ficko@Dartmouth.edu; Giacometti, Paolo; Diamond, Solomon G.
2015-03-15
This study demonstrates a method for improving the resolution of susceptibility magnitude imaging (SMI) using spatial information that arises from the nonlinear magnetization characteristics of magnetic nanoparticles (mNPs). In this proof-of-concept study of nonlinear SMI, a pair of drive coils and several permanent magnets generate applied magnetic fields and a coil is used as a magnetic field sensor. Sinusoidal alternating current (AC) in the drive coils results in linear mNP magnetization responses at primary frequencies, and nonlinear responses at harmonic frequencies and intermodulation frequencies. The spatial information content of the nonlinear responses is evaluated by reconstructing tomographic images with sequentially increasing voxel counts using the combined linear and nonlinear data. Using the linear data alone it is not possible to accurately reconstruct more than 2 voxels with a pair of drive coils and a single sensor. However, nonlinear SMI is found to accurately reconstruct 12 voxels (R² = 0.99, CNR = 84.9) using the same physical configuration. Several time-multiplexing methods are then explored to determine if additional spatial information can be obtained by varying the amplitude, phase and frequency of the applied magnetic fields from the two drive coils. Asynchronous phase modulation, amplitude modulation, intermodulation phase modulation, and frequency modulation all resulted in accurate reconstruction of 6 voxels (R² > 0.9), indicating that time multiplexing is a valid approach to further increase the resolution of nonlinear SMI. The spatial information content of nonlinear mNP responses and the potential for resolution enhancement with time multiplexing demonstrate the concept and advantages of nonlinear SMI. - Highlights: • Development of a nonlinear susceptibility magnitude imaging model • Demonstration of nonlinear SMI with primary and harmonic frequencies • Demonstration of nonlinear SMI with primary and intermodulation
Jaspreet Kaur
Fitting parameter sets of non-linear equations in cardiac single-cell ionic models to reproduce experimental behavior is a time-consuming process. The standard procedure is to adjust maximum channel conductances in ionic models to reproduce action potentials (APs) recorded in isolated cells. However, vastly different sets of parameters can produce similar APs. Furthermore, even with an excellent AP match in the single-cell case, tissue behaviour may be very different. We hypothesize that this uncertainty can be reduced by additionally fitting membrane resistance (Rm). To investigate the importance of Rm, we developed a genetic algorithm approach which incorporated Rm data calculated at a few points in the cycle, in addition to AP morphology. Performance was compared to a genetic algorithm using only AP morphology data. The optimal parameter sets and goodness of fit as computed by the different methods were compared. First, we fit an ionic model to itself, starting from a random parameter set. Next, we fit the AP of one ionic model to that of another. Finally, we fit an ionic model to experimentally recorded rabbit action potentials. Adding the extra objective (Rm at a few voltages) to the AP fit led to much better convergence. Typically, a smaller MSE (mean square error, defined as the average of the squared error between the target AP and the fitted AP) was achieved in one fifth of the number of generations compared to using only AP data. Importantly, the variability in fitted parameters was also greatly reduced, with many parameters showing an order-of-magnitude decrease in variability. Adding Rm to the objective function improves the robustness of fitting, better preserving tissue-level behavior, and should be incorporated.
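The combined objective can be sketched as a weighted sum of AP and Rm mismatches; the data, units, and weight below are toy values for illustration, not the authors' implementation:

```python
# Sketch of a combined fitting objective: penalize mismatch in both the action
# potential (AP) trace and the membrane resistance (Rm) at a few voltages.

def mse(a, b):
    """Mean square error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def objective(ap_model, ap_target, rm_model, rm_target, w_rm=1.0):
    """AP morphology error plus a weighted Rm error term."""
    return mse(ap_model, ap_target) + w_rm * mse(rm_model, rm_target)

ap_target = [0.0, 30.0, 10.0, -80.0]  # toy AP samples (mV)
rm_target = [5.0, 12.0]               # toy Rm values (MOhm) at two voltages

perfect = objective(ap_target, ap_target, rm_target, rm_target)
off = objective([0.0, 25.0, 10.0, -80.0], ap_target, [5.0, 14.0], rm_target)
print(perfect, off)
```

A genetic algorithm would minimize this objective over the conductance parameters; the Rm term penalizes parameter sets that match the AP shape but get the membrane resistance, and hence the tissue-level behaviour, wrong.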
Ipsen, Andreas; Ebbels, Timothy M D
2014-10-01
In a recent article, we derived a probability distribution that was shown to closely approximate that of the data produced by liquid chromatography time-of-flight mass spectrometry (LC/TOFMS) instruments employing time-to-digital converters (TDCs) as part of their detection system. The approach of formulating detailed and highly accurate mathematical models of LC/MS data via probability distributions that are parameterized by quantities of analytical interest does not appear to have been fully explored before. However, we believe it could lead to a statistically rigorous framework for addressing many of the data analytical problems that arise in LC/MS studies. In this article, we present new procedures for correcting for TDC saturation using such an approach and demonstrate that there is potential for significant improvements in the effective dynamic range of TDC-based mass spectrometers, which could make them much more competitive with the alternative analog-to-digital converters (ADCs). The degree of improvement depends on our ability to generate mass and chromatographic peaks that conform to known mathematical functions and our ability to accurately describe the state of the detector dead time-tasks that may be best addressed through engineering efforts.
Maximum-likelihood cluster reconstruction
Bartelmann, M; Seitz, S; Schneider, P J; Bartelmann, Matthias; Narayan, Ramesh; Seitz, Stella; Schneider, Peter
1996-01-01
We present a novel method to reconstruct the mass distribution of galaxy clusters from their gravitational lens effect on background galaxies. The method is based on a least-chi-square fit of the two-dimensional gravitational cluster potential. The method combines information from shear and magnification by the cluster lens and is designed to easily incorporate possible additional information. We describe the technique and demonstrate its feasibility with simulated data. Both the cluster morphology and the total cluster mass are well reproduced.
Alternative Astronomical FITS imaging
Varsaki, Eleni E; Fotopoulos, Vassilis; Skodras, Athanassios N
2012-01-01
Astronomical radio maps are presented mainly in FITS format. The Astronomical Image Processing System (AIPS) uses a set of tables attached to the output map to include all sorts of information concerning the production of the image. However, this information, together with information on the flux and noise of the map, is lost as soon as the image of the radio source in FITS or another format is extracted from AIPS. This information would have been valuable to another astronomer who just uses NED, for example, to download the map. In the current work, we show a method of data hiding inside the radio map which can be preserved under transformations, even, for example, while the format of the map is changed from FITS to other available lossless image formats.
Fitting the Phenomenological MSSM
AbdusSalam, S S; Quevedo, F; Feroz, F; Hobson, M
2010-01-01
We perform a global Bayesian fit of the phenomenological minimal supersymmetric standard model (pMSSM) to current indirect collider and dark matter data. The pMSSM contains the most relevant 25 weak-scale MSSM parameters, which are simultaneously fit using `nested sampling' Monte Carlo techniques in more than 15 years of CPU time. We calculate the Bayesian evidence for the pMSSM and constrain its parameters and observables in the context of two widely different, but reasonable, priors to determine which inferences are robust. We make inferences about sparticle masses, the sign of the $\\mu$ parameter, the amount of fine tuning, dark matter properties and the prospects for direct dark matter detection without assuming a restrictive high-scale supersymmetry breaking model. We find the inferred lightest CP-even Higgs boson mass as an example of an approximately prior independent observable. This analysis constitutes the first statistically convergent pMSSM global fit to all current data.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
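A quick numeric check of the conjectured bound:

```python
# Numeric value of the conjectured maximum tension F_max = c^4 / (4G).
c = 299_792_458.0  # speed of light, m/s (exact by definition)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2

f_max = c ** 4 / (4 * G)
print(f"{f_max:.3e} N")  # on the order of 3e43 N
```

Note the fourth power of c: the c²/4G form that sometimes appears in transcriptions is dimensionally a mass per length, not a force.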
Giardino, P. P.; Kannike, K.; Masina, I.;
2014-01-01
We perform a state-of-the-art global fit to all Higgs data. We synthesise them into a 'universal' form, which allows one to easily test any desired model. We apply the proposed methodology to extract from data the Higgs branching ratios, production cross sections, couplings and to analyse composite H...... as an alternative to the Higgs, and disfavour fits with negative Yukawa couplings. We derive for the first time the SM Higgs boson mass from the measured rates, rather than from the peak positions, obtaining M_h = 124.4 ± 1.6 GeV....
The absolute magnitude distribution of Kuiper Belt objects
Fraser, Wesley C. [Herzberg Institute of Astrophysics, 5071 West Saanich Road, Victoria, BC V9E 2E7 (Canada); Brown, Michael E. [Division of Geological and Planetary Sciences, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Morbidelli, Alessandro [Laboratoire Lagrange, UMR7293, Université de Nice Sophia-Antipolis, CNRS, Observatoire de la Côte d' Azur, BP 4229, F-06304 Nice (France); Parker, Alex [Department of Astronomy, University of California at Berkeley, Berkeley, CA 94720 (United States); Batygin, Konstantin, E-mail: wesley.fraser@nrc.ca [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS 51, Cambridge, MA 02138 (United States)
2014-02-20
Here we measure the absolute magnitude distributions (H-distributions) of the dynamically excited and quiescent (hot and cold) Kuiper Belt objects (KBOs), and test if they share the same H-distribution as the Jupiter Trojans. From a compilation of all useable ecliptic surveys, we find that the KBO H-distributions are well described by broken power laws. The cold population has a bright-end slope α1 = 1.5 (+0.4, −0.2) and break magnitude H_B = 6.9 (+0.1, −0.2) (r'-band). The hot population has a shallower bright-end slope of α1 = 0.87 (+0.07, −0.2) and break magnitude H_B = 7.7 (+1.0, −0.5). Both populations share similar faint-end slopes of α2 ≈ 0.2. We estimate the masses of the hot and cold populations to be ≈0.01 and ≈3 × 10⁻⁴ M⊕. The broken power-law fit to the Trojan H-distribution has α1 = 1.0 ± 0.2, α2 = 0.36 ± 0.01, and H_B = 8.3. The Kolmogorov-Smirnov test reveals that the probability that the Trojans and cold KBOs share the same parent H-distribution is less than 1 in 1000. When the bimodal albedo distribution of the hot objects is accounted for, there is no evidence that the H-distributions of the Trojans and hot KBOs differ. Our findings are in agreement with the predictions of the Nice model in terms of both mass and H-distribution of the hot and Trojan populations. Wide-field survey data suggest that the brightest few hot objects, with H_r' ≲ 3, do not fall on the steep power-law slope of fainter hot objects. Under the standard hierarchical model of planetesimal formation, it is difficult to account for the similar break diameters of the hot and cold populations given the low mass of the cold belt.
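A broken power law of this kind can be sketched as two exponential-in-magnitude segments joined continuously at the break (an illustrative form using the cold-population slopes; the normalization is arbitrary):

```python
import math

def broken_power_law(h, alpha1=1.5, alpha2=0.2, h_break=6.9):
    """Differential number N(H) proportional to 10^(alpha * H), with the slope
    changing from alpha1 to alpha2 at the break magnitude; the faint branch is
    scaled so the two segments join continuously at H = h_break."""
    if h <= h_break:
        return 10 ** (alpha1 * h)
    scale = 10 ** ((alpha1 - alpha2) * h_break)  # continuity at the break
    return scale * 10 ** (alpha2 * h)

# The logarithmic slope (log10 N per magnitude) flattens past the break:
slope_bright = math.log10(broken_power_law(6.0) / broken_power_law(5.0))
slope_faint = math.log10(broken_power_law(9.0) / broken_power_law(8.0))
print(round(slope_bright, 2), round(slope_faint, 2))
```

Fitting such a form to survey counts recovers the bright-end slope, faint-end slope, and break magnitude quoted in the abstract.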
Linking the Fits, Fitting the Links: Connecting Different Types of PO Fit to Attitudinal Outcomes
Leung, Aegean; Chaturvedi, Sankalp
2011-01-01
In this paper we explore the linkages among various types of person-organization (PO) fit and their effects on employee attitudinal outcomes. We propose and test a conceptual model which links various types of fits--objective fit, perceived fit and subjective fit--in a hierarchical order of cognitive information processing and relate them to…
Luís R. A. Gabriel Filho
2012-08-01
Rural electrification is characterized by geographical dispersion of the population, low consumption, high investment per consumer, and high operating cost. Solar radiation, on the other hand, constitutes an inexhaustible source of energy, and photovoltaic panels are used for its conversion into electricity. In this study, the manufacturer's equations for the current and power of small photovoltaic systems were adjusted to field conditions. The mathematical analysis was performed on the ISOFOTON I-100 rural photovoltaic system, with a power of 300 Wp, located at the Lageado Experimental Farm of FCA/UNESP. To develop these equations, the circuitry of the photovoltaic cells was studied, and iterative numerical methods were applied to determine the electrical parameters and to correct the equations in the literature to match observed behaviour. A simulation of a photovoltaic panel was then proposed through mathematical equations adjusted to the local radiation data. The results provide equations that give realistic answers to the user and may assist in the design of these systems, since the calculated maximum power limit ensures the supply of the energy generated. This realistic sizing helps establish the possible applications of solar energy for the rural producer and informs the real possibilities of generating electricity from the sun.
Extended arrays for nonlinear susceptibility magnitude imaging
Ficko, Bradley W.; Giacometti, Paolo; Diamond, Solomon G.
2016-01-01
This study implements nonlinear susceptibility magnitude imaging (SMI) with multifrequency intermodulation and phase encoding. An imaging grid was constructed of cylindrical wells of 3.5-mm diameter and 4.2-mm height on a hexagonal two-dimensional 61-voxel pattern with 5-mm spacing. Patterns of sample wells were filled with 40-μl volumes of Fe3O4 starch-coated magnetic nanoparticles (mNPs) with a hydrodynamic diameter of 100 nm and a concentration of 25 mg/ml. The imaging hardware was configured with three excitation coils and three detection coils in anticipation that a larger imaging system will have arrays of excitation and detection coils. Hexagonal and bar patterns of mNP were successfully imaged (R2 > 0.9) at several orientations. This SMI demonstration extends our prior work to feature a larger coil array, enlarged field-of-view, effective phase encoding scheme, reduced mNP sample size, and more complex imaging patterns to test the feasibility of extending the method beyond the pilot scale. The results presented in this study show that nonlinear SMI holds promise for further development into a practical imaging system for medical applications. PMID:26124044
Childhood Cataract: Magnitude, Management, Economics and Impact
BR Shamanna
2004-01-01
The prevalence of blindness among children in different regions varies from 0.2/1000 children to over 1.5/1000, with a global figure estimated at 0.7/1000. This means that there are an estimated 1.4 million blind children worldwide.1 The proportion of childhood blindness due to cataract varies considerably between regions, from 10% to 30%, with a global average estimated at 14%, giving 190,000 children blind from cataract.2 While the magnitude of childhood cataract varies from place to place, it is a priority within all blindness control programmes for children. Children who are blind have to overcome a lifetime of emotional, social and economic difficulties which affect the child, the family and society.3 Loss of vision in children influences their education, employment and social life. The numbers blind from cataract do not reflect the years of disability and lost quality of life. Childhood blindness is second only to adult cataract as a cause of blind-person years. Approximately 70 million blind-person years are caused by childhood blindness, of which about 10 million blind-person years (14%) are due to childhood cataract. Timely recognition and intervention can eliminate blind-years due to childhood cataract, as the condition is treatable.
Evolution and Magnitudes of Candidate Planet Nine
Linder, Esther F
2016-01-01
Context. Given the recently renewed interest in a possible additional major body in the outer Solar System, the thermodynamic evolution of such an object was studied, assuming that it is a smaller version of Uranus and Neptune. Aims. We have modeled the temporal evolution of the radius, temperature, intrinsic luminosity, and black body spectrum of distant ice giants. The aim is also to provide estimates of the magnitudes in different bands to assess the object's detectability. Methods. Simulations of the cooling and contraction were conducted for ice giants with masses of 5, 10, 20, and 50 Mearth containing 10, 14, 21, and 37% H/He by mass, located at 280, 700, and 1120 AU from the Sun. The core composition was varied from purely rocky to purely icy, as well as 50% rock and 50% ice. The atmospheric opacity was set to 1, 50, and 100 times solar metallicity. Results. We find for the nominal 10 Mearth planet at 700 AU, at the current age of the Solar System, an effective temperature of 47 K, much more ...
Understanding the magnitude dependence of PGA and PGV in NGA-West 2 data
Baltay, Annemarie S.; Hanks, Thomas C.
2014-01-01
The Next Generation Attenuation-West 2 (NGA-West 2) 2014 ground-motion prediction equations (GMPEs) model ground motions as a function of magnitude and distance, using empirically derived coefficients (e.g., Bozorgnia et al., 2014); as such, these GMPEs do not explicitly employ earthquake source parameters beyond moment magnitude (M) and focal mechanism. To better understand the magnitude-dependent trends in the GMPEs, we build a comprehensive earthquake source-based model to explain the magnitude dependence of peak ground acceleration and peak ground velocity in the NGA-West 2 ground-motion databases and GMPEs. Our model employs existing models (Hanks and McGuire, 1981; Boore, 1983, 1986; Anderson and Hough, 1984) that incorporate a point-source Brune model, including a constant stress drop and the high-frequency attenuation parameter κ0, random vibration theory, and a finite-fault assumption at the large magnitudes to describe the data from magnitudes 3 to 8. We partition this range into four different magnitude regions, each of which has a different functional dependence on M. Using the four magnitude partitions separately allows greater understanding of what happens in any one subrange, as well as of the limiting conditions between the subranges. This model provides a remarkably good fit to the NGA data for magnitudes from 3 to 8, as well as to ground-motion models and data, which play an important role in understanding small-magnitude data, for which the corner frequency is masked by the attenuation of high frequencies. That this simple, source-based model matches the NGA-West 2 GMPEs and data so well suggests that considerable simplicity underlies the parametrically complex NGA GMPEs.
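The point-source Brune scaling invoked above links corner frequency, stress drop, and seismic moment through standard published formulas. The sketch below uses those textbook forms, not coefficients from this particular model; the default stress drop and shear velocity are illustrative assumptions.

```python
import math

def moment_from_mw(mw):
    """Seismic moment M0 in N*m from moment magnitude (IASPEI convention):
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def brune_corner_frequency(mw, stress_drop_mpa=3.0, beta_km_s=3.5):
    """Brune (1970) corner frequency in Hz:
    fc = 4.9e6 * beta * (dsigma / M0)**(1/3),
    with beta in km/s, dsigma in bar, and M0 in dyne*cm."""
    m0_dyne_cm = moment_from_mw(mw) * 1e7      # 1 N*m = 1e7 dyne*cm
    dsigma_bar = stress_drop_mpa * 10.0        # 1 MPa = 10 bar
    return 4.9e6 * beta_km_s * (dsigma_bar / m0_dyne_cm) ** (1.0 / 3.0)
```

With a constant stress drop the corner frequency drops by a factor of ten for every two magnitude units, which is why attenuation masks the corner frequency only at small magnitudes.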
Dixon-Watmough, Rebecca; Keogh, Brenda; Naylor, Stuart
2012-01-01
For some time the Association for Science Education (ASE) has been aware that it would be useful to have some resources available to get children talking and thinking about issues related to health, sport and fitness. Some of the questions about pulse, breathing rate and so on are pretty obvious to everyone, and there is a risk of these being…
Donovan, Edward P.
The major objective of this module is to help students understand how water from a source such as a lake is treated to make it fit to drink. The module, consisting of five major activities and a test, is patterned after Individualized Science Instructional System (ISIS) modules. The first activity (Planning) consists of a brief introduction and a…
Vail, Kathleen
1999-01-01
Children who hate gym grow into adults who associate physical activity with ridicule and humiliation. Physical education is reinventing itself, stressing enjoyable activities that continue into adulthood: aerobic dance, weight training, fitness walking, mountain biking, hiking, inline skating, karate, rock-climbing, and canoeing. Cooperative,…
Casey, Stephanie A.
2016-01-01
Statistical association between two variables is one of the fundamental statistical ideas in school curricula. Reasoning about statistical association has been deemed one of the most important cognitive activities that humans perform. Students are typically introduced to statistical association through the study of the line of best fit because it…
Ph.H.B.F. Franses (Philip Hans)
1994-01-01
In this paper, a simple Gompertz curve-fitting procedure is proposed. Its advantages include the facts that the stability of the saturation level over the sample period can be checked, and that no knowledge of its value is necessary for forecasting. An application to forecasting the stoc
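One simple way to fit a Gompertz curve y = a*exp(-b*exp(-c*t)) is to note that, for a trial saturation level a, the transform z = ln(ln(a/y)) is linear in t. The sketch below (a generic illustration, not the paper's own procedure) searches a grid of saturation levels and fits the remaining two parameters by ordinary least squares.

```python
import math

def fit_gompertz(t, y, a_grid):
    """Fit y = a * exp(-b * exp(-c * t)).
    For each trial saturation level a, z = ln(ln(a/y)) is linear in t
    (z = ln(b) - c*t), so b and c follow from least squares; the best a
    is the trial value with the smallest residual sum of squares in z."""
    best = None
    n = len(t)
    for a in a_grid:
        if a <= max(y):            # saturation level must exceed all data
            continue
        z = [math.log(math.log(a / yi)) for yi in y]
        tbar = sum(t) / n
        zbar = sum(z) / n
        sxx = sum((ti - tbar) ** 2 for ti in t)
        sxz = sum((ti - tbar) * (zi - zbar) for ti, zi in zip(t, z))
        slope = sxz / sxx                 # equals -c
        inter = zbar - slope * tbar       # equals ln(b)
        rss = sum((zi - (inter + slope * ti)) ** 2
                  for zi, ti in zip(z, t))
        if best is None or rss < best[0]:
            best = (rss, a, math.exp(inter), -slope)
    _, a, b, c = best
    return a, b, c
```

Scanning the residuals over the saturation grid is also a crude check of the stability of the saturation level that the abstract mentions.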
Maione, Mary Jane
A description is given of a program that provides preventive measures to check obesity in children and young people. The 24-week program is divided into two parts--a nutrition component and an exercise component. At the start and end of the program, tests are given to assess the participants' height, weight, body composition, fitness level, and…
Coleman, A. E.
1981-01-01
The training manual used for preflight conditioning of NASA astronauts is written for an audience with diverse backgrounds and interests. It suggests programs for various levels of fitness, including sample starter programs, safe progression schedules, and stretching exercises. Related information on equipment needs, environmental considerations, and precautions can help readers design safe and effective running programs.
Maximum Oxygen Uptake and Post-Exercise Recovery in Professional Road Cyclists
Rutkowski Łukasz
2016-09-01
Purpose. The aim was to investigate the relationship between aerobic fitness, as characterized by maximum oxygen uptake (VO2max), and post-exercise recovery after incremental exercise to volitional exhaustion.
The absolute magnitudes of RR Lyrae stars from Hipparcos parallaxes
Groenewegen, M A T
1999-01-01
Using the method of ``reduced parallaxes'' for the Halo RR Lyrae stars in the Hipparcos catalogue we derive a zero point of 0.77 $\pm$ 0.26 mag for an assumed slope of 0.18 in the $M_{\rm V}$-[Fe/H] relation. This is 0.28 magnitude brighter than the value Fernley et al. (1998a) derived by employing the method of statistical parallax for the {\it identical} sample and using the same slope. We point out that a similar difference exists between the ``reduced parallaxes'' method and the statistical parallax method for the Cepheids in the Hipparcos catalogue. We also determine the zero point for the $M_{\rm K}$-$\log P_{0}$ relation, and obtain a value of -1.16 $\pm$ 0.27 mag (for a slope of -2.33). The distance moduli to the Hipparcos RR Lyrae stars derived from the two relations agree well. The derived distance scale is in good agreement with the results from the Main Sequence fitting distances of Galactic globular clusters and with the results of theoretical Horizontal Branch models, and implies a distance modu...
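The zero-point arithmetic above converts directly to distances through the distance modulus. The sketch below is generic: the reference metallicity at which a zero point is quoted varies between studies, so it is left as an explicit, assumed parameter, and the example numbers are illustrative rather than taken from the paper.

```python
def absolute_magnitude_v(fe_h, zero_point, slope=0.18, fe_h_ref=0.0):
    """M_V from an assumed linear M_V-[Fe/H] relation:
    M_V = zero_point + slope * ([Fe/H] - fe_h_ref).
    fe_h_ref is the (study-dependent) metallicity at which the
    zero point is quoted; 0.0 here is an arbitrary choice."""
    return zero_point + slope * (fe_h - fe_h_ref)

def distance_pc(apparent_v0, m_v):
    """Distance in parsec from the distance modulus mu = m - M:
    d = 10**(mu/5 + 1)."""
    return 10.0 ** ((apparent_v0 - m_v) / 5.0 + 1.0)
```

A zero point that is 0.28 mag brighter (numerically smaller M_V) lengthens every inferred distance by a factor of 10**(0.28/5), about 14 percent, which is the practical import of the discrepancy discussed in the abstract.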
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse, however, does not hold. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is also presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we describe the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimating parameter values and confidence regions by maximum likelihood and Fisher efficient scores, starting from Poisson probabilities, are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used minimum chi-squared alternatives because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
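A toy version of the Poisson-likelihood approach can be written in a few lines. This is a sketch under simplified assumptions: a one-parameter amplitude fit to binned counts, maximized numerically by golden-section search. For this particular model the maximum-likelihood estimate also has the closed form A = sum(n_i)/sum(s_i), which makes the numerical answer easy to verify.

```python
import math

def poisson_loglike(counts, model):
    """Poisson log-likelihood up to a model-independent constant:
    sum_i [ n_i * ln(mu_i) - mu_i ]."""
    return sum(n * math.log(mu) - mu for n, mu in zip(counts, model))

def fit_amplitude(counts, shape, lo=1e-3, hi=1e3, iters=200):
    """Golden-section maximization of the Poisson likelihood over the
    amplitude A in the model mu_i = A * shape_i (unimodal in A)."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    f = lambda amp: poisson_loglike(counts, [amp * s for s in shape])
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    for _ in range(iters):
        if f(c) > f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)
```

Real spectral fits replace the one amplitude with a nonlinear parameter vector, but the likelihood construction is the same.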
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step solves our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we impose the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Armstrong, J. T.; Hummel, C. A.; Quirrenbach, A.; Buscher, D. F.; Mozurkewich, D.; Vivekanand, M.; Simon, R. S.; Denison, C. S.; Johnston, K. J.; Pan, X.-P.
1992-01-01
The orbit of the double-lined spectroscopic binary Phi Cygni, the distance to the system, and the masses and absolute magnitudes of its components are determined from measurements with the Mark III Optical Interferometer. On the basis of a reexamination of the spectroscopic data of Rach & Herbig (1961), the values and uncertainties for the period and the projected semimajor axes are adopted from the present fit to the spectroscopic data, and the values of the remaining elements from the present fit to the Mark III data. The elements of the true orbit are derived, and the masses and absolute magnitudes of the components and the distance to the system are calculated.
Extensive fitness and human cooperation.
van Hateren, J H
2015-12-01
Evolution depends on the fitness of organisms, the expected rate of reproducing. Directly getting offspring is the most basic form of fitness, but fitness can also be increased indirectly by helping genetically related individuals (such as kin) to increase their fitness. The combined effect is known as inclusive fitness. Here it is argued that a further elaboration of fitness has evolved, particularly in humans. It is called extensive fitness and it incorporates producing organisms that are merely similar in phenotype. The evolvability of this mechanism is illustrated by computations on a simple model combining heredity and behaviour. Phenotypes are driven into the direction of high fitness through a mechanism that involves an internal estimate of fitness, implicitly made within the organism itself. This mechanism has recently been conjectured to be responsible for producing agency and goals. In the model, inclusive and extensive fitness are both implemented by letting fitness increase nonlinearly with the size of subpopulations of similar heredity (for the indirect part of inclusive fitness) and of similar phenotype (for the phenotypic part of extensive fitness). Populations implementing extensive fitness outcompete populations implementing mere inclusive fitness. This occurs because groups with similar phenotype tend to be larger than groups with similar heredity, and fitness increases more when groups are larger. Extensive fitness has two components, a direct component where individuals compete in inducing others to become like them and an indirect component where individuals cooperate and help others who are already similar to them.
Vladislav Nachev
Weber's law quantifies the perception of difference between stimuli. For instance, it can explain why we are less likely to detect the removal of three nuts from a bowl if the bowl is full than if it is nearly empty. This is an example of the magnitude effect: the phenomenon that the subjective perception of a linear difference between a pair of stimuli progressively diminishes when the average magnitude of the stimuli increases. Although the discrimination performance of both human and animal subjects in various sensory modalities exhibits the magnitude effect, results sometimes systematically deviate from the quantitative predictions based on Weber's law. An attempt to reformulate the law to better fit data from acoustic discrimination tasks has been dubbed the "near-miss to Weber's law". Here, we tested the gustatory discrimination performance of nectar-feeding bats (Glossophaga soricina) in order to investigate whether the original version of Weber's law accurately predicts choice behavior in a two-alternative forced choice task. As expected, bats either preferred the sweeter of the two options or showed no preference. In 4 out of 6 bats the near-miss to Weber's law provided a better fit, and Weber's law underestimated the magnitude effect. In order to test the generality of this observation in nectar-feeders, we reviewed previously published data on bats, hummingbirds, honeybees, and bumblebees. In all groups of animals the near-miss to Weber's law provided better fits than Weber's law. Furthermore, whereas the magnitude effect was stronger than predicted by Weber's law in vertebrates, it was weaker than predicted in insects. Thus nectar-feeding vertebrates and insects seem to differ in how their choice behavior changes as sugar concentration is increased. We discuss the ecological and evolutionary implications of the observed patterns of sugar concentration discrimination.
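The contrast between Weber's law and its near-miss can be captured in a single function. This is a sketch: the Weber fraction k and the exponent values below are illustrative parameters, not fitted values from the study.

```python
def discrimination_threshold(i, k=0.14, a=1.0):
    """Smallest detectable increment at base magnitude i: delta = k * i**a.
    a = 1 reproduces Weber's law (delta/i is a constant fraction);
    a > 1 makes the magnitude effect stronger than Weber's law predicts,
    a < 1 makes it weaker, matching the vertebrate/insect contrast
    described in the abstract. k and a here are illustrative only."""
    return k * i ** a
```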
Nonlinear susceptibility magnitude imaging of magnetic nanoparticles
Ficko, Bradley W.; Giacometti, Paolo; Diamond, Solomon G.
2015-03-01
This study demonstrates a method for improving the resolution of susceptibility magnitude imaging (SMI) using spatial information that arises from the nonlinear magnetization characteristics of magnetic nanoparticles (mNPs). In this proof-of-concept study of nonlinear SMI, a pair of drive coils and several permanent magnets generate applied magnetic fields and a coil is used as a magnetic field sensor. Sinusoidal alternating current (AC) in the drive coils results in linear mNP magnetization responses at primary frequencies, and nonlinear responses at harmonic frequencies and intermodulation frequencies. The spatial information content of the nonlinear responses is evaluated by reconstructing tomographic images with sequentially increasing voxel counts using the combined linear and nonlinear data. Using the linear data alone it is not possible to accurately reconstruct more than 2 voxels with a pair of drive coils and a single sensor. However, nonlinear SMI is found to accurately reconstruct 12 voxels (R2=0.99, CNR=84.9) using the same physical configuration. Several time-multiplexing methods are then explored to determine if additional spatial information can be obtained by varying the amplitude, phase and frequency of the applied magnetic fields from the two drive coils. Asynchronous phase modulation, amplitude modulation, intermodulation phase modulation, and frequency modulation all resulted in accurate reconstruction of 6 voxels (R2>0.9) indicating that time multiplexing is a valid approach to further increase the resolution of nonlinear SMI. The spatial information content of nonlinear mNP responses and the potential for resolution enhancement with time multiplexing demonstrate the concept and advantages of nonlinear SMI.
Mallakpour, Iman; Villarini, Gabriele
2016-08-01
Gridded daily precipitation observations over the contiguous USA are used to investigate past observed changes in the frequency and magnitude of heavy precipitation, and to examine its seasonality. Analyses are based on the Climate Prediction Center (CPC) daily precipitation data from 1948 to 2012. We use a block maxima approach to identify changes in the magnitude of heavy precipitation and a peaks-over-threshold (POT) approach for the changes in the frequency. The results of this study show that there is a stronger signal of change in the frequency rather than in the magnitude of heavy precipitation events. Also, results show an increasing trend in the frequency of heavy precipitation over large areas of the contiguous USA, with the most notable exception of the US Northwest. These results indicate that over the last 65 years, the stronger storms are not getting stronger, but a larger number of heavy precipitation events has been observed. The annual maximum precipitation and the annual frequency of heavy precipitation reveal a marked seasonality over the contiguous USA. However, we could not find any evidence of a shift in the seasonality of annual maximum precipitation when investigating whether the day of the year on which the maximum precipitation occurs has changed over time. Furthermore, we examine whether the year-to-year variations in the frequency and magnitude of heavy precipitation can be explained in terms of climate variability driven by the influence of the Atlantic and Pacific Oceans. Our findings indicate that the climate variability of both the Atlantic and Pacific Oceans can exert a large control on precipitation frequency and magnitude over the contiguous USA. Also, the results indicate that part of the spatial and temporal features of the relationship between climate variability and heavy precipitation magnitude and frequency can be described by one or more of the climate indices considered here.
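The two sampling schemes named above can be sketched in a few lines. These are illustrative helper functions, not the study's code; in particular, the simple runs declustering in the POT helper is an assumption.

```python
def block_maxima(daily, block_len=365):
    """Annual-maximum (block maxima) series: the largest daily value in
    each consecutive block of block_len days."""
    return [max(daily[i:i + block_len])
            for i in range(0, len(daily) - block_len + 1, block_len)]

def peaks_over_threshold(daily, threshold, separation=1):
    """Peaks-over-threshold series: records an exceedance of a fixed high
    threshold only if at least `separation` days have passed since the
    previous recorded one (a simple runs declustering)."""
    peaks, last = [], -separation
    for day, value in enumerate(daily):
        if value > threshold and day - last >= separation:
            peaks.append(value)
            last = day
    return peaks
```

In an extreme-value analysis the block maxima would then be fit with a GEV distribution and the threshold exceedances with a generalized Pareto distribution.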
The moment magnitude Mw and the energy magnitude Me: common roots and differences
Bormann, Peter; di Giacomo, Domenico
2011-04-01
Starting from the classical empirical magnitude-energy relationships, in this article, the derivation of the modern scales for the moment magnitude Mw and the energy magnitude Me is outlined and critically discussed. The formulas for calculating Mw and Me are presented in a way that reveals, besides the contributions of the physically defined measurement parameters seismic moment M0 and radiated seismic energy ES, the role of the constants in the classical Gutenberg-Richter magnitude-energy relationship. Further, it is shown that Mw and Me are linked via the parameter Θ = log(ES/M0), and the formula for Me can be written as Me = Mw + (Θ + 4.7)/1.5. This relationship directly links Me with Mw via their common scaling to classical magnitudes and, at the same time, highlights the reason why Mw and Me can differ significantly. In fact, Θ is assumed to be constant when calculating Mw. However, variations over three to four orders of magnitude in stress drop Δσ (as well as related variations in rupture velocity VR and seismic wave radiation efficiency ηR) are responsible for the large variability of actual Θ values of earthquakes. As a result, for the same earthquake, Me may sometimes differ by more than one magnitude unit from Mw. Such a difference is highly relevant when assessing the actual damage potential associated with a given earthquake, because it expresses rather different static and dynamic source properties. While Mw is most appropriate for estimating the earthquake size (i.e., the product of rupture area times average displacement) and thus the potential tsunami hazard posed by strong and great earthquakes in marine environs, Me is more suitable than Mw for assessing the potential hazard of damage due to strong ground shaking, i.e., the earthquake strength. Therefore, whenever possible, these two magnitudes should both be independently determined and jointly considered. Usually, only Mw is taken as a unified magnitude in many
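The Mw-Me link quoted above can be made concrete in a few lines. The sketch below uses the standard IASPEI Mw formula and the Choy-Boatwright Me convention, which together reproduce Me = Mw + (Θ + 4.7)/1.5; the moment and energy values in the test are illustrative only.

```python
import math

def mw_from_m0(m0):
    """Moment magnitude from seismic moment M0 in N*m (IASPEI):
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

def me_from_es(es):
    """Energy magnitude from radiated seismic energy E_S in J
    (Choy-Boatwright convention): Me = (2/3) * (log10(E_S) - 4.4)."""
    return (2.0 / 3.0) * (math.log10(es) - 4.4)

def me_from_mw_theta(mw, theta):
    """The link quoted in the abstract: Me = Mw + (theta + 4.7)/1.5,
    with theta = log10(E_S / M0)."""
    return mw + (theta + 4.7) / 1.5
```

Substituting the two magnitude definitions into the link shows the algebra is exact: for Θ = -4.7 the two scales coincide, and every order-of-magnitude change in Θ shifts Me by 2/3 of a unit relative to Mw.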
Sobolev, Stephan; Muldashev, Iskander
2017-04-01
The common thinking is that the magnitude of a great subduction earthquake correlates with the strength of mechanical coupling between the slab and the overriding plate. Based on this idea, Ruff and Kanamori (1980) suggested that the maximum earthquake magnitude is controlled by two parameters, the age of the subducting plate and the plate convergence rate, with the youngest and fastest slabs generating the largest earthquakes. This view was supported by many researchers since then. However, since 1980 a number of great earthquakes, and particularly the two largest earthquakes of the last 12 years, the 2004 Sumatra-Andaman and 2011 Tohoku earthquakes, have violated the suggested correlation. We address the relation between the strength of mechanical coupling and earthquake magnitude directly by cross-scale geodynamic modeling of seismic cycles of great subduction earthquakes. This modeling technique employs elasticity, non-linear transient viscous rheology, and rate-and-state friction at the slab interface. It generates spontaneous earthquake sequences and, by using an adaptive time-step algorithm, recreates the deformation process as observed naturally over single and multiple seismic cycles. We model seismic cycles for great subduction earthquakes with different geometries of the subducting plates, different static friction coefficients in subduction channels, and different subduction velocities. Under the assumption that rupture length scales with rupture width, our models demonstrate that the maximum magnitudes of the earthquakes are controlled exclusively by the factors that increase the rupture width. These factors are a low slab dip angle (the largest effect), a low friction coefficient in the subduction channel (smaller effect), and a high subduction velocity (the smallest effect). The models suggest that maximum earthquake magnitudes do not correlate significantly with the magnitudes of normal and shear stresses at the subduction interface. In agreement with observations, our models
STANDARDIZING TYPE Ia SUPERNOVA ABSOLUTE MAGNITUDES USING GAUSSIAN PROCESS DATA REGRESSION
Kim, A. G.; Aldering, G.; Aragon, C.; Bailey, S.; Childress, M.; Fakhouri, H. K.; Nordin, J. [Physics Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Thomas, R. C. [Computational Cosmology Center, Computational Research Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road MS 50B-4206, Berkeley, CA 94720 (United States); Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Guy, J. [Laboratoire de Physique Nucleaire et des Hautes Energies, Universite Pierre et Marie Curie Paris 6, Universite Denis Diderot Paris 7, CNRS-IN2P3, 4 place Jussieu, F-75252 Paris Cedex 05 (France); Baltay, C. [Department of Physics, Yale University, New Haven, CT 06250-8121 (United States); Buton, C.; Kerschhaggl, M.; Kowalski, M. [Physikalisches Institut, Universitaet Bonn, Nussallee 12, D-53115 Bonn (Germany); Chotard, N. [Tsinghua Center for Astrophysics, Tsinghua University, Beijing 100084 (China); Copin, Y.; Gangler, E. [Universite de Lyon, F-69622 Lyon (France); and others
2013-04-01
We present a novel class of models for Type Ia supernova time-evolving spectral energy distributions (SEDs) and absolute magnitudes: they are each modeled as stochastic functions described by Gaussian processes. The values of the SED and absolute magnitudes are defined through well-defined regression prescriptions, so that data directly inform the models. As a proof of concept, we implement a model for synthetic photometry built from the spectrophotometric time series from the Nearby Supernova Factory. Absolute magnitudes at peak B brightness are calibrated to 0.13 mag in the g band and to as low as 0.09 mag in the z = 0.25 blueshifted i band, where the dispersion includes contributions from measurement uncertainties and peculiar velocities. The methodology can be applied to spectrophotometric time series of supernovae that span a range of redshifts to simultaneously standardize supernovae together with fitting cosmological parameters.
Standardizing Type Ia Supernova Absolute Magnitudes Using Gaussian Process Data Regression
Kim, A G; Aldering, G; Antilogus, P; Aragon, C; Bailey, S; Baltay, C; Bongard, S; Buton, C; Canto, A; Cellier-Holzem, F; Childress, M; Chotard, N; Copin, Y; Fakhouri, H K; Gangler, E; Guy, J; Kerschhaggl, M; Kowalski, M; Nordin, J; Nugent, P; Paech, K; Pain, R; Pécontal, E; Pereira, R; Perlmutter, S; Rabinowitz, D; Rigault, M; Runge, K; Saunders, C; Scalzo, R; Smadja, G; Tao, C; Weaver, B A; Wu, C
2013-01-01
We present a novel class of models for Type Ia supernova time-evolving spectral energy distributions (SED) and absolute magnitudes: they are each modeled as stochastic functions described by Gaussian processes. The values of the SED and absolute magnitudes are defined through well-defined regression prescriptions, so that data directly inform the models. As a proof of concept, we implement a model for synthetic photometry built from the spectrophotometric time series from the Nearby Supernova Factory. Absolute magnitudes at peak $B$ brightness are calibrated to 0.13 mag in the $g$-band and to as low as 0.09 mag in the $z=0.25$ blueshifted $i$-band, where the dispersion includes contributions from measurement uncertainties and peculiar velocities. The methodology can be applied to spectrophotometric time series of supernovae that span a range of redshifts to simultaneously standardize supernovae together with fitting cosmological parameters.
Chi-square Fitting When Overall Normalization is a Fit Parameter
Roe, Byron
2015-01-01
The problem of fitting an event distribution when the total expected number of events is not fixed keeps appearing in experimental studies. In a chi-square fit, if the overall normalization is one of the parameters to be fit, the fitted curve may be seriously low with respect to the data points, sometimes below all of them. This problem and its solution are well known within the statistics community but, apparently, not well known among some of the physics community. The purpose of this note is didactic: to explain the cause of the problem and the easy and elegant solution. The solution is to use maximum likelihood instead of chi-square. The essential difference between the two approaches is that maximum likelihood uses the normalization of each term in the chi-square, assuming it is a normal distribution, 1/sqrt(2*pi*sigma^2). In addition, the normalization is applied to the theoretical expectation, not to the data. In the present note we illustrate what goes wrong and how maximum likeliho...
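The downward bias described in the note has a closed-form toy illustration (a sketch, not the note's own example). For a flat model mu_i = A fitted to Poisson counts, the Poisson maximum-likelihood estimate is the arithmetic mean of the counts, whereas Neyman's chi-square, with per-bin variances taken from the data, yields the harmonic mean, which is never larger, so the chi-square curve sits low.

```python
def ml_norm(counts):
    """Poisson maximum-likelihood normalization for a flat model
    mu_i = A: the MLE is the arithmetic mean of the counts."""
    return sum(counts) / len(counts)

def chisq_norm(counts):
    """Normalization minimizing Neyman's chi-square
    sum_i (n_i - A)**2 / n_i, i.e. with per-bin variances taken from
    the data. Setting the derivative to zero gives A = N / sum(1/n_i),
    the harmonic mean, which by AM-HM never exceeds ml_norm."""
    return len(counts) / sum(1.0 / n for n in counts)
```

Bins with downward-fluctuated counts get artificially small variances and therefore large weights, which is exactly the mechanism the note describes.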
Structure of the Large Magellanic Cloud from the Near Infrared magnitudes of Red Clump stars
Subramanian, Smitha
2013-01-01
The structural parameters, like the inclination, i and the position angle of the line of nodes (PA_lon) of the disk of the Large Magellanic Cloud (LMC) are estimated using the JH photometric data of red clump stars from the Infrared Survey Facility - Magellanic Cloud Point Source Catalog (IRSF-MCPSC). The observed LMC region is divided into several sub-regions and stars in each region are cross identified with the optically identified red clump stars to obtain the near infrared magnitudes. The peak values of H magnitude and (J-H) colour of the observed red clump distribution are obtained by fitting a profile to the distributions and also by taking the average value of magnitude and colour of the red clump stars in the bin with largest number. Then the dereddened peak H0 magnitude of the red clump stars in each sub-region is obtained. The RA, Dec and relative distance from the center of each sub-region are converted into x, y & z Cartesian coordinates. A weighted least square plane fitting method is applie...
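A weighted least-squares plane fit of the kind described reduces to 3x3 normal equations. The sketch below is generic (not the paper's pipeline): it fits z = a*x + b*y + c by Cramer's rule and converts the fitted gradient to an inclination angle relative to the sky plane.

```python
import math

def fit_plane(points, weights):
    """Weighted least-squares fit of z = a*x + b*y + c.
    Builds the 3x3 normal equations and solves them with Cramer's rule;
    returns [a, b, c]."""
    sxx = sxy = syy = sx = sy = sxz = syz = sz = sw = 0.0
    for (x, y, z), w in zip(points, weights):
        sxx += w * x * x; sxy += w * x * y; syy += w * y * y
        sx += w * x; sy += w * y; sw += w
        sxz += w * x * z; syz += w * y * z; sz += w * z
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, sw]]
    r = [sxz, syz, sz]

    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det(m)
    sol = []
    for k in range(3):
        mk = [row[:] for row in m]
        for i in range(3):
            mk[i][k] = r[i]
        sol.append(det(mk) / d)
    return sol

def inclination_deg(a, b):
    """Inclination of the fitted plane to the z = const (sky) plane,
    in degrees: arctan of the gradient magnitude."""
    return math.degrees(math.atan(math.hypot(a, b)))
```

In the LMC application the (x, y, z) coordinates would come from the dereddened red clump distances, with weights reflecting the per-region magnitude uncertainties.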
Extensive fitness and human cooperation
van Hateren, J. H.
2015-01-01
Evolution depends on the fitness of organisms, the expected rate of reproducing. Directly getting offspring is the most basic form of fitness, but fitness can also be increased indirectly by helping genetically related individuals (such as kin) to increase their fitness. The combined effect is known
Haller, Toomas; Leitsalu, Liis; Fischer, Krista
2017-01-01
Ancestry information at the individual level can be a valuable resource for personalized medicine; for medical, demographic and historical research; and for tracing back personal history. We report a new method for quantitatively determining personal genetic ancestry based on genome-wide data. Numerical ancestry component scores are assigned to individuals based on comparisons with reference populations. These comparisons are conducted with an existing analytical pipeline making use of genotype phasing, similarity matrix computation and our addition, multidimensional best fitting by MixFit. The method is demonstrated by studying the Estonian and Finnish populations in geographical context. We show the main differences in the genetic composition of these otherwise close European populations and how they have influenced each other. The components of our analytical pipeline are freely available...
Dinubile, Nicholas A
2008-12-01
The cornerstone of personal health is prevention. The concept of exercise as medicine is a lesson I have preached throughout my career, both with the patients in my private practice and through my years working with athletes at all levels, including the Philadelphia 76ers basketball team and the Pennsylvania Ballet. It is also a message I relayed as a Special Advisor to the President's Council on Physical Fitness and Sports (PCPFS) during the first Bush administration, working closely with my old friend, and a fitness advocate and visionary himself, Governor Arnold Schwarzenegger, who served as Chairman of the PCPFS. Arnold's impact on our nation's health was an extremely positive one that was felt in communities from coast to coast. Exercise, activity, and prevention were key components of his prescription for change and improved health for our country. He has also always personally inspired me to see my role as a physician and "healer" in a much broader context.
Becerra Rodríguez, Carlos Alfredo
2016-01-01
In the last decade a considerable number of efforts have been devoted to developing Virtual Fitting Rooms (VFR), owing to the great popularity of Virtual Reality (VR) and Augmented Reality (AR) in the fashion design industry. New technologies such as the Kinect, powerful web cameras and smartphones permit us to examine new ways to try on clothes without doing so physically in a store. This research is primarily dedicated to reviewing some important aspects about...
WANG Ji-Ke; MAO Ze-Pu; BIAN Jian-Ming; CAO Guo-Fu; CAO Xue-Xiang; CHEN Shen-Jian; DENG Zi-Yan; FU Cheng-Dong; GAO Yuan-Ning; HE Kang-Lin; HE Miao; HUA Chun-Fei; HUANG Bin; HUANG Xing-Tao; JI Xiao-Sin; LI Fei; LI Hai-Bo; LI Wei-Dong; LIANG Yu-Tie; LIU Chun-Xiu; LIU Huai-Min; LIU Suo; LIU Ying-Jie; MA Qiu-Mei; MA Xiang; MAO Ya-Jun; MO Xiao-Hu; PAN Ming-Hua; PANG Cai-Ying; PING Rong-Gang; QIN Ya-Hong; QIU Jin-Fa; SUN Sheng-Sen; SUN Yong-Zhao; WANG Liang-Liang; WEN Shuo-Pin; WU Ling-Hui; XIE Yu-Guang; XU Min; YAN Liang; YOU Zheng-Yun; YUAN Chang-Zheng; YUAN Ye; ZHANG Bing-Yun; ZHANG Chang-Chun; ZHANG Jian-Yong; ZHANG Xue-Yao; ZHANG Yao; ZHENG Yang-Heng; ZHU Ke-Jun; ZHU Yong-Sheng; ZHU Zhi-Li; ZOU Jia-Heng
2009-01-01
A track fitting algorithm based on the Kalman filter method has been developed for BESⅢ at BEPCⅡ. The effects of multiple scattering and energy loss as charged particles pass through the detector, the non-uniformity of the magnetic field (NUMF), wire sag, etc., have been carefully handled. The algorithm works well, and its performance, tested with simulation data, satisfies the physics requirements.
Maximum, minimum, and optimal mutation rates in dynamic environments
Ancliff, Mark; Park, Jeong-Man
2009-12-01
We analyze the dynamics of the parallel mutation-selection quasispecies model with a changing environment. For an environment with the sharp-peak fitness function in which the most fit sequence changes by k spin flips every period T , we find analytical expressions for the minimum and maximum mutation rates for which a quasispecies can survive, valid in the limit of large sequence size. We find an asymptotic solution in which the quasispecies population changes periodically according to the periodic environmental change. In this state we compute the mutation rate that gives the optimal mean fitness over a period. We find that the optimal mutation rate per genome, k/T , is independent of genome size, a relationship which is observed across broad groups of real organisms.
Bjerck, Mari; Klepp, Ingun Grimstad; Skoland, Eli
This report presents findings from a literature study, a user survey and a market survey carried out in the project Made to Fit. The report addresses the project's main goal and sub-goals, which aim to communicate knowledge about the adaptation and production of functional, well-designed products for people with disabilities, including the potential for developing custom-fitted clothing under the "Made to Fit" concept, the testing of methods, and the identification of the state of knowledge in the field. The report is accordingly divided into three main parts. The first part builds on the project note by Vestvik, Hebrok and Klepp (2013) from...
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
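As a small illustration of the definition above, the Kirchhoff index of a connected graph can be computed from the nonzero Laplacian eigenvalues via the standard identity $Kf(G) = n \sum_i 1/\lambda_i$ (a sketch on a toy graph, not the extremal cacti of the paper):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum(1/lambda_i) over the nonzero Laplacian eigenvalues,
    for a connected graph given by its adjacency matrix."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    lap = np.diag(adj.sum(axis=1)) - adj      # graph Laplacian L = D - A
    eig = np.linalg.eigvalsh(lap)
    nonzero = eig[eig > 1e-9]                 # drop the single zero eigenvalue
    return n * np.sum(1.0 / nonzero)

# Triangle C3: each pair sees resistances 1 and 2 in parallel, i.e. 2/3,
# so Kf(C3) = 3 pairs * 2/3 = 2
c3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(round(kirchhoff_index(c3), 6))  # 2.0
```

The same function applied to the path P3 gives 4, matching the pairwise resistance distances 1, 1, and 2.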
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Near-infrared absolute magnitudes of Type Ia Supernovae
Avelino, Arturo; Friedman, Andrew S.; Mandel, Kaisey; Kirshner, Robert; Challis, Peter
2017-01-01
Type Ia supernova (SN Ia) light curves in the near infrared (NIR) exhibit low dispersion in their peak luminosities and are less vulnerable to extinction by interstellar dust in their host galaxies. The increasing number of high quality NIR SN Ia light curves, including the recent CfAIR2 sample obtained with PAIRITEL, provides updated evidence for their utility as standard candles for cosmology. Using NIR YJHKs light curves of ~150 nearby SNe Ia from the CfAIR2 and CSP samples and from the literature, we determine the mean value and dispersion of the absolute magnitude from -10 to 50 rest-frame days after maximum luminosity in the B band. We present the mean light-curve templates and Hubble diagrams for the YJHKs bands. This work contributes to a firm local anchor for supernova cosmology studies in the NIR, which will help reduce the systematic uncertainties due to host galaxy dust present in optical-only studies. This research is supported by NSF grants AST-156854, AST-1211196, Fundacion Mexico en Harvard, and CONACyT.
Southern-Tyrrhenian seismicity in space-time-magnitude domain
D. Luzio
2006-06-01
An analysis is conducted on a catalogue containing more than 2000 seismic events that occurred in the southern Tyrrhenian Sea between 1988 and October 2002, as an attempt to characterise the main seismogenetic processes active in the area in the space, time and magnitude domains by means of the parameters of phenomenological laws. We chose to adopt simple phenomenological models, since the low number of data did not allow the use of more complex laws. The two main seismogenetic volumes present in the area were considered for the purpose of this work. The first includes a nearly homogeneous distribution of hypocentres in a NW steeply dipping layer down to about 400 km depth. This is probably the seismological expression of the Ionian lithospheric slab subducting beneath the Calabrian Arc. The second contains hypocentres concentrated about a sub-horizontal plane lying at an average depth of about 10 km. It is characterised by a background seismicity spread all over the area and by clusters of events that generally show a direction of maximum elongation. The parameters of the models describing seismogenetically homogeneous subsets of the earthquake catalogue in the three analysis domains, along with their confidence intervals, are estimated and analysed to establish whether they can be regarded as representative of a particular subset.
Comparison between different earthquake magnitudes determined by China Seismograph Network
LIU Rui-feng; CHEN Yun-tai; REN Xiao; XU Zhi-guo; SUN Li; YANG Hui; LIANG Jian-hong; REN Ke-xin
2007-01-01
By linear regression and orthogonal regression methods, comparisons are made between the different magnitudes (local magnitude ML, surface wave magnitudes MS and MS7, long-period body wave magnitude mB and short-period body wave magnitude mb) determined by the Institute of Geophysics, China Earthquake Administration, on the basis of observation data collected by the China Seismograph Network between 1983 and 2004. Empirical relations between the different magnitudes have been obtained. The results show that: ① As different magnitude scales reflect the energy radiated by seismic waves in different period ranges, earthquake magnitudes can be described more objectively by using different scales for earthquakes of different sizes. When the epicentral distance is less than 1 000 km, local magnitude ML is a preferable scale; for moderate events mB>MS, i.e., MS underestimates the magnitudes of such events, so mB is a better choice; when M>6.0, MS>mB>mb, i.e., both mB and mb underestimate the magnitudes, so MS is a preferable scale for such events (6.0<M<8.5); when M>8.5, a saturation phenomenon appears in MS, which cannot accurately reflect the magnitudes of such large events. ② In China, when the epicentral distance is less than 1 000 km, there is almost no difference between ML and MS, and thus there is no need to convert between the two magnitudes in practice. ③ Although MS and MS7 are both surface wave magnitudes, MS is in general greater than MS7 by 0.2 to 0.3 magnitude units, because different instruments and calculation formulae are used. ④ mB is almost equal to mb for earthquakes around mB 4.0, but mB is larger than mb for those of mB≥4.5, because the periods of the seismic waves used for measuring mB and mb are different, although the calculation formulae are the same.
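The contrast between ordinary and orthogonal regression matters here because both magnitude scales carry measurement error, which biases an ordinary least-squares slope low. A sketch on synthetic magnitude pairs (not the CSN catalog; the Deming slope formula below assumes equal error variances on both axes):

```python
import numpy as np

def orthogonal_regression(x, y):
    """Orthogonal (total least squares / Deming) fit y = a + b*x, treating
    the errors in both variables symmetrically (equal error variances)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x), np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]
    b = (syy - sxx + np.hypot(syy - sxx, 2 * sxy)) / (2 * sxy)
    a = y.mean() - b * x.mean()
    return a, b

# Hypothetical pairs of local magnitude ML and surface-wave magnitude MS:
# both scales measure the same underlying size with independent errors.
rng = np.random.default_rng(2)
m_true = rng.uniform(3, 7, 300)
ml = m_true + rng.normal(0, 0.15, 300)
ms = m_true + rng.normal(0, 0.15, 300)

b_ols = np.cov(ml, ms, bias=True)[0, 1] / np.var(ml)
a_orth, b_orth = orthogonal_regression(ml, ms)
print(f"OLS slope {b_ols:.3f}  vs  orthogonal slope {b_orth:.3f}")
# OLS is attenuated below the true unit slope; the orthogonal slope is near 1.
```

This is why orthogonal regression is the standard choice for deriving conversion relations between magnitude scales.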
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow, and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency of systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Exploring the relationship between the magnitudes of seismic events
Spassiani, Ilaria
2015-01-01
The distribution of the magnitudes of seismic events is generally assumed to be independent of past seismicity. However, when considering events in a causal relation, for example mother-daughter, it seems natural to assume that the magnitude of a daughter event is conditionally dependent on that of the corresponding mother event. In order to find experimental evidence supporting this hypothesis, we analyze different catalogs, both real and simulated, in two different ways. From each catalog, we obtain the distribution of triggered events' magnitudes by kernel density estimation. The results show that the density of triggered events' magnitudes varies with the magnitude of the corresponding mother events. As intuition suggests, an increase in the mother events' magnitude induces an increase in the probability of having "high" values of the triggered events' magnitude. In addition, we see a statistically significant increasing linear dependence of the magnitude means.
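The kernel-density comparison described above can be sketched on a synthetic catalog in which daughter magnitudes depend weakly on mother magnitudes (the dependence strength, bandwidth, and thresholds are illustrative assumptions, not values from the paper):

```python
import numpy as np

def gaussian_kde(samples, grid, bw=0.15):
    """Plain Gaussian kernel density estimate evaluated on a magnitude grid."""
    d = (grid[:, None] - samples[None, :]) / bw
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(samples) * bw * np.sqrt(2 * np.pi))

# Hypothetical catalog: daughter magnitude increases weakly with mother magnitude
rng = np.random.default_rng(5)
mothers = rng.exponential(0.5, 5000) + 3.0               # Gutenberg-Richter-like tail
daughters = 2.5 + 0.2 * mothers + rng.exponential(0.4, 5000)

grid = np.linspace(2.5, 6.0, 200)
small = daughters[mothers < 3.3]                          # daughters of small mothers
large = daughters[mothers > 4.0]                          # daughters of large mothers
kde_small = gaussian_kde(small, grid)
kde_large = gaussian_kde(large, grid)

# The daughter-magnitude density shifts upward with mother magnitude
print(small.mean() < large.mean())  # True
```

Comparing the two estimated densities (e.g. their means or upper tails) is the kind of evidence the analysis above looks for in real catalogs.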
Invasion fitness, inclusive fitness, and reproductive numbers in heterogeneous populations.
Lehmann, Laurent; Mullon, Charles; Akçay, Erol; Van Cleve, Jeremy
2016-08-01
How should fitness be measured to determine which phenotype or "strategy" is uninvadable when evolution occurs in a group-structured population subject to local demographic and environmental heterogeneity? Several fitness measures, such as basic reproductive number, lifetime dispersal success of a local lineage, or inclusive fitness have been proposed to address this question, but the relationships between them and their generality remain unclear. Here, we ascertain uninvadability (all mutant strategies always go extinct) in terms of the asymptotic per capita number of mutant copies produced by a mutant lineage arising as a single copy in a resident population ("invasion fitness"). We show that, starting from invasion fitness, uninvadability is equivalently characterized by at least three conceptually distinct fitness measures: (i) lineage fitness, giving the average individual fitness of a randomly sampled mutant lineage member; (ii) inclusive fitness, giving a reproductive value weighted average of the direct fitness costs and relatedness weighted indirect fitness benefits accruing to a randomly sampled mutant lineage member; and (iii) basic reproductive number (and variations thereof) giving lifetime success of a lineage in a single group, and which is an invasion fitness proxy. Our analysis connects approaches that have been deemed different, generalizes the exact version of inclusive fitness to class-structured populations, and provides a biological interpretation of natural selection on a mutant allele under arbitrary strength of selection.
Numerical Magnitude Processing in Children with Mild Intellectual Disabilities
Brankaer, Carmen; Ghesquiere, Pol; De Smedt, Bert
2011-01-01
The present study investigated numerical magnitude processing in children with mild intellectual disabilities (MID) and examined whether these children have difficulties in the ability to represent numerical magnitudes and/or difficulties in the ability to access numerical magnitudes from formal symbols. We compared the performance of 26 children…
Symbolic Magnitude Modulates Perceptual Strength in Binocular Rivalry
Paffen, Chris L. E.; Plukaard, Sarah; Kanai, Ryota
2011-01-01
Basic aspects of magnitude (such as luminance contrast) are directly represented by sensory representations in early visual areas. However, it is unclear how symbolic magnitudes (such as Arabic numerals) are represented in the brain. Here we show that symbolic magnitude affects binocular rivalry: perceptual dominance of numbers and objects of…
48 CFR 1852.236-74 - Magnitude of requirement.
2010-10-01
48 Federal Acquisition Regulations System 6 (2010-10-01). Magnitude of requirement, 1852.236-74. As prescribed in 1836.570(d), insert the following provision: Magnitude of Requirement (DEC 1988) The Government estimated price range of this project is...
Hirose, Hideo
1998-01-01
TYPES OF THE DISTRIBUTION: Normal distribution (2-parameter); Uniform distribution (2-parameter); Exponential distribution (2-parameter); Weibull distribution (2-parameter); Gumbel distribution (2-parameter); Weibull/Frechet distribution (3-parameter); Generalized extreme-value distribution (3-parameter); Gamma distribution (3-parameter); Extended Gamma distribution (3-parameter); Log-normal distribution (3-parameter); Extended Log-normal distribution (3-parameter); Generalized ...
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize the mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
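The entropy-based mutual information estimate underlying such a regularizer can be sketched with a plain histogram estimator, I(R; Y) = H(R) + H(Y) - H(R, Y), on synthetic responses (this is an illustration of the quantity being maximized, not the paper's estimator or optimization scheme):

```python
import numpy as np

def mutual_information(responses, labels, bins=10):
    """Histogram estimate of I(R; Y) = H(R) + H(Y) - H(R, Y) in nats,
    for continuous responses R and integer class labels Y."""
    edges = np.histogram_bin_edges(responses, bins=bins)
    r_idx = np.clip(np.digitize(responses, edges) - 1, 0, bins - 1)
    joint = np.zeros((bins, int(labels.max()) + 1))
    for r, y in zip(r_idx, labels):
        joint[r, y] += 1
    joint /= joint.sum()

    def h(p):                      # Shannon entropy, ignoring empty cells
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return h(joint.sum(axis=1)) + h(joint.sum(axis=0)) - h(joint.ravel())

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 2000)
informative = y + rng.normal(0, 0.3, 2000)   # responses correlated with the label
noise = rng.normal(0, 1.0, 2000)             # responses independent of the label
print(mutual_information(informative, y) > mutual_information(noise, y))  # True
```

A regularizer of this form rewards classifiers whose responses behave like the `informative` case, i.e. carry as much information about the label as possible.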
Brorholt, Grete
"Fit for work - Attraktiv sundhed og sikkerhed på en hospitalsafdeling i Region Hovedstaden" undersøger hvorledes sundhedsvæsenets forandringer påvirker medarbejdere, ledere og organisation. Udgangspunktet for afhandlingen er en interesse for psykisk arbejdsmiljø, og hvordan reformerne i kølvandet...... også belastende arbejdsmiljø. Afhandlingen er baseret på 8 måneders deltagerobservation og interviews på en anæstesiologisk afdeling kombineret med omfattende dokumentlæsning....
Frühwirth, R; Vanlaer, Pascal
2007-01-01
Vertex fitting frequently has to deal with both mis-associated tracks and mis-measured track errors. A robust, adaptive method is presented that is able to cope with contaminated data. The method is formulated as an iterative re-weighted Kalman filter. Annealing is introduced to avoid local minima in the optimization. For the initialization of the adaptive filter a robust algorithm is presented that turns out to perform well in a wide range of applications. The tuning of the annealing schedule and of the cut-off parameter is described, using simulated data from the CMS experiment. Finally, the adaptive property of the method is illustrated in two examples.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Detected fluctuations in SDSS LRG magnitudes: Bulk flow signature or systematic?
Abate, Alexandra
2011-01-01
In this paper we search for the signature of a large scale bulk flow by looking for fluctuations in the magnitudes of distant LRGs. We take a sample of LRGs from the Sloan Digital Sky Survey with redshifts z>0.08 over a contiguous area of sky. Neighboring LRG magnitudes are averaged together to find the fluctuation in magnitudes as a function of right ascension. The result is a fluctuation of a few percent in flux across roughly 100 degrees. The source of this fluctuation could be a large scale bulk flow, a systematic in our treatment of the data set, or the data set itself. A bulk flow model is fitted to the observed fluctuation, and the three bulk flow parameters, its direction and magnitude (alpha_b, delta_b, v_b), are constrained. We find that the bulk flow direction is consistent with the direction found by other authors, with alpha_b~180, delta_b~-50. The bulk flow magnitude, however, was found to be anomalously large, with v_b>4000 km/s. The LRG angular selection function cannot be sufficiently taken into accou...
Haptic perception of force magnitude and its relation to postural arm dynamics in 3D.
van Beek, Femke E; Bergmann Tiest, Wouter M; Mugge, Winfred; Kappers, Astrid M L
2015-12-08
In a previous study, we found the perception of force magnitude to be anisotropic in the horizontal plane. In the current study, we investigated this anisotropy in three dimensional space. In addition, we tested our previous hypothesis that the perceptual anisotropy was directly related to anisotropies in arm dynamics. In experiment 1, static force magnitude perception was studied using a free magnitude estimation paradigm. This experiment revealed a significant and consistent anisotropy in force magnitude perception, with forces exerted perpendicular to the line between hand and shoulder being perceived as 50% larger than forces exerted along this line. In experiment 2, postural arm dynamics were measured using stochastic position perturbations exerted by a haptic device and quantified through system identification. By fitting a mass-damper-spring model to the data, the stiffness, damping and inertia parameters could be characterized in all the directions in which perception was also measured. These results show that none of the arm dynamics parameters were oriented either exactly perpendicular or parallel to the perceptual anisotropy. This means that endpoint stiffness, damping or inertia alone cannot explain the consistent anisotropy in force magnitude perception.
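The identification step in experiment 2 can be sketched in simplified one-dimensional form: simulate a mass-damper-spring model under a stochastic perturbation, then recover the inertia, damping, and stiffness by regressing force on position and its numerical derivatives (all parameter values here are illustrative, not the paper's, and the paper's frequency-domain system identification is replaced by a direct time-domain fit):

```python
import numpy as np

# Simulate a mass-damper-spring arm model:  F = M*x'' + B*x' + K*x
M_true, B_true, K_true = 2.0, 8.0, 400.0           # hypothetical endpoint parameters
dt, n = 0.001, 4000
t = np.arange(n) * dt
rng = np.random.default_rng(4)
F = np.interp(t, t[::40], rng.normal(0, 5, n // 40))   # smooth stochastic perturbation
x = np.zeros(n)
v = np.zeros(n)
for i in range(n - 1):                             # explicit Euler integration
    a = (F[i] - B_true * v[i] - K_true * x[i]) / M_true
    v[i + 1] = v[i] + a * dt
    x[i + 1] = x[i] + v[i] * dt

# Recover M, B, K by least squares on numerically differentiated position
xd = np.gradient(x, dt)
xdd = np.gradient(xd, dt)
A = np.column_stack([xdd, xd, x])
(M_est, B_est, K_est), *_ = np.linalg.lstsq(A, F, rcond=None)
print(f"M={M_est:.2f}  B={B_est:.2f}  K={K_est:.1f}")  # near the true 2.0, 8.0, 400.0
```

Fitting the model along different perturbation directions, as in the experiment, then yields direction-dependent stiffness, damping, and inertia estimates.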
Length of adaptive walk on uncorrelated and correlated fitness landscapes.
Seetharaman, Sarada; Jain, Kavita
2014-09-01
We consider the adaptation dynamics of an asexual population that walks uphill on a rugged fitness landscape which is endowed with a large number of local fitness peaks. We work in a parameter regime where only those mutants that are a single mutation away are accessible, as a result of which the population eventually gets trapped at a local fitness maximum and the adaptive walk terminates. We study how the number of adaptive steps taken by the population before reaching a local fitness peak depends on the initial fitness of the population, the extreme value distribution of the beneficial mutations, and correlations among the fitnesses. Assuming that the relative fitness difference between successive steps is small, we analytically calculate the average walk length for both uncorrelated and correlated fitnesses in all extreme value domains for a given initial fitness. We present numerical results for the model where the fitness differences can be large and find that the walk length behavior differs from that in the former model in the Fréchet domain of extreme value theory. We also discuss the relevance of our results to microbial experiments.
GOSSIP, a new VO compliant tool for SED fitting
Franzetti, P; Garilli, B; Fumana, M; Paioro, L
2008-01-01
We present GOSSIP (Galaxy Observed-Simulated SED Interactive Program), a new tool developed to perform SED fitting in a simple, user friendly and efficient way. GOSSIP automatically builds up the observed SED of an object (or a large sample of objects), combining magnitudes in different bands and, optionally, a spectrum; it then performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters, such as the star formation history, absolute magnitudes, and stellar mass, and their probability distribution functions. User defined models can be used, but GOSSIP is also able to load models produced by the most commonly used population synthesis codes. GOSSIP can be used interactively with other visualization tools using the PLASTIC protocol for communications. Moreover, since it has been developed with large data sets applications in mind, it will be extended to operate within the Virtual Observatory framework. GOSSIP is distributed t...
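A chi-square template fit of the kind GOSSIP performs can be sketched in miniature: for each model SED the best flux scaling has a closed form, and the template with the smallest chi-square wins (the templates and fluxes below are toy values, not GOSSIP's model sets or file formats):

```python
import numpy as np

def fit_sed(obs_flux, obs_err, templates):
    """Chi-square template fit: for each model SED, solve for the optimal
    scale factor analytically, then return the minimum-chi2 template."""
    best = None
    w = 1.0 / obs_err**2
    for name, model in templates.items():
        # d(chi2)/d(scale) = 0  =>  scale = sum(w f m) / sum(w m^2)
        scale = np.sum(w * obs_flux * model) / np.sum(w * model**2)
        chi2 = np.sum(w * (obs_flux - scale * model) ** 2)
        if best is None or chi2 < best[2]:
            best = (name, scale, chi2)
    return best

# Hypothetical fluxes in 5 bands for two toy templates
templates = {"blue": np.array([5.0, 4.0, 3.0, 2.0, 1.0]),
             "red":  np.array([1.0, 2.0, 3.0, 4.0, 5.0])}
obs = 2.0 * templates["red"] + np.array([0.1, -0.1, 0.05, 0.0, -0.05])
err = np.full(5, 0.1)
name, scale, chi2 = fit_sed(obs, err, templates)
print(name, round(scale, 2))  # red 2.0
```

Physical parameters are then read off from the winning template and its scale factor (e.g. stellar mass scales with the flux normalization).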
Kirkpatrick Mark
2005-01-01
Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce the computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given.
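The reduced-rank idea can be sketched directly: keep only the m leading eigenpairs of a covariance matrix, which cuts the parameter count from k(k+1)/2 to m(2k-m+1)/2 while retaining most of the variance (the toy matrix below is illustrative, not livestock data, and this skips the REML machinery the paper actually uses):

```python
import numpy as np

def reduced_rank_cov(G, m):
    """Keep the m leading principal components of covariance matrix G,
    returning the reduced-rank (smoothed) approximation V L V'."""
    eigval, eigvec = np.linalg.eigh(G)           # eigenvalues in ascending order
    idx = np.argsort(eigval)[::-1][:m]           # indices of the m largest
    V, L = eigvec[:, idx], eigval[idx]
    return V @ np.diag(L) @ V.T

# A toy 4-trait covariance with strong correlations: one component
# captures most of the variation, so a rank-1 fit is already close.
G = np.array([[1.0, 0.9, 0.8, 0.7],
              [0.9, 1.0, 0.9, 0.8],
              [0.8, 0.9, 1.0, 0.9],
              [0.7, 0.8, 0.9, 1.0]])
G1 = reduced_rank_cov(G, 1)
frac = np.trace(G1) / np.trace(G)
print(round(frac, 2))  # fraction of total variance in the first component
```

For k = 4 traits, the full covariance needs 10 parameters; the rank-1 fit needs only 4, matching the m(2k - m + 1)/2 count above.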
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Methodology review: evaluating person fit
Meijer, R.R.; Sijtsma, Klaas
2001-01-01
Person-fit methods based on classical test theory and item response theory (IRT), and methods investigating particular types of response behavior on tests, are examined. Similarities and differences among person-fit methods and their advantages and disadvantages are discussed. Sound person-fit
Chen, Yongkang; Weislogel, Mark; Schaeffer, Ben; Semerjian, Ben; Yang, Lihong; Zimmerli, Gregory
2012-01-01
The mathematical theory of capillary surfaces has developed steadily over the centuries, but it was not until the last few decades that new technologies have put a more urgent demand on a substantially more qualitative and quantitative understanding of phenomena relating to capillarity in general. So far, the new theory development successfully predicts the behavior of capillary surfaces for special cases. However, an efficient quantitative mathematical prediction of capillary phenomena related to the shape and stability of geometrically complex equilibrium capillary surfaces remains a significant challenge. As one of many numerical tools, the open-source Surface Evolver (SE) algorithm has played an important role over the last two decades. The current effort was undertaken to provide a front-end to enhance the accessibility of SE for the purposes of design and analysis. Like SE, the new code is open-source and will remain under development for the foreseeable future. The ultimate goal of the current Surface Evolver Fluid Interface Tool (SE-FIT) development is to build a fully integrated front-end with a set of graphical user interface (GUI) elements. Such a front-end enables access to functionalities that are developed along with the GUIs to deal with pre-processing, convergence computation operation, and post-processing. In other words, SE-FIT is not just a GUI front-end, but an integrated environment that can perform sophisticated computational tasks, e.g. importing industry standard file formats and employing parameter sweep functions, which are both lacking in SE, and require minimal interaction by the user. These functions are created using a mixture of Visual Basic and the SE script language. These form the foundation for a high-performance front-end that substantially simplifies use without sacrificing the proven capabilities of SE. The real power of SE-FIT lies in its automated pre-processing, pre-defined geometries, convergence computation operation
Exact parallel maximum clique algorithm for general and protein graphs.
Depolli, Matjaž; Konc, Janez; Rozman, Kati; Trobec, Roman; Janežič, Dušanka
2013-09-23
A new exact parallel maximum clique algorithm, MaxCliquePara, which finds the maximum clique (the largest fully connected subgraph) in undirected general and protein graphs, is presented. First, a new branch-and-bound algorithm for finding a maximum clique on a single computer core, which builds on ideas presented in two published state-of-the-art sequential algorithms, is implemented. The new sequential MaxCliqueSeq algorithm is faster than the reference algorithms on both DIMACS benchmark graphs and protein-derived product graphs used for protein structural comparisons. Next, the MaxCliqueSeq algorithm is parallelized by splitting the branch-and-bound search tree across multiple cores, resulting in the MaxCliquePara algorithm. The ability to exploit all cores efficiently makes the new parallel MaxCliquePara algorithm markedly superior to other tested algorithms. On a 12-core computer, the parallelization provides up to 2 orders of magnitude faster execution on the large DIMACS benchmark graphs and up to an order of magnitude faster execution on protein product graphs. The algorithms are freely accessible at http://commsys.ijs.si/~matjaz/maxclique.
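A minimal branch-and-bound sketch conveys the pruning idea behind such exact solvers. This is illustrative only; MaxCliqueSeq and MaxCliquePara use much stronger bounds and vertex orderings, and the toy graph is invented:

```python
def max_clique(adj):
    """Exact maximum clique by branch and bound.

    adj maps each vertex to the set of its neighbors. A branch is cut
    when the current clique plus all remaining candidates cannot beat
    the incumbent best clique.
    """
    best = []

    def expand(clique, cands):
        nonlocal best
        while cands:
            if len(clique) + len(cands) <= len(best):
                return                      # bound: cannot improve
            v = cands.pop()
            grown = clique + [v]
            if len(grown) > len(best):
                best = grown
            expand(grown, cands & adj[v])   # keep only common neighbors

    expand([], set(adj))
    return best

# Toy graph whose largest cliques have size 3 (e.g. {0, 1, 2}).
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {3}}
```

Parallelization as in the paper would hand disjoint subtrees of this search (distinct first-level choices of `v`) to separate cores.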
Wang, Dun; Kawakatsu, Hitoshi; Zhuang, Jiancang; Mori, Jim; Maeda, Takuto; Tsuruoka, Hiroshi; Zhao, Xu
2017-06-01
Fast estimates of magnitude and source extent of large earthquakes are fundamental for disaster mitigation. However, resolving these estimates within 10-20 min after origin time remains challenging. Here we propose a robust algorithm to resolve magnitude and source length of large earthquakes using seismic data recorded by regional arrays and global stations. We estimate source length and source duration by backprojecting seismic array data. Then the source duration and the maximum amplitude of the teleseismic P wave displacement waveforms are used jointly to estimate magnitude. We apply this method to 74 shallow earthquakes that occurred within epicentral distances of 30-85° to Hi-net (2004-2014). The estimated magnitudes are similar to moment magnitudes estimated from W-phase inversions (U.S. Geological Survey), with standard deviations of 0.14-0.19 depending on the global station distributions. Application of this method to multiple regional seismic arrays could benefit tsunami warning systems and emergency response to large global earthquakes.
Cardiorespiratory Fitness and Cognitive Function in Midlife: Neuroprotection or Neuroselection?
Belsky, Daniel W.; Caspi, Avshalom; Israel, Salomon; Blumenthal, James A.; Poulton, Richie; Moffitt, Terrie E.
2015-01-01
Objective To determine if better cognitive functioning at midlife among more physically fit individuals reflects “neuroprotection,” in which fitness protects against age-related cognitive decline, or “neuroselection,” in which children with higher cognitive functioning select into more active lifestyles. Methods Children in the Dunedin Longitudinal Study (N=1,037) completed the Wechsler Intelligence Scales and the Trail-Making, Rey-Delayed-Recall, and Grooved-Pegboard tasks as children and again at midlife (age-38). Adult cardiorespiratory fitness was assessed using a submaximal exercise test to estimate maximum-oxygen-consumption-adjusted-for-body-weight in milliliters/minute/kilogram (VO2max). We tested if more-fit individuals had better cognitive functioning than their less-fit counterparts (which could be consistent with neuroprotection), and if better childhood cognitive functioning predisposed to better adult cardiorespiratory fitness (neuroselection). Finally, we examined possible mechanisms of neuroselection. Results Participants with better cardiorespiratory fitness had higher cognitive test scores at midlife. However, fitness-associated advantages in cognitive functioning were present already in childhood. After accounting for childhood-baseline performance on the same cognitive tests, there was no association between cardiorespiratory fitness and midlife cognitive functioning. Socioeconomic and health advantages in childhood, and healthier lifestyles during young adulthood explained most of the association between childhood cognitive functioning and adult cardiorespiratory fitness. Interpretation We found no evidence for a neuroprotective effect of cardiorespiratory fitness as of midlife. Instead, children with better cognitive functioning are selecting into healthier lives. Fitness interventions may enhance cognitive functioning. But, observational and experimental studies testing neuroprotective effects of physical fitness should consider
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Janeras, Marc; Domènech, Guillem; Pons, Judit; Prat, Elisabet; Buxó, Pere
2016-04-01
Montserrat Massif is located about 50 km north-west of Barcelona (Catalonia, north-eastern Spain). The rock massif consists of interbedded conglomerate and fine siltstone layers deposited by the Montserrat fan-delta during the Eocene. The current relief is a consequence of several depositional episodes and later tectonic uplift, leading to stepped slopes up to 250 m high and a total height difference close to 1000 m. Montserrat Mountain has been a pilgrimage place since the settlement of the monastery, around the year 1025, and a spot of touristic interest, mostly within the last 150 years, when the first rack railway was inaugurated to reach the sanctuary. The 2.4 million visitors in 2014 reveal the potential risk derived from rockfalls. To assess and mitigate this risk, a plan funded by the Catalan government is currently under development. Three rockfall mechanisms and magnitude ranges have been identified (Janeras et al. 2011): 1) physicochemical weathering causing the detachment of pebbles and aggregates (0.0001 - 0.1 m3); 2) thermally induced stresses responsible for the generation of slabs and plates (0.1 - 10 m3); and 3) intersection of structural joints within the rock mass resulting in blocks of 10 - 10,000 m3. In order to quantify the rockfall hazard, a magnitude-frequency analysis has been performed starting from an event-based inventory gathered from field surveillance and historical research. A methodology has been applied to extract the maximum information from only 30 records with volume and date. The massif has been split into several domains with sampling homogeneity. For each domain, several time periods were defined during which all rockfall events of a given volume were recorded. Thus, the magnitude-frequency relationship, for each domain, has been calculated. Results show that the curves are well fitted by a power law with exponents ranging from -0.59 to -0.68 for magnitudes
Ramakrushna Reddy; Rajesh R Nair
2013-10-01
This work deals with a methodology applied to seismic early warning systems which are designed to provide real-time estimation of the magnitude of an event. We reappraise the work of Simons et al. (2006), who, on the basis of a wavelet approach, predicted a magnitude error of ±1. We verify and improve upon the methodology of Simons et al. (2006) by applying a support vector machine (SVM) statistical learning method to time-scale wavelet decompositions. We used the data of 108 events in central Japan with magnitude ranging from 3 to 7.4 recorded at KiK-net network stations, for a source–receiver distance of up to 150 km during the period 1998–2011. We applied a wavelet transform to the seismogram data and calculated scale-dependent threshold wavelet coefficients. These coefficients were then classified into low-magnitude and high-magnitude events by constructing a maximum-margin hyperplane between the two classes, which forms the essence of SVMs. Further, the classified events from both classes were picked up and linear regressions were fitted to determine the relationship between wavelet coefficient magnitude and earthquake magnitude, which in turn helped us to estimate the earthquake magnitude of an event given its threshold wavelet coefficient. At wavelet scale number 7, we predicted the earthquake magnitude of an event within 2.7 seconds. This means that a magnitude determination is available within 2.7 s after the initial onset of the P-wave. These results shed light on the application of SVMs as a way to choose the optimal regression function to estimate the magnitude from a few seconds of an incoming seismogram. This improves upon the approach of Simons et al. (2006), which uses an average of two regression functions to estimate the magnitude.
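The final regression step, magnitude against (log) threshold wavelet coefficient, can be sketched as below. The coefficient/magnitude pairs are invented for illustration and are not the KiK-net values:

```python
import math

# Hypothetical (threshold wavelet coefficient, catalog magnitude) pairs
# standing in for the measured data; values are invented placeholders.
data = [(0.02, 3.1), (0.08, 4.0), (0.30, 5.0), (1.10, 6.1), (3.90, 7.2)]

x = [math.log10(c) for c, _ in data]
y = [m for _, m in data]
n = len(data)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

def estimate_magnitude(coeff):
    """Magnitude from a threshold wavelet coefficient (illustrative)."""
    return slope * math.log10(coeff) + intercept
```

In the paper's scheme, the SVM first routes an event to the low- or high-magnitude class, and a class-specific regression of this form then supplies the magnitude estimate.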
Reinforcement Magnitude: An Evaluation of Preference and Reinforcer Efficacy
Trosclair-Lasserre, Nicole M.; Lerman, Dorothea C; Call, Nathan A; Addison, Laura R; Kodak, Tiffany
2008-01-01
Consideration of reinforcer magnitude may be important for maximizing the efficacy of treatment for problem behavior. Nonetheless, relatively little is known about children's preferences for different magnitudes of social reinforcement or the extent to which preference is related to differences in reinforcer efficacy. The purpose of the current study was to evaluate the relations among reinforcer magnitude, preference, and efficacy by drawing on the procedures and results of basic experimenta...
Maximum host survival at intermediate parasite infection intensities.
Martin Stjernman
BACKGROUND: Although parasitism has been acknowledged as an important selective force in the evolution of host life histories, studies of fitness effects of parasites in wild populations have yielded mixed results. One reason for this may be that most studies only test for a linear relationship between infection intensity and host fitness. If resistance to parasites is costly, however, fitness may be reduced both for hosts with low infection intensities (cost of resistance) and high infection intensities (cost of parasitism), such that individuals with intermediate infection intensities have the highest fitness. Under this scenario one would expect a non-linear relationship between infection intensity and fitness. METHODOLOGY/PRINCIPAL FINDINGS: Using data from blue tits (Cyanistes caeruleus) in southern Sweden, we investigated the relationship between the intensity of infection by its blood parasite (Haemoproteus majoris) and host survival to the following winter. Presence and intensity of parasite infections were determined by microscopy and confirmed using PCR of a 480 bp section of the cytochrome-b gene. While a linear model suggested no relationship between parasite intensity and survival (F = 0.01, p = 0.94), a non-linear model showed a significant negative quadratic effect (quadratic parasite intensity: F = 4.65, p = 0.032; linear parasite intensity: F = 4.47, p = 0.035). Visualization using the cubic spline technique showed maximum survival at intermediate parasite intensities. CONCLUSIONS/SIGNIFICANCE: Our results indicate that failing to recognize the potential for a non-linear relationship between parasite infection intensity and host fitness may lead to the potentially erroneous conclusion that the parasite is harmless to its host. Here we show that high parasite intensities indeed reduced survival, but this effect was masked by reduced survival for birds heavily suppressing their parasite intensities. Reduced survival among hosts with low
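The shape argument is easy to make concrete: for a fitted quadratic survival model with a negative quadratic term, survival peaks at an interior infection intensity. The coefficients below are illustrative, not the estimates from the blue tit data:

```python
# Fitted quadratic survival model s(x) = b2*x**2 + b1*x + b0 on
# infection intensity x; b2 < 0 yields an interior optimum.
# Coefficient values are invented for illustration.
b2, b1, b0 = -0.002, 0.04, 0.4

x_star = -b1 / (2.0 * b2)                  # intensity maximizing survival
s_max = b2 * x_star ** 2 + b1 * x_star + b0
s_zero = b0                                # survival at zero intensity
```

A straight-line fit to such data has slope near zero and would wrongly suggest the parasite is harmless, which mirrors the authors' point about testing only for linear effects.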
Realization of Quadrature Signal Generator Using Accurate Magnitude Integrator
Xin, Zhen; Yoon, Changwoo; Zhao, Rende
2016-01-01
-signal parameters, especially when a fast response is required for usages such as grid synchronization. As a result, the parameter design of the SOGI-QSG becomes complicated. Theoretical analysis shows that this is caused by the inaccurate magnitude-integration characteristic of the SOGI-QSG. To solve this problem..., an Accurate-Magnitude-Integrator based QSG (AMI-QSG) is proposed. The AMI has an accurate magnitude-integration characteristic for the sinusoidal signal, which gives the AMI-QSG a more accurate First-Order-System (FOS) characteristic in terms of magnitude than the SOGI-QSG. The parameter design process...
Evaluation of alternatives for best-fit paraboloid for deformed antenna surfaces
Baruch, Menahem; Haftka, Raphael T.
1990-01-01
Paraboloid antenna surfaces suffer performance degradation due to structural deformation. A first step in the prediction of the performance degradation is to find the best-fit paraboloid to the deformed surface. Examined here is the question of whether rigid body translations perpendicular to the axis of the paraboloid should be included in the search for the best-fit paraboloid. It is shown that if these translations are included the problem is ill-conditioned, and small structural deformation can result in large translations of the best-fit paraboloid with respect to the original surface. The magnitude of these translations then requires nonlinear analysis for finding the best-fit paraboloid. On the other hand, if these translations are excluded, or if they are limited in magnitude, the errors with respect to the restricted not-so-best-fit paraboloid can be much greater than the errors with respect to the true best-fit paraboloid.
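With lateral translations excluded, the best-fit axial paraboloid reduces to a single linear least-squares parameter, as the following sketch shows (the geometry and sampling grid are invented, and real antenna fits include more degrees of freedom such as axial shift and tilt):

```python
def best_fit_axial_paraboloid(points):
    """Least-squares fit of z = c * (x**2 + y**2), translations excluded.

    With lateral rigid-body translations ruled out, the fit is linear in
    the single parameter c: c = sum(r2 * z) / sum(r2**2), r2 = x^2 + y^2.
    """
    num = sum((x * x + y * y) * z for x, y, z in points)
    den = sum((x * x + y * y) ** 2 for x, y, z in points)
    return num / den

# Sample an undeformed paraboloid z = 0.25 * r^2 (focal length 1/(4c) = 1).
pts = [(x / 4.0, y / 4.0, 0.25 * ((x / 4.0) ** 2 + (y / 4.0) ** 2))
       for x in range(-4, 5) for y in range(-4, 5)]
c = best_fit_axial_paraboloid(pts)
focal_length = 1.0 / (4.0 * c)
```

Allowing lateral translations would add parameters that couple nonlinearly with c, which is exactly the ill-conditioning the abstract warns about.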
Maximum Likelihood Joint Tracking and Association in Strong Clutter
Leonid I. Perlovsky
2013-01-01
We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed in case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.
Locomotion Strategy and Magnitude of Ground Reaction Forces During Treadmill Training on ISS.
Fomina, Elena; Savinkina, Alexandra
2017-09-01
The design of cosmonaut in-flight physical training is currently based on the leading role of support afferentation in the development of hypogravity-induced changes in the motor system. We assume that the strength of support afferentation is related to the magnitude of the ground reaction forces (GRF). For this purpose it was necessary to compare GRF magnitudes on the Russian BD-2 treadmill for different locomotion types (walking and running), modes (active and passive), and subjects. Relative GRF values were analyzed while subjects performed walking and running during active and passive modes of treadmill belt movement under 1 G (N = 6) and 0 G (N = 4) conditions. For different BD-2 modes and both types of locomotion, maximum GRF values varied in both 0 G and 1 G. Considerable individual variations were also found in the locomotion strategies, as well as in maximum GRF values. In 0 G, the smallest GRF values were observed for walking in active mode, and the largest during running in passive mode. In 1 G, GRF values were higher during running than while walking, but the difference between active and passive modes was not observed; we assume this was due to the uniqueness of the GRF profile. The maximum GRF recorded during walking and running in active and passive modes depended on the individual pattern of locomotion. The maximum GRF values that we recorded on BD-2 were close to values found by other researchers. The observations from this study could guide individualized countermeasure prescriptions for microgravity. Fomina E, Savinkina A. Locomotion strategy and magnitude of ground reaction forces during treadmill training on ISS. Aerosp Med Hum Perform. 2017; 88(9):841-849.
Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data
Agnese, R; Balakishiyeva, D; Thakur, R Basu; Bauer, D A; Billard, J; Borgland, A; Bowles, M A; Brandt, D; Brink, P L; Bunker, R; Cabrera, B; Caldwell, D O; Cerdeno, D G; Chagani, H; Chen, Y; Cooley, J; Cornell, B; Crewdson, C H; Cushman, P; Daal, M; Di Stefano, P C F; Doughty, T; Esteban, L; Fallows, S; Figueroa-Feliciano, E; Fritts, M; Godfrey, G L; Golwala, S R; Graham, M; Hall, J; Harris, H R; Hertel, S A; Hofer, T; Holmgren, D; Hsu, L; Huber, M E; Jastram, A; Kamaev, O; Kara, B; Kelsey, M H; Kennedy, A; Kiveni, M; Koch, K; Leder, A; Loer, B; Asamar, E Lopez; Mahapatra, R; Mandic, V; Martinez, C; McCarthy, K A; Mirabolfathi, N; Moffatt, R A; Moore, D C; Nelson, R H; Oser, S M; Page, K; Page, W A; Partridge, R; Pepin, M; Phipps, A; Prasad, K; Pyle, M; Qiu, H; Rau, W; Redl, P; Reisetter, A; Ricci, Y; Rogers, H E; Saab, T; Sadoulet, B; Sander, J; Schneck, K; Schnee, R W; Scorza, S; Serfass, B; Shank, B; Speller, D; Upadhyayula, S; Villano, A N; Welliver, B; Wright, D H; Yellin, S; Yen, J J; Young, B A; Zhang, J
2014-01-01
We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search (CDMS~II) experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from $^{210}$Pb decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.
4D MR phase and magnitude segmentations with GPU parallel computing.
Bergen, Robert V; Lin, Hung-Yu; Alexander, Murray E; Bidinosti, Christopher P
2015-01-01
The increasing size and number of data sets of large four-dimensional (three spatial, one temporal) magnetic resonance (MR) cardiac images necessitate efficient segmentation algorithms. Analysis of phase-contrast MR images yields cardiac flow information which can be manipulated to produce accurate segmentations of the aorta. Phase-contrast segmentation algorithms are proposed that use simple mean-based calculations and least mean squared curve fitting techniques. The initial segmentations are generated on a multi-threaded central processing unit (CPU) in 10 seconds or less, though the computational simplicity of the algorithms results in a loss of accuracy. A more complex graphics processing unit (GPU)-based algorithm fits flow data to Gaussian waveforms, and produces an initial segmentation in 0.5 seconds. Level sets are then applied to a magnitude image, where the initial conditions are given by the previous CPU and GPU algorithms. A comparison of results shows that the GPU algorithm appears to produce the most accurate segmentation.
An empirical evolutionary magnitude estimation for earthquake early warning
Wu, Yih-Min; Chen, Da-Yi
2016-04-01
For an earthquake early warning (EEW) system, it is difficult to accurately estimate earthquake magnitude in the early nucleation stage of an earthquake occurrence because only a few stations are triggered and the recorded seismic waveforms are short. One feasible method to measure the size of earthquakes is to extract amplitude parameters within the initial portion of the waveform after P-wave arrival. However, a large-magnitude earthquake (Mw > 7.0) may take a longer time to complete the whole rupture of the causative fault. Instead of adopting amplitude contents in a fixed-length time window, which may underestimate magnitude for large events, we propose a fast, robust, and non-saturating approach to estimate earthquake magnitudes. In this new method, the EEW system initially gives a lower-bound magnitude within a time window of a few seconds and then updates the magnitude without saturation by extending the time window. Here we compared two kinds of time windows for adopting amplitudes: one is the pure P-wave time window (PTW); the other is the whole-wave time window after P-wave arrival (WTW). The peak displacement amplitudes in the vertical component were adopted from 1- to 10-s length PTW and WTW, respectively. Linear regression analyses were implemented to find the empirical relationships between peak displacement, hypocentral distance, and magnitude using the earthquake records from 1993 to 2012 with magnitude greater than 5.5 and focal depth less than 30 km. The results show that using the WTW to estimate magnitudes yields a smaller standard deviation. In addition, large uncertainties exist in the 1-s time window. Therefore, for magnitude estimation we suggest the EEW system progressively adopt peak displacement amplitudes from 2- to 10-s WTW.
Quantifying in situ stress magnitudes and orientations for Forsmark. Forsmark stage 2.2
Martin, C. Derek (Univ. of Alberta (Canada))
2007-11-15
Stephansson et al. concluded that in the Fennoscandian shield: (1) there is a large horizontal stress component in the uppermost 1,000 m of bedrock, and (2) the maximum and minimum horizontal stresses exceed the vertical stress, assuming the vertical stress is estimated from the weight of the overburden. Several stress campaigns involving both overcoring and hydraulic fracturing, including the hydraulic testing of pre-existing fractures (HTPF), have been carried out at Forsmark to establish the in situ stress state. The results from the initial campaigns were summarised by Sjoeberg et al., which formed the basis for the stresses provided in the Site Descriptive Model version 1.2. Since then additional stress measurement campaigns have been completed. The results from these stress measurement campaigns support the conclusions of Stephansson et al. In addition to these in situ stress measurements, the following studies were undertaken to aid in assessing the stress state at Forsmark. 1. A detailed televiewer survey of approximately 6,900 m of borehole walls to depths of 1,000 m was carried out to assess borehole wall damage, i.e. borehole breakouts. 2. Evaluation of nonlinear strains in laboratory samples to depths of approximately 800 m to assess whether stress magnitudes were sufficient to create stress-induced microcracking. 3. Assessment of the magnitudes required to cause core disking and a survey of core disking observed at Forsmark. The magnitudes and orientations from the stress measurement campaigns were analysed to establish the most likely stress magnitudes and orientations for Design Step D2 within the Target Area of the Complete Site Investigations. The maximum and minimum horizontal stress components are essentially the same as the maximum and intermediate principal stresses, sigma1 and sigma2, respectively. The minimum principal stress (sigma3) is synonymous with the vertical stress. The most likely range in values to be used in the design is also
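The overburden reference stress mentioned above is a one-line calculation, sigma_v = rho * g * z. The density used here is a typical crystalline-rock assumption, not a Forsmark-specific figure:

```python
# Vertical stress from the weight of overburden: sigma_v = rho * g * z.
# Density is a generic crystalline-rock assumption, not a site value.
RHO = 2700.0   # rock density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def vertical_stress_mpa(depth_m):
    """Overburden vertical stress in MPa at a given depth in metres."""
    return RHO * G * depth_m / 1e6   # Pa -> MPa

sigma_v_500 = vertical_stress_mpa(500.0)   # ~13.2 MPa at 500 m depth
```

Measured horizontal stresses exceeding this reference at a given depth is exactly the condition (2) that Stephansson et al. describe for the shield.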
Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups
Anonymous
2007-01-01
New distributions of the statistics of wave groups based on the maximum entropy principle are presented. The maximum entropy distributions appear to be superior to conventional distributions when applied to a limited amount of information. Application to wave group properties shows the effectiveness of the maximum entropy distribution. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.
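The core construction, a distribution of maximum entropy subject to a moment constraint, is an exponential family whose Lagrange multiplier can be found by bisection. This generic discrete sketch is not the paper's wave-group distribution; the support and target mean are invented:

```python
import math

def maxent_pmf(values, target_mean, lo=-50.0, hi=50.0, iters=100):
    """Maximum-entropy pmf on `values` with a prescribed mean.

    The constrained entropy maximizer is the exponential family
    p_i proportional to exp(-lam * v_i); the multiplier lam is
    located by bisection, since the mean is decreasing in lam.
    """
    def mean_at(lam):
        w = [math.exp(-lam * v) for v in values]
        return sum(v * x for v, x in zip(values, w)) / sum(w)

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_at(mid) > target_mean:
            lo = mid                      # need a larger multiplier
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * v) for v in values]
    z = sum(w)
    return [x / z for x in w]

p = maxent_pmf([1, 2, 3, 4, 5], 2.0)
```

With more constraints (e.g. fixed variance) the same scheme generalizes to several multipliers, which is how richer maximum entropy distributions for wave-group statistics are built.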
Magnitude Knowledge: The Common Core of Numerical Development
Siegler, Robert S.
2016-01-01
The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: 1) representing increasingly precisely the magnitudes of non-symbolic…
Some Effects of Magnitude of Reinforcement on Persistence of Responding
McComas, Jennifer J.; Hartman, Ellie C.; Jimenez, Angel
2008-01-01
The influence of magnitude of reinforcement was examined on both response rate and behavioral persistence. During Phase 1, a multiple schedule of concurrent reinforcement was implemented in which reinforcement for one response option was held constant at VI 30 s across both components, while magnitude of reinforcement for the other response option…
Fitting Equilibrium Search Models to Labour Market Data
Bowlus, Audra J.; Kiefer, Nicholas M.; Neumann, George R.
1996-01-01
Specification and estimation of a Burdett-Mortensen type equilibrium search model is considered. The estimation is nonstandard. An estimation strategy asymptotically equivalent to maximum likelihood is proposed and applied. The results indicate that specifications with a small number of productivity types fit the data well compared to the homogeneous model.
Fitting ARMA Time Series by Structural Equation Models.
van Buuren, Stef
1997-01-01
This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)
Rapid Earthquake Magnitude Estimation for Early Warning Applications
Goldberg, Dara; Bock, Yehuda; Melgar, Diego
2017-04-01
Earthquake magnitude is a concise metric that provides invaluable information about the destructive potential of a seismic event. Rapid estimation of magnitude for earthquake and tsunami early warning purposes requires reliance on near-field instrumentation. For large magnitude events, ground motions can exceed the dynamic range of near-field broadband seismic instrumentation (clipping). Strong motion accelerometers are designed with low gains to better capture strong shaking. Estimating earthquake magnitude rapidly from near-source strong-motion data requires integration of acceleration waveforms to displacement. However, integration amplifies small errors, creating unphysical drift that must be eliminated with a high pass filter. The loss of the long period information due to filtering is an impediment to magnitude estimation in real-time; the relation between ground motion measured with strong-motion instrumentation and magnitude saturates, leading to underestimation of earthquake magnitude. Using station displacements from Global Navigation Satellite System (GNSS) observations, we can supplement the high frequency information recorded by traditional seismic systems with long-period observations to better inform rapid response. Unlike seismic-only instrumentation, ground motions measured with GNSS scale with magnitude without saturation [Crowell et al., 2013; Melgar et al., 2015]. We refine the current magnitude scaling relations using peak ground displacement (PGD) by adding a large GNSS dataset of earthquakes in Japan. Because it does not suffer from saturation, GNSS alone has significant advantages over seismic-only instrumentation for rapid magnitude estimation of large events. The earthquake's magnitude can be estimated within 2-3 minutes of earthquake onset time [Melgar et al., 2013]. We demonstrate that seismogeodesy, the optimal combination of GNSS and seismic data at collocated stations, provides the added benefit of improving the sensitivity of
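Scaling relations of this family have the general form log10(PGD) = A + B*M + C*M*log10(R), which inverts directly for magnitude. The coefficient values below are illustrative placeholders, not the regression values refined in the study:

```python
import math

# Saturation-free PGD scaling: log10(PGD) = A + B*M + C*M*log10(R),
# with PGD in cm and hypocentral distance R in km. The coefficients
# are illustrative placeholders, not the published regression values.
A, B, C = -4.434, 1.047, -0.138

def magnitude_from_pgd(pgd_cm, dist_km):
    """Invert the PGD scaling law for magnitude M."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(dist_km))

m_est = magnitude_from_pgd(10.0, 100.0)   # roughly M7 for 10 cm at 100 km
```

Because GNSS displacement does not clip or saturate, this inversion keeps growing with event size, which is why PGD-based estimates avoid the underestimation that filtered strong-motion records suffer.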
LONG Jiangping
2017-01-01
The complex coherence of polarimetric synthetic aperture radar interferometry (PolInSAR) comprises a magnitude and a phase. The coherence magnitude measures the quality of the interferometric phase, while the phase center represents the position of the scattering, so improving the accuracy of both the coherence magnitude and the phase is important for forest parameter inversion. Maximizing the difference of the coherence magnitude, or the separation of the phase, based on the coherence region uses only partial information of the complex coherence. In this paper, a new coherence optimization method that combines the coherence magnitude and phase information is established using relational degree. Applying the new approach, the optimal polarimetric state of the scattering can be obtained and the optimized coherence estimated. Experimental results show that the joint magnitude-phase optimization criterion can effectively distinguish the phase centers of surface scattering and the forest canopy, and improves the reliability of forest height inversion.
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments on both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
Mandel, Kaisey S.; Scolnic, Daniel M.; Shariff, Hikmatali; Foley, Ryan J.; Kirshner, Robert P.
2017-06-01
Conventional Type Ia supernova (SN Ia) cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (M_B versus B − V) slope β_int differs from the host galaxy dust law R_B, this convolution results in a specific curve of mean extinguished absolute magnitude versus apparent color. The derivative of this curve smoothly transitions from β_int in the blue tail to R_B in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope β_app between β_int and R_B. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a data set of SALT2 optical light curve fits of 248 nearby SNe Ia, for which a conventional linear fit gives β_app ≈ 3. Our model finds β_int = 2.3 ± 0.3 and a distinct dust law of R_B = 3.8 ± 0.3, consistent with the average for Milky Way dust, while correcting a systematic distance bias of ∼0.10 mag in the tails of the apparent color distribution. Finally, we extend our model to examine the SN Ia luminosity-host mass dependence in terms of intrinsic and dust components.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented as a way to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
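The Levinson recursion mentioned above, which solves the Toeplitz normal equations for the error-predicting filter, can be sketched as follows. This is a generic textbook implementation, not the authors' code:

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter
    of the given order via the Levinson recursion.

    r : autocorrelation sequence r[0], r[1], ..., r[order]
    Returns (a, ks): prediction coefficients and reflection coefficients.
    """
    a = [0.0] * order
    ks = []
    err = r[0]                       # prediction-error power
    for i in range(order):
        # innovation: lag-(i+1) correlation minus what the current filter explains
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / err                # reflection coefficient, |k| < 1 for stability
        ks.append(k)
        a_new = a[:]
        a_new[i] = k
        for j in range(i):           # Levinson update of earlier coefficients
            a_new[j] = a[j] - k * a[i - 1 - j]
        a = a_new
        err *= (1.0 - k * k)         # error power shrinks at each order
    return a, ks

# For an AR(1) process x_t = 0.6 x_{t-1} + noise, the normalized
# autocorrelation is 0.6**k, and the recursion recovers the coefficient.
r = [0.6 ** k for k in range(4)]
a, ks = levinson_durbin(r, 3)
print(a)                             # ≈ [0.6, 0.0, 0.0]
print(all(abs(k) < 1 for k in ks))   # reflection coefficients below 1 in magnitude
```

The shrinking error power and bounded reflection coefficients are what the abstract refers to when it says the extrapolation remains stable.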
Maximum speeds and alpha angles of flowing avalanches
McClung, David; Gauer, Peter
2016-04-01
A flowing avalanche is one which initiates as a slab and, if it consists of dry snow, becomes enveloped in a turbulent snow dust cloud once the speed reaches about 10 m/s. A flowing avalanche has a dense core of flowing material which dominates the dynamics by serving as the driving force for downslope motion. The flow thickness is typically on the order of 1-10 m, which is about 1% of the length of the flowing mass. We have collected estimates of maximum frontal speed um (m/s) from 118 avalanche events. The analysis is given here with the aim of scaling the maximum speed with some measure of the terrain over which the avalanches ran. We have chosen two such measures, following McClung (1990), McClung and Schaerer (2006) and Gauer (2012): √H0 and √S0, the square roots of the total vertical drop and the total path length traversed. Our data consist of 118 avalanches with H0 (m) estimated and 106 with S0 (m) estimated. Of these, 29 have H0 (m), S0 (m) and um (m/s) estimated accurately, with the avalanche speeds measured all or nearly all along the path. The remainder of the data set includes approximate estimates of um from timing the avalanche motion over a known section of the path where approximately maximum speed is expected, with either H0 or S0 or both estimated. Our analysis consists of fitting the values of um/√H0 and um/√S0 to probability density functions (pdf) to estimate the exceedance probability of the scaled ratios. In general, the larger data sets were best fit by a beta pdf, and the subset of 29 by a shifted log-logistic (s l-l) pdf. These determinations resulted from fitting the values to 60 different pdfs under five goodness-of-fit criteria: three goodness-of-fit statistics, K-S (Kolmogorov-Smirnov), A-D (Anderson-Darling) and C-S (Chi-squared), plus probability plots (P-P) and quantile plots (Q-Q). For less than 10% probability of exceedance the results show that
An empirical evolutionary magnitude estimation for early warning of earthquakes
Chen, Da-Yi; Wu, Yih-Min; Chin, Tai-Lin
2017-03-01
An earthquake early warning (EEW) system has difficulty providing consistent magnitude estimates in the early stage of an earthquake, because only a few stations are triggered and few seismic signals have been recorded. One feasible method to measure the size of an earthquake is to extract amplitude parameters from the initial portion of the recorded waveforms after the P-wave arrival. However, for a large-magnitude earthquake (Mw > 7.0), the time to complete the whole rupture of the corresponding fault may be very long, and the magnitude may not be correctly predicted from the initial portion of the seismograms. To estimate the magnitude of a large earthquake in real time, the amplitude parameters should be updated with the ongoing waveforms instead of being taken from a predefined fixed-length time window, which may underestimate the magnitude of large events. In this paper, we propose a fast, robust and less-saturated approach to estimate earthquake magnitudes: the EEW system initially gives a lower bound on the magnitude from a time window of a few seconds, and then updates the magnitude, with less saturation, by extending the time window. We compared two kinds of time windows for measuring amplitudes: the P-wave time window (PTW) after the P-wave arrival, and the whole-wave time window (WTW) after the P-wave arrival, which may include both P and S waves. Time windows of one to ten seconds for both PTW and WTW are considered to measure the peak ground displacement from the vertical component of the waveforms. Linear regression analyses are run at each time step (1- to 10-s intervals) to find empirical relationships among peak ground displacement, hypocentral distance, and magnitude, using earthquake records from 1993 to 2012 in Taiwan with magnitudes greater than 5.5 and focal depths less than 30 km. The results show that magnitudes estimated with the WTW have a smaller standard deviation than those from the PTW. The
The empirical formula determination of local magnitude for North Moluccas region
Kamaruddin, Basri; Suardi, Iman; Heryandoko, Nova; Bunaga, I. Gusti Ketut Satria
2016-05-01
The energy of local and regional earthquakes is usually expressed by local magnitude, which is also useful for seismic hazard assessment. The aims of this study are to determine the empirical formula of local magnitude and the distance correction function, −log A0, for the North Moluccas region. This study used waveform data from the MCGA seismic network located around the North Moluccas region. We collected 148 maximum-amplitude measurements from 40 earthquake events recorded by 6 seismometers between December 1, 2013 and January 31, 2014, with hypocentral distances from 25 km to 550 km and depths below 70 km. The results are the empirical local magnitude formula, ML = log A + 0.651 log R + 0.0037R + 1.3568, and the distance correction function, −log A0 = 0.651 log R + 0.0037R + 1.3568. We also found station correction values for the GLMI, LBMI, MNI, SANI, TMSI, and TNTI seismic stations of -0.057, -0.216, -0.322, 0.088, -0.494, and 0.180, respectively. Positive station-correction values indicate low amplification, and negative values high amplification. The distance correction function of the North Moluccas region is similar to that of the Central California region, implying that the attenuation characteristics of the two regions are similar.
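The empirical formula above can be evaluated directly. Note two assumptions in this sketch: the operator before 1.3568 is garbled in the source and is taken here to be a plus sign (which makes −log A0 ≈ 3.0 at R = 100 km, consistent with the stated similarity to Central California anchoring conventions), and the amplitude A is assumed to be in whatever units the study's instruments report:

```python
import math

# Sketch of the local magnitude formula reported for the North Moluccas:
#   ML = log10(A) + 0.651*log10(R) + 0.0037*R + 1.3568 + S
# A: maximum amplitude (units per the study), R: hypocentral distance (km),
# S: station correction. The "+" before 1.3568 is a reconstruction; the
# operator is garbled in the source text.
STATION_CORRECTIONS = {  # values quoted in the abstract
    "GLMI": -0.057, "LBMI": -0.216, "MNI": -0.322,
    "SANI": 0.088, "TMSI": -0.494, "TNTI": 0.180,
}

def local_magnitude(amp, r_km, station=None):
    s = STATION_CORRECTIONS.get(station, 0.0)
    return math.log10(amp) + 0.651 * math.log10(r_km) + 0.0037 * r_km + 1.3568 + s

# amplitude 100 units at 100 km recorded at TNTI
print(round(local_magnitude(100.0, 100.0, "TNTI"), 3))  # 5.209
```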
Magnitude and frequency of heat and cold waves in recent decades: the case of South America
G. Ceccherini
2015-12-01
In recent decades there has been an increase in the magnitude and occurrence of heat waves and a decrease of cold waves, possibly related to anthropogenic influence (Solomon et al., 2007). This study describes the extreme temperature regime of heat waves and cold waves across South America over recent years (1980–2014). Temperature records come from the Global Surface Summary of the Day (GSOD), a climatological dataset produced by the National Climatic Data Center that provides records of daily maximum and minimum temperatures acquired worldwide. The magnitude of heat waves and cold waves for each GSOD station is quantified on an annual basis by means of the Heat Wave Magnitude Index (Russo et al., 2014) and the Cold Wave Magnitude Index (CWMI; Forzieri et al., 2015). Results indicate an increase in the intensity and frequency of heat waves, with up to 75% more events occurring in the last 10 years alone. Conversely, no significant changes are detected for cold waves. In addition, the trend of the annual temperature range (i.e., the yearly mean of Tmax minus the yearly mean of Tmin) is positive (up to 1 °C decade−1) over the extra-tropics and negative (up to 0.5 °C decade−1) over the tropics. This dichotomous behaviour indicates that the annual mean of Tmax is generally increasing more than the annual mean of Tmin in the extra-tropics, and vice versa in the tropics.
Lee, Dasom; Chun, Joohyung; Cho, Soohyun
2016-05-01
The Spatial-Numerical Association of Response Codes (SNARC) effect refers to the phenomenon that small versus large numbers are responded to faster in the left versus right side of space, respectively. Using a pairwise comparison task, Shaki et al. found that task instruction influences the pattern of SNARC effects for certain types of magnitudes that are less rigid in their space-magnitude association. The present study examined the generalizability of this instruction effect using pairwise comparison of nonsymbolic and symbolic stimuli within a wide range of magnitudes. We contrasted performance between trials in which subjects were instructed to select the stimulus representing the smaller versus the larger magnitude within each pair. We found an instruction-dependent pattern of SNARC effects for both nonsymbolic and symbolic magnitudes. Specifically, we observed a SNARC effect for the "Select Smaller" instruction, but a reverse SNARC effect for the "Select Larger" instruction. Considered together with previous studies, our findings suggest that nonsymbolic magnitudes and relatively large symbolic magnitudes have greater flexibility in their space-magnitude association.
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
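The differentiation procedure described above can be sketched with a single-diode cell model, I(V) = Isc − I0(exp(V/Vt) − 1). Setting dP/dV = 0 for P = VI gives a transcendental equation, so this sketch locates the maximum-power voltage numerically; all parameter values are illustrative assumptions, not from the article:

```python
import math

# Illustrative single-diode solar cell parameters (not from the article):
ISC, I0, VT = 3.0, 1e-9, 0.05  # short-circuit current (A), saturation current (A), thermal voltage (V)

def current(v):
    """Cell current from the single-diode model I(V) = Isc - I0*(exp(V/Vt) - 1)."""
    return ISC - I0 * math.expm1(v / VT)

def power(v):
    return v * current(v)

# dP/dV = 0 defines the maximum-power voltage; since the resulting equation
# is transcendental, locate the maximum with a fine grid scan over (0, Voc).
voc = VT * math.log(ISC / I0 + 1)           # open-circuit voltage, where I = 0
vs = [voc * i / 100000 for i in range(100001)]
v_mp = max(vs, key=power)
print(v_mp, current(v_mp), power(v_mp))     # voltage, current, and maximum power
```

Repeating this for the irradiance-dependent Isc at each time of day yields the curves of maximum-power voltage, current, and power described in the abstract.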
McCaskey, Ursina; von Aster, Michael; O’Gorman Tuura, Ruth; Kucian, Karin
2017-01-01
The link between number and space has been discussed in the literature for some time, resulting in the theory that number, space and time might be part of a generalized magnitude system. To date, several behavioral and neuroimaging findings support the notion of a generalized magnitude system, although contradictory results showing a partial overlap or separate magnitude systems are also found. The possible existence of a generalized magnitude processing area leads to the question how individuals with developmental dyscalculia (DD), known for deficits in numerical-arithmetical abilities, process magnitudes. By means of neuropsychological tests and functional magnetic resonance imaging (fMRI) we aimed to examine the relationship between number and space in typical and atypical development. Participants were 16 adolescents with DD (14.1 years) and 14 typically developing (TD) peers (13.8 years). In the fMRI paradigm participants had to perform discrete (arrays of dots) and continuous magnitude (angles) comparisons as well as a mental rotation task. In the neuropsychological tests, adolescents with dyscalculia performed significantly worse in numerical and complex visuo-spatial tasks. However, they showed similar results to TD peers when making discrete and continuous magnitude decisions during the neuropsychological tests and the fMRI paradigm. A conjunction analysis of the fMRI data revealed commonly activated higher order visual (inferior and middle occipital gyrus) and parietal (inferior and superior parietal lobe) magnitude areas for the discrete and continuous magnitude tasks. Moreover, no differences were found when contrasting both magnitude processing conditions, favoring the possibility of a generalized magnitude system. Group comparisons further revealed that dyscalculic subjects showed increased activation in domain general regions, whilst TD peers activate domain specific areas to a greater extent. In conclusion, our results point to the existence of a
Quantifying Heartbeat Dynamics by Magnitude and Sign Correlations
Ivanov, Plamen Ch.; Ashkenazy, Yosef; Kantelhardt, Jan W.; Stanley, H. Eugene
2003-05-01
We review a recently developed approach for analyzing time series with long-range correlations by decomposing the signal increment series into magnitude and sign series and analyzing their scaling properties. We show that time series with identical long-range correlations can exhibit different time organization for the magnitude and sign. We apply our approach to series of time intervals between consecutive heartbeats. Using the detrended fluctuation analysis method we find that the magnitude series is long-range correlated, while the sign series is anticorrelated and that both magnitude and sign series may have clinical applications. Further, we study the heartbeat magnitude and sign series during different sleep stages — light sleep, deep sleep, and REM sleep. For the heartbeat sign time series we find short-range anticorrelations, which are strong during deep sleep, weaker during light sleep and even weaker during REM sleep. In contrast, for the heartbeat magnitude time series we find long-range positive correlations, which are strong during REM sleep and weaker during light sleep. Thus, the sign and the magnitude series provide information which is also useful for distinguishing between different sleep stages.
Does residual force enhancement increase with increasing stretch magnitudes?
Hisey, Brandon; Leonard, Tim R; Herzog, Walter
2009-07-22
It is generally accepted that force enhancement in skeletal muscles increases with increasing stretch magnitudes. However, this property has not been tested across supra-physiological stretch magnitudes and different muscle lengths, thus it is not known whether this is a generic property of skeletal muscle, or merely a property that holds for small stretch magnitudes within the physiological range. Six cat soleus muscles were actively stretched with magnitudes varying from 3 to 24 mm at three different parts of the force-length relationship to test the hypothesis that force enhancement increases with increasing stretch magnitude, independent of muscle length. Residual force enhancement increased consistently with stretch amplitudes on the descending limb of the force-length relationship up to a threshold value, after which it reached a plateau. Force enhancement did not increase with stretch amplitude on the ascending limb of the force-length relationship. Passive force enhancement was observed for all test conditions, and paralleled the behavior of the residual force enhancement. Force enhancement increased with stretch magnitude when stretching occurred at lengths where there was natural passive force within the muscle. These results suggest that force enhancement does not increase unconditionally with increasing stretch magnitude, as is generally accepted, and that increasing force enhancement with stretch appears to be tightly linked to that part of the force-length relationship where there is naturally occurring passive force.
Derivation of Johnson-Cousins Magnitudes from DSLR Camera Observations
Park, Woojin; Pak, Soojong; Shim, Hyunjin; Le, Huynh Anh N.; Im, Myungshin; Chang, Seunghyuk; Yu, Joonkyu
2016-01-01
The RGB Bayer filter system consists of a mosaic of R, G, and B filters on the grid of photo sensors with which typical commercial DSLR (Digital Single Lens Reflex) cameras and CCD cameras are equipped. A wealth of unique astronomical data obtained using the RGB Bayer filter system is available, including transient objects, e.g. supernovae, variable stars, and solar system bodies. The utilization of such data in scientific research requires reliable photometric transformation methods between the systems. In this work, we develop a series of equations to convert observed magnitudes in the RGB Bayer filter system (RB, GB, and BB) into the Johnson-Cousins BVR filter system (BJ, VJ, and RC). The new transformation equations derive the calculated magnitudes in the Johnson-Cousins filters (BJcal, VJcal, and RCcal) as functions of the RGB magnitudes and colors. The mean differences between the transformed and original magnitudes, i.e. the residuals, are (BJ − BJcal) = 0.064 mag, (VJ − VJcal) = 0.041 mag, and (RC − RCcal) = 0.039 mag. The calculated Johnson-Cousins magnitudes from the transformation equations show a good linear correlation with the observed Johnson-Cousins magnitudes.
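A minimal sketch of how such transformation equations can be derived: fit a color-term relation by ordinary least squares on stars observed in both systems. The functional form and the coefficients below are hypothetical illustrations, not the equations published in this work:

```python
# Hypothetical color-term transformation: fit V_J - G_B = c0 + c1*(B_B - R_B)
# by ordinary least squares. Form and "true" coefficients are illustrative
# assumptions, not the published equations.
def fit_line(xs, ys):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Synthetic stars: Bayer color (B_B - R_B) and the offset V_J - G_B it implies
true_c0, true_c1 = 0.12, -0.35
colors = [i / 10 for i in range(-5, 16)]
offsets = [true_c0 + true_c1 * c for c in colors]

c0, c1 = fit_line(colors, offsets)
print(round(c0, 3), round(c1, 3))  # recovers 0.12, -0.35 on noise-free data
```

With real photometry the scatter of the residuals about this fit corresponds to the ~0.04-0.06 mag residuals quoted in the abstract.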
Liuska Fernández-Diéguez
2016-05-01
The objective of this investigation was to define the zoning of soil liquefaction potential for the Guillermón Moncada Popular Council in the municipality of Santiago de Cuba. The engineering and geological conditions and the seismic peculiarities favoring an earthquake were assessed. The safety factor was re-calculated after determining the possible maximum intensity values, based on the seismic magnitudes that can trigger liquefaction of the soil in the investigated area. A scheme of the area's soil susceptibility to liquefaction was obtained. Based on this result, it was concluded that the sectors most likely to experience soil liquefaction during an earthquake of magnitude between 7.75 and 8 are located towards the center-east of the Popular Council, where sandy-clayey soils predominate. This information is very useful for the location and planning of engineering construction works in the area.
The effects of control of resources on magnitudes of sex differences in human mate preferences.
Moore, Fhionna; Cassidy, Clare; Perrett, David I
2010-12-03
We tested the hypothesis that magnitudes of sex differences in human mate preferences would be inversely related to control of resources. Specifically, we predicted that the ideal partner age, maximum and minimum partner ages tolerated and preferences for "physical attractiveness" over "good financial prospects" of female participants would approach parity with that of men with increasing control of resources. In a sample of 3770 participants recruited via an online survey, the magnitudes of sex differences in age preferences increased with resource control whereas the sex difference in preferences for "physical attractiveness" over "good financial prospects" disappeared when resource control was high. Results are inconsistent, and are discussed in the context of adaptive tradeoff and biosocial models of sex differences in human mate preferences.
The Effects of Control of Resources on Magnitudes of Sex Differences in Human Mate Preferences
Fhionna Moore
2010-10-01
We tested the hypothesis that magnitudes of sex differences in human mate preferences would be inversely related to control of resources. Specifically, we predicted that the ideal partner age, maximum and minimum partner ages tolerated and preferences for “physical attractiveness” over “good financial prospects” of female participants would approach parity with that of men with increasing control of resources. In a sample of 3770 participants recruited via an online survey, the magnitudes of sex differences in age preferences increased with resource control whereas the sex difference in preferences for “physical attractiveness” over “good financial prospects” disappeared when resource control was high. Results are inconsistent, and are discussed in the context of adaptive tradeoff and biosocial models of sex differences in human mate preferences.
Magnitude and Correlates of Low Birth Weight at Term in Rural Wardha, Central India
Kumar V
2016-05-01
Introduction: Birth weight is one of the most important determinants of neonatal and infant survival. The goal of reducing the incidence of low birth weight by at least one third between 2000 and 2010 was one of the major goals of ‘A World Fit for Children’. The prevention of low birth weight is a public health priority, particularly in developing countries where its magnitude is high. Knowledge of its magnitude and correlates helps prevent the condition. Hence, the present study was carried out to study the magnitude and correlates of low birth weight. Methodology: Two hundred and six newborn babies were recruited into a birth cohort from two Primary Health Centres (PHC) of Wardha district to study growth in the first year of life. Here, we present the baseline analysis of the 172 children who were born at full term, to study the correlates of low birth weight among babies born at full term. The children were recruited within the first week of birth. Data were collected on socio-demographic profile, birth history, and maternal characteristics. The proportion of low birth weight is expressed as a percentage with its 95% confidence interval. Univariate and multivariate logistic regression was used to study the correlates; findings are expressed as odds ratios with their 95% confidence intervals. Results: The magnitude of low birth weight at term was found to be 33.1% (95% CI: 26.4%-40.4%). On univariate analysis, significant correlates of low birth weight were consumption of fewer than 50 iron-folic acid tablets and being born to a thin mother. On multivariate analysis, the significant correlates were female sex of the child (OR=2.856), being born to a thin mother (OR=5.320), consumption of fewer than 50 tablets (OR=4.648), and complications of pregnancy (OR=2.917). Conclusions: The magnitude of low birth weight is very high, and the modifiable correlates are the nutritional status of the mother, lower consumption of IFA tablets and complications of pregnancy.
Focal fits during chlorambucil therapy
Naysmith, A.; Robson, R. H.
1979-01-01
An elderly man receiving chlorambucil for chronic lymphatic leukaemia developed focal fits. The onset and frequency were dose related. There was no evidence of metabolic disturbance or of meningeal leukaemia. Although reported in children and well recognized in animals, chlorambucil-induced fits in an adult have not been previously recorded. PMID:118440
Optimization of military garment fit
Daanen, H.A.M.
2014-01-01
In the Dutch armed forces clothing sizes are determined using 3D body scans. To evaluate if the predicted size based on the scan analysis matches the best fit, 35 male soldiers fitted a combat jacket and combat pants. It was shown that the predicted jacket size was slightly too large. Therefore, an
Focke, E.S.
2007-01-01
If it would be possible to install Tight Fit Pipe by means of reeling, it would be an attractive new option for the exploitation of offshore oil and gas fields containing corrosive hydrocarbons. Tight Fit Pipe is a mechanically bonded double walled pipe where a corrosion resistant alloy liner pipe
Definitions of Health Terms: Fitness
OCEAN-WIDE TSUNAMIS, MAGNITUDE THRESHOLDS, AND 1946 TYPE EVENTS
Daniel A. Walker
2005-01-01
An analysis of magnitudes and runups in Hawaii for more than 200 tsunamigenic earthquakes along the margins of the Pacific reveals that all of the earthquakes with moment magnitudes of 8.6 or greater produced significant Pacific-wide tsunamis. Such findings can be used as a basis for early warnings of significant ocean-wide tsunamis as a supplement to, or in the absence of, more comprehensive data from other sources. Additional analysis of magnitude and runup data suggests that 1946 type earthquakes and tsunamis may be more common than previously believed.
How important are direct fitness benefits of sexual selection?
Møller, A. P.; Jennions, M. D.
2001-10-01
Females may choose mates based on the expression of secondary sexual characters that signal direct, material fitness benefits or indirect, genetic fitness benefits. Genetic benefits are acquired in the generation subsequent to that in which mate choice is performed, and the maintenance of genetic variation in viability has been considered a theoretical problem. Consequently, the magnitude of indirect benefits has traditionally been considered to be small. Direct fitness benefits can be maintained without consideration of mechanisms sustaining genetic variability, and they have thus been equated with the default benefits acquired by choosy females. There is, however, still debate as to whether or not males should honestly advertise direct benefits such as their willingness to invest in parental care. We use meta-analysis to estimate the magnitude of direct fitness benefits in terms of fertility, fecundity and two measures of paternal care (feeding rate in birds, hatching rate in male guarding ectotherms) based on an extensive literature survey. The mean coefficients of determination weighted by sample size were 6.3%, 2.3%, 1.3% and 23.6%, respectively. This compares to a mean weighted coefficient of determination of 1.5% for genetic viability benefits in studies of sexual selection. Thus, for several fitness components, direct benefits are only slightly more important than indirect ones arising from female choice. Hatching rate in male guarding ectotherms was by far the most important direct fitness component, explaining almost a quarter of the variance. Our analysis also shows that male sexual advertisements do not always reliably signal direct fitness benefits.
ProFit: Bayesian Profile Fitting of Galaxy Images
Robotham, A. S. G.; Tobar, R.; Moffett, A.; Driver, S. P.
2016-01-01
We present ProFit, a new code for Bayesian two-dimensional photometric galaxy profile modelling. ProFit consists of a low-level C++ library (libprofit), accessible via a command-line interface and documented API, along with high-level R (ProFit) and Python (PyProFit) interfaces (available at github.com/ICRAR/libprofit, github.com/ICRAR/ProFit, and github.com/ICRAR/pyprofit respectively). The R ProFit package is also available pre-built from CRAN, although this version will be slightly behind the latest GitHub version. libprofit offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. We show detailed comparisons between libprofit and GALFIT. libprofit is both faster and more accurate than GALFIT at integrating the ubiquitous Sersic profile for the most common values of the Sersic index n (0.5 < n < 8). The high-level fitting code Pr...
Female Fitness in the Blogosphere
Jesper Andreasson
2013-07-01
This article analyzes self-portrayals and gender constructions among female personal trainers within an Internet-mediated framework of fitness culture. The empirical material comes from a close examination of three strategically selected blogs. The result shows that some of the blogs clearly build upon what Connell calls emphasized femininity, as a means of legitimizing and constructing appropriate female fitness. In addition, there are also tendencies of sexualization in text and imagery present. As such, these self-representations are framed within a cultural history of body fitness dominated by stereotypical ways of perceiving masculinity and femininity. However, this does not capture the entire presentation of the self among the analyzed fitness bloggers. The blogs also point in the direction of ongoing negotiations and subversions of traditional gender norms. Among other things, they show how irony and humor are used as a means of questioning normative gender constructions while empowering female fitness and bodyliness.
Tests of maximum oxygen intake. A critical review.
Shephard, R J
1984-01-01
The determinants of endurance effort vary, depending upon the extent of the muscle mass that is activated. Large muscle work, such as treadmill running, is halted by impending circulatory failure; lack of venous return may compound the basic problem of an excessive cardiac work-load. If the task calls for use of a smaller muscle mass, there is ultimately difficulty in perfusing the active muscles, and glycolysis is halted by an accumulation of acid metabolites. Simple field tests of endurance, such as Cooper's 12-minute run and the Canadian Home Fitness Test, have some value in the rapid screening of large populations, but like other submaximal tests of human performance they lack the precision needed to advise the individual. The directly measured maximum oxygen intake (VO2 max) varies with the type of exercise. The highest values are obtained during uphill treadmill running, but well trained athletes often approach these values during performance of sport-specific tasks. Limitations of methodology and wide interindividual variations of constitutional potential limit the interpretation of maximum oxygen intake data in terms of personal fitness, exercise prescription and the monitoring of training responses. The main practical value of VO2 max measurement is in the functional assessment of patients with cardiorespiratory disease, since changes are then large relative to the precision of the test.
Maximum Entropy for the International Division of Labor.
Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang
2015-01-01
As a result of the international division of labor, the trade value distribution over different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export value on ubiquitous and competitive products. Meanwhile, countries may also diversify their export shares across different types of products to reduce risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product's complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data are consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country's strategy is mainly determined by the types of products that country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
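Maximizing entropy subject to a single expected-complexity constraint, as described above, yields a Gibbs-type exponential form for the export shares. A minimal sketch follows; the complexity values and the target expected complexity are invented for illustration, not taken from the paper:

```python
import math

# Maximum entropy under one linear constraint gives the Gibbs form
# share_i proportional to exp(-lam * complexity_i). The complexity values
# and the target below are illustrative assumptions.
complexity = [1.0, 2.0, 3.0, 4.0, 5.0]
target = 2.0  # desired expected product complexity for the country

def shares(lam):
    w = [math.exp(-lam * c) for c in complexity]
    z = sum(w)
    return [wi / z for wi in w]

def expected_complexity(lam):
    return sum(s * c for s, c in zip(shares(lam), complexity))

# Solve for lam by bisection: expected complexity decreases as lam grows.
lo, hi = -10.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    if expected_complexity(mid) > target:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
print([round(s, 3) for s in shares(lam)])
```

Tuning the single multiplier lam is the "one parameter" role played by the constraint in such models.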
Estimating magnitude and frequency of floods for Wisconsin urban streams
Conger, D.H.
1986-01-01
Equations for estimating magnitude and frequency of floods for Wisconsin streams with drainage basins containing various amounts of existing or projected urban development were developed by flood-frequency and multiple-regression analyses.
Magnitude and Pattern of Injury in Jimma University
GB
2011-11-03
Nov 3, 2011 ... CONCLUSION: The magnitude of injury in the hospital was considerably high. Age and ... 1Jimma University, College of Public Health and Medical Sciences, Department of ... conducted in Canadian hospitals reported more.
Numerical and physical magnitudes are mapped into time.
Ben-Meir, Shachar; Ganor-Stern, Dana; Tzelgov, Joseph
2012-01-01
In two experiments we investigated mapping of numerical and physical magnitudes with temporal order. Pairs of digits were presented sequentially for a size comparison task. An advantage for numbers presented in ascending order was found when participants were comparing the numbers' physical and numerical magnitudes. The effect was more robust for comparisons of physical size, as it was found using both select larger and select smaller instructions, while for numerical comparisons it was found only for select larger instructions. Varying both the digits' numerical and physical sizes resulted in a size congruity effect, indicating automatic processing of the irrelevant magnitude dimension. Temporal order and the congruency between numerical and physical magnitudes affected comparisons in an additive manner, thus suggesting that they affect different stages of the comparison process.
Validity of electrical stimulus magnitude matching in chronic pain
Persson, Ann L; Westermark, Sofia; Merrick, Daniel
2009-01-01
OBJECTIVE: To examine the validity of the PainMatcher in chronic pain. DESIGN: Comparison of parallel pain estimates from visual analogue scales with electrical stimulus magnitude matching. PATIENTS: Thirty-one patients with chronic musculoskeletal pain. METHODS: Twice a day, ongoing pain was rated on a standard 100-mm visual analogue scale, and thereafter magnitude matching was performed using a PainMatcher. The sensory threshold to electrical stimulation was tested twice on separate occasions. RESULTS: In 438 observations, visual analogue scale ratings ranged from 3 to 95 (median 41) mm, and PainMatcher magnitudes from 2.67 to 27.67 (median 6.67; mean 7.78) steps. There was little correlation between visual analogue scale and magnitude data (r = 0.29; p
When Should Zero Be Included on a Scale Showing Magnitude?
Kozak, Marcin
2011-01-01
This article addresses an important problem of graphing quantitative data: should one include zero on the scale showing magnitude? Based on a real time series example, the problem is discussed and some recommendations are proposed.
I love my baffling, backward, counterintuitive, overly complicated magnitudes
Sirola, Christopher
2017-02-01
All professions have their jargon, but astronomy goes the extra parsec. Here's an example. Vega, one of the brighter stars in the night sky, has an apparent magnitude (i.e., an apparent brightness) of approximately zero. Polaris, the North Star, has an apparent magnitude of about +2. Despite the two-magnitude difference, Vega appears brighter than Polaris not by a factor of two but by a factor of about six.
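The factor of about six follows directly from the logarithmic definition of the magnitude scale: a difference of Δm magnitudes corresponds to a flux ratio of 100^(Δm/5), i.e. about 2.512 per magnitude. A one-line check:

```python
def flux_ratio(delta_m):
    """Flux ratio corresponding to a magnitude difference delta_m (Pogson scale)."""
    return 100 ** (delta_m / 5)  # equivalently 10 ** (0.4 * delta_m)

# Vega (m ~ 0) vs Polaris (m ~ +2): two magnitudes -> roughly six times brighter
print(round(flux_ratio(2.0), 2))  # -> 6.31
```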
Some explicit expressions for the probability distribution of force magnitude
Saralees Nadarajah
2008-08-01
Recently, empirical investigations have suggested that the components of contact forces follow the exponential distribution. However, explicit expressions for the probability distribution of the corresponding force magnitude have not been known, and only approximations have been used in the literature. In this note, for the first time, I provide explicit expressions for the probability distribution of the force magnitude. Both two-dimensional and three-dimensional cases are considered.
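The explicit expressions themselves are not reproduced in this abstract. As a sketch of the setting, the two-dimensional magnitude distribution can be estimated by Monte Carlo, assuming unit-rate exponential components (the rate is an illustrative assumption):

```python
import random, math

random.seed(1)

# Monte Carlo estimate of the force-magnitude distribution when the two
# Cartesian components are i.i.d. unit-rate exponentials. This is only an
# empirical sketch; the paper derives the exact closed-form density.
n = 100_000
mags = [math.hypot(random.expovariate(1.0), random.expovariate(1.0)) for _ in range(n)]
mean_mag = sum(mags) / n
print(round(mean_mag, 3))
```

Since max(X, Y) <= sqrt(X^2 + Y^2) <= X + Y, the mean magnitude must lie between 1.5 and 2 for unit-rate components, which the simulation confirms.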
An Overview of the Study on Stress Magnitude
Sheng Shuzhong; Wan Yongge
2009-01-01
The crustal stress field holds an important position in geodynamics research, such as in plate motion simulations, the uplift of the Qinghai-Xizang (Tibet) Plateau, and earthquake preparation and occurrence. However, most crustal stress studies emphasize the determination of stress direction, and little study has so far been done on stress magnitude. After reviewing ideas on stress magnitude from geological, geophysical and various other aspects, a method has been developed to estimate the stress magnitude in the source region from the deflection of stress direction before and after large earthquakes and the stress drop tensor of the earthquake rupture. The proposed method can also be supplemented by the average apparent stress before and after large earthquakes. The stress direction deflection before and after large earthquakes can be inverted from massive focal mechanisms of foreshocks and aftershocks, and the stress drop field generated by the seismic source can be calculated from the detailed distribution of the earthquake's rupture. A mathematical relationship can then be constructed between the stress drop field, whose magnitude and direction are known, and the stress tensor before and after large earthquakes, whose direction is known but whose magnitude is unknown, thereby obtaining the stress magnitude. The average apparent stress before and after large earthquakes can be obtained by using the catalog of broadband radiated energy and seismic moment tensors of foreshocks and aftershocks and the different responses to stress drops. This relationship leads to another estimation of the stress magnitude before a large earthquake. The stress magnitude and its error are constrained by combining the two methods, which provides new constraints for geodynamics studies.
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
The magnitude of innovation and its evolution in social animals.
Arbilly, Michal; Laland, Kevin N
2017-02-08
Innovative behaviour in animals, ranging from invertebrates to humans, is increasingly recognized as an important topic for investigation by behavioural researchers. However, what constitutes an innovation remains controversial, and difficult to quantify. Drawing on a broad definition whereby any behaviour with a new component to it is an innovation, we propose a quantitative measure, which we call the magnitude of innovation, to describe the extent to which an innovative behaviour is novel. This allows us to distinguish between innovations that are a slight change to existing behaviours (low magnitude), and innovations that are substantially different (high magnitude). Using mathematical modelling and evolutionary computer simulations, we explored how aspects of social interaction, cognition and natural selection affect the frequency and magnitude of innovation. We show that high-magnitude innovations are likely to arise regularly even if the frequency of innovation is low, as long as this frequency is relatively constant, and that the selectivity of social learning and the existence of social rewards, such as prestige and royalties, are crucial for innovative behaviour to evolve. We suggest that consideration of the magnitude of innovation may prove a useful tool in the study of the evolution of cognition and of culture.
Magnitude knowledge: the common core of numerical development.
Siegler, Robert S
2016-05-01
The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic numbers, (2) connecting small symbolic numbers to their non-symbolic referents, (3) extending understanding from smaller to larger whole numbers, and (4) accurately representing the magnitudes of rational numbers. The present review identifies substantial commonalities, as well as differences, in these four aspects of numerical development. With both whole and rational numbers, numerical magnitude knowledge is concurrently correlated with, longitudinally predictive of, and causally related to multiple aspects of mathematical understanding, including arithmetic and overall math achievement. Moreover, interventions focused on increasing numerical magnitude knowledge often generalize to other aspects of mathematics. The cognitive processes of association and analogy seem to play especially large roles in this development. Thus, acquisition of numerical magnitude knowledge can be seen as the common core of numerical development.
Numerical magnitude processing in children with mild intellectual disabilities.
Brankaer, Carmen; Ghesquière, Pol; De Smedt, Bert
2011-01-01
The present study investigated numerical magnitude processing in children with mild intellectual disabilities (MID) and examined whether these children have difficulties in the ability to represent numerical magnitudes and/or difficulties in the ability to access numerical magnitudes from formal symbols. We compared the performance of 26 children with MID on a symbolic (digits) and a non-symbolic (dot-arrays) comparison task with the performance of two control groups of typically developing children: one group matched on chronological age and one group matched on mathematical ability level. Findings revealed that children with MID performed more poorly than their typically developing chronological age-matched peers on both the symbolic and non-symbolic comparison tasks, while their performance did not substantially differ from the ability-matched control group. These findings suggest that the development of numerical magnitude representation in children with MID is marked by a delay. This performance pattern was observed for both symbolic and non-symbolic comparison tasks, although difficulties on the former task were more prominent. Interventions in children with MID should therefore foster both the development of magnitude representations and the connections between symbols and the magnitudes they represent.
Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle
Ge Cheng; Zhenyu Zhang; Moses Ntanda Kyebambe; Nasser Kimbugwe
2016-01-01
Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that...
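The NBAME model's feature set and training details are not given in this excerpt. As a stand-in illustrating the maximum-entropy approach to binary game prediction, the sketch below trains a logistic (maximum-entropy) classifier by gradient ascent on invented, synthetic features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the NBAME idea: a binary maximum-entropy (logistic)
# classifier fit by gradient ascent on the mean log-likelihood. The features
# and generating weights are invented for illustration only.
X = rng.normal(size=(200, 3))          # e.g. standardized team statistics
true_w = np.array([1.5, -1.0, 0.5])
y = (X @ true_w + 0.3 * rng.normal(size=200) > 0).astype(float)  # 1 = home win

w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # model probability of a win
    w += 0.1 * X.T @ (y - p) / len(y)    # gradient of the mean log-likelihood

accuracy = np.mean(((X @ w) > 0) == (y == 1))
print(round(accuracy, 2))
```

Binary logistic regression is exactly the maximum-entropy model for two classes under expected-feature constraints, which is why it serves as a faithful miniature of the approach.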
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.
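The closing point, sizing the conductor length from the per-unit-length limit, amounts to a one-line calculation. In the sketch below, 0.72 V/cm is the SJTU figure from the abstract, while the 10 kV fault voltage is a hypothetical design value, not from the study:

```python
# Sizing sketch: total YBCO conductor length needed so that the voltage per
# unit length during a fault stays at or below the measured maximum
# permissible value. The fault voltage is an assumed example.
max_permissible_v_per_cm = 0.72   # V/cm, SJTU tape at 100 ms quench duration
fault_voltage = 10_000.0          # V, hypothetical design value

required_length_cm = fault_voltage / max_permissible_v_per_cm
print(round(required_length_cm))  # about 13889 cm, i.e. roughly 139 m
```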
The Color–Magnitude Distribution of Hilda Asteroids: Comparison with Jupiter Trojans
Wong, Ian; Brown, Michael E.
2017-02-01
Current models of solar system evolution posit that the asteroid populations in resonance with Jupiter are composed of objects scattered inward from the outer solar system during a period of dynamical instability. In this paper, we present a new analysis of the absolute magnitude and optical color distribution of Hilda asteroids, which lie in 3:2 mean-motion resonance with Jupiter, with the goal of comparing the bulk properties with previously published results from an analogous study of Jupiter Trojans. We report an updated power-law fit of the Hilda magnitude distribution through H = 14. Using photometric data listed in the Sloan Moving Object Catalog, we confirm the previously reported strong bimodality in visible spectral slope distribution, indicative of two subpopulations with differing surface compositions. When considering collisional families separately, we find that collisional fragments follow a unimodal color distribution with spectral slope values consistent with the bluer of the two subpopulations. The color distributions of Hildas and Trojans are comparable and consistent with a scenario in which the color bimodality in both populations developed prior to emplacement into their present-day locations. We propose that the shallower magnitude distribution of the Hildas is a result of an initially much larger Hilda population, which was subsequently depleted as smaller bodies were preferentially ejected from the narrow 3:2 resonance via collisions. Altogether, these observations provide a strong case supporting a common origin for Hildas and Trojans as predicted by current dynamical instability theories of solar system evolution.
Effects of Perceived Fitness Level of Exercise Partner on Intensity of Exertion
Thomas G. Plante
2010-01-01
Problem statement: Social comparison theory was used to examine whether exercising with a research confederate posing as either high fit or low fit would increase exercise exertion. Approach: 91 college students were randomly assigned to one of three conditions: biking alone, biking with a high-fit confederate, or biking with a low-fit confederate. All participants were instructed to complete 20 min of exercise at 60-70% of their maximum target heart rate. Results: Participants in the high-fit condition exercised harder than those in the low-fit condition. However, no mood differences emerged between conditions. Conclusion: Social comparison theory predicts exercise outcomes such that participants gravitate towards the behavior (high fit or low fit) of those around them.
Fitness self-perception and Vo2max in firefighters.
Peate, W F; Lundergan, Linda; Johnson, Jerry J
2002-06-01
Firefighters work at maximal levels of exertion. Fitness for such duty requires adequate aerobic capacity (maximum oxygen consumption [Vo2max]). Aerobic fitness can both improve a worker's ability to perform and offer resistance to cardiopulmonary conditions. Inactive firefighters have a 90% greater risk of myocardial infarction than those who are aerobically fit. Participants (101 firefighters) completed a questionnaire that asked them to rank their fitness level from 0 to 7; e.g., Level 0 was low fitness: "I avoid walking or exertion, e.g., always use elevator, drive whenever possible." The level of activity rating increased to Level 7: "I run over 10 miles per week or spend 3 hours per week in comparable physical activity." Each participant then completed two measures of Vo2max: a 5-minute step test and a submaximal treadmill test. There was no association between the firefighters' self-perception of their level of fitness and their aerobic capacity as measured by either the step test or the submaximal treadmill. Because of the critical job demands of firefighting and the negative consequences of inadequate fitness and aerobic capacity, periodic aerobic capacity testing with individualized exercise prescriptions and work and community support may be advisable for all active-duty firefighters.
Greedy adaptive walks on a correlated fitness landscape.
Park, Su-Chan; Neidhart, Johannes; Krug, Joachim
2016-05-21
We study adaptation of a haploid asexual population on a fitness landscape defined over binary genotype sequences of length L. We consider greedy adaptive walks in which the population moves to the fittest among all single mutant neighbors of the current genotype until a local fitness maximum is reached. The landscape is of the rough mount Fuji type, which means that the fitness value assigned to a sequence is the sum of a random and a deterministic component. The random components are independent and identically distributed random variables, and the deterministic component varies linearly with the distance to a reference sequence. The deterministic fitness gradient c is a parameter that interpolates between the limits of an uncorrelated random landscape (c=0) and an effectively additive landscape (c→∞). When the random fitness component is chosen from the Gumbel distribution, explicit expressions for the distribution of the number of steps taken by the greedy walk are obtained, and it is shown that the walk length varies non-monotonically with the strength of the fitness gradient when the starting point is sufficiently close to the reference sequence. Asymptotic results for general distributions of the random fitness component are obtained using extreme value theory, and it is found that the walk length attains a non-trivial limit for L→∞, different from its values for c=0 and c=∞, if c is scaled with L in an appropriate combination.
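The greedy walk on a rough Mount Fuji landscape described above can be simulated directly. In this sketch, L = 10, c = 1 and the all-zero reference sequence are illustrative choices; the random component is Gumbel-distributed as in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
L, c = 10, 1.0  # sequence length and deterministic fitness gradient (assumed values)

fitness_cache = {}
def fitness(g):
    # Rough Mount Fuji: i.i.d. Gumbel random part plus a deterministic part
    # that decreases linearly with Hamming distance to the all-zero reference.
    if g not in fitness_cache:
        fitness_cache[g] = rng.gumbel() - c * sum(g)
    return fitness_cache[g]

def neighbors(g):
    # All single-mutant neighbors of binary genotype g.
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L)]

g = tuple(rng.integers(0, 2, L).tolist())  # random starting genotype
steps = 0
while True:
    best = max(neighbors(g), key=fitness)
    if fitness(best) <= fitness(g):
        break  # local fitness maximum reached
    g, steps = best, steps + 1

print(steps)
```

Repeating this over many starting points at varying c would reproduce the walk-length statistics the paper analyzes.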
Knaapi, Matti
2014-01-01
CrossFit is a discipline that aims to promote health and fitness. There are more than 10,000 CrossFit gyms around the world. CrossFit training seeks to improve a person's fitness on as broad a scale as possible by simultaneously developing, among other things, strength, endurance, accuracy, balance and different metabolic pathways. Developing health and fitness also involves areas other than exercise itself. Diet and body maintenance are important areas for achieving good fitness. Diet...
Mandel, Kaisey S; Shariff, Hikmatali; Foley, Ryan J; Kirshner, Robert P
2016-01-01
Conventional Type Ia supernova (SN Ia) cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the distribution of dusty extinguished absolute magnitudes and apparent colors as a convolution of the intrinsic SN Ia color-magnitude distribution and the host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (M_B vs. B-V) slope beta_int differs from the host galaxy dust law R_B, this convolution results in a specific curve of mean extinguished absolute magnitude vs. apparent color. The derivative of this curve smoothly transitions from beta_int in the blue tail to R_B in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve at this transition near the average apparent color, resultin...
LIU Rui-feng; CHEN Yun-tai; Peter Bormann; REN Xiao; HOU Jian-min; ZOU Li-ye; YANG Hui
2005-01-01
Using the orthogonal regression method, a systematic comparison is made between body wave magnitudes determined by the Institute of Geophysics of the China Earthquake Administration (IGCEA) and the National Earthquake Information Center of the US Geological Survey (USGS/NEIC), on the basis of observation data from Chinese and US seismograph networks between 1983 and 2004. The result of the orthogonal regression shows no systematic error between the body wave magnitude mb determined by IGCEA and mb (NEIC). Provided that mb (NEIC) is taken as the benchmark, the body wave magnitude determined by IGCEA is greater by 0.2 to 0.1 for M = 3.5 to 4.5 earthquakes; for M = 5.0 to 5.5 earthquakes, there is no difference; and for M ≥ 6.0 earthquakes, it is smaller by no more than 0.2. This is consistent with the result of the comparison by the IDC (International Data Center).
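Orthogonal regression is used here because both magnitude scales carry measurement error, so errors must be treated symmetrically rather than attributed to one axis. A minimal total-least-squares sketch via SVD, on synthetic magnitude pairs (the 0.1 offset and the scatter levels are invented for illustration):

```python
import numpy as np

def orthogonal_regression(x, y):
    """Fit y = a + b*x minimizing perpendicular distances (total least squares)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    # The line direction is the leading right-singular vector of the centred data.
    _, _, vt = np.linalg.svd(np.column_stack([x - xm, y - ym]))
    dx, dy = vt[0]          # direction of maximum variance
    b = dy / dx
    return ym - b * xm, b   # intercept, slope

# Synthetic magnitude pairs with scatter in both coordinates.
rng = np.random.default_rng(3)
true = rng.uniform(3.5, 6.5, 300)
mb_a = true + rng.normal(0, 0.1, 300)          # one agency's magnitudes
mb_b = 0.1 + true + rng.normal(0, 0.1, 300)    # the other's, offset by 0.1
a, b = orthogonal_regression(mb_a, mb_b)
print(round(a, 2), round(b, 2))
```

Unlike ordinary least squares, the slope estimate is not attenuated by the noise on the x-axis, which is the point of using orthogonal regression for magnitude comparisons.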
Fitness-related patterns of genetic variation in rhesus macaques.
Blomquist, Gregory E
2009-03-01
The patterning of quantitative genetic descriptions of genetic and residual variation for 15 skeletal and six life history traits was explored in a semi-free-ranging group of rhesus macaques (Macaca mulatta Zimmerman 1780). I tested theoretical predictions that explain the magnitude of genetic and residual variation as a result of (1) the strength of a trait's association with evolutionary fitness, or (2) developmental and physiological relationships among traits. I found skeletal traits had higher heritabilities and lower coefficients of residual variation than more developmentally and physiologically dependent life history traits. Total lifetime fertility had a modest heritability (0.336) in this population, and traits with stronger correlations to fitness had larger amounts of residual variance. Censoring records of poorly-performing individuals on lifetime fertility and lifespan substantially reduced their heritabilities. These results support models for the fitness-related patterning of genetic variation based on developmental and physiological relationships among traits rather than the action of selection eroding variation.
The fitness value of information
Bergstrom, Carl T
2007-01-01
Biologists measure information in different ways. Neurobiologists and researchers in bioinformatics often measure information using information-theoretic measures such as Shannon's entropy or mutual information. Behavioral biologists and evolutionary ecologists more commonly use decision-theoretic measures, such as the value of information, which assess the worth of information to a decision maker. Here we show that these two kinds of measures are intimately related in the context of biological evolution. We present a simple model of evolution in an uncertain environment, and calculate the increase in Darwinian fitness that is made possible by information about the environmental state. This fitness increase -- the fitness value of information -- is a composite of both Shannon's mutual information and the decision-theoretic value of information. Furthermore, we show that in certain cases the fitness value of responding to a cue is exactly equal to the mutual information between the cue and the environment. In gen...
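The evolutionary model itself is not reproduced in this abstract. As a sketch of the information-theoretic side of the equivalence, the mutual information between a cue and the environment can be computed from a joint probability table; the two tables below are illustrative, not from the paper:

```python
import numpy as np

def mutual_information_bits(joint):
    """I(cue; environment) in bits for a joint probability table
    (rows: cue states, columns: environmental states)."""
    joint = np.asarray(joint, float)
    px = joint.sum(axis=1, keepdims=True)   # marginal over the cue
    py = joint.sum(axis=0, keepdims=True)   # marginal over the environment
    mask = joint > 0                        # skip zero cells (0 * log 0 = 0)
    return float((joint[mask] * np.log2(joint[mask] / (px @ py)[mask])).sum())

# A perfectly informative cue carries 1 bit; an uninformative cue carries 0.
perfect = [[0.5, 0.0], [0.0, 0.5]]
useless = [[0.25, 0.25], [0.25, 0.25]]
print(mutual_information_bits(perfect), mutual_information_bits(useless))
```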
Strength Training: For Overall Fitness
... Accessed Jan. 11, 2016. Quantity and quality of exercise for developing and maintaining cardiorespiratory, musculoskeletal, and neuromotor fitness in apparently healthy adults: Guidance for prescribing exercise. American College of ...
... Better heart health, stronger muscles, better balance and coordination, stronger bones, lower risk of dementia, improved memory, reduced stress, more energy, improved mood. Types of Dance: There are dance styles to fit almost anyone ...
Kinematic Fitting of Detached Vertices
Mattione, Paul [Rice Univ., Houston, TX (United States)
2007-05-01
The eg3 experiment at the Jefferson Lab CLAS detector aims to determine the existence of the $\Xi_{5}$ pentaquarks and investigate the excited $\Xi$ states. Specifically, the exotic $\Xi_{5}^{--}$ pentaquark will be sought by first reconstructing the $\Xi^{-}$ particle through its weak decays, $\Xi^{-}\to\pi^{-}\Lambda$ and $\Lambda\to p\pi^{-}$. A kinematic fitting routine was developed to reconstruct the detached vertices of these decays, where confidence level cuts on the fits are used to remove background events. Prior to fitting these decays, the exclusive reaction $\gamma D\to pp\pi^{-}$ was studied in order to correct the track measurements and covariance matrices of the charged particles. The $\Lambda\to p\pi^{-}$ and $\Xi^{-}\to\pi^{-}\Lambda$ decays were then investigated to demonstrate that the kinematic fitting routine reconstructs the decaying particles and their detached vertices correctly.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation contributed by a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously
30 CFR 72.710 - Selection, fit, use, and maintenance of approved respirators.
2010-07-01
... approved respirators. 72.710 Section 72.710 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... Selection, fit, use, and maintenance of approved respirators. In order to ensure the maximum amount of respiratory protection, approved respirators shall be selected, fitted, used, and maintained in...
Rapid fitting of particle cascade development data from X-ray film densitometry measurements
Roberts, E.; Benson, Carl M.; Fountain, Walter F.
1989-01-01
A semiautomatic method of fitting transition curves to X-ray film optical density measurements of electromagnetic particle cascades is described. Several hundred singly and multiply interacting cosmic ray events from the JACEE 8 balloon flights were analyzed using this procedure. In addition to greatly increased speed compared to the previous manual method, the semiautomatic method offers increased accuracy through maximum likelihood fitting.
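The JACEE fitting procedure itself is not detailed in this abstract. As an illustrative stand-in, the sketch below fits a gamma-type longitudinal profile N(t) = N0 * t^a * exp(-b*t) (a common cascade parameterization; an assumption here) by taking logs, which makes the model linear and solvable by least squares:

```python
import numpy as np

# Fit N(t) = N0 * t**a * exp(-b*t) to synthetic cascade data. Taking logs
# gives ln N = ln N0 + a*ln t - b*t, linear in (ln N0, a, b). The true
# parameter values below are invented for illustration.
a_true, b_true, n0 = 2.0, 0.5, 1000.0
t = np.linspace(1.0, 15.0, 30)                     # depth, e.g. radiation lengths
rng = np.random.default_rng(7)
counts = n0 * t**a_true * np.exp(-b_true * t) * np.exp(rng.normal(0, 0.02, t.size))

design = np.column_stack([np.ones_like(t), np.log(t), -t])
coef, *_ = np.linalg.lstsq(design, np.log(counts), rcond=None)
ln_n0, a_fit, b_fit = coef
print(round(a_fit, 2), round(b_fit, 2))
```

A full maximum-likelihood fit, as used in the paper, would replace the Gaussian-in-log assumption with the appropriate likelihood for densitometry errors, but the linearized version conveys the shape of the problem.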
1977-03-17
situations, deal with the day to day duty demands, and maintain a trim physical appearance. But if it has been years since you exercised, do not run right ... how much exercise is required to achieve and maintain a fit unit and, secondly, how is this fitness requirement modified by age and sex. The Training ... as the activity in which you are involved becomes easier, increase the amount. You can do this by varying the intensity (how hard you exercise
Selecting series size where the generalized Pareto distribution best fits
Ben-Zvi, Arie
2016-10-01
Rates of arrival and magnitudes of hydrologic variables are frequently described by the Poisson and the generalized Pareto (GP) distributions. Variations of their goodness-of-fit to nested series are studied here. The variable employed is the depth of rainfall events at five stations of the Israel Meteorological Service. Series sizes range from about 50 (the number of years on record) to about 1000 (the total number of recorded events). The goodness-of-fit is assessed by the Anderson-Darling test. Three versions of this test are applied here: the regular two-sided test (whose statistic is designated here by A2), the upper one-sided test (UA2) and the adaptation to the Poisson distribution (PA2). Very good fits, with rejection significance levels higher than 0.5 for A2 and higher than 0.25 for PA2, are found for many series of different sizes. Values of the shape parameter of the GP distribution and of the predicted rainfall depths vary widely with series size. Small coefficients of variation are found, at each station, for the 100-year rainfall depths predicted through the series with very good fit of the GP distribution. Therefore, predictions through series of very good fit appear more consistent than through other selections of series size. Variations of UA2 with series size are found to be narrower than those of A2. Therefore, it is advisable to predict through the series of low UA2. Very good fits of the Poisson distribution to arrival rates are found for series with low UA2, but the reverse relation is not found here. Thus, the model of Poissonian arrival rates and GP distribution of magnitudes suits series with low UA2. It is recommended to predict through the series for which the lowest UA2 is obtained.
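The paper's estimation procedure is not specified in this abstract. As a sketch of the workflow, the code below fits a GP distribution by method of moments (an assumption; other estimators are common) and evaluates the two-sided Anderson-Darling statistic A2 from its order-statistic formula:

```python
import numpy as np

rng = np.random.default_rng(11)

# Generate GP-distributed "rainfall depths" by inverse-CDF sampling;
# the parameter values are illustrative only.
xi_true, sigma_true = 0.1, 10.0
u = rng.uniform(size=500)
x = sigma_true / xi_true * ((1 - u) ** -xi_true - 1)

# Method-of-moments GP fit (valid for shape xi < 1/2):
m, v = x.mean(), x.var()
xi = 0.5 * (1 - m * m / v)
sigma = m * (1 - xi)

def gp_cdf(x, xi, sigma):
    return 1 - (1 + xi * x / sigma) ** (-1 / xi)

# Anderson-Darling A2 from the order-statistic formula:
# A2 = -n - (1/n) * sum (2i-1) * [ln F(x_(i)) + ln(1 - F(x_(n+1-i)))]
f = np.sort(gp_cdf(x, xi, sigma))
f = np.clip(f, 1e-12, 1 - 1e-12)  # guard the logarithms
n = f.size
i = np.arange(1, n + 1)
a2 = -n - np.mean((2 * i - 1) * (np.log(f) + np.log(1 - f[::-1])))
print(round(a2, 3))
```

Note that because the parameters are estimated from the same data, the usual A2 critical values do not apply directly; the paper works with significance levels adapted to that situation.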
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures constructed by means of generator functions. Any divergence measure in the class separates into the difference between a cross entropy and a diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for the Tsallis entropy as a typical example.
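The decomposition of a divergence into cross entropy minus diagonal entropy can be illustrated in the classical Boltzmann-Gibbs-Shannon case, where it reduces to the Kullback-Leibler divergence; the two distributions below are arbitrary toy examples:

```python
import numpy as np

p = np.array([0.2, 0.3, 0.5])    # model distribution (toy example)
q = np.array([0.25, 0.25, 0.5])  # reference distribution (toy example)

cross = -(p * np.log(q)).sum()   # cross entropy H(p, q)
ent = -(p * np.log(p)).sum()     # diagonal (Shannon) entropy H(p)
kl = cross - ent                 # divergence D(p || q) = H(p, q) - H(p)
```

Here `kl` equals the familiar sum of p_i log(p_i/q_i); it is non-negative and vanishes only when p = q.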
Magnitude Characterization Using Complex Networks in Central Chile
Pasten, D.; Comte, D.; Munoz, V.
2013-12-01
Studies using complex networks have been applied to many systems, such as traffic, social networks, the Internet, and Earth science. In this work we analyze the magnitudes of seismicity in the central zone of Chile using complex networks; we use preferential attachment to construct a seismic network from the local magnitudes and hypocenters of a seismic data set in central Chile. In order to work with a catalogue complete in magnitude, only the data associated with the linear part of the Gutenberg-Richter law, with magnitudes greater than 2.7, were taken. We then impose a grid in space, so that each seismic event falls into a certain cell, depending on the location of its hypocenter. The network is then constructed: the first node corresponds to the cell where the first seismic event occurs. The node has an associated number, which is the magnitude of the event that occurred in it, and a probability is assigned to the node. The probability is a nonlinear mapping of the magnitude (a Gaussian function was taken), so that nodes with lower-magnitude events are more likely to be attached to. Each time a new node is added to the network, it is attached to the previous node which has the larger probability; the link is directed from the previous node to the new node. In this way, a directed network is constructed, with a ``preferential attachment''-like growth model, using the magnitudes as the parameter that determines the probability of attachment of future nodes. Several events can occur in the same node; in this case, the probability is calculated using the average of the magnitudes of the events occurring in that node. Once the directed network is finished, the corresponding undirected network is constructed by making all links symmetric and eliminating the loops which may appear when two events occur in the same cell. The resulting directed network is found to be scale free (with very low values of the power-law distribution exponent), whereas the undirected
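A minimal sketch of the construction described above, under stated assumptions (the toy catalogue, cell size, and Gaussian parameters are all invented for illustration; the attachment rule follows the abstract's wording of linking each new node from the existing node of largest probability):

```python
import math

# toy catalogue of (x, y, z, local magnitude); values are invented
events = [(10.2, 5.1, 30.0, 3.1), (10.8, 5.3, 33.0, 2.9),
          (40.0, 20.0, 60.0, 4.5), (10.5, 5.2, 31.0, 3.0),
          (41.0, 21.0, 55.0, 2.8)]
CELL = 5.0               # grid cell size (assumption)
MU, SIGMA = 2.7, 1.0     # Gaussian mapping: low magnitudes -> high probability

def cell_of(ev):
    return tuple(int(c // CELL) for c in ev[:3])

def prob(mean_mag):
    return math.exp(-((mean_mag - MU) ** 2) / (2 * SIGMA ** 2))

mags = {}    # node (grid cell) -> magnitudes of the events that fell in it
edges = []   # directed links (from an existing node to the new node)
for ev in events:
    node = cell_of(ev)
    is_new = node not in mags
    mags.setdefault(node, []).append(ev[3])
    if is_new and len(mags) > 1:
        # attach from the existing node with the largest probability
        others = [n for n in mags if n != node]
        best = max(others, key=lambda n: prob(sum(mags[n]) / len(mags[n])))
        edges.append((best, node))

# undirected version: symmetrise links and drop self-loops
und = {tuple(sorted(e)) for e in edges if e[0] != e[1]}
```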
Robust Computation of Error Vector Magnitude for Wireless Standards
Jensen, Tobias Lindstrøm; Larsen, Torben
2013-01-01
The modulation accuracy described by an error vector magnitude is a critical parameter in modern communication systems, defined originally as a performance metric for transmitters but now also used in receiver design and for more general signal analysis. The modulation accuracy is a measure of how far a test signal is from a reference signal at the symbol values when some parameters in a reconstruction model are optimized for best agreement. This paper provides an approach to the computation of error vector magnitude, as described in several standards, from measured or simulated data. It is shown that the error vector magnitude optimization problem is generally non-convex. Robust estimation of the initial conditions for the optimizer is suggested, which is particularly important for a non-convex problem. A Benders decomposition approach is used to separate the convex and non-convex parts of the problem...
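A sketch of the convex inner step of such an EVM computation: for fixed timing and frequency, the best complex gain in the reconstruction model is a least-squares fit, and the EVM follows from the residual. The QPSK reference and impairment values below are invented for illustration; real standards prescribe additional (non-convex) parameter optimization:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical QPSK reference symbols (unit average power)
ref = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
# "measured" symbols: gain/phase error plus additive noise (invented values)
noise = 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
meas = 0.9 * np.exp(1j * 0.05) * ref + noise

# Convex step: least-squares complex gain g in the reconstruction model g*ref
g = np.vdot(ref, meas) / np.vdot(ref, ref)
err = meas - g * ref
evm_rms = np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(ref) ** 2))
```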
Task difficulty in mental arithmetic affects microsaccadic rates and magnitudes.
Siegenthaler, Eva; Costela, Francisco M; McCamy, Michael B; Di Stasi, Leandro L; Otero-Millan, Jorge; Sonderegger, Andreas; Groner, Rudolf; Macknik, Stephen; Martinez-Conde, Susana
2014-01-01
Microsaccades are involuntary, small-magnitude saccadic eye movements that occur during attempted visual fixation. Recent research has found that attention can modulate microsaccade dynamics, but few studies have addressed the effects of task difficulty on microsaccade parameters, and those have obtained contradictory results. Further, no study to date has investigated the influence of task difficulty on microsaccade production during the performance of non-visual tasks. Thus, the effects of task difficulty on microsaccades, isolated from sensory modality, remain unclear. Here we investigated the effects of task difficulty on microsaccades during the performance of a non-visual, mental arithmetic task with two levels of complexity. We found that microsaccade rates decreased and microsaccade magnitudes increased with increased task difficulty. We propose that changes in microsaccade rates and magnitudes with task difficulty are mediated by the effects of varying attentional inputs on the rostral superior colliculus activity map.
Newmark design spectra considering earthquake magnitudes and site categories
Li, Bo; Xie, Wei-Chau; Pandey, M. D.
2016-09-01
Newmark design spectra have been implemented in many building codes, especially those for critical structures. Previous studies show that Newmark design spectra exhibit lower amplitudes at high frequencies and larger amplitudes at low frequencies in comparison with spectra developed by statistical methods. To resolve this problem, this study considers three suites of ground motions recorded at three types of sites. Using these ground motions, the influences of shear-wave velocity, earthquake magnitude, and source-to-site distance on the ratios of ground motion parameters are studied, and spectrum amplification factors are calculated statistically. Spectral bounds for combinations of the three site categories and two cases of earthquake magnitudes are estimated. Site design spectrum coefficients for the three site categories, accounting for earthquake magnitudes, are established. The problems of Newmark design spectra can be resolved by using the site design spectrum coefficients to modify the spectral values of Newmark design spectra in the acceleration-sensitive, velocity-sensitive, and displacement-sensitive regions.
Leptokaropoulos, Konstantinos; Staszek, Monika; Cielesta, Szymon; Urban, Paweł; Olszewska, Dorota; Lizurek, Grzegorz
2017-01-01
The purpose of this study is to evaluate seismic hazard parameters in connection with the evolution of mining operations and seismic activity. The time-dependent hazard parameters to be estimated are the activity rate, the Gutenberg-Richter b-value, the mean return period, and the exceedance probability of a prescribed magnitude for selected time windows related to the advance of the mining front. Four magnitude-distribution estimation methods are applied and their results compared with one another: maximum likelihood using the unbounded and the upper-bounded Gutenberg-Richter law, and non-parametric unbounded and upper-bounded kernel estimation of the magnitude distribution. The method is applied to seismicity that occurred during longwall mining of panel 3 in coal seam 503 at the Bobrek colliery in the Upper Silesian Coal Basin, Poland, during 2009-2010. Applications are performed in the recently established Web Platform for Anthropogenic Seismicity Research, available at https://tcs.ah-epos.eu/.
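One of the four approaches, maximum likelihood under the unbounded Gutenberg-Richter law, has the closed-form Aki-Utsu estimator, sketched below; the magnitude sample and bin width are illustrative assumptions, not the Bobrek data:

```python
import math

def b_value(mags, m_min, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value for the unbounded G-R law,
    with the usual half-bin correction for magnitudes binned at dm."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2))

mags = [1.2, 1.5, 1.3, 2.0, 1.1, 1.8, 1.4, 1.6, 1.2, 1.9]  # toy sample
b = b_value(mags, m_min=1.1)
```

With the b-value in hand, the mean return period and exceedance probability of a prescribed magnitude follow from the fitted frequency-magnitude relation.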
Real-Time and High-Accuracy Arctangent Computation Using CORDIC and Fast Magnitude Estimation
Luca Pilato
2017-03-01
This paper presents an improved VLSI (Very Large Scale Integration) architecture for real-time and high-accuracy computation of trigonometric functions with fixed-point arithmetic, particularly the arctangent, using CORDIC (Coordinate Rotation Digital Computer) and fast magnitude estimation. The standard CORDIC implementation suffers from a loss of accuracy when the magnitude of the input vector becomes small. Using a fast magnitude estimator before running the standard algorithm, a pre-processing magnification is implemented, shifting the input coordinates up by a proper factor. The entire architecture uses no multiplier, only the shift and add primitives of the original CORDIC, and it does not change the data-path precision of the CORDIC core. A bit-true case study is presented, showing a reduction of the maximum phase error from 414 LSB (angle error of 0.6355 rad) to 4 LSB (angle error of 0.0061 rad), with small overheads in complexity and speed. Implementation of the new architecture in 0.18 µm CMOS technology allows for real-time and low-power processing of CORDIC and the arctangent, which are key functions in many embedded DSP systems. The proposed macrocell has been verified by integration in a system-on-chip, called SENSASIP (Sensor Application Specific Instruction-set Processor), for position sensor signal processing in automotive measurement applications.
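A bit-level sketch of the idea, with invented word lengths and shift thresholds (the paper's actual magnitude estimator and scaling constants are not reproduced here): small fixed-point inputs are shifted up before the standard vectoring-mode CORDIC iterations, which leaves the angle unchanged but preserves precision:

```python
import math

def cordic_atan2(y, x, iters=16, frac_bits=16):
    # fixed-point representation: 'one' encodes 1.0; assumes x > 0
    one = 1 << frac_bits
    X, Y = int(x * one), int(y * one)
    # pre-magnification: shift small inputs up so iterations keep precision
    while max(abs(X), abs(Y)) and max(abs(X), abs(Y)) < (one >> 2):
        X <<= 1
        Y <<= 1
    # vectoring mode: rotate (X, Y) onto the x-axis, accumulating the angle
    angles = [int(math.atan(2.0 ** -i) * one) for i in range(iters)]
    z = 0
    for i in range(iters):
        if Y > 0:
            X, Y, z = X + (Y >> i), Y - (X >> i), z + angles[i]
        else:
            X, Y, z = X - (Y >> i), Y + (X >> i), z - angles[i]
    return z / one

angle_small = cordic_atan2(0.001, 0.002)  # would lose accuracy unscaled
```

Only shifts and adds are used, as in the original CORDIC; the pre-scaling cancels in the Y/X ratio, so the accumulated angle is unaffected.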
Lin Xiankan
2007-01-01
The author carefully selected 215 earthquakes with ML = 4.0-5.0 occurring in the crust of the Taiwan region. The attenuation characteristics of the maximum displacement recorded by the Fujian digital network have been obtained by multi-analysis as follows: log A = 2.07 + 231/Δ (150 km ≤ Δ ≤ 650 km), and the corresponding expression of the calibration function is R(Δ) = 3.45 - 231.1(1/Δ - 0.01) (150 km ≤ Δ ≤ 650 km). The author then determined the magnitude and its error, using both the calibration function brought forward in 1997 and the above formula, for 790 earthquakes occurring in the crust of the Taiwan region from September 1997 to August 2005. The result indicates that the average error of the network is 0.20 with the former and 0.18 with the latter; with station corrections, the average error with the latter is 0.13. Compared with the magnitudes determined by Taiwan seismologists, the magnitude with the former is lower by 0.50 on average and that with the latter is higher by 0.08 on average.
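Taking the calibration function at face value, a magnitude estimate follows as ML = log A + R(Δ); that combining rule (and the displacement unit) is an assumption here, since the abstract states only the two formulas:

```python
import math

def R(delta_km):
    # calibration function from the abstract, valid for 150-650 km
    assert 150.0 <= delta_km <= 650.0
    return 3.45 - 231.1 * (1.0 / delta_km - 0.01)

def ml(max_disp, delta_km):
    # ML = log A + R(delta): the standard local-magnitude form (assumption)
    return math.log10(max_disp) + R(delta_km)

m = ml(10.0, 300.0)  # hypothetical amplitude at 300 km epicentral distance
```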
Effects of different magnitudes of whole-body vibration on arm muscular performance.
Marín, Pedro J; Herrero, Azael J; Sáinz, Nuria; Rhea, Matthew R; García-López, David
2010-09-01
The purpose of this study was to analyze the effects of different vibration magnitudes, applied via the feet, on the number of repetitions performed, mean velocity, and perceived exertion during a set of elbow-extension exercise to failure (70% 1 repetition maximum [1RM] load). Twenty recreationally active students (14 men and 6 women) performed, on 3 different days, 1 elbow-extension set under 1 of 3 randomly assigned experimental conditions: high magnitude (HM; 50 Hz and 2.51 mm peak-to-peak; 98.55 m·s-2), low magnitude (LM; 30 Hz and 1.15 mm peak-to-peak; 20.44 m·s-2), or control (Control, without vibration stimulus). Results indicate that vibration via the feet provides a superimposed stimulus for elbow-extensor performance, enhancing the total number of repetitions performed in the HM and LM conditions, which was significantly higher (p < 0.05) than in the Control condition (by 21.5 and 18.1%, respectively). Moreover, there was a significant increase (p < 0.05) relative to the Control conditions. This study provides evidence that an HM of vibration generates more neuromuscular facilitation than an LM. These data suggest that a vibration stimulus applied to the feet can result in positive improvements in upper-body resistance exercise performance.
The UBV Color Evolution of Classical Novae. II. Color-Magnitude Diagram
Hachisu, Izumi
2016-01-01
We have examined the outburst tracks of 40 novae in the color-magnitude diagram (intrinsic B-V color versus absolute V magnitude). After reaching the optical maximum, each nova generally evolves blueward from the upper right to the lower left and then turns back toward the right. The 40 tracks are categorized into one of six templates: very fast nova V1500 Cyg; fast novae V1668 Cyg, V1974 Cyg, and LV Vul; moderately fast nova FH Ser; and very slow nova PU Vul. These templates are located from the left (blue) to the right (red) in this order, depending on the envelope mass and nova speed class. A bluer nova has a less massive envelope and a faster nova speed class. In novae with multiple peaks, the track of the first decay is redder than that of the second (or third) decay, because a large part of the envelope mass has already been ejected during the first peak. Thus, our newly obtained tracks in the color-magnitude diagram provide useful information for understanding the physics of classical novae. We also fou...
Robust fitting for pulsar timing analysis
Wang, Yidi; Keith, Michael J.; Stappers, Benjamin; Zheng, Wei
2017-07-01
We introduce a robust fitting method into pulsar timing analysis to cope with non-Gaussian noise. The general maximum likelihood estimator (M-estimator) can resist the impact of non-Gaussian noise by employing convex and bounded loss functions. Three loss functions are investigated: the Huber function, the Bisquare function, and the Welsch function. The Shapiro-Wilk test is employed to test whether the uncertainty in the observed times of arrival is drawn from a non-Gaussian distribution. Two simulations, in which the non-Gaussian noise is modelled as a contaminated Gaussian distribution, are performed. It is found that M-estimators are unbiased and can achieve a root-mean-square error smaller than that obtained by least squares (LS), at the cost of slightly higher computational complexity, in a non-Gaussian environment. M-estimators are also applied to real timing data of PSR J1713+0747. The results show that the fitting results of M-estimators are more accurate than those of LS and are closer to the result of very long baseline interferometry.
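A Huber-loss M-estimator for a linear timing model can be sketched with iteratively reweighted least squares (IRLS); the simulated data, contamination pattern, and tuning constant below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 100)
y = 2.0 + 0.5 * t + 0.05 * rng.standard_normal(100)  # Gaussian inliers
y[::10] += 3.0                                       # non-Gaussian outliers

X = np.column_stack([np.ones_like(t), t])
beta = np.linalg.lstsq(X, y, rcond=None)[0]          # least-squares start
k = 1.345 * 0.05                       # Huber constant times noise scale
for _ in range(50):                    # IRLS iterations
    r = y - X @ beta
    # Huber weights: 1 inside the threshold, k/|r| beyond it
    w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
```

The bounded loss caps each outlier's influence, so `beta` stays close to the true (2.0, 0.5) despite the 10% contamination that would bias plain LS.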
Aftershock Hazard Magnitude, Time, and Location Probability Forecasting
Kuei-Pao Chen
2014-01-01
This study combines the branching aftershock sequence (BASS) model and a modified _ law to develop a predictive model for forecasting the magnitude, time, and location of aftershocks of magnitude Mw ≥ 5.00 in large earthquakes. The developed model is presented and applied to the 17:47 20 September 1999 (UTC) Mw 7.45 Chi-Chi earthquake, Taiwan; the 09:32 5 November 2009 (UTC) Mw 6.19 Nantou and 00:18 4 March 2010 (UTC) Mw 6.49 Jiashian earthquake sequences, Taiwan; and the 05:46 11 March 2011 (UTC) Mw 9.00 Tohoku earthquake, Japan. The estimated peak ground acceleration (PGA) results are remarkably similar to calculations from the recorded magnitudes in both trend and level. This study proposes an empirical equation to improve the aftershock occurrence forecast time; the forecast-time results were greatly improved. The magnitude of aftershocks generally decreases with time. It was found that the aftershock forecast probability of Mw ≥ 5.00 is high in the first six days after the main shock. The results will be of interest to seismic mitigation specialists. An investigation of the spatial and temporal seismicity parameters of the aftershock sequence of the 17:47 20 September 1999 (UTC) Mw 7.45 Chi-Chi earthquake, Taiwan found that immediately after the earthquake the area closest to the epicenter had a lower b value. This pattern suggests that at the time of the Chi-Chi earthquake, the area closest to the epicenter remained prone to large-magnitude aftershocks and strong shaking. With time, however, the b value increased, indicating a reduced likelihood of large-magnitude aftershocks.
Magnitude comparison with different types of rational numbers.
DeWolf, Melissa; Grounds, Margaret A; Bassok, Miriam; Holyoak, Keith J
2014-02-01
An important issue in understanding mathematical cognition involves the similarities and differences between the magnitude representations associated with various types of rational numbers. For single-digit integers, evidence indicates that magnitudes are represented as analog values on a mental number line, such that magnitude comparisons are made more quickly and accurately as the numerical distance between numbers increases (the distance effect). Evidence concerning a distance effect for compositional numbers (e.g., multidigit whole numbers, fractions and decimals) is mixed. We compared the patterns of response times and errors for college students in magnitude comparison tasks across closely matched sets of rational numbers (e.g., 22/37, 0.595, 595). In Experiment 1, a distance effect was found for both fractions and decimals, but response times were dramatically slower for fractions than for decimals. Experiments 2 and 3 compared performance across fractions, decimals, and 3-digit integers. Response patterns for decimals and integers were extremely similar but, as in Experiment 1, magnitude comparisons based on fractions were dramatically slower, even when the decimals varied in precision (i.e., number of place digits) and could not be compared in the same way as multidigit integers (Experiment 3). Our findings indicate that comparisons of all three types of numbers exhibit a distance effect, but that processing often involves strategic focus on components of numbers. Fractions impose an especially high processing burden due to their bipartite (a/b) structure. In contrast to the other number types, the magnitude values associated with fractions appear to be less precise, and more dependent on explicit calculation.
A catalog of observed nuclear magnitudes of Jupiter family comets
Tancredi, G.; Fernández, J. A.; Rickman, H.; Licandro, J.
2000-10-01
We present a catalog of a sample of 105 Jupiter family (JF) comets (defined as those with Tisserand constants T > 2 and orbital periods P < 20 yr) with their nuclear magnitudes H_N = V(1,0,0). The catalog includes all the nuclear magnitudes reported after 1950 until August 1998 that appear in the International Comet Quarterly Archive of Cometary Photometric Data, the Minor Planet Center (MPC) data base, IAU Circulars, International Comet Quarterly, and a few papers devoted to particular comets, together with our own observations. Photometric data prior to 1990 have mainly been taken from the Comet Light Curve Catalogue (CLICC) compiled by Kamél. We discuss the reliability of the reported nuclear magnitudes in relation to the inherent sources of errors and uncertainties, in particular the coma contamination often present even at large heliocentric distances. A large fraction of the JF comets in our sample indeed shows various degrees of activity at large heliocentric distances, which is correlated with recent downward jumps in their perihelion distances. The reliability of coma-subtraction methods used to compute the nuclear magnitude is also discussed. Most absolute nuclear magnitudes are found in the range 15-18, with no magnitudes fainter than H_N ~ 19.5. The catalog can be found at http://www.fisica.edu.uy/~gonzalo/catalog/. Table 2 and Appendix B are only available in electronic form at http://www.edpsciences.org. Table 5 is also available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html
Aab, A.; Abreu, P.; Aglietta, M.; Ahn, E. J.; Al Samarai, I.; Albuquerque, I. F. M.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Batista, R. Alves; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Aramo, C.; Aranda, V. M.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Awal, N.; Badescu, A. M.; Barber, K. B.; Baeuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertania, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blaess, S.; Blanco, M.; Bleve, C.; Bluemer, H.; Bohacova, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Bridgeman, A.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, B.; Caccianiga, L.; Candusso, M.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chavez, A. G.; Chiavassa, A.; Chinellato, J. A.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceicao, R.; Contreras, F.; Cooper, M. J.; Cordier, A.; Coutu, S.; Covault, C. E.; Cronin, J.; Curutiu, A.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; de Jong, S. J.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Di Matteo, A.; Diaz, J. C.; Diaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dorofeev, A.; Hasankiadeh, Q. Dorosti; Dova, M. T.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Luis, P. Facal San; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fernandes, M.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipcic, A.; Fox, B. D.; Fratu, O.; Froehlich, U.; Fuchs, B.; Fuji, T.; Gaior, R.; Garcia, B.; Garcia Roca, S. 
T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gate, F.; Gemmeke, H.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Glaser, C.; Glass, H.; Gomez Berisso, M.; Gomez Vitale, P. F.; Goncalves, P.; Gonzalez, J. G.; Gonzalez, N.; Gookin, B.; Gordon, J.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grebe, S.; Griffith, N.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Hartmann, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holt, E.; Homola, P.; Hoerandel, J. R.; Horvath, P.; Hrabovsky, M.; Huber, D.; Huege, T.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Jarne, C.; Josebachuili, M.; Kaeaepae, A.; Kambeitz, O.; Kampert, K. H.; Kasper, P.; Katkov, I.; Kegl, B.; Keilhauer, B.; Keivani, A.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kroemer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kunka, N.; LaHurd, D.; Latronico, L.; Lauer, R.; Lauscher, M.; Lautridou, P.; Le Coz, S.; Leao, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopez, R.; Lopez Agueera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Malacari, M.; Maldera, S.; Mallamaci, M.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Maris, I. C.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martinez Bravo, O.; Martraire, D.; Masias Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Meissner, R.; Melissas, M.; Melo, D.; Menshikov, A.; Messina, S.; Meyhandan, R.; Micanovic, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Ragaigne, D. Monnier; Montanet, F.; Morello, C.; Mostafa, M.; Moura, C. 
A.; Muller, M. A.; Mueller, G.; Mueller, S.; Muenchmeyer, M.; Mussa, R.; Navarra, G.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nguyen, P.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nozka, L.; Ochilo, L.; Olinto, A.; Oliveira, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pekala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Petermann, E.; Peters, C.; Petrera, S.; Petrov, Y.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porcelli, A.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Purrello, V.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Cabo, I.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rodriguez-Frias, M. D.; Rogozin, D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Roulet, E.; Rovero, A. C.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Greus, F. Salesa; Salina, G.; Sanchez, F.; Sanchez-Lucas, P.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarmento, R.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, D.; Scholten, O.; Schoorlemmer, H.; Schovanek, P.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Sima, O.; Smialkowski, A.; Smida, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Squartini, R.; Srivastava, Y. N.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijaervi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Taborda, O. A.; Tapia, A.; Tartare, M.; Tepe, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. 
J.; Toma, G.; Tomankova, L.; Tome, B.; Tonachini, A.; Torralba Elipe, G.; Torres Machado, D.; Travnicek, P.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdes Galicia, J. F.; Valino, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cardenas, B.; Varner, G.; Vazquez, J. R.; Vazquez, R. A.; Veberic, D.; Verzi, V.; Vicha, J.; Videla, M.; Villasenor, L.; Vlcek, B.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Widom, A.; Wiencke, L.; Wilczynska, B.; Wilczynski, H.; Will, M.; Williams, C.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Wykes, S.; Yamamoto, T.; Yapici, T.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.; Zuccarello, F.
2014-01-01
Using the data taken at the Pierre Auger Observatory between December 2004 and December 2012, we have examined the implications of the distributions of depths of atmospheric shower maximum (X-max), using a hybrid technique, for composition and hadronic interaction models. We do this by fitting the d
Color-magnitude distribution of face-on nearby galaxies in Sloan digital sky survey DR7
Jin, Shuo-Wen; Feng, Long-Long [Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008 (China); Gu, Qiusheng; Huang, Song; Shi, Yong, E-mail: qsgu@nju.edu.cn [School of Astronomy and Space Science, Nanjing University, Nanjing 210093 (China)
2014-05-20
We have analyzed the distributions in the color-magnitude diagram (CMD) of a large sample of face-on galaxies to minimize the effect of dust extinction on galaxy color. About 300,000 galaxies with log(a/b) < 0.2 and redshift z < 0.2 are selected from the Sloan Digital Sky Survey DR7 catalog. Two methods are employed to investigate the distributions of galaxies in the CMD: one-dimensional (1D) Gaussian fitting to the distributions in individual magnitude bins, and two-dimensional (2D) Gaussian mixture model (GMM) fitting to the galaxies as a whole. We find that in the 1D fitting, two Gaussians are not enough to fit the galaxies, with an excess present between the blue cloud and the red sequence. The fitting to this excess defines the center of the green valley in the local universe to be (u − r)_0.1 = −0.121 M_r,0.1 − 0.061. The fractions of blue cloud and red sequence galaxies turn over around M_r,0.1 ∼ −20.1 mag, corresponding to a stellar mass of 3 × 10^10 M_☉. For the 2D GMM fitting, a total of four Gaussians are required: one for the blue cloud, one for the red sequence, and the additional two for the green valley. The fact that two Gaussians are needed to describe the distributions of galaxies in the green valley is consistent with some models that argue for two different evolutionary paths from the blue cloud to the red sequence.
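The 1D fitting stage described above can be sketched with a minimal expectation-maximization loop for a two-component Gaussian mixture. This is a generic illustration, not the paper's pipeline: `fit_gmm_1d` is a hypothetical helper, and the "colour" values below are synthetic numbers chosen only to mimic a blue cloud and a red sequence.

```python
import numpy as np

def fit_gmm_1d(x, n_iter=200):
    """Fit a two-component 1D Gaussian mixture with plain EM.

    Returns (weights, means, sigmas). Initialisation splits the data
    at its median; illustrative sketch only.
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mu = np.array([x[x < med].mean(), x[x >= med].mean()])
    sg = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sg) ** 2) / (sg * np.sqrt(2 * np.pi))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and widths
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sg = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sg

# Synthetic "blue cloud" and "red sequence" colours (made-up numbers)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(1.4, 0.25, 3000),   # blue cloud
                    rng.normal(2.4, 0.15, 2000)])  # red sequence
w, mu, sg = fit_gmm_1d(x)
print(np.round(np.sort(mu), 2))
```

With well-separated components like these, EM recovers the two centres; detecting a green-valley excess, as in the paper, amounts to showing that the two-component residuals are systematically positive between the peaks.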
The Association between Motor Skill Competence and Physical Fitness in Young Adults
Stodden, David; Langendorfer, Stephen; Roberton, Mary Ann
2009-01-01
We examined the relationship between competence in three fundamental motor skills (throwing, kicking, and jumping) and six measures of health-related physical fitness in young adults (ages 18-25). We assessed motor skill competence using product scores of maximum kicking and throwing speed and maximum jumping distance. A factor analysis indicated…
Implementation of Health Fitness Exercise Programs.
Cundiff, David E., Ed.
This monograph includes the following articles to aid in implementation of fitness concepts: (1) "Trends in Physical Fitness: A Personal Perspective" (H. Harrison Clarke); (2) "A Total Health-Fitness Life-Style" (Steven N. Blair); (3) "Objectives for the Nation--Physical Fitness and Exercise" (Jack H. Wilmore); (4) "A New Physical Fitness Test"…
Prediction of Rainfall Magnitudes and Variations in Nigeria
Engr. Peter Ekpo
Department of Civil Engineering, University of Nigeria, Nsukka. … maximum annual rainfall depth of return period T …
2017-01-01
Crystal structures of protein–ligand complexes are often used to infer biology and inform structure-based drug discovery. Hence, it is important to build accurate, reliable models of ligands that give confidence in the interpretation of the respective protein–ligand complex. This paper discusses key stages in the ligand-fitting process, including ligand binding-site identification, ligand description and conformer generation, ligand fitting, refinement and subsequent validation. The CCP4 suite contains a number of software tools that facilitate this task: AceDRG for the creation of ligand descriptions and conformers, Lidia and JLigand for two-dimensional and three-dimensional ligand editing and visual analysis, Coot for density interpretation, ligand fitting, analysis and validation, and REFMAC5 for macromolecular refinement. In addition to recent advancements in automatic carbohydrate building in Coot (LO/Carb) and ligand-validation tools (FLEV), the release of the CCP4i2 GUI provides an integrated solution that streamlines the ligand-fitting workflow, seamlessly passing results from one program to the next. The ligand-fitting process is illustrated using instructive practical examples, including problematic cases such as post-translational modifications, highlighting the need for careful analysis and rigorous validation. PMID:28177312
Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities
Kinney, Justin B
2014-01-01
Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.
[None listed]
2006-01-01
By using the orthogonal regression method, a systematic comparison is made between surface wave magnitudes determined by the Institute of Geophysics of China Earthquake Administration (IGCEA) and the National Earthquake Information Center of the US Geological Survey (USGS/NEIC), on the basis of observation data collected by the two institutions between 1983 and 2004. A formula is obtained which reveals the relationship between surface wave magnitudes determined by the China seismograph network and the US seismograph network. The result shows that, as different calculation formulae and observational instruments are used, the surface wave magnitude determined by IGCEA is generally greater by 0.2 than that determined by NEIC: for M = 3.5–4.5 earthquakes, it is greater by 0.3; for M = 5.0–6.5 earthquakes, it is greater by 0.2; and for M ≥ 7.0 earthquakes, it is greater by no more than 0.1.
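The orthogonal regression step can be sketched with a total-least-squares line fit, which treats the errors in both magnitude scales symmetrically (unlike ordinary least squares, which assumes the x-variable is error-free). The catalogue values below are synthetic; the +0.2 offset merely mimics the reported IGCEA-NEIC bias.

```python
import numpy as np

def orthogonal_fit(x, y):
    """Total least squares (orthogonal regression) line y = a + b*x.

    The smallest right singular vector of the centred data matrix is
    normal to the best-fit line, so the slope follows directly from it.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    _, _, vt = np.linalg.svd(np.column_stack([x - xm, y - ym]))
    nx, ny = vt[-1]          # normal vector to the fitted line
    b = -nx / ny
    a = ym - b * xm
    return a, b

# Synthetic magnitudes with scatter on BOTH axes (not the real catalogues)
rng = np.random.default_rng(1)
m_true = rng.uniform(4.0, 7.5, 500)
m_neic = m_true + rng.normal(0, 0.1, 500)
m_igcea = m_true + 0.2 + rng.normal(0, 0.1, 500)
a, b = orthogonal_fit(m_neic, m_igcea)
print(round(a, 2), round(b, 2))
```

With equal error variances on both axes this recovers a slope near 1 and an intercept near the injected 0.2 offset, which is why orthogonal (rather than ordinary) regression is the appropriate tool for inter-catalogue magnitude conversion.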
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
Relating perturbation magnitude to temporal gene expression in biological systems
Pfrender Michael E
2009-03-01
Background: Most transcriptional activity is a result of environmental variability. This cause (environment) and effect (gene expression) relationship is essential to survival in any changing environment. The specific relationship between environmental perturbation and gene expression, and the stability of the response, has yet to be measured in detail. We describe a method to quantitatively relate perturbation magnitude to response at the level of gene expression. We test our method using Saccharomyces cerevisiae as a model organism and osmotic stress as an environmental stress. Results: Patterns of gene expression were measured in response to increasing sodium chloride concentrations (0, 0.5, 0.7, 1.0, and 1.2 M) for sixty genes impacted by osmotic shock. Expression of these genes was quantified over five time points using reverse transcriptase real-time polymerase chain reaction. Magnitudes of cumulative response for specific pathways, and for the set of all genes, were obtained by combining the temporal response envelopes of genes exhibiting significant changes in expression with time. A linear relationship between perturbation magnitude and response was observed for the range of concentrations studied. Conclusion: This study develops a quantitative approach to describe the stability of gene response and pathways to environmental perturbation and illustrates the utility of this approach. The approach should be applicable for quantitatively evaluating the response of organisms, via the magnitude of response and stability of the transcriptome, to environmental change.
The Magnitude Distribution of Earthquakes Near Southern California Faults
2011-12-16
…Lindh, 1985; Jackson and Kagan, 2006]. We do not consider time dependence in this study, but focus instead on the magnitude distribution for this fault… Bakun, W. H., and A. G. Lindh (1985), The Parkfield, California, earthquake prediction experiment, Science, 229(4714), 619–624, doi:10.1126…
Magnitude estimation with noisy integrators linked by an adaptive reference
Kay eThurley
2016-02-01
Judgments of physical stimuli show characteristic biases: relatively small stimuli are overestimated whereas relatively large stimuli are underestimated (the regression effect). Such biases likely result from a strategy that seeks to minimize errors given noisy estimates about stimuli that are themselves drawn from a distribution, i.e., the statistics of the environment. While conceptually well described, it is unclear how such a strategy could be implemented neurally. The present paper aims to answer this question. A theoretical approach is introduced that describes magnitude estimation as two successive stages of noisy (neural) integration. Both stages are linked by a reference memory that is updated with every new stimulus. The model reproduces the behavioral characteristics of magnitude estimation and makes several experimentally testable predictions. Moreover, the model identifies the regression effect as a means of minimizing estimation errors and explains how this optimality strategy depends on the subject's discrimination abilities and on the stimulus statistics. The latter influence predicts another property of magnitude estimation, the so-called range effect. Beyond being successful in describing decision-making, the present work suggests that noisy integration may also be important in processing magnitudes.
Absolute magnitudes and phase coefficients of trans-Neptunian objects
Alvarez-Candal, A; Ortiz, J L; Duffard, R; Morales, N; Santos-Sanz, P; Thirouin, A; Silva, J S
2015-01-01
Context: Accurate measurements of diameters of trans-Neptunian objects are extremely complicated to obtain. Thermal modeling can provide good results, but accurate absolute magnitudes are needed to constrain the thermal models and derive diameters and geometric albedos. The absolute magnitude, Hv, is defined as the magnitude of the object reduced to unit helio- and geocentric distances and a zero solar phase angle and is determined using phase curves. Phase coefficients can also be obtained from phase curves. These are related to surface properties, yet not many are known. Aims: Our objective is to measure accurate V band absolute magnitudes and phase coefficients for a sample of trans-Neptunian objects, many of which have been observed, and modeled, within the 'TNOs are cool' program, one of Herschel Space Observatory key projects. Methods: We observed 56 objects using the V and R filters. These data, along with those available in the literature, were used to obtain phase curves and measure V band absolute m...
Extremal Regions Detection Guided by Maxima of Gradient Magnitude
Faraji, Mehdi; Shambezadeh, Jamshid; Nasrollahi, Kamal
2015-01-01
boundaries we introduce Maxima of Gradient Magnitudes (MGMs) which are shown to be points that are mostly around the boundaries of the regions. Having found the MGMs, the method obtains a Global Criterion (GC) for each level of the input image which is used to find Extremum Levels (ELs). The found ELs...
DIGITAL FILTER PROCESS DURING THE DISCRETE MAGNITUDE DATA GATHERING
姚天忠; 邹丽新; 胡冶
1995-01-01
We analyze the source of the error that arises during discrete magnitude data gathering. A method that processes the data with a second-order low-pass digital filter is presented, which considerably improves both the smoothness and the response of the data.
Asteroid magnitudes, UBV colors, and IRAS albedos and diameters
Tedesco, Edward F.
1989-01-01
This paper lists absolute magnitudes and slope parameters for known asteroids numbered through 3318. The values presented are those used in reducing asteroid IR flux data obtained with the IRAS. U-B colors are given for 938 asteroids, and B-V colors are given for 945 asteroids. The IRAS albedos and diameters are tabulated for 1790 asteroids.
Milli-Magnitude Time-Resolved Photometry with BEST
Karoff, Christoffer; Rauer, H.; Erikson, E.
2006-01-01
We present a comparative test of different photometry algorithms. The test has been made in order to optimize the number of stars for which light curves with milli-magnitude precision can be achieved in observations made by the Berlin Exoplanet Search Telescope (BEST), a small wide-angle telescope...
Fraction Development in Children: Importance of Building Numerical Magnitude Understanding
Jordan, Nancy C.; Carrique, Jessica; Hansen, Nicole; Resnick, Ilyse
2016-01-01
This chapter situates fraction learning within the integrated theory of numerical development. We argue that the understanding of numerical magnitudes for whole numbers as well as for fractions is critical to fraction learning in particular and mathematics achievement more generally. Results from the Delaware Longitudinal Study, which examined…
Estimating the magnitude of food waste generated in South Africa
Oelofse, Suzanna HH
2012-08-01
Throughout the developed world, food is treated as a disposable commodity. Between one third and half of all food produced for human consumption globally is estimated to be wasted. However, attempts to quantify the actual magnitude of food wasted...
Neural representations of magnitude for natural and rational numbers.
DeWolf, Melissa; Chiang, Jeffrey N; Bassok, Miriam; Holyoak, Keith J; Monti, Martin M
2016-11-01
Humans have developed multiple symbolic representations for numbers, including natural numbers (positive integers) as well as rational numbers (both fractions and decimals). Despite a considerable body of behavioral and neuroimaging research, it is currently unknown whether different notations map onto a single, fully abstract, magnitude code, or whether separate representations exist for specific number types (e.g., natural versus rational) or number representations (e.g., base-10 versus fractions). We address this question by comparing brain metabolic response during a magnitude comparison task involving (on different trials) integers, decimals, and fractions. Univariate and multivariate analyses revealed that the strength and pattern of activation for fractions differed systematically, within the intraparietal sulcus, from that of both decimals and integers, while the latter two number representations appeared virtually indistinguishable. These results demonstrate that the two major notations formats for rational numbers, fractions and decimals, evoke distinct neural representations of magnitude, with decimals representations being more closely linked to those of integers than to those of magnitude-equivalent fractions. Our findings thus suggest that number representation (base-10 versus fractions) is an important organizational principle for the neural substrate underlying mathematical cognition.
Passive seismic monitoring at the ketzin CCS site -Magnitude estimation
Paap, B.F.; Steeghs, T.P.H.
2014-01-01
In order to allow quantification of the strength of local micro-seismic events recorded at the CCS pilot site in Ketzin in terms of local magnitude, earthquake data recorded by standardized seismometers were used. Earthquakes were selected that occurred in Poland and Czech Republic and that were det
Mandel, Kaisey; Scolnic, Daniel; Shariff, Hikmatali; Foley, Ryan; Kirshner, Robert
2017-01-01
Inferring peak optical absolute magnitudes of Type Ia supernovae (SN Ia) from distance-independent measures such as their light curve shapes and colors underpins the evidence for cosmic acceleration. SN Ia with broader, slower declining optical light curves are more luminous (“broader-brighter”) and those with redder colors are dimmer. But the “redder-dimmer” color-luminosity relation widely used in cosmological SN Ia analyses confounds its two separate physical origins. An intrinsic correlation arises from the physics of exploding white dwarfs, while interstellar dust in the host galaxy also makes SN Ia appear dimmer and redder. Conventional SN Ia cosmology analyses currently use a simplistic linear regression of magnitude versus color and light curve shape, which does not model intrinsic SN Ia variations and host galaxy dust as physically distinct effects, resulting in low color-magnitude slopes. We construct a probabilistic generative model for the dusty distribution of extinguished absolute magnitudes and apparent colors as the convolution of an intrinsic SN Ia color-magnitude distribution and a host galaxy dust reddening-extinction distribution. If the intrinsic color-magnitude (MB vs. B-V) slope βint differs from the host galaxy dust law RB, this convolution results in a specific curve of mean extinguished absolute magnitude vs. apparent color. The derivative of this curve smoothly transitions from βint in the blue tail to RB in the red tail of the apparent color distribution. The conventional linear fit approximates this effective curve near the average apparent color, resulting in an apparent slope βapp between βint and RB. We incorporate these effects into a hierarchical Bayesian statistical model for SN Ia light curve measurements, and analyze a dataset of SALT2 optical light curve fits of 277 nearby SN Ia at z < 0.10. The conventional linear fit obtains βapp ≈ 3. Our model finds a βint = 2.2 ± 0.3 and a distinct dust law of RB = 3.7 ± 0
Modelling non-stationary annual maximum flood heights in the lower Limpopo River basin of Mozambique
Daniel Maposa
2016-03-01
In this article we fit a time-dependent generalised extreme value (GEV) distribution to annual maximum flood heights at three sites, Chokwe, Sicacate and Combomune, in the lower Limpopo River basin of Mozambique. A GEV distribution is fitted to six annual maximum time series models at each site, namely: annual daily maximum (AM1), annual 2-day maximum (AM2), annual 5-day maximum (AM5), annual 7-day maximum (AM7), annual 10-day maximum (AM10) and annual 30-day maximum (AM30). Non-stationary time-dependent GEV models with a linear trend in the location and scale parameters are considered in this study. The results show a lack of sufficient evidence to indicate a linear trend in the location parameter at all three sites. On the other hand, the findings in this study reveal strong evidence of a linear trend in the scale parameter at Combomune and Sicacate, whilst the scale parameter had no significant linear trend at Chokwe. Further investigation in this study also reveals that the location parameter at Sicacate can be modelled by a nonlinear quadratic trend; however, the complexity of the overall model is not worthwhile in fit over a time-homogeneous model. This study shows the importance of extending the time-homogeneous GEV model to incorporate climate-change factors such as trend in the lower Limpopo River basin, particularly in this era of global warming and a changing climate. Keywords: nonstationary extremes; annual maxima; lower Limpopo River; generalised extreme value
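The time-homogeneous baseline of such an analysis can be sketched with `scipy.stats.genextreme`, noting scipy's sign convention (its shape parameter c equals −ξ, so c < 0 is a heavy Fréchet-type tail). The flood heights below are simulated, not the Limpopo gauge records, and the non-stationary trend terms in the location and scale parameters would require a custom likelihood rather than the stock `fit`.

```python
import numpy as np
from scipy.stats import genextreme

# Simulated stationary annual-maximum "flood heights" (illustrative only)
rng = np.random.default_rng(42)
c_true = -0.1   # scipy shape c = -xi; c < 0 means a Frechet-type upper tail
sample = genextreme.rvs(c_true, loc=5.0, scale=1.2, size=2000, random_state=rng)

# Maximum-likelihood fit of the three GEV parameters
c_hat, loc_hat, scale_hat = genextreme.fit(sample)

# 100-year return level: the height exceeded with probability 1/100 per year
rl_100 = genextreme.ppf(1 - 1 / 100, c_hat, loc=loc_hat, scale=scale_hat)
print(round(c_hat, 2), round(loc_hat, 2), round(scale_hat, 2), round(rl_100, 2))
```

A linear trend as in the paper is usually added by writing loc(t) = μ0 + μ1·t (and similarly for scale) and maximizing the resulting log-likelihood numerically, then comparing against this stationary fit with a likelihood-ratio test.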
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to finding the distribution functions of physical quantities. MENT naturally takes into account the demand of maximum entropy, the characteristics of the system, and the constraint conditions, which allows it to be applied to the statistical description of both closed and open systems. Examples in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium, are considered.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Bayesian Predictive Distribution for the Magnitude of the Largest Aftershock
Shcherbakov, R.
2014-12-01
Aftershock sequences, which follow large earthquakes, last hundreds of days and are characterized by well defined frequency-magnitude and spatio-temporal distributions. The largest aftershocks in a sequence constitute significant hazard and can inflict additional damage to infrastructure. Therefore, the estimation of the magnitude of possible largest aftershocks in a sequence is of high importance. In this work, we propose a statistical model based on Bayesian analysis and extreme value statistics to describe the distribution of magnitudes of the largest aftershocks in a sequence. We derive an analytical expression for a Bayesian predictive distribution function for the magnitude of the largest expected aftershock and compute the corresponding confidence intervals. We assume that the occurrence of aftershocks can be modeled, to a good approximation, by a non-homogeneous Poisson process with a temporal event rate given by the modified Omori law. We also assume that the frequency-magnitude statistics of aftershocks can be approximated by Gutenberg-Richter scaling. We apply our analysis to 19 prominent aftershock sequences, which occurred in the last 30 years, in order to compute the Bayesian predictive distributions and the corresponding confidence intervals. In the analysis, we use the information of the early aftershocks in the sequences (in the first 1, 10, and 30 days after the main shock) to estimate retrospectively the confidence intervals for the magnitude of the expected largest aftershocks. We demonstrate by analysing 19 past sequences that in many cases we are able to constrain the magnitudes of the largest aftershocks. For example, this includes the analysis of the Darfield (Christchurch) aftershock sequence. The proposed analysis can be used for the earthquake hazard assessment and forecasting associated with the occurrence of large aftershocks. The improvement in instrumental data associated with early aftershocks can greatly enhance the analysis and
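The extreme-value ingredient of the approach above, combining a Poisson count of aftershocks with Gutenberg-Richter magnitudes, gives a closed-form CDF for the largest aftershock. The sketch below shows only that ingredient (the paper additionally places Bayesian uncertainty on the event rate and b-value); all numbers are illustrative.

```python
import numpy as np

def largest_aftershock_cdf(m, n_events, b=1.0, m_c=2.0):
    """CDF of the largest aftershock magnitude for a Poisson sequence
    with Gutenberg-Richter magnitudes:

        P(M_max <= m) = exp(-N * 10**(-b * (m - m_c))),  m >= m_c,

    where N is the expected number of aftershocks above the
    completeness magnitude m_c. Sketch only; no Bayesian layer.
    """
    m = np.asarray(m, float)
    return np.exp(-n_events * 10.0 ** (-b * (m - m_c)))

# The median of M_max has the closed form m_c + log10(N / ln 2) / b
N, b, m_c = 1000.0, 1.0, 2.0
m_grid = np.linspace(m_c, 9.0, 20000)
cdf = largest_aftershock_cdf(m_grid, N, b, m_c)
median_numeric = m_grid[np.searchsorted(cdf, 0.5)]
median_analytic = m_c + np.log10(N / np.log(2)) / b
print(round(median_numeric, 3), round(median_analytic, 3))
```

Confidence intervals for the largest expected aftershock follow by reading quantiles off this CDF; in the Bayesian version, N and b are themselves estimated from the early part of the sequence and integrated out.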
Desirable design of hose fittings
Voigt, Kristian
1998-01-01
This paper describes the primary functionality of a hose fitting. The different parts of the hose assembly are discussed: the nipple, the hose and the outer compression parts. The last subject covered is which criteria should be put up for determining what makes a good hose fitting. An incomplete list of 'Voice of Customer' items has been compiled; observations and interviews in industry should expand this list.
Accelerated Fitting of Stellar Spectra
Ting, Yuan-Sen; Rix, Hans-Walter
2016-01-01
Stellar spectra are often modeled and fit by interpolating within a rectilinear grid of synthetic spectra to derive the stars' labels: stellar parameters and elemental abundances. However, the number of synthetic spectra needed for a rectilinear grid grows exponentially with the label space dimensions, precluding the simultaneous and self-consistent fitting of more than a few elemental abundances. Shortcuts such as fitting subsets of parameters separately can introduce unknown systematics and do not produce correct error covariances in the derived labels. In this paper we present a new approach -- CHAT (Convex Hull Adaptive Tessellation) -- which includes several new ideas for inexpensively generating a sufficient stellar synthetic library, using linear algebra and the concept of an adaptive, data-driven grid. A convex hull approximates the region where the data lie in the label space. A variety of tests with mock datasets demonstrate that CHAT can reduce the number of required synthetic model calculations by...
Fitness Doping and Body Management
Thualagant, Nicole
This PhD thesis examines, in a first paper, the conceptualization of fitness doping and its current limitations. Based on a review of studies on bodywork and fitness doping, it is emphasised that the definition of doping does not provide insights into the bodywork of both men and women. Moreover, it is argued that the social and cultural context is missing in the many epidemiological studies on the prevalence of doping. The second paper explores the difficulties of implementing an anti-doping policy, originally formulated in an elite sport context, in a fitness context and more specifically in a sport-for-all context. It is questioned whether the anti-doping policy contradicts some of the national sport-for-all organisation DGI's values of fostering fellowship, challenge and health. Last but not least, this thesis examines, in a third paper, the bodywork of the users of the club...
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 mm for right male and female, and 453.35 and 420.44 mm for left male and female, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female; for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
CMB Maximum Temperature Asymmetry Axis: Alignment with Other Cosmic Asymmetries
Mariano, Antonio
2012-01-01
We use a global pixel-based estimator to identify the axis of the residual Maximum Temperature Asymmetry (MTA) (after the dipole subtraction) of the WMAP 7-year Internal Linear Combination (ILC) CMB temperature sky map. The estimator is based on considering the temperature differences between opposite pixels in the sky at various angular resolutions (4°–15°) and selecting the axis that maximizes this difference. We consider three large-scale HEALPix resolutions: N_side=16 (3.7°), N_side=8 (7.3°) and N_side=4 (14.7°). We compare the direction and magnitude of this asymmetry with three other cosmic asymmetry axes (the α dipole, the Dark Energy Dipole and the Dark Flow) and find that the four asymmetry axes are abnormally close to each other. We compare the observed MTA axis with the corresponding MTA axes of 10^4 Gaussian isotropic simulated ILC maps (based on LCDM). The fraction of simulated ILC maps that reproduces the observed magnitude of the MTA asymmetry and alignment wit...
Fit;o) - A Mössbauer spectrum fitting program
Hjøllum, Jari í
2009-01-01
Fit;o) is a Mössbauer fitting and analysis program written in Borland Delphi. It has a complete graphical user interface that allows all actions to be carried out via mouse clicks or keyboard shortcuts in a WYSIWYG fashion. The program does not perform complete transmission integrals, and will therefore not be suited for a complete analysis of all types of Mössbauer spectra, e.g. low-temperature spectra of ferrous silicates. Instead, the program is intended for application to complex spectra resulting from typical mineral samples, in which many phases and different crystallite sizes are often present at the same time. The program provides the opportunity to fit the spectra with Gaussian, Lorentzian, Split-Lorentzian, Pseudo-Voigt, Pseudo-Lorentz and Pearson-VII line profiles for individual components of the spectra. This feature is particularly useful when the sample contains components that are affected by effects of either relaxation or interaction among particles. Fitted spectra may be printed...
Fitness landscapes among many options under social influence.
Caiado, Camila C S; Brock, William A; Bentley, R Alexander; O'Brien, Michael J
2016-09-21
Cultural learning represents a novel problem in that an optimal decision depends not only on intrinsic utility of the decision/behavior but also on transparency of costs and benefits, the degree of social versus individual learning, and the relative popularity of each possible choice in a population. In terms of a fitness-landscape function, this recursive relationship means that multiple equilibria can exist. Here we use discrete-choice theory to construct a fitness-landscape function for a bi-axial decision-making map that plots the magnitude of social influence in the learning process against the costs and payoffs of decisions. Specifically, we use econometric and statistical methods to estimate not only the fitness function but also movements along the map axes. To search for these equilibria, we employ a hill-climbing algorithm that leads to the expected values of optimal decisions, which we define as peaks on the fitness landscape. We illustrate how estimation of a measure of transparency, a measure of social influence, and the associated fitness landscape can be accomplished using panel data sets.
Improved treatment of the strongly varying slope in fitting solar cell I-V curves
Burgers, A.R.; Eikelboom, J.A.; Schoenecker, A.; Sinke, W.C.
1996-05-01
Straightforward least-squares fitting of I-V curves leads to non-optimal fits: residuals around and above the open-circuit voltage dominate the fit, leading to a bad fit at the maximum power point and at lower voltage values. To deal with this problem, the authors have resorted to using weighting functions or to minimizing the area between data and fit instead of the least-squares procedure. Both approaches lack a sound statistical basis. Voltage noise has a large influence on fitting due to the steep slope of an I-V curve at higher voltage values. For this reason the authors have used Orthogonal Distance Regression (ODR), a mathematical method for fitting measurements with errors in both the voltage and the current measurements. It allows for computing both the I-V curve parameters and their uncertainties. 4 figs., 1 tab., 12 refs.
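SciPy ships an ODR implementation (`scipy.odr`, wrapping ODRPACK) that handles errors in both coordinates. A minimal sketch on a straight line with noise in both the voltage-like and current-like variables; the linear model is a stand-in for a real I-V curve:

```python
import numpy as np
from scipy import odr

def linear(beta, x):
    # beta[0]*x + beta[1]; stands in for the steep branch of an I-V curve
    return beta[0] * x + beta[1]

rng = np.random.default_rng(1)
x_true = np.linspace(0.0, 1.0, 50)
y_true = 2.0 * x_true + 0.5
# Noise in BOTH coordinates, as with voltage and current measurements.
x_obs = x_true + rng.normal(scale=0.01, size=x_true.size)
y_obs = y_true + rng.normal(scale=0.01, size=y_true.size)

# RealData carries the measurement uncertainties sx, sy; ODR minimizes
# the orthogonal (not vertical) distances to the model curve.
data = odr.RealData(x_obs, y_obs, sx=0.01, sy=0.01)
result = odr.ODR(data, odr.Model(linear), beta0=[1.0, 0.0]).run()
slope, intercept = result.beta
```

`result.sd_beta` additionally gives the parameter standard errors, which is the "uncertainties" output the abstract highlights.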
Earthquake Early Warning with Seismogeodesy: Detection, Location, and Magnitude Estimation
Goldberg, D.; Bock, Y.; Melgar, D.
2016-12-01
Earthquake early warning is critical to reducing injuries and casualties in case of a large magnitude earthquake. The system must rely on near-source data to minimize the time between event onset and issuance of a warning. Early warning systems typically use seismic instruments (seismometers and accelerometers), but these instruments experience difficulty maintaining reliable data in the near-source region and undergo magnitude saturation for large events. Global Navigation Satellite System (GNSS) instruments capture the long period motions and have been shown to produce robust estimates of the true size of the earthquake source. However, GNSS is often overlooked in this context in part because it is not precise enough to record the first seismic wave arrivals (P-wave detection), an important consideration for issuing an early warning. GNSS instruments are becoming integrated into early warning, but are not yet fully exploited. Our approach involves the combination of direct measurements from collocated GNSS and accelerometer stations to estimate broadband coseismic displacement and velocity waveforms [Bock et al., 2011], a method known as seismogeodesy. We present the prototype seismogeodetic early warning system developed at Scripps and demonstrate that the seismogeodetic dataset can be used for P-wave detection, hypocenter location, and shaking onset determination. We discuss uncertainties in each of these estimates and include discussion of the sensitivity of our estimates as a function of the azimuthal distribution of monitoring stations. The seismogeodetic combination has previously been shown to be immune to magnitude saturation [Crowell et al., 2013; Melgar et al., 2015]. Rapid magnitude estimation is an important product in earthquake early warning, and is the critical metric in current tsunami hazard warnings. Using the seismogeodetic approach, we refine earthquake magnitude scaling using P-wave amplitudes (Pd) and peak ground displacements (PGD) for a
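A common way to turn peak ground displacement into magnitude is a scaling law of the form log10(PGD) = A + B·M + C·M·log10(R), which can be inverted in closed form. The coefficients below are illustrative placeholders of roughly the right order, not the values fit in the cited studies:

```python
import math

# Illustrative coefficients (placeholders, NOT the published regression values)
A, B, C = -4.434, 1.047, -0.138

def pgd_from_magnitude(mag: float, dist_km: float) -> float:
    """Forward scaling law: peak ground displacement (cm) from M and distance."""
    return 10.0 ** (A + B * mag + C * mag * math.log10(dist_km))

def magnitude_from_pgd(pgd_cm: float, dist_km: float) -> float:
    """Invert the scaling law for magnitude, given PGD and hypocentral distance."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(dist_km))
```

Because displacement amplitude, rather than a band-limited velocity, enters the law, an estimator of this form does not saturate for large events, which is the property the seismogeodetic approach exploits.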
Hildyard, M.; Rietbrock, A.
2007-12-01
Considerable interest has been shown in a method for estimating the predominant period in the time domain (TpMax), first proposed by Nakamura (1988) and currently being developed for other early warning systems (e.g. Lockman and Allen, BSSA, 2005). Issues still exist as to the causes of the scatter evident in empirical work, and how effective the method is for characterising large events whose time to rupture is longer than the few seconds desired to estimate the magnitude. Our work on applying this method to an aftershock dataset motivated us to investigate the method through the use of synthetic rupture models. The rupture model we use prescribes a stress drop with a prescribed rise time over a small patch of the fault surface. This stress drop is propagated to other patches of the fault according to a prescribed rupture rate. The same finite-difference model geometry and fault patch size were then used to model events ranging from magnitude 3.7 to 7.2. Moment magnitude was calculated directly by integrating the resultant slip on the fault, and TpMax was calculated from seismograms recorded at the surface 50 km from the centre of the fault. The initial modelling used a homogeneous stress drop, rise time, and rupture rate. A dataset of 165 events showed a significantly increasing relationship between the TpMax calculation and magnitude. Isolating similar events initiating at the same point on the fault gave a near straight-line trend. Scatter in the relationship is shown to result from variations in the position, initiation point, stress drop, rise time, and rupture velocity. Low-frequency filtering was found to significantly affect the TpMax calculations and trends. Without filtering, the relationship saturated from just after magnitude 6, as the time to rupture becomes longer than the window used to calculate TpMax. However, low-frequency filtering actually reduces the time to reach a maximum in the calculation, and this can cause the increasing trend to continue into
Is physiological performance a good predictor for fitness? Insights from an invasive plant species.
Marco A Molina-Montenegro
Is physiological performance a suitable proxy for fitness in plants? Although several studies have measured fitness-related traits and physiological performance, direct assessments are seldom found in the literature. Here, we assessed the physiology-fitness relationship using second-generation individuals of the invasive plant species Taraxacum officinale from 17 localities distributed across five continents. Specifically, we tested (i) whether the maximum quantum yield is a good predictor of seed output, (ii) whether this physiology-fitness relationship can be modified by environmental heterogeneity, and (iii) whether this relationship has an adaptive consequence for T. officinale individuals from different localities. Overall, we found a significant positive relationship between the maximum quantum yield and fitness for all localities evaluated, but this relationship weakened in T. officinale individuals from localities with greater environmental heterogeneity. Finally, we found that individuals from localities where environmental conditions are highly seasonal performed better under heterogeneous environmental conditions. Conversely, under homogeneous controlled conditions, individuals from localities with low environmental seasonality performed much better. In conclusion, our results suggest that the maximum quantum yield seems to be a good predictor of plant fitness. We suggest that rapid measurements, such as those obtained from the maximum quantum yield, could provide a straightforward proxy of an individual's fitness in changing environments.
Maximum likelihood identification of aircraft stability and control derivatives
Mehra, R. K.; Stepner, D. E.; Tyler, J. S.
1974-01-01
A generalized identification method is applied to flight-test data analysis. The method is based on the maximum likelihood (ML) criterion and includes output-error and equation-error methods as special cases. Both linear and nonlinear models, with and without process noise, are considered. Flight-test data from lateral maneuvers of the HL-10 and M2/F3 lifting bodies are processed to determine the lateral stability and control derivatives, instrumentation accuracies, and biases. A comparison is made between the results of the output-error method and the ML method for M2/F3 data containing gusts. It is shown that better fits to time histories are obtained by using the ML method. The nonlinear model considered corresponds to the longitudinal equations of the X-22 VTOL aircraft. The data are obtained from a computer simulation and contain both process and measurement noise. The applicability of the ML method to nonlinear models with both process and measurement noise is demonstrated.
Quantum algorithm for data fitting.
Wiebe, Nathan; Braun, Daniel; Lloyd, Seth
2012-08-03
We provide a new quantum algorithm that efficiently determines the quality of a least-squares fit over an exponentially large data set by building upon an algorithm for solving systems of linear equations efficiently [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)]. In many cases, our algorithm can also efficiently find a concise function that approximates the data to be fitted and bound the approximation error. In cases where the input data are pure quantum states, the algorithm can be used to provide an efficient parametric estimation of the quantum state and therefore can be applied as an alternative to full quantum-state tomography given a fault tolerant quantum computer.
Fitness Doping and Body Management
Thualagant, Nicole
-based fitness centres. Based on a survey in ten Danish club-based fitness centres and on narratives from semi-structured interviews, it is highlighted that the objectives of bodywork differ according to the users’ age and gender. Two different ways of investing in the body are explored in the paper, namely...... a work on the body and a work with the body. As it is concluded, different orientations towards bodywork imply various uses of body enhancing strategies. Although the objectives of bodywork and the body investments are different, these practices seem to be based on the same quest, namely the search...
Physical fitness assessment: an update.
Wilder, Robert P; Greene, Jill Amanda; Winters, Kathryne L; Long, William B; Gubler, K; Edlich, Richard F
2006-01-01
The American College of Sports Medicine (ACSM) gives the following definition of health-related physical fitness: physical fitness is defined as a set of attributes that people have or achieve that relates to the ability to perform physical activity. It is also characterized by (1) an ability to perform daily activities with vigor, and (2) a demonstration of traits and capacities that are associated with a low risk of premature development of hypokinetic diseases (e.g., those associated with physical inactivity). Information from an individual's health and medical records can be combined with information from a physical fitness assessment to meet the specific health goals and rehabilitative needs of that individual. Obtaining adequate informed consent from participants prior to exercise testing is mandatory because of ethical and legal considerations. A physical fitness assessment includes measures of body composition, cardiorespiratory endurance, muscular fitness, and musculoskeletal flexibility. The three common techniques for assessing body composition are hydrostatic weighing, skinfold measurements, and anthropometric measurements. Cardiorespiratory endurance is a crucial component of physical fitness assessment because of its strong correlation with health and health risks. Maximal oxygen uptake (VO2max) is the traditionally accepted criterion for measuring cardiorespiratory endurance. Although maximal-effort tests must be used to measure VO2max, submaximal exercise can be used to estimate this value. Muscular fitness has historically been used to describe an individual's integrated status of muscular strength and muscular endurance. An individual's muscular strength is specific to a particular muscle or muscle group and refers to the maximal force (N or kg) that the muscle or muscle group can generate. Dynamic strength can be assessed by measuring the movement of an individual's body against an external load. Isokinetic testing may be performed by assessing
Chinese Hailed "National Fitness Program"
1995-01-01
In March 1994, Liu Ji, Vice Minister of the State Physical Culture and Sports Commission (SPCSC), announced the "National Fitness Program" on behalf of the Chinese government at the World Sports-for-All Congress in Uruguay. Almost all the participants thought it important to carry out the program in a country with a population of 1.2 billion: it not only helps improve the Chinese people's health but also the world's average standard of health. The "National Fitness Program" is an overall, century-spanning, systematic project, which is snowballing. In 1994, the SPCSC issued the One-Two-One Project of the
Decision making on fitness landscapes
Arthur, R.; Sibani, P.
2017-04-01
We discuss fitness landscapes and how they can be modified to account for co-evolution. We are interested in using the landscape as a way to model rational decision making in a toy economic system. We develop a model very similar to the Tangled Nature Model of Christensen et al. that we call the Tangled Decision Model. This is a natural setting for our discussion of co-evolutionary fitness landscapes. We use a Monte Carlo step to simulate decision making and investigate two different decision making procedures.
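The Monte Carlo decision step can be sketched as a logistic accept/reject rule over a discrete set of options. This is a generic sketch in the spirit of Tangled-Nature-style models, not the authors' exact update rule; the landscape values and the `beta` parameter are hypothetical:

```python
import math
import random

random.seed(42)

def monte_carlo_decision(fitness, current, beta=5.0):
    """One Monte Carlo decision step on a discrete fitness landscape.

    Propose a random alternative choice and accept it with a logistic
    probability in the fitness difference; beta sets how 'rational'
    (greedy) the decision maker is.
    """
    proposal = random.randrange(len(fitness))
    p_accept = 1.0 / (1.0 + math.exp(-beta * (fitness[proposal] - fitness[current])))
    return proposal if random.random() < p_accept else current

# With a high beta, repeated steps climb toward the fittest option.
landscape = [0.1, 0.5, 0.9, 0.3]
state = 0
for _ in range(200):
    state = monte_carlo_decision(landscape, state, beta=20.0)
```

At low beta the same rule produces near-random drift, which is one simple way to interpolate between rational choice and noisy exploration on the landscape.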
Decision Making on Fitness Landscapes
Arthur, Rudy; Sibani, Paolo
2017-01-01
We discuss fitness landscapes and how they can be modified to account for co-evolution. We are interested in using the landscape as a way to model rational decision making in a toy economic system. We develop a model very similar to the Tangled Nature Model of Christensen et al. that we call...... the Tangled Decision Model. This is a natural setting for our discussion of co-evolutionary fitness landscapes. We use a Monte Carlo step to simulate decision making and investigate two different decision making procedures....
Decision Making on Fitness Landscapes
Arthur, Rudy
2016-01-01
We discuss fitness landscapes and how they can be modified to account for co-evolution. We are interested in using the landscape as a way to model rational decision making in a toy economic system. We develop a model very similar to the Tangled Nature Model of Christensen et al. that we call the Tangled Decision Model. This is a natural setting for our discussion of co-evolutionary fitness landscapes. We use a Monte Carlo step to simulate decision making and investigate two different decision making procedures.
Fitness measures and health outcomes in youth
Pate, Russell R; Oria, Maria; Pillsbury, Laura
2012-01-01
.... Physical fitness testing in American youth was established on a large scale in the 1950s with an early focus on performance-related fitness that gradually gave way to an emphasis on health-related fitness...
Daily physical activity and its relation to aerobic fitness in children aged 8-11 years
Dencker, Magnus; Thorsson, Ola; Karlsson, Magnus K.
2006-01-01
Abstract A positive relationship between daily physical activity and aerobic fitness exists in adults. Studies in children have given conflicting results, possibly because of differences in the methods used to assess daily physical activity and fitness. No study regarding daily physical activity...... and fitness in children has been published where fitness has been assessed by direct measurement of maximum oxygen uptake and related to daily physical activity intensities by accelerometers. We examined 248 children (140 boys and 108 girls), aged 7.9-11.1 years. Maximum workload and maximal oxygen uptake...... (VO2PEAK) by indirect calorimetry were measured during a maximum bicycle ergometer exercise test. Exercise capacity was adjusted for body mass and (body mass)^(2/3). Daily physical activity was evaluated by accelerometers, worn around the waist for 4 days. Mean accelerometer counts and time spent...
Absolute-magnitude Calibration for W UMa-type Systems Based on Gaia Data
Mateo, Nicole M.; Rucinski, Slavek M.
2017-09-01
Tycho-Gaia Astrometric Solution (TGAS) parallax data are used to determine absolute magnitudes M_V for 318 W UMa-type (EW) contact binary stars. A very steep (slope ≃ -9), single-parameter (log P), linear calibration can be used to predict M_V to about 0.1-0.3 mag over the whole range of accessible orbital periods (P ≳ 0.22 days). Although the scatter around the linear log P fit is fairly large (0.2-0.4 mag), the current data do not support the inclusion of a B-V color term in the calibration.
Pisarenko, V F; Sornette, A; Sornette, D
2007-01-01
We develop a new method for the statistical estimation of the tail of the distribution of earthquake sizes recorded in the worldwide Harvard catalog of seismic moments converted to mW magnitudes (1977-2004 and 1977-2006). We show that using the set of maximum magnitudes (the set of T-maxima) in windows of duration T days provides a significant improvement over existing methods, in particular (i) by minimizing the negative impact of the time-clustering of foreshock/mainshock/aftershock sequences on the estimation of the tail of the magnitude distribution, and (ii) by providing, via a simulation method, reliable estimates of the biases in the moment estimation procedure (which turns out to be more efficient than maximum likelihood estimation). Using a simulation method, we have determined the optimal window size of the T-maxima to be T=500 days. We have estimated the following quantiles of the distribution of T-maxima of earthquake magnitudes for the whole period 1977-2006: Q_{0.16}(Mmax)=9.3, Q_{0.5}(Mmax)=9...
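The T-maxima construction itself is simple to sketch: bin the catalog into consecutive T-day windows and keep each window's largest magnitude. The toy catalog below is hypothetical; T = 500 days is the optimum reported in the abstract:

```python
import numpy as np

def t_maxima(times_days, magnitudes, window_days=500.0):
    """Split a catalog into consecutive windows of length T (days) and
    return the maximum magnitude in each non-empty window (the
    'T-maxima' of Pisarenko et al.)."""
    times = np.asarray(times_days, dtype=float)
    mags = np.asarray(magnitudes, dtype=float)
    bins = np.floor((times - times.min()) / window_days).astype(int)
    return np.array([mags[bins == b].max() for b in np.unique(bins)])

# Toy catalog spanning three 500-day windows.
t = np.array([10.0, 400.0, 600.0, 900.0, 1100.0])
m = np.array([5.0, 6.2, 5.5, 7.1, 6.0])
maxima = t_maxima(t, m)  # one maximum per 500-day window
```

Working with window maxima rather than all events is what suppresses the influence of clustered aftershock sequences on the tail estimate.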
UVMULTIFIT: A versatile tool for fitting astronomical radio interferometric data
Martí-Vidal, I.; Vlemmings, W. H. T.; Muller, S.; Casey, S.
2014-03-01
Context. The analysis of astronomical interferometric data is often performed on the images obtained after deconvolving the interferometer's point spread function. This strategy can be understood (especially for sparse arrays) as fitting models to models, since the deconvolved images are already non-unique model representations of the actual data (i.e., the visibilities). Indeed, the interferometric images may be affected by visibility gridding, weighting schemes (e.g., natural vs. uniform), and the particulars of the (non-linear) deconvolution algorithms. Fitting models to the direct interferometric observables (i.e., the visibilities) is preferable for simple (analytical) sky intensity distributions. Aims: We present UVMULTIFIT, a versatile library for fitting visibility data, implemented in a Python-based framework. Our software is currently based on the CASA package, but can easily be adapted to other analysis packages, provided they have a Python API. Methods: The user can simultaneously fit an arbitrary number of source components to the data, each of which can depend on any algebraic combination of fitting parameters. Fits to individual spectral-line channels or simultaneous fits to all frequency channels are allowed. Results: We have tested the software with synthetic data and with real observations. In some cases (e.g., sources with sizes smaller than the diffraction limit of the interferometer), the results from the fit to the visibilities (e.g., spectra of nearby sources) are far superior to the output obtained from the mere analysis of the deconvolved images. Conclusions: UVMULTIFIT is a powerful improvement of existing tasks to extract the maximum amount of information from visibility data, especially in cases close to the sensitivity/resolution limits of interferometric observations.
A framework for inferring fitness landscapes of patient-derived viruses using quasispecies theory.
Seifert, David; Di Giallonardo, Francesca; Metzner, Karin J; Günthard, Huldrych F; Beerenwinkel, Niko
2015-01-01
Fitness is a central quantity in evolutionary models of viruses. However, it remains difficult to determine viral fitness experimentally, and existing in vitro assays can be poor predictors of in vivo fitness of viral populations within their hosts. Next-generation sequencing can nowadays provide snapshots of evolving virus populations, and these data offer new opportunities for inferring viral fitness. Using the equilibrium distribution of the quasispecies model, an established model of intrahost viral evolution, we linked fitness parameters to the composition of the virus population, which can be estimated by next-generation sequencing. For inference, we developed a Bayesian Markov chain Monte Carlo method to sample from the posterior distribution of fitness values. The sampler can overcome situations where no maximum-likelihood estimator exists, and it can adaptively learn the posterior distribution of highly correlated fitness landscapes without prior knowledge of their shape. We tested our approach on simulated data and applied it to clinical human immunodeficiency virus 1 samples to estimate their fitness landscapes in vivo. The posterior fitness distributions allowed for differentiating viral haplotypes from each other, for determining neutral haplotype networks, in which no haplotype is more or less credibly fit than any other, and for detecting epistasis in fitness landscapes. Our implemented approach, called QuasiFit, is available at http://www.cbg.ethz.ch/software/quasifit.
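The key link used above — the equilibrium of the quasispecies model — can be sketched numerically: at mutation-selection balance the haplotype frequencies are the leading eigenvector of the mutation-selection matrix. The fitness values and uniform mutation rate below are hypothetical:

```python
import numpy as np

# Quasispecies sketch: equilibrium frequencies are the leading (Perron)
# eigenvector of W = Q @ diag(f), where Q[i, j] is the probability of
# mutating from haplotype j to haplotype i and f holds the (hypothetical)
# haplotype fitness values.
f = np.array([1.0, 0.6, 0.4])
mu = 0.05                        # total per-generation mutation probability
Q = np.full((3, 3), mu / 2)      # uniform mutation to the other haplotypes
np.fill_diagonal(Q, 1.0 - mu)    # columns of Q sum to 1

W = Q @ np.diag(f)
eigvals, eigvecs = np.linalg.eig(W)
lead = np.argmax(eigvals.real)
equilibrium = np.abs(eigvecs[:, lead].real)
equilibrium /= equilibrium.sum()  # normalize to frequencies
```

The inference problem the abstract describes runs in the opposite direction: given frequencies observed by sequencing, sample fitness vectors `f` consistent with this equilibrium via MCMC.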
Suitability of rapid energy magnitude determinations for emergency response purposes
Di Giacomo, Domenico; Parolai, Stefano; Bormann, Peter; Grosser, Helmut; Saul, Joachim; Wang, Rongjiang; Zschau, Jochen
2010-01-01
It is common practice in the seismological community to use, especially for large earthquakes, the moment magnitude Mw as the single magnitude parameter for evaluating an earthquake's damage potential. However, as a static measure of earthquake size, Mw does not provide direct information about the released seismic wave energy and its high-frequency content, which is the more relevant information both for engineering purposes and for a rapid assessment of the earthquake's shaking potential. Therefore, we recommend providing disaster management organizations, besides Mw, with sufficiently accurate energy magnitude determinations as soon as possible after large earthquakes. We developed and extensively tested a rapid method for calculating the energy magnitude Me within about 10-15 min of an earthquake's occurrence. The method is based on pre-calculated spectral amplitude decay functions obtained from numerical simulations of Green's functions. After empirical validation, the procedure was applied offline to a large data set of 767 shallow earthquakes grouped according to their type of mechanism (strike-slip, normal faulting, thrust faulting, etc.). The suitability of the proposed approach is discussed by comparing our rapid Me estimates with Mw published by GCMT as well as with Mw and Me reported by the USGS. Mw is on average slightly larger than our Me for all types of mechanisms. No clear dependence on source mechanism is observed for our Me estimates. In contrast, Me from the USGS is generally larger than Mw for strike-slip earthquakes and generally smaller for the other source types. For ~67 per cent of the event data set our Me differs events. A reason for that may be the overcorrection of the energy flux applied by the USGS for this type of earthquakes. We follow the original definition of magnitude scales, which does not apply a priori mechanism corrections to measured amplitudes, also since reliable fault-plane solutions are hardly
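For reference, an energy magnitude of the Choy-Boatwright type is a direct function of the radiated seismic energy. The standard-form relation below (Me = 2/3·(log10 Es − 4.4), with Es in joules) is the textbook definition; the authors' specific rapid calibration may differ:

```python
import math

def energy_magnitude(es_joules: float) -> float:
    """Energy magnitude from radiated seismic energy Es (joules),
    using the standard Choy-Boatwright-type relation
    Me = (2/3) * (log10(Es) - 4.4)."""
    return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)
```

Because Es weights the high-frequency part of the source spectrum, Me and Mw can legitimately differ for the same event, which is exactly the information the abstract argues should reach disaster managers.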
Behavior of induced microseismic events with large magnitude
Asanuma, H.; Nozaki, H.; Niitsuma, H.; Wyborn, D.; Baria, R.
2006-12-01
Hydraulic stimulation of geothermal and oil/gas reservoirs is one of the conventional techniques used for enhancing productivity from reservoirs. In most cases, the stimulation process induces microseismic events. Based on the activity, location, magnitude and source mechanism of such events, 3D localization and characterization of the reservoir can be carried out with practical resolution. Typically, microseismic events from a reservoir have moment magnitudes of less than zero, and most of them are detectable only by downhole sensors with high sensitivity. However, it is known that some of these events have higher magnitudes and can be felt at the surface. These large events can be hazardous from an environmental point of view, while at the same time resulting in enhanced permeability in the reservoir. The authors have analyzed the spatio-temporal distribution and source mechanisms of such microseismic events with large magnitudes (big events) observed during the hydraulic stimulations at the Australian hot fractured rock (HFR) site in the Cooper Basin (Asanuma et al., SEG Exp. Abst., 2004) and at the European hot dry rock (HDR) site in Soultz, France (Asanuma et al., Trans. GRC, 2004). A comparison between the origin times of these big events and the hydraulic records showed that many of the big events occurred after shut-in at both sites. Moreover, during pumping, most of these events did not show a clear correlation to the wellhead pressure or the pumping rate. In most cases, the source mechanisms of the big events were consistent with shear slip on a preexisting fracture. We also found that some of the big events at the Australian site produced a very clear extension of the seismic cloud into zones that were seismically silent before, suggesting that some kind of hydraulic barrier was overcome by these big events. The observational data also showed that the microseismic events at those sites originated mainly from a slip of asperities
An Integrated Modeling Framework for Probable Maximum Precipitation and Flood
Gangrade, S.; Rastogi, D.; Kao, S. C.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.
2015-12-01
With the increasing frequency and magnitude of extreme precipitation and flood events projected in the future climate, there is a strong need to enhance our modeling capabilities to assess the potential risks to critical energy-water infrastructures such as major dams and nuclear power plants. In this study, an integrated modeling framework is developed through high performance computing to investigate the climate change effects on probable maximum precipitation (PMP) and probable maximum flood (PMF). Multiple historical storms from 1981-2012 over the Alabama-Coosa-Tallapoosa River Basin near the Atlanta metropolitan area are simulated by the Weather Research and Forecasting (WRF) model using the Climate Forecast System Reanalysis (CFSR) forcings. After further WRF model tuning, these storms are used to simulate PMP through moisture maximization at the initial and lateral boundaries. A high resolution hydrological model, the Distributed Hydrology-Soil-Vegetation Model, implemented at 90 m resolution and calibrated against U.S. Geological Survey streamflow observations, is then used to simulate the corresponding PMF. In addition to the control simulation that is driven by CFSR, multiple storms from the Community Climate System Model version 4 under the Representative Concentration Pathway 8.5 emission scenario are used to simulate PMP and PMF under projected future climate conditions. The multiple PMF scenarios developed through this integrated modeling framework may be utilized to evaluate the vulnerability of existing energy-water infrastructures with respect to various aspects associated with PMP and PMF.
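Moisture maximization, the core of the PMP step described above, is classically just a precipitable-water ratio applied to an observed storm. This is a schematic sketch of the standard WMO-style procedure, not the WRF boundary-condition implementation used in the study; the numbers are hypothetical:

```python
def moisture_maximized_precip(storm_precip_mm: float,
                              storm_pw_mm: float,
                              max_pw_mm: float) -> float:
    """Classical moisture maximization: scale an observed storm's
    precipitation by the ratio of the climatological maximum precipitable
    water to the precipitable water available during the storm."""
    return storm_precip_mm * (max_pw_mm / storm_pw_mm)
```

In the modeling framework above, the same idea is applied dynamically: the storm is re-simulated with near-saturated moisture at the initial and lateral boundaries rather than scaled by a simple ratio.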
Walking Shoes: Features and Fit
... be snug, not tight. If you're a woman with wide feet, consider men's or boys' shoes, which are cut a bit larger through the heel and the ball of the foot. Walk in the shoes before buying them. They should feel comfortable right away. Make sure your heel fits snugly in ...
Self-reported cardiorespiratory fitness
Holtermann, Andreas; Marott, Jacob Louis; Gyntelberg, Finn;
2015-01-01
BACKGROUND: The predictive value and improved risk classification of self-reported cardiorespiratory fitness (SRCF), when added to traditional risk factors on cardiovascular disease (CVD) and longevity, are unknown. METHODS AND RESULTS: A total of 3843 males and 5093 females from the Copenhagen...
Inclusive fitness theory and eusociality
Abbot, P.; Wrangham, R.; Abe, J.
2011-01-01
Arising from M. A. Nowak, C. E. Tarnita & E. O. Wilson, Nature 466, 1057-1062 (2010); Nowak et al. reply. Nowak et al. argue that inclusive fitness theory has been of little value in explaining the natural world, and that it has led to negligible progress in explaining the evolution of eusociality. However...
Preparation of Police Fitness Instructors.
Collingwood, Thomas R.; And Others
1979-01-01
Concern about the declining level of physical fitness of police officers has led the Bureau of Training of the Kentucky Department of Justice and the Department of Physical Education at Eastern Kentucky University to implement a training course for police instructors. (LH)
LLNL Calibration Program: Data Collection, Ground Truth Validation, and Regional Coda Magnitude
Myers, S C; Mayeda, K; Walter, C; Schultz, C; O' Boyle, J; Hofstetter, A; Rodgers, A; Ruppert, S
2001-08-28
Lawrence Livermore National Laboratory (LLNL) integrates and collects data for use in calibration of seismic detection, location, and identification. Calibration data are collected by (1) numerous seismic field efforts, many conducted under NNSA (ROA) and DTRA (PRDA) contracts, and (2) permanent seismic stations that are operated by national and international organizations. Local-network operators and international organizations (e.g. the International Seismic Center) provide location and other source characterization (collectively referred to as source parameters) to LLNL, or LLNL determines these parameters from raw data. For each seismic event, LLNL rigorously characterizes the uncertainty of the source parameters. This validation process is used to identify events whose source parameters are accurate enough for use in calibration. LLNL has developed criteria for determining the accuracy of seismic locations and methods to characterize the covariance of calibration datasets. Although the most desirable calibration events are chemical and nuclear explosions with highly accurate locations and origin times, catalogues of naturally occurring earthquakes offer needed geographic coverage that is not provided by man-made sources. The issue in using seismically determined locations for calibration is validating the location accuracy. Sweeney (1998) presented a 50/90 teleseismic network-coverage criterion (50 defining phases and 90° maximum azimuthal gap) that generally results in 15-km maximum epicenter error. We have also conducted tests of recently proposed local/regional criteria and found that 10-km accuracy can be achieved by applying a 20/90 criterion. We continue to conduct tests that may validate less stringent criteria (which would produce more calibration events) while maintaining desirable location accuracy. Lastly, we examine methods of characterizing the covariance structure of calibration datasets. Each dataset is likely to be affected by distinct error
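The 90° part of the 50/90 and 20/90 criteria is a maximum azimuthal gap, which is easy to compute from the station azimuths as seen from the epicenter (a minimal sketch; azimuths in degrees):

```python
import numpy as np

def max_azimuthal_gap(azimuths_deg):
    """Largest gap (degrees) between consecutive station azimuths around
    the epicenter; the 50/90 criterion requires this to be <= 90."""
    az = np.sort(np.mod(azimuths_deg, 360.0))
    # Close the circle by appending the first azimuth plus 360.
    gaps = np.diff(np.concatenate([az, [az[0] + 360.0]]))
    return gaps.max()

# Four stations every 90 degrees give a gap of exactly 90 (passing);
# three stations clustered to one side give a large, failing gap.
```

A small maximum gap guarantees that the event is surrounded by recording stations, which is what bounds the epicenter error in the criteria above.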
LLNL Calibration Program: Data Collection, Ground Truth Validation, and Regional Coda Magnitude
Myers, S C; Mayeda, K; Walter, C; Schultz, C; O' Boyle, J; Hofstetter, A; Rodgers, A; Ruppert, S
2001-08-28
Lawrence Livermore National Laboratory (LLNL) integrates and collects data for use in calibration of seismic detection, location, and identification. Calibration data are collected from (1) numerous seismic field efforts, many conducted under NNSA (ROA) and DTRA (PRDA) contracts, and (2) permanent seismic stations operated by national and international organizations. Local-network operators and international organizations (e.g. International Seismic Center) provide location and other source characterization (collectively referred to as source parameters) to LLNL, or LLNL determines these parameters from raw data. For each seismic event, LLNL rigorously characterizes the uncertainty of the source parameters. This validation process is used to identify events whose source parameters are accurate enough for use in calibration. LLNL has developed criteria for determining the accuracy of seismic locations and methods to characterize the covariance of calibration datasets. Although the most desirable calibration events are chemical and nuclear explosions with highly accurate locations and origin times, catalogues of naturally occurring earthquakes offer needed geographic coverage that is not provided by man-made sources. The issue in using seismically determined locations for calibration is validating the location accuracy. Sweeney (1998) presented a 50/90 teleseismic network-coverage criterion (50 defining phases and 90° maximum azimuthal gap) that generally results in a maximum epicenter error of 15 km. We have also conducted tests of recently proposed local/regional criteria and found that 10-km accuracy can be achieved by applying a 20/90 criterion. We continue to conduct tests that may validate less stringent criteria (which would produce more calibration events) while maintaining the desired location accuracy. Lastly, we examine methods of characterizing the covariance structure of calibration datasets. Each dataset is likely to be affected by distinct error
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. For that reason a general relation has been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. This relation rests on an adopted fixed mass-to-light ratio as a function of colour, which is consistent with results from population synthesis models and whose absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCLs) can reduce short-circuit currents in electrical power systems. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical-current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on these sample results, the total length of CC needed in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A central problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets (rooted binary trees on three leaves) or quartets (unrooted binary trees on four leaves). We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D, distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Gitte Keidser
2016-04-01
A self-contained, self-fitting hearing aid (SFHA) is a device that enables the user to perform both the threshold measurements leading to a prescribed hearing aid setting and the subsequent fine-tuning, without the need for audiological support or access to other equipment. The SFHA has been proposed as a potential solution to unmet hearing health-care needs in developing countries and in remote locations in the developed world, and is considered a means to lower cost and increase uptake of hearing aids in developed countries. This article reviews the status of the SFHA and the evidence for its feasibility and challenges, and predicts where it is heading. Devices that can be considered partly or fully self-fitting without audiological support were identified in the direct-to-consumer market. None of these devices is considered self-contained, as they require access to other hardware such as a proprietary interface, computer, smartphone, or tablet for manipulation. While there is evidence that self-administered fitting processes can provide valid and reliable results, their success relies on user-friendly device designs and interfaces and easy-to-interpret instructions. Until these issues have been sufficiently addressed, optional assistance with the self-fitting process and with on-going use of SFHAs is recommended. Affordability and a sustainable delivery system remain additional challenges for the SFHA in developing countries. Future predictions include a growth in self-fitting products, with most future SFHAs consisting of earpieces that connect wirelessly with a smartphone, providers offering assistance through a telehealth infrastructure, and the integration of SFHAs into the traditional hearing health-care model.
The UV-optical Galaxy Color-Magnitude Diagram I: Basic Properties
Wyder, Ted K; Schiminovich, David; Seibert, Mark; Budavari, Tamas; Treyer, Marie A; Barlow, Tom A; Forster, Karl; Friedman, Peter G; Morrissey, Patrick; Neff, Susan G; Small, Todd; Bianchi, Luciana; Donas, Jose; Heckman, Timothy M; Lee, Young-Wook; Madore, Barry F; Milliard, Bruno; Rich, R Michael; Szalay, Alexander S; Welsh, Barry Y; Yi, Sukyoung K
2007-01-01
We have analyzed the bivariate distribution of galaxies as a function of ultraviolet-optical colors and absolute magnitudes in the local universe. The sample consists of galaxies with redshifts and optical photometry from the Sloan Digital Sky Survey (SDSS) main galaxy sample matched with detections in the near-ultraviolet (NUV) and far-ultraviolet (FUV) bands in the Medium Imaging Survey being carried out by the Galaxy Evolution Explorer (GALEX) satellite. In the (NUV-r)_{0.1} vs. M_{r,0.1} galaxy color-magnitude diagram, the galaxies separate into two well-defined blue and red sequences. The (NUV-r)_{0.1} color distribution at each M_{r,0.1} is not well fit by the sum of two Gaussians due to an excess of galaxies in between the two sequences. The peaks of both sequences become redder with increasing luminosity with a distinct blue peak visible up to M_{r,0.1}\\sim-23. The r_{0.1}-band luminosity functions vary systematically with color, with the faint end slope and characteristic luminosity gradually increas...
The stay/switch model describes choice among magnitudes of reinforcers.
MacDonall, James S
2008-06-01
The stay/switch model is an alternative to the generalized matching law for describing choice in concurrent procedures. The purpose of the present experiment was to extend this model to choice among magnitudes of reinforcers. Rats were exposed to conditions in which the magnitude of reinforcers (number of food pellets) varied for staying at alternative 1, switching from alternative 1, staying at alternative 2 and switching from alternative 2. A changeover delay was not used. The results showed that the stay/switch model provided a good account of the data overall, and deviations from fits of the generalized matching law to response allocation data were in the direction predicted by the stay/switch model. In addition, comparisons among specific conditions suggested that varying the ratio of obtained reinforcers, as in the generalized matching law, was not necessary to change the response and time allocations. Other comparisons suggested that varying the ratio of obtained reinforcers was not sufficient to change response allocation. Taken together these results provide additional support for the stay/switch model of concurrent choice.
The estimated magnitude of AIDS in Brazil: a delay correction applied to cases with lost dates
Barbosa Maria Tereza S.
2002-01-01
The number of HIV-infected people is an important measure of the magnitude of the AIDS epidemic in Brazil and allows for comparison with epidemic patterns in other countries. This quantity can be estimated from the number of reported AIDS cases, which in turn needs to be corrected for the distribution of reporting delays and for under-recording of cases. These distributions are unknown and must also be estimated from the recorded dates, which were missing from the Brazilian National AIDS registry. This paper estimates the number of AIDS cases diagnosed by imputing the lost information based on an estimate of the pattern in registration delay until 1996. We first fitted a non-stationary bivariate Poisson regression model to estimate the pattern in reporting delay. In subsequent steps these models were applied to impute new data, thus replacing the missing information, and to estimate the magnitude of the AIDS epidemic in the country. Model estimates ranged from 36,000 to 50,000 AIDS cases diagnosed in Brazil and still unreported. The epidemic was therefore 20 to 30% larger than was known from the information available as of February 1999. To be useful to health policy-makers, the surveillance system based on officially reported AIDS cases must be continuously improved.
How are number words mapped to approximate magnitudes?
Sullivan, Jessica; Barner, David
2013-01-01
How do we map number words to the magnitudes they represent? While much is known about the developmental trajectory of number word learning, the acquisition of the counting routine, and the academic correlates of estimation ability, previous studies have yet to describe the mechanisms that link number words to nonverbal representations of number. We investigated two mechanisms: associative mapping and structure mapping. Four dot array estimation tasks found that adults' ability to match a number word to one of two discriminably different sets declined as a function of set size and that participants' estimates of relatively large, but not small, set sizes were influenced by misleading feedback during an estimation task. We propose that subjects employ structure mappings for linking relatively large number words to set sizes, but rely chiefly on item-by-item associative mappings for smaller sets. These results indicate that both inference and association play important roles in mapping number words to approximate magnitudes.
EARTHQUAKE-INDUCED DEFORMATION STRUCTURES AND RELATED TO EARTHQUAKE MAGNITUDES
Savaş TOPAL
2003-02-01
Earthquake-induced deformation structures, called seismites, may be helpful in reconstructing the paleoseismic history of a location and in estimating the magnitudes of potential future earthquakes. In this paper, seismites were investigated according to the types formed in deep and shallow lake sediments. Seismites are observed in the form of sand dikes, introduced and fractured gravels, and pillow structures in shallow lakes, and as pseudonodules, mushroom-like silts protruding into laminites, mixed layers, disturbed varved lamination, and loop bedding in deep lake sediments. Drawing on previous studies, these earthquake-induced deformation structures were ordered according to their mode of formation and the associated earthquake magnitudes. In this ordering, the structure recording the lowest earthquake magnitude is loop bedding, and the highest is introduced and fractured gravels in lacustrine deposits.
Sensori-motor spatial training of number magnitude representation.
Fischer, Ursula; Moeller, Korbinian; Bientzle, Martina; Cress, Ulrike; Nuerk, Hans-Christoph
2011-02-01
An adequately developed spatial representation of number magnitude is associated with children's general arithmetic achievement. Therefore, a new spatial-numerical training program for kindergarten children was developed in which presentation and response were associated with a congruent spatial numerical representation. In particular, children responded by a full-body spatial movement on a digital dance mat in a magnitude comparison task. This spatial-numerical training was more effective than a non-spatial control training in enhancing children's performance on a number line estimation task and a subtest of a standardized mathematical achievement battery (TEDI-MATH). A mediation analysis suggested that these improvements were driven by an improvement of children's mental number line representation and not only by unspecific factors such as attention or motivation. These results suggest a benefit of spatial numerical associations. Rather than being a merely associated covariate, they work as an independently manipulated variable which is functional for numerical development.
Resistance to change as a function of concurrent reinforcer magnitude.
Rau, J C; Pickering, L D; McLean, A P
1996-12-01
Six pigeons responded on two keys in each of three signalled multiple-schedule components, and resistance to disruption of responding on one (target) key by extinction and by response-independent food presented during inter-component blackouts was studied. Alternative reinforcement of different magnitudes was contingent on pecking a non-target key in two components, and in the third only the target response was reinforced. Resistance to change varied with the overall quantity of reinforcement in the component, regardless of whether reinforcers were contingent on the target or non-target response, but did not differ across the two key locations. These results using different magnitudes of reinforcement confirm previous findings using rate of reinforcement as the variable, and suggest that resistance to change is dependent on stimulus-reinforcer rather than response-reinforcer contingencies.
Estimation of continuous object distributions from limited Fourier magnitude measurements
Byrne, Charles L.; Fiddy, Michael A.
1987-01-01
From finite complex spectral data one can construct a continuous object with a given support that is consistent with the data. Given Fourier magnitude data only, one can choose the phases arbitrarily in the above construction. The energy in the extrapolated spectrum is phase-dependent and provides a cost function to be used in phase retrieval. The minimization process is performed iteratively, using an algorithm that can be viewed as a combination of Gerchberg-Papoulis and Fienup error reduction.
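The alternating iteration described here (impose the measured Fourier magnitudes, then the known object support) can be sketched in a few lines of NumPy. This is a minimal 1-D illustration under stated assumptions: the toy object, support mask, and iteration count are invented for the sketch, not values from the paper.

```python
import numpy as np

def error_reduction(fourier_mag, support, n_iter=200, seed=0):
    """Gerchberg-Papoulis / Fienup-style error reduction:
    alternate between the Fourier-magnitude constraint and
    the object-support constraint."""
    rng = np.random.default_rng(seed)
    # the starting phases may be chosen arbitrarily, as the abstract notes
    phase = rng.uniform(0.0, 2.0 * np.pi, fourier_mag.shape)
    g = np.fft.ifft(fourier_mag * np.exp(1j * phase)).real
    for _ in range(n_iter):
        G = np.fft.fft(g)
        # Fourier-domain step: keep the current phases, impose the data
        G = fourier_mag * np.exp(1j * np.angle(G))
        g = np.fft.ifft(G).real
        # object-domain step: zero everything outside the known support
        g = np.where(support, g, 0.0)
    return g

# toy object with a known support
x = np.zeros(64)
x[10:20] = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
recovered = error_reduction(np.abs(np.fft.fft(x)), x != 0)
```

The magnitude-domain residual this iteration drives down is precisely the phase-dependent cost function the abstract describes; note that error reduction can stall in local minima, which is why phase retrieval in practice uses further refinements.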
Magnitude and influencing factors of parasomnia in schoolchildren
Choudhury Habibur Rasul; Khan Golam Mostafa; Nitya Nanda Baruri; Jakia Sultana
2013-01-01
Background Parasomnias are undesirable events occurring in the sleep-wake transition period. Several predisposing factors are reported to induce parasomnia in preschool children. Objective To estimate the magnitude of parasomnia in school children and to evaluate its relationship with possible predisposing factors. Methods Five hundred children aged 5-16 years from a boys’ school and a girls’ school in Khulna City, Bangladesh, were randomly selected for the study conducted from July t...
The Absolute Magnitudes of Type Ia Supernovae in the Ultraviolet
Brown, Peter J.; Roming, Peter W. A.; Milne, Peter; Bufano, Filomena; Ciardullo, Robin; Elias-Rosa, Nancy; Filippenko, Alexei V.; Foley, Ryan J.; Gehrels, Neil; Gronwall, Caryl; Hicken, Malcolm; Holland, Stephen T.; Hoversten, Erik A.; Immler, Stefan; Kirshner, Robert P.; Li, Weidong; Mazzali, Paolo; Phillips, Mark M.; Pritchard, Tyler; Still, Martin; Turatto, Massimo; Vanden Berk, Daniel
2010-10-01
We examine the absolute magnitudes and light-curve shapes of 14 nearby (redshift z = 0.004-0.027) Type Ia supernovae (SNe Ia) observed in the ultraviolet (UV) with the Swift Ultraviolet/Optical Telescope. Colors and absolute magnitudes are calculated using both a standard Milky Way extinction law and one for the Large Magellanic Cloud that has been modified by circumstellar scattering. We find very different behavior in the near-UV filters (uvw1_rc covering ~2600-3300 Å after removing optical light, and u ≈ 3000-4000 Å) compared to a mid-UV filter (uvm2 ≈ 2000-2400 Å). The uvw1_rc - b colors show a scatter of ~0.3 mag while uvm2 - b scatters by nearly 0.9 mag. Similarly, while the scatter in colors between neighboring filters is small in the optical and somewhat larger in the near-UV, the large scatter in the uvm2 - uvw1 colors implies significantly larger spectral variability below 2600 Å. We find that in the near-UV the absolute magnitudes at peak brightness of normal SNe Ia in our sample are correlated with the optical decay rate with a scatter of 0.4 mag, comparable to that found for the optical in our sample. However, in the mid-UV the scatter is larger, ~1 mag, possibly indicating differences in metallicity. We find no strong correlation between either the UV light-curve shapes or the UV colors and the UV absolute magnitudes. With larger samples, the UV luminosity might be useful as an additional constraint to help determine distance, extinction, and metallicity in order to improve the utility of SNe Ia as standardized candles.
Modeling the Color Magnitude Relation for Galaxy Clusters
Jimenez, Noelia; Castelli, Analia Smith; Bassino, Lilia P
2011-01-01
We investigate the origin of the colour-magnitude relation (CMR) observed in cluster galaxies by using a combination of a cosmological N-body simulation of a cluster of galaxies and a semi-analytic model of galaxy formation. The departure of galaxies at the bright end of the CMR from the trend traced by less luminous galaxies could be explained by the influence of minor mergers.
Understanding the timing and magnitude of advertising spending patterns.
Gijsenberg, Maarten; van Heerde, Harald J.; Dekimpe, Marnik; Jan-Benedict E M Steenkamp; Nijs, Vincent R.
2009-01-01
Notwithstanding the fact that advertising is one of the most used marketing tools, little is known about what is driving (i) the timing and (ii) the magnitude of advertising actions. Building on normative theory, the authors develop a parsimonious model that captures this dual investment process. They explain advertising spending patterns as observed in the market, and investigate the impact of company, competitive, and category-related factors on these decisions, thereby introducing the nove...
Age influences magnitude but not duration of response to levodopa.
Durso, R; Isaac, K; Perry, L; Saint-Hilaire, M; Feldman, R G
1993-01-01
Following an all-night fast, 45 patients with Parkinson's disease were examined using certain motor items present in the United Parkinson's Disease Rating Scale. All were given a single tablet of carbidopa 25 mg and levodopa 250 mg and re-examined 90 minutes later. In addition to this evaluation, 23 of these patients underwent further scoring over a 4-hour period. A significant negative correlation was found between age and one important aspect of drug-derived benefit: magnitude of response. ...
Spatial patterns of landslide dimension: A tool for magnitude mapping
Catani, Filippo; Tofani, Veronica; Lagomarsino, Daniela
2016-11-01
The magnitude of mass movements, which may be expressed by their dimension in terms of area or volume, is an important component of intensity together with velocity. In the case of slow-moving deep-seated landslides, the expected magnitude is the prevalent parameter for defining intensity when assessed as a spatially distributed variable in a given area. In particular, the frequency-volume statistics of past landslides may be used to understand and predict the magnitude of new landslides and reactivations. In this paper we study the spatial properties of volume frequency distributions in the Arno river basin (Central Italy, about 9100 km2). The overall landslide inventory taken into account (around 27,500 events) shows a power-law scaling of volumes for values greater than a cutoff value of about 2 × 104 m3. We explore the variability of the power-law exponent in the geographic space by setting up local subsets of the inventory based on neighbourhoods with radii between 5 and 50 km. We found that the power-law exponent α varies according to geographic position and that the exponent itself can be treated as a random space variable with autocorrelation properties both at local and regional scale. We use this finding to devise a simple method to map the magnitude frequency distribution in space and to create maps of exceeding probability of landslide volume for risk analysis. We also study the causes of spatial variation of α by analysing the dependence of power-law properties on geological and geomorphological factors, and we find that structural settings and valley density exert a strong influence on mass movement dimensions.
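The power-law tail fit underlying this kind of analysis can be sketched with the standard maximum-likelihood (Hill-type) estimator for a continuous power law above a cutoff. The cutoff value below echoes the ~2 × 10^4 m3 quoted in the abstract; the sample size and the "true" exponent are illustrative assumptions for a synthetic check.

```python
import numpy as np

def powerlaw_alpha_mle(volumes, v_min):
    """ML estimate of alpha for a continuous power-law tail
    p(v) proportional to v**(-alpha), v >= v_min."""
    v = np.asarray(volumes, dtype=float)
    tail = v[v >= v_min]
    return 1.0 + tail.size / np.log(tail / v_min).sum()

# synthetic catalogue drawn from a known power law via inverse-CDF sampling
rng = np.random.default_rng(42)
alpha_true, v_min = 2.2, 2e4          # cutoff ~2e4 m^3, as in the abstract
u = rng.uniform(size=50_000)
volumes = v_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
alpha_hat = powerlaw_alpha_mle(volumes, v_min)
```

Mapping the exponent in space, as the paper does, then amounts to restricting `volumes` to the events inside each moving neighbourhood (e.g. the 5-50 km radii) before calling the estimator.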
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework for studying the MMF problem in multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear program to compute the maximum throughput and show its superiority over the throughput achievable in networks without coding. Finally, the MMF problem under network coding is shown to be NP-hard, and a polynomial-time approximation algorithm is proposed.
Typical magnitude and spatial extent of crowding in autism.
Freyberg, Jan; Robertson, Caroline E; Baron-Cohen, Simon
2016-01-01
Enhanced spatial processing of local visual details has been reported in individuals with autism spectrum conditions (ASC), and crowding is postulated to be a mechanism that may produce this ability. However, evidence for atypical crowding in ASC is mixed, with some studies reporting a complete lack of crowding in autism and others reporting a typical magnitude of crowding between individuals with and without ASC. Here, we aim to disambiguate these conflicting results by testing both the magnitude and the spatial extent of crowding in individuals with ASC (N = 25) and age- and IQ-matched controls (N = 23) during an orientation discrimination task. We find a strong crowding effect in individuals with and without ASC, which falls off as the distance between target and flanker is increased. Both the magnitude and the spatial range of this effect were comparable between individuals with and without ASC. We also find typical (uncrowded) orientation discrimination thresholds in individuals with ASC. These findings suggest that the spatial extent of crowding is unremarkable in ASC, and is therefore unlikely to account for the visual symptoms reported in individuals with the diagnosis.
Relating Perturbation Magnitude to Temporal Gene Expression in Biological Systems
Callister, Stephen J.; Parnell, John J.; Pfrender, Michael E.; Hashsham, Syed
2009-03-19
A method to quantitatively relate stress to response at the level of gene expression is described, using Saccharomyces cerevisiae as a model organism. Stress was defined as the magnitude of perturbation and strain as the magnitude of cumulative response in terms of gene expression. Expression patterns of sixty genes previously reported to be significantly impacted by osmotic shock, or belonging to the high-osmolarity glycerol, glycerolipid metabolism, and glycolysis pathways, were determined following perturbations of increasing sodium chloride concentration (0, 0.5, 0.7, 1.0, 1.5, and 1.4 M). Expression of these genes was quantified temporally using reverse-transcription real-time polymerase chain reaction. The magnitude of cumulative response was obtained by calculating the total moment of area of the temporal response envelope for all 60 genes, either together or for the set of genes related to each pathway. A non-linear relationship between stress and response was observed over the range of stress studied. This study examines a quantitative approach for relating stress to strain in biological systems at the level of gene expression. The approach should be generally applicable for quantitatively evaluating the response of organisms to environmental change.
Magnitude Uncertainties Impact Seismic Rate Estimates, Forecasts and Predictability Experiments
Werner, M J
2007-01-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) aims to prospectively test time-dependent earthquake probability forecasts for their consistency with observations. To compete, time-dependent seismicity models are calibrated on earthquake catalog data, but catalogs contain considerable observational uncertainty. We study the impact of magnitude uncertainties on rate estimates in clustering models, on their forecasts, and on their evaluation by CSEP's consistency tests. First, we quantify magnitude uncertainties. We find that magnitude uncertainty is more heavy-tailed than Gaussian and is better described by a double-sided exponential distribution with scale parameter nu_c = 0.1 - 0.3. Second, we study the impact of such noise on the forecasts of a simple clustering model that captures the main ingredients of popular short-term models. We prove that the deviations of noisy forecasts from an exact forecast are power-law distributed in the tail with exponent alpha = 1/(a*nu_c), where a is the exponent of the productivity...
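The basic mechanism can be illustrated numerically: perturb a synthetic Gutenberg-Richter catalogue with double-sided exponential (Laplace) noise and count events above a completeness threshold. The catalogue size, b-value, and threshold below are assumptions for the sketch; the noise scale nu_c = 0.2 sits inside the 0.1-0.3 range quoted in the abstract.

```python
import math
import random

def laplace_noise(scale, rng):
    """Double-sided exponential (Laplace) sample: a random sign
    times an exponential deviate with the given scale."""
    mag = -scale * math.log(1.0 - rng.random())
    return mag if rng.random() < 0.5 else -mag

rng = random.Random(3)
b, m_min, nu_c = 1.0, 2.0, 0.2
beta = b * math.log(10.0)
# Gutenberg-Richter magnitudes are exponential above the completeness level
true_mags = [m_min - math.log(1.0 - rng.random()) / beta
             for _ in range(100_000)]
noisy_mags = [m + laplace_noise(nu_c, rng) for m in true_mags]

threshold = 3.0
n_true = sum(m >= threshold for m in true_mags)
n_noisy = sum(m >= threshold for m in noisy_mags)
```

Because the magnitude distribution decays exponentially, symmetric noise pushes more small events above any threshold than it pushes large events below it, so `n_noisy` systematically exceeds `n_true`; this rate inflation is one way catalog noise propagates into the forecasts studied in the paper.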
The Road to Convergence in Earthquake Frequency-Magnitude Statistics
Naylor, M.; Bell, A. F.; Main, I. G.
2013-12-01
The Gutenberg-Richter frequency-magnitude relation is a fundamental empirical law of seismology, but its form remains uncertain for rare extreme events. Convergence trends can be diagnostic of the nature of an underlying distribution and of its sampling, even before convergence has occurred. We examine the evolution of an information-criterion metric applied to earthquake magnitude time series, in order to test whether the Gutenberg-Richter law can be rejected in various earthquake catalogues. Rejection would imply that the catalogue is starting to sample roll-off in the tail, though it cannot yet identify the form of the roll-off. We compare bootstrapped synthetic Gutenberg-Richter and synthetic modified Gutenberg-Richter catalogues with the convergence trends observed in real earthquake data, e.g. the global CMT catalogue, Southern California, and mining/geothermal datasets. While convergence in the tail remains some way off, we show that the temporal evolution of model likelihoods and parameters for the frequency-magnitude distribution of the global Harvard Centroid Moment Tensor catalogue is inconsistent with an unbounded GR relation, despite it being the preferred model at the current time. Bell, A. F., M. Naylor, and I. G. Main (2013), Convergence of the frequency-size distribution of global earthquakes, Geophys. Res. Lett., 40, 2585-2589, doi:10.1002/grl.50416.
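For reference, the maximum-likelihood Gutenberg-Richter fit implicit in this kind of model comparison can be sketched with the classic Aki (1965) estimator for continuous magnitudes; the synthetic catalogue parameters below are illustrative assumptions, not values from the study.

```python
import math
import random

def aki_b_value(magnitudes, m_c):
    """Aki (1965) maximum-likelihood b-value for continuous
    magnitudes complete above the cutoff m_c."""
    mags = [m for m in magnitudes if m >= m_c]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_c)

# synthetic unbounded-GR catalogue with a known b-value:
# magnitudes above m_c are exponential with rate b * ln(10)
rng = random.Random(7)
b_true, m_c = 1.0, 3.0
beta = b_true * math.log(10.0)
mags = [m_c - math.log(1.0 - rng.random()) / beta for _ in range(20_000)]
b_hat = aki_b_value(mags, m_c)
```

Comparing the likelihood of such an unbounded GR fit against a tapered (modified) GR fit via an information criterion, as a catalogue grows, is the essence of the convergence diagnostic the abstract describes.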
Mid-IR period-magnitude relations for AGB stars
Glass, I S; Blommaert, J A D L; Sahai, R; Stute, M; Uttenthaler, S
2009-01-01
Asymptotic Giant Branch variables are found to obey period-luminosity relations in the mid-IR similar to those seen at K_S (2.14 microns), even at 24 microns where emission from circumstellar dust is expected to be dominant. Their loci in the M, logP diagrams are essentially the same for the LMC and for NGC6522 in spite of different ages and metallicities. There is no systematic trend of slope with wavelength. The offsets of the apparent magnitude vs. logP relations imply a difference between the two fields of 3.8 in distance modulus. The colours of the variables confirm that a principal period with log P > 1.75 is a necessary condition for detectable mass-loss. At the longest observed wavelength, 24 microns, many semi-regular variables have dust shells comparable in luminosity to those around Miras. There is a clear bifurcation in LMC colour-magnitude diagrams involving 24 micron magnitudes.
Magnitude and valence of outcomes as determinants of causal judgments
Diana Delgado
2013-01-01
The purpose of this project is to examine whether the blocking model predicts the attribution of causal judgments when the valence and magnitude of the consequences are varied. The experimental arrangement consists of presenting reports on the positive and negative effects produced by different substances when consumed alone or together with others. Participants in the first group were exposed to high-magnitude consequences and those in the second group to low-magnitude consequences. Whether the attribution of causality is consistent with the predictions of the blocking effect was evaluated by means of two types of question: one asking whether substance X produces the effect or not, and one about the probability that X produces the effect. Differences in causal judgments were examined when the attributions are the product of logical or intuitive reasoning. Although no evidence of the blocking effect was observed, interaction effects were obtained between the valence factor and the experimental condition (blocking and control substances). The findings are discussed in terms of the differences between associative learning in humans and non-human animals, and in terms of the implications for the theoretical differences between evaluative conditioning and predictive conditioning.
Correlating precursory declines in groundwater radon with earthquake magnitude.
Kuo, T
2014-01-01
Both studies at the Antung hot spring in eastern Taiwan and at the Paihe spring in southern Taiwan confirm that groundwater radon can be a consistent tracer for strain changes in the crust preceding an earthquake when observed in a low-porosity fractured aquifer surrounded by a ductile formation. Recurrent anomalous declines in groundwater radon were observed at the Antung D1 monitoring well in eastern Taiwan prior to the five earthquakes of magnitude (Mw) 6.8, 6.1, 5.9, 5.4, and 5.0 that occurred on December 10, 2003; April 1, 2006; April 15, 2006; February 17, 2008; and July 12, 2011, respectively. For earthquakes occurring on the longitudinal valley fault in eastern Taiwan, the observed radon minima decrease as the earthquake magnitude increases. This correlation has proven useful for early warning of large local earthquakes. In southern Taiwan, anomalous radon declines prior to the 2010 Mw 6.3 Jiasian, 2012 Mw 5.9 Wutai, and 2012 ML 5.4 Kaohsiung earthquakes were also recorded at the Paihe spring. For earthquakes occurring on different faults in southern Taiwan, a correlation between the observed radon minima and earthquake magnitude is not yet possible. © 2013, National Ground Water Association.
Hubble Space Telescope Observations of M32: The Color-Magnitude Diagram
Grillmair, Carl J.; Lauer, Tod R.; Worthey, Guy; Faber, S. M.; Freedman, Wendy L.; Madore, Barry F.; Ajhar, Edward A.; Baum, William A.; Holtzman, Jon A.; Lynds, C. R.; O'Neil, Earl J.; Stetson, Peter B.
1996-01-01
We present a V-I color-magnitude diagram for a region 1'-2' from the center of M32 based on Hubble Space Telescope WFPC2 images. The broad color-luminosity distribution of red giants shows that the stellar population comprises stars with a wide range in metallicity. This distribution cannot be explained by a spread in age. The blue side of the giant branch rises to M_I ~ -4.0 and can be fitted with isochrones having [Fe/H] ~ -1.5. The red side consists of a heavily populated and dominant sequence that tops out at M_I ~ -3.2, and extends beyond V-I=4. This sequence can be fitted with isochrones with -0.2 < [Fe/H] < +0.1, for ages running from 15 Gyr to 5 Gyr respectively. We do not find the optically bright asymptotic giant branch stars seen in previous ground-based work and argue that the majority of them were artifacts of crowding. Our results are consistent with the presence of the infrared-luminous giants found in ground-based studies, though their existence cannot be directly confirmed by our data. ...
Industrial Psychology: Goodness of fit? Fit for goodness?
Leon J. van Vuuren
2010-12-01
Orientation: This theoretical opinion-based paper represents a critical reflection on the relevance of industrial psychology. Research purpose: Against a historical-developmental background of the discipline, the inquiry questions its goodness of fit, that is, its contribution to organisation and society. Motivation for the study: Regular introspection in the discipline ensures that it remains relevant in both science and practice. Such introspection calls for a meta-theoretical imperative, to ensure that industrial psychology is fully aware of how the theoretical models applied in the discipline influence people and the society that they form part of. Research design, approach and method: The question of industrial psychology's potential fit for goodness that is broader than what is merely good for the organisation and its employees is explored with a view to enhancing its relevance. The exploration is conducted through theoretical argumentation in which industrial psychology is analysed in terms of contextual considerations that require the discipline to evaluate its real versus its potential contribution to society. Main findings: It is found that the fit is limited to its relevance for inwardly focused organisational behaviour due to its endorsement of the instrumental (strategic) motives of organisations that subscribe to an owner and/or shareholder agenda. Practical/managerial implications: In light of the main finding, industrial psychology's potential fit for goodness is explored with a view to enhancing its relevance in an era of goodness. The creation of a scientific and practical interface between industrial psychology and business ethics is suggested to facilitate movement away from a descriptive approach. Contribution/value-add: The heuristics of reflection, reform, research and resources are suggested to facilitate movement towards a normative (multiple stakeholder) paradigm aimed at broad-based goodness and
Maximum Bipartite Matching Size And Application to Cuckoo Hashing
Kanizo, Yossi; Keslassy, Isaac
2010-01-01
Cuckoo hashing with a stash is a robust high-performance hashing scheme that can be used in many real-life applications. It complements cuckoo hashing by adding a small stash storing the elements that cannot fit into the main hash table due to collisions. However, the exact required size of the stash and the tradeoff between its size and the memory over-provisioning of the hash table are still unknown. We settle this question by investigating the equivalent maximum matching size of a random bipartite graph, with a constant left-side vertex degree $d=2$. Specifically, we provide an exact expression for the expected maximum matching size and show that its actual size is close to its mean, with high probability. This result relies on decomposing the bipartite graph into connected components, and then separately evaluating the distribution of the matching size in each of these components. In particular, we provide an exact expression for any finite bipartite graph size and also deduce asymptotic results as the nu...
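The correspondence between cuckoo hashing with d = 2 choices and maximum matching in a random bipartite graph can be illustrated with a small simulation: each element picks two random buckets, a maximum matching is computed, and the unmatched elements are exactly those that would need the stash. The sizes below are arbitrary illustrative assumptions, and Kuhn's augmenting-path algorithm stands in for the paper's analytical component decomposition.

```python
import random

def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm; adj[u] lists right-vertices of u."""
    match_r = [-1] * n_right

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    size = 0
    for u in range(len(adj)):
        if try_augment(u, [False] * n_right):
            size += 1
    return size

# Each of n elements (left side) picks d = 2 distinct random buckets
# (right side), as in cuckoo hashing with two hash functions.
random.seed(1)
n, m, d = 200, 500, 2   # illustrative sizes; load n/m = 0.4
adj = [random.sample(range(m), d) for _ in range(n)]
matched = max_bipartite_matching(adj, m)
stash = n - matched     # elements that cannot be placed need the stash
print(f"matched {matched} of {n}; stash size {stash}")
```

Below the d = 2 threshold load of 0.5 the matching is almost always perfect, so the stash stays tiny; pushing the load above the threshold makes the unmatched fraction macroscopic, which is the size/over-provisioning tradeoff the paper quantifies exactly.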
Measurement and relevance of maximum metabolic rate in fishes.
Norin, T; Clark, T D
2016-01-01
Maximum (aerobic) metabolic rate (MMR) is defined here as the maximum rate of oxygen consumption (M˙O2max) that a fish can achieve at a given temperature under any ecologically relevant circumstance. Different techniques exist for eliciting MMR of fishes, of which swim-flume respirometry (critical swimming speed tests and burst-swimming protocols) and exhaustive chases are the most common. Available data suggest that the most suitable method for eliciting MMR varies with species and ecotype, and depends on the propensity of the fish to sustain swimming for extended durations as well as its capacity to simultaneously exercise and digest food. MMR varies substantially (>10 fold) between species with different lifestyles (i.e. interspecific variation), and to a lesser extent between individuals of the same species (i.e. intraspecific variation). Because MMR sets the upper bound of aerobic scope, interest in measuring this trait has spread across disciplines in attempts to predict effects of climate change on fish populations. Here, various techniques used to elicit and measure MMR in different fish species with contrasting lifestyles are outlined and the relevance of MMR to the ecology, fitness and climate change resilience of fishes is discussed.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
Does low magnitude earthquake ground shaking cause landslides?
Brain, Matthew; Rosser, Nick; Vann Jones, Emma; Tunstall, Neil
2015-04-01
Estimating the magnitude of coseismic landslide strain accumulation at both local and regional scales is a key goal in understanding earthquake-triggered landslide distributions and landscape evolution, and in undertaking seismic risk assessment. Research in this field has primarily been carried out using the 'Newmark sliding block method' to model landslide behaviour; downslope movement of the landslide mass occurs when seismic ground accelerations are sufficient to overcome shear resistance at the landslide shear surface. The Newmark method has the advantage of simplicity, requiring only limited information on material strength properties, landslide geometry and coseismic ground motion. However, the underlying conceptual model assumes that shear strength characteristics (friction angle and cohesion) calculated using conventional strain-controlled monotonic shear tests are valid under dynamic conditions, and that values describing shear strength do not change as landslide shear strain accumulates. Recent experimental work has begun to question these assumptions, highlighting, for example, the importance of shear strain rate and changes in shear strength properties following seismic loading. However, such studies typically focus on a single earthquake event that is of sufficient magnitude to cause permanent strain accumulation; by doing so, they do not consider the potential effects that multiple low-magnitude ground shaking events can have on material strength. Since such events are more common in nature relative to high-magnitude shaking events, it is important to constrain their geomorphic effectiveness. Using an experimental laboratory approach, we present results that address this key question. We used a bespoke geotechnical testing apparatus, the Dynamic Back-Pressured Shear Box (DynBPS), that uniquely permits more realistic simulation of earthquake ground-shaking conditions within a hillslope. We tested both cohesive and granular materials, both of which
Automated Determination of Magnitude and Source Extent of Large Earthquakes
Wang, Dun
2017-04-01
Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes of origin time is still a challenge. Mw is an accurate estimate for large earthquakes, but calculating Mw requires the whole wave trains including P, S, and surface phases, which take tens of minutes to reach stations at tele-seismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for fast estimation of earthquake sizes. Besides these methods that involve Green's functions and inversions, there are other approaches that use empirical relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at many institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach, originated from Hara [2007], that estimates magnitude by considering P-wave displacement and source duration. We introduced a back-projection technique [Wang et al., 2016] instead to estimate source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from more distant stations are not required. We propose to develop an automated system for determining fast and reliable source information of large shallow seismic events based on real-time data of a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
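The two fitting routes described above (annual maxima to GEV, peaks-over-threshold to Generalized Pareto, both by maximum likelihood) can be sketched with SciPy on synthetic data. The gamma-distributed "rainfall", the 99.5% threshold, and all parameter values are assumptions for illustration, not the Malaysian station data or the study's threshold-selection procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic daily rainfall (mm): gamma-distributed, 30 years x 365 days.
daily = rng.gamma(shape=0.8, scale=12.0, size=(30, 365))

# Annual Maximum series -> Generalized Extreme Value distribution (MLE).
am = daily.max(axis=1)
gev_shape, gev_loc, gev_scale = stats.genextreme.fit(am)

# Partial Duration series (peaks over threshold) -> Generalized Pareto.
threshold = np.quantile(daily, 0.995)
excesses = daily[daily > threshold] - threshold
gp_shape, _, gp_scale = stats.genpareto.fit(excesses, floc=0.0)

# 100-year return level from the GEV fit of annual maxima.
rl_100 = stats.genextreme.ppf(1.0 - 1.0 / 100.0, gev_shape, gev_loc, gev_scale)
print(f"GEV fit: shape={gev_shape:.3f}, loc={gev_loc:.1f}, scale={gev_scale:.1f}")
print(f"GPD fit: shape={gp_shape:.3f}, scale={gp_scale:.1f}")
print(f"Estimated 100-year daily value: {rl_100:.1f} mm")
```

A real analysis would, as the study does, select the threshold per station (e.g. by minimizing bootstrap MSE), de-cluster the exceedances, and compare the candidate distributions with goodness-of-fit tests before deriving return levels.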
Benefit segmentation of the fitness market.
Brown, J D
1992-01-01
While considerable attention is being paid to the fitness and wellness needs of people by healthcare and related marketing organizations, little research attention has been directed to identifying the market segments for fitness based upon consumers' perceived benefits of fitness. This article describes three distinct segments of fitness consumers comprising an estimated 50 percent of households. Implications for marketing strategies are also presented.
Youth Physical Fitness: Ten Key Concepts
Corbin, Charles B.; Welk, Gregory J.; Richardson, Cheryl; Vowell, Catherine; Lambdin, Dolly; Wikgren, Scott
2014-01-01
The promotion of physical fitness has been a key objective of physical education for more than a century. During this period, physical education has evolved to accommodate changing views on fitness and health. The purpose of this article is to discuss issues with fitness assessment and fitness education central to the new Presidential Youth…
Best-Fit Conic Approximation of Spacecraft Trajectory
Singh, Gurkipal
2005-01-01
A computer program calculates a best conic fit of a given spacecraft trajectory. Spacecraft trajectories are often propagated as conics onboard. The conic-section parameters resulting from the best conic fit are uplinked to computers aboard the spacecraft for use in updating predictions of the spacecraft trajectory for operational purposes. In the initial application for which this program was written, there is a requirement to fit a single conic section (necessitated by onboard memory constraints) accurate to within 200 microradians to a sequence of positions measured over a 4.7-hour interval. The present program supplants a prior one that could not cover the interval with fewer than four successive conic sections. The present program is based on formulating the best-fit conic problem as a parameter-optimization problem and solving the problem numerically, on the ground, by use of a modified steepest-descent algorithm. For the purpose of this algorithm, optimization is defined as minimization of the maximum directional propagation error across the fit interval. In the specific initial application, the program generates a single 4.7-hour conic, the directional propagation of which is accurate to within 34 microradians, easily exceeding the mission constraints by a wide margin.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving the satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. Detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then MPPT effectiveness of various converter systems is verified through simulations. Proposed theory and analysis is validated through experimental investigations.
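The abstract above analyzes converter topologies for maximum power point operation; as a complementary illustration of why an MPP tracker is needed on a non-linear i-v curve, here is a minimal perturb-and-observe tracking loop. The toy PV model, its parameters, and the step size are all assumptions for illustration, not the paper's converter analysis.

```python
import math

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV i-v curve: current falls off exponentially near v_oc.
    Illustrative model only; real modules use a diode-equation fit."""
    if v <= 0 or v >= v_oc:
        return 0.0
    i = i_sc * (1.0 - math.exp((v - v_oc) / 3.0))
    return v * i

def perturb_and_observe(v0=10.0, step=0.2, iters=500):
    """Classic P&O: keep stepping while power rises, reverse when it drops."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:          # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(f"Operating point settles near v = {v_mpp:.1f} V, p = {p_mpp:.1f} W")
```

The loop settles into a small oscillation around the knee of the power curve; in hardware, the voltage perturbation is realized by adjusting the duty ratio of the intermediate dc-dc converter the paper analyzes.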
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}\,y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2/(M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts were responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged interclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Effect of slip-area scaling on the earthquake frequency-magnitude relationship
Senatorski, Piotr
2017-06-01
The earthquake frequency-magnitude relationship is considered in the maximum entropy principle (MEP) perspective. The MEP suggests sampling with constraints as a simple stochastic model of seismicity. The model is based on von Neumann's acceptance-rejection method, with the b-value as the parameter that breaks symmetry between small and large earthquakes. The Gutenberg-Richter law's b-value forms a link between earthquake statistics and physics. A dependence between the b-value and the rupture-area vs. slip scaling exponent is derived. The relationship enables us to explain observed ranges of b-values for different types of earthquakes. Specifically, the different b-value ranges for tectonic and for induced (hydraulic fracturing) seismicity are explained in terms of their different triggering mechanisms: by the applied stress increase and by fault strength reduction, respectively.
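The acceptance-rejection idea named above can be made concrete: propose magnitudes uniformly and accept with an exponentially decaying probability, which yields a Gutenberg-Richter sample whose b-value can then be recovered by maximum likelihood. The parameter values and the use of Aki's estimator are illustrative assumptions, not the paper's constrained-sampling model.

```python
import math
import random

random.seed(0)

# Illustrative values (assumptions, not from the paper):
b, m_c, m_cap = 1.0, 2.0, 8.0
beta = b * math.log(10.0)

def sample_gr(n):
    """von Neumann acceptance-rejection: propose m ~ Uniform(m_c, m_cap),
    accept with probability exp(-beta * (m - m_c))."""
    out = []
    while len(out) < n:
        m = random.uniform(m_c, m_cap)
        if random.random() < math.exp(-beta * (m - m_c)):
            out.append(m)
    return out

mags = sample_gr(20000)

# Aki's maximum-likelihood b-value estimator recovers the input b.
mean_m = sum(mags) / len(mags)
b_hat = 1.0 / (math.log(10.0) * (mean_m - m_c))
print(f"input b = {b}, estimated b = {b_hat:.3f}")
```

Varying the acceptance exponent is the knob that "breaks symmetry" between small and large events: a larger beta (larger b-value) rejects large proposals more aggressively, thinning the tail of the synthetic catalogue.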
Seeing How Fitting Process Works
Montalbano, Vera
2016-01-01
A common problem in teaching physics in secondary school is the gap in difficulty between the physical concepts and the mathematical tools necessary to study them quantitatively. Advanced statistical estimators are commonly introduced only a couple of years later than some common physical topics, such as electronic circuit analysis. Filling this gap with alternative methods appears opportune, in order to let the students reach a full comprehension of the issue they are facing. In this work we use a smartphone camera and GeoGebra to propose a visual method for understanding the physical meaning of a fitting process. The time constant of an RC circuit is estimated by fitting the discharge curve of a capacitor visualized on the screen of an oscilloscope.
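The fit the abstract describes can be sketched numerically: sample a noisy capacitor discharge curve V(t) = V0 exp(-t/tau), linearize with a logarithm, and recover tau by ordinary least squares. The component values and noise level are assumptions for illustration, and the least-squares route stands in for the visual GeoGebra method of the article.

```python
import math
import random

random.seed(3)

# Illustrative component values (not from the article):
R, C, V0 = 10e3, 100e-9, 5.0        # 10 kOhm, 100 nF -> tau = 1 ms
tau_true = R * C

# Simulated oscilloscope samples: 0 to ~3 ms with 1% multiplicative noise.
ts = [i * 5e-5 for i in range(60)]
vs = [V0 * math.exp(-t / tau_true) * (1 + random.gauss(0, 0.01)) for t in ts]

# Linearize: ln V = ln V0 - t / tau, then ordinary least squares on (t, ln V).
ys = [math.log(v) for v in vs]
n = len(ts)
t_bar = sum(ts) / n
y_bar = sum(ys) / n
slope = sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys)) / \
        sum((t - t_bar) ** 2 for t in ts)
tau_fit = -1.0 / slope
print(f"true tau = {tau_true*1e3:.3f} ms, fitted tau = {tau_fit*1e3:.3f} ms")
```

This is the same estimation task the students perform graphically: the slope of the straightened (log-scale) discharge curve is -1/tau, so reading off the slope reads off the time constant.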
The Andersen aerobic fitness test
Aadland, Eivind; Terum, Torkil; Mamen, Asgeir
2014-01-01
BACKGROUND: High aerobic fitness is consistently associated with a favorable metabolic risk profile in children. Direct measurement of peak oxygen consumption (VO2peak) is often not feasible, thus indirect tests such as the Andersen test are required in many settings. The present study seeks to determine the reliability and validity of the Andersen test in 10-year-old children. METHODS: A total of 118 10-year-old children (67 boys and 51 girls) were recruited from one school and performed four VO2peak tests over three weeks: three Andersen tests (indirect) and one continuous progressive treadmill test. RESULTS: Mean differences (limits of agreement) were 26.7±125.2 m for test 2 vs. test 1 (p ...) and ... for test 3 vs. test 2 (p = .514 for mean difference). The equation to estimate VO2peak suggested by Andersen et al. (2008) showed a poor fit in the present sample; thus, we suggest a new equation: VO2peak = 23...