Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (ie point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation that was compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Probabilistic maximum-value wind prediction for offshore environments
Staid, Andrea; Pinson, Pierre; Guikema, Seth D.
2015-01-01
…statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models, and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock, mean sea-level pressure, and temperature, as given by the European Centre for Medium-Range Weather Forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop … probabilistic forecasts result in greater value to the end-user. The models outperform traditional baseline forecast methods and achieve low predictive errors on the order of 1–2 m s⁻¹. We show the results of their predictive accuracy for different lead times and different training methodologies.
Maximum Relative Entropy Updating and the Value of Learning
Patryk Dziurosz-Serafinowicz
2015-03-01
We examine the possibility of justifying the principle of maximum relative entropy (MRE), considered as an updating rule, by looking at the value-of-learning theorem established in classical decision theory. This theorem captures an intuitive requirement for learning: learning should lead to new degrees of belief that are expected to be helpful and never harmful in making decisions. We call this requirement the value of learning. We consider the extent to which learning rules by MRE could satisfy this requirement and so could be a rational means for pursuing practical goals. First, by representing MRE updating as a conditioning model, we show that MRE satisfies the value of learning in cases where learning prompts a complete redistribution of one's degrees of belief over a partition of propositions. Second, we show that the value of learning may not be generally satisfied by MRE updates in cases of updating on a change in one's conditional degrees of belief. We explain that this is so because, contrary to what the value of learning requires, one's prior degrees of belief might not be equal to the expectation of one's posterior degrees of belief. This, in turn, points towards a more general moral: that the justification of MRE updating in terms of the value of learning may be sensitive to the context of a given learning experience. Moreover, this lends support to the idea that MRE is neither a universal nor a mechanical updating rule, but rather a rule whose application and justification may be context-sensitive.
Perceiving action boundaries: Learning effects in perceiving maximum jumping-reach affordances
Ramenzoni, V.C.; Davis, T.J.; Riley, M.A.; Shockley, K.
2010-01-01
Coordinating with another person requires that one can perceive what the other is capable of doing. This ability often benefits from opportunities to practice and learn. Two experiments were conducted in which we investigated perceptual learning in the context of perceiving the maximum height to whi
Investigation on Maximum Available Reach for Different Modulation Formats in WDM-PON Systems
Kurbatska, I.; Bobrovs, V.; Spolitis, S.; Gavars, P.; Ivanovs, G.; Parts, R.
2016-08-01
Considering the growing demand for broadband access networks, in the present paper we investigate various modulation formats as a way of increasing the performance of optical transmission systems. Non-return-to-zero (NRZ) on-off keying (OOK), return-to-zero (RZ) OOK, carrier-suppressed RZ (CSRZ) OOK, duobinary (DB), NRZ differential phase shift keying (NRZ-DPSK), RZ-DPSK, and CSRZ-DPSK formats are compared using the maximum achievable reach at a bit error rate below 10⁻⁹ as the criterion. Simulations are performed using the OptSim software tool. It is shown that, for the transmission system without dispersion compensation, the best results are achieved by the duobinary and CSRZ-OOK modulation formats, whereas for the system with dispersion compensating fiber (DCF) the longest transmission distance is achieved by the RZ-DPSK modulation format. Investigating the influence of channel spacing for the best-performing modulation formats, a decrease in network reach has been observed for transmission systems with DCF fiber due to channel crosstalk.
R Wave Extraction Based on the Maximum First Derivative plus the Maximum Value of the Double Search
Wen-po Yao; Wen-li Yao; Min Wu; Tie-bing Liu
2016-01-01
R-wave detection is the main approach for heart rate variability analysis and for clinical applications based on the R-R interval. The maximum first derivative plus the maximum value of the double search algorithm is applied to electrocardiograms (ECG) from the MIT-BIH Arrhythmia Database to extract the R wave. Through study of the algorithm's characteristics and the R-wave detection method, the data segmentation method is modified to improve detection accuracy. After this modification, the average accuracy rate on 6 sets of short ECG data increases from 82.51% to 93.70%, and the average accuracy rate on 11 groups of long-range data is 96.61%. Test results show that the algorithm and segmentation method can accurately locate the R wave, with good effectiveness and versatility, although some beats may remain undetected due to the algorithm's implementation.
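The double-search idea described above — find where the slope is largest, then search a short distance ahead for the amplitude maximum — can be sketched as follows. This is a hedged illustration, not the authors' implementation: the threshold fraction, search window, and refractory period are assumed values.

```python
def detect_r_peaks(ecg, fs, search_ms=60, refractory_ms=250, thresh_frac=0.5):
    """Sketch of a max-first-derivative + max-value double search:
    candidate points are where the slope exceeds a fraction of the
    global maximum slope; each candidate is refined to the local
    amplitude maximum within a short forward window."""
    diffs = [ecg[i + 1] - ecg[i] for i in range(len(ecg) - 1)]
    thresh = thresh_frac * max(diffs)
    win = int(fs * search_ms / 1000)          # forward search window
    refr = int(fs * refractory_ms / 1000)     # skip re-triggers on one beat
    peaks, last = [], -refr
    for i, d in enumerate(diffs):
        if d >= thresh and i - last >= refr:
            seg = ecg[i:i + win]
            peaks.append(i + seg.index(max(seg)))
            last = peaks[-1]
    return peaks
```

On a clean synthetic signal the detector returns the sample index of each amplitude peak; real ECG would need baseline removal and adaptive thresholds, which this sketch omits.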
Hatem A. Emara
2015-10-01
Background: The most critical feature of motor development is the ability to balance the body in sitting or standing. Impaired balance limits a child's ability to recover from unexpected threats to stability. The functional reach test (FRT) measures the maximal distance an individual is able to reach forward beyond arm's length in a standing position without losing balance, taking a step, or touching the wall. The purpose of this study was to establish normal FRT values for typically developing school children in Saudi Arabia and to study the correlation of anthropometric measures with FRT values. Methods: This cross-sectional study was conducted in Almadinah Almonawarah, Kingdom of Saudi Arabia. A total of 280 children without disabilities, aged 6 to 12 years, were randomly selected. Functional reach was assessed by having subjects extend their arms to 90 degrees and reach as far forward as they could without taking a step. Reach distance was recorded by noting the beginning and final position of the subject's extended arm parallel to a yardstick attached to the wall. Three successive trials of the FRT were performed and the mean of the three trials was calculated. Pearson product-moment correlation was used to examine the association of functional reach with age and anthropometric measures. Results: Normal mean values of functional reach ranged from 24.2 cm to 33.95 cm. Age, height, and weight correlated significantly with FRT values. Conclusion: The FRT is a feasible test for examining the balance of 6- to 12-year-old children and may be useful for detecting balance impairment and changes in balance performance over time.
Downstream hydraulic geometry relationships: Gathering reference reach-scale width values from LiDAR
Sofia, G.; Tarolli, P.; Cazorzi, F.; Dalla Fontana, G.
2015-12-01
This paper examines the ability of LiDAR topography to provide reach-scale width values for the analysis of downstream hydraulic geometry relationships along some streams in the Dolomites (northern Italy). Multiple reach-scale dimensions can provide representative geometries and statistics characterising the longitudinal variability in the channel, improving the understanding of geomorphic processes across networks. Starting from the minimum curvature derived from a LiDAR DTM, the proposed algorithm uses a statistical approach for the identification of the scale of analysis, and for the automatic characterisation of reach-scale bankfull widths. The downstream adjustment in channel morphology is then related to flow parameters (drainage area and stream power). With the correct planning of a LiDAR survey, uncertainties in the procedure are principally due to the resolution of the DTM. The outputs are in general comparable in quality to field survey measurements, and the procedure allows the quick comparison among different watersheds. The proposed automatic approach could improve knowledge about river systems with highly variable widths, and about systems in areas covered by vegetation or inaccessible to field surveys. With proven effectiveness, this research could offer an interesting starting point for the analysis of differences between watersheds, and to improve knowledge about downstream channel adjustment in relation, for example, to scale and landscape forcing (e.g. sediment transport, tectonics, lithology, climate, geomorphology, and anthropic pressure).
Domoshnitsky Alexander
2009-01-01
We obtain maximum principles for the first-order neutral functional differential equation …, where … are linear continuous operators, … are positive operators, … is the space of continuous functions, and … is the space of essentially bounded functions defined on …. New tests on positivity of the Cauchy function and its derivative are proposed. Results on existence and uniqueness of solutions for various boundary value problems are obtained on the basis of the maximum principles.
J.-U. Grooß
2011-12-01
Balloon-borne observations of ozone from the South Pole Station have been reported to reach ozone mixing ratios below the detection limit of about 10 ppbv at the 70 hPa level by late September. After reaching a minimum, ozone mixing ratios increase to above 1 ppmv at the 70 hPa level by late December. While the basic mechanisms causing the ozone hole have been known for more than 20 years, the detailed chemical processes determining how low the local concentration can fall, and how it recovers from the minimum, have not been explored so far. Both of these aspects are investigated here by analysing results from the Chemical Lagrangian Model of the Stratosphere (CLaMS). As ozone falls below about 0.5 ppmv, a balance is maintained in these simulations by gas-phase production of both HCl and HOCl followed by heterogeneous reaction between these two compounds. Thereafter, a very rapid, irreversible chlorine deactivation into HCl can occur, either when ozone drops to values low enough for gas-phase HCl production to exceed chlorine activation processes or when temperatures increase above the polar stratospheric cloud (PSC) threshold. As a consequence, the timing and mixing ratio of the ozone minimum depend sensitively on model parameters, including the ozone initialisation. The subsequent ozone increase between October and December is linked mainly to photochemical ozone production, caused by oxygen photolysis and by the oxidation of carbon monoxide and methane.
Takara, K. T.
2015-12-01
This paper describes a non-parametric frequency analysis method for hydrological extreme-value samples with a size larger than 100, verifying the estimation accuracy with computer-intensive statistics (CIS) resampling such as the bootstrap. Probable maximum values are also incorporated into the analysis for extreme events larger than the design level of flood control. Traditional parametric frequency analysis of extreme values involves the following steps: Step 1: collecting and checking extreme-value data; Step 2: enumerating probability distributions that would fit the data well; Step 3: parameter estimation; Step 4: testing goodness of fit; Step 5: checking the variability of quantile (T-year event) estimates by the jackknife resampling method; and Step 6: selecting the best distribution (final model). The non-parametric method (NPM) proposed here can skip Steps 2, 3, 4, and 6. Comparing traditional parametric methods (PM) with the NPM, this paper shows that PM often underestimates 100-year quantiles for annual maximum rainfall samples with records of more than 100 years; overestimation examples are also demonstrated. Bootstrap resampling can correct the bias of the NPM and can also quantify the estimation accuracy as the bootstrap standard error. The NPM thus avoids various difficulties in the above-mentioned steps of the traditional PM. Probable maximum events are also incorporated into the NPM as an upper bound of the hydrological variable: probable maximum precipitation (PMP) and probable maximum flood (PMF) can serve as a new parameter value combined with the NPM. An approach for incorporating these values into frequency analysis is proposed for better management of disasters that exceed the design level. The approach stimulates a more integrated effort by geoscientists and statisticians, and encourages practitioners to consider worst-case disasters in their disaster management planning and practices.
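The two non-parametric ingredients described above — an empirical T-year quantile and its bootstrap standard error — can be sketched briefly. This is an illustrative reading of the method, not the paper's exact procedure; the Weibull plotting position and the resampling count are assumptions.

```python
import random
import statistics

def np_quantile(sample, T):
    """Non-parametric T-year quantile: the empirical quantile at
    non-exceedance probability p = 1 - 1/T, using the Weibull
    plotting position i/(n+1) with linear interpolation."""
    xs = sorted(sample)
    p = 1.0 - 1.0 / T
    h = p * (len(xs) + 1) - 1           # fractional index into xs
    i = max(0, min(len(xs) - 2, int(h)))
    frac = h - i
    return xs[i] + frac * (xs[i + 1] - xs[i])

def bootstrap_se(sample, T, B=500, seed=1):
    """Bootstrap standard error of the T-year quantile: resample the
    record with replacement B times and take the spread of estimates."""
    rng = random.Random(seed)
    reps = [np_quantile([rng.choice(sample) for _ in sample], T)
            for _ in range(B)]
    return statistics.stdev(reps)
```

Note that the empirical quantile cannot extrapolate beyond the observed record, which is exactly why the paper bounds it above with PMP/PMF values.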
Maximum-Entropy Parameter Estimation for the k-nn Modified Value-Difference Kernel
Hendrickx, Iris; Bosch, Antal van den
2005-01-01
We introduce an extension of the modified value-difference kernel of k-nn by replacing the kernel's default class distribution matrix with the matrix produced by the maximum-entropy learning algorithm. This hybrid algorithm is tested on fifteen machine learning benchmark tasks, comparing the hybrid
Perkell, J S; Hillman, R E; Holmberg, E B
1994-08-01
In previous reports, aerodynamic and acoustic measures of voice production were presented for groups of normal male and female speakers [Holmberg et al., J. Acoust. Soc. Am. 84, 511-529 (1988); J. Voice 3, 294-305 (1989)] that were used as norms in studies of voice disorders [Hillman et al., J. Speech Hear. Res. 32, 373-392 (1989); J. Voice 4, 52-63 (1990)]. Several of the measures were extracted from glottal airflow waveforms that were derived by inverse filtering a high-time-resolution oral airflow signal. Recently, the methods have been updated and a new study of additional subjects has been conducted. This report presents previous (1988) and current (1993) group mean values of sound pressure level, fundamental frequency, maximum airflow declination rate, ac flow, peak flow, minimum flow, ac-dc ratio, inferred subglottal air pressure, average flow, and glottal resistance. Statistical tests indicate overall group differences and differences for values of several individual parameters between the 1988 and 1993 studies. Some inter-study differences in parameter values may be due to sampling effects and minor methodological differences; however, a comparative test of 1988 and 1993 inverse filtering algorithms shows that some lower 1988 values of maximum flow declination rate were due at least in part to excessive low-pass filtering in the 1988 algorithm. The observed differences should have had a negligible influence on the conclusions of our studies of voice disorders.
Bravo, J. L [Instituto de Geofisica, UNAM, Mexico, D.F. (Mexico); Nava, M. M [Instituto Mexicano del Petroleo, Mexico, D.F. (Mexico); Gay, C [Centro de Ciencias de la Atmosfera, UNAM, Mexico, D.F. (Mexico)
2001-07-01
We developed a procedure to forecast, 2 to 3 hours in advance, the daily maximum of surface ozone concentrations. It involves fitting Autoregressive Integrated Moving Average (ARIMA) models to daily ozone maximum concentrations at 10 atmospheric monitoring stations in Mexico City over a one-year period. A one-day forecast is made and adjusted with the meteorological and solar radiation information acquired during the 3 hours preceding the occurrence of the maximum value. The relative importance for forecasting of the history of the process and of meteorological conditions is evaluated. Finally, an estimate of the daily probability of exceeding a given ozone level is made.
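As a toy illustration of the autoregressive idea behind such forecasts (not the authors' model — a real ARIMA fit would handle differencing, moving-average terms, and the exogenous meteorological inputs the abstract mentions), a least-squares AR(1) fit and one-step forecast look like this:

```python
import random

def fit_ar1(series):
    """Least-squares AR(1) fit: x[t] ~ c + phi * x[t-1]."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    return my - phi * mx, phi

def forecast_next(series):
    """One-step-ahead forecast from the fitted AR(1) model."""
    c, phi = fit_ar1(series)
    return c + phi * series[-1]

def simulate_ar1(c, phi, n, sigma=0.5, seed=0):
    """Generate a synthetic AR(1) series for testing the fit."""
    rng = random.Random(seed)
    x = [c / (1.0 - phi)]             # start at the stationary mean
    for _ in range(n):
        x.append(c + phi * x[-1] + rng.gauss(0.0, sigma))
    return x
```

In practice one would reach for a library implementation (e.g. statsmodels' ARIMA) rather than this hand-rolled fit.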
Gutenberg-Richter b-value maximum likelihood estimation and sample size
Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.
2017-01-01
The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are aware of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample must be to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimations of b. Our results estimate the probability of obtaining a correct estimate of b at a given desired precision for samples of different sizes. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
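The Aki-Utsu estimator has a closed form, with Utsu's half-bin correction for rounded magnitudes, and the Monte Carlo check the abstract describes can be sketched as follows (the cutoff magnitude, bin width, and sample sizes below are illustrative assumptions):

```python
import math
import random

def b_value_aki_utsu(mags, m_min, dm=0.1):
    """Aki-Utsu maximum likelihood b-value, with Utsu's half-bin
    correction for magnitudes rounded to bins of width dm:
    b = log10(e) / (mean(M) - (m_min - dm/2))."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

def simulate(b_true, n, m_min=2.0, dm=0.1, seed=0):
    """Monte Carlo check: draw n magnitudes from a Gutenberg-Richter
    (exponential) law above the continuous cutoff m_min - dm/2,
    round them to the dm grid, and estimate b."""
    rng = random.Random(seed)
    beta = b_true * math.log(10)
    cut = m_min - dm / 2.0
    mags = [round((cut + rng.expovariate(beta)) / dm) * dm for _ in range(n)]
    return b_value_aki_utsu(mags, m_min, dm)
```

Running `simulate` for a grid of sample sizes and precisions reproduces the kind of representativity analysis the paper reports: large samples recover b closely, while small samples scatter widely.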
Asymptotic Behavior of the Maximum and Minimum Singular Value of Random Vandermonde Matrices
Tucci, Gabriel H
2012-01-01
This work examines various statistical distributions in connection with random Vandermonde matrices and their extension to $d$-dimensional phase distributions. Upper and lower bound asymptotics for the maximum singular value are found to be $O(\log N^d)$ and $O(\log N^d / \log\log N^d)$ respectively, where $N$ is the dimension of the matrix, generalizing the results in [TW]. We further study the behavior of the minimum singular value of a random Vandermonde matrix. In particular, we prove that the minimum singular value is at most $N^2\exp(-C\sqrt{N})$, where $N$ is the dimension of the matrix and $C$ is a constant whose value is determined explicitly. The main result is obtained in two different ways. One approach uses techniques from stochastic processes, in particular a construction related to the Brownian bridge. The other is a more direct analytical approach involving combinatorics and complex analysis. As a consequence, we obtain a lower bound for the maxi…
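A quick numerical experiment illustrates the setting: a random Vandermonde matrix with i.i.d. uniform phases has a largest singular value a little above 1 and a tiny smallest singular value. The 1/√N normalization follows the random-matrix convention; the dimension and seed below are arbitrary choices, not from the paper.

```python
import numpy as np

def random_vandermonde(N, seed=0):
    """N x N random Vandermonde matrix V[j, k] = exp(2*pi*1j*j*x_k)/sqrt(N)
    with i.i.d. uniform phases x_k in [0, 1)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, N)
    j = np.arange(N)[:, None]
    return np.exp(2j * np.pi * j * x[None, :]) / np.sqrt(N)

def extreme_singular_values(N, seed=0):
    """Largest and smallest singular values of one random draw."""
    s = np.linalg.svd(random_vandermonde(N, seed), compute_uv=False)
    return float(s[0]), float(s[-1])
```

Since each entry has magnitude 1/√N, the squared singular values sum to exactly N, so the largest one is always at least 1; nearly coincident phases make two columns almost parallel, which is what drives the smallest singular value toward zero.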
Value of social media in reaching and engaging employers in Total Worker Health.
Hudson, Heidi; Hall, Jennifer
2013-12-01
This study describes the initial use of social media by the National Institute for Occupational Safety and Health (NIOSH) Total Worker Health™ (TWH) Program and the University of Iowa Healthier Workforce Center for Excellence (HWCE) Outreach Program. Social media analytics tools and process evaluation methods were used to derive initial insights into the social media strategies used by NIOSH and the HWCE. The online community size for the NIOSH TWH Program showed 100% growth in 6 months; however, social media platforms have been slow to gain participation among employers. The NIOSH TWH Program and the HWCE Outreach Program have found social media tools to be an effective way to expand reach, foster engagement, and gain understanding of audience interests around TWH concepts. More needs to be known about how best to use social media to reach and engage target audiences on issues relevant to TWH.
F. Harinck; C.K.W. de Dreu
2004-01-01
Negotiation research and theory tend to focus on interests and ignore values. This experiment compared the influence of negotiations about interests with negotiations about values under low or high time pressure. Results showed that (1) individuals got locked into early impasses more often under l…
Localization of b-values and maximum earthquakes; B chi to saidai jishin no chiikisei
Kurimoto, H.
1996-05-01
There is a view that temporal and spatial gaps in earthquake activity contribute to earthquake occurrence probability. Based on the idea that, if so, this tendency may also appear in statistical earthquake parameters, earthquake activity in each ten-year period was investigated through the relation between the spatial distribution of the b-value (the slope of the line relating the number of earthquakes to magnitude) and the epicenters of earthquakes with M ≥ 7.0. The region surveyed is the Japanese Islands and the surrounding ocean; the unit region was the area inside a circle with a radius of 100 km centered on grid points spaced at 1° intervals in latitude and longitude. Focal depths were divided into shallower and deeper than 60 km. As a result, the following were found: of the epicenters of earthquakes with M ≥ 7.0 during the 100-year survey period, many lie in areas with b ≤ 0.75, while some may lie in areas with b ≥ 0.75 in the region from the ocean near the Izu Peninsula to the ocean off western Hokkaido; epicenters in areas with b ≤ 0.75 appear not to approach the center of the contour indicating the maximum b-value. 7 refs., 2 figs.
Wheel-slip Control Method for Seeking Maximum Value of Tangential Force between Wheel and Rail
Kondo, Keiichiro; Yasuoka, Ikuo; Yamazaki, Osamu; Toda, Shinichi; Nakazawa, Yosuke
A method of reducing motor torque in proportion to wheel slip is applied to an inverter-driven electric locomotive. The motor torque at the wheel-slip speed is less than the torque at the maximum tangential force, or adhesion force. A novel anti-slip control method for seeking the maximum value of the tangential force between the wheel and rail is proposed in this paper. The characteristics of the proposed method are analyzed theoretically to design the torque reduction ratio and the rate of change of the pattern between the wheel-slip speed and motor current. In addition, experimental tests are carried out to verify that the proposed method increases the traction force of an electric locomotive driven by induction motors and inverters. The experimental results obtained with the proposed control method are compared with those obtained with a conventional control method: the average operating current with the proposed method is 10% higher than with the conventional method.
NONE
2000-07-01
The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts used in monitoring radiation exposure. The guide does not cover the calculation of patient doses carried out for quality assurance purposes.
Liu, Jian; Miller, William H.
2008-08-01
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real-time correlation functions. The LSC-IVR provides a very effective 'prior' for the MEAC procedure since it is very good at short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high-temperature limit. This combined MEAC+LSC-IVR approach is applied here to two highly nonlinear dynamical systems: a pure quartic potential in one dimension, and liquid para-hydrogen at two thermal state points (25 K and 14 K, under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is already excellent at T = 25 K, but the MEAC procedure produces a significant correction at the lower temperature (T = 14 K). Comparisons are also made with how the MEAC procedure provides corrections for other trajectory-based dynamical approximations when used as priors.
Elhussain, O. A.; Abdel-Magid, T. I. M.
2016-08-01
A mono-crystalline solar cell module was tested experimentally in Khartoum, Sudan, to study the difference between the maximum empirical peak-Watt value and the maximum thermal power produced in the field under highly favourable solar conditions. Field measurements of incident solar radiation, produced voltage, current, and temperature were recorded at several time intervals during the sunshine period. The thermal power was calculated using fundamental principles of heat transfer. The study shows that the module could not attain its empirical peak power, despite maximum direct incident solar radiation and maximum temperature gain. The difference between field measurements and the manufacturer's stated empirical value amounts to a loss of about 6% of power. The solar cell thus exhibits 94% efficiency relative to the manufacturer's data, and is 3% more efficient in thermal energy production than in electrical power extraction under hot-dry climate conditions.
News at Nine: The value of near-real time data for reaching mass media
Allen, J.; Ward, K.; Simmon, R. B.; Carlowicz, M. J.; Scott, M.; Przyborski, P. D.; Voiland, A. P.
2012-12-01
NASA's Earth Observatory (EO) is an online publication featuring NASA Earth science news and images. Since its inception in 1999, the EO team has relied heavily on near-real-time satellite data to publish imagery of breaking news events, such as volcanoes, floods, fires, and dust storms. Major news outlets (Associated Press, The Weather Channel, CNN, etc.) have regularly republished Earth Observatory imagery in their coverage of events. Because of the nature of the modern 24-hour news cycle, media almost always want near-real-time coverage; providing it depends heavily on rapid data turnaround, user-friendly data systems, and fast data access. We will discuss how we use near-real-time data and provide examples of how data systems have been transformed in the past 13 years. We will offer some thoughts on best practices (from the view of a user) in expedited data systems and the positive effect of those practices on public awareness of our content. Finally, we will share how we work with science teams to see the potential stories in their data, and the value of providing the data in a timely fashion. Acquired October 9, 2010, this natural-color image shows the toxic sludge spill from an alumina plant in southern Hungary.
Taniai, Yoshiaki; Nishii, Jun
2015-08-01
When we move our body to perform a movement task, our central nervous system selects a movement trajectory from an infinite number of possible trajectories under constraints that have been acquired through evolution and learning. Minimization of the energy cost has been suggested as a potential candidate for a constraint determining locomotor parameters, such as stride frequency and stride length; however, other constraints have been proposed for a human upper-arm reaching task. In this study, we examined whether the minimum metabolic energy cost model can also explain the characteristics of the upper-arm reaching trajectories. Our results show that the optimal trajectory that minimizes the expected value of energy cost under the effect of signal-dependent noise on motor commands expresses not only the characteristics of reaching movements of typical speed but also those of slower movements. These results suggest that minimization of the energy cost would be a basic constraint not only in locomotion but also in upper-arm reaching.
Nakajo, Masatoyo [Nanpuh Hospital, Department of Radiology, Kagoshima (Japan); Kagoshima University, Department of Radiology, Graduate School of Medical and Dental Sciences, Kagoshima (Japan); Kajiya, Yoriko; Tani, Atsushi; Ueno, Masako [Nanpuh Hospital, Department of Radiology, Kagoshima (Japan); Kaneko, Tomoyo; Kaneko, Youichi [Kaneko Clinic, Department of Breast Surgery, Kagoshima (Japan); Takasaki, Takashi [Department of Pathology, Clinical Pathology Laboratory, Kagoshima (Japan); Koriyama, Chihaya [Kagoshima University, Department of Epidemiology and Preventive Medicine, Graduate School of Medical and Dental Sciences, Kagoshima (Japan); Nakajo, Masayuki [Kagoshima University, Department of Radiology, Graduate School of Medical and Dental Sciences, Kagoshima (Japan)
2010-11-15
To correlate both the primary lesion ¹⁸F-fluorodeoxyglucose (FDG) maximum standardized uptake value (SUVmax) and the diffusion-weighted imaging (DWI) apparent diffusion coefficient (ADC) with clinicopathological prognostic factors, and to compare the prognostic value of these indexes in breast cancer. The study population consisted of 44 patients with 44 breast cancers visible on both preoperative FDG PET/CT and DWI images. The breast cancers included 9 ductal carcinomas in situ (DCIS) and 35 invasive ductal carcinomas (IDC). The relationships between both SUVmax and ADC and clinicopathological prognostic factors were evaluated by univariate and multivariate regression analysis, and the degree of correlation was determined by Spearman's rank test. The patients were divided into a better prognosis group (n = 24) and a worse prognosis group (n = 20) based upon invasiveness (DCIS or IDC) and upon their prognostic group (good, moderate, or poor) determined from the modified Nottingham prognostic index. Prognostic values were examined by receiver operating characteristic analysis. Both SUVmax and ADC were significantly associated (p < 0.05) with histological grade (independently), nodal status, and vascular invasion. Significant associations were also noted between SUVmax and tumour size (independently), oestrogen receptor status and human epidermal growth factor receptor-2 status, and between ADC and invasiveness. SUVmax and ADC were negatively correlated (ρ = -0.486, p = 0.001) and were positively and negatively associated with increasing histological grade, respectively. The threshold values for predicting a worse prognosis were ≥4.2 for SUVmax (sensitivity, specificity, and accuracy of 80%, 75%, and 77%, respectively) and ≤0.98 for ADC (sensitivity, specificity, and accuracy of 90%, 67%, and 77%, respectively). SUVmax and ADC correlated with several pathological prognostic factors, and both indexes may have the same potential for predicting the…
Cristian Enache
2006-06-01
For a class of nonlinear elliptic boundary value problems in divergence form, we construct some general elliptic inequalities for appropriate combinations of u(x) and |∇u|², where u(x) are the solutions of our problems. From these inequalities, we derive, using Hopf's maximum principles, some maximum principles for the appropriate combinations of u(x) and |∇u|², and we list a few examples of problems to which these maximum principles may be applied.
Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro
2017-10-01
The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and the maximum theoretical value of the correlation coefficient r can prove useful in estimating the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum attainable r value degrades as data uncertainty increases. The corresponding confidence interval of r is determined using the Fisher r→Z transform.
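The Fisher r→Z transform mentioned here has a standard closed form; a minimal sketch of the confidence-interval computation (function name and sample values are illustrative, not from the article's dataset):

```python
import math

def r_confidence_interval(r, n, alpha=0.05):
    """Approximate 95% CI for Pearson r via the Fisher r -> z transform."""
    z = math.atanh(r)                 # Fisher transform z = artanh(r)
    se = 1.0 / math.sqrt(n - 3)       # standard error of z
    zcrit = 1.959963984540054         # ~97.5th percentile of N(0, 1)
    lo, hi = z - zcrit * se, z + zcrit * se
    return math.tanh(lo), math.tanh(hi)   # back-transform to r scale

# Example: r = 0.9 observed on n = 30 points.
lo, hi = r_confidence_interval(0.9, 30)
```

Note how quickly the interval widens for small n, which is the reliability concern the article raises.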
A New Fuzzy-Based Maximum Power Point Tracker for a Solar Panel Based on Datasheet Values
Ali Kargarnejad; Mohsen Taherbaneh; Amir Hosein Kashefi
2013-01-01
Tracking the maximum power point of a solar panel is of interest in most photovoltaic applications, and modelling a panel exclusively from manufacturer data is equally attractive. Since manufacturers generally give the electrical specifications of their products at a single operating condition, there are many cases in which the specifications under other conditions are of interest. In this research, a comprehensive one-diode model for a solar panel is developed based only on datasheet values, with the model parameters' dependence on environmental conditions taken into account as far as possible. Comparison between real data and simulation results shows that the proposed model attains the maximum obtainable accuracy. A new fuzzy-based controller to track the maximum power point of the solar panel is then proposed, which responds better than the previously developed common controller in terms of speed, accuracy and stability.
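The one-diode model referred to can be sketched as follows. This is a simplified version (series resistance neglected so the current is explicit) with illustrative placeholder parameters, not actual datasheet values, and a brute-force scan for the maximum power point rather than the paper's fuzzy tracker:

```python
import math

# Simplified one-diode model, I = Iph - I0*(exp(V/(Ns*n*Vt)) - 1) - V/Rsh.
# All parameter values below are illustrative assumptions: Iph (photocurrent),
# I0 (diode saturation current), n (ideality factor), Ns (cells in series),
# Rsh (shunt resistance), t (cell temperature in kelvin).
def panel_current(v, iph=8.21, i0=1e-7, n=1.3, ns=60, rsh=300.0, t=298.15):
    vt = 1.380649e-23 * t / 1.602176634e-19   # thermal voltage kT/q
    return iph - i0 * (math.exp(v / (ns * n * vt)) - 1.0) - v / rsh

# Scan the I-V curve to locate the maximum power point numerically.
best_v, best_p = max(((v, v * panel_current(v))
                      for v in (i * 0.01 for i in range(0, 4000))),
                     key=lambda pair: pair[1])
```

With a full datasheet model, the same scan (or a fuzzy/hill-climbing controller) would be run at each operating condition rather than once.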
Tibaek, S; Holmestad-Bechmann, N; Pedersen, Trine B
2015-01-01
OBJECTIVES: To establish reference values for maximum walking speed over 10 m for independent community-dwelling Danish adults, aged 60 to 79 years, and to evaluate the effects of gender and age. DESIGN: Cross-sectional study. SETTING: Danish companies and senior citizens clubs. PARTICIPANTS: Two ...
Valuing option on the maximum of two assets using improving modified Gauss-Seidel method
Koh, Wei Sin; Muthuvalu, Mohana Sundaram; Aruchunan, Elayaraja; Sulaiman, Jumat
2014-07-01
This paper presents the numerical solution for the option on the maximum of two assets using the Improving Modified Gauss-Seidel (IMGS) iterative method. This option is governed by a two-dimensional Black-Scholes partial differential equation (PDE). The Crank-Nicolson scheme is applied to discretize the Black-Scholes PDE in order to derive a linear system. The IMGS iterative method is then formulated to solve the linear system. Numerical experiments involving the Gauss-Seidel (GS) and Modified Gauss-Seidel (MGS) iterative methods are implemented as control methods to test the computational efficiency of the IMGS iterative method.
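A plain Gauss-Seidel sweep, the baseline that the MGS and IMGS variants refine with additional preconditioning, can be sketched as (the 3×3 system below is an illustrative diagonally dominant example, not the paper's discretized Black-Scholes system):

```python
import numpy as np

def gauss_seidel(a, b, x0=None, tol=1e-10, max_iter=10_000):
    """Basic Gauss-Seidel iteration for A x = b (A needs a nonzero diagonal;
    convergence is guaranteed e.g. for diagonally dominant A)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x_old[i+1:].
            s = a[i, :i] @ x[:i] + a[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / a[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Small diagonally dominant system of the kind a Crank-Nicolson stencil yields.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x = gauss_seidel(A, b)
```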
Grooß, J.-U.; Brautzsch, K.; Pommrich, R.; Solomon, S.; Müller, R.
2011-08-01
Balloon-borne observations of ozone from Antarctic stations have been reported to reach ozone mixing ratios as low as about 10 ppbv at the 70 hPa level by late September. After reaching a minimum, ozone mixing ratios then increase to the ppmv level by late December. While the basic mechanisms causing the ozone hole have been known for more than 20 yr, the detailed chemical processes controlling how low the local concentration can fall, and how it recovers from the minimum, have not been explored so far. Both of these aspects are investigated here by analysing results from the Chemical Lagrangian Model of the Stratosphere (CLaMS). We discuss the processes responsible for halting the catalytic ozone depletion. We show that in these simulations an irreversible chlorine deactivation into HCl can occur either when ozone drops to very low values or when temperatures increase above the PSC threshold. As a consequence, the timing and mixing ratio of the minimum depend sensitively on model parameters, including the ozone initialisation. The subsequent observed ozone increase between October and December is linked not only to transport but also to photochemical ozone production, caused by oxygen photolysis and by the oxidation of carbon monoxide and methane.
Inaugural Maximum Values for Sodium in Processed Food Products in the Americas.
Campbell, Norm; Legowski, Barbara; Legetic, Branka; Nilson, Eduardo; L'Abbé, Mary
2015-08-01
Reducing dietary salt/sodium is one of the most cost-effective interventions to improve population health. There are five initiatives in the Americas that independently developed targets for reformulating foods to reduce salt/sodium content. Applying selection criteria, recommended by the Pan American Health Organization (PAHO)/World Health Organization (WHO) Technical Advisory Group on Dietary Salt/Sodium Reduction, a consortium of governments, civil society, and food companies (the Salt Smart Consortium) agreed to an inaugural set of regional maximum targets (upper limits) for salt/sodium levels for 11 food categories, to be achieved by December 2016. Ultimately, to substantively reduce dietary salt across whole populations, targets will be needed for the majority of processed and pre-prepared foods. Cardiovascular and hypertension organizations are encouraged to utilize the regional targets in advocacy and in monitoring and evaluation of progress by the food industry.
An extreme value model for maximum wave heights based on weather types
Rueda, Ana; Camus, Paula; Méndez, Fernando J.; Tomás, Antonio; Luceño, Alberto
2016-02-01
Extreme wave heights are climate-related events. Therefore, special attention should be given to the large-scale weather patterns responsible for wave generation in order to properly understand wave climate variability. We propose a classification of weather patterns to statistically downscale daily significant wave height maxima to a local area of interest. The time-dependent statistical model obtained here is based on the convolution of the stationary extreme value model associated with each weather type. The interdaily dependence is treated by a climate-related extremal index. The model's ability to reproduce different time scales (daily, seasonal, and interannual) is presented by means of its application to three locations in the North Atlantic: Mayo (Ireland), La Palma Island, and Coruña (Spain).
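A stationary extreme-value fit of the kind the model convolves per weather type might look like this in outline (synthetic maxima with illustrative parameters, not the paper's data or its weather-type classification):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic block maxima of significant wave height (metres); in the paper,
# one stationary extreme-value model is fitted per weather type.
maxima = genextreme.rvs(c=-0.1, loc=6.0, scale=1.2, size=200,
                        random_state=rng)

# Maximum-likelihood GEV fit (scipy's c is the negated shape parameter).
shape, loc, scale = genextreme.fit(maxima)

# 50-year return level: the quantile exceeded on average once in 50 blocks.
rl50 = genextreme.ppf(1 - 1 / 50, shape, loc=loc, scale=scale)
```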
Kupczewska-Dobecka, Małgorzata; Soćko, Renata; Czerczak, Sławomir
2006-01-01
The aim of this work is to analyse Maximum Admissible Concentration (MAC) values proposed for irritants by the Group of Experts for Chemical Agents in Poland, based on the RD50 value. In 1994-2004, MAC values for irritants based on the RD50 value were set for 17 chemicals. For the purpose of the analysis, 1/10 RD50, 1/100 RD50 and the MAC/RD50 ratio were calculated. The determined MAC values are within the 0.01-0.09 RD50 range. The RD50 value is a good rough criterion for setting MAC values for irritants, and it makes it possible to quickly estimate admissible exposure levels. It has become clear that, in some cases, simply setting the MAC value for an irritant at the level of 0.03 RD50 may be insufficient to determine precisely the possible hazard to workers' health. Other available toxicological data, such as the NOAEL (No-Observed-Adverse-Effect Level) and LOAEL (Lowest-Observed-Adverse-Effect Level), should always be considered as well.
Ramachandran, Sudarshan; Strange, Richard C; Kalra, Seema; Nayak, Devaki; Zeegers, Maurice P; Gilford, Janice; Hawkins, Clive P
2013-04-01
The Expanded Disability Status Scale (EDSS) is clinically useful in assessing disability in multiple sclerosis (MS) patients. It is also used in studies to determine how genes and environment influence disability. However, since it has a complex relationship with functional scores and mobility, and is strongly determined by disease duration, its use can be limiting. We studied associations of variables with progression, described by the time from disease onset until a given EDSS value is reached. We used a variable based on below/above the median time from MS onset to reach a single EDSS value to define slow or fast progression. We compared patient categorization using this variable and MSSS, and in 533 patients (EDSS 1-8) and 242 of these patients with EDSS 1-4, studied associations with skin type, gender, ultraviolet radiation and MC1R Asp294His. Classifying patients into quartiles of slow/fast progression showed that mean MSSS increased with faster progression (pEDSS 1-8: MSSS, late onset age and childhood sunburning were associated with fast and MC1R CG/GG(294) with slow progression. Combinations of skin type (1/2 or 3/4) with childhood weekend exposure (EDSS 1-4, relative to other combinations, those with no sunburning history and types 1/2 demonstrated slow progression (odds ratio = 0.15, 95% CI = 0.04, 0.57). This method, though a pilot, allows study of associations of variables with EDSS. It is based on local patients and could substitute for MSSS. In patients with EDSS 1-4 but not 1-8, skin type 1/2 with no history of childhood sunburning was associated with slow progression. This is compatible with the view that disability develops through a first stage dependent on inflammation. Copyright © 2012. Published by Elsevier B.V.
Haijing Niu; Ping Guo; Xiaodong Song; Tianzi Jiang
2008-01-01
The sensitivity of diffuse optical tomography (DOT) imaging decreases exponentially with increasing photon penetration depth, which leads to poor depth resolution in DOT. In this letter, an exponential adjustment method (EAM) based on the maximum singular value of the layered sensitivity is proposed. Optimal depth resolution can be achieved by compensating for the reduced sensitivity in the deep medium. Simulations are performed using a semi-infinite model, and the results show that the EAM can substantially improve the depth resolution of deeply embedded objects in the medium. Consequently, the image quality and the reconstruction accuracy for these objects are largely improved.
Prado, Daniel R; Osipov, Andrey V; Quevedo-Teruel, Oscar
2015-03-15
Transformation optics with quasi-conformal mapping is applied to design a Generalized Maxwell Fish-eye Lens (GMFEL), which can be used as a power splitter. The flattened focal line obtained as a result of the transformation allows the lens to adapt to planar antenna feeding systems. Moreover, sub-unity refractive index regions are reduced because of the space-compression effect of the transformation, reducing the negative impact of removing those regions when implementing the lens. A technique to reduce the maximum value of the refractive index is presented to compensate for its increase due to the transformation. Finally, the lens is implemented with the bed-of-nails technology, employing a commercial dielectric slab to improve the range of the effective refractive index. The lens was simulated with a 3D full-wave simulator to validate the design, yielding an original and feasible power splitter based on a dielectric lens.
Uchiyama, Takanori; Minamitani, Haruyuki; Sakata, Makoto
1990-01-01
The complex maximum entropy method (MEM) and complex autoregressive model fitting with the singular value decomposition (SVD) method were applied to free induction decay signal data obtained with a Fourier transform nuclear magnetic resonance spectrometer to estimate superresolved NMR spectra. Practical estimation of superresolved spectra is demonstrated on phosphorus-31 nuclear magnetic resonance data. These methods provide sharp peaks and a high signal-to-noise ratio compared with the conventional fast Fourier transform. The SVD method was more suitable than the MEM for estimating superresolved NMR spectra because it allowed high-order estimation without spurious peaks, and the order and the rank were easy to determine.
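The SVD-based autoregressive fitting can be illustrated in outline. The sketch below is a generic real-valued least-squares AR fit solved through the SVD with optional rank truncation, not the authors' exact complex-valued implementation; the signal and the order/rank choices are illustrative:

```python
import numpy as np

def ar_fit_svd(x, order, rank=None):
    """Least-squares AR(order) fit solved via the SVD, so the rank can be
    truncated to suppress noise (the idea behind the SVD method here)."""
    x = np.asarray(x, dtype=float)
    # Each row holds `order` lagged samples; the target is the next sample.
    rows = len(x) - order
    A = np.array([x[i:i + order][::-1] for i in range(rows)])
    b = x[order:]
    u, s, vt = np.linalg.svd(A, full_matrices=False)
    if rank is not None:                       # truncate the noise subspace
        u, s, vt = u[:, :rank], s[:rank], vt[:rank]
    return vt.T @ ((u.T @ b) / s)              # pseudo-inverse solution

# A noisy sinusoid occupies a 2-dimensional signal subspace, so an
# over-specified AR(4) fit truncated to rank 2 still predicts it well.
t = np.arange(400)
x = np.sin(0.3 * t) + 0.01 * np.random.default_rng(0).normal(size=400)
a = ar_fit_svd(x, order=4, rank=2)
```

The AR coefficients then yield the spectrum from the model's transfer function, with peaks far sharper than the FFT of the same short record.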
Chung, Hyun Hoon; Kim, Jae Weon; Park, Noh-Hyun; Song, Yong-Sang; Kang, Soon-Beom [Seoul National University College of Medicine, Department of Obstetrics and Gynecology, Cancer Research Institute, Seoul (Korea); Nam, Byung-Ho [National Cancer Center, Division of Cancer Epidemiology and Management, Research Institute, Seoul (Korea); Kang, Keon Wook; Chung, June-Key [Seoul National University College of Medicine, Department of Nuclear Medicine, Seoul (Korea)
2010-08-15
To determine if preoperative [¹⁸F]FDG-PET/CT imaging has prognostic significance in patients with uterine cervical cancer. Patients with FIGO stage IB to IIA cervical cancer were imaged with integrated FDG PET/CT before radical surgery. The relationship between the maximum standardized uptake value (SUVmax) of FDG in the primary tumour during PET/CT and recurrence was examined. Included in the study were 75 patients. Medical records including clinical data, treatment modalities, and treatment results were retrospectively reviewed. The median duration of follow-up was 13 months (range 3 to 58 months) after treatment. Median preoperative SUVmax values in the primary tumours were significantly higher in patients with higher FIGO stages (p = 0.0149), pelvic lymph node metastasis (p = 0.0068), parametrial involvement (p = 0.0002), large (>4 cm) tumour size (p = 0.0022), presence of lymphovascular space invasion (p = 0.0055), and deep cervical stromal invasion (p < 0.0001). In univariate analysis, lymph node metastasis, parametrial invasion, presence of lymphovascular space invasion, and preoperative SUVmax (uncategorized values) in the primary tumour were significantly associated with recurrence. However, in multivariate analysis, preoperative SUVmax (p = 0.014, HR 1.178, 95% CI 1.034-1.342), age (p = 0.021, HR 0.87, 95% CI 0.772-0.980), and parametrial involvement (p = 0.040, HR 27.974, 95% CI 1.156-677.043) by primary tumour were significantly associated with recurrence. Preoperative FDG uptake by the primary tumour showed a significant association with recurrence in patients with uterine cervical cancer. (orig.)
NONE
2015-11-01
The book on the MAK (maximum permissible concentrations at the place of work) and BAT (biological tolerance values for working materials) value list 2015 includes the following chapters: (a) Maximum permissible concentrations at the place of work: definition, application and determination of MAK values, list of materials; carcinogenic working materials, sensitizing working materials, aerosols, limiting the exposition peaks, skin resorption, MAK values during pregnancy, germ cell mutagens, specific working materials; (b) Biological tolerance values for working materials: definition and application of BAT values, list of materials, carcinogenic working materials, biological guide values, biological working material reference values.
NONE
2013-08-01
The book on the MAK (maximum permissible concentrations at the place of work) and BAT (biological tolerance values for working materials) value list 2013 includes the following chapters: (a) Maximum permissible concentrations at the place of work: definition, application and determination of MAK values, list of materials; carcinogenic working materials, sensitizing working materials, aerosols, limiting the exposition peaks, skin resorption, MAK values during pregnancy, germ cell mutagens, specific working materials; (b) Biological tolerance values for working materials: definition and application of BAT values, list of materials, carcinogenic working materials, biological guide values, biological working material reference values.
NONE
2014-11-01
The book on the MAK (maximum permissible concentrations at the place of work) and BAT (biological tolerance values for working materials) value list 2014 includes the following chapters: (a) Maximum permissible concentrations at the place of work: definition, application and determination of MAK values, list of materials; carcinogenic working materials, sensitizing working materials, aerosols, limiting the exposition peaks, skin resorption, MAK values during pregnancy, germ cell mutagens, specific working materials; (b) Biological tolerance values for working materials: definition and application of BAT values, list of materials, carcinogenic working materials, biological guide values, biological working material reference values.
Ganau, Sergi, E-mail: sganau@tauli.cat [Women' s Imaging Department, UDIAT-Centre Diagnòstic, Institut Universitari Parc Taulí – UAB, Parc Taulí, 1, 08205 Sabadell, Barcelona (Spain); Andreu, Francisco Javier, E-mail: xandreu@tauli.cat [Pathology Department, UDIAT-Centre Diagnòstic, Institut Universitari Parc Taulí – UAB, Parc Taulí, 1, 08205 Sabadell, Barcelona (Spain); Escribano, Fernanda, E-mail: fescribano@tauli.cat [Women' s Imaging Department, UDIAT-Centre Diagnòstic, Institut Universitari Parc Taulí – UAB, Parc Taulí, 1, 08205 Sabadell, Barcelona (Spain); Martín, Amaya, E-mail: amartino@tauli.cat [Women' s Imaging Department, UDIAT-Centre Diagnòstic, Institut Universitari Parc Taulí – UAB, Parc Taulí, 1, 08205 Sabadell, Barcelona (Spain); Tortajada, Lidia, E-mail: ltortajada@tauli.cat [Women' s Imaging Department, UDIAT-Centre Diagnòstic, Institut Universitari Parc Taulí – UAB, Parc Taulí, 1, 08205 Sabadell, Barcelona (Spain); Villajos, Maite, E-mail: mvillajos@tauli.cat [Women' s Imaging Department, UDIAT-Centre Diagnòstic, Institut Universitari Parc Taulí – UAB, Parc Taulí, 1, 08205 Sabadell, Barcelona (Spain); and others
2015-04-15
Highlights: •Shear-wave elastography provides a quantitative assessment of the hardness of breast lesions. •The hardness of breast lesions correlates with lesion size: larger lesions are harder than smaller ones. •Histologic type and grade do not correlate clearly with elastography parameters. •HER2, luminal B HER2+, and triple-negative tumors have lower maximum hardness and mean hardness than other tumor types. •Half the tumors classified as BI-RADS 3 were luminal A and half were HER2. -- Abstract: Purpose: To evaluate the correlations of maximum stiffness (Emax) and mean stiffness (Emean) of invasive carcinomas on shear-wave elastography (SWE) with St. Gallen consensus tumor phenotypes. Methods: We used an ultrasound system with SWE capabilities to prospectively study 190 women with 216 histologically confirmed invasive breast cancers. We obtained one elastogram for each lesion. We correlated Emax and Emean with tumor size, histologic type and grade, estrogen and progesterone receptors, HER2 expression, the Ki67 proliferation index, and the five St. Gallen molecular subtypes: luminal A, luminal B without HER2 overexpression (luminal B HER2−), luminal B with HER2 overexpression (luminal B HER2+), HER2, and triple negative. Results: Lesions larger than 20 mm had significantly higher Emax (148.04 kPa) and Emean (118.32 kPa) (P = 0.005) than smaller lesions. We found no statistically significant correlations between elasticity parameters and histologic type and grade or molecular subtypes, although tumors with HER2 overexpression, regardless of whether they expressed hormone receptors (luminal B HER2+ and HER2 phenotypes), and triple-negative tumors had lower Emax and Emean than the others. We assessed the B-mode ultrasound findings of the lesions with Emax or Emean values less than or equal to 80 kPa; only four of these had ultrasound findings suggestive of a benign lesion (two with luminal A phenotype and two with HER2 phenotype). Conclusions: We
NONE
2017-08-01
The MAK and BAT values list 2017 includes the maximum permissible concentrations at the place of work and biological tolerance values for working materials. The following working materials are covered: carcinogenic working materials, sensitizing materials and aerosols. The report discusses the restriction of exposure peaks, skin resorption, MAK (maximum working place concentration) values during pregnancy, germ cell mutagens and specific working materials. Importance and application of BAT (biological working material tolerance) values, list of materials, carcinogens, biological guide values and reference values are also included.
Romero, Claudia; Mesa, Duvan
2015-04-01
L-moments regional frequency analysis methodology applied to maximum rainfall values over the Bogotá River basin. The application area of this methodology is the Bogotá River basin, located in Cundinamarca, a Colombian department with a total surface area of 589,143 hectares. This basin includes 19 sub-basins and is the most densely urbanized in the country. Including its metropolitan area, this region has a population of about 9,000,000 inhabitants, approximately 23% of Colombia's population, and holds around 19% of the country's industries. The basin has shown a notable increase in the frequency of damaging floods in recent years due to climatic variations. These climatic periods correspond to the weather pattern called the La Niña phenomenon (2010-2011), which affected 57,000 citizens in this department and 4,900 people directly in Bogotá, with estimated economic damage of 277'121,052 USD. Regional frequency analysis is a statistical procedure that pools information from multiple samples into a single large sample, under the prior assumption that all of them come from the same probability model except for a difference due to a scale factor. These samples are defined by a "regionalization" procedure known as the "flood index" method. This procedure groups several kinds of information that come from a common probability model, such as temperature, rainfall, and water flow; the model must be similar for all of the weather stations located in a homogeneous region. Maps for each of four return periods (5, 10, 50 and 100 years) were developed based on 120 weather stations located in this basin. The information used in this process comes from median monthly rainfall data, based on historical series averaging between 30 and 40 years. An increase in the annual median rainfall was
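Sample L-moments, the building blocks of this regional methodology, can be computed directly from probability-weighted moments; a sketch (synthetic Gumbel-distributed maxima, not the basin's rainfall data):

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments (l1, l2, l3) via the standard
    unbiased probability-weighted moment estimators b0, b1, b2."""
    x = np.sort(np.asarray(x, dtype=float))    # ascending order statistics
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0

# 40 synthetic annual maxima; l1 is the mean, l2 an L-scale, and the
# ratio l3/l2 (L-skewness) is what homogeneity tests compare across sites.
l1, l2, l3 = sample_l_moments(np.random.default_rng(1).gumbel(50, 10, 40))
```

In the regional step, at-site maxima are rescaled by the index flood and the pooled L-moment ratios select and fit the regional growth curve.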
Turner, Eve; Hawkins, Peter
2016-01-01
.... This article presents the results and implications of an international study which explored its use in executive and business coaching, with the aim of sharing best practice and achieving maximum...
Vasile Cojocaru
2016-12-01
Several methods can be used in FEM studies to apply the loads on a plain bearing. The paper presents a comparative analysis of the maximum stress obtained for three loading scenarios: a resultant force applied on the shaft-bearing assembly, a variable pressure with sinusoidal distribution applied on the bearing surface, and a variable pressure with parabolic distribution applied on the bearing surface.
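The difference between the pressure scenarios can be illustrated numerically: for the same vertical resultant force, sinusoidal and parabolic profiles imply different peak pressures. A rough sketch, assuming loading over the half-circumference and illustrative bearing dimensions (not taken from the paper):

```python
import math

def peak_pressure(force, radius, length, profile):
    """Peak pressure p0 such that the vertical resultant of
    p(theta) = p0 * profile(theta) over theta in [-pi/2, pi/2]
    equals `force` (midpoint-rule integration of profile*cos)."""
    n = 10_000
    s = 0.0
    for k in range(n):
        th = -math.pi / 2 + (k + 0.5) * math.pi / n
        s += profile(th) * math.cos(th) * (math.pi / n)
    return force / (radius * length * s)

# 10 kN on a 50 mm radius, 80 mm long bearing (illustrative values).
sin_p0 = peak_pressure(10_000.0, 0.05, 0.08, math.cos)
par_p0 = peak_pressure(10_000.0, 0.05, 0.08,
                       lambda th: 1.0 - (2 * th / math.pi) ** 2)
```

The sinusoidal profile needs a slightly higher peak than the parabolic one to carry the same load, which is one source of the differing maximum stresses.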
Maceda-Veiga, Alberto; Baselga, Andrés; Sousa, Ronaldo; Vilà, Montserrat; Doadrio, Ignacio; de Sostoa, Adolfo
2017-01-01
Global freshwater biodiversity is declining at unprecedented rates while non-native species are expanding. Examining diversity patterns across variable river conditions can help develop better management strategies. However, many indicators can be used to determine the conservation value of aquatic communities, and little is known of how well they correlate with each other in making diagnostics, including when testing for the efficacy of protected areas. Using an extensive data set (99,700 km², n = 530 sites) across protected and unprotected river reaches in 15 catchments of NE Spain, we examine correlations among 20 indicators of conservation value of fish communities, including the benefits they provide to birds and threatened mammals and mussels. Our results showed that total native fish abundance or richness correlated reasonably well with many native indicators. However, the lack of a strong congruence led modelling techniques to identify different river attributes for each indicator of conservation value. Overall, tributaries were identified as native fish refuges, and nutrient pollution, salinization, low water velocity and poor habitat structure as major threats to the native biota. We also found that protected areas offered limited coverage to major components of biodiversity, including rarity, threat and host-parasite relationships, even though values of non-native indicators were notably reduced. In conclusion, restoring natural hydrological regimes and water chemical status is a priority to stem freshwater biodiversity loss in this region. A complementary action can be the protection of tributaries, but more studies examining multiple components of diversity are necessary to fully test their potential as fluvial reserves in Mediterranean climate areas. Copyright © 2016 Elsevier B.V. All rights reserved.
Nowak, Karina; Sobota, Grzegorz; Bacik, Bogdan; Hajduk, Grzegorz; Kusz, Damian
2012-01-01
The aim of this study was to check whether there was a correlation between the value of the maximum torque developed by the quadriceps femoris muscle and the subjective evaluation of a patient's pain as measured by the VAS. Changes in the muscle torque value and in the KSS scale over time were also evaluated. The patients' condition was examined with the KSS scale (knee score: pain, range of motion, joint stability and limb axis) before surgery and in weeks 6 and 12, as well as 6 months, after surgery. It was found to improve steadily in comparison with the condition before surgery, which is confirmed by a statistically significant difference in KSS scale values. The surgery substantially increases quality of life and recovery of function.
The Solution to the Maximum Value of a Special Class of Trigonometric Functions
周桂如
2015-01-01
For the maximum value of the trigonometric function 3sinA + 4sinB + 18sinC with specific coefficients in △ABC, three methods (stepwise analysis, the Lagrange multiplier method, and inequalities) are first applied, all leading to the same result. Then, for the general trigonometric function asinA + bsinB + csinC with coefficients a, b, c ∈ R+, the Lagrange multiplier method is used to derive the maximum value. Finally, the extreme value of the trigonometric function acosA + bcosB + ccosC with coefficients a, b, c ∈ R+ is derived.
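The Lagrange-multiplier step for the general case can be sketched as follows (the standard first-order condition, assuming an interior maximum):

```latex
% Maximise f(A,B,C) = a\sin A + b\sin B + c\sin C subject to A + B + C = \pi:
\mathcal{L} = a\sin A + b\sin B + c\sin C - \lambda\,(A + B + C - \pi).
% Stationarity \partial\mathcal{L}/\partial A = \partial\mathcal{L}/\partial B
% = \partial\mathcal{L}/\partial C = 0 gives
a\cos A = b\cos B = c\cos C = \lambda,
% which, together with A + B + C = \pi, determines the maximising angles.
```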
Properties of the Maximum Probability Value of the Negative Binomial Distribution
丁勇
2016-01-01
The properties of the maximum probability value of the negative binomial distribution were explored. This maximum is a function of p and r, where p is the probability of success in each trial and r is the index of the first successful trial. For fixed r it is a monotonically increasing continuous function of p, whose derivative fails to exist only when (r-1)/p is an integer; for fixed p it is a monotonically decreasing function of r.
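The monotonicity in p can be checked numerically. Note that scipy's `nbinom` uses a different convention (k counts failures before the r-th success rather than the trial index used in the abstract), but the maximum pmf value behaves the same way:

```python
from scipy.stats import nbinom

def max_pmf(r, p, k_max=2000):
    """Maximum probability of the negative binomial distribution
    (scipy convention: k = number of failures before the r-th success)."""
    return max(nbinom.pmf(k, r, p) for k in range(k_max))

# The stated property: for fixed r, the maximum probability
# increases monotonically with p.
vals = [max_pmf(5, p) for p in (0.2, 0.4, 0.6, 0.8)]
```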
Sürer Budak, Evrim; Toptaş, Tayfun; Aydın, Funda; Öner, Ali Ozan; Çevikol, Can; Şimşek, Tayup
2017-02-05
To explore the correlation of the primary tumor's maximum standardized uptake value (SUVmax) and minimum apparent diffusion coefficient (ADCmin) with clinicopathologic features, and to determine their predictive power in endometrial cancer (EC). A total of 45 patients who had undergone staging surgery after a preoperative evaluation with (18)F-fluorodeoxyglucose (FDG) positron emission tomography/computerized tomography (PET/CT) and diffusion-weighted magnetic resonance imaging (DW-MRI) were included in a prospective case-series study with planned data collection. Multiple linear regression analysis was used to determine the correlations between the study variables. The mean ADCmin and SUVmax values were determined as 0.72±0.22 and 16.54±8.73, respectively. A univariate analysis identified age, myometrial invasion (MI) and lymphovascular space involvement (LVSI) as the potential factors associated with ADCmin while it identified age, stage, tumor size, MI, LVSI and number of metastatic lymph nodes as the potential variables correlated to SUVmax. In multivariate analysis, on the other hand, MI was the only significant variable that correlated with ADCmin (p=0.007) and SUVmax (p=0.024). Deep MI was best predicted by an ADCmin cutoff value of ≤0.77 [93.7% sensitivity, 48.2% specificity, and 93.0% negative predictive value (NPV)] and SUVmax cutoff value of >20.5 (62.5% sensitivity, 86.2% specificity, and 81.0% NPV); however, the two diagnostic tests were not significantly different (p=0.266). Among clinicopathologic features, only MI was independently correlated with SUVmax and ADCmin. However, the routine use of (18)F-FDG PET/CT or DW-MRI cannot be recommended at the moment due to less than ideal predictive performances of both parameters.
The Maximum Power of a Wind Power System Based on the Extreme Value Method
陆玲黎; 吴雷
2011-01-01
To address the maximum-power problem of a wind power generation system, this paper proposes a method based on extremum seeking to capture the maximum power. The working principle and power characteristics of the wind turbine are analysed, and the main factors affecting the power are discussed. From an analysis of the basic theory and characteristics of the extremum-seeking method, combined with the system's working principle, the power curve is shown to be a concave function of the duty cycle; extremum seeking therefore controls the duty cycle to improve the efficiency of wind-energy capture, and further refinements improve its disturbance rejection and stability. Experimental results demonstrate the feasibility of the method.
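The duty-cycle hill-climbing idea can be sketched with a toy concave power curve. This is a bare perturb-and-observe outline, not the paper's improved extremum-seeking controller, and the curve shape and optimum at d = 0.6 are illustrative assumptions:

```python
def power(d):
    """Toy concave power-vs-duty-cycle curve; the real curve comes from
    the turbine and converter. Maximum assumed at d = 0.6."""
    return max(0.0, 1.0 - 4.0 * (d - 0.6) ** 2)

def perturb_and_observe(d=0.2, step=0.01, iters=200):
    """Hill-climbing on the duty cycle: keep stepping in the direction
    that last increased power, reverse otherwise."""
    p_prev, direction = power(d), 1.0
    for _ in range(iters):
        d = min(1.0, max(0.0, d + direction * step))
        p = power(d)
        if p < p_prev:
            direction = -direction   # power fell: search the other way
        p_prev = p
    return d

d_final = perturb_and_observe()
```

The controller converges to a small oscillation around the optimum; the paper's refinements aim to damp exactly that oscillation and reject disturbances.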
Carkaci, Selin; Adrada, Beatriz E; Rohren, Eric; Wei, Wei; Quraishi, Mohammad A; Mawlawi, Osama; Buchholz, Thomas A; Yang, Wei
2012-05-01
The aim of this study was to determine an optimum standardized uptake value (SUV) threshold for identifying regional nodal metastasis on 18F-fluorodeoxyglucose (FDG) positron emission tomographic (PET)/computed tomographic (CT) studies of patients with inflammatory breast cancer. A database search was performed of patients newly diagnosed with inflammatory breast cancer who underwent 18F-FDG PET/CT imaging at the time of diagnosis at a single institution between January 1, 2001, and September 30, 2009. Three radiologists blinded to the histopathology of the regional lymph nodes retrospectively analyzed all 18F-FDG PET/CT images by measuring the maximum SUV (SUVmax) in visually abnormal nodes. The accuracy of 18F-FDG PET/CT image interpretation was correlated with histopathology when available. Receiver-operating characteristic curve analysis was performed to assess the diagnostic performance of PET/CT imaging. Sensitivity, specificity, positive predictive value, and negative predictive value were calculated using three different SUV cutoff values (2.0, 2.5, and 3.0). A total of 888 regional nodal basins, including bilateral axillary, infraclavicular, internal mammary, and supraclavicular lymph nodes, were evaluated in 111 patients (mean age, 56 years). Of the 888 nodal basins, 625 (70%) were negative and 263 (30%) were positive for metastasis. Malignant lymph nodes had significantly higher SUVmax than benign lymph nodes (P lymph nodes on 18F-FDG PET/CT imaging may help differentiate benign and malignant lymph nodes in patients with inflammatory breast cancer. An SUV cutoff of 2 provided the best accuracy in identifying regional nodal metastasis in this patient population. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.
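Choosing an SUV cutoff from ROC data amounts to scanning candidate thresholds; a sketch with synthetic values (the distributions, means and sample sizes are illustrative, not the study's data, and the Youden index is one common criterion rather than necessarily the authors'):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic SUVmax values: benign nodes lower on average than malignant.
benign = rng.lognormal(mean=0.2, sigma=0.4, size=300)
malignant = rng.lognormal(mean=1.2, sigma=0.5, size=150)

def youden_best_cutoff(neg, pos):
    """Pick the threshold maximising sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in np.unique(np.concatenate([neg, pos])):
        sens = np.mean(pos >= t)   # malignant called positive
        spec = np.mean(neg < t)    # benign called negative
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

cutoff, j = youden_best_cutoff(benign, malignant)
```

In the study, the same kind of scan was summarised at three fixed cutoffs (2.0, 2.5, 3.0), with 2.0 giving the best accuracy.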
Kus T
2015-12-01
Tulay Kus,1 Gokmen Aktas,1 Alper Sevinc,1 Mehmet Emin Kalender,1 Mustafa Yilmaz,2 Seval Kul,3 Serdar Oztuzcu,4 Cemil Oktay,5 Celaletdin Camci1 1Department of Internal Medicine, Division of Medical Oncology, Gaziantep Oncology Hospital, 2Department of Nuclear Medicine, 3Department of Biostatistics, Faculty of Medicine, 4Department of Medical Biology, Faculty of Medicine, University of Gaziantep, Gaziantep, 5Department of Radiology, Faculty of Medicine, University of Akdeniz, Antalya, Turkey Purpose: To investigate whether the initial maximum standardized uptake value (SUVmax) on fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) has prognostic significance in metastatic lung adenocarcinoma. Patients and methods: Sixty patients (24 females, mean age: 57.9±12 years) with metastatic stage lung adenocarcinoma who used erlotinib and underwent 18F-FDG PET/CT at the time of diagnosis between May 2010 and May 2014 were enrolled in this retrospective study. The patients were stratified according to the median SUVmax value, which was found to be 11. Progression-free survival (PFS) rates for 3, 6, and 12 months were examined for SUVmax values and epidermal growth factor receptor (EGFR) mutation status. Results: The number of EGFR-sensitizing mutation positive/negative/unknown was 26/17/17, respectively, and the number of patients using erlotinib at first-line, second-line, and third-line therapy was 15, 31, and 14, respectively. The PFS rates of EGFR mutation positive, negative, and unknown patients for 3 months were 73.1%, 35.3%, and 41.2% (P=0.026, odds ratio [OR]=4.39; 95% confidence interval [CI]: 1.45–13.26), respectively. The PFS rates of EGFR positive, negative, and unknown patients for 6 months were 50%, 29.4%, and 29.4% (P=0.267, OR: 2.4; 95% CI: 0.82–6.96), respectively. The PFS rates of EGFR positive, negative, and unknown patients for 12 months were 42.3%, 29.4%, 23.5% (P=0.408, OR: 2.0; 95% CI: 0.42
Amos JM Ela Bella; Ya-Rui Zhang; Wei Fan; Kong-Jia Luo; Tie-Hua Rong; Peng Lin; Hong Yang; Jian-Hua Fu
2014-01-01
The presence of lymph node metastasis is an important prognostic factor for patients with esophageal cancer. Accurate assessment of lymph nodes in thoracic esophageal carcinoma is essential for selecting appropriate treatment and forecasting disease progression. Positron emission tomography combined with computed tomography (PET/CT) is becoming an important tool in the workup of esophageal carcinoma. Here, we evaluated the effectiveness of the maximum standardized uptake value (SUVmax) in assessing lymph node metastasis in esophageal squamous cell carcinoma (ESCC) prior to surgery. Fifty-nine surgical patients with pathologically confirmed thoracic ESCC were retrospectively studied. These patients underwent radical esophagectomy with pathologic evaluation of lymph nodes. They all had 18F-FDG PET/CT scans in their preoperative staging procedures. None had a prior history of cancer. The pathologic status and PET/CT SUVmax of lymph nodes were collected to calculate the receiver operating characteristic (ROC) curve and to determine the best cutoff value of the PET/CT SUVmax to distinguish benign from malignant lymph nodes. Lymph node data from 27 others were used for the validation. A total of 323 lymph nodes including 39 metastatic lymph nodes were evaluated in the training cohort, and 117 lymph nodes including 32 metastatic lymph nodes were evaluated in the validation cohort. The cutoff point of the SUVmax for lymph nodes was 4.1, as calculated by ROC curve (sensitivity, 80%; specificity, 92%; accuracy, 90%). When this cutoff value was applied to the validation cohort, a sensitivity, a specificity, and an accuracy of 81%, 88%, and 86%, respectively, were obtained. These results suggest that the SUVmax of lymph nodes predicts malignancy. Indeed, when an SUVmax of 4.1 was used instead of 2.5, FDG-PET/CT was more accurate in assessing nodal metastasis.
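The abstract does not state how the ROC curve yielded the 4.1 cutoff; a common criterion (assumed here, not confirmed by the source) is maximizing the Youden index, sensitivity + specificity - 1, over candidate thresholds. A sketch on synthetic data:

```python
# Hedged sketch of ROC-based cutoff selection via the Youden index. The
# SUVmax values and metastasis labels below are synthetic, not the study's.

def best_cutoff(values, labels):
    """Return the threshold maximizing sensitivity + specificity - 1."""
    best, best_j = None, -1.0
    pos = sum(labels)               # number of metastatic nodes
    neg = len(labels) - pos         # number of benign nodes
    for c in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= c and y)
        tn = sum(1 for v, y in zip(values, labels) if v < c and not y)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best, best_j = c, j
    return best

suv = [1.0, 2.0, 2.5, 3.0, 4.1, 5.0, 6.2, 3.5]
met = [0, 0, 0, 0, 1, 1, 1, 0]
cut = best_cutoff(suv, met)
```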
van Nies, Jessica A. B.; Alves, Celina; Radix-Bloemen, Audrey L. S.; Gaujoux-Viala, Cecile; Huizinga, Tom W. J.; Hazes, Johanna M. W.; Brouwer, Elisabeth; Fautrel, Bruno; van der Helm-van Mil, Annette H. M.
2015-01-01
Introduction: Morning stiffness is assessed daily in the diagnostic process of arthralgia and arthritis, but large-scale studies on the discriminative ability are absent. This study explored the diagnostic value of morning stiffness in 5,202 arthralgia and arthritis patients and the prognostic value
Rodrigues, Elsa Teresa; Pardal, Miguel Ângelo; Gante, Cristiano; Loureiro, João; Lopes, Isabel
2017-02-01
The main goal of the present study was to determine and validate an aquatic Maximum Acceptable Concentration-Environmental Quality Standard (MAC-EQS) value for the agricultural fungicide azoxystrobin (AZX). Assessment factors were applied to short-term toxicity data using the lowest EC50 and after the Species Sensitivity Distribution (SSD) method. Both ways of EQS generation were applied to a freshwater toxicity dataset for AZX based on available data, and to marine toxicity datasets for AZX and Ortiva(®) (a commercial formulation of AZX) obtained by the present study. High interspecific variability in AZX sensitivity was observed in all datasets, with the copepod Eudiaptomus graciloides (LC50,48h = 38 μg L(-1)) and the gastropod Gibbula umbilicalis (LC50,96h = 13 μg L(-1)) being the most sensitive freshwater and marine species, respectively. MAC-EQS values derived using the lowest EC50 (≤0.38 μg L(-1)) were more protective than those derived using the SSD method (≤3.2 μg L(-1)). After comparing the MAC-EQS values estimated in the present study to the smallest AA-EQS available, which protects against prolonged exposure to AZX, the MAC-EQS values derived using the lowest EC50 were considered overprotective, and a MAC-EQS of 1.8 μg L(-1) was validated and recommended for AZX for the water column. This value was derived from marine toxicity data, which highlights the importance of testing marine organisms. Moreover, Ortiva affects the most sensitive marine species to a greater extent than AZX, and marine species are more sensitive than freshwater species to AZX. A risk characterization ratio higher than one allowed us to conclude that AZX might pose a high risk to the aquatic environment. In a wider conclusion, before new pesticides are approved, we suggest improving the Tier 1 prospective Ecological Risk Assessment by increasing the number of short-term data and applying the SSD approach, in order to ensure the safety of
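The two derivation routes compared above can be sketched numerically: dividing the lowest EC50 by a large assessment factor, versus fitting a log-normal Species Sensitivity Distribution and dividing its 5th percentile (HC5) by a smaller factor. The EC50 set and assessment factors below are illustrative assumptions, not the study's data:

```python
# Hedged sketch of the two EQS-derivation routes. EC50 values (ug/L) and the
# assessment factors 100 and 10 are illustrative, not from the study.
from math import log10
from statistics import NormalDist, mean, stdev

ec50 = [13.0, 38.0, 120.0, 250.0, 480.0, 900.0]  # synthetic species set

# Route (a): lowest EC50 divided by a conservative assessment factor
mac_lowest = min(ec50) / 100.0

# Route (b): log-normal SSD fitted to log10(EC50); HC5 is the concentration
# hazardous to 5% of species, then a smaller assessment factor is applied
logs = [log10(x) for x in ec50]
hc5 = 10 ** NormalDist(mean(logs), stdev(logs)).inv_cdf(0.05)
mac_ssd = hc5 / 10.0
```

As in the study, the lowest-EC50 route comes out far more protective (lower) than the SSD route.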
Kus, Tulay; Aktas, Gokmen; Sevinc, Alper; Kalender, Mehmet Emin; Yilmaz, Mustafa; Kul, Seval; Oztuzcu, Serdar; Oktay, Cemil; Camci, Celaletdin
2015-01-01
Purpose To investigate whether the initial maximum standardized uptake value (SUVmax) on fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) has prognostic significance in metastatic lung adenocarcinoma. Patients and methods Sixty patients (24 females, mean age: 57.9±12 years) with metastatic stage lung adenocarcinoma who used erlotinib and underwent 18F-FDG PET/CT at the time of diagnosis between May 2010 and May 2014 were enrolled in this retrospective study. The patients were stratified according to the median SUVmax value, which was found to be 11. Progression-free survival (PFS) rates for 3, 6, and 12 months were examined for SUVmax values and epidermal growth factor receptor (EGFR) mutation status. Results The number of EGFR-sensitizing mutation positive/negative/unknown was 26/17/17, respectively, and the number of patients using erlotinib at first-line, second-line, and third-line therapy was 15, 31, and 14, respectively. The PFS rates of EGFR mutation positive, negative, and unknown patients for 3 months were 73.1%, 35.3%, and 41.2% (P=0.026, odds ratio [OR]=4.39; 95% confidence interval [CI]: 1.45–13.26), respectively. The PFS rates of EGFR positive, negative, and unknown patients for 6 months were 50%, 29.4%, and 29.4% (P=0.267, OR: 2.4; 95% CI: 0.82–6.96), respectively. The PFS rates of EGFR positive, negative, and unknown patients for 12 months were 42.3%, 29.4%, 23.5% (P=0.408, OR: 2.0; 95% CI: 0.42–5.26), respectively. Thirty-one of 60 patients had SUVmax values ≤11. The PFS rates for 3 and 6 months were 70.5%/28% (P=0.001, OR=9.0; 95% CI: 2.79–29.04) and 61.7%/8%, respectively, in the low SUVmax (≤11) versus high SUVmax (>11) groups. Conclusion Initial SUVmax value on 18F-FDG PET/CT is found to be a prognostic factor anticipating the response to erlotinib for 3, 6, and 12-month rates of PFS in both the EGFR-sensitizing mutation and wild-type tumor groups. PMID:26719702
Hoeij, F.B. van; Stadhouders, P.H.G.M.; Weusten, B.L.A.M. [St Antonius Ziekenhuis, Department of Gastroenterology, Nieuwegein (Netherlands); Keijsers, R.G.M. [St Antonius Ziekenhuis, Department of Nuclear Medicine, Nieuwegein (Netherlands); Loffeld, B.C.A.J. [Zuwe Hofpoort Ziekenhuis, Department of Internal Medicine, Woerden (Netherlands); Dun, G. [Ziekenhuis Rivierenland, Department of Internal Medicine, Tiel (Netherlands)
2015-01-15
In patients undergoing 18F-FDG PET/CT, incidental colonic focal lesions can be indicative of inflammatory, premalignant or malignant lesions. The maximum standardized uptake value (SUVmax) of these lesions, representing the FDG uptake intensity, might be helpful in differentiating malignant from benign lesions, and thereby be helpful in determining the urgency of colonoscopy. The aim of our study was to assess the incidence and underlying pathology of incidental PET-positive colonic lesions in a large cohort of patients, and to determine the usefulness of the SUVmax in differentiating benign from malignant pathology. The electronic records of all patients who underwent FDG PET/CT from January 2010 to March 2013 in our hospital were retrospectively reviewed. The main indications for PET/CT were: characterization of an indeterminate mass on radiological imaging, suspicion or staging of malignancy, and suspicion of inflammation. In patients with incidental focal FDG uptake in the large bowel, data regarding subsequent colonoscopy were retrieved, if performed within 120 days. The final diagnosis was defined using colonoscopy findings, combined with additional histopathological assessment of the lesion, if applicable. Of 7,318 patients analysed, 359 (5 %) had 404 foci of unexpected colonic FDG uptake. In 242 of these 404 lesions (60 %), colonoscopy follow-up data were available. Final diagnoses were: adenocarcinoma in 25 (10 %), adenoma in 90 (37 %), and benign in 127 (53 %). The median [IQR] SUVmax was significantly higher in adenocarcinoma (16.6 [12 - 20.8]) than in benign lesions (8.2 [5.9 - 10.1]; p < 0.0001), non-advanced adenoma (8.3 [6.1 - 10.5]; p < 0.0001) and advanced adenoma (9.7 [7.2 - 12.6]; p < 0.001). The receiver operating characteristic curve of SUVmax for malignant versus nonmalignant lesions had an area under the curve of 0.868 (SD ± 0.038), the optimal cut-off value being 11.4 (sensitivity 80 %, specificity 82
John S. Herold
2008-01-01
Global M&A upstream deal count and asset deal value both reached record highs in 2007, although total transaction value slipped to just under US$154 billion from US$166 billion in 2006, according to the 2008 Global Upstream M&A Review prepared by John S. Herold, Inc., an IHS company (NYSE: IHS), and Harrison Lovegrove & Co., Ltd., a Standard Chartered group company.
Qi Shi
To find out the most valuable parameter of 18F-fluorodeoxyglucose positron emission tomography for predicting distant metastasis in nasopharyngeal carcinoma. From June 2007 through December 2010, 43 non-metastatic NPC patients who underwent 18F-fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) before radical intensity-modulated radiation therapy were enrolled and reviewed retrospectively. PET parameters including maximum standardized uptake value (SUVmax), mean standardized uptake value (SUVmean), metabolic tumor volume (MTV), and total lesion glucose (TLG) of both the primary tumor and cervical lymph nodes were calculated. Total SUVmax was recorded as the sum of the SUVmax of the primary tumor and cervical lymph nodes. Total SUVmean, Total MTV, and Total TLG were calculated in the same way as Total SUVmax. The median follow-up was 32 months (range, 23-68 months). Distant metastasis was the main pattern of treatment failure. Univariate analysis showed that higher SUVmax, SUVmean, MTV, and TLG of the primary tumor, Total SUVmax, Total MTV, Total TLG, and stage T3-4 were factors predicting significantly poorer distant metastasis-free survival (p = 0.042, p = 0.008, p = 0.023, p = 0.023, p = 0.024, p = 0.033, p = 0.016, p = 0.015). In multivariate analysis, Total SUVmax was the independent predictive factor for distant metastasis (p = 0.046). Spearman rank correlation analysis showed moderate to strong correlation between Total SUVmax and SUVmax-T, and between Total SUVmax and SUVmax-N (Spearman coefficients: 0.568 and 0.834; p = 0.000 and p = 0.000). Preliminary results indicated that Total SUVmax was an independent predictive factor for distant metastasis in patients with nasopharyngeal carcinoma treated with intensity-modulated radiation therapy.
Iskender, Ilker; Kadioglu, Salih Zeki; Kosar, Altug; Atasalihi, Ali; Kir, Altan
2011-06-01
The maximum standardized uptake value (SUV(max)) varies among positron emission tomography-integrated computed tomography (PET/CT) centers in the staging of non-small cell lung cancer. We evaluated the ratio of the optimum SUV(max) cut-off for the lymph nodes to the median SUV(max) of the primary tumor (ratioSUV(max)) to determine SUV(max) variations between PET/CT scanners. The previously described PET predictive ratio (PPR) was also evaluated. PET/CT and mediastinoscopy and/or thoracotomy were performed on 337 consecutive patients between September 2005 and March 2009. Thirty-six patients were excluded from the study. The pathological results were correlated with the PET/CT findings. Histopathological examination was performed on 1136 N2 lymph nodes using 10 different PET/CT centers. The majority of patients (group A: 240) used the same PET/CT scanner at four different centers. The other patients were categorized as group B. The ratioSUV(max) for groups A and B was 0.18 and 0.22, respectively. The same ratio for centers 1, 2, 3 and 4 was 0.2, 0.21, 0.21, and 0.23, respectively. The optimal cut-off value of the PPR to predict mediastinal lymph node pathology for malignancy was 0.49 (likelihood ratio +2.02; sensitivity 70%, specificity 65%). We conclude that the ratioSUV(max) was similar for different scanners; thus, the ratioSUV(max) is a valuable cut-off for comparing centers.
Schmidt, Matthias; Dietlein, Markus; Kobe, Carsten; Eschner, Wolfgang; Schicha, Harald [University of Cologne, Department of Nuclear Medicine, Cologne (Germany); Bollschweiler, Elfriede; Moenig, Stefan P.; Vallboehmer, Daniel; Hoelscher, Arnulf [University of Cologne, Department of General-, Visceral and Cancer Surgery, Cologne (Germany)
2009-05-15
To evaluate the potential of [18F]fluorodeoxyglucose positron emission tomography (FDG-PET) for the assessment of histopathological response and survival after neoadjuvant radiochemotherapy in patients with oesophageal cancer. In 2005 and 2006, 55 patients (43 men, 12 women; median age 60 years) with locally advanced oesophageal cancer (cT3-4 Nx M0; 24 with squamous cell carcinoma, 31 with adenocarcinoma) underwent transthoracic en bloc oesophagectomy after completion of treatment with cisplatin, 5-fluorouracil, and radiotherapy to 36 Gy in a prospective clinical trial. Of the 55 patients, 21 (38%) were classified as histopathological responders (<10% vital residual tumour cells) and 34 (62%) as nonresponders. FDG-PET was performed before (PET 1) and 3-4 weeks after the end (PET 2) of radiochemotherapy with assessment of maximum and average standardized uptake values (SUV) for correlation with histopathological response and survival. Histopathological responders had a slightly higher baseline SUV than nonresponders (p<0.0001 between PET 1 and PET 2 for responders and nonresponders) and the decrease was more prominent in responders. Except for SUVmax in patients with squamous cell carcinoma, neither baseline nor preoperative SUV nor percent SUV reduction correlated significantly with histopathological response. Histopathological responders had a 2-year overall survival of 91 ± 9% and nonresponders a survival of 53 ± 10% (p = 0.007). Our study does not support recent reports that FDG-PET predicts histopathological response and survival in patients with locally advanced oesophageal cancer treated by neoadjuvant radiochemotherapy.
J.A.B. van Nies (Jessica A.B.); C. Alves (Celina); A.L.S. Radix-Bloemen (Audrey L.S.); C. Gaujoux-Viala (Cécile); T.W.J. Huizinga (Tom); J.M.W. Hazes (Mieke); E. Brouwer (Eric); B. Fautrel (Bruno); Mil, A.H.M.H.-V. (Annette H.M. van der Helm-van)
2015-01-01
Introduction: Morning stiffness is assessed daily in the diagnostic process of arthralgia and arthritis, but large-scale studies on the discriminative ability are absent. This study explored the diagnostic value of morning stiffness in 5,202 arthralgia and arthritis patients and the prognostic value
Beenakker, EAC; van der Hoeven, JH; Fock, JM; Maurits, NM
2001-01-01
Since muscle force and functional ability are not related linearly, maximum force can be reduced while functional ability is still maintained. For diagnostic and therapeutic reasons, loss of muscle force should be detected as early and accurately as possible. Because of growth factors, maximum muscle
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Barclay, R. S.; Wing, S. L.
2013-12-01
The Paleocene-Eocene Thermal Maximum (PETM) was a geologically brief interval of intense global warming 56 million years ago. It is arguably the best geological analog for a worst-case scenario of anthropogenic carbon emissions. The PETM is marked by a ~4-6‰ negative carbon isotope excursion (CIE) and extensive marine carbonate dissolution, which together are powerful evidence for a massive addition of carbon to the oceans and atmosphere. In spite of broad agreement that the PETM reflects a large carbon cycle perturbation, atmospheric concentrations of CO2 (pCO2) during the event are not well constrained. The goal of this study is to produce a high resolution reconstruction of pCO2 using stomatal frequency proxies (both stomatal index and stomatal density) before, during, and after the PETM. These proxies rely upon a genetically controlled mechanism whereby plants decrease the proportion of gas-exchange pores (stomata) in response to increased pCO2. Terrestrial sections in the Bighorn Basin, Wyoming, contain macrofossil plants with cuticle immediately bracketing the PETM, as well as dispersed plant cuticle from within the body of the CIE. These fossils allow for the first stomatal-based reconstruction of pCO2 near the Paleocene-Eocene boundary; we also use them to determine the relative timing of pCO2 change in relation to the CIE that defines the PETM. Preliminary results come from macrofossil specimens of Ginkgo adiantoides, collected from an ~200ka interval prior to the onset of the CIE (~230-30ka before), and just after the 'recovery interval' of the CIE. Stomatal index values decreased by 37% within an ~70ka time interval at least 100ka prior to the onset of the CIE. The decrease in stomatal index is interpreted as a significant increase in pCO2, and has a magnitude equivalent to the entire range of stomatal index adjustment observed in modern Ginkgo biloba during the anthropogenic CO2 rise over the last 150 years. The inferred CO2 increase prior to the
Jin F
2016-05-01
Feng Jin,1,2 Hui Zhu,2 Zheng Fu,3 Li Kong,2 Jinming Yu2 1School of Medicine and Life Sciences, University of Jinan-Shandong Academy of Medical Sciences, 2Department of Radiation Oncology, Shandong Cancer Hospital Affiliated to Shandong University, Shandong Academy of Medical Sciences, 3Department of Nuclear Medicine, Shandong Cancer Hospital Affiliated to Shandong University, Shandong Academy of Medical Sciences, Jinan, People's Republic of China Purpose: The purpose of this study was to investigate the prognostic value of the maximum standardized uptake value (SUVmax) change calculated by dual-time-point 18F-fluorodeoxyglucose positron emission tomography (PET) imaging in patients with advanced non-small-cell lung cancer (NSCLC). Patients and methods: We conducted a retrospective review of 115 patients with advanced NSCLC who underwent pretreatment dual-time-point 18F-fluorodeoxyglucose PET acquired at 1 and 2 hours after injection. The SUVmax from early images (SUVmax1) and the SUVmax from delayed images (SUVmax2) were recorded and used to calculate the SUVmax changes, including the SUVmax increment (ΔSUVmax) and percent change of the SUVmax (%ΔSUVmax). Progression-free survival (PFS) and overall survival (OS) were determined by the Kaplan–Meier method and were compared with the studied PET parameters and the clinicopathological prognostic factors in univariate analyses, and multivariate analyses were constructed using Cox proportional hazards regression. Results: One hundred and fifteen consecutive patients were reviewed, and the median follow-up time was 12.5 months. The estimated median PFS and OS were 3.8 and 9.6 months, respectively. In univariate analysis, SUVmax1, SUVmax2, ΔSUVmax, %ΔSUVmax, clinical stage, and Eastern Cooperative Oncology Group (ECOG) scores were significant prognostic factors for PFS. Similar results were significantly correlated with OS, except %ΔSUVmax. In multivariate analysis, ΔSUVmax and %ΔSUVmax were significant
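The two dual-time-point indices named above follow directly from their definitions; a minimal sketch with illustrative SUV values:

```python
# ΔSUVmax and %ΔSUVmax as defined in the abstract: the increment and percent
# change between the 1-hour (SUVmax1) and 2-hour (SUVmax2) acquisitions.
# The input values are illustrative, not patient data.

def suvmax_change(suvmax1, suvmax2):
    delta = suvmax2 - suvmax1                 # SUVmax increment
    pct = 100.0 * delta / suvmax1             # percent change of SUVmax
    return delta, pct

delta, pct = suvmax_change(8.0, 10.0)
```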
Valter Abrantes Pereira da Silva
2007-03-01
OBJECTIVE: This study sought to compare maximum heart rate (HRmax) values measured during a graded exercise test (GXT) with those calculated from prediction equations in Brazilian elderly women. METHODS: A treadmill maximal graded exercise test in accordance with the modified Bruce protocol was used to obtain reference values for maximum heart rate (HRmax) in 93 elderly women (mean age 67.1 ± 5.16 years). Measured values were compared with those estimated from the "220 - age" and Tanaka et al formulas using repeated-measures ANOVA. Correlation and agreement between measured and estimated values were tested. Also evaluated was the correlation between measured HRmax and the volunteers' age. RESULTS: Results were as follows: 1) mean HRmax reached during GXT was 145.5 ± 12.5 beats per minute (bpm); 2) both the "220 - age" and Tanaka et al (2001) equations significantly overestimated (p < 0.001) HRmax, by mean differences of 7.4 and 15.5 bpm, respectively; 3
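The two prediction equations compared in the study are the classical "220 - age" rule and the Tanaka et al. (2001) equation, HRmax = 208 - 0.7 × age. Evaluating both near the study's mean participant age shows that each exceeds the measured mean HRmax of 145.5 bpm, consistent with the reported overestimation:

```python
# The two HRmax prediction equations compared in the study.

def hrmax_classic(age):
    """Classical '220 - age' estimate of maximum heart rate (bpm)."""
    return 220 - age

def hrmax_tanaka(age):
    """Tanaka et al. (2001): HRmax = 208 - 0.7 * age (bpm)."""
    return 208 - 0.7 * age

age = 67  # close to the study's mean participant age of 67.1 years
classic = hrmax_classic(age)   # 153 bpm
tanaka = hrmax_tanaka(age)     # about 161.1 bpm
```

Both predictions lie above the measured mean of 145.5 bpm, matching the direction of the study's findings (the reported mean differences reflect the full cohort, not this single age).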
Strasser, Barbara; Schwarz, Joachim; Haber, Paul; Schobersberger, Wolfgang
2011-12-01
The aim of this study was to evaluate reliable guide values for heart rate (HF) and blood pressure (RR) at defined submaximal exertion levels, considering age, gender and body mass. One hundred and eighteen healthy but untrained subjects (38 women, 80 men) were included in the study. For the final analysis, data from 28 women and 59 men were used. We found gender differences for HF and RR. Further, we noted significant correlations between HF and age, as well as between RR and body mass, at all exercise levels. We established formulas for gender-specific calculation of reliable guide values for HF and RR at submaximal exercise levels.
Jian Zhao
2017-01-01
Partial shading (PS) is an unavoidable condition which significantly reduces the efficiency and stability of a photovoltaic (PV) system. Under PS, the system usually exhibits multiple-peak output power characteristics, but a single peak is also possible under special PS conditions. In fact, it is shown that the partial shading condition (PSC) is a necessary but not sufficient condition for multiple peaks. Based on circuit analysis, this paper shows that the number of peak points can be determined by the short-circuit currents and maximum-power-point currents of all the arrays in series. A principle is then established for determining the number of peak points. Furthermore, based on the dynamic characteristics of the solar array, this paper establishes a rule for determining the relative position of the global maximum power point (GMPP). In order to track the GMPP within an appropriate period, a reliable technique and the corresponding computer algorithm are developed for GMPP tracking (GMPPT) control. It exploits a definable nonlinear relation, obtained from the dynamic performance under PSC, between variable environmental parameters and the output current of solar arrays at every maximum power point. Finally, the proposed method is validated with MATLAB®/Simulink® simulations and actual experiments. It is shown that the GMPPT of a PV generation system is indeed realized efficiently in a realistic environment with partial shading conditions.
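The practical difficulty the abstract addresses is that a multi-peak P-V curve defeats local hill-climbing trackers. The sketch below is not the paper's algorithm; it illustrates the underlying problem with a generic coarse-scan-plus-refinement search on a synthetic two-peak curve:

```python
# Illustrative global-maximum search on a synthetic two-peak P-V curve of the
# kind produced by partial shading. A purely local hill-climber can stall on
# the smaller peak; a coarse sweep followed by local refinement does not.
from math import exp

def pv_power(v):
    # Synthetic P-V curve: local peak near v = 10 V, global peak near v = 25 V
    return 60 * exp(-((v - 10) / 4.0) ** 2) + 100 * exp(-((v - 25) / 3.0) ** 2)

def gmppt_scan(power_fn, v_min=0.0, v_max=30.0, coarse_step=1.0, fine_step=0.05):
    # Coarse sweep over the whole voltage range to bracket the global peak...
    n = int((v_max - v_min) / coarse_step) + 1
    v_best = max((v_min + i * coarse_step for i in range(n)), key=power_fn)
    # ...then a fine sweep around the coarse winner
    candidates = [v_best + k * fine_step for k in range(-40, 41)]
    return max(candidates, key=power_fn)

v_gmpp = gmppt_scan(pv_power)
```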
Ferreira, Fabiana [Universidad de Buenos Aires (Argentina). Facultad de Ingenieria]. E-mail: fferreir@fi.uba.ar
2001-07-01
The magnetic field to which a person near a high-voltage line is subjected depends on certain physical magnitudes as well as on various elements present in the surroundings. It is therefore necessary to develop calculation methods for quickly evaluating the influence of those elements. For certain reference conditions it is possible to calculate the exact value of the magnetic field; multiplying this reference value by a factor for each of the conditions yields the maximum value of the field. The paper proposes formulas for some of these factors and studies the viability of obtaining them by numerical and/or statistical methods.
Usa, Hideyuki; Matsumura, Masashi; Ichikawa, Kazuna; Takei, Hitoshi
2017-01-01
This study attempted to develop a formula for predicting maximum muscle strength value for young, middle-aged, and elderly adults using theoretical Grade 3 muscle strength value (moment fair: Mf)—the static muscular moment to support a limb segment against gravity—from the manual muscle test by Daniels et al. A total of 130 healthy Japanese individuals divided by age group performed isometric muscle contractions at maximum effort for various movements of hip joint flexion and extension and knee joint flexion and extension, and the accompanying resisting force was measured and maximum muscle strength value (moment max, Mm) was calculated. Body weight and limb segment length (thigh and lower leg length) were measured, and Mf was calculated using anthropometric measures and theoretical calculation. There was a linear correlation between Mf and Mm in each of the four movement types in all groups, excepting knee flexion in elderly. However, the formula for predicting maximum muscle strength was not sufficiently compatible in middle-aged and elderly adults, suggesting that the formula obtained in this study is applicable in young adults only. PMID:28133549
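The theoretical Grade 3 value Mf described above is the static gravitational moment of the limb segment about the joint. A minimal sketch from that definition; the segment mass and center-of-mass distance are illustrative anthropometric assumptions, not the study's measurements:

```python
# Hedged sketch of the theoretical Grade 3 moment (Mf): the static moment
# needed to hold a limb segment horizontal against gravity, computed from the
# segment mass and its center-of-mass lever arm. Parameters are illustrative.
G = 9.81  # gravitational acceleration, m/s^2

def grade3_moment(segment_mass_kg, com_distance_m):
    """Static gravitational moment about the joint (N*m) for a horizontal segment."""
    return segment_mass_kg * G * com_distance_m

# Example: an assumed lower-leg segment of 3.0 kg with its center of mass
# 0.18 m from the knee joint axis
mf = grade3_moment(3.0, 0.18)  # about 5.3 N*m
```

The study's linear Mf-to-Mm relation would then be a regression fitted on measured maximum moments, which this sketch does not reproduce.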
U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...
Horment-Lara, Giselle; Cruz-Montecinos, Carlos; Núñez-Cortés, Rodrigo; Letelier-Horta, Pablo; Henriquez-Fuentes, Luis
2016-04-01
The mechanisms underlying the effects of neurodynamic techniques are still unknown. Therefore, the aim of this study was to provide a starting point for future research on explaining why neurodynamic techniques affect muscular activities in patients with sciatic pain. A double-blind trial was conducted in 12 patients with lumbosciatica. Surface electromyography activity was assessed for different muscles during prone hip extension. Pre- and post-intervention values for muscle activity onset and maximal amplitude signals were determined. There was a significant reduction in the surface electromyography activity of maximal amplitude in the erector spinae and contralateral erector spinae (p < 0.05). Additionally, gluteus maximus (p < 0.05) activity onset was delayed post-intervention. Self-neurodynamic sliding techniques modify muscular activity and onset during prone hip extension, possibly reducing unnecessary adaptations for protecting injured components. Future work will analyze the effects of self-neurodynamic sliding techniques during other physical tasks. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed by WANG Xiao-Long et al. on the basis of the maximum entropy principle. They proved that the maximizing solution of the model is exactly the frequency distribution at which a population reaches Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg equilibrium when the genotype entropy of the population attains its maximum possible value, and that the maximum entropy frequency distribution is equivalent to the distribution given by the Hardy-Weinberg equilibrium law at one locus. They further assumed that the maximum entropy frequency distribution is equivalent to all genetic equilibrium distributions. This is incorrect, however: the maximum entropy frequency distribution is equivalent to the Hardy-Weinberg equilibrium distribution only with respect to one locus or a limited number of loci. The case of a limited number of loci is proved in this paper. Finally, we also discuss an example in which the maximum entropy principle is not equivalent to other genetic equilibria.
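The claimed equivalence can be checked numerically (an illustrative sketch, not code from the cited paper; assumes NumPy/SciPy): maximizing genotype entropy over ordered allele pairs, subject only to a fixed allele frequency, recovers the Hardy-Weinberg proportions p^2, 2pq, q^2.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum-entropy genotype distribution at one biallelic locus.
# Working with ordered allele pairs (AA, Aa, aA, aa) handles the
# heterozygote multiplicity automatically; the unordered heterozygote
# frequency is x[1] + x[2].
p = 0.3  # frequency of allele A (illustrative value)

def neg_entropy(x):
    x = np.clip(x, 1e-12, 1.0)
    return float(np.sum(x * np.log(x)))

cons = [
    {"type": "eq", "fun": lambda x: x.sum() - 1.0},    # frequencies sum to 1
    {"type": "eq", "fun": lambda x: x[0] + x[1] - p},  # first allele slot carries A with frequency p
    {"type": "eq", "fun": lambda x: x[0] + x[2] - p},  # second allele slot likewise
]
res = minimize(neg_entropy, x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4, constraints=cons)

x_AA, x_Aa, x_aa = res.x[0], res.x[1] + res.x[2], res.x[3]
# The maximizer reproduces the Hardy-Weinberg proportions p^2, 2pq, q^2.
```

This is a convex program (entropy is concave, constraints are linear), so the interior starting point converges to the unique maximizer.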
2016-09-01
Popular culture reflects both the interests of and the issues affecting the general public. As concerns regarding climate change and its impacts grow, is it permeating into popular culture and reaching that global audience?
Teratology testing under REACH.
Barton, Steve
2013-01-01
REACH guidelines may require teratology testing for new and existing chemicals. This chapter discusses procedures to assess the need for teratology testing and the conduct and interpretation of teratology tests where required.
Reaching affects saccade trajectories.
Tipper, S P; Howard, L A; Paul, M A
2001-01-01
The pre-motor theory suggests that, when attention is oriented to a location, the motor systems that are involved in achieving current behavioural goals are activated. For example, when a task requires accurate reaching, attention to a location activates the motor circuits controlling saccades and manual reaches. These actions involve separate neural systems for the control of eye and hand, but we believe that the selection processes acting on neural population codes within these systems are similar and can affect each other. The attentional effect can be revealed in the subsequent movement. The present study shows that the path the eye takes as it saccades to a target is affected by whether a reach to the target is also produced. This effect is interpreted as the influence of a hand-centred frame used in reaching on the spatial frame of reference required for the saccade.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
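The idea of transmitting constraint uncertainty through the MaxEnt map can be sketched numerically (an illustrative toy, not the paper's calculation; the die example and the Gaussian parameters are assumptions):

```python
import numpy as np
from scipy.optimize import brentq

# Classic MaxEnt for a six-sided die with a mean constraint E[X] = m:
# p_i is proportional to exp(-lam * x_i), with the Lagrange multiplier
# lam chosen so the constraint holds exactly.
xs = np.arange(1, 7)

def maxent_probs(m):
    def mean_gap(lam):
        w = np.exp(-lam * xs)
        return (w / w.sum()) @ xs - m
    lam = brentq(mean_gap, -20.0, 20.0)  # the mean is monotone in lam
    w = np.exp(-lam * xs)
    return w / w.sum()

# Uncertain constraint value: treat the empirical mean as Gaussian,
# m ~ N(4.5, 0.1^2), and push samples through the MaxEnt map. The spread
# of each resulting p_i approximates the generalized-MaxEnt density.
rng = np.random.default_rng(0)
samples = rng.normal(4.5, 0.1, size=500)
dists = np.array([maxent_probs(m) for m in samples])
p6_mean, p6_std = dists[:, 5].mean(), dists[:, 5].std()
```

Each sampled constraint value yields one classic-MaxEnt distribution, so the collection of distributions carries the induced uncertainty over the MaxEnt probabilities.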
Suenimeire Vieira
2012-12-01
Full Text Available INTRODUCTION: One of the benefits of regular physical exercise appears to be improved autonomic nervous system modulation of the heart. However, the role of physical activity as a determinant of heart rate variability (HRV) is not well established. Therefore, the aim of this study was to verify whether resting heart rate and the maximum workload reached in an exercise test correlate with HRV indices in elderly men. METHODS: Eighteen elderly men aged between 60 and 70 years were studied. The following assessments were performed: (a) a maximal exercise test on a cycle ergometer using the Balke protocol to evaluate aerobic capacity; (b) recording of heart rate (HR) and R-R intervals for 15 minutes at rest in the supine position. The data were then analyzed in the time domain, calculating the RMSSD index, and in the frequency domain, calculating the low-frequency (LF) and high-frequency (HF) indices and the LF/HF ratio. Pearson's correlation test (p 0.05) was applied to verify associations between the maximum workload reached in the exercise test and the HRV indices. CONCLUSION: The temporal and spectral heart rate variability indices studied are not indicators of the level of aerobic capacity of elderly men evaluated on a cycle ergometer.
Terry, Dorothy Givens
2012-01-01
Dr. Mae Jemison is the world's first woman astronaut of color who continues to reach for the stars. Jemison was recently successful in leading a team that has secured a $500,000 federal grant to make interstellar space travel a reality. The Dorothy Jemison Foundation for Excellence (named after Jemison's mother) was selected in June by the Defense…
REACH. Air Conditioning Units.
Garrison, Joe; And Others
As a part of the REACH (Refrigeration, Electro-Mechanical, Air-Conditioning, Heating) electromechanical cluster, this student manual contains individualized instructional units in the area of air conditioning. The instructional units focus on air conditioning fundamentals, window air conditioning, system and installation, troubleshooting and…
Reaching into Pictorial Spaces
Volcic, Robert; Vishwanath, Dhanraj; Domini, Fulvio
2014-02-01
While binocular viewing of 2D pictures generates an impression of 3D objects and space, viewing a picture monocularly through an aperture produces a more compelling impression of depth and the feeling that the objects are "out there", almost touchable. Here, we asked observers to actually reach into pictorial space under both binocular- and monocular-aperture viewing. Images of natural scenes were presented at different physical distances via a mirror-system and their retinal size was kept constant. Targets that observers had to reach for in physical space were marked on the image plane, but at different pictorial depths. We measured the 3D position of the index finger at the end of each reach-to-point movement. Observers found the task intuitive. Reaching responses varied as a function of both pictorial depth and physical distance. Under binocular viewing, responses were mainly modulated by the different physical distances. Instead, under monocular viewing, responses were modulated by the different pictorial depths. Importantly, individual variations over time were minor, that is, observers conformed to a consistent pictorial space. Monocular viewing of 2D pictures thus produces a compelling experience of an immersive space and tangible solid objects that can be easily explored through motor actions.
Snow, Rufus; And Others
As a part of the REACH (Refrigeration, Electro-Mechanical, Air-Conditioning, Heating) electromechanical cluster, this student manual contains individualized instructional units in the area of refrigeration. The instructional units focus on refrigeration fundamentals, tubing and pipe, refrigerants, troubleshooting, window air conditioning, and…
Kuracina Richard
2015-06-01
Full Text Available The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds. Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds. Part 2: Determination of the maximum rate of explosion pressure rise (dp/dtmax) of dust clouds. The wood dust cloud in the chamber is generated mechanically. The testing of explosions of wood dust clouds showed that the maximum pressure was reached at a concentration of 450 g/m3, where its value was 7.95 bar. The fastest rise of pressure was also observed at a concentration of 450 g/m3, with a value of 68 bar/s.
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
Solar Hydrogen Reaching Maturity
Rongé Jan
2015-09-01
Full Text Available Increasingly vast research efforts are devoted to the development of materials and processes for solar hydrogen production by light-driven dissociation of water into oxygen and hydrogen. Storage of solar energy in chemical bonds resolves the issues associated with the intermittent nature of sunlight, by decoupling energy generation and consumption. This paper investigates recent advances and prospects in solar hydrogen processes that are reaching market readiness. Future energy scenarios involving solar hydrogen are proposed and a case is made for systems producing hydrogen from water vapor present in air, supported by advanced modeling.
Karan, Belgin; Pourbagher, Aysin; Torun, Nese
2016-06-01
To evaluate the correlations between the apparent diffusion coefficient (ADC) value and the standardized uptake value (SUV) with prognostic factors in breast cancer. Seventy women with invasive breast cancer (56 cases of invasive ductal carcinoma, four of mixed ductal and lobular invasive carcinoma, three of lobular invasive carcinoma, two of micropapillary carcinoma, and one each of mixed ductal and mucinous carcinoma, mucinous carcinoma, medullary carcinoma, metaplastic carcinoma, and tubular carcinoma) were included in this study. All patients underwent presurgical breast magnetic resonance imaging (MRI) with diffusion-weighted imaging (DWI) at 1.5T and whole-body (18) F-fluorodeoxyglucose ((18) F-FDG) positron emission tomography (PET) / computed tomography (CT). For all invasive breast cancers and invasive ductal carcinomas, we assessed the relationships among ADC, SUV, and pathological prognostic factors. Both the median ADC value and maximum SUV (SUVmax) were significantly associated with vascular invasion (P = 0.008 and P = 0.026, respectively). SUVmax was also significantly correlated with tumor size (P = 0.001), histological grade (P = 0.001), lymph node status (P = 0.0015), estrogen receptor status (P = 0.010), and human epidermal growth factor receptor 2 status (P = 0.020), whereas ADC values were not. The correlation between the ADC and SUVmax was not significant (P = 0.356; R = -0.112). Mucinous carcinoma showed high ADC and relatively low SUVmax. Medullary carcinoma showed low ADC and high SUVmax. When we evaluated the relationships among ADC, SUVmax, and prognostic factors in the 56 invasive ductal carcinomas, our statistical results were not significantly changed, except SUVmax was also significantly associated with progesterone receptor status (P = 0.034), but not lymph node status. SUVmax may be valuable for predicting the prognosis of breast cancer. Both ADC and SUVmax are useful to predict vascular invasion. J. Magn. Reson. Imaging 2016
Westar reaches critical crossroads
1992-06-01
Westar Mining Ltd. has applied for court protection until September 30, 1992 to gain time to draw up a final reorganization plan. The Companies' Creditors Arrangement Act is a federal statute that allows a business to restructure financially without having to declare bankruptcy. Normal trade terms with suppliers are usually maintained during this period. The company is struggling under the effects of falling coal prices, a high Canadian dollar and a high debt burden. Changes in work practices at the company's Balmer mine are a major part of the restructuring. An agreement must be reached with the United Mineworkers of America and other stakeholders or the Balmer mine will close permanently. Employees have been locked out since May 1, 1992 when union members rejected the company's final offer.
童瑶; 高琴; 谢和宾; 曹霞; 李晓翠; 王乐三
2012-01-01
Purpose: To investigate the suitable cutoff value of the maximum standardized uptake value (SUVmax) for diagnosing non-small cell lung cancer (NSCLC) using 18F-FDG PET/CT. Materials and Methods: 102 patients with malignant or benign pulmonary lesions confirmed by bronchoscopic pathology, needle-aspiration cytology, or postoperative pathology underwent chest or whole-body PET/CT. The suitable cutoff value of SUVmax for differentiating NSCLC from benign pulmonary lesions was determined according to three principles: maximum Youden's index, equal weighting of false-positive and false-negative rates, and maximum accuracy. Results: The optimal cutoff values of SUVmax were 2.8, 5.45 and 2.8, respectively, under the maximum Youden's index, equal false-positive and false-negative rate, and maximum accuracy principles. Conclusion: The optimal cutoff value of SUVmax for differentiating NSCLC from benign pulmonary lesions is 2.8.
Reaching Fleming's discrimination bound
Gruebl, Gebhard
2012-01-01
Any rule for identifying a quantum system's state within a set of two non-orthogonal pure states by a single measurement is flawed. It has a non-zero probability of either yielding the wrong result or leaving the query undecided. This also holds if the measurement of an observable $A$ is repeated on a finite sample of $n$ state copies. We formulate a state identification rule for such a sample. This rule's probability of giving the wrong result turns out to be bounded from above by $1/(n\delta_{A}^{2})$ with $\delta_{A}=|\langle A\rangle_{1}-\langle A\rangle_{2}|/(\Delta_{1}A+\Delta_{2}A)$. A larger $\delta_{A}$ results in a smaller upper bound. Yet, according to Fleming, $\delta_{A}$ cannot exceed $\tan\theta$ with $\theta\in(0,\pi/2)$ being the angle between the pure states under consideration. We demonstrate that there exist observables $A$ which reach the bound $\tan\theta$ and we determine all of them.
2001-01-01
The creation of the world's largest sandstone cavern, not a small feat! At the bottom, cave-in preventing steel mesh can be seen clinging to the top of the tunnel. The digging of UX-15, the cavern that will house ATLAS, reached the upper ceiling of LEP on October 10th. The breakthrough which took place nearly 100 metres underground occurred precisely on schedule and exactly as planned. But much caution was taken beforehand to make the LEP breakthrough clean and safe. To prevent the possibility of cave-ins in the side tunnels that will eventually be attached to the completed UX-15 cavern, reinforcing steel mesh was fixed into the walls with bolts. Obviously no people were allowed in the LEP tunnels below UX-15 as the breakthrough occurred. The area was completely evacuated and fences were put into place to keep all personnel out. However, while personnel were being kept out of the tunnels below, this has been anything but the case for the work taking place up above. With the creation of the world's largest...
Science Experiments: Reaching Out to Our Users
Nolan, Maureen; Tschirhart, Lori; Wright, Stephanie; Barrett, Laura; Parsons, Matthew; Whang, Linda
2008-01-01
As more users access library services remotely, it has become increasingly important for librarians to reach out to their user communities and promote the value of libraries. Convincing the faculty and students in the sciences of the value of libraries and librarians can be a particularly "hard sell" as more and more of their primary…
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Conservation reaches new heights.
Pepall, J; Khanal, P
1992-10-01
The conservation program with the management assistance of the Woodlands Mountain Institute in 2 contiguous parks, the Mount Everest National Park in Nepal and the Qomolangma Nature Reserve in China, in 2 countries is described. The focus is on conservation of the complex ecosystem with sustainable development by showing local people how to benefit from the park without environmental damage. Cultural diversity is as important as biological diversity. The area has been designated by UNESCO as a World Heritage Site with the "last pure ecological seed" of the Himalayas. The regional geography and culture are presented. Population growth has impacted natural resources through overgrazing, cultivation of marginal land, and deforestation; future plans to build a dam and road bordering the nature reserve pose other threats. Proposed management plans for the Makalu-Barun Nature Park (established in November 1991) and Conservation Area include a division of the park into nature reserve areas free of human activity, protected areas which permit traditional land use, and special sites and trail for tourists and religious pilgrims. The conservation area will act as a buffer for the park and provide economic opportunities; further subdivisions include land use for biodiversity protection, community forest and pasture, agroforestry, and agriculture and settlement. Efforts will be made to increase the welfare of women and local people; proposed projects include the introduction of higher milk-producing animals for stall feeding. Also proposed is a cultural and natural history museum. 70% of the project's resources will be directed to local community participation in consultation and park maintenance. The project is a model of how conservation and protection of natural resources can coexist with local economic development and participation; an integration of preservation of biological diversity, mountain wisdom, and the value of local people as resources for conservation.
Messinger, H
2014-04-01
Under the current European legislation for the Registration, Evaluation, Authorisation and restriction of Chemicals (REACH), a Derived No Effect Level (DNEL) has to be derived for acute and chronic inhalation effects. The majority of available experimental studies are performed by the oral route of exposure. Route-to-route extrapolation poses particular problems for irritating or corrosive substances, but the necessity for additional animal studies with inhalation exposure needs to be balanced against the regulatory information requirements. Existing occupational exposure limits (OELs), as surrogates for cut-off limits representing safe exposure under working conditions, were grouped under certain criteria for substances that are legally classified in Europe as irritating or corrosive. As a result, it was shown that the OEL for irritating substances in this dataset is not lower than 10 mg/m(3), and for corrosives not lower than 1 mg/m(3). Under certain conditions these generic limits could be applied as a pragmatic, but still sufficiently reliable and protective, upper cut-off limit approach to avoid additional animal tests with irritating or corrosive chemicals. The respective systemic toxicity profiles and physical-chemical properties need to be considered. Specific exclusion criteria for the discussed concept apply.
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
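The differentiation step described above can be sketched with a single-diode panel model (all parameter values are illustrative assumptions; the paper's panel data are not reproduced here). Maximizing P(V) = V·I(V) numerically is equivalent to solving dP/dV = 0 for the voltage of maximum power:

```python
import math
from scipy.optimize import minimize_scalar

# Single-diode solar panel model. Parameter values are assumptions
# chosen for illustration only.
I_L = 3.0      # light-generated current, A
I_0 = 1e-9     # diode saturation current, A
V_T = 0.0257   # thermal voltage at about 25 C, V
n = 1.5        # diode ideality factor

def current(v):
    # I(V) from the ideal single-diode equation
    return I_L - I_0 * (math.exp(v / (n * V_T)) - 1.0)

def power(v):
    return v * current(v)

# Maximizing P(V) = V * I(V) locates the point where dP/dV = 0.
res = minimize_scalar(lambda v: -power(v), bounds=(0.0, 1.0), method="bounded")
v_mp = res.x          # voltage of maximum power
i_mp = current(v_mp)  # current of maximum power
p_max = power(v_mp)   # maximum power
```

Repeating this for panel parameters measured at each time of day would yield the curves of v_mp, i_mp, and p_max versus time described in the abstract.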
Carré, David
2015-01-01
Most economic inquiries revolve around agents making decisions. Getting the 'best value', it is assumed, drives such decisions: gaining most while risking least. This assumption has been debunked by showing that people do not always choose either maximum benefit or least risk (Kahneman & Tversky, 1992). In response, behavioral economics (Camerer, 1999) has shown that agents have values other than optimization underpinning their decisions. Therefore, concerns arose regarding which values are guiding the agent, but not about how such values became relevant for the agent. In this presentation, I will explore the consequences of shifting to the latter perspective, i.e. looking for the generative framework of values. Here I argue that economic behavior should also be seen as a sense-making process, guided by values that are chosen/rejected along with fellow human beings, in specific socio...
Miura Takeshi
2010-12-01
Full Text Available Abstract Background In this era of molecular targeting therapy, when various systemic treatments can be selected, prognostic biomarkers are required for the purpose of risk-directed therapy selection. Numerous reports on various malignancies have revealed that 18-fluoro-2-deoxy-D-glucose (18F-FDG) accumulation, as evaluated by positron emission tomography, can be used to predict the prognosis of patients. The purpose of this study was to evaluate the impact of the maximum standardized uptake value (SUVmax) from 18F-FDG positron emission tomography/computed tomography (18F-FDG PET/CT) on survival for patients with advanced renal cell carcinoma (RCC). Methods A total of 26 patients with advanced or metastatic RCC were enrolled in this study. The FDG uptake of all RCC lesions diagnosed by conventional CT was evaluated by 18F-FDG PET/CT. The impact of SUVmax on patient survival was analyzed prospectively. Results FDG uptake was detected in 230 of 243 lesions (94.7%), excluding lung or liver metastases with diameters of less than 1 cm. The SUVmax of the 26 patients ranged between 1.4 and 16.6 (mean 8.8 ± 4.0). Patients with RCC tumors showing high SUVmax demonstrated poor prognosis (P = 0.005; hazard ratio 1.326, 95% CI 1.089-1.614). Survival differed significantly between patients with SUVmax equal to or greater than the mean value of 8.8 and patients with SUVmax less than 8.8 (P = 0.0012). This is the first report to evaluate the impact of SUVmax on advanced RCC patient survival. However, the number of patients and the follow-up period were still not extensive enough to settle this important question conclusively. Conclusions The survival of patients with advanced RCC can be predicted by evaluating their SUVmax using 18F-FDG PET/CT. 18F-FDG PET/CT has potential as an "imaging biomarker" to provide helpful information for clinical decision-making.
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at maximum power. Interestingly, we find that the efficiency at maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for a heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at maximum power under an extreme condition may in principle reach the Carnot efficiency. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.
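For reference, the "universal upper bound" in question is the one derived in the low-dissipation analysis of the Esposito et al. paper cited in the abstract. Writing the Carnot efficiency as eta_C, the efficiency at maximum power eta* is bounded as:

```latex
% Bounds on the efficiency at maximum power \eta^{*} of a
% low-dissipation Carnot engine (Esposito et al., PRL 105, 150603 (2010)),
% with Carnot efficiency \eta_{C} = 1 - T_{c}/T_{h}:
\[
  \frac{\eta_{C}}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_{C}}{2-\eta_{C}},
\]
% the symmetric-dissipation case reproducing the Curzon--Ahlborn value
% \eta_{CA} = 1 - \sqrt{1-\eta_{C}}.
```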
Remizov, Ivan D
2009-01-01
In this note, we describe the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given compact metric space. For a given argument $f$, it coincides with the set of all probability measures supported on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis, contrary to ordinary non-spatial factor analysis, gives an objective discrimination...
The REACH Youth Program Learning Toolkit
Sierra Health Foundation, 2011
2011-01-01
Believing in the value of using video documentaries and data as learning tools, members of the REACH technical assistance team collaborated to develop this toolkit. The learning toolkit was designed using and/or incorporating components of the "Engaging Youth in Community Change: Outcomes and Lessons Learned from Sierra Health Foundation's…
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
§ 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
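The 1/4-of-ultimate rule in § 230.24(a) reduces to a single division; a minimal sketch (the 60,000 psi ultimate tensile strength below is an illustrative value, not taken from the regulation):

```python
def max_allowable_stress(ultimate_strength_psi: float) -> float:
    """Per 49 CFR 230.24(a): the maximum allowable stress on a boiler
    component may not exceed 1/4 of the material's ultimate strength."""
    return ultimate_strength_psi / 4.0

# e.g. an illustrative boiler steel with 60,000 psi ultimate tensile strength
print(max_allowable_stress(60000.0))  # 15000.0 psi
```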
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • We observe continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Greene, Nicholas
2012-01-01
ABOUT THE BOOK Halo: Reach is the latest installment, and it goes back to Halo's roots in more ways than one. Set around one of the most frequently referenced events in the Haloverse, the Fall of Reach, the game puts you in the shoes of Noble 6, an unnamed Spartan fighting a doomed battle to save the planet. Dual-wielding is gone, health is back, and equipment now takes the form of different "classes" with different weapon loadouts and special abilities (such as sprinting, cloaking, or flight). If you're reading this guide, you're either new to the Halo franchise and looking to get a leg up on all
Lunar Probe Reaches Deep Space
2011-01-01
China's second lunar probe, Chang'e-2, has reached an orbit 1.5 million kilometers from Earth for an additional mission of deep space exploration, the State Administration for Science, Technology and Industry for National Defense announced.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Govatski, J. A.; da Luz, M. G. E.; Koehler, M.
2015-01-01
We study the geminate pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy which reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how a spatial variation of the energetic disorder may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.
Exploring high-density baryonic matter: Maximum freeze-out density
Randrup, Joergen [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Cleymans, Jean [University of Cape Town, UCT-CERN Research Centre and Department of Physics, Rondebosch (South Africa)
2016-08-15
The hadronic freeze-out line is calculated in terms of the net baryon density and the energy density instead of the usual T and μ_B. This analysis makes it apparent that the freeze-out density exhibits a maximum as the collision energy is varied. This maximum freeze-out density has μ_B = 400-500 MeV, which is above the critical value, and it is reached at a fixed-target bombarding energy of 20-30 GeV/N, well within the parameters of the proposed NICA collider facility. (orig.)
Gadda, Davide; Vannucchi, Letizia; Niccolai, Franco; Neri, Anna T.; Carmignani, Luca; Pacini, Patrizio [Ospedale del Ceppo, U.O. Radiodiagnostica, Pistoia (Italy)
2005-12-01
Maximum intensity projection reconstructions from 2.5 mm unenhanced multidetector computed tomography axial slices were obtained from 49 patients within the first 6 h of anterior-circulation cerebral stroke to identify different patterns of the dense artery sign and their prognostic implications for the location and extent of the infarcted areas. The dense artery sign was found in 67.3% of cases. Increased density of the whole M1 segment of the middle cerebral artery, with extension to M2, was associated with a wider extension of cerebral infarcts in comparison to the M1 segment alone or distal M1 and M2. A dense sylvian branch of the middle cerebral artery was associated with a more restricted extension of the infarct territory. We found that 62.5% of patients without a demonstrable dense artery had a limited peripheral cortical or capsulonuclear lesion. In patients with 7-10 points on the Alberta Stroke Programme Early Computed Tomography Score and a dense proximal MCA in the first hours of ictus, the mean decrease in the score between baseline and follow-up was 5.09±1.92 points. In conclusion, maximum intensity projections from thin-slice images can be quickly obtained from standard computed tomography datasets using a multidetector scanner and are useful in identifying and correctly localizing the dense artery sign, with prognostic implications for the extent of cerebral damage. (orig.)
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine whether there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large, untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP, and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied, the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: the global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR. Under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) Without such a "training" period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) We observe that the DC-EPR maximum is achieved within a time, T_e, the evolution time, which scales as a power-law function of the applied voltage; (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content, a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives, possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power-law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Astronomical reach of fundamental physics
Burrows, Adam S.; Ostriker, Jeremiah P.
2014-02-01
Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law.
REACH. Analytical characterisation of petroleum UVCB substances
De Graaff, R.; Forbes, S.; Gennart, J.P.; Gimeno Cortes, M.J.; Hovius, H.; King, D.; Kleise, H.; Martinez Martin, C.; Montanari, L.; Pinzuti, M.; Pollack, H.; Ruggieri, P.; Thomas, M.; Walton, A.; Dmytrasz, B.
2012-10-15
The purpose of this report is to summarise the findings of the scientific and technical work undertaken by CONCAWE to assess the feasibility and potential benefit of characterising petroleum UVCB substances (Substances of Unknown or Variable Composition, Complex reaction products or Biological Materials) beyond the recommendations issued by CONCAWE for the substance identification of petroleum substances under REACH. REACH is the European Community Regulation on chemicals and their safe use (EC 1907/2006). It deals with the Registration, Evaluation, Authorisation and Restriction of Chemical substances. The report is based on Member Company experience of the chemical analysis of petroleum UVCB substances, including analysis in support of REACH registrations undertaken in 2010. This report is structured into four main sections, namely: Section 1, which provides an introduction to the subject of petroleum UVCB substance identification, including the purpose of the report, regulatory requirements, the nature of petroleum UVCB substances, and CONCAWE's guidance to Member Companies and other potential registrants. Section 2 provides a description of the capabilities of each of the analytical techniques described in the REACH Regulation. This section also includes details on the type of analytical information obtained by each technique and an evaluation of what each technique can provide for the characterisation of petroleum UVCB substances. Section 3 provides a series of case studies for six petroleum substance categories (low boiling point naphthas, kerosene, heavy fuel oils, other lubricant base oils, residual aromatic extracts and bitumens) to illustrate the value of the information derived from each analytical procedure, and provides an explanation for why some techniques are not scientifically necessary. Section 4 provides a summary of the conclusions reached from the technical investigations undertaken by CONCAWE Member Companies, and summarising the
J. de Haan; W.P. Knulst
2000-01-01
Original title: Het bereik van de kunsten. The reach of the arts (Het bereik van de kunsten) is the fourth study in a series which periodically analyses the status of cultural participation, reading, and the use of other media. The series, Support for culture (Het culturele draagvlak), is sponsored by th
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
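The robustness argument above rests on the empirical correntropy, the average of a Gaussian kernel applied to the prediction errors; large errors contribute almost nothing, unlike in squared loss. A minimal sketch of that quantity (the function name and the kernel width sigma are illustrative choices, not from the paper):

```python
import math

def correntropy(y_pred, y_true, sigma=1.0):
    """Empirical correntropy: the average Gaussian kernel of the errors.
    An outlier with a huge error contributes ~0 to the sum, which is why
    maximizing correntropy is robust to noisy/outlying labels."""
    return sum(math.exp(-(p - t) ** 2 / (2 * sigma ** 2))
               for p, t in zip(y_pred, y_true)) / len(y_true)

# perfect predictions reach the maximum value 1.0;
# one gross outlier only removes its own 1/n share of the score
print(correntropy([1, 0, 1], [1, 0, 1]))   # 1.0
print(correntropy([1, 0, 100], [1, 0, 1]))  # ~0.667
```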
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
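The quoted conversion of the core-mass limit into solar masses is easy to check; a quick sketch using the standard solar mass value (the constant names are illustrative):

```python
# Check the abstract's conversion: 2.69e30 kg expressed in solar masses.
M_CORE_KG = 2.69e30    # maximum iron core mass from the abstract
M_SUN_KG = 1.989e30    # standard solar mass

print(round(M_CORE_KG / M_SUN_KG, 2))  # 1.35, matching the quoted value
```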
Freudenburg, William R.
2006-01-01
Rather than seeking ivory-tower isolation, members of the Rural Sociological Society have always been distinguished by a willingness to work with specialists from a broad range of disciplines, and to work on some of the world's most challenging problems. What is less commonly recognized is that the willingness to reach beyond disciplinary…
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied for finding the distribution functions of physical quantities. MENT naturally incorporates the maximum-entropy requirement, the characteristics of the system, and the constraint conditions. This allows MENT to be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that is used to mitigate the intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 mm for right male and female femora, and 453.35 and 420.44 mm for left male and female femora respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female; while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
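The demarking-point rule in the abstract is a simple three-way decision on the measured length; a minimal sketch using the reported cutoffs (the function name is illustrative, and the mm units are assumed from typical femoral lengths):

```python
def classify_sex_by_femoral_length(length_mm: float, side: str) -> str:
    """Demarking-point rule from the abstract: lengths above the male D.P.
    are classified male, below the female D.P. female, else indeterminate.
    Cutoffs (mm): right >476.70 male, <379.99 female;
                  left  >484.49 male, <385.73 female."""
    cutoffs = {"right": (476.70, 379.99), "left": (484.49, 385.73)}
    male_dp, female_dp = cutoffs[side]
    if length_mm > male_dp:
        return "male"
    if length_mm < female_dp:
        return "female"
    return "indeterminate"

print(classify_sex_by_femoral_length(480.0, "right"))  # male
```

As the abstract notes, most bones fall between the demarking points, which is why only a small percentage of femora could be sexed by maximum length alone.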
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Yu-Erh Huang (Dept. of Nuclear Medicine, Chang Gung Memorial Hospital-Kaohsiung Medical Center, Kaohsiung, Taiwan (China)); Chih-Feng Chen (Dept. of Radiology, Chang Gung Memorial Hospital-Kaohsiung Medical Center, Kaohsiung, Taiwan (China)); Yu-Jie Huang (Dept. of Radiation Oncology, Chang Gung Memorial Hospital-Kaohsiung Medical Center, Kaohsiung, Taiwan (China)); Konda, Sheela D.; Appelbaum, Daniel E.; Yonglin Pu (Dept. of Radiology, Univ. of Chicago, Chicago, IL (United States)), e-mail: ypu@radiology.bsd.uchicago.edu
2010-09-15
Background: 18F-fluoro-2-deoxyglucose positron emission tomography (18F-FDG PET) imaging has been shown to be an accurate method for diagnosing pulmonary lesions, and the standardized uptake value (SUV) has been shown to be useful in differentiating benign from malignant lesions. Purpose: To survey the interobserver variability of SUVmax and SUVmean measurements on 18F-FDG PET/CT scans and compare them with tumor size measurements on diagnostic CT scans in the same group of patients with focal pulmonary lesions. Material and Methods: Forty-three pulmonary nodules were measured on both 18F-FDG PET/CT and diagnostic chest CT examinations. Four independent readers measured the SUVmax and SUVmean on the 18F-FDG PET images, and the unidimensional nodule size on the diagnostic CT scans (UDCT), for all nodules. The region of interest (ROI) for the SUV measurements was drawn manually around each tumor on all consecutive slices that contained the nodule. The interobserver reliability and variability, represented by the intraclass correlation coefficient (ICC) and coefficient of variation (COV), respectively, were compared among the three parameters. The correlation between the SUVmax and SUVmean was also analyzed. Results: There was 100% agreement in the SUVmax measurements among the four readers in the 43 pulmonary tumors. The ICCs for the SUVmax, SUVmean, and UDCT by the four readers were 1.00, 0.97, and 0.97, respectively. The root-mean-square values of the COVs for the SUVmax, SUVmean, and UDCT by the four readers were 0%, 13.56%, and 11.03%, respectively. There was a high correlation observed between the SUVmax and SUVmean (Pearson's r = 0.958; P < 0.01). Conclusion: This study has shown that the SUVmax of lung nodules can be calculated without any interobserver variation. These findings indicate that SUVmax is a more valuable parameter than the SUVmean or UDCT for the evaluation of the therapeutic effects of chemotherapy or radiation therapy on serial studies.
Sampling hard to reach populations.
Faugier, J; Sargeant, M
1997-10-01
Studies on 'hidden populations', such as homeless people, prostitutes and drug addicts, raise a number of specific methodological questions usually absent from research involving known populations and less sensitive subjects. This paper examines the advantages and limitations of nonrandom methods of data collection such as snowball sampling. It reviews the currently available literature on sampling hard to reach populations and highlights the dearth of material currently available on this subject. The paper also assesses the potential for using these methods in nursing research. The sampling methodology used by Faugier (1996) in her study of prostitutes, HIV and drugs is used as a current example within this context.
How to reach library users who cannot reach libraries?
Dragana Ljuić
2002-01-01
The article discusses ways of bringing library services closer to individuals or groups of users who have difficulty visiting, or cannot visit, the library themselves. The author presents the services offered by the Maribor Public Library and discusses how one of the basic human rights - the right of access to cultural goods, knowledge and information - is exercised through library activities. By enabling access to library material and information, public libraries help fulfill basic human rights and thus raise the quality of living in a social environment. The following forms of library activity are presented in the article: the »distance library« - borrowing books at home or in hospital; a bibliobus (mobile library) stop for disabled users; and »mobile collections« in institutions where users, due to age or illness, have difficulty accessing, or cannot reach, library materials and information by themselves.
Reach capacity in older women submitted to flexibility training
Elciana de Paiva Lima Vieira
2015-11-01
The aim of this study was to analyze the effect of flexibility training on the maximum range of motion and reach capacity of older women practicing aquatic exercises in the Prev-Quedas project. Participants were divided into two groups: intervention (IG, n = 25), which was submitted to the flexibility training program, and control (CG, n = 21), in which the older women participated only in aquatic exercises. Flexibility training lasted three months with a weekly frequency of two days, and consisted of stretching exercises involving the trunk and lower limbs performed after the aquatic exercises. The stretching method used was passive static. Assessment consisted of the functional reach, lateral reach and goniometric tests. Statistical analysis was performed using the Shapiro-Wilk normality test, ANCOVA, and Pearson and Spearman correlations. There were significant gains for the IG in maximum range of motion of the right hip joint (p = 0.0025); however, the same result was not observed in the other joints assessed, and there was no improvement in functional or lateral reach capacity in either group. Significant correlations between reach capacity and range of motion in the trunk, hip and ankle were not observed. Therefore, flexibility training associated with the practice of aquatic exercises increased maximum range of motion only at the hip joint; improvement in reach capacity was not observed. The practice of aquatic exercises alone did not show significant results.
Honguero Martínez, A F; García Jiménez, M D; García Vicente, A; López-Torres Hidalgo, J; Colon, M J; van Gómez López, O; Soriano Castrejón, Á M; León Atance, P
2016-01-01
F-18 fluorodeoxyglucose integrated PET-CT is commonly used in the work-up of lung cancer to improve preoperative disease staging. The aim of the study was to analyze the ratio between the SUVmax of N1 lymph nodes and that of the primary lung tumor to predict mediastinal disease (N2) in patients operated on for non-small cell lung cancer. This is a retrospective study of a prospective database. Patients operated on for non-small cell lung cancer (NSCLC) with N1 disease by PET-CT scan were included. None had previous induction treatment; all underwent standard surgical resection plus systematic lymphadenectomy. There were 51 patients with FDG-PET-CT N1 disease. Forty-four (86.3%) patients were male, with a mean age of 64.1±10.8 years. Type of resection: pneumonectomy=4 (7.9%), lobectomy/bilobectomy=44 (86.2%), segmentectomy=3 (5.9%). Histological type: adenocarcinoma=26 (51.0%), squamous=23 (45.1%), adenosquamous=2 (3.9%). Lymph node status after surgical resection: N0=21 (41.2%), N1=12 (23.5%), N2=18 (35.3%). The mean ratio of the SUVmax of the N1 lymph node to the SUVmax of the primary lung tumor (SUVmax N1/T ratio) was 0.60 (range 0.08-2.80). ROC curve analysis was performed to obtain the optimal cut-off value of the SUVmax N1/T ratio for predicting N2 disease. At multivariate analysis, we found that a ratio of 0.46 or greater was an independent predictor of N2 mediastinal lymph node metastases, with a sensitivity and specificity of 77.8% and 69.7%, respectively. The SUVmax N1/T ratio in NSCLC patients correlates with mediastinal lymph node metastasis (N2 disease) after surgical resection. When the SUVmax N1/T ratio on integrated PET-CT is equal to or greater than 0.46, special attention should be paid to the higher probability of N2 disease. Copyright © 2015 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
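The cutoff evaluation described above can be sketched as follows. This is a minimal illustration of how sensitivity and specificity are scored at a fixed ratio cutoff; the patient data here are invented, not the study's, and a real analysis would sweep the cutoff over an ROC curve.

```python
# Sensitivity/specificity of a SUVmax N1/T ratio cutoff for predicting N2
# disease. The ratios and outcomes below are synthetic illustrations.

def sens_spec(ratios, has_n2, cutoff):
    """Classify ratio >= cutoff as 'predicts N2' and score against truth."""
    tp = sum(1 for r, n2 in zip(ratios, has_n2) if n2 and r >= cutoff)
    fn = sum(1 for r, n2 in zip(ratios, has_n2) if n2 and r < cutoff)
    tn = sum(1 for r, n2 in zip(ratios, has_n2) if not n2 and r < cutoff)
    fp = sum(1 for r, n2 in zip(ratios, has_n2) if not n2 and r >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

ratios = [0.10, 0.50, 0.30, 0.60, 0.40, 0.80, 0.20, 0.55]   # hypothetical
has_n2 = [False, False, False, True, True, True, False, True]
sens, spec = sens_spec(ratios, has_n2, cutoff=0.46)
```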
Reach Envelope of Human Extremities
YANG Jingzhou(杨景周); ZHANG Yunqing(张云清); CHEN Liping(陈立平); ABDEL-MALEK Karim
2004-01-01
Significant attention in recent years has been given to obtaining a better understanding of human joint ranges, their measurement, and their functionality, especially in conjunction with commands issued by the central nervous system. While researchers have studied the motor commands needed to drive a limb along a path trajectory, various computer algorithms have been reported that provide adequate analysis of limb modeling and motion. This paper uses a rigorous mathematical formulation to model human limbs, understand their reach envelope, delineate barriers therein where a trajectory becomes difficult to control, and help visualize these barriers. Workspaces of a typical forearm with 9 degrees of freedom, a typical finger modeled as a 4-degree-of-freedom system, and a lower extremity with 4 degrees of freedom are discussed. The results show that, using the proposed formulation, joint limits play an important role in distinguishing the barriers.
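The idea of a joint-limit-bounded reach envelope can be sketched with a toy model. The hypothetical two-link planar arm below (link lengths and joint limits are invented assumptions, far simpler than the paper's 9-DOF forearm) samples the joint box and reports the farthest reachable hand distance.

```python
import math, itertools

# Hypothetical 2-link planar arm: sample joint angles within their limits
# and take the farthest hand position found. Purely illustrative values.
L1, L2 = 0.30, 0.25                    # link lengths in metres (assumed)
LIM1 = (-math.pi / 2, math.pi / 2)     # shoulder joint limits (rad)
LIM2 = (0.0, 2.5)                      # elbow joint limits (rad)

def hand_position(q1, q2):
    """Forward kinematics of the planar 2-link chain."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def max_reach(samples=100):
    """Grid-sample the joint box; return the largest hand distance found."""
    q1s = [LIM1[0] + i * (LIM1[1] - LIM1[0]) / samples for i in range(samples + 1)]
    q2s = [LIM2[0] + i * (LIM2[1] - LIM2[0]) / samples for i in range(samples + 1)]
    return max(math.hypot(*hand_position(q1, q2))
               for q1, q2 in itertools.product(q1s, q2s))

# Full extension (q2 = 0) lies inside LIM2, so the maximum equals L1 + L2.
```

Shrinking `LIM2` so that full extension is excluded immediately shrinks the envelope, which is the role joint limits play in shaping the workspace barriers.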
崔永刚; 廖栩鹤; 王荣福; 范岩; 邸丽娟; 刘红洁; 赵媛
2012-01-01
[Purpose] To evaluate the correlations of the maximum standardized uptake value (SUVmax) of 18F-FDG PET/CT and the short diameters of pulmonary lesions with the pathological types of lung cancer, and to assess the feasibility of using SUVmax as an important evaluation parameter for lung cancer diagnosis. [Methods] One hundred and twenty-seven cases with clinically suspected lung cancer undergoing 18F-FDG PET/CT from July 2010 to February 2012 were retrospectively reviewed. All PET/CT images were analyzed visually and semiquantitatively by 2 physicians. In each case, the SUVmax and the short diameter of the lesions were calculated from the PET/CT images. All data were analyzed with statistical software. [Results] A positive correlation between the SUVmax and the short diameter of the lesions was found in both the malignant and the benign group. A significant difference in SUVmax between the malignant and benign groups was observed (P=0.0002), but not in the short diameters of the lesions (P=0.0938). The short diameter of the squamous cell carcinoma group was significantly different from that of the adenocarcinoma group (P=0.0059). However, there were no significant differences in SUVmax or short diameters between the non-small cell lung cancer (NSCLC) group and the small cell lung cancer group (P=0.8932 and P=0.6355, respectively). [Conclusion] The 18F-FDG PET/CT SUVmax might be used as an important parameter to differentiate malignant tumors from benign ones, contributing to the diagnosis and differential diagnosis of pulmonary lesions.
Maximum speeds and alpha angles of flowing avalanches
McClung, David; Gauer, Peter
2016-04-01
A flowing avalanche is one which initiates as a slab and, if consisting of dry snow, will be enveloped in a turbulent snow dust cloud once the speed reaches about 10 m/s. A flowing avalanche has a dense core of flowing material which dominates the dynamics by serving as the driving force for downslope motion. The flow thickness is typically on the order of 1-10 m, which is on the order of about 1% of the length of the flowing mass. We have collected estimates of maximum frontal speed um (m/s) from 118 avalanche events. The analysis is given here with the aim of using the maximum speed scaled with some measure of the terrain scale over which the avalanches ran. We have chosen two measures for scaling, from McClung (1990), McClung and Schaerer (2006) and Gauer (2012): √H0 and √S0, where H0 is the total vertical drop and S0 the total path length traversed. Our data consist of 118 avalanches with H0 (m) estimated and 106 with S0 (m) estimated. Of these, we have 29 values with H0 (m), S0 (m) and um (m/s) estimated accurately, with the avalanche speeds measured all or nearly all along the path. The remainder of the data set includes approximate estimates of um (m/s) from timing the avalanche motion over a known section of the path where approximately maximum speed is expected, and with either H0 or S0 or both estimated. Our analysis consists of fitting the values of um/√H0 and um/√S0 to probability density functions (pdfs) to estimate the exceedance probability for the scaled ratios. In general, we found that the larger data sets were best fit by a beta pdf, while for the subset of 29 a shifted log-logistic (s l-l) pdf was best. Our determinations resulted from fitting the values to 60 different pdfs considering five goodness-of-fit criteria: three goodness-of-fit statistics, K-S (Kolmogorov-Smirnov), A-D (Anderson-Darling) and C-S (Chi-squared), plus probability plots (P-P) and quantile plots (Q-Q). For less than 10% probability of exceedance the results show that
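The speed-scaling step can be illustrated with a short sketch: each event's maximum frontal speed is divided by √H0, and an exceedance probability is read off. The event data below are invented, and the raw empirical exceedance curve here stands in for the beta / shifted log-logistic fits the study actually uses.

```python
import math

# Scale maximum frontal speeds u_m by sqrt(H0) and compute an empirical
# exceedance probability. Event values are hypothetical illustrations.
events = [  # (u_m in m/s, H0 vertical drop in m)
    (35.0, 700.0), (48.0, 900.0), (22.0, 400.0), (60.0, 1600.0), (41.0, 800.0),
]
ratios = sorted(u / math.sqrt(h) for u, h in events)

def exceedance(x):
    """Fraction of events whose scaled speed u_m / sqrt(H0) exceeds x."""
    return sum(1 for r in ratios if r > x) / len(ratios)
```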
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Advanced Reach Tool (ART): development of the mechanistic model.
Fransman, Wouter; Van Tongeren, Martie; Cherrie, John W; Tischer, Martin; Schneider, Thomas; Schinkel, Jody; Kromhout, Hans; Warren, Nick; Goede, Henk; Tielemans, Erik
2011-11-01
This paper describes the development of the mechanistic model within a collaborative project, referred to as the Advanced REACH Tool (ART) project, to develop a tool to model inhalation exposure for workers sharing similar operational conditions across different industries and locations in Europe. The ART mechanistic model is based on a conceptual framework that adopts a source receptor approach, which describes the transport of a contaminant from the source to the receptor and defines seven independent principal modifying factors: substance emission potential, activity emission potential, localized controls, segregation, personal enclosure, surface contamination, and dispersion. ART currently differentiates between three different exposure types: vapours, mists, and dust (fumes, fibres, and gases are presently excluded). Various sources were used to assign numerical values to the multipliers to each modifying factor. The evidence used to underpin this assessment procedure was based on chemical and physical laws. In addition, empirical data obtained from literature were used. Where this was not possible, expert elicitation was applied for the assessment procedure. Multipliers for all modifying factors were peer reviewed by leading experts from industry, research institutes, and public authorities across the globe. In addition, several workshops with experts were organized to discuss the proposed exposure multipliers. The mechanistic model is a central part of the ART tool and with advancing knowledge on exposure, determinants will require updates and refinements on a continuous basis, such as the effect of worker behaviour on personal exposure, 'best practice' values that describe the maximum achievable effectiveness of control measures, the intrinsic emission potential of various solid objects (e.g. metal, glass, plastics, etc.), and extending the applicability domain to certain types of exposures (e.g. gas, fume, and fibre exposure).
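The source-receptor structure described above can be sketched as a multiplicative score over the seven modifying factors. The multiplier values below are invented placeholders; the real ART tool assigns calibrated, peer-reviewed multipliers to each factor.

```python
# Minimal sketch of a multiplicative exposure score in the spirit of the
# ART mechanistic model: each modifying factor contributes one multiplier.
FACTORS = (
    "substance_emission", "activity_emission", "localized_controls",
    "segregation", "personal_enclosure", "surface_contamination", "dispersion",
)

def exposure_score(multipliers):
    """Product of the seven modifying-factor multipliers."""
    if set(multipliers) != set(FACTORS):
        raise ValueError("need exactly the seven ART modifying factors")
    score = 1.0
    for name in FACTORS:
        score *= multipliers[name]
    return score

example = {  # illustrative values only, not ART's calibrated multipliers
    "substance_emission": 0.5, "activity_emission": 3.0,
    "localized_controls": 0.3, "segregation": 1.0,
    "personal_enclosure": 1.0, "surface_contamination": 1.0,
    "dispersion": 2.0,
}
# 0.5 * 3.0 * 0.3 * 1.0 * 1.0 * 1.0 * 2.0 ≈ 0.9
```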
How Do Chinese Enterprises Look at REACH?
无
2007-01-01
The new European REACH (Registration, Evaluation, Authorization of Chemicals) regulation has come into force. As soon as the REACH white paper was issued, Chinese enterprises started to research the possible impacts of REACH and prepare to cope with them. How then do these Chinese enterprises look at REACH? Following are views of some Chinese enterprises exporting chemical products to the European Union.
José A Raynal
2008-01-01
The method of the Principle of Maximum Entropy (POME) applied to the estimation of parameters of the extreme value type I (EVI) distribution is analyzed. The POME method has been compared with others in widespread use, namely the methods of moments (MOM), maximum likelihood (ML) and probability weighted moments (PWM), with both real flood data and through distributional sampling experiments. The POME method proved to be another viable option for estimating the parameters of the EVI distribution, although not as good as the PWM, ML and MOM methods. It was also found that the POME method performs better when the sample contains more than 50 values of maximum annual floods.
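Of the classical estimators compared above, the method of moments (MOM) for the EVI (Gumbel) distribution is the simplest to sketch: the scale is β = s·√6/π and the location is μ = x̄ − γβ, with γ Euler's constant. The flood sample below is invented for illustration.

```python
import math, statistics

# Method-of-moments (MOM) fit of the extreme value type I (Gumbel)
# distribution to annual maximum floods. Sample data are hypothetical.
EULER_GAMMA = 0.5772156649015329

def gumbel_mom(sample):
    """MOM estimates: scale beta = s*sqrt(6)/pi, location mu = mean - gamma*beta."""
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)          # sample standard deviation
    beta = sd * math.sqrt(6) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

floods = [820, 1140, 960, 1310, 1020, 890, 1205, 1075]  # m^3/s, invented
mu, beta = gumbel_mom(floods)
```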
LHC Report: reaching high intensity
Jan Uythoven
2015-01-01
After both beams had been ramped to their full energy of 6.5 TeV, the last two weeks saw the beam commissioning process advancing on many fronts. An important milestone was achieved when operators succeeded in circulating a nominal-intensity bunch. During the operation, some sudden beam losses resulted in beam dumps at top energy, a problem that needed to be understood and resolved. In 2015 the LHC will be circulating around 2800 bunches in each beam, and each bunch will contain just over 1 × 10^11 protons. Until a few days ago, commissioning was taking place with single bunches of 5 × 10^9 protons. The first nominal bunch with an intensity of 1 × 10^11 protons was injected on Tuesday, 21 April. In order to circulate such a high-intensity bunch safely, the whole protection system must be working correctly: the collimators, which protect the aperture, are set at preliminary values known as coarse settings; all kicker magnets for injecting and extracting the beams are commissioned with beam an...
Has the world economy reached its globalization limit?
Miskiewicz, Janusz
2009-01-01
The problem of measuring economic globalization is discussed. Four macroeconomic indices of twenty of the "richest" countries are examined. Four types of "distances" are calculated. Two types of networks are then constructed for each distance measure definition. It is shown that the globalization process can best be characterised by an entropy measure based on the entropy Manhattan distance. It is observed that a globalization maximum was reached in the interval 1970-2000. More recently, a deglobalization process is observed.
ALMA telescope reaches new heights
2009-09-01
of the Array Operations Site. This means surviving strong winds and temperatures between +20 and -20 Celsius whilst being able to point precisely enough that they could pick out a golf ball at a distance of 15 km, and to keep their smooth reflecting surfaces accurate to better than 25 micrometres (less than the typical thickness of a human hair). Once the transporter reached the high plateau it carried the antenna to a concrete pad - a docking station with connections for power and fibre optics - and positioned it with an accuracy of a few millimetres. The transporter is guided by a laser steering system and, just like some cars today, also has ultrasonic collision detectors. These sensors ensure the safety of the state-of-the-art antennas as the transporter drives them across what will soon be a rather crowded plateau. Ultimately, ALMA will have at least 66 antennas distributed over about 200 pads, spread over distances of up to 18.5 km and operating as a single, giant telescope. Even when ALMA is fully operational, the transporters will be used to move the antennas between pads to reconfigure the telescope for different kinds of observations. "Transporting our first antenna to the Chajnantor plateau is an epic feat which exemplifies the exciting times in which ALMA is living. Day after day, our global collaboration brings us closer to the birth of the most ambitious ground-based astronomical observatory in the world", said Thijs de Graauw, ALMA Director. This first ALMA antenna at the high site will soon be joined by others and the ALMA team looks forward to making their first observations from the Chajnantor plateau. They plan to link three antennas by early 2010, and to make the first scientific observations with ALMA in the second half of 2011. ALMA will help astronomers answer important questions about our cosmic origins. The telescope will observe the Universe using light with millimetre and submillimetre wavelengths, between infrared light and radio waves in
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving the satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. Detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then MPPT effectiveness of various converter systems is verified through simulations. Proposed theory and analysis is validated through experimental investigations.
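The tracking behaviour the abstract analyses can be illustrated with the classic perturb-and-observe algorithm on a toy power-versus-duty curve. The quadratic `pv_power` below is an invented stand-in for the real array/converter interaction (a real tracker measures array voltage and current), so this is a sketch of the control idea only.

```python
# Minimal perturb-and-observe MPPT sketch on a toy power curve.
def pv_power(duty):
    """Toy power curve (watts) with its maximum power point at duty = 0.55."""
    return max(0.0, 100.0 - 400.0 * (duty - 0.55) ** 2)

def perturb_and_observe(duty=0.3, step=0.01, iters=200):
    """Hill-climb: keep stepping in the direction that last raised power."""
    power = pv_power(duty)
    direction = 1
    for _ in range(iters):
        duty += direction * step
        new_power = pv_power(duty)
        if new_power < power:        # power fell: reverse the perturbation
            direction = -direction
        power = new_power
    return duty

final_duty = perturb_and_observe()   # settles near, and oscillates about, 0.55
```

The residual oscillation around the maximum power point is the well-known drawback of fixed-step perturb-and-observe, which motivates the converter-topology and load-range analysis discussed in the paper.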
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}\,y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which the Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
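The quoted scaling v_h ~ T_BBN^2 / (M_pl y_e^5) can be checked with a back-of-the-envelope calculation. The inputs below are standard order-of-magnitude values (T_BBN taken as ~1 MeV), and the relation is only expected to reproduce v_h = O(100 GeV), not a precise number.

```python
import math

# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5), all in GeV.
T_BBN = 1.0e-3          # ~1 MeV, onset of Big Bang nucleosynthesis
M_PL = 1.22e19          # Planck mass
M_E = 0.511e-3          # electron mass
V_OBSERVED = 246.0      # observed Higgs vacuum expectation value

y_e = math.sqrt(2) * M_E / V_OBSERVED   # electron Yukawa coupling ~ 2.9e-6
v_h = T_BBN ** 2 / (M_PL * y_e ** 5)    # comes out at a few hundred GeV
```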
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
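The bound stated above (maximum seismic moment limited to injected volume times the modulus of rigidity) converts directly into a maximum moment magnitude via the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 − 9.1). The shear modulus and example volume below are typical illustrative values, not figures from a specific case history.

```python
import math

# Upper-bound magnitude for fluid-injection-induced earthquakes:
# M0_max (N*m) = injected volume (m^3) * modulus of rigidity (Pa).
RIGIDITY = 3.0e10   # Pa, a typical crustal shear modulus (assumed)

def max_moment_magnitude(injected_volume_m3):
    """Moment-magnitude bound for a given total injected volume."""
    m0 = RIGIDITY * injected_volume_m3           # seismic moment bound, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)  # Hanks-Kanamori relation

# One million cubic metres of injected wastewater gives a bound near Mw 5:
mw = max_moment_magnitude(1.0e6)
```

This is consistent with the abstract's observation that wastewater disposal, which involves the largest volumes, is associated with maximum magnitudes sometimes exceeding 5.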
Effect of the equation of state on the maximum mass of differentially rotating neutron stars
Studzińska, A. M.; Kucaba, M.; Gondek-Rosińska, D.; Villain, L.; Ansorg, M.
2016-12-01
Knowing the value of the maximum mass of a differentially rotating relativistic star is a key step towards the understanding of the signals to be expected from the merger of binary neutron stars, one of the most awaited alternative sources of gravitational waves after binary black holes. In this paper, we study the effects of differential rotation and of the equation of state on the maximum mass of rotating neutron stars modelled as relativistic polytropes with various adiabatic indices. Calculations are performed using a highly accurate numerical code, based on a multidomain spectral method. We thoroughly explore the parameter space and determine how the maximum mass depends on the stiffness, on the degree of differential rotation and on the maximal density, taking into account all the types of solutions that were proven to exist in a preceding paper. The highest increase with respect to the maximum mass for non-rotating stars with the same equation of state is reached for a moderate stiffness. With differential rotation, the maximum mass can even be 3-4 times higher than it is for static stars. This result may have important consequences for the gravitational wave signal from coalescing neutron star binaries or for some supernovae events.
New symmetry of intended curved reaches
Torres Elizabeth B
2010-04-01
Abstract Background Movement regularities are inherently present in the automated goal-directed motions of the primate's arm system. They can provide important signatures of intentional behaviours driven by sensory-motor strategies, but it remains unknown whether during motor learning new regularities can be uncovered despite high variability in the temporal dynamics of the hand motions. Methods We investigated the conservation and violation of a new movement regularity obtained from the hand motions traced by two untrained monkeys as they learned to reach outwards towards spatial targets while avoiding obstacles in the dark. The regularity pertains to the transformation from postural to hand paths that aim at visual goals. Results In length-minimizing curves, the area enclosed between the Euclidean straight line and the curve up to its point of maximum curvature is 1/2 of the total area. A similar trend is found if one examines the perimeter. This new movement regularity remained robust to striking changes in arm dynamics that gave rise to changes in the speed of the reach, in the hand path curvature, and in the arm's postural paths. The area and perimeter ratios characterizing the regularity co-varied across repeats of randomly presented targets whenever the transformation from posture to hand paths was compliant with the intended goals. To interpret this conservation, and the cases in which the regularity was violated and recovered, we provide a geometric model that characterizes arm-to-hand and hand-to-arm motion paths as length-minimizing curves (geodesics) in a non-Euclidean space. Whenever the transformation from one space to the other is distance-metric preserving (isometric), the two symmetric ratios co-vary. Otherwise, the symmetric ratios and their co-variation are violated. As predicted by the model, we found empirical evidence for the violation of this movement regularity whenever the intended goals mismatched the actions. This
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Stream Habitat Reach Summary - NCWAP [ds158
California Department of Resources — The Stream Habitat - NCWAP - Reach Summary [ds158] shapefile contains in-stream habitat survey data summarized to the stream reach level. It is a derivative of the...
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Clades reach highest morphological disparity early in their evolution
Hughes, Martin; Gerber, Sylvain; Albion Wills, Matthew
2013-08-01
There are few putative macroevolutionary trends or rules that withstand scrutiny. Here, we test and verify the purported tendency for animal clades to reach their maximum morphological variety relatively early in their evolutionary histories (early high disparity). We present a meta-analysis of 98 metazoan clades radiating throughout the Phanerozoic. The disparity profiles of groups through time are summarized in terms of their center of gravity (CG), with values above and below 0.50 indicating top- and bottom-heaviness, respectively. Clades that terminate at one of the "big five" mass extinction events tend to have truncated trajectories, with a significantly top-heavy CG distribution overall. The remaining 63 clades show the opposite tendency, with a significantly bottom-heavy mean CG (relatively early high disparity). Resampling tests are used to identify groups with a CG significantly above or below 0.50; clades not terminating at a mass extinction are three times more likely to be significantly bottom-heavy than top-heavy. Overall, there is no clear temporal trend in disparity profile shapes from the Cambrian to the Recent, and early high disparity is the predominant pattern throughout the Phanerozoic. Our results do not allow us to distinguish between ecological and developmental explanations for this phenomenon. To the extent that ecology has a role, however, the paucity of bottom-heavy clades radiating in the immediate wake of mass extinctions suggests that early high disparity more probably results from the evolution of key apomorphies at the base of clades rather than from physical drivers or catastrophic ecospace clearing.
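The center-of-gravity (CG) summary used above can be sketched in a few lines: the disparity-weighted mean of time, rescaled so a clade's duration runs from 0 to 1, with CG < 0.5 marking a bottom-heavy (early high disparity) profile. The profile values below are invented illustrations.

```python
# Center of gravity of a disparity-through-time profile, normalized to [0, 1].
def center_of_gravity(times, disparities):
    """Disparity-weighted mean time, rescaled by the clade's duration."""
    cg = sum(t * d for t, d in zip(times, disparities)) / sum(disparities)
    return (cg - times[0]) / (times[-1] - times[0])

times = [0, 10, 20, 30, 40]              # Myr since clade origin (hypothetical)
early_peak = [1.0, 5.0, 3.0, 2.0, 1.0]   # disparity peaks early in the history
cg = center_of_gravity(times, early_peak)   # < 0.5: bottom-heavy profile
```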
GROWTH ANALYSIS AND ASSESSMENT OF PIG’S BIOLOGICAL MAXIMUM
Dragutin Vincek
2010-06-01
The aim of this study was to determine a mathematical model which can be used to describe the growth of domestic animals in an attempt to predict the optimal time of slaughter/weight or the development of body parts or tissues, and to estimate the biological maximum. The study was conducted on 60 pigs (30 barrows and 30 gilts) in the interval between the ages of 49 and 215 days. By applying the generalized logistic function, the growth of live weight and tissues was described. The observed gilts reached the inflection point in approximately 121 days (I = 70.7 kg). The point at which the interval of intensive growth starts was at the age of approximately 42 days (TB = 17.35 kg), and the saturation point the gilts reached at the age of 200.5 days (TC = 126.74 kg). The estimated biological maximum weight of gilts was 179.79 kg. The barrows reached the inflection point in approximately 149 days (I = 92.2 kg). The point at which the intensive interval of growth starts was estimated at the age of approximately 52 days (TB = 22.93 kg), and the saturation point the barrows reached at the age of 245 days (TC = 164.8 kg). The estimated biological maximum weight of barrows was 233.25 kg. Muscle tissue of gilts reached the inflection point (I = 28.46 kg) in approximately 110 days. The point at which the interval of intensive growth of muscle tissue starts (TB = 6.06 kg) was estimated at approximately 53 days, and the saturation point of growth (TC = 52.25 kg) the muscle tissue of gilts reached at the age of 162 days. The estimated maximum biological growth of muscle tissue in gilts was 75.79 kg. The muscle tissue of barrows reached the inflection point (I = 28.78 kg) in approximately 118 days, and the point at which the interval of intensive growth starts (TB = 6.36 kg) at the age of approximately 35 days. The saturation point of muscle tissue growth in barrows (TC = 52.51 kg) was reached at the age of 202 days. The estimated maximum biological growth of muscle tissue in barrows was 75.74 kg. The
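The key quantities above (inflection point, biological maximum) can be sketched on a generalized logistic (Richards) curve. The asymptote below mimics the gilts' estimated biological maximum, but the shape parameters are invented, so the resulting inflection age and weight will not match the study's fitted values.

```python
import math

# Richards (generalized logistic) growth curve and a numerical search for
# its inflection point (age of fastest daily gain). Parameters are assumed.
def richards(t, A=179.79, k=0.03, t0=100.0, nu=0.5):
    """Weight (kg) at age t days; A is the upper asymptote (biological maximum)."""
    return A * (1.0 + math.exp(-k * (t - t0))) ** (-1.0 / nu)

def inflection(days=range(1, 400)):
    """Age (days) where the central-difference daily weight gain is largest."""
    return max(days, key=lambda t: richards(t + 0.5) - richards(t - 0.5))

t_infl = inflection()        # age of maximum growth rate
w_infl = richards(t_infl)    # weight at the inflection point
```

Note that for a Richards curve the inflection weight is A(1+ν)^(−1/ν), not A/2 as in the plain logistic, which is why the study can report an inflection weight (70.7 kg) well below half the biological maximum (179.79 kg).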
何露; 闵庆文; 袁正
2011-01-01
Yunnan province has the biggest ancient tea tree garden in the world, widely distributed in Puer, Lincang, Xishuangbanna, and Baoshan in the middle and lower reaches of the Lancang River. These places have a long history of tea plantation and are rich in ancient tea plant resources. They contain the largest and oldest ancient tea gardens in the world and many ancient wild tea trees, including wild-type, cultivated-type, and transitional-type tea plants. The ancient tea plant has not only ecological value but also economic and cultural value. The ancient tea garden ecosystem is a typical example of the integration of conservation and utilization of natural resources. It is rich in biodiversity and conducive to the conservation of tea germplasm resources. There is no fertilizer or pesticide input during plantation. Ancient tea trees and the wild colonies are the source of tea. The main biochemical components of ancient tea tree fresh leaves, consisting of tea polyphenols, catechin, amino acids, and caffeine, are generally higher than those of tableland tea fresh leaves, which means that the ancient tea has better quality and is organic. This results in a higher price for the ancient tea. The famous ancient tea trees and ancient tea gardens, combined with the local tremendous tea culture, are excellent resources for the development of ecological tourism. All of this can promote local sustainable economic development and increase local farmers' income. The ancient wild tea trees, transitional-type tea trees, and cultivated tea trees demonstrate that Yunnan is the place of origin of tea and tea cultivation. Different minorities in the middle and lower reaches of the Lancang River have developed different tea cultures, including the way that tea is made and consumed, the way that people interact with tea, and the aesthetics surrounding tea drinking. It is noted that over the past fifty years, the area of the ancient tea plant has been decreasing due to population growth, irrational picking, and
Bozym, David J; Uralcan, Betül; Limmer, David T; Pope, Michael A; Szamreta, Nicholas J; Debenedetti, Pablo G; Aksay, Ilhan A
2015-07-02
We use electrochemical impedance spectroscopy to measure the effect of diluting a hydrophobic room temperature ionic liquid with miscible organic solvents on the differential capacitance of the glassy carbon-electrolyte interface. We show that the minimum differential capacitance increases with dilution and reaches a maximum value at ionic liquid contents near 5-10 mol% (i.e., ∼1 M). We provide evidence that mixtures with 1,2-dichloroethane, a low-dielectric constant solvent, yield the largest gains in capacitance near the open circuit potential when compared against two traditional solvents, acetonitrile and propylene carbonate. To provide a fundamental basis for these observations, we use a coarse-grained model to relate structural variations at the double layer to the occurrence of the maximum. Our results reveal the potential for the enhancement of double-layer capacitance through dilution.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics; yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has been used not only as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
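The maximum entropy principle mentioned in this record can be made concrete with a minimal sketch: given discrete outcomes and a single mean constraint, the least-biased distribution has the Boltzmann form p_i ∝ exp(-λ x_i). The bisection search and the loaded-die example below are standard textbook material, not taken from this article.

```python
import numpy as np

def _probs(lam, values):
    """Boltzmann-form weights p_i ∝ exp(-lam * x_i), shifted for stability."""
    a = -lam * values
    a = a - a.max()          # avoid overflow before exponentiating
    w = np.exp(a)
    return w / w.sum()

def maxent_dist(values, target_mean, lam_lo=-50.0, lam_hi=50.0, iters=200):
    """Maximum-entropy distribution over discrete `values` with a fixed
    mean; the Lagrange multiplier is found by bisection, since the
    constrained mean is monotonically decreasing in lam."""
    values = np.asarray(values, dtype=float)
    for _ in range(iters):
        mid = 0.5 * (lam_lo + lam_hi)
        if _probs(mid, values) @ values > target_mean:
            lam_lo = mid     # mean too high -> need larger lam
        else:
            lam_hi = mid
    return _probs(0.5 * (lam_lo + lam_hi), values)

# Jaynes' loaded-die example: faces 1..6 constrained to average 4.5.
p = maxent_dist(np.arange(1, 7), 4.5)
```

The same machinery generalizes to several constraints (one multiplier each), which is the form typically used in the drug-discovery applications the article surveys.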
Efficiency at maximum power of a discrete feedback ratchet
Jarillo, Javier; Tangarife, Tomás; Cao, Francisco J.
2016-01-01
Efficiency at maximum power is found to be of the same order for a feedback ratchet and for its open-loop counterpart. However, feedback increases the output power by up to a factor of five. This increase in output power is due to the increase in energy input and the effective entropy reduction obtained as a consequence of feedback. Optimal efficiency at maximum power is reached for time intervals between feedback actions two orders of magnitude smaller than the characteristic time of diffusion over a ratchet period length. The efficiency is computed consistently, taking into account the correlation between the control actions. We consider a feedback control protocol for a discrete feedback flashing ratchet, which works against an external load. We maximize the power output by optimizing the parameters of the ratchet, the controller, and the external load. The maximum power output is found to be upper bounded, so the attainable extracted power is limited. We then compute an upper bound for the efficiency of this isothermal feedback ratchet at maximum power output. We make this computation by applying recent developments in the thermodynamics of feedback-controlled systems, which give an equation to compute the entropy reduction due to information. However, this equation requires the computation of the probability of each of the possible sequences of the controller's actions. This computation becomes involved when the sequence of the controller's actions is non-Markovian, as is the case in most feedback ratchets. We here introduce an alternative procedure to set strong bounds on the entropy reduction in order to compute its value. In this procedure the bounds are evaluated in a quasi-Markovian limit, which emerges when there are big differences between the stationary probabilities of the system states. These big differences are an effect of the potential strength, which minimizes the departures from the Markovianity of the sequence of control actions, allowing also to
de Abreu, Daniela Cristina Carvalho; Takara, Kelly; Metring, Nathalia Lopes; Reis, Julia Guimaraes; Cliquet, Alberto, Jr.
2012-01-01
We aimed to evaluate the influence of different types of wheelchair seats on paraplegic individuals' postural control using a maximum anterior reaching test. Balance was evaluated during 50, 75, and 90% of each individual's maximum reach in the forward direction using two different cushions on the seat (one foam and one gel) and a no-cushion condition…
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
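As a rough illustration of the variance-maximization idea only (not the paper's column-generation method, which learns binary-valued hash functions directly and preserves local structure), one can binarize projections onto the directions of maximum variance; `pca_hash` below is a hypothetical helper in that spirit.

```python
import numpy as np

def pca_hash(X, n_bits):
    """Toy variance-maximizing hashing: project centered data onto the
    top principal directions (directions of maximum variance) and
    binarize by sign. A simplified stand-in for learned hashing."""
    Xc = X - X.mean(axis=0)
    # top-n_bits eigenvectors of the sample covariance matrix
    cov = Xc.T @ Xc / len(Xc)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    W = eigvecs[:, np.argsort(eigvals)[::-1][:n_bits]]
    return (Xc @ W > 0).astype(np.uint8)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))    # synthetic data, 8-dimensional
codes = pca_hash(X, 4)           # 4-bit binary codes
```

High-variance projections spread points across both sides of each hyperplane, so the resulting bits carry more information per dimension than random projections would.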
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying-constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
Full Text Available We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G, represented by the entropic force, can be abolished. Among them are the varying-constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
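For orientation, the quoted limit F_max = c^4/4G can be evaluated numerically; the snippet below is a simple check, not part of either record.

```python
# Numerical value of the conjectured maximum tension F_max = c^4 / (4G).
c = 2.99792458e8        # speed of light, m/s (exact by definition)
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2 (CODATA 2018)

F_max = c**4 / (4.0 * G)   # ≈ 3.0e43 N
```

The enormous scale (about 3 × 10^43 N) is why the bound only becomes relevant for horizons and strong-gravity settings like those the papers discuss.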
Anderson, Carryn M., E-mail: carryn-anderson@uiowa.edu [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa (United States); Chang, Tangel [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa (United States); Graham, Michael M. [Department of Nuclear Medicine, University of Iowa, Iowa City, Iowa (United States); Marquardt, Michael D. [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa (United States); Button, Anna; Smith, Brian J. [Department of Biostatistics, University of Iowa, Iowa City, Iowa (United States); Menda, Yusuf [Department of Nuclear Medicine, University of Iowa, Iowa City, Iowa (United States); Sun, Wenqing [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa (United States); Pagedar, Nitin A. [Department of Otolaryngology—Head and Neck Surgery, University of Iowa, Iowa City, Iowa (United States); Buatti, John M. [Department of Radiation Oncology, University of Iowa, Iowa City, Iowa (United States)
2015-03-01
Purpose: To evaluate dynamic [^18F]-fluorodeoxyglucose (FDG) uptake methodology as a post–radiation therapy (RT) response assessment tool, potentially enabling accurate tumor and therapy-related inflammation differentiation, improving the posttherapy value of FDG–positron emission tomography/computed tomography (FDG-PET/CT). Methods and Materials: We prospectively enrolled head-and-neck squamous cell carcinoma patients who completed RT, with scheduled 3-month post-RT FDG-PET/CT. Patients underwent our standard whole-body PET/CT scan at 90 minutes, with the addition of head-and-neck PET/CT scans at 60 and 120 minutes. Maximum standardized uptake values (SUV_max) of regions of interest were measured at 60, 90, and 120 minutes. The SUV_max slope between 60 and 120 minutes and the change of SUV_max slope before and after 90 minutes were calculated. Data were analyzed by primary site and nodal site disease status using the Cox regression model and Wilcoxon rank sum test. Outcomes were based on pathologic and clinical follow-up. Results: A total of 84 patients were enrolled, with 79 primary and 43 nodal evaluable sites. Twenty-eight sites were interpreted as positive or equivocal (18 primary, 8 nodal, 2 distant) on 3-month 90-minute FDG-PET/CT. Median follow-up was 13.3 months. All measured SUV endpoints predicted recurrence. Change of SUV_max slope after 90 minutes more accurately identified nonrecurrence in positive or equivocal sites than our current standard of SUV_max ≥2.5 (P=.02). Conclusions: The positive predictive value of post-RT FDG-PET/CT may significantly improve using novel second derivative analysis of dynamic triphasic FDG-PET/CT SUV_max slope, accurately distinguishing tumor from inflammation on positive and equivocal scans.
Tripling the maximum imaging depth with third-harmonic generation microscopy.
Yildirim, Murat; Durr, Nicholas; Ben-Yakar, Adela
2015-09-01
The growing interest in performing high-resolution, deep-tissue imaging has galvanized the use of longer excitation wavelengths and three-photon-based techniques in nonlinear imaging modalities. This study presents a threefold improvement in maximum imaging depth of ex vivo porcine vocal folds using third-harmonic generation (THG) microscopy at 1552-nm excitation wavelength compared to two-photon microscopy (TPM) at 776-nm excitation wavelength. The experimental, analytical, and Monte Carlo simulation results reveal that THG improves the maximum imaging depth observed in TPM significantly from 140 to 420 μm in a highly scattered medium, reaching the expected theoretical imaging depth of seven extinction lengths. This value almost doubles the previously reported normalized imaging depths of 3.5 to 4.5 extinction lengths using three-photon-based imaging modalities. Since tissue absorption is substantial at the excitation wavelength of 1552 nm, this study assesses the tissue thermal damage during imaging by obtaining the depth-resolved temperature distribution through a numerical simulation incorporating an experimentally obtained thermal relaxation time (τ). By shuttering the laser for a period of 2τ, the numerical algorithm estimates a maximum temperature increase of ∼2°C at the maximum imaging depth of 420 μm. The paper demonstrates that THG imaging using 1552 nm as an illumination wavelength with effective thermal management proves to be a powerful deep imaging modality for highly scattering and absorbing tissues, such as scarred vocal folds.
An approximate, maximum terminal velocity descent to a point
Eisler, G.R.; Hull, D.G.
1987-01-01
No closed-form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled-data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum terminal velocity trajectory to a constrained final altitude and computation of the resulting unconstrained ground range, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed ground range. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensates for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent. 6 refs., 3 figs.
Reach preparation enhances visual performance and appearance.
Rolfs, Martin; Lawrence, Bonnie M; Carrasco, Marisa
2013-10-19
We investigated the impact of the preparation of reach movements on visual perception by simultaneously quantifying both an objective measure of visual sensitivity and the subjective experience of apparent contrast. Using a two-by-two alternative forced choice task, observers compared the orientation (clockwise or counterclockwise) and the contrast (higher or lower) of a Standard Gabor and a Test Gabor, the latter of which was presented during reach preparation, at the reach target location or the opposite location. Discrimination performance was better overall at the reach target than at the opposite location. Perceived contrast increased continuously at the target relative to the opposite location during reach preparation, that is, after the onset of the cue indicating the reach target. The finding that performance and appearance do not evolve in parallel during reach preparation points to a distinction with saccade preparation, for which we have shown previously there is a parallel temporal evolution of performance and appearance. Yet akin to saccade preparation, this study reveals that overall reach preparation enhances both visual performance and appearance.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse, however, is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization is given of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie exactly below the estimated possible maximum values, as expected, while the fourth available field-recorded PPV lies close to and a bit higher than the estimated maximum possible PPV. The comparison results show that the predicted PPVs from the proposed prediction model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints due to application of a two-dimensional compressional wave at the boundary of a tunnel or a borehole.
Karamuz, Emilia; Romanowicz, Renata; Booij, Martijn
2014-05-01
There is a vast literature on the influence of land use changes on rainfall-runoff processes. The problem is difficult as it requires separation of climatic and water management related changes from land use influences. The present paper addresses the problem of the influence of land use changes on maximum flows at cross-sections along the middle River Vistula reach. We adopt a methodology tested at the catchment scale, which consists of an optimisation of a rainfall-runoff model using a moving time horizon and analysis of the variability of model parameters. In the present application, it consists of an analysis of changes of roughness coefficients of a distributed HEC-RAS model, optimised using a moving five-year window. The chosen river reach (between Annopol and Gusin) has a recorded history of land use changes over 50 years (from 1949 to 2001), which included 36% of the study area. The nature of the changes is complex and shows different trends for different plant communities and sections of the valley. Generally, there has been a several percent increase in the area occupied by forests and grassland communities and a slight increase in the proportion of scrub. The first step of the procedure is to define the river reaches that have recorded information on land use changes. The second step is to perform a moving window optimisation of the HEC-RAS model for a chosen river reach. In order to assess the influence of land use changes on maximum flow values, the goodness-of-fit of the simulation of annual maximum water levels is used as an optimisation criterion. In this way the influence of land use changes on maximum inundation extent related to flood risk assessment can be estimated. The final step is to analyse the results and relate the model parameter changes to historical land use changes. We report here the results of the first two steps of the procedure. This work was partly supported from the project "Stochastic flood forecasting system (The River Vistula
Evaluation of a hydrological model based on Bidirectional Reach (BReach)
Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.
2016-04-01
Evaluation and discrimination of model structures is crucial to ensure an appropriate use of hydrological models. When evaluating model results by aggregating their quality in (a subset of) individual observations, overall results of this analysis sometimes conceal important detailed information about model structural deficiencies. Analyzing model results within their local (time) context can uncover this detailed information. In this research, a methodology called Bidirectional Reach (BReach) is proposed to evaluate and analyze results of a hydrological model by assessing the maximum left and right reach in each observation point that is used for model evaluation. These maximum reaches express the capability of the model to describe a subset of the evaluation data both in the direction of the previous data (left) and of the following data (right). This capability is evaluated on two levels. First, on the level of individual observations, the combination of a parameter set and an observation is classified as non-acceptable if the deviation between the accompanying model result and the measurement exceeds observational uncertainty. Second, the behavior in a sequence of observations is evaluated by means of a tolerance degree. This tolerance degree expresses the condition for satisfactory model behavior in a data series and is defined by the percentage of observations within this series that can have non-acceptable model results. Based on both criteria, the maximum left and right reaches of a model in an observation represent the data points, in the direction of the previous and the following observations respectively, beyond which none of the sampled parameter sets is both satisfactory and results in an acceptable deviation. After assessing these reaches for a variety of tolerance degrees, results can be plotted in a combined BReach plot that shows temporal changes in the behavior of model results. The methodology is applied on a Probability Distributed Model (PDM) of the river
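A simplified sketch of the reach computation described in this record, for a single model run rather than an ensemble of sampled parameter sets (an assumption that departs from the full BReach methodology): from each observation, extend left and right while the fraction of non-acceptable points stays within the tolerance degree.

```python
import numpy as np

def breach_reaches(acceptable, tolerance):
    """For each observation i, find the maximum left and right reach:
    the furthest index in each direction such that the fraction of
    non-acceptable points in the spanned window stays within the
    tolerance degree. `acceptable` is a boolean sequence (True when the
    model-observation deviation is within observational uncertainty);
    `tolerance` is the allowed fraction of failures."""
    n = len(acceptable)
    left = np.zeros(n, dtype=int)
    right = np.zeros(n, dtype=int)
    for i in range(n):
        # extend rightwards while the failure fraction stays tolerable
        fails, j = 0, i
        for k in range(i, n):
            fails += not acceptable[k]
            if fails / (k - i + 1) > tolerance:
                break
            j = k
        right[i] = j
        # extend leftwards symmetrically
        fails, j = 0, i
        for k in range(i, -1, -1):
            fails += not acceptable[k]
            if fails / (i - k + 1) > tolerance:
                break
            j = k
        left[i] = j
    return left, right

# Toy series: the third observation deviates beyond its uncertainty.
left, right = breach_reaches([True, True, False, True, True], tolerance=0.0)
```

Plotting `left` and `right` against the observation index gives the kind of combined reach plot the abstract describes, with short reaches flagging locally deficient model behavior.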
Sørensen, Asger
parts of business ethics given prominence to especially one term, namely `value'. The question that interests me is the following: What does the articulation of ethics and morality in terms of values mean for ethics and morality as such? Or, to put the question in a more fashionable way: What … is the value of value for morality and ethics? To make things a bit more precise, we can make use of the common distinction between ethics and morality, i.e. that morality is the immediate, collective and unconscious employment of morals, whereas ethics is the systematic, individual and conscious reflections …
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-01-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. PMID:22308461
The maximum rate of mammal evolution.
Evans, Alistair R; Jones, David; Boyer, Alison G; Brown, James H; Costa, Daniel P; Ernest, S K Morgan; Fitzgerald, Erich M G; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Smith, Felisa A; Stephens, Patrick R; Theodor, Jessica M; Uhen, Mark D
2012-03-13
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in the maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower cutting-force-to-maximum-strength ratios in the cutting tasks.
Improving exposure scenario definitions within REACH
Lee, Jihyun; Pizzol, Massimo; Thomsen, Marianne
instruments to support a precautionary chemicals management system and to protect receptors' health have also been increasing. In 2007, the European Union adopted REACH (the Regulation on Registration, Evaluation, Authorisation and Restriction of Chemicals): REACH makes industry responsible for assessing … the different background exposure between two countries allows in fact the definition of a common framework for improving exposure scenarios within the REACH system, for monitoring environmental health, and for increasing the degree of circularity of resource and substance flows. References 1. European Commission
Research on a network maximum-flow algorithm based on cascade level graphs
潘荷新; 伊崇信; 李满
2011-01-01
An algorithm is presented that computes the maximum flow of a network indirectly by constructing a cascade level graph of the network. For a given network N = (G, s, t, C) with n vertices and e arcs, the algorithm finds the maximum flow value, and a flow attaining it, in O(n²) time.
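The abstract does not spell out the construction, but the level-graph idea is the core of the standard Dinic approach to maximum flow: repeatedly layer the residual network by breadth-first search and push flow only along edges that descend exactly one level. A minimal sketch of that standard approach (illustrative only, not the paper's algorithm):

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Dinic-style maximum flow: repeatedly build a level graph by BFS,
    then push flow along level-respecting paths by DFS.
    `edges` is a list of (u, v, capacity) triples on vertices 0..n-1."""
    # Adjacency with residual capacities: graph[u] holds [v, cap, rev_index].
    graph = [[] for _ in range(n)]
    for u, v, c in edges:
        graph[u].append([v, c, len(graph[v])])
        graph[v].append([u, 0, len(graph[u]) - 1])  # residual (reverse) edge

    def bfs_levels():
        level = [-1] * n
        level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in graph[u]:
                if cap > 0 and level[v] == -1:
                    level[v] = level[u] + 1
                    q.append(v)
        return level if level[t] != -1 else None

    def dfs(u, pushed, level, it):
        if u == t:
            return pushed
        while it[u] < len(graph[u]):
            v, cap, rev = graph[u][it[u]]
            if cap > 0 and level[v] == level[u] + 1:
                d = dfs(v, min(pushed, cap), level, it)
                if d > 0:
                    graph[u][it[u]][1] -= d
                    graph[v][rev][1] += d
                    return d
            it[u] += 1
        return 0

    flow = 0
    while (level := bfs_levels()) is not None:
        it = [0] * n  # per-node edge pointer, so saturated edges are skipped
        while (pushed := dfs(s, float("inf"), level, it)) > 0:
            flow += pushed
    return flow
```

For example, `max_flow(4, [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)], 0, 3)` saturates both source edges and returns 5.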
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
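Resistance distances, and hence $Kf(G)$, follow directly from the graph Laplacian: for a connected graph on $n$ vertices, $Kf(G) = n\sum 1/\mu_i$ over the nonzero Laplacian eigenvalues. A small numerical sketch (the 4-cycle example is mine, not from the paper; a cycle is a single block, so $C_4 \in Cat(4;1)$):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index Kf(G): sum of resistance distances over all vertex
    pairs, computed as n * sum(1/mu) over the nonzero eigenvalues of the
    Laplacian L = D - A (valid for a connected graph)."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    mu = np.linalg.eigvalsh(L)
    nonzero = mu[mu > 1e-9]   # drop the single zero eigenvalue
    return n * np.sum(1.0 / nonzero)

# The 4-cycle C4: Laplacian eigenvalues 0, 2, 2, 4, so Kf = 4*(1/2+1/2+1/4) = 5,
# matching the closed form Kf(C_n) = (n^3 - n)/12.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
```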
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Compact muon solenoid magnet reaches full field
2006-01-01
Scientists of the U.S. Department of Energy at Fermilab and collaborators on the US/CMS project announced that the world's largest superconducting solenoid magnet has reached full field in tests at CERN. (1 page)
Hanford Reach - Ringold Russian Knapweed Treatment
US Fish and Wildlife Service, Department of the Interior — Increase the diversity of the seed mix on approximately 250 acres in the Ringold Unit of the Hanford Reach National Monument (Monument) treated with aminopyralid as...
RICHY
Expanded Program on Immunisation (EPI) training in Zambia and critically analyses ... excellence in skills such as sport, music or dance, so it is ... only improve through reaching every child both physically and in .... Non-verbal communication.
Women Reaching Equality in Dubious Habit: Drinking
MONDAY, Oct. 24, 2016 (HealthDay News) -- Women have made major strides towards equality with men, ...
Reaching the Overlooked Student in Physical Education
Esslinger, Keri; Esslinger, Travis; Bagshaw, Jarad
2015-01-01
This article describes the use of live action role-playing, or "LARPing," as a non-traditional activity that has the potential to reach students who are not interested in traditional physical education.
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
Impact of the REACH II and REACH VA Dementia Caregiver Interventions on Healthcare Costs.
Nichols, Linda O; Martindale-Adams, Jennifer; Zhu, Carolyn W; Kaplan, Erin K; Zuber, Jeffrey K; Waters, Teresa M
2017-05-01
Examine caregiver and care recipient healthcare costs associated with caregivers' participation in Resources for Enhancing Alzheimer's Caregivers Health (REACH II or REACH VA) behavioral interventions to improve coping skills and care recipient management. RCT (REACH II); propensity-score matched, retrospective cohort study (REACH VA). Five community sites (REACH II); 24 VA facilities (REACH VA). Care recipients with Alzheimer's disease and related dementias (ADRD) and their caregivers who participated in REACH II study (analysis sample of 110 caregivers and 197 care recipients); care recipients whose caregivers participated in REACH VA and a propensity matched control group (analysis sample of 491). Previously collected data plus Medicare expenditures (REACH II) and VA costs plus Medicare expenditures (REACH VA). There was no increase in VA or Medicare expenditures for care recipients or their caregivers who participated in either REACH intervention. For VA care recipients, REACH was associated with significantly lower total VA costs of care (33.6%). VA caregiver cost data was not available. In previous research, both REACH II and REACH VA have been shown to provide benefit for dementia caregivers at a cost of less than $5/day; however, concerns about additional healthcare costs may have hindered REACH's widespread adoption. Neither REACH intervention was associated with additional healthcare costs for caregivers or patients; in fact, for VA patients, there were significantly lower healthcare costs. The VA costs savings may be related to the addition of a structured format for addressing the caregiver's role in managing complex ADRD care to an existing, integrated care system. These findings suggest that behavioral interventions are a viable mechanism to support burdened dementia caregivers without additional healthcare costs. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.
Sung Woo Park; Byung Kwan Oh; Hyo Seon Park
2015-01-01
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this...
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
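The regularizer is built on the mutual information between a discrete classification response and the true label. A minimal plug-in estimator of that quantity looks like the following (a sketch of the measured quantity only, not the paper's entropy-estimation or optimization machinery):

```python
import numpy as np

def mutual_information(pred, true):
    """Plug-in estimate of I(pred; true) in nats from two discrete label
    arrays, via I = sum_{p,t} P(p,t) * log( P(p,t) / (P(p) P(t)) )."""
    pred = np.asarray(pred)
    true = np.asarray(true)
    mi = 0.0
    for p in np.unique(pred):
        for t in np.unique(true):
            pxy = np.mean((pred == p) & (true == t))  # joint probability
            px, py = np.mean(pred == p), np.mean(true == t)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi
```

A classifier that reproduces balanced binary labels exactly attains I = ln 2, the label entropy; a response independent of the labels scores 0, which is what the regularizer penalizes.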
Identification of consistency in rating curve data: Bidirectional Reach (BReach)
Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.
2016-04-01
Before calculating rating curve discharges, it is crucial to identify possible interruptions in data consistency. In this research, a methodology to perform this preliminary analysis is developed and validated. This methodology, called Bidirectional Reach (BReach), evaluates in each data point the results of a rating curve model with randomly sampled parameter sets. The combination of a parameter set and a data point is classified as non-acceptable if the deviation between the accompanying model result and the measurement exceeds the observational uncertainty. Moreover, a tolerance degree that defines satisfactory behavior of a sequence of model results is chosen. This tolerance degree equals the percentage of observations that are allowed to have non-acceptable model results. Subsequently, the results of the classification are used to assess the maximum left and right reach for each data point of a chronologically sorted time series. The maximum left and right reach of a gauging point represent the data points, in the direction of the previous and the following observations respectively, beyond which none of the sampled parameter sets both behaves satisfactorily and results in an acceptable deviation. This analysis is repeated for a variety of tolerance degrees. Plotting the results of this analysis for all data points and all tolerance degrees in a combined BReach plot enables the detection of changes in data consistency. Moreover, if consistent periods are detected, the limits of these periods can be derived. The methodology is validated with various synthetic stage-discharge data sets and proves to be a robust technique to investigate the temporal consistency of rating curve data. It provides satisfying results despite low data availability, large errors in the estimated observational uncertainty, and a rating curve model that is known to cover only a limited part of the observations.
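In pseudocode terms, once each (parameter set, data point) pair has been classified as acceptable or not, the right reach of each point can be computed by extending a window while the share of non-acceptable points stays within the tolerance degree. A simplified sketch (the boolean-matrix interface is my assumption, not the authors' implementation; left reaches are the mirror image):

```python
import numpy as np

def breach_reaches(acceptable, tolerance):
    """For each data point i, return the maximum right reach: the furthest
    index j such that at least one sampled parameter set keeps the fraction
    of non-acceptable points within i..j at or below `tolerance`.
    `acceptable` is a (n_param_sets, n_points) boolean matrix."""
    n_sets, n_pts = acceptable.shape
    right = np.zeros(n_pts, dtype=int)
    for i in range(n_pts):
        best = i
        for s in range(n_sets):
            bad = 0
            for j in range(i, n_pts):
                bad += not acceptable[s, j]
                if bad / (j - i + 1) > tolerance:
                    break  # this parameter set stops being satisfactory
                best = max(best, j)
        right[i] = best
    return right
```

A drop in the reaches across some index, persisting over many tolerance degrees, is the signature of an interruption in data consistency.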
闫圆圆; 黄勇; 李文武; 白人驹; 付政; 穆殿斌; 郭洪波
2011-01-01
Objective: To reveal the relationship among the maximum FDG PET standardized uptake value (SUVmax), Ki-67 expression and pathological grading of esophageal carcinomas. Methods: Forty-seven patients with surgically resected esophageal carcinoma were enrolled in this study. 18F-FDG PET/CT examination was performed one week before the operation and SUVmax was calculated. Specimens were obtained during surgery; immunohistochemical staining of Ki-67 was then carried out and pathological grading was determined by HE staining. The relations among SUVmax, Ki-67 and pathological grading were analysed. Results: (1) For all 47 cases, SUVmax ranged from 1.9 to 24.0 with a mean of 12.504 ± 6.805, and the mean Ki-67 index was (67.837 ± 29.798)%; the two were positively correlated (r = 0.581, P < 0.05). (2) The 47 specimens comprised 13 well-differentiated squamous cell tumors, 16 moderately differentiated tumors and 18 poorly differentiated tumors. The mean SUVmax of well-differentiated, moderately differentiated and poorly differentiated tumors was 9.787 ± 1.477, 12.313 ± 0.479 and 15.053 ± 2.147, respectively, a statistically significant difference (P = 0.000). Conclusions: SUVmax may be used to indirectly evaluate the proliferative capacity of esophageal cancer and, to some extent, reflects the pathologic grading of the tumor.
Nair, Vimoj J.; MacRae, Robert [Division of Radiation Oncology, University of Ottawa, Ottawa, Ontario (Canada); Ottawa Hospital Research Institute, Ottawa, Ontario (Canada); Sirisegaram, Abby [Ottawa Hospital Research Institute, Ottawa, Ontario (Canada); Pantarotto, Jason R., E-mail: jpantarotto@toh.on.ca [Division of Radiation Oncology, University of Ottawa, Ottawa, Ontario (Canada); Ottawa Hospital Research Institute, Ottawa, Ontario (Canada)
2014-02-01
Purpose: The aim of this study was to determine whether the preradiation maximum standardized uptake value (SUVmax) of the primary tumor on [18F]-fluoro-2-deoxy-glucose positron emission tomography (FDG-PET) has prognostic significance in patients with Stage T1 or T2N0 non-small cell lung cancer (NSCLC) treated with curative radiation therapy, whether conventional or stereotactic body radiation therapy (SBRT). Methods and Materials: Between January 2007 and December 2011, a total of 163 patients (180 tumors) with medically inoperable histologically proven Stage T1 or T2N0 NSCLC and treated with radiation therapy (both conventional and SBRT) were entered in a research ethics board approved database. All patients received pretreatment FDG-PET/computed tomography (CT) at 1 institution with consistent acquisition technique. The medical records and radiologic images of these patients were analyzed. Results: The overall survival at 2 years and 3 years for the whole group was 76% and 67%, respectively. The mean and median SUVmax were 8.1 and 7, respectively. Progression-free survival at 2 years with SUVmax <7 was better than that of the patients with tumor SUVmax ≥7 (67% vs 51%; P=.0096). Tumors with SUVmax ≥7 were associated with a worse regional recurrence-free survival and distant metastasis-free survival. In the multivariate analysis, SUVmax ≥7 was an independent prognostic factor for distant metastasis-free survival. Conclusion: In early-stage NSCLC managed with radiation alone, patients with SUVmax ≥7 on FDG-PET/CT scan have poorer outcomes and high risk of progression, possibly because of aggressive biology. There is a potential role for adjuvant therapies for these high-risk patients with intent to improve outcomes.
Do working environment interventions reach shift workers?
Nabe-Nielsen, Kirsten; Jørgensen, Marie Birk; Garde, Anne Helene
2016-01-01
PURPOSE: Shift workers are exposed to more physical and psychosocial stressors in the working environment as compared to day workers. Despite the need for targeted prevention, it is likely that workplace interventions less frequently reach shift workers. The aim was therefore to investigate whether the reach of workplace interventions varied between shift workers and day workers and whether such differences could be explained by the quality of leadership exhibited at different times of the day. METHODS: We used questionnaire data from 5361 female care workers in the Danish eldercare sector...... Shift workers were less likely to be reached by workplace interventions. For example, night workers less frequently reported that they had got more flexibility (OR 0.5; 95 % CI 0.3-0.7) or that they had participated in improvements of the working procedures (OR 0.6; 95 % CI 0.5-0.8). Quality of leadership......
Sung Woo Park
2015-03-01
Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
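The closed-form unconstrained continuous case can be reconstructed by the standard Lagrange-multiplier argument (a sketch under textbook maximum-entropy assumptions, not the paper's own derivation):

```latex
\max_{f}\; h(f) = -\int f(x)\,\ln f(x)\,dx
\quad\text{s.t.}\quad \int f(x)\,dx = 1,
\qquad \int |x|^{p} f(x)\,dx = \|X\|_p^{\,p}.
```

Stationarity of the Lagrangian gives $\ln f(x) = -1 - \lambda_0 - \lambda_1 |x|^p$, i.e. $f^*(x) = C\,e^{-\lambda_1 |x|^p}$, a generalized Gaussian. Scaling $X \mapsto sX$ multiplies $\|X\|_p$ by $s$ and adds $\ln s$ to the differential entropy, so $h_{\max} = \ln \|X\|_p + \mathrm{const}(p)$, which is exactly the straight-line relationship between maximum differential entropy and the logarithm of the Lp norm stated in the abstract.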
Determination of the maximum retention of cobalt by ion exchange in h-zeolites
A. S. Zola
2012-06-01
Full Text Available This work aimed to determine the maximum content of cobalt that can be incorporated by ion exchange in zeolites H-USY, H-Beta, H-Mordenite, and H-ZSM-5. To reach this goal, batch isotherms at 75ºC were constructed after addition of zeolite samples to flasks filled with cobalt nitrate solution. The equilibrium data were fitted to the Langmuir, Freundlich, and Tóth adsorption isotherm models. Langmuir was the best model for zeolites H-Beta, H-Mordenite, and H-ZSM-5, whereas the experimental data for H-USY were better fitted by the Freundlich isotherm model. From the isotherms, it was possible to determine the maximum cobalt exchange level (qmax) that can be incorporated in each zeolite through ion exchange. In this sense, H-USY presented the highest qmax value (2.40 meq/g of zeolite), while H-ZSM-5 showed the lowest one (0.64 meq/g of zeolite). These results also show the influence of the zeolite framework, related to the channel system, pore opening, presence of cavities and secondary porosity, and the SiO2/Al2O3 ratio (SAR), on the maximum capacity and behavior of cobalt ion exchange in protonic zeolites.
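For the Langmuir model, q = qmax·K·C / (1 + K·C), the capacity qmax can be read off a linearized fit, since C/q is linear in C with slope 1/qmax. A sketch with synthetic numbers (the data below are illustrative, not the paper's measurements):

```python
import numpy as np

# Synthetic equilibrium data generated from a Langmuir isotherm with
# q_max = 2.4 meq/g and K = 1.5 (hypothetical values for illustration).
C = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])   # equilibrium concentration
q = 2.4 * 1.5 * C / (1.0 + 1.5 * C)             # adsorbed amount, meq/g

# Linearized Langmuir: C/q = C/q_max + 1/(K*q_max), a straight line in C.
slope, intercept = np.polyfit(C, C / q, 1)
q_max = 1.0 / slope          # capacity from the slope
K = slope / intercept        # affinity constant from slope/intercept
```

On real, noisy isotherm data one would instead fit the nonlinear form directly (e.g. least squares on q vs. C), but the linearized version makes the role of qmax transparent.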
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: u_x' (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
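The role of the time step can be seen directly in an explicit time discretization of the lattice Nagumo equation: for small enough Δt, solutions started in [0, 1] stay in [0, 1], consistent with a weak maximum principle. A minimal simulation sketch (parameter values are illustrative, and this is not the authors' code):

```python
import numpy as np

def nagumo_step(u, k, dt, a=0.3):
    """One explicit Euler step of the lattice Nagumo equation
    u' = k*(u_{x-1} - 2u_x + u_{x+1}) + u*(1 - u)*(u - a),
    with zero-flux (Neumann) conditions at the lattice ends."""
    lap = np.empty_like(u)
    lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
    lap[0] = u[1] - u[0]
    lap[-1] = u[-2] - u[-1]
    return u + dt * (k * lap + u * (1.0 - u) * (u - a))

# Bistable front between the stable states 1 (left) and 0 (right).
u = np.where(np.arange(41) < 20, 1.0, 0.0)
for _ in range(2000):
    u = nagumo_step(u, k=1.0, dt=0.01)   # dt well inside the stability range
```

With a large time step (e.g. dt above 1/(2k) in this scaling) the same scheme overshoots the invariant interval, which is the discrete-time failure mode the abstract alludes to.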
The Astronomical Reach of Fundamental Physics
Burrows, Adam
2014-01-01
Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the Universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the Cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples we expose the deep interrelationships imposed by Nature between disparate realms of the Universe and the amazing consequences of the unifying character of physical law.
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T⁻¹(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Polishing Difficult-To-Reach Cavities
Malinzak, R. Michael; Booth, Gary N.
1990-01-01
Springy abrasive tool used to finish surfaces of narrow cavities made by electrical-discharge machining. Robot arm moves vibrator around perimeters of cavities, polishing walls of cavities as it does so. Tool needed because such cavities inaccessible or at least difficult to reach with most surface-finishing tools.
REACH. Electricity Units, Post-Secondary.
Smith, Gene; And Others
As a part of the REACH (Refrigeration, Electro-Mechanical, Air-Conditioning, Heating) electromechanical cluster, this postsecondary student manual contains individualized instructional units in the area of electricity. The instructional units focus on electricity fundamentals, electric motors, electrical components, and controls and installation.…
Reliability of the Advanced REACH Tool (ART)
Schinkel, J.; Fransman, W.; McDonnell, P.E.; Entink, R.K.; Tielemans, E.; Kromhout, H.
2014-01-01
Objectives: The aim of this study was to assess the reliability of the Advanced REACH Tool (ART) by (i) studying interassessor agreement of the resulting exposure estimates generated by the ART mechanistic model, (ii) studying interassessor agreement per model parameters of the ART mechanistic model
Guiding Warfare to Reach Sustainable Peace
Vestenskov, David; Drewes, Line
The conference report Guiding Warfare to Reach Sustainable Peace constitutes the primary outcome of the conference. It is based on excerpts from the conference presenters and workshop discussions. Furthermore, the report contains policy recommendations and key findings, with the ambition of develo...
ATLAS Barrel Toroid magnet reached nominal field
2006-01-01
On 9 November the barrel toroid magnet reached its nominal field of 4 teslas, with an electrical current of 21 000 amperes (21 kA) passing through the eight superconducting coils, as shown on this graph.
文建国; 崔林刚; 孟庆军; 任川川; 李金升; 吕宇涛; 张艳
2012-01-01
Objective: To compare the value of urine flow acceleration (UFA) and maximum urinary flow rate (Qmax) in diagnosing bladder outlet obstruction (BOO) in benign prostatic hyperplasia (BPH). Methods: Prostate volume, UFA and Qmax were measured in 50 men with BPH and 50 healthy controls; urodynamic examinations were performed according to the recommendations of the International Continence Society. Using the obstructed zone of the P-Q plot as the reference standard, the sensitivity and specificity of UFA and Qmax for diagnosing BOO were compared. Results: UFA and Qmax were significantly lower in the BPH group than in the control group [(2.05 ± 0.85) ml/s² vs. (4.60 ± 1.25) ml/s²; (8.50 ± 1.05) ml/s vs. (13.00 ± 3.35) ml/s, P < 0.05], and prostate volume was larger in the BPH group [(28.6 ± 9.8) ml vs. (24.2 ± 7.6) ml, P < 0.05]. With UFA < 2 ml/s² and Qmax < 10 ml/s as diagnostic criteria for BOO, sensitivity and specificity were 88% and 75% for UFA versus 81% and 63% for Qmax, and the Kappa values for agreement with the P-Q plot reference standard were 0.55 and 0.35, respectively. Conclusions: UFA is a useful urodynamic parameter for diagnosing BOO in patients with BPH.
On the Maximum Storage Capacity of the Hopfield Model
Folli, Viola; Leonetti, Marco; Ruocco, Giancarlo
2017-01-01
Recurrent neural networks (RNN) have traditionally been of great interest for their capacity to store memories. In past years, several works have been devoted to determining the maximum storage capacity of RNN, especially for the case of the Hopfield network, the most popular kind of RNN. Analyzing the thermodynamic limit of the statistical properties of the Hamiltonian corresponding to the Hopfield neural network, it has been shown in the literature that the retrieval errors diverge when the number of stored memory patterns (P) exceeds a fraction (≈ 14%) of the network size N. In this paper, we study the storage performance of a generalized Hopfield model, where the diagonal elements of the connection matrix are allowed to be different from zero. We investigate this model at finite N. We give an analytical expression for the number of retrieval errors and show that, by increasing the number of stored patterns over a certain threshold, the errors start to decrease and reach values below unity for P ≫ N. We demonstrate that the strongest trade-off between efficiency and effectiveness relies on the number of patterns (P) that are stored in the network by appropriately fixing the connection weights. When P ≫ N and the diagonal elements of the adjacency matrix are not forced to be zero, the optimal storage capacity is obtained with a number of stored memories much larger than previously reported. This theory paves the way to the design of RNN with high storage capacity and able to retrieve the desired pattern without distortions. PMID:28119595
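The classical setting the paper generalizes can be reproduced in a few lines: Hebbian weights with a zeroed diagonal, sign-threshold updates, and retrieval of a stored pattern from a corrupted probe at a load P/N well below the ≈ 0.14 threshold. A minimal sketch (sizes and the corruption level are illustrative; the paper's variant allows the diagonal to be nonzero):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                                  # load P/N = 0.05 << 0.14
patterns = rng.choice([-1, 1], size=(P, N))    # stored +/-1 memory patterns

# Hebbian connection matrix; the diagonal is zeroed here, the classical choice.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)

def recall(state, steps=20):
    """Synchronous sign-threshold updates until a fixed point (or step limit)."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Corrupt 10 of the 100 bits of the first stored pattern, then retrieve.
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1
errors = int(np.sum(recall(probe) != patterns[0]))
```

At this load the crosstalk from the other patterns is small, so the corrupted probe is driven back to (or very near) the stored memory, which is the retrieval regime whose breakdown at P ≈ 0.14 N the abstract describes.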
Shao, Y. F.; Song, F.; Jiang, C. P.; Xu, X. H.; Wei, J. C.; Zhou, Z. L.
2016-02-01
We study the difference in the maximum stress on a cylinder surface, σmax, obtained when the measured surface heat transfer coefficient hm is used instead of its average value ha during quenching. At quenching temperatures of 200, 300, 400, 500, 600 and 800°C, the maximum surface stress calculated with hm is always smaller than that calculated with ha, except in the case of 800°C, while the time to reach σmax calculated with hm is always earlier than that calculated with ha. This is inconsistent with the traditional view that σmax increases, and the time to reach σmax decreases, with increasing Biot number. Other temperature-dependent properties have only a small effect on the trend of the ratios of the two results with quenching temperature. The difference between the two maximum surface stresses is caused by the dramatic variation of hm with temperature, which needs to be considered in engineering analysis.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
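The background-only versus background-plus-source comparison described above amounts to a Poisson likelihood ratio. A minimal 1-D sketch with a toy 5-pixel "PSF" and made-up count, flux and background values; this is the statistical idea only, not Sherpa's actual models or API:

```python
import numpy as np

def loglike(counts, model):
    """Poisson log-likelihood of observed counts under a model,
    dropping the data-only log-factorial term."""
    return float(np.sum(counts * np.log(model) - model))

bkg = 0.5                                          # background level per pixel
psf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])          # toy normalized 1-D "PSF"
src_flux = 4.0                                     # hypothetical source flux
counts = np.array([1, 2, 3, 1, 0])                 # toy observed counts

# test statistic: twice the log-likelihood ratio of the two hypotheses
ts = 2.0 * (loglike(counts, bkg + src_flux * psf)
            - loglike(counts, np.full(5, bkg)))
```

A large positive statistic favors the background-plus-source hypothesis; in the real tool the source position, flux and background are fitted, and a per-observation PSF is used.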
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational …
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
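The Poisson-based likelihood argued for here leads to minimizing a Cash-type statistic, C = 2 Σᵢ (mᵢ - nᵢ ln mᵢ). A sketch for a one-parameter amplitude fit, where a simple grid scan stands in for the paper's efficient-scores machinery; the power-law shape and all numbers are illustrative:

```python
import numpy as np

def cash(counts, model):
    """C = 2*sum(m - n*ln m): -2 ln(Poisson likelihood) up to a data constant."""
    return 2.0 * np.sum(model - counts * np.log(model))

rng = np.random.default_rng(1)
E = np.linspace(1.0, 10.0, 50)                 # toy energy grid
shape = E**-2                                  # fixed spectral shape f(E)
counts = rng.poisson(20.0 * shape)             # simulated data, true amplitude 20

# grid scan over the amplitude A in m(E) = A*f(E); the ML estimate minimizes C
A_grid = np.linspace(1.0, 50.0, 4901)          # step 0.01
A_hat = A_grid[np.argmin([cash(counts, A * shape) for A in A_grid])]
```

For this pure-amplitude model the maximum-likelihood solution is available in closed form, A = Σnᵢ / Σfᵢ, which the scan reproduces to within the grid step; confidence regions follow from contours of ΔC.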
Does workplace health promotion reach shift workers?
Nabe-Nielsen, Kirsten; Garde, Anne Helene; Clausen, Thomas;
2015-01-01
OBJECTIVES: One reason for health disparities between shift and day workers may be that workplace health promotion does not reach shift workers to the same extent as it reaches day workers. This study aimed to investigate the association between shift work and the availability of and participation in workplace health promotion. METHODS: We used cross-sectional questionnaire data from a large representative sample of all employed people in Denmark. We obtained information on the availability of and participation in six types of workplace health promotion. We also obtained information on working hours (…). RESULTS: In the general working population, fixed evening and fixed night workers, and employees working variable shifts including night work, reported a higher availability of health promotion, while employees working variable shifts without night work reported a lower availability of health promotion …
Olefins and chemical regulation in Europe: REACH.
Penman, Mike; Banton, Marcy; Erler, Steffen; Moore, Nigel; Semmler, Klaus
2015-11-05
REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) is the European Union's chemical regulation for the management of risk to human health and the environment (European Chemicals Agency, 2006). This regulation entered into force in June 2007 and required manufacturers and importers to register substances produced in annual quantities of 1000 tonnes or more by December 2010, with further deadlines for lower tonnages in 2013 and 2018. Depending on the type of registration, required information included the substance's identification, the hazards of the substance, the potential exposure arising from the manufacture or import, the identified uses of the substance, and the operational conditions and risk management measures applied or recommended to downstream users. Among the content developed to support this information were Derived No-Effect Levels or Derived Minimal Effect Levels (DNELs/DMELs) for human health hazard assessment, Predicted No Effect Concentrations (PNECs) for environmental hazard assessment, and exposure scenarios for exposure and risk assessment. Once registered, substances may undergo evaluation by the European Chemicals Agency (ECHA) or Member State authorities and be subject to requests for additional information or testing as well as additional risk reduction measures. To manage the REACH registration and related activities for the European olefins and aromatics industry, the Lower Olefins and Aromatics REACH Consortium was formed in 2008 with administrative and technical support provided by Penman Consulting. A total of 135 substances are managed by this group including 26 individual chemical registrations (e.g. benzene, 1,3-butadiene) and 13 categories consisting of 5-26 substances. This presentation will describe the content of selected registrations prepared for 2010 in addition to the significant post-2010 activities. Beyond REACH, content of the registrations may also be relevant to other European activities, for
Distance Reached in the Anteromedial Reach Test as a Function of Learning and Leg Length
Bent, Nicholas P.; Rushton, Alison B.; Wright, Chris C.; Batt, Mark E.
2012-01-01
The Anteromedial Reach Test (ART) is a new outcome measure for assessing dynamic knee stability in anterior cruciate ligament-injured patients. The effect of learning and leg length on distance reached in the ART was examined. Thirty-two healthy volunteers performed 15 trials of the ART on each leg. There was a moderate correlation (r = 0.44-0.50)…
Thomas, Catherine [Paris-11 Univ., 91 Orsay (France)
2000-01-19
Theoretical models have shown that the maximum magnetic field in radio-frequency superconducting cavities is the superheating field H_sh. For niobium, H_sh is 25-30% higher than the thermodynamic critical field H_c: H_sh = 240-274 mT. However, the maximum magnetic field observed so far is around H_c,max = 152 mT for the best 1.3 GHz Nb cavities. This field is lower than the critical field H_c1 above which the superconductor breaks up into divided normal and superconducting zones (H_c1 ≤ H_c). Thermal instabilities are responsible for this low value. In order to reach H_sh before thermal breakdown, high-power short pulses are used; the cavity then needs to be strongly over-coupled. A dedicated test bed has been built through a collaboration between the Istituto Nazionale di Fisica Nucleare (INFN) - Sezione di Genoa, and the Service d'Etudes et Realisation d'Accelerateurs (SERA) of the Laboratoire de l'Accelerateur Lineaire (LAL). Measurements of the maximum magnetic field, H_rf,max, on INFN cavities give lower results than the theoretical predictions and are in agreement with previous results. The superheating field is linked to the magnetic penetration depth. This superconducting characteristic length can be used to determine the quality of niobium through the ratio between the resistivity measured at 300 K and at 4.2 K in the normal conducting state (RRR). Results have been compared with previous ones and agree well. They show that the RRR measured on cavities is superficial and lower than the RRR measured on samples, which concerns the volume. (author)
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao;
2014-01-01
This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction in both the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR, but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR … This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
Reaching Diverse Audiences through NOAO Education Programs
Pompea, Stephen M.; Sparks, R. T.; Walker, C. E.
2009-01-01
NOAO education programs are designed to reach diverse audiences. Examples described in this poster include the Hands-On Optics Project nationwide, an extension of the Hands-On Optics program at Boys and Girls Clubs in Arizona and in Hawaii, a professional development program for Navajo and Hopi teachers, a number of programs for the Tohono O'odham Nation, and a project collecting and reviewing Spanish language astronomy materials. Additionally NOAO is also involved in several local outreach projects for diverse and underserved audiences.
Mitigation of maximum world oil production: Shortage scenarios
Hirsch, Robert L. [Management Information Services, Inc., 723 Fords Landing Way, Alexandria, VA 22314 (United States)
2008-02-15
A framework is developed for planning the mitigation of the oil shortages that will be caused by world oil production reaching a maximum and going into decline. To estimate potential economic impacts, a reasonable relationship between percent decline in world oil supply and percent decline in world GDP was determined to be roughly 1:1. As a limiting case for decline rates, giant fields were examined. Actual oil production from Europe and North America indicated significant periods of relatively flat oil production (plateaus). However, before entering its plateau period, North American oil production went through a sharp peak and steep decline. Examination of a number of future world oil production forecasts showed multi-year rollover/roll-down periods, which represent pseudoplateaus. Consideration of resource nationalism posits an Oil Exporter Withholding Scenario, which could potentially overwhelm all other considerations. Three scenarios for mitigation planning resulted from this analysis: (1) A Best Case, where maximum world oil production is followed by a multi-year plateau before the onset of a monotonic decline rate of 2-5% per year; (2) A Middling Case, where world oil production reaches a maximum, after which it drops into a long-term, 2-5% monotonic annual decline; and finally (3) A Worst Case, where the sharp peak of the Middling Case is degraded by oil exporter withholding, leading to world oil shortages growing potentially more rapidly than 2-5% per year, creating the most dire world economic impacts. (author)
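Under the roughly 1:1 supply-to-GDP relationship used above, a monotonic 2-5% annual decline compounds into large shortfalls within a decade. The arithmetic can be sketched as follows; this is illustrative compounding only, not the paper's planning framework:

```python
def shortfall(years, annual_decline):
    """Fractional drop in world oil supply (and, at roughly 1:1, in world
    GDP) after a number of years of compounding monotonic decline."""
    return 1.0 - (1.0 - annual_decline) ** years

# a decade at the two ends of the 2-5% range
low, high = shortfall(10, 0.02), shortfall(10, 0.05)   # ~18% and ~40%
```

The jump from roughly 18% to roughly 40% over the same decade is why the choice between the Best, Middling and Worst cases matters so much for mitigation planning.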
Maximum Likelihood Under Response Biased Sampling
Chambers, Raymond; Dorfman, Alan; Wang, Suojin
2003-01-01
Informative sampling occurs when the probability of inclusion in the sample depends on the value of the survey response variable. Response or size biased sampling is a particular case of informative sampling where the inclusion probability is proportional to the value of this variable. In this paper we describe a general model for response biased sampling, which we call array sampling, and develop maximum likelihood and estimating equation theory appropriate to this situation. The ...
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, from which the receiver function is estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
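The Toeplitz/Levinson step described above is the classic Levinson-Durbin recursion. A compact textbook-style sketch (not the authors' code); the reflection coefficient k staying below 1 in magnitude is exactly the stability property mentioned in the abstract:

```python
def levinson_durbin(r, order):
    """Prediction-error filter from an autocorrelation sequence r[0..order].

    Returns (a, err): filter coefficients with a[0] = 1, and the final
    prediction error power. Solves the Toeplitz normal equations in O(order^2).
    """
    a = [1.0]
    err = r[0]
    for m in range(1, order + 1):
        acc = sum(a[i] * r[m - i] for i in range(m))
        k = -acc / err                    # reflection coefficient, |k| < 1
        a_ext = a + [0.0]
        a = [a_ext[i] + k * a_ext[m - i] for i in range(m + 1)]
        err *= 1.0 - k * k                # error power shrinks each order
    return a, err

# AR(1)-like autocorrelation r[tau] = 0.5**tau recovers the coefficient 0.5
a, err = levinson_durbin([1.0, 0.5, 0.25], 2)
```

For this toy autocorrelation the recursion yields a = [1, -0.5, 0], i.e. a first-order predictor with coefficient 0.5 and residual error power 0.75.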
The constraint rule of the maximum entropy principle
Uffink, J.
2001-01-01
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference, one assumes that this partial information takes the form of a constraint on allowed probability distributions.
Quantum-dot Carnot engine at maximum power.
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; Van den Broeck, Christian
2010-04-01
We evaluate the efficiency at maximum power of a quantum-dot Carnot heat engine. The universal values of the coefficients at the linear and quadratic order in the temperature gradient are reproduced. Curzon-Ahlborn efficiency is recovered in the limit of weak dissipation.
Can donated media placements reach intended audiences?
Cooper, Crystale Purvis; Gelb, Cynthia A; Chu, Jennifer; Polonec, Lindsey
2013-09-01
Donated media placements for public service announcements (PSAs) can be difficult to secure, and may not always reach intended audiences. Strategies used by the Centers for Disease Control and Prevention's (CDC) Screen for Life: National Colorectal Cancer Action Campaign (SFL) to obtain donated media placements include producing a diverse mix of high-quality PSAs, co-branding with state and tribal health agencies, securing celebrity involvement, monitoring media trends to identify new distribution opportunities, and strategically timing the release of PSAs. To investigate open-ended recall of PSAs promoting colorectal cancer screening, CDC conducted 12 focus groups in three U.S. cities with men and women either nearing age 50 years, when screening is recommended to begin, or aged 50-75 years who were not in compliance with screening guidelines. In most focus groups, multiple participants recalled exposure to PSAs promoting colorectal cancer screening, and most of these individuals reported having seen SFL PSAs on television, in transit stations, or on the sides of public buses. Some participants reported exposure to SFL PSAs without prompting from the moderator, as they explained how they learned about the disease. Several participants reported learning key campaign messages from PSAs, including that colorectal cancer screening should begin at age 50 years and screening can find polyps so they can be removed before becoming cancerous. Donated media placements can reach and educate mass audiences, including millions of U.S. adults who have not been screened appropriately for colorectal cancer.
Extended-reach wells tap outlying reserves
Nazzal, G. (Eastman Teleco, Houston, TX (United States))
1993-03-01
Extended-reach drilling (ERD) is being used to exploit fields and reserves that are located far from existing platforms. Effective wellbore placement from fewer platforms can reduce development costs, maximize production and increase reserve recovery. Six wells drilled offshore in the US, North Sea and Australia illustrate how to get the most economic benefit from available infrastructure. These wells are divided into three categories by depth (shallow, medium and deep). Vertical depths range from 963 to 12,791 ft TVD and displacements range from 4,871 to 23,917 ft. Important factors for successful extended-reach drilling included: careful, comprehensive pre-planning; adequate cuttings removal in all sections; hole stability in long, exposed intervals; torque and drag modeling of drilling BHAs, casing and liners; buoyancy-assisted casing techniques where appropriate; critical modifications to the drilling rig and top drive for medium and deep ERD; modified power swivels for shallow operations; drill pipe rubbers or other casing protection during extended periods of drill string rotation; heavy-wall casing across anticipated high-wear areas; survey accuracy and frequency; and sound drilling practices and creativity to accomplish goals and objectives. This paper reviews the case histories of these wells and records planning and design procedures.
Napa River Restoration Project: Rutherford Reach Completion and Oakville to Oak Knoll Reach
Information about the SFBWQP Napa River Restoration Project: Rutherford Reach Completion/Oakville to Oak Knoll, part of an EPA competitive grant program to improve SF Bay water quality focused on restoring impaired waters and enhancing aquatic resources.
Marchenko, Artem; Duarte, Vasco
Agile teams want to deliver maximum business value. That’s easy if the on-site customer assigns business value to each story. But how does the customer do that? How can you estimate business value? This workshop is run as a game, where teams have to make tough business decisions for their ”organizations”. Teams have to decide which orders to take and what to deliver first in order to earn more. The session gives the participants basic business value estimation techniques, but the main point is to make people live through the business situation and to help them feel the consequences of various choices.
Energy Balance of Irrigated Intercropping Field in the Middle Reaches of Heihe River Basin
WU Jinkui; DING Yongjian; WANG Genxu; SHEN Yongping; Yusuke YAMAZAKI; Jumpei KUBOTA
2006-01-01
Based on experiments conducted in an irrigated intercropping field in Zhangye Oasis in the middle reaches of the Heihe River basin in 2004, the characteristics of the radiation budget are analyzed, and the energy balance is calculated using the Bowen-Ratio Energy Balance (BREB) method. The results show that the ratio of absorbed radiation to incoming shortwave radiation in the intercropping canopy-soil system increases through the growing stages, from 0.81 in the initial growing stage (IGS) to 0.86 in the late growing stage (LGS). The net radiation, which is smaller in the IGS, increases rapidly in the first period of the middle growing stage (MGS), reaches its maximum in the second period of the MGS, and then decreases somewhat in the LGS. The ratio of net radiation to total radiation follows a similar trend. Over the whole growing season, latent heat flux, which accounts for about 70% of the net radiation, is the dominant term in the energy balance; sensible heat flux accounts for about 20% and soil heat flux for about 10%. The partition varies distinctly between growing stages. In the IGS, the ratios of latent heat flux, sensible heat flux and soil heat flux to net radiation are 44.5%, 23.8% and 31.7%, respectively. In the MGS, with increasing latent heat flux and decreasing sensible and soil heat fluxes, the ratios become 84.4%, 6.3% and 9.3%. In the LGS, the soil heat flux remains around 0 W/m², and latent and sensible heat fluxes account for 61.4% and 38.6%, respectively. The energy balance also shows a clear daily variation.
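The BREB partition used in this study splits the available energy (Rn - G) between sensible and latent heat via the Bowen ratio β = γΔT/Δe. A minimal sketch; the psychrometric constant and the gradient values below are illustrative, not the field measurements:

```python
def breb_fluxes(rn, g, dT, de, gamma=0.066):
    """Bowen-ratio energy-balance flux partition.

    rn, g: net radiation and soil heat flux (W/m^2); dT, de: temperature (K)
    and vapour-pressure (kPa) differences between two measurement heights;
    gamma: psychrometric constant (kPa/K). Returns (H, LE) in W/m^2.
    """
    bowen = gamma * dT / de                 # Bowen ratio = H / LE
    le = (rn - g) / (1.0 + bowen)           # latent heat flux
    h = rn - g - le                         # sensible heat closes the balance
    return h, le

# e.g. Rn = 500, G = 50 W/m^2 with gradients giving a Bowen ratio of 0.5
h, le = breb_fluxes(500.0, 50.0, 1.0, 0.132)
```

A small Bowen ratio puts most of the available energy into latent heat, which matches the roughly 70/20/10 split reported for the irrigated field above.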
QSPR prediction of physico-chemical properties for REACH.
Dearden, J C; Rotureau, P; Fayet, G
2013-01-01
For registration of a chemical, European Union REACH legislation requires information on the relevant physico-chemical properties of the chemical. Predicted property values can be used when the predictions can be shown to be valid and adequate. The relevant physico-chemical properties that are amenable to prediction are: melting/freezing point, boiling point, relative density, vapour pressure, surface tension, water solubility, n-octanol-water partition coefficient, flash point, flammability, explosive properties, self-ignition temperature, adsorption/desorption, dissociation constant, viscosity, and air-water partition coefficient (Henry's law constant). Published quantitative structure-property relationship (QSPR) methods for all of these properties are discussed, together with relevant property prediction software, as an aid for those wishing to use predicted property values in submissions to the European Chemicals Agency (ECHA).
Reaching Synchronization in Networked Harmonic Oscillators With Outdated Position Data.
Song, Qiang; Yu, Wenwu; Cao, Jinde; Liu, Fang
2016-07-01
This paper studies the synchronization problem for a network of coupled harmonic oscillators by proposing a distributed control algorithm based only on delayed position states, i.e., outdated position states stored in memory. The coupling strength of the network is conveniently designed according to the absolute values and the principal arguments of the nonzero eigenvalues of the network Laplacian matrix. By analyzing a finite number of stability switches of the network with respect to the variation in the time delay, some necessary and sufficient conditions are derived for reaching synchronization in networked harmonic oscillators with positive and negative coupling strengths, respectively, and it is shown that the time delay should be taken from a set of intervals bounded by some critical values. Simulation examples are given to illustrate the effectiveness of the theoretical analysis.
Reach and get capability in a computing environment
Bouchard, Ann M [Albuquerque, NM; Osbourn, Gordon C [Albuquerque, NM
2012-06-05
A reach and get technique includes invoking a reach command from a reach location within a computing environment. A user can then navigate to an object within the computing environment and invoke a get command on the object. In response to invoking the get command, the computing environment is automatically navigated back to the reach location and the object copied into the reach location.
Efficiency at Maximum Power of Low-Dissipation Carnot Engines
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; van den Broeck, Christian
2010-10-01
We study the efficiency at maximum power, η*, of engines performing finite-time Carnot cycles between a hot and a cold reservoir at temperatures Th and Tc, respectively. For engines reaching Carnot efficiency ηC=1-Tc/Th in the reversible limit (long cycle time, zero dissipation), we find in the limit of low dissipation that η* is bounded from above by ηC/(2-ηC) and from below by ηC/2. These bounds are reached when the ratio of the dissipation during the cold and hot isothermal phases tends, respectively, to zero or infinity. For symmetric dissipation (ratio one) the Curzon-Ahlborn efficiency ηCA=1-√(Tc/Th) is recovered.
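The bounds and the Curzon-Ahlborn value quoted above are straightforward to evaluate numerically; a quick check with illustrative reservoir temperatures:

```python
def emp_values(tc, th):
    """Lower bound eta_C/2, Curzon-Ahlborn efficiency, and upper bound
    eta_C/(2 - eta_C) for the efficiency at maximum power."""
    eta_c = 1.0 - tc / th                  # Carnot efficiency
    eta_ca = 1.0 - (tc / th) ** 0.5        # Curzon-Ahlborn (symmetric dissipation)
    return eta_c / 2.0, eta_ca, eta_c / (2.0 - eta_c)

lo, ca, hi = emp_values(300.0, 600.0)      # 300 K cold, 600 K hot reservoirs
```

For Tc/Th = 0.5 this gives ηC/2 = 0.25 and ηC/(2-ηC) = 1/3, with the Curzon-Ahlborn value falling between them, as the low-dissipation analysis predicts.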
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are also reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, both important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Speeded reaching movements around invisible obstacles.
Todd E Hudson
We analyze the problem of obstacle avoidance from a Bayesian decision-theoretic perspective using an experimental task in which reaches around a virtual obstacle were made toward targets on an upright monitor. Subjects received monetary rewards for touching the target and incurred losses for accidentally touching the intervening obstacle. The locations of target-obstacle pairs within the workspace were varied from trial to trial. We compared human performance to that of a Bayesian ideal movement planner (one that chooses motor strategies maximizing expected gain), using the Dominance Test employed in Hudson et al. (2007). The ideal movement planner suffers from the same sources of noise as the human, but selects movement plans that maximize expected gain in the presence of that noise. We find good agreement between the predictions of the model and actual performance in most but not all experimental conditions.
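The ideal-planner computation described above (choose the aim point that maximizes expected gain under one's own endpoint noise) can be sketched in one dimension. The reward, penalty, noise level, and region boundaries below are invented for illustration, not taken from the experiment:

```python
import math

def norm_cdf(x, mu, sigma):
    """Gaussian cumulative distribution function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def expected_gain(aim, sigma, target_lo, target_hi, obst_lo, obst_hi,
                  reward=2.5, penalty=-5.0):
    """Expected monetary gain for a 1-D aim point under Gaussian endpoint noise."""
    p_target = norm_cdf(target_hi, aim, sigma) - norm_cdf(target_lo, aim, sigma)
    p_obstacle = norm_cdf(obst_hi, aim, sigma) - norm_cdf(obst_lo, aim, sigma)
    return reward * p_target + penalty * p_obstacle

# Ideal planner: grid-search the aim point that maximizes expected gain.
sigma = 0.5
aims = [i / 100.0 for i in range(-100, 301)]
best = max(aims, key=lambda a: expected_gain(a, sigma, 0.8, 1.6, -0.4, 0.2))
# With a penalized obstacle just below the target region, the optimal aim
# shifts past the target centre (1.2), away from the obstacle.
assert best > 1.2
```

The experiment's planner works over 2-D reach trajectories rather than a 1-D endpoint, but the decision rule (maximize expected gain given one's own motor variability) is the same.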
Priority setting in the REACH system.
Hansson, Sven Ove; Rudén, Christina
2006-04-01
Due to the large number of chemicals for which toxicological and ecotoxicological information is lacking, priority setting for data acquisition is a major concern in chemicals regulation. In the current European system, two administrative priority-setting criteria are used, namely novelty (i.e., time of market introduction) and production volume. In the proposed Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) system, the novelty criterion is no longer used, and production volume will be the main priority-setting criterion for testing requirements, supplemented in some cases with hazard indications obtained from QSAR modelling. This system for priority setting has severe weaknesses. In this paper we propose that a multicriteria system should be developed that includes at least three additional criteria: chemical properties, results from initial testing in a tiered system, and voluntary testing for which efficient incentives can be created. Toxicological and decision-theoretical research is needed to design testing systems with validated priority-setting mechanisms.
Reaching Consensus by Allowing Moments of Indecision
Svenkeson, A.; Swami, A.
2015-10-01
Group decision-making processes often turn into a drawn out and costly battle between two opposing subgroups. Using analytical arguments based on a master equation description of the opinion dynamics occurring in a three-state model of cooperatively interacting units, we show how the capability of a social group to reach consensus can be enhanced when there is an intermediate state for indecisive individuals to pass through. The time spent in the intermediate state must be relatively short compared to that of the two polar states in order to create the beneficial effect. Furthermore, the cooperation between individuals must not be too low, as the benefit to consensus is possible only when the cooperation level exceeds a specific threshold. We also discuss how zealots, agents that remain in one state forever, can affect the consensus among the rest of the population by counteracting the benefit of the intermediate state or making it virtually impossible for an opposition to form.
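As a rough illustration of the mechanism described above (not the paper's master-equation analysis), a toy agent-based version with an intermediate state can be simulated. Group size, the initial split, the seed, and the update rule below are all assumptions:

```python
import random

def consensus_time(n=50, n_plus=30, via_intermediate=True, seed=7, max_steps=200_000):
    """Toy imitation dynamics on opinion states -1, 0, +1: a polar agent adopting
    the majority view of the others first passes through the undecided state 0."""
    rng = random.Random(seed)
    states = [-1] * (n - n_plus) + [1] * n_plus
    for step in range(max_steps):
        i = rng.randrange(n)
        field = sum(states) - states[i]  # net opinion of everyone else
        target = 1 if field > 0 else -1 if field < 0 else rng.choice((-1, 1))
        if states[i] != target:
            # moment of indecision before a full switch
            states[i] = 0 if (via_intermediate and states[i] != 0) else target
        if abs(sum(states)) == n:  # everyone agrees
            return step + 1
    return max_steps

t = consensus_time()
assert t < 200_000  # the initial majority carries the group to consensus
```

In this toy version the intermediate state only delays each switch; the paper's result is subtler, with the benefit depending on the relative waiting time in the intermediate state and the cooperation level.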
Morphodynamics of a pseudomeandering gravel bar reach
Bartholdy, J.; Billi, P.
2002-01-01
A large number of rivers in Tuscany have channel planforms, which are neither straight nor what is usually understood as meandering. In the typical case, they consist of an almost straight, slightly incised main channel fringed with large lateral bars and lunate-shaped embayments eroded into the former flood plain. In the past, these rivers have not been recognised as an individual category and have often been considered to be either braided or meandering. It is suggested here that this type of river planform be termed pseudomeandering. A typical pseudomeandering river (the Cecina River) is described and analysed to investigate the main factors responsible for producing this channel pattern. A study reach (100×300 m) was surveyed in detail and related to data on discharge, channel changes after floods and grain-size distribution of bed sediments. During 18 months of topographic monitoring, the inner lateral bar in the study reach expanded and migrated towards the concave outer bank which, concurrently, retreated by as much as 25 m. A sediment balance was constructed to analyse bar growth and bank retreat in relation to sediment supply and channel morphology. The conditions necessary to maintain the pseudomeandering morphology of these rivers, preventing them from developing a meandering planform, are discussed and interpreted as a combination of a few main factors such as the flashy character of floods, sediment supply (influenced by both natural processes and human impact), the morphological effects of discharges with contrasting return intervals and the short duration of flood events. Finally, the channel response to floods with variable sediment transport capacity (represented by bed shear stress) is analysed using a simple model. It is demonstrated that bend migration is associated with moderate floods while major floods are responsible for the development of chute channels, which act to suppress bend growth and maintain the low sinuosity configuration of the channel.
Wang, Xiao-li; Pan, Gang; Bao, Hua-ying; Zhang, Xian-wei; Chen, Hao; Guo, Bo-shu
2008-08-01
The equilibrium phosphate concentration (EPC0) of the Yellow River bed sediments has been measured, which was used to predict whether bed sediments are acting as a source or sink of soluble reactive phosphate (SRP). The modified Langmuir isotherm equation was used to describe phosphate (P) sorption on the Yellow River sediments. The maximum P sorption capacity (PAC) and P-binding energy constant (k) were obtained by the modified Langmuir isotherm model. Native adsorbed exchangeable phosphorus (NAP), the EPC0, and partitioning coefficients (Kp) were subsequently calculated by the corresponding formulae. The influences of pH and ionic strength were evaluated. All the EPC0 values are higher than the P concentration in the overlying water, indicating a potential source of phosphate from the sediments. PAC is linearly related to the TOC content of the sediment. The sorption capacity of P increased rapidly with pH below 6.0, then reached a plateau between pH 6.0 and 9.7, and finally maintained a slightly higher level from pH 9.7 to 12.0. The adsorption of P by the sediment decreased with increasing Ca2+ ionic strength.
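The EPC0 calculation described above follows directly from the modified Langmuir model once PAC, k and NAP are known: net sorption is zero where the Langmuir term balances the native adsorbed P. The parameter values below are invented for illustration, not the Yellow River measurements:

```python
def net_sorption(c, pac, k, nap):
    """Modified Langmuir: net P sorbed (mg/g) at dissolved concentration c (mg/L).
    pac: maximum sorption capacity (mg/g); k: binding energy constant (L/mg);
    nap: native adsorbed exchangeable P (mg/g)."""
    return pac * k * c / (1.0 + k * c) - nap

def epc0(pac, k, nap):
    """Concentration at which net sorption is zero, i.e. the sediment is
    neither a source nor a sink of SRP (solve pac*k*c/(1+k*c) = nap for c)."""
    return nap / (k * (pac - nap))

# Illustrative (not measured) parameters: PAC = 0.60 mg/g, k = 2.0 L/mg, NAP = 0.05 mg/g
c0 = epc0(0.60, 2.0, 0.05)
assert abs(net_sorption(c0, 0.60, 2.0, 0.05)) < 1e-12
# If the overlying-water SRP concentration is below c0, the sediment releases P
# (acts as a source); above c0 it takes P up (acts as a sink).
```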
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
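The frequency-sampling baseline that the MaxEnt approach is compared against can be sketched as follows; the two-state chain is an invented example, and the MaxEnt estimator itself is not reproduced here:

```python
import random

def estimate_transitions(seq, states):
    """Frequency-sampling estimate of a Markov transition matrix: normalized
    counts of observed transitions (the usual baseline estimator)."""
    counts = {a: {b: 0 for b in states} for a in states}
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    P = {}
    for a in states:
        total = sum(counts[a].values())
        P[a] = {b: counts[a][b] / total if total else 1.0 / len(states)
                for b in states}
    return P

# An assumed two-state chain and a sampled path from it
true_P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}
rng = random.Random(0)

def simulate(P, n):
    s, path = 0, [0]
    for _ in range(n):
        s = 0 if rng.random() < P[s][0] else 1
        path.append(s)
    return path

P_long = estimate_transitions(simulate(true_P, 20000), (0, 1))
assert abs(P_long[0][0] - 0.9) < 0.02  # a long sample recovers the chain
```

The paper's point is that for short samples this estimator becomes noisy, and a MaxEnt reconstruction constrained by fewer, better-estimated statistics can reach the same accuracy from less history.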
Effect of speed manipulation on the control of aperture closure during reach-to-grasp movements.
Rand, Miya K; Squire, Linda M; Stelmach, George E
2006-09-01
This study investigates coordination between hand transport and grasp movement components by examining a hypothesis that the hand location, relative to the object, at which aperture closure is initiated remains relatively constant across a wide range of transport speeds. Subjects made reach-to-grasp movements to a dowel under four speed conditions: slow, comfortable, fast but comfortable, and maximum (i.e., as fast as possible). The distance traveled by the wrist after aperture reached its maximum (aperture closure distance) increased with transport speed across the speed conditions. This finding rejects the hypothesis and suggests that the speed of hand transport is taken into account in aperture closure initiation. Within each speed condition, however, the closure distance exhibited relatively small variability across trials, even though the total distance traveled by the wrist during the entire transport movement varied from trial to trial. The observed stability in aperture closure distance across trials implies that the hand distance to the object plays an important role in the control law governing the initiation of aperture closure. Further analysis showed that the aperture closure distance depended on the amplitude of peak aperture as well as hand velocity and acceleration. To clarify the form of the above control law, we analyzed four different mathematical models, in which a decision to initiate grasp closure is made as soon as a specific movement parameter (wrist distance to target or transport time) crosses a threshold that is either a constant value or a function of the above-mentioned other movement-related parameters. Statistical analysis performed across all movement conditions revealed that the control law model (according to which grasp initiation is made when hand distance to target becomes less than a certain linear function of aperture amplitude, hand velocity, and hand acceleration) produced significantly smaller residual errors.
Integrated Curricular Approaches in Reaching Adult Students
Emerick-Brown, Dylan
2013-01-01
In the field of adult basic education, there are two strategies that have been found to be of particular value to student learning: multiple intelligences and purpose-based learning. However, putting these learning theories into practice is not always as easy as an educator might at first believe. Adult basic education teacher Dylan Emerick-Brown…
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Effects of bruxism on the maximum bite force
Todić Jelena T.
2017-01-01
Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups were submitted to the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism influences the increase in the maximum bite force as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
Maximum efficiency of low-dissipation heat engines at arbitrary power
Holubec, Viktor; Ryabov, Artem
2016-07-01
We investigate maximum efficiency at a given power for low-dissipation heat engines. Close to maximum power, the maximum gain in efficiency scales as a square root of relative loss in power and this scaling is universal for a broad class of systems. For low-dissipation engines, we calculate the maximum gain in efficiency for an arbitrary fixed power. We show that engines working close to maximum power can operate at considerably larger efficiency compared to the efficiency at maximum power. Furthermore, we introduce universal bounds on maximum efficiency at a given power for low-dissipation heat engines. These bounds represent direct generalization of the bounds on efficiency at maximum power obtained by Esposito et al (2010 Phys. Rev. Lett. 105 150603). We derive the bounds analytically in the regime close to maximum power and for small power values. For the intermediate regime we present strong numerical evidence for the validity of the bounds.
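The square-root trade-off stated above can be written compactly. This is a sketch consistent with the abstract's wording, with the coefficient Δη and the shorthand δP introduced here for illustration (the paper's own symbols may differ):

```latex
\eta(P) \;\approx\; \eta^{*} \;+\; \Delta\eta\,\sqrt{\delta P}\,,
\qquad
\delta P \;\equiv\; 1 - \frac{P}{P_{\max}},
```

where η* is the efficiency at maximum power, P_max the maximum power, and Δη a system-dependent gain coefficient. Because the gain grows as the square root of the relative power loss, even a small sacrifice in power buys a comparatively large efficiency improvement near P_max.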
Consumer exposure modelling under REACH: Assessing the defaults.
Oltmanns, J; Neisel, F; Heinemeyer, G; Kaiser, E; Schneider, K
2015-07-01
Consumer exposure to chemicals from products and articles is rarely monitored. Since an assessment of consumer exposure has become particularly important under the European REACH Regulation, dedicated modelling approaches with exposure assessment tools are applied. The results of these tools are critically dependent on the default input values embedded in the tools. These inputs were therefore compiled for three lower tier tools (ECETOC TRA (version 3.0), EGRET and REACT) and benchmarked against a higher tier tool (ConsExpo (version 4.1)). Mostly, conservative input values are used in the lower tier tools. Some cases were identified where the lower tier tools used less conservative values than ConsExpo. However, these deviations only rarely resulted in less conservative exposure estimates compared to ConsExpo, when tested in reference scenarios. This finding is mainly due to the conservatism of (a) the default value for the thickness of the product layer (with complete release of the substance) used for the prediction of dermal exposure and (b) the complete release assumed for volatile substances (i.e. substances with a vapour pressure ≥ 10 Pa) for inhalation exposure estimates. The examples demonstrate that care must be taken when changing critical defaults in order to retain conservative estimates of consumer exposure to chemicals.
Continental reach: The Westcoast Energy story
Newman, P. C.
2002-07-01
A historical account is given of the spectacular success that was Westcoast Energy Inc., a Canadian natural gas giant that charted a wilderness pipeline from natural gas fields in Canada's sub-arctic solitude. The beginning of the company is traced to an event in 1934 when near the bank of the Pouce Coupe River, close to the Alberta-British Columbia border, Frank McMahon, a solitary wildcatter and the eventual founder of the company, first sighted the fiery inferno of a runaway wildcat well, drilled by geologists of the Imperial Oil Company during their original search for the Canadian petroleum basin's motherlode. It was on this occasion in 1934 that McMahon first conceived a geological profile that connected the gas-bearing sandstone of Pouce Coupe with the reservoir rock of the biggest natural gas field of Alberta, and a pipeline from this sandstone storehouse across the rugged heart of British Columbia to Vancouver, and south into the United States. It took the better part of a quarter century to realize the dream of that pipeline which, in due course, turned out to be only the first step towards reaching the top rank of Canadian corporations in operational and financial terms, and becoming one of only a handful whose story became a Canadian corporate legend. By chronicling the lives and contributions of the company's founder and senior officials over the years, the book traces the company's meteoric rise from a gleam in its founder's eye to a cautious regional utility, and to the aggressive Canadian adventurer that went on to burst the boundaries of its Pacific Coast world, until the continental reach of its operations and interests runs from Canada's Pacific shoreline to its Atlantic basins and Mexico's Campeche Bay to Alaska's Prudhoe Bay. The company's independent existence came to an end in 2002 when Westcoast Energy, by then a $15 billion operation, was acquired by Duke Energy Limited of North
Beck Jørgensen, Torben; Rutgers, Mark R.
2015-01-01
This article provides the introduction to a symposium on contemporary public values research. It is argued that the contributions to this symposium represent a Public Values Perspective, distinct from other specific lines of research that also use public value as a core concept. Public administration is approached in terms of processes guided or restricted by public values and as public value creating: public management and public policy-making are both concerned with establishing, following and realizing public values. To study public values a broad perspective is needed. The article suggests a research agenda for this encompassing kind of public values research. Finally, the contributions to the symposium are introduced.
THE MAXIMUM ENERGY OF ACCELERATED PARTICLES IN RELATIVISTIC COLLISIONLESS SHOCKS
Sironi, Lorenzo [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Spitkovsky, Anatoly [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544-1001 (United States); Arons, Jonathan, E-mail: lsironi@cfa.harvard.edu [Department of Astronomy, Department of Physics, and Theoretical Astrophysics Center, University of California, Berkeley, CA 94720 (United States)
2013-07-01
The afterglow emission from gamma-ray bursts (GRBs) is usually interpreted as synchrotron radiation from electrons accelerated at the GRB external shock that propagates with relativistic velocities into the magnetized interstellar medium. By means of multi-dimensional particle-in-cell simulations, we investigate the acceleration performance of weakly magnetized relativistic shocks, in the magnetization range 0 ≲ σ ≲ 10⁻¹. The pre-shock magnetic field is orthogonal to the flow, as generically expected for relativistic shocks. We find that relativistic perpendicular shocks propagating in electron-positron plasmas are efficient particle accelerators if the magnetization is σ ≲ 10⁻³. For electron-ion plasmas, the transition to efficient acceleration occurs for σ ≲ 3 × 10⁻⁵. Here, the acceleration process proceeds similarly for the two species, since the electrons enter the shock nearly in equipartition with the ions, as a result of strong pre-heating in the self-generated upstream turbulence. In both electron-positron and electron-ion shocks, we find that the maximum energy of the accelerated particles scales in time as ε_max ∝ t^(1/2). This scaling is shallower than the so-called (and commonly assumed) Bohm limit ε_max ∝ t, and it naturally results from the small-scale nature of the Weibel turbulence generated in the shock layer. In magnetized plasmas, the energy of the accelerated particles increases until it reaches a saturation value ε_sat/(γ₀ m_i c²) ≈ σ^(−1/4), where γ₀ m_i c² is the mean energy per particle in the upstream bulk flow. Further energization is prevented by the fact that the self-generated turbulence is confined within a finite region of thickness ∝ σ^(−1/2) around the shock. Our results can provide physically
Important ATLAS Forward Calorimeter Milestone Reached
Loch, P.
The ATLAS Forward Calorimeter working group has reached an important milestone in the production of their detectors. The mechanical assembly of the first electromagnetic module (FCal1C) has been completed at the University of Arizona on February 25, 2002, only ten days after the originally scheduled date. The photo shows the University of Arizona FCal group in the clean room, together with the assembled FCal1C module. The module consists of a stack of 18 round copper plates, each about one inch thick. Each plate is about 90 cm in diameter, and has 12260 precision-drilled holes in it, to accommodate the tube/rod electrode assembly. The machining of the plates, which was done at the Science Technology Center (STC) at Carleton University, Ottawa, Canada, required high precision to allow for easy insertion of the electrode copper tube. The plates have been carefully cleaned at the University of Arizona, to remove any machining residue and metal flakes. This process alone took about eleven weeks. Exactly 122...
LEP Dismantling Reaches Half-Way Stage
2001-01-01
LEP's last superconducting module leaves its home port... Just seven months into the operation, LEP dismantling is forging ahead. Two of the eight arcs which form the tunnel have already been emptied and the last of the accelerator's radiofrequency (RF) cavities has just been raised to the surface. The 160 people working on LEP dismantling have reason to feel pleased with their progress. All of the accelerator's 72 superconducting RF modules have already been brought to the surface, with the last one being extracted on 2nd May. This represents an important step in the dismantling process, as head of the project, John Poole, explains. 'This was the most delicate part of the project, because the modules are very big and they could only come out at one place', he says. The shaft at point 1.8 through which the RF cavity modules pass is 18 metres in diameter, while each module is 11.5 metres long. Some modules had to travel more than 10 kilometres to reach the shaft.
CAST reaches milestone but keeps on searching
CERN Courier (september 2011 issue)
2011-01-01
After eight years of searching for the emission of a dark matter candidate particle, the axion, from the Sun, the CERN Axion Solar Telescope (CAST) has fulfilled its original physics programme. CAST, the world's most sensitive axion helioscope, points a recycled prototype LHC dipole magnet at the Sun at dawn and dusk, looking for the conversion of axions to X-rays. It incorporates four state-of-the-art X-ray detectors: three Micromegas detectors and a pn-CCD imaging camera attached to a focusing X-ray telescope that was recovered from the German space programme (see CERN Courier April 2010). Over the years, CAST has operated with the magnet bores - the location of the axion conversion - in different conditions: first in vacuum, covering axion masses up to 20 meV/c², and then with a buffer gas (4He and later 3He) at various densities, finally reaching the goal of 1.17 eV/c² on 22 ...
Media perspective - new opportunities for reaching audiences
Haswell, Katy
2007-08-01
The world of media is experiencing a period of extreme and rapid change with the rise of internet television and the download generation. Many young people no longer watch standard TV. Instead, they go on-line, talking to friends and downloading pictures, videos, music clips to put on their own websites and watch/listen to on their laptops and mobile phones. Gone are the days when TV controllers determined what you watched and when you watched it. Now the buzzword is IPTV, Internet Protocol Television, with companies such as JOOST offering hundreds of channels on a wide range of subjects, all of which you can choose to watch when and where you wish, on your high-def widescreen with stereo surround sound at home or on your mobile phone on the train. This media revolution is changing the way organisations get their message out. And it is encouraging companies such as advertising agencies to be creative about new ways of accessing audiences. The good news is that we have fresh opportunities to reach young people through internet-based media and material downloaded through tools such as games machines, as well as through the traditional media. And it is important for Europlanet to make the most of these new and exciting developments.
Effects of aging on interjoint coordination during arm reaching
Marcus Vinicius da Silva
Introduction: Moving the arm towards an object is a complex task. Movements of the arm joints must be well coordinated in order to obtain a smooth and accurate hand trajectory. Most studies regarding reaching movements address young subjects, and how interjoint coordination in the neural mechanisms underlying motor control changes across the life stages is as yet unknown. Understanding these changes can lead to a better comprehension of neuromotor pathologies and therefore to more suitable therapies. Methods: Our purpose was to investigate interjoint coordination in three different age groups (children, young, elderly). Specific kinematic and kinetic variables were analyzed, focusing on defined parameters, to gain insight into arm coordination. Intersegmental dynamics was used to calculate shoulder and elbow torques assuming a 2-link segment model of the upper extremity (upper arm and forearm) with two frictionless joints (shoulder and elbow). A virtual reality environment was used to examine multidirectional planar reaching in three different directions (randomly presented). Results: Seven measures were computed to investigate group and interlimb differences: shoulder and elbow muscle torques (peak and impulse), work performed by the shoulder and elbow joints, maximum velocity, movement distance, distance error at final position, movement duration and acceleration duration. Our data analysis showed differences between movement performances for all analyzed variables, at all ages. Conclusion: We found that the intersegmental dynamics for the interlimb (left/right) comparisons were similar for the elderly and children groups as compared to the young. In addition, the coordination and control of motor tasks change during life, becoming less effective in old age.
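The intersegmental-dynamics computation described above (joint torques from a 2-link, frictionless planar model of upper arm and forearm) can be sketched with the standard planar two-link inverse-dynamics equations. The segment lengths, masses, and inertias below are illustrative anthropometric guesses, not the study's values:

```python
import math

def joint_torques(q, qd, qdd, l1=0.33, lc1=0.16, lc2=0.13,
                  m1=2.1, m2=1.65, I1=0.025, I2=0.075):
    """Inverse dynamics of a planar two-link arm (shoulder, elbow), gravity-free.
    q, qd, qdd: (shoulder, elbow) angles, velocities, accelerations
    (rad, rad/s, rad/s^2).  Returns (shoulder_torque, elbow_torque) in N*m."""
    (t1, t2), (t1d, t2d), (t1dd, t2dd) = q, qd, qdd
    c2, s2 = math.cos(t2), math.sin(t2)
    # Inertia matrix of the two-link chain
    M11 = I1 + I2 + m1 * lc1**2 + m2 * (l1**2 + lc2**2 + 2 * l1 * lc2 * c2)
    M12 = I2 + m2 * (lc2**2 + l1 * lc2 * c2)
    M22 = I2 + m2 * lc2**2
    # Coriolis/centripetal coupling term
    h = m2 * l1 * lc2 * s2
    tau1 = M11 * t1dd + M12 * t2dd - h * (2 * t1d * t2d + t2d**2)
    tau2 = M12 * t1dd + M22 * t2dd + h * t1d**2
    return tau1, tau2

# Sanity check: at rest with zero acceleration no torque is needed
# (the planar model carries no gravity term).
assert joint_torques((0.5, 1.0), (0.0, 0.0), (0.0, 0.0)) == (0.0, 0.0)
```

Given recorded joint kinematics, torque peaks and impulses (time integrals) like those compared across age groups above follow by evaluating this function along each trajectory.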
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Planning of the Extended Reach well Dieksand 2; Planung der Extended Reach Bohrung Dieksand 2
Frank, U.; Berners, H. [RWE-DEA AG, Hamburg (Germany). Drilling Team Mittelplate und Dieksand; Hadow, A.; Klop, G.; Sickinger, W. [Wintershall AG Erdoelwerke, Barnstdorf (Germany); Sudron, K.
1998-12-31
The Mittelplate oil field is located 7 km offshore the town of Friedrichskoog, at the southern edge of the Schleswig-Holstein Wadden Sea National Park. Recoverable reserves are estimated at 30 million tonnes of oil. At a production rate of 2,500 t/d, production will last about 33 years. The transport capacity from the offshore platform is limited, so additional wells drilled from the artificial Mittelplate island cannot significantly increase production capacity. From the summer of 1996 onwards, the possibility of developing the reservoir from land was investigated for the first time. A drilling team established in Hamburg in May 1997 was tasked with planning and drilling the extended reach well Dieksand 2. The planning phases for the well are presented, the planning parameters critical to the success of an extended reach well are explained, and ways are shown in which technical and geological risks can be taken into account during planning and managed further once drilling has begun. (orig.)
Construction and enumeration of Boolean functions with maximum algebraic immunity
Zhang, WenYing; Wu, ChuanKun; Liu, XiangZhong
2009-01-01
Algebraic immunity is a new cryptographic criterion proposed against algebraic attacks. In order to resist algebraic attacks, Boolean functions used in many stream ciphers should possess high algebraic immunity. This paper presents two main results to find balanced Boolean functions with maximum algebraic immunity. Through swapping the values of two bits, and then generalizing the result to swap some pairs of bits of the symmetric Boolean function constructed by Dalai, a new class of Boolean functions with maximum algebraic immunity is constructed. Enumeration of such functions is also given. For a given function p(x) with deg(p(x)) < [n/2], we give a method to construct functions in the form p(x)+q(x) which achieve the maximum algebraic immunity, where every term with nonzero coefficient in the ANF of q(x) has degree no less than [n/2].
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class associates with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Columbia River Estuary Ecosystem Classification Hydrogeomorphic Reach
Cannon, Charles M.; Ramirez, Mary F.; Heatwole, Danelle W.; Burke, Jennifer L.; Simenstad, Charles A.; O'Connor, Jim E.; Marcoe, Keith
2012-01-01
Estuarine ecosystems are controlled by a variety of processes that operate at multiple spatial and temporal scales. Understanding the hierarchical nature of these processes will aid in prioritization of restoration efforts. This hierarchical Columbia River Estuary Ecosystem Classification (henceforth "Classification") of the Columbia River estuary is a spatial database of the tidally-influenced reaches of the lower Columbia River, the tidally affected parts of its tributaries, and the landforms that make up their floodplains for the 230 kilometers between the Pacific Ocean and Bonneville Dam. This work is a collaborative effort between University of Washington School of Aquatic and Fishery Sciences (henceforth "UW"), U.S. Geological Survey (henceforth "USGS"), and the Lower Columbia Estuary Partnership (henceforth "EP"). Consideration of geomorphologic processes will improve the understanding of controlling physical factors that drive ecosystem evolution along the tidal Columbia River. The Classification is organized around six hierarchical levels, progressing from the coarsest, regional scale to the finest, localized scale: (1) Ecosystem Province; (2) Ecoregion; (3) Hydrogeomorphic Reach; (4) Ecosystem Complex; (5) Geomorphic Catena; and (6) Primary Cover Class. For Levels 4 and 5, we mapped landforms within the Holocene floodplain primarily by visual interpretation of Light Detection and Ranging (LiDAR) topography supplemented with aerial photographs, Natural Resources Conservation Service (NRCS) soils data, and historical maps. Mapped landforms are classified as to their current geomorphic function, the inferred process regime that formed them, and anthropogenic modification. Channels were classified primarily by a set of depth-based rules and geometric relationships. Classification Level 5 floodplain landforms ("geomorphic catenae") were further classified based on multivariate analysis of land-cover within the mapped landform area and attributed as "sub
Parallel explicit and implicit control of reaching.
Pietro Mazzoni
BACKGROUND: Human movement can be guided automatically (implicit control) or attentively (explicit control). Explicit control may be engaged when learning a new movement, while implicit control enables simultaneous execution of multiple actions. Explicit and implicit control can often be assigned arbitrarily: we can simultaneously drive a car and tune the radio, seamlessly allocating implicit or explicit control to either action. This flexibility suggests that sensorimotor signals, including those that encode spatially overlapping perception and behavior, can be accurately segregated to explicit and implicit control processes. METHODOLOGY/PRINCIPAL FINDINGS: We tested human subjects' ability to segregate sensorimotor signals to parallel control processes by requiring dual (explicit and implicit) control of the same reaching movement and testing for interference between these processes. Healthy control subjects were able to engage dual explicit and implicit motor control without degradation of performance compared to explicit or implicit control alone. We then asked whether segregation of explicit and implicit motor control can be selectively disrupted by studying dual-control performance in subjects with no clinically manifest neurologic deficits in the presymptomatic stage of Huntington's disease (HD). These subjects performed successfully under either explicit or implicit control alone, but were impaired in the dual-control condition. CONCLUSION/SIGNIFICANCE: The human nervous system can exert dual control on a single action, and is therefore able to accurately segregate sensorimotor signals to explicit and implicit control. The impairment observed in the presymptomatic stage of HD points to a possible crucial contribution of the striatum to the segregation of sensorimotor signals to multiple control processes.
Optimization of agitation and aeration conditions for maximum virginiamycin production.
Shioya, S; Morikawa, M; Kajihara, Y; Shimizu, H
1999-02-01
To maximize the productivity of virginiamycin, which is a commercially important antibiotic as an animal feed additive, an empirical approach was employed in the batch culture of Streptomyces virginiae. Here, the effects of dissolved oxygen (DO) concentration and agitation speed on the maximum cell concentration at the production phase, as well as on the productivity of virginiamycin, were investigated. To maintain the DO concentration in the fermentor at a certain level, either the agitation speed or the inlet oxygen concentration of the supply gas was manipulated. It was found that increasing the agitation speed had a positive effect on the antibiotic productivity independent of the DO concentration. The optimum DO concentration, agitation speed and addition of an autoregulator, virginiae butanolide C (VB-C), were determined to maximize virginiamycin productivity. The optimal strategy was to start the cultivation at 450 rpm and to continue until the DO concentration reached 80%. After reaching 80%, the DO concentration was maintained at this level by changing the agitation speed, up to a maximum of 800 rpm. The addition of an optimal amount of the autoregulator VB-C in an experiment resulted in the maximal production of virginiamycin M (399 mg/l), which was about 1.8-fold those obtained previously.
A case of rapid rock riverbed incision in a coseismic uplift reach and its implications
Huang, Ming-Wan; Pan, Yii-Wen; Liao, Jyh-Jong
2013-02-01
During the 1999 Chi-Chi earthquake (Mw = 7.6) in Taiwan, the coseismic displacement induced fault scarps and a pop-up structure in the Taan River. The fault scarps across the river experienced maximum vertical slip of 10 m, which disturbed the dynamic equilibrium of the fluvial system. As a result, rapid incision in the weak bedrock, with a maximum depth of 20 m, was activated within a decade after its armor layer was removed. This case provides an excellent opportunity for closely tracking and recording the progressive evolution of river morphology that is subjected to coseismic uplift. Based on multistaged orthophotographs and digital elevation model (DEM) data, the process of morphology evolution in the uplift reach was divided into four consecutive stages. Plucking is the dominant mechanism of bedrock erosion associated with channel incision and knickpoint migration. The astonishingly high rate of knickpoint retreat (KPR), as rapid as a few hundred meters per year, may be responsible for the rapid incision in the main channel. The reasons for the high rate of KPR are discussed in depth. The total length of the river affected by the coseismic uplift is 5 km: 1 km in the uplift reach and 4 km in the downstream reach. The downstream reach was affected by a reduction in sediment supply and increase in stream power. The KPR cut through the uplift reach within roughly a decade; further significant flooding in the future will mainly cause widening instead of deepening of the channel.
Spiking and LFP activity in PRR during symbolically instructed reaches
2011-01-01
The spiking activity in the parietal reach region (PRR) represents the spatial goal of an impending reach when the reach is directed toward or away from a visual object. The local field potentials (LFPs) in this region also represent the reach goal when the reach is directed to a visual object. Thus PRR is a candidate area for reading out a patient's intended reach goals for neural prosthetic applications. For natural behaviors, reach goals are not always based on the location of a visual obj...
What can be learnt from an ecotoxicity database in the framework of the REACh regulation?
Henegar, Adina; Mombelli, Enrico [Unite Modeles pour l' Ecotoxicologie et la Toxicologie (METO), INERIS, Parc Technologique Alata, BP2, 60550 Verneuil-en-Halatte (France); Pandard, Pascal [Unite Expertise et Essais en Ecotoxicologie (EXES), INERIS, Parc Technologique Alata, BP2, 60550 Verneuil-en-Halatte (France); Pery, Alexandre R.R., E-mail: alexandre.pery@ineris.fr [Unite Modeles pour l' Ecotoxicologie et la Toxicologie (METO), INERIS, Parc Technologique Alata, BP2, 60550 Verneuil-en-Halatte (France)
2011-01-01
Since REACh applies throughout the EU, special emphasis has been put on the reduction of systematic ecotoxicity testing. In this context, it is important to extract a maximum of information from existing ecotoxicity databases in order to propose alternative methods aimed at replacing and reducing experimental testing. Consequently, we analyzed a database of new chemicals registered in France and Europe during the last twenty years reporting aquatic ecotoxicity data with respect to three trophic levels (i.e., Algae EC50 72 h, Daphnia EC50 48 h and Fish LC50 96 h). In order to ensure the relevance of the comparison between these three experimental tests, we performed a stringent data selection based on the pertinence and quality of available ecotoxicological information. At the end of this selection, less than 5% of the initial number of chemicals was retained for subsequent analysis. Such an analysis showed that fish was the least sensitive trophic level, whereas Daphnia had the highest sensitivity. Moreover, thanks to an analysis of the relative sensitivity of trophic levels, it was possible to establish that respective correction factors of 50 and 10 would be necessary if only one or two test values were available. From a physicochemical point of view, it was possible to characterize two significant correlations relating the sensitivity of the aforementioned trophic levels with the chemical structure of the retained substances. This analysis showed that algae displayed a higher sensitivity towards chemicals containing acid fragments whereas fish presented a higher sensitivity towards chemicals containing aromatic ether fragments. Overall, our work suggests that statistical analysis of historical data combined with data yielded by the REACh regulation should permit the derivation of robust safety factors, testing strategies and mathematical models. These alternative methods, in turn, could allow a replacement and reduction of ecotoxicological testing.
Keil, Nina M; Pommereau, Marc; Patt, Antonia; Wechsler, Beat; Gygax, Lorenz
2017-02-01
Confined goats spend a substantial part of the day feeding. A poorly designed feeding place increases the risk of feeding in nonphysiological body postures, and even injury. Scientifically validated information on suitable dimensions of feeding places for loose-housed goats is almost absent from the literature. The aim of the present study was, therefore, to determine feeding place dimensions that would allow goats to feed in a species-appropriate, relaxed body posture. A total of 27 goats with a height at the withers of 62 to 80 cm were included in the study. Goats were tested individually in an experimental feeding stall that allowed the height difference between the feed table, the standing area of the forelegs, and a feeding area step (difference in height between forelegs and hind legs) to be varied. The goats accessed the feed table via a palisade feeding barrier. The feed table was equipped with recesses at varying distances to the feeding barrier (5-55 cm in 5-cm steps) at angles of 30°, 60°, 90°, 120°, or 150° (feeding angle), which were filled with the goats' preferred food. In 18 trials, balanced for order across animals, each animal underwent all possible combinations of feeding area step (3 levels: 0, 10, and 20 cm) and of difference in height between feed table and standing area of forelegs (6 levels: 0, 5, 10, 15, 20, and 25 cm). The minimum and maximum reach at which the animals could reach feed on the table with a relaxed body posture was determined for each combination. Statistical analysis was performed using mixed-effects models. The animals were able to feed with a relaxed posture when the feed table was at least 10 cm higher than the standing height of the goats' forelegs. Larger goats achieved smaller minimum reaches and minimum reach increased if the goats' head and neck were angled. Maximum reach increased with increasing height at withers and height of the feed table. The presence of a feeding area step had no influence on minimum and
Is There a Maximum Mass for Black Holes in Galactic Nuclei?
Inayoshi, Kohei; Haiman, Zoltán
2016-09-01
The largest observed supermassive black holes (SMBHs) have a mass of M_BH ≃ 10^10 M_⊙, nearly independent of redshift, from the local (z ≃ 0) to the early (z > 6) universe. We suggest that the growth of SMBHs above a few × 10^10 M_⊙ is prevented by small-scale accretion physics, independent of the properties of their host galaxies or of cosmology. Growing more massive BHs requires a gas supply rate from galactic scales onto a nuclear region as high as ≳ 10^3 M_⊙ yr^-1. At such a high accretion rate, most of the gas converts to stars at large radii (~10-100 pc), well before reaching the BH. We adopt a simple model for a star-forming accretion disk and find that the accretion rate in the subparsec nuclear region is reduced to the smaller value of at most a few M_⊙ yr^-1. This prevents SMBHs from growing above ≃ 10^11 M_⊙ in the age of the universe. Furthermore, once an SMBH reaches a sufficiently high mass, this rate falls below the critical value at which the accretion flow becomes advection dominated. Once this transition occurs, BH feeding can be suppressed by strong outflows and jets from hot gas near the BH. We find that the maximum SMBH mass, given by this transition, is between M_BH,max ≃ (1-6) × 10^10 M_⊙, depending primarily on the efficiency of angular momentum transfer inside the galactic disk, and not on other properties of the host galaxy.
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r2=0.88). Despite exceeding unity, the normalized moments were consistent across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
Reaching remote areas in Latin America.
Jaimes, R
1994-01-01
Poor communities in remote and inaccessible areas tend to not only be cut off from family planning education and services, but they are also deprived of basic primary health care services. Efforts to bring family planning to such communities and populations should therefore be linked with other services. The author presents three examples of programs to bring effective family planning services to remote communities in Central and South America. Outside of the municipal center in the Tuxtlas region of Mexico, education and health levels are low and people live according to ancient customs. Ten years ago with the help of MEXFAM, the IPPF affiliate in Mexico, two social promoters established themselves in the town of Catemaco to develop a community program of family planning and health care offering education and prevention to improve the quality of people's lives. Through their health brigades taking health services to towns without an established health center, the program has influenced an estimated 100,000 people in 50 villages and towns. The program also has a clinic. In Guatemala, the Family Welfare Association (APROFAM) gave bicycles to 240 volunteer health care workers to facilitate their outreach work in rural areas. APROFAM since 1988 has operated an integrated program to treat intestinal parasites and promote family planning in San Lucas de Toliman, an Indian town close to Lake Atitlan. Providing health care to more than 10,000 people, the volunteer staff has covered the entire department of Solola, reaching each family in the area. Field educators travel on motorcycles through the rural areas of Guatemala coordinating with the health volunteers the distribution of contraceptives at the community level. The Integrated Project's Clinic was founded in 1992 and currently carries out pregnancy and Pap tests, as well as general lab tests. Finally, Puna is an island in the middle of the Gulf of Guayaquil, Ecuador. Women on the island typically have 10
XMM classroom competitions : reaching for the stars!
1999-09-01
Partnered by a unique education network 'European Schoolnet'(*), ESA is today launching these three competitions for schools (age range: 8 to final year) in its Member States: draw a telescope, describe the benefits of space-based astronomy or produce an astronomy observation proposal. Details can be found on the special competition website : http://sci.esa.int/xmm/competition "Draw me a telescope!" This competition for 8 to 12 year-olds asks the class to draw a telescope (inside a 20 - 50 cm diameter circle). The 14 winning entries, one per Member State, will be included in a specially-designed official XMM mission logo to go on the Ariane-5 launcher fairing for official unveiling on launch day. A representative of each winning class will be invited to Kourou for the launch. Deadline for entries : 8 October 1999. For full information on how to enter see : http://sci.esa.int/xmm/competition "What's new, Mr Galileo?" The essay competition for 13 to 15 year-olds challenges an English class, writing in the international language of space, to submit a single page (500 words maximum) description of space-based astronomy and its benefits for humanity. The 14 winners, one per Member States, will be invited to Kourou to visit the Guiana Space Centre, Europe's spaceport, and witness final XMM launch preparations. Deadline for entries : 15 October 1999. For full information on how to enter see : http://sci.esa.int/xmm/competition. "Stargazing" In the final-year class competition, ESA is providing a unique opportunity to use the XMM telescope. Here, the physics class, assisted by the scientific community, has to submit an observation project. The 14 winning proposals will be put into practice in 2000 at a summer camp. Further details will be announced once XMM is in orbit. Note to editors: The X-ray Multi-Mirror mission is the second Cornerstone of ESA's Horizon 2000 Plus science programme. The telescope will revolutionise cosmic X-ray astronomy by harvesting far more X
MaxOcc: a web portal for maximum occurrence analysis.
Bertini, Ivano; Ferella, Lucio; Luchinat, Claudio; Parigi, Giacomo; Petoukhov, Maxim V; Ravera, Enrico; Rosato, Antonio; Svergun, Dmitri I
2012-08-01
The MaxOcc web portal is presented for the characterization of the conformational heterogeneity of two-domain proteins, through the calculation of the Maximum Occurrence that each protein conformation can have in agreement with experimental data. Whatever the real ensemble of conformations sampled by a protein, the weight of any conformation cannot exceed the calculated corresponding Maximum Occurrence value. The present portal allows users to compute these values using any combination of restraints like pseudocontact shifts, paramagnetism-based residual dipolar couplings, paramagnetic relaxation enhancements and small angle X-ray scattering profiles, given the 3D structure of the two domains as input. MaxOcc is embedded within the NMR grid services of the WeNMR project and is available via the WeNMR gateway at http://py-enmr.cerm.unifi.it/access/index/maxocc . It can be used freely upon registration to the grid with a digital certificate.
Study of maximum pressure for composite hepta-tubular powders
M. C. Gupta
1959-10-01
In this paper, expressions for the positions at which maximum pressure occurs in the case of composite hepta-tubular powders used in conventional guns, and the corresponding conditions, have been derived under certain assumptions, viz., the value of n, the ratio of specific heats, has been assumed to be the same for both charges, and the covolume corrections have not been neglected.
Erich Regener and the maximum in ionisation of the atmosphere
Carlson, P
2014-01-01
In the 1930s the German physicist Erich Regener (1881-1955) did important work on the measurement of the rate of production of ionisation deep underwater and in the atmosphere. He discovered, along with one of his students, Georg Pfotzer, the altitude at which the production of ionisation in the atmosphere reaches a maximum, often, but misleadingly, called the Pfotzer maximum. Regener was one of the first to estimate the energy density of cosmic rays, an estimate that was used by Baade and Zwicky to bolster their postulate that supernovae might be their source. Yet Regener's name is less recognised by present-day cosmic ray physicists than it should be, largely because in 1937 he was forced to take early retirement by the National Socialists as his wife had Jewish ancestors. In this paper we briefly review his work on cosmic rays and recommend an alternative naming of the ionisation maximum. The influence that Regener had on the field through his son, his son-in-law, his grandsons and his students and through...
Sengbusch, E; Pérez-Andújar, A; DeLuca, P M; Mackie, T R
2009-02-01
Several compact proton accelerator systems for use in proton therapy have recently been proposed. Of paramount importance to the development of such an accelerator system is the maximum kinetic energy of protons, immediately prior to entry into the patient, that must be reached by the treatment system. The commonly used value for the maximum kinetic energy required for a medical proton accelerator is 250 MeV, but it has not been demonstrated that this energy is indeed necessary to treat all or most patients eligible for proton therapy. This article quantifies the maximum kinetic energy of protons, immediately prior to entry into the patient, necessary to treat a given percentage of patients with rotational proton therapy, and examines the impact of this energy threshold on the cost and feasibility of a compact, gantry-mounted proton accelerator treatment system. One hundred randomized treatment plans from patients treated with IMRT were analyzed. The maximum radiological pathlength from the surface of the patient to the distal edge of the treatment volume was obtained for 180 degrees continuous arc proton therapy and for 180 degrees split arc proton therapy (two 90 degrees arcs) using CT# profiles from the Pinnacle (Philips Medical Systems, Madison, WI) treatment planning system. In each case, the maximum kinetic energy of protons, immediately prior to entry into the patient, that would be necessary to treat the patient was calculated using proton range tables for various media. In addition, Monte Carlo simulations were performed to quantify neutron production in a water phantom representing a patient as a function of the maximum proton kinetic energy achievable by a proton treatment system. Protons with a kinetic energy of 240 MeV, immediately prior to entry into the patient, were needed to treat 100% of patients in this study. However, it was shown that 90% of patients could be treated at 198 MeV, and 95% of patients could be treated at 207 MeV. Decreasing the
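The energy-versus-depth tradeoff discussed in this abstract can be sketched with the Bragg-Kleeman rule R = αE^p. The constants below are commonly quoted empirical values for protons in water (α ≈ 2.2 × 10^-3 cm·MeV^-p, p ≈ 1.77), assumed here for illustration and not taken from this study.

```python
# Bragg-Kleeman rule R = alpha * E**p for proton range in water.
# alpha and p are assumed empirical fit constants, not values from this study.
ALPHA = 2.2e-3  # cm * MeV**(-P)
P = 1.77

def proton_range_cm(energy_mev):
    """Approximate range in water (cm) for a proton of the given energy (MeV)."""
    return ALPHA * energy_mev ** P

def required_energy_mev(depth_cm):
    """Beam energy (MeV) needed to reach the given radiological depth (cm)."""
    return (depth_cm / ALPHA) ** (1.0 / P)
```

With these constants, 250 MeV corresponds to roughly 38 cm of range in water, which conveys why the conventional 250 MeV budget comfortably covers even the deepest targets in the study, while ~200 MeV already suffices for most patients.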
A hybrid solar panel maximum power point search method that uses light and temperature sensors
Ostrowski, Mariusz
2016-04-01
Solar cells have low efficiency and non-linear characteristics, so to increase output power they are connected in more complex structures. A solar panel consists of series-connected solar cells with a few bypass diodes that limit the negative effects of partial shading. The panel is connected to a device called a maximum power point tracker, which adapts the panel's output power to the load and runs a built-in algorithm to track the panel's maximum power point. Under non-uniform illumination the bypass diodes can produce local maxima on the power-voltage curve, and traditional maximum power point tracking algorithms may then find only a local maximum power point. This article presents a hybrid maximum power point search algorithm. The main goal of the proposed method is to combine two algorithms: one that uses temperature sensors to track the maximum power point under partial shading, and one that uses an illumination sensor to track it under uniform illumination. In contrast to other methods, the proposed algorithm uses correlation functions to determine the relationship between the readings of the illumination and temperature sensors and the corresponding current and voltage at the maximum power point. Under partial shading, the algorithm computes candidate local maximum power points from the temperature readings and the correlation function, measures the power at each candidate, selects the one with the highest power, and starts a perturb-and-observe search from it. Under uniform illumination, it computes the maximum power point from the illumination reading and the correlation function and likewise starts the perturb-and-observe algorithm from it. In addition, the proposed method uses a special coefficient modification of the correlation functions algorithm.
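The final stage of both branches described above is a perturb-and-observe search. A minimal sketch of that classic hill-climbing loop is shown below on a toy single-peak power-voltage curve; the curve shape, voltages and step size are invented for illustration and do not model a real panel.

```python
def perturb_and_observe(power_at, v_start, dv=0.1, steps=200):
    """Classic P&O hill climbing: keep stepping the operating voltage in
    the direction that last increased the measured power; reverse the
    perturbation direction whenever the power drops."""
    v = v_start
    p_prev = power_at(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv
        p = power_at(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy P-V curve with a single maximum at 17 V, standing in for a
# uniformly illuminated panel.
def toy_pv_power(v):
    return 80.0 - (v - 17.0) ** 2

v_mpp = perturb_and_observe(toy_pv_power, v_start=12.0)
```

The search settles into a small oscillation around the 17 V peak. Under partial shading the curve gains several local maxima, which is exactly why the proposed method first picks a starting candidate from the sensor-based correlation functions before launching this local search.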
Describing Adequacy of cure with maximum hardness ratios and non-linear regression.
Bouschlicher, Murray; Berning, Kristen; Qian, Fang
2008-01-01
Knoop Hardness (KH) ratios (HR) >= 80% are commonly used as criteria for the adequate cure of a composite. These per-specimen HRs can be misleading, as both numerator and denominator may increase concurrently before reaching an asymptotic, top-surface maximum hardness value (H(MAX)). Extended cure times were used to establish H(MAX), and descriptive statistics and non-linear regression analysis were used to describe the relationship between exposure duration and HR and to predict the time required for HR-H(MAX) = 80%. Composite samples 2.00 x 5.00 mm (thickness x diameter; n = 5/grp) were cured for 10, 20, 40, 60, 90, 120, 180 and 240 seconds in a 2-composite x 2-light-curing-unit design. A microhybrid (Point 4, P4) or microfill (Heliomolar, HM) composite was cured with a QTH or LED light curing unit and then stored in the dark for 24 hours prior to KH testing. The non-linear regression model H = (H(MAX) - c)(1 - e^(-kt)) + c, where H(MAX) = maximum hardness (a theoretical asymptotic value), c = constant (hardness at t = 0), k = rate constant and t = exposure duration, describes the relationship between radiant exposure (irradiance x time) and HRs. Exposure durations for HR-H(MAX) = 80% were calculated. Two-sample t-tests for pairwise comparisons evaluated the relative performance of the light curing units for each surface x composite x exposure (10-90 s) combination. The goodness-of-fit of the non-linear regression, r^2, ranged from 0.68 to 0.95 (mean = 0.82). The microhybrid (P4) exposure needed to achieve HR-H(MAX) = 80% was 21 seconds for the QTH unit and 34 seconds for the LED unit; corresponding values for the microfill (HM) were 71 and 74 seconds, respectively. P4 HR-H(MAX) for LED vs QTH was statistically similar for 10 to 40 seconds, while HM HR-H(MAX) for LED was significantly lower than for QTH over 10 to 40 seconds. It was concluded that redefined hardness ratios based on maximum hardness used in conjunction with non-linear regression
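The regression model in this abstract can be fit with standard non-linear least squares and then inverted to predict the exposure giving an 80% hardness ratio. The data below are synthetic, generated from invented parameter values purely to show the mechanics; they are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def hardness(t, h_max, c, k):
    # H(t) = (H_max - c) * (1 - e^(-k t)) + c, the model from the abstract
    return (h_max - c) * (1.0 - np.exp(-k * t)) + c

# Synthetic exposure times (s) and hardness readings; parameter values
# (H_max = 60, c = 20, k = 0.03) are invented for illustration.
t = np.array([10, 20, 40, 60, 90, 120, 180, 240], dtype=float)
h = hardness(t, 60.0, 20.0, 0.03)

params, _ = curve_fit(hardness, t, h, p0=(50.0, 10.0, 0.01))
h_max, c, k = params

# Exposure giving a hardness ratio H/H_max = 0.8, by inverting the model:
# (h_max - c)(1 - e^(-k t)) + c = 0.8 h_max  =>  solve for t.
t80 = -np.log(1.0 - (0.8 * h_max - c) / (h_max - c)) / k
```

Inverting the fitted curve like this is the calculation behind the abstract's predicted 21 s and 34 s exposures.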
2009-10-01
horizontal vs. vertical. • According to the theory of planned behavior (Ajzen, 1988, 2002), attitude, subjective norms and perceived control ... determine intention, which may end in behavior. Defining Human Values: • Cross-cultural theories on values emerged in the 80s, developed by three main ... attitudes with social structure. • According to Parsons (Parsons & Shils, 1951), values instigate behavior. • In line
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
Emerson Alexandrino
2005-12-01
The evolution of tillering, forage biomass, leaf area index (LAI), interception of photosynthetically active radiation (IPAR), and efficiency of radiation use (ERU) in Panicum maximum cv. Mombaça was assessed during the regrowth period of the grass, in the Summer and Autumn seasons. Similarly, grass growth indices were assessed: net assimilation rate (NAR), leaf area ratio (LAR), and relative growth rate (RGR). All these variables were estimated from field observations taken on the 7th, 14th, 21st, 28th, 35th, 42nd and 49th days of the regrowth period (treatments), in the Summer and Autumn seasons. The experimental design was completely randomized with four replications. One area of 1,200 m² was used in each season; 28 and 24 sampling points were chosen in the Summer and Autumn seasons, respectively, for their similarity regarding canopy height and soil cover condition, and were randomly assigned to the treatments. Tillering was more intense in the first regrowth week and declined afterwards, reaching negligible values from the fourth week on. Interception of photosynthetically active radiation evolved in an asymptotic manner, reaching a highest value of 96%, with no difference between Summer and Autumn. Leaf area index showed the same pattern in the Summer and Autumn, reaching values of 8 and 4, respectively; on the other hand, forage biomass responded quadratically to the duration of the regrowth period. Radiation use efficiency reached values of 1.76 and 0.54 g DM/MJ in Summer and Autumn, respectively. RGR and NAR decreased in an asymptotic pattern in both seasons, with higher values in the Summer, while LAR increased in the first four weeks, reaching values of 0.017 and 0.013 m²/g towards the 28th day of the regrowth period, in Autumn and Summer respectively.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Discussion on Maximum Impedance in RLC Parallel Circuit
赵尚兴; 马庆
2014-01-01
To study the maximum impedance and the impedance at resonance in an RLC parallel circuit in three situations, the relationship between impedance and inductance, capacitance, and power frequency is discussed. It is shown that the maximum impedance is not reached at resonance when adjusting the power frequency or the inductance. The results have referential value for circuit course teaching.
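The abstract's central claim can be checked numerically. The sketch below (component values are invented, not taken from the paper) sweeps the impedance magnitude of a practical parallel RLC circuit, with the coil resistance R in series with L and that branch in parallel with C, and compares the frequency of maximum |Z| against the unity-power-factor resonant frequency:

```python
import numpy as np

# Practical parallel RLC: coil (R in series with L) in parallel with C.
# Illustrative component values, not from the paper.
R, L, C = 10.0, 1e-3, 1e-6            # ohms, henries, farads

w = np.linspace(1e3, 6e4, 200_000)     # angular frequency sweep, rad/s
z_rl = R + 1j * w * L                  # coil branch impedance
z_c = 1.0 / (1j * w * C)               # capacitor branch impedance
z = z_rl * z_c / (z_rl + z_c)          # parallel combination

# Unity-power-factor resonance for this topology
w_res = np.sqrt(1.0 / (L * C) - (R / L) ** 2)
# Frequency at which |Z| is actually largest
w_max = w[np.argmax(np.abs(z))]

print(f"resonance: {w_res:.0f} rad/s, maximum |Z| at: {w_max:.0f} rad/s")
```

With these values the maximum of |Z| falls about 5% above the resonant frequency, illustrating the abstract's point that tuning to resonance does not, in general, maximize the impedance.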
Poor shape perception is the reason reaches-to-grasp are visually guided online.
Lee, Young-Lim; Crabtree, Charles E; Norman, J Farley; Bingham, Geoffrey P
2008-08-01
Both judgment studies and studies of feedforward reaching have shown that the visual perception of object distance, size, and shape are inaccurate. However, feedback has been shown to calibrate feedforward reaches-to-grasp to make them accurate with respect to object distance and size. We now investigate whether shape perception (in particular, the aspect ratio of object depth to width) can be calibrated in the context of reaches-to-grasp. We used cylindrical objects with elliptical cross-sections of varying eccentricity. Our participants reached to grasp the width or the depth of these objects with the index finger and thumb. The maximum grasp aperture and the terminal grasp aperture were used to evaluate perception. Both occur before the hand has contacted an object. In Experiments 1 and 2, we investigated whether perceived shape is recalibrated by distorted haptic feedback. Although somewhat equivocal, the results suggest that it is not. In Experiment 3, we tested the accuracy of feedforward grasping with respect to shape with haptic feedback to allow calibration. Grasping was inaccurate in ways comparable to findings in shape perception judgment studies. In Experiment 4, we hypothesized that online guidance is needed for accurate grasping. Participants reached to grasp either with or without vision of the hand. The result was that the former was accurate, whereas the latter was not. We conclude that shape perception is not calibrated by feedback from reaches-to-grasp and that online visual guidance is required for accurate grasping because shape perception is poor.
Estimating the maximum potential revenue for grid connected electricity storage :
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
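The arbitrage-only calculation the abstract describes can be sketched as a small linear program. This is a minimal illustration under simplifying assumptions (hourly steps, efficiency applied on charging only, no regulation market); the price series and device parameters are invented, not the CAISO data used in the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly prices ($/MWh) and device parameters (illustrative only)
prices = np.array([20.0, 15.0, 18.0, 40.0, 55.0, 30.0])
T = len(prices)
p_max = 1.0    # max charge/discharge per hour, MWh
e_max = 2.0    # energy capacity, MWh
eta = 0.9      # efficiency loss, applied when charging

# Decision vector x = [c_0..c_{T-1}, d_0..d_{T-1}]: charge and discharge per hour.
# Maximize sum(prices*d - prices*c)  <=>  minimize sum(prices*c - prices*d).
cost = np.concatenate([prices, -prices])

# State of charge after hour t (starting empty): s_t = eta*cumsum(c) - cumsum(d).
# Enforce 0 <= s_t <= e_max with two blocks of inequalities.
Lcum = np.tril(np.ones((T, T)))          # cumulative-sum matrix
A_soc = np.hstack([eta * Lcum, -Lcum])   # s = A_soc @ x
A_ub = np.vstack([A_soc, -A_soc])
b_ub = np.concatenate([np.full(T, e_max), np.zeros(T)])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, p_max)] * (2 * T))
revenue = -res.fun
print(f"maximum arbitrage revenue: ${revenue:.2f}")
```

The optimal schedule charges in the cheap early hours and discharges into the price peak; the LP's objective value is the upper bound on arbitrage revenue that the paper uses as a benchmark.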
Far-Reaching Impacts of African Dust- A Calipso Perspective
Yu, Hongbin; Chin, Mian; Yuan, Tianle; Bian, Huisheng; Prospero, Joseph; Omar, Ali; Remer, Lorraine; Winker, David; Yang, Yuekui; Zhang, Yan; Zhang, Zhibo
2014-01-01
African dust can be transported across the tropical Atlantic and reach the Amazon basin, exerting far-reaching impacts on climate in downwind regions. The transported dust influences the surface-atmosphere interactions and cloud and precipitation processes through perturbing the surface radiative budget and atmospheric radiative heating and acting as cloud condensation nuclei and ice nuclei. Dust also influences the biogeochemical cycle and climate by providing nutrients vital to the productivity of ocean biomass and Amazon forests. Assessing these climate impacts relies on an accurate quantification of dust transport and deposition. Current model simulations show extremely large diversity, which calls for observational constraints. Kaufman et al. (2005) estimated from MODIS aerosol measurements that about 144 Tg of dust was deposited into the tropical Atlantic and 50 Tg of dust into the Amazon in 2001. This estimated dust import to the Amazon is a factor of 3-4 higher than other observations and models. However, several studies have argued that the oversimplified characterization of the dust vertical profile in that study would have introduced large uncertainty and very likely a high bias. In this study we quantify the trans-Atlantic dust transport and deposition by using 7 years (2007-2013) of observations from the CALIPSO lidar. CALIPSO acquires high-resolution aerosol extinction and depolarization profiles in both cloud-free and above-cloud conditions. The unique CALIPSO capability of profiling aerosols above clouds offers an unprecedented opportunity of examining uncertainties associated with the use of MODIS clear-sky data. Dust is separated from other types of aerosols using the depolarization measurements. We estimate that, on the basis of the 7-year average, 118-142 Tg of dust is deposited into the tropical Atlantic and 38-60 Tg of dust into the Amazon basin. Substantial interannual variations are observed during the period, with the maximum to minimum ratio of about 1
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results of the samples, the whole length of CC used in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
ERF1 -- Enhanced River Reach File 1.2
U.S. Geological Survey, Department of the Interior — U.S. Environmental Protection Agency's River Reach File 1 (RF1) to ensure the hydrologic integrity of the digital reach traces and to quantify the mean water time of...
Minetti, Andrea; Hurtado, Northan; Grais, Rebecca F; Ferrari, Matthew
2014-01-15
Current mass vaccination campaigns in measles outbreak response are nonselective with respect to the immune status of individuals. However, the heterogeneity in immunity, due to previous vaccination coverage or infection, may bias such campaigns toward those with previous high access to vaccination and may result in a lower-than-expected effective impact. During the 2010 measles outbreak in Malawi, only 3 of the 8 districts where vaccination occurred achieved a measurable effective campaign impact (i.e., a reduction in measles cases in the targeted age groups greater than that observed in nonvaccinated districts). Simulation models suggest that selective campaigns targeting hard-to-reach individuals are of greater benefit, particularly in highly vaccinated populations, even for low target coverage and with late implementation. However, the choice between targeted and nonselective campaigns should be context specific, achieving a reasonable balance of feasibility, cost, and expected impact. In addition, it is critical to develop operational strategies to identify and target hard-to-reach individuals.
Tests of maximum oxygen intake. A critical review.
Shephard, R J
1984-01-01
The determinants of endurance effort vary, depending upon the extent of the muscle mass that is activated. Large muscle work, such as treadmill running, is halted by impending circulatory failure; lack of venous return may compound the basic problem of an excessive cardiac work-load. If the task calls for use of a smaller muscle mass, there is ultimately difficulty in perfusing the active muscles, and glycolysis is halted by an accumulation of acid metabolites. Simple field tests of endurance, such as Cooper's 12-minute run and the Canadian Home Fitness Test, have some value in the rapid screening of large populations, but like other submaximal tests of human performance they lack the precision needed to advise the individual. The directly measured maximum oxygen intake (VO2 max) varies with the type of exercise. The highest values are obtained during uphill treadmill running, but well trained athletes often approach these values during performance of sport-specific tasks. Limitations of methodology and wide interindividual variations of constitutional potential limit the interpretation of maximum oxygen intake data in terms of personal fitness, exercise prescription and the monitoring of training responses. The main practical value of VO2 max measurement is in the functional assessment of patients with cardiorespiratory disease, since changes are then large relative to the precision of the test.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that achievable in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
Proprioceptive recalibration arises slowly compared to reach adaptation.
Zbib, Basel; Henriques, Denise Y P; Cressman, Erin K
2016-08-01
When subjects reach in a novel visuomotor environment (e.g. while viewing a cursor representing their hand that is rotated from their hand's actual position), they typically adjust their movements (i.e. bring the cursor to the target), thus reducing reaching errors. Additionally, research has shown that reaching with altered visual feedback of the hand results in sensory changes, such that proprioceptive estimates of hand position are shifted in the direction of the visual feedback experienced (Cressman and Henriques in J Neurophysiol 102:3505-3518, 2009). This study looked to establish the time course of these sensory changes. Additionally, the time courses of implicit sensory and motor changes were compared. Subjects reached to a single visual target while seeing a cursor that was either aligned with their hand position (50 trials) or rotated 30° clockwise relative to their hand (150 trials). Reach errors and proprioceptive estimates of felt hand position were assessed following the aligned reach training trials and at seven different times during the rotated reach training trials by having subjects reach to the target without visual feedback, and provide estimates of their hand relative to a visual reference marker, respectively. Results revealed a shift in proprioceptive estimates throughout the rotated reach training trials; however, significant sensory changes were not observed until after 70 trials. In contrast, results showed a greater change in reaches after a limited number of reach training trials with the rotated cursor. These findings suggest that proprioceptive recalibration arises more slowly than reach adaptation.
Reach/frequency for printed media: Personal probabilities or models
Mortensen, Peter Stendahl
2000-01-01
The author evaluates two different ways of estimating the reach and frequency of plans for printed media. The first assigns reading probabilities to groups of respondents and calculates reach and frequency by simulation. The second estimates the parameters of a model for reach/frequency. It is concluded...
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
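The parameter estimation step the abstract mentions can be illustrated with the classical maximum-likelihood estimator of the Gutenberg-Richter b-value (Aki's formula), from which the TGR β-value follows via the abstract's relation β = (2/3)b. This is a sketch on a synthetic catalog, not the Cascadia or GCMT data:

```python
import numpy as np

# Synthetic catalog: under the Gutenberg-Richter law, magnitudes above the
# completeness magnitude m_c are exponentially distributed with rate b*ln(10).
rng = np.random.default_rng(0)
b_true, m_c = 1.0, 4.0
mags = m_c + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=10_000)

# Aki's maximum-likelihood estimator: b = log10(e) / (mean(m) - m_c)
b_hat = np.log10(np.e) / (mags.mean() - m_c)
beta_hat = (2.0 / 3.0) * b_hat    # TGR beta-value, per the relation in the abstract
print(f"estimated b-value: {b_hat:.3f}, TGR beta-value: {beta_hat:.3f}")
```

A full TGR fit would additionally estimate the corner magnitude, typically constrained by the tectonic moment rate as described in the abstract; the sketch covers only the β-value step.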
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected...... in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
On the maximum mass of magnetised white dwarfs
Chatterjee, D; Chamel, N; Novak, J; Oertel, M
2016-01-01
We develop a detailed and self-consistent numerical model for extremely-magnetised white dwarfs, which have been proposed as progenitors of overluminous Type Ia supernovae. This model can describe fully-consistent equilibria of magnetic stars in axial symmetry, with rotation, general-relativistic effects and realistic equations of state (including electron-ion interactions and taking into account Landau quantisation of electrons due to the magnetic field). We study the influence of each of these ingredients onto the white dwarf structure and, in particular, on their maximum mass. We perform an extensive stability analysis of such objects, with their highest surface magnetic fields reaching $\\sim 10^{13}~G$ (at which point the star adopts a torus-like shape). We confirm previous speculations that although very massive strongly magnetised white dwarfs could potentially exist, the onset of electron captures and pycnonuclear reactions may severely limit their stability. Finally, the emission of gravitational wave...
张军谋; 石惠春
2012-01-01
Based on the actual conditions of the agricultural eco-economy in Minqin County, and using the emergy analysis method, the period from 1994 to 2003 (when damage to the agricultural ecosystem in Minqin County was typical of the region) was selected, and the energy-flow components and related indicators within the agro-ecosystem over those 10 years were investigated and analyzed. In this period, the average annual emergy input of the agricultural eco-economic system of the county was 1.51E+21 sej, of which the contribution of environmental resources was only 22.47%, and renewable environmental resources accounted for only 15.00% of the total emergy input, while industrial auxiliary emergy from socio-economic inputs reached 57.20% of the total. Comparative analysis shows that the emergy density of Minqin County has remained at a low level. Moreover, in terms of the environmental loading ratio (ELR), the agricultural ecosystem of Minqin County already exhibits the typical characteristics of petrochemical agriculture within the Hexi region. In terms of the emergy-based sustainability index (ESI), the sustainability of the county's agricultural ecosystem development over these 10 years was not only poor but also showed a declining trend.
Spiking and LFP activity in PRR during symbolically instructed reaches.
Hwang, Eun Jung; Andersen, Richard A
2012-02-01
The spiking activity in the parietal reach region (PRR) represents the spatial goal of an impending reach when the reach is directed toward or away from a visual object. The local field potentials (LFPs) in this region also represent the reach goal when the reach is directed to a visual object. Thus PRR is a candidate area for reading out a patient's intended reach goals for neural prosthetic applications. For natural behaviors, reach goals are not always based on the location of a visual object, e.g., playing the piano following sheet music or moving following verbal directions. So far it has not been directly tested whether and how PRR represents reach goals in such cognitive, nonlocational conditions, and knowing the encoding properties in various task conditions would help in designing a reach goal decoder for prosthetic applications. To address this issue, we examined the macaque PRR under two reach conditions: reach goal determined by the stimulus location (direct) or shape (symbolic). For the same goal, the spiking activity near reach onset was indistinguishable between the two tasks, and thus a reach goal decoder trained with spiking activity in one task performed perfectly in the other. In contrast, the LFP activity at 20-40 Hz showed small but significantly enhanced reach goal tuning in the symbolic task, but its spatial preference remained the same. Consequently, a decoder trained with LFP activity performed worse in the other task than in the same task. These results suggest that LFP decoders in PRR should take into account the task context (e.g., locational vs. nonlocational) to be accurate, while spike decoders can robustly provide reach goal information regardless of the task context in various prosthetic applications.
Optimal specific wavelength for maximum thrust production in undulatory propulsion.
Nangia, Nishant; Bale, Rahul; Chen, Nelson; Hanna, Yohanna; Patankar, Neelesh A
2017-01-01
What wavelengths do undulatory swimmers use during propulsion? In this work we find that a wide range of body/caudal fin (BCF) swimmers, from larval zebrafish and herring to fully-grown eels, use specific wavelength (ratio of wavelength to tail amplitude of undulation) values that fall within a relatively narrow range. The possible emergence of this constraint is interrogated using numerical simulations of fluid-structure interaction. Based on these, it was found that there is an optimal specific wavelength (OSW) that maximizes the swimming speed and thrust generated by an undulatory swimmer. The observed values of specific wavelength for BCF animals are relatively close to this OSW. The mechanisms underlying the maximum propulsive thrust for BCF swimmers are quantified and are found to be consistent with the mechanisms hypothesized in prior work. The adherence to an optimal value of specific wavelength in most natural hydrodynamic propulsors gives rise to empirical design criteria for man-made propulsors.
The value of value congruence.
Edwards, Jeffrey R; Cable, Daniel M
2009-05-01
Research on value congruence has attempted to explain why value congruence leads to positive outcomes, but few of these explanations have been tested empirically. In this article, the authors develop and test a theoretical model that integrates 4 key explanations of value congruence effects, which are framed in terms of communication, predictability, interpersonal attraction, and trust. These constructs are used to explain the process by which value congruence relates to job satisfaction, organizational identification, and intent to stay in the organization, after taking psychological need fulfillment into account. Data from a heterogeneous sample of employees from 4 organizations indicate that the relationships that link individual and organizational values to outcomes are explained primarily by the trust that employees place in the organization and its members, followed by communication, and, to a lesser extent, interpersonal attraction. Polynomial regression analyses reveal that the relationships emanating from individual and organizational values often deviated from the idealized value congruence relationship that underlies previous theory and research. The authors' results also show that individual and organizational values exhibited small but significant relationships with job satisfaction and organizational identification that bypassed the mediators in their model, indicating that additional explanations of value congruence effects should be pursued in future research. (c) 2009 APA, all rights reserved.
Benício, Kadja; Dias, Fernando A. L.; Gualdi, Lucien P.; Aliverti, Andrea; Resqueti, Vanessa R.; Fregonezi, Guilherme A. F.
2015-01-01
OBJECTIVE: To assess the influence of diaphragmatic activation control (diaphC) on Sniff Nasal-Inspiratory Pressure (SNIP) and Maximum Relaxation Rate of inspiratory muscles (MRR) in healthy subjects. METHOD: Twenty subjects [9 male; age: 23 (SD=2.9) years; BMI: 23.8 (SD=3) kg/m2; FEV1/FVC: 0.9 (SD=0.1)] performed 5 sniff maneuvers at two different moments: with or without instruction on diaphC. Before the first maneuver, a brief explanation was given to the subjects on how to perform the sniff test. For the sniff test with diaphC, subjects were instructed to perform intense diaphragm activation. The best SNIP and MRR values were used for analysis. MRR was calculated as the ratio of the first derivative of pressure over time (dP/dtmax) and was normalized by dividing it by peak pressure (SNIP) from the same maneuver. RESULTS: SNIP values were significantly different between maneuvers with and without diaphC [without diaphC: -100 (SD=27.1) cmH2O; with diaphC: -72.8 (SD=22.3) cmH2O; p<0.0001], whereas normalized MRR values were not statistically different [without diaphC: -9.7 (SD=2.6); with diaphC: -8.9 (SD=1.5); p=0.19]. Without diaphC, 40% of the sample did not reach the appropriate sniff criteria found in the literature. CONCLUSION: Diaphragmatic control performed during the SNIP test influences the obtained inspiratory pressure, which is lower when diaphC is performed. However, there was no influence on normalized MRR. PMID:26578254
Extreme Value Theory and Value at Risk
Viviana Fernandez
2003-03-01
Value at Risk (VaR) is a measure of the maximum potential change in value of a portfolio of financial assets with a given probability over a given time horizon. VaR became a key measure of market risk after the Basle Committee stated that banks should be able to cover losses on their trading portfolios over a ten-day horizon, 99 percent of the time. A common practice is to compute VaR by assuming that changes in value of the portfolio are normally distributed, conditional on past information. However, asset returns usually come from fat-tailed distributions. Therefore, computing VaR under the assumption of conditional normality can be an important source of error. We illustrate this point with Chilean and U.S. return series by resorting to extreme value theory (EVT) and GARCH-type models. In addition, we show that dynamic estimation of empirical quantiles can also give more accurate VaR estimates than quantiles of a standard normal.
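The gap between normal-based and empirical VaR on fat-tailed data can be illustrated with a minimal sketch; the simulated Student-t returns below are hypothetical stand-ins, not the Chilean or U.S. series used in the paper:

```python
import numpy as np

Z_99 = 2.3263  # 99% standard-normal quantile

def var_normal(returns, z=Z_99):
    # VaR assuming portfolio value changes are normally distributed
    return z * returns.std() - returns.mean()

def var_empirical(returns, alpha=0.99):
    # VaR as the empirical (1 - alpha) quantile of the return distribution
    return -np.quantile(returns, 1.0 - alpha)

# Simulated fat-tailed daily returns (Student-t, 5 degrees of freedom)
rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=5, size=50_000)

# For fat tails the empirical VaR typically exceeds the normal-based VaR
print(var_normal(returns), var_empirical(returns))
```

Under conditional normality the two estimates would roughly agree; the shortfall of the normal-based figure on heavy-tailed data is the error source the abstract refers to.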
The role of pressure anisotropy on the maximum mass of cold compact stars
Karmakar, S.; Mukherjee, S.; Sharma, R.; Maharaj, S.D.
2007-01-01
We study the physical features of a class of exact solutions for cold compact anisotropic stars. The effect of pressure anisotropy on the maximum mass and surface redshift is analysed in the Vaidya-Tikekar model. It is shown that maximum compactness, redshift and mass increase in the presence of anisotropic pressures; numerical values are generated which are in agreement with observation.
On the 2m-variable symmetric Boolean functions with maximum algebraic immunity
QU LongJiang; LI Chao
2008-01-01
The properties of the 2m-variable symmetric Boolean functions with maximum algebraic immunity are studied in this paper. Their value vectors, algebraic normal forms, algebraic degrees, and weights are all obtained. Finally, some necessary conditions for a symmetric Boolean function on an even number of variables to have maximum algebraic immunity are introduced.
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided, that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs are constructed, that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts were responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged interclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
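The reported gain in reliability from one trial (0.939) to five trials (0.987) is consistent with the Spearman-Brown prophecy formula for averaging parallel measurements; the connection is an inference here, not a claim made in the abstract:

```python
def spearman_brown(r_single, k):
    # Predicted reliability of the mean of k parallel trials,
    # given the single-trial reliability r_single
    return k * r_single / (1 + (k - 1) * r_single)

r1 = 0.939  # single-trial, single-day reliability from the abstract
print(round(spearman_brown(r1, 5), 3))  # 0.987, matching the five-trial figure
```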
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Rasmussen, Majken Kirkegaard; Petersen, Marianne Graves
2011-01-01
Stereotypic presumptions about gender affect the design process, both in relation to how users are understood and how products are designed. As a way to decrease the influence of stereotypic presumptions in the design process, we propose not to disregard the aspect of gender in the design process, as the perspective brings valuable insights on different approaches to technology, but instead to view gender through a value lens. Contributing to this perspective, we have developed Value Representations as a design-oriented instrument for staging a reflective dialogue with users. Value Representations...
The maximum intelligible range of the human voice
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
Payoff-monotonic game dynamics and the maximum clique problem.
Pelillo, Marcello; Torsello, Andrea
2006-05-01
Evolutionary game-theoretic models and, in particular, the so-called replicator equations have recently proven to be remarkably effective at approximately solving the maximum clique and related problems. The approach is centered around a classic result from graph theory that formulates the maximum clique problem as a standard (continuous) quadratic program and exploits the dynamical properties of these models, which, under a certain symmetry assumption, possess a Lyapunov function. In this letter, we generalize previous work along these lines in several respects. We introduce a wide family of game-dynamic equations known as payoff-monotonic dynamics, of which replicator dynamics are a special instance, and show that they enjoy precisely the same dynamical properties as standard replicator equations. These properties make any member of this family a potential heuristic for solving standard quadratic programs and, in particular, the maximum clique problem. Extensive simulations, performed on random as well as DIMACS benchmark graphs, show that this class contains dynamics that are considerably faster than and at least as accurate as replicator equations. One problem associated with these models, however, relates to their inability to escape from poor local solutions. To overcome this drawback, we focus on a particular subclass of payoff-monotonic dynamics used to model the evolution of behavior via imitation processes and study the stability of their equilibria when a regularization parameter is allowed to take on negative values. A detailed analysis of these properties suggests a whole class of annealed imitation heuristics for the maximum clique problem, which are based on the idea of varying the parameter during the imitation optimization process in a principled way, so as to avoid unwanted inefficient solutions. Experiments show that the proposed annealing procedure does help to avoid poor local optima by initially driving the dynamics toward promising regions in
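A minimal sketch of the underlying idea, assuming the standard discrete-time replicator dynamics applied to the Motzkin-Straus quadratic program; the toy graph is hypothetical, and the paper's annealed-imitation refinement is not shown:

```python
import numpy as np

def replicator_clique(A, steps=2000):
    # Discrete-time replicator dynamics on the Motzkin-Straus program:
    # maximize x^T A x over the probability simplex. Local solutions
    # correspond to maximal cliques of the graph with adjacency matrix A.
    n = A.shape[0]
    x = np.full(n, 1.0 / n)  # start at the simplex barycenter
    for _ in range(steps):
        Ax = A @ x
        x = x * Ax / (x @ Ax)  # replicator update: payoff / average payoff
    return x

# 5-vertex graph whose maximum clique is {0, 1, 2} (a triangle plus a path)
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

x = replicator_clique(A)
clique = {i for i in range(5) if x[i] > 1e-3}
print(clique)  # the support of the fixed point indicates a maximal clique
```

The mass concentrates uniformly on the clique vertices, consistent with the Motzkin-Straus characterization the abstract builds on.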
Maximum Allowable Dynamic Load of Mobile Manipulators with Stability Consideration
Heidary H. R.
2015-09-01
High payload-to-mass ratio is one of the advantages of mobile robot manipulators. In this paper, a general formula for finding the maximum allowable dynamic load (MADL) of a wheeled mobile robot is presented. Mobile manipulators operating in field environments will be required to manipulate large loads, and to perform such tasks on uneven terrain, which may cause the system to reach dangerous tip-over instability. Therefore, the method is expanded to find the MADL of mobile manipulators with stability consideration. The Moment-Height Stability (MHS) criterion is used as an index for system stability. A full dynamic model of the wheeled mobile base and mounted manipulator is considered with respect to the dynamics of the non-holonomic constraint. Then, a method for determining the maximum allowable loads is described, subject to actuator constraints and by imposing the stability limitation as a new constraint. The actuator torque constraint is applied by using the speed-torque characteristic curve of a typical DC motor. In order to verify the effectiveness of the presented algorithm, several simulation studies considering a two-link planar manipulator mounted on a mobile base are presented and the results are discussed.
Rotating proto-neutron stars: spin evolution, maximum mass and I-Love-Q relations
Martinon, Grégoire; Gualtieri, Leonardo; Ferrari, Valeria
2014-01-01
Shortly after its birth in a gravitational collapse, a proto-neutron star enters in a phase of quasi-stationary evolution characterized by large gradients of the thermodynamical variables and intense neutrino emission. In few tens of seconds the gradients smooth out while the star contracts and cools down, until it becomes a neutron star. In this paper we study this phase of the proto-neutron star life including rotation, and employing finite temperature equations of state. We model the evolution of the rotation rate, and determine the relevant quantities characterizing the star. Our results show that an isolated neutron star cannot reach, at the end of the evolution, the maximum values of mass and rotation rate allowed by the zero-temperature equation of state. Moreover, a mature neutron star evolved in isolation cannot rotate too rapidly, even if it is born from a proto-neutron star rotating at the mass-shedding limit. We also show that the I-Love-Q relations are violated in the first second of life, but th...
The Effects of Solar Maximum on the Earth's Satellite Population and Space Situational Awareness
Johnson, Nicholas L.
2012-01-01
The rapidly approaching maximum of Solar Cycle 24 will have wide-ranging effects not only on the number and distribution of resident space objects, but also on vital aspects of space situational awareness, including conjunction assessment processes. The best known consequence of high solar activity is an increase in the density of the thermosphere, which, in turn, increases drag on the vast majority of objects in low Earth orbit. The most prominent evidence of this is seen in a dramatic increase in space object reentries. Due to the massive amounts of new debris created by the fragmentations of Fengyun-1C, Cosmos 2251 and Iridium 33 during the recent period of Solar Minimum, this effect might reach epic levels. However, space surveillance systems are also affected, both directly and indirectly, historically leading to an increase in the number of lost satellites and in the routine accuracy of the calculation of their orbits. Thus, at a time when more objects are drifting through regions containing exceptionally high-value assets, such as the International Space Station and remote sensing satellites, their position uncertainties increase. In other words, as the possibility of damaging and catastrophic collisions increases, our ability to protect space systems is degraded. Potential countermeasures include adjustments to space surveillance techniques and the resetting of collision avoidance maneuver thresholds.
Efficiency at maximum power and efficiency fluctuations in a linear Brownian heat-engine model
Park, Jong-Min; Chun, Hyun-Myung; Noh, Jae Dong
2016-07-01
We investigate the stochastic thermodynamics of a two-particle Langevin system. Each particle is in contact with a heat bath at a different temperature, T1 or T2, and the system operates as an autonomous heat engine performing work against the external driving force. Linearity of the system enables us to examine the thermodynamic properties of the engine analytically. We find that the efficiency of the engine at maximum power, η_MP, is given by η_MP = 1 - √(T2/T1). This universal form has been known as a characteristic of endoreversible heat engines. Our result extends the universal behavior of η_MP to non-endoreversible engines. We also obtain the large deviation function of the probability distribution for the stochastic efficiency in the overdamped limit. The large deviation function takes its minimum value at the macroscopic efficiency η = η̄ and increases monotonically until it reaches plateaus when η ≤ η_L and η ≥ η_R, with model-dependent parameters η_R and η_L.
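A quick numerical check of the stated efficiency-at-maximum-power form against the Carnot bound; the temperatures are hypothetical, chosen only for illustration:

```python
from math import sqrt

def eta_carnot(T1, T2):
    # Carnot efficiency: the upper bound, attained only at zero power
    return 1.0 - T2 / T1

def eta_mp(T1, T2):
    # Efficiency at maximum power, 1 - sqrt(T2/T1), the form in the abstract
    # (also known as the Curzon-Ahlborn efficiency)
    return 1.0 - sqrt(T2 / T1)

T1, T2 = 400.0, 300.0
print(eta_mp(T1, T2), eta_carnot(T1, T2))  # ~0.134 vs 0.25
```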
Susanne Wegener
After recanalization, cerebral blood flow (CBF) can increase above baseline in cerebral ischemia. However, the significance of post-ischemic hyperperfusion for tissue recovery remains unclear. To analyze the course of post-ischemic hyperperfusion and its impact on vascular function, we used magnetic resonance imaging (MRI) with pulsed arterial spin labeling (pASL) and measured CBF quantitatively during and after a 60-minute transient middle cerebral artery occlusion (MCAO) in adult rats. We added a 5% CO2 challenge to analyze vasoreactivity in the same animals. Results from MRI were compared to histological correlates of angiogenesis. We found that CBF in the ischemic area recovered within one day and reached values significantly above contralateral thereafter. The extent of hyperperfusion changed over time, which was related to final infarct size: early (day 1) maximal hyperperfusion was associated with smaller lesions, whereas a later (day 4) maximum indicated large lesions. Furthermore, after initial vasoparalysis within the ischemic area, vasoreactivity on day 14 was above baseline in a fraction of animals, along with a higher density of blood vessels in the ischemic border zone. These data provide further evidence that late post-ischemic hyperperfusion is a sequel of ischemic damage in regions that are likely to undergo infarction. However, it is transient, and its resolution coincides with re-gaining of vascular structure and function.
Scheen, A J; Schmitt, H; Jiang, H H; Ivanyi, T
2017-02-01
To evaluate factors associated with reaching or not reaching target glycated haemoglobin (HbA1c) levels by analysing the respective contributions of fasting hyperglycaemia (FHG), also referred to as basal hyperglycaemia, vs postprandial hyperglycaemia (PHG) before and after initiation of a basal or premixed insulin regimen in patients with type 2 diabetes. This post-hoc analysis of insulin-naïve patients in the DURABLE study, randomised to receive either insulin glargine or insulin lispro mix 25, evaluated the percentages of patients who reached the target HbA1c across baseline HbA1c quartiles. The higher the HbA1c quartile, the greater was the decrease in HbA1c, but also the smaller the percentage of patients achieving the target HbA1c. HbA1c and FHG decreased more in patients reaching the target, resulting in significantly lower values at endpoint in all baseline HbA1c quartiles with either insulin treatment. Patients not achieving the target HbA1c had slightly higher insulin doses, but lower total hypoglycaemia rates. Smaller decreases in FHG were associated with not reaching the target HbA1c, suggesting a need to increase basal or premixed insulin doses to achieve the targeted fasting plasma glucose and improve patient response before introducing more intensive prandial insulin regimens. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Maximum Entropy for the International Division of Labor.
Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang
2015-01-01
As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their exports share on different types of products to reduce the risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product's complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country's strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
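The constrained maximum-entropy distribution described here takes the familiar Boltzmann-like form p_i ∝ exp(-β·c_i), with β chosen to satisfy the expected-complexity constraint. A sketch with hypothetical complexity values follows; the paper's actual fitting procedure and trade data are not reproduced:

```python
import numpy as np

def maxent_shares(complexity, target_mean, betas=np.linspace(-5, 5, 10001)):
    # Maximum-entropy share distribution subject to a constraint on the
    # expected product complexity: p_i ∝ exp(-beta * c_i), with beta found
    # here by a simple grid search so that sum_i p_i * c_i hits the target.
    c = np.asarray(complexity, dtype=float)
    best = None
    for beta in betas:
        w = np.exp(-beta * c)
        p = w / w.sum()
        gap = abs(p @ c - target_mean)
        if best is None or gap < best[0]:
            best = (gap, p)
    return best[1]

c = [1.0, 2.0, 3.0, 4.0]  # hypothetical product complexities
p = maxent_shares(c, target_mean=2.0)
print(p.round(3))  # shares decay with complexity when the target is low
```

A target mean below the uniform average forces β > 0, so export shares decay exponentially with product complexity, which is the qualitative shape of the empirical curves the abstract describes.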
Effective soil hydraulic conductivity predicted with the maximum power principle
Westhoff, Martijn; Erpicum, Sébastien; Archambeau, Pierre; Pirotton, Michel; Zehe, Erwin; Dewals, Benjamin
2016-04-01
Drainage of water in soils happens to a large extent through preferential flowpaths, but these subsurface flowpaths are extremely difficult to observe or parameterize in hydrological models. To potentially overcome this problem, thermodynamic optimality principles have been suggested to predict effective parametrizations of these (sub-grid) structures, such as the maximum entropy production principle or the equivalent maximum power principle. These principles have been successfully applied to predict heat transfer from the Equator to the Poles, or turbulent heat fluxes between the surface and the atmosphere. In these examples, the effective flux adapts itself to its boundary condition by adapting its effective conductance through the creation of, e.g., convection cells. Flow through porous media such as soils, however, can only quickly adapt its effective flow conductance by the creation of preferential flowpaths, and it is unknown whether this is guided by the aim to create maximum power. Here we show experimentally that this is indeed the case: in the lab, we created a hydrological analogue to the atmospheric model dealing with heat transport between the Equator and the Poles. The experimental setup consists of two freely draining reservoirs connected with each other by a confined aquifer. By adding water to only one reservoir, a potential difference builds up until a steady state is reached. From the steady-state potential difference and the observed flow through the aquifer, an effective hydraulic conductance can be determined. This observed conductance corresponds to the one that maximizes the power of the flux through the confined aquifer. Although this experiment was done in an idealized setting, it opens doors for better parameterizing hydrological models. Furthermore, it shows that hydraulic properties of soils are not static, but change with changing boundary conditions. A potential limitation of the principle is that it only applies to steady-state conditions
Maximum entropy reconstruction of spin densities involving non uniform prior
Schweizer, J.; Ressouche, E. [DRFMC/SPSMS/MDN CEA-Grenoble (France); Papoular, R.J. [CEA-Saclay, Gif sur Yvette (France). Lab. Leon Brillouin; Tasset, F. [Inst. Laue Langevin, Grenoble (France); Zheludev, A.I. [Brookhaven National Lab., Upton, NY (United States). Physics Dept.
1997-09-01
Diffraction experiments give microscopic information on structures in crystals. A method which uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. This method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one that has the highest prior (intrinsic) probability. Considering that all the points of the map are equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. This method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data, as well as for distributions of spin density. The density maps obtained by this method, as compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most cases, before the measurements are performed, some knowledge exists about the distribution which is investigated. It can range from the simple information of the type of scattering electrons to an elaborate theoretical model. In these cases the uniform prior, which considers all the different pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the Maximum Entropy formalism through a model m(r), via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model, observed in the final map, is really contained in the data as, with the new definition, it costs entropy. This paper presents illustrations of model testing.
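The non-uniform-prior entropy referred to is commonly written in the form attributed to Skilling in the general MaxEnt literature (reproduced here from that literature, not from the paper itself):

```latex
S[\rho] = \int \left[ \rho(\vec r) - m(\vec r)
          - \rho(\vec r)\,\ln\frac{\rho(\vec r)}{m(\vec r)} \right] \mathrm{d}^3 r
```

This functional is maximal (and zero) at ρ = m, so any departure of the final map from the model must be paid for in entropy and is therefore demanded by the data.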
Postural control during standing reach in children with Down syndrome.
Chen, Hao-Ling; Yeh, Chun-Fu; Howe, Tsu-Hsin
2015-03-01
The purpose of the present study was to investigate the dynamic postural control of children with Down syndrome (DS). Specifically, we compared postural control and goal-directed reaching performance between children with DS and typically developing children during standing reach. Standing reach performance was analyzed in three main phases using the kinematic and kinetic data collected from a force plate and a motion capture system. Fourteen children with DS, age- and gender-matched with fourteen typically developing children, were recruited for this study. The results showed that the demands of the standing reach task affected both dynamic postural control and reaching performance in children with DS, especially when reaching beyond arm's length. More postural adjustment strategies were recruited when the reaching distance was beyond arm's length. Children with DS tended to use inefficient and conservative strategies for postural stability and reaching; that is, they performed standing reach with increased reaction and execution times and decreased amplitudes of center-of-pressure displacements. Standing reach resembles the functional balance required in daily activities, and is therefore suggested as part of a strength and balance training program with graded task difficulty.
Seepage investigation on selected reaches of Fish Creek, Teton County, Wyoming, 2004
Wheeler, Jerrod D.; Eddy-Miller, Cheryl A.
2005-01-01
A seepage investigation was conducted on Fish Creek, a tributary to the Snake River in Teton County in western Wyoming, near Wilson. Mainstem, return flow, tributary, spring, and diversion sites were selected and measured on six reaches along Fish Creek. Flow was measured under two flow regimes, high flow in August 2004 and base flow in November 2004. During August 17-19, 2004, 20 sites had quantifiable discharge with median values ranging from 0.93 to 384 ft3/s for the 14 mainstem sites on Fish Creek, and from 0.35 to 12.2 ft3/s for the 5 return, spring, and tributary sites (inflows). The discharge was 2.23 ft3/s for the single diversion site (outflow). Estimated gains or losses from ground water were calculated for all reaches using the median discharge values and the estimated measurement errors. Reach 1 had a calculated gain in discharge from ground water (23.8 ± 3.3 ft3/s). Reaches 2-6 had no calculated gains in flow, greater than the estimated error, that could be attributed to ground water. A second set of measurements was made under base-flow conditions during November 3-4, 2004. Twelve of the 20 sites visited in August 2004 were flowing and were measured. All of the Reach 1 sites near Teton Village were dry. Median discharge values ranged from 10.3 to 70.0 ft3/s on the nine Fish Creek mainstem sites, and from 2.32 to 3.71 ft3/s on the three return, spring, and tributary sites (inflows). Reaches 2, 3 and 6 had a gain from ground water. Reaches 4 and 5 had no calculated gains in flow, greater than the estimated error, that could be attributed to ground water.
张健楠; 刘洋; 高剑波; 谢新立; 郭丹丹; 李佳音
2016-01-01
AIM: To investigate the relationship between the expression level of human epidermal growth factor receptor 2 (HER2) and the maximum standardized uptake value (SUVmax) on positron emission tomography (PET)-CT, in order to quantitatively evaluate HER2 expression in gastric cancer tissue and thereby indirectly assess the biological characteristics of gastric cancer. METHODS: PET-CT scan data of 57 patients with gastric adenocarcinoma from the First Affiliated Hospital of Zhengzhou University were retrospectively analysed, and the maximum standardized uptake value was measured. HER2 expression in gastric cancer was detected by immunohistochemistry, and the correlation between SUVmax and HER2 was analysed statistically. RESULTS: Twenty-eight patients were HER2-positive, a positive rate of 49.12%. SUVmax in the HER2-positive group was higher than in the negative group, and the difference was statistically significant (8.9357 ± 4.21375 vs 4.6448 ± 3.18597, P = 0.000). SUVmax was moderately positively correlated with HER2, with a correlation coefficient of 0.581. A receiver operating characteristic curve was plotted with SUVmax as the reference value; the area under the curve was 0.83. Based on the specificity, sensitivity and Youden index corresponding to different SUVmax values, the Youden index was largest at an SUVmax of 5.800, with a sensitivity of 82.1% and a specificity of 79.8%. CONCLUSION: SUVmax in gastric cancer tissue correlates with HER2 expression in gastric cancer lesions and can help evaluate the biological characteristics of gastric cancer.
Park, Hyunbin; Sim, Minseob; Kim, Shiho
2015-06-01
We propose a way of achieving maximum power and power-transfer efficiency from thermoelectric generators by optimized selection of maximum-power-point-tracking (MPPT) circuits composed of a boost-cascaded-with-buck converter. We investigated the effect of switch resistance on the MPPT performance of thermoelectric generators. The on-resistances of the switches affect the decrease in the conversion gain and reduce the maximum output power obtainable. Although the incremental values of the switch resistances are small, the resulting difference in the maximum duty ratio between the input and output powers is significant. For an MPPT controller composed of a boost converter with a practical nonideal switch, we need to monitor the output power instead of the input power to track the maximum power point of the thermoelectric generator. We provide a design strategy for MPPT controllers by considering the compromise in which a decrease in switch resistance causes an increase in the parasitic capacitance of the switch.
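The recommendation to track the maximum power point by monitoring *output* power can be sketched with a perturb-and-observe loop on a toy Thevenin model of the thermoelectric generator. All parameter values, the load, and the boost-converter input-resistance model below are hypothetical illustrations, not the circuit of the paper:

```python
def teg_output_power(duty, v_oc=5.0, r_int=2.0, r_sw=0.05):
    # Toy model: a boost converter presents an effective input resistance
    # r_eff = r_load * (1 - duty)**2 to the thermoelectric generator (TEG);
    # the switch resistance r_sw dissipates part of the converted power.
    r_load = 10.0
    r_eff = r_load * (1.0 - duty) ** 2
    i = v_oc / (r_int + r_sw + r_eff)
    return i ** 2 * r_eff  # power delivered past the non-ideal switch

def perturb_and_observe(steps=200, duty=0.1, delta=0.005):
    # Perturb the duty ratio and keep the direction that increases the
    # *output* power, as suggested for converters with non-ideal switches.
    p_prev = teg_output_power(duty)
    direction = 1.0
    for _ in range(steps):
        duty = min(max(duty + direction * delta, 0.0), 0.95)
        p = teg_output_power(duty)
        if p < p_prev:
            direction = -direction  # wrong way: reverse the perturbation
        p_prev = p
    return duty, p_prev

duty, power = perturb_and_observe()
print(duty, power)  # settles near the duty ratio matching r_int + r_sw
```

In this toy model the tracker converges to the duty ratio at which the effective input resistance equals r_int + r_sw, i.e. the maximum power point shifts with the switch resistance, which is the effect the abstract analyses.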
Shi Jingtao; Wu Zhen
2011-01-01
A stochastic maximum principle for the risk-sensitive optimal control problem of jump diffusion processes with an exponential-of-integral cost functional is derived, assuming that the value function is smooth, where the diffusion and jump terms may both depend on the control. The form of the maximum principle is similar to its risk-neutral counterpart, but the adjoint equations and the maximum condition depend heavily on the risk-sensitive parameter. As applications, a linear-quadratic risk-sensitive control problem is solved using the derived maximum principle, and an explicit optimal control is obtained.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
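The penalized objective the authors solve is compact enough to state in code. The sketch below (NumPy only) evaluates the l1-penalized negative log-likelihood and checks the textbook fact that, with no penalty, the gradient -Theta^{-1} + S vanishes at the inverse empirical covariance; it does not reproduce the paper's block-coordinate or Nesterov solvers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
S = np.cov(X, rowvar=False)                      # empirical covariance

def penalized_neg_loglik(theta, S, lam):
    """Negative Gaussian log-likelihood with an l1 penalty on the precision matrix:
    -log det(theta) + tr(S theta) + lam * ||theta||_1."""
    sign, logdet = np.linalg.slogdet(theta)
    return -logdet + np.trace(S @ theta) + lam * np.abs(theta).sum()

# With lam = 0 the maximum-likelihood precision is S^{-1}: the gradient
# -Theta^{-1} + S vanishes there (up to floating-point error).
theta_ml = np.linalg.inv(S)
grad = -np.linalg.inv(theta_ml) + S
print(np.abs(grad).max())  # ~0 (machine precision)
```
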
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. For the off-line variant, we analyze the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
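The single-constraint maximization Visser describes can be written out in a few lines. A sketch of the standard Lagrange-multiplier manipulation, with χ denoting the prescribed average of the logarithm:

```latex
% Maximize the Shannon entropy subject to normalization
% and a fixed mean logarithm of the observable:
\max_{p}\; S[p] = -\sum_x p(x)\ln p(x)
\quad \text{s.t.} \quad \sum_x p(x) = 1, \qquad \sum_x p(x)\ln x = \chi .
% Stationarity of the Lagrangian
%   L = S[p] + \mu\Bigl(\textstyle\sum_x p(x) - 1\Bigr)
%            + \nu\Bigl(\textstyle\sum_x p(x)\ln x - \chi\Bigr)
% with respect to p(x) gives
-\ln p(x) - 1 + \mu + \nu \ln x = 0
\;\;\Longrightarrow\;\;
p(x) = e^{\mu - 1}\, x^{\nu} \;\propto\; x^{-\alpha},
\qquad \alpha \equiv -\nu .
```

The multiplier ν is fixed by the constraint value χ, so a single constraint on the mean logarithm suffices to produce a power law.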
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1) a surface temperature and pressure compatible with the existence of liquid water, and 2) no ice layer at the bottom of a putative global ocean that would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.
Maximum modulation of plasmon-guided modes by graphene gating
Radko, Ilya; Bozhevolnyi, Sergey I.; Grigorenko, Alexander N.
2016-01-01
The potential of graphene in plasmonic electro-optical waveguide modulators has been investigated in detail by finite-element method modelling of various widely used plasmonic waveguiding configurations. We estimated the maximum possible modulation depth values one can achieve with plasmonic devices operating at telecom wavelengths and exploiting the optical Pauli blocking effect in graphene. Conclusions and guidelines for optimization of the modulation/intrinsic-loss trade-off have been provided and generalized for any graphene-based plasmonic waveguide modulators, which should help...
Implementation of the Maximum Entropy Method for Analytic Continuation
Levy, Ryan; Gull, Emanuel
2016-01-01
We present $\\texttt{Maxent}$, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary time and Matsubara frequency data and implements the 'Legendre' representation of finite temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv2 and extensively documented. This paper shows the use of the programs in detail.
Implementation of the maximum entropy method for analytic continuation
Levy, Ryan; LeBlanc, J. P. F.; Gull, Emanuel
2017-06-01
We present Maxent, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary time and Matsubara frequency data and implements the 'Legendre' representation of finite temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv3 and extensively documented. This paper shows the use of the programs in detail.
MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL
Vinod Kumar
2010-01-01
In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations, as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q, along with its properties, has also been obtained.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),...,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
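The definition is directly computable for small graphs. A minimal sketch using NumPy's symmetric eigensolver, checked on the path graph P3 (whose adjacency spectrum is -sqrt(2), 0, sqrt(2)):

```python
import math
import numpy as np

def estrada_index(adj):
    """EE(G) = sum_i exp(lambda_i), over the eigenvalues of the adjacency matrix."""
    eigvals = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return float(np.exp(eigvals).sum())

# Path graph P3 (vertices 1-2-3): eigenvalues -sqrt(2), 0, sqrt(2).
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
print(estrada_index(A))  # = e^{sqrt(2)} + 1 + e^{-sqrt(2)} ≈ 5.356
```
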
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
"Reaching Every Student" with a Pyramid of Intervention Approach: One District's Journey
Howery, Kathy; McClellan, Tony; Pedersen-Bayus, Karen
2013-01-01
This paper presents a description of ongoing work of an Alberta school district that is working to support and enhance effective inclusive practices that reach and teach every student. The district is implementing a Pyramid of Supports model that is built upon four critical elements: a belief in social justice and the value of every child, a…
Hybrid RSOA and fibre Raman amplified long reach feeder link for WiMAX-on-fibre
Amaya Fernández, Ferney Orlando; Martinez, Javier; Yu, Xianbin;
2009-01-01
A distributed fibre Raman amplified long reach optical access feeder link using a reflective semiconductor optical amplifier in the remote base station is experimentally demonstrated for supporting WiMAX-over-fibre transmission. The measured values for the error vector magnitude for quadrature ph...
Spatial curvature endgame: Reaching the limit of curvature determination
Leonard, C. Danielle; Bull, Philip; Allison, Rupert
2016-07-01
Current constraints on spatial curvature show that it is dynamically negligible: |ΩK|≲5 ×10-3 (95% C.L.). Neglecting it as a cosmological parameter would be premature however, as more stringent constraints on ΩK at around the 10-4 level would offer valuable tests of eternal inflation models and probe novel large-scale structure phenomena. This precision also represents the "curvature floor," beyond which constraints cannot be meaningfully improved due to the cosmic variance of horizon-scale perturbations. In this paper, we discuss what future experiments will need to do in order to measure spatial curvature to this maximum accuracy. Our conservative forecasts show that the curvature floor is unreachable—by an order of magnitude—even with Stage IV experiments, unless strong assumptions are made about dark energy evolution and the Λ CDM parameter values. We also discuss some of the novel problems that arise when attempting to constrain a global cosmological parameter like ΩK with such high precision. Measuring curvature down to this level would be an important validation of systematics characterization in high-precision cosmological analyses.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
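The prediction equation can be applied directly once Y_X/P and the MIC are measured. A sketch with hypothetical inputs (the 0.59 coefficient is from the abstract; the inoculum, yield, and MIC values below are made up for illustration):

```python
def predicted_max_biomass(x0, y_xp, mic, coeff=0.59):
    """Xmax = X0 + coeff * Y_X/P * C, with coeff = 0.59 ± 0.02 per the study.
    x0: initial biomass, y_xp: biomass yield per unit lactate, mic: MIC of lactate."""
    return x0 + coeff * y_xp * mic

# Hypothetical inputs: 0.1 g/L inoculum, 0.05 g biomass per g lactate, MIC 200 g/L.
print(predicted_max_biomass(0.1, 0.05, 200.0))  # ≈ 6.0 g/L
```
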
Decoding Grasping Movements from the Parieto-Frontal Reaching Circuit in the Nonhuman Primate.
Nelissen, Koen; Fiave, Prosper Agbesi; Vanduffel, Wim
2017-02-18
Prehension movements typically include a reaching phase, guiding the hand toward the object, and a grip phase, shaping the hand around it. The dominant view posits that these components rely upon largely independent parieto-frontal circuits: a dorso-medial circuit involved in reaching and a dorso-lateral circuit involved in grasping. However, mounting evidence suggests a more complex arrangement, with dorso-medial areas contributing to both reaching and grasping. To investigate the role of the dorso-medial reaching circuit in grasping, we trained monkeys to reach-and-grasp different objects in the dark and determined if hand configurations could be decoded from functional magnetic resonance imaging (fMRI) responses obtained from the reaching and grasping circuits. Indicative of their established role in grasping, object-specific grasp decoding was found in anterior intraparietal (AIP) area, inferior parietal lobule area PFG and ventral premotor region F5 of the lateral grasping circuit, and primary motor cortex. Importantly, the medial reaching circuit also conveyed robust grasp-specific information, as evidenced by significant decoding in parietal reach regions (particularly V6A) and dorsal premotor region F2. These data support the proposed role of dorso-medial "reach" regions in controlling aspects of grasping and demonstrate the value of complementing univariate with more sensitive multivariate analyses of fMRI data in uncovering information coding in the brain.
Sløk-Madsen, Stefan Kirkegaard; Christensen, Jesper
The world over, classrooms in business schools are being taught that corporate values can impact performance. The argument is typically that culture matters more than strategy plans and that culture can be influenced, and indeed changed, by a shared corporate value set. While the claim seems intuitively and anecdotally true, surprisingly little hard evidence has been produced either for or against it. This study attempts to rectify this. The study claims that for corporate values to matter they must at least align with, and potentially alter, employee decision-making, and hence their concept of optimality and rationality. It thereby makes a unique contribution to the effects of investment in shared company values, and to whether agent rationality can be fundamentally changed by committed organizational efforts.
Reach/frequency for printed media: Personal probabilities or models
Mortensen, Peter Stendahl
2000-01-01
The author evaluates two different ways of estimating reach and frequency of plans for printed media. The first assigns reading probabilities to groups of respondents and calculates reach and frequency by simulation; the second estimates parameters to a model for reach/frequency. It is concluded that, in order to prevent bias, ratings per group must be used as reading probabilities. Nevertheless, in most cases, the estimates are still biased compared with panel data, thus overestimating net reach. Models with the same assumptions as with assignments of reading probabilities are presented...
Reach Scale Sediment Balance of Goodwin Creek Watershed, Mississippi
Ran, L.; Garcia, T.; Ye, S.; Harman, C. J.; Hassan, M. A.; Simon, A.
2010-12-01
Several reaches of Goodwin Creek, an experimental watershed within the Mississippi river basin, were analyzed for the period 1977-2007 in terms of long-term trends in sediment gain and loss in each reach, the relation of input and output to within-reach sediment fluxes, and the impacts of land use and bank erosion on reach sediment dynamics. Over the period 1977-2007, degradational and aggradational reaches were identified, indicating slight vertical adjustment along the mainstream. Lateral adjustment was the main response of the channel to changes in flow and sediment regimes. Event-based sediment load was estimated using suspended concentration data, bedload transport rate, and changes in cross-sectional data. Bank erosion was estimated using cross-sectional data and models. The spatial and temporal patterns of within-reach sediment dynamics correspond closely with river morphology and also reflect basin conditions over the last three decades; thus they are conditioned by coeval trends in climate, hydrology, and land use. The sediment exchange within the mainstream was calculated by the development of reach sediment balances that reveal complex spatial and temporal patterns of sediment dynamics. Sediment load during the rising limb of the hydrograph was slightly higher than that estimated for the falling limb, indicating the relative importance of sediment supply on reach sediment dynamics in the basin. Cumulative plots of sediment exchange reveal that major changes in within-reach sediment storage are associated with large floods or major inputs from bank erosion.
Design of a wind turbine rotor for maximum aerodynamic efficiency
Johansen, Jeppe; Aagaard Madsen, Helge; Gaunaa, Mac;
2009-01-01
The design of a three-bladed wind turbine rotor is described, where the main focus has been the highest possible mechanical power coefficient, CP, at a single operational condition. Structural, as well as off-design, issues are not considered, leading to a purely theoretical design for investigating ... and a full three-dimensional Navier-Stokes solver. Excellent agreement is obtained using the three models. Global CP reaches a value of slightly above 0.51, while the global thrust coefficient CT is 0.87. The local power coefficient Cp increases to slightly above the Betz limit on the inner part of the rotor; the local thrust coefficient Ct increases to a value above 1.1. This agrees well with the theory of de Vries, which states that, including the effect of the low pressure behind the centre of the rotor stemming from the increased rotation, both Cp and Ct will increase towards the root. Towards the tip, both...
Gjerris, Mickey; Gaiani, S.
2015-01-01
...and is one of the important contributors to climate change, simply seems wrong. Here we discuss three questions in relation to this almost self-evident fact: (1) different definitions of food waste and the difficulties in reaching a global definition, however desirable it might be; (2) different ways of preventing food waste, from the individual to the international level, and the importance of examining the values behind different strategies; and (3) ethical challenges in relation to food waste and the opportunity to utilize the indignation that many feel when confronted with food waste to re...
Ke, Jau-Chuan; Lin, Chuen-Horng
2008-11-01
We consider the M[x]/G/1 queueing system in which the server operates under an N policy and a single vacation. As soon as the system becomes empty, the server leaves for a vacation of random length V. When he returns from the vacation and the system size is greater than or equal to a threshold value N, he starts to serve the waiting customers. If he finds fewer customers than N, he waits in the system until the system size reaches or exceeds N. The server is subject to breakdowns according to a Poisson process, and his repair time obeys an arbitrary distribution. We use the maximum entropy principle to derive approximate formulas for the steady-state probability distributions of the queue length. We perform a comparative analysis between the approximate results and established exact results for various batch size, vacation time, service time, and repair time distributions. We demonstrate that the maximum entropy approach is efficient enough for practical purposes and is a feasible method for approximating the solution of complex queueing systems.
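As a flavor of the maximum entropy principle used here: among all distributions on the non-negative integers with a fixed mean, entropy is maximized by the geometric distribution. A sketch of this generic one-constraint case (not the authors' M[x]/G/1 formulas):

```python
def maxent_queue_dist(mean_n, n_max=200):
    """Max-entropy pmf on {0, 1, 2, ...} with fixed mean: geometric,
    p_n = (1 - r) * r**n with r = mean / (1 + mean). Truncated at n_max."""
    r = mean_n / (1.0 + mean_n)
    return [(1 - r) * r ** n for n in range(n_max)]

p = maxent_queue_dist(3.0)
total = sum(p)
mean = sum(n * pn for n, pn in enumerate(p))
print(total, mean)  # normalization ≈ 1, mean ≈ 3 (truncation error is negligible)
```
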
Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion
Poljak, Nikola
2016-11-01
The problem of determining the angle θ at which a point mass launched from ground level with a given speed v0 will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of Dmax = v0²/g, with g being the free-fall acceleration. Conceptually and calculationally more difficult problems have been suggested to improve student proficiency in projectile motion, with the most famous example being the Tarzan swing problem. The problem of determining the maximum distance of a point mass thrown from constant-speed circular motion is presented and analyzed in detail in this text. The calculational results confirm several conceptually derived conclusions regarding the initial throw position and provide some details on the angles and the way of throwing (underhand or overhand) that produce the maximum throw distance.
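The well-known baseline result is easy to verify numerically before tackling the circular-motion variant: a grid search over launch angles recovers θ = π/4 and Dmax = v0²/g:

```python
import math

def range_of(theta, v0=10.0, g=9.81):
    """Level-ground projectile range: R(theta) = v0^2 * sin(2*theta) / g."""
    return v0 ** 2 * math.sin(2 * theta) / g

# Grid search over launch angles in [0, pi/2].
thetas = [i * math.pi / 2 / 10000 for i in range(10001)]
best = max(thetas, key=range_of)
print(best, range_of(best))  # best ≈ pi/4, range ≈ v0^2 / g
```
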
The problem of the maximum volumes and particle horizon in the Friedmann universe model
Gong, S. M.
1989-08-01
The maximum volume of the closed Friedmann universe is further investigated and is shown to be 2π²R³(t), instead of π²R³(t) as found previously. This discrepancy comes from the incomplete use of the volume formula of 3-dimensional spherical space in the astronomical literature. Mathematically, the maximum volume exists at any cosmic time t in a 3-dimensional spherical case. However, the Friedmann closed universe in expansion reaches its maximum volume only at the time of the maximum scale factor. The particle horizon places no limitation on the farthest objects in the closed Friedmann universe if the proper distance of objects is compared with the particle horizon, as it should be. Comparing the luminosity distance of objects with the proper distance of the particle horizon, however, leads to absurdity.
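The corrected value follows directly from the volume element of a 3-sphere of radius R(t); integrating the hyperspherical angle χ over its full range [0, π] gives

```latex
V = \int_0^{\pi}\!\!\int_0^{\pi}\!\!\int_0^{2\pi} R^3(t)\,\sin^2\!\chi\,\sin\theta \,d\phi\,d\theta\,d\chi
  = 4\pi R^3(t)\int_0^{\pi}\sin^2\!\chi\,d\chi
  = 2\pi^2 R^3(t),
```

while stopping the χ integration at π/2 (one hemisphere of the 3-sphere) yields the smaller value π²R³(t) quoted in the older literature.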
A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution
Piotrowski, Edward W.; Sładkowski, Jan
2009-03-01
The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction, because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of a demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market except his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with the classical works of Fibonacci, and looking for the quickest algorithm for obtaining the extremum of a…
Should these potential CMR substances have been registered under REACH?
Wedebye, Eva Bay; Nikolov, Nikolai Georgiev; Dybdahl, Marianne;
2013-01-01
(Q)SAR models were applied to screen around 68,000 REACH pre-registered substances for CMR properties (carcinogenic, mutagenic or toxic to reproduction). Predictions from 14 relevant models were combined to reach overall calls for C, M and R. Combining predictions may reduce “noise” and increase...
Guaranteed performance in reaching mode of sliding mode controlled systems
G K Singh; K E Holé
2004-02-01
Conventionally, the parameters of a sliding mode controller (SMC) are selected so as to reduce the time spent in the reaching mode. Although an upper bound on the time to reach the sliding surface (the reaching time) is easily derived, a performance guarantee in the state/error space needs more consideration. This paper addresses the design of a constant plus proportional rate reaching law-based SMC for second-order nonlinear systems. It is shown that this controller imposes a bounding second-order error-dynamics, and thus guarantees robust performance during the reaching phase. The choice of the controller parameters based on the time to reach a desirable level of output tracking error (OTE), rather than on the reaching time, is proposed. Using Lyapunov theory, it is shown that parameter selections based on the reaching time criterion may need substantially larger time to achieve the OTE. Simulation results are presented for a nonlinear spring-mass-damper system. It is seen that parameter selections based on the proposed OTE criterion result in substantially quicker tracking, while using similar levels of control effort.
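For the constant plus proportional rate reaching law, ds/dt = -Q·sgn(s) - k·s with Q, k > 0, the reaching time from s(0) = s0 integrates to t_r = (1/k)·ln(1 + k|s0|/Q). A small numerical check (the parameter values are arbitrary illustrations, not taken from the paper):

```python
import math

def reaching_time(s0, Q, k):
    """Analytic time to reach s = 0 under s' = -Q*sgn(s) - k*s."""
    return math.log(1.0 + k * abs(s0) / Q) / k

def simulate(s0, Q, k, dt=1e-5):
    """Euler-integrate the reaching law until the surface s = 0 is crossed."""
    s, t = s0, 0.0
    sign = 1.0 if s0 > 0 else -1.0
    while s * sign > 0:
        s += dt * (-Q * (1.0 if s > 0 else -1.0) - k * s)
        t += dt
    return t

Q, k, s0 = 2.0, 5.0, 1.0
print(reaching_time(s0, Q, k))  # about 0.2506 s
print(simulate(s0, Q, k))       # numerically close to the analytic value
```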
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of Support Vector Machines (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, and then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
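One way to realize this idea with a modern solver: approximate each concave term -p·ln(p) by the upper envelope of its tangent lines, so that maximizing total entropy under linear constraints becomes a linear program. The sketch below uses SciPy's `linprog` (not the paper's revised simplex implementation) and Jaynes' dice problem rather than signal restoration, so it is illustrative only:

```python
import numpy as np
from scipy.optimize import linprog

def maxent_lp(x, mean, n_tangents=40):
    """Maximum-entropy pmf on support points x with a fixed mean, via an LP.

    Each concave term f(p) = -p*log(p) is replaced by an auxiliary variable
    t_i bounded above by tangent lines of f, so maximizing sum(t_i) subject
    to linear constraints is a linear program.
    """
    n = len(x)
    qs = np.linspace(0.01, 0.99, n_tangents)          # tangent points
    c = np.concatenate([np.zeros(n), -np.ones(n)])    # minimize -sum(t)
    rows, rhs = [], []
    for i in range(n):
        for q in qs:
            # Tangent of f at q: t_i <= (-ln q - 1) p_i + q.
            row = np.zeros(2 * n)
            row[n + i] = 1.0
            row[i] = np.log(q) + 1.0
            rows.append(row)
            rhs.append(q)
    A_eq = np.zeros((2, 2 * n))
    A_eq[0, :n] = 1.0     # probabilities sum to one
    A_eq[1, :n] = x       # fixed mean constraint
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  A_eq=A_eq, b_eq=[1.0, mean],
                  bounds=[(0, 1)] * n + [(None, None)] * n,
                  method="highs")
    return res.x[:n]

# Brandeis dice: faces 1..6 constrained to mean 4.5 (Jaynes' example).
p = maxent_lp(np.arange(1, 7), 4.5)
print(np.round(p, 3))
```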
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Full Text Available Ship squat is a combined effect of ship’s draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among those most commonly used are of Barrass, Millward, Eryuzlu or ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence…
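A toy version of such an estimator can be written as a grid search: for each candidate fundamental, fit per-channel harmonic amplitudes and phases by least squares (which is the ML fit under a white-Gaussian-noise assumption) and keep the candidate that captures the most energy. This is a simplified sketch, not the paper's estimator:

```python
import numpy as np

def multichannel_pitch(channels, fs, f0_grid, n_harm=5):
    """Grid-search pitch estimate for channels sharing one fundamental.

    For each candidate f0, every channel gets its own least-squares
    harmonic amplitudes/phases; the score sums the energy captured by
    the harmonic model across channels.
    """
    N = len(channels[0])
    t = np.arange(N) / fs
    best_f0, best_score = None, -np.inf
    for f0 in f0_grid:
        Z = np.column_stack(
            [np.cos(2 * np.pi * f0 * h * t) for h in range(1, n_harm + 1)]
            + [np.sin(2 * np.pi * f0 * h * t) for h in range(1, n_harm + 1)]
        )
        pinvZ = np.linalg.pinv(Z)
        score = 0.0
        for x in channels:
            fit = Z @ (pinvZ @ x)   # projection onto the harmonic subspace
            score += float(fit @ fit)
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

# Two synthetic channels: same 220 Hz pitch, different amplitudes, phases
# and noise levels. The grid starts above half the true pitch to sidestep
# the usual subharmonic ambiguity of harmonic-model estimators.
rng = np.random.default_rng(0)
fs, N, f0 = 8000, 1024, 220.0
t = np.arange(N) / fs
ch1 = (np.sin(2 * np.pi * f0 * t)
       + 0.5 * np.sin(2 * np.pi * 2 * f0 * t + 1.0)
       + 0.1 * rng.standard_normal(N))
ch2 = 0.3 * np.sin(2 * np.pi * f0 * t + 0.7) + 0.2 * rng.standard_normal(N)
grid = np.arange(150.0, 400.0, 1.0)
print(multichannel_pitch([ch1, ch2], fs, grid))  # should recover 220.0
```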
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
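The fixed-point idea can be illustrated on a toy Poisson model m_i = b + A·g_i, with a flat background b, a known line profile g and an unknown flux amplitude A; all the numbers below are invented for illustration and are not CORA's implementation:

```python
import math, random

def poisson_sample(lam, rng):
    """Knuth's Poisson sampler (adequate for modest rates)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def fit_amplitude(counts, profile, background, iters=500):
    """Multiplicative fixed-point iteration for the ML line amplitude A
    in the model m_i = background + A * g_i with Poisson counts n_i.

    Setting d(loglike)/dA = 0 gives sum_i g_i n_i / m_i = sum_i g_i,
    which yields the Richardson-Lucy-style update below.
    """
    A = 1.0
    g_sum = sum(profile)
    for _ in range(iters):
        ratio = sum(g * n / (background + A * g)
                    for n, g in zip(counts, profile))
        A *= ratio / g_sum
    return A

# Simulate a Gaussian emission line (true amplitude 50 counts) on a flat
# background of 2 counts per bin, then recover the amplitude.
rng = random.Random(1)
profile = [math.exp(-0.5 * ((i - 50) / 4.0) ** 2) for i in range(100)]
counts = [poisson_sample(2.0 + 50.0 * g, rng) for g in profile]
print(fit_amplitude(counts, profile, background=2.0))  # close to the true 50
```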
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
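The tail statement, N(size > S) ∝ 1/S, is a Pareto law with exponent 1; a quick synthetic check with the Hill estimator (the sample size and seed are arbitrary):

```python
import math, random

def hill_exponent(sizes, xmin):
    """Hill estimator of the Pareto tail exponent from sizes >= xmin."""
    tail = [s for s in sizes if s >= xmin]
    return len(tail) / sum(math.log(s / xmin) for s in tail)

# Pareto with exponent 1, i.e. P(S > s) = xmin/s, sampled by inverting
# the CDF: S = xmin / (1 - U) with U uniform on [0, 1).
rng = random.Random(42)
xmin = 1.0
sizes = [xmin / (1.0 - rng.random()) for _ in range(200_000)]
print(hill_exponent(sizes, xmin))  # close to 1, as Zipf's law requires
```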
A maximum entropy model for opinions in social groups
Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo
2014-04-01
We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consequence between the personal beliefs and external opinions, and the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first and second-order phase transitions, depending on the ratio between the internal consequence and the interaction with others. We have found a critical value for the level of internal consequence, below which the personal beliefs of the agents seem to be irrelevant.
Enzyme kinetics and the maximum entropy production principle.
Dobovišek, Andrej; Zupanović, Paško; Brumen, Milan; Bonačić-Lošić, Zeljana; Kuić, Domagoj; Juretić, Davor
2011-03-01
A general proof is derived that entropy production can be maximized with respect to rate constants in any enzymatic transition. This result is used to test the assumption that biological evolution of an enzyme is accompanied by an increase of entropy production in its internal transitions, and that such an increase can serve to quantify the progress of enzyme evolution. The state of maximum entropy production would correspond to a fully evolved enzyme. As an example, the internal transition ES↔EP in a generalized reversible Michaelis-Menten three-state scheme is analyzed. A good agreement is found among experimentally determined values of the forward rate constant in internal transitions ES→EP for three types of β-lactamase enzymes and their optimal values predicted by the maximum entropy production principle, which agrees with earlier observations that β-lactamase enzymes are nearly fully evolved. The optimization of rate constants as the consequence of a basic physical principle, which is the subject of this paper, is a completely different concept from (a) net metabolic flux maximization or (b) entropy production minimization (in the static head state), both also proposed to be tightly connected to biological evolution.
Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings
Yan, Xiao-Yong
2014-01-01
The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k_max). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, k_max) and consequently also the sha...
MaxOcc: a web portal for maximum occurrence analysis
Bertini, Ivano, E-mail: ivanobertini@cerm.unifi.it; Ferella, Lucio; Luchinat, Claudio, E-mail: luchinat@cerm.unifi.it; Parigi, Giacomo [Magnetic Resonance Center (CERM), University of Florence (Italy); Petoukhov, Maxim V. [EMBL, Hamburg Outstation (Germany); Ravera, Enrico; Rosato, Antonio [Magnetic Resonance Center (CERM), University of Florence (Italy); Svergun, Dmitri I. [EMBL, Hamburg Outstation (Germany)
2012-08-15
The MaxOcc web portal is presented for the characterization of the conformational heterogeneity of two-domain proteins, through the calculation of the Maximum Occurrence that each protein conformation can have in agreement with experimental data. Whatever the real ensemble of conformations sampled by a protein, the weight of any conformation cannot exceed the calculated corresponding Maximum Occurrence value. The present portal allows users to compute these values using any combination of restraints like pseudocontact shifts, paramagnetism-based residual dipolar couplings, paramagnetic relaxation enhancements and small angle X-ray scattering profiles, given the 3D structure of the two domains as input. MaxOcc is embedded within the NMR grid services of the WeNMR project and is available via the WeNMR gateway at http://py-enmr.cerm.unifi.it/access/index/maxocc. It can be used freely upon registration to the grid with a digital certificate.
Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.
Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N
2014-01-01
Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 & 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
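The abstract does not name the growth model, but shark growth studies conventionally use the von Bertalanffy curve; the sketch below solves for a growth coefficient consistent with the reported lengths, with the size at birth (80 cm) an assumed typical value, not a figure from the paper:

```python
import math

# Von Bertalanffy growth: L(t) = Linf - (Linf - L0) * exp(-k * t).
# Linf (403 cm) and the average age-5 length (340 cm) come from the
# abstract; the length at birth L0 = 80 cm is an assumed illustrative
# value, not reported in the paper.
Linf, L0, L_age5 = 403.0, 80.0, 340.0

# Solve L(5) = L_age5 for the growth coefficient k.
k = -math.log((Linf - L_age5) / (Linf - L0)) / 5.0

def length_at_age(t):
    return Linf - (Linf - L0) * math.exp(-k * t)

print(round(k, 3))              # growth coefficient per year, ≈ 0.327
print(round(length_at_age(5)))  # 340 by construction
```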
Identification of temporal consistency in rating curve data: Bidirectional Reach (BReach)
Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.
2016-08-01
In this paper, a methodology is developed to identify consistency of rating curve data based on a quality analysis of model results. This methodology, called Bidirectional Reach (BReach), evaluates results of a rating curve model with randomly sampled parameter sets in each observation. The combination of a parameter set and an observation is classified as nonacceptable if the deviation between the accompanying model result and the measurement exceeds observational uncertainty. Based on this classification, conditions for satisfactory behavior of a model in a sequence of observations are defined. Subsequently, a parameter set is evaluated in a data point by assessing the span for which it behaves satisfactorily in the direction of the previous (or following) chronologically sorted observations. This is repeated for all sampled parameter sets and results are aggregated by indicating the endpoint of the largest span, called the maximum left (right) reach. This temporal reach should not be confused with a spatial reach (indicating a part of a river). The same procedure is followed for each data point and for different definitions of satisfactory behavior. Results of this analysis enable the detection of changes in data consistency. The methodology is validated with observed data and various synthetic stage-discharge data sets and proves to be a robust technique to investigate temporal consistency of rating curve data. It provides satisfying results despite low data availability, errors in the estimated observational uncertainty, and a rating curve model that is known to cover only a limited part of the observations.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped-beam approach. During runs in 2008-10, PEN acquired over 2×10^7 π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo-verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Energy dependence of CP-violation reach for monochromatic neutrino beam
Bernabeu, Jose [IFIC, Universitat de Valencia-CSIC, E-46100, Burjassot, Valencia (Spain); Espinoza, Catalina [IFIC, Universitat de Valencia-CSIC, E-46100, Burjassot, Valencia (Spain)], E-mail: m.catalina.espinoza@uv.es
2008-06-26
The ultimate goal of future neutrino facilities is the determination of CP violation in neutrino oscillations. Besides |U(e3)| ≠ 0, this will require precision experiments with a very intense neutrino source and energy control. With this objective in mind, the creation of monochromatic neutrino beams from the electron capture decay of boosted ions by the SPS of CERN has been proposed. We discuss the capabilities of such a facility as a function of the energy of the boost and the baseline for the detector. We compare the physics potential for two different configurations: (I) γ = 90 and γ = 195 (maximum achievable at the present SPS) to Frejus; (II) γ = 195 and γ = 440 (maximum achievable at the upgraded SPS) to Canfranc. We conclude that the SPS upgrade to 1000 GeV is important to reach a better sensitivity to CP violation iff it is accompanied by a longer baseline.
J. K. Gupta
VHF Faraday rotation (FR) and amplitude scintillation data recorded simultaneously during May 1978–December 1980 at Delhi (28.63° N, 77.22° E; Dip 42.44° N) are analyzed in order to study the Faraday polarization fluctuations (FPFs) and their dependence on the occurrence of the post-sunset secondary maximum (PSSM) and amplitude scintillations. It is noted that FPFs are observed only when both PSSM and scintillations also occur simultaneously. FPFs are observed only during winter and the equinoctial months of high sunspot years. FPF events are associated with intense scintillation activity, which is characterized by sudden onsets and abrupt endings, and are observed one to three hours after local sunset. When FPF and scintillation data from Delhi are compared with the corresponding data from a still lower latitude station, Hyderabad (17.35° N, 78.45° E), it is found that the occurrence of FPFs and scintillations at Delhi is conditional on their prior occurrence at Hyderabad, which indicates their production by a plasma bubble and the associated irregularities generated initially over the magnetic equator. In addition, FPF and scintillation data for October 1979, when their occurrence was maximum, are also examined in relation to daytime (11:00 LT) electrojet strength (EEj) values and evening-hour h'F from an equatorial location, Kodaikanal (10.3° N, 77.5° E). It is interesting to note that FPFs and scintillations are most likely observed when the EEj was 100 nT or more and h'F reaches around 500 km. These results show that EEj and evening-hour h'F values over the magnetic equator are important parameters for predicting FPF and scintillation activity at locations such as Delhi, where scintillation activity is much more intense as compared to the equatorial region due to the enhanced background ionization caused by the occurrence of PSSM.
Key words. Ionosphere (equatorial ionosphere; ionospheric irregularities) – Radio science
Extreme value analysis of annual maximum water levels in the Pearl River Delta, China
Qiang ZHANG; Chong-Yu XU; Yongqin David CHEN; Chun-ling LIU
2009-01-01
We analyzed the statistical properties of water level extremes in the Pearl River Delta using five probability distribution functions. Estimation of parameters was performed using the L-moment technique. Goodness-of-fit was assessed using the Kolmogorov-Smirnov statistic D (K-S D). The research results indicate that the Wakeby distribution is the best statistical model for describing the statistical behavior of water level extremes in the study region. Statistical analysis indicates that water levels corresponding to different return periods and the associated variability tend to be larger on the landward side of the Pearl River Delta and vice versa. A ridge characterized by higher water levels can be identified expanding along the West River and the Modaomen channel, showing the impacts of the hydrologic process of the West River basin. Troughs and higher grades of water level change can be detected in the region drained by the Xi'nanyong channel, the Dongping channel, and the mainstream of the Pearl River. The Pearl River Delta region is characterized by low-lying topography and a highly advanced socio-economy, and is heavily populated, being prone to flood hazards and flood inundation due to rising sea level and typhoons. Therefore, sound and effective countermeasures should be taken to mitigate natural hazards such as floods and typhoons.
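Sample L-moments are computed from probability-weighted moments of the ordered data; a small self-contained sketch (validated here against exponential data, not the paper's water-level series):

```python
import random

def sample_l_moments(data):
    """First two sample L-moments (l1, l2) and L-skewness t3, computed
    from unbiased probability-weighted moments of the sorted sample."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * xi for i, xi in enumerate(x)) / (n * (n - 1) * (n - 2))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

# Sanity check on exponential data, where the population values are
# lambda1 = 1, lambda2 = 1/2 and tau3 = 1/3.
rng = random.Random(7)
data = [rng.expovariate(1.0) for _ in range(100_000)]
print(sample_l_moments(data))
```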
Optimizing nitrogen rates in the midwestern United States for maximum ecosystem value
Patrick M. Ewing
2015-03-01
The importance of corn production to the midwestern United States cannot be overestimated. However, high production requires high nitrogen fertilization, which carries costs to environmental services such as water quality. Therefore, a trade-off exists between the production of corn yield and water quality. We used the Groundwater Vulnerability Assessment for Shallow depths and Crop Environment Resource Synthesis-Maize models to investigate the nature of this trade-off while testing the Simple Analytic Framework trade-offs featured in this Special Feature. First, we estimated the current levels of yield and water quality production in northeastern Iowa and southern Minnesota at the 1-square-kilometer, county, and regional scales. We then constructed an efficiency frontier from optimized nitrogen application patterns to maximize the production of both yield and water quality. Results highlight the context dependency of this trade-off, but show room for increasing the production of both services to the benefit of all stakeholders. We discuss these results in the context of spatial scale, biophysical limitations to the production of services, and stakeholder outcomes given disparate power balances and biophysical contexts.
C. Zhou (Chen)
2008-01-01
In the 18th century, statisticians sometimes worked as consultants to gamblers. In order to answer questions like "If a fair coin is flipped 100 times, what is the probability of getting 60 or more heads?", Abraham de Moivre discovered the so-called "normal curve". Independently, Pierre-
Defining a Threshold Value for Maximum Spatial Information Loss of Masked Geo-Data
Ourania Kounadi
2015-04-01
Geographical masks are a group of location protection methods for the dissemination and publication of confidential and sensitive information, such as health- and crime-related geo-referenced data. The use of such masks ensures that privacy is protected for the individuals involved in the datasets. Nevertheless, the protection process introduces spatial error to the masked dataset. This study quantifies the spatial error of masked datasets using two approaches. First, a perceptual survey was employed where participants ranked the similarity of a diverse sample of masked and original maps. Second, a spatial statistical analysis was performed that provided quantitative results for the same pairs of maps. Spatial statistical similarity is calculated with three divergence indices that employ different spatial clustering methods. All indices are significantly correlated with the perceptual similarity. Finally, the results of the spatial analysis are used as the explanatory variable to estimate the perceptual similarity. Three prediction models are created that indicate upper boundaries for the spatial statistical results upon which the masked data are perceived differently from the original data. The results of the study aim to help potential “maskers” to quantify and evaluate the error of confidential masked visualizations.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Chiara eBegliomini
2014-09-01
Experimental evidence suggests the existence of a sophisticated brain circuit specifically dedicated to reach-to-grasp planning and execution, in both human and non-human primates (Castiello, 2005). Studies using neuroimaging techniques suggest a dichotomy between a reach-to-grasp circuit, involving the anterior intraparietal area (AIP) and the dorsal and ventral premotor cortices (PMd and PMv; Castiello and Begliomini, 2008; Filimon, 2010), and a reaching circuit involving the medial intraparietal area (mIP) and the superior parieto-occipital cortex (SPOC; Culham et al., 2006). However, the time course characterizing the involvement of these regions during the planning and execution of these two types of movements has yet to be delineated. A functional magnetic resonance imaging (fMRI) study was conducted, including reach-to-grasp and reaching-only movements performed towards either a small or a large stimulus, and a Finite Impulse Response model (FIR; Henson, 2003) was adopted to monitor activation patterns from stimulus onset over a time window of 10 seconds. Data analysis focused on brain regions belonging either to the reaching or to the grasping network, as suggested by Castiello and Begliomini (2008). Results suggest that the planning and execution of reaching and grasping movements might share a common brain network, providing further confirmation of the idea that the neural underpinnings of reaching and grasping may overlap in both spatial and temporal terms (Verhagen et al., 2013).
The impact of REACH on classification for human health hazards.
Oltmanns, J; Bunke, D; Jenseit, W; Heidorn, C
2014-11-01
The REACH Regulation represents a major piece of chemical legislation in the EU and requires manufacturers and importers of chemicals to assess the safety of their substances. The classification of substances for their hazards is one of the crucial elements in this process. We analysed the effect of REACH on classification for human health endpoints by comparing information from REACH registration dossiers with legally binding, harmonised classifications. The analysis included 142 chemicals produced at very high tonnages in the EU, the majority of which have already been assessed in the past. Of 20 substances lacking a harmonised classification, 12 chemicals were classified in REACH registration dossiers. More importantly, 37 substances with harmonised classifications for human health endpoints had stricter classifications in registration dossiers and 29 of these were classified for at least one additional endpoint not covered by the harmonised classification. Substance-specific analyses suggest that one third of these additional endpoints emerged from experimental studies performed to fulfil information requirements under REACH, while two thirds resulted from a new assessment of pre-REACH studies. We conclude that REACH leads to an improved hazard characterisation even for substances with a potentially good pre-existing data base.
Proprioceptive Body Illusions Modulate the Visual Perception of Reaching Distance
Petroni, Agustin; Carbajal, M. Julia; Sigman, Mariano
2015-01-01
The neurobiology of reaching has been extensively studied in human and non-human primates. However, the mechanisms that allow a subject to decide—without engaging in explicit action—whether an object is reachable are not fully understood. Some studies conclude that decisions near the reach limit depend on motor simulations of the reaching movement. Others have shown that the body schema plays a role in explicit and implicit distance estimation, especially after motor practice with a tool. In this study we evaluate the causal role of multisensory body representations in the perception of reachable space. We reasoned that if body schema is used to estimate reach, an illusion of the finger size induced by proprioceptive stimulation should propagate to the perception of reaching distances. To test this hypothesis we induced a proprioceptive illusion of extension or shrinkage of the right index finger while participants judged a series of LEDs as reachable or non-reachable without actual movement. Our results show that reach distance estimation depends on the illusory perceived size of the finger: illusory elongation produced a shift of reaching distance away from the body whereas illusory shrinkage produced the opposite effect. Combining these results with previous findings, we suggest that deciding if a target is reachable requires an integration of body inputs in high order multisensory parietal areas that engage in movement simulations through connections with frontal premotor areas. PMID:26110274
Bogdan Cosmin Gomoi; Lavinia Denisia Cuc; Robert Almaşi
2014-01-01
When considering the issue of defining the concept of "fair value", those less experienced in the field often fall into the "price trap", treating price as an equivalent of the fair value of financial structures. This valuation basis appears as a consequence of the attempt of financial statements to provide an "accurate image" and, also, as an opportunity arising from the premises offered by the going-concern (activity continuing) principle. The specialized literature generates ample controversies reg...
50 years sets with positive reach - a survey -
Christoph Thäle
2008-09-01
The purpose of this paper is to summarize results on various aspects of sets with positive reach, which until now have not been available in such compact form. After briefly recalling results from before 1959, sets with positive reach and their associated curvature measures are introduced. We develop an integral and current representation of these curvature measures and show how the current representation helps to prove integral-geometric formulas, such as the principal kinematic formula. Random sets with positive reach and random mosaics (or, more generally, random cell complexes with general cell shape) are also considered.
REACH Basics for Chinese Producers of Electric Household Appliances
Dr. Klaus W. Mehl
2008-01-01
The following article explains the EU chemical regulation "REACH", explicates the requirements that Chinese producers are facing, and shows how they can fulfill the requirements and secure their access to the EU market. The consequences of failing to fulfill REACH requirements are given in REACH Article 5 (No data, no market): "... substances ... in articles ... shall not be ... placed on the market unless they have been registered". In other words: without registration of chemicals, Chinese producers of electric household appliances may lose their EU market.
Maximum entropy, word-frequency, Chinese characters, and multiple meanings.
Yan, Xiaoyong; Minnhagen, Petter
2015-01-01
The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation) prediction. The RGF distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (k_max). It is shown here that this maximum entropy prediction also describes a text written in Chinese characters. In particular, it is shown that although the same Chinese text written in words and in Chinese characters yields quite differently shaped distributions, both are nevertheless well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text into another language. Another consequence of the RGF prediction is that taking part of a long text changes the input parameters (M, N, k_max) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF prediction contains no system-specific information beyond the three a priori values (M, N, k_max), any specific language characteristic has to be sought in systematic deviations between the RGF prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information-theoretic argument and an extended RGF model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts and the present results is discussed.
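The three a priori values that determine the RGF prediction are simple corpus statistics. A minimal sketch of extracting them from a toy text (the sample sentence is an illustrative assumption):

```python
from collections import Counter

text = ("the quick brown fox jumps over the lazy dog "
        "the dog barks and the fox runs").split()

counts = Counter(text)

M = sum(counts.values())      # total number of words in the text
N = len(counts)               # number of distinct words
k_max = max(counts.values())  # repetitions of the most common word
```

For real corpora the same three numbers, fed into the RGF formula, fix the entire predicted frequency distribution; for Chinese, tokenizing by character instead of by word changes (M, N, k_max) and hence the predicted shape.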
On Global Magnetic ``Monopoly'' Near Solar Cycle Maximums
Kryvodubskyj, V.
During the last maxima of solar activity, both poles of the polar magnetic field had the same polarity. Since in the turbulent αΩ-dynamo model the excitation thresholds of the periodic dipole and quadrupole modes of the poloidal magnetic field (PMF) are rather close [Parker E. N.: 1971, Ap.J. V. 164, p. 491], it is possible that the quadrupole mode may be excited due to variations of physical parameters in some regions of the solar convection zone (SCZ). The pattern of the excited modes (dipole, quadrupole, octupole, etc.) is determined by the values of the wave number of Parker's dynamo wave. We calculated these values for the SCZ model of Stix (1989) [Stix M.: 1989, The Sun. Berlin, p. 200] in the vicinity of the solar tachocline (a region of strong shear of angular velocity at the base of the SCZ), using our estimate of the helical turbulence parameter [Krivodubskij V. N.: 1998, Astron. Reports V. 42, No 1, p. 122] and values of the radial gradient of the angular velocity obtained from newer helioseismic measurements (during the rising phase of the 23rd solar cycle: 1995-1999) [Howe R., Christensen-Dalsgaard J., Hill F. et al.: 2000, Science. V. 287, p. 2456]. It is found that at low latitudes the dynamo mechanism produces rather the dipole (wave number ≈ -7), the main mode of the PMF antisymmetric with respect to the equatorial plane, while at latitudes higher than 50° the conditions are more favourable for exciting the quadrupole (wave number ≈ +8), the lowest symmetric mode. The resulting north-south asymmetry of the magnetic structure offers an explanation for the space magnetic anomaly of the PMF ("monopoly") observed near solar cycle maximums.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
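MLDS itself is an R package; the maximum likelihood idea it implements can be sketched in Python as follows. Everything here is an illustrative assumption (the quadratic "true" scale, the fixed noise level, the trial counts); real MLDS also estimates the noise term rather than fixing it.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

levels = np.linspace(0.0, 1.0, 8)   # physical stimulus levels
true_scale = levels ** 2            # hypothetical "true" perceptual scale
sigma = 0.2                         # decision noise (assumed known here)

# Simulate difference-scaling trials: on each trial the observer compares
# the perceptual differences of the pairs (a,b) and (c,d), with a<b<c<d.
quads, resp = [], []
for _ in range(3000):
    a, b, c, d = np.sort(rng.choice(len(levels), size=4, replace=False))
    delta = (true_scale[d] - true_scale[c]) - (true_scale[b] - true_scale[a])
    resp.append(int(delta + rng.normal(0.0, sigma) > 0))  # 1 = "(c,d) greater"
    quads.append((a, b, c, d))
quads, resp = np.array(quads), np.array(resp)

def neg_log_lik(free):
    # Anchor the scale at 0 and 1; estimate only the interior values.
    psi = np.concatenate(([0.0], free, [1.0]))
    delta = (psi[quads[:, 3]] - psi[quads[:, 2]]) \
          - (psi[quads[:, 1]] - psi[quads[:, 0]])
    p = np.clip(norm.cdf(delta / sigma), 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

start = np.linspace(0.0, 1.0, 8)[1:-1]           # linear initial scale
fit = minimize(neg_log_lik, start, method="Nelder-Mead")
psi_hat = np.concatenate(([0.0], fit.x, [1.0]))  # estimated perceptual scale
```

With enough trials, psi_hat approaches the bowed quadratic scale rather than the linear initialization, which is exactly the information difference scaling is designed to recover.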
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different modes of transfer affect the model with respect to the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
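The elegant linear-time solution alluded to in the abstract is the classic fold known as Kadane's algorithm. A minimal Python rendering (the paper works with a datatype-generic, monadic formulation; this is only the list instance), under the usual convention that the empty segment, with sum 0, is allowed:

```python
def max_segment_sum(xs):
    """Largest sum over all contiguous segments of xs (empty segment -> 0)."""
    best = cur = 0
    for x in xs:
        cur = max(0, cur + x)   # best sum of a segment ending at this element
        best = max(best, cur)   # best sum of any segment seen so far
    return best

# Bentley's classic example list:
print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # → 187
```

The fold maintains one number per step, so the scan is linear, in contrast to the cubic algorithm obtained by enumerating all segments and summing each.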
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)}. The optimal control switches between μ_0 and μ_1 according to whether X_t lies below or above g∗(S_t), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations) in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index; deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum-space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur at high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
Westhoff, M.; Erpicum, S.; Archambeau, P.; Pirotton, M.; Zehe, E.; Dewals, B.
2015-12-01
Power can be extracted from a system driven by a potential difference. For a given potential difference, the power that can be extracted is constrained by the Carnot limit, which follows from the first and second laws of thermodynamics. If the system is such that the flux producing power (with power being the flux times its driving potential difference) also influences the potential difference, a maximum in power can be obtained as a result of the trade-off between the flux and the potential difference. This is referred to as the maximum power principle. It has already been shown that the atmosphere operates close to this maximum power limit when it comes to heat transport from the Equator to the poles, or vertically, from the surface to the atmospheric boundary layer. To reach this state of maximum power, the effective thermal conductivity of the atmosphere is adapted by the creation of convection cells. The aim of this study is to test whether the soil's effective hydraulic conductivity also adapts in such a way that it produces maximum power. However, the soil's hydraulic conductivity adapts differently, for example by the creation of preferential flow paths. Here, this process is simulated in a lab experiment, which focuses on preferential flow paths created by piping. In the lab, we created a hydrological analogue to the atmospheric model dealing with heat transport between Equator and poles, with the aim of testing whether the effective hydraulic conductivity of the sand bed can be predicted with the maximum power principle. The experimental setup consists of two freely draining reservoirs connected with each other by a confined aquifer. By adding water to only one reservoir, a potential difference builds up until a steady state is reached. The results will indicate whether the maximum power principle applies to groundwater flow and how it should be applied. Because of the different way of adaptation of flow conductivity, the results differ from that of the
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
McMillan, John; Walker, Simon; Hope, Tony
2014-01-01
This article argues that hope is of value in clinical ethics and that it can be important for clinicians to be sensitive to both the risks of false hope and the importance of retaining hope. However, this sensitivity requires an understanding of the complexity of hope and how it bears on different aspects of a well-functioning doctor-patient relationship. We discuss hopefulness and distinguish it from three different kinds of hope, or 'hopes for', and then relate these distinctions back to differing accounts of autonomy. This analysis matters because it shows how an overly narrow view of the ethical obligations of a clinician to their patient, and of autonomy, might lead to scenarios where patients regret the choices they make.
Implementing Target Value Design.
Alves, Thais da C L; Lichtig, Will; Rybkowski, Zofia K
2017-04-01
An alternative to the traditional way of designing projects is the process of target value design (TVD), which takes different departure points to start the design process. The TVD process starts with the client defining an allowable cost that needs to be met by the design and construction teams. An expected cost in the TVD process is defined through multiple interactions between stakeholders, some of whom define wishes and others ways of achieving these wishes. Finally, a target cost is defined based on the profit the design and construction teams expect to make. TVD follows a series of continuous improvement efforts aimed at reaching the desired goals for the project and its associated target cost. The process takes advantage of rapid cycles of suggestions, analyses, and implementation that starts with the definition of value for the client. In the traditional design process, the goal is to identify user preferences and find solutions that meet the needs of the client's expressed preferences. In the lean design process, the goal is to educate users about their values and advocate for a better facility over the long run; this way owners can help contractors and designers to identify better solutions. This article aims to inform the healthcare community about tools and techniques commonly used during the TVD process and how they can be used to educate and support project participants in developing better solutions to meet their needs now as well as in the future.
The Value Landscape in Ecosystem Services: Value, Value Wherefore Art Thou Value?
Adam P. Hejnowicz
2017-05-01
Ecosystem services has risen to become one of the preeminent global policy discourses framing the way we conceive and articulate environment–society relations, integral to the form and function of a number of far-reaching international policies such as the Aichi 2020 Biodiversity Targets and the recently adopted Sustainable Development Goals. Value, in its pursuit, definition, quantification, monetization, multiplicity and uncertainty (both in terms of meaning and attribution), is fundamental to the economic foundations of ecosystem services and a core feature driving its inclusion across multiple policy domains such as environmental management and conservation. Distilling current knowledge and developments in this arena is thus highly pertinent. In this article, we cast a critical eye over the evidence base and aim to provide a comprehensive synthesis of what values are, why they are important and the methodological approaches employed to elicit them (including their pros and cons and the arguments for and against). We also illustrate the current ecosystem service value landscape, highlight some of the fundamental challenges in discerning and applying values, and outline future research activities. In so doing, we further advance ecosystem valuation discourse, contribute to wider debates linking ecosystem services and sustainability, and strengthen connections between ecosystem services and environmental policy.
Application of chemical toxicity distributions to ecotoxicology data requirements under REACH.
Williams, E Spencer; Berninger, Jason P; Brooks, Bryan W
2011-08-01
The European Union's REACH regulation has further highlighted the lack of ecotoxicological data for substances in the marketplace. The mandates under REACH (registration, evaluation, authorization, and restriction of chemicals) to produce data and minimize testing on vertebrates present an impetus for advanced hazard assessment techniques using read-across. Research in our group has recently focused on probabilistic ecotoxicological hazard assessment approaches using chemical toxicity distributions (CTDs). Using available data for chemicals with similar modes of action or within a chemical class may allow for selection of a screening point value (SPV) for development of environmental safety values, based on a probabilistic distribution of toxicity values for a specific endpoint in an ecological receptor. Ecotoxicity data for acetylcholinesterase inhibitors and surfactants in Daphnia magna and Pimephales promelas were gathered from several data sources, including the U.S. Environmental Protection Agency's ECOTOX and Pesticides Ecotoxicity databases, the peer-reviewed literature, and the Human and Environmental Risk Assessment (HERA) project. Chemical toxicity distributions were subsequently developed, and the first and fifth centiles were used as SPVs for the development of screening-predicted no-effect concentrations (sPNECs). The first and fifth centiles of these distributions were divided by an assessment factor of 1,000, as recommended by REACH guidance. Use of screening values created using these techniques could support the processes of data dossier development and environmental exposure assessment, allowing for rigorous prioritization in testing and monitoring to fill data gaps.
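The SPV-to-sPNEC derivation described in this abstract can be sketched numerically. The EC50 values below are invented for illustration (not taken from ECOTOX, HERA, or any cited database), and a lognormal form for the chemical toxicity distribution is an assumption of this sketch:

```python
import numpy as np
from scipy import stats

# Hypothetical EC50 values (mg/L) for chemicals sharing a mode of action;
# illustrative numbers only, not from the cited databases.
ec50 = np.array([0.12, 0.35, 0.48, 0.9, 1.6, 2.3, 4.1, 7.8, 12.0, 20.5])

# Fit a lognormal chemical toxicity distribution (CTD) to the class.
shape, loc, scale = stats.lognorm.fit(ec50, floc=0)

# Screening point values (SPVs): the 1st and 5th centiles of the CTD.
spv = {p: stats.lognorm.ppf(p, shape, loc, scale) for p in (0.01, 0.05)}

# Screening PNECs: divide the SPVs by the REACH assessment factor of 1000.
spnec = {p: v / 1000.0 for p, v in spv.items()}
```

The resulting screening PNECs sit well below every measured value in the class, which is the intended conservatism when the CTD is used to read across to untested substances.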
BINDER DRAINAGE TEST FOR POROUS MIXTURES MADE BY VARYING THE MAXIMUM AGGREGATE SIZES
Hardiman Hardiman
2004-01-01
Binder drainage occurs in mixes with a small aggregate surface area, particularly porous asphalt. The binder drainage test, developed by the Transport Research Laboratory, UK, is commonly used to set an upper limit on the acceptable binder content for a porous mix. This paper presents the results of a laboratory investigation to determine the effects of different binder types on the binder drainage characteristics of porous mixes made with various maximum aggregate sizes (20, 14 and 10 mm). Two types of binder were used: conventional 60/70 pen bitumen and styrene butadiene styrene (SBS) modified bitumen. The amount of binder lost through drainage after three hours at the maximum mixing temperature was measured in duplicate for mixes of different maximum sizes and binder contents. The maximum mixing temperature adopted depends on the type of binder used. The retained binder is plotted against the initial mixed binder content, together with the line of equality where the retained binder equals the mixed binder content. The results indicate the significant contribution of SBS modified bitumen to increasing the target binder content. Their significance is discussed in terms of the target binder content, the critical binder content, the maximum mixed binder content and the maximum retained binder content values obtained from the binder drainage test. It was concluded that increasing the maximum aggregate size decreases the maximum retained binder content, critical binder content, target binder content and maximum mixed binder content for both binders; for all mixtures, however, the SBS modified binder gave the highest values.
Li, Zhanling; Li, Zhanjie; Li, Chengcheng
2014-05-01
Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Most studies of this kind in China concern high-flow extremes in the humid and semi-humid basins of the south and east. For the inland river basins, which occupy about 35% of the country's area, such studies remain limited, partly because of restricted data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high-flow extremes in the upper reach of the Heihe River basin, the second largest inland river basin in China, using the peaks-over-threshold (POT) method and the Generalized Pareto Distribution (GPD), in which the selection of the threshold and the inherent assumptions for POT series are elaborated in detail. For comparison, other widely used probability distributions, including the generalized extreme value (GEV), Lognormal, Log-logistic and Gamma, are employed as well. Maximum likelihood estimation is used for parameter estimation. Daily flow data at the Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the approaches of the mean excess plot, stability features of model parameters, the return level plot and the inherent independence assumption of POT series, an optimum threshold of 340 m³/s is finally determined for high-flow extremes in the Yingluoxia watershed. The resulting POT series is shown to be stationary and independent based on the Mann-Kendall test, the Pettitt test and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test and several graphical diagnostics such as quantile and cumulative density function plots, the GPD provides the best fit to high-flow extremes in the study area. The estimated high flows for long return periods demonstrate that, as the return period increases, the return level estimates become more uncertain. The frequency of high-flow extremes exhibits a very slight but not significant decreasing trend from 1978 to
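The POT-plus-GPD pipeline the abstract describes can be sketched in a few lines. The study fits the GPD by maximum likelihood and applies independence tests to real Yingluoxia data; the sketch below uses a synthetic flow series and the simpler method-of-moments GPD estimators so it stays dependency-free.

```python
import random
import statistics

random.seed(42)

# Synthetic daily flows (m^3/s) -- a stand-in for the Yingluoxia record,
# not the actual data used in the study.
flows = [random.expovariate(1 / 120) for _ in range(10_000)]

THRESHOLD = 340.0  # the threshold the study settles on for high flows

# Peaks-over-threshold: keep the exceedances y = q - u for q > u.
exceedances = [q - THRESHOLD for q in flows if q > THRESHOLD]

# Method-of-moments estimators for the Generalized Pareto Distribution
# (the study itself uses maximum likelihood).  With sample mean m and
# variance v:
#   shape xi    = (1 - m^2 / v) / 2
#   scale sigma = m * (1 + m^2 / v) / 2
m = statistics.mean(exceedances)
v = statistics.variance(exceedances)
xi = 0.5 * (1 - m * m / v)
sigma = 0.5 * m * (1 + m * m / v)

print(f"{len(exceedances)} exceedances, shape={xi:.3f}, scale={sigma:.1f}")
```

Because exponential exceedances are themselves exponential, the fitted shape parameter should come out near zero here; a real application would also decluster the series and check the stationarity and independence assumptions, as the study does.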
Reaching and Teaching: A Study in Audience Targeting.
Ritter, Ellen M.; Welch, Diane T.
1988-01-01
Describes a project conducted by the Texas Agricultural Extension Service to market the Family Day Home Care Providers Program to an unknown clientele. Discusses the problems involved in identifying and reaching the target audience. (JOW)
Stream Habitat Reach Summary - North Coast [ds63
California Department of Resources — The shapefile is based on habitat unit level data summarized at the stream reach level. The database represents salmonid stream habitat surveys from 645 streams of...
Helping the Library Reach Out to the Future
Past Issues / Fall 2007 Table of Contents. Encouraging future medical researchers: (l-r) NLM Director Dr. Donald ...
Hanford Reach - Snively Basin Rye Field Rehabilitation 2014
US Fish and Wildlife Service, Department of the Interior — The Snively Basin area of the Arid Lands Ecology Reserve within the Hanford Reach National Monument was historically used to farm cereal rye (Secale cereale), among...
PNW River Reach Files -- 1:100k Waterbodies (polygons)
Pacific States Marine Fisheries Commission — This feature class includes the POLYGON waterbody features from the 2001 version of the PNW River Reach files Arc/INFO coverage. Separate, companion feature classes...
Reach tracking reveals dissociable processes underlying cognitive control.
Erb, Christopher D; Moher, Jeff; Sobel, David M; Song, Joo-Hyun
2016-07-01
The current study uses reach tracking to investigate how cognitive control is implemented during online performance of the Stroop task (Experiment 1) and the Eriksen flanker task (Experiment 2). We demonstrate that two of the measures afforded by reach tracking, initiation time and reach curvature, capture distinct patterns of effects that have been linked to dissociable processes underlying cognitive control in electrophysiology and functional neuroimaging research. Our results suggest that initiation time reflects a response threshold adjustment process involving the inhibition of motor output, while reach curvature reflects the degree of co-activation between response alternatives registered by a monitoring process over the course of a trial. In addition to shedding new light on fundamental questions concerning how these processes contribute to the cognitive control of behavior, these results present a framework for future research to investigate how these processes function across different tasks, develop across the lifespan, and differ among individuals.
Birth Defects from Zika More Far-Reaching Than Thought
Studies found greater ... WEDNESDAY, Dec. 14, 2016 (HealthDay News) -- Zika's ability to damage the infant brain may be ...
Monitoring Weather Station Fire Rehabilitation Treatments: Hanford Reach National Monument
US Fish and Wildlife Service, Department of the Interior — The Weather Station Fire (July, 2005) burned across 4,918 acres in the Saddle Mountain Unit of the Hanford Reach National Monument, which included parts of the...
PNW River Reach Files -- 1:100k Watercourses (arcs)
Pacific States Marine Fisheries Commission — This feature class includes the ARC features from the 2001 version of the PNW River Reach files Arc/INFO coverage. Separate, companion feature classes are also...
Optical technologies in extended-reach access networks
Wong, Elaine; Amaya Fernández, Ferney Orlando; Tafur Monroy, Idelfonso
2009-01-01
The merging of access and metro networks has been proposed as a solution to lower the unit cost of customer bandwidth. This paper reviews some of the recent advances and challenges in extended-reach optical access networks....
Hanford Reach - Strategic Control of Phragmites Within Saddle Mountain Lakes
US Fish and Wildlife Service, Department of the Interior — The Saddle Lakes Fire of 2015 burned 14,200 acres of habitat on Saddle Mountain National Wildlife Refuge, part of the Hanford Reach National Monument. Within the...
Hanford Reach - Snively Basin Rye Field Rehabilitation 2012
US Fish and Wildlife Service, Department of the Interior — The Snively Basin area of the Arid Lands Ecology Reserve (ALE) within the Hanford Reach National Monument was historically used to farm cereal rye, among other...
Bogdan Cosmin Gomoi
2014-12-01
When taking into consideration the issue of defining the "fair value" concept, those less experienced in the area often fall into the "price trap," treating price as an equivalent of the fair value of financial structures. This valuation basis appears as a consequence of the attempt to provide an "accurate image" through the financial statements and also of the premises offered by the going-concern (activity continuing) principle. The specialized literature generates ample controversy regarding the "fair value" concept and the "market value" concept. The paper aims to debate this issue, taking into account various opinions.
ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS
YUE LI; CHEN XIRU
2005-01-01
For the Generalized Linear Model (GLM), under some conditions, including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) in case the specification of the covariance matrix is correct.
RiverCare communication strategy for reaching beyond
Cortes Arevalo, Juliette; den Haan, Robert Jan; Berends, Koen; Leung, Nick; Augustijn, Denie; Hulscher, Suzanne J. M. H.
2017-04-01
Effectively communicating river research to water professionals and researchers working in multiple disciplines or organizations is challenging. RiverCare studies the mid-term effects of innovative river interventions in the Netherlands to improve river governance and sustainable management. A total of 21 researchers working at 5 universities are part of the consortium, which also includes research institutes, consultancies, and water management authorities. RiverCare results not only benefit Dutch river management but can also provide useful insights into challenges abroad. Dutch partner organizations actively involved in RiverCare are our direct users. However, we want to reach water professionals from the Netherlands and beyond. To communicate with and disseminate to these users, we set up a communication strategy that includes the following approaches: (1) the Netherlands Centre of River Studies (NCR) website, to announce activities and post news, not limited to RiverCare; (2) a RiverCare newsletter, published twice per year, reporting on our progress and activities; (3) a multimedia promotional package providing a 'first glance' of RiverCare, consisting of four video episodes and an interactive menu; (4) an interactive knowledge platform to provide access to and explain RiverCare results and to gather feedback about the added value and potential use of these results; and (5) a serious-gaming environment titled Virtual River, where actors can play out flood scaling intervention and monitoring strategies to assess maintenance scenarios. The communication strategy and related approaches are being designed and developed during the project. We use participatory methods and systematic evaluation to understand communication needs and to identify needs for improvement. As a first step, RiverCare information is provided via the NCR website. The active collaboration with the NCR is important to extend communication efforts beyond the RiverCare consortium and after the program ends
Feedback Limits to Maximum Seed Masses of Black Holes
Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea
2017-02-01
The most massive black holes observed in the universe weigh up to ∼10^10 M_⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M_• ≳ 10^4 M_⊙) hosted in small isolated halos (M_h ≲ 10^9 M_⊙) accreting with relatively small radiative efficiencies (ɛ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M_•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4–10^6 M_⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.
The maximum contribution to reionization from metal-free stars
Rozas, J M; Salvador-Solé, E; Rozas, Jose M.; Miralda-Escude, Jordi; Salvador-Sole, Eduard
2005-01-01
We estimate the maximum contribution to reionization from the first generation of massive stars, with zero metallicity, under the assumption that one of these stars forms with a fixed mass in every collapsed halo in which metal-free gas is able to cool. We assume that any halo in which stars have already formed in one of its progenitors will form only stars with metals, which are assigned an emissivity of ionizing radiation equal to that determined at z=4 from the measured intensity of the ionizing background. We examine the impact of molecular hydrogen photodissociation (which tends to reduce cooling when a photodissociating background is produced by the first stars) and X-ray photoheating (which heats the atomic medium, raising the entropy of the gas before it collapses into halos). We find that in the $\Lambda$CDM model supported by present observations, and even assuming no negative feedbacks for the formation of metal-free stars, a reionized mass fraction of 50% is not reached until reds...
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
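The quantity the paper's ILP minimizes over all trees is the parsimony score. As a dependency-free illustration of that score (not of the authors' ILP), Fitch's classic small-parsimony algorithm computes the minimum number of state changes on a fixed tree; the toy tree and characters below are invented.

```python
# Fitch's small-parsimony algorithm: minimum number of state changes
# needed to explain the leaf states on a FIXED binary tree -- the
# quantity a maximum-parsimony search minimizes over all trees.

def fitch_score(tree, states):
    """tree: nested 2-tuples with leaf-name strings; states: leaf -> set."""
    changes = 0

    def post(node):
        nonlocal changes
        if isinstance(node, str):          # leaf: its observed state set
            return set(states[node])
        left, right = (post(c) for c in node)
        inter = left & right
        if inter:                          # children agree: no change
            return inter
        changes += 1                       # children disagree: one mutation
        return left | right

    post(tree)
    return changes

# Toy tree ((A,B),(C,D)) with one binary site per taxon.
tree = (("A", "B"), ("C", "D"))
site = {"A": {0}, "B": {0}, "C": {1}, "D": {1}}
print(fitch_score(tree, site))  # a single change separates {A,B} from {C,D}
```

Summing this score over all sites and minimizing over tree topologies is exactly the (NP-hard) large-parsimony problem the ILP formulations attack.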
Consistency assessment of rating curve data in various locations using Bidirectional Reach (BReach)
Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Coxon, Gemma; Freer, Jim; Verhoest, Niko E. C.
2017-04-01
When estimating discharges through rating curves, temporal data consistency is a critical issue. In this research, consistency in stage-discharge data is investigated using a methodology called Bidirectional Reach (BReach). This methodology considers a period to be consistent if no consecutive and systematic deviations from a current situation occur that exceed observational uncertainty. Therefore, the capability of a rating curve model to describe a subset of the (chronologically sorted) data is assessed in each observation by indicating the outermost data points for which the model behaves satisfactorily. These points are called the maximum left or right reach, depending on the direction of the investigation. This temporal reach should not be confused with a spatial reach (indicating a part of a river). Changes in these reaches throughout the data series indicate possible changes in data consistency and, if not resolved, could introduce additional errors and biases. In this research, various measurement stations in the UK, New Zealand and Belgium are selected based on their significant historical ratings information and their specific characteristics related to data consistency. For each station, a BReach analysis is performed and subsequently, results are validated against available knowledge about the history and behavior of the site. For all investigated cases, the methodology provides results that appear consistent with this knowledge of historical changes and thus facilitates a reliable assessment of (in)consistent periods in stage-discharge measurements. This assessment is not only useful for the analysis and determination of discharge time series, but also to enhance applications based on these data (e.g., by informing hydrological and hydraulic model evaluation design about consistent time periods to analyze).
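The left/right-reach idea can be sketched as follows. This is a drastically simplified reading, not the authors' implementation: here "acceptable" means the rating-curve residual stays inside a fixed uncertainty band, and the reach is an unbroken run, whereas the real methodology tolerates a fraction of deviating points. The residuals are invented toy numbers.

```python
# Simplified BReach sketch: from each observation, how far left/right does
# the rating-curve model keep describing the data acceptably?  An abrupt
# jump in the reaches flags a possible break in data consistency.

def reaches(acceptable):
    """For each index, the outermost index reachable left and right
    through an unbroken run of acceptable observations."""
    n = len(acceptable)
    left = list(range(n))
    right = list(range(n))
    for i in range(1, n):
        if acceptable[i] and acceptable[i - 1]:
            left[i] = left[i - 1]
    for i in range(n - 2, -1, -1):
        if acceptable[i] and acceptable[i + 1]:
            right[i] = right[i + 1]
    return left, right

residuals = [0.1, -0.2, 0.0, 1.9, 0.1, -0.1, 0.2, 0.0]   # toy values
TOLERANCE = 0.5                                           # uncertainty band
ok = [abs(r) <= TOLERANCE for r in residuals]
left, right = reaches(ok)
# The discontinuity around index 3 marks where consistency breaks down.
print(left)
print(right)
```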
Whole-Body Reaching Movements Formulated by Minimum Muscle-Tension Change Criterion.
Kudo, Naoki; Choi, Kyuheong; Kagawa, Takahiro; Uno, Yoji
2016-05-01
It is well known that planar reaching movements of the human shoulder and elbow joints have invariant features: roughly straight hand paths and bell-shaped velocity profiles. The optimal control models with the criteria of smoothness or precision, which determine a unique movement pattern, predict such features of hand trajectories. In this letter, expanding on research on simple arm reaching movements, we examine whether the smoothness criteria can be applied to whole-body reaching movements with many degrees of freedom. Determining a suitable joint trajectory in the whole-body reaching movement corresponds to an optimization problem with constraints, since body balance must be maintained during a motion task. First, we measured human joint trajectories and ground reaction forces during whole-body reaching movements, and confirmed that subjects formed similar movements with common characteristics in the trajectories of the hand position and body center of mass. Second, we calculated the optimal trajectories according to the criteria of torque and muscle-tension smoothness. While the minimum torque change trajectories were not consistent with the experimental data, the minimum muscle-tension change model was able to predict the stereotyped features of the measured trajectories. To explore the dominant effects of the extension from the torque change to the muscle-tension change, we introduced a weighted torque change cost function. Considering the maximum voluntary contraction (MVC) force of the muscle as the weighting factor of each joint torque, we formulated the weighted torque change cost as a simplified version of the minimum muscle-tension change cost. The trajectories owing to the minimum weighted torque change criterion also showed qualitative agreement with the common features of the measured data. Proper estimation of the MVC forces in the body joints is essential to reproduce human whole-body movements according to the minimum muscle-tension change criterion.
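The "bell-shaped velocity profile" invariant that the smoothness criteria predict can be illustrated with the classic minimum-jerk trajectory, the simplest member of the smoothness-criteria family this letter extends (the letter itself uses torque- and muscle-tension-change costs, which require a body model; minimum jerk needs only the hand's start and end points).

```python
# Minimum-jerk point-to-point trajectory:
#   x(t) = x0 + (x1 - x0) * (10 s^3 - 15 s^4 + 6 s^5),  s = t / T.
# Its velocity is zero at both ends and bell-shaped, peaking at
# mid-movement with the well-known value (15/8) * amplitude / duration.

def min_jerk(x0, x1, T, n=101):
    dt = T / (n - 1)
    xs, vs = [], []
    for i in range(n):
        s = i * dt / T
        xs.append(x0 + (x1 - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5))
        vs.append((x1 - x0) / T * (30 * s**2 - 60 * s**3 + 30 * s**4))
    return xs, vs

xs, vs = min_jerk(0.0, 0.3, 0.8)        # a 30 cm reach lasting 0.8 s
peak = max(vs)
print(round(peak, 6), vs.index(peak))   # peak velocity occurs at mid-movement
```

The whole-body problem differs in that the same smoothness idea is applied in joint/muscle space under a balance constraint, but the resulting hand trajectories retain this bell-shaped character.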
The Cognition of Maximal Reach Distance in Parkinson’s Disease
Satoru Otsuki
2016-01-01
This study aimed to investigate whether the cognition of spatial distance in reaching movements is decreased in patients with Parkinson's disease (PD) and whether this cognition is associated with various symptoms of PD. Estimated and actual maximal reaching distances were measured in three directions in PD patients and healthy elderly volunteers. Differences between estimated and actual measurements were compared within each group. In the PD patients, the associations between "error in cognition" of reaching distance and clinical findings were also examined. The results showed that no differences were observed in any values regardless of hand dominance or severity of symptoms. The differences between the estimated and actual measurements were negatively deviated in the PD patients, indicating that they tended to underestimate reaching distance. "Error in cognition" of reaching distance correlated with the posture items of the motor section of the Unified Parkinson's Disease Rating Scale. This suggests that, in PD patients, postural deviation and postural instability might affect the cognition of the distance to a target object.
The maximum sizes of large scale structures in alternative theories of gravity
Bhattacharya, Sourav; Romano, Antonio Enea; Skordis, Constantinos; Tomaras, Theodore N
2016-01-01
The maximum size of a cosmic structure is given by the maximum turnaround radius -- the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulas for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulas agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the $\Lambda$CDM value, by a factor $1 + \frac{1}{3\omega}$, where $\omega\gg 1$ is the Brans-Dicke parameter, implying consistency of the theory with current data.
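The $\Lambda$CDM reference value against which the Brans-Dicke factor is compared follows from balancing the Newtonian attraction of a mass $M$ against the dark-energy repulsion, giving $R_{TA} = (3GM/\Lambda c^2)^{1/3}$. A quick numerical check (the value of $\Lambda$ below is an assumed round number from Planck-era fits, not taken from this paper):

```python
# LambdaCDM maximum turnaround radius R_TA = (3 G M / (Lambda c^2))^(1/3):
# the radius where the attraction of mass M balances dark-energy repulsion.
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
LAMBDA = 1.1e-52       # m^-2 (assumed cosmological-constant value)
M_SUN = 1.989e30       # kg
MPC = 3.086e22         # m

def turnaround_radius_mpc(mass_kg):
    return (3 * G * mass_kg / (LAMBDA * C * C)) ** (1 / 3) / MPC

# A rich galaxy cluster of ~1e15 solar masses:
print(f"{turnaround_radius_mpc(1e15 * M_SUN):.1f} Mpc")
```

For a $10^{15}\,M_\odot$ cluster this gives roughly 10 Mpc, comfortably above observed cluster sizes; in Brans-Dicke gravity the paper's result multiplies this by $(1 + \frac{1}{3\omega})^{?}$-type corrections that stay above the $\Lambda$CDM value.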
Bärnighausen, Till; Bloom, David E.; Cafiero-Fonseca, Elizabeth T.; O’Brien, Jennifer Carroll
2014-01-01
Vaccination has led to remarkable health gains over the last century. However, large coverage gaps remain, which will require significant financial resources and political will to address. In recent years, a compelling line of inquiry has established the economic benefits of health, at both the individual and aggregate levels. Most existing economic evaluations of particular health interventions fail to account for this new research, leading to potentially sizable undervaluation of those interventions. In line with this new research, we set forth a framework for conceptualizing the full benefits of vaccination, including avoided medical care costs, outcome-related productivity gains, behavior-related productivity gains, community health externalities, community economic externalities, and the value of risk reduction and pure health gains. We also review literature highlighting the magnitude of these sources of benefit for different vaccinations. Finally, we outline the steps that need to be taken to implement a broad-approach economic evaluation and discuss the implications of this work for research, policy, and resource allocation for vaccine development and delivery. PMID:25136129
New downshifted maximum in stimulated electromagnetic emission spectra
Sergeev, Evgeny; Grach, Savely
A new spectral maximum in spectra of stimulated electromagnetic emission of the ionosphere (SEE, [1]) was detected in experiments at the SURA facility in 2008 for pump frequencies f0 < 4.4-4.5 MHz, most stably for f0 = 4.3 MHz, the lowest possible pump frequency at the SURA facility. The new maximum is situated at frequency shifts ∆f ≈ -6 kHz from the pump wave frequency f0 (∆f = fSEE - f0), somewhat closer to f0 than the well-known [2,3] Downshifted Maximum (DM) in the SEE spectrum at ∆f ≈ -9 kHz. The detection and detailed study of the new feature (which we tentatively called the New Downshifted Maximum, NDM) became possible due to high frequency resolution in spectral analysis. The following properties of the NDM are established. (i) The NDM appears in the SEE spectra simultaneously with the DM and UM features after the pump turn-on (recall that the less intensive Upshifted Maximum, UM, is situated at ∆f ≈ +(6-8) kHz [2,3]). The NDM can't be attributed to the 1 DM [4] or Narrow Continuum Maximum (NCM, 2 [5]) SEE features, nor to the splitted DM near gyroharmonics [2]. (ii) The NDM is observed as a prominent feature for the maximum pump power of the SURA facility, P ≈ 120 MW ERP, for which the DM is almost covered by the Broad Continuum SEE feature [2,3]. For P ≈ 30-60 MW ERP the DM and NDM have comparable intensities. For lower pump power the DM prevails in the SEE spectrum, while the NDM becomes invisible, being covered by the thermal Narrow Continuum feature [2]. (iii) The NDM is exactly symmetric to the UM relative to f0 when the latter is observed, although the UM frequency offset increases up to ∆fUM ≈ +9 kHz with a decrease of the pump power down to P ≈ 4 MW ERP. The DM formation in the SEE spectrum is attributed to a three-wave interaction between the upper and lower hybrid waves in the ionosphere, and the lower hybrid frequency (≈ 7 kHz) determines the frequency offset of the DM high-frequency flank [2,6]. The detection of the NDM with
CASSAVA BREEDING I: THE VALUE OF BREEDING VALUE
Hernán Ceballos
2016-08-01
Breeding cassava relies on several selection stages (single row trial, SRT; preliminary; advanced; and uniform yield trials, UYT). This study uses data from 14 years of evaluations. From more than 20,000 genotypes initially evaluated, only 114 reached the last stage. The objective was to assess how the data at SRT could be used to predict the probabilities of genotypes reaching the UYT. Phenotypic data from each genotype at SRT was integrated into the selection index (SIN) used by the cassava breeding program. Average SIN from all the progenies derived from each progenitor was then obtained. Average SIN is an approximation of the breeding value of each progenitor. Data clearly suggested that some genotypes were better progenitors than others (e.g. a high number of their progenies reaching the UYT), suggesting important variation in breeding values of progenitors. However, regression of the average SIN of each parental genotype on the number of their respective progenies reaching UYT resulted in a negligible coefficient of determination (r2 = 0.05). Breeding value (e.g. average SIN at SRT) was not efficient in predicting which genotypes were more likely to reach the UYT stage. The number of families and progenies derived from a given progenitor was more efficient in predicting the probabilities of the progeny from a given parent reaching the UYT stage. Large within-family genetic variation tends to mask the true breeding value of each progenitor. The use of partially inbred progenitors (e.g. S1 or S2 genotypes) would reduce the within-family genetic variation, thus making the assessment of breeding value more accurate. Moreover, partial inbreeding of progenitors can improve the breeding value of the original (S0) parental material and sharply accelerate genetic gains. For instance, homozygous S1 genotypes for the dominant resistance to cassava mosaic disease could be generated and selected. All gametes from these selected S1 genotypes would carry the desirable allele
Cassava Breeding I: The Value of Breeding Value
Ceballos, Hernán; Pérez, Juan C.; Joaqui Barandica, Orlando; Lenis, Jorge I.; Morante, Nelson; Calle, Fernando; Pino, Lizbeth; Hershey, Clair H.
2016-01-01
Breeding cassava relies on several selection stages (single row trial-SRT; preliminary; advanced; and uniform yield trials—UYT). This study uses data from 14 years of evaluations. From more than 20,000 genotypes initially evaluated only 114 reached the last stage. The objective was to assess how the data at SRT could be used to predict the probabilities of genotypes reaching the UYT. Phenotypic data from each genotype at SRT was integrated into the selection index (SIN) used by the cassava breeding program. Average SIN from all the progenies derived from each progenitor was then obtained. Average SIN is an approximation of the breeding value of each progenitor. Data clearly suggested that some genotypes were better progenitors than others (e.g., high number of their progenies reaching the UYT), suggesting important variation in breeding values of progenitors. However, regression of average SIN of each parental genotype on the number of their respective progenies reaching UYT resulted in a negligible coefficient of determination (r2 = 0.05). Breeding value (e.g., average SIN) at SRT was not efficient predicting which genotypes were more likely to reach the UYT stage. Number of families and progenies derived from a given progenitor were more efficient predicting the probabilities of the progeny from a given parent reaching the UYT stage. Large within-family genetic variation tends to mask the true breeding value of each progenitor. The use of partially inbred progenitors (e.g., S1 or S2 genotypes) would reduce the within-family genetic variation thus making the assessment of breeding value more accurate. Moreover, partial inbreeding of progenitors can improve the breeding value of the original (S0) parental material and sharply accelerate genetic gains. For instance, homozygous S1 genotypes for the dominant resistance to cassava mosaic disease (CMD) could be generated and selected. All gametes from these selected S1 genotypes would carry the desirable allele and
Maximum daily rainfall in South Korea
Saralees Nadarajah; Dongseok Choi
2007-08-01
Annual maxima of daily rainfall for the years 1961–2001 are modeled for five locations in South Korea (chosen to give a good geographical representation of the country). The generalized extreme value distribution is fitted to data from each location to describe the extremes of rainfall and to predict its future behavior. We find evidence to suggest that the Gumbel distribution provides the most reasonable model for four of the five locations considered. We explore the possibility of trends in the data but find no evidence suggesting trends. We derive estimates of 10, 50, 100, 1000, 5000, 10,000, 50,000 and 100,000 year return levels for daily rainfall and describe how they vary with the locations. This paper provides the first application of extreme value distributions to rainfall data from South Korea.
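For the Gumbel model the abstract favors, the T-year return level has the closed form z_T = μ − β ln(−ln(1 − 1/T)). The sketch below fits μ and β by the method of moments to an invented annual-maximum series (the paper fits GEV/Gumbel models to the actual 1961-2001 Korean records, by different means).

```python
import math
import random
import statistics

random.seed(7)

# Invented annual-maximum daily rainfall (mm), drawn from a known Gumbel
# law -- a stand-in for the 41-year Korean series, not the real data.
MU_TRUE, BETA_TRUE = 120.0, 30.0
annual_max = [MU_TRUE - BETA_TRUE * math.log(-math.log(random.random()))
              for _ in range(41)]

# Method-of-moments Gumbel fit:
#   beta = s * sqrt(6) / pi,   mu = mean - gamma * beta   (Euler gamma)
EULER_GAMMA = 0.5772156649
s = statistics.stdev(annual_max)
beta = s * math.sqrt(6) / math.pi
mu = statistics.mean(annual_max) - EULER_GAMMA * beta

def return_level(T):
    """Rainfall amount exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1 - 1 / T))

for T in (10, 100, 1000):
    print(f"{T:>5}-year return level: {return_level(T):6.1f} mm")
```

Note how the very long return periods quoted in the abstract (up to 100,000 years) extrapolate far beyond a 41-year record; the estimates there rest entirely on the fitted tail.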
Concept of REACH and impact on evaluation of chemicals.
Foth, H; Hayes, Aw
2008-01-01
Industrial chemicals have been in use for many decades, and new products are regularly invented and introduced to the market. Also for decades, many different chemical laws have been introduced to regulate the safe handling of chemicals in different use patterns. The patchwork of current regulation in the European Union is to be replaced by the new regulation on industrial chemical control, REACH. REACH stands for registration, evaluation, and authorization of chemicals. REACH entered into force on June 1, 2007. REACH aims to overcome limitations in the testing requirements of former regulation on industrial chemicals, to enhance competitiveness and innovation with regard to manufacturing safer substances, and to promote the development of alternative testing methods. A main task of REACH is to address data gaps regarding the properties and uses of industrial chemicals. Producers, importers, and downstream users will have to compile and communicate standard information for all chemicals. Information sets to be prepared include safety data sheets (SDS), chemical safety reports (CSR), and chemical safety assessments (CSA). These are designed to guarantee adequate handling in the production chain, in transport and in use, and to prevent the substances from being released to and distributed within the environment. Another important aim is to identify the most harmful chemicals and to set incentives to substitute them with safer alternatives. On one hand, REACH will have substantial impact on the basic understanding of the evaluation of chemicals. However, the toxicological sciences can also substantially influence the workability of REACH, which depends on transforming data into the information required to understand and manage acceptable and unacceptable risks in the use of industrial chemicals. The REACH regulation has been laid down in the main document and 17 Annexes of more than 849 pages. Even bigger technical guidance documents will follow and will inform about the rules for
The role of pressure anisotropy on the maximum mass of cold compact stars
Karmakar, S.; Mukherjee, S.; Sharma, R.; Maharaj, S. D.
2007-06-01
We study the physical features of a class of exact solutions for cold compact anisotropic stars. The effect of pressure anisotropy on the maximum mass and surface red-shift is analysed in the Vaidya–Tikekar model. It is shown that maximum compactness, red-shift and mass increase in the presence of anisotropic pressures; numerical values are generated which are in agreement with observation.
Azam Zaka
2014-10-01
This paper is concerned with modifications of the maximum likelihood, moments, and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is examined by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments, and percentile estimators with respect to bias, mean square error, and total deviation.
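As an illustrative sketch (not code from the paper), the classical maximum likelihood estimators for the two-parameter power function distribution with density f(x) = (β/α^β) x^(β-1) on 0 < x < α are α̂ = max(x_i) and β̂ = n / Σ ln(α̂/x_i), and their sampling behavior can be checked by a small Monte Carlo experiment; the parameter values and sample size below are arbitrary choices.

```python
import math
import random

def sample_power(alpha, beta, n, rng):
    # Inverse-CDF sampling: F(x) = (x/alpha)**beta, so X = alpha * U**(1/beta).
    return [alpha * rng.random() ** (1.0 / beta) for _ in range(n)]

def mle_power(xs):
    # Classical MLEs for the two-parameter power function distribution:
    # alpha_hat = max(x_i), beta_hat = n / sum(log(alpha_hat / x_i)).
    a_hat = max(xs)
    b_hat = len(xs) / sum(math.log(a_hat / x) for x in xs)
    return a_hat, b_hat

rng = random.Random(42)
xs = sample_power(alpha=2.0, beta=3.0, n=5000, rng=rng)
a_hat, b_hat = mle_power(xs)
```

With 5000 samples both estimates land close to the true values; repeating this over many replications is the kind of simulation used to compare bias and mean square error of the traditional and modified estimators.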
Efficiency at maximum power for an Otto engine with ideal feedback
Wang, Honghui; He, Jizhou; Wang, Jianhui; Wu, Zhaoqi
2016-10-01
We propose an Otto heat engine that undergoes processes involving a special class of feedback and analyze theoretically its response. We use stochastic thermodynamics to determine the performance characteristics of the heat engine and indicate the possibility that its maximum efficiency can surpass the Carnot value. The analytical expression for efficiency at maximum power, including the effects resulting from feedback, reduces to that previously derived based on an engine without feedback.
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
… 20 Employees' Benefits 1 2010-04-01 false Maximum creditable compensation. … CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. … Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable …
Determining the Tsallis parameter via maximum entropy
Conroy, J. M.; Miller, H. G.
2015-05-01
The nonextensive entropic measure proposed by Tsallis [C. Tsallis, J. Stat. Phys. 52, 479 (1988), 10.1007/BF01016429] introduces a parameter, q, which is not defined a priori but rather must be determined. The value of q is typically determined from a piece of data and then fixed over the range of interest. On the other hand, from a phenomenological viewpoint, there are instances in which q cannot be treated as a constant. We present two distinct approaches for determining q, depending on the form of the equations of constraint for the particular system. In the first case the equations of constraint for the operator Ô can be written as Tr(F^q Ô) = C, where C may be an explicit function of the distribution function F. We show that in this case one can solve an equivalent maxent problem which yields q as a function of the corresponding Lagrange multiplier. As an illustration, the exact solution of the static generalized Fokker-Planck equation (GFPE) is obtained from maxent with the Tsallis entropy. As in the case where C is a constant, if q is treated as a variable within the maxent framework the entropic measure is maximized trivially for all values of q; therefore q must be determined from existing data. In the second case an additional equation of constraint exists which cannot be brought into the above form. In this case the additional equation of constraint may be used to determine the fixed value of q.
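For reference (a sketch, not from the paper), the Tsallis entropy of a discrete distribution {p_i} is S_q = (1 − Σ p_i^q)/(q − 1) (with Boltzmann's constant set to 1), and it reduces to the Shannon entropy in the limit q → 1; the example distribution below is arbitrary.

```python
import math

def tsallis_entropy(p, q):
    # S_q = (1 - sum(p_i**q)) / (q - 1), with k_B = 1; requires q != 1.
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def shannon_entropy(p):
    # Shannon entropy in nats, the q -> 1 limit of S_q.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.3, 0.2]
# Evaluating S_q just above q = 1 recovers the Shannon value numerically.
s_near_1 = tsallis_entropy(p, q=1.000001)
```

This makes concrete why q must be fixed externally: for any given q the formula is a well-defined functional, and maximizing it subject to constraints does not by itself single out a value of q.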
Ingo W Nader
Parameters of the two-parameter logistic model are generally estimated via the expectation-maximization algorithm, which iteratively improves initial values for all parameters until convergence is reached. Effects of initial values are rarely discussed in item response theory (IRT), but initial values were recently found to affect item parameters when estimating the latent distribution with full non-parametric maximum likelihood. However, this method is rarely used in practice. Hence, the present study investigated the effects of initial values on item parameter bias and on recovery of item characteristic curves in BILOG-MG 3, a widely used IRT software package. Results showed notable effects of initial values on item parameters. For tighter convergence criteria, the effects of initial values decreased, but item parameter bias increased and recovery of the latent distribution worsened. For practical application, it is advised to use the BILOG default convergence criterion with appropriate initial values when estimating the latent distribution from data.
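As background (an illustrative sketch, not taken from the study), the two-parameter logistic model gives the probability of a correct response as P(θ) = 1/(1 + exp(−a(θ − b))), where a is the item discrimination and b the item difficulty; these are the item parameters whose estimates the initial values were found to affect. The parameter values below are arbitrary.

```python
import math

def icc_2pl(theta, a, b):
    # Two-parameter logistic (2PL) item characteristic curve:
    # probability of a correct response at ability theta, given
    # discrimination a and difficulty b.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is exactly 0.5, regardless of a.
p_mid = icc_2pl(theta=0.0, a=1.2, b=0.0)
# Above the difficulty, the probability exceeds 0.5; the discrimination a
# controls how steeply the curve rises around b.
p_easy = icc_2pl(theta=1.0, a=1.2, b=0.0)
```

Recovery of these curves across the ability range is the criterion the study uses to judge how much the EM starting values matter.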
Properties of Carry Value Transformation
Suryakanta Pal
2012-01-01
Carry Value Transformation (CVT) is a model of a discrete deterministic dynamical system. In the present study it is proved that (1) the sum of any two nonnegative integers equals the sum of their CVT and XOR values; (2) the number of iterations leading to either CVT = 0 or XOR = 0 does not exceed the maximum of the lengths of the two addends expressed as binary strings, while a similar process of addition using the modified Carry Value Transformation (MCVT) and XOR requires at most two iterations for MCVT to reach zero; and (3) an equivalence relation exists on Z×Z which divides the CV table into disjoint equivalence classes.
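The first two claims are easy to check directly: CVT(a, b) is the bitwise carry word (a AND b) shifted left one position, XOR is the carry-free sum, and a + b = CVT(a, b) + XOR(a, b) at every step. A minimal sketch (the function names are my own, not the paper's):

```python
def cvt(a, b):
    # Carry Value Transformation: the carries generated by bitwise
    # addition of a and b, shifted left one position.
    return (a & b) << 1

def add_via_cvt(a, b):
    # Repeatedly replace (a, b) with (CVT(a, b), XOR(a, b)).
    # The invariant a + b is preserved at every step, since
    # ((a & b) << 1) + (a ^ b) == a + b. Stop when either value is 0;
    # the remaining nonzero value (a | b) is then the sum.
    steps = 0
    while a != 0 and b != 0:
        a, b = cvt(a, b), a ^ b
        steps += 1
    return a | b, steps

total, steps = add_via_cvt(1234, 5678)
```

Here the iteration count stays within the bound stated in the abstract: it never exceeds the binary length of the longer addend (13 bits for 5678).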
Kinematic analysis of sprinting pickup acceleration versus maximum sprinting speed
S. MANZER
2016-10-01
Pickup acceleration and maximum sprinting speed are two essential phases of the 100-m sprint, differing in sprinting speed, step length, step frequency, and technique. The aim of the study was to describe and compare the kinematic parameters of both sprint variants. It was hypothesized that differences would be found in sprinting speed, step length, and flight and contact times, as well as between the body angles of key positions. From 8 female and 8 male (N = 16) junior track and field athletes, a double stride of both sprint variants was filmed (200 Hz) from a sagittal position, and the 10-m sprint time was measured using triple light barriers. Kinematic data for sprinting speed and the angles of the knee, hip, and ankle were compared with an analysis of variance with repeated measures. Sprinting speed was 7.7 m/s and 8.0 m/s (female) and 8.4 m/s and 9.2 m/s (male), with significantly greater step length and flight time and shorter ground contact time during maximum sprinting speed. Because of the longer flight time, it is possible to place the foot closer to the body but with a more extended knee on the ground. These characteristics can be used as orientation for technique training.
Robust stochastic maximum principle: Complete proof and discussions
Poznyak Alex S.
2002-01-01
This paper develops a version of the Robust Stochastic Maximum Principle (RSMP) applied to the Minimax Mayer Problem formulated for stochastic differential equations with a control-dependent diffusion term. Parametric families of first- and second-order adjoint stochastic processes are introduced to construct the corresponding Hamiltonian formalism. The Hamiltonian function used for the construction of the robust optimal control is shown to be equal to the Lebesgue integral, over a parametric set, of the standard stochastic Hamiltonians corresponding to a fixed value of the uncertain parameter. The paper deals with a cost function given at a finite horizon and containing the mathematical expectation of a terminal term. A terminal condition, given by a vector function, is also considered. Optimal control strategies, adapted to the available information, are constructed for a wide class of uncertain systems given by a stochastic differential equation with unknown parameters from a given compact set. This problem belongs to the class of minimax stochastic optimization problems. The proof is based on recent results obtained for the Minimax Mayer Problem with a finite uncertainty set [14,43-45], as well as on the variation results of [53] derived for the Stochastic Maximum Principle for nonlinear stochastic systems under complete information. A discussion of the obtained results concludes this study.
Dependence of maximum concentration from chemical accidents on release duration
Hanna, Steven; Chang, Joseph
2017-01-01
Chemical accidents often involve releases of a total mass, Q, of stored material in a tank over a time duration, td, of less than a few minutes. The value of td is usually uncertain because of lack of knowledge of key information, such as the size and location of the hole and the pressure and temperature of the chemical. In addition, it is rare that eyewitnesses or video cameras are present at the time of the accident. For inhalation hazards, serious health effects (such as damage to the respiratory system) are determined by short-term averages (…). Pressurized liquefied chlorine releases from tanks are given as examples, focusing on scenarios from the Jack Rabbit I (JR I) field experiment. The analytical calculations and the predictions of the SLAB dense gas dispersion model agree that the ratio of maximum C for two different td's is greatest (as much as a factor of ten) near the source. At large distances (beyond a few km for the JR I scenarios), where the travel time tt exceeds both td's, the ratio of maximum C approaches unity.
THE MAXIMUM AND MINIMUM DEGREES OF RANDOM BIPARTITE MULTIGRAPHS
Chen Ailian; Zhang Fuji; Li Hao
2011-01-01
In this paper the authors generalize the classic random bipartite graph model and define a model of random bipartite multigraphs as follows: let m = m(n) be a positive integer-valued function of n, and consider the probability space G(n, m; {p_k}) consisting of all labeled bipartite multigraphs with two vertex sets A = {a_1, a_2, ..., a_n} and B = {b_1, b_2, ..., b_m}, in which the numbers t_{a_i b_j} of edges between any two vertices a_i ∈ A and b_j ∈ B are independent, identically distributed random variables with distribution P{t_{a_i b_j} = k} = p_k, k = 0, 1, 2, ..., where p_k ≥ 0 and ∑ p_k = 1. They show that X_{c,d,A}, the number of vertices in A of G_{n,m} ∈ G(n, m; {p_k}) with degree between c and d, has an asymptotically Poisson distribution, and they answer the following two questions about the space G(n, m; {p_k}) with {p_k} having a geometric, binomial, or Poisson distribution, respectively: Under which condition on {p_k} is there a function D(n) such that almost every random multigraph G_{n,m} ∈ G(n, m; {p_k}) has maximum degree D(n) in A? Under which condition on {p_k} does almost every multigraph G_{n,m} ∈ G(n, m; {p_k}) have a unique vertex of maximum degree in A?
Maximum power analysis of photovoltaic module in Ramadi city
Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)
2013-07-01
Performance of a photovoltaic (PV) module is greatly dependent on solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output of a PV module and on energy yield. In this paper, the maximum PV power obtainable in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013, measured at the earth's surface in the campus area of Anbar University. Actual average readings were taken from the data logger of the sun tracker system, which was set to record the average every two minutes based on one-second samples. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance were analyzed to optimize the output of photovoltaic solar modules. The results show that PV system sizing can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
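The sizing figure follows from simple arithmetic (an illustrative sketch, not the paper's calculation): if a tracked array collects g times the energy of a fixed-tilt array of the same size, the tracked array can be downsized by a fraction 1 − 1/g while delivering the same energy; a 12.5% reduction corresponds to a yield gain of g = 8/7 ≈ 14.3%.

```python
def sizing_reduction(yield_gain):
    # yield_gain: ratio of tracked to fixed-tilt energy yield for arrays
    # of equal size (e.g. 8/7 for about +14.3%). Returns the fractional
    # reduction in array size possible at equal delivered energy.
    return 1.0 - 1.0 / yield_gain

# A yield gain of 8/7 is consistent with the 12.5% sizing reduction reported.
reduction = sizing_reduction(8.0 / 7.0)
```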