Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into modern form, together with guidelines and recent data collected by the author, provide background for duplicating Hope's experiments in the classroom. (JN)
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (i.e., point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation that was compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Exploring high-density baryonic matter: Maximum freeze-out density
Randrup, Joergen [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States)]; Cleymans, Jean [University of Cape Town, UCT-CERN Research Centre and Department of Physics, Rondebosch (South Africa)]
2016-08-15
The hadronic freeze-out line is calculated in terms of the net baryon density and the energy density, instead of the usual T and μ_B. This analysis makes it apparent that the freeze-out density exhibits a maximum as the collision energy is varied. This maximum freeze-out density has μ_B = 400-500 MeV, which is above the critical value, and it is reached for a fixed-target bombarding energy of 20-30 GeV/N, well within the parameters of the proposed NICA collider facility. (orig.)
Maximum entropy reconstruction of spin densities involving non uniform prior
Schweizer, J.; Ressouche, E. [DRFMC/SPSMS/MDN CEA-Grenoble (France); Papoular, R.J. [CEA-Saclay, Gif sur Yvette (France). Lab. Leon Brillouin; Tasset, F. [Inst. Laue Langevin, Grenoble (France); Zheludev, A.I. [Brookhaven National Lab., Upton, NY (United States). Physics Dept.
1997-09-01
Diffraction experiments give microscopic information on structures in crystals. A method which uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. This method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one which has the highest prior (intrinsic) probability. Considering that all the points of the map are equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. This method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data, as well as for distributions of spin density. The density maps obtained by this method, compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most cases, some knowledge exists about the distribution under investigation before the measurements are performed. It can range from simple information on the type of scattering electrons to an elaborate theoretical model. In these cases, the uniform prior, which considers all the different pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the maximum entropy formalism through a model m(r), via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model, observed in the final map, is really contained in the data as, with the new definition, it costs entropy. This paper presents illustrations of model testing.
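The entropy functional described in this record can be illustrated numerically. Below is a minimal sketch (the four-pixel prior is made up for illustration) of Skilling's entropy relative to a model m, which is zero at ρ = m and negative for any departure from the model:

```python
import math

def skilling_entropy(rho, m):
    """Skilling's entropy relative to a prior model m:
    S = sum(rho - m - rho*log(rho/m)).
    S is maximal (S = 0) exactly when rho == m; any departure costs entropy."""
    return sum(r - mm - r * math.log(r / mm) for r, mm in zip(rho, m))

m = [0.1, 0.3, 0.4, 0.2]                 # hypothetical prior model map
print(skilling_entropy(m, m))            # 0.0: with no data, the maximum sits at rho = m
print(skilling_entropy([0.25] * 4, m))   # negative: a flat map departing from m costs entropy
```

With a flat model m, this functional reduces to the usual Boltzmann-entropy (uniform-prior) case mentioned earlier in the abstract.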
Perceiving action boundaries: Learning effects in perceiving maximum jumping-reach affordances
Ramenzoni, V.C; Davis, T.J; Riley, M.A; Shockley, K
2010-01-01
... Those estimates were compared with estimates that perceivers made for themselves. In Experiment 1, participants initially underestimated the maximum jumping-reach height both for themselves and for the...
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where, Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation ... data sets that differ considerably in the magnitude.
Maximum length scale in density based topology optimization
Lazarov, Boyan Stefanov; Wang, Fengwen
2017-01-01
The focus of this work is on two new techniques for imposing a maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low-pass filter applied to the design parametrization. The main idea...
Maximum likelihood estimation for semiparametric density ratio model.
Diao, Guoqing; Ning, Jing; Qin, Jing
2012-06-27
In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.
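The model's structure is easy to sketch on a discrete support. The baseline density f0 and the tilt parameter theta below are illustrative values, not taken from the paper; the sketch only shows how the known parametric factor exp(theta·x·y) modulates an unknown baseline:

```python
import math

def conditional_density(y_support, f0, theta, x):
    """Semiparametric density ratio model on a discrete support:
    f(y|x) proportional to f0(y) * exp(theta * x * y),
    with the normalizing constant c(x) computed explicitly.
    f0 plays the role of the unknown baseline density."""
    weights = [f0[i] * math.exp(theta * x * y) for i, y in enumerate(y_support)]
    c = sum(weights)
    return [w / c for w in weights]

y = [0, 1, 2, 3]
f0 = [0.4, 0.3, 0.2, 0.1]   # hypothetical baseline density
for x in (0.0, 1.0):
    f = conditional_density(y, f0, theta=0.5, x=x)
    mean = sum(yy * p for yy, p in zip(y, f))
    print(x, sum(f), mean)   # each conditional sums to 1; the mean shifts with x
```

At x = 0 the conditional collapses to the baseline f0; a positive theta tilts mass toward larger y as x grows, which is the regression effect the model captures.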
Thermospheric density model biases at the 23rd sunspot maximum
Pardini, C.; Moe, K.; Anselmo, L.
2012-07-01
Uncertainties in the neutral density estimation are the major source of aerodynamic drag errors and one of the main limiting factors in the accuracy of the orbit prediction and determination process at low altitudes. Massive efforts have been made over the years to constantly improve the existing operational density models, or to create even more precise and sophisticated tools. Special attention has also been paid to researching more appropriate solar and geomagnetic indices. However, the operational models still suffer from weaknesses. Even though a number of studies have been carried out in the last few years to define the performance improvements, further critical assessments are necessary to evaluate and compare the models at different altitudes and solar activity conditions. Taking advantage of the results of a previous study, an investigation of thermospheric density model biases during the last sunspot maximum (October 1999 - December 2002) was carried out by analyzing the semi-major axis decay of four satellites: Cosmos 2265, Cosmos 2332, SNOE and Clementine. Six thermospheric density models, widely used in spacecraft operations, were analyzed: JR-71, MSISE-90, NRLMSISE-00, GOST-2004, JB2006 and JB2008. During the time span considered, for each satellite and atmospheric density model, a fitted drag coefficient was solved for and then compared with the calculated physical drag coefficient. It was therefore possible to derive the average density biases of the thermospheric models during the maximum of the 23rd solar cycle. Below 500 km, all the models overestimated the average atmospheric density by amounts varying between +7% and +20%. This was an inevitable consequence of constructing thermospheric models from density data obtained by assuming a fixed drag coefficient, independent of altitude. Because the uncertainty affecting the drag coefficient measurements was about 3% at both 200 km and 480 km of altitude, the calculated air density biases below 500 km were
Strange Stars: Can Their Crust Reach the Neutron Drip Density?
Hai Fu; Yong-Feng Huang
2003-01-01
The electrostatic potential of electrons near the surface of static strange stars at zero temperature is studied within the framework of the MIT bag model. We find that for QCD parameters within rather wide ranges, if the nuclear crust on the strange star is at a density leading to neutron drip, then the electrostatic potential will be insufficient to establish an outwardly directed electric field, which is crucial for the survival of such a crust. If a minimum gap width of 200 fm is brought in as a more stringent constraint, then our calculations will completely rule out the possibility of such crusts. Therefore, our results argue against the existence of neutron-drip crusts in nature.
Perceiving action boundaries: Learning effects in perceiving maximum jumping-reach affordances
Ramenzoni, V.C.; Davis, T.J.; Riley, M.A.; Shockley, K.
2010-01-01
Coordinating with another person requires that one can perceive what the other is capable of doing. This ability often benefits from opportunities to practice and learn. Two experiments were conducted in which we investigated perceptual learning in the context of perceiving the maximum height to whi
2010-07-01
... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...
Investigation on Maximum Available Reach for Different Modulation Formats in WDM-PON Systems
Kurbatska, I.; Bobrovs, V.; Spolitis, S.; Gavars, P.; Ivanovs, G.; Parts, R.
2016-08-01
Considering the growing demand for broadband access networks, in the present paper we investigate various modulation formats as a way of increasing the performance of optical transmission systems. Non-return-to-zero (NRZ) on-off keying, return-to-zero (RZ) OOK, carrier-suppressed RZ (CSRZ) OOK, duobinary (DB), NRZ differential phase shift keying (NRZ-DPSK), RZ-DPSK and CSRZ-DPSK formats are compared using the maximal achievable reach with a bit error rate less than 10^-9 as a criterion. Simulations are performed using the OptSim software tool. It is shown that, for a transmission system without dispersion compensation, the best results are achieved by the duobinary and CSRZ-OOK modulation formats, while for a system using dispersion-compensating fiber (DCF) the longest transmission distance is achieved by the RZ-DPSK modulation format. By investigating the influence of channel spacing for the best-performing modulation formats, a decrease in network reach for transmission systems with DCF fiber has been observed due to channel crosstalk.
Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities
Kinney, Justin B
2014-01-01
Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.
Maximum flux density of the gyrosynchrotron spectrum in a nonuniform source
Ai-Hua Zhou; Rong-Chuan Wang; Cheng-Wen Shao
2009-01-01
The maximum flux density of a gyrosynchrotron radiation spectrum in a magnetic dipole model with self-absorption and gyroresonance is calculated. Our calculations show that the maximum flux density of the gyrosynchrotron spectrum increases with increasing low-energy cutoff, number density, input depth of energetic electrons, magnetic field strength and viewing angle, and with decreasing energy spectral index of energetic electrons, number density and temperature of thermal electrons. It is found that there are linear correlations between the logarithms of the maximum flux density and the above eight parameters, with correlation coefficients higher than 0.91 and fit accuracies better than 10%. The maximum flux density could be a good indicator of the changes of these source parameters. In addition, we find that there are very good positive linear correlations between the logarithms of the maximum flux density and the peak frequency when the above former five parameters vary respectively. Their linear correlation coefficients are higher than 0.90 and the fit accuracies are better than 0.5%.
Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in
2016-09-07
Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.
Highlights:
• State estimation using the maximum likelihood method was performed on an NMR quantum information processor.
• Physically valid density matrices were obtained every time, in contrast to standard quantum state tomography.
• Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
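A common way to guarantee positivity in maximum likelihood tomography (the record does not spell out its exact parametrization, so this is an assumption) is to write ρ = T†T / Tr(T†T) with T lower triangular; any ρ of this form is automatically Hermitian, positive semidefinite and unit-trace. A 2×2 sketch with hypothetical parameter values:

```python
def rho_from_T(T):
    """Build a density matrix rho = T†T / Tr(T†T) from a 2x2 complex
    lower-triangular T. By construction rho is Hermitian, positive
    semidefinite, and has unit trace, whatever the entries of T."""
    # Conjugate transpose T†, then the product T†T
    Td = [[T[j][i].conjugate() for j in range(2)] for i in range(2)]
    M = [[sum(Td[i][k] * T[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    tr = (M[0][0] + M[1][1]).real
    return [[M[i][j] / tr for j in range(2)] for i in range(2)]

T = [[1.0, 0.0], [0.3 + 0.2j, 0.8]]   # hypothetical fit parameters
rho = rho_from_T(T)
trace = (rho[0][0] + rho[1][1]).real
det = (rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]).real
print(trace, det)   # trace = 1 and det >= 0, so both eigenvalues are nonnegative
```

An unconstrained fit of the four matrix entries could wander into negative eigenvalues; fitting the entries of T instead keeps every candidate state physical at all stages of reconstruction.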
Unification of field theory and maximum entropy methods for learning probability densities.
Kinney, Justin B
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
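The "most common types of maximum entropy estimates" mentioned here constrain a few moments of the density, and the solution is then an exponential family. A sketch for a single mean constraint on a finite support (support and target mean are illustrative), solving for the Lagrange multiplier by bisection:

```python
import math

def maxent_with_mean(support, target_mean, tol=1e-10):
    """Maximum entropy pmf on a finite support subject to a mean constraint.
    The solution has the exponential-family form p(x) proportional to
    exp(lam * x); the multiplier lam is found by bisection, using the fact
    that the mean is monotonically increasing in lam."""
    def mean(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_with_mean(range(11), 3.0)
print(sum(p), sum(x * px for x, px in enumerate(p)))   # sums to 1, mean ≈ 3
```

In the paper's framing, this estimate is what a Bayesian field theory with a matching constraint returns in its infinite smoothness limit; at finite smoothness the field theory can return a lower-entropy alternative.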
Abhijit Sinha
2014-01-01
A comparative analysis of thermodynamic efficiency based on maximum power and power density conditions has been performed for a solar-driven Carnot heat engine with internal irreversibility. In this analysis, the heat transfer from the hot reservoir is taken to be in the radiation mode and the heat transfer to the cold reservoir in the convection mode. The thermodynamic efficiency function and the power and power density functions have been derived, and maximization of the power functions has been performed for various design parameters. From the optimum conditions, the thermal efficiencies at maximum power and maximum power density have been obtained. The effects of internal irreversibility, extreme temperature ratios and specific engine size (the area ratio between the hot and cold reservoirs) as design parameters on the thermodynamic efficiencies have been investigated for both conditions. The efficiencies have been compared with the Curzon-Ahlborn and Carnot efficiencies respectively. The analysis showed that the efficiency at maximum power output is greater than the efficiency at maximum power density, and that the efficiencies can exceed the Curzon-Ahlborn efficiency only for low values of the design parameters.
On the rate of convergence of the maximum likelihood estimator of a k-monotone density
Wellner, Jon A.
2009-01-01
Bounds for the bracketing entropy of the classes of bounded k-monotone functions on [0, A] are obtained under both the Hellinger distance and the L^p(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a k-monotone density.
On the rate of convergence of the maximum likelihood estimator of a K-monotone density
GAO FuChang; WELLNER Jon A
2009-01-01
Bounds for the bracketing entropy of the classes of bounded K-monotone functions on [0, A] are obtained under both the Hellinger distance and the L^p(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a K-monotone density.
3D Global Coronal Density Structure and Associated Magnetic Field near Solar Maximum
Kramar, Maxim; Lin, Haosheng
2016-01-01
Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal dynamic phenomena at all scales. We employ STEREO/COR1 data obtained near maximum of solar activity in December 2012 (Carrington rotation, CR 2131) to retrieve and analyze the three-dimensional (3D) coronal electron density in the range of heights from $1.5$ to $4\ \mathrm{R}_\odot$ using a tomography method and qualitatively deduce structures of the coronal magnetic field. The 3D electron density analysis is complemented by the 3D STEREO/EUVI emissivity in the 195 \AA\ band obtained by tomography for the same CR period. We find that the magnetic field configuration during CR 2131 has a tendency to become radially open at heliocentric distances below $\sim 2.5\ \mathrm{R}_\odot$. We compared the reconstructed 3D coronal structures over the CR near the solar maximum to the one at deep solar minimum. Results of our 3D density reconstruction will help to constrain solar coronal field models and test the a...
3D Global Coronal Density Structure and Associated Magnetic Field near Solar Maximum
Maxim Kramar
2016-08-01
Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal dynamic phenomena at all scales. We employ STEREO/COR1 data obtained near maximum of solar activity in December 2012 (Carrington rotation, CR 2131) to retrieve and analyze the three-dimensional (3D) coronal electron density in the range of heights from 1.5 to 4 R_⊙ using a tomography method and qualitatively deduce structures of the coronal magnetic field. The 3D electron density analysis is complemented by the 3D STEREO/EUVI emissivity in the 195 Å band obtained by tomography for the same CR period. We find that the magnetic field configuration during CR 2131 has a tendency to become radially open at heliocentric distances below ∼2.5 R_⊙. We compared the reconstructed 3D coronal structures over the CR near the solar maximum to the one at deep solar minimum. Results of our 3D density reconstruction will help to constrain solar coronal field models and test the accuracy of the magnetic field approximations for coronal modeling.
3D Global Coronal Density Structure and Associated Magnetic Field near Solar Maximum
Kramar, Maxim; Airapetian, Vladimir; Lin, Haosheng
2016-08-01
Measurement of the coronal magnetic field is a crucial ingredient in understanding the nature of solar coronal dynamic phenomena at all scales. We employ STEREO/COR1 data obtained near maximum of solar activity in December 2012 (Carrington rotation, CR 2131) to retrieve and analyze the three-dimensional (3D) coronal electron density in the range of heights from 1.5 to 4 R_⊙ using a tomography method and qualitatively deduce structures of the coronal magnetic field. The 3D electron density analysis is complemented by the 3D STEREO/EUVI emissivity in 195 Å band obtained by tomography for the same CR period. We find that the magnetic field configuration during CR 2131 has a tendency to become radially open at heliocentric distances below ˜ 2.5 R_⊙. We compared the reconstructed 3D coronal structures over the CR near the solar maximum to the one at deep solar minimum. Results of our 3D density reconstruction will help to constrain solar coronal field models and test the accuracy of the magnetic field approximations for coronal modeling.
Fiebig, H R
2002-01-01
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss practical issues of the approach.
Probing Ionic Liquid Aqueous Solutions Using Temperature of Maximum Density Isotope Effects
Mohammad Tariq
2013-03-01
This work is a new development of an extensive research program that is investigating for the first time shifts in the temperature of maximum density (TMD) of aqueous solutions caused by ionic liquid solutes. In the present case we have compared the shifts caused by three ionic liquid solutes with a common cation—1-ethyl-3-methylimidazolium coupled with acetate, ethylsulfate and tetracyanoborate anions—in normal and deuterated water solutions. The observed differences are discussed in terms of the nature of the corresponding anion-water interactions.
Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density.
Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A
2009-06-01
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f_0 = exp(φ_0), where φ_0 is a concave function on R. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of H_k, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ_0 = log f_0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f_0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.
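The defining property in this record, a density of the form f_0 = exp(φ_0) with φ_0 concave, can be checked numerically: on an equally spaced grid, log-concavity requires nonpositive second differences of log f. A small sketch contrasting a normal density (log-concave) with a Cauchy density (not log-concave):

```python
import math

def is_log_concave(f_vals):
    """A density sampled on an equally spaced grid is consistent with
    log-concavity iff the second differences of log f are <= 0
    (a small tolerance absorbs floating-point noise)."""
    logf = [math.log(v) for v in f_vals]
    second = [logf[i - 1] - 2 * logf[i] + logf[i + 1]
              for i in range(1, len(logf) - 1)]
    return all(d <= 1e-12 for d in second)

grid = [-3 + 0.1 * i for i in range(61)]
normal = [math.exp(-x * x / 2) / math.sqrt(2 * math.pi) for x in grid]
cauchy = [1 / (math.pi * (1 + x * x)) for x in grid]
print(is_log_concave(normal), is_log_concave(cauchy))  # True False
```

For the normal density, log f is a quadratic with second difference exactly -h²; for the Cauchy density the log-density is convex in the tails (|x| > 1), which the check detects.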
Rotta, Davide; De Michielis, Marco; Ferraro, Elena; Fanciulli, Marco; Prati, Enrico
2016-06-01
Scalability from single-qubit operations to multi-qubit circuits for quantum information processing requires architecture-specific implementations. Semiconductor hybrid qubit architecture is a suitable candidate to realize large-scale quantum information processing, as it combines a universal set of logic gates with fast and all-electrical manipulation of qubits. We propose an implementation of hybrid qubits, based on Si metal-oxide-semiconductor (MOS) quantum dots, compatible with the CMOS industrial technological standards. We discuss the realization of multi-qubit circuits capable of fault-tolerant computation and quantum error correction, by evaluating the time and space resources needed for their implementation. As a result, the maximum density of quantum information is extracted from a circuit including eight logical qubits encoded by the [[7, 1, 3]] Steane code.
Falcon, Ross E; Gomez, T A; Schaeuble, M; Nagayama, T; Montgomery, M H; Winget, D E; Rochau, G A
2016-01-01
As part of our laboratory investigation of the theoretical line profiles used in white dwarf atmosphere models, we extend the electron-density ($n_{\rm e}$) range measured by our experiments to higher densities (up to $n_{\rm e}\sim80\times10^{16}$ cm$^{-3}$). Whereas inferred parameters using the hydrogen-$\beta$ spectral line agree among different line-shape models for $n_{\rm e}\lesssim30\times10^{16}$ cm$^{-3}$, we now see divergence between models. These are densities beyond the range previously benchmarked in the laboratory, meaning theoretical profiles in this regime have not been fully validated. Experimentally exploring these higher densities enables us to test and constrain different line-profile models, as the differences in their relative H-Balmer line shapes are more pronounced at such conditions. These experiments also aid in our study of occupation probabilities because we can measure these from relative line strengths.
Huang, Y X; Zhou, Q; Qiu, X; Shang, X D; Lu, Z M; Liu, Y L
2014-01-01
In this paper, we introduce a new way to estimate the scaling parameter of a self-similar process by considering the maximum of the probability density function (pdf) of its increments. We prove this for $H$-self-similar processes in general and experimentally investigate it for turbulent velocity and temperature increments. We consider a turbulent velocity database from an experimental homogeneous and nearly isotropic turbulent channel flow, and a temperature data set obtained near the sidewall of a Rayleigh-Bénard convection cell, where the turbulent flow is driven by buoyancy. For the former database, it is found that the maximum value of the increment pdf $p_{\max}(\tau)$ is in good agreement with a lognormal distribution. We also obtain a scaling exponent $\alpha\simeq 0.37$, which is consistent with the scaling exponent for the first-order structure function reported in other studies. For the latter one, we obtain a scaling exponent $\alpha_{\theta}\simeq0.33$. This index value is consistent with the Kolmogorov-Ob...
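The idea of reading the scaling exponent off the pdf maximum can be sketched for ordinary Brownian motion, where H = 1/2 is known exactly: increments at lag τ are Gaussian with standard deviation proportional to τ^H, so the pdf peak p_max(τ) = 1/(σ√(2π)) scales as τ^(-H). The sketch below assumes Gaussian increments (exact here) to get p_max from the sample standard deviation, then fits the slope of log p_max against log τ:

```python
import math
import random
import statistics

random.seed(1)
n = 200_000
# Brownian motion as a cumulative sum of i.i.d. standard Gaussian steps
B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, 1.0))

lags = [1, 2, 4, 8, 16, 32]
log_tau, log_pmax = [], []
for tau in lags:
    # non-overlapping increments at lag tau
    incr = [B[i + tau] - B[i] for i in range(0, n - tau, tau)]
    sigma = statistics.stdev(incr)
    p_max = 1.0 / (sigma * math.sqrt(2 * math.pi))  # peak of a Gaussian pdf
    log_tau.append(math.log(tau))
    log_pmax.append(math.log(p_max))

# least-squares slope of log p_max vs log tau; expected value is -H = -0.5
mt = sum(log_tau) / len(log_tau)
mp = sum(log_pmax) / len(log_pmax)
slope = (sum((t - mt) * (p - mp) for t, p in zip(log_tau, log_pmax))
         / sum((t - mt) ** 2 for t in log_tau))
print(-slope)   # alpha estimate, close to H = 0.5
```

For real turbulence data, p_max would instead be estimated directly from the empirical histogram of increments, since the increment pdfs are not Gaussian; the fitted exponents α ≈ 0.37 and 0.33 quoted in the record come from that more general procedure.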
Shifts in the temperature of maximum density (TMD) of ionic liquid aqueous solutions.
Tariq, M; Esperança, J M S S; Soromenho, M R C; Rebelo, L P N; Lopes, J N Canongia
2013-07-14
This work investigates for the first time shifts in the temperature of maximum density (TMD) of water caused by ionic liquid solutes. A vast amount of high-precision volumetric data--more than 6000 equilibrated (static) high-precision density determinations corresponding to ∼90 distinct ionic liquid aqueous solutions of 28 different types of ionic liquid--allowed us to analyze the TMD shifts for different homologous series or similar sets of ionic solutes and explain the overall effects in terms of hydrophobic, electrostatic and hydrogen-bonding contributions. The differences between the observed TMD shifts in the -2 temperatures are discussed taking into account the different types of possible solute-water interactions that can modify the structure of the aqueous phase. The results also reveal different insights concerning the nature of the ions that constitute typical ionic liquids and are consistent with previous results that established hydrophobic and hydrophilic scales for ionic liquid ions based on their specific interactions with water and other probe molecules.
Lussana, C.
2013-04-01
The presented work focuses on the investigation of gridded daily minimum (TN) and maximum (TX) temperature probability density functions (PDFs), with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure uses the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both the time period and the spatial areas with a stable data density; otherwise the elaboration could be influenced by the unsettled station distribution. The PDF used in this study is based on the Gaussian distribution; nevertheless, it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events has been properly defined, the information can be delivered to users straightforwardly, on a local scale and in a concise way, such as: TX extremely cold/hot or TN extremely cold/hot.
Sacanna, S.; Rossi, L.; Wouterse, A.; Philipse, A.P.
2007-01-01
We have measured the random packing density of monodisperse colloidal silica ellipsoids with a well-defined shape, gradually deviating from a sphere up to prolates with aspect ratios of about 5, to find for a colloidal system the first experimental observation of the density maximum (at an as
Prathapa, Siriyara Jagannatha; Mondal, Swastik; van Smaalen, Sander
2013-04-01
Dynamic model densities according to Mondal et al. [(2012), Acta Cryst. A68, 568-581] are presented for independent atom models (IAM), IAMs after high-order refinements (IAM-HO), invariom (INV) models and multipole (MP) models of α-glycine, DL-serine, L-alanine and Ala-Tyr-Ala at T ≃ 20 K. Each dynamic model density is used as a prior in the calculation of electron density according to the maximum entropy method (MEM). We show that at the bond-critical points (BCPs) of covalent C-C and C-N bonds the IAM-HO and INV priors produce reliable MEM density maps, including reliable values for the density and its Laplacian. The agreement between these MEM density maps and dynamic MP density maps is poorer for polar C-O bonds, which is explained by the large spread of values of topological descriptors of C-O bonds in static MP densities. The density and Laplacian at BCPs of hydrogen bonds have similar values in MEM density maps obtained with all four kinds of prior densities. This feature is related to the smaller spatial variation of the densities in these regions, as expressed by small magnitudes of the Laplacians and the densities. It is concluded that the use of the IAM-HO prior instead of the IAM prior leads to improved MEM density maps. This observation shows interesting parallels to MP refinements, where the use of the IAM-HO as an initial model is the accepted procedure for solving MP parameters. A deconvolution of thermal motion and static density that is better than the deconvolution of the IAM appears to be necessary in order to arrive at the best MP models as well as at the best MEM densities.
Rijgersberg, H.; Nierop Groot, M.N.; Tromp, S.O.; Franz, E.
2013-01-01
Within a microbial risk assessment framework, modeling the maximum population density (MPD) of a pathogenic microorganism is important but often not considered. This paper describes a model predicting the MPD of Salmonella on alfalfa as a function of the initial contamination level, the total count
Vries, de R.Y.; Briels, W.J.; Feil, D.; Velde, te G.; Baerends, E.J.
1996-01-01
In 1990, Sakata and Sato applied the maximum entropy method (MEM) to a set of structure factors measured earlier by Saka and Kato with the Pendellösung method. They found the presence of non-nuclear attractors, i.e., maxima in the density between two bonded atoms. We applied the MEM to a limited set of
Rius, Jordi
2006-09-01
The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].
Cooling of Water in a Flask: Convection Currents in a Fluid with a Density Maximum
Velasco, S.; White, J. A.; Roman, F. L.
2010-01-01
The effect of density inversion on the convective flow of water in a spherical glass flask cooled with the help of an ice-water bath is shown. The experiment was carried out by temperature measurements (cooling curves) taken at three different heights along the vertical diameter of the flask. Flows inside the flask are visualized by seeding the…
Kimble, Michael C.; White, Ralph E.
1991-01-01
A mathematical model of a hydrogen/oxygen alkaline fuel cell is presented that can be used to predict the polarization behavior under various power loads. The major limitations to achieving high power densities are indicated and methods to increase the maximum attainable power density are suggested. The alkaline fuel cell model describes the phenomena occurring in the solid, liquid, and gaseous phases of the anode, separator, and cathode regions based on porous electrode theory applied to three phases. Fundamental equations of chemical engineering that describe conservation of mass and charge, species transport, and kinetic phenomena are used to develop the model by treating all phases as a homogeneous continuum.
Ru/Al Multilayers Integrate Maximum Energy Density and Ductility for Reactive Materials.
Woll, K; Bergamaschi, A; Avchachov, K; Djurabekova, F; Gier, S; Pauly, C; Leibenguth, P; Wagner, C; Nordlund, K; Mücklich, F
2016-01-01
Established and already commercialized energetic materials, such as those based on Ni/Al for joining, lack the adequate combination of high energy density and ductile reaction products. To join components, this combination is required for mechanically reliable bonds. In addition to the improvement of existing technologies, expansion into new fields of application can also be anticipated, which triggers the search for improved materials. Here, we present a comprehensive characterization of the key parameters that enables us to classify the Ru/Al system as a new reactive material among other energetic systems. We found that Ru/Al exhibits the unusual integration of high energy density and ductility. For example, we measured reaction front velocities up to 10.9 (±0.33) m s^-1 and peak reaction temperatures of about 2000 °C, indicating the elevated energy density. To our knowledge, such high temperatures have never been reported in experiments for metallic multilayers. In situ experiments show the synthesis of a single-phase B2-RuAl microstructure ensuring improved ductility. Molecular dynamics simulations corroborate the transformation behavior to RuAl. This study fundamentally characterizes the Ru/Al system and demonstrates its enhanced properties, fulfilling the identification requirements of a novel nanoscaled energetic material.
Maximum-Likelihood Sequence Detector for Dynamic Mode High Density Probe Storage
Kumar, Naveen; Ramamoorthy, Aditya; Salapaka, Murti
2009-01-01
There is an ever increasing need for storing data in smaller and smaller form factors, driven by the ubiquitous use and increased demands of consumer electronics. A new approach for achieving areal densities of a few Tb per in² utilizes a cantilever probe with a sharp tip that can be used to deform and assess the topography of the material. The information may be encoded by means of topographic profiles on a polymer medium. The prevalent mode of using the cantilever probe is the static mode, which is known to be harsh on the probe and the media. In this paper, the high quality factor dynamic mode operation, which is known to be less harsh on the media and the probe, is analyzed for probe-based high density data storage purposes. It is demonstrated that an appropriate level of abstraction is possible that obviates the need for an involved physical model. The read operation is modeled as a communication channel which incorporates the inherent system memory due to the intersymbol interference and the cantilever state ...
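Maximum-likelihood sequence detection over a channel with intersymbol-interference memory, as in the title, is classically realized with the Viterbi algorithm. The sketch below shows the idea on a toy two-tap ISI channel with additive Gaussian noise; the tap values, binary alphabet, and noise level are illustrative assumptions, not the paper's cantilever channel model.

```python
import itertools
import numpy as np

def viterbi_mlsd(y, h, alphabet=(0, 1)):
    """Maximum-likelihood sequence detection for y_k = sum_i h[i]*x[k-i] + noise."""
    L = len(h) - 1                                  # channel memory (ISI length)
    states = list(itertools.product(alphabet, repeat=L))
    cost = {s: 0.0 for s in states}                 # unknown prehistory: all states start equal
    back = []
    for yk in y:
        new_cost, step = {}, {}
        for s in states:
            for a in alphabet:
                ext = (a,) + s                      # newest symbol first
                pred = sum(h[i] * ext[i] for i in range(len(h)))
                c = cost[s] + (yk - pred) ** 2      # Gaussian noise -> squared-error metric
                ns = ext[:L]                        # next state
                if ns not in new_cost or c < new_cost[ns]:
                    new_cost[ns], step[ns] = c, (s, a)
        cost = new_cost
        back.append(step)
    s = min(cost, key=cost.get)                     # best terminal state
    out = []
    for step in reversed(back):                     # trace the survivor path backwards
        s, a = step[s]
        out.append(a)
    return out[::-1]

# Toy read-back: two-tap ISI channel plus weak Gaussian noise.
h = [1.0, 0.5]
x = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y = np.convolve(x, h)[: len(x)] + 0.05 * np.random.default_rng(1).standard_normal(len(x))
print(viterbi_mlsd(y, h) == x)
```

The squared-error branch metric is the log-likelihood for Gaussian noise; at this noise level the detector recovers the written sequence exactly.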
Sniegowski, Kristel; Bers, Karolien; Ryckeboer, Jaak; Jaeken, Peter; Spanoghe, Pieter; Springael, Dirk
2012-08-01
Addition of pesticide-primed soil containing adapted pesticide-degrading bacteria to the biofilter matrix of on-farm biopurification systems (BPS), which treat pesticide-contaminated wastewater, has been recommended in order to ensure rapid establishment of a pesticide-degrading microbial community in BPS. However, uncertainties exist about the minimal soil inoculum density needed for successful bioaugmentation of BPS. Therefore, in this study, BPS microcosm experiments were initiated with different linuron-primed soil inoculum densities ranging from 0.5 to 50 vol.%, and the evolution of the linuron mineralization capacity in the microcosms was monitored during feeding with linuron. Successful establishment of a linuron-mineralizing community in the BPS microcosms was achieved with all inoculum densities, including the 0.5 vol.% density, with only minor differences in the time needed to acquire maximum degradation capacity. Moreover, once established, the robustness of the linuron-degrading microbial community towards expected stress situations proved to be independent of the initial inoculum density. This study shows that pesticide-primed soil inoculum densities as low as 0.5 vol.% can be used for bioaugmentation of a BPS matrix and further supports the use of BPS for treatment of pesticide-contaminated wastewater at farmyards.
Electron density distribution and bonding in ZnSe and PbSe using maximum entropy method (MEM)
K S Syed Ali; R Saravanan; S Israel; R K Rajaram
2006-04-01
The study of electronic structure of materials and bonding is an important part of material characterization. The maximum entropy method (MEM) is a powerful tool for deriving accurate electron density distribution in crystalline materials using experimental data. In this paper, the attention is focused on producing electron density distribution of ZnSe and PbSe using JCPDS X-ray powder diffraction data. The covalent/ionic nature of the bonding and the interaction between the atoms are clearly revealed by the MEM maps. The mid bond electron densities between atoms in these systems are found to be 0.544 e/Å3 and 0.261 e/Å3, respectively for ZnSe and PbSe. The bonding in these two systems has been studied using two-dimensional MEM electron density maps on the (100) and (110) planes, and the one-dimensional electron density profiles along [100], [110] and [111] directions. The thermal parameters of the individual atoms have also been reported in this work. The algorithm of the MEM procedure has been presented.
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2012-01-01
We analyze the relationship between maximum cluster mass, M_max, and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H2) and star formation rate (Sigma_SFR) in the flocculent galaxy M33, using published gas data and a catalog of more than 600 young star clusters in its disk. By comparing the radial distributions of gas and most massive cluster masses, we find that M_max is proportional to Sigma_gas^4.7, M_max is proportional to Sigma_H2^1.3, and M_max is proportional to Sigma_SFR^1.0. We rule out that these correlations result from sample size; hence, the change of the maximum cluster mass must be due to physical causes.
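Power-law exponents such as those quoted above are conventionally obtained as the slope of a straight-line fit in log-log space. A minimal sketch on synthetic data (the densities, masses, and exponent below are assumed stand-ins, not the M33 catalog):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-ins: surface densities and maximum cluster masses obeying
# M_max ~ Sigma^a with a = 1.3 plus lognormal scatter.
sigma = rng.uniform(1.0, 50.0, 40)                  # e.g. Msun pc^-2
m_max = 100.0 * sigma**1.3 * rng.lognormal(0.0, 0.1, sigma.size)

# The power-law exponent is the slope of a linear fit to log M vs. log Sigma.
slope, intercept = np.polyfit(np.log10(sigma), np.log10(m_max), 1)
print(round(slope, 2))
```

With modest scatter the recovered slope lands close to the input exponent of 1.3; real catalogs additionally require completeness cuts like the ones the abstracts describe.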
Hwang, J; Carbotte, J P
2014-04-23
We use maximum entropy techniques to extract an electron-phonon density from optical data for the normal state at T = 45 K of MgB2. Limiting the analysis to a range of phonon energies below 110 meV, which is sufficient for capturing all phonon structures, we find a spectral function that is in good agreement with that calculated for the quasi-two-dimensional σ-band. Extending the analysis to higher energies, up to 160 meV, we find no evidence for any additional contributions to the fluctuation spectrum, but find that the data can only be understood if the density of states is taken to decrease with increasing energy.
R Saravanan; K S Syed Ali; S Israel
2008-04-01
The local, average and electronic structure of the semiconducting materials Si and Ge has been studied using multipole, maximum entropy method (MEM) and pair distribution function (PDF) analyses, using X-ray powder data. The covalent nature of bonding and the interaction between the atoms are clearly revealed by the two-dimensional MEM maps plotted on (1 0 0) and (1 1 0) planes and one-dimensional density along [1 0 0], [1 1 0] and [1 1 1] directions. The mid-bond electron densities between the atoms are 0.554 e/Å3 and 0.187 e/Å3 for Si and Ge respectively. In this work, the local structural information has also been obtained by analyzing the atomic pair distribution function. An attempt has been made in the present work to utilize the X-ray powder data sets to refine the structure and electron density distribution using the currently available versatile methods, MEM, multipole analysis and determination of pair distribution function for these two systems.
Mueller, Robert P.; Tiller, Brett L.; Bleich, Matthew D.; Turner, Gerald; Welch, Ian D.
2011-01-31
The Hanford Reach of the Columbia River is the last unimpounded section of the river and contains substrate characteristics (cobble, gravel, sand/silt) suitable for many of the native freshwater mussels known to exist in the Pacific Northwest. Information concerning the native mussel species composition, densities, and distributions in the mainstem of the Columbia River is limited. Under funding from the U.S. Department of Energy Richland Operations Office (DOE-RL), Pacific Northwest National Laboratory conducted an assessment of the near-shore habitat on the Hanford Reach. Surveys conducted in 2004 as part of the Ecological Monitoring and Compliance project documented several species of native mussels inhabiting the near-shore habitat of the Hanford Reach. Findings reported here may be useful to resource biologists, ecologists, and DOE-RL to determine possible negative impacts to native mussels from ongoing near-shore remediation activities associated with Hanford Site cleanup. The objective of this study was to provide an initial assessment of the species composition, densities, and distribution of the freshwater mussels (Margaritiferidae and Unionidae families) that exist in the Hanford Reach. Researchers observed and measured 201 live native mussel specimens. Mussel density estimated from these surveys is summarized in this report with respect to near-shore habitat characteristics including substrate size, substrate embeddedness, relative abundance of aquatic vegetation, and large-scale geomorphic/hydrologic characteristics of the Hanford Reach.
Gonzalez-Lopezlira, Rosa A. [On sabbatical leave from the Centro de Radioastronomia y Astrofisica, UNAM, Campus Morelia, Michoacan, C.P. 58089, Mexico. (Mexico); Pflamm-Altenburg, Jan; Kroupa, Pavel, E-mail: r.gonzalez@crya.unam.mx [Argelander Institut fuer Astronomie, Universitaet Bonn, Auf dem Huegel 71, D-53121 Bonn (Germany)
2013-06-20
We analyze the relationship between maximum cluster mass and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H2), neutral gas (Sigma_HI), and star formation rate (Sigma_SFR) in the grand-design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. By comparing the two-dimensional distribution of cluster masses and gas surface densities, we find for clusters older than 25 Myr that M_3rd is proportional to Sigma_HI^(0.4±0.2), where M_3rd is the median of the five most massive clusters. There is no correlation with Sigma_gas, Sigma_H2, or Sigma_SFR. For clusters younger than 10 Myr, M_3rd is proportional to Sigma_HI^(0.6±0.1) and M_3rd is proportional to Sigma_gas^(0.5±0.2); there is no correlation with either Sigma_H2 or Sigma_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but we have determined M_3rd is proportional to Sigma_gas^(3.8±0.3), M_3rd is proportional to Sigma_H2^(1.2±0.1), and M_3rd is proportional to Sigma_SFR^(0.9±0.1). For the older sample in M51, the lack of tight correlations is probably due to the combination of strong azimuthal variations in the surface densities of gas and star formation rate, and the cluster ages. These two facts mean that neither the azimuthal average of the surface densities at a given radius nor the surface densities at the present-day location of a stellar cluster represent the true surface densities at the place and time of cluster formation. In the case of the younger sample, even if the clusters have not yet
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2013-01-01
We analyze the relationship between maximum cluster mass and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H_2), neutral gas (Sigma_HI) and star formation rate (Sigma_SFR) in the grand-design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. We find for clusters older than 25 Myr that M_3rd, the median of the 5 most massive clusters, is proportional to Sigma_HI^0.4. There is no correlation with Sigma_gas, Sigma_H2, or Sigma_SFR. For clusters younger than 10 Myr, M_3rd is proportional to Sigma_HI^0.6 and M_3rd is proportional to Sigma_gas^0.5; there is no correlation with either Sigma_H_2 or Sigma_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but M_3rd is proportional to Sigma_g...
Ke, Xinyou; Alexander, J. Iwan D.; Prahl, Joseph M.; Savinell, Robert F.
2014-12-01
Flow batteries show promise for very large-scale stationary energy storage such as needed for the grid and renewable energy implementation. In recent years, researchers and developers of redox flow batteries (RFBs) have found that electrode and flow field designs of PEM fuel cell (PEMFC) technology can increase the power density and consequently push down the cost of flow battery stacks. In this paper we present a macroscopic model of a typical PEMFC-like RFB electrode-flow field design. The model is a layered system comprised of a single passage of a serpentine flow channel and a parallel underlying porous electrode (or porous layer). The effects of the inlet volumetric flow rate, permeability of the porous layer, thickness of the porous layer and thickness of the flow channel on the flow penetration into the porous layer are investigated. The maximum current density corresponding to stoichiometry is estimated to be 377 mA cm-2 and 724 mA cm-2, which compares favorably with experiments of ∼400 mA cm-2 and ∼750 mA cm-2, for a single layer and three layers of the carbon fiber paper, respectively.
Cecilia W S Chan
Retinal neovascularization is a critical component in the pathogenesis of common ocular disorders that cause blindness, and treatment options are limited. We evaluated the therapeutic effect of a DNA enzyme targeting c-jun mRNA in mice with pre-existing retinal neovascularization. A single injection of Dz13 in a lipid formulation containing N-[1-(2,3-dioleoyloxy)propyl]-N,N,N-trimethylammonium methylsulfate and 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine inhibited c-Jun expression and reduced retinal microvascular density. The DNAzyme inhibited retinal microvascular density as effectively as VEGF-A antibodies. Comparative microarray and gene expression analysis determined that Dz13 suppressed not only c-jun but a range of growth factors and matrix-degrading enzymes. Dz13 in this formulation inhibited microvascular endothelial cell proliferation, migration and tubule formation in vitro. Moreover, animals treated with Dz13 sensed the top of the cage in a modified forepaw reach model, unlike mice given a DNAzyme with scrambled RNA-binding arms that did not affect c-Jun expression. These findings demonstrate reduction of microvascular density and improvement in forepaw reach in mice administered catalytic DNA.
Francisco Cervantes-Navarro
2013-01-01
The Minnesota family of density functionals (M05, M05-2X, M06, M06L, M06-2X, and M06-HF) was evaluated for the calculation of the UV-Vis spectra of the indigo molecule in solvents of different polarities using time-dependent density functional theory (TD-DFT) and the polarized continuum model (PCM). The maximum absorption wavelengths predicted by each functional were compared with the known experimental results.
Johnston, James D. [University of Saskatchewan, Department of Mechanical Engineering, Saskatoon, SK (Canada); University of British Columbia, Department of Mechanical Engineering, Vancouver, BC (Canada); Kontulainen, Saija A. [University of Saskatchewan, College of Kinesiology, Saskatoon, SK (Canada); Masri, Bassam A.; Wilson, David R. [University of British Columbia, Department of Orthopaedics, Vancouver, BC (Canada)
2010-09-15
The objective was to identify subchondral bone density differences between normal and osteoarthritic (OA) proximal tibiae using computed tomography osteoabsorptiometry (CT-OAM) and computed tomography topographic mapping of subchondral density (CT-TOMASD). Sixteen intact cadaver knees from ten donors (8 male:2 female; mean age: 77.8, SD: 7.4 years) were categorized as normal (n = 10) or OA (n = 6) based upon CT reconstructions. CT-OAM assessed maximum subchondral bone mineral density (BMD). CT-TOMASD assessed average subchondral BMD across three layers (0-2.5, 2.5-5 and 5-10 mm) measured in relation to depth from the subchondral surface. Regional analyses of CT-OAM and CT-TOMASD included: medial BMD, lateral BMD, and average BMD of a 10-mm diameter area that searched each medial and lateral plateau for the highest "focal" density present within each knee. Compared with normal knees, both CT-OAM and CT-TOMASD demonstrated an average of 17% greater whole medial compartment density in OA knees (p < 0.016). CT-OAM did not distinguish focal density differences between OA and normal knees (p > 0.05). CT-TOMASD focal region analyses revealed an average of 24% greater density in the 0- to 2.5-mm layer (p = 0.003) and 36% greater density in the 2.5- to 5-mm layer (p = 0.034) in OA knees. Both CT-OAM and CT-TOMASD identified higher medial compartment density in OA tibiae compared with normal tibiae. In addition, CT-TOMASD indicated greater focal density differences between normal and OA knees with increased depth from the subchondral surface. Depth-specific density analyses may help identify and quantify small changes in subchondral BMD associated with OA disease onset and progression.
Monaco, James Peter; Madabhushi, Anant
2011-07-01
The ability of classification systems to adjust their performance (sensitivity/specificity) is essential for tasks in which certain errors are more significant than others. For example, mislabeling cancerous lesions as benign is typically more detrimental than mislabeling benign lesions as cancerous. Unfortunately, methods for modifying the performance of Markov random field (MRF) based classifiers are noticeably absent from the literature, and thus most such systems restrict their performance to a single, static operating point (a paired sensitivity/specificity). To address this deficiency we present weighted maximum posterior marginals (WMPM) estimation, an extension of maximum posterior marginals (MPM) estimation. Whereas the MPM cost function penalizes each error equally, the WMPM cost function allows misclassifications associated with certain classes to be weighted more heavily than others. This creates a preference for specific classes, and consequently a means for adjusting classifier performance. Realizing WMPM estimation (like MPM estimation) requires estimates of the posterior marginal distributions. The most prevalent means for estimating these, proposed by Marroquin, utilizes a Markov chain Monte Carlo (MCMC) method. Though Marroquin's method (M-MCMC) yields estimates that are sufficiently accurate for MPM estimation, they are inadequate for WMPM. To more accurately estimate the posterior marginals, we present an equally simple but more effective extension of the MCMC method (E-MCMC). Assuming an identical number of iterations, E-MCMC as compared to M-MCMC yields estimates with higher fidelity, thereby 1) allowing a far greater number and diversity of operating points and 2) improving overall classifier performance. To illustrate the utility of WMPM and compare the efficacies of M-MCMC and E-MCMC, we integrate them into our MRF-based classification system for detecting cancerous glands in (whole-mount or quarter) histological sections of the prostate.
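Once posterior marginals are in hand, the weighted-MPM decision itself reduces to an argmax over weighted marginals. A minimal sketch of that final step (the marginal values and class weights below are hypothetical; in the paper the marginals come from MCMC estimation):

```python
import numpy as np

# Posterior marginals for three sites over classes {0: benign, 1: cancer}
# (hypothetical values for illustration).
marginals = np.array([[0.70, 0.30],
                      [0.55, 0.45],
                      [0.20, 0.80]])

# Plain MPM: every error costs the same, so pick the largest marginal.
mpm = marginals.argmax(axis=1)

# Weighted MPM: scale the marginals by class weights before the argmax,
# e.g. weighting "cancer" 2x trades specificity for sensitivity.
w = np.array([1.0, 2.0])
wmpm = (marginals * w).argmax(axis=1)

print(mpm.tolist(), wmpm.tolist())
```

Sweeping the weight vector traces out a family of operating points, which is exactly the performance-adjustment capability the abstract describes.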
Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.
2016-08-01
We study the average energy (or particle) density of waves inside disordered 1D multiply-scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample, by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.
Livingston, Richard A.; Jin, Shuang
2005-05-01
Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.
Barclay, R. S.; Wing, S. L.
2013-12-01
The Paleocene-Eocene Thermal Maximum (PETM) was a geologically brief interval of intense global warming 56 million years ago. It is arguably the best geological analog for a worst-case scenario of anthropogenic carbon emissions. The PETM is marked by a ~4-6‰ negative carbon isotope excursion (CIE) and extensive marine carbonate dissolution, which together are powerful evidence for a massive addition of carbon to the oceans and atmosphere. In spite of broad agreement that the PETM reflects a large carbon cycle perturbation, atmospheric concentrations of CO2 (pCO2) during the event are not well constrained. The goal of this study is to produce a high resolution reconstruction of pCO2 using stomatal frequency proxies (both stomatal index and stomatal density) before, during, and after the PETM. These proxies rely upon a genetically controlled mechanism whereby plants decrease the proportion of gas-exchange pores (stomata) in response to increased pCO2. Terrestrial sections in the Bighorn Basin, Wyoming, contain macrofossil plants with cuticle immediately bracketing the PETM, as well as dispersed plant cuticle from within the body of the CIE. These fossils allow for the first stomatal-based reconstruction of pCO2 near the Paleocene-Eocene boundary; we also use them to determine the relative timing of pCO2 change in relation to the CIE that defines the PETM. Preliminary results come from macrofossil specimens of Ginkgo adiantoides, collected from an ~200ka interval prior to the onset of the CIE (~230-30ka before), and just after the 'recovery interval' of the CIE. Stomatal index values decreased by 37% within an ~70ka time interval at least 100ka prior to the onset of the CIE. The decrease in stomatal index is interpreted as a significant increase in pCO2, and has a magnitude equivalent to the entire range of stomatal index adjustment observed in modern Ginkgo biloba during the anthropogenic CO2 rise during the last 150 years. The inferred CO2 increase prior to the
Blandamer, MJ; Buurma, NJ; Engberts, JBFN; Reis, JCR; Buurma, Niklaas J.; Reis, João C.R.
2003-01-01
At temperatures above and below the temperature of maximum density, TMD, for water at ambient pressure, pairs of temperatures exist at which the molar volumes of water are equal. First-order rate constants for the pH-independent hydrolysis of 1-benzoyl-1,2,4-triazole in aqueous solution at pairs of
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: how concordant is this distribution with the observed data? (3) Uncertainty: how concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions, called "maximum fidelity", is presented. Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
R Saravanan
2006-06-01
A study of the electronic structure of the three sulphides, SrS, BaS and PuS has been carried out in this work, using the powder X-ray intensity data from the JCPDS powder diffraction data base. The statistical approach, MEM (maximum entropy method), is used for the analysis of the data for the electron density distribution in these materials, and an attempt has been made to understand the bonding between the metal atom and the sulphur atom. The mid-bond electron density is found to be maximum for PuS among these three sulphides, being 0.584 e/Å3 at 2.397 Å. SrS is found to have the lowest electron density at the mid-bond (0.003 e/Å3) at 2.118 Å from the origin, leaving it more ionic than the other two sulphides studied in this work. The two-dimensional electron density maps on (1 0 0) and (1 1 0) planes and the one-dimensional profiles along the bonding direction [1 1 1] are used for these analyses. The overall and individual Debye-Waller factors of atoms in these systems have also been studied and analyzed. The refinements of the observed X-ray data were carried out using standard software and also a routine written by the author.
U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
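The classic-versus-generalized distinction can be sketched numerically. Below is a minimal illustration (not the paper's calculation): classic MaxEnt solves for the exponential-family distribution over die outcomes that matches an observed mean exactly, and Monte Carlo sampling of a Gaussian constraint value pushes that uncertainty through the MaxEnt map. The outcome set, the 4.5 ± 0.1 constraint, and the function names are illustrative assumptions.

```python
import math
import random

def maxent_die(mean, outcomes=range(1, 7), tol=1e-10):
    """Classic MaxEnt under a mean constraint: p_k proportional to
    exp(-lam * k), with lam found by bisection so the distribution
    reproduces the given mean."""
    def mean_of(lam):
        w = [math.exp(-lam * k) for k in outcomes]
        z = sum(w)
        return sum(k * wk for k, wk in zip(outcomes, w)) / z

    lo, hi = -50.0, 50.0  # mean_of is decreasing in lam on this bracket
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_of(mid) > mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * k) for k in outcomes]
    z = sum(w)
    return [wk / z for wk in w]

# Generalized step: the mean is uncertain, observed as 4.5 +/- 0.1
# (Gaussian). Push that uncertainty through the MaxEnt map by sampling.
random.seed(0)
samples = [maxent_die(random.gauss(4.5, 0.1)) for _ in range(1000)]
p6 = [s[5] for s in samples]
print(sum(p6) / len(p6))  # the spread of p(6) reflects constraint uncertainty
```

The same bisection-plus-sampling pattern extends to any single moment constraint; multiple constraints require a multidimensional root find in the Lagrange multipliers.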
2016-09-01
Popular culture reflects both the interests of and the issues affecting the general public. As concerns regarding climate change and its impacts grow, is it permeating into popular culture and reaching that global audience?
Teratology testing under REACH.
Barton, Steve
2013-01-01
REACH guidelines may require teratology testing for new and existing chemicals. This chapter discusses procedures to assess the need for teratology testing and the conduct and interpretation of teratology tests where required.
Reaching affects saccade trajectories.
Tipper, S P; Howard, L A; Paul, M A
2001-01-01
The pre-motor theory suggests that, when attention is oriented to a location, the motor systems that are involved in achieving current behavioural goals are activated. For example, when a task requires accurate reaching, attention to a location activates the motor circuits controlling saccades and manual reaches. These actions involve separate neural systems for the control of eye and hand, but we believe that the selection processes acting on neural population codes within these systems are similar and can affect each other. The attentional effect can be revealed in the subsequent movement. The present study shows that the path the eye takes as it saccades to a target is affected by whether a reach to the target is also produced. This effect is interpreted as the influence of a hand-centred frame used in reaching on the spatial frame of reference required for the saccade.
刘忠宝; 王士同
2011-01-01
To circumvent the deficiencies of the Support Vector Machine (SVM) and its improved algorithms, this paper presents a Maximum-margin Learning Machine based on the Entropy concept and Kernel density estimation (MLMEK). In MLMEK, the data distribution of the samples is represented by kernel density estimation, and classification uncertainty is represented by entropy. MLMEK takes both the boundary data between classes and the interior data within each class into account, so it performs better than traditional SVM. MLMEK can handle both two-class and one-class pattern classification. Experimental results on UCI data sets verify that the algorithm proposed in this paper is effective and competitive.
Suenimeire Vieira
2012-12-01
INTRODUCTION: One of the benefits provided by regular physical exercise appears to be improved autonomic nervous system modulation of the heart. However, the role of physical activity as a determinant of heart rate variability (HRV) is not well established. Therefore, the aim of this study was to verify whether there is a correlation between resting heart rate and the maximum workload reached in an exercise test and HRV indices in elderly men. METHODS: Eighteen elderly men aged between 60 and 70 years were studied. The following assessments were performed: (a) a maximal exercise test on a cycle ergometer using the Balke protocol to evaluate aerobic capacity; (b) recording of heart rate (HR) and R-R intervals for 15 minutes at rest in the supine position. The data were then analyzed in the time domain, calculating the RMSSD index, and in the frequency domain, calculating the low-frequency (LF) and high-frequency (HF) indices and the LF/HF ratio. Pearson's correlation test was applied to verify whether there is an association between the maximum workload reached in the exercise test and the HRV indices (p 0.05). CONCLUSION: The temporal and spectral heart rate variability indices studied are not indicators of the level of aerobic capacity of elderly men evaluated on a cycle ergometer.
Terry, Dorothy Givens
2012-01-01
Dr. Mae Jemison is the world's first woman astronaut of color who continues to reach for the stars. Jemison was recently successful in leading a team that has secured a $500,000 federal grant to make interstellar space travel a reality. The Dorothy Jemison Foundation for Excellence (named after Jemison's mother) was selected in June by the Defense…
REACH. Air Conditioning Units.
Garrison, Joe; And Others
As a part of the REACH (Refrigeration, Electro-Mechanical, Air-Conditioning, Heating) electromechanical cluster, this student manual contains individualized instructional units in the area of air conditioning. The instructional units focus on air conditioning fundamentals, window air conditioning, system and installation, troubleshooting and…
Reaching into Pictorial Spaces
Volcic, Robert; Vishwanath, Dhanraj; Domini, Fulvio
2014-02-01
While binocular viewing of 2D pictures generates an impression of 3D objects and space, viewing a picture monocularly through an aperture produces a more compelling impression of depth and the feeling that the objects are "out there", almost touchable. Here, we asked observers to actually reach into pictorial space under both binocular- and monocular-aperture viewing. Images of natural scenes were presented at different physical distances via a mirror-system and their retinal size was kept constant. Targets that observers had to reach for in physical space were marked on the image plane, but at different pictorial depths. We measured the 3D position of the index finger at the end of each reach-to-point movement. Observers found the task intuitive. Reaching responses varied as a function of both pictorial depth and physical distance. Under binocular viewing, responses were mainly modulated by the different physical distances. Instead, under monocular viewing, responses were modulated by the different pictorial depths. Importantly, individual variations over time were minor, that is, observers conformed to a consistent pictorial space. Monocular viewing of 2D pictures thus produces a compelling experience of an immersive space and tangible solid objects that can be easily explored through motor actions.
Snow, Rufus; And Others
As a part of the REACH (Refrigeration, Electro-Mechanical, Air-Conditioning, Heating) electromechanical cluster, this student manual contains individualized instructional units in the area of refrigeration. The instructional units focus on refrigeration fundamentals, tubing and pipe, refrigerants, troubleshooting, window air conditioning, and…
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
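The link between coding and entropy that drives the Code Length Game can be illustrated in a few lines. The sketch below is a generic illustration of the standard identity, not code from the paper: when nature plays distribution p and the observer builds an ideal code for q, the expected code length is H(p) + KL(p||q), so coding for the true distribution is the observer's equilibrium strategy. The example distributions are made up.

```python
import math

def entropy(p):
    """Shannon entropy H(p) in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def expected_code_length(p, q):
    """Mean ideal code length (bits) when nature plays p and the
    observer codes for q: E_p[-log2 q(x)] = H(p) + KL(p || q)."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.125, 0.125]
q_bad = [0.25, 0.25, 0.25, 0.25]

print(entropy(p))                      # 1.75 bits
print(expected_code_length(p, p))      # 1.75 -- coding for the true p is optimal
print(expected_code_length(p, q_bad))  # 2.0  -- any other code pays KL(p||q) extra
```

Minimizing the worst-case excess code length over the set of distributions consistent with the constraints is what translates the observer's optimal strategy into the maximum entropy distribution.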
Solar Hydrogen Reaching Maturity
Rongé Jan
2015-09-01
Increasingly vast research efforts are devoted to the development of materials and processes for solar hydrogen production by light-driven dissociation of water into oxygen and hydrogen. Storage of solar energy in chemical bonds resolves the issues associated with the intermittent nature of sunlight, by decoupling energy generation and consumption. This paper investigates recent advances and prospects in solar hydrogen processes that are reaching market readiness. Future energy scenarios involving solar hydrogen are proposed and a case is made for systems producing hydrogen from water vapor present in air, supported by advanced modeling.
Geraldo Tadeu dos Santos
2001-05-01
The effect of different sward heights (24, 26, 43, 45, 52, 62, 73 and 78 cm) on the forage quality and profile structure of Tanzania grass, Panicum maximum Jacq. cv. Tanzania-1 (Poaceae), was evaluated. Nelore steers were used for grazing at variable stocking rates with the put-and-take technique. The experimental design was completely randomized, with two replications. Total dry matter bulk density (TDMD) increased during the experimental period, while the leaf blade dry matter bulk density (LDMD) was influenced neither by period nor by sward height. The upper layers of the pasture had the best quality, with higher LDMD and CP levels. The lower layers had the worst quality, due to higher TDMD and lower LDMD, which caused higher ADF and NDF levels and lower CP levels. The leaf blade mineral content was higher than that of the stems and remained unaltered across the different layers of the pasture.
唐经文; 王林豪; 高诚; 梁鑫俐; 李佳
2009-01-01
This paper presents an experimental study of the natural convection heat transfer characteristics of cold water near its maximum density in a horizontal annulus with a fixed inner radius r_i = 14 mm and widths l = 6-18 mm. The temperature at the outer wall is maintained at 0 °C, and the temperature differences between the inner and outer walls range from 2 to 24 °C. The results show that the mean surface heat transfer coefficient at the inner wall increases with increasing annulus width. When the temperature difference is below 4 °C or above 8 °C, the heat transfer coefficient increases with increasing temperature difference; between 4 °C and 8 °C, it decreases with increasing temperature difference. A heat transfer correlation for the inner wall is obtained using stepwise linear regression.
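As a rough illustration of the regression step described above (not the study's actual data or correlation), a power-law heat transfer correlation of the form Nu = C·Ra^n can be fitted by ordinary linear regression in log-log space. The constants 0.1 and 0.3 and the Rayleigh-number values below are synthetic.

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = C * x**n via linear regression in
    log-log space: log y = log C + n * log x."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    m = len(x)
    mx = sum(lx) / m
    my = sum(ly) / m
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    intercept = my - slope * mx
    return math.exp(intercept), slope  # (C, n)

# Synthetic noiseless data following Nu = 0.1 * Ra**0.3
# (illustrative values only, not the correlation from the study).
Ra = [1e4, 1e5, 1e6, 1e7]
Nu = [0.1 * r ** 0.3 for r in Ra]
C, n = fit_power_law(Ra, Nu)
print(C, n)  # recovers ~0.1 and ~0.3
```

With measured data the same fit applies unchanged; the residual scatter then indicates how well a single power law captures the regime (here, the regime change around a 4-8 °C temperature difference would show up as systematic residuals).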
Westar reaches critical crossroads
1992-06-01
Westar Mining Ltd. has applied for court protection until September 30, 1992 to gain time to draw up a final reorganization plan. The Companies' Creditors Arrangement Act is a federal statute that allows a business to restructure financially without having to declare bankruptcy. Normal trade terms with suppliers are usually maintained during this period. The company is struggling under the effects of falling coal prices, a high Canadian dollar and a high debt burden. Changes in work practices at the company's Balmer mine are a major part of the restructuring. An agreement must be reached with the United Mineworkers of America and other stakeholders or the Balmer mine will close permanently. Employees have been locked out since May 1, 1992 when union members rejected the company's final offer.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Reaching Fleming's discrimination bound
Gruebl, Gebhard
2012-01-01
Any rule for identifying a quantum system's state within a set of two non-orthogonal pure states by a single measurement is flawed. It has a non-zero probability of either yielding the wrong result or leaving the query undecided. This also holds if the measurement of an observable $A$ is repeated on a finite sample of $n$ state copies. We formulate a state identification rule for such a sample. This rule's probability of giving the wrong result turns out to be bounded from above by $1/n\delta_{A}^{2}$ with $\delta_{A}=|\langle A\rangle_{1}-\langle A\rangle_{2}|/(\Delta_{1}A+\Delta_{2}A)$. A larger $\delta_{A}$ results in a smaller upper bound. Yet, according to Fleming, $\delta_{A}$ cannot exceed $\tan\theta$ with $\theta\in(0,\pi/2)$ being the angle between the pure states under consideration. We demonstrate that there exist observables $A$ which reach the bound $\tan\theta$ and we determine all of them.
2001-01-01
The creation of the world's largest sandstone cavern, not a small feat! At the bottom, cave-in preventing steel mesh can be seen clinging to the top of the tunnel. The digging of UX-15, the cavern that will house ATLAS, reached the upper ceiling of LEP on October 10th. The breakthrough which took place nearly 100 metres underground occurred precisely on schedule and exactly as planned. But much caution was taken beforehand to make the LEP breakthrough clean and safe. To prevent the possibility of cave-ins in the side tunnels that will eventually be attached to the completed UX-15 cavern, reinforcing steel mesh was fixed into the walls with bolts. Obviously no people were allowed in the LEP tunnels below UX-15 as the breakthrough occurred. The area was completely evacuated and fences were put into place to keep all personnel out. However, while personnel were being kept out of the tunnels below, this has been anything but the case for the work taking place up above. With the creation of the world's largest...
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
Kwon, Sun Il; Ferri, Alessandro; Gola, Alberto; Berg, Eric; Piemonte, Claudio; Cherry, Simon R; Roncali, Emilie
2016-10-01
Current research in the field of positron emission tomography (PET) focuses on improving the sensitivity of the scanner with thicker detectors, extended axial field-of-view, and time-of-flight (TOF) capability. These create the need for depth-of-interaction (DOI) encoding to correct parallax errors. We have proposed a method to encode DOI using phosphor-coated crystals. Our initial work using photomultiplier tubes (PMTs) demonstrated the possibilities of the proposed method, however, a major limitation of PMTs for this application is poor quantum efficiency in yellow light, corresponding to the wavelengths of the converted light by the phosphor coating. In contrast, the red-green-blue-high-density (RGB-HD) silicon photomultipliers (SiPMs) have a high photon detection efficiency across the visible spectrum. Excellent coincidence resolving time (CRT; [Formula: see text]) was obtained by coupling RGB-HD SiPMs and [Formula: see text] lutetium fine silicate crystals coated on a third of one of their lateral sides. Events were classified in three DOI bins ([Formula: see text] width) with an average sensitivity of 83.1%. A CRT of [Formula: see text] combined with robust DOI encoding is a marked improvement in the phosphor-coated approach that we pioneered. For the first time, we read out these crystals with SiPMs and clearly demonstrated the potential of the RGB-HD SiPMs for this TOF-DOI PET detector.
吴普; 王丽丽; 邵雪梅
2008-01-01
Having analyzed the tree-ring width and maximum latewood density of Pinus densata from west Sichuan, we obtained different climate information from the tree-ring width and maximum latewood density chronologies. The growth of tree-ring width responded principally to precipitation in the current May, which might be influenced by the activity of the southwest monsoon, whereas the maximum latewood density reflected summer temperature (June-September). Based on this correlation, a transfer function was used to reconstruct summer temperature for the study area. The explained variance of the reconstruction is 51% (F = 52.099, p < 0.0001). In the reconstructed series, the climate was relatively cold before the 1930s and relatively warm from 1930 to 1960; this trend is in accordance with the cold and warm periods of the last 100 years in west Sichuan. Compared with Chengdu, the warming break point in west Sichuan occurs 3 years earlier, indicating that the Tibetan Plateau was more sensitive to temperature change. There was an evident summer warming signal after 1983. Although the 100-year running average of summer temperature reached its maximum in the 1990s, the running average of the early 1990s was below the average line (cold summers), while summer drought occurred in the late 1990s.
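The transfer-function step can be sketched as an ordinary least-squares calibration of temperature against the density index, with the explained variance reported as r². The code below is a generic illustration with made-up numbers, not the study's data or its 51% result.

```python
def calibrate(proxy, temp):
    """Ordinary least-squares transfer function T = a + b * proxy,
    returning the coefficients and the explained variance r^2."""
    n = len(proxy)
    mx = sum(proxy) / n
    my = sum(temp) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(proxy, temp))
    sxx = sum((x - mx) ** 2 for x in proxy)
    syy = sum((y - my) ** 2 for y in temp)
    b = sxy / sxx
    a = my - b * mx
    r2 = sxy * sxy / (sxx * syy)
    return a, b, r2

# Illustrative numbers only: latewood density indices vs observed
# June-September temperatures over a calibration period.
density = [0.91, 0.95, 1.02, 0.98, 1.10, 1.05]
temps   = [11.2, 11.7, 12.3, 11.9, 13.1, 12.6]
a, b, r2 = calibrate(density, temps)
reconstructed = [a + b * d for d in density]  # apply to the full proxy series
print(round(r2, 3))
```

In practice the fitted function is applied to the pre-instrumental part of the proxy chronology to extend the temperature record backward, and the calibration r² (51% in the study) bounds how much variance the reconstruction can explain.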
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
JIN Zhi; SU Yong-bo; CHENG Wei; LIU Xin-Yu; XU An-Huai; QI Ming
2008-01-01
A four-finger InGaAs/InP double heterojunction bipolar transistor is designed and fabricated successfully using planarization technology. The emitter area of each finger is 1 × 15 μm². The breakdown voltage is more than 7 V, and the maximum collector current exceeds 100 mA. The current gain cutoff frequency is as high as 155 GHz and the maximum oscillation frequency reaches 253 GHz. The heterojunction bipolar transistor can deliver more than 70 mW of class-A maximum output power at W band, and the maximum power density can be as high as 1.2 W/mm.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CCs used in the design of an SFCL can be determined.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Piao, Daqing; Holyoak, G Reed; Patel, Sanjay
2016-01-01
We demonstrate a laparoscopic applicator probe and a method thereof for real-time en-face topographic mapping of near-surface heterogeneity for potential use in intraoperative margin assessment during minimally invasive oncological procedures. The probe fits in a 12 mm port and houses at its maximum 128 copper-coated 750 μm fibers that form radially alternating illumination (70 fibers) and detection (58 fibers) channels. By simultaneously illuminating the 70 source channels of the probe that is in contact with a scattering medium and concurrently measuring the light diffusely propagated to the 58 detector channels, the presence of near-surface optical heterogeneities can be resolved in an en-face 9.5 mm field-of-view in real-time. Visualization of a subsurface margin of strong attenuation contrast at a depth up to 3 mm is demonstrated at one wavelength at a frame rate of 1.25 Hz.
Greene, Nicholas
2012-01-01
ABOUT THE BOOK Halo Reach is the latest installment, and goes back to Halo's roots in more ways than one. Set around one of the most frequently referenced events in the Haloverse-The Fall of Reach-Reach puts you in the shoes of Noble 6, an unnamed Spartan, fighting a doomed battle to save the planet. Dual-wielding's gone, health is back, and equipment now takes the form of different "classes," with different weapon loadouts and special abilities (such as sprinting, cloaking, or flight). If you're reading this guide, you're either new to the Halo franchise and looking to get a leg up on all
Erich Regener and the maximum in ionisation of the atmosphere
Carlson, P
2014-01-01
In the 1930s the German physicist Erich Regener (1881-1955) did important work on the measurement of the rate of production of ionisation deep underwater and in the atmosphere. He discovered, along with one of his students, Georg Pfotzer, the altitude at which the production of ionisation in the atmosphere reaches a maximum, often, but misleadingly, called the Pfotzer maximum. Regener was one of the first to estimate the energy density of cosmic rays, an estimate that was used by Baade and Zwicky to bolster their postulate that supernovae might be their source. Yet Regener's name is less recognised by present-day cosmic ray physicists than it should be largely because in 1937 he was forced to take early retirement by the National Socialists as his wife had Jewish ancestors. In this paper we briefly review his work on cosmic rays and recommend an alternative naming of the ionisation maximum. The influence that Regener had on the field through his son, his son-in-law, his grandsons and his students and through...
Lunar Probe Reaches Deep Space
2011-01-01
China's second lunar probe, Chang'e-2, has reached an orbit 1.5 million kilometers from Earth for an additional mission of deep space exploration, the State Administration for Science, Technology and Industry for National Defense announced.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Astronomical reach of fundamental physics
Burrows, Adam S.; Ostriker, Jeremiah P.
2014-02-01
Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples, we expose the deep interrelationships imposed by nature between disparate realms of the universe and the amazing consequences of the unifying character of physical law.
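The zeroth-order scaling in this abstract can be checked numerically. The following is a back-of-the-envelope sketch (my own illustration, not the authors' derivation), combining the Planck mass with the proton mass to recover the Chandrasekhar mass scale M ~ m_Pl^3 / m_p^2:

```python
import math

# Order-of-magnitude check: the Chandrasekhar mass scale expressed through
# the Planck mass m_Pl = sqrt(hbar*c/G) and the proton mass m_p.
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s
G = 6.67430e-11         # m^3 kg^-1 s^-2
m_p = 1.67262192e-27    # kg
M_sun = 1.989e30        # kg

m_pl = math.sqrt(hbar * c / G)          # Planck mass, ~2.18e-8 kg
M_ch_scale = m_pl**3 / m_p**2           # Chandrasekhar mass scale
print(f"Planck mass: {m_pl:.3e} kg")
print(f"Chandrasekhar scale: {M_ch_scale:.3e} kg = {M_ch_scale / M_sun:.2f} M_sun")
```

The result lands within a factor of order unity of the actual Chandrasekhar mass (~1.4 solar masses), as expected for a dimensional estimate that drops the dimensionless prefactors the paper discusses.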
J. de Haan; W.P. Knulst
2000-01-01
Original title: Het bereik van de kunsten. The reach of the arts (Het bereik van de kunsten) is the fourth study in a series that periodically analyses the status of cultural participation, reading and the use of other media. The series, Support for culture (Het culturele draagvlak), is sponsored by th
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
Freudenburg, William R.
2006-01-01
Rather than seeking ivory-tower isolation, members of the Rural Sociological Society have always been distinguished by a willingness to work with specialists from a broad range of disciplines, and to work on some of the world's most challenging problems. What is less commonly recognized is that the willingness to reach beyond disciplinary…
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Sampling hard to reach populations.
Faugier, J; Sargeant, M
1997-10-01
Studies on 'hidden populations', such as homeless people, prostitutes and drug addicts, raise a number of specific methodological questions usually absent from research involving known populations and less sensitive subjects. This paper examines the advantages and limitations of nonrandom methods of data collection such as snowball sampling. It reviews the currently available literature on sampling hard to reach populations and highlights the dearth of material currently available on this subject. The paper also assesses the potential for using these methods in nursing research. The sampling methodology used by Faugier (1996) in her study of prostitutes, HIV and drugs is used as a current example within this context.
How to reach library users who cannot reach libraries?
Dragana Ljuić
2002-01-01
The article discusses ways of bringing library activities closer to individuals or groups of users who have difficulty visiting, or cannot visit, the library themselves. The author presents the services offered by the Maribor Public Library and discusses how one of the basic human rights – the right of access to cultural goods, knowledge and information – is exercised also through library activities. By enabling access to library material and information, public libraries help to fulfil basic human rights and thus raise the quality of living in a social environment. The following forms of library activities are presented in the article: »distance library« – borrowing books at home, in hospital, a bibliobus station for disabled users; »mobile collections« in institutions where users, due to their age or illness, have difficulty accessing or cannot reach library materials and information by themselves.
Reach capacity in older women submitted to flexibility training
Elciana de Paiva Lima Vieira
2015-11-01
The aim of this study was to analyze the effect of flexibility training on maximum range of motion and reach capacity in older women practicing aquatic exercises in the Prev-Quedas project. Participants were divided into two groups: intervention (IG, n = 25), which underwent the flexibility training program, and control (CG, n = 21), in which the older women participated only in aquatic exercises. Flexibility training lasted three months with a weekly frequency of two days, consisting of passive static stretching exercises involving the trunk and lower limbs performed after the aquatic exercises. Assessment consisted of the functional reach, lateral reach and goniometric tests. Statistical analysis used the Shapiro-Wilk normality test, ANCOVA, and Pearson and Spearman correlations. IG showed significant gains in maximum range of motion for the right hip joint (p = 0.0025); however, the same result was not observed in the other joints assessed, and there was no improvement in functional or lateral reach capacity in either group. Significant correlations between reach capacity and range of motion in the trunk, hip and ankle were not observed. Therefore, flexibility training associated with the practice of aquatic exercises increased maximum range of motion only for the hip joint, without improving reach capacity. The practice of aquatic exercises alone did not show significant results.
Abnormal changes in the density of thermal neutron flux in biocenoses near the earth surface.
Plotnikova, N V; Smirnov, A N; Kolesnikov, M V; Semenov, D S; Frolov, V A; Lapshin, V B; Syroeshkin, A V
2007-04-01
We revealed an increase in the density of thermal neutron flux in forest biocenoses, which was not associated with astrogeophysical events. The maximum spike of this parameter in the biocenosis reached 10,000 n/(s × m²). The diurnal pattern of the density of thermal neutron flux depended only on the type of biocenosis. The effects of biomodulation of corpuscular radiation for balneology are discussed.
Reach Envelope of Human Extremities
YANG Jingzhou(杨景周); ZHANG Yunqing(张云清); CHEN Liping(陈立平); ABDEL-MALEK Karim
2004-01-01
Significant attention in recent years has been given to obtain a better understanding of human joint ranges, measurement, and functionality, especially in conjunction with commands issued by the central nervous system. While researchers have studied motor commands needed to drive a limb to follow a path trajectory, various computer algorithms have been reported that provide adequate analysis of limb modeling and motion. This paper uses a rigorous mathematical formulation to model human limbs, understand their reach envelope, delineate barriers therein where a trajectory becomes difficult to control, and help visualize these barriers. Workspaces of a typical forearm with 9 degrees of freedom, a typical finger modeled as a 4-degree-of-freedom system, and a lower extremity with 4 degrees of freedom are discussed. The results show that using the proposed formulation, joint limits play an important role in distinguishing the barriers.
Effect of the equation of state on the maximum mass of differentially rotating neutron stars
Studzińska, A. M.; Kucaba, M.; Gondek-Rosińska, D.; Villain, L.; Ansorg, M.
2016-12-01
Knowing the value of the maximum mass of a differentially rotating relativistic star is a key step towards the understanding of the signals to be expected from the merger of binary neutron stars, one of the most awaited alternative sources of gravitational waves after binary black holes. In this paper, we study the effects of differential rotation and of the equation of state on the maximum mass of rotating neutron stars modelled as relativistic polytropes with various adiabatic indices. Calculations are performed using a highly accurate numerical code, based on a multidomain spectral method. We thoroughly explore the parameter space and determine how the maximum mass depends on the stiffness, on the degree of differential rotation and on the maximal density, taking into account all the types of solutions that were proven to exist in a preceding paper. The highest increase with respect to the maximum mass for non-rotating stars with the same equation of state is reached for a moderate stiffness. With differential rotation, the maximum mass can even be 3-4 times higher than it is for static stars. This result may have important consequences for the gravitational wave signal from coalescing neutron star binaries or for some supernovae events.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_{μν} and a curvature tensor P^λ_{μνρ}. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
李丽丽; 赵成章; 殷翠琴; 王大为; 张军霞
2012-01-01
The environmental heterogeneity caused by topographical diversity is an important mechanism in the formation and maintenance of bio-geographic spatial distribution patterns at micro-scale, and also a prerequisite for differences in the distribution of species richness. With the help of GIS and S-Plus, a GAM model was used to study the topographic indices affecting the distribution of grasshoppers on the natural grasslands in the upper reaches of the Heihe River on the northern slope of the Qilian Mountains from July to August 2009, and the relationship between regional grasshopper numbers and terrain complexity was also studied, based on a quantitative analysis of the topographic variation characteristics. The topographical factors affecting grasshopper density were, in order, aspect > elevation > slope > position > plane curvature > profile curvature. The distribution of grasshoppers was almost balanced across gradients of position, plane curvature and profile curvature, followed a quadratic parabola across gradients of aspect and slope, and an "S" curve across gradients of elevation. Grasshopper density was high across the whole region, but grasshoppers were mainly distributed at altitudes of 2550-2650 m and concentrated on the northwest and west aspects, which was consistent with actual observations. The relationships between grasshopper density and terrain factors indicated that the redistribution of water and heat conditions due to topographic factors caused the diversification and fragmentation of the grasshopper distribution pattern.
How Do Chinese Enterprises Look at REACH?
无
2007-01-01
The new European REACH (Registration, Evaluation, Authorization of Chemicals) regulation has come into force. As soon as the REACH white paper was issued, Chinese enterprises started to research the possible impacts of REACH and prepare to cope with them. How then do these Chinese enterprises look at REACH? Following are views of some Chinese enterprises exporting chemical products to the European Union.
Has the world economy reached its globalization limit?
Miskiewicz, Janusz
2009-01-01
The problem of measuring economic globalization is discussed. Four macroeconomic indices of twenty of the "richest" countries are examined. Four types of "distances" are calculated. Two types of networks are then constructed for each distance measure definition. It is shown that the globalization process can best be characterised by an entropy measure based on the entropy Manhattan distance. It is observed that a globalization maximum was reached in the interval 1970-2000. More recently, a deglobalization process is observed.
ALMA telescope reaches new heights
2009-09-01
of the Array Operations Site. This means surviving strong winds and temperatures between +20 and -20 Celsius whilst being able to point precisely enough that they could pick out a golf ball at a distance of 15 km, and to keep their smooth reflecting surfaces accurate to better than 25 micrometres (less than the typical thickness of a human hair). Once the transporter reached the high plateau it carried the antenna to a concrete pad - a docking station with connections for power and fibre optics - and positioned it with an accuracy of a few millimetres. The transporter is guided by a laser steering system and, just like some cars today, also has ultrasonic collision detectors. These sensors ensure the safety of the state-of-the-art antennas as the transporter drives them across what will soon be a rather crowded plateau. Ultimately, ALMA will have at least 66 antennas distributed over about 200 pads, spread over distances of up to 18.5 km and operating as a single, giant telescope. Even when ALMA is fully operational, the transporters will be used to move the antennas between pads to reconfigure the telescope for different kinds of observations. "Transporting our first antenna to the Chajnantor plateau is an epic feat which exemplifies the exciting times in which ALMA is living. Day after day, our global collaboration brings us closer to the birth of the most ambitious ground-based astronomical observatory in the world", said Thijs de Graauw, ALMA Director. This first ALMA antenna at the high site will soon be joined by others and the ALMA team looks forward to making their first observations from the Chajnantor plateau. They plan to link three antennas by early 2010, and to make the first scientific observations with ALMA in the second half of 2011. ALMA will help astronomers answer important questions about our cosmic origins. The telescope will observe the Universe using light with millimetre and submillimetre wavelengths, between infrared light and radio waves in
CAST reaches milestone but keeps on searching
CERN Courier (september 2011 issue)
2011-01-01
After eight years of searching for the emission of a dark matter candidate particle, the axion, from the Sun, the CERN Axion Solar Telescope (CAST) has fulfilled its original physics programme. CAST, the world's most sensitive axion helioscope, points a recycled prototype LHC dipole magnet at the Sun at dawn and dusk, looking for the conversion of axions to X-rays. It incorporates four state-of-the-art X-ray detectors: three Micromegas detectors and a pn-CCD imaging camera attached to a focusing X-ray telescope that was recovered from the German space programme (see CERN Courier April 2010). Over the years, CAST has operated with the magnet bores - the location of the axion conversion - in different conditions: first in vacuum, covering axion masses up to 20 meV/c2, and then with a buffer gas (4He and later 3He) at various densities, finally reaching the goal of 1.17 eV/c2 on 22 ...
New symmetry of intended curved reaches
Torres Elizabeth B
2010-04-01
Background: Movement regularities are inherently present in automated goal-directed motions of the primate's arm system. They can provide important signatures of intentional behaviours driven by sensory-motor strategies, but it remains unknown whether during motor learning new regularities can be uncovered despite high variability in the temporal dynamics of the hand motions. Methods: We investigated the conservation and violation of a new movement regularity obtained from the hand motions traced by two untrained monkeys as they learned to reach outwards towards spatial targets while avoiding obstacles in the dark. The regularity pertains to the transformation from postural to hand paths that aim at visual goals. Results: In length-minimizing curves, the area enclosed between the Euclidean straight line and the curve up to its point of maximum curvature is 1/2 of the total area. A similar trend is found if one examines the perimeter. This new movement regularity remained robust to striking changes in arm dynamics that gave rise to changes in the speed of the reach, in the hand path curvature, and in the arm's postural paths. The area and perimeter ratios characterizing the regularity co-varied across repeats of randomly presented targets whenever the transformation from posture to hand paths was compliant with the intended goals. To interpret this conservation and the cases in which the regularity was violated and recovered, we provide a geometric model that characterizes arm-to-hand and hand-to-arm motion paths as length-minimizing curves (geodesics) in a non-Euclidean space. Whenever the transformation from one space to the other is distance-metric preserving (isometric), the two symmetric ratios co-vary. Otherwise, the symmetric ratios and their co-variation are violated. As predicted by the model, we found empirical evidence for the violation of this movement regularity whenever the intended goals mismatched the actions. This
Stream Habitat Reach Summary - NCWAP [ds158
California Department of Resources — The Stream Habitat - NCWAP - Reach Summary [ds158] shapefile contains in-stream habitat survey data summarized to the stream reach level. It is a derivative of the...
Electron density and plasma dynamics of a colliding plasma experiment
Wiechula, J., E-mail: wiechula@physik.uni-frankfurt.de; Schönlein, A.; Iberler, M.; Hock, C.; Manegold, T.; Bohlender, B.; Jacoby, J. [Plasma Physics Group, Institute of Applied Physics, Goethe University, 60438 Frankfurt am Main (Germany)
2016-07-15
We present experimental results of two head-on colliding plasma sheaths accelerated by pulsed-power-driven coaxial plasma accelerators. The measurements were performed in a small vacuum chamber with a neutral-gas prefill of ArH₂ at gas pressures between 17 Pa and 400 Pa and load voltages between 4 kV and 9 kV. As the plasma sheaths collide, the electron density is significantly increased. The electron density reaches maximum values of ≈8 ⋅ 10¹⁵ cm⁻³ for a single accelerated plasma and a maximum value of ≈2.6 ⋅ 10¹⁶ cm⁻³ for the plasma collision. Overall, an increase of the plasma density by a factor of 1.3 to 3.8 has been achieved. A scaling behavior derived from the electron density values shows a disproportionately high increase of the electron density in the collisional case for higher applied voltages in comparison to a single accelerated plasma. Sequences of the plasma collision were taken using a fast framing camera to study the plasma dynamics. These sequences indicate a maximum collision velocity of 34 km/s.
Alazo-Cuartas, K.; Radicella, S. M.
2017-10-01
An improved empirical formulation for the characterization of the "base point" of the bottomside ionospheric electron density profile is proposed. The "base point" in an ionospheric layer is defined as the height in the electron density profile where the gradient dN/dh reaches a maximum. The difference between the height of the maximum electron density and the height of the "base point" is proportional to the ionospheric F2 layer thickness parameter B2. The previous empirical formula links the maximum value of dN/dh to foF2 and M(3000)F2 scaled from the ionograms. The new formulation adds a dependence on the solar zenith angle. The use of the new equation substantially improves the calculation of the B2 thickness parameter used in the NeQuick model.
Maximum speeds and alpha angles of flowing avalanches
McClung, David; Gauer, Peter
2016-04-01
A flowing avalanche is one which initiates as a slab and, if consisting of dry snow, will be enveloped in a turbulent snow dust cloud once the speed reaches about 10 m/s. A flowing avalanche has a dense core of flowing material which dominates the dynamics by serving as the driving force for downslope motion. The flow thickness is typically on the order of 1-10 m, which is about 1% of the length of the flowing mass. We have collected estimates of maximum frontal speed um (m/s) from 118 avalanche events. The analysis is given here with the aim of scaling the maximum speed with some measure of the terrain scale over which the avalanches ran. We have chosen two measures for scaling, from McClung (1990), McClung and Schaerer (2006) and Gauer (2012): √H0 and √S0, where H0 is the total vertical drop and S0 the total path length traversed. Our data consist of 118 avalanches with H0 (m) estimated and 106 with S0 (m) estimated. Of these, we have 29 values with H0 (m), S0 (m) and um (m/s) estimated accurately, with the avalanche speeds measured all or nearly all along the path. The remainder of the data set includes approximate estimates of um (m/s) from timing the avalanche motion over a known section of the path where the approximate maximum speed is expected, and with either H0 or S0 or both estimated. Our analysis consists of fitting the values of um/√H0 and um/√S0 to probability density functions (pdfs) to estimate the exceedance probability of the scaled ratios. In general, we found that the larger data sets were best fit by a beta pdf, and for the subset of 29 a shifted log-logistic (s l-l) pdf was best. These determinations resulted from fitting the values to 60 different pdfs using five goodness-of-fit criteria: three goodness-of-fit statistics (K-S, Kolmogorov-Smirnov; A-D, Anderson-Darling; and C-S, Chi-squared) plus probability plots (P-P) and quantile plots (Q-Q). For less than 10% probability of exceedance the results show that
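The fitting step described above can be sketched as follows. This is an illustrative stand-in, not the authors' analysis: the scaled-speed ratios are synthetic, the fit is a simple method-of-moments beta fit rather than a ranked comparison of 60 candidate pdfs, and the exceedance probability is obtained by numerical integration.

```python
import math, random

# Synthetic stand-ins for scaled maximum-speed ratios u_m/sqrt(H0)
random.seed(1)
ratios = [random.betavariate(4.0, 6.0) for _ in range(118)]

# Method-of-moments fit of a beta pdf on (0, 1)
m = sum(ratios) / len(ratios)
v = sum((r - m) ** 2 for r in ratios) / (len(ratios) - 1)
common = m * (1 - m) / v - 1            # method-of-moments factor
alpha, beta = m * common, (1 - m) * common

def beta_pdf(x, a, b):
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x ** (a - 1) * (1 - x) ** (b - 1) / B

def exceedance(x, a, b, n=10000):
    """P(ratio > x) by midpoint-rule integration of the fitted beta pdf."""
    h = (1 - x) / n
    return sum(beta_pdf(x + (i + 0.5) * h, a, b) for i in range(n)) * h

print(alpha, beta, exceedance(0.6, alpha, beta))
```

In practice one would compare many candidate pdfs with K-S, A-D and Chi-squared statistics (e.g. via scipy.stats) as the paper does; the moment fit above only shows the shape of the computation.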
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus, based on the maximum entropy principle, was constructed by WANG Xiao-Long et al. They proved that the maximum solution of the model is exactly the frequency distribution at which a population reaches Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg genetic equilibrium when the genotype entropy of the population reaches its maximal possible value, and that the maximum entropy frequency distribution is equivalent to the distribution of the Hardy-Weinberg equilibrium law at one locus. They further assumed that the maximum entropy frequency distribution is equivalent to all genetic equilibrium distributions. This is incorrect, however. The maximum entropy frequency distribution is only equivalent to the Hardy-Weinberg equilibrium distribution with respect to one locus or several limited loci. The case of limited loci is proved in this paper. Finally, we also discuss an example in which the maximum entropy principle is not equivalent to other genetic equilibria.
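The one-locus equivalence can be illustrated numerically. The sketch below is my own illustration, not the paper's proof, and it makes one assumption worth flagging: the heterozygote is counted as two ordered genotypes (Aa and aA), the degeneracy convention under which the entropy maximum lands exactly on the Hardy-Weinberg proportions (p², 2pq, q²).

```python
import math

# Among genotype distributions with a fixed allele frequency p, find the
# heterozygote share h that maximizes the genotype entropy and compare it
# with the Hardy-Weinberg value 2pq.
p = 0.3
q = 1 - p

def entropy(h):
    # Ordered genotypes AA, Aa, aA, aa; the allele frequency stays p for any h
    freqs = [p - h / 2, h / 2, h / 2, q - h / 2]
    return -sum(f * math.log(f) for f in freqs if f > 0)

# Grid search over the feasible range 0 < h < 2*min(p, q)
hs = [i / 10000 for i in range(1, int(2 * min(p, q) * 10000))]
h_best = max(hs, key=entropy)

print(h_best, 2 * p * q)  # entropy-maximizing h vs Hardy-Weinberg 2pq
```

With p = 0.3 the grid argmax falls at h ≈ 0.42 = 2pq, i.e. the entropy-maximizing genotype distribution is (p², 2pq, q²), matching the one-locus claim the abstract describes.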
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Garner, Grace; Malcolm, Iain A.; Sadler, Jonathan P.; Hannah, David M.
2017-10-01
A simulation experiment was used to understand the importance of riparian vegetation density, channel orientation and flow velocity for stream energy budgets and river temperature dynamics. Water temperature and meteorological observations were obtained in addition to hemispherical photographs along a ∼1 km reach of the Girnock Burn, a tributary of the Aberdeenshire Dee, Scotland. Data from nine hemispherical images (representing different uniform canopy density scenarios) were used to parameterise a deterministic net radiation model and simulate radiative fluxes. For each vegetation scenario, the effects of eight channel orientations were investigated by changing the position of north at 45° intervals in each hemispheric image. Simulated radiative fluxes and observed turbulent fluxes drove a high-resolution water temperature model of the reach. Simulations were performed under low and high water velocity scenarios. Both velocity scenarios yielded decreases in mean (≥1.6 °C) and maximum (≥3.0 °C) temperature as canopy density increased. Slow-flowing water resided longer within the reach, which enhanced heat accumulation and dissipation, and drove higher maximum and lower minimum temperatures. Intermediate levels of shade produced highly variable energy flux and water temperature dynamics depending on the channel orientation and thus the time of day when the channel was shaded. We demonstrate that in many reaches relatively sparse but strategically located vegetation could produce substantial reductions in maximum temperature and suggest that these criteria are used to inform future river management.
GROWTH ANALYSIS AND ASSESSMENT OF PIG’S BIOLOGICAL MAXIMUM
Dragutin Vincek
2010-06-01
The aim of this study was to determine a mathematical model which can be used to describe the growth of domestic animals in an attempt to predict the optimal time of slaughter/weight or the development of body parts or tissues and to estimate the biological maximum. The study was conducted on 60 pigs (30 barrows and 30 gilts) in the interval between the ages of 49 and 215 days. By applying the generalized logistic function, the growth of live weight and tissues was described. The observed gilts reached the inflection point in approximately 121 days (I = 70.7 kg). The point at which the interval of intensive growth starts was at the age of approximately 42 days (TB = 17.35 kg), and the gilts reached the saturation point at the age of 200.5 days (TC = 126.74 kg). The estimated biological maximum weight of gilts was 179.79 kg. The barrows reached the inflection point in approximately 149 days (I = 92.2 kg). The point at which the intensive interval of growth starts was estimated at the age of approximately 52 days (TB = 22.93 kg), and the barrows reached the saturation point at the age of 245 days (TC = 164.8 kg). The estimated biological maximum weight of barrows was 233.25 kg. Muscle tissue of gilts reached the inflection point (I = 28.46 kg) in approximately 110 days. The point at which the interval of intensive growth of muscle tissue starts (TB = 6.06 kg) was estimated at approximately 53 days, and the muscle tissue of gilts reached the saturation point of growth (TC = 52.25 kg) at the age of 162 days. The estimated maximum biological growth of muscle tissue in gilts was 75.79 kg. The muscle tissue of barrows reached the inflection point (I = 28.78 kg) in approximately 118 days, and the point at which the interval of intensive growth starts (TB = 6.36 kg) at the age of approximately 35 days. The saturation point of muscle tissue growth in barrows (TC = 52.51 kg) was reached at the age of 202 days. The estimated maximum biological growth of muscle tissue in barrows was 75.74 kg. The
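A sketch of the kind of growth curve involved, using a plain logistic rather than the generalized logistic actually fitted in the study; the parameters are illustrative values loosely based on the reported gilt figures, not the fitted ones.

```python
import math

# Plain logistic growth curve
#   W(t) = A / (1 + exp(-k * (t - t_i)))
# where A is the biological maximum (asymptote) and t_i the inflection age.
A, k, t_i = 179.79, 0.025, 121.0  # kg, 1/day, days (illustrative)

def weight(t):
    return A / (1 + math.exp(-k * (t - t_i)))

for t in (49, 121, 215, 500):
    print(t, round(weight(t), 1))
```

Note that a plain logistic forces the inflection weight to A/2 ≈ 89.9 kg, whereas the study reports 70.7 kg at inflection; that gap is exactly why a generalized (Richards-type) logistic, whose extra shape parameter moves the inflection point off A/2, is used in such growth analyses.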
João Alfredo Braida
2006-08-01
The susceptibility of soils to compaction, measured by the Proctor test, decreases with increasing soil organic matter (SOM) content. For a given energy level, with increasing SOM content the maximum obtained density decreases and the corresponding critical moisture content increases. Due to its low density, elasticity and susceptibility to deformation, straw is potentially able to dissipate applied loads. This study was conducted to evaluate the SOM effect on the soil compaction curve and to evaluate the ability of mulch to absorb compactive energy in the Proctor test. The compaction test was carried out using soil surface samples (0 to 0.05 m) of a Hapludalf, with sandy loam texture at its soil surface, and an Oxisol, with clayey texture at its soil surface, both with variations in SOM content. The maximum density, the critical moisture content, the liquid and plastic limits, and the soil organic carbon content were determined. A second test evaluated the ability of mulch to absorb compactive energy, by compacting Hapludalf samples with a straw layer on the soil surface, inside a Proctor cylinder, at amounts corresponding to 2, 4, 8 and 12 Mg ha-1. SOM accumulation reduced the maximum density and increased the critical moisture content, suggesting an increased resistance to soil compaction. In the Proctor test the straw on the soil surface dissipated up to 30% of the compactive energy and reduced the bulk density, confirming the hypothesis that mulch can absorb part of the compactive energy caused by machine traffic and by animals.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, it has been used not only as a physical law but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
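The "least bias" reading of Jaynes' principle has a compact worked form: among all distributions consistent with a known constraint, pick the one with maximum entropy. A minimal sketch (illustrative outcome set, not tied to any drug-discovery data) for a fixed-mean constraint, solved by bisection on the Lagrange multiplier:

```python
import math

def maxent_with_mean(xs, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Least-biased (maximum-entropy) distribution over outcomes xs
    subject only to a fixed mean: p_i proportional to exp(lam * x_i).
    The multiplier lam is found by bisection on the implied mean."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

# With the mean fixed at the midpoint, maximum entropy gives the uniform law
p = maxent_with_mean([0, 1, 2, 3, 4, 5], target_mean=2.5)
```

Tightening or shifting the mean constraint tilts the exponential weights; with no constraint at all the same machinery returns the uniform distribution, the canonical "least informative" choice.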
de Abreu, Daniela Cristina Carvalho; Takara, Kelly; Metring, Nathalia Lopes; Reis, Julia Guimaraes; Cliquet, Alberto, Jr.
2012-01-01
We aimed to evaluate the influence of different types of wheelchair seats on paraplegic individuals' postural control using a maximum anterior reaching test. Balance was evaluated during 50, 75, and 90% of each individual's maximum forward reach using two different cushions on the seat (one foam and one gel) and a no-cushion condition…
Regional time-density measurement of myocardial perfusion
Eusemann, Christian D.; Breen, Jerome F.; Robb, Richard A.
2003-05-01
The measurement of time-density relationships of the myocardium in Magnetic Resonance perfusion data sets is a clinical technique used in assessing myocardial perfusion. Traditionally, to measure the time-density relationship a physician draws a region on the same 2-D image of the myocardium in sequential cardiac cycles. Throughout multiple cardiac cycles the density changes in this region are measured. A major limitation of this technique is the change in anatomy relative to the selected region of the myocardium during consecutive cardiac cycles. This causes measurement errors, which are amplified if the traced region does not encompass the entire myocardial thickness, or includes a boundary exterior to the epicardial or endocardial surface. The technique described in this paper uses approximately the same myocardial region throughout the entire perfusion study, which ensures inclusion of the entire endocardial-to-epicardial region and exclusion of exterior regions. Moreover, this region can be subdivided into smaller regions of interest. This can be accomplished by careful segmentation and reformatting of the data into polar coordinates. This allows sectioning both axially and transaxially through the myocardium, permitting regional assessment of perfusion-specific values such as the maximum density and/or the time to reach maximum density. These values can then be illustrated using density-mapped colors or time-density curves. This measurement and display technique may provide enhanced detection and evaluation of regional deficits in myocardial contractility and perfusion.
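The two regional values named above, maximum density and time-to-peak, fall out directly once a region's time-density curve is available. A toy sketch with hypothetical sector curves (the function name and data are illustrative, not the paper's pipeline):

```python
def perfusion_landmarks(times, densities):
    """From a regional time-density curve, return the maximum density
    and the time at which it is reached (time-to-peak)."""
    peak, t_peak = max(zip(densities, times))
    return peak, t_peak

# Hypothetical curves for two myocardial sectors (arbitrary units)
times = [0, 1, 2, 3, 4, 5, 6]
sector_a = [10, 30, 80, 95, 90, 70, 50]   # fast, strong enhancement
sector_b = [10, 15, 25, 40, 55, 60, 58]   # delayed, weaker enhancement
pa, ta = perfusion_landmarks(times, sector_a)  # 95 at t=3
pb, tb = perfusion_landmarks(times, sector_b)  # 60 at t=5
```

Mapping these per-sector values to colors over the polar-coordinate segmentation gives the density-mapped displays the abstract describes.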
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension Fmax = c^4/4G, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Reach preparation enhances visual performance and appearance.
Rolfs, Martin; Lawrence, Bonnie M; Carrasco, Marisa
2013-10-19
We investigated the impact of the preparation of reach movements on visual perception by simultaneously quantifying both an objective measure of visual sensitivity and the subjective experience of apparent contrast. Using a two-by-two alternative forced choice task, observers compared the orientation (clockwise or counterclockwise) and the contrast (higher or lower) of a Standard Gabor and a Test Gabor, the latter of which was presented during reach preparation, at the reach target location or the opposite location. Discrimination performance was better overall at the reach target than at the opposite location. Perceived contrast increased continuously at the target relative to the opposite location during reach preparation, that is, after the onset of the cue indicating the reach target. The finding that performance and appearance do not evolve in parallel during reach preparation points to a distinction with saccade preparation, for which we have shown previously there is a parallel temporal evolution of performance and appearance. Yet akin to saccade preparation, this study reveals that overall reach preparation enhances both visual performance and appearance.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity, Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future, we suggest that PSHA modelers must be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
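The extreme-value step can be illustrated with a Gumbel fit to block maxima; the moment-matching estimator and the synthetic catalog below are illustrative assumptions, not the authors' procedure:

```python
import math
import random

def fit_gumbel(maxima):
    """Method-of-moments fit of a Gumbel (type I extreme value) law to a
    sample of block maxima: beta = s*sqrt(6)/pi, mu = mean - gamma*beta."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772156649 * beta   # Euler-Mascheroni constant
    return mu, beta

def prob_exceed(m, mu, beta):
    """P(block maximum > m) under the fitted Gumbel distribution."""
    return 1.0 - math.exp(-math.exp(-(m - mu) / beta))

# Synthetic 'annual maximum magnitude' catalog, for illustration only
random.seed(1)
catalog = [5.0 + random.gammavariate(2.0, 0.4) for _ in range(200)]
mu, beta = fit_gumbel(catalog)
p8 = prob_exceed(8.0, mu, beta)
```

The point of the abstract survives the sketch: an exceedance probability like `p8` comes with a distribution attached, whereas a bare point estimate of M for an infinite horizon admits no such test.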
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
(author not listed)
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.
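The baseline being corrected, maximum likelihood under Gaussian white noise, can be sketched for a one-parameter model; here the closed-form maximizer coincides with least squares (the CML corrector itself is not reproduced):

```python
import math
import random

def loglik(a, xs, ys, sigma):
    """Gaussian log-likelihood of slope a for y = a*x + white noise."""
    return sum(-0.5 * math.log(2.0 * math.pi * sigma ** 2)
               - (y - a * x) ** 2 / (2.0 * sigma ** 2)
               for x, y in zip(xs, ys))

def mle_slope(xs, ys):
    """Closed-form maximizer of the likelihood above; for Gaussian white
    noise it coincides with the least-squares estimate."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Simulated input-output data polluted by white noise (true slope 2.0)
random.seed(0)
xs = [i / 10.0 for i in range(1, 101)]
ys = [2.0 * x + random.gauss(0.0, 0.3) for x in xs]
a_hat = mle_slope(xs, ys)
```

The abstract's point is that when the noise assumptions are less benign, a recursive correction of this ML estimate can shrink the asymptotic error below what least squares achieves.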
Determination of coronal temperatures from electron density profiles
Lemaire, J F
2011-01-01
The most popular method for determining coronal temperatures is the scale-height method (shm). It is based on electron density profiles inferred from White Light (WL) brightness measurements of the corona during solar eclipses. This method has been applied to several published coronal electron density models. The calculated temperature distributions reach a maximum at r > 1.3 RS, and therefore do not satisfy one of the conditions for applying the shm method. Another method is the hydrostatic equilibrium method (hst), which enables coronal temperature distributions to be determined by providing solutions to the hydrostatic equilibrium equation. The temperature maxima obtained using the hst method are almost equal to those obtained using the shm method, but the temperature peak always lies at a significantly lower altitude with the hst method than with the shm method. A third and more recently developed method, dyn, can be used for the same published electron density profiles. The temperature distributions ob...
Evaluation of a hydrological model based on Bidirectional Reach (BReach)
Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.
2016-04-01
Evaluation and discrimination of model structures is crucial to ensure an appropriate use of hydrological models. When evaluating model results by aggregating their quality in (a subset of) individual observations, overall results of this analysis sometimes conceal important detailed information about model structural deficiencies. Analyzing model results within their local (time) context can uncover this detailed information. In this research, a methodology called Bidirectional Reach (BReach) is proposed to evaluate and analyze results of a hydrological model by assessing the maximum left and right reach in each observation point that is used for model evaluation. These maximum reaches express the capability of the model to describe a subset of the evaluation data both in the direction of the previous (left) and of the following data (right). This capability is evaluated on two levels. First, on the level of individual observations, the combination of a parameter set and an observation is classified as non-acceptable if the deviation between the accompanying model result and the measurement exceeds observational uncertainty. Second, the behavior in a sequence of observations is evaluated by means of a tolerance degree. This tolerance degree expresses the condition for satisfactory model behavior in a data series and is defined by the percentage of observations within this series that can have non-acceptable model results. Based on both criteria, the maximum left and right reaches of a model in an observation represent the data points in the direction of the previous and the following observations, respectively, beyond which none of the sampled parameter sets is both satisfactory and results in an acceptable deviation. After assessing these reaches for a variety of tolerance degrees, results can be plotted in a combined BReach plot that shows temporal changes in the behavior of model results. The methodology is applied on a Probability Distributed Model (PDM) of the river
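The two-level evaluation described above (per-observation acceptability, then reach under a tolerance degree) can be sketched as follows; the greedy stopping rule and function names are illustrative simplifications, not the authors' implementation:

```python
def acceptable(model_results, observations, uncertainty):
    """For one parameter set: flag which observations it reproduces
    within observational uncertainty."""
    return [abs(m - o) <= uncertainty
            for m, o in zip(model_results, observations)]

def max_right_reach(ok_rows, i, tolerance):
    """Furthest index j >= i such that some sampled parameter set keeps the
    share of non-acceptable points in [i, j] at or below `tolerance`.
    ok_rows holds one acceptability row (from `acceptable`) per set."""
    best = i
    for row in ok_rows:
        bad = 0
        for j in range(i, len(row)):
            bad += 0 if row[j] else 1
            if bad / (j - i + 1) > tolerance:
                break
            best = max(best, j)
    return best

obs = [1.0] * 10
set_a = [1.0] * 7 + [2.0] * 3       # acceptable on the first 7 points
set_b = [2.0] * 10                  # acceptable nowhere
ok = [acceptable(set_a, obs, 0.1), acceptable(set_b, obs, 0.1)]
reach_from_0 = max_right_reach(ok, 0, tolerance=0.0)   # → 6
```

A mirrored scan toward earlier indices gives the left reach; sweeping the tolerance degree and plotting both reaches per observation yields the combined BReach plot.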
Improving exposure scenario definitions within REACH
Lee, Jihyun; Pizzol, Massimo; Thomsen, Marianne
instruments to support a precautionary chemicals management system and to protect receptors' health have also been increasing. In 2007, the European Union adopted REACH (the Regulation on Registration, Evaluation, Authorisation and Restriction of Chemicals): REACH makes industry responsible for assessing...... the different background exposure between two countries allows in fact the definition of a common framework for improving exposure scenarios within the REACH system, for monitoring environmental health, and for increasing the degree of circularity of resource and substance flows. References 1. European Commission...
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
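The Kirchhoff index itself is straightforward to compute from the graph Laplacian, since for a connected graph on n vertices Kf(G) equals n times the sum of reciprocals of the nonzero Laplacian eigenvalues. A small sketch, verified on the triangle, where every resistance distance is 2/3:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum of reciprocals of the nonzero Laplacian eigenvalues,
    equivalently the sum of resistance distances over all vertex pairs.
    adj is a symmetric 0/1 adjacency matrix of a connected graph."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    mu = np.linalg.eigvalsh(L)
    return A.shape[0] * float(np.sum(1.0 / mu[mu > 1e-9]))

# Sanity check on the triangle C3: each resistance distance is 2/3,
# so Kf = 3 * (2/3) = 2
kf = kirchhoff_index([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # → 2.0
```

Evaluating this over members of $Cat(n;t)$ is a quick way to probe numerically which cacti attain the maximum the paper characterizes.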
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Ylönen Hannu
2011-07-01
Abstract Background A territory as a prerequisite for breeding limits the maximum number of breeders in a given area, and thus lowers the proportion of breeders if population size increases. However, some territorially breeding animals can show dramatic density fluctuations, and little is known about the change from density-dependent processes to density independence of breeding during a population increase or an outbreak. We suggest that territoriality, breeding suppression and its breakdown can be understood with an incomplete-control model, developed for social breeders and social suppression. Results We studied density dependence in an arvicoline species, the bank vole, known as a territorial breeder with cyclic and non-cyclic density fluctuations and periodically high densities in different parts of its range. Our long-term database from 38 experimental populations in large enclosures in boreal grassland confirms that breeding rates are density-regulated at moderate densities, probably by social suppression of subordinate potential breeders. We conducted an experiment in which we doubled and tripled this moderate density under otherwise the same conditions and measured space use, mortality, reproduction and faecal stress hormone levels (FGM) of adult females. We found that mortality did not differ among the densities, but the regulation of the breeding rate broke down: at double and triple densities all females were breeding, while at the low density the breeding rate was regulated as observed before. Spatial overlap among females increased with density, while a minimum territory size was maintained. Mean stress hormone levels were higher at double and triple densities than at moderate density. Conclusions At low and moderate densities, breeding was suppressed by the dominant breeders, but above a density threshold (similar to a competition point) the dominance of breeders could not be sustained (incomplete control). In our experiment, this point
Compact muon solenoid magnet reaches full field
2006-01-01
Scientists of the U.S. Department of Energy's Fermilab and collaborators on the US/CMS project announced that the world's largest superconducting solenoid magnet has reached full field in tests at CERN. (1 page)
Hanford Reach - Ringold Russian Knapweed Treatment
US Fish and Wildlife Service, Department of the Interior — Increase the diversity of the seed mix on approximately 250 acres in the Ringold Unit of the Hanford Reach National Monument (Monument) treated with aminopyralid as...
RICHY
Expanded Program on Immunisation (EPI) training in Zambia and critically analyses ... excellence in skills such as sport, music or dance, so it is ... only improve through reaching every child both physically and in ... Non-verbal communication.
Women Reaching Equality in Dubious Habit: Drinking
MONDAY, Oct. 24, 2016 (HealthDay News) -- Women have made major strides towards equality with men, ...
Reaching the Overlooked Student in Physical Education
Esslinger, Keri; Esslinger, Travis; Bagshaw, Jarad
2015-01-01
This article describes the use of live action role-playing, or "LARPing," as a non-traditional activity that has the potential to reach students who are not interested in traditional physical education.
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual-level data on all Swedish and Danish centenarians born from 1870 to 1901; in total, 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at age 103 years for men and age 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Impact of the REACH II and REACH VA Dementia Caregiver Interventions on Healthcare Costs.
Nichols, Linda O; Martindale-Adams, Jennifer; Zhu, Carolyn W; Kaplan, Erin K; Zuber, Jeffrey K; Waters, Teresa M
2017-05-01
Examine caregiver and care recipient healthcare costs associated with caregivers' participation in Resources for Enhancing Alzheimer's Caregivers Health (REACH II or REACH VA) behavioral interventions to improve coping skills and care recipient management. RCT (REACH II); propensity-score matched, retrospective cohort study (REACH VA). Five community sites (REACH II); 24 VA facilities (REACH VA). Care recipients with Alzheimer's disease and related dementias (ADRD) and their caregivers who participated in REACH II study (analysis sample of 110 caregivers and 197 care recipients); care recipients whose caregivers participated in REACH VA and a propensity matched control group (analysis sample of 491). Previously collected data plus Medicare expenditures (REACH II) and VA costs plus Medicare expenditures (REACH VA). There was no increase in VA or Medicare expenditures for care recipients or their caregivers who participated in either REACH intervention. For VA care recipients, REACH was associated with significantly lower total VA costs of care (33.6%). VA caregiver cost data was not available. In previous research, both REACH II and REACH VA have been shown to provide benefit for dementia caregivers at a cost of less than $5/day; however, concerns about additional healthcare costs may have hindered REACH's widespread adoption. Neither REACH intervention was associated with additional healthcare costs for caregivers or patients; in fact, for VA patients, there were significantly lower healthcare costs. The VA costs savings may be related to the addition of a structured format for addressing the caregiver's role in managing complex ADRD care to an existing, integrated care system. These findings suggest that behavioral interventions are a viable mechanism to support burdened dementia caregivers without additional healthcare costs. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.
Encircling the dark: constraining dark energy via cosmic density in spheres
Codis, S; Bernardeau, F; Uhlemann, C; Prunet, S
2016-01-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few percent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell density probability density functions for arbitrary initial power spectra, top-hat smoothing and various spherical collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
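The regularizer's target quantity, the mutual information between classification responses and true labels, has a simple plug-in estimate from paired samples. A sketch (the estimator below is a generic empirical one, not the paper's entropy-estimation model):

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    """Plug-in estimate (in nats) of I(response; label) from paired samples,
    the quantity the regularizer drives up during training."""
    n = len(labels)
    c_r, c_y = Counter(responses), Counter(labels)
    c_ry = Counter(zip(responses, labels))
    return sum((c / n) * math.log(c * n / (c_r[r] * c_y[y]))
               for (r, y), c in c_ry.items())

# A perfect classifier's responses carry all label information (H(Y) = ln 2);
# responses independent of the labels carry none.
labels = [0, 0, 1, 1]
mi_perfect = mutual_information([0, 0, 1, 1], labels)   # = ln 2
mi_useless = mutual_information([0, 1, 0, 1], labels)   # = 0
```

Adding a term like `-lambda * mutual_information(...)` to a classification loss is the shape of the regularization the abstract describes, though the paper optimizes a smooth entropy-based surrogate rather than this discrete estimate.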
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
U.S. Environmental Protection Agency — Road density is generally highly correlated with amount of developed land cover. High road densities usually indicate high levels of ecological disturbance. More...
Identification of consistency in rating curve data: Bidirectional Reach (BReach)
Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.
2016-04-01
Before calculating rating curve discharges, it is crucial to identify possible interruptions in data consistency. In this research, a methodology to perform this preliminary analysis is developed and validated. This methodology, called Bidirectional Reach (BReach), evaluates in each data point the results of a rating curve model with randomly sampled parameter sets. The combination of a parameter set and a data point is classified as non-acceptable if the deviation between the accompanying model result and the measurement exceeds observational uncertainty. Moreover, a tolerance degree that defines satisfactory behavior of a sequence of model results is chosen. This tolerance degree equals the percentage of observations that are allowed to have non-acceptable model results. Subsequently, the results of the classification are used to assess the maximum left and right reach for each data point of a chronologically sorted time series. These maximum left and right reaches in a gauging point represent the data points in the direction of the previous and the following observations, respectively, beyond which none of the sampled parameter sets is both satisfactory and results in an acceptable deviation. This analysis is repeated for a variety of tolerance degrees. Plotting the results of this analysis for all data points and all tolerance degrees in a combined BReach plot enables the detection of changes in data consistency. Moreover, if consistent periods are detected, the limits of these periods can be derived. The methodology is validated with various synthetic stage-discharge data sets and proves to be a robust technique to investigate the temporal consistency of rating curve data. It provides satisfying results despite low data availability, large errors in the estimated observational uncertainty, and a rating curve model that is known to cover only a limited part of the observations.
Do working environment interventions reach shift workers?
Nabe-Nielsen, Kirsten; Jørgensen, Marie Birk; Garde, Anne Helene
2016-01-01
PURPOSE: Shift workers are exposed to more physical and psychosocial stressors in the working environment as compared to day workers. Despite the need for targeted prevention, it is likely that workplace interventions less frequently reach shift workers. The aim was therefore to investigate whether the reach of workplace interventions varied between shift workers and day workers and whether such differences could be explained by the quality of leadership exhibited at different times of the day. METHODS: We used questionnaire data from 5361 female care workers in the Danish eldercare sector... RESULTS: Shift workers were less likely to be reached by workplace interventions. For example, night workers less frequently reported that they had got more flexibility (OR 0.5; 95% CI 0.3-0.7) or that they had participated in improvements of the working procedures (OR 0.6; 95% CI 0.5-0.8). Quality of leadership...
REACH. Analytical characterisation of petroleum UVCB substances
De Graaff, R.; Forbes, S.; Gennart, J.P.; Gimeno Cortes, M.J.; Hovius, H.; King, D.; Kleise, H.; Martinez Martin, C.; Montanari, L.; Pinzuti, M.; Pollack, H.; Ruggieri, P.; Thomas, M.; Walton, A.; Dmytrasz, B.
2012-10-15
The purpose of this report is to summarise the findings of the scientific and technical work undertaken by CONCAWE to assess the feasibility and potential benefit of characterising petroleum UVCB substances (Substances of Unknown or Variable Composition, Complex reaction products or Biological Materials) beyond the recommendations issued by CONCAWE for the substance identification of petroleum substances under REACH. REACH is the European Community Regulation on chemicals and their safe use (EC 1907/2006). It deals with the Registration, Evaluation, Authorisation and Restriction of Chemical substances. The report is based on Member Company experience of the chemical analysis of petroleum UVCB substances, including analysis in support of REACH registrations undertaken in 2010. This report is structured into four main sections, namely: Section 1 which provides an introduction to the subject of petroleum UVCB substance identification including the purpose of the report, regulatory requirements, the nature of petroleum UVCB substances, and CONCAWE's guidance to Member Companies and other potential registrants. Section 2 provides a description of the capabilities of each of the analytical techniques described in the REACH Regulation. This section also includes details on the type of analytical information obtained by each technique and an evaluation of what each technique can provide for the characterisation of petroleum UVCB substances. Section 3 provides a series of case studies for six petroleum substance categories (low boiling point naphthas, kerosene, heavy fuel oils, other lubricant base oils, residual aromatic extracts and bitumens) to illustrate the value of the information derived from each analytical procedure, and provide an explanation for why some techniques are not scientifically necessary. Section 4 provides a summary of the conclusions reached from the technical investigations undertaken by CONCAWE Member Companies, and summarising the
The Astronomical Reach of Fundamental Physics
Burrows, Adam
2014-01-01
Using basic physical arguments, we derive by dimensional and physical analysis the characteristic masses and sizes of important objects in the Universe in terms of just a few fundamental constants. This exercise illustrates the unifying power of physics and the profound connections between the small and the large in the Cosmos we inhabit. We focus on the minimum and maximum masses of normal stars, the corresponding quantities for neutron stars, the maximum mass of a rocky planet, the maximum mass of a white dwarf, and the mass of a typical galaxy. To zeroth order, we show that all these masses can be expressed in terms of either the Planck mass or the Chandrasekhar mass, in combination with various dimensionless quantities. With these examples we expose the deep interrelationships imposed by Nature between disparate realms of the Universe and the amazing consequences of the unifying character of physical law.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
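As a quick unit check on the quoted figure (a sketch; the solar mass value used is the standard 1.989 × 10^30 kg):

```python
M_core = 2.69e30   # maximum iron core mass from the paper, kg
M_sun = 1.989e30   # standard solar mass, kg
ratio = M_core / M_sun
print(round(ratio, 2))  # → 1.35
```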
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
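As a concrete baseline for the problem being solved, the classical augmenting-path approach for the bipartite case (Kuhn's algorithm, $O(VE)$) can be sketched in a few lines. This illustrates maximum matching itself, not the MCMC algorithm proposed in the paper.

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm for bipartite graphs, O(V*E).
    adj[u] lists the right-side neighbours of left vertex u. The fast
    algorithms cited above improve on this simple baseline."""
    match_right = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # take v if it is free, or re-route its current partner elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# a small bipartite graph: left vertices 0..2, right vertices 0..2
print(max_bipartite_matching([[0, 1], [0], [1, 2]], 3))  # → 3
```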
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Polishing Difficult-To-Reach Cavities
Malinzak, R. Michael; Booth, Gary N.
1990-01-01
Springy abrasive tool used to finish surfaces of narrow cavities made by electrical-discharge machining. Robot arm moves vibrator around perimeters of cavities, polishing walls of cavities as it does so. Tool needed because such cavities inaccessible or at least difficult to reach with most surface-finishing tools.
REACH. Electricity Units, Post-Secondary.
Smith, Gene; And Others
As a part of the REACH (Refrigeration, Electro-Mechanical, Air-Conditioning, Heating) electromechanical cluster, this postsecondary student manual contains individualized instructional units in the area of electricity. The instructional units focus on electricity fundamentals, electric motors, electrical components, and controls and installation.…
Reliability of the Advanced REACH Tool (ART)
Schinkel, J.; Fransman, W.; McDonnell, P.E.; Entink, R.K.; Tielemans, E.; Kromhout, H.
2014-01-01
Objectives: The aim of this study was to assess the reliability of the Advanced REACH Tool (ART) by (i) studying interassessor agreement of the resulting exposure estimates generated by the ART mechanistic model, (ii) studying interassessor agreement per model parameters of the ART mechanistic model
Guiding Warfare to Reach Sustainable Peace
Vestenskov, David; Drewes, Line
The conference report Guiding Warfare to Reach Sustainable Peace constitutes the primary outcome of the conference. It is based on excerpts from the conference presenters and workshop discussions. Furthermore, the report contains policy recommendations and key findings, with the ambition of develo...
ATLAS Barrel Toroid magnet reached nominal field
2006-01-01
On 9 November the barrel toroid magnet reached its nominal field of 4 teslas, with an electrical current of 21 000 amperes (21 kA) passing through the eight superconducting coils, as shown on this graph
Science Experiments: Reaching Out to Our Users
Nolan, Maureen; Tschirhart, Lori; Wright, Stephanie; Barrett, Laura; Parsons, Matthew; Whang, Linda
2008-01-01
As more users access library services remotely, it has become increasingly important for librarians to reach out to their user communities and promote the value of libraries. Convincing the faculty and students in the sciences of the value of libraries and librarians can be a particularly "hard sell" as more and more of their primary…
The REACH Youth Program Learning Toolkit
Sierra Health Foundation, 2011
2011-01-01
Believing in the value of using video documentaries and data as learning tools, members of the REACH technical assistance team collaborated to develop this toolkit. The learning toolkit was designed using and/or incorporating components of the "Engaging Youth in Community Change: Outcomes and Lessons Learned from Sierra Health Foundation's…
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
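The background-only versus background-plus-source comparison can be illustrated with a toy Poisson log-likelihood ratio. The counts and fitted rates below are invented for the example, and the statistic is a generic Wilks-style TS, not the CSC's actual implementation.

```python
import math

def poisson_loglike(counts, rates):
    # log L = sum_i [ n_i * ln(mu_i) - mu_i - ln(n_i!) ]
    return sum(n * math.log(mu) - mu - math.lgamma(n + 1)
               for n, mu in zip(counts, rates))

# hypothetical counts in one candidate source region across 3 stacked observations
counts = [9, 12, 7]
bkg = [4.0, 5.0, 3.5]   # fitted background expectations (invented numbers)
src = [4.5, 6.0, 3.0]   # fitted PSF-weighted source contributions (invented)

ll_bkg = poisson_loglike(counts, bkg)
ll_src = poisson_loglike(counts, [b + s for b, s in zip(bkg, src)])

# larger positive values favour the background-plus-source hypothesis
TS = 2.0 * (ll_src - ll_bkg)
print(TS > 0)  # → True
```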
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of wood dust clouds. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds. Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and according to STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds. Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The testing of explosions of wood dust clouds showed that the maximum pressure was reached at a concentration of 450 g/m3, with a value of 7.95 bar. The fastest rise of pressure was also observed at a concentration of 450 g/m3, with a value of 68 bar/s.
Garnett, E S; Webber, C E; Coates, G
1977-01-01
The density of a defined volume of the human lung can be measured in vivo by a new noninvasive technique. A beam of gamma-rays is directed at the lung and, by measuring the scattered gamma-rays, lung density is calculated. The density in the lower lobe of the right lung in normal man during quiet breathing in the sitting position ranged from 0.25 to 0.37 g.cm-3. Subnormal values were found in patients with emphysema. In patients with pulmonary congestion and edema, lung density values ranged from 0.33 to 0.93 g.cm-3. The lung density measurement correlated well with the findings in chest radiographs, but the lung density values were more sensitive indices. This was particularly evident in serial observations of individual patients.
Pan, Jie; Li, Li; Wang, Yunuan; Xiu, Xianwu; Wang, Chao; Song, Yuzhi
2016-11-01
Atmospheric-pressure argon plasmas have received increasing attention due to their high potential in many industrial and biomedical applications. In this paper, a 1-D fluid model is used for studying the particle density characteristics of the argon plasmas generated by pulsed dielectric barrier discharges. The temporal evolutions of the axial particle density distributions are illustrated, and the influences of the main discharge conditions on the averaged particle densities are investigated by varying each discharge condition independently. The calculation results show that the electron density and the ion density reach two peaks near the momentary cathodes during the rising and falling edges of the pulsed voltage. Compared with the charged particle densities, the densities of the resonance state atom Arr and the metastable state atom Arm have more uniform axial distributions, reach higher maxima and decay more slowly. During the platform of the pulsed voltage and the time interval between the pulses, the densities of the excited state atom Ar* are far lower than those of the Arr or the Arm. The averaged particle densities of the different considered particles increase with the amplitude and the frequency of the pulsed voltage. Narrowing the discharge gap and increasing the relative dielectric constant of the dielectric also contribute to the increase of the averaged particle densities. The effects of reducing the discharge gap distance on the neutral particle densities are more significant than the influences on the charged particle densities. Supported by Natural Science Foundation of Shandong Province, China (No. ZR2015AQ008), and Project of Shandong Province Higher Educational Science and Technology Program of China (No. J15LJ04)
Does workplace health promotion reach shift workers?
Nabe-Nielsen, Kirsten; Garde, Anne Helene; Clausen, Thomas;
2015-01-01
OBJECTIVES: One reason for health disparities between shift and day workers may be that workplace health promotion does not reach shift workers to the same extent as it reaches day workers. This study aimed to investigate the association between shift work and the availability of and participation in workplace health promotion. METHODS: We used cross-sectional questionnaire data from a large representative sample of all employed people in Denmark. We obtained information on the availability of and participation in six types of workplace health promotion. We also obtained information on working hours... RESULTS: In the general working population, fixed evening and fixed night workers, and employees working variable shifts including night work reported a higher availability of health promotion, while employees working variable shifts without night work reported a lower availability of health promotion...
Olefins and chemical regulation in Europe: REACH.
Penman, Mike; Banton, Marcy; Erler, Steffen; Moore, Nigel; Semmler, Klaus
2015-11-05
REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) is the European Union's chemical regulation for the management of risk to human health and the environment (European Chemicals Agency, 2006). This regulation entered into force in June 2007 and required manufacturers and importers to register substances produced in annual quantities of 1000 tonnes or more by December 2010, with further deadlines for lower tonnages in 2013 and 2018. Depending on the type of registration, required information included the substance's identification, the hazards of the substance, the potential exposure arising from the manufacture or import, the identified uses of the substance, and the operational conditions and risk management measures applied or recommended to downstream users. Among the content developed to support this information were Derived No-Effect Levels or Derived Minimal Effect Levels (DNELs/DMELs) for human health hazard assessment, Predicted No Effect Concentrations (PNECs) for environmental hazard assessment, and exposure scenarios for exposure and risk assessment. Once registered, substances may undergo evaluation by the European Chemicals Agency (ECHA) or Member State authorities and be subject to requests for additional information or testing as well as additional risk reduction measures. To manage the REACH registration and related activities for the European olefins and aromatics industry, the Lower Olefins and Aromatics REACH Consortium was formed in 2008 with administrative and technical support provided by Penman Consulting. A total of 135 substances are managed by this group including 26 individual chemical registrations (e.g. benzene, 1,3-butadiene) and 13 categories consisting of 5-26 substances. This presentation will describe the content of selected registrations prepared for 2010 in addition to the significant post-2010 activities. Beyond REACH, content of the registrations may also be relevant to other European activities, for
Distance Reached in the Anteromedial Reach Test as a Function of Learning and Leg Length
Bent, Nicholas P.; Rushton, Alison B.; Wright, Chris C.; Batt, Mark E.
2012-01-01
The Anteromedial Reach Test (ART) is a new outcome measure for assessing dynamic knee stability in anterior cruciate ligament-injured patients. The effect of learning and leg length on distance reached in the ART was examined. Thirty-two healthy volunteers performed 15 trials of the ART on each leg. There was a moderate correlation (r = 0.44-0.50)…
Xu, Shi-qin; Ji, Xi-bin; Jin, Bo-wen
2016-02-01
Independent measurements of sap flow in stems of Calligonum mongolicum and of environmental variables, using commercial sap flow gauges and a micrometeorological monitoring system respectively, were made to simulate the variation of sap flow density in the middle range of the Hexi Corridor, Northwest China, during June to September 2014. The results showed that the diurnal course of sap flow density in C. mongolicum followed a broad unimodal pattern; the maximum sap flow density was reached about 30 minutes after the maximum of photosynthetically active radiation (PAR), and about 120 minutes before the maxima of temperature and vapor pressure deficit (VPD). During the study period, sap flow density was closely related to atmospheric evapotranspiration demand, being mainly affected by PAR, temperature and VPD. A model was developed that directly linked sap flow density with climatic variables, and good correlation between measured and simulated sap flow density was observed under different climate conditions. The accuracy of the simulation improved significantly when the time-lag effect was taken into consideration, although the model underestimated low and nighttime sap flow densities, probably because of plant physiological characteristics.
A Theory of Grain Clustering in Turbulence: The Origin and Nature of Large Density Fluctuations
Hopkins, Philip F
2016-01-01
We propose a theory for the density fluctuations of aerodynamic grains embedded in a turbulent, gravitating gas disk. The theory combines calculations for the average behavior of grains encountering a single turbulent eddy, with a hierarchical description of the eddy velocity statistics. We show that this makes analytic predictions for a wide range of quantities, including: the distribution of volume-average grain densities, the power spectrum and correlation functions of grain density fluctuations, and the maximum volume density of grains reached. For each, we predict how these scale as a function of grain stopping/friction time (t_stop), spatial scale, grain-to-gas mass ratio, strength of the turbulence (alpha), and detailed disk properties (orbital frequency, sound speed). We test these against numerical simulations and find good agreement over a huge parameter space. Results from 'turbulent concentration' simulations and laboratory experiments are also predicted as a special case. We predict that vortices...
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle.
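The universal upper bound referred to is eta_C/(2 - eta_C) from the cited Esposito et al. paper; a quick numerical check against the Curzon-Ahlborn efficiency at maximum power, with illustrative reservoir temperatures:

```python
import math

T_c, T_h = 300.0, 600.0                  # illustrative reservoir temperatures, K
eta_C = 1.0 - T_c / T_h                  # Carnot efficiency = 0.5
eta_CA = 1.0 - math.sqrt(T_c / T_h)      # Curzon-Ahlborn efficiency at maximum power
eta_bound = eta_C / (2.0 - eta_C)        # proposed universal bound (Esposito et al.)

print(round(eta_CA, 3), round(eta_bound, 3))  # → 0.293 0.333
assert eta_CA <= eta_bound <= eta_C
```

The ordering eta_CA <= eta_C/(2 - eta_C) <= eta_C holds for any temperature ratio, which is why a measured efficiency exceeding the middle value is notable.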
On some problems of the maximum entropy ansatz
K Bandyopadhyay; K Bhattacharyya; A K Bhattacharyya
2000-03-01
Some problems associated with the use of the maximum entropy principle, namely, (i) possible divergence of the series that is exponentiated, (ii) input-dependent asymptotic behaviour of the density function resulting from the truncation of the said series, and (iii) non-vanishing of the density function at the boundaries of a finite domain are pointed out. Prescriptions for remedying the aforesaid problems are put forward. Pilot calculations involving the ground quantum eigenenergy states of the quartic oscillator, the particle-in-a-box model, and the classical Maxwellian speed and energy distributions lend credence to our approach.
Smoothed log-concave maximum likelihood estimation with applications
Chen, Yining
2011-01-01
We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.
Reaching Diverse Audiences through NOAO Education Programs
Pompea, Stephen M.; Sparks, R. T.; Walker, C. E.
2009-01-01
NOAO education programs are designed to reach diverse audiences. Examples described in this poster include the Hands-On Optics Project nationwide, an extension of the Hands-On Optics program at Boys and Girls Clubs in Arizona and in Hawaii, a professional development program for Navajo and Hopi teachers, a number of programs for the Tohono O'odham Nation, and a project collecting and reviewing Spanish language astronomy materials. Additionally NOAO is also involved in several local outreach projects for diverse and underserved audiences.
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10 samples. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
Mass mortality of the vermetid gastropod Ceraesignum maximum
Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.
2016-09-01
Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.
Mitigation of maximum world oil production: Shortage scenarios
Hirsch, Robert L. [Management Information Services, Inc., 723 Fords Landing Way, Alexandria, VA 22314 (United States)
2008-02-15
A framework is developed for planning the mitigation of the oil shortages that will be caused by world oil production reaching a maximum and going into decline. To estimate potential economic impacts, a reasonable relationship between percent decline in world oil supply and percent decline in world GDP was determined to be roughly 1:1. As a limiting case for decline rates, giant fields were examined. Actual oil production from Europe and North America indicated significant periods of relatively flat oil production (plateaus). However, before entering its plateau period, North American oil production went through a sharp peak and steep decline. Examination of a number of future world oil production forecasts showed multi-year rollover/roll-down periods, which represent pseudoplateaus. Consideration of resource nationalism posits an Oil Exporter Withholding Scenario, which could potentially overwhelm all other considerations. Three scenarios for mitigation planning resulted from this analysis: (1) A Best Case, where maximum world oil production is followed by a multi-year plateau before the onset of a monotonic decline rate of 2-5% per year; (2) A Middling Case, where world oil production reaches a maximum, after which it drops into a long-term, 2-5% monotonic annual decline; and finally (3) A Worst Case, where the sharp peak of the Middling Case is degraded by oil exporter withholding, leading to world oil shortages growing potentially more rapidly than 2-5% per year, creating the most dire world economic impacts. (author)
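A 2-5% monotonic decline compounds quickly. The sketch below uses an illustrative production level at the maximum (the 85 Mb/d figure is an assumption, not from the paper) to show the shortfall after a decade for the two bounding rates; via the roughly 1:1 relationship above, the implied GDP impact is of similar magnitude.

```python
start = 85.0  # hypothetical world production at the maximum, million barrels/day
for decline in (0.02, 0.05):
    supply = start * (1.0 - decline) ** 10
    shortfall_pct = 100.0 * (1.0 - supply / start)
    print(f"{decline:.0%}/yr decline -> {shortfall_pct:.1f}% below peak after 10 years")
```

This prints shortfalls of about 18.3% and 40.1%, bracketing the range the three scenarios are built around.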
Li Cheng-Bin; Li Ming-Kai; Yin Dong; Liu Fu-Qing; Fan Xiang-Jun
2005-01-01
A first principles study of the electronic properties and bulk modulus (B0) of the fcc and bcc transition metals, transition metal carbides and nitrides is presented. The calculations were performed by the plane-wave pseudopotential method in the framework of density functional theory with the local density approximation. The density of states and the valence charge densities of these solids are plotted. The results show that B0 does not vary monotonically as the number of valence d electrons increases: B0 reaches a maximum and then decreases for each of the four classes of solids. This is related to the occupation of the bonding and anti-bonding states in the solid. The value of the valence charge density at the midpoint between the two nearest metal atoms tends to be proportional to B0.
Qing-song Yan; Huan Yu; Gang Lu; Bo-wen Xiong; Suai Xu
2016-01-01
The density of vacuum counter-pressure cast aluminum alloy samples under grade-pressuring conditions was studied. The effect of grade pressure difference and time on the density of aluminum alloys was discussed, and the solidification feeding model under grade-pressuring conditions was established. The results indicate that the grade-pressured solidification feeding ability of vacuum counter-pressure casting mainly depends on grade pressure difference and time. With the increase of grade pressure difference, the density of all the aluminum alloy samples increases, and the trend of change in density from the pouring gate to the top location is first decreasing gradually and then increasing. In addition, in obtaining the maximum density, the optimal grade-pressuring time is different for samples with different wall thicknesses, and the solidification time when the solid volume fraction of the aluminum alloy reaches about 0.65 appears to be the optimal beginning time for grade-pressuring.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the criterion for determining the auto-correlation and cross-correlation functions. The Toeplitz equations and the Levinson algorithm are used to derive the iterative formula for the prediction-error filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
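The Levinson algorithm mentioned above can be sketched generically. This is a textbook Levinson-Durbin recursion for the Toeplitz normal equations, not the authors' code; note how the update keeps every reflection coefficient below 1 in magnitude, which is exactly the stability property the abstract cites:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz prediction equations recursively.
    r: autocorrelation sequence r[0..order].
    Returns prediction-error filter a (a[0]=1), reflection coefficients k,
    and the final prediction error E."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    E = r[0]
    k = np.zeros(order)
    for m in range(1, order + 1):
        # acc = sum_{i=0}^{m-1} a[i] * r[m-i]
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k[m - 1] = -acc / E
        # order update: a_new[i] = a[i] + k * a[m-i], i = 1..m
        a[1:m + 1] = a[1:m + 1] + k[m - 1] * a[m - 1::-1]
        E *= 1.0 - k[m - 1] ** 2   # error shrinks; stable while |k| < 1
    return a, k, E

# AR(1)-like autocorrelation r[m] = 0.5**m: one coefficient should suffice
r = np.array([1.0, 0.5, 0.25, 0.125])
a, k, E = levinson_durbin(r, 3)
print("a =", a, " k =", k, " E =", E)
```

For this toy sequence the recursion recovers a = [1, -0.5, 0, 0]: the higher-order reflection coefficients vanish because an order-1 predictor already whitens the series.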
Can donated media placements reach intended audiences?
Cooper, Crystale Purvis; Gelb, Cynthia A; Chu, Jennifer; Polonec, Lindsey
2013-09-01
Donated media placements for public service announcements (PSAs) can be difficult to secure, and may not always reach intended audiences. Strategies used by the Centers for Disease Control and Prevention's (CDC) Screen for Life: National Colorectal Cancer Action Campaign (SFL) to obtain donated media placements include producing a diverse mix of high-quality PSAs, co-branding with state and tribal health agencies, securing celebrity involvement, monitoring media trends to identify new distribution opportunities, and strategically timing the release of PSAs. To investigate open-ended recall of PSAs promoting colorectal cancer screening, CDC conducted 12 focus groups in three U.S. cities with men and women either nearing age 50 years, when screening is recommended to begin, or aged 50-75 years who were not in compliance with screening guidelines. In most focus groups, multiple participants recalled exposure to PSAs promoting colorectal cancer screening, and most of these individuals reported having seen SFL PSAs on television, in transit stations, or on the sides of public buses. Some participants reported exposure to SFL PSAs without prompting from the moderator, as they explained how they learned about the disease. Several participants reported learning key campaign messages from PSAs, including that colorectal cancer screening should begin at age 50 years and screening can find polyps so they can be removed before becoming cancerous. Donated media placements can reach and educate mass audiences, including millions of U.S. adults who have not been screened appropriately for colorectal cancer.
Extended-reach wells tap outlying reserves
Nazzal, G. (Eastman Teleco, Houston, TX (United States))
1993-03-01
Extended-reach drilling (ERD) is being used to exploit fields and reserves that are located far from existing platforms. Effective wellbore placement from fewer platforms can reduce development costs, maximize production and increase reserve recovery. Six wells drilled offshore in the US, North Sea and Australia illustrate how to get the most economic benefit from available infrastructure. These wells are divided into three categories by depth (shallow, medium and deep). Vertical depths of these wells range from 963 to 12,791 ft TVD and displacements range from 4,871 to 23,917 ft. Important factors for successful extended-reach drilling included: careful, comprehensive pre-planning; adequate cuttings removal in all sections; hole stability in long, exposed intervals; torque and drag modeling of drilling BHAs, casing and liners; buoyancy-assisted casing techniques where appropriate; critical modifications to drilling rig and top drive, for medium and deep ERD; modified power swivels for shallow operations; drill pipe rubbers or other casing protection during extended periods of drill string rotation; heavy-wall casing across anticipated high-wear areas; survey accuracy and frequency; sound drilling practices and creativity to accomplish goals and objectives. This paper reviews the case histories of these sites and records planning and design procedures.
Napa River Restoration Project: Rutherford Reach Completion and Oakville to Oak Knoll Reach
Information about the SFBWQP Napa River Restoration Project: Rutherford Reach Completion/Oakville to Oak Knoll, part of an EPA competitive grant program to improve SF Bay water quality focused on restoring impaired waters and enhancing aquatic resources.
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power; these quantities are found by differentiating the expression for power and locating its maximum. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
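The differentiation step can be illustrated numerically. The single-diode I-V model and every parameter value below are hypothetical stand-ins, not the article's panel data; the point is only that the maximum of P(V) = V·I(V) sits where dP/dV = 0:

```python
import math

# Hypothetical single-diode panel model (illustrative parameters only).
I_SC = 5.0          # short-circuit current, A
I_0 = 1e-9          # diode saturation current, A
V_T = 0.0258 * 72   # thermal voltage times an assumed 72 cells, V

def current(v):
    return I_SC - I_0 * (math.exp(v / V_T) - 1.0)

def power(v):
    return v * current(v)

def d_power(v, h=1e-6):
    """Central-difference estimate of dP/dV."""
    return (power(v + h) - power(v - h)) / (2 * h)

# dP/dV decreases monotonically for this model, so bisect for its zero.
lo, hi = 0.0, 45.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if d_power(mid) > 0 else (lo, mid)
v_mp = 0.5 * (lo + hi)
print(f"V_mp = {v_mp:.2f} V, I_mp = {current(v_mp):.2f} A, "
      f"P_mp = {power(v_mp):.1f} W")
```

The same root-finding idea applies at each time of day once the measured voltage-current relationship is substituted for the model.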
Stone, J. R.; Danielewicz, P.; Iwata, Y.
2017-07-01
Background: The distribution of protons and neutrons in the matter created in heavy-ion collisions is one of the main points of interest for collision physics, especially at supranormal densities. These distributions are the basis for predictions of the density dependence of the symmetry energy and the density range that can be achieved in a given colliding system. We report results of the first systematic simulation of proton and neutron density distributions in central heavy-ion collisions within the beam energy range E_beam <= 800 MeV/nucleon. The symmetric 40Ca+40Ca, 48Ca+48Ca, 100Sn+100Sn, and 120Sn+120Sn and asymmetric 40Ca+48Ca and 100Sn+120Sn systems were chosen for the simulations. Purpose: We simulate the development of proton and neutron densities and asymmetries as a function of initial state, beam energy, and system size in the selected collisions in order to guide further experiments pursuing the density dependence of the symmetry energy. Methods: The Boltzmann-Uehling-Uhlenbeck (pBUU) transport model with four empirical models for the density dependence of the symmetry energy was employed. Results of simulations using pure Vlasov dynamics were added for completeness. In addition, the time-dependent Hartree-Fock (TDHF) model, with the SV-bas Skyrme interaction, was used to model the heavy-ion collisions at E_beam <= 40 MeV/nucleon. The maximum proton and neutron densities ρ_p^max and ρ_n^max, reached in the course of a collision, were determined from the time evolution of ρ_p and ρ_n. Results: The highest total densities predicted at E_beam = 800 MeV/nucleon were of the order of ~2.5 ρ_0 (ρ_0 = 0.16 fm^-3) for both Sn and Ca systems. They were found to be only weakly dependent on the initial conditions, beam energy, system size, and the model of the symmetry energy. The proton-neutron asymmetry δ = (ρ_n^max - ρ_p^max)/(ρ_n^max + ρ_p^max) at maximum density does depend, though, on these parameters. The highest value of δ found in all systems and at all investigated beam
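The asymmetry measure defined in the abstract is a one-line computation; the densities plugged in below are illustrative values, not the paper's results:

```python
def asymmetry(rho_n_max, rho_p_max):
    """Proton-neutron asymmetry delta at maximum density (dimensionless):
    delta = (rho_n^max - rho_p^max) / (rho_n^max + rho_p^max)."""
    return (rho_n_max - rho_p_max) / (rho_n_max + rho_p_max)

RHO_0 = 0.16  # saturation density, fm^-3, as quoted in the abstract

# Illustrative peak densities in units of fm^-3 (not the paper's numbers):
print(asymmetry(1.3 * RHO_0, 1.2 * RHO_0))
```

The measure is zero for equal peak densities and approaches +1 (or -1) as one species completely dominates.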
QSPR prediction of physico-chemical properties for REACH.
Dearden, J C; Rotureau, P; Fayet, G
2013-01-01
For registration of a chemical, European Union REACH legislation requires information on the relevant physico-chemical properties of the chemical. Predicted property values can be used when the predictions can be shown to be valid and adequate. The relevant physico-chemical properties that are amenable to prediction are: melting/freezing point, boiling point, relative density, vapour pressure, surface tension, water solubility, n-octanol-water partition coefficient, flash point, flammability, explosive properties, self-ignition temperature, adsorption/desorption, dissociation constant, viscosity, and air-water partition coefficient (Henry's law constant). Published quantitative structure-property relationship (QSPR) methods for all of these properties are discussed, together with relevant property prediction software, as an aid for those wishing to use predicted property values in submissions to the European Chemicals Agency (ECHA).
Reach and get capability in a computing environment
Bouchard, Ann M [Albuquerque, NM]; Osbourn, Gordon C [Albuquerque, NM]
2012-06-05
A reach and get technique includes invoking a reach command from a reach location within a computing environment. A user can then navigate to an object within the computing environment and invoke a get command on the object. In response to invoking the get command, the computing environment automatically navigates back to the reach location and the object is copied into the reach location.
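A minimal sketch of the two-command interaction described above; the class, method names, and data layout are all invented for illustration and are not from the patent:

```python
# Hypothetical model of the reach-and-get technique: "reach" remembers the
# destination, "get" copies an object there and navigates back.
class Environment:
    def __init__(self):
        self.locations = {}        # location name -> list of contained objects
        self._reach_target = None  # pending reach location, if any

    def reach(self, location):
        """Invoke 'reach' at the location that should receive an object."""
        self.locations.setdefault(location, [])
        self._reach_target = location

    def get(self, obj):
        """Invoke 'get' on an object found elsewhere: the object is copied
        into the remembered reach location, and the environment navigates
        back there (modeled by returning that location)."""
        if self._reach_target is None:
            raise RuntimeError("no pending reach command")
        self.locations[self._reach_target].append(obj)
        target, self._reach_target = self._reach_target, None
        return target

env = Environment()
env.reach("report.doc")            # 1. reach from the destination
back_at = env.get("figure-3.png")  # 2. navigate away, get the object
print(back_at, env.locations["report.doc"])
```

The point of the pattern is that the user never has to retrace the navigation manually: the destination is captured once, up front.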
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP, and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied, the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: the global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential. (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR; under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage. (3) Without such a "training" period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high. (4) We observe that the DC-EPR maximum is achieved within a time, T_e, the evolution time, which scales as a power-law function of the applied voltage. (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved; yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
Efficiency at Maximum Power of Low-Dissipation Carnot Engines
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; van den Broeck, Christian
2010-10-01
We study the efficiency at maximum power, η*, of engines performing finite-time Carnot cycles between a hot and a cold reservoir at temperatures Th and Tc, respectively. For engines reaching Carnot efficiency ηC=1-Tc/Th in the reversible limit (long cycle time, zero dissipation), we find in the limit of low dissipation that η* is bounded from above by ηC/(2-ηC) and from below by ηC/2. These bounds are reached when the ratio of the dissipation during the cold and hot isothermal phases tends, respectively, to zero or infinity. For symmetric dissipation (ratio one) the Curzon-Ahlborn efficiency ηCA=1-√(Tc/Th) is recovered.
Efficiency at maximum power of low-dissipation Carnot engines.
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; Van den Broeck, Christian
2010-10-01
We study the efficiency at maximum power, η*, of engines performing finite-time Carnot cycles between a hot and a cold reservoir at temperatures Th and Tc, respectively. For engines reaching Carnot efficiency ηC=1-Tc/Th in the reversible limit (long cycle time, zero dissipation), we find in the limit of low dissipation that η* is bounded from above by ηC/(2-ηC) and from below by ηC/2. These bounds are reached when the ratio of the dissipation during the cold and hot isothermal phases tends, respectively, to zero or infinity. For symmetric dissipation (ratio one) the Curzon-Ahlborn efficiency ηCA=1-√(Tc/Th) is recovered.
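The stated bounds are easy to verify numerically: for any Tc < Th, the Curzon-Ahlborn value 1 - √(Tc/Th) falls between ηC/2 and ηC/(2 - ηC). A quick check, with arbitrary illustrative temperatures:

```python
import math

# Check that eta_CA = 1 - sqrt(Tc/Th) lies inside the low-dissipation bounds
# eta_C/2 <= eta* <= eta_C/(2 - eta_C), with eta_C = 1 - Tc/Th.
def carnot(tc, th):
    return 1.0 - tc / th

def curzon_ahlborn(tc, th):
    return 1.0 - math.sqrt(tc / th)

for tc, th in [(300.0, 400.0), (300.0, 600.0), (77.0, 300.0)]:
    ec, eca = carnot(tc, th), curzon_ahlborn(tc, th)
    assert ec / 2 <= eca <= ec / (2 - ec)
    print(f"Tc/Th={tc/th:.3f}: "
          f"{ec/2:.3f} <= eta_CA={eca:.3f} <= {ec/(2-ec):.3f}")
```

Writing x = Tc/Th, the inequalities reduce to 1 <= (1+√x)/2·... <= (1+√x)/(1+x), both of which hold for x in (0, 1), so the numerical check merely confirms the algebra.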
Speeded reaching movements around invisible obstacles.
Todd E Hudson
We analyze the problem of obstacle avoidance from a Bayesian decision-theoretic perspective using an experimental task in which reaches around a virtual obstacle were made toward targets on an upright monitor. Subjects received monetary rewards for touching the target and incurred losses for accidentally touching the intervening obstacle. The locations of target-obstacle pairs within the workspace were varied from trial to trial. We compared human performance to that of a Bayesian ideal movement planner (one that chooses motor strategies maximizing expected gain), using the Dominance Test employed in Hudson et al. (2007). The ideal movement planner suffers from the same sources of noise as the human, but selects movement plans that maximize expected gain in the presence of that noise. We find good agreement between the predictions of the model and actual performance in most but not all experimental conditions.
Priority setting in the REACH system.
Hansson, Sven Ove; Rudén, Christina
2006-04-01
Due to the large number of chemicals for which toxicological and ecotoxicological information is lacking, priority setting for data acquisition is a major concern in chemicals regulation. In the current European system, two administrative priority-setting criteria are used, namely novelty (i.e., time of market introduction) and production volume. In the proposed Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) system, the novelty criterion is no longer used, and production volume will be the main priority-setting criterion for testing requirements, supplemented in some cases with hazard indications obtained from QSAR modelling. This system for priority setting has severe weaknesses. In this paper we propose that a multicriteria system should be developed that includes at least three additional criteria: chemical properties, results from initial testing in a tiered system, and voluntary testing for which efficient incentives can be created. Toxicological and decision-theoretical research is needed to design testing systems with validated priority-setting mechanisms.
Reaching Consensus by Allowing Moments of Indecision
Svenkeson, A.; Swami, A.
2015-10-01
Group decision-making processes often turn into a drawn-out and costly battle between two opposing subgroups. Using analytical arguments based on a master equation description of the opinion dynamics occurring in a three-state model of cooperatively interacting units, we show how the capability of a social group to reach consensus can be enhanced when there is an intermediate state for indecisive individuals to pass through. The time spent in the intermediate state must be relatively short compared to that in the two polar states in order to create the beneficial effect. Furthermore, the cooperation between individuals must not be too low, as the benefit to consensus is possible only when the cooperation level exceeds a specific threshold. We also discuss how zealots, agents that remain in one state forever, can affect the consensus among the rest of the population by counteracting the benefit of the intermediate state or making it virtually impossible for an opposition to form.
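The three-state structure can be made concrete with a toy agent-based sketch. This is not the authors' master-equation analysis; the update rules, parameter names, and values below are invented purely to illustrate a polar pair of states with an intermediate "undecided" state between them:

```python
import random

# Toy three-state opinion model: agents hold -1 or +1, or the intermediate
# undecided state 0. A committed agent that samples a disagreeing peer first
# becomes undecided; an undecided agent commits to a sampled peer's side.
# "cooperation" is the probability of responding to the sampled peer at all.
def simulate(n=200, cooperation=0.8, steps=400_000, seed=1):
    random.seed(seed)
    state = [random.choice((-1, 1)) for _ in range(n)]
    total = sum(state)  # running sum of opinions, updated incrementally
    for _ in range(steps):
        i, j = random.randrange(n), random.randrange(n)
        if random.random() > cooperation or state[j] == 0:
            continue
        if state[i] == 0:                 # undecided: commit to peer's side
            state[i] = state[j]
            total += state[j]
        elif state[i] != state[j]:        # disagreement: become undecided
            total -= state[i]
            state[i] = 0
        if abs(total) == n:               # full consensus reached
            break
    return total / n                      # mean opinion in [-1, 1]

print(f"final mean opinion: {simulate():+.2f}")
```

In this sketch the intermediate state acts as a buffer: agents never flip directly between the poles, which is the structural feature whose effect on consensus time the paper analyzes exactly.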
Morphodynamics of a pseudomeandering gravel bar reach
Bartholdy, J.; Billi, P.
2002-01-01
A large number of rivers in Tuscany have channel planforms, which are neither straight nor what is usually understood as meandering. In the typical case, they consist of an almost straight, slightly incised main channel fringed with large lateral bars and lunate-shaped embayments eroded into the former flood plain. In the past, these rivers have not been recognised as an individual category and have often been considered to be either braided or meandering. It is suggested here that this type of river planform be termed pseudomeandering. A typical pseudomeandering river (the Cecina River) is described and analysed to investigate the main factors responsible for producing this channel pattern. A study reach (100×300 m) was surveyed in detail and related to data on discharge, channel changes after floods and grain-size distribution of bed sediments. During 18 months of topographic monitoring, the inner lateral bar in the study reach expanded and migrated towards the concave outer bank which, concurrently, retreated by as much as 25 m. A sediment balance was constructed to analyse bar growth and bank retreat in relation to sediment supply and channel morphology. The conditions necessary to maintain the pseudomeandering morphology of these rivers by preventing them from developing a meandering planform, are discussed and interpreted as a combination of a few main factors such as the flashy character of floods, sediment supply (influenced by both natural processes and human impact), the morphological effects of discharges with contrasting return intervals and the short duration of flood events. Finally, the channel response to floods with variable sediment transport capacity (represented by bed shear stress) is analysed using a simple model. It is demonstrated that bend migration is associated with moderate floods while major floods are responsible for the development of chute channels, which act to suppress bend growth and maintain the low sinuosity configuration of
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
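The "frequency sampling" baseline that the MaxEnt approach is compared against amounts to transition counting. A minimal sketch of that baseline follows (the MaxEnt reconstruction itself, which maximizes entropy subject to moment constraints, is not shown):

```python
import numpy as np

# Frequency-sampling estimate of a Markov transition matrix: count observed
# transitions in an empirical series and normalize each row. With short
# samples this estimator is noisy, which is the regime where the paper
# argues MaxEnt reconstruction does better.
def estimate_transition_matrix(series, n_states):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(series[:-1], series[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0  # avoid 0/0 for states never visited
    return counts / rows

# Illustrative short two-state series (not data from the paper):
series = [0, 1, 1, 0, 1, 1, 1, 0, 0, 1]
P = estimate_transition_matrix(series, 2)
print(P)
```

Each row of the result is a probability distribution over next states, estimated from only nine observed transitions; halving the sample would visibly degrade it, which is the sense in which shorter histories demand a better estimator.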
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Continental reach: The Westcoast Energy story
Newman, P. C.
2002-07-01
A historical account is given of the spectacular success that was Westcoast Energy Inc., a Canadian natural gas giant that charted a wilderness pipeline from natural gas fields in Canada's sub-arctic solitude. The beginning of the company is traced to an event in 1934 when near the bank of the Pouce Coupe River, close to the Alberta-British Columbia border, Frank McMahon, a solitary wildcatter and the eventual founder of the company, first sighted the fiery inferno of a runaway wildcat well, drilled by geologists of the Imperial Oil Company during their original search for the Canadian petroleum basin's motherlode. It was on this occasion in 1934 that McMahon first conceived a geological profile that connected the gas-bearing sandstone of Pouce Coupe with the reservoir rock of the biggest natural gas field of Alberta, and a pipeline from this sandstone storehouse across the rugged heart of British Columbia to Vancouver, and south into the United States. It took the better part of a quarter century to realize the dream of that pipeline which, in due course, turned out to be only the first step towards reaching the top rank of Canadian corporations in operational and financial terms, and becoming one of only a handful in terms of a story that became a Canadian corporate legend. By chronicling the lives and contributions of the company's founder and senior officials over the years, the book traces the company's meteoric rise from a gleam in its founder's eye to a cautious regional utility, and to the aggressive Canadian adventurer that went on to burst the boundaries of its Pacific Coast world, until the continental reach of its operations and interests run from Canada's Pacific shoreline to its Atlantic basins and Mexico's Campeche Bay to Alaska's Prudhoe Bay. The company's independent existence came to an end in 2002 when Westcoast Energy, by then a $15 billion operation, was acquired by Duke Energy Limited of North
M. P. Chidichimo
2009-11-01
We study the contribution of eastern-boundary density variations to sub-seasonal and seasonal anomalies of the strength and vertical structure of the Atlantic Meridional Overturning Circulation (AMOC) at 26.5° N, by means of the RAPID/MOCHA mooring array between April 2004 and October 2007. The major density anomalies are found in the upper 500 m, and they are often coherent down to 1400 m. The densities have 13-day fluctuations that are apparent down to 3500 m. The two strategies for measuring eastern-boundary density – a tall offshore mooring (EB1) and an array of moorings on the continental slope (EBH) – show little correspondence in terms of amplitude, vertical structure, and frequency distribution of the resulting basin-wide integrated transport fluctuations, implying that there are significant transport contributions between EB1 and EBH. Contrary to the original planning, measurements from EB1 cannot serve as backup or replacement for EBH: density needs to be measured directly at the continental slope to compute the full-basin density gradient. Fluctuations in density at EBH generate transport variability of 2 Sv rms in the AMOC, while the overall AMOC variability is 4.9 Sv rms. There is a pronounced deep-reaching seasonal cycle in density at the eastern boundary, which is apparent between 100 m and 1400 m, with maximum positive anomalies in spring and maximum negative anomalies in autumn. These changes drive anomalous southward upper mid-ocean flow in spring, implying maximum reduction of the AMOC, and vice-versa in autumn. The amplitude of the seasonal cycle of the AMOC arising from the eastern-boundary densities is 5.2 Sv peak-to-peak, dominating the 7.0 Sv peak-to-peak seasonal cycle of the total AMOC. Our analysis suggests that the seasonal cycle in density may be forced by the strong near-coastal seasonal cycle in wind stress curl.
W. E. Johns
2010-04-01
We study the contribution of eastern-boundary density variations to sub-seasonal and seasonal anomalies of the strength and vertical structure of the Atlantic Meridional Overturning Circulation (AMOC) at 26.5° N, by means of the RAPID/MOCHA mooring array between April 2004 and October 2007. The major density anomalies are found in the upper 500 m, and they are often coherent down to 1400 m. The densities have 13-day fluctuations that are apparent down to 3500 m. The two strategies for measuring eastern-boundary density – a tall offshore mooring (EB1) and an array of moorings on the continental slope (EBH) – show little correspondence in terms of amplitude, vertical structure, and frequency distribution of the resulting basin-wide integrated transport fluctuations, implying that there are significant transport contributions between EB1 and EBH. Contrary to the original planning, measurements from EB1 cannot serve as backup or replacement for EBH: density needs to be measured directly at the continental slope to compute the full-basin density gradient. Fluctuations in density at EBH generate transport variability of 2 Sv rms in the AMOC, while the overall AMOC variability is 4.8 Sv rms. There is a pronounced deep-reaching seasonal cycle in density at the eastern boundary, which is apparent between 100 m and 1400 m, with maximum positive anomalies in spring and maximum negative anomalies in autumn. These changes drive anomalous southward upper mid-ocean flow in spring, implying maximum reduction of the AMOC, and vice-versa in autumn. The amplitude of the seasonal cycle of the AMOC arising from the eastern-boundary densities is 5.2 Sv peak-to-peak, dominating the 6.7 Sv peak-to-peak seasonal cycle of the total AMOC. Our analysis suggests that the seasonal cycle in density may be forced by the strong near-coastal seasonal cycle in wind stress curl.
Important ATLAS Forward Calorimeter Milestone Reached
Loch, P.
The ATLAS Forward Calorimeter working group has reached an important milestone in the production of their detectors. The mechanical assembly of the first electromagnetic module (FCal1C) has been completed at the University of Arizona on February 25, 2002, only ten days after the originally scheduled date. The photo shows the University of Arizona FCal group in the clean room, together with the assembled FCal1C module. The module consists of a stack of 18 round copper plates, each about one inch thick. Each plate is about 90 cm in diameter, and has 12260 precision-drilled holes in it, to accommodate the tube/rod electrode assembly. The machining of the plates, which was done at the Science Technology Center (STC) at Carleton University, Ottawa, Canada, required high precision to allow for easy insertion of the electrode copper tube. The plates have been carefully cleaned at the University of Arizona, to remove any machining residue and metal flakes. This process alone took about eleven weeks. Exactly 122...
LEP Dismantling Reaches Half-Way Stage
2001-01-01
LEP's last superconducting module leaves its home port... Just seven months into the operation, LEP dismantling is forging ahead. Two of the eight arcs which form the tunnel have already been emptied and the last of the accelerator's radiofrequency (RF) cavities has just been raised to the surface. The 160 people working on LEP dismantling have reason to feel pleased with their progress. All of the accelerator's 72 superconducting RF modules have already been brought to the surface, with the last one being extracted on 2nd May. This represents an important step in the dismantling process, as head of the project, John Poole, explains. 'This was the most delicate part of the project, because the modules are very big and they could only come out at one place', he says. The shaft at point 1.8 through which the RF cavity modules pass is 18 metres in diameter, while each module is 11.5 metres long. Some modules had to travel more than 10 kilometres to reach the shaft. ... is lifted up the PM 1.8 shaft, after a m...
Media perspective - new opportunities for reaching audiences
Haswell, Katy
2007-08-01
The world of media is experiencing a period of extreme and rapid change with the rise of internet television and the download generation. Many young people no longer watch standard TV. Instead, they go on-line, talking to friends and downloading pictures, videos, music clips to put on their own websites and watch/ listen to on their laptops and mobile phones. Gone are the days when TV controllers determined what you watched and when you watched it. Now the buzzword is IPTV, Internet Protocol Television, with companies such as JOOST offering hundreds of channels on a wide range of subjects, all of which you can choose to watch when and where you wish, on your high-def widescreen with stereo surround sound at home or on your mobile phone on the train. This media revolution is changing the way organisations get their message out. And it is encouraging companies such as advertising agencies to be creative about new ways of accessing audiences. The good news is that we have fresh opportunities to reach young people through internet-based media and material downloaded through tools such as games machines, as well as through the traditional media. And it is important for Europlanet to make the most of these new and exciting developments.
Effects of aging on interjoint coordination during arm reaching
Marcus Vinicius da Silva
Introduction: Moving the arm towards an object is a complex task. Movements of the arm joints must be well coordinated in order to obtain a smooth and accurate hand trajectory. Most studies regarding reaching movements address young subjects; how interjoint coordination in the neural mechanisms underlying motor control changes across life stages is not yet known. Understanding these changes can lead to a better comprehension of neuromotor pathologies and therefore to more suitable therapies. Methods: Our purpose was to investigate interjoint coordination in three different age groups (children, young adults, elderly). Specific kinematic and kinetic variables were analyzed, focusing on defined parameters, to gain insight into arm coordination. Intersegmental dynamics was used to calculate shoulder and elbow torques assuming a 2-link segment model of the upper extremity (upper arm and forearm) with two frictionless joints (shoulder and elbow). A virtual reality environment was used to examine multidirectional planar reaching in three different directions (randomly presented). Results: Seven measures were computed to investigate group and interlimb differences: shoulder and elbow muscle torques (peak and impulse), work performed by the shoulder and elbow joints, maximum velocity, movement distance, distance error at final position, movement duration, and acceleration duration. Our data analysis showed differences in movement performance for all analyzed variables, at all ages. Conclusion: We found that the intersegmental dynamics for the interlimb (left/right) comparisons were similar for the elderly and children groups as compared to the young adults. In addition, the coordination and control of motor tasks change during life, becoming less effective in old age.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously
Planning of the Extended Reach well Dieksand 2; Planung der Extended Reach Bohrung Dieksand 2
Frank, U.; Berners, H. [RWE-DEA AG, Hamburg (Germany). Drilling Team Mittelplate und Dieksand; Hadow, A.; Klop, G.; Sickinger, W. [Wintershall AG Erdoelwerke, Barnstdorf (Germany); Sudron, K.
1998-12-31
The Mittelplate oil field lies at the southern edge of the Schleswig-Holstein Wadden Sea National Park, about 7 km west of the town of Friedrichskoog. Recoverable reserves are estimated at some 30 million tonnes of oil; at a production rate of 2,500 t/day, the field life is about 33 years. Because the transport capacity from the artificial island Mittelplate is limited, additional wells drilled from the island could not significantly increase production. From the summer of 1996 onwards, development of the reservoir from onshore was therefore investigated. A drilling team established in Hamburg in May 1997 was tasked with planning and drilling the extended reach well Dieksand 2. The planning phases for Dieksand 2 are presented, the planning parameters critical to the success of an extended reach well are explained, and ways are shown in which technical and geological risks were taken into account during planning and handled further once drilling had begun. (orig.)
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures constructed by the use of generator functions. Any divergence measure in the class separates into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Columbia River Estuary Ecosystem Classification Hydrogeomorphic Reach
Cannon, Charles M.; Ramirez, Mary F.; Heatwole, Danelle W.; Burke, Jennifer L.; Simenstad, Charles A.; O'Connor, Jim E.; Marcoe, Keith
2012-01-01
Estuarine ecosystems are controlled by a variety of processes that operate at multiple spatial and temporal scales. Understanding the hierarchical nature of these processes will aid in prioritization of restoration efforts. This hierarchical Columbia River Estuary Ecosystem Classification (henceforth "Classification") of the Columbia River estuary is a spatial database of the tidally-influenced reaches of the lower Columbia River, the tidally affected parts of its tributaries, and the landforms that make up their floodplains for the 230 kilometers between the Pacific Ocean and Bonneville Dam. This work is a collaborative effort between University of Washington School of Aquatic and Fishery Sciences (henceforth "UW"), U.S. Geological Survey (henceforth "USGS"), and the Lower Columbia Estuary Partnership (henceforth "EP"). Consideration of geomorphologic processes will improve the understanding of controlling physical factors that drive ecosystem evolution along the tidal Columbia River. The Classification is organized around six hierarchical levels, progressing from the coarsest, regional scale to the finest, localized scale: (1) Ecosystem Province; (2) Ecoregion; (3) Hydrogeomorphic Reach; (4) Ecosystem Complex; (5) Geomorphic Catena; and (6) Primary Cover Class. For Levels 4 and 5, we mapped landforms within the Holocene floodplain primarily by visual interpretation of Light Detection and Ranging (LiDAR) topography supplemented with aerial photographs, Natural Resources Conservation Service (NRCS) soils data, and historical maps. Mapped landforms are classified as to their current geomorphic function, the inferred process regime that formed them, and anthropogenic modification. Channels were classified primarily by a set of depth-based rules and geometric relationships. Classification Level 5 floodplain landforms ("geomorphic catenae") were further classified based on multivariate analysis of land-cover within the mapped landform area and attributed as "sub
Parallel explicit and implicit control of reaching.
Pietro Mazzoni
BACKGROUND: Human movement can be guided automatically (implicit control or attentively (explicit control. Explicit control may be engaged when learning a new movement, while implicit control enables simultaneous execution of multiple actions. Explicit and implicit control can often be assigned arbitrarily: we can simultaneously drive a car and tune the radio, seamlessly allocating implicit or explicit control to either action. This flexibility suggests that sensorimotor signals, including those that encode spatially overlapping perception and behavior, can be accurately segregated to explicit and implicit control processes. METHODOLOGY/PRINCIPAL FINDINGS: We tested human subjects' ability to segregate sensorimotor signals to parallel control processes by requiring dual (explicit and implicit control of the same reaching movement and testing for interference between these processes. Healthy control subjects were able to engage dual explicit and implicit motor control without degradation of performance compared to explicit or implicit control alone. We then asked whether segregation of explicit and implicit motor control can be selectively disrupted by studying dual-control performance in subjects with no clinically manifest neurologic deficits in the presymptomatic stage of Huntington's disease (HD). These subjects performed successfully under either explicit or implicit control alone, but were impaired in the dual-control condition. CONCLUSION/SIGNIFICANCE: The human nervous system can exert dual control on a single action, and is therefore able to accurately segregate sensorimotor signals to explicit and implicit control. The impairment observed in the presymptomatic stage of HD points to a possible crucial contribution of the striatum to the segregation of sensorimotor signals to multiple control processes.
Optimization of agitation and aeration conditions for maximum virginiamycin production.
Shioya, S; Morikawa, M; Kajihara, Y; Shimizu, H
1999-02-01
To maximize the productivity of virginiamycin, a commercially important antibiotic used as an animal feed additive, an empirical approach was employed in the batch culture of Streptomyces virginiae. The effects of dissolved oxygen (DO) concentration and agitation speed on the maximum cell concentration in the production phase, as well as on virginiamycin productivity, were investigated. To maintain the DO concentration in the fermentor at a given level, either the agitation speed or the inlet oxygen concentration of the supply gas was manipulated. Increasing the agitation speed was found to have a positive effect on antibiotic productivity, independent of the DO concentration. The optimum DO concentration, agitation speed, and addition of an autoregulator, virginiae butanolide C (VB-C), were determined to maximize virginiamycin productivity. The optimal strategy was to start the cultivation at 450 rpm and continue until the DO concentration reached 80%. After that point, the DO concentration was maintained at this level by changing the agitation speed, up to a maximum of 800 rpm. Addition of an optimal amount of the autoregulator VB-C resulted in maximal production of virginiamycin M (399 mg/l), about 1.8-fold the amount obtained previously.
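The cultivation strategy described above (constant 450 rpm until DO first reaches 80%, then agitation-based DO control capped at 800 rpm) can be sketched as a two-phase controller. The proportional control law and gain below are our illustrative assumptions, not taken from the paper:

```python
class DOController:
    """Two-phase agitation controller sketch: hold a fixed start speed until
    dissolved oxygen (DO, %) first reaches the target, then adjust agitation
    proportionally to hold DO at the target, within [rpm_start, rpm_max].
    The gain (rpm per % DO error) is a hypothetical tuning value."""

    def __init__(self, do_target=80.0, rpm_start=450.0, rpm_max=800.0, gain=5.0):
        self.do_target = do_target
        self.rpm_start = rpm_start
        self.rpm_max = rpm_max
        self.gain = gain
        self.rpm = rpm_start
        self.holding = False  # becomes True once DO first hits the target

    def update(self, do_percent):
        if not self.holding:
            if do_percent < self.do_target:
                return self.rpm            # phase 1: constant agitation
            self.holding = True            # DO reached target: start control
        error = self.do_target - do_percent  # positive when DO is too low
        self.rpm = min(self.rpm_max,
                       max(self.rpm_start, self.rpm + self.gain * error))
        return self.rpm
```

For example, a controller fed DO readings of 40% then 70% after reaching the target would first stay at 450 rpm and then raise agitation toward the 800 rpm cap.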
A case of rapid rock riverbed incision in a coseismic uplift reach and its implications
Huang, Ming-Wan; Pan, Yii-Wen; Liao, Jyh-Jong
2013-02-01
During the 1999 Chi-Chi earthquake (Mw = 7.6) in Taiwan, the coseismic displacement induced fault scarps and a pop-up structure in the Taan River. The fault scarps across the river experienced maximum vertical slip of 10 m, which disturbed the dynamic equilibrium of the fluvial system. As a result, rapid incision in the weak bedrock, with a maximum depth of 20 m, was activated within a decade after its armor layer was removed. This case provides an excellent opportunity for closely tracking and recording the progressive evolution of river morphology that is subjected to coseismic uplift. Based on multistaged orthophotographs and digital elevation model (DEM) data, the process of morphology evolution in the uplift reach was divided into four consecutive stages. Plucking is the dominant mechanism of bedrock erosion associated with channel incision and knickpoint migration. The astonishingly high rate of knickpoint retreat (KPR), as rapid as a few hundred meters per year, may be responsible for the rapid incision in the main channel. The reasons for the high rate of KPR are discussed in depth. The total length of the river affected by the coseismic uplift is 5 km: 1 km in the uplift reach and 4 km in the downstream reach. The downstream reach was affected by a reduction in sediment supply and increase in stream power. The KPR cut through the uplift reach within roughly a decade; further significant flooding in the future will mainly cause widening instead of deepening of the channel.
Spiking and LFP activity in PRR during symbolically instructed reaches
2011-01-01
The spiking activity in the parietal reach region (PRR) represents the spatial goal of an impending reach when the reach is directed toward or away from a visual object. The local field potentials (LFPs) in this region also represent the reach goal when the reach is directed to a visual object. Thus PRR is a candidate area for reading out a patient's intended reach goals for neural prosthetic applications. For natural behaviors, reach goals are not always based on the location of a visual obj...
Keil, Nina M; Pommereau, Marc; Patt, Antonia; Wechsler, Beat; Gygax, Lorenz
2017-02-01
Confined goats spend a substantial part of the day feeding. A poorly designed feeding place increases the risk of feeding in nonphysiological body postures, and even of injury. Scientifically validated information on suitable dimensions of feeding places for loose-housed goats is almost absent from the literature. The aim of the present study was, therefore, to determine feeding place dimensions that would allow goats to feed in a species-appropriate, relaxed body posture. A total of 27 goats with a height at the withers of 62 to 80 cm were included in the study. Goats were tested individually in an experimental feeding stall that allowed the height difference between the feed table, the standing area of the forelegs, and a feeding area step (difference in height between forelegs and hind legs) to be varied. The goats accessed the feed table via a palisade feeding barrier. The feed table was equipped with recesses at varying distances to the feeding barrier (5-55 cm in 5-cm steps) at angles of 30°, 60°, 90°, 120°, or 150° (feeding angle), which were filled with the goats' preferred food. In 18 trials, balanced for order across animals, each animal underwent all possible combinations of feeding area step (3 levels: 0, 10, and 20 cm) and of difference in height between feed table and standing area of forelegs (6 levels: 0, 5, 10, 15, 20, and 25 cm). The minimum and maximum reach at which the animals could take feed from the table in a relaxed body posture were determined for each combination. Statistical analysis was performed using mixed-effects models. The animals were able to feed with a relaxed posture when the feed table was at least 10 cm higher than the standing height of the goats' forelegs. Larger goats achieved smaller minimum reaches, and minimum reach increased if the goats' head and neck were angled. Maximum reach increased with increasing height at withers and height of the feed table. The presence of a feeding area step had no influence on minimum and
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
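The straight-line relationship mentioned above can be checked concretely for p = 2, where the unconstrained continuous maximizer is the zero-mean Gaussian whose differential entropy is linear in the logarithm of the L2 norm (the standard deviation). A numerical quadrature sketch:

```python
import math
import numpy as np

def gaussian_entropy_numeric(sigma, half_width=12.0, n=200001):
    """Differential entropy of N(0, sigma^2) by quadrature on a uniform
    grid; the endpoints carry negligible probability mass, so a plain
    Riemann sum is accurate here."""
    x = np.linspace(-half_width * sigma, half_width * sigma, n)
    p = np.exp(-x**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
    integrand = -p * np.log(p)
    return float(np.sum(integrand) * (x[1] - x[0]))

# Closed form: H = ln(sigma) + 0.5*ln(2*pi*e), i.e. linear in ln(L2 norm),
# matching the straight-line relationship described in the abstract.
for sigma in (0.5, 1.0, 3.0):
    closed_form = math.log(sigma) + 0.5 * math.log(2 * math.pi * math.e)
    assert abs(gaussian_entropy_numeric(sigma) - closed_form) < 1e-6
```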
J K YU; H SUN; L L ZHAO; Y H WANG; M Q YU; H L LUO; Z F XU; KAZUHIRO MATSUGI
2017-06-01
NiWP alloy coatings were prepared by electrodeposition, and the effects of ferrous chloride (FeCl$_2$), sodium tungstate (Na$_2$WO$_4$) and current density ($D_K$) on the properties of the coatings were studied. The results show that, upon increasing the concentration of FeCl$_2$, the Fe content of the coating initially increased and then levelled off; the deposition rate and microhardness of the coating decreased, while the cathodic current efficiency ($\eta$) initially increased and then decreased; at a FeCl$_2$ concentration of 3.6 gl$^{−1}$, the cathodic current efficiency reached its maximum of 74.23%. Upon increasing the concentration of Na$_2$WO$_4$, the W content and microhardness of the coatings increased, while the deposition rate and the cathodic current efficiency initially increased and then decreased. The cathodic current efficiency reached its maximum of 70.33% at a Na$_2$WO$_4$ concentration of 50 gl$^{−1}$, whereas the deposition rate reached its maximum of 8.67 $\mu$mh$^{−1}$ at a Na$_2$WO$_4$ concentration of 40 gl$^{−1}$. Upon increasing $D_K$, the deposition rate, microhardness, and Fe and W content of the coatings increased, while the cathodic current efficiency initially increased and then decreased. When $D_K$ was 4 A dm$^{−2}$, the current efficiency reached its maximum of 73.64%.
Reaching remote areas in Latin America.
Jaimes, R
1994-01-01
Poor communities in remote and inaccessible areas tend to not only be cut off from family planning education and services, but they are also deprived of basic primary health care services. Efforts to bring family planning to such communities and populations should therefore be linked with other services. The author presents three examples of programs to bring effective family planning services to remote communities in Central and South America. Outside of the municipal center in the Tuxtlas region of Mexico, education and health levels are low and people live according to ancient customs. Ten years ago with the help of MEXFAM, the IPPF affiliate in Mexico, two social promoters established themselves in the town of Catemaco to develop a community program of family planning and health care offering education and prevention to improve the quality of people's lives. Through their health brigades taking health services to towns without an established health center, the program has influenced an estimated 100,000 people in 50 villages and towns. The program also has a clinic. In Guatemala, the Family Welfare Association (APROFAM) gave bicycles to 240 volunteer health care workers to facilitate their outreach work in rural areas. APROFAM since 1988 has operated an integrated program to treat intestinal parasites and promote family planning in San Lucas de Toliman, an Indian town close to Lake Atitlan. Providing health care to more than 10,000 people, the volunteer staff has covered the entire department of Solola, reaching each family in the area. Field educators travel on motorcycles through the rural areas of Guatemala coordinating with the health volunteers the distribution of contraceptives at the community level. The Integrated Project's Clinic was founded in 1992 and currently carries out pregnancy and Pap tests, as well as general lab tests. Finally, Puna is an island in the middle of the Gulf of Guayaquil, Ecuador. Women on the island typically have 10
Zou, P. F.; Wang, H. P.; Yang, S. J.; Hu, L.; Wei, B.
2017-08-01
The densities of liquid Ni-Ti alloys were measured by the electrostatic levitation technique, and the maximum reduced undercooling ΔT/TL reached 0.23. Quite different from the linear relationship between density and temperature for liquid Ni45Ti55 and Ni55Ti45 alloys, the density of liquid Ni50Ti50 alloy displays a nonlinear dependence on temperature. Interestingly, the density of liquid Ni50Ti50 alloy rises increasingly rapidly as temperature decreases, which results from the more severe shrinking of interatomic distances at lower temperatures. In addition, the thermal expansion coefficient of liquid Ni50Ti50 alloy increases linearly with decreasing temperature.
XMM classroom competitions : reaching for the stars!
1999-09-01
Partnered by a unique education network, 'European Schoolnet'(*), ESA is today launching these three competitions for schools (age range: 8 to final year) in its Member States: draw a telescope, describe the benefits of space-based astronomy, or produce an astronomy observation proposal. Details can be found on the special competition website: http://sci.esa.int/xmm/competition "Draw me a telescope!" This competition for 8 to 12 year-olds asks the class to draw a telescope (inside a 20-50 cm diameter circle). The 14 winning entries, one per Member State, will be included in a specially-designed official XMM mission logo to go on the Ariane-5 launcher fairing for official unveiling on launch day. A representative of each winning class will be invited to Kourou for the launch. Deadline for entries: 8 October 1999. For full information on how to enter see: http://sci.esa.int/xmm/competition "What's new, Mr Galileo?" The essay competition for 13 to 15 year-olds challenges an English class, writing in the international language of space, to submit a single-page (500 words maximum) description of space-based astronomy and its benefits for humanity. The 14 winners, one per Member State, will be invited to Kourou to visit the Guiana Space Centre, Europe's spaceport, and witness final XMM launch preparations. Deadline for entries: 15 October 1999. For full information on how to enter see: http://sci.esa.int/xmm/competition. "Stargazing" In the final-year class competition, ESA is providing a unique opportunity to use the XMM telescope. Here, the physics class, assisted by the scientific community, has to submit an observation project. The 14 winning proposals will be put into practice in 2000 at a summer camp. Further details will be announced once XMM is in orbit. Note to editors: The X-ray Multi-Mirror mission is the second Cornerstone of ESA's Horizon 2000 Plus science programme. The telescope will revolutionise cosmic X-ray astronomy by harvesting far more X
Maximum organic loading rate for the single-stage wet anaerobic digestion of food waste.
Nagao, Norio; Tajima, Nobuyuki; Kawai, Minako; Niwa, Chiaki; Kurosawa, Norio; Matsuyama, Tatsushi; Yusoff, Fatimah Md; Toda, Tatsuki
2012-08-01
Anaerobic digestion of food waste was conducted at high OLR, from 3.7 to 12.9 kg-VS m(-3) day(-1), for 225 days. Periods without organic loading were arranged between the loading periods. Stable operation at an OLR of 9.2 kg-VS (15.0 kg-COD) m(-3) day(-1) was achieved with a high VS reduction (91.8%) and high methane yield (455 mL g-VS(-1)). The cell density increased in the periods without organic loading and reached 10.9×10(10) cells mL(-1) on day 187, around 15 times higher than that of the seed sludge. There was a significant correlation between OLR and saturated TSS in the sludge (y = 17.3e(0.1679x), r(2) = 0.996, P<0.05). A theoretical maximum OLR of 10.5 kg-VS (17.0 kg-COD) m(-3) day(-1) was obtained for mesophilic single-stage wet anaerobic digestion able to maintain stable operation with high methane yield and VS reduction.
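The reported OLR/TSS regression, y = 17.3e^(0.1679x), can be re-derived from the curve by linearising the exponential (ln y is linear in x). The OLR sample points below are synthetic values generated from the published fit, not the paper's measurements:

```python
import numpy as np

# Published fit: saturated TSS = 17.3 * exp(0.1679 * OLR)
a_true, b_true = 17.3, 0.1679
olr = np.array([3.7, 5.5, 7.4, 9.2, 12.9])   # kg-VS m^-3 day^-1 (synthetic points)
tss = a_true * np.exp(b_true * olr)          # saturated TSS from the fit

# Linearise: ln(tss) = ln(a) + b * olr, then a degree-1 least-squares fit
b_fit, log_a_fit = np.polyfit(olr, np.log(tss), 1)
a_fit = np.exp(log_a_fit)
```

Since the synthetic points lie exactly on the curve, the recovered coefficients match the published ones to floating-point precision; with real, noisy measurements the log-space fit weights small values more heavily than a direct nonlinear fit would.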
The Effects of Solar Maximum on the Earth's Satellite Population and Space Situational Awareness
Johnson, Nicholas L.
2012-01-01
The rapidly approaching maximum of Solar Cycle 24 will have wide-ranging effects not only on the number and distribution of resident space objects, but also on vital aspects of space situational awareness, including conjunction assessment processes. The best known consequence of high solar activity is an increase in the density of the thermosphere, which, in turn, increases drag on the vast majority of objects in low Earth orbit. The most prominent evidence of this is seen in a dramatic increase in space object reentries. Due to the massive amounts of new debris created by the fragmentations of Fengyun-1C, Cosmos 2251 and Iridium 33 during the recent period of Solar Minimum, this effect might reach epic levels. However, space surveillance systems are also affected, both directly and indirectly, historically leading to an increase in the number of lost satellites and in the routine accuracy of the calculation of their orbits. Thus, at a time when more objects are drifting through regions containing exceptionally high-value assets, such as the International Space Station and remote sensing satellites, their position uncertainties increase. In other words, as the possibility of damaging and catastrophic collisions increases, our ability to protect space systems is degraded. Potential countermeasures include adjustments to space surveillance techniques and the resetting of collision avoidance maneuver thresholds.
Susanne Wegener
After recanalization, cerebral blood flow (CBF) can increase above baseline in cerebral ischemia. However, the significance of post-ischemic hyperperfusion for tissue recovery remains unclear. To analyze the course of post-ischemic hyperperfusion and its impact on vascular function, we used magnetic resonance imaging (MRI) with pulsed arterial spin labeling (pASL) and measured CBF quantitatively during and after a 60 minute transient middle cerebral artery occlusion (MCAO) in adult rats. We added a 5% CO2 challenge to analyze vasoreactivity in the same animals. Results from MRI were compared to histological correlates of angiogenesis. We found that CBF in the ischemic area recovered within one day and reached values significantly above contralateral thereafter. The extent of hyperperfusion changed over time, which was related to final infarct size: early (day 1) maximal hyperperfusion was associated with smaller lesions, whereas a later (day 4) maximum indicated large lesions. Furthermore, after initial vasoparalysis within the ischemic area, vasoreactivity on day 14 was above baseline in a fraction of animals, along with a higher density of blood vessels in the ischemic border zone. These data provide further evidence that late post-ischemic hyperperfusion is a sequel of ischemic damage in regions that are likely to undergo infarction. However, it is transient, and its resolution coincides with regaining of vascular structure and function.
Maximum-entropy closure of hydrodynamic moment hierarchies including correlations.
Hughes, Keith H; Burghardt, Irene
2012-06-07
Generalized hydrodynamic moment hierarchies are derived which explicitly include nonequilibrium two-particle and higher-order correlations. The approach is adapted to strongly correlated media and nonequilibrium processes on short time scales which necessitate an explicit treatment of time-evolving correlations. Closure conditions for the extended moment hierarchies are formulated by a maximum-entropy approach, generalizing related closure procedures for kinetic equations. A self-consistent set of nonperturbative dynamical equations are thus obtained for a chosen set of single-particle and two-particle (and possibly higher-order) moments. Analytical results are derived for generalized Gaussian closures including the dynamic pair distribution function and a two-particle correction to the current density. The maximum-entropy closure conditions are found to involve the Kirkwood superposition approximation.
Kao, Shih-Chieh [ORNL; McManamay, Ryan A [ORNL; Stewart, Kevin M [ORNL; Samu, Nicole M [ORNL; Hadjerioua, Boualem [ORNL; DeNeale, Scott T [ORNL; Yeasmin, Dilruba [California State University, Fresno; Pasha, M. Fayzul K. [California State University, Fresno; Oubeidillah, Abdoul A [ORNL; Smith, Brennan T [ORNL
2014-04-01
The rapid development of multiple national geospatial datasets related to topography, hydrology, and environmental characteristics in the past decade has provided new opportunities for the refinement of hydropower resource potential from undeveloped stream-reaches. From 2011 to 2013, the Oak Ridge National Laboratory (ORNL) was tasked by the Department of Energy (DOE) Water Power Program to evaluate the new stream-reach development (NSD) resource potential for more than 3 million US streams. A methodology was designed that contains three main components: (1) identification of stream-reaches with high energy density, (2) topographical analysis of stream-reaches to estimate inundated surface area and reservoir storage, and (3) environmental attribution to spatially join information related to the natural ecological systems, social and cultural settings, policies, management, and legal constraints to stream-reaches of energy potential. An initial report on methodology (Hadjerioua et al., 2013) was later reviewed and revised based on the comments gathered from two peer review workshops. After implementing the assessment across the entire United States, major findings were summarized in this final report. The estimated NSD capacity, including both higher-energy-density (>1 MW per reach) and lower-energy-density (<1 MW per reach) stream-reaches, is 84.7 GW, around the same size as the existing US conventional hydropower nameplate capacity (79.5 GW; NHAAP, 2013). In terms of energy, the total undeveloped NSD generation is estimated to be 460 TWh/year, around 169% of the average 2002-2011 net annual generation from existing conventional hydropower plants (272 TWh/year; EIA, 2013). Given the run-of-river assumption, NSD stream-reaches have higher capacity factors (53-71%), especially compared with conventional larger-storage peaking-operation projects that usually have capacity factors of around 30%. The highest potential is identified in the Pacific Northwest
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.)
Maximum-entropy for the laser fusion problem
Madkour, M.A. [Mansoura Univ. (Egypt). Dept. of Phys.]
1996-09-01
The problem of heat flux at the critical surfaces and the surfaces of a pellet of deuterium and tritium (the conduction zone) heated by laser is considered. Only ion-electron collisions are allowed for; i.e., the linear transport equation is used to describe the problem, with boundary conditions. The maximum-entropy approach is used to calculate the electron density and temperature across the conduction zone, as well as the heat flux. Numerical results are given and compared with those of Rouse and Williams and of El-Wakil et al. (orig.)
A Maximum Entropy Modelling of the Rain Drop Size Distribution
Francisco J. Tapiador
2011-01-01
This paper presents a maximum entropy approach to Rain Drop Size Distribution (RDSD) modelling. It is shown that this approach allows (1) the use of a physically consistent rationale to select a particular probability density function (pdf), (2) an alternative method for parameter estimation based on expectations of the population instead of sample moments, and (3) a progressive method of modelling that updates the pdf as new empirical information becomes available. The method is illustrated with both synthetic and real RDSD data, the latter coming from a laser disdrometer network specifically designed to measure the spatial variability of the RDSD.
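As a hedged numerical illustration of point (1): on [0, ∞) with a fixed mean, the maximum-entropy pdf is the exponential, so any other pdf with the same mean (here a gamma, chosen arbitrarily) has lower differential entropy. The grid and distributions below are illustrative choices, not the paper's.

```python
import numpy as np

def differential_entropy(pdf_vals, dx):
    """Numerical -integral of p*ln(p) dx on a uniform grid (zero bins skipped)."""
    p = pdf_vals[pdf_vals > 0]
    return float(-np.sum(p * np.log(p)) * dx)

x = np.linspace(1e-6, 50.0, 200_000)
dx = x[1] - x[0]

# Exponential with mean 1: the maximum-entropy pdf on [0, inf) given E[X] = 1.
exp_pdf = np.exp(-x)
# Gamma(shape=2, scale=0.5): also mean 1, but not maximum entropy.
gamma_pdf = 4.0 * x * np.exp(-2.0 * x)

h_exp = differential_entropy(exp_pdf, dx)
h_gamma = differential_entropy(gamma_pdf, dx)
assert h_exp > h_gamma           # the exponential wins under the same mean constraint
assert abs(h_exp - 1.0) < 0.02   # analytic value is 1 + ln(mean) = 1
```

The same mechanism, with moment constraints matched to drop-size observables, underlies the pdf selection the abstract describes.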
Maximum-likelihood analysis of the COBE angular correlation function
Seljak, Uros; Bertschinger, Edmund
1993-01-01
We have used maximum-likelihood estimation to determine the quadrupole amplitude Q(sub rms-PS) and the spectral index n of the density fluctuation power spectrum at recombination from the COBE DMR data. We find a strong correlation between the two parameters of the form Q(sub rms-PS) = (15.7 +/- 2.6) exp (0.46(1 - n)) microK for fixed n. Our result is slightly smaller than and has a smaller statistical uncertainty than the 1992 estimate of Smoot et al.
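The fitted relation can be evaluated directly; this sketch hard-codes the abstract's central values (the quoted ±2.6 μK uncertainty is ignored, and the function name is invented).

```python
import math

def q_rms_ps(n, q0=15.7):
    """Quadrupole amplitude in microkelvin as a function of spectral index n,
    per the fitted relation Q = 15.7 * exp(0.46 * (1 - n)) from the abstract."""
    return q0 * math.exp(0.46 * (1.0 - n))

# A scale-invariant spectrum (n = 1) recovers the central value.
assert abs(q_rms_ps(1.0) - 15.7) < 1e-9
# A redder spectrum (n < 1) implies a larger quadrupole under this fit.
assert q_rms_ps(0.5) > q_rms_ps(1.0)
```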
OIL MONITORING DIAGNOSTIC CRITERIONS BASED ON MAXIMUM ENTROPY PRINCIPLE
Huo Hua; Li Zhuguo; Xia Yanchun
2005-01-01
A method is presented that applies the maximum entropy probability density estimation approach to construct diagnostic criteria from oil monitoring data. The method improves the precision of diagnostic criteria for evaluating the wear state of mechanical facilities and for judging abnormal data. From the defined critical boundary points, a new measure for monitoring wear state and identifying probable wear faults is obtained. The method can be applied to spectrometric analysis and direct-reading ferrographic analysis. Analysis and discussion of two examples of 8NVD48A-2U diesel engines show it to be an effective method in oil monitoring.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical values. MENT naturally takes into consideration the demand of maximum entropy, the characteristics of the system, and the connection conditions. This allows MENT to be applied to the statistical description of closed and open systems. Examples are considered in which MENT has been used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The receiver structures are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female; for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora, and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
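The demarking-point rule reported in the study can be expressed as a tiny classifier. This is an illustrative sketch (function and variable names invented), assuming the reported lengths are in millimetres; lengths between the two cut-offs remain indeterminate by design.

```python
# Demarking points from the abstract; units assumed to be millimetres.
RIGHT_DP = {"male_above": 476.70, "female_below": 379.99}
LEFT_DP = {"male_above": 484.49, "female_below": 385.73}

def classify_femur(length_mm, side="right"):
    """Classify sex from maximum femoral length using the demarking points;
    returns 'indeterminate' for lengths between the two cut-offs."""
    dp = RIGHT_DP if side == "right" else LEFT_DP
    if length_mm > dp["male_above"]:
        return "male"
    if length_mm < dp["female_below"]:
        return "female"
    return "indeterminate"

assert classify_femur(480.0, "right") == "male"
assert classify_femur(375.0, "right") == "female"
assert classify_femur(450.0, "right") == "indeterminate"
```

The low identification percentages in the abstract follow directly from the wide indeterminate band between the cut-offs.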
Maximum tunneling velocities in symmetric double well potentials
Manz, Jörn; Schmidt, Burkhard; Yang, Yonggang
2014-01-01
We consider coherent tunneling of one-dimensional model systems in non-cyclic or cyclic symmetric double well potentials. Generic potentials are constructed which allow for analytical estimates of the quantum dynamics in the non-relativistic deep tunneling regime, in terms of the tunneling distance, barrier height and mass (or moment of inertia). For cyclic systems, the results may be scaled to agree well with periodic potentials for which semi-analytical results in terms of Mathieu functions exist. Starting from a wavepacket which is initially localized in one of the potential wells, the subsequent periodic tunneling is associated with tunneling velocities. These velocities (or angular velocities) are evaluated as the ratio of the flux densities versus the probability densities. The maximum velocities are found under the top of the barrier where they scale as the square root of the ratio of barrier height and mass (or moment of inertia), independent of the tunneling distance. They are applied exemplarily to ...
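The stated scaling (maximum tunneling velocity proportional to the square root of barrier height over mass, independent of tunneling distance) can be sketched numerically. The unit prefactor below is an assumption; only the proportionality comes from the text.

```python
import math

def v_max(barrier_height, mass, prefactor=1.0):
    """Maximum tunneling velocity under the scaling reported in the abstract:
    v ~ sqrt(V_barrier / m). The prefactor is assumed, not taken from the paper."""
    return prefactor * math.sqrt(barrier_height / mass)

v1 = v_max(1.0, 1.0)
# Quadrupling the barrier height doubles the maximum velocity...
assert math.isclose(v_max(4.0, 1.0), 2.0 * v1)
# ...and quadrupling the mass (or moment of inertia) halves it.
assert math.isclose(v_max(1.0, 4.0), 0.5 * v1)
```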
Determination of the density of zinc powders for alkaline battery
Beatriz Ares Tejero; David Guede Carnero
2007-01-01
The density of zinc powder for alkaline battery was determined using a pyknometer. The results showed that powders made before the end of 2003 could reach relative densities above 99% of the theoretical density. Investigating the relative volume swelling of electrolysed gels of zinc powders, no evident relation between swelling and pyknometer density was found.
Electronic DC transformer with high power density
Pavlovský, M.
2006-01-01
This thesis is concerned with the possibilities of increasing the power density of high-power dc-dc converters with galvanic isolation. Three cornerstones for reaching high power densities are identified as: size reduction of passive components, reduction of losses particularly in active components
An Empirical Measure for Labor Market Density
P.A. Gautier (Pieter); C.N. Teulings (Coen)
2000-01-01
In this paper we derive a structural measure for labor market density based on the Ellison and Glaeser (1997) index for industry concentration. This labor market density measure serves as a proxy for the number of workers that can reach a certain work area within a reasonable amount of t
Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za
2014-05-01
near the middle of the core, while increasing the power density near the top and bottom of the core. This resulted in a huge reduction in the maximum DLOFC temperature from 1581.0 °C to 1297.6 °C, which may produce far-reaching safety and economic benefits. However, it came at the cost of a 22% reduction in the average burn-up of the fuel. In a separate optimisation attempt a much smaller, but still significant, reduction in the maximum equilibrium temperature, from 1023 °C down to 988 °C, was achieved.
Effect of density feedback on the two-route traffic scenario with bottleneck
Sun, Xiao-Yan; Ding, Zhong-Jun; Huang, Guo-Hua
2016-12-01
In this paper, we investigate the effect of density feedback on the two-route scenario with a bottleneck. Simulation and theoretical analysis show that there exist two critical vehicle entry probabilities, αc1 and αc2. When the vehicle entry probability α ≤ αc1, four different states, i.e. the free-flow state, transition state, maximum current state, and congestion state, are identified in the system, corresponding to three critical reference densities. However, in the interval αc1 < α < αc2 the attainable states depend on the reference density, and the system is always in the congestion state when α ≥ αc2. According to the results, a traffic control center can adjust the reference density so that the system is in the maximum current state. In this case, the capacity of the traffic system reaches its maximum, so drivers can make full use of the roads. We hope that these results can provide good advice for alleviating traffic jams and be useful to traffic control centers in designing advanced traveller information systems.
Poor shape perception is the reason reaches-to-grasp are visually guided online.
Lee, Young-Lim; Crabtree, Charles E; Norman, J Farley; Bingham, Geoffrey P
2008-08-01
Both judgment studies and studies of feedforward reaching have shown that the visual perception of object distance, size, and shape is inaccurate. However, feedback has been shown to calibrate feedforward reaches-to-grasp to make them accurate with respect to object distance and size. We now investigate whether shape perception (in particular, the aspect ratio of object depth to width) can be calibrated in the context of reaches-to-grasp. We used cylindrical objects with elliptical cross-sections of varying eccentricity. Our participants reached to grasp the width or the depth of these objects with the index finger and thumb. The maximum grasp aperture and the terminal grasp aperture were used to evaluate perception. Both occur before the hand has contacted an object. In Experiments 1 and 2, we investigated whether perceived shape is recalibrated by distorted haptic feedback. Although somewhat equivocal, the results suggest that it is not. In Experiment 3, we tested the accuracy of feedforward grasping with respect to shape with haptic feedback to allow calibration. Grasping was inaccurate in ways comparable to findings in shape perception judgment studies. In Experiment 4, we hypothesized that online guidance is needed for accurate grasping. Participants reached to grasp either with or without vision of the hand. The result was that the former was accurate, whereas the latter was not. We conclude that shape perception is not calibrated by feedback from reaches-to-grasp and that online visual guidance is required for accurate grasping because shape perception is poor.
Far-Reaching Impacts of African Dust - A CALIPSO Perspective
Yu, Hongbin; Chin, Mian; Yuan, Tianle; Bian, Huisheng; Prospero, Joseph; Omar, Ali; Remer, Lorraine; Winker, David; Yang, Yuekui; Zhang, Yan; Zhang, Zhibo
2014-01-01
African dust can transport across the tropical Atlantic and reach the Amazon basin, exerting far-reaching impacts on climate in downwind regions. The transported dust influences surface-atmosphere interactions and cloud and precipitation processes by perturbing the surface radiative budget and atmospheric radiative heating and by acting as cloud condensation nuclei and ice nuclei. Dust also influences the biogeochemical cycle and climate by providing nutrients vital to the productivity of ocean biomass and Amazon forests. Assessing these climate impacts relies on an accurate quantification of dust transport and deposition. Current model simulations show extremely large diversity, which calls for observational constraints. Kaufman et al. (2005) estimated from MODIS aerosol measurements that about 144 Tg of dust was deposited into the tropical Atlantic and 50 Tg of dust into the Amazon in 2001. This estimated dust import to the Amazon is a factor of 3-4 higher than other observations and models. However, several studies have argued that the oversimplified characterization of the dust vertical profile in that study would have introduced large uncertainty and very likely a high bias. In this study we quantify the trans-Atlantic dust transport and deposition by using 7 years (2007-2013) of observations from the CALIPSO lidar. CALIPSO acquires high-resolution aerosol extinction and depolarization profiles in both cloud-free and above-cloud conditions. The unique CALIPSO capability of profiling aerosols above clouds offers an unprecedented opportunity for examining uncertainties associated with the use of MODIS clear-sky data. Dust is separated from other types of aerosols using the depolarization measurements. We estimated that, on a 7-year average, 118-142 Tg of dust is deposited into the tropical Atlantic and 38-60 Tg of dust into the Amazon basin. Substantial interannual variations are observed during the period, with a maximum to minimum ratio of about 1
Piorkowski, Gregory; Jamieson, Rob; Bezanson, Greg; Truelstrup Hansen, Lisbeth; Yost, Chris
2014-10-15
Sediment-borne Escherichia coli can elevate waterborne concentrations through sediment resuspension or hyporheic exchange. This study sought to correlate hydrological, sediment transport, and water quality variables with: (i) the temporal stability of sediment E. coli populations [concentrations, strain richness and similarity (Raup-Crick index)]; and (ii) the contribution of sediment E. coli to the water column as defined through a library-dependent microbial source tracking approach that matched waterborne E. coli isolates to sediment E. coli populations. Three monitoring locations differing in their hydrological characteristics and adjacent upland fecal sources (dairy operation, low-density residential, and tile-drained cultivated field) were investigated. Sediment E. coli population turnover was influenced by sediment transport at upstream, high-energy reaches, but not at the downstream low-energy reach. Sediment contributions to the water column averaged 13% and 18%, and fecal sources averaged 17% and 21% at the upstream sites adjacent to dairy operations and low-density residential areas, respectively. Waterborne E. coli at the downstream site had low matches to E. coli from reach sediments (1%), higher matches to the upstream sediments (27% and 12%), and an average of 14% matches to the tile drained field. The percentage of waterborne E. coli matching sediment-borne E. coli at each stream reach varied in correlations to hydrological and sediment transport variables, suggesting reach-specific differences in the role of sediment resuspension and hyporheic exchange on E. coli transport.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC, and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.
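Given a measured maximum permissible voltage per unit length, the conductor length needed for a target limiter voltage follows by division. A minimal sketch using the abstract's 100 ms values; the sizing function, the 1 kV target, and the omission of any design margin are assumptions for illustration.

```python
# Maximum permissible voltage (V/cm) at 100 ms quench duration, per the abstract.
V_MAX_PER_CM = {"SJTU": 0.72, "AMSC_12mm": 0.52, "AMSC_4mm": 1.2}

def min_conductor_length_m(target_voltage_v, conductor):
    """Shortest conductor (in metres) keeping the per-length voltage at or
    below the permissible maximum. No safety margin is applied here."""
    v_per_cm = V_MAX_PER_CM[conductor]
    return target_voltage_v / v_per_cm / 100.0  # cm -> m

# e.g. a 1 kV limiting element built from 12 mm AMSC tape:
length = min_conductor_length_m(1000.0, "AMSC_12mm")
assert abs(length - 19.23) < 0.01
```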
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets (rooted binary trees on three leaves) or quartets (unrooted binary trees on four leaves). We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
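The stated bound (maximum seismic moment = injected volume times the modulus of rigidity) converts directly to a moment magnitude via the standard Hanks-Kanamori relation. The rigidity value below is an assumed typical crustal value, not taken from the abstract.

```python
import math

RIGIDITY_PA = 3.0e10  # assumed modulus of rigidity for crustal rock, in pascals

def max_seismic_moment(injected_volume_m3, rigidity=RIGIDITY_PA):
    """Upper bound from the abstract: M0_max = (injected volume) x (rigidity)."""
    return rigidity * injected_volume_m3

def moment_magnitude(m0_newton_m):
    """Standard Hanks-Kanamori moment-magnitude relation (M0 in N*m)."""
    return (math.log10(m0_newton_m) - 9.1) / 1.5

# Example: 10,000 m^3 of injected fluid bounds the magnitude near 3.6.
m0 = max_seismic_moment(1.0e4)
assert 3.5 < moment_magnitude(m0) < 3.7
```

Because the bound is linear in volume while magnitude is logarithmic in moment, each tenfold increase in injected volume raises the maximum magnitude by only 2/3 of a unit.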
ERF1 -- Enhanced River Reach File 1.2
U.S. Geological Survey, Department of the Interior — An enhanced version of the U.S. Environmental Protection Agency's River Reach File 1 (RF1) to ensure the hydrologic integrity of the digital reach traces and to quantify the mean water time of...
Minetti, Andrea; Hurtado, Northan; Grais, Rebecca F; Ferrari, Matthew
2014-01-15
Current mass vaccination campaigns in measles outbreak response are nonselective with respect to the immune status of individuals. However, the heterogeneity in immunity, due to previous vaccination coverage or infection, may lead to potential bias of such campaigns toward those with previous high access to vaccination and may result in a lower-than-expected effective impact. During the 2010 measles outbreak in Malawi, only 3 of the 8 districts where vaccination occurred achieved a measurable effective campaign impact (i.e., a reduction in measles cases in the targeted age groups greater than that observed in nonvaccinated districts). Simulation models suggest that selective campaigns targeting hard-to-reach individuals are of greater benefit, particularly in highly vaccinated populations, even for low target coverage and with late implementation. However, the choice between targeted and nonselective campaigns should be context specific, achieving a reasonable balance of feasibility, cost, and expected impact. In addition, it is critical to develop operational strategies to identify and target hard-to-reach individuals.
IDENTIFICATION OF IDEOTYPES BY CANONICAL ANALYSIS IN Panicum maximum
Janaina Azevedo Martuscello
2015-04-01
Grouping of genotypes by canonical variable analysis is an important tool in breeding. It allows the grouping of individuals with similar characteristics that are associated with superior agronomic performance and may indicate the ideal profile of a plant for the region. The objective of the present study was to define, by canonical analysis, the agronomic profile of Panicum maximum plants adapted to the Agreste region. The experiment was conducted in a completely randomized design with 28 treatments, 22 genotypes of Panicum maximum, and cultivars Mombasa, Tanzania, Massai, Milenio, BRS Zuri, and BRS Tamani, in triplicate in 4-m² plots. Plots were harvested five times and the following traits were evaluated: plant height; total, leaf, stem, and dead dry matter yields; leaf:stem ratio; leaf percentage; and volumetric density of forage. The analysis of canonical variables was performed based on the phenotypic means of the evaluated traits and on the residual variance and covariance matrix. Genotype PM34 showed higher mean leaf dry matter yield under the conditions of the Agreste of Alagoas (on average 53% higher than cultivars Mombasa, Tanzania, Milenio, and Massai). It was possible to summarize the variation observed in eight agronomic characteristics in only two canonical variables, accounting for 81.44% of the data variation. The ideotype plant adapted to the conditions of the Agreste should be tall and present high leaf yield, leaf percentage, and leaf:stem ratio, and intermediate values of volumetric density of forage.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
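As a hedged toy illustration of the throughput-maximization idea (the constraint values are invented and far simpler than the paper's coding-aware linear program): two source-sink flows share an interference region, and the maximum total throughput over the feasible region can be found by brute force.

```python
# Hypothetical toy: flows f1, f2 each bounded by a link capacity of 1.0,
# with a shared interference (conflict) constraint f1 + f2 <= 1.5.
# The paper solves such problems as linear programs; a grid search
# over the feasible region suffices for this sketch.
CAPACITY = 1.0
INTERFERENCE_BUDGET = 1.5

def max_total_throughput(steps=301):
    best = 0.0
    for i in range(steps):
        for j in range(steps):
            f1 = CAPACITY * i / (steps - 1)
            f2 = CAPACITY * j / (steps - 1)
            if f1 + f2 <= INTERFERENCE_BUDGET:  # conflict constraint
                best = max(best, f1 + f2)
    return best

assert abs(max_total_throughput() - 1.5) < 1e-9
```

In the paper's setting, network coding relaxes the conflict constraints (the interference budget above), which is why the coded optimum can exceed the uncoded one.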
Proprioceptive recalibration arises slowly compared to reach adaptation.
Zbib, Basel; Henriques, Denise Y P; Cressman, Erin K
2016-08-01
When subjects reach in a novel visuomotor environment (e.g. while viewing a cursor representing their hand that is rotated from their hand's actual position), they typically adjust their movements (i.e. bring the cursor to the target), thus reducing reaching errors. Additionally, research has shown that reaching with altered visual feedback of the hand results in sensory changes, such that proprioceptive estimates of hand position are shifted in the direction of the visual feedback experienced (Cressman and Henriques in J Neurophysiol 102:3505-3518, 2009). This study looked to establish the time course of these sensory changes. Additionally, the time courses of implicit sensory and motor changes were compared. Subjects reached to a single visual target while seeing a cursor that was either aligned with their hand position (50 trials) or rotated 30° clockwise relative to their hand (150 trials). Reach errors and proprioceptive estimates of felt hand position were assessed following the aligned reach training trials and at seven different times during the rotated reach training trials by having subjects reach to the target without visual feedback, and provide estimates of their hand relative to a visual reference marker, respectively. Results revealed a shift in proprioceptive estimates throughout the rotated reach training trials; however, significant sensory changes were not observed until after 70 trials. In contrast, results showed a greater change in reaches after a limited number of reach training trials with the rotated cursor. These findings suggest that proprioceptive recalibration arises more slowly than reach adaptation.
Reach/frequency for printed media: Personal probabilities or models
Mortensen, Peter Stendahl
2000-01-01
The author evaluates two different ways of estimating the reach and frequency of plans for printed media. The first assigns reading probabilities to groups of respondents and calculates reach and frequency by simulation. The second estimates parameters of a model for reach/frequency. It is concluded...
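The first approach, personal reading probabilities plus simulation, can be sketched as a small Monte Carlo. The population mix and probabilities below are invented for illustration.

```python
import random

def simulate_reach_frequency(read_probs, n_insertions, trials=20_000, seed=1):
    """Monte Carlo reach/frequency for a print plan: each sampled respondent
    reads each insertion independently with a personal probability."""
    rng = random.Random(seed)
    reached = 0
    exposures = 0
    for _ in range(trials):
        p = rng.choice(read_probs)  # draw a respondent's reading probability
        hits = sum(rng.random() < p for _ in range(n_insertions))
        exposures += hits
        reached += hits > 0
    reach = reached / trials                     # P(at least one exposure)
    avg_frequency = exposures / max(reached, 1)  # mean exposures among reached
    return reach, avg_frequency

# Hypothetical population: never-readers, occasional readers, always-readers.
reach, freq = simulate_reach_frequency([0.0, 0.5, 1.0], n_insertions=4)
assert 0.0 < reach < 1.0 and freq >= 1.0
```

A parametric reach/frequency model, the second approach in the abstract, would instead fit a closed-form distribution to the same exposure counts.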
Clades reach highest morphological disparity early in their evolution
Hughes, Martin; Gerber, Sylvain; Albion Wills, Matthew
2013-08-01
There are few putative macroevolutionary trends or rules that withstand scrutiny. Here, we test and verify the purported tendency for animal clades to reach their maximum morphological variety relatively early in their evolutionary histories (early high disparity). We present a meta-analysis of 98 metazoan clades radiating throughout the Phanerozoic. The disparity profiles of groups through time are summarized in terms of their center of gravity (CG), with values above and below 0.50 indicating top- and bottom-heaviness, respectively. Clades that terminate at one of the "big five" mass extinction events tend to have truncated trajectories, with a significantly top-heavy CG distribution overall. The remaining 63 clades show the opposite tendency, with a significantly bottom-heavy mean CG (relatively early high disparity). Resampling tests are used to identify groups with a CG significantly above or below 0.50; clades not terminating at a mass extinction are three times more likely to be significantly bottom-heavy than top-heavy. Overall, there is no clear temporal trend in disparity profile shapes from the Cambrian to the Recent, and early high disparity is the predominant pattern throughout the Phanerozoic. Our results do not allow us to distinguish between ecological and developmental explanations for this phenomenon. To the extent that ecology has a role, however, the paucity of bottom-heavy clades radiating in the immediate wake of mass extinctions suggests that early high disparity more probably results from the evolution of key apomorphies at the base of clades rather than from physical drivers or catastrophic ecospace clearing.
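The center-of-gravity summary used in the abstract can be sketched as follows: CG is the disparity-weighted mean time, rescaled to [0, 1] over the clade's duration, so values below 0.5 indicate bottom-heavy (early high disparity) profiles. The profiles below are hypothetical.

```python
def center_of_gravity(disparity, times=None):
    """CG of a clade's disparity profile: disparity-weighted mean time,
    rescaled to [0, 1]. Below 0.5 = bottom-heavy (early high disparity),
    above 0.5 = top-heavy."""
    n = len(disparity)
    if times is None:
        times = list(range(n))
    t0, t1 = times[0], times[-1]
    cg = sum(d * t for d, t in zip(disparity, times)) / sum(disparity)
    return (cg - t0) / (t1 - t0)

# Hypothetical profiles: disparity peaking early vs late in clade history.
assert center_of_gravity([5, 4, 3, 2, 1]) < 0.5   # bottom-heavy
assert center_of_gravity([1, 2, 3, 4, 5]) > 0.5   # top-heavy
assert center_of_gravity([1, 1, 1, 1, 1]) == 0.5  # symmetric
```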
Advanced Reach Tool (ART): development of the mechanistic model.
Fransman, Wouter; Van Tongeren, Martie; Cherrie, John W; Tischer, Martin; Schneider, Thomas; Schinkel, Jody; Kromhout, Hans; Warren, Nick; Goede, Henk; Tielemans, Erik
2011-11-01
This paper describes the development of the mechanistic model within a collaborative project, referred to as the Advanced REACH Tool (ART) project, to develop a tool to model inhalation exposure for workers sharing similar operational conditions across different industries and locations in Europe. The ART mechanistic model is based on a conceptual framework that adopts a source receptor approach, which describes the transport of a contaminant from the source to the receptor and defines seven independent principal modifying factors: substance emission potential, activity emission potential, localized controls, segregation, personal enclosure, surface contamination, and dispersion. ART currently differentiates between three different exposure types: vapours, mists, and dust (fumes, fibres, and gases are presently excluded). Various sources were used to assign numerical values to the multipliers to each modifying factor. The evidence used to underpin this assessment procedure was based on chemical and physical laws. In addition, empirical data obtained from literature were used. Where this was not possible, expert elicitation was applied for the assessment procedure. Multipliers for all modifying factors were peer reviewed by leading experts from industry, research institutes, and public authorities across the globe. In addition, several workshops with experts were organized to discuss the proposed exposure multipliers. The mechanistic model is a central part of the ART tool and with advancing knowledge on exposure, determinants will require updates and refinements on a continuous basis, such as the effect of worker behaviour on personal exposure, 'best practice' values that describe the maximum achievable effectiveness of control measures, the intrinsic emission potential of various solid objects (e.g. metal, glass, plastics, etc.), and extending the applicability domain to certain types of exposures (e.g. gas, fume, and fibre exposure).
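The source-receptor structure with independent modifying factors suggests a multiplicative exposure score. This is a hedged sketch only: the factor names follow the abstract, but the score function and the multiplier values are invented, not ART's calibrated multipliers.

```python
# The seven principal modifying factors named in the abstract.
FACTORS = ("substance_emission", "activity_emission", "localized_controls",
           "segregation", "personal_enclosure", "surface_contamination",
           "dispersion")

def exposure_score(multipliers):
    """Multiplicative exposure score: the product of one multiplier per
    modifying factor (values here are illustrative, not ART's)."""
    score = 1.0
    for name in FACTORS:
        score *= multipliers[name]
    return score

baseline = dict.fromkeys(FACTORS, 1.0)
# A hypothetical localized control (e.g. local exhaust) cutting exposure tenfold:
with_control = dict(baseline, localized_controls=0.1)
assert exposure_score(baseline) == 1.0
assert exposure_score(with_control) < exposure_score(baseline)
```

The multiplicative form is what makes the tool updatable: refining one factor's multiplier (e.g. a control's "best practice" effectiveness) leaves the others untouched.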
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
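For reference, the Wiener index mentioned in the abstract is the sum of shortest-path distances over all vertex pairs. A minimal BFS-based sketch, with graphs given as adjacency dicts and example trees invented for illustration:

```python
from collections import deque

def wiener_index(adjacency):
    """Wiener index: sum of shortest-path distances over all unordered
    vertex pairs, via BFS from every vertex (each pair is counted twice,
    hence the final halving)."""
    total = 0
    for source in adjacency:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2

# Path P4 (0-1-2-3): pairwise distances 1+2+3+1+2+1 = 10.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
# Star S4 (center 0): pairwise distances 1+1+1+2+2+2 = 9.
star4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
assert wiener_index(path4) == 10
assert wiener_index(star4) == 9
```

The maximization problem the abstract settles asks for the tree with the largest such index among all trees with a prescribed degree sequence.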
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
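First-Fit with opposite item orderings illustrates why ordering matters once the objective is reversed. A minimal sketch with unit-capacity bins and illustrative item sizes:

```python
# First-Fit with a chosen item order: place each item into the first open bin
# with room, else open a new bin. For the *maximum* resource variant the goal
# is to use many bins, so orderings that pack well (FFD) become undesirable.
# Illustrative sketch only.

def first_fit(items, capacity=1.0):
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity + 1e-12:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

items = [0.4, 0.4, 0.4, 0.6, 0.6, 0.6]
ffd = first_fit(sorted(items, reverse=True))  # First-Fit-Decreasing: 3 bins
ffi = first_fit(sorted(items))                # First-Fit-Increasing: 4 bins
print(len(ffd), len(ffi))  # 3 4
```

On this instance FFD pairs each 0.6 with a 0.4 into three full bins, while FFI wastes capacity early and is forced to open a fourth bin; in the maximum resource setting that "waste" is exactly what is rewarded.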
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method \\cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \\cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method (MEM). We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
On the maximum mass of magnetised white dwarfs
Chatterjee, D; Chamel, N; Novak, J; Oertel, M
2016-01-01
We develop a detailed and self-consistent numerical model for extremely-magnetised white dwarfs, which have been proposed as progenitors of overluminous Type Ia supernovae. This model can describe fully-consistent equilibria of magnetic stars in axial symmetry, with rotation, general-relativistic effects and realistic equations of state (including electron-ion interactions and taking into account Landau quantisation of electrons due to the magnetic field). We study the influence of each of these ingredients onto the white dwarf structure and, in particular, on their maximum mass. We perform an extensive stability analysis of such objects, with their highest surface magnetic fields reaching $\\sim 10^{13}~G$ (at which point the star adopts a torus-like shape). We confirm previous speculations that although very massive strongly magnetised white dwarfs could potentially exist, the onset of electron captures and pycnonuclear reactions may severely limit their stability. Finally, the emission of gravitational wave...
Spiking and LFP activity in PRR during symbolically instructed reaches.
Hwang, Eun Jung; Andersen, Richard A
2012-02-01
The spiking activity in the parietal reach region (PRR) represents the spatial goal of an impending reach when the reach is directed toward or away from a visual object. The local field potentials (LFPs) in this region also represent the reach goal when the reach is directed to a visual object. Thus PRR is a candidate area for reading out a patient's intended reach goals for neural prosthetic applications. For natural behaviors, reach goals are not always based on the location of a visual object, e.g., playing the piano following sheet music or moving following verbal directions. So far it has not been directly tested whether and how PRR represents reach goals in such cognitive, nonlocational conditions, and knowing the encoding properties in various task conditions would help in designing a reach goal decoder for prosthetic applications. To address this issue, we examined the macaque PRR under two reach conditions: reach goal determined by the stimulus location (direct) or shape (symbolic). For the same goal, the spiking activity near reach onset was indistinguishable between the two tasks, and thus a reach goal decoder trained with spiking activity in one task performed perfectly in the other. In contrast, the LFP activity at 20-40 Hz showed small but significantly enhanced reach goal tuning in the symbolic task, but its spatial preference remained the same. Consequently, a decoder trained with LFP activity performed worse in the other task than in the same task. These results suggest that LFP decoders in PRR should take into account the task context (e.g., locational vs. nonlocational) to be accurate, while spike decoders can robustly provide reach goal information regardless of the task context in various prosthetic applications.
Potato crop growth as affected by nitrogen and plant density
OLIVEIRA CARLOS ALBERTO DA SILVA
2000-01-01
Full Text Available Growth and development variables and dry matter characteristics were studied for the potato cultivar Snowden (Solanum tuberosum L.) to evaluate the influence of nitrogen and plant density. Disregarding end-of-season plant stress, the average number of active haulms per plant was five, and it was not affected by plant spacing. However, the seasonal and final numbers of active haulms per plant were increased at 200 kg/ha of nitrogen. Maximum stem elongation was reached more quickly at double density and tended to remain constant at the highest and lowest nitrogen levels after 70 days after planting. Specific stem mass, defined as mass per unit stem length, was established as an indirect measure of stem thickness and load capacity. Specific leaf mass was higher for upper stem leaves, increased as plant density increased, and did not vary markedly over time throughout the season. The rate of leaf appearance increased drastically due to more branching caused by the high nitrogen level, which also increased above-ground dry matter per plant. Canopy growth and development influenced the main tuber yield components. The number of active tubers per haulm decreased after 60 days after planting, showing that tuberization is reversible. Tuber growth functions were established, allowing the estimation of dry biomass partitioning coefficients for each plant organ.
Critical density of urban traffic
da Silva, Adilton Jose
2010-01-01
A modified version of the Intelligent Driver Model was used to simulate traffic in the district of Afogados, in the city of Recife, Brazil, with the objective of verifying whether the complexity of the underlying street grid, with multi-lane streets, crossings, and traffic lights, is capable of exhibiting the effect of critical density: the appearance of a maximum in the vehicle flux versus density curve. Numerical simulations demonstrate that this effect is indeed observed on individual avenues, while the phase offset among the avenues results in damping of this effect for the region as a whole.
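The Intelligent Driver Model computes each vehicle's acceleration from its speed, its approach rate, and the gap to the leader. A sketch of the standard IDM acceleration rule, with generic textbook parameters rather than the values used for the Afogados network:

```python
import math

# Standard Intelligent Driver Model (IDM) acceleration rule. Parameter values
# are generic textbook choices, not those of the cited Recife simulation.

def idm_accel(v, dv, s, v0=15.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """v: own speed (m/s), dv: approach rate to the leader (m/s), s: gap (m)."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a * b))
    return a * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)

# A standing vehicle with free road ahead accelerates at almost a = 1 m/s^2:
print(idm_accel(v=0.0, dv=0.0, s=1e6))
# At the desired speed v0 with free road, the acceleration is essentially zero:
print(idm_accel(v=15.0, dv=0.0, s=1e6))
```

Sweeping the vehicle density on a closed ring with this rule is the usual way to trace out a flux-density curve with a single maximum, i.e. the critical-density effect the abstract describes.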
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given load is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
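The load-matching idea behind these duty-ratio conditions can be sketched for ideal lossless converters in continuous conduction. The reflection formulas are the standard textbook ones, not the paper's detailed analysis, and the numbers are illustrative:

```python
import math

# Duty-ratio condition for an ideal converter driving a PV array at its MPP.
# The converter reflects the load resistance R_load to its input; matching the
# reflected resistance to R_mpp = V_mpp / I_mpp holds the array at the MPP.
# Idealized sketch (lossless, continuous conduction mode).

def buck_duty(r_load, r_mpp):
    """Buck: R_in = R_load / D^2  =>  D = sqrt(R_load / R_mpp)."""
    d = math.sqrt(r_load / r_mpp)
    return d if d <= 1.0 else None   # load outside the trackable range

def boost_duty(r_load, r_mpp):
    """Boost: R_in = R_load * (1 - D)^2  =>  D = 1 - sqrt(R_mpp / R_load)."""
    return 1.0 - math.sqrt(r_mpp / r_load) if r_load >= r_mpp else None

r_mpp = 17.0 / 3.5               # e.g. V_mpp = 17 V, I_mpp = 3.5 A (example)
print(buck_duty(2.0, r_mpp))     # feasible: D about 0.64
print(buck_duty(10.0, r_mpp))    # None: a buck cannot reach the MPP here
print(boost_duty(10.0, r_mpp))   # but a boost can, with D about 0.30
```

The `None` cases are exactly the paper's point that some load values fall outside the optimal range of a given topology, while the buck-boost (R_in = R_load (1-D)^2 / D^2) can reach any positive R_mpp.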
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
Full Text Available This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\\cal{O}}(300\\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the electron Yukawa coupling, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
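The closing formula can be checked at the order-of-magnitude level. The numerical inputs below are standard rough values supplied here as assumptions, not taken from the paper:

```python
import math

# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
# Rough standard inputs (assumptions): T_BBN ~ 1 MeV, M_pl ~ 1.22e19 GeV,
# and y_e = sqrt(2) * m_e / v_h with m_e = 0.511 MeV, v_h = 246 GeV.

T_BBN = 1e-3                             # GeV, onset of nucleosynthesis
M_pl = 1.22e19                           # GeV, Planck mass
y_e = math.sqrt(2) * 0.511e-3 / 246.0    # electron Yukawa, about 2.9e-6

v_h = T_BBN**2 / (M_pl * y_e**5)
print(v_h)  # a few hundred GeV, consistent with v_h = O(300 GeV)
```

The strong y_e^5 dependence is why the estimate lands near the weak scale at all: a factor of two in the electron Yukawa moves the prediction by more than an order of magnitude.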
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson's equation...
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
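The reported gains from pooling trials and days are consistent with the Spearman-Brown prophecy formula; the sketch below assumes that standard formula underlies the reported coefficients:

```python
# Spearman-Brown prophecy formula: reliability of the mean of k parallel
# measurements, each with single-measurement reliability r.

def spearman_brown(r, k):
    return k * r / (1.0 + (k - 1.0) * r)

# One trial per day had reliability 0.939; averaging five trials:
print(round(spearman_brown(0.939, 5), 3))  # 0.987, matching the abstract
# Five-trial reliability for a single day was 0.836; averaging two days:
print(round(spearman_brown(0.836, 2), 3))  # 0.911, matching the abstract
```

Its value here is practical: given a single-trial reliability, it tells a clinician how many trials (or days) are needed to reach a target reliability before collecting the extra data.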
Maximum Phonation Time: Variability and Reliability
R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings
2010-01-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification...
Density changes with substrate negative bias for ta-C films deposited by filter cathode vacuum arc
TAN Man-lin; ZHU Jia-qi; HAN Jie-cai; MENG Song-he
2004-01-01
Specular X-ray reflectivity (XRR) measurements were used to study the density and cross-sectional structure of tetrahedral amorphous carbon (ta-C) films deposited by a filter cathode vacuum arc (FCVA) system at different substrate biases. From the correlation between density and substrate negative bias, it is found that the density reaches a maximum at -80 V bias; as the substrate bias increases or decreases from this value, the density gradually drops. Based on the densities of diamond and graphite, the sp3 bonding ratio of the ta-C films was obtained from the corresponding film density through a simple equation relating the two, and a similar parabolic variation of the sp3 content with substrate negative bias was observed. Mechanical properties such as hardness and elastic modulus were also measured and compared with the corresponding density of the ta-C films. The distribution of the data points reveals a linear proportional correlation between them, which shows that density is a critical parameter for characterizing structural variation in ta-C films.
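One simple equation of the kind the abstract mentions is a linear density interpolation between the graphite and diamond endpoints. This particular form is an assumption for illustration, not necessarily the authors' exact relation:

```python
# Hedged sketch: estimate the sp3 fraction of an amorphous carbon film by
# treating it as a graphite/diamond mixture and interpolating its mass
# density linearly between the two endpoint densities.

RHO_GRAPHITE = 2.27   # g/cm^3, sp2 endpoint (nominal value)
RHO_DIAMOND = 3.52    # g/cm^3, sp3 endpoint (nominal value)

def sp3_fraction(rho):
    f = (rho - RHO_GRAPHITE) / (RHO_DIAMOND - RHO_GRAPHITE)
    return min(max(f, 0.0), 1.0)   # clamp to the physical range [0, 1]

print(sp3_fraction(3.0))   # a dense ta-C film: roughly 0.58 sp3
```

Under such a mapping, a parabolic density-versus-bias curve translates directly into the parabolic sp3-versus-bias variation reported above.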
Maximum Allowable Dynamic Load of Mobile Manipulators with Stability Consideration
Heidary H. R.
2015-09-01
Full Text Available High payload to mass ratio is one of the advantages of mobile robot manipulators. In this paper, a general formula for finding the maximum allowable dynamic load (MADL of wheeled mobile robot is presented. Mobile manipulators operating in field environments will be required to manipulate large loads, and to perform such tasks on uneven terrain, which may cause the system to reach dangerous tip-over instability. Therefore, the method is expanded for finding the MADL of mobile manipulators with stability consideration. Moment-Height Stability (MHS criterion is used as an index for the system stability. Full dynamic model of wheeled mobile base and mounted manipulator is considered with respect to the dynamic of non-holonomic constraint. Then, a method for determination of the maximum allowable loads is described, subject to actuator constraints and by imposing the stability limitation as a new constraint. The actuator torque constraint is applied by using a speed-torque characteristics curve of a typical DC motor. In order to verify the effectiveness of the presented algorithm, several simulation studies considering a two-link planar manipulator, mounted on a mobile base are presented and the results are discussed.
Local solutions of Maximum Likelihood Estimation in Quantum State Tomography
Gonçalves, Douglas S; Lavor, Carlile; Farías, Osvaldo Jiménez; Ribeiro, P H Souto
2011-01-01
Maximum likelihood estimation is one of the most used methods in quantum state tomography, where the aim is to find the best density matrix for the description of a physical system. Results of measurements on the system should match the expected values produced by the density matrix. In some cases however, if the matrix is parameterized to ensure positivity and unit trace, the negative log-likelihood function may have several local minima. In several papers in the field, authors attribute a source of errors to the possibility that most of these local minima are not global, so that optimization methods can be trapped in the wrong minimum, leading to a wrong density matrix. Here we show that, for convex negative log-likelihood functions, all local minima are global. We also show that a practical source of errors is in fact the use of optimization methods that do not have the global convergence property or that present numerical instabilities. The clarification of this point has important repercussions on quantum informat...
Electronic Structure and Maximum Energy Product of MnBi
Jihoon Park
2014-08-01
Full Text Available We have performed first-principles calculations to obtain the magnetic moment, magnetocrystalline anisotropy energy (MAE), i.e., the magnetocrystalline anisotropy constant (K), and the Curie temperature (Tc) of low-temperature-phase (LTP) MnBi, and have also estimated the maximum energy product (BH)max at elevated temperatures. The full-potential linearized augmented plane wave (FPLAPW) method, based on density functional theory (DFT) within the local spin density approximation (LSDA), was used to calculate the electronic structure of LTP MnBi. The Tc was calculated by mean field theory. The calculated magnetic moment, MAE, and Tc are 3.63 μB/f.u. (formula unit) (79 emu/g or 714 emu/cm3), −0.163 meV/u.c. (or K = −0.275 × 10^6 J/m3), and 711 K, respectively. The (BH)max at elevated temperatures was estimated by combining the experimental coercivity (Hci) and the temperature dependence of the magnetization (Ms(T)). The (BH)max is 17.7 MGOe at 300 K, which is in good agreement with the experimental result for directionally-solidified LTP MnBi (17 MGOe). In addition, a study of electron density maps and the dependence of the magnetic moment on the lattice constant c/a ratio suggested that doping a third element into interstitial sites of LTP MnBi can increase the Ms.
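The reported 17.7 MGOe can be bounded from above with the textbook ideal-loop estimate (BH)max = (4πMs)^2/4 in CGS units. The sketch below uses only the abstract's Ms value and is a back-of-envelope check, not the paper's method:

```python
import math

# Ideal upper bound on the energy product in CGS units: for a square
# hysteresis loop with sufficient coercivity, (BH)max = Br^2 / 4, where
# Br = 4*pi*Ms. Only Ms = 714 emu/cm^3 is taken from the abstract.

Ms = 714.0                         # emu/cm^3 (from the abstract)
Br = 4.0 * math.pi * Ms            # remanence in gauss, about 8972 G
bh_max_ideal = Br**2 / 4.0 / 1e6   # convert G*Oe to MGOe

print(round(bh_max_ideal, 1))  # about 20.1 MGOe, above the reported 17.7
```

That the coercivity-limited estimate (17.7 MGOe) sits just below this magnetization-limited ceiling indicates the directionally-solidified material is already close to its theoretical energy product.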
Effective soil hydraulic conductivity predicted with the maximum power principle
Westhoff, Martijn; Erpicum, Sébastien; Archambeau, Pierre; Pirotton, Michel; Zehe, Erwin; Dewals, Benjamin
2016-04-01
Drainage of water in soils happens to a large extent through preferential flowpaths, but these subsurface flowpaths are extremely difficult to observe or parameterize in hydrological models. To potentially overcome this problem, thermodynamic optimality principles have been suggested to predict effective parametrization of these (sub-grid) structures, such as the maximum entropy production principle or the equivalent maximum power principle. These principles have been successfully applied to predict heat transfer from the Equator to the Poles, or turbulent heat fluxes between the surface and the atmosphere. In these examples, the effective flux adapts itself to its boundary condition by adapting its effective conductance through the creation of e.g. convection cells. Flow through porous media such as soils, however, can only quickly adapt its effective flow conductance by creating preferential flowpaths, and it was unknown whether this is guided by the aim to create maximum power. Here we show experimentally that this is indeed the case: in the lab, we created a hydrological analogue to the atmospheric model dealing with heat transport between the Equator and the Poles. The experimental setup consists of two freely draining reservoirs connected with each other by a confined aquifer. By adding water to only one reservoir, a potential difference builds up until a steady state is reached. From the steady-state potential difference and the observed flow through the aquifer, an effective hydraulic conductance can be determined. This observed conductance corresponds to the one maximizing the power of the flux through the confined aquifer. Although this experiment was done in an idealized setting, it opens doors for better parameterizing hydrological models. Furthermore, it shows that hydraulic properties of soils are not static but change with changing boundary conditions. A potential limitation of the principle is that it only applies to steady-state conditions.
Postural control during standing reach in children with Down syndrome.
Chen, Hao-Ling; Yeh, Chun-Fu; Howe, Tsu-Hsin
2015-03-01
The purpose of the present study was to investigate the dynamic postural control of children with Down syndrome (DS). Specifically, we compared postural control and goal-directed reaching performance between children with DS and typically developing children during standing reach. Standing reach performance was analyzed in three main phases using the kinematic and kinetic data collected from a force plate and a motion capture system. Fourteen children with DS, age and gender matched with fourteen typically developing children, were recruited for this study. The results showed that the demand of the standing reach task affected both dynamic postural control and reaching performance in children with DS, especially in the condition of beyond arm's length reaching. More postural adjustment strategies were recruited when reaching distance was beyond arm's length. Children with DS tended to use inefficient and conservative strategies for postural stability and reaching. That is, children with DS perform standing reach with increased reaction and execution time and decreased amplitudes of center of pressure displacements. Standing reach resembled functional balance that is required in daily activities. It is suggested to be considered as a part of strength and balance training program with graded task difficulty.
Density limits investigation and high density operation in EAST tokamak
Zheng, Xingwei; Li, Jiangang; Hu, Jiansheng; Liu, Haiqing; Jie, Yinxian; Wang, Shouxin; Li, Jiahong; Duan, Yanming; Li, Miaohui; Li, Yongchun; Zhang, Ling; Ye, Yang; Yang, Qingquan; Zhang, Tao; Cheng, Yingjie; Xu, Jichan; Wang, Liang; Xu, Liqing; Zhao, Hailin; Wang, Fudi; Lin, Shiyao; Wu, Bin; Lyu, Bo; Xu, Guosheng; Gao, Xiang; Shi, Tonghui; He, Kaiyang; Lan, Heng; Chu, Nan; Cao, Bin; Sun, Zhen; Zuo, Guizhong; Ren, Jun; Zhuang, Huidong; Li, Changzheng; Yuan, Xiaolin; Yu, Yaowei; Wang, Houyin; Chen, Yue; Wu, Jinhua; EAST Team
2016-05-01
Increasing the density in a tokamak is limited by the so-called density limit, which generally appears either as a disruption causing loss of plasma confinement, or as a degradation of the high-confinement mode that can further lead to an H → L transition. The L-mode and H-mode density limits have been investigated in the EAST tokamak. Experimental results suggest that density limits can be triggered by either edge cooling or excessive central radiation. The L-mode density limit disruption is generally triggered by edge cooling, which leads to current profile shrinkage and then destabilizes a 2/1 tearing mode, ultimately resulting in a disruption. The L-mode density limit scaling agrees well with the Greenwald limit in EAST. The observed H-mode density limit in EAST is an operational-space limit with a value of 0.8-0.9 n_GW. High-density H-mode plasmas heated by neutral beam injection (NBI) and lower hybrid current drive (LHCD) are analyzed, respectively. The constancy of the edge density gradients in H-mode indicates a critical limit caused perhaps by, e.g., ballooning-induced transport. The maximum density is accessed at the H → L transition, which is generally caused by excessive core radiation due to high-Z impurities (Fe, Cu). Operating at high density (>2.8 × 10^19 m^-3) is favorable for suppressing NBI beam shine-through. High-density H-mode up to 5.3 × 10^19 m^-3 (~0.8 n_GW) could be sustained by 2 MW of 4.6 GHz LHCD alone, and its current drive efficiency is studied. Statistics show that good control of impurities and recycling facilitates high-density operation. With careful control of these factors, stable H-mode operation at a high density of up to 0.93 n_GW was achieved with 1.7 MW of LHCD and 1.9 MW of ion cyclotron resonance heating, with supersonic molecular beam injection fueling.
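The Greenwald limit cited above is n_GW = I_p/(πa^2), with I_p in MA and the minor radius a in metres, giving n_GW in units of 10^20 m^-3. A quick sketch; the EAST-like parameters below are nominal illustrative values, not taken from the paper:

```python
import math

# Greenwald density limit: n_GW = Ip / (pi * a^2), Ip in MA, a in m,
# result in 10^20 m^-3. Machine parameters below are nominal assumptions.

def greenwald_limit(ip_ma, a_m):
    return ip_ma / (math.pi * a_m**2)

n_gw = greenwald_limit(0.4, 0.45)   # nominal EAST-like Ip and minor radius
print(n_gw)                          # about 0.63, i.e. 6.3e19 m^-3
print(0.53 / n_gw)                   # so 5.3e19 m^-3 is roughly 0.8 n_GW
```

The scaling explains why "high density" is always quoted as a Greenwald fraction: the absolute limit moves linearly with plasma current, so the fraction, not the raw density, is the machine-independent figure of merit.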
A new look at extensional rheology of low-density polyethylene
Huang, Qian; Mangnus, Marc; Alvarez, Nicolas J.
2016-01-01
The nonlinear rheology of three selected commercial low-density polyethylenes (LDPE) is measured in uniaxial extensional flow. The measurements are performed using three different devices: an extensional viscosity fixture (EVF), a homemade filament stretching rheometer (DTU-FSR) and a commercial filament stretching rheometer (VADER-1000). We show that the measurements from the EVF are limited by a maximum Hencky strain of 4, while the two filament stretching rheometers are able to probe the nonlinear behavior at larger Hencky strain values where the steady state is reached...
Exploring the electron density in plasmas induced by extreme ultraviolet radiation in argon
van der Horst, R M; Osorio, E A; Banine, V Y
2015-01-01
The new generation of lithography tools uses high-energy EUV radiation, which ionizes the background gas present through photoionization. To predict and understand the long-term impact on the highly delicate mirrors, it is essential to characterize these EUV-induced plasmas. We measured the electron density evolution in argon gas during and just after irradiation by a short pulse of EUV light at 13.5 nm by applying microwave cavity resonance spectroscopy. Dependencies on EUV pulse energy and gas pressure have been explored over a range relevant for industrial applications. Our experimental results show that the maximum electron density reached depends linearly on pulse energy. A quadratic dependence, caused by photoionization and subsequent electron impact ionization by free electrons, is found from experiments where the gas pressure is varied. This is also demonstrated by the theoretical estimates presented in this manuscript.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
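The two heuristics named above can be sketched as plain First-Fit applied to items sorted in increasing or decreasing order. This is a minimal illustration of the mechanics only; the paper's objective (maximizing the number of bins used) and its approximation guarantees are not modeled here, and the item sizes are made up:

```python
# First-Fit with the two orderings named in the abstract:
# First-Fit-Increasing (FFI) and First-Fit-Decreasing (FFD).

def first_fit(sizes, capacity=1.0):
    """Place each item into the first bin it fits in; open a new bin otherwise."""
    bins = []
    for s in sizes:
        for b in bins:
            if sum(b) + s <= capacity + 1e-12:
                b.append(s)
                break
        else:  # no existing bin had room
            bins.append([s])
    return bins

items = [0.5, 0.7, 0.3, 0.2, 0.4, 0.8]
ffi = first_fit(sorted(items))                # First-Fit-Increasing
ffd = first_fit(sorted(items, reverse=True))  # First-Fit-Decreasing
print(len(ffi), len(ffd))
```

On this instance FFI opens four bins while FFD opens three, illustrating why the increasing order tends to use more bins, which is the desirable outcome in the maximum resource setting.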
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
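The single-constraint argument can be made explicit. Maximizing the Shannon entropy subject to normalization and a fixed average of ln x yields a power law via Lagrange multipliers; the following is a standard sketch of that calculation, consistent with the argument above rather than quoted from the paper:

```latex
\begin{align}
  \mathcal{L} &= -\sum_x p(x)\ln p(x)
     - \mu\Big(\sum_x p(x) - 1\Big)
     - \lambda\Big(\sum_x p(x)\ln x - \chi\Big),\\
  0 = \frac{\partial \mathcal{L}}{\partial p(x)}
    &= -\ln p(x) - 1 - \mu - \lambda \ln x
  \quad\Longrightarrow\quad
  p(x) = \frac{x^{-\lambda}}{Z(\lambda)},
\end{align}
```

where Z(λ) normalizes the distribution and the multiplier λ, fixed by the constraint ⟨ln x⟩ = χ, plays the role of the power-law exponent.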
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced for planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.
Maximum tunneling velocities in symmetric double well potentials
Manz, Jörn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, 92, Wucheng Road, Taiyuan 030006 (China); Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Schild, Axel [Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Schmidt, Burkhard, E-mail: burkhard.schmidt@fu-berlin.de [Institut für Mathematik, Freie Universität Berlin, Arnimallee 6, 14195 Berlin (Germany); Yang, Yonggang, E-mail: ygyang@sxu.edu.cn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, 92, Wucheng Road, Taiyuan 030006 (China)
2014-10-17
Highlights: • Coherent tunneling in one-dimensional symmetric double well potentials. • Potentials for analytical estimates in the deep tunneling regime. • Maximum velocities scale as the square root of the ratio of barrier height and mass. • In chemical physics maximum tunneling velocities are in the order of a few km/s. - Abstract: We consider coherent tunneling of one-dimensional model systems in non-cyclic or cyclic symmetric double well potentials. Generic potentials are constructed which allow for analytical estimates of the quantum dynamics in the non-relativistic deep tunneling regime, in terms of the tunneling distance, barrier height and mass (or moment of inertia). For cyclic systems, the results may be scaled to agree well with periodic potentials for which semi-analytical results in terms of Mathieu functions exist. Starting from a wavepacket which is initially localized in one of the potential wells, the subsequent periodic tunneling is associated with tunneling velocities. These velocities (or angular velocities) are evaluated as the ratio of the flux densities versus the probability densities. The maximum velocities are found under the top of the barrier where they scale as the square root of the ratio of barrier height and mass (or moment of inertia), independent of the tunneling distance. They are applied exemplarily to several prototypical molecular models of non-cyclic and cyclic tunneling, including ammonia inversion, Cope rearrangement of semibullvalene, torsions of molecular fragments, and rotational tunneling in strong laser fields. Typical maximum velocities and angular velocities are in the order of a few km/s and from 10 to 100 THz for our non-cyclic and cyclic systems, respectively, much faster than time-averaged velocities. Even for the more extreme case of an electron tunneling through a barrier of height of one Hartree, the velocity is only about one percent of the speed of light. Estimates of the corresponding time scales for
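The scaling quoted above (maximum velocities proportional to the square root of barrier height over mass) is easy to check by order of magnitude. The prefactor and the sample numbers below are illustrative assumptions, not values taken from the paper:

```python
# Order-of-magnitude check: v ~ sqrt(2 V / m) for barrier height V and mass m.
import math

eV = 1.602176634e-19      # J per electronvolt
amu = 1.66053906660e-27   # kg per atomic mass unit

def v_max(barrier_eV, mass_amu):
    """Characteristic velocity sqrt(2 V / m); the factor 2 is our assumption."""
    return math.sqrt(2 * barrier_eV * eV / (mass_amu * amu))

# A hydrogen-like mass under a ~0.25 eV barrier (ammonia-inversion scale):
print(v_max(0.25, 1.0) / 1e3, "km/s")   # a few km/s

# An electron under a 1 Hartree (~27.2 eV) barrier:
c = 299792458.0
print(v_max(27.2114, 5.485799e-4) / c)  # roughly a percent of light speed
```

Both numbers land where the abstract says they should: a few km/s for molecular tunneling, and about one percent of the speed of light for the electron case.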
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
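A small numerical sketch of the two survival laws involved: the exponential form that follows from Boltzmann-Gibbs maximum entropy, and a Tsallis-type power form with a dose cutoff that reduces to the exponential as the cutoff grows. The parameterization (alpha, gamma = alpha*D0) is a generic assumption, not the paper's fitted values:

```python
# Exponential survival (Boltzmann-Gibbs) vs. power-law survival with a
# hard dose cutoff D0 (Tsallis-type, generic parameterization).
import math

def survival_bg(D, alpha):
    return math.exp(-alpha * D)

def survival_tsallis(D, D0, gamma):
    if D >= D0:
        return 0.0           # no survival beyond the cutoff dose
    return (1.0 - D / D0) ** gamma

# With gamma = alpha * D0, the Tsallis form approaches the exponential
# as the cutoff D0 grows large:
alpha, D = 0.35, 2.0
for D0 in (5.0, 50.0, 5000.0):
    print(D0, survival_tsallis(D, D0, alpha * D0))
print(survival_bg(D, alpha))
```

The limit follows from (1 - D/D0)^(alpha*D0) → exp(-alpha*D) as D0 → ∞, so the cutoff model contains the classical radiobiological exponential as a special case.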
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
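The definition translates directly into a few lines of linear algebra: diagonalize the adjacency matrix and sum the exponentials of its eigenvalues. A sketch for a small bicyclic example (two triangles sharing a vertex; the example graph is our choice, not one from the paper):

```python
# Estrada index EE(G) = sum_i exp(lambda_i) from the adjacency spectrum.
import numpy as np

# Bicyclic example: triangles 0-1-2 and 2-3-4 share vertex 2.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

eigvals = np.linalg.eigvalsh(A)  # symmetric matrix: use eigvalsh
EE = float(np.exp(eigvals).sum())
print(round(EE, 4))
```

With n vertices and m edges, the power-series expansion trace(e^A) = n + m + (walk terms) guarantees EE grows with the number of closed walks, which is why denser bicyclic graphs score higher.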
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-01-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. PMID:22308461
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
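The fitted relation above can be wrapped as a one-line predictor. The sample inputs below are invented for illustration and are not measurements from the study:

```python
# Prediction equation from the abstract: X_max - X_0 = (0.59 +/- 0.02) * Y_XP * C,
# where Y_XP is biomass yield per unit lactate and C is the MIC of lactate.

def predict_max_biomass(x0, yield_per_lactate, mic_lactate, k=0.59):
    """Predicted maximum biomass concentration (same units as x0)."""
    return x0 + k * yield_per_lactate * mic_lactate

# Hypothetical culture: inoculum 0.1 g/L, yield 0.05 g biomass per g lactate,
# lactate MIC 40 g/L:
print(predict_max_biomass(0.1, 0.05, 40.0))
```

The ±0.02 on the fitted coefficient translates directly into a proportional uncertainty band on the predicted biomass, which could be added by evaluating the function at k=0.57 and k=0.61.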
The maximum rate of mammal evolution.
Evans, Alistair R; Jones, David; Boyer, Alison G; Brown, James H; Costa, Daniel P; Ernest, S K Morgan; Fitzgerald, Erich M G; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Smith, Felisa A; Stephens, Patrick R; Theodor, Jessica M; Uhen, Mark D
2012-03-13
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Li, Zhanling; Li, Zhanjie; Li, Chengcheng
2014-05-01
Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Most such studies concern basins in the humid and semi-humid south and east of China, while for the inland river basins, which occupy about 35% of the country's area, such studies remain limited, partly due to restricted data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high-flow extremes in the upper reach of the Heihe River basin, the second largest inland river basin in China, using the peaks-over-threshold (POT) method and the Generalized Pareto Distribution (GPD); the selection of the threshold and the inherent assumptions for the POT series are elaborated in detail. For comparison, other widely used probability distributions, including the generalized extreme value (GEV), Lognormal, Log-logistic, and Gamma, are employed as well. Maximum likelihood estimation is used for parameter estimation. Daily flow data at Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the approaches of the mean excess plot, stability features of model parameters, the return level plot, and the inherent independence assumption of the POT series, an optimum threshold of 340 m3/s is finally determined for high-flow extremes in the Yingluoxia watershed. The resulting POT series is shown to be stationary and independent based on the Mann-Kendall test, the Pettitt test, and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test, and several graphical diagnostics such as quantile and cumulative density function plots, the GPD provides the best fit to high-flow extremes in the study area. The estimated high flows for long return periods demonstrate that, as the return period increases, the return level estimates become more uncertain. The frequency of high flow extremes exhibits a very slight but not significant decreasing trend from 1978 to
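The POT/GPD workflow described above can be sketched in a few lines with SciPy. Synthetic gamma-distributed "flows" stand in for the Yingluoxia record, and the threshold and the return-level formulation are illustrative assumptions, not the study's fitted results:

```python
# Peaks-over-threshold: keep exceedances over a threshold and fit a
# Generalized Pareto Distribution by maximum likelihood.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
flows = rng.gamma(shape=2.0, scale=60.0, size=11000)  # fake daily flows

u = 340.0                                 # threshold (m3/s), as in the study
exceedances = flows[flows > u] - u

# MLE fit; location fixed at 0 since exceedances start at the threshold.
c, loc, scale = genpareto.fit(exceedances, floc=0)

def return_level(T_years, n_per_year=365.25):
    """Level exceeded on average once per T years (illustrative formulation)."""
    p_exceed = exceedances.size / flows.size
    m = T_years * n_per_year * p_exceed   # expected exceedances in T years
    return u + genpareto.ppf(1 - 1.0 / m, c, loc=0, scale=scale)

print(return_level(100.0))
```

Diagnostics the abstract mentions (mean excess plot, parameter stability across thresholds, quantile plots) would be layered on top of this fit to justify the threshold choice.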
Selection of conditions for production of maximum H beam current density from multicusp source
Krylov, A.I.; Kuznetsov, V.V.; Penkin, D.V.; Semashko, N.N. (I. V. Kurchatov Institute of Atomic Energy, Moscow (USSR))
1990-08-05
This paper describes a large filtered multicusp H⁻ ion source. The influence of a number of parameters on the yield characteristics of the H⁻ source was investigated, including the pressure in the discharge chamber, the cathode-filter spacing, and the multicusp geometry. (AIP)
TCAD Analysis of Heating and Maximum Current Density in Carbon Nanofiber Interconnects
2011-09-01
...high power applications, greater than 16 nm, and the long term and low power applications, less than 16 nm. As the connection distance decreases... different factors become important due to various quantum effects. Interconnects that fall in the near term/high power applications category are... entire length of the nanotube. This kind of structure is referred to as bamboo-like. The difference between a regular nanotube and the internal
Hydrodynamic states in water below the temperature of the density maximum: the limit to supercooling
van der Elsken, J.; van Boom, L.; Bot, A.
1988-01-01
Spectra of fluctuations in the total intensity of laser light deflected by supercooled water show that, even under carefully controlled conditions, large samples give convection when cooled below 0 °C. This is in agreement with the Rayleigh versus Prandtl number relation for supercooled water.
1984-06-01
(RKHS) if point evaluation is a continuous operation, that is, v_n → v in H(Ω) implies that v_n(t) → v(t) for all t ∈ Ω. See Goffman and Pedrick.
Reach/frequency for printed media: Personal probabilities or models
Mortensen, Peter Stendahl
2000-01-01
The author evaluates two different ways of estimating reach and frequency of plans for printed media. The first assigns reading probabilities to groups of respondents and calculates reach and frequency by simulation. The second estimates parameters to a model for reach/frequency. It is concluded that, in order to prevent bias, ratings per group must be used as reading probabilities. Nevertheless, in most cases, the estimates are still biased compared with panel data, thus overestimating net reach. Models with the same assumptions as with assignments of reading probabilities are presented.
Reach Scale Sediment Balance of Goodwin Creek Watershed, Mississippi
Ran, L.; Garcia, T.; Ye, S.; Harman, C. J.; Hassan, M. A.; Simon, A.
2010-12-01
Several reaches of Goodwin Creek, an experimental watershed within the Mississippi river basin, were analyzed for the period 1977-2007 in terms of long-term trends in sediment gain and loss in each reach, the relation of input and output to within-reach sediment fluxes, and the impacts of land use and bank erosion on reach sediment dynamics. Over the period 1977-2007, degradational and aggradational reaches were identified indicating slight vertical adjustment along the mainstream. Lateral adjustment was the main response of the channel to changes in flow and sediment regimes. Event-based sediment load was estimated using suspended concentration data, bedload transport rate, and changes in cross-sectional data. Bank erosion was estimated using cross-sectional data and models. The spatial and temporal patterns of within-reach sediment dynamics correspond closely with river morphology and also reflect basin conditions over the last three decades; thus they are conditioned by coeval trends in climate, hydrology, and land use. The sediment exchange within the mainstream was calculated by the development of reach sediment balances that reveal complex spatial and temporal patterns of sediment dynamics. Sediment load during the rising limb of the hydrograph was slightly higher than those estimated for the falling limb indicating the relative importance of sediment supply on reach sediment dynamic in the basin. Cumulative plots of sediment exchange reveal that major changes in within reach sediment storage are associated with large floods or major inputs from bank erosion.
Maximum Likelihood Learning of Conditional MTE Distributions
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
We describe a procedure for inducing conditional densities within the mixtures of truncated exponentials (MTE) framework. We analyse possible conditional MTE specifications and propose a model selection scheme, based on the BIC score, for partitioning the domain of the conditioning variables... Finally, experimental results demonstrate the applicability of the learning procedure as well as the expressive power of the conditional MTE distribution.
Shen, Tengming [Fermilab]; Ye, Liyang [NCSU, Raleigh]; Turrioni, Daniele [Fermilab]; Li, Pei [Fermilab]
2015-01-01
Small insert coils have been built using a multifilamentary Bi2Sr2CaCu2Ox round wire, and characterized in background fields to explore the quench behaviors and limits of Bi2Sr2CaCu2Ox superconducting magnets, with an emphasis on assessing the impact of slow normal zone propagation on quench detection. Using heaters of various lengths to initiate a small normal zone, a coil was quenched safely more than 70 times without degradation, with the maximum coil temperature reaching 280 K. Coils withstood a resistive voltage of tens of mV for seconds without quenching, showing the high stability of these coils and suggesting that the quench detection voltage should be greater than 50 mV so as not to falsely trigger protection. The hot spot temperature for the resistive voltage of the normal zone to reach 100 mV increases from ~40 K to ~80 K as the operating wire current density Jo increases from 89 A/mm2 to 354 A/mm2, whereas for the voltage to reach 1 V it increases from ~60 K to ~140 K, showing the increasing negative impact of slow normal zone propagation on quench detection with increasing Jo and the need to limit the quench detection voltage to <1 V. These measurements, coupled with an analytical quench model, were used to assess the impact of the maximum allowable voltage and temperature at quench detection on quench protection, assuming the hot spot temperature is limited to <300 K.
Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion
Poljak, Nikola
2016-11-01
The problem of determining the angle θ at which a point mass launched from ground level with a given speed v0 will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of Dmax = v0^2/g, with g being the free-fall acceleration. Conceptually and computationally more difficult problems have been suggested to improve student proficiency in projectile motion, with the most famous example being the Tarzan swing problem. The problem of determining the maximum distance of a point mass thrown from constant-speed circular motion is presented and analyzed in detail in this text. The calculational results confirm several conceptually derived conclusions regarding the initial throw position and provide some details on the angles and the way of throwing (underhand or overhand) that produce the maximum throw distance.
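As a quick numerical check of the textbook result quoted in this abstract (a throwaway script, not from the paper): scanning launch angles confirms that the range v0^2 sin(2θ)/g peaks at θ = π/4 with value v0^2/g.

```python
import math

def throw_range(theta, v0=10.0, g=9.81):
    """Horizontal range of a point mass launched from ground level."""
    return v0 ** 2 * math.sin(2 * theta) / g

# Scan launch angles between 0 and pi/2 and pick the best one.
angles = [i * math.pi / 2000 for i in range(1001)]
best = max(angles, key=throw_range)
# best sits at pi/4 (up to grid resolution) and the range equals v0^2/g.
```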
The problem of the maximum volumes and particle horizon in the Friedmann universe model
Gong, S. M.
1989-08-01
The maximum volume of the closed Friedmann universe is further investigated and is shown to be 2 x pi squared x R cubed (t), instead of pi squared x R cubed (t) as found previously. This discrepancy comes from the incomplete use of the volume formula of 3-dimensional spherical space in the astronomical literature. Mathematically, the maximum volume exists at any cosmic time t in a 3-dimensional spherical case. However, the Friedmann closed universe in expansion reaches its maximum volume only at the time of the maximum scale factor. The particle horizon places no limitation on the farthest objects in the closed Friedmann universe if the proper distance of objects is compared with the particle horizon, as it should be. Absurdity arises only if the luminosity distance of objects is compared with the proper distance of the particle horizon.
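The factor of two is just the full angular integral over a 3-sphere of radius R(t); in standard hyperspherical coordinates (my sketch, not taken from the paper):

```latex
V(t) = \int_0^{\pi}\!\int_0^{\pi}\!\int_0^{2\pi}
       R^3(t)\,\sin^2\chi\,\sin\theta \,d\phi\,d\theta\,d\chi
     = 4\pi R^3(t)\int_0^{\pi}\sin^2\chi\,d\chi
     = 2\pi^2 R^3(t).
```

Truncating the χ-integration at π/2, i.e. covering only a hemisphere of the 3-sphere, gives π² R³(t), which is the incomplete value the abstract criticizes.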
Upstream proton cyclotron waves at Venus near solar maximum
Delva, M.; Bertucci, C.; Volwerk, M.; Lundin, R.; Mazelle, C.; Romanelli, N.
2015-01-01
Magnetometer data of Venus Express are analyzed for the occurrence of waves at the proton cyclotron frequency in the spacecraft frame in the upstream region of Venus, under conditions of rising solar activity. The data of two Venus years up to the time of the highest sunspot number so far (1 March 2011 to 31 May 2012) are studied to reveal the properties of the waves and the interplanetary magnetic field (IMF) conditions under which they are observed. In general, waves generated by newborn protons from exospheric hydrogen are observed under quasi-(anti)parallel conditions of the IMF and the solar wind velocity, as expected from theoretical models. The present study near solar maximum finds significantly more waves than a previous study for solar minimum, with an asymmetry in the wave occurrence, i.e., mainly under antiparallel conditions. The plasma data from the Analyzer of Space Plasmas and Energetic Atoms instrument aboard Venus Express enable analysis of the background solar wind conditions. The prevalence of waves for IMF directed toward the Sun is related to the stronger southward tilt of the heliospheric current sheet during the rising phase of Solar Cycle 24, i.e., the "bashful ballerina" is responsible for asymmetric background solar wind conditions. The increase in the number of wave occurrences may be explained by a significant increase in the relative density of planetary protons with respect to the solar wind background. An exceptionally low solar wind proton density is observed during the rising phase of Solar Cycle 24. At the same time, higher EUV flux increases the ionization in the Venus exosphere, resulting in a higher supply of energy to the waves from a larger number of newborn protons. We conclude that, in addition to quasi-(anti)parallel conditions of the IMF and the solar wind velocity direction, the higher relative density of Venus exospheric protons with respect to the background solar wind proton density is the key parameter for the higher number of...
A strong test of the maximum entropy theory of ecology.
Xiao, Xiao; McGlinn, Daniel J; White, Ethan P
2015-03-01
The maximum entropy theory of ecology (METE) is a unified theory of biodiversity that predicts a large number of macroecological patterns using information on only species richness, total abundance, and total metabolic rate of the community. We evaluated four major predictions of METE simultaneously at an unprecedented scale using data from 60 globally distributed forest communities including more than 300,000 individuals and nearly 2,000 species. METE successfully captured 96% and 89% of the variation in the rank distribution of species abundance and individual size but performed poorly when characterizing the size-density relationship and intraspecific distribution of individual size. Specifically, METE predicted a negative correlation between size and species abundance, which is weak in natural communities. By evaluating multiple predictions with large quantities of data, our study not only identifies a mismatch between abundance and body size in METE but also demonstrates the importance of conducting strong tests of ecological theories.
Radiation Pressure Acceleration: the factors limiting maximum attainable ion energy
Bulanov, S S; Schroeder, C B; Bulanov, S V; Esirkepov, T Zh; Kando, M; Pegoraro, F; Leemans, W P
2016-01-01
Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near-complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it trans...
Efficiency at maximum power of a discrete feedback ratchet
Jarillo, Javier; Tangarife, Tomás; Cao, Francisco J.
2016-01-01
Efficiency at maximum power is found to be of the same order for a feedback ratchet and for its open-loop counterpart. However, feedback increases the output power up to a factor of five. This increase in output power is due to the increase in energy input and the effective entropy reduction obtained as a consequence of feedback. Optimal efficiency at maximum power is reached for time intervals between feedback actions two orders of magnitude smaller than the characteristic time of diffusion over a ratchet period length. The efficiency is computed consistently, taking into account the correlation between the control actions. We consider a feedback control protocol for a discrete feedback flashing ratchet, which works against an external load. We maximize the power output by optimizing the parameters of the ratchet, the controller, and the external load. The maximum power output is found to be upper bounded, so the attainable extracted power is limited. We then compute an upper bound for the efficiency of this isothermal feedback ratchet at maximum power output. We make this computation by applying recent developments in the thermodynamics of feedback-controlled systems, which give an equation for the entropy reduction due to information. However, this equation requires the computation of the probability of each of the possible sequences of the controller's actions. This computation becomes involved when the sequence of the controller's actions is non-Markovian, as is the case in most feedback ratchets. We here introduce an alternative procedure to set strong bounds on the entropy reduction in order to compute its value. In this procedure the bounds are evaluated in a quasi-Markovian limit, which emerges when there are big differences between the stationary probabilities of the system states. These big differences are an effect of the potential strength, which minimizes the departures from the Markovianity of the sequence of control actions, allowing also to...
Should these potential CMR substances have been registered under REACH?
Wedebye, Eva Bay; Nikolov, Nikolai Georgiev; Dybdahl, Marianne;
2013-01-01
(Q)SAR models were applied to screen around 68,000 REACH pre-registered substances for CMR properties (carcinogenic, mutagenic or toxic to reproduction). Predictions from 14 relevant models were combined to reach overall calls for C, M and R. Combining predictions may reduce “noise” and increase...
Guaranteed performance in reaching mode of sliding mode controlled systems
G K Singh; K E Holé
2004-02-01
Conventionally, the parameters of a sliding mode controller (SMC) are selected so as to reduce the time spent in the reaching mode. Although an upper bound on the time to reach (reaching time) the sliding surface is easily derived, a performance guarantee in the state/error space needs more consideration. This paper addresses the design of a constant plus proportional rate reaching law-based SMC for second-order nonlinear systems. It is shown that this controller imposes bounding second-order error dynamics, and thus guarantees robust performance during the reaching phase. The choice of the controller parameters based on the time to reach a desirable level of output tracking error (OTE), rather than on the reaching time, is proposed. Using Lyapunov theory, it is shown that parameter selections based on the reaching time criterion may need substantially larger time to achieve the OTE. Simulation results are presented for a nonlinear spring-mass-damper system. It is seen that parameter selections based on the proposed OTE criterion result in substantially quicker tracking, while using similar levels of control effort.
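For the constant plus proportional rate reaching law, s_dot = -q sgn(s) - k s, the reaching time has the closed form t_r = (1/k) ln((k|s0| + q)/q). A short simulation with arbitrary illustrative gains (not the paper's spring-mass-damper example) reproduces it:

```python
import math

# Constant plus proportional rate reaching law: s_dot = -q*sgn(s) - k*s.
# Gains and initial condition are illustrative only.
k, q, s0, dt = 2.0, 1.0, 5.0, 1e-5

# Closed-form reaching time for this law.
t_reach = math.log((k * abs(s0) + q) / q) / k

# Forward-Euler integration until the sliding surface s = 0 is crossed.
s, t = s0, 0.0
while s > 0.0:
    s += dt * (-q * math.copysign(1.0, s) - k * s)
    t += dt
```

The simulated crossing time t agrees with t_reach to within the integration step, which is the bound the reaching-time-based parameter selection starts from.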
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied extensively. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
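A toy illustration of the pooling idea (my reduction, not the paper's estimator): for clean sinusoids with comparable noise statistics, a maximum likelihood criterion of this kind amounts to summing harmonic energy over channels, so two channels with different amplitudes, phases, and harmonic content but a shared fundamental can be scored jointly. All signal parameters below are invented.

```python
import cmath
import math

fs, dur, f0_true = 8000.0, 0.05, 220.0  # sample rate, duration, shared pitch
n = int(fs * dur)
t = [i / fs for i in range(n)]

# Two channels sharing f0 = 220 Hz but with different amplitudes, phases,
# and harmonic content (different "conditions" per channel).
ch1 = [math.sin(2 * math.pi * f0_true * ti)
       + 0.5 * math.sin(2 * math.pi * 2 * f0_true * ti) for ti in t]
ch2 = [0.3 * math.sin(2 * math.pi * f0_true * ti + 1.0)
       + 0.8 * math.sin(2 * math.pi * 3 * f0_true * ti + 0.4) for ti in t]

def harmonic_power(x, f0, harmonics=3):
    """Energy of x at the first few harmonics of a candidate f0."""
    p = 0.0
    for h in range(1, harmonics + 1):
        z = sum(xi * cmath.exp(-2j * math.pi * h * f0 * i / fs)
                for i, xi in enumerate(x))
        p += abs(z) ** 2
    return p

# Joint estimate: pool the harmonic energy of both channels per candidate f0.
cands = [100.0 + 2.0 * j for j in range(151)]  # 100..400 Hz grid
f0_hat = max(cands, key=lambda f: harmonic_power(ch1, f) + harmonic_power(ch2, f))
```

Pooling resolves the octave ambiguity here: a candidate at 110 Hz only picks up the 220 Hz energy through its second harmonic, while 220 Hz also captures each channel's higher harmonics.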
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes structure and measurement results of the system detecting present maximum temperature on the surface of an integrated circuit. The system consists of the set of proportional to absolute temperature sensors, temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit where designed with full-custom technique. The system is a part of temperature-controlled oscillator circuit - a power management system based on dynamic frequency scaling method. The oscillator cooperates with microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Maximum Entropy Estimation of n-Year Extreme Waveheights
徐德伦; 张军; 郑桂珍
2004-01-01
A new method for estimating the n (50- or 100-) year return-period waveheight, namely the extreme waveheight expected to occur once in n years, is presented on the basis of the maximum entropy principle. The main points of the method are as follows: (1) based on the Hamiltonian principle, a maximum entropy probability density function for the extreme waveheight H, f(H) = αH^γ e^(-βH^4), is derived from a Lagrangian function subject to some necessary and rational constraints; (2) the parameters α, β, and γ in the function are expressed in terms of the mean H̄, the variance V = ⟨(H - H̄)^2⟩, and the bias B = ⟨(H - H̄)^3⟩; and (3) with H̄, V, and B estimated from observed data, the n-year return-period waveheight Hn is computed in accordance with the formula 1/(1 - F(Hn)) = n, where F is the cumulative distribution function F(Hn) = ∫_0^Hn f(H) dH. Examples of estimating the 50- and 100-year return-period waveheights by the present method and by some currently used methods from observed data acquired at two hydrographic stations are given. A comparison of the estimated results shows that the present method is superior to the others.
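The recipe in step (3) is easy to reproduce numerically: normalize f, accumulate its CDF, and stop where F(Hn) reaches 1 - 1/n. The sketch below uses made-up values of γ and β; the paper estimates them from H̄, V, and B, which this abstract does not give.

```python
import math

# Illustrative parameters for f(H) = alpha * H**gamma * exp(-beta * H**4);
# gamma and beta are invented here, not values fitted in the paper.
gamma, beta = 2.0, 0.05
h_max, n_grid = 6.0, 6000  # integration grid wide enough to cover the tail
dh = h_max / n_grid

# Unnormalized density on the grid, then the normalization constant alpha.
w = [(i * dh) ** gamma * math.exp(-beta * (i * dh) ** 4) for i in range(n_grid + 1)]
alpha = 1.0 / sum(0.5 * (w[i - 1] + w[i]) * dh for i in range(1, n_grid + 1))

def return_waveheight(n_years):
    """Smallest grid height Hn with F(Hn) >= 1 - 1/n (trapezoidal CDF)."""
    target = 1.0 - 1.0 / n_years
    cdf = 0.0
    for i in range(1, n_grid + 1):
        cdf += 0.5 * (w[i - 1] + w[i]) * dh * alpha
        if cdf >= target:
            return i * dh
    return h_max

h50, h100 = return_waveheight(50), return_waveheight(100)
```

With these toy parameters the 100-year height comes out above the 50-year one, as it must, since F is monotone.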
Maximum mass, moment of inertia and compactness of relativistic stars
Breu, Cosima
2016-01-01
A number of recent works have highlighted that it is possible to express the properties of general-relativistic stellar equilibrium configurations in terms of functions that do not depend on the specific equation of state employed to describe matter at nuclear densities. These functions are normally referred to as "universal relations" and have been found to apply, within limits, both to static or stationary isolated stars, as well as to fully dynamical and merging binary systems. Further extending the idea that universal relations can be valid also away from stability, we show that a universal relation is exhibited also by equilibrium solutions that are not stable. In particular, the mass of rotating configurations on the turning-point line shows a universal behaviour when expressed in terms of the normalised Keplerian angular momentum. In turn, this allows us to compute the maximum mass allowed by uniform rotation, M_{max}, simply in terms of the maximum mass of the nonrotating configuration, M_{TOV}, findi...
NERO- a post-maximum supernova radiation transport code
Maurer, I.; Jerkstrand, A.; Mazzali, P. A.; Taubenberger, S.; Hachinger, S.; Kromer, M.; Sim, S.; Hillebrandt, W.
2011-12-01
The interpretation of supernova (SN) spectra is essential for deriving SN ejecta properties such as density and composition, which in turn can tell us about their progenitors and the explosion mechanism. A very large number of atomic processes are important for spectrum formation. Several tools for calculating SN spectra exist, but they mainly focus on the very early or late epochs. The intermediate phase, which requires a non-local thermodynamic equilibrium (NLTE) treatment of radiation transport has rarely been studied. In this paper, we present a new SN radiation transport code, NERO, which can look at those epochs. All the atomic processes are treated in full NLTE, under a steady-state assumption. This is a valid approach between roughly 50 and 500 days after the explosion depending on SN type. This covers the post-maximum photospheric and the early and the intermediate nebular phase. As a test, we compare NERO to the radiation transport code of Jerkstrand, Fransson & Kozma and to the nebular code of Mazzali et al. All three codes have been developed independently and a comparison provides a valuable opportunity to investigate their reliability. Currently, NERO is one-dimensional and can be used for predicting spectra of synthetic explosion models or for deriving SN properties by spectral modelling. To demonstrate this, we study the spectra of the 'normal' Type Ia supernova (SN Ia) 2005cf between 50 and 350 days after the explosion and identify most of the common SN Ia line features at post-maximum epochs.
Maximum mass of a barotropic spherical star
Fujisawa, Atsuhito; Yoo, Chul-Moon; Nambu, Yasusada
2015-01-01
The ratio of total mass $M$ to surface radius $R$ of a spherical perfect fluid ball has an upper bound, $M/R < B$. Buchdahl obtained $B = 4/9$ under the assumptions of a mass density that is non-increasing in the outward direction and a barotropic equation of state. Barraco and Hamity decreased Buchdahl's bound to a lower value $B = 3/8$ $(< 4/9)$ by adding the dominant energy condition to Buchdahl's assumptions. In this paper, we further decrease the Barraco-Hamity bound to $B \simeq 0.3636403$ $(< 3/8)$ by adding the subluminal (slower-than-light) condition on the sound speed. In our analysis, we solve the Tolman-Oppenheimer-Volkoff equations numerically, and the mass-to-radius ratio is maximized by variation of mass, radius, and pressure inside the fluid ball as functions of mass density.
Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.
Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N
2014-01-01
Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 and 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
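The growth figures quoted above (mean asymptotic size 403 cm TL, 340 cm TL by age 5) pin down a von Bertalanffy curve once a size at birth is assumed. The 70 cm birth size below is my placeholder, and the VBGF form itself is only implied, not stated, in this abstract.

```python
import math

# Anchor points from the abstract: L_inf = 403 cm TL and L(5) = 340 cm TL.
# The size at birth (70 cm) is an assumed, illustrative value.
l_inf, l_age5, l_birth = 403.0, 340.0, 70.0

# Solve L(t) = l_inf - (l_inf - l_birth) * exp(-k*t) for k using L(5) = 340.
k = -math.log((l_inf - l_age5) / (l_inf - l_birth)) / 5.0

def length_at_age(t):
    """Von Bertalanffy length (cm TL) at age t years."""
    return l_inf - (l_inf - l_birth) * math.exp(-k * t)
```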
Crystallization of Ti33Cu67 metallic glass under high-current density electrical pulses
Mali Vyacheslav
2011-01-01
We have studied the phase and structure evolution of the Ti33Cu67 amorphous alloy subjected to electrical pulses of high current density. By varying the pulse parameters, different stages of crystallization could be observed in the samples. Partial polymorphic nanocrystallization, resulting in the formation of 5- to 8-nm crystallites of the TiCu2 intermetallic in the residual amorphous matrix, occurred when the maximum current density reached 9.7·10^8 A m^-2 and the pulse duration was 140 μs, even though the calculated temperature increase due to Joule heating was not enough to reach the crystallization temperature of the alloy. Samples subjected to higher current densities and higher values of the evolved Joule heat per unit mass fully crystallized and contained the Ti2Cu3 and TiCu3 phases. A common feature of the crystallized ribbons was their non-uniform microstructure, with regions that experienced local melting and rapid solidification. PACS: 81; 81.05.Bx; 81.05.Kf.
Identification of temporal consistency in rating curve data: Bidirectional Reach (BReach)
Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Verhoest, Niko E. C.
2016-08-01
In this paper, a methodology is developed to identify consistency of rating curve data based on a quality analysis of model results. This methodology, called Bidirectional Reach (BReach), evaluates results of a rating curve model with randomly sampled parameter sets in each observation. The combination of a parameter set and an observation is classified as nonacceptable if the deviation between the accompanying model result and the measurement exceeds the observational uncertainty. Based on this classification, conditions for satisfactory behavior of a model in a sequence of observations are defined. Subsequently, a parameter set is evaluated in a data point by assessing the span over which it behaves satisfactorily in the direction of the previous (or following) chronologically sorted observations. This is repeated for all sampled parameter sets, and results are aggregated by indicating the endpoint of the largest span, called the maximum left (right) reach. This temporal reach should not be confused with a spatial reach (indicating a part of a river). The same procedure is followed for each data point and for different definitions of satisfactory behavior. Results of this analysis enable the detection of changes in data consistency. The methodology is validated with observed data and various synthetic stage-discharge data sets and proves to be a robust technique to investigate temporal consistency of rating curve data. It provides satisfying results despite low data availability, errors in the estimated observational uncertainty, and a rating curve model that is known to cover only a limited part of the observations.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d'Ornon Cedex (France)]
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T-1(I -A ) , where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Bast, Radovan; Juselius, Jonas [Centre for Theoretical and Computational Chemistry (CTCC), Department of Chemistry, University of Tromsø, N-9037 Tromsø (Norway); Saue, Trond [Institut de Chimie de Strasbourg, CNRS et Université Louis Pasteur, Laboratoire de Chimie Quantique, 4, rue Blaise Pascal, BP 1032, F-67070 Strasbourg (France)], E-mail: tsaue@chimie.u-strasbg.fr
2009-02-17
We present a 4-component relativistic implementation for calculating the magnetically induced current density within Hartree-Fock and Kohn-Sham linear response theory using a common gauge origin. We demonstrate how the current density can be decomposed into paramagnetic and diamagnetic contributions by calculating separately the contributions from rotations between positive-energy orbitals and contributions from rotations between the occupied positive-energy orbitals and the virtual negative-energy orbitals, respectively. This methodology is applied to the study of the magnetically induced current density in benzene and the group 15 heteroaromatic compounds C₅H₅E (E = N, P, As, Sb, Bi). Quantitative values for the magnetically induced ring currents are obtained by numerical integration over the current flow. We have found that the diatropic ring current is sustained for the entire series of the group 15 heteroaromatic compounds, the induced ring current susceptibility of bismabenzene being 76% of the benzene result. Having employed two hybrid and two nonhybrid generalized gradient approximation functionals, the results are found to be rather insensitive to the choice of the density functional approximation. The relativistic effect is relatively small, reaching its maximum of 8% for bismabenzene. The presented 4-component relativistic methodology opens up the possibility to visualize magnetically induced current densities of aromatic heavy-element systems with both scalar relativistic and spin-orbit effects included.
Waters, Kevin A.; Crowe Curran, Joanna
2016-11-01
While research into the interaction between in-channel vegetation, flow, and bed sediment has increased in recent years, there is still a need to understand how unsteady flows affect these processes, particularly in terms of channel bed adjustments. In this study, flume experiments tested two flood hydrograph sizes run over sand/gravel and sand/silt beds to evaluate reach scale impacts of a midchannel vegetation patch of variable stem density on channel bathymetry and stability. Alternating flood hydrographs with periods of low, steady flow created flow sequences reflective of an extended unsteady flow regime, thereby simulating time scales consisting of multiple flood events. Digital elevation models provided detailed measurements of channel change following each flood event to enable analysis over each unsteady flow sequence. The vegetation patch created characteristic channel bathymetries dependent on sediment mixture and patch density that in all cases resulted in a more variable bed structure than channels without a patch. Reach scale stability, quantified based on net volumetric bed change, only occurred with a sparse patch in the low flood sequence, corresponding with little variation in surface composition and structure. In most other cases, scour measured at the patch prevented stability at the reach scale, especially in the finer substrate. Overall, findings show that a channel may only adjust to a stable bathymetry upon addition of a midchannel vegetation patch within a limited range of flow regimes and patch stem densities, and that for the experimental conditions tested here, in-stream patches generally did not enhance reach scale bed stability.
Energy dependence of CP-violation reach for monochromatic neutrino beam
Bernabeu, Jose [IFIC, Universitat de Valencia-CSIC, E-46100, Burjassot, Valencia (Spain); Espinoza, Catalina [IFIC, Universitat de Valencia-CSIC, E-46100, Burjassot, Valencia (Spain)], E-mail: m.catalina.espinoza@uv.es
2008-06-26
The ultimate goal of future neutrino facilities is the determination of CP violation in neutrino oscillations. Besides |U(e3)| ≠ 0, this will require precision experiments with a very intense neutrino source and energy control. With this objective in mind, the creation of monochromatic neutrino beams from the electron capture decay of boosted ions by the SPS of CERN has been proposed. We discuss the capabilities of such a facility as a function of the energy of the boost and the baseline for the detector. We compare the physics potential for two different configurations: (I) γ = 90 and γ = 195 (maximum achievable at present SPS) to Frejus; (II) γ = 195 and γ = 440 (maximum achievable at upgraded SPS) to Canfranc. We conclude that the SPS upgrade to 1000 GeV is important to reach a better sensitivity to CP violation iff it is accompanied by a longer baseline.
Govatski, J. A.; da Luz, M. G. E.; Koehler, M.
2015-01-01
We study the geminated pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy which reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how an energetic disorder spatial variation may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.
High precision Hugoniot measurements of D2 near maximum compression
Benage, John; Knudson, Marcus; Desjarlais, Michael
2015-11-01
The Hugoniot response of liquid deuterium has been widely studied due to its general importance and to the significant discrepancy in the inferred shock response obtained from early experiments. With improvements in dynamic compression platforms and experimental standards these results have converged and show general agreement with several equation of state (EOS) models, including quantum molecular dynamics (QMD) calculations within the Generalized Gradient Approximation (GGA). This approach to modeling the EOS has also proven quite successful for other materials and is rapidly becoming a standard approach. However, small differences remain among predictions obtained using different local and semi-local density functionals; these small differences show up in the deuterium Hugoniot at ~ 30-40 GPa near the region of maximum compression. Here we present experimental results focusing on that region of the Hugoniot and take advantage of advancements in the platform and standards, resulting in data with significantly higher precision than that obtained in previous studies. These new data may prove to distinguish between the subtle differences predicted by the various density functionals. Results of these experiments will be presented along with comparison to various QMD calculations. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Chiara Begliomini
2014-09-01
Experimental evidence suggests the existence of a sophisticated brain circuit specifically dedicated to reach-to-grasp planning and execution, both in human and non-human primates (Castiello, 2005). Studies accomplished by means of neuroimaging techniques suggest the hypothesis of a dichotomy between a reach-to-grasp circuit, involving the intraparietal area (AIP) and the dorsal and ventral premotor cortices (PMd and PMv; Castiello and Begliomini, 2008; Filimon, 2010), and a reaching circuit involving the medial intraparietal area (mIP) and the Superior Parieto-Occipital Cortex (SPOC; Culham et al., 2006). However, the time course characterizing the involvement of these regions during the planning and execution of these two types of movements has yet to be delineated. A functional magnetic resonance imaging (fMRI) study has been conducted, including reach-to-grasp and reaching-only movements, performed towards either a small or a large stimulus, and a Finite Impulse Response model (FIR; Henson, 2003) was adopted to monitor activation patterns from stimulus onset for a time window of 10 seconds duration. Data analysis focused on brain regions belonging either to the reaching or to the grasping network, as suggested by Castiello and Begliomini (2008). Results suggest that the planning and execution of reaching and grasping movements might share a common brain network, providing further confirmation to the idea that the neural underpinnings of reaching and grasping may overlap in both spatial and temporal terms (Verhagen et al., 2013).
The maximum optical depth toward bulge stars from axisymmetric models of the Milky Way
Kuijken, K
1997-01-01
Recent microlensing results toward the bulge imply mass densities that are surprisingly high, given dynamical constraints on the Milky Way mass distribution. We derive the maximum optical depth toward the bulge that may be generated by axisymmetric structures in the Milky Way.
Hong, Hunsop; Schonfeld, Dan
2008-06-01
In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm. We use the proposed algorithm for density estimation. The maximum-entropy constraint is imposed for smoothness of the estimated density function. The derivation of the MEEM algorithm requires determination of the covariance matrix in the framework of the maximum-entropy likelihood function, which is difficult to solve analytically. We therefore derive the MEEM algorithm by optimizing a lower bound of the maximum-entropy likelihood function. We note that the classical expectation-maximization (EM) algorithm has previously been employed for 2-D density estimation. We propose to extend the use of the classical EM algorithm to image recovery from randomly sampled data and sensor field estimation from randomly scattered sensor networks, and to use our approach in density estimation, image recovery, and sensor field estimation. Computer simulation experiments are used to demonstrate the superior performance of the proposed MEEM algorithm in comparison to existing methods.
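As background for the classical EM baseline mentioned above, here is a minimal 1-D Gaussian-mixture density estimator. The MEEM smoothness constraint is omitted, and the even-grid initialization is an assumption chosen for illustration, not taken from the paper:

```python
import math

def em_gmm_1d(data, k=2, iters=50):
    """Classical EM for a 1-D Gaussian-mixture density estimate.
    Components are initialized on an even grid over the data range."""
    lo, hi = min(data), max(data)
    mu = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]
    var = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2
                         for r, x in zip(resp, data)) / nj
            var[j] = max(var[j], 1e-6)  # guard against variance collapse
    return w, mu, var
```

On well-separated data the estimated means converge to the cluster centers; the MEEM algorithm modifies the objective that this loop maximizes, not its overall E/M structure.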
Maximum likelihood PSD estimation for speech enhancement in reverberant and noisy conditions
Kuklasinski, Adam; Doclo, Simon; Jensen, Jesper
2016-01-01
We propose a novel Power Spectral Density (PSD) estimator for multi-microphone systems operating in reverberant and noisy conditions. The estimator is derived using the maximum likelihood approach and is based on a blocked and pre-whitened additive signal model. The intended application......, the difference between algorithms was found to be statistically significant only in some of the experimental conditions....
Proprioceptive body illusions modulate the visual perception of reaching distance.
Agustin Petroni
The neurobiology of reaching has been extensively studied in human and non-human primates. However, the mechanisms that allow a subject to decide-without engaging in explicit action-whether an object is reachable are not fully understood. Some studies conclude that decisions near the reach limit depend on motor simulations of the reaching movement. Others have shown that the body schema plays a role in explicit and implicit distance estimation, especially after motor practice with a tool. In this study we evaluate the causal role of multisensory body representations in the perception of reachable space. We reasoned that if body schema is used to estimate reach, an illusion of the finger size induced by proprioceptive stimulation should propagate to the perception of reaching distances. To test this hypothesis we induced a proprioceptive illusion of extension or shrinkage of the right index finger while participants judged a series of LEDs as reachable or non-reachable without actual movement. Our results show that reach distance estimation depends on the illusory perceived size of the finger: illusory elongation produced a shift of reaching distance away from the body whereas illusory shrinkage produced the opposite effect. Combining these results with previous findings, we suggest that deciding if a target is reachable requires an integration of body inputs in high order multisensory parietal areas that engage in movement simulations through connections with frontal premotor areas.
The impact of REACH on classification for human health hazards.
Oltmanns, J; Bunke, D; Jenseit, W; Heidorn, C
2014-11-01
The REACH Regulation represents a major piece of chemical legislation in the EU and requires manufacturers and importers of chemicals to assess the safety of their substances. The classification of substances for their hazards is one of the crucial elements in this process. We analysed the effect of REACH on classification for human health endpoints by comparing information from REACH registration dossiers with legally binding, harmonised classifications. The analysis included 142 chemicals produced at very high tonnages in the EU, the majority of which have already been assessed in the past. Of 20 substances lacking a harmonised classification, 12 chemicals were classified in REACH registration dossiers. More importantly, 37 substances with harmonised classifications for human health endpoints had stricter classifications in registration dossiers and 29 of these were classified for at least one additional endpoint not covered by the harmonised classification. Substance-specific analyses suggest that one third of these additional endpoints emerged from experimental studies performed to fulfil information requirements under REACH, while two thirds resulted from a new assessment of pre-REACH studies. We conclude that REACH leads to an improved hazard characterisation even for substances with a potentially good data basis.
Proprioceptive Body Illusions Modulate the Visual Perception of Reaching Distance
Petroni, Agustin; Carbajal, M. Julia; Sigman, Mariano
2015-01-01
The neurobiology of reaching has been extensively studied in human and non-human primates. However, the mechanisms that allow a subject to decide—without engaging in explicit action—whether an object is reachable are not fully understood. Some studies conclude that decisions near the reach limit depend on motor simulations of the reaching movement. Others have shown that the body schema plays a role in explicit and implicit distance estimation, especially after motor practice with a tool. In this study we evaluate the causal role of multisensory body representations in the perception of reachable space. We reasoned that if body schema is used to estimate reach, an illusion of the finger size induced by proprioceptive stimulation should propagate to the perception of reaching distances. To test this hypothesis we induced a proprioceptive illusion of extension or shrinkage of the right index finger while participants judged a series of LEDs as reachable or non-reachable without actual movement. Our results show that reach distance estimation depends on the illusory perceived size of the finger: illusory elongation produced a shift of reaching distance away from the body whereas illusory shrinkage produced the opposite effect. Combining these results with previous findings, we suggest that deciding if a target is reachable requires an integration of body inputs in high order multisensory parietal areas that engage in movement simulations through connections with frontal premotor areas. PMID:26110274
Polivka, Karl; Bennett, Rita L. [USDA Forest Service, Pacific Northwest Research Station, Wenatchee, WA
2009-03-31
We studied variation in productivity in headwater reaches of the Wenatchee subbasin for multiple field seasons with the objective that we could develop methods for monitoring headwater stream conditions at the subcatchment and stream levels, assign a landscape-scale context via the effects of geoclimatic parameters on biological productivity (macroinvertebrates and fish) and use this information to identify how variability in productivity measured in fishless headwaters is transmitted to fish communities in downstream habitats. In 2008, we addressed this final objective. In collaboration with the University of Alaska Fairbanks we found some broad differences in the production of aquatic macroinvertebrates and in fish abundance across categories that combine the effects of climate and management intensity within the subbasin (ecoregions). From a monitoring standpoint, production of benthic macroinvertebrates was not a good predictor of drifting macroinvertebrates and therefore might be a poor predictor of food resources available to fish. Indeed, there is occasionally a correlation between drifting macroinvertebrate abundance and fish abundance which suggests that headwater-derived resources are important. However, fish in the headwaters appeared to be strongly food-limited and there was no evidence that fishless headwaters provided a consistent subsidy to fish in reaches downstream. Fish abundance and population dynamics in first order headwaters may be linked with similar metrics further down the watershed. The relative strength of local dynamics and inputs into productivity may be constrained or augmented by large-scale biogeoclimatic control. Headwater streams are nested within watersheds, which are in turn nested within ecological subregions; thus, we hypothesized that local effects would not necessarily be mutually exclusive from large-scale influence. To test this we examined the density of primarily salmonid fishes at several spatial and temporal scales
Maximum probability domains for the analysis of the microscopic structure of liquids
Agostini, Federica; Savin, Andreas; Vuilleumier, Rodolphe
2014-01-01
We introduce the concept of maximum probability domains, developed in the context of the analysis of electronic densities, in the study of the microscopic spatial structures of liquids. The idea of locating a particle in a three-dimensional region, by determining the domain where the probability of finding that, and only that, particle is maximum, gives an interesting characterisation of the local structure of the liquid. The optimisation procedure, required for the search of the domain of maximum probability, is carried out by the implementation of the level set method. Results for a few case studies are presented, in particular liquid water at different densities and the solvation shells of Na$^+$ in liquid water.
Maximum-Entropy Method for Evaluating the Slope Stability of Earth Dams
Shuai Wang
2012-10-01
The slope stability is a very important problem in geotechnical engineering. This paper presents an approach for slope reliability analysis based on the maximum-entropy method. The key idea is to implement the maximum entropy principle in estimating the probability density function. The performance function is formulated by the Simplified Bishop's method to estimate the slope failure probability. The maximum-entropy method is used to estimate the probability density function (PDF) of the performance function subject to the moment constraints. A numerical example is calculated and compared to the Monte Carlo simulation (MCS) and the Advanced First Order Second Moment Method (AFOSM). The results show the accuracy and efficiency of the proposed method. The proposed method should be valuable for performing probabilistic analyses.
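The core idea, estimating a density by maximum entropy subject to moment constraints, can be illustrated in a discrete, single-constraint form. The maximizer has the exponential form p_i ∝ exp(λ·x_i), and because the constrained mean is monotone increasing in λ, bisection on the Lagrange multiplier suffices. This is a sketch of the principle, not the paper's slope-reliability implementation:

```python
import math

def maxent_pmf(xs, target_mean, tol=1e-12):
    """Maximum-entropy pmf on the grid xs subject to a single mean
    constraint.  The entropy maximizer has the exponential form
    p_i ∝ exp(lam * x_i); bisection finds the multiplier lam."""
    def pmf(lam):
        m = max(xs)
        w = [math.exp(lam * (x - m)) for x in xs]  # shift for stability
        z = sum(w)
        return [wi / z for wi in w]
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        mean = sum(p * x for p, x in zip(pmf(mid), xs))
        if mean < target_mean:
            lo = mid
        else:
            hi = mid
    return pmf((lo + hi) / 2)
```

With the target mean equal to the grid average the multiplier vanishes and the uniform (maximum-entropy) distribution is recovered; a higher target mean tilts the mass toward larger grid values. Adding higher moment constraints, as in the paper, turns the scalar bisection into a multidimensional root-finding problem for several multipliers.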
50 years sets with positive reach - a survey -
Christoph Thäle
2008-09-01
The purpose of this paper is to summarize results on various aspects of sets with positive reach, which are up to now not available in such a compact form. After recalling briefly the results before 1959, sets with positive reach and their associated curvature measures are introduced. We develop an integral and current representation of these curvature measures and show how the current representation helps to prove integral-geometric formulas, such as the principal kinematic formula. Also random sets with positive reach and random mosaics (or the more general random cell-complexes) with general cell shape are considered.
REACH Basics for Chinese Producers of Electric Household Appliances
Dr. Klaus W. Mehl
2008-01-01
The following article explains the EU chemical regulation "REACH", explicates the requirements that Chinese producers are facing, and shows how they can fulfill the requirements and secure their access to the EU market. The consequences of failing to fulfill REACH requirements are given in REACH Article 5: No data, no market: ... substances ... in articles ... shall not be ... placed on the market unless they have been registered. In other words: without registration of chemicals, Chinese producers of electric household appliances may lose their EU market.
Maximum entropy principle and texture formation
Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of two subsystems; while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
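For reference, the "very elegant linear-time solution" is usually written as a left fold that tracks the best segment ending at the current position. A plain imperative sketch (Kadane's scan), not the paper's monadic derivation:

```python
def max_segment_sum(xs):
    """Largest sum of a contiguous segment of xs.
    The empty segment counts, so the result is never negative."""
    best = ending_here = 0
    for x in xs:
        # best segment ending at x either extends the previous one or is empty
        ending_here = max(0, ending_here + x)
        best = max(best, ending_here)
    return best
```

The fold maintains the invariant that `ending_here` is the maximum sum over segments ending at the current element, which is exactly the strengthening that the calculational derivations in the program-construction literature compute.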
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion $(B_t)_{t\ge 0}$ and the equation of motion $dX_t = v_t\,dt + 2\,dB_t$, we set $S_t = \max_{0\le s\le t} X_s$ and consider the optimal control problem $\sup_v \mathrm{E}(S_\tau - C_\tau)$, where $c > 0$ and the supremum is taken over all admissible controls $v$ satisfying $v_t \in [\mu_0, \mu_1]$ for all $t$ up to $\tau = \inf\{t > 0 \mid X_t \notin (\ell_0, \ell_1)\}$. The optimal control switches at $g_*(S_t)$, where $s \mapsto g_*(s)$ is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations) in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index; deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
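The spectral luminous efficacy discussed here is the luminosity-weighted average of a source spectrum, scaled by the defined 683 lm/W at 555 nm. A rough sketch for a bandpass-truncated blackbody, using a Gaussian stand-in for the CIE photopic curve (the real tabulated V(λ) differs, so the numbers are only indicative):

```python
import math

def planck(lam_nm, T):
    """Blackbody spectral radiance at wavelength lam_nm (nm) and temperature T (K)."""
    lam = lam_nm * 1e-9
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

def V(lam_nm, sigma=45.0):
    """Gaussian approximation to the CIE photopic sensitivity curve (peak at 555 nm)."""
    return math.exp(-0.5 * ((lam_nm - 555.0) / sigma) ** 2)

def spectral_efficacy(T, lo=400, hi=700, step=1):
    """Luminous efficacy (lm/W) of a blackbody truncated to the [lo, hi] nm bandpass."""
    num = sum(V(l) * planck(l, T) for l in range(lo, hi + 1, step))
    den = sum(planck(l, T) for l in range(lo, hi + 1, step))
    return 683.0 * num / den  # 683 lm/W at 555 nm is the definitional anchor

```

For a 5800 K blackbody clipped to 400-700 nm this lands in the few-hundred lm/W range, in line with the 250--370 lm/W window quoted in the abstract.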
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
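A pairwise maximum entropy model over binary up/down states assigns probabilities of the Boltzmann form P(s) ∝ exp(Σ_i h_i s_i + Σ_{i<j} J_ij s_i s_j). A small enumeration sketch (toy parameters, not the fitted G7 couplings):

```python
import itertools
import math

def pairwise_maxent(h, J):
    """Probabilities of all +/-1 state vectors under
    P(s) ∝ exp(sum_i h[i]*s[i] + sum_{i<j} J[i][j]*s[i]*s[j])."""
    n = len(h)
    states = list(itertools.product([-1, 1], repeat=n))

    def energy(s):
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        return e

    weights = [math.exp(energy(s)) for s in states]
    Z = sum(weights)  # partition function normalizes the distribution
    return {s: w / Z for s, w in zip(states, weights)}
```

Fitting such a model means choosing h and J so that the model's single-unit and pairwise expectations match the observed recession/expansion statistics; the enumeration above is only feasible for small systems, which is one reason pairwise models strain as the system grows.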
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, even though objects of interest may be either moving or stationary. In this paper, we propose layers segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
Westhoff, M.; Erpicum, S.; Archambeau, P.; Pirotton, M.; Zehe, E.; Dewals, B.
2015-12-01
Power can be produced by a system driven by a potential difference. For a given potential difference, the power that can be extracted is constrained by the Carnot limit, which follows from the first and second laws of thermodynamics. If the system is such that the flux producing power (with power being the flux times its driving potential difference) also influences the potential difference, a maximum in power can be obtained as a result of the trade-off between the flux and the potential difference. This is referred to as the maximum power principle. It has already been shown that the atmosphere operates close to this maximum power limit when it comes to heat transport from the Equator to the poles, or vertically, from the surface to the atmospheric boundary layer. To reach this state of maximum power, the effective thermal conductivity of the atmosphere is adapted by the creation of convection cells. The aim of this study is to test whether the soil's effective hydraulic conductivity also adapts in such a way that it produces maximum power. However, the soil's hydraulic conductivity adapts differently, for example by the creation of preferential flow paths. Here, this process is simulated in a lab experiment, which focuses on preferential flow paths created by piping. In the lab, we created a hydrological analogue to the atmospheric model dealing with heat transport between Equator and poles, with the aim to test whether the effective hydraulic conductivity of the sand bed can be predicted with the maximum power principle. The experimental setup consists of two freely draining reservoirs connected with each other by a confined aquifer. By adding water to only one reservoir, a potential difference will build up until a steady state is reached. The results will indicate whether the maximum power principle applies to groundwater flow and how it should be applied. Because of the different way of adaptation of flow conductivity, the results differ from that of the
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated to investigate the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in the maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained by grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in the cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
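The noise bias at issue here is generic to nonlinear estimators: pushing unbiased noisy data through a nonlinear function yields a biased output. A toy, hypothetical illustration (unrelated to the authors' shear pipeline), estimating a squared amplitude from noisy measurements:

```python
import random

def squared_amplitude_bias(a=1.0, sigma=0.3, trials=200000, seed=1):
    """Monte Carlo estimate of the bias of mean((a + noise)**2) as an estimator of a**2.

    For zero-mean Gaussian noise the bias is sigma**2: the squaring is nonlinear,
    so unbiased inputs give a biased output, analogous to noise bias in
    maximum likelihood ellipticity estimates.
    """
    rng = random.Random(seed)
    est = sum((a + rng.gauss(0.0, sigma)) ** 2 for _ in range(trials)) / trials
    return est - a * a
```

Here the bias is known in closed form (sigma squared), which mirrors the paper's point that much of the bias can be computed, and hence removed, from the likelihood itself.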
Reaching and Teaching: A Study in Audience Targeting.
Ritter, Ellen M.; Welch, Diane T.
1988-01-01
Describes a project conducted by the Texas Agricultural Extension Service to market the Family Day Home Care Providers Program to an unknown clientele. Discusses the problems involved in identifying and reaching the target audience. (JOW)
Stream Habitat Reach Summary - North Coast [ds63
California Department of Resources — The shapefile is based on habitat unit level data summarized at the stream reach level. The database represents salmonid stream habitat surveys from 645 streams of...
Helping the Library Reach Out to the Future
Past Issues / Fall 2007 Table of Contents ... Encouraging future medical researchers: (l-r) NLM Director Dr. Donald ...
Hanford Reach - Snively Basin Rye Field Rehabilitation 2014
US Fish and Wildlife Service, Department of the Interior — The Snively Basin area of the Arid Lands Ecology Reserve within the Hanford Reach National Monument was historically used to farm cereal rye (Secale cereale), among...
PNW River Reach Files -- 1:100k Waterbodies (polygons)
Pacific States Marine Fisheries Commission — This feature class includes the POLYGON waterbody features from the 2001 version of the PNW River Reach files Arc/INFO coverage. Separate, companion feature classes...
Reach tracking reveals dissociable processes underlying cognitive control.
Erb, Christopher D; Moher, Jeff; Sobel, David M; Song, Joo-Hyun
2016-07-01
The current study uses reach tracking to investigate how cognitive control is implemented during online performance of the Stroop task (Experiment 1) and the Eriksen flanker task (Experiment 2). We demonstrate that two of the measures afforded by reach tracking, initiation time and reach curvature, capture distinct patterns of effects that have been linked to dissociable processes underlying cognitive control in electrophysiology and functional neuroimaging research. Our results suggest that initiation time reflects a response threshold adjustment process involving the inhibition of motor output, while reach curvature reflects the degree of co-activation between response alternatives registered by a monitoring process over the course of a trial. In addition to shedding new light on fundamental questions concerning how these processes contribute to the cognitive control of behavior, these results present a framework for future research to investigate how these processes function across different tasks, develop across the lifespan, and differ among individuals. Copyright © 2016 Elsevier B.V. All rights reserved.
Birth Defects from Zika More Far-Reaching Than Thought
Studies found greater ... WEDNESDAY, Dec. 14, 2016 (HealthDay News) -- Zika's ability to damage the infant brain may be ...
Monitoring Weather Station Fire Rehabilitation Treatments: Hanford Reach National Monument
US Fish and Wildlife Service, Department of the Interior — The Weather Station Fire (July, 2005) burned across 4,918 acres in the Saddle Mountain Unit of the Hanford Reach National Monument, which included parts of the...
PNW River Reach Files -- 1:100k Watercourses (arcs)
Pacific States Marine Fisheries Commission — This feature class includes the ARC features from the 2001 version of the PNW River Reach files Arc/INFO coverage. Separate, companion feature classes are also...
Optical technologies in extended-reach access networks
Wong, Elaine; Amaya Fernández, Ferney Orlando; Tafur Monroy, Idelfonso
2009-01-01
The merging of access and metro networks has been proposed as a solution to lower the unit cost of customer bandwidth. This paper reviews some of the recent advances and challenges in extended-reach optical access networks....
Hanford Reach - Strategic Control of Phragmites Within Saddle Mountain Lakes
US Fish and Wildlife Service, Department of the Interior — The Saddle Lakes Fire of 2015 burned 14,200 acres of habitat on Saddle Mountain National Wildlife Refuge, part of the Hanford Reach National Monument. Within the...
Hanford Reach - Snively Basin Rye Field Rehabilitation 2012
US Fish and Wildlife Service, Department of the Interior — The Snively Basin area of the Arid Lands Ecology Reserve (ALE) within the Hanford Reach National Monument was historically used to farm cereal rye, among other...
ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS
YUE LI; CHEN XIRU
2005-01-01
For the Generalized Linear Model (GLM), under some conditions including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) in case the specification of the covariance matrix is correct.
An approximate, maximum terminal velocity descent to a point
Eisler, G.R.; Hull, D.G.
1987-01-01
No closed form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum terminal velocity trajectory to a constrained final altitude and computation of the resulting unconstrained groundrange, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed groundrange. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensates for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent. 6 refs., 3 figs.
Feedback Limits to Maximum Seed Masses of Black Holes
Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea
2017-02-01
The most massive black holes observed in the universe weigh up to ∼1010 M ⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M • ≳ 104 M ⊙) hosted in small isolated halos (M h ≲ 109 M ⊙) accreting with relatively small radiative efficiencies (ɛ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M •–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 104–6 M ⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.
The maximum contribution to reionization from metal-free stars
Rozas, J M; Salvador-Solé, E; Rozas, Jose M.; Miralda-Escude, Jordi; Salvador-Sole, Eduard
2005-01-01
We estimate the maximum contribution to reionization from the first generation of massive stars, with zero metallicity, under the assumption that one of these stars forms with a fixed mass in every collapsed halo in which metal-free gas is able to cool. We assume that any halo that has already had stars form in one of its halo progenitors will form only stars with metals, which are assigned an emissivity of ionizing radiation equal to that determined at z=4 from the measured intensity of the ionizing background. We examine the impact of molecular hydrogen photodissociation (which tends to reduce cooling when a photodissociating background is produced by the first stars) and X-ray photoheating (which heats the atomic medium, raising the entropy of the gas before it collapses into halos). We find that in the $\Lambda$CDM model supported by present observations, and even assuming no negative feedbacks for the formation of metal-free stars, a reionized mass fraction of 50% is not reached until reds...
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
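The ILP formulations themselves are too involved for a short sketch, but the parsimony score they minimize can be illustrated with Fitch's classic small-parsimony algorithm, which counts the minimum number of state changes on a *fixed* rooted binary tree (a simpler, related problem; names below are illustrative):

```python
def fitch_score(tree, leaf_states):
    """Minimum number of state changes for one binary character on a fixed
    rooted binary tree (Fitch's small-parsimony algorithm).

    tree: nested 2-tuples whose leaves are name strings, e.g. (('A','B'),('C','D'))
    leaf_states: mapping from leaf name to its observed state, e.g. '0' or '1'
    """
    changes = 0

    def walk(node):
        nonlocal changes
        if isinstance(node, str):          # leaf: its state set is a singleton
            return {leaf_states[node]}
        left, right = walk(node[0]), walk(node[1])
        if left & right:                   # children agree on some state: keep it
            return left & right
        changes += 1                       # disagreement forces one state change
        return left | right

    walk(tree)
    return changes
```

Maximum parsimony inference, the problem the ILPs attack, is the much harder task of searching over all tree topologies for the one minimizing this score summed over characters.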
Consistency assessment of rating curve data in various locations using Bidirectional Reach (BReach)
Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Coxon, Gemma; Freer, Jim; Verhoest, Niko E. C.
2017-04-01
When estimating discharges through rating curves, temporal data consistency is a critical issue. In this research, consistency in stage-discharge data is investigated using a methodology called Bidirectional Reach (BReach). This methodology considers a period to be consistent if no consecutive and systematic deviations from a current situation occur that exceed observational uncertainty. Therefore, the capability of a rating curve model to describe a subset of the (chronologically sorted) data is assessed in each observation by indicating the outermost data points for which the model behaves satisfactorily. These points are called the maximum left or right reach, depending on the direction of the investigation. This temporal reach should not be confused with a spatial reach (indicating a part of a river). Changes in these reaches throughout the data series indicate possible changes in data consistency and, if not resolved, could introduce additional errors and biases. In this research, various measurement stations in the UK, New Zealand and Belgium are selected based on their significant historical ratings information and their specific characteristics related to data consistency. For each station, a BReach analysis is performed and subsequently, results are validated against available knowledge about the history and behavior of the site. For all investigated cases, the methodology provides results that appear consistent with this knowledge of historical changes and thus facilitates a reliable assessment of (in)consistent periods in stage-discharge measurements. This assessment is not only useful for the analysis and determination of discharge time series, but also to enhance applications based on these data (e.g., by informing hydrological and hydraulic model evaluation design about consistent time periods to analyze).
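As a rough illustration only (not the authors' implementation, which judges deviations against observational uncertainty rather than a hard cutoff), the left/right reach of each observation can be sketched as the outermost indices over which model residuals stay within a tolerance:

```python
def reaches(residuals, tol):
    """For each index i, return (left, right): the outermost indices of the
    contiguous run around i in which |residual| <= tol.

    A drastically simplified stand-in for the BReach left/right reach idea;
    shrinking runs flag segments of the record that the rating curve model
    fails to describe consistently.
    """
    n = len(residuals)
    out = []
    for i in range(n):
        if abs(residuals[i]) > tol:
            out.append((i, i))             # the observation itself deviates
            continue
        left = i
        while left > 0 and abs(residuals[left - 1]) <= tol:
            left -= 1
        right = i
        while right < n - 1 and abs(residuals[right + 1]) <= tol:
            right += 1
        out.append((left, right))
    return out
```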
Whole-Body Reaching Movements Formulated by Minimum Muscle-Tension Change Criterion.
Kudo, Naoki; Choi, Kyuheong; Kagawa, Takahiro; Uno, Yoji
2016-05-01
It is well known that planar reaching movements of the human shoulder and elbow joints have invariant features: roughly straight hand paths and bell-shaped velocity profiles. The optimal control models with the criteria of smoothness or precision, which determine a unique movement pattern, predict such features of hand trajectories. In this letter, expanding on research into simple arm reaching movements, we examine whether the smoothness criteria can be applied to whole-body reaching movements with many degrees of freedom. Determining a suitable joint trajectory in the whole-body reaching movement corresponds to an optimization problem with constraints, since body balance must be maintained during a motion task. First, we measured human joint trajectories and ground reaction forces during whole-body reaching movements, and confirmed that subjects formed similar movements with common characteristics in the trajectories of the hand position and body center of mass. Second, we calculated the optimal trajectories according to the criteria of torque and muscle-tension smoothness. While the minimum torque change trajectories were not consistent with the experimental data, the minimum muscle-tension change model was able to predict the stereotyped features of the measured trajectories. To explore the dominant effects of the extension from the torque change to the muscle-tension change, we introduced a weighted torque change cost function. Considering the maximum voluntary contraction (MVC) force of the muscle as the weighting factor of each joint torque, we formulated the weighted torque change cost as a simplified version of the minimum muscle-tension change cost. The trajectories owing to the minimum weighted torque change criterion also showed qualitative agreement with the common features of the measured data. Proper estimation of the MVC forces in the body joints is essential to reproduce human whole-body movements according to the minimum muscle-tension change
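The "roughly straight hand paths and bell-shaped velocity profiles" mentioned above are the signature predictions of smoothness criteria. The classic minimum-jerk point-to-point solution (a standard closed form for a single coordinate, not the authors' muscle-tension model) makes this concrete:

```python
def min_jerk(x0, xf, T, t):
    """Position and velocity at time t of the minimum-jerk trajectory from x0 to xf
    over duration T, with zero velocity and acceleration at both endpoints.

    Standard closed form: x(t) = x0 + (xf-x0)*(10*tau^3 - 15*tau^4 + 6*tau^5).
    """
    tau = t / T
    pos = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    vel = (xf - x0) / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
    return pos, vel
```

The velocity polynomial is symmetric about the midpoint and peaks there, which is exactly the bell-shaped profile observed in human reaching.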
Field line distribution of density at L=4.8 inferred from observations by CLUSTER
S. Schäfer
2009-02-01
Full Text Available For two events observed by the CLUSTER spacecraft, the field line distribution of mass density ρ was inferred from Alfvén wave harmonic frequencies and compared to the electron density ne from plasma wave data and the oxygen density nO+ from the ion composition experiment. In one case, the average ion mass M ≈ ρ/ne was about 5 amu (28 October 2002), while in the other it was about 3 amu (10 September 2002). Both events occurred when the CLUSTER 1 (C1) spacecraft was in the plasmatrough. Nevertheless, the electron density ne was significantly lower for the first event (ne = 8 cm^-3) than for the second event (ne = 22 cm^-3), and this seems to be the main difference leading to a different value of M. For the first event (28 October 2002), we were able to measure the Alfvén wave frequencies for eight harmonics with unprecedented precision, so that the error in the inferred mass density is probably dominated by factors other than the uncertainty in frequency (e.g., magnetic field model and theoretical wave equation). This field line distribution (at L = 4.8) was very flat for magnetic latitude |MLAT| ≲ 20° but very steeply increasing with respect to |MLAT| for |MLAT| ≳ 40°. The total variation in ρ was about four orders of magnitude, with values at large |MLAT| roughly consistent with ionospheric values. For the second event (10 September 2002), there was a small local maximum in mass density near the magnetic equator. The inferred mass density decreases to a minimum 23% lower than the equatorial value at |MLAT| = 15.5°, and then steeply increases as one moves along the field line toward the ionosphere. For this event we were also able to examine the spatial dependence of the electron density using measurements of ne from all four CLUSTER spacecraft. Our analysis indicates that the density varies with L at L ~ 5 roughly like L^-4, and that ne is also locally peaked at the magnetic equator, but with a smaller peak. The value of ne reaches a density minimum
Concept of REACH and impact on evaluation of chemicals.
Foth, H; Hayes, Aw
2008-01-01
Industrial chemicals have been in use for many decades and new products are regularly invented and introduced to the market. Also for decades, many different chemical laws have been introduced to regulate safe handling of chemicals in different use patterns. The patchwork of current regulation in the European Union is to be replaced by the new regulation on industrial chemical control, REACH. REACH stands for registration, evaluation, and authorization of chemicals. REACH entered into force on June 1, 2007. REACH aims to overcome limitations in the testing requirements of former regulation on industrial chemicals, to enhance competitiveness and innovation with regard to manufacturing safer substances, and to promote the development of alternative testing methods. A main task of REACH is to address data gaps regarding the properties and uses of industrial chemicals. Producers, importers, and downstream users will have to compile and communicate standard information for all chemicals. Information sets to be prepared include safety data sheets (SDS), chemical safety reports (CSR), and chemical safety assessments (CSA). These are designed to guarantee adequate handling in the production chain, in transport and in use, and to prevent the substances from being released to and distributed within the environment. Another important aim is to identify the most harmful chemicals and to set incentives to substitute them with safer alternatives. On the one hand, REACH will have substantial impact on the basic understanding of the evaluation of chemicals. However, the toxicological sciences can also substantially influence the workability of REACH, which supports the transformation of data into the information required to understand and manage acceptable and non-acceptable risks in the use of industrial chemicals. The REACH regulation has been laid down in the main document and 17 Annexes of more than 849 pages. Even bigger technical guidance documents will follow and will inform about the rules for
Thomas S Ullrich
2004-02-01
QCD predicts a phase transition between hadronic matter and a quark-gluon plasma at high energy density. The relativistic heavy ion collider (RHIC) at Brookhaven National Laboratory is a new facility dedicated to the experimental study of matter under extreme conditions. Already the first round of experimental results at RHIC indicated that the conditions to create a new state of matter are indeed reached in the collisions of heavy nuclei. Studies of particle spectra and their correlations at low transverse momenta provide evidence of strong pressure gradients in the highly interacting dense medium and hint that we observe a system in thermal equilibrium. Recent runs with high statistics allow us to explore the regime of hard-scattering processes where the suppression of hadrons at large transverse momentum, and quenching of di-jets are observed thus providing further evidence for extreme high density matter created in collisions at RHIC.
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Consensus reaching in swarms ruled by a hybrid metric-topological distance
Shang, Yilun
2014-01-01
Recent empirical observations of three-dimensional bird flocks and human crowds have challenged the long-prevailing assumption that a metric interaction distance rules swarming behaviors. In some cases, individual agents are found to be engaged in local information exchanges with a fixed number of neighbors, i.e. a topological interaction. However, complex system dynamics based on pure metric or pure topological distances both face physical inconsistencies in low and high density situations. Here, we propose a hybrid metric-topological interaction distance overcoming these issues and enabling a real-life implementation in artificial robotic swarms. We use network- and graph-theoretic approaches combined with a dynamical model of locally interacting self-propelled particles to study the consensus-reaching process for a swarm ruled by this hybrid interaction distance. Specifically, we establish exactly the probability of reaching consensus in the absence of noise. In addition, simulations of swarms of self-pr...
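The paper derives exact consensus probabilities for its hybrid model. As a loose illustration of consensus reaching on a fixed interaction graph (a DeGroot-style averaging model, not the paper's self-propelled-particle dynamics; the graph below is illustrative):

```python
def degroot_consensus(adj, x, steps=200):
    """DeGroot opinion dynamics: at each step every agent moves to the mean of its
    own value and its neighbors' values. On a connected graph with self-weights
    this converges to a consensus value.

    adj: dict mapping agent index -> list of neighbor indices
    x:   list of initial opinions
    """
    n = len(x)
    for _ in range(steps):
        x = [
            (x[i] + sum(x[j] for j in adj[i])) / (1 + len(adj[i]))
            for i in range(n)
        ]
    return x
```

On a regular graph (every agent has the same number of neighbors) the update matrix is doubly stochastic, so the consensus value is the plain average of the initial opinions.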
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu and power producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following range: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined to be fraudulent or abusive. Despite the small sample, these results are encouraging.
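The continuity maps themselves are not specified in this abstract, but the core idea (train a stochastic model on typical sequences, then flag sequences with unusually low likelihood) can be sketched with a much simpler first-order Markov model. Everything below, including the toy "procedure code" sequences, is a hypothetical illustration, not MALCOM itself:

```python
from collections import defaultdict
import math

def train_markov(sequences, alpha=1.0):
    """Count symbol-to-symbol transitions over the training sequences."""
    counts = defaultdict(lambda: defaultdict(float))
    symbols = set()
    for seq in sequences:
        symbols.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts, sorted(symbols), alpha

def log_likelihood(model, seq):
    """Average per-transition log-probability, with add-alpha smoothing."""
    counts, symbols, alpha = model
    total, n = 0.0, 0
    for a, b in zip(seq, seq[1:]):
        row = counts[a]
        denom = sum(row.values()) + alpha * len(symbols)
        total += math.log((row[b] + alpha) / denom)
        n += 1
    return total / max(n, 1)

# "Medical histories" as sequences of procedure codes (invented data).
typical = [list("ABCD"), list("ABCD"), list("ABCCD"), list("ABD")]
model = train_markov(typical)
normal_score = log_likelihood(model, list("ABCD"))
odd_score = log_likelihood(model, list("DCBA"))  # reversed order: anomalous
```

A low average log-likelihood (here `odd_score`) marks a history as anomalous; MALCOM plays the role of the scoring model.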
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived that allows an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory, choosing the analysis of the Ne IX triplet around 13.5 Å.
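The Poisson likelihood that CORA maximizes can be illustrated with a minimal sketch: a Gaussian line plus flat background is fitted to low-count data by maximizing sum(n_i log m_i - m_i). CORA itself derives a fixed-point equation for the line fluxes; the brute-force grid search below is only a stand-in, and all bin positions and counts are hypothetical:

```python
import math

def model_counts(amplitude, center, sigma, background, bins):
    """Expected counts per bin: Gaussian line on a flat background."""
    return [background + amplitude * math.exp(-0.5 * ((x - center) / sigma) ** 2)
            for x in bins]

def poisson_loglike(observed, expected):
    """log L = sum(n_i log m_i - m_i); the n_i! term is model-independent."""
    return sum(n * math.log(m) - m for n, m in zip(observed, expected))

# Hypothetical low-count spectrum around a line at x = 13.5 (arb. units).
bins = [13.3 + 0.02 * i for i in range(21)]
observed = [2, 1, 3, 2, 2, 3, 4, 5, 8, 11, 14, 12, 9, 6, 4, 2, 3, 1, 2, 2, 1]

# Grid search over line amplitude; center, width and background held fixed.
best_amp, best_ll = max(
    ((a, poisson_loglike(observed, model_counts(a, 13.5, 0.03, 2.0, bins)))
     for a in range(0, 31)),
    key=lambda t: t[1])
```

The amplitude maximizing the likelihood is the fitted line strength; in CORA the same criterion is solved efficiently rather than by grid search.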
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is given as an upper bound on the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a buffer of at least 346 and at most 433 bits is sufficient to hold the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a recently introduced group-theoretic approach to modelling inversions. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order differently the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Memory-guided reaching in a patient with visual hemiagnosia.
Cornelsen, Sonja; Rennig, Johannes; Himmelbach, Marc
2016-06-01
The two-visual-systems hypothesis (TVSH) postulates that memory-guided movements rely on intact functions of the ventral stream. Its particular importance for memory-guided actions was initially inferred from behavioral dissociations in the well-known patient DF. Despite rather accurate reaching and grasping movements to visible targets, she demonstrated grossly impaired memory-guided grasping as well as impaired memory-guided reaching. These dissociations were later complemented by apparently reversed dissociations in patients with dorsal damage and optic ataxia. However, grasping studies in DF and optic ataxia patients differed with respect to the retinotopic position of target objects, questioning the interpretation of the respective findings as a double dissociation. In contrast, the findings for reaching errors in both types of patients came from similar peripheral target presentations. However, new data on brain structural changes and visuomotor deficits in DF also questioned the validity of a double dissociation in reaching. A severe visuospatial short-term memory deficit in DF further questioned the specificity of her memory-guided reaching deficit. Therefore, we compared movement accuracy in visually guided and memory-guided reaching in a new patient (HWS) who suffered confined unilateral damage to the ventral visual system due to stroke. Our results indeed support previous descriptions of inaccuracies of memory-guided movements in DF. Furthermore, our data suggest that the recently discovered optic-ataxia-like misreaching in DF is most likely caused by her parieto-occipital and not by her ventral stream damage. Finally, multiple visuospatial memory measurements in HWS suggest that inaccuracies in memory-guided reaching tasks in patients with ventral damage cannot be explained by visuospatial short-term memory or perceptual deficits, but by a specific deficit in visuomotor processing.
Hancock Beverley
2010-04-01
Abstract Background The notion 'hard to reach' is a contested and ambiguous term that is commonly used within the spheres of social care and health, especially in discourse around health and social inequalities. There is a need to address health inequalities and to engage in services the marginalized and socially excluded sectors of society. Methods This paper describes a pilot study involving interviews with representatives from eight Voluntary and Community Sector (VCS) organisations. The purpose of the study was to explore the notion of 'hard to reach' and perceptions of the barriers and facilitators to accessing services for 'hard to reach' groups from a voluntary and community sector perspective. Results The 'hard to reach' may include drug users, people living with HIV, people from sexual minority communities, asylum seekers, refugees, people from black and ethnic minority communities, and homeless people, although defining the notion of the 'hard to reach' is not straightforward. It may be that certain groups resist engaging in treatment services and are deemed hard to reach by a particular service or from a societal stance. There are a number of potential barriers for people who may try to access services, including people having had bad experiences in the past; the location and opening times of services; and how services are funded and managed. A number of areas of commonality are found in terms of how access to services for 'hard to reach' individuals and groups could be improved, including: respectful treatment of service users, establishing trust with service users, offering service flexibility, partnership working with other organisations, and harnessing service user involvement. Conclusions If health services are to engage with groups that are deemed 'hard to reach' and marginalised from mainstream health services, the experiences and practices for engagement from within the VCS may serve as useful lessons for service improvement.
Minimax Current Density Coil Design
Poole, Michael; Lopez, Hector Sanchez; Ng, Michael; Crozier, Stuart; 10.1088/0022-3727/43/9/095001
2010-01-01
'Coil design' is an inverse problem in which arrangements of wire are designed to generate a prescribed magnetic field when energized with electric current. The design of gradient and shim coils for magnetic resonance imaging (MRI) are important examples of coil design. The magnetic fields that these coils generate are usually required to be both strong and accurate. Other electromagnetic properties of the coils, such as inductance, may be considered in the design process, which becomes an optimization problem. The maximum current density is additionally optimized in this work and the resultant coils are investigated for performance and practicality. Coils with minimax current density were found to exhibit maximally spread wires and may help disperse localized regions of Joule heating. They also produce the highest possible magnetic field strength per unit current for any given surface and wire size. Three different flavours of boundary element method that employ different basis functions (triangular elements...
Recommended Maximum Temperature For Mars Returned Samples
Beaty, D. W.; McSween, H. Y.; Czaja, A. D.; Goreva, Y. S.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Hays, L. E.
2016-01-01
The Returned Sample Science Board (RSSB) was established in 2015 by NASA to provide expertise from the planetary sample community to the Mars 2020 Project. The RSSB's first task was to address the effect of heating during acquisition and storage of samples on scientific investigations that could be expected to be conducted if the samples are returned to Earth. Sample heating may cause changes that could adversely affect scientific investigations. Previous studies of temperature requirements for returned Martian samples fall within a wide range (-73 to 50 degrees Centigrade) and, for mission concepts that have a life detection component, the recommended threshold was less than or equal to -20 degrees Centigrade. The RSSB was asked by the Mars 2020 project to determine whether or not a temperature requirement was needed within the range of 30 to 70 degrees Centigrade. There are eight expected temperature regimes to which the samples could be exposed, from the moment that they are drilled until they are placed into a temperature-controlled environment on Earth. Two of those - heating during sample acquisition (drilling) and heating while cached on the Martian surface - potentially subject samples to the highest temperatures. The RSSB focused on the upper temperature limit that Mars samples should be allowed to reach. We considered 11 scientific investigations where thermal excursions may have an adverse effect on the science outcome. Those are: (T-1) organic geochemistry, (T-2) stable isotope geochemistry, (T-3) prevention of mineral hydration/dehydration and phase transformation, (T-4) retention of water, (T-5) characterization of amorphous materials, (T-6) putative Martian organisms, (T-7) oxidation/reduction reactions, (T-8) ⁴He thermochronometry, (T-9) radiometric dating using fission, cosmic-ray or solar-flare tracks, (T-10) analyses of trapped gases, and (T-11) magnetic studies.
Low-Cost Tele-assessment System for Home-Based Evaluation of Reaching Ability Following Stroke
Rau, Chi-Lun; Chen, Ya-Ping; Lai, Jin-Shin; Chen, Shih-Ching; Kuo, Te-Son; Jaw, Fu-Shan
2013-01-01
Abstract Objective: Tele-assessment techniques can provide healthcare professionals with easily accessible information regarding patients' clinical progress. Recently, kinematic analysis systems have been used to assess rehabilitative outcomes in stroke patients. Kinematic systems, however, are not compatible with tele-assessment. The objective of our study was to develop a tele-assessment system for acquiring kinematic data of forward reaching movements in stroke patients, with an emphasis on cost-effectiveness, portability, and ease of use. Materials and Methods: We selected four healthy control participants and eight hemiplegic stroke patients for our study. The stroke patients were classified as Brunnstrom stage III, stage IV, or stage V. Our tele-assessment system used two three-axis accelerometers, a potentiometer, a multifunctional data acquisition card, and two computers. A standardized kinematic system was applied simultaneously to validate the measurements recorded by our tele-assessment system during five repetitions of forward reaching movements. Results: The correlation coefficients of the reaching displacement, velocity, and acceleration measurements obtained using our tele-assessment system and the standardized kinematic system were 0.956, 0.896, and 0.727, respectively. Differences in the maximum reaching distance and the maximum reaching velocity of forward reaching movements were observed among the study groups. There were no significant differences in the time required to complete the testing session among the study groups. Conclusions: Our tele-assessment system is valid for the evaluation of upper-extremity reaching ability in stroke patients. Further research is needed to investigate the feasibility of using the tele-assessment system in patients' homes. PMID:24138613
Health campaign channels: tradeoffs among reach, specificity, and impact.
Schooler, C; Chaffee, S H; Flora, J A; Roser, C
1998-03-01
Stanford University's Five-City Multifactor Risk Reduction Project (FCP) was a 14-year trial of community-wide cardiovascular disease (CVD) risk reduction through integrated programs of community organization and mass media health promotion. The project was launched in 1978 in 5 central California cities, including Monterey, Salinas, Modesto, and San Luis Obispo. TV public service announcements (PSAs), TV shows, booklets, printed tip sheets with brief health suggestions on 7 topics, and newspaper coverage were the types of mass media approaches used in the FCP. These strategies are compared with regard to reach, specificity, and impact for a 5-year study period from 1979/80. Reach is measured as the number of messages intervention community residents remembered, specificity was assessed by examining whether the campaign differentially reached people who were already knowledgeable and practicing cardiovascular disease risk reduction, and impact is defined as the amount of knowledge gained during the course of the campaign. Reach was highest for tip sheets, while specificity was highest for booklets followed by TV programs. Newspaper messages had the most impact, followed by booklets and TV PSAs, tip sheets, and TV programs. Communication channels varied according to reach, specificity, and impact, with each criterion being distinct. No channel was optimal for all 3 of the outcome measures.
The reach and impact of social marketing and reproductive health communication campaigns in Zambia
Meekers Dominique
2007-12-01
Abstract Background Like many sub-Saharan African countries, Zambia is dealing with major health issues, including HIV/AIDS, family planning, and reproductive health. To address reproductive health problems and the HIV/AIDS epidemic in Zambia, several social marketing and health communication programs focusing on reproductive health and HIV/AIDS prevention are being implemented. This paper describes the reach of these programs and assesses their impact on condom use. Methods This paper assesses the reach of selected radio and television programs about family planning and HIV/AIDS and of communications about the socially marketed Maximum condoms in Zambia, as well as their impact on condom use, using data from the 2001–2002 Zambia Demographic and Health Survey. To control for self-selection and endogeneity, we use a two-stage regression model to estimate the effect of program exposure on the behavioural outcomes. Results Those who were exposed to radio and television programs about family planning and HIV/AIDS were more likely to have ever used a condom (OR = 1.16 for men and 1.06 for women). Men highly exposed to the Maximum condoms social marketing communication were more likely than those with low exposure to the program to have ever used a condom (OR = 1.48), and to have used a condom at their last sexual intercourse (OR = 1.23). Conclusion Findings suggest that the reproductive health and social marketing campaigns in Zambia reached a large portion of the population and had a significant impact on condom use. The results suggest that future reproductive health communication campaigns that invest in radio programming may be more effective than those investing in television programming, and that future campaigns should seek to increase their impact among women, perhaps by focusing on the specific constraints that prevent women from using condoms.
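The reported effects are odds ratios (OR). As a minimal illustration of what an OR of about 1.2 (comparable in size to those reported) means for a 2x2 exposure-by-outcome table, here is a short sketch; the counts are invented for illustration and are not from the survey, and the paper's actual two-stage regression is considerably more involved:

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """OR = (a/b) / (c/d) for a 2x2 exposure-by-outcome count table."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Hypothetical counts: ever-used-a-condom (yes/no) by campaign exposure.
or_men = odds_ratio(480, 320, 300, 240)
# (480/320) / (300/240) = 1.5 / 1.25 = 1.2: the odds of condom use are
# 20% higher among the exposed group in this invented table.
```

An OR above 1 with exposure in the numerator indicates higher odds of the outcome among the exposed, which is the direction of the effects reported in the abstract.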
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Rooted four-taxa trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that, in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Reach-to-grasp interjoint coordination for moving object in children with hemiplegia.
Petrarca, Maurizio; Zanelli, Giulia; Patanè, Fabrizio; Frascarelli, Flaminia; Cappa, Paolo; Castelli, Enrico
2009-11-01
To evaluate interjoint coordination in children with hemiplegia as they reach to grasp objects, in both static and dynamic conditions. An ad hoc robotic device was used to study the dynamic condition. Observational study. Six children with hemiplegia and 6 young adults. Kinematics of the trunk and arm were studied using an optoelectronic system. In the dynamic condition the target object, a cup, was moved by the robotic device along clockwise and counterclockwise circular trajectories. Two main strategies were used to study the onset and offset of shoulder and elbow movements and their maximum velocities. The hand velocity profile was bell-shaped in the static condition and compatible with ramp movements for the more affected side in the dynamic condition. The time to object contact was higher for the more affected side in the dynamic condition. The temporal coordination index illustrated an immature and less flexible behaviour in children's reaching in all the examined conditions. Study of the hand velocity profiles, the time to object contact and the temporal coordination index highlighted, first, the dependence of upper limb interjoint coordination on task, context, residual resources and individual solution, and secondly, the sensory-motor deficit characteristics of the children's more affected side during dynamic reaching, raising the prospect of a promising training context in children with hemiplegia.
Es-sebbar, Et-touhami
2012-11-27
Absolute ground-state density of nitrogen atoms N(2p³ ⁴S₃/₂) in non-equilibrium Townsend dielectric barrier discharges (TDBDs) at atmospheric pressure sustained in N₂/N₂O and N₂/O₂ gas mixtures has been measured using two-photon absorption laser-induced fluorescence (TALIF) spectroscopy. The quantitative measurements have been obtained by TALIF calibration using krypton as a reference gas. We previously reported that the maximum N(2p³ ⁴S₃/₂) atom density is around 3 × 10¹⁴ cm⁻³ in a pure nitrogen TDBD, and that this maximum depends strongly on the mean energy dissipated in the gas. In the two gas mixtures studied here, results show that the absolute N(2p³ ⁴S₃/₂) density is strongly affected by the N₂O and O₂ addition. Indeed, the density still increases exponentially with the energy dissipated in the gas, but an increase in the N₂O and O₂ amounts (a few hundred ppm) leads to a decrease in the nitrogen atom density. No discrepancy in the order of magnitude of the N(2p³ ⁴S₃/₂) density is observed when comparing results obtained in N₂/N₂O and N₂/O₂ mixtures. Compared with pure nitrogen, for an energy of ∼90 mJ cm⁻³, the maximum N(2p³ ⁴S₃/₂) density drops by a factor of 3 when 100 ppm of N₂O or O₂ is added and by a factor of 5 for 200 ppm, reaching values close to our TALIF detection sensitivity for 400 ppm (1 × 10¹³ cm⁻³ at atmospheric pressure).
All Recent Mars Landers Have Landed Downrange - Are Mars Atmosphere Models Mis-Predicting Density?
Desai, Prasun N.
2008-01-01
All recent Mars landers (Mars Pathfinder, the two Mars Exploration Rovers Spirit and Opportunity, and the Mars Phoenix Lander) have landed further downrange than their pre-entry predictions. Mars Pathfinder landed 27 km downrange of its prediction [1], Spirit and Opportunity landed 13.4 km and 14.9 km, respectively, downrange from their predictions [2], and Phoenix landed 21 km downrange from its prediction [3]. Reconstruction of their entries revealed a lower density profile than the best a priori atmospheric model predictions. Do these results suggest that there is a systematic issue in present Mars atmosphere models that predict a higher density than observed on landing day? Spirit Landing: The landing location for Spirit was 13.4 km downrange of the prediction, as shown in Fig. 1. The navigation errors upon Mars arrival were very small [2]. As such, the entry interface conditions were not responsible for this downrange landing. Consequently, experiencing a lower density during the entry was the underlying cause. The reconstructed density profile that Spirit experienced is shown in Fig. 2, plotted as a fraction of the pre-entry baseline prediction that was used for all the entry, descent, and landing (EDL) design analyses. The reconstructed density is observed to be less dense throughout the descent, reaching a maximum reduction of 15% at 21 km. This lower density corresponded to approximately a 1-σ low profile relative to the predicted dispersions. Nearly all the deceleration during the entry occurs within 10-50 km. As such, prediction of density within this altitude band is most critical for entry flight dynamics analyses and design (e.g., aerodynamic and aerothermodynamic predictions, landing location, etc.).
The directed flow maximum near c_s = 0
Brachmann, J.; Dumitru, A.; Stöcker, H.; Greiner, W.
2000-07-01
We investigate the excitation function of quark-gluon plasma formation and of directed in-plane flow of nucleons in the energy range of the BNL-AGS and for the E_kin^Lab = 40A GeV Pb + Pb collisions performed recently at the CERN-SPS. We employ the three-fluid model with dynamical unification of kinetically equilibrated fluid elements. Within our model with a first-order phase transition at high density, droplets of QGP coexisting with hadronic matter are produced already at BNL-AGS energies, E_kin^Lab ≃ 10A GeV. A substantial decrease of the isentropic velocity of sound, however, requires higher energies, E_kin^Lab ≃ 40A GeV. We show the effect on the flow of nucleons in the reaction plane. According to our model calculations, kinematic requirements and EoS effects work hand in hand at E_kin^Lab = 40A GeV to allow the observation of the dropping velocity of sound via an increase of the directed flow around midrapidity as compared to top BNL-AGS energy.
Electron density and temperature in NIO1 RF source operated in oxygen and argon
Barbisan, M.; Zaniol, B.; Cavenago, M.; Pasqualotto, R.; Serianni, G.; Zanini, M.
2017-08-01
The NIO1 experiment, built and operated at Consorzio RFX, hosts an RF negative ion source from which it is possible to produce a beam of up to 130 mA of H⁻ ions, accelerated up to 60 kV. For the preliminary tests of the extraction system the source has been operated in oxygen, whose high electronegativity allows useful levels of extracted beam current to be reached. The efficiency of negative ion extraction is strongly influenced by the electron density and temperature close to the Plasma Grid, i.e. the grid of the acceleration system which faces the source. To support the tests, these parameters have been measured by means of the Optical Emission Spectroscopy diagnostic. This technique has involved the use of an oxygen-argon mixture to produce the plasma in the source. The intensities of specific Ar I and Ar II lines have been measured along lines of sight close to the Plasma Grid, and have been interpreted with the ADAS package to get the desired information. This work describes the diagnostic hardware, the analysis method and the measured values of electron density and temperature, as a function of the main source parameters (RF power, pressure, bias voltage and magnetic filter field). The main results show that not only the electron density but also the electron temperature increases with RF power; both decrease with increasing magnetic filter field. Variations of source pressure and plasma grid bias voltage appear to affect only the electron temperature and the electron density, respectively.
Delangle, Mathieu; Poirson, Emilie
2016-01-01
Numerical models and computer-aided modeling software are tools commonly used to assess the accessibility of an environment, based on static human body dimensions. In this paper, the limits of validity of these approaches are assessed by comparing the reach envelopes obtained by these methods to those obtained experimentally. First, the accessibility areas of forty adult subjects, which may correspond to the distance at which products are reachable, were evaluated by performing an accessibility task comprising 168 reach points. Second, the anthropometric characteristics of the participants were recorded and used to perform the reach assessment by a numerical method, and then by a CAD-based analysis, where reach was predicted using the software's maximum reach-envelope generation. Despite the simple (two-dimensional) nature of the presented design problem, the results show important differences between the three methods. The study of the number of reached points shows that the CAD-based assessment provides more ac...
Volumetric Concentration Maximum of Cohesive Sediment in Waters: A Numerical Study
Jisun Byun
2014-12-01
Cohesive sediment has different characteristics compared to non-cohesive sediment. The density and size of a cohesive sediment aggregate (a so-called floc) continuously change through the flocculation process. The variation of floc size and density can cause a change of volumetric concentration under the condition of constant mass concentration. This study investigates how the volumetric concentration is affected by different conditions such as flow velocity, water depth, and sediment suspension. A previously verified, one-dimensional vertical numerical model is utilized here. The flocculation process is also considered, using a floc-growth-type flocculation model. Idealized conditions are assumed in this study for the numerical experiments. The simulation results show that the volumetric concentration profile of cohesive sediment is different from the Rouse profile. The volumetric concentration decreases near the bed, showing an elevated maximum, in the cases of both current and oscillatory flow. The density and size of the flocs show minimum and maximum values, respectively, near the elevation of the volumetric concentration maximum. This study also shows that the flow velocity and the critical shear stress have significant effects on the elevated maximum of volumetric concentration. As mechanisms of the elevated maximum, strong turbulence intensity and increased mass concentration are considered, because they cause an enhanced flocculation process. This study uses numerical experiments; to the best of our knowledge, no laboratory or field experiments on the elevated maximum have been carried out until now. It is of great necessity to conduct well-controlled laboratory experiments in the near future.
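For comparison, the classical Rouse profile that the simulated cohesive-sediment profiles deviate from can be sketched as follows. All parameter values are hypothetical, and the formula assumes a constant settling velocity, which is exactly the assumption that flocculation violates:

```python
import math

def rouse_profile(z, h, a, c_a, w_s, kappa, u_star):
    """Classical Rouse suspended-sediment concentration profile:
    c(z) = c_a * [ (h - z)/z * a/(h - a) ]**P,  with P = w_s / (kappa * u_star),
    for reference concentration c_a at level a above the bed, depth h.
    """
    p = w_s / (kappa * u_star)
    return c_a * ((h - z) / z * a / (h - a)) ** p

# Hypothetical values: 10 m depth, reference level 0.5 m, 1 mm/s settling.
h, a, c_a = 10.0, 0.5, 0.1              # m, m, kg/m^3
w_s, kappa, u_star = 0.001, 0.41, 0.05  # m/s, von Karman constant, m/s
profile = [rouse_profile(z, h, a, c_a, w_s, kappa, u_star)
           for z in (0.5, 1.0, 2.0, 5.0, 9.0)]
# With constant settling velocity the concentration decreases monotonically
# upward, i.e. no elevated maximum, unlike the flocculating case simulated
# in the paper.
```

The contrast is the point of the sketch: a height-dependent floc settling velocity can break this monotonic decrease and produce the elevated maximum reported.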
REACH-related substitution within the Danish printing industry
Larsen, Henrik Fred; Bøg, Carsten; Markussen, Helene
The accomplishment of the EU REACH regulation will most probably promote substitution within sectors handling a lot of different chemicals like the printing industry. With the aim of being at the cutting edge of this development the Danish EPA together with the Danish printing industry and IPU...... are running a substitution project. A major part of the work has been mapping the presence of chemicals which are potential candidates for substitution (e.g. PBT, CMR, vPvB, EDS) within the Danish printing industry. The mapping comprises a combination of a literature study and an investigation of the actual...... fulfil one or more of the criteria (e.g. CMR, EDS) for the REACH Annex XIV candidate list (authorisation). The paper presents the results of the mapping of chemical candidates and the first results of the actual substitutions. Keywords: REACH, chemicals, substitution, printing industry....
Richtmyer - Meshkov instability in a spherical target with density variation
Mandal, Labakanta; Banerjee, Rahul; Khan, Manoranjan; Gupta, M R
2011-01-01
The motion of an unstable fluid interface due to Richtmyer-Meshkov (RM) instability with density variation has been studied in a spherical target using a Lagrangian formulation. During compression in the Inertial Confinement Fusion (ICF) process, the density of the deuterium-tritium (DT) fuel becomes 1000 times greater than that of the gaseous DT fuel within the core of the spherical target. We have extended the density-variation treatment of Mikaelian and Lindl [PRA 84] to spherical geometry. Due to convergent shock impingement, the perturbed interface becomes nonspherical, which leads to density variation in both the radial and the polar-angle directions. We have shown that the radius of the perturbed interface decreases with time to reach a minimum and then kicks back to a gradual increase. As the perturbed radius decreases, the density increases, reaching a maximum corresponding to the minimum of the perturbed radius. This is the practical situation of density characteristics during implosion in ICF. The numerical ...
Carbon storage of artificial forests in rehabilitated lands in the upper reaches of the Yellow River
HU Jianzhong
2006-01-01
We studied 10- to 27-year-old artificial forests on rehabilitated lands in the upper reaches of the Yellow River with the objective of comparing the carbon densities of various artificial and natural forests. Under artificial plantations, the vegetation layer (including roots) had a mean carbon density of 111.3 t/hm2, the litter layer a density of 5.1 t/hm2, and the soil layer a density of 64.9 t/hm2. These values accounted for 28.6%, 13.8%, and 61.0% of their respective counterparts in the natural secondary forests under the same site conditions in the region. The ratios of carbon density among vegetation, litter, and soil pools were 39.6:1.8:58.6 for artificial forests and 57.4:2.7:39.9 for natural forests. The carbon densities of the vegetation and litter layers increased exponentially with forest age. The total carbon density ratios also increased gradually. Although the mean total carbon density of the artificial forests in the rehabilitated lands was 281.2 t/hm2 in the experimental area, it accounted for only 41.5% of the carbon density of the natural secondary forests (677.4 t/hm2). The annual increase in total carbon density of artificial forests was as high as 15.2 t/hm2, which was 11.7% more than that of natural forests and 6.8 times higher than that (1.95 t/hm2) of artificial forests in the entire country as measured during 1994-1998. This indicates that the growth and carbon storage capacity of artificial forests in the rehabilitated lands were higher than those of forests on the barren hills and the secondary forests. We concluded that the conversion project from croplands to forests and grasslands, based on scientific principles, is very important in the formation of carbon sinks for reducing greenhouse effects.
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
Anonymous
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
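The paper's actual formula is not reproduced here; the sketch below only illustrates the general idea of regressing maximum amplitude on early-cycle rising velocity, using made-up cycle data.

```python
import numpy as np

# Hypothetical (rising velocity, maximum amplitude) pairs -- illustrative
# values, NOT real solar-cycle data.
rising_velocity = np.array([20.0, 35.0, 50.0, 65.0, 80.0])
max_amplitude = np.array([80.0, 110.0, 140.0, 170.0, 200.0])

# Least-squares line: amplitude = a * velocity + b
a, b = np.polyfit(rising_velocity, max_amplitude, 1)

def predict_max_amplitude(velocity):
    """Predicted cycle maximum from the rising velocity observed in the
    ascending phase (hypothetical linear relation)."""
    return a * velocity + b
```

Splitting cycles into HRV and LRV groups, as the paper does, would amount to fitting such a relation separately per group rather than pooling all cycles.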
Maximum entropy principle for stationary states underpinned by stochastic thermodynamics.
Ford, Ian J
2015-11-01
The selection of an equilibrium state by maximizing the entropy of a system, subject to certain constraints, is often powerfully motivated as an exercise in logical inference, a procedure where conclusions are reached on the basis of incomplete information. But such a framework can be more compelling if it is underpinned by dynamical arguments, and we show how this can be provided by stochastic thermodynamics, where an explicit link is made between the production of entropy and the stochastic dynamics of a system coupled to an environment. The separation of entropy production into three components allows us to select a stationary state by maximizing the change, averaged over all realizations of the motion, in the principal relaxational or nonadiabatic component, equivalent to requiring that this contribution to the entropy production should become time independent for all realizations. We show that this recovers the usual equilibrium probability density function (pdf) for a conservative system in an isothermal environment, as well as the stationary nonequilibrium pdf for a particle confined to a potential under nonisothermal conditions, and a particle subject to a constant nonconservative force under isothermal conditions. The two remaining components of entropy production account for a recently discussed thermodynamic anomaly between over- and underdamped treatments of the dynamics in the nonisothermal stationary state.
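For the conservative isothermal case mentioned above, the stationary pdf recovered by the maximization procedure is the familiar Boltzmann form, written here for a particle in a potential φ(x) (the notation is ours, not the paper's):

```latex
p_{\mathrm{st}}(x) \;=\; \frac{1}{Z}\,\exp\!\left(-\frac{\phi(x)}{k_{B}T}\right),
\qquad
Z \;=\; \int \exp\!\left(-\frac{\phi(x)}{k_{B}T}\right)\mathrm{d}x .
```

The nonisothermal and nonconservative stationary states discussed in the abstract generalize this form and are not captured by a single Boltzmann factor.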
Laboratory Density Functionals
Giraud, B. G.
2007-01-01
We compare several definitions of the density of a self-bound system, such as a nucleus, in relation with its center-of-mass zero-point motion. A trivial deconvolution relates the internal density to the density defined in the laboratory frame. This result is useful for the practical definition of density functionals.
Jenkins, M E; Johnson, A M; Holmes, J D; Stephenson, F F; Spaulding, S J
2010-07-01
Balance problems and falls are a common concern among individuals with Parkinson's disease (PD). Falls frequently occur during daily activities such as reaching into cupboards in the kitchen or bathroom. This study examined the correlations between two standard postural stability tests - the postural stability score on the Unified Parkinson's Disease Rating Scale (UPDRS) and the Functional Reach Test (FRT) - and ecologically valid reaching tasks that correspond to reaching at different cupboard heights, among 20 individuals with PD and 20 age-matched controls. Both the FRT and the UPDRS postural stability tests are quick measures that can be performed during the clinical examination. The FRT, but not the postural stability score, demonstrated a significant correlation with the ecologically valid reaching tasks among individuals with PD. Furthermore, the FRT scores did not correlate with the UPDRS postural stability scores, indicating that these measures capture different aspects of balance. This study suggests that the FRT score may better predict the risk of postural instability encountered during daily activities among individuals with PD.
Chen, D-X [ICREA and Departament de Fisica, Universitat Autonoma de Barcelona, 08193 Bellaterra, Barcelona (Spain); Sanchez, A; Navau, C [Departament de Fisica, Universitat Autonoma de Barcelona, 08193 Bellaterra, Barcelona (Spain); Shi, Y-H; Cardwell, D A [Engineering Department, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ (United Kingdom)
2008-08-15
The field-amplitude and frequency dependent complex ac susceptibility χ(H_m, f) of three Y-Ba-Cu-O disks made by a top-seeded melt-growth technique has been measured at 77 K with the ac field applied along the c-axis of the samples (parallel to their thickness). A procedure based on the Bean model has been developed to calculate the critical-current density J_c near the surface of the sample from the measured χ(H_m) for the case where the maximum imaginary component χ″ is not reached.
Jiang, Fan, E-mail: jiangfan1109@163.com [Department of Materials and Physics, School of Physics and Optoelectronic Engineering, Nanjing University of Information Science and Technology, 219 Ningliu Road, Nanjing 210044, Jiangsu Province (China); School of Materials Science and Engineering, University of Science and Technology Beijing, 30 Xueyuan Road, Haidian District, Beijing 10083 (China)
2016-02-15
Highlights: • Tungsten coatings were successfully electroplated on a molybdenum substrate. • The electrodeposition was studied in an air atmosphere at 1173 K. • The coatings had a columnar structure with a preferential growth orientation of (2 0 0). • The coating obtained at 50 mA cm−2 had a maximum microhardness of 495 HV. Abstract: Smooth tungsten coatings were prepared at current densities below 70 mA cm−2 by electrodeposition on a molybdenum substrate from a Na2WO4-WO3 melt at 1173 K in an air atmosphere. As the current density reached 90 mA cm−2, many significant nodules were observed on the surface of the coating. Surface characterization, microstructure analysis, and mechanical testing were performed on the tungsten coatings. As the current density increased, the preferred orientation of the coatings changed to (2 0 0). All coatings exhibited columnar-grained crystallinity. There was an approximately 2 μm thick diffusion layer between the tungsten coating and the molybdenum substrate. The bending test revealed that the tungsten coating had good bonding strength with the molybdenum substrate. The grain size of the coating on molybdenum trended downward as the current density increased from 30 mA cm−2 to 50 mA cm−2. The coating obtained at 50 mA cm−2 had a minimum grain size of 4.57 μm, while its microhardness reached a maximum value of 495 HV.
NERO - A Post Maximum Supernova Radiation Transport Code
Maurer, I; Mazzali, P A; Taubenberger, S; Hachinger, S; Kromer, M; Sim, S; Hillebrandt, W
2011-01-01
The interpretation of supernova (SN) spectra is essential for deriving SN ejecta properties such as density and composition, which in turn can tell us about their progenitors and the explosion mechanism. A very large number of atomic processes are important for spectrum formation. Several tools for calculating SN spectra exist, but they mainly focus on the very early or late epochs. The intermediate phase, which requires an NLTE treatment of radiation transport, has rarely been studied. In this paper we present a new SN radiation transport code, NERO, which can handle those epochs. All the atomic processes are treated in full NLTE under a steady-state assumption. This is a valid approach between roughly 50 and 500 days after the explosion, depending on SN type. This covers the post-maximum photospheric phase and the early and intermediate nebular phases. As a test, we compare NERO to the radiation transport code of Jerkstrand et al. (2011) and to the nebular code of Mazzali et al. (2001). All three codes have bee...
Suspended sediment transport in the freshwater reach of the Hudson river estuary in eastern New York
Wall, G.R.; Nystrom, E.A.; Litten, S.
2008-01-01
Deposition of Hudson River sediment into New York Harbor interferes with navigation lanes and requires continuous dredging. Sediment dynamics at the Hudson estuary turbidity maximum (ETM) have received considerable study, but delivery of sediment to the ETM through the freshwater reach of the estuary has received relatively little attention and few direct measurements. An acoustic Doppler current profiler was positioned at the approximate limit of continuous freshwater to develop a 4-year time series of water velocity, discharge, suspended sediment concentration, and suspended sediment discharge. This data set was compared with suspended sediment discharge data collected during the same period at two sites just above the Hudson head-of-tide (the Federal Dam at Troy) that together represent the single largest source of sediment entering the estuary. The mean annual suspended sediment-discharge from the freshwater reach of the estuary was 737,000 metric tons. Unexpectedly, the total suspended sediment discharge at the study site in November and December slightly exceeded that observed during March and April, the months during which rain and snowmelt typically result in the largest sediment discharge to the estuary. Suspended sediment discharge at the study site exceeded that from the Federal Dam, even though the intervening reach appears to store significant amounts of sediment, suggesting that 30-40% of sediment discharge observed at the study site is derived from tributaries to the estuary between the Federal Dam and study site. A simple model of sediment entering and passing through the freshwater reach on a timescale of weeks appears reasonable during normal hydrologic conditions in adjoining watersheds; however, this simple model may dramatically overestimate sediment delivery during extreme tributary high flows, especially those at the end of, or after, the "flushing season" (October through April). Previous estimates of annual or seasonal sediment delivery
Advanced REACH tool: A Bayesian model for occupational exposure assessment
McNally, K.; Warren, N.; Fransman, W.; Entink, R.K.; Schinkel, J.; Van Tongeren, M.; Cherrie, J.W.; Kromhout, H.; Schneider, T.; Tielemans, E.
2014-01-01
This paper describes a Bayesian model for the assessment of inhalation exposures in an occupational setting; the methodology underpins a freely available web-based application for exposure assessment, the Advanced REACH Tool (ART). The ART is a higher tier exposure tool that combines disparate sourc
An Assessment of EU 2020 Strategy: Too Far to Reach?
Colak, Mehmet Selman; Ege, Aylin
2013-01-01
In 2010, EU adopted a new growth strategy which includes three growth priorities and five headline targets to be reached by 2020. The aim of this paper is to investigate the current performance of the EU member and candidate states in achieving these growth priorities and the overall strategy target by allocating the headline targets into the…
Reaching the Summit: Deaf Adults as Essential Partners in Education
Bourne-Firl, Bridgetta
2016-01-01
How do we reach the summit in terms of supporting the best transition possible for each young deaf or hard of hearing individual in the United States? Should professionals who are hearing work alone to succeed with deaf and hard of hearing students? No matter how good the intention, if we want deaf and hard of hearing students to transition from…
Reach for the Stars: Visions for Literacy Coaching Programs
DeFord, Diane
2012-01-01
This brief by the Literacy Coaching Clearinghouse is about reaching for the stars--stories of vision and commitment from educators in small and large schools. Everyone knows of people who are held up as "visionaries" throughout history: Leonardo Da Vinci, Mahatma Gandhi, Jules Verne, Thomas Edison, Susan Anthony, or John Dewey, to name a few. The…
Reaching a Moveable Visual Target: Dissociations in Brain Tumour Patients
Buiatti, Tania; Skrap, Miran; Shallice, Tim
2013-01-01
Damage to the posterior parietal cortex (PPC) can lead to Optic Ataxia (OA), in which patients misreach to peripheral targets. Recent research suggested that the PPC might be involved not only in simple reaching tasks toward peripheral targets, but also in changing the hand movement trajectory in real time if the target moves. The present study…
Nanshan Aluminum Reached Strategic Cooperation with CSR Corporation Limited
2015-01-01
As a key supplier of aluminum profiles and aluminum plate, sheet and strip products for CSR Corporation Limited, Nanshan Aluminum will join hands with CSR Corporation Limited to reach strategic cooperation. On January 5, Nanshan Aluminum signed a strategic cooperation agreement with CSR Sifang Locomotive & Rolling Stock Co., Ltd, both
Reaching an understanding innovations in how we view reading assessment
Sabatini, John; O'Reilly, Tenaha
2012-01-01
Reaching an Understanding: Innovations in How We View Reading Assessment builds upon the editors' previous book, Measuring Up: Advances in How We Assess Reading Ability, by representing some early attempts to apply theory to help guide the development of new assessments and measurement models.
Advanced reach tool (ART) : Development of the mechanistic model
Fransman, W.; Tongeren, M. van; Cherrie, J.W.; Tischer, M.; Schneider, T.; Schinkel, J.; Kromhout, H.; Warren, N.; Goede, H.; Tielemans, E.
2011-01-01
This paper describes the development of the mechanistic model within a collaborative project, referred to as the Advanced REACH Tool (ART) project, to develop a tool to model inhalation exposure for workers sharing similar operational conditions across different industries and locations in Europe. T
Priming of Reach and Grasp Actions by Handled Objects
Masson, Michael E. J.; Bub, Daniel N.; Breuer, Andreas T.
2011-01-01
Pictures of handled objects such as a beer mug or frying pan are shown to prime speeded reach and grasp actions that are compatible with the object. To determine whether the evocation of motor affordances implied by this result is driven merely by the physical orientation of the object's handle as opposed to higher-level properties of the object,…
Variation in reach-scale hydraulic conductivity of streambeds
Stewardson, M. J.; Datry, T.; Lamouroux, N.; Pella, H.; Thommeret, N.; Valette, L.; Grant, S. B.
2016-04-01
Streambed hydraulic conductivity is an important control on flow within the hyporheic zone, affecting hydrological, ecological, and biogeochemical processes essential to river ecosystem function. Despite many published field measurements, few empirical studies examine the drivers of spatial and temporal variations in streambed hydraulic conductivity. Reach-averaged hydraulic conductivity estimated for 119 surveys in 83 stream reaches across continental France, even of coarse bed streams, are shown to be characteristic of sand and finer sediments. This supports a model where processes leading to the accumulation of finer sediments within streambeds largely control hydraulic conductivity rather than the size of the coarse bed sediment fraction. After describing a conceptual model of relevant processes, we fit an empirical model relating hydraulic conductivity to candidate geomorphic and hydraulic drivers. The fitted model explains 72% of the deviance in hydraulic conductivity (and 30% using an external cross-validation). Reach hydraulic conductivity increases with the amplitude of bedforms within the reach, the bankfull channel width-depth ratio, stream power and upstream catchment erodibility but reduces with time since the last streambed disturbance. The correlation between hydraulic conductivity and time since a streambed mobilisation event is likely a consequence of clogging processes. Streams with a predominantly suspended load and less frequent streambed disturbances are expected to have a lower streambed hydraulic conductivity and reduced hyporheic fluxes. This study suggests a close link between streambed sediment transport dynamics and connectivity between surface water and the hyporheic zone.
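The kind of empirical model described above can be sketched as a log-linear regression of hydraulic conductivity on candidate drivers. The coefficients and synthetic data below are assumptions for illustration, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic drivers: bedform amplitude (m), bankfull width-depth ratio,
# and days since the last streambed-mobilising disturbance.
n = 200
amplitude = rng.uniform(0.05, 0.5, n)
width_depth = rng.uniform(5.0, 60.0, n)
days_since = rng.uniform(1.0, 400.0, n)

# Hypothetical generating rule: log K rises with bedform amplitude and
# width-depth ratio, and falls with time since disturbance (clogging).
log_k = -9.0 + 4.0 * amplitude + 0.02 * width_depth - 0.004 * days_since

# Recover the coefficients with ordinary least squares.
X = np.column_stack([np.ones(n), amplitude, width_depth, days_since])
coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
```

Fitting in log space reflects the fact that hydraulic conductivity spans orders of magnitude; the negative coefficient on time since disturbance mirrors the clogging effect the study reports.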
Postural control during reaching in preterm children with cerebral palsy
van der Heide, JC; Begeer, C; Fock, JM; Otten, B; Stremmelaar, E; van Eykern, LA; Hadders-Algra, M
2004-01-01
Postural control during reaching with the dominant arm was assessed in 58 preterm children with cerebral palsy (CP) aged 2 to 11 years, comprising 34 with spastic hemiplegia (17 males, 17 females) and 24 with bilateral spastic CP (bilateral CP; 15 male, 9 females). Assessments were made by multiple
Perceiving Children's Behavior and Reaching Limits in a Risk Environment
Cordovil, Rita; Santos, Carlos; Barreiros, Joao
2012-01-01
The purpose of this study was to investigate the accuracy of parents' perception of children's reaching limits in a risk scenario. A sample of 68 parents of 1- to 4-year-olds were asked to make a prior estimate of their children's behavior and action limits in a task that involved retrieving a toy out of the water. The action modes used for…
LTRM Fish Sampling Strata, UMRS La Grange Reach
U.S. Geological Survey, Department of the Interior — The data set includes delineation of sampling strata for the six study reaches of the UMRR Program's LTRM element. Separate strata coverages exist for each of the...
Control of reaching movements by muscle synergy combinations
Andrea d'Avella
2013-04-01
Controlling the movement of the arm to achieve a goal, such as reaching for an object, is challenging because it requires coordinating many muscles acting on many joints. The central nervous system might simplify the control of reaching by directly mapping initial states and goals into muscle activations through the combination of muscle synergies, coordinated recruitment of groups of muscles with specific activation profiles. Here we review recent results from the analysis of reaching muscle patterns supporting such a control strategy. Muscle patterns for point-to-point movements can be reconstructed by the combination of a small number of time-varying muscle synergies, modulated in amplitude and timing according to movement directions and speeds. Moreover, the modulation and superposition of the synergies identified from point-to-point movements capture the muscle patterns underlying multi-phasic movements, such as reaching through a via-point or to a target whose location changes after movement initiation. Thus, the sequencing of time-varying muscle synergies might implement an intermittent controller which would allow the construction of complex movements from simple building blocks.
The Internet and the Global Reach of EU law
Kuner, Christopher Barth
, and fundamental rights. There are many examples of the EU exerting its global reach regarding the Internet, particularly in data protection law, but also in areas such as Internet governance, international agreements, and private international law. This occurs through a variety of mechanisms, including emulation...
Veterans Crisis Line: Videos About Reaching out for Help
Videos from the U.S. Department of Veterans Affairs and other organizations about reaching out for help.
Development of postural adjustments during reaching in infants with CP
Hadders-Algra, M; van der Fits, IBM; Stremmelaar, EF; Touwen, BCL
1999-01-01
The development of postural adjustments during reaching movements was longitudinally studied in seven infants with cerebral palsy (CP) between 4 and 18 months of age. Five infants developed spastic hemiplegia, one spastic tetraplegia, and one spastic tetraplegia with athetosis. Each assessment consi
Reaching a Representative Sample of College Students: A Comparative Analysis
Giovenco, Daniel P.; Gundersen, Daniel A.; Delnevo, Cristine D.
2016-01-01
Objective: To explore the feasibility of a random-digit dial (RDD) cellular phone survey in order to reach a national and representative sample of college students. Methods: Demographic distributions from the 2011 National Young Adult Health Survey (NYAHS) were benchmarked against enrollment numbers from the Integrated Postsecondary Education…
The influence of object identity on obstacle avoidance reaching behaviour.
de Haan, A M; Van der Stigchel, S; Nijnens, C M; Dijkerman, H C
2014-07-01
When reaching for target objects, we hardly ever collide with other objects located in our working environment. Behavioural studies have demonstrated that the introduction of non-target objects into the workspace alters both spatial and temporal parameters of reaching trajectories. Previous studies have shown the influence of spatial object features (e.g. size and position) on obstacle avoidance movements. However, obstacle identity may also play a role in the preparation of avoidance responses as this allows prediction of possible negative consequences of collision based on recognition of the obstacle. In this study we test this hypothesis by asking participants to reach towards a target as quickly as possible, in the presence of an empty or full glass of water placed about half way between the target and the starting position, at 8 cm either left or right of the virtual midline. While the spatial features of full and empty glasses of water are the same, the consequences of collision are clearly different. Indeed, when there was a high chance of collision, reaching trajectories veered away more from filled than from empty glasses. This shows that the identity of potential obstacles, which allows for estimating the predicted consequences of collision, is taken into account during obstacle avoidance.
Science Behind Bars: Reaching Inmates from Rikers Island
Mocsy, Agnes
2017-01-01
I report on the project ``Science Behind Bars: Reaching Inmates on Rikers Island'', partially funded through an APS Public Outreach and Informing the Public mini grant. This project involves developing leave-behind materials and setting up meetings to speak with Rikers Island inmates about science, evidence-based reasoning, and the dangers of stereotype threat.
Arctic sea ice reaches second lowest in satellite record
2011-01-01
Xinhua reports that the blanket of sea ice that floats on the Arctic Ocean appears to have reached its lowest extent for 2011, the second lowest recorded since satellites began measuring it in 1979, according to a report released on September 15 by the University of Colorado Boulder's National Snow and Ice Data Center (NSIDC).
LTRM Water Quality Sampling Strata, UMRS La Grange Reach
U.S. Geological Survey, Department of the Interior — The data set includes delineation of sampling strata for the six study reaches of the UMRR Program's LTRM element. Separate strata coverages exist for each of the...
Imaginative Play during Childhood: Required for Reaching Full Potential
Stephens, Karen
2009-01-01
At a brisk pace, research findings focused on children's play are finally reaching the light of day in popular media. No longer left sitting in archives of academic journals, the benefits of play to lifelong success have been touted in radio, television, magazines, and newspapers. It gives early childhood professionals a powerful, credible…
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
KRM Silveira
2006-12-01
Full Text Available OBJECTIVE: To assess the performance in the functional reach test (FR) and lateral reach test (LR) among a sample of healthy individuals aged 20 to 87 years and to verify the influence of gender, age, height, body weight, arm length and foot length. METHOD: A cross-sectional observational study was conducted on 98 people of both genders living in the city of São Paulo and other places in the State of São Paulo. The volunteers had their descriptive measurements recorded and then underwent FR and LR. RESULTS: All the variables had an influence on FR, except arm length (p=0.057), body weight (p=0.746) and the support base used at the time of assessment (p=0.384). The variables exerting greatest influence were the individual's gender (p=0.001), age (p<0.001) and height (p=0.004). This analysis showed that women had less anterior and lateral functional reach than men. There was a substantial positive correlation (r=0.696) between the left and right LR findings. FR had a moderate positive correlation of 0.405 with the left LR and a substantial positive correlation of 0.614 with the right LR. For LR, the height, weight, foot length and arm length
Wang, Yunpu; Dai, Leilei; Fan, Liangliang; Cao, Leipeng; Zhou, Yue; Zhao, Yunfeng; Liu, Yuhuan; Ruan, Roger
2017-03-01
In this study, a ZrO2-based polycrystalline ceramic foam catalyst was prepared and used in the catalytic co-pyrolysis of waste vegetable oil and high density polyethylene (HDPE) for hydrocarbon fuel production. The effects of pyrolysis temperature, catalyst dosage, and HDPE to waste vegetable oil ratio on the product distribution and hydrocarbon fuel composition were examined. Experimental results indicate that the maximum hydrocarbon fuel yield of 63.1 wt.% was obtained at 430°C, and oxygenates were rarely detected in the hydrocarbon fuel. The hydrocarbon fuel yield increased when the catalyst was used. At a catalyst dosage of 15 wt.%, the proportion of alkanes in the hydrocarbon fuel reached 97.85 wt.%, which greatly simplified the fuel composition and improved the fuel quality. As the HDPE to waste vegetable oil ratio increased, the hydrocarbon fuel yield increased monotonically. At an HDPE to waste vegetable oil ratio of 1:1, the maximum proportion (97.85 wt.%) of alkanes was obtained. Moreover, the properties of the hydrocarbon fuel were superior to those of biodiesel and 0# diesel owing to its higher calorific value, better low-temperature fluidity, and lower density and viscosity. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the Maximum Storage Capacity of the Hopfield Model
Folli, Viola; Leonetti, Marco; Ruocco, Giancarlo
2017-01-01
Recurrent neural networks (RNN) have traditionally been of great interest for their capacity to store memories. In past years, several works have been devoted to determining the maximum storage capacity of RNN, especially for the case of the Hopfield network, the most popular kind of RNN. Analyzing the thermodynamic limit of the statistical properties of the Hamiltonian corresponding to the Hopfield neural network, it has been shown in the literature that the retrieval errors diverge when the number of stored memory patterns (P) exceeds a fraction (≈ 14%) of the network size N. In this paper, we study the storage performance of a generalized Hopfield model, where the diagonal elements of the connection matrix are allowed to be different from zero. We investigate this model at finite N. We give an analytical expression for the number of retrieval errors and show that, by increasing the number of stored patterns over a certain threshold, the errors start to decrease and reach values below unity for P ≫ N. We demonstrate that the strongest trade-off between efficiency and effectiveness relies on the number of patterns (P) stored in the network by appropriately fixing the connection weights. When P ≫ N and the diagonal elements of the adjacency matrix are not forced to be zero, the optimal storage capacity is obtained with a number of stored memories much larger than previously reported. This theory paves the way to the design of RNN with high storage capacity that can retrieve the desired pattern without distortions. PMID:28119595
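The storage mechanism this abstract generalizes can be sketched with the classic zero-diagonal Hebbian rule; the network size, patterns, and function names below are illustrative, not the paper's generalized model:

```python
import numpy as np

def train_hopfield(patterns, zero_diagonal=True):
    """Hebbian weights W = (1/N) sum_p x_p x_p^T; the classic model zeroes the diagonal."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    if zero_diagonal:
        np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, max_steps=20):
    """Synchronous sign updates until a fixed point (or the step limit)."""
    state = state.copy()
    for _ in range(max_steps):
        new = np.where(w @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Two orthogonal +/-1 patterns in a 16-unit network, well below the ~0.14*N capacity.
x1 = np.array([1] * 8 + [-1] * 8)
x2 = np.array([1, -1] * 8)
w = train_hopfield(np.stack([x1, x2]))

# Flip two bits of x1; recall restores the stored pattern.
probe = x1.copy()
probe[:2] *= -1
print(np.array_equal(recall(w, probe), x1))  # True
```

Passing `zero_diagonal=False` keeps the self-connections, which is the regime the abstract investigates.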
Epiphyte Density and Diversity on Halimeda incrassata and the Effect on Bulk Isotopic Measurements
Drayer, C. L.; Katz, D. A.; Devlin, Q. B.; Swart, P. K.; Evans, S. L.
2008-12-01
Epiphytic density and diversity, organic and inorganic δ13C, and organic δ15N were determined for the green calcifying benthic macroalga Halimeda incrassata from Biscayne Bay, a subtropical coastal lagoon located off the southeastern coast of Florida. Cocconeis and Bacteriastrum were determined to be the two most dominant diatomaceous epiphytes living on the Halimeda tissue, but an unidentified bacterium proved to be the overall most abundant epiphyte. Cocconeis and the bacterium reached maximum densities mid-strand, while Bacteriastrum reached a bimodal peak density at the top and bottom of the strand. Organic δ13C values are consistently lighter towards the top of the strand, while inorganic δ13C shows the opposite pattern. These results indicate that photosynthetic rates are higher at the top of the strand, where metabolic processes are able to preferentially take up 12C. δ13C values can vary by as much as 3 ‰ by algal segment, and inorganic values tend to approach equilibrium with the surrounding seawater in the lower algal segments, which are the oldest parts of the thallus. Organic δ15N shows no definite independent trend, but increases in δ15N appear to correlate with increased density of the Cocconeis diatom. Organic δ15N can vary by as much as 9 ‰ by algal segment. This work questions the validity of using bulk isotopic measurements of the algae as a proxy for the origin of nitrogen. Future work should address quantification of epiphytic and Halimeda sp. biomass, the possible presence of other bacteria within the algal tissue and skeletal structure, the relationship between the bacteria and the algal host (symbiotic vs. parasitic), and how epiphytic communities vary over time (seasons) and space (location within the Bay).
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
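The gain from replacing a linear hypothesis with a polynomial one can be illustrated with a generic least-squares polynomial fit (the maximum likelihood estimate under Gaussian noise); this is a sketch of polynomial regression in general, not the paper's HMM-mean adaptation, and all names are illustrative:

```python
import numpy as np

def fit_polynomial(x, y, degree):
    """Least-squares fit of a degree-d polynomial to (x, y) samples."""
    X = np.vander(x, degree + 1, increasing=True)  # columns: 1, x, x^2, ...
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict(coeffs, x):
    return np.vander(x, len(coeffs), increasing=True) @ coeffs

# Noise-free quadratic data: a linear model cannot fit it, a quadratic one recovers it.
x = np.linspace(-1.0, 1.0, 21)
y = 2.0 + 3.0 * x + 4.0 * x ** 2

lin = fit_polynomial(x, y, 1)
quad = fit_polynomial(x, y, 2)
print(np.max(np.abs(predict(lin, x) - y)) > 0.5)  # True: linear fit leaves residuals
print(np.allclose(quad, [2.0, 3.0, 4.0]))         # True: quadratic fit is exact
```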
Surface Elevation Distribution of Sea Waves Based on the Maximum Entropy Principle
戴德君; 王伟; 钱成春; 孙孚
2001-01-01
A probability density function of surface elevation is obtained through improvement of the method introduced by Cieslikiewicz, who employed the maximum entropy principle to investigate the surface elevation distribution. The density function can easily be extended to higher order according to demand and is non-negative everywhere, satisfying the basic behavior of the probability. Moreover, because the distribution is derived without any assumption about sea waves, it is found from comparison with several accepted distributions that the new form of distribution can be applied in a wider range of wave conditions. In addition, the density function can be used to fit some observed distributions of surface vertical acceleration, although something remains unsolved.
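The maximum entropy principle invoked here can be illustrated in its simplest discrete form: among all pmfs on a finite support with a prescribed mean, the entropy maximizer has the Gibbs form p_i ∝ exp(λ·x_i), with λ fixed by the constraint. A minimal sketch (illustrative only, not the authors' surface-elevation density):

```python
import math

def maxent_with_mean(values, target_mean, tol=1e-10):
    """Maximum-entropy pmf on a finite support subject to a fixed mean.
    The solution is p_i proportional to exp(lam * x_i); since the mean is
    monotonically increasing in lam, solve for lam by bisection."""
    def mean_at(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_at(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# The textbook example: die faces with a biased mean of 4.5 instead of 3.5.
p = maxent_with_mean([1, 2, 3, 4, 5, 6], 4.5)
print(abs(sum(p) - 1.0) < 1e-9)  # True: normalized
```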
Bennani, Youssef; Pronzato, Luc; Rendas, Maria João
2015-01-01
We estimate the density of a set of biophysical parameters from region-censored observations. We propose a new Maximum Entropy (maxent) estimator formulated as finding the most likely constrained maxent density. By using the Rényi entropy of order two instead of the Shannon entropy, we are led to a quadratic optimization problem with linear inequality constraints that has an efficient numerical solution. We compare the proposed estimator to the NPMLE and to the best-fitting maxent solutions on real data from hyperbaric diving, showing that the resulting distribution has better generalization performance than NPMLE or maxent alone.
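The computational point of this abstract, that the order-2 Rényi entropy turns maxent into a quadratic problem, can be seen in a stripped-down form: maximizing −log Σ p_i² under linear equality constraints is a minimum-norm problem with a closed-form solution. This sketch omits the paper's inequality constraints, and the support and constraints are made up for illustration:

```python
import numpy as np

def renyi2_maxent(A, b):
    """Maximizing the order-2 Rényi entropy -log(sum p_i^2) subject to A p = b
    is equivalent to minimizing ||p||^2, whose minimum-norm solution is
    p = A^T (A A^T)^{-1} b (a valid density only when it comes out nonnegative)."""
    return A.T @ np.linalg.solve(A @ A.T, b)

# Support {0,...,4}; constraints: normalization (sum p = 1) and mean 2.5.
x = np.arange(5.0)
A = np.vstack([np.ones(5), x])
b = np.array([1.0, 2.5])
p = renyi2_maxent(A, b)
# p ramps linearly: [0.1, 0.15, 0.2, 0.25, 0.3], satisfying both constraints.
print(np.allclose(A @ p, b))  # True
```

With inequality constraints added, the same objective becomes the quadratic program the authors solve numerically.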
Fate of the insecticide lambda-cyhalothrin in ditch enclosures differing in vegetation density.
Leistra, Minze; Zweers, Anton J; Warinton, Jacqui S; Crum, Steven J H; Hand, Laurence H; Beltman, Wim H J; Maund, Stephen J
2004-01-01
Use of the insecticide lambda-cyhalothrin in agriculture may result in the contamination of water bodies, for example by spray drift. Therefore, the possible exposure of aquatic organisms to this insecticide needs to be evaluated. The exposure of the organisms may be reduced by the strong sorption of the insecticide to organic materials and its susceptibility to hydrolysis at the high pH values in the natural range. In experiments done in May and August, formulated lambda-cyhalothrin was mixed with the water body of enclosures in experimental ditches containing a bottom layer and macrophytes (at different densities) or phytoplankton. Concentrations of lambda-cyhalothrin in the water body and in the sediment layer, and contents in the plant compartment, were measured by gas-liquid chromatography at various times up to 1 week after application. Various water quality parameters were also measured. Concentrations of lambda-cyhalothrin decreased rapidly in the water column: 1 day after application, 24-40% of the dose remained in the water, and by 3 days it had declined to 1.8-6.5%. At the highest plant density, lambda-cyhalothrin residue in the plant compartment reached a maximum of 50% of the dose after 1 day; at intermediate and low plant densities, this maximum was only 3-11% of the dose (after 1-2 days). The percentage of the insecticide in the ditch sediment was 12% or less of the dose and tended to be lower at higher plant densities. Alkaline hydrolysis in the water near the surface of macrophytes and phytoplankton is considered to be the main dissipation process for lambda-cyhalothrin.
High cell density strategy for poly(3-hydroxybutyrate) production by Cupriavidus necator
J. L. Ienczak
2011-12-01
Full Text Available Poly(3-hydroxybutyrate) (P(3HB)) is a carbon and intracellular storage source for different microorganisms, and its production can achieve high productivities by means of high cell density cultures. The aim of this study was to propose a high cell density strategy for P(3HB) production by Cupriavidus necator. The exponential growth phase demands accurate control of the oxygen transfer system in the bioreactor, owing to the maximum specific growth rate (µXr) and, consequently, the maximum specific oxygen uptake rate (QO2), in addition to significant residual biomass (Xr) growth in high cell density cultures. In this context, this work investigated a strategy for obtaining high cell density that includes a linear growth phase for P(3HB) production by C. necator in fed-batch culture. The linear growth phase was included between the exponential growth phase and the P(3HB) production phase as a strategy to reduce the specific growth rate (µXr) and specific oxygen uptake rate (QO2), with a constant residual biomass growth rate (d(V·Xr)/dt = k = constant) and a linear increase of biomass. Three culture strategies were performed. The results showed that a high residual biomass concentration (30 gXr.L-1) can be reached by including the linear growth strategy, and that specific growth rates (µXr) between 0.08 and 0.05 h-1 at the beginning of the production phase are necessary to attain high P(3HB) productivity.
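The phase scheduling described above, exponential growth followed by a linear phase with constant absolute biomass growth rate, can be sketched numerically; the rates and switch time below are made-up illustrative numbers, not the study's measured values:

```python
import math

def biomass(t, x0=0.1, mu=0.2, t_switch=20.0):
    """Residual biomass X(t): exponential growth at specific rate mu until t_switch,
    then a linear phase with constant absolute rate k = mu * X(t_switch).
    X is continuous at the switch, and the specific growth rate k / X(t)
    declines afterwards, which is the point of the linear-phase strategy."""
    x_switch = x0 * math.exp(mu * t_switch)
    if t <= t_switch:
        return x0 * math.exp(mu * t)
    k = mu * x_switch  # constant biomass growth rate of the linear phase
    return x_switch + k * (t - t_switch)

# Specific growth rate drops after the switch while biomass keeps rising linearly.
for t in (10.0, 20.0, 30.0):
    x = biomass(t)
    mu_eff = 0.2 if t <= 20.0 else 0.2 * biomass(20.0) / x
    print(f"t={t:.0f} h  X={x:.2f}  specific rate={mu_eff:.3f} 1/h")
```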
Pigeon, Pascale; Bortolami, Simone B; DiZio, Paul; Lackner, James R
2003-01-01
When reaching movements involve simultaneous trunk rotation, additional interaction torques are generated on the arm that are absent when the trunk is stable. To explore whether the CNS compensates for such self-generated interaction torques, we recorded hand trajectories in reaching tasks involving various amplitudes and velocities of arm extension and trunk rotation. Subjects pointed to three targets on a surface slightly above waist level. Two of the target locations were chosen so that a similar arm configuration relative to the trunk would be required for reaching to them, one of these targets requiring substantial trunk rotation, the other very little. Significant trunk rotation was necessary to reach the third target, but the arm's radial distance to the body remained virtually unchanged. Subjects reached at two speeds, a natural pace (slow) and rapidly (fast), under normal lighting and in total darkness. Trunk angular velocity and finger velocity relative to the trunk were higher in the fast conditions but were not affected by the presence or absence of vision. Peak trunk velocity increased with increasing trunk rotation up to a maximum of 200 degrees/s. In slow movements, peak finger velocity relative to the trunk was smaller when trunk rotation was necessary to reach the targets. In fast movements, peak finger velocity was approximately 1.7 m/s for all targets. Finger trajectories were more curved when reaching movements involved substantial trunk rotation; however, the terminal errors and the maximal deviation of the trajectory from a straight line were comparable in slow and fast movements. This pattern indicates that the larger Coriolis, centripetal, and inertial interaction torques generated during rapid reaches were compensated by additional joint torques. Trajectory characteristics did not vary with the presence or absence of vision, indicating that visual feedback was unnecessary for anticipatory compensations. In all reaches involving trunk
M. Mihelich
2014-11-01
Full Text Available We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS respectively. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10 ~ 100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order term proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.