Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all the years, which is not the case for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model is chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for both the monthly average maximum and minimum temperature series on the basis of the minimum Bayesian information criterion (BIC). The model parameters are estimated by the maximum-likelihood method, together with the standard errors of the residuals. The adequacy of the selected model is assessed through correlation diagnostics (ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and normality diagnostics (kernel and normal density curves over the residual histogram and a Q-Q plot). Finally, monthly maximum and minimum temperature patterns of India are forecast for the next 3 years using the selected model.
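As an illustration of the modelling step this abstract describes, the sketch below applies the seasonal (lag-12) differencing implied by the D = 1, s = 12 part of SARIMA (1, 0, 0) × (0, 1, 1)12 and estimates an AR(1) coefficient from the lag-1 autocorrelation. The data are synthetic, not the paper's series, and this is a minimal sketch rather than a full SARIMA fit.

```python
import math, random

rng = random.Random(42)

def seasonal_difference(series, s=12):
    """Lag-s differencing: y[t] = x[t] - x[t-s] removes a period-s cycle."""
    return [series[t] - series[t - s] for t in range(s, len(series))]

def lag1_autocorr(y):
    """Lag-1 autocorrelation, the Yule-Walker estimate of an AR(1) coefficient."""
    n = len(y)
    m = sum(y) / n
    var = sum((v - m) ** 2 for v in y)
    cov = sum((y[t] - m) * (y[t - 1] - m) for t in range(1, n))
    return cov / var

# Synthetic monthly "temperature": level + 12-month cycle + trend + AR(1) noise.
e, phi = [0.0], 0.6
for _ in range(1, 360):
    e.append(phi * e[-1] + rng.gauss(0, 0.3))
x = [25 + 5 * math.sin(2 * math.pi * t / 12) + 0.01 * t + e[t] for t in range(360)]

y = seasonal_difference(x)       # the 12-month cycle cancels exactly
print(len(y), round(lag1_autocorr(y), 2))
```

The differenced series is stationary apart from the small residual trend increment, which is what the non-seasonal AR term then models.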
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
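The quoted value can be checked quickly: 620160/8! is about 15.381 comparisons on average, just above the information-theoretic lower bound log2(8!) of about 15.299. This computation is ours, not the paper's dynamic-programming tooling.

```python
from math import factorial, log2

# The paper states the minimum average depth for sorting 8 elements is 620160/8!.
avg_depth = 620160 / factorial(8)      # 620160 / 40320
lower_bound = log2(factorial(8))       # information-theoretic bound on average depth

print(round(avg_depth, 6))             # 15.380952
print(round(lower_bound, 4))           # ~15.2992
assert lower_bound <= avg_depth <= 16  # minimum (worst-case) depth for n = 8 is 16
```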
Maximum and minimum entropy states yielding local continuity bounds
Hanson, Eric P.; Datta, Nilanjana
2018-04-01
Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρε*(σ) [respectively, ρ*,ε(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρε*(σ) and ρ*,ε(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the min- and max-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ε-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.
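For context, the global (spectrum-independent) Audenaert-Fannes bound that the paper strengthens locally can be evaluated directly. The function below is a sketch with entropy measured in bits; the bound holds for trace distance at most ε, with ε not exceeding 1 - 1/d.

```python
from math import log2

def binary_entropy(p):
    """h(p) = -p*log2(p) - (1-p)*log2(1-p), with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def audenaert_fannes_bound(eps, d):
    """Global continuity bound |S(rho) - S(sigma)| <= eps*log2(d-1) + h(eps)
    for states in dimension d with trace distance <= eps (entropy in bits)."""
    return eps * log2(d - 1) + binary_entropy(eps)

# For qubits (d = 2) the dimension term vanishes and only h(eps) remains.
print(round(audenaert_fannes_bound(0.1, 2), 4))   # 0.469
```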
Future changes over the Himalayas: Maximum and minimum temperature
Dimri, A. P.; Kumar, D.; Choudhary, A.; Maharana, P.
2018-03-01
An assessment of the projections of minimum and maximum air temperature over the Indian Himalayan region (IHR) from the COordinated Regional Climate Downscaling EXperiment-South Asia (hereafter CORDEX-SA) regional climate model (RCM) experiments has been carried out under two different Representative Concentration Pathway (RCP) scenarios. The major aim of this study is to assess the probable future changes in the minimum and maximum temperature climatology and its long-term trend under different RCPs, along with elevation-dependent warming over the IHR. A number of statistical analyses, such as changes in mean climatology, long-term spatial trend and probability distribution function, are carried out to detect the signals of climate change. The study also tries to quantify the uncertainties associated with different model experiments and their ensemble in space, time and for different seasons. The model experiments and their ensemble show a prominent cold bias over the Himalayas for the present climate. However, a statistically significant higher warming rate (0.23-0.52 °C/decade) for both minimum and maximum air temperature (Tmin and Tmax) is observed for all the seasons under both RCPs. The rate of warming intensifies with the increase in radiative forcing across the range of greenhouse gas scenarios from RCP4.5 to RCP8.5. In addition, a wide range of spatial variability and disagreement in the magnitude of trend between different models describes the uncertainty associated with the model projections and scenarios. The projected rate of increase of Tmin may destabilize snow formation at higher altitudes in the northern and western parts of the Himalayan region, while the rising trend of Tmax over the southern flank may effectively melt more snow cover. The combined effect of the rising trends of Tmin and Tmax may pose a potential threat to the glacial deposits. The overall trend of the diurnal temperature range (DTR) portrays an increasing trend across the entire area with
CO2 maximum in the oxygen minimum zone (OMZ)
Paulmier, A.; Ruiz-Pino, D.; Garçon, V.
2011-02-01
Oxygen minimum zones (OMZs), known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to significantly contribute to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the oceanic sources and sinks budget of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure, associated locally with the Chilean OMZ and globally with the main most intense OMZs in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000-2002) and a monthly monitoring (2000-2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg-1, up to 2350 μmol kg-1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence), where the CMZ-OMZ core originates. The "carbon deficit" in the CMZ core would be mainly compensated locally at the
Maximum And Minimum Temperature Trends In Mexico For The Last 31 Years
Romero-Centeno, R.; Zavala-Hidalgo, J.; Allende Arandia, M. E.; Carrasco-Mijarez, N.; Calderon-Bustamante, O.
2013-05-01
Based on high-resolution (1') daily maps of the maximum and minimum temperatures in Mexico, an analysis of the trends over the last 31 years is performed. The maps were generated using all the available information from more than 5,000 stations of the Mexican Weather Service (Servicio Meteorológico Nacional, SMN) for the period 1979-2009, along with data from the North American Regional Reanalysis (NARR). The data processing procedure includes a quality control step, in order to eliminate erroneous daily data, and makes use of a high-resolution digital elevation model (from GEBCO), the relationship between air temperature and elevation by means of the average environmental lapse rate, and interpolation algorithms (linear and inverse-distance weighting). Based on the monthly gridded maps for the mentioned period, the maximum and minimum temperature trends, calculated by least-squares linear regression, and their statistical significance are obtained and discussed.
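The trend-fitting step described above is ordinary least squares; a minimal sketch on an invented, exactly linear series (not the study's gridded data):

```python
def linear_trend(values):
    """Least-squares slope per time step for an evenly spaced series."""
    n = len(values)
    t_mean = (n - 1) / 2
    v_mean = sum(values) / n
    num = sum((t - t_mean) * (v - v_mean) for t, v in enumerate(values))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# Toy check on an exactly linear "annual temperature" series (deg C per year).
temps = [14.0 + 0.02 * year for year in range(31)]   # 31 years, +0.02 C/yr
print(round(linear_trend(temps), 4))                  # 0.02
```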
Maximum effort in the minimum-effort game
Czech Academy of Sciences Publication Activity Database
Engelmann, Dirk; Normann, H.-T.
2010-01-01
Roč. 13, č. 3 (2010), s. 249-259 ISSN 1386-4157 Institutional research plan: CEZ:AV0Z70850503 Keywords : minimum-effort game * coordination game * experiments * social capital Subject RIV: AH - Economics Impact factor: 1.868, year: 2010
Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow
Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke
2017-04-01
Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale, and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90%, and the performance of the models compared well with that of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
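A toy sketch of the regression idea: fit a power law AF = c * area^b by ordinary least squares in log-log space. The paper's models use several more predictors; the exponent, coefficient and catchments below are synthetic.

```python
from math import log, exp

# Hypothetical power-law form AF = c * area^b, fitted on made-up catchments.
areas = [1e2, 1e3, 1e4, 1e5, 1e6]         # km^2
flows = [5 * a ** 0.8 for a in areas]      # synthetic AF obeying the law exactly

X = [log(a) for a in areas]
Y = [log(q) for q in flows]
n = len(X)
xm, ym = sum(X) / n, sum(Y) / n
b = sum((x - xm) * (y - ym) for x, y in zip(X, Y)) / sum((x - xm) ** 2 for x in X)
c = exp(ym - b * xm)

print(round(b, 3), round(c, 3))            # 0.8 5.0
```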
Dopant density from maximum-minimum capacitance ratio of implanted MOS structures
International Nuclear Information System (INIS)
Brews, J.R.
1982-01-01
For uniformly doped structures, the ratio of the maximum to the minimum high-frequency capacitance determines the dopant ion density per unit volume. Here it is shown that for implanted structures this 'max-min' dopant density estimate depends upon the dose and depth of the implant through the first moment of the depleted portion of the implant. As a result, the 'max-min' estimate of dopant ion density reflects neither the surface dopant density nor the average of the dopant density over the depletion layer. In particular, it is not clear how this dopant ion density estimate is related to the flatband capacitance. (author)
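For the uniformly doped case that the paper takes as its baseline, the standard max-min estimate solves N = 4*eps_s*phi_B/(q*W^2) with phi_B = (kT/q)*ln(N/n_i) and W inferred from the two capacitances. The sketch below is a hedged SI-unit illustration with invented example numbers, not the paper's analysis of implanted profiles.

```python
from math import log

# Classical "max-min" dopant-density estimate for a UNIFORMLY doped silicon
# MOS capacitor. All values SI; the example capacitances are made up.

eps_s = 11.7 * 8.854e-12     # silicon permittivity, F/m
q     = 1.602e-19            # elementary charge, C
kT_q  = 0.0259               # thermal voltage at ~300 K, V
n_i   = 1.45e16              # intrinsic carrier density of Si, m^-3

def doping_from_max_min(c_max, c_min, n_guess=1e21):
    """Solve N = 4*eps_s*phi_B/(q*W^2), phi_B = (kT/q)*ln(N/n_i), by fixed point.
    c_max, c_min are high-frequency capacitances per unit area (F/m^2)."""
    w = eps_s * (1.0 / c_min - 1.0 / c_max)   # maximum depletion width, m
    n = n_guess
    for _ in range(100):                       # contraction; converges quickly
        n = 4.0 * eps_s * kT_q * log(n / n_i) / (q * w * w)
    return n

# Hypothetical numbers (roughly a 20 nm oxide, doping near 1e16 cm^-3):
c_ox  = 1.726e-3             # F/m^2
c_min = 2.877e-4             # F/m^2
print(f"{doping_from_max_min(c_ox, c_min):.3e}")
```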
50 CFR 259.34 - Minimum and maximum deposits; maximum time to deposit.
2010-10-01
... B objective. A time longer than 10 years, either by original scheduling or by subsequent extension... OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES CAPITAL CONSTRUCTION FUND...) Minimum annual deposit. The minimum annual (based on each party's taxable year) deposit required by the...
Minimum disturbance rewards with maximum possible classical correlations
Energy Technology Data Exchange (ETDEWEB)
Pande, Varad R., E-mail: varad_pande@yahoo.in [Department of Physics, Indian Institute of Science Education and Research Pune, 411008 (India); Shaji, Anil [School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, 695016 (India)
2017-07-12
Weak measurements done on a subsystem of a bipartite system having both classical and nonclassical correlations between its components can potentially reveal information about the other subsystem with minimal disturbance to the overall state. We use weak quantum discord and the fidelity between the initial bipartite state and the state after measurement to construct a cost function that accounts for both the amount of information revealed about the other system as well as the disturbance to the overall state. We investigate the behaviour of the cost function for families of two qubit states and show that there is an optimal choice that can be made for the strength of the weak measurement. - Highlights: • Weak measurements done on one part of a bipartite system with controlled strength. • Weak quantum discord & fidelity used to quantify all correlations and disturbance. • Cost function to probe the tradeoff between extracted correlations and disturbance. • Optimal measurement strength for maximum extraction of classical correlations.
Maximum nonlocality and minimum uncertainty using magic states
Howard, Mark
2015-04-01
We prove that magic states from the Clifford hierarchy give optimal solutions for tasks involving nonlocality and entropic uncertainty with respect to Pauli measurements. For both the nonlocality and uncertainty tasks, stabilizer states are the worst possible pure states, so our solutions have an operational interpretation as being highly nonstabilizer. The optimal strategy for a qudit version of the Clauser-Horne-Shimony-Holt game in prime dimensions is achieved by measuring maximally entangled states that are isomorphic to single-qudit magic states. These magic states have an appealingly simple form, and our proof shows that they are "balanced" with respect to all but one of the mutually unbiased stabilizer bases. Of all equatorial qudit states, magic states minimize the average entropic uncertainties for collision entropy and also, for small prime dimensions, min-entropy, a fact that may have implications for cryptography.
Local Times of Galactic Cosmic Ray Intensity Maximum and Minimum in the Diurnal Variation
Directory of Open Access Journals (Sweden)
Su Yeon Oh
2006-06-01
The diurnal variation of galactic cosmic ray (GCR) flux intensity observed by the ground Neutron Monitor (NM) shows a sinusoidal pattern with an amplitude of 1-2% of the daily mean. We carried out a statistical study on tendencies of the local times of GCR intensity daily maximum and minimum. To test the influences of the solar activity and the location (cut-off rigidity) on the distribution of the local times of maximum and minimum GCR intensity, we have examined the data of 1996 (solar minimum) and 2000 (solar maximum) at the low-latitude Haleakala (latitude: 20.72 N, cut-off rigidity: 12.91 GeV) and the high-latitude Oulu (latitude: 65.05 N, cut-off rigidity: 0.81 GeV) NM stations. The most frequent local times of the GCR intensity daily maximum and minimum come later by about 2-3 hours in the solar activity maximum year 2000 than in the solar activity minimum year 1996. The Oulu NM station, whose cut-off rigidity is smaller, has the most frequent local times of the GCR intensity maximum and minimum later by 2-3 hours than those of the Haleakala station. This feature is more evident at the solar maximum. The phase of the daily variation in GCR is dependent upon the interplanetary magnetic field varying with the solar activity and the cut-off rigidity varying with the geographic latitude.
The Maximums and Minimums of a Polynomial or Maximizing Profits and Minimizing Aircraft Losses.
Groves, Brenton R.
1984-01-01
Plotting a polynomial over the range of real numbers when its derivative contains complex roots is discussed. The polynomials are graphed by calculating the minimums, maximums, and zeros of the function. (MNS)
49 CFR - Maximum and minimum allowable operating pressure; low-pressure distribution systems
2010-10-01
... distribution systems. (a) No person may operate a low-pressure distribution system at a pressure high enough to... (b) No person may operate a low-pressure distribution system at a pressure lower than the minimum...
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Energy Technology Data Exchange (ETDEWEB)
Paiz, Mary Rose [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
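A minimal sketch of the lower-threshold step: evaluate a low quantile of a fitted GEV distribution. The parameters below are hypothetical, and the report's non-stationary fit would make them time-dependent.

```python
from math import log, exp

def gev_quantile(p, mu, sigma, xi):
    """Inverse CDF of the GEV: returns x with F(x) = p (0 < p < 1).
    The Gumbel limit is handled at xi == 0."""
    if xi == 0.0:
        return mu - sigma * log(-log(p))
    return mu + sigma / xi * ((-log(p)) ** (-xi) - 1.0)

# Hypothetical fitted parameters for daily minimum response time (ms).
mu, sigma, xi = 120.0, 15.0, -0.1
lower_threshold = gev_quantile(0.05, mu, sigma, xi)   # 5th-percentile threshold
print(round(lower_threshold, 2))
```

Observations falling below this quantile would be flagged as candidate anomalies.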
Changes in atmospheric circulation between solar maximum and minimum conditions in winter and summer
Lee, Jae Nyung
2008-10-01
Statistically significant climate responses to the solar variability are found in Northern Annular Mode (NAM) and in the tropical circulation. This study is based on the statistical analysis of numerical simulations with ModelE version of the chemistry coupled Goddard Institute for Space Studies (GISS) general circulation model (GCM) and National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. The low frequency large scale variability of the winter and summer circulation is described by the NAM, the leading Empirical Orthogonal Function (EOF) of geopotential heights. The newly defined seasonal annular modes and its dynamical significance in the stratosphere and troposphere in the GISS ModelE is shown and compared with those in the NCEP/NCAR reanalysis. In the stratosphere, the summer NAM obtained from NCEP/NCAR reanalysis as well as from the ModelE simulations has the same sign throughout the northern hemisphere, but shows greater variability at low latitudes. The patterns in both analyses are consistent with the interpretation that low NAM conditions represent an enhancement of the seasonal difference between the summer and the annual averages of geopotential height, temperature and velocity distributions, while the reverse holds for high NAM conditions. Composite analysis of high and low NAM cases in both the model and observation suggests that the summer stratosphere is more "summer-like" when the solar activity is near a maximum. This means that the zonal easterly wind flow is stronger and the temperature is higher than normal. Thus increased irradiance favors a low summer NAM. A quantitative comparison of the anti-correlation between the NAM and the solar forcing is presented in the model and in the observation, both of which show lower/higher NAM index in solar maximum/minimum conditions. The summer NAM in the troposphere obtained from NCEP/NCAR reanalysis has a dipolar zonal structure with maximum
Directory of Open Access Journals (Sweden)
S.Balaji
2015-06-01
A mobile ad hoc network is an instantaneous wireless network that is dynamic in nature. It supports single-hop and multihop communication. In this infrastructure-less network, clustering is a significant model to maintain the topology of the network. The clustering process includes different phases like cluster formation, cluster head selection and cluster maintenance. Choosing the cluster head is important, as the stability of the network depends on a well-organized and resourceful cluster head. When a node has an increased number of neighbors it can act as a link between the neighbor nodes, which in turn reduces the number of hops in multihop communication. Promisingly, the node with more neighbors should also have enough energy to provide stability in the network. Hence these aspects demand focus. In weight-based cluster head selection, closeness and the average minimum power required are considered for purging ineligible nodes. The optimal set of nodes selected after purging will compete to become cluster head, and the node with the maximum weight is selected as cluster head. A mathematical formulation is developed to show that the proposed method provides the optimum result. It is also suggested that the weight factor in calculating the node weight should give precise importance to energy and node stability.
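The abstract names the selection criteria but not an exact formula; a hypothetical weight function and election step might look like the following (the weights, normalization and node attributes are all invented for illustration).

```python
# Illustrative weighted cluster-head election. Degree is assumed normalized by
# the network's maximum degree; energy is a 0-1 battery fraction; avg_min_power
# is the average minimum transmit power needed to reach the node's neighbors.

W_DEGREE, W_ENERGY, W_POWER = 0.4, 0.4, 0.2

def node_weight(degree, energy, avg_min_power):
    """Higher connectivity and energy raise the weight; required power lowers it."""
    return W_DEGREE * degree + W_ENERGY * energy - W_POWER * avg_min_power

def elect_cluster_head(nodes):
    """nodes: dict name -> (degree, energy, avg_min_power); returns best name."""
    return max(nodes, key=lambda n: node_weight(*nodes[n]))

nodes = {
    "A": (0.83, 0.90, 0.6),   # well connected, high energy
    "B": (1.00, 0.20, 0.8),   # best connected but nearly drained
    "C": (0.50, 0.95, 0.5),   # few neighbors, fresh battery
}
print(elect_cluster_head(nodes))   # A
```

The balanced node wins: the energy term vetoes the drained high-degree node, matching the abstract's argument that connectivity alone is insufficient.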
International Nuclear Information System (INIS)
Tit, N.; Kumar, N.; Pradhan, P.
1993-07-01
Exact numerical calculation of ensemble averaged length-scale dependent conductance for the 1D Anderson model is shown to support an earlier conjecture for a conductance minimum. Numerical results can be understood in terms of the Thouless expression for the conductance and the Wigner level-spacing statistics. (author). 8 refs, 2 figs
Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems
International Nuclear Information System (INIS)
Stipanović, Dušan M.; Tomlin, Claire J.; Leitmann, George
2012-01-01
In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
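The paper derives its own convergent approximations; the textbook example of the same idea is the log-sum-exp family sketched below, which over-approximates the maximum and decreases monotonically to it as the sharpness parameter k grows.

```python
from math import exp, log

def soft_max(xs, k):
    """(1/k)*log(sum exp(k*x)) >= max(xs); converges to max(xs) as k grows."""
    m = max(xs)                       # subtract max for numerical stability
    return m + log(sum(exp(k * (x - m)) for x in xs)) / k

def soft_min(xs, k):
    """Smooth under-approximation of min via -soft_max(-x)."""
    return -soft_max([-x for x in xs], k)

xs = [1.0, 2.0, 3.5]
approx = [soft_max(xs, k) for k in (1, 10, 100)]
print([round(a, 4) for a in approx])  # decreasing toward max(xs) = 3.5
```

Both functions are differentiable and monotone in each argument, the two properties the sufficient conditions in the paper rely on.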
78 FR 22798 - Hazardous Materials: Revision of Maximum and Minimum Civil Penalties
2013-04-17
.... 5101 et seq.). Section 5123(a) of that law provides civil penalties for knowing violations of Federal... 107--Guidelines for Civil Penalties * * * * * IV. * * * C. * * * Under the Federal hazmat law, 49 U.S... Maximum and Minimum Civil Penalties AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA...
Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds
Energy Technology Data Exchange (ETDEWEB)
Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)
2016-03-15
The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds with their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all of the chemical reactions, but the minimum polarizability and minimum electrophilicity principles are not valid for all reactions. We also propose simple methods to compute the percentage ionic character and internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computing the percentage ionic character and internuclear distances of ionic compounds are valid.
The Consequences of Indexing the Minimum Wage to Average Wages in the U.S. Economy.
Macpherson, David A.; Even, William E.
The consequences of indexing the minimum wage to average wages in the U.S. economy were analyzed. The study data were drawn from the 1974-1978 May Current Population Survey (CPS) and the 180 monthly CPS Outgoing Rotation Group files for 1979-1993 (approximate annual sample sizes of 40,000 and 180,000, respectively). The effects of indexing on the…
Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model
Yang, Yuefang; Gan, Chunhui; Shen, Tingting
2017-05-01
In studying the configuration of the tankers of a chemical logistics park, the minimum-cost maximum-flow model is adopted. Firstly, the transport capacity of the park's loading and unloading areas and the transportation demand for the dangerous goods are taken as the constraint conditions of the model; then the transport arc capacity, the transport arc flow and the transport arc edge weight are determined in the transportation network diagram; finally, the solution is computed with software. The calculation results show that the tanker configuration problem can be effectively solved by the minimum-cost maximum-flow model, which has theoretical and practical application value for the tanker management of railway transportation of dangerous goods in a chemical logistics park.
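A self-contained sketch of the minimum-cost maximum-flow computation (successive shortest paths with Bellman-Ford) on an invented toy network; the paper's actual network, capacities and costs are not reproduced here.

```python
# Toy min-cost max-flow in pure Python. The network below is hypothetical:
# source -> loading tracks -> unloading area -> sink.

def min_cost_max_flow(n, edges, s, t):
    """edges: list of [u, v, capacity, cost]. Returns (max_flow, total_cost)."""
    graph = [[] for _ in range(n)]    # node -> list of edge indices
    cap, cost, to = [], [], []
    def add(u, v, c, w):
        graph[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        graph[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)
    for u, v, c, w in edges:
        add(u, v, c, w)

    flow = total = 0
    while True:
        dist = [float("inf")] * n
        dist[s] = 0
        prev_edge = [-1] * n
        for _ in range(n - 1):                  # Bellman-Ford relaxation
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for ei in graph[u]:
                    v = to[ei]
                    if cap[ei] > 0 and dist[u] + cost[ei] < dist[v]:
                        dist[v] = dist[u] + cost[ei]
                        prev_edge[v] = ei
        if dist[t] == float("inf"):
            break                               # no augmenting path left
        push = float("inf"); v = t              # bottleneck along shortest path
        while v != s:
            ei = prev_edge[v]; push = min(push, cap[ei]); v = to[ei ^ 1]
        v = t                                   # push flow along the path
        while v != s:
            ei = prev_edge[v]; cap[ei] -= push; cap[ei ^ 1] += push; v = to[ei ^ 1]
        flow += push; total += push * dist[t]
    return flow, total

# 0 = source, 1-2 = loading tracks, 3 = unloading area, 4 = sink.
edges = [[0, 1, 4, 0], [0, 2, 3, 0],
         [1, 3, 4, 2], [2, 3, 3, 1],
         [3, 4, 6, 0]]
print(min_cost_max_flow(5, edges, 0, 4))   # (6, 9)
```

The cheaper track (cost 1) is saturated first; the remainder routes over the cost-2 track, exactly the assignment behaviour the model is used for.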
Govatski, J. A.; da Luz, M. G. E.; Koehler, M.
2015-01-01
We study the geminate pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy which reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how a spatial variation of the energetic disorder may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.
International Nuclear Information System (INIS)
Jain, P.C.
1985-12-01
The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by solar scientists and engineers each time they are needed, often using approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations for these parameters have been made for all latitude values from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables avoid the need for repetitive and approximate calculations and serve as a useful ready reference providing accurate values to solar energy scientists and engineers.
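Both tabulated quantities follow from standard textbook relations. The sketch below uses Cooper's declination approximation and a simple one-term eccentricity factor rather than Spencer's more accurate series employed in the paper, so treat it as an illustrative stand-in.

```python
import math

GSC = 1367.0  # solar constant, W/m^2

def declination(n):
    # Cooper's approximation (degrees) for day-of-year n; the paper uses
    # Spencer's more accurate series, so this is an illustrative stand-in
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))

def day_length(lat_deg, n):
    # maximum possible sunshine duration N = (2/15) * omega_s, in hours
    phi, delta = math.radians(lat_deg), math.radians(declination(n))
    cos_ws = max(-1.0, min(1.0, -math.tan(phi) * math.tan(delta)))  # polar clamp
    return (2.0 / 15.0) * math.degrees(math.acos(cos_ws))

def extraterrestrial_daily(lat_deg, n):
    # daily extraterrestrial irradiation H0 on a horizontal plane, J/m^2
    phi, delta = math.radians(lat_deg), math.radians(declination(n))
    ws = math.acos(max(-1.0, min(1.0, -math.tan(phi) * math.tan(delta))))
    e0 = 1.0 + 0.033 * math.cos(math.radians(360.0 * n / 365.0))  # eccentricity
    return (24.0 * 3600.0 * GSC / math.pi) * e0 * (
        math.cos(phi) * math.cos(delta) * math.sin(ws)
        + ws * math.sin(phi) * math.sin(delta))
```

Averaging these daily values over each month's days reproduces the kind of monthly table the paper provides.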
Trends in Mean Annual Minimum and Maximum Near Surface Temperature in Nairobi City, Kenya
Directory of Open Access Journals (Sweden)
George Lukoye Makokha
2010-01-01
Full Text Available This paper examines the long-term urban modification of mean annual conditions of near surface temperature in Nairobi City. Data from four weather stations situated in Nairobi were collected from the Kenya Meteorological Department for the period from 1966 to 1999 inclusive. The data included mean annual maximum and minimum temperatures and were subjected to a homogeneity test before analysis. Both linear regression and the Mann-Kendall rank test were used to discern the mean annual trends. Results show that the change of temperature over the thirty-four-year study period is higher for minimum temperature than for maximum temperature. The warming trends began earlier and are more significant at the urban stations than at the sub-urban stations, an indication of the spread of urbanisation from the built-up Central Business District (CBD) to the suburbs. The established significant warming trends in minimum temperature, which are likely to reach higher proportions in future, pose serious challenges for climate and urban planning of the city. In particular, the effect of increased minimum temperature on human physiological comfort, building and urban design, wind circulation, and air pollution needs to be incorporated in future urban planning programmes of the city.
Medina-Silva, Renata; de Oliveira, Rafael R.; Pivel, Maria A. G.; Borges, Luiz G. A.; Simão, Taiz L. L.; Pereira, Leandro M.; Trindade, Fernanda J.; Augustin, Adolpho H.; Valdez, Fernanda P.; Eizirik, Eduardo; Utz, Laura R. P.; Groposo, Claudia; Miller, Dennis J.; Viana, Adriano R.; Ketzer, João M. M.; Giongo, Adriana
2018-02-01
Conspicuous physicochemical vertical stratification in the deep sea is one of the main forces driving microbial diversity in the oceans. Oxygen and sunlight availability are key factors promoting microbial diversity throughout the water column. Ocean currents also play a major role in the physicochemical stratification, carrying oxygen down to deeper zones as well as moving deeper water masses up towards shallower depths. Water samples within a 50-km radius of a pockmark location in the southwestern Atlantic Ocean were collected, and the prokaryotic communities from different water depths - chlorophyll maximum, oxygen minimum, and deep-sea bottom (down to 1355 m) - were described. At the phylum level, Proteobacteria were the most frequent at all water depths, Cyanobacteria were statistically more frequent in the chlorophyll maximum zone, while Thaumarchaeota were significantly more abundant in both the oxygen minimum and bottom waters. The most frequent microorganism in the chlorophyll maximum and oxygen minimum zones was a Pelagibacteraceae operational taxonomic unit (OTU). At the bottom, the most abundant genus was the archaeon Nitrosopumilus. Beta diversity analysis of the 16S rRNA gene sequencing data uncovered in this study shows high spatial heterogeneity among water zone communities. Our data make an important contribution to the characterisation of oceanic microbial diversity, as they constitute the first description of prokaryotic communities occurring in different oceanic water zones of the southwestern Atlantic Ocean.
SU-E-T-578: On Definition of Minimum and Maximum Dose for Target Volume
Energy Technology Data Exchange (ETDEWEB)
Gong, Y; Yu, J; Xiao, Y [Thomas Jefferson University Hospital, Philadelphia, PA (United States)
2015-06-15
Purpose: This study aims to investigate the impact of different minimum and maximum dose definitions in radiotherapy treatment plan quality evaluation criteria by using tumor control probability (TCP) models. Methods: Dosimetric criteria used in the RTOG 1308 protocol are used in the investigation. RTOG 1308 is a phase III randomized trial comparing overall survival after photon versus proton chemoradiotherapy for inoperable stage II-IIIB NSCLC. The prescription dose for the planning target volume (PTV) is 70 Gy. The maximum dose (Dmax) should not exceed 84 Gy and the minimum dose (Dmin) should not go below 59.5 Gy in order for the plan to be "per protocol" (satisfactory). A mathematical model that simulates the characteristics of the PTV dose volume histogram (DVH) curve with normalized volume is built. Dmax and Dmin are denoted as the percentage volumes Dη% and D(100-δ)%, with η and δ ranging from 0 to 3.5. The model includes three straight-line sections and goes through four points: D95% = 70 Gy, Dη% = 84 Gy, D(100-δ)% = 59.5 Gy, and D100% = 0 Gy. For each set of η and δ, the TCP value is calculated using the inhomogeneously irradiated tumor logistic model with D50 = 74.5 Gy and γ50 = 3.52. Results: TCP varies within 0.9% for η and δ values between 0 and 1. With η and δ varying between 0 and 2, the TCP change was up to 2.4%. With η and δ variations from 0 to 3.5, a maximum TCP difference of 8.3% is seen. Conclusion: When the volumes defining the maximum and minimum dose varied by more than 2%, significant TCP variations were seen. It is recommended that volumes of less than 2% be used in the definitions of Dmax and Dmin for target dosimetric evaluation criteria. This project was supported by NIH grants U10CA180868, U10CA180822, U24CA180803, U24CA12014 and PA CURE Grant.
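The DVH construction described in the Methods can be sketched as below. The logistic voxel-response form used here is one common parameterisation and is an assumption on our part, since the abstract does not spell out the exact TCP formula; the DVH anchor points and D50/γ50 values are taken from the abstract.

```python
import math

def voxel_tcp(d, d50=74.5, g50=3.52):
    # logistic dose-response for a uniformly irradiated subvolume; this
    # particular form, TCP(D) = 1/(1 + (D50/D)^(4*gamma50)), is an assumption
    return 1.0 / (1.0 + (d50 / d) ** (4.0 * g50))

def plan_tcp(eta, delta, n_bins=1000):
    # piecewise-linear cumulative DVH through the four anchor points of the
    # abstract: D100% = 0 Gy, D(100-delta)% = 59.5 Gy, D95% = 70 Gy, Deta% = 84 Gy
    anchors = [(0.0, 1.0), (59.5, 1.0 - delta / 100.0),
               (70.0, 0.95), (84.0, eta / 100.0)]
    def vol_at(d):
        # volume fraction receiving at least dose d
        for (d0, v0), (d1, v1) in zip(anchors, anchors[1:]):
            if d0 <= d <= d1:
                return v0 + (v1 - v0) * (d - d0) / (d1 - d0)
        return 0.0
    logtcp = 0.0
    for i in range(n_bins):
        d_lo = 84.0 * i / n_bins
        d_hi = 84.0 * (i + 1) / n_bins
        frac = vol_at(d_lo) - vol_at(d_hi)   # volume in this dose bin
        if frac > 0.0:
            logtcp += frac * math.log(voxel_tcp(0.5 * (d_lo + d_hi)))
    if eta > 0.0:  # the eta% of the volume sitting exactly at 84 Gy
        logtcp += (eta / 100.0) * math.log(voxel_tcp(84.0))
    return math.exp(logtcp)
```

In this toy version, enlarging δ (more underdosed volume) lowers the plan TCP, mirroring the direction of the abstract's sensitivity finding, though the magnitudes need not match the paper's.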
International Nuclear Information System (INIS)
Jain, P.C.
1984-01-01
The monthly average daily values of the extraterrestrial irradiation on a horizontal surface (H0) and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by scientists each time they are needed, often using approximate short-cut methods. Computations for these values have been made once and for all for latitude values from 60 deg. N to 60 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables should avoid the need for repetitive and approximate calculations and serve as a useful ready reference for solar energy scientists and engineers. (author)
The ancient Egyptian civilization: maximum and minimum in coincidence with solar activity
Shaltout, M.
It is proved from the last 22 years of observations of the total solar irradiance (TSI) from space by artificial satellites that TSI shows negative correlation with solar activity (sunspots, flares, and 10.7 cm radio emissions) from day to day, but shows positive correlation with the same activity from year to year (on the basis of the annual average of each of them). Also, the solar constant estimated from ground-station observations of beam solar radiation during the 20th century shows coincidence with the phases of the 11-year cycles. It is known from sunspot observations (250 years), and from C14 analysis, that there are other long-term cycles of solar activity longer than the 11-year cycle. The variability of the total solar irradiance affects the climate and the Nile flooding, and there are periodicities in the Nile flooding similar to those of solar activity, from the analysis of about 1300 years of Nile level observations at Cairo. The secular variations of the Nile levels, regularly measured from the 7th to the 15th century A.D., clearly correlate with the solar variations, which suggests evidence for solar influence on climatic changes in the East African tropics. The civilization of ancient Egypt was highly correlated with the Nile flooding, for the river Nile was, and still is, the source of life in the Valley and Delta inside a highly arid desert area. The study depends on long-term historical data for carbon-14 (more than five thousand years) and a chronological scanning of all the elements of the ancient Egyptian civilization, from the first dynasty to the twenty-sixth dynasty. The result shows coincidence between the ancient Egyptian civilization and solar activity. For example, the period of pyramid building, which is one of the brilliant periods, corresponds to maximum solar activity, whereas the periods of occupation of Egypt by foreign peoples correspond to minimum solar activity. The decline
Directory of Open Access Journals (Sweden)
Md. Sanaul H. Mondal
2017-03-01
Full Text Available Bangladesh shares a common border with India in the west, north, and east and with Myanmar in the southeast. These borders cut across 57 rivers that discharge through Bangladesh into the Bay of Bengal in the south. The upstream courses of these rivers traverse India, China, Nepal, and Bhutan. Transboundary flows are important sources of water resources in Bangladesh. Among the 57 transboundary rivers, the Teesta is the fourth major river in Bangladesh after the Ganges, the Brahmaputra, and the Meghna, and it occupies about 2071 km2 within Bangladesh. The Teesta River floodplain in Bangladesh accounts for 14% of the total cropped area and 9.15 million people of the country. The objective of this study was to investigate trends in both maximum and minimum water flow at the Kaunia and Dalia stations on the Teesta River and the coping strategies developed by the communities to adjust to uncertain flood situations. The flow characteristics of the Teesta were analysed by calculating monthly maximum and minimum water levels and discharges from 1985 to 2006. Discharge of the Teesta over the last 22 years has been decreasing. Extreme low-flow conditions were likely to occur more frequently after the implementation of the Gozoldoba Barrage by India. However, a very sharp decrease in peak flows was also observed, albeit with unexpectedly high discharges in 1988, 1989, 1991, 1997, 1999, and 2004, mostly between April and October. The onrush of water causes frequent flash floods, whereas decreasing flow leaves the areas dependent on the Teesta vulnerable to droughts. Both of these extreme situations had a negative impact on the lives and livelihoods of people dependent on the Teesta. Over the years, people have developed several risk mitigation strategies to adjust to both natural and anthropogenic flood situations. This article proposes the concept of 'MAXIN' (maximum and minimum flows) for river water justice for riparian land.
EXTREME MAXIMUM AND MINIMUM AIR TEMPERATURE IN MEDITERRANEAN COASTS IN TURKEY
Directory of Open Access Journals (Sweden)
Barbaros Gönençgil
2016-01-01
Full Text Available In this study, we determined extreme maximum and minimum temperatures in both summer and winter seasons at stations in the Mediterranean coastal areas of Turkey. The data of 24 meteorological stations for the daily maximum and minimum temperatures of the period 1970-2010 were used. From this database, a set of four extreme temperature indices was applied: warm (TX90) and cold (TN10) days, and warm spell (WSDI) and cold spell duration (CSDI) indices. The threshold values were calculated for each station to determine the temperatures that were above and below the seasonal norms in winter and summer. The TX90 index displays a positive, statistically significant trend, while TN10 displays a negative, nonsignificant trend. The occurrence of warm spells shows a statistically significant increasing trend, while cold spells show a significantly decreasing trend over the Mediterranean coastline of Turkey.
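Percentile-based indices like TX90 and TN10 reduce to counting exceedances of a baseline-percentile threshold. The helper below uses a simple linear-interpolation percentile, which may differ from the exact convention (e.g. bootstrapped calendar-day percentiles) used in the study, so it is only a sketch of the idea.

```python
def percentile(values, p):
    # linear-interpolation percentile (one of several common conventions)
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

def warm_days_tx90(tmax_series, baseline):
    # TX90: number of days with Tmax above the baseline 90th percentile
    thr = percentile(baseline, 90)
    return sum(1 for t in tmax_series if t > thr)

def cold_days_tn10(tmin_series, baseline):
    # TN10: number of days with Tmin below the baseline 10th percentile
    thr = percentile(baseline, 10)
    return sum(1 for t in tmin_series if t < thr)
```

Counting these exceedances year by year and regressing the counts on time yields the trends the study tests with Mann-Kendall-type statistics.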
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
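The comparison can be reproduced in miniature: for a Gaussian predictive distribution the CRPS has a well-known closed form, and for a correctly specified model the minimum-CRPS and maximum-likelihood estimates of the mean should nearly coincide. The coarse grid search below is purely illustrative, not the optimizer used in the paper.

```python
import math, random

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def crps_gaussian(y, mu, sigma):
    # closed-form CRPS of a Gaussian predictive distribution at observation y
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm_cdf(z) - 1.0)
                    + 2.0 * norm_pdf(z) - 1.0 / math.sqrt(math.pi))

def mean_crps(obs, mu, sigma):
    return sum(crps_gaussian(y, mu, sigma) for y in obs) / len(obs)

def neg_log_lik(obs, mu, sigma):
    # Gaussian negative log-likelihood up to an additive constant
    return sum(0.5 * ((y - mu) / sigma) ** 2 + math.log(sigma) for y in obs)

random.seed(1)
obs = [random.gauss(10.0, 2.0) for _ in range(2000)]
grid = [8.0 + 0.05 * i for i in range(81)]   # candidate means, 8.0 .. 12.0
mu_crps = min(grid, key=lambda m: mean_crps(obs, m, 2.0))
mu_ml = min(grid, key=lambda m: neg_log_lik(obs, m, 2.0))
```

With a correctly specified Gaussian, both criteria locate essentially the same mean; the interesting discrepancies arise only when the distributional assumption is wrong, which is the paper's point.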
On the maximum and minimum of two modified Gamma-Gamma variates with applications
Al-Quwaiee, Hessa
2014-04-01
In this work, we derive the statistical characteristics of the maximum and the minimum of two modified Gamma-Gamma variates in closed form in terms of Meijer's G-function and the extended generalized bivariate Meijer's G-function. We then rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity scheme undergoing independent but not necessarily identically distributed Gamma-Gamma fading under the impact of pointing errors and (ii) a dual-hop free-space optical relay transmission system. Computer-based Monte-Carlo simulations verify our new analytical results.
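Although the paper works with closed-form Meijer G-function expressions, the underlying distributional fact is easy to sanity-check by Monte Carlo: a Gamma-Gamma variate is the product of two unit-mean Gamma variates, and for i.i.d. branches the CDF of the maximum is the square of the single-branch CDF. The parameters below are arbitrary, not taken from the paper.

```python
import random

def gamma_gamma(alpha, beta):
    # product of two independent unit-mean Gamma variates (turbulence model)
    return (random.gammavariate(alpha, 1.0 / alpha)
            * random.gammavariate(beta, 1.0 / beta))

random.seed(7)
n = 20000
pairs = [(gamma_gamma(4.0, 2.0), gamma_gamma(4.0, 2.0)) for _ in range(n)]
p_max = sum(1 for a, b in pairs if max(a, b) <= 1.0) / n   # CDF of the maximum
p_one = sum(1 for a, b in pairs if a <= 1.0) / n           # single-branch CDF
```

The same simulation layout, with non-identical branch parameters and a pointing-error factor added, is how one would verify the paper's selection-combining results numerically.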
Verification of surface minimum, mean, and maximum temperature forecasts in Calabria for summer 2008
Directory of Open Access Journals (Sweden)
S. Federico
2011-02-01
Full Text Available Since 2005, one-hour temperature forecasts for the Calabria region (southern Italy), modelled by the Regional Atmospheric Modeling System (RAMS), have been issued by CRATI/ISAC-CNR (Consortium for Research and Application of Innovative Technologies/Institute for Atmospheric and Climate Sciences of the National Research Council) and are available online at http://meteo.crati.it/previsioni.html (every six hours). Beginning in June 2008, the horizontal resolution was enhanced to 2.5 km. In the present paper, forecast skill and accuracy are evaluated out to four days for the 2008 summer season (from 6 June to 30 September, 112 runs). For this purpose, gridded high-horizontal-resolution forecasts of minimum, mean, and maximum temperatures are evaluated against gridded analyses at the same horizontal resolution (2.5 km).
Gridded analysis is based on Optimal Interpolation (OI) and uses the RAMS first-day temperature forecast as the background field. Observations from 87 thermometers are used in the analysis system. The analysis error is introduced to quantify the effect of using the RAMS first-day forecast as the background field in the OI analyses and to define the forecast error unambiguously, while spatial interpolation (SI) analysis is considered to quantify the statistics' sensitivity to the verifying analysis and to show the quality of the OI analyses for different background fields.
Two case studies, the first with a low (less than the 10th percentile) root mean square error (RMSE) in the OI analysis, the second with the largest RMSE of the whole period in the OI analysis, are discussed to show the forecast performance under two different conditions. Cumulative statistics are used to quantify forecast errors out to four days. Results show that maximum temperature has the largest RMSE, while minimum and mean temperature errors are similar. For the period considered
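The cumulative statistics mentioned above reduce, at each verification, to simple error scores over matched forecast/analysis grids; a minimal RMSE helper of the kind involved:

```python
import math

def rmse(forecast, analysis):
    # grid-point root mean square error between a forecast field and the
    # verifying analysis, both flattened to equal-length sequences
    assert len(forecast) == len(analysis)
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, analysis))
                     / len(forecast))
```

Applied separately to the minimum, mean, and maximum temperature fields at each lead time, this yields the per-day error curves the study compares.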
National Aeronautics and Space Administration — PROBABILITY CALIBRATION BY THE MINIMUM AND MAXIMUM PROBABILITY SCORES IN ONE-CLASS BAYES LEARNING FOR ANOMALY DETECTION GUICHONG LI, NATHALIE JAPKOWICZ, IAN HOFFMAN,...
2013-02-12
... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...
Directory of Open Access Journals (Sweden)
Zhang Zhang
2009-06-01
Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
THE 2003-2007 MINIMUM, MAXIMUM AND MEDIUM DISCHARGE ANALYSIS OF THE LATORIŢA-LOTRU WATER SYSTEM
Directory of Open Access Journals (Sweden)
Simona-Elena MIHĂESCU
2010-06-01
Full Text Available The 2003-2007 minimum, maximum and medium discharge analysis of the Latoriţa-Lotru water system. From a functional point of view, the Lotru and Latoriţa make up a water system at the junction of two water flows of high hydro-energetic potential. The Lotru springs from the Parâng Massif at a source elevation of over 1900 m, with an outfall elevation of 298 m, which makes for an altitude difference of 1602 m; it is an affluent of the Olt River, has a course length of 76 km and a minimum discharge of 20 m3/s. Its catchment basin covers 1024 km2. The Latoriţa springs from the Latoriţa Mountains; it is a small river with an average discharge of 2.7 m3/s and is an affluent of the Lotru. Together, the two make up a system of high hydro-energetic potential, exploited through the system of lakes which serve the Ciunget Hydro-Electric Power Plant. Galbenu and Petrimanu are two reservoirs built on the Latoriţa River, while on the Lotru there are the Vidra, Balindru, Mălaia and Brădişor reservoirs. The discharge analysis of these rivers is very important for good risk management, especially concerning floods and high-level waters, even in the case of artificial water flows such as the Latoriţa-Lotru water system.
Laboratory test on maximum and minimum void ratio of tropical sand matrix soils
Othman, B. A.; Marto, A.
2018-04-01
Sand is generally known as a loose granular material which has a grain size finer than gravel and coarser than silt, and which can be very angular to well rounded in shape. The presence of various amounts of fines, which also influence the loosest and densest states of sand in natural conditions, is well known to contribute to the deformation and loss of shear strength of soil. This paper presents the effect of a range of fines contents on the minimum void ratio e_min and maximum void ratio e_max of sand matrix soils. Laboratory tests to determine e_min and e_max of sand matrix soils were conducted using a non-standard method introduced by previous researchers. Clean sand was obtained from a natural mining site in Johor, Malaysia. A set of three different sizes of sand (fine sand, medium sand, and coarse sand) was mixed with 0% to 40% by weight of low-plasticity fines (kaolin). Results showed that, in general, e_min and e_max decreased with increasing fines content, reaching minimal values at fines contents between 0% and 30%, and then increased thereafter.
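The void ratios in question come from routine phase relationships; a minimal sketch, with an assumed specific gravity and dry density that are illustrative values rather than measurements from the paper:

```python
def void_ratio(gs, rho_d, rho_w=1000.0):
    # phase relation e = Gs * rho_w / rho_d - 1 (densities in kg/m^3)
    return gs * rho_w / rho_d - 1.0

def relative_density(e, e_max, e_min):
    # Dr locates the current packing between the loosest (e_max) and
    # densest (e_min) states measured in the laboratory
    return (e_max - e) / (e_max - e_min)
```

With e_min and e_max measured for each sand-fines mixture, any field value of e maps to a relative density between the loosest and densest states.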
A novel minimum cost maximum power algorithm for future smart home energy management
Directory of Open Access Journals (Sweden)
A. Singaravelan
2017-11-01
Full Text Available With the latest development of smart grid technology, the energy management system can be efficiently implemented at consumer premises. In this paper, an energy management system with wireless communication and a smart meter is designed for scheduling electric home appliances efficiently, with the aim of reducing cost and peak demand. For an efficient scheduling scheme, the appliances are classified into two types: uninterruptible and interruptible appliances. The problem formulation was constructed based on practical constraints that make the proposed algorithm cope with the real-time situation. The formulated problem was identified as a Mixed Integer Linear Programming (MILP) problem, so it was solved by a step-wise approach. This paper proposes a novel Minimum Cost Maximum Power (MCMP) algorithm to solve the formulated problem. The proposed algorithm was simulated with input data available in the existing method. To validate the proposed MCMP algorithm, results were compared with the existing method. The comparison proves that the proposed algorithm efficiently reduces consumer electricity consumption cost and peak demand to an optimum level, with 100% task completion and without sacrificing consumer comfort.
A novel minimum cost maximum power algorithm for future smart home energy management.
Singaravelan, A; Kowsalya, M
2017-11-01
With the latest development of smart grid technology, the energy management system can be efficiently implemented at consumer premises. In this paper, an energy management system with wireless communication and a smart meter is designed for scheduling electric home appliances efficiently, with the aim of reducing cost and peak demand. For an efficient scheduling scheme, the appliances are classified into two types: uninterruptible and interruptible appliances. The problem formulation was constructed based on practical constraints that make the proposed algorithm cope with the real-time situation. The formulated problem was identified as a Mixed Integer Linear Programming (MILP) problem, so it was solved by a step-wise approach. This paper proposes a novel Minimum Cost Maximum Power (MCMP) algorithm to solve the formulated problem. The proposed algorithm was simulated with input data available in the existing method. To validate the proposed MCMP algorithm, results were compared with the existing method. The comparison proves that the proposed algorithm efficiently reduces consumer electricity consumption cost and peak demand to an optimum level, with 100% task completion and without sacrificing consumer comfort.
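The abstract does not give the MCMP algorithm's internals, so the toy below is not the authors' method: it brute-forces start times for uninterruptible appliances over a six-slot day, minimizing time-of-use cost subject to a peak-demand cap, which is the same objective the MILP formulation targets. All prices and appliance loads are invented.

```python
from itertools import product

PRICES = [10, 10, 20, 30, 20, 10]   # toy time-of-use tariff (6 hourly slots)
PEAK_CAP = 3.0                      # allowed aggregate load (kW) in any slot
APPLIANCES = [(2.0, 2), (1.5, 1), (1.0, 3)]  # (power kW, duration in slots)

def schedule_cost(starts):
    # cost of running each uninterruptible appliance from its start slot,
    # or None if the peak-demand cap is violated in some slot
    load, cost = [0.0] * len(PRICES), 0.0
    for (p, dur), s in zip(APPLIANCES, starts):
        for t in range(s, s + dur):
            load[t] += p
            cost += p * PRICES[t]
    return None if max(load) > PEAK_CAP else cost

def best_schedule():
    # exhaustive search stands in for the MILP solve on this toy instance
    ranges = [range(len(PRICES) - dur + 1) for _, dur in APPLIANCES]
    return min((c, s) for s in product(*ranges)
               for c in [schedule_cost(s)] if c is not None)

cost, starts = best_schedule()
```

On this instance the peak cap forces the 1.5 kW appliance out of the cheap morning slots and into the cheap evening slot, illustrating the cost/peak-demand trade-off the paper optimizes.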
Godolphin, E. J.
1980-01-01
It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.
The effects of disjunct sampling and averaging time on maximum mean wind speeds
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Mann, J.
2006-01-01
Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours; we call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time
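The qualitative effect is easy to demonstrate on synthetic data: subsampling the 10-min means every few hours (disjunct sampling) can only miss the true annual maximum, and averaging over longer blocks lowers it further. The AR(1) wind series below is invented for illustration, not drawn from the paper's data.

```python
import random

random.seed(3)
n = 52560                      # 365 days x 144 ten-minute periods
x, series = 8.0, []
for _ in range(n):             # toy AR(1) surrogate for 10-min mean wind speed
    x = 8.0 + 0.97 * (x - 8.0) + random.gauss(0.0, 0.8)
    series.append(x)

true_max = max(series)            # annual maximum of consecutive 10-min means
disjunct_max = max(series[::18])  # keep one value every 3 h (disjunct sampling)
hourly_max = max(sum(series[i:i + 6]) / 6.0
                 for i in range(0, n, 6))  # maxima of 1-h block averages
```

Feeding such reduced maxima into an extreme-value fit biases the estimated 50-year wind low, which is the bias the paper quantifies and corrects.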
Martucci, M.; Munini, R.; Boezio, M.; Di Felice, V.; Adriani, O.; Barbarino, G. C.; Bazilevskaya, G. A.; Bellotti, R.; Bongi, M.; Bonvicini, V.; Bottai, S.; Bruno, A.; Cafagna, F.; Campana, D.; Carlson, P.; Casolino, M.; Castellini, G.; De Santis, C.; Galper, A. M.; Karelin, A. V.; Koldashov, S. V.; Koldobskiy, S.; Krutkov, S. Y.; Kvashnin, A. N.; Leonov, A.; Malakhov, V.; Marcelli, L.; Marcelli, N.; Mayorov, A. G.; Menn, W.; Mergè, M.; Mikhailov, V. V.; Mocchiutti, E.; Monaco, A.; Mori, N.; Osteria, G.; Panico, B.; Papini, P.; Pearce, M.; Picozza, P.; Ricci, M.; Ricciarini, S. B.; Simon, M.; Sparvoli, R.; Spillantini, P.; Stozhkov, Y. I.; Vacchi, A.; Vannuccini, E.; Vasilyev, G.; Voronov, S. A.; Yurkin, Y. T.; Zampa, G.; Zampa, N.; Potgieter, M. S.; Raath, J. L.
2018-02-01
Precise measurements of the time-dependent intensity of low-energy galactic cosmic rays (GCRs) over different solar activity periods, i.e., from minimum to maximum, are needed to achieve a comprehensive understanding of such physical phenomena. The minimum phase between solar cycles 23 and 24 was peculiarly long, extending up to the beginning of 2010 and followed by the maximum phase, reached during early 2014. In this Letter, we present proton differential spectra measured from 2010 January to 2014 February by the PAMELA experiment. For the first time the GCR proton intensity was studied over a wide energy range (0.08–50 GeV) by a single apparatus from a minimum to a maximum period of solar activity. The large statistics allowed the time variation to be investigated on a nearly monthly basis. Data were compared and interpreted in the context of a state-of-the-art three-dimensional model describing GCR propagation through the heliosphere.
Scale dependence of the average potential around the maximum in Φ⁴ theories
International Nuclear Information System (INIS)
Tetradis, N.; Wetterich, C.
1992-04-01
The average potential describes the physics at a length scale k⁻¹ by averaging out the degrees of freedom with characteristic momenta larger than k. The dependence on k can be described by differential evolution equations. We solve these equations for the nonconvex part of the potential around the origin in φ⁴ theories, in the phase with spontaneous symmetry breaking. The average potential is real and approaches the convex effective potential in the limit k → 0. Our calculation is relevant for processes for which the shape of the potential at a given scale is important, such as tunneling phenomena or inflation. (orig.)
OPTIMIZED FUEL INJECTOR DESIGN FOR MAXIMUM IN-FURNACE NOx REDUCTION AND MINIMUM UNBURNED CARBON
Energy Technology Data Exchange (ETDEWEB)
SAROFIM, A F; LISAUSKAS, R; RILEY, D; EDDINGS, E G; BROUWER, J; KLEWICKI, J P; DAVIS, K A; BOCKELIE, M J; HEAP, M P; PERSHING, D
1998-01-01
Reaction Engineering International (REI) has established a project team of experts to develop a technology for combustion systems which will minimize NOx emissions and minimize carbon in the fly ash. This much-needed technology will allow users to meet environmental compliance and produce a saleable by-product. This study is concerned with the NOx control technology of choice for pulverized-coal-fired boilers, "in-furnace NOx control," which includes: staged low-NOx burners, reburning, selective non-catalytic reduction (SNCR), and hybrid approaches (e.g., reburning with SNCR). The program has two primary objectives: 1) to improve the performance of "in-furnace" NOx control processes; and 2) to devise new, or improve existing, approaches for maximum "in-furnace" NOx control and minimum unburned carbon. The program involves: 1) fundamental studies at laboratory and bench scale to define NO reduction mechanisms in flames and reburning jets; 2) laboratory experiments and computer modeling to improve our two-phase mixing predictive capability; 3) evaluation of commercial low-NOx burner fuel injectors to develop improved designs; and 4) demonstration of coal injectors for reburning and low-NOx burners at commercial scale. The specific objectives of the two-phase program are to: 1) conduct research to better understand the interaction of heterogeneous chemistry and two-phase mixing on NO reduction processes in pulverized coal combustion; 2) improve our ability to predict combusting coal jets by verifying two-phase mixing models under conditions that simulate the near field of low-NOx burners; 3) determine the limits on NO control by in-furnace NOx control technologies as a function of furnace design and coal type; 5) develop and demonstrate improved coal injector designs for commercial low-NOx burners and coal reburning systems; 6) modify the char burnout model in REI's coal
2012-09-13
46, 1989. [75] S. Melkote and M.S. Daskin . An integrated model of facility location and transportation network design. Transportation Research Part A ... a work of the U.S. Government and is not subject to copyright protection in the United States. AFIT/DS/ENS/12-09 THE AVERAGE NETWORK FLOW PROBLEM...focused thinking (VFT) are used sparingly, as is the case across the entirety of the supply chain literature. We provide a VFT tutorial for supply chain
Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon
2015-03-30
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
Directory of Open Access Journals (Sweden)
Sung Woo Park
2015-03-01
Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo
2018-05-14
Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data into molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains
Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.
2018-01-01
We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
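Both quantities in the abstract above are directly computable for small chains. The sketch below (not the authors' code) computes the KSE of a Markov chain, h = -Σ_i π_i Σ_j P_ij log P_ij, and a brute-force mixing time; the 1/4 total-variation threshold and the two-state example chains are conventional illustrative choices, not taken from the paper:

```python
import numpy as np

def ks_entropy(P):
    """Kolmogorov-Sinai entropy of an ergodic Markov chain:
    h = -sum_i pi_i * sum_j P_ij * log(P_ij)."""
    # stationary distribution = left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return -float(np.sum(pi[:, None] * P * logP))

def mixing_time(P, eps=0.25):
    """Smallest t at which the worst-case total-variation distance
    to stationarity drops below eps (brute-force iteration)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    Pt = np.eye(P.shape[0])
    for t in range(1, 10_000):
        Pt = Pt @ P
        tv = 0.5 * np.abs(Pt - pi).sum(axis=1).max()
        if tv < eps:
            return t
    return None

# Two-state illustration of the link: the chain with higher KSE mixes faster.
fast = np.array([[0.5, 0.5], [0.5, 0.5]])
slow = np.array([[0.9, 0.1], [0.1, 0.9]])
print(ks_entropy(fast), mixing_time(fast))
print(ks_entropy(slow), mixing_time(slow))
```

The fast chain attains the maximum KSE for two states (log 2) and mixes in one step, while the lazier chain has lower KSE and a longer mixing time, illustrating the direction of the correspondence.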
International Nuclear Information System (INIS)
Ackroyd, R.T.
1982-01-01
Some minimum and maximum variational principles for even-parity neutron transport are reviewed and the corresponding principles for odd-parity transport are derived by a simple method to show why the essential boundary conditions associated with these maximum principles have to be imposed. The method also shows why both the essential and some of the natural boundary conditions associated with these minimum principles have to be imposed. These imposed boundary conditions for trial functions in the variational principles limit the choice of the finite element used to represent trial functions. The reasons for the boundary conditions imposed on the principles for even- and odd-parity transport point the way to a treatment of composite neutron transport, for which completely boundary-free maximum and minimum principles are derived from a functional identity. In general a trial function is used for each parity in the composite neutron transport, but this can be reduced to one without any boundary conditions having to be imposed. (author)
International Nuclear Information System (INIS)
Bispo, Heleno; Silva, Nilton; Brito, Romildo; Manzi, João
2013-01-01
Highlights: • The minimum entropy generation (MEG) principle improved the reaction performance. • The equivalence between the MEG rate and the maximum conversion has been analyzed. • Temperature and residence time are used to establish the domain of validity of MEG. • Satisfying the temperature and residence time relationship results in optimal performance. - Abstract: The analysis of the equivalence between the minimum entropy generation (MEG) rate and the maximum conversion rate for a reactive system is the main purpose of this paper. While being used as a strategy of optimization, minimum entropy production was applied to the production of propylene glycol in a continuous stirred-tank reactor (CSTR) with a view to determining the best operating conditions, and under such conditions a high conversion rate was found. The effects of the key variables and restrictions on the validity domain of MEG were investigated, which raises issues that are included within a broad discussion. The results from simulations indicate that, from the chemical reaction standpoint, a maximum conversion rate can be considered equivalent to MEG. Such a result can be clearly explained by examining the classical Maxwell–Boltzmann distribution, where the molecules of the reactive system under the condition of the MEG rate present a distribution of energy with reduced dispersion, resulting in a better quality of collision between molecules and a higher conversion rate.
Directory of Open Access Journals (Sweden)
Roham Vali, Mohammad Nasrollahzadeh Masouleh* and Siamak Mashhady Rafie1
2013-04-01
Full Text Available There are no data on the effect of maximum and minimum doses of furosemide on the heart's work performance and the amount of fractional shortening (FS) in echocardiography of the rabbit. This study was designed to investigate the possibility of such an effect. Twenty-four healthy female New Zealand white rabbits were divided into four equal groups. Maximum and minimum doses of furosemide were used for the first and second groups, and the injection solution for the third and fourth groups was sodium chloride 0.9% at the same calculated volumes as the furosemide given to the first two groups, respectively. The left ventricular FS at statutory times (0, 2, 5, 15, and 30 minutes) was determined by echocardiography. The mean ± SD, maximum, and minimum FS values in all groups before injection and at the statutory times were calculated. Statistical analysis revealed no significant differences between the mean FS values. The results of this study showed that furosemide can be used as a diuretic agent for preparing a window approach in abdominal ultrasonography examination with no harmful effect on cardiac function.
ANALYTICAL ESTIMATION OF MINIMUM AND MAXIMUM TIME EXPENDITURES OF PASSENGERS AT AN URBAN ROUTE STOP
Directory of Open Access Journals (Sweden)
Gorbachov, P.
2013-01-01
Full Text Available This paper deals with the problem of defining the average time spent by passengers waiting for transport vehicles at urban stops, and presents the results of analytical modelling of this value for the case where the traffic schedule is unknown to the passengers, under two options of vehicle traffic management on the given route.
Minimum and Maximum Potential Contributions to Future Sea Level Rise from Polar Ice Sheets
Deconto, R. M.; Pollard, D.
2017-12-01
New climate and ice-sheet modeling, calibrated to past changes in sea-level, is painting a stark picture of the future fate of the great polar ice sheets if greenhouse gas emissions continue unabated. This is especially true for Antarctica, where a substantial fraction of the ice sheet rests on bedrock more than 500 meters below sea level. Here, we explore the sensitivity of the polar ice sheets to a warming atmosphere and ocean under a range of future greenhouse gas emissions scenarios. The ice sheet-climate-ocean model used here considers time-evolving changes in surface mass balance and sub-ice oceanic melting, ice deformation, grounding line retreat on reverse-sloped bedrock (Marine Ice Sheet Instability), and newly added processes including hydrofracturing of ice shelves in response to surface meltwater and rain, and structural collapse of thick, marine-terminating ice margins with tall ice-cliff faces (Marine Ice Cliff Instability). The simulations improve on previous work by using 1) improved atmospheric forcing from a Regional Climate Model and 2) a much wider range of model physical parameters within the bounds of modern observations of ice dynamical processes (particularly calving rates) and paleo constraints on past ice-sheet response to warming. Approaches to more precisely define the climatic thresholds capable of triggering rapid and potentially irreversible ice-sheet retreat are also discussed, as is the potential for aggressive mitigation strategies like those discussed at the 2015 Paris Climate Conference (COP21) to substantially reduce the risk of extreme sea-level rise. These results, including physics that consider both ice deformation (creep) and calving (mechanical failure of marine-terminating ice), expand on previously estimated limits of maximum rates of future sea level rise based solely on kinematic constraints of glacier flow. At the high end, the new results show the potential for more than 2 m of global mean sea level rise by 2100.
Directory of Open Access Journals (Sweden)
Syed S. Ghani
2017-12-01
Full Text Available The current work observes the trends in Lautoka's temperature and relative humidity during the period 2003–2013, analyzed using recently updated data obtained from the Fiji Meteorological Services (FMS). Four elements are investigated: mean maximum temperature, mean minimum temperature, diurnal temperature range (DTR), and mean relative humidity. From 2003 to 2013, the annual mean temperature increased by between 0.02 and 0.08 °C. Warming is greater in the minimum temperature than in the maximum temperature, resulting in a decrease of the diurnal temperature range. The statistically significant increase was mostly seen during the summer months of December and January. Mean relative humidity has also increased from 3% to 8%. The bases of abnormal climate conditions are also studied; these were defined with temperature or humidity anomalies in their appropriate time sequences. They corroborated the observed findings and showed that the climate has gradually become damper and hotter throughout Lautoka during this period. While we are only at an initial phase of the probable trends in temperature change, ecological reactions to recent climate change are already evidently noticeable. It is therefore proposed that it would be easier to identify climate alteration in a small island nation like Fiji.
Barbarossa, Valerio; Huijbregts, Mark A. J.; Beusen, Arthur H. W.; Beck, Hylke E.; King, Henry; Schipper, Aafke M.
2018-03-01
Streamflow data are highly relevant for a variety of socio-economic as well as ecological analyses and applications, but a high-resolution global streamflow dataset has been lacking. We created FLO1K, a consistent streamflow dataset at a resolution of 30 arc seconds (~1 km) with global coverage. FLO1K comprises mean, maximum and minimum annual flow for each year in the period 1960-2015, provided as spatially continuous gridded layers. We mapped streamflow by means of artificial neural network (ANN) regression. An ensemble of ANNs was fitted on monthly streamflow observations from 6600 monitoring stations worldwide; minimum and maximum annual flows represent the lowest and highest mean monthly flows for a given year. As covariates we used the upstream-catchment physiography (area, surface slope, elevation) and year-specific climatic variables (precipitation, temperature, potential evapotranspiration, aridity index and seasonality indices). Confronting the maps with independent data indicated good agreement (R² values up to 91%). FLO1K delivers essential data for freshwater ecology and water resources analyses at the global scale and yet at high spatial resolution.
International Nuclear Information System (INIS)
Nukiyama, S.
1991-01-01
The quantity of heat transmitted from a metal surface to boiling water increases as the temperature difference ΔT is increased, but after ΔT has reached a certain limit, the quantity Q decreases with further increase in ΔT. This turning point is the maximum value of heat transmitted, and its existence was actually observed in the experiment. Under atmospheric pressure, the ΔT corresponding to the maximum value of heat transfer for water at 100 °C falls between 20 and 40 °C, and Q is between 1,080,000 and 1,800,000 kcal/m²h (i.e., between 2,000 and 3,000 kg/m²h if expressed as a constant evaporation rate at 100 °C); this figure is larger than the maximum value of heat transfer as previously considered. In this paper the minimum value of heat transfer was obtained, and the burn-out effect is discussed for the high-temperature region of the Q-ΔT curve.
Directory of Open Access Journals (Sweden)
S. Vignesh
2017-04-01
Full Text Available Flow-based erosion-corrosion problems are very common in fluid handling equipment such as propellers, impellers, and pumps in warships and submarines. Though there are many coating materials available to combat erosion-corrosion damage in the above components, iron-based amorphous coatings are considered to be more effective for combating erosion-corrosion problems. The high velocity oxy-fuel (HVOF) spray process is considered a better process for coating iron-based amorphous powders. In this investigation, an iron-based amorphous metallic coating was developed on a 316 stainless steel substrate using the HVOF spray technique. Empirical relationships were developed to predict the porosity and micro-hardness of the iron-based amorphous coating, incorporating HVOF spray parameters such as oxygen flow rate, fuel flow rate, powder feed rate, carrier gas flow rate, and spray distance. Response surface methodology (RSM) was used to identify the optimal HVOF spray parameters to attain a coating with minimum porosity and maximum hardness.
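Response surface methodology of the kind described above amounts to fitting a quadratic model to the measured responses by least squares and then searching the fitted surface for the optimum. A minimal sketch follows; the two coded factors, the underlying coefficients, and the porosity data are purely illustrative stand-ins, not the study's five-factor design or measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: porosity (%) vs two coded HVOF factors in [-1, 1]
# (think oxygen flow rate and spray distance, scaled; illustrative only).
X = rng.uniform(-1, 1, size=(30, 2))
true_surface = lambda x: 3.0 + 0.8 * x[:, 0] ** 2 + 1.2 * x[:, 1] ** 2 - 0.4 * x[:, 0]
y = true_surface(X) + rng.normal(0, 0.05, 30)

def quad_design(X):
    """Full second-order (RSM) design matrix: intercept, linear,
    interaction, and pure quadratic terms."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Least-squares fit of the quadratic response surface
beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)

# Grid search the fitted surface for the minimum-porosity settings
g = np.linspace(-1, 1, 201)
G = np.array([[a, b] for a in g for b in g])
pred = quad_design(G) @ beta
best = G[np.argmin(pred)]
print("estimated optimum (coded units):", best)
```

With this synthetic surface the analytic minimum sits at roughly (0.25, 0) in coded units, and the fitted surface recovers it closely; in practice one would also inspect lack-of-fit and confirm the optimum experimentally.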
2010-04-01
... assisted with NAHASDA grant amounts? 1000.124 Section 1000.124 Housing and Urban Development Regulations... Activities § 1000.124 What maximum and minimum rent or homebuyer payment can a recipient charge a low-income...
Energy Technology Data Exchange (ETDEWEB)
Shirai, Kiyonori [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Nishiyama, Kinji, E-mail: sirai-ki@mc.pref.osaka.jp [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Katsuda, Toshizo [Department of Radiology, National Cerebral and Cardiovascular Center, Osaka (Japan); Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan)
2014-01-01
Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging 1.7 mm to the superior side and 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error.
International Nuclear Information System (INIS)
Shirai, Kiyonori; Nishiyama, Kinji; Katsuda, Toshizo; Teshima, Teruki; Ueda, Yoshihiro; Miyazaki, Masayoshi; Tsujii, Katsutomo
2014-01-01
Purpose: To determine whether maximum or average intensity projection (MIP or AIP, respectively) reconstructed from 4-dimensional computed tomography (4DCT) is preferred for alignment to cone beam CT (CBCT) images in lung stereotactic body radiation therapy. Methods and Materials: Stationary CT and 4DCT images were acquired with a target phantom at the center of motion and moving along the superior–inferior (SI) direction, respectively. Motion profiles were asymmetrical waveforms with amplitudes of 10, 15, and 20 mm and a 4-second cycle. Stationary CBCT and dynamic CBCT images were acquired in the same manner as stationary CT and 4DCT images. Stationary CBCT was aligned to stationary CT, and the couch position was used as the baseline. Dynamic CBCT was aligned to the MIP and AIP of corresponding amplitudes. Registration error was defined as the SI deviation of the couch position from the baseline. In 16 patients with isolated lung lesions, free-breathing CBCT (FBCBCT) was registered to AIP and MIP (64 sessions in total), and the difference in couch shifts was calculated. Results: In the phantom study, registration errors were within 0.1 mm for AIP and 1.5 to 1.8 mm toward the inferior direction for MIP. In the patient study, the difference in the couch shifts (mean, range) was insignificant in the right-left (0.0 mm, ≤1.0 mm) and anterior–posterior (0.0 mm, ≤2.1 mm) directions. In the SI direction, however, the couch position significantly shifted in the inferior direction after MIP registration compared with after AIP registration (mean, −0.6 mm; ranging 1.7 mm to the superior side and 3.5 mm to the inferior side, P=.02). Conclusions: AIP is recommended as the reference image for registration to FBCBCT when target alignment is performed in the presence of asymmetrical respiratory motion, whereas MIP causes systematic target positioning error
Directory of Open Access Journals (Sweden)
Xin Ma
2015-01-01
Full Text Available The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and a 0.737 Matthews correlation coefficient). The high prediction accuracy and successful prediction performance suggest that our method can be a useful approach to identify RNA-binding proteins from sequence information.
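The mRMR idea used in this and the following abstract can be sketched greedily: at each step, pick the unselected feature maximizing relevance-to-target minus average redundancy with the already-selected set. The sketch below substitutes absolute Pearson correlation for the mutual-information criterion the papers use, purely to stay self-contained; the toy data are illustrative:

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy minimum-redundancy maximum-relevance selection, using
    |Pearson correlation| as a simple stand-in for mutual information
    (the published method is MI-based)."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]          # start with most relevant
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            # MID criterion: relevance minus mean redundancy with selected set
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Toy check: y = x0 + x1; feature 1 duplicates feature 0, feature 3 is noise.
rng = np.random.default_rng(1)
x0, x1 = rng.normal(size=500), rng.normal(size=500)
y = x0 + x1
X = np.column_stack([x0, x0, x1, rng.normal(size=500)])
print(mrmr_select(X, y, 2))
```

The redundancy penalty is what distinguishes mRMR from plain relevance ranking: the duplicate column is relevant on its own but is skipped in favor of the complementary feature.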
Wang, ShaoPeng; Zhang, Yu-Hang; Lu, Jing; Cui, Weiren; Hu, Jerry; Cai, Yu-Dong
2016-01-01
The development of biochemistry and molecular biology has revealed an increasingly important role of compounds in several biological processes. Like the aptamer-protein interaction, the aptamer-compound interaction attracts increasing attention. However, it is time-consuming to select proper aptamers against compounds using traditional methods, such as exponential enrichment. Thus, there is an urgent need to design effective computational methods for searching for effective aptamers against compounds. This study attempted to extract important features for aptamer-compound interactions using feature selection methods, such as Maximum Relevance Minimum Redundancy, as well as incremental feature selection. Each aptamer-compound pair was represented by properties derived from the aptamer and the compound, including frequencies of single nucleotides and dinucleotides for the aptamer, as well as the constitutional, electrostatic, quantum-chemical, and space conformational descriptors of the compounds. As a result, some important features were obtained. To confirm the importance of the obtained features, we further discussed the associations between them and aptamer-compound interactions. Simultaneously, an optimal prediction model based on the nearest neighbor algorithm was built to identify aptamer-compound interactions, which has the potential to be a useful tool for the identification of novel aptamer-compound interactions. The program is available upon request.
Coplen, T.B.; Hopple, J.A.; Böhlke, J.K.; Peiser, H.S.; Rieder, S.E.; Krouse, H.R.; Rosman, K.J.R.; Ding, T.; Vocke, R.D.; Revesz, K.M.; Lamberty, A.; Taylor, P.; De Bievre, P.
2002-01-01
laboratories comparable. The minimum and maximum concentrations of a selected isotope in naturally occurring terrestrial materials for selected chemical elements reviewed in this report are given below:

Isotope   Minimum mole fraction   Maximum mole fraction
2H        0.000 0255              0.000 1838
7Li       0.9227                  0.9278
11B       0.7961                  0.8107
13C       0.009 629               0.011 466
15N       0.003 462               0.004 210
18O       0.001 875               0.002 218
26Mg      0.1099                  0.1103
30Si      0.030 816               0.031 023
34S       0.0398                  0.0473
37Cl      0.240 77                0.243 56
44Ca      0.020 82                0.020 92
53Cr      0.095 01                0.095 53
56Fe      0.917 42                0.917 60
65Cu      0.3066                  0.3102
205Tl     0.704 72                0.705 06

The numerical values above have uncertainties that depend upon the uncertainties of the determinations of the absolute isotope-abundance variations of reference materials of the elements. Because reference materials used for absolute isotope-abundance measurements have not been included in relative isotope abundance investigations of zinc, selenium, molybdenum, palladium, and tellurium, ranges in isotopic composition are not listed for these elements, although such ranges may be measurable with state-of-the-art mass spectrometry. This report is available at the url: http://pubs.water.usgs.gov/wri014222.
Yoo, Cheolhee; Im, Jungho; Park, Seonyoung; Quackenbush, Lindi J.
2018-03-01
Urban air temperature is considered a significant variable for a variety of urban issues, and analyzing the spatial patterns of air temperature is important for urban planning and management. However, insufficient weather stations limit accurate spatial representation of temperature within a heterogeneous city. This study used a random forest machine learning approach to estimate daily maximum and minimum air temperatures (Tmax and Tmin) for two megacities with different climate characteristics: Los Angeles, USA, and Seoul, South Korea. This study used eight time-series of land surface temperature (LST) data from the Moderate Resolution Imaging Spectroradiometer (MODIS), with seven auxiliary variables: elevation, solar radiation, normalized difference vegetation index, latitude, longitude, aspect, and the percentage of impervious area. We found different relationships between the eight time-series LSTs and Tmax/Tmin for the two cities, and designed eight schemes with different input LST variables. The schemes were evaluated using the coefficient of determination (R²) and root mean square error (RMSE) from 10-fold cross-validation. The best schemes produced R² of 0.850 and 0.777 and RMSE of 1.7 °C and 1.2 °C for Tmax and Tmin in Los Angeles, and R² of 0.728 and 0.767 and RMSE of 1.1 °C and 1.2 °C for Tmax and Tmin in Seoul, respectively. LSTs obtained the day before were crucial for estimating daily urban air temperature. Estimated air temperature patterns showed that Tmax was highly dependent on the geographic factors (e.g., sea breeze, mountains) of the two cities, while Tmin showed marginally distinct temperature differences between built-up and vegetated areas in the two cities.
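The evaluation protocol in the abstract above (10-fold cross-validation scored with R² and RMSE) can be sketched compactly. In this sketch ordinary least squares stands in for the random forest, and the LST/elevation/NDVI covariates are synthetic stand-ins, not the study's data:

```python
import numpy as np

def cv_scores(X, y, n_folds=10, seed=42):
    """K-fold cross-validation reporting R^2 and RMSE on the pooled
    out-of-fold predictions; OLS stands in for the random forest."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    preds = np.empty_like(y)
    for f in folds:
        train = np.setdiff1d(idx, f)
        A = np.column_stack([np.ones(len(train)), X[train]])
        beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        preds[f] = np.column_stack([np.ones(len(f)), X[f]]) @ beta
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y - preds) ** 2))
    return r2, rmse

# Synthetic stand-in: Tmax driven by LST-, elevation- and NDVI-like covariates.
rng = np.random.default_rng(0)
n = 400
lst = rng.uniform(280, 320, n)    # MODIS-like LST (K), illustrative values
elev = rng.uniform(0, 1500, n)    # elevation (m)
ndvi = rng.uniform(0, 1, n)
tmax = 0.8 * (lst - 273.15) - 0.006 * elev - 2.0 * ndvi + rng.normal(0, 1.0, n)
r2, rmse = cv_scores(np.column_stack([lst, elev, ndvi]), tmax)
print(f"R2={r2:.3f}  RMSE={rmse:.2f} degC")
```

Because every observation is predicted exactly once by a model that never saw it, the pooled R² and RMSE are honest out-of-sample estimates, which is the point of the protocol.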
Liu, Saiyan; Huang, Shengzhi; Xie, Yangyang; Huang, Qiang; Leng, Guoyong; Hou, Beibei; Zhang, Ying; Wei, Xiu
2018-05-01
Due to the important role of temperature in the global climate system and energy cycles, it is important to investigate the spatial-temporal change patterns, causes and implications of annual maximum (Tmax) and minimum (Tmin) temperatures. In this study, the Cloud model was adopted to fully and accurately analyze the changing patterns of annual Tmax and Tmin from 1958 to 2008 by quantifying their mean, uniformity, and stability in the Wei River Basin (WRB), a typical arid and semi-arid region in China. Additionally, cross wavelet analysis was applied to explore the correlations among annual Tmax and Tmin and the yearly sunspot number, Arctic Oscillation, Pacific Decadal Oscillation, and soil moisture, with an aim to determine possible causes of annual Tmax and Tmin variations. Furthermore, temperature-related impacts on vegetation cover and precipitation extremes were also examined. Results indicated that: (1) the WRB is characterized by increasing trends in annual Tmax and Tmin, with a more evident increasing trend in annual Tmin, which has a higher dispersion degree and is less uniform and stable than annual Tmax; (2) the asymmetric variations of Tmax and Tmin can be generally explained by the stronger effects of solar activity (primarily), large-scale atmospheric circulation patterns, and soil moisture on annual Tmin than on annual Tmax; and (3) increasing annual Tmax and Tmin have exerted strong influences on local precipitation extremes, in terms of their duration, intensity, and frequency in the WRB. This study presents new analyses of Tmax and Tmin in the WRB, and the findings may help guide regional agricultural production and water resources management.
Directory of Open Access Journals (Sweden)
Stefan Krähenmann
2013-07-01
Full Text Available The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008–2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2°C across arid areas, yet overestimated by around 2°C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly
Energy Technology Data Exchange (ETDEWEB)
Kraehenmann, Stefan; Kothe, Steffen; Ahrens, Bodo [Frankfurt Univ. (Germany). Inst. for Atmospheric and Environmental Sciences; Panitz, Hans-Juergen [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)
2013-10-15
The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008-2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2°C across arid areas, yet overestimated by around 2°C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly across
Stooksbury, David E.; Idso, Craig D.; Hubbard, Kenneth G.
1999-05-01
Gaps in otherwise regularly scheduled observations are often referred to as missing data. This paper explores the spatial and temporal impacts that data gaps in the recorded daily maximum and minimum temperatures have on the calculated monthly mean maximum and minimum temperatures. For this analysis 138 climate stations from the United States Historical Climatology Network Daily Temperature and Precipitation Data set were selected. The selected stations had no missing maximum or minimum temperature values during the period 1951-80. The monthly mean maximum and minimum temperatures were calculated for each station for each month. For each month, 1-10 consecutive days of data from each station were randomly removed. This was performed 30 times for each simulated gap period. The spatial and temporal impacts of the 1-10-day data gaps were compared. The influence of data gaps is most pronounced in the continental regions during the winter and least pronounced in the southeast during the summer. In the north central plains, 10-day data gaps during January produce a standard deviation value greater than 2°C about the 'true' mean. In the southeast, 10-day data gaps in July produce a standard deviation value less than 0.5°C about the mean. The results of this study will be of value in climate variability and climate trend research as well as climate assessment and impact studies.
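The gap-simulation scheme described above is straightforward to reproduce. The sketch below uses synthetic station data (the values, seed, and gap placement are assumptions for illustration, not the study's HCN data) to show why months with high day-to-day variability yield larger spreads about the 'true' mean:

```python
import random
import statistics

def gap_spread(daily_temps, gap_len, n_trials=30, seed=0):
    """Remove `gap_len` consecutive days at a random position, recompute
    the monthly mean, repeat n_trials times, and report the standard
    deviation of the resulting means (the paper's resampling idea)."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_trials):
        start = rng.randrange(0, len(daily_temps) - gap_len + 1)
        kept = daily_temps[:start] + daily_temps[start + gap_len:]
        means.append(statistics.mean(kept))
    return statistics.stdev(means)

# Hypothetical months: a volatile continental January versus a
# stable southeastern July (daily std 8 °C vs 1.5 °C).
rng = random.Random(1)
january = [rng.gauss(-5.0, 8.0) for _ in range(31)]
july = [rng.gauss(27.0, 1.5) for _ in range(31)]
jan_spread = gap_spread(january, gap_len=10)
jul_spread = gap_spread(july, gap_len=10)
```

With identical gap placements, the month with the larger day-to-day variability produces the larger spread, matching the paper's winter-versus-summer contrast.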
Panagoulia, Dionysia; Vlahogianni, Eleni I.
2018-06-01
A methodological framework based on nonlinear recurrence analysis is proposed to examine the historical evolution of extremes of maximum and minimum daily mean areal temperature patterns over time under different climate scenarios. The methodology is based on both historical data and atmospheric General Circulation Model (GCM) produced climate scenarios for the periods 1961-2000 and 2061-2100, which correspond to 1 × CO2 and 2 × CO2 scenarios. Historical data were derived from the actual daily observations coupled with atmospheric circulation patterns (CPs). The dynamics of the temperature was reconstructed in phase-space from the time series of temperatures. The statistical comparison of different temperature patterns was based on discriminating statistics obtained by Recurrence Quantification Analysis (RQA). Moreover, the bootstrap method of Schinkel et al. (2009) was adopted to calculate the confidence bounds of RQA parameters based on a structure-preserving resampling. The overall methodology was applied to the mountainous Mesochora catchment in Central-Western Greece. The results reveal substantial similarities between the historical maximum and minimum daily mean areal temperature statistical patterns and their confidence bounds, as well as the maximum and minimum temperature patterns in evolution under the 2 × CO2 scenario. A significant variability and non-stationary behaviour characterizes all climate series analyzed. Fundamental differences are produced from the historical and maximum 1 × CO2 scenarios, the maximum 1 × CO2 and minimum 1 × CO2 scenarios, as well as the confidence bounds for the two CO2 scenarios. The 2 × CO2 scenario reflects the strongest shifts in intensity, duration and frequency in temperature patterns. Such transitions can help scientists and policy makers to understand the effects of extreme temperature changes on water resources, economic development, and health of ecosystems and hence to proceed to
Directory of Open Access Journals (Sweden)
G. M. J. HASAN
2014-10-01
Climate, one of the major controlling factors for the well-being of the inhabitants of the world, has been changing in accordance with natural forcing and man-made activities. Bangladesh, one of the most densely populated countries in the world, is under threat due to climate change caused by excessive use or abuse of ecology and natural resources. This study examines the rainfall patterns and their associated changes in the north-eastern part of Bangladesh, mainly Sylhet city, through statistical analysis of daily rainfall data during the period 1957-2006. It has been observed that a good correlation exists between the monthly mean and daily maximum rainfall. A linear regression analysis of the data is found to be significant for all the months. Some key statistical parameters, such as the mean values of the Coefficient of Variability (CV), Relative Variability (RV) and Percentage Inter-annual Variability (PIV), have been studied and found to be at variance. Monthly, yearly and seasonal variations of rainy days were also analysed to check for any significant changes.
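The variability measures named above can be computed directly. The abstract does not spell out its exact formulas, so the sketch below uses the usual textbook definitions (CV from the standard deviation, RV from the mean absolute deviation, PIV from the mean absolute year-to-year change, each as a percentage of the mean) on hypothetical rainfall totals:

```python
import statistics

def variability_stats(annual_rainfall):
    """Rainfall variability measures; definitions are the common textbook
    ones, which may differ in detail from the paper's."""
    m = statistics.mean(annual_rainfall)
    cv = 100.0 * statistics.stdev(annual_rainfall) / m               # Coefficient of Variability
    rv = 100.0 * statistics.mean([abs(x - m) for x in annual_rainfall]) / m  # Relative Variability
    steps = [abs(b - a) for a, b in zip(annual_rainfall, annual_rainfall[1:])]
    piv = 100.0 * statistics.mean(steps) / m                         # Percentage Inter-annual Variability
    return cv, rv, piv

rain = [420, 510, 380, 600, 450, 530]  # hypothetical yearly totals (mm)
cv, rv, piv = variability_stats(rain)
```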
Energy Technology Data Exchange (ETDEWEB)
Ngeow, Chow-Choong [Graduate Institute of Astronomy, National Central University, Jhongli 32001, Taiwan (China); Kanbur, Shashi M.; Schrecengost, Zachariah [Department of Physics, SUNY Oswego, Oswego, NY 13126 (United States); Bhardwaj, Anupam; Singh, Harinder P. [Department of Physics and Astrophysics, University of Delhi, Delhi 110007 (India)
2017-01-10
Investigation of period–color (PC) and amplitude–color (AC) relations at maximum and minimum light can be used to probe the interaction of the hydrogen ionization front (HIF) with the photosphere and the radiation hydrodynamics of the outer envelopes of Cepheids and RR Lyraes. For example, theoretical calculations indicated that such interactions would occur at minimum light for RR Lyrae and result in a flatter PC relation. In the past, the PC and AC relations have been investigated by using either the (V − R)_MACHO or (V − I) colors. In this work, we extend previous work to other bands by analyzing the RR Lyraes in the Sloan Digital Sky Survey Stripe 82 Region. Multi-epoch data are available for RR Lyraes located within the footprint of the Stripe 82 Region in five (ugriz) bands. We present the PC and AC relations at maximum and minimum light in four colors: (u − g)_0, (g − r)_0, (r − i)_0, and (i − z)_0, after they are corrected for extinction. We found that the PC and AC relations for this sample of RR Lyraes show a complex nature in the form of flat, linear or quadratic relations. Furthermore, the PC relations at minimum light for fundamental mode RR Lyrae stars are separated according to the Oosterhoff type, especially in the (g − r)_0 and (r − i)_0 colors. If only considering the results from linear regressions, our results are quantitatively consistent with the theory of HIF-photosphere interaction for both fundamental and first overtone RR Lyraes.
Energy Technology Data Exchange (ETDEWEB)
Jurkovic, I [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States); Stathakis, S; Li, Y; Patel, A; Vincent, J; Papanikolaou, N; Mavroidis, P [Cancer Therapy and Research Center University of Texas Health Sciences Center at San Antonio, San Antonio, TX (United States)
2014-06-01
Purpose: To determine the difference in coverage between plans done on average intensity projection and maximum intensity projection CT data sets for lung patients and to establish correlations between different factors influencing the coverage. Methods: For six lung cancer patients, 10 phases of equal duration through the respiratory cycle and the maximum and average intensity projections (MIP and AIP) from their 4DCT datasets were obtained. MIP and AIP datasets had three GTVs delineated (GTVaip, delineated on AIP; GTVmip, delineated on MIP; and GTVfus, delineated on each of the 10 phases and summed up). From each GTV, planning target volumes (PTVs) were then created by adding additional margins. For each of the PTVs an IMRT plan was developed on the AIP dataset. The plans were then copied to the MIP data set and recalculated. Results: The effective depths in AIP cases were significantly smaller than in MIP (p < 0.001). The Pearson correlation coefficient of r = 0.839 indicates a strong positive linear relationship between the average percentage difference in effective depths and average PTV coverage on the MIP data set. The V20Gy of the involved lung depends on the PTV coverage. The relationship between the PTVaip mean CT number difference and PTVaip coverage on the MIP data set gives r = 0.830. When the plans are produced on MIP and copied to AIP, r equals −0.756. Conclusion: The correlation between the AIP and MIP data sets indicates that the selection of the data set for developing the treatment plan affects the final outcome (cases with a high average percentage difference in effective depths between AIP and MIP should be calculated on AIP). The percentage of the lung volume receiving a higher dose depends on how well the PTV is covered, regardless of which data set the plan is done on.
Al-Quwaiee, Hessa; Ansari, Imran Shafique; Alouini, Mohamed-Slim
2016-01-01
In this work, we derive the exact statistical characteristics of the maximum and the minimum of two modified double generalized gamma variates in closed form in terms of Meijer's G-function, Fox's H-function, and the extended generalized bivariate Meijer's G-function and H-function, in addition to simple closed-form asymptotic results in terms of elementary functions. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity scheme and (ii) a dual-hop free-space optical relay transmission system over double generalized gamma fading channels with the impact of pointing errors. In addition, we provide asymptotic results for the bit error rate of the two systems in the high-SNR regime. Computer-based Monte Carlo simulations verify our new analytical results.
Al-Quwaiee, Hessa
2016-01-07
In this work, we derive the exact statistical characteristics of the maximum and the minimum of two modified double generalized gamma variates in closed form in terms of Meijer's G-function, Fox's H-function, and the extended generalized bivariate Meijer's G-function and H-function, in addition to simple closed-form asymptotic results in terms of elementary functions. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity scheme and (ii) a dual-hop free-space optical relay transmission system over double generalized gamma fading channels with the impact of pointing errors. In addition, we provide asymptotic results for the bit error rate of the two systems in the high-SNR regime. Computer-based Monte Carlo simulations verify our new analytical results.
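Closed-form moments of order statistics like these are commonly sanity-checked by simulation. The sketch below uses plain gamma variates as a stand-in for the paper's double generalized gamma model (which the Python standard library does not provide) and verifies the exact identity E[max] + E[min] = E[X] + E[Y]:

```python
import random

# Monte Carlo check on the first moments of the maximum and minimum of
# two independent variates. Shape/scale values are arbitrary examples.
rng = random.Random(42)
n = 100_000
x = [rng.gammavariate(2.0, 1.0) for _ in range(n)]
y = [rng.gammavariate(3.0, 0.5) for _ in range(n)]
maxima = [max(a, b) for a, b in zip(x, y)]
minima = [min(a, b) for a, b in zip(x, y)]

mean = lambda v: sum(v) / len(v)
# Per-sample, max(a, b) + min(a, b) == a + b, so the sample means must
# satisfy the same identity up to floating-point summation order.
lhs = mean(maxima) + mean(minima)
rhs = mean(x) + mean(y)
```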
Abaurrea, J.; Asín, J.; Cebrián, A. C.
2018-02-01
The occurrence of extreme heat events in maximum and minimum daily temperatures is modelled using a non-homogeneous common Poisson shock process. It is applied to five Spanish locations, representative of the most common climates over the Iberian Peninsula. The model is based on an excess-over-threshold approach and distinguishes three types of extreme events: only in maximum temperature, only in minimum temperature, and in both of them (simultaneous events). It takes into account the dependence between the occurrence of extreme events in both temperatures, and its parameters are expressed as functions of time and temperature-related covariates. The fitted models allow us to characterize the occurrence of extreme heat events and to compare their evolution in the different climates during the observed period. This model is also a useful tool for obtaining local projections of the occurrence rate of extreme heat events under climate change conditions, using the future downscaled temperature trajectories generated by Earth System Models. The projections for 2031-60 under scenarios RCP4.5, RCP6.0 and RCP8.5 are obtained and analysed using the trajectories from four Earth System Models which have successfully passed a preliminary control analysis. Different graphical tools and summary measures of the projected daily intensities are used to quantify climate change on a local scale. A high increase in the occurrence of extreme heat events, mainly in July and August, is projected in all the locations, all types of event and all three scenarios, although in 2051-60 the increase is higher under RCP8.5. However, relevant differences are found between the evolution in the different climates and the types of event, with an especially high increase in the simultaneous ones.
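The three event types the model distinguishes can be illustrated with a simple excess-over-threshold classifier. The thresholds and temperatures below are hypothetical; the paper estimates location-specific thresholds and a full non-homogeneous Poisson intensity on top of this classification:

```python
def classify_extreme_days(tmax, tmin, u_max, u_min):
    """Count days whose Tmax alone, Tmin alone, or both exceed their
    respective thresholds (the last case is a 'simultaneous' event)."""
    only_max = only_min = simultaneous = 0
    for hi, lo in zip(tmax, tmin):
        over_hi, over_lo = hi > u_max, lo > u_min
        if over_hi and over_lo:
            simultaneous += 1
        elif over_hi:
            only_max += 1
        elif over_lo:
            only_min += 1
    return only_max, only_min, simultaneous

# Hypothetical daily series (°C) and thresholds:
tmax = [35, 41, 39, 42, 36, 44]
tmin = [18, 25, 26, 19, 17, 27]
om, on, sim = classify_extreme_days(tmax, tmin, u_max=40, u_min=24)
```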
Directory of Open Access Journals (Sweden)
M.A. Zemlyanova
2015-03-01
We present materials on the verification of the average daily maximum permissible concentration of styrene in the atmospheric air of settlements, based on the results of our own in-depth epidemiological studies of the children's population conducted according to the principles of international risk-assessment practice. It was established that children aged 4–7 years exposed to styrene at levels above 1.2 times the threshold level value for continuous exposure develop negative effects in the form of disorders of hormonal regulation, pigment exchange, antioxidant activity, cytolysis, immune reactivity and cytogenetic disbalance, which contribute to increased morbidity from diseases of the central nervous system, endocrine system, respiratory organs, digestion and skin. Based on the proven cause-and-effect relationships between the biomarkers of negative effects and the styrene concentration in blood, it was demonstrated that the benchmark styrene concentration in blood is 0.002 mg/dm3. The justified value complies with and confirms the average daily styrene concentration in the air of settlements of 0.002 mg/m3 accepted in Russia (1 threshold level value for continuous exposure), which provides safety for the health of the population.
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L
2012-09-01
Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.
2013-01-01
Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679
Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos
2014-05-01
spherical variogram over the conterminous land of Spain, and converted onto a regular 10 km2 grid (resolution similar to the mean distance between stations) to map the results. In the conterminous land of Spain, the distance at which pairs of stations have a common variance in temperature (both maximum, Tmax, and minimum, Tmin) above the selected threshold (50%, Pearson r ~0.70) on average does not exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along coastland areas and lower variability inland. The highest spatial variability coincides particularly with coastland areas surrounded by mountain chains, suggesting that orography is one of the main driving factors behind higher interstation variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than diurnal temperature. The results suggest that in general local factors affect the spatial variability of monthly Tmin more than Tmax, so a higher network density would be necessary to capture the higher spatial variability highlighted for Tmin with respect to Tmax. A conservative distance for reference series could be evaluated at 200 km, which we propose for the continental land of Spain and use in the development of MOTEDAS.
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to find out the influence of SO4 and NO3 levels contained in the rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations recorded over time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
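The fitted model can be evaluated directly. The coefficients below are copied verbatim from the abstract, while the variable roles (X1 as the SO4 level, X2 as the NO3 level), units, and example concentrations are assumptions for illustration:

```python
# Y* denotes the (possibly transformed) response the authors model;
# the formula is the fitted REM equation reported in the abstract.
def y_star(so4, no3):
    return 0.41276446 - 0.00107302 * so4 + 0.00215470 * no3

base = y_star(0.0, 0.0)    # intercept only
pred = y_star(10.0, 5.0)   # hypothetical concentrations
```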
Silva, Leonardo W. T.; Barros, Vitor F.; Silva, Sandro G.
2014-01-01
In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic reflector (PR) antennas with phased arrays (PAs). These arrays enable electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem involves many combinations of excitation signals, producing a complex optimization problem. In this case, solutions can be calculated with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator differs from the conventional approach in that it performs the crossover of the fittest individuals with the least fit individuals in order to enhance genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced premature convergence. PMID:25196013
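The core idea of the maximum-minimum crossover, pairing the fittest parents with the least fit to preserve diversity, can be sketched as follows. This is a simplified one-point crossover over hypothetical excitation-weight chromosomes, not the authors' full GA-MMC with its reconfigurable multi-objective coding:

```python
import random

def max_min_crossover(population, fitness, rng):
    """Cross the fittest individual with the least fit, the 2nd fittest
    with the 2nd least fit, and so on (one-point crossover)."""
    ranked = sorted(population, key=fitness, reverse=True)
    children = []
    for top, bottom in zip(ranked, reversed(ranked)):
        cut = rng.randrange(1, len(top))
        children.append(top[:cut] + bottom[cut:])
    return children

rng = random.Random(0)
# Hypothetical population: 6 chromosomes of 8 excitation weights each.
pop = [[rng.random() for _ in range(8)] for _ in range(6)]
kids = max_min_crossover(pop, fitness=sum, rng=rng)
```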
Directory of Open Access Journals (Sweden)
Phan Thanh Noi
2016-12-01
This study aims to evaluate quantitatively the land surface temperature (LST) derived from the MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily land air surface temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) and auxiliary data, solving the discontinuity problem of ground measurements. No previous studies of Vietnam have integrated both TERRA and AQUA LST, daytime and nighttime, for Ta estimation (using four MODIS LST datasets). In addition, to find out which variables are the most effective at describing the differences between LST and Ta, we tested several popular methods, such as the Pearson correlation coefficient, stepwise selection, the Bayesian information criterion (BIC), adjusted R-squared, and principal component analysis (PCA), on 14 variables (the four LST products, NDVI, elevation, latitude, longitude, day length in hours, Julian day, and four view zenith angle variables), and then applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground truth temperature derived from 15 climate stations are time and regional topography dependent. The best results for Ta-max and Ta-min estimation were achieved when we combined both daytime and nighttime LST from TERRA and AQUA with data from the topography analysis.
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
Energy Technology Data Exchange (ETDEWEB)
Shen, Tengming [Fermilab; Ye, Liyang [NCSU, Raleigh; Turrioni, Daniele [Fermilab; Li, Pei [Fermilab
2015-01-01
Small insert coils have been built using a multifilamentary Bi2Sr2CaCu2Ox round wire, and characterized in background fields to explore the quench behaviors and limits of Bi2Sr2CaCu2Ox superconducting magnets, with an emphasis on assessing the impact of slow normal zone propagation on quench detection. Using heaters of various lengths to initiate a small normal zone, a coil was quenched safely more than 70 times without degradation, with the maximum coil temperature reaching 280 K. Coils withstood a resistive voltage of tens of mV for seconds without quenching, showing the high stability of these coils and suggesting that the quench detection voltage should be greater than 50 mV so as not to falsely trigger protection. The hot spot temperature for the resistive voltage of the normal zone to reach 100 mV increases from ~40 K to ~80 K with increasing operating wire current density Jo from 89 A/mm2 to 354 A/mm2, whereas for the voltage to reach 1 V it increases from ~60 K to ~140 K, showing the increasing negative impact of slow normal zone propagation on quench detection with increasing Jo and the need to limit the quench detection voltage to < 1 V. These measurements, coupled with an analytical quench model, were used to assess the impact of the maximum allowable voltage and temperature upon quench detection on the quench protection, assuming the hot spot temperature is limited to <300 K.
"Minimum input, maximum output, indeed!" Teaching Collocations ...
African Journals Online (AJOL)
Fifty-nine EFL college students participated in the study, and they received two 75-minute instruction sessions between pre- and post-tests: one on the definition of collocation and its importance, and the other on the skill of looking up collocational information in the Naver Dictionary — an English–Korean online dictionary. During ...
Maximum/minimum asymmetric rod detection
International Nuclear Information System (INIS)
Huston, J.T.
1990-01-01
This patent describes a system for determining the relative position of each control rod within a control rod group in a nuclear reactor, the control rod group having at least three control rods therein. It comprises: means for producing a signal representative of a position of each control rod within the control rod group in the nuclear reactor; means for establishing a signal representative of the highest position of a control rod in the control rod group in the nuclear reactor; means for establishing a signal representative of the lowest position of a control rod in the control rod group in the nuclear reactor; means for determining a difference between the signal representative of the position of the highest control rod and the signal representative of the position of the lowest control rod; means for establishing a predetermined limit for the difference between the signal representative of the position of the highest control rod and the signal representative of the position of the lowest control rod; and means for comparing the difference between the signals with the predetermined limit. The comparing means produces an output signal when the difference between the signals exceeds the predetermined limit.
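Stripped of the patent's means-plus-function language, the claimed logic reduces to comparing the spread between the highest and lowest rod positions against a predetermined limit. A sketch with hypothetical rod positions in arbitrary step units:

```python
def rod_asymmetry_alarm(positions, limit):
    """Return (alarm, spread): spread is max minus min rod position in
    the group; alarm is True when the spread exceeds the limit."""
    if len(positions) < 3:
        raise ValueError("a control rod group has at least three rods")
    spread = max(positions) - min(positions)
    return spread > limit, spread

alarm, spread = rod_asymmetry_alarm([120, 118, 131], limit=10)
```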
Energy Technology Data Exchange (ETDEWEB)
McKenna-Lawlor, S.M.P. (Saint Patrick's Coll., Maynooth (Ireland)); Afonin, V.V.; Gringauz, K.I. (AN SSSR, Moscow (USSR). Space Research Inst.) (and others)
Twin telescope particle detector systems SLED-1 and SLED-2, with the capability of monitoring electron and ion fluxes within an energy range spanning approximately 30 keV to a few megaelectron volts, were individually launched on the two spacecraft (Phobos-2 and Phobos-1, respectively) of the Soviet Phobos Mission to Mars and its moons in July 1988. A short description of the SLED instrument and a preliminary account of representative solar-related particle enhancements recorded by SLED-1 and SLED-2 during the Cruise Phase, and by SLED-1 in the near Martian environment (within the interval 25 July 1988-26 March 1989) are presented. These observations were made while the interplanetary medium was in the course of changing over from solar minimum- to solar maximum-dominated conditions and examples are presented of events associated with each of these phenomenological states. (author).
Directory of Open Access Journals (Sweden)
Mary Hokazono
CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sß0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sß+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. Time-averaged maximum mean velocity was significantly different between the genotypes and correlated with hematological characteristics.
Svendsen, Jon C.; Tirsgaard, Bjørn; Cordero, Gerardo A.; Steffensen, John F.
2015-01-01
Intraspecific variation and trade-off in aerobic and anaerobic traits remain poorly understood in aquatic locomotion. Using gilthead sea bream (Sparus aurata) and Trinidadian guppy (Poecilia reticulata), both axial swimmers, this study tested four hypotheses: (1) gait transition from steady to unsteady (i.e., burst-assisted) swimming is associated with anaerobic metabolism evidenced as excess post exercise oxygen consumption (EPOC); (2) variation in swimming performance (critical swimming speed; Ucrit) correlates with metabolic scope (MS) or anaerobic capacity (i.e., maximum EPOC); (3) there is a trade-off between maximum sustained swimming speed (Usus) and minimum cost of transport (COTmin); and (4) variation in Usus correlates positively with optimum swimming speed (Uopt; i.e., the speed that minimizes energy expenditure per unit of distance traveled). Data collection involved swimming respirometry and video analysis. Results showed that anaerobic swimming costs (i.e., EPOC) increase linearly with the number of bursts in S. aurata, with each burst corresponding to 0.53 mg O2 kg−1. Data are consistent with a previous study on striped surfperch (Embiotoca lateralis), a labriform swimmer, suggesting that the metabolic cost of burst swimming is similar across various types of locomotion. There was no correlation between Ucrit and MS or anaerobic capacity in S. aurata indicating that other factors, including morphological or biomechanical traits, influenced Ucrit. We found no evidence of a trade-off between Usus and COTmin. In fact, data revealed significant negative correlations between Usus and COTmin, suggesting that individuals with high Usus also exhibit low COTmin. Finally, there were positive correlations between Usus and Uopt. Our study demonstrates the energetic importance of anaerobic metabolism during unsteady swimming, and provides intraspecific evidence that superior maximum sustained swimming speed is associated with superior swimming economy and
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
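The entropy lower bound stated above can be computed directly. For a uniform distribution over 8 outcomes queried with binary (k = 2) attributes, the bound is exactly 3:

```python
import math

def entropy_lower_bound(probabilities, k=2):
    """Entropy of the outcome distribution divided by log2(k): a lower
    bound on the minimum average depth of a decision tree over a
    k-valued information system, as stated in the text."""
    h = -sum(p * math.log2(p) for p in probabilities if p > 0)
    return h / math.log2(k)

bound = entropy_lower_bound([1 / 8] * 8)  # uniform over 8 outcomes
```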
Watson, Jane; Chick, Helen
2012-01-01
This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…
Keivanian, Farshid; Mehrshad, Nasser; Bijari, Abolfazl
2016-01-01
A D flip-flop is a digital circuit that can be used as a timing element in many sophisticated circuits; optimum performance with the lowest power consumption and acceptable delay time is therefore a critical issue in electronic circuits. The newly proposed dual-edge-triggered static D flip-flop circuit layout is defined as a multi-objective optimization problem. For this, an optimum fuzzy inference system with fuzzy rules is proposed to enhance the performance and convergence of the non-dominated sorting genetic algorithm II (NSGA-II) by adaptive control of the exploration and exploitation parameters. Using the proposed fuzzy NSGA-II algorithm, better values for the MOSFET channel widths and power supply are discovered in the search space than with ordinary NSGA variants. Moreover, the design parameters (NMOS and PMOS channel widths and power supply voltage) and the performance parameters (average power consumption and propagation delay time) are linked; the required mathematical background is presented in this study. The optimum values for the design parameters of MOSFET channel widths and power supply are discovered. Based on them, the power-delay product (PDP) is 6.32 pJ at a 125 MHz clock frequency, L = 0.18 µm, and T = 27 °C.
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
How to average logarithmic retrievals?
Directory of Open Access Journals (Sweden)
B. Funke
2012-04-01
Full Text Available Calculations of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, are prone to biases. By means of a system simulator, the biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because, in logarithmic retrievals, the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
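The linear-versus-logarithmic averaging bias described above follows from Jensen's inequality. A minimal simulation, using an assumed log-normal toy model rather than the paper's retrieval simulator, illustrates it:

```python
import math
import random

random.seed(1)

# Noisy "retrievals" of an abundance whose log is normally distributed
# (an assumed toy model, not the paper's system simulator).
samples = [math.exp(random.gauss(math.log(100.0), 0.5)) for _ in range(10000)]

linear_mean = sum(samples) / len(samples)
# Averaging in log space and transforming back gives the geometric mean.
geometric_mean = math.exp(sum(math.log(x) for x in samples) / len(samples))

# Jensen's inequality: the geometric mean underestimates the arithmetic
# mean, and the gap grows with the variability of the sample.
print(linear_mean > geometric_mean)  # True
```

The larger the spread of the samples, the larger the gap between the two averages, matching the paper's finding that the choice of averaging matters most at high natural variability.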
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well-engineered renewable remote energy system, utilizing the principle of maximum power point tracking, can be more cost effective, has higher reliability, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages at larger temperature variations and larger power ratings are much higher. Other advantages include optimal sizing and system monitoring and control
Directory of Open Access Journals (Sweden)
Carlos Rogério de Mello
2010-04-01
Full Text Available Maximum discharges are hydrological quantities applied to the design of hydraulic works, and minimum discharges are used to assess water availability in watersheds and the behavior of groundwater flow. This study aimed to construct statistical confidence intervals for annual daily maximum and minimum discharges and to relate them to the physiographic characteristics of the six largest watersheds of the Alto Rio Grande region, State of Minas Gerais, upstream of the UHE-Camargos/CEMIG reservoir. The Gumbel and Gamma probability distributions were applied, respectively, to the historical series of maximum and minimum discharges, using maximum-likelihood estimators. The confidence intervals constitute an important tool for better understanding and estimating discharges, and are influenced by the geological characteristics of the basins. Based on them, the Alto Rio Grande region was found to comprise two distinct areas: the first, covering the Aiuruoca, Carvalhos, and Bom Jardim basins, showed the largest maximum and minimum discharges, implying potential for more significant floods and greater water availability; the second, associated with the F. Laranjeiras, Madre de Deus, and Andrelândia basins, showed the smallest water availability.
Estimating minimum and maximum air temperature using MODIS ...
Indian Academy of Sciences (India)
in a wide range of applications in areas of ecology, hydrology ... stations, thus attracting researchers to make use ... simpler because of the lack of solar radiation effect .... water from the snow packed Himalayan region to ... tribution System (LAADS) webdata archive cen- ..... ing due to greenhouse gases is different for the air.
Kernel maximum autocorrelation factor and minimum noise fraction transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...
Zero forcing parameters and minimum rank problems
Barioli, F.; Barrett, W.; Fallat, S.M.; Hall, H.T.; Hogben, L.; Shader, B.L.; Driessche, van den P.; Holst, van der H.
2010-01-01
The zero forcing number Z(G), which is the minimum number of vertices in a zero forcing set of a graph G, is used to study the maximum nullity/minimum rank of the family of symmetric matrices described by G. It is shown that for a connected graph of order at least two, no vertex is in every zero
Minimum Distance Estimation on Time Series Analysis With Little Data
National Research Council Canada - National Science Library
Tekin, Hakan
2001-01-01
.... Minimum distance estimation has been demonstrated to outperform standard approaches, including maximum likelihood estimators and least squares, in estimating statistical distribution parameters with very small data sets...
International Nuclear Information System (INIS)
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs
How unprecedented a solar minimum was it?
Russell, C T; Jian, L K; Luhmann, J G
2013-05-01
The end of the last solar cycle was at least 3 years late, and to date, the new solar cycle has seen mainly weaker activity since the onset of the rising phase toward the new solar maximum. The newspapers now even report when auroras are seen in Norway. This paper is an update of our review paper written during the deepest part of the last solar minimum [1]. We update the records of solar activity and its consequent effects on the interplanetary fields and solar wind density. The arrival of solar minimum allows us to use two techniques that predict sunspot maximum from readings obtained at solar minimum. It is clear that the Sun is still behaving strangely compared to the last few solar minima even though we are well beyond the minimum phase of the cycle 23-24 transition.
Effect of tank geometry on its average performance
Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.
2018-03-01
The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their "height : radius" (H/R) ratio, as well as of the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6×10⁻² m³ as the central hole diameter of the ribs is varied. It has been shown that increasing the H/R ratio of tanks with smooth inner walls up to the limiting values significantly increases tank average productivity and reduces filling time. Increasing the H/R ratio of a tank of volume 1.0 m³ to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that the maximum average productivity and the minimum filling time are reached for the tank of volume 6×10⁻² m³ with a central hole diameter of the horizontal ribs of 6.4×10⁻² m.
Fields, Gary S.; Kanbur, Ravi
2005-01-01
Textbook analysis tells us that in a competitive labor market, the introduction of a minimum wage above the competitive equilibrium wage will cause unemployment. This paper makes two contributions to the basic theory of the minimum wage. First, we analyze the effects of a higher minimum wage in terms of poverty rather than in terms of unemployment. Second, we extend the standard textbook model to allow for incomesharing between the employed and the unemployed. We find that there are situation...
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
International Nuclear Information System (INIS)
Beer, M.
1980-01-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
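The maximum-likelihood (minimum-variance) combination of correlated estimates described above can be sketched for the two-estimate case. The numbers below are illustrative, not the SAM-CE/VIM results:

```python
# Minimum-variance (maximum-likelihood) combination of two correlated
# estimates x1, x2 with variances v1, v2 and correlation rho.
# The weights follow from minimizing the variance of w*x1 + (1-w)*x2.

def combine(x1, x2, v1, v2, rho):
    c = rho * (v1 * v2) ** 0.5           # covariance of the two estimates
    w1 = (v2 - c) / (v1 + v2 - 2 * c)    # optimal weight on x1
    w2 = 1.0 - w1
    est = w1 * x1 + w2 * x2
    # Variance of the optimal combination: (v1*v2 - c^2)/(v1 + v2 - 2c).
    var = (v1 * v2 * (1 - rho ** 2)) / (v1 + v2 - 2 * c)
    return est, var

# Two correlated eigenvalue estimates (made-up values for illustration).
est, var = combine(1.002, 0.998, 4e-6, 9e-6, 0.3)
print(round(est, 6))
# The combined variance never exceeds the smaller input variance.
print(var <= 4e-6)  # True
```

This is the same mechanism by which the abstract's method gains a few percent up to a factor of 2 in variance over the simple average or the single best eigenvalue.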
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a nonlinear manifold; it is shown that they amount to approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
International Nuclear Information System (INIS)
Dam, H. van; Leege, P.F.A. de
1987-01-01
An analysis is presented of thermal systems with minimum critical mass, based on the use of materials with optimum neutron moderating and reflecting properties. The optimum fissile material distributions in the systems are obtained by calculations with standard computer codes, extended with a routine for flat fuel importance search. It is shown that in the minimum critical mass configuration a considerable part of the fuel is positioned in the reflector region. For ²³⁹Pu a minimum critical mass of 87 g is found, which is the lowest value reported hitherto. (author)
International Nuclear Information System (INIS)
Ichiguchi, Katsuji
1998-01-01
A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Averaging operations on matrices
Indian Academy of Sciences (India)
2014-07-03
Jul 3, 2014 ... Role of positive definite matrices: in diffusion tensor imaging, 3 × 3 pd matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 pd matrices model stress tensors; in machine learning, n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...
Directory of Open Access Journals (Sweden)
Patricia Bouyer
2015-09-01
Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
An Improved CO2-Crude Oil Minimum Miscibility Pressure Correlation
Directory of Open Access Journals (Sweden)
Hao Zhang
2015-01-01
Full Text Available Minimum miscibility pressure (MMP), which plays an important role in miscible flooding, is a key parameter in determining whether crude oil and gas are completely miscible. On the basis of 210 groups of CO2-crude oil system minimum miscibility pressure data, an improved CO2-crude oil system minimum miscibility pressure correlation was built by a modified conjugate gradient method and a global optimizing method. The new correlation is a uniform empirical correlation to calculate the MMP for both thin oil and heavy oil and is expressed as a function of reservoir temperature, C7+ molecular weight of crude oil, and mole fractions of volatile components (CH4 and N2) and intermediate components (CO2, H2S, and C2~C6) of crude oil. Compared to the eleven most popular and relatively high-accuracy CO2-oil system MMP correlations in the previous literature, on nine other groups of CO2-oil MMP experimental data that had not been used to develop the new correlation, the new empirical correlation provides the best reproduction of the nine groups of experimental data, with a percentage average absolute relative error (%AARE) of 8% and a percentage maximum absolute relative error (%MARE) of 21%, respectively.
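The two accuracy measures used to rank the correlations, %AARE and %MARE, can be sketched directly. The data below are made up for illustration, not the paper's 210 groups:

```python
# Percentage average and maximum absolute relative error, the two
# measures used to compare MMP correlations (illustrative data only).

def aare_mare(measured, predicted):
    rel = [abs(p - m) / m for m, p in zip(measured, predicted)]
    return 100 * sum(rel) / len(rel), 100 * max(rel)

measured  = [12.0, 15.5, 20.1, 25.4]   # hypothetical MMP values, MPa
predicted = [12.9, 14.8, 21.6, 24.1]   # hypothetical correlation output

aare, mare = aare_mare(measured, predicted)
print(round(aare, 1), round(mare, 1))  # 6.1 7.5
```

A correlation is preferred when both its average error (%AARE) and its worst-case error (%MARE) are small, which is how the paper's comparison against the eleven earlier correlations is framed.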
Minimum entropy production principle
Czech Academy of Sciences Publication Activity Database
Maes, C.; Netočný, Karel
2013-01-01
Roč. 8, č. 7 (2013), s. 9664-9677 ISSN 1941-6016 Institutional support: RVO:68378271 Keywords : MINEP Subject RIV: BE - Theoretical Physics http://www.scholarpedia.org/article/Minimum_entropy_production_principle
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a nonlinear manifold; it is shown that they amount to approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
Directory of Open Access Journals (Sweden)
Yunfeng Shan
2008-01-01
Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, and costs less, and it avoids the downloading and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.
Approximating the minimum cycle mean
Directory of Open Access Journals (Sweden)
Krishnendu Chatterjee
2013-07-01
Full Text Available We consider directed graphs where each edge is labeled with an integer weight and study the fundamental algorithmic question of computing the value of a cycle with minimum mean weight. Our contributions are twofold: (1) First we show that the algorithmic question is reducible in O(n^2) time to the problem of a logarithmic number of min-plus matrix multiplications of n-by-n matrices, where n is the number of vertices of the graph. (2) Second, when the weights are nonnegative, we present the first (1 + ε)-approximation algorithm for the problem, and the running time of our algorithm is Õ(n^ω log^3(nW/ε) / ε), where O(n^ω) is the time required for the classic n-by-n matrix multiplication and W is the maximum value of the weights.
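For context, the minimum cycle mean that the abstract computes via min-plus matrix products can also be found exactly with Karp's classic O(nm) dynamic program; the sketch below uses an illustrative toy graph, not the paper's algorithm:

```python
# Karp's O(n*m) dynamic program for the minimum cycle mean of a
# directed graph (a baseline for the problem studied in the abstract).

def min_cycle_mean(n, edges):
    """edges: list of (u, v, weight); returns the minimum mean weight
    over all cycles, assuming at least one cycle exists."""
    INF = float("inf")
    # d[k][v] = minimum weight of a walk of exactly k edges ending at v,
    # starting anywhere (equivalent to a virtual 0-weight source).
    d = [[INF] * n for _ in range(n + 1)]
    d[0] = [0.0] * n
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = INF
    for v in range(n):
        if d[n][v] < INF:
            worst = max((d[n][v] - d[k][v]) / (n - k)
                        for k in range(n) if d[k][v] < INF)
            best = min(best, worst)
    return best

# Cycles: 0->1->0 with mean 2, 1->2->1 with mean 2, and a self-loop
# 2->2 of weight 1, which is the cheapest cycle.
edges = [(0, 1, 2), (1, 0, 2), (1, 2, 1), (2, 1, 3), (2, 2, 1)]
print(min_cycle_mean(3, edges))  # 1.0
```

The min-plus matrix-multiplication reduction in the abstract replaces this edge-by-edge relaxation with repeated squaring of the weight matrix, which is what makes the Õ(n^ω)-type running time possible.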
Average nuclear surface properties
International Nuclear Information System (INIS)
Groote, H. von.
1979-01-01
The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is also extended to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)
Americans' Average Radiation Exposure
International Nuclear Information System (INIS)
2000-01-01
We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body
Energy expenditure, economic growth, and the minimum EROI of society
International Nuclear Information System (INIS)
Fizaine, Florian; Court, Victor
2016-01-01
We estimate energy expenditure for the US and world economies from 1850 to 2012. Periods of high energy expenditure relative to GDP (from 1850 to 1945), or spikes (1973–74 and 1978–79), are associated with low economic growth rates, and periods of low or falling energy expenditure are associated with high and rising economic growth rates (e.g. 1945–1973). Over the period 1960–2010, for which we have continuous year-to-year data for control variables (capital formation, population, and unemployment rate), we estimate that, statistically, in order to enjoy positive growth, the US economy cannot afford to spend more than 11% of its GDP on energy. Given the current energy intensity of the US economy, this translates into a minimum societal EROI of approximately 11:1 (or a maximum tolerable average price of energy of twice the current level). Granger tests consistently reveal a one-way causality running from the level of energy expenditure (as a fraction of GDP) to economic growth in the US between 1960 and 2010. A coherent economic policy should be founded on improving net energy efficiency. This would yield a "double dividend": increased societal EROI (through decreased energy intensity of capital investment) and decreased sensitivity to energy price volatility. - Highlights: •We estimate energy expenditures as a fraction of GDP for the US, the world (1850–2012), and the UK (1300–2008). •Statistically speaking, the US economy cannot afford to allocate more than 11% of its GDP to energy expenditures in order to have a positive growth rate. •This corresponds to a maximum tolerable average price of energy of twice the current level. •In the same way, US growth is only possible if its primary energy system has at least a minimum EROI of approximately 11:1.
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
The flattening of the average potential in models with fermions
International Nuclear Information System (INIS)
Bornholdt, S.
1993-01-01
The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)
Weighted Maximum-Clique Transversal Sets of Graphs
Chuan-Min Lee
2011-01-01
A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...
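The problem defined above can be illustrated by brute force on a toy graph; this is a naive sketch for intuition, not the split-graph algorithm the paper studies:

```python
from itertools import combinations

def max_clique_transversal(vertices, edges):
    """Brute-force minimum set of vertices hitting all maximum cliques
    (exponential time; only suitable for tiny illustrative graphs)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    is_clique = lambda s: all(b in adj[a] for a, b in combinations(s, 2))
    cliques = [set(s) for r in range(len(vertices), 0, -1)
               for s in combinations(vertices, r) if is_clique(s)]
    omega = max(len(c) for c in cliques)       # clique number
    max_cliques = [c for c in cliques if len(c) == omega]
    # Smallest vertex set intersecting every maximum clique.
    for r in range(1, len(vertices) + 1):
        for s in combinations(vertices, r):
            if all(set(s) & c for c in max_cliques):
                return set(s)

# Two triangles sharing vertex 2: every maximum clique contains 2,
# so {2} is a minimum maximum-clique transversal set.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
print(max_clique_transversal(range(5), edges))  # {2}
```

In the transmitter-placement motivation, such a set corresponds to a smallest set of locations that covers every largest cluster of mutually interfering cells.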
Rising above the Minimum Wage.
Even, William; Macpherson, David
An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…
LCLS Maximum Credible Beam Power
International Nuclear Information System (INIS)
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5, the transport of the beam through the matching section and its injection into Linac-1 is discussed.
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
NGA-West 2 GMPE average site coefficients for use in earthquake-resistant design
Borcherdt, Roger D.
2015-01-01
Site coefficients corresponding to those in tables 11.4–1 and 11.4–2 of Minimum Design Loads for Buildings and Other Structures published by the American Society of Civil Engineers (Standard ASCE/SEI 7-10) are derived from four of the Next Generation Attenuation West2 (NGA-W2) Ground-Motion Prediction Equations (GMPEs). The resulting coefficients are compared with those derived by other researchers and those derived from the NGA-West1 database. The derivation of the NGA-W2 average site coefficients provides a simple procedure to update site coefficients with each update of the Maximum Considered Earthquake Response (MCER) maps. The simple procedure yields average site coefficients consistent with those derived for site-specific design purposes. The NGA-W2 GMPEs provide simple scale factors to reduce conservatism in current simplified design procedures.
Minimum Variance Portfolios in the Brazilian Equity Market
Directory of Open Access Journals (Sweden)
Alexandre Rubesam
2013-03-01
We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio, and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
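For intuition about the simplest estimator compared in this abstract, the sketch below computes the classical global minimum-variance weights w = S⁻¹1 / (1ᵀS⁻¹1) from a sample covariance matrix. The return data are simulated, not Brazilian market data, and this closed form ignores the long-short constraints and GARCH variants the paper studies:

```python
import numpy as np

def min_variance_weights(returns):
    """Global minimum-variance weights w = S^{-1} 1 / (1' S^{-1} 1),
    where S is the sample covariance of the return history."""
    S = np.cov(returns, rowvar=False)
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)   # avoids forming S^{-1} explicitly
    return w / w.sum()

rng = np.random.default_rng(0)
# Hypothetical daily returns for 4 assets with different volatilities.
returns = rng.normal(0.0005, [0.01, 0.02, 0.015, 0.03], size=(500, 4))
w = min_variance_weights(returns)

S = np.cov(returns, rowvar=False)
eq = np.full(4, 0.25)
print(round(w.sum(), 6))          # 1.0 (fully invested)
print(w @ S @ w <= eq @ S @ eq)   # True: no more variance than equal weights
```

Since the weights minimize variance over all fully-invested portfolios, the in-sample variance can never exceed that of the equally-weighted benchmark.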
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.
Minimum Wages and Teen Employment: A Spatial Panel Approach
Charlene Kalenkoski; Donald Lacombe
2011-01-01
The authors employ spatial econometrics techniques and Annual Averages data from the U.S. Bureau of Labor Statistics for 1990-2004 to examine how changes in the minimum wage affect teen employment. Spatial econometrics techniques account for the fact that employment is correlated across states. Such correlation may exist if a change in the minimum wage in a state affects employment not only in its own state but also in other, neighboring states. The authors show that state minimum wages negat...
Flow Convergence Caused by a Salinity Minimum in a Tidal Channel
Directory of Open Access Journals (Sweden)
John C. Warner
2006-12-01
Residence times of dissolved substances and sedimentation rates in tidal channels are affected by residual (tidally averaged) circulation patterns. One influence on these circulation patterns is the longitudinal density gradient. In most estuaries the longitudinal density gradient typically maintains a constant direction. However, a junction of tidal channels can create a local reversal (change in sign) of the density gradient. This can occur due to a difference in the phase of tidal currents in each channel. In San Francisco Bay, the phasing of the currents at the junction of Mare Island Strait and Carquinez Strait produces a local salinity minimum in Mare Island Strait. At the location of a local salinity minimum the longitudinal density gradient reverses direction. This paper presents four numerical models that were used to investigate the circulation caused by the salinity minimum: (1) A simple one-dimensional (1D) finite difference model demonstrates that a local salinity minimum is advected into Mare Island Strait from the junction with Carquinez Strait during flood tide. (2) A three-dimensional (3D) hydrodynamic finite element model is used to compute the tidally averaged circulation in a channel that contains a salinity minimum (a change in the sign of the longitudinal density gradient) and compares that to a channel that contains a longitudinal density gradient in a constant direction. The tidally averaged circulation produced by the salinity minimum is characterized by converging flow at the bed and diverging flow at the surface, whereas the circulation produced by the constant direction gradient is characterized by converging flow at the bed and downstream surface currents. These velocity fields are used to drive both a particle tracking and a sediment transport model. (3) A particle tracking model demonstrates a 30 percent increase in the residence time of neutrally buoyant particles transported through the salinity minimum, as compared to
Do Minimum Wages Fight Poverty?
David Neumark; William Wascher
1997-01-01
The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...
Directory of Open Access Journals (Sweden)
Chieh-Fan Chen
2011-01-01
This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Minimum Tracking Error Volatility
Luca RICCETTI
2010-01-01
Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...
Trends in Intense Typhoon Minimum Sea Level Pressure
Directory of Open Access Journals (Sweden)
Stephen L. Durden
2012-01-01
A number of recent publications have examined trends in the maximum wind speed of tropical cyclones in various basins. In this communication, the author focuses on typhoons in the western North Pacific. Rather than maximum wind speed, the intensity of the storms is measured by their lifetime minimum sea level pressure (MSLP). Quantile regression is used to test for trends in storms of extreme intensity. The results indicate that there is a trend of decreasing intensity in the most intense storms as measured by MSLP over the period 1951–2010. However, when the data are broken into intervals 1951–1987 and 1987–2010, neither interval has a significant trend, but the intensity quantiles for the two periods differ. Reasons for this are discussed, including the cessation of aircraft reconnaissance in 1987. The author also finds that the average typhoon intensity is greater in El Niño years, while the intensity of the strongest typhoons shows no significant relation to the El Niño-Southern Oscillation.
Employment effects of minimum wages
Neumark, David
2014-01-01
The potential benefits of higher minimum wages come from the higher wages for affected workers, some of whom are in low-income families. The potential downside is that a higher minimum wage may discourage employers from using the low-wage, low-skill workers that minimum wages are intended to help. Research findings are not unanimous, but evidence from many countries suggests that minimum wages reduce the jobs available to low-skill workers.
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.
2010-02-08
... capital and reserve requirements to be issued by order or regulation with respect to a product or activity... minimum capital requirements. Section 1362(a) establishes a minimum capital level for the Enterprises... entities required under this section. The Bank Act's current minimum capital requirements apply to...
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
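The Mean Energy Model mentioned in this abstract has the classical Gibbs-form solution p_i ∝ exp(−βε_i). As a hedged illustration (the loaded-die setup is the standard Jaynes example, not taken from this paper), the sketch below finds β by bisection so that the moment constraint is met:

```python
import math

def maxent_distribution(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution under a mean-'energy' constraint:
    p_i proportional to exp(-beta * e_i), with beta chosen so that
    sum_i p_i * e_i equals target_mean."""
    def mean_energy(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, energies)) / z

    # mean_energy is strictly decreasing in beta, so bisection applies.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_energy(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

# Loaded-die example: 'energies' are the face values, mean constrained to 4.5.
faces = [1, 2, 3, 4, 5, 6]
p = maxent_distribution(faces, 4.5)
print(round(sum(pi * e for pi, e in zip(p, faces)), 6))  # 4.5
```

Any other distribution satisfying the same mean constraint has strictly lower entropy, which is the content of the principle in its simplest form.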
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... Conclusions. Functional MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low dimensional models of temporally or spatially...
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
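As a minimal illustration of why maximizing correntropy resists outlying labels (a toy one-dimensional location estimate, not the authors' classifier; the fixed-point update is a standard half-quadratic device, and the Gaussian kernel width is our assumption):

```python
import math

def mcc_location(xs, sigma=1.0, iters=100):
    """Location estimate maximizing the correntropy objective
    sum_i exp(-(x_i - theta)^2 / (2 sigma^2)),
    via the standard half-quadratic fixed-point iteration:
    theta <- weighted mean with weights exp(-(x_i - theta)^2 / (2 sigma^2))."""
    theta = sum(xs) / len(xs)  # start from the (non-robust) mean
    for _ in range(iters):
        w = [math.exp(-((x - theta) ** 2) / (2 * sigma ** 2)) for x in xs]
        theta = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return theta

data = [1.0, 1.2, 0.9, 1.1, 50.0]    # one grossly outlying observation
print(sum(data) / len(data))          # the plain mean is dragged toward 50
print(round(mcc_location(data), 2))   # the correntropy estimate stays near 1
```

The Gaussian kernel gives near-zero weight to points far from the current estimate, which is the same mechanism that makes the MCC objective insensitive to noisy labels in the classification setting.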
Average stem biomass of Gundelia ( Gundelia tournefortii L.) in ...
African Journals Online (AJOL)
We studied Gundelia tournefortii L. to determine its stem biomass characteristics. Data were collected with an accidental sampling method (1×1 m plots) in this area. A total of 15 plots were collected and 75 samples were studied. The minimum, maximum and mean stem biomass of this plant was 5.5, 22.6 and ...
MAXIMUM CORONAL MASS EJECTION SPEED AS AN INDICATOR OF SOLAR AND GEOMAGNETIC ACTIVITIES
International Nuclear Information System (INIS)
Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Gopalswamy, N.; Ozguc, A.; Rozelot, J. P.
2011-01-01
We investigate the relationship between the monthly averaged maximal speeds of coronal mass ejections (CMEs), international sunspot number (ISSN), and the geomagnetic Dst and Ap indices covering the 1996-2008 time interval (solar cycle 23). Our new findings are as follows. (1) There is a noteworthy relationship between monthly averaged maximum CME speeds and sunspot numbers, Ap and Dst indices. Various peculiarities in the monthly Dst index are correlated better with the fine structures in the CME speed profile than that in the ISSN data. (2) Unlike the sunspot numbers, the CME speed index does not exhibit a double peak maximum. Instead, the CME speed profile peaks during the declining phase of solar cycle 23. Similar to the Ap index, both CME speed and the Dst indices lag behind the sunspot numbers by several months. (3) The CME number shows a double peak similar to that seen in the sunspot numbers. The CME occurrence rate remained very high even near the minimum of the solar cycle 23, when both the sunspot number and the CME average maximum speed were reaching their minimum values. (4) A well-defined peak of the Ap index between 2002 May and 2004 August was co-temporal with the excess of the mid-latitude coronal holes during solar cycle 23. The above findings suggest that the CME speed index may be a useful indicator of both solar and geomagnetic activities. It may have advantages over the sunspot numbers, because it better reflects the intensity of Earth-directed solar eruptions.
The difference between alternative averages
Directory of Open Access Journals (Sweden)
James Vaupel
2012-09-01
BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
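The identity stated in the Results section can be checked numerically. In the sketch below all data and weighting functions are invented: avg_f − avg_g is compared against the g-weighted covariance of the variable and the weight ratio, divided by the g-weighted mean of the ratio:

```python
import numpy as np

x = np.array([10.0, 20.0, 30.0, 40.0])   # the variable of interest
f = np.array([1.0, 2.0, 3.0, 4.0])       # first weighting function
g = np.array([4.0, 3.0, 2.0, 1.0])       # second weighting function

avg_f = np.sum(f * x) / np.sum(f)        # f-weighted average of x
avg_g = np.sum(g * x) / np.sum(g)        # g-weighted average of x

r = f / g                                # ratio of the weighting functions
mean_r = np.sum(g * r) / np.sum(g)       # g-weighted average of the ratio
mean_xr = np.sum(g * x * r) / np.sum(g)
cov_xr = mean_xr - avg_g * mean_r        # g-weighted covariance of x and r

print(np.isclose(avg_f - avg_g, cov_xr / mean_r))  # True
```

The check works for any positive weights: expanding the g-weighted covariance of x and f/g cancels the g terms and leaves exactly the f-weighted average, so the identity holds term by term.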
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
Low Streamflow Forcasting using Minimum Relative Entropy
Cui, H.; Singh, V. P.
2013-12-01
Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation in such a manner that the relative entropy of the underlying process is minimized, so that the time series can be forecast. Different priors, such as uniform, exponential, and Gaussian assumptions, are used to estimate the spectral density, depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecast using the proposed method. Minimum relative entropy determines the spectrum of the low streamflow series with higher resolution than conventional methods. The forecast streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.
2010-01-01
We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power,
Tendon surveillance requirements - average tendon force
International Nuclear Information System (INIS)
Fulton, J.F.
1982-01-01
Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, is stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
Independence, Odd Girth, and Average Degree
DEFF Research Database (Denmark)
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1) / 7.
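The stated bound can be sanity-checked by brute force on a small example. The 5-cycle below is connected and triangle-free with n = m = 5, and the bound (4n−m−1)/7 = 2 is attained exactly; the code is our illustration, not the paper's proof technique:

```python
from itertools import combinations

def independence_number(vertices, edges):
    """Size of a largest independent set, by brute force (tiny graphs only)."""
    edgeset = {frozenset(e) for e in edges}
    for k in range(len(vertices), 0, -1):
        for cand in combinations(vertices, k):
            # cand is independent iff no pair of its vertices is an edge
            if all(frozenset(p) not in edgeset for p in combinations(cand, 2)):
                return k
    return 0

# C5: connected, triangle-free, n = 5 vertices, m = 5 edges.
verts = list(range(5))
edges = [(i, (i + 1) % 5) for i in range(5)]
alpha = independence_number(verts, edges)
n, m = 5, 5
print(alpha, (4 * n - m - 1) / 7)  # 2 2.0 -> bound met with equality
```

Odd cycles are the natural tightness candidates here: they are triangle-free for length at least 5, yet their independence number stays just below half the order.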
Proton transport properties of poly(aspartic acid) with different average molecular weights
Energy Technology Data Exchange (ETDEWEB)
Nagao, Yuki, E-mail: ynagao@kuchem.kyoto-u.ac.j [Department of Mechanical Systems and Design, Graduate School of Engineering, Tohoku University, 6-6-01 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Imai, Yuzuru [Institute of Development, Aging and Cancer (IDAC), Tohoku University, 4-1 Seiryo-cho, Aoba-ku, Sendai 980-8575 (Japan); Matsui, Jun [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan); Ogawa, Tomoyuki [Department of Electronic Engineering, Graduate School of Engineering, Tohoku University, 6-6-05 Aoba Aramaki, Aoba-ku, Sendai 980-8579 (Japan); Miyashita, Tokuji [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 2-1-1 Katahira, Sendai 980-8577 (Japan)
2011-04-15
Research highlights: Seven polymers with different average molecular weights were synthesized. The proton conductivity depended on the number-average degree of polymerization. The difference in the proton conductivities was more than one order of magnitude. The number-average molecular weight contributed to the stability of the polymer. - Abstract: We synthesized seven partially protonated poly(aspartic acids)/sodium polyaspartates (P-Asp) with different average molecular weights to study their proton transport properties. The number-average degree of polymerization (DP) for each P-Asp was 30 (P-Asp30), 115 (P-Asp115), 140 (P-Asp140), 160 (P-Asp160), 185 (P-Asp185), 205 (P-Asp205), and 250 (P-Asp250). The proton conductivity depended on the number-average DP. The maximum and minimum proton conductivities at a relative humidity of 70% and 298 K were 1.7 × 10⁻³ S cm⁻¹ (P-Asp140) and 4.6 × 10⁻⁴ S cm⁻¹ (P-Asp250), respectively. Differential thermogravimetric analysis (TG-DTA) was carried out for each P-Asp. The results fell into two categories: one group exhibited two endothermic peaks between 270 °C and 300 °C, the other exhibited only one peak. The P-Asp group with two endothermic peaks exhibited high proton conductivity. The high proton conductivity is related to the stability of the polymer. The number-average molecular weight also contributed to the stability of the polymer.
Lagrangian averaging with geodesic mean.
Oliver, Marcel
2017-11-01
This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
Averaging in spherically symmetric cosmology
International Nuclear Information System (INIS)
Coley, A. A.; Pelavas, N.
2007-01-01
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must take the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.
The Effect of the Minimum Compensating Cash Balance on School District Investments.
Dembowski, Frederick L.
Banks are usually reimbursed for their checking account services either by a fixed service charge or by requiring a minimum or minimum-average compensating cash balance. This paper demonstrates how to determine the optimal minimum balance for a school district to maintain in its account. It is assumed that both the bank and the school district use…
Averaging models: parameters estimation with the R-Average procedure
Directory of Open Access Journals (Sweden)
S. Noventa
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
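As a rough illustration of the averaging model the procedure estimates (a sketch of Anderson's averaging rule, not the R-Average code itself; names and the initial-state term (w0, s0) are ours):

```python
# Averaging model: the judged response is a weighted average of the
# attribute scale values s_i with weights w_i, optionally including
# an initial-state impression (w0, s0).
def averaging_response(weights, scale_values, w0=0.0, s0=0.0):
    numerator = w0 * s0 + sum(w * s for w, s in zip(weights, scale_values))
    denominator = w0 + sum(weights)
    return numerator / denominator

# Two equally weighted attributes simply average their scale values:
print(averaging_response([1.0, 1.0], [4.0, 8.0]))  # → 6.0
```

Because the response is a ratio of weighted sums, unequal weights shift the result toward the more heavily weighted attribute without any extra interaction terms, which is the point made above.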
Directory of Open Access Journals (Sweden)
Walter J. Gomes
2004-12-01
induce a chronic myocardial inflammatory reaction. We report on a modification of the ventricular reconstruction technique that eliminates the need of intraventricular patches and the use of synthetic material. METHOD: Eleven consecutive patients presenting with left ventricular aneurysms, evolving to functional class III and IV of the New York Heart Association, underwent direct left ventricular reconstruction surgery without the use of intraventricular patches or prosthetic strips. RESULTS: There was no operative mortality or need of mechanical circulatory support. The postoperative hospital stay ranged from 4 to 7 days (average 5.3 ± 1.1 days). The serial echocardiogram control showed reduction of the left ventricular diastolic diameter (from 69.0 ± 7.5 mm preoperatively to 62.6 ± 5.1 mm postoperatively). The left ventricular ejection fraction increased from 47.3% ± 6.6% to 56.3% ± 10.5%. One-year follow-up revealed eight patients in functional class I and three in class II. CONCLUSION: This technique, with elimination of prosthetic materials, could contribute to an improvement of the clinical results in patients who undergo left ventricular reconstruction, providing virtual elimination of left ventricular akinetic areas and potentially attenuating the long-term myocardial chronic inflammatory reaction.
Distribution of phytoplankton groups within the deep chlorophyll maximum
Latasa, Mikel; Cabello, Ana María; Moran, Xose Anxelu G.; Massana, Ramon; Scharek, Renate
2016-01-01
and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer
Solar wind and coronal structure near sunspot minimum: Pioneer and SMM observations from 1985-1987
International Nuclear Information System (INIS)
Mihalov, J.D.; Barnes, A.; Hundhausen, A.J.; Smith, E.J.
1990-01-01
The solar wind speeds observed in the outer heliosphere (20 to 40 AU heliocentric distance, approximately) by Pioneers 10 and 11, and at a heliocentric distance of 0.7 AU by the Pioneer Venus spacecraft, reveal a complex set of changes in the years near the recent sunspot minimum, 1985-1987. The pattern of recurrent solar wind streams, the long-term average speed, and the sector polarity of the interplanetary magnetic field all changed in a manner suggesting both a temporal variation and a changing dependence on heliographic latitude. Coronal observations made from the Solar Maximum Mission spacecraft during the same epoch show a systematic variation in coronal structure and (by implication) the magnetic structure imposed on the expanding solar wind. These observations suggest interpretation of the solar wind speed variations in terms of the familiar model where the speed increases with distance from a nearly flat interplanetary current sheet (or with heliomagnetic latitude), and where this current sheet becomes aligned with the solar equatorial plane as sunspot minimum approaches, but deviates rapidly from that orientation after minimum. The authors confirm here that this basic organization of the solar wind speed persists in the outer heliosphere, with an orientation of the neutral sheet consistent with that inferred, at a heliocentric distance of a few solar radii, from the coronal observations.
Evaluations of average level spacings
International Nuclear Information System (INIS)
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables
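For orientation, the naive estimate that the reviewed methods refine can be sketched as follows (ours; real evaluations must correct for missed and spurious levels and parity mixing, e.g. via fits to the truncated Porter-Thomas distribution):

```python
# Naive mean level spacing <D> from an ordered, complete sequence of
# resonance energies: average the nearest-neighbour spacings, i.e.
# (E_max - E_min) / (N - 1).
def average_level_spacing(levels):
    levels = sorted(levels)
    spacings = [b - a for a, b in zip(levels, levels[1:])]
    return sum(spacings) / len(spacings)

# Four levels spanning 12 eV give a mean spacing of 4 eV:
print(average_level_spacing([0.0, 4.1, 7.9, 12.0]))
```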
Split-plot fractional designs: Is minimum aberration enough?
DEFF Research Database (Denmark)
Kulahci, Murat; Ramirez, Jose; Tobias, Randy
2006-01-01
Split-plot experiments are commonly used in industry for product and process improvement. Recent articles on designing split-plot experiments concentrate on minimum aberration as the design criterion. Minimum aberration has been criticized as a design criterion for completely randomized fractional factorial designs, and alternative criteria, such as the maximum number of clear two-factor interactions, have been suggested (Wu and Hamada (2000)). The need for alternatives to minimum aberration is even more acute for split-plot designs. In a standard split-plot design, there are several types of two-factor interactions … for completely randomized designs. Consequently, we provide a modified version of the maximum number of clear two-factor interactions design criterion to be used for split-plot designs.
Ergodic averages via dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...
Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.
Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N
2014-01-01
Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 and 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
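The abstract does not spell out its growth model; purely as a hypothetical illustration, a von Bertalanffy curve with parameters we chose to roughly match the reported figures (about 340 cm TL at age 5, asymptotic size 403 cm TL) looks like:

```python
import math

# Hypothetical von Bertalanffy growth curve,
# L(t) = Linf * (1 - exp(-K * (t - t0))).
# linf and k below are assumed values picked to match the abstract's
# reported figures, not the study's fitted parameters.
def vbgf_length(age, linf=403.0, k=0.37, t0=0.0):
    return linf * (1.0 - math.exp(-k * (age - t0)))

# With these assumptions, length at age 5 comes out near 340 cm TL,
# and length approaches the 403 cm asymptote at older ages.
print(round(vbgf_length(5)))
```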
High average power supercontinuum sources
Indian Academy of Sciences (India)
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.
Rosenfeld, Adar; Dorman, Michael; Schwartz, Joel; Novack, Victor; Just, Allan C; Kloog, Itai
2017-11-01
Meteorological stations measure air temperature (Ta) accurately with high temporal resolution, but usually suffer from limited spatial resolution due to their sparse distribution across rural, undeveloped or less populated areas. Remote sensing satellite-based measurements provide daily surface temperature (Ts) data in high spatial and temporal resolution and can improve the estimation of daily Ta. In this study we developed spatiotemporally resolved models which allow us to predict three daily parameters: Ta Max (day time), 24 h mean, and Ta Min (night time) on a fine 1 km grid across the state of Israel. We used and compared both the Aqua and Terra MODIS satellites. We used linear mixed effect models, IDW (inverse distance weighted) interpolations and thin plate splines (using a smooth nonparametric function of longitude and latitude) to first calibrate between Ts and Ta in those locations where we have available data for both, and used that calibration to fill in neighboring cells without surface monitors or missing Ts. Out-of-sample ten-fold cross validation (CV) was used to quantify the accuracy of our predictions. Our model performance was excellent for both days with and without available Ts observations for both Aqua and Terra (CV Aqua R² results: min 0.966, mean 0.986, max 0.967; CV Terra R² results: min 0.965, mean 0.987, max 0.968). Our research shows that daily min, mean and max Ta can be reliably predicted using daily MODIS Ts data even across Israel, with high accuracy even for days without Ta or Ts data. These predictions can be used as three separate Ta exposures in epidemiology studies for better diurnal exposure assessment.
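The cross-validated R² scores quoted above can be computed from held-out predictions with a few lines (pure-Python sketch, names ours):

```python
# Coefficient of determination R^2 between observed and predicted
# values: 1 - SS_res / SS_tot.
def r_squared(observed, predicted):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Toy example: near-perfect predictions score close to 1.
print(r_squared([10.0, 20.0, 30.0], [11.0, 19.0, 30.0]))  # → 0.99
```

In a ten-fold CV this is evaluated on each held-out fold's predictions, so the score reflects out-of-sample accuracy rather than in-sample fit.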
Relationship between the minimum and maximum temperature thresholds for development in insects
Czech Academy of Sciences Publication Activity Database
Dixon, Anthony F. G.; Honěk, A.; Keil, P.; Kotela, M.A.A.; Šizling, A. L.; Jarošík, Vojtěch
2009-01-01
Vol. 23, No. 2 (2009), pp. 257-264. ISSN 0269-8463. R&D Projects: GA MŠk(CZ) LC06073. Institutional research plan: CEZ:AV0Z60870520; CEZ:AV0Z60050516. Keywords: distribution; insects; thermal requirements for development; thermal window; thermal tolerance range; ectotherms. Subject RIV: EG - Zoology. Impact factor: 4.546, year: 2009
On the maximum and minimum of two modified Gamma-Gamma variates with applications
Al-Quwaiee, Hessa; Ansari, Imran Shafique; Alouini, Mohamed-Slim
2014-01-01
on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity undergoing independent but not necessarily identically distributed Gamma-Gamma fading under the impact of pointing errors and of (ii
International Nuclear Information System (INIS)
Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Xie, Wu; Yan, Xiaozhen; Xu, Zhen
2016-01-01
Band selection is considered to be an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, MRMR difference and MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (the neighborhood dependency measure based algorithm, the genetic algorithm, and the uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forests (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvement in band selection and classification accuracy. (paper)
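The forward greedy search under the MRMR-difference criterion can be sketched as below (names ours; `relevance` and `redundancy` stand in for the neighborhood mutual information measures the paper defines):

```python
# Forward greedy MRMR-difference selection: repeatedly add the band
# maximizing relevance(b) minus its mean redundancy with the bands
# already selected.
def mrmr_forward_select(bands, relevance, redundancy, k):
    selected, candidates = [], list(bands)
    while candidates and len(selected) < k:
        def score(b):
            if not selected:
                return relevance(b)
            mean_red = sum(redundancy(b, s) for s in selected) / len(selected)
            return relevance(b) - mean_red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy run: band 1 is relevant but fully redundant with band 0, so the
# search prefers the weaker yet non-redundant band 2.
rel = {0: 0.9, 1: 0.8, 2: 0.1}
red = lambda a, b: 1.0 if {a, b} == {0, 1} else 0.0
print(mrmr_forward_select([0, 1, 2], rel.get, red, k=2))  # → [0, 2]
```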
Maximum tech, minimum time. Response and cleanup of the Fidalgo Bay oil spill
International Nuclear Information System (INIS)
Pintler, L.R.
1991-01-01
A booster pump failure on a pipeline at Texaco's Anacortes refinery spilled more than 17,000 gallons of oil into Fidalgo Bay. A description is given of the spill control measures taken under Texaco's Spill Prevention and Control Countermeasures and facility contingency plans. The spill was addressed quickly, and containment booms were used to cordon off the spill. Vacuum trucks, rope mop machines and disk skimmers were used to collect the thickest concentrations of oil, and the oil and water collected was separated at the refinery's wastewater treatment centre. Nonwoven polypropylene sorbent pads, sweeps, booms and oil snares were used to clean up thinner concentrations of oil. Essential steps for a smooth spill response include the following: a comprehensive spill prevention and control countermeasures plan, training and regular drills and testing; immediate notification of appropriate regulatory agencies and company emergency response personnel; and the use of professional oil spill management contractors to assist in spill cleanup. 2 figs
The effect of land use change to maximum and minimum discharge in Cikapundung River Basin
Kuntoro, Arno Adi; Putro, Anton Winarto; Kusuma, M. Syahril B.; Natasaputra, Suardi
2017-11-01
Land use change has become an issue for many river basins in the world, including the Cikapundung River Basin in West Java. The Cikapundung River is one of the main water sources of the Bandung City water supply system. On the other hand, as one of the tributaries of the Citarum River, the Cikapundung also contributes to flooding in the southern part of Bandung. It is therefore important to analyze the effect of land use change on Cikapundung river discharge, to maintain the reliability of the water supply system and to minimize flooding in the Bandung Basin. The 2009 land use map of the Cikapundung River Basin shows that residential area (49.7%) and mixed farming (42.6%) are the most dominant land use types, while dry agriculture (19.4%) and forest (21.8%) cover the rest. The effect of land use change in the Cikapundung River Basin is simulated using the Hydrological Simulation Program FORTRAN (HSPF) under three land use change scenarios: extreme, optimum, and existing. Using the calibrated parameters, simulation of the extreme land use change scenario, with forest area decreased by 77.7% and developed area increased by 57.0% relative to the existing condition, increased the Qmax/Qmin ratio from 5.24 to 6.10. Meanwhile, simulation of the optimum land use change scenario, with forest area expanded by 75.26% relative to the existing condition, decreased the Qmax/Qmin ratio from 5.24 to 4.14. Although the Qmax/Qmin ratio of the Cikapundung is still relatively small, the simulations show the importance of water resources analysis in providing a river health indicator as input for land use planning.
"A minimum of urbanism and a maximum of ruralism": the Cuban experience.
Gugler, J
1980-01-01
The case of Cuba provides social scientists with reasonably good information on urbanization policies and their implementation in 1 developing country committed to socialism. The demographic context is considered, and Cuban efforts to eliminate the rural-urban contradiction and to redefine the role of Havana are described. The impact of these policies is analyzed in terms of available data on urbanization patterns since January 1959 when the revolutionaries marched into Havana. Prerevolutionary urbanization trends are considered. Fertility in Cuba has declined simultaneously with mortality and even more rapidly. Projections assume a 1.85% annual growth rate, resulting in a population of nearly 15 million by the year 2000. Any estimate regarding the future trend in population growth must depend on prognosis of general living conditions and of specific government policies regarding contraception, abortion, female labor force participation, and child care facilities. If population growth in Cuba has been substantial, but less dramatic than that of many other developing countries, urban growth presents a similar picture. Cuba's highest rate of growth of the population living in urban centers with a population over 20,000, in any intercensal period during the 20th century, was 4.1%/year for 1943-1953. It dropped to 3.0% in the 1953-1970 period. Government policies achieved a measure of success in stemming the tide of rural-urban migration, but the aims of the revolutionary leadership went further. The objective was for urban dwellers to be involved in agriculture, and the living standards of the rural population were to be raised to approximate those of city dwellers. The goal of "urbanizing" the countryside found expression in a program designed to construct new small towns which could more easily be provided with services. A slowdown in the growth of Havana, and the concomitant weakening of its dominant position, was intended by the revolutionary leadership. 
Official policies have been enunciated that connect the reduction in the dominance of Havana with the slowdown in urban growth and the urbanization of the countryside. Evidence is presented which suggests achievements along all of these dimensions, but by 1970 they were as yet quite limited.
Wage and Labor Standards Administration (DOL), Washington, DC.
This report describes the 1966 amendments to the Fair Labor Standards Act and summarizes the findings of three 1969 studies of the economic effects of these amendments. The studies found that economic growth continued through the third phase of the amendments, beginning February 1, 1969, despite increased wage and hours restrictions for recently…
Gurung, Prabin
2015-01-01
The thesis was written in order to find workable ideas and techniques of ecotourism for sustainable development and to establish the importance of ecotourism. It illustrates how ecotourism can play a beneficial role for visitors and local people. The thesis was based on ecotourism and its impact; the case study was Sauraha and Chitwan National Park. How can ecotourism be fruitful to local residents and nature, and what are the drawbacks of ecotourism? Ecotourism also has negative impacts on both th...
Imber, S. M.; Milan, S. E.; Lester, M.
2012-04-01
We present a long-term study, from 1995 to 2011, of the latitude of the Heppner-Maynard Boundary (HMB) determined using the northern hemisphere SuperDARN radars. The HMB represents the equatorward extent of ionospheric convection. We find that the average latitude of the HMB at midnight is 61° magnetic latitude during the solar maximum of 2003, but it moves significantly poleward during solar minimum, averaging 64° latitude during 1996, and 68° during 2010. This poleward motion is observed despite the increasing number of low-latitude radars built in recent years as part of the StormDARN network, and so is not an artefact of data coverage. We believe that the recent extreme solar minimum led to an average HMB location that was further poleward than in previous solar cycles. We also calculated the open-closed field line boundary (OCB) from auroral images during the years 2000-2002 and find that on average the HMB is located equatorward of the OCB by ~6°. We suggest that the HMB may be a useful proxy for the OCB when global auroral images are not available.
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Directory of Open Access Journals (Sweden)
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x′ (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit features similar to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
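The discrete-time case can be probed numerically; a sketch (parameter values ours, chosen small enough for the weak principle to hold) iterating the lattice Nagumo equation with bistable f(u) = λu(1 − u)(u − a) on a periodic lattice and checking that iterates stay within the range of the initial data:

```python
# One discrete-time step of
# u_x(t+1) = u_x + k*(u_{x-1} - 2u_x + u_{x+1}) + f(u_x)
# with the bistable Nagumo nonlinearity (periodic boundary).
def nagumo_step(u, k=0.2, lam=0.5, a=0.3):
    f = lambda v: lam * v * (1.0 - v) * (v - a)
    n = len(u)
    return [u[x] + k * (u[x - 1] - 2.0 * u[x] + u[(x + 1) % n]) + f(u[x])
            for x in range(n)]

u = [0.0, 0.1, 0.9, 1.0, 0.5, 0.2]   # initial data in [0, 1]
for _ in range(100):
    u = nagumo_step(u)
# Weak maximum/minimum principle: for these k, lam, a the iterates
# remain in [0, 1]; larger time steps can violate this.
assert all(0.0 <= v <= 1.0 for v in u)
```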
When good = better than average
Directory of Open Access Journals (Sweden)
Don A. Moore
2007-10-01
People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.
Autoregressive Moving Average Graph Filtering
Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert
2016-01-01
One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
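A first-order instance of such a recursion is easy to sketch (notation ours; S is the graph shift operator, and the iteration converges when |ψ|·||S|| < 1, applying the rational response φ/(1 − ψλ) to each eigenvalue λ of S):

```python
# ARMA(1) graph filter: iterate y <- psi * (S @ y) + phi * x until
# the recursion reaches its steady state.
def arma1_graph_filter(shift, x, psi, phi, iters=200):
    n = len(x)
    y = [0.0] * n
    for _ in range(iters):
        sy = [sum(shift[i][j] * y[j] for j in range(n)) for i in range(n)]
        y = [psi * sy[i] + phi * x[i] for i in range(n)]
    return y

# Diagonal S with eigenvalue 0.5: the steady state applies the
# response 1 / (1 - 0.5*0.5) = 4/3 to each component of x.
S = [[0.5, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 0.5]]
y = arma1_graph_filter(S, [3.0, 0.0, 0.0], psi=0.5, phi=1.0)
print(round(y[0], 6))  # → 4.0
```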
Averaging Robertson-Walker cosmologies
International Nuclear Information System (INIS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-01-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω⁰_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.
Minimum Q Electrically Small Antennas
DEFF Research Database (Denmark)
Kim, O. S.
2012-01-01
Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions … for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka = 0.254) exhibits a Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q.
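For context, the Chu lower bound referenced above is, in its usual single-mode form, Q_Chu = 1/(ka)³ + 1/(ka); a Q below this bound is possible here because the minimum Q for combined TM/TE modes is lower still. A quick check at the quoted ka (a sketch):

```python
# Single-mode Chu lower bound on radiation Q for an antenna fitting
# inside a sphere of electrical radius ka.
def chu_q(ka: float) -> float:
    return 1.0 / ka**3 + 1.0 / ka

# At ka = 0.254 the bound is about 65, so 0.66 times it (the reported
# antenna Q) is roughly 43.
q = chu_q(0.254)
print(round(q, 1), round(0.66 * q, 1))
```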
Zipf's law, power laws and maximum entropy
International Nuclear Information System (INIS)
Visser, Matt
2013-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
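The construction described can be sketched directly: over k = 1..N, maximizing Shannon entropy subject only to a fixed average of log k gives p(k) ∝ k^(−α), with α the Lagrange multiplier of the constraint (the values below are ours):

```python
import math

# Maximum-entropy distribution with <log k> fixed: a power law
# p(k) = k**(-alpha) / Z, alpha being the Lagrange multiplier of the
# single log constraint.
def power_law(alpha, n):
    weights = [k ** -alpha for k in range(1, n + 1)]
    z = sum(weights)
    return [w / z for w in weights]

p = power_law(alpha=2.0, n=1000)
mean_log = sum(pk * math.log(k) for k, pk in enumerate(p, start=1))
# Among all distributions on 1..1000 with this value of <log k>,
# p has the largest Shannon entropy.
```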
Feedback brake distribution control for minimum pitch
Tavernini, Davide; Velenis, Efstathios; Longo, Stefano
2017-06-01
The distribution of brake forces between front and rear axles of a vehicle is typically specified such that the same level of brake force coefficient is imposed at both front and rear wheels. This condition is known as 'ideal' distribution and it is required to deliver the maximum vehicle deceleration and minimum braking distance. For subcritical braking conditions, the deceleration demand may be delivered by different distributions between front and rear braking forces. In this research we show how to obtain the optimal distribution which minimises the pitch angle of a vehicle and hence enhances driver subjective feel during braking. A vehicle model including suspension geometry features is adopted. The problem of the minimum pitch brake distribution for a varying deceleration level demand is solved by means of a model predictive control (MPC) technique. To address the problem of the undesirable pitch rebound caused by a full-stop of the vehicle, a second controller is designed and implemented independently from the braking distribution in use. An extended Kalman filter is designed for state estimation and implemented in a high fidelity environment together with the MPC strategy. The proposed solution is compared with the reference 'ideal' distribution as well as another previous feed-forward solution.
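The 'ideal' distribution described above can be illustrated numerically (a minimal sketch with invented vehicle parameters, not the paper's MPC controller): imposing the same brake force coefficient at both axles means each axle's brake force is proportional to its dynamic normal load, including longitudinal load transfer.

```python
# Hedged sketch of the 'ideal' front/rear brake distribution. All vehicle
# parameters below are illustrative assumptions, not values from the paper.
def ideal_distribution(m, a, wheelbase, b_front, h_cg, g=9.81):
    """Front/rear axle brake forces for deceleration a (m/s^2).
    b_front: CG-to-front-axle distance (m), h_cg: CG height (m)."""
    b_rear = wheelbase - b_front
    # Dynamic normal loads, including longitudinal load transfer m*a*h/L:
    N_f = m * g * b_rear / wheelbase + m * a * h_cg / wheelbase
    N_r = m * g * b_front / wheelbase - m * a * h_cg / wheelbase
    mu = a / g  # the common brake force coefficient at both axles
    return mu * N_f, mu * N_r

F_f, F_r = ideal_distribution(m=1500.0, a=5.0, wheelbase=2.7,
                              b_front=1.2, h_cg=0.55)
print(round(F_f + F_r, 1))  # → 7500.0 (= m·a, the required total brake force)
```

Because the load-transfer terms cancel in the sum, the two axle forces always add up to m·a, while the front axle carries the larger share.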
A high speed digital signal averager for pulsed NMR
International Nuclear Information System (INIS)
Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.
1978-01-01
A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs per point and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
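The quoted 36 dB figure is consistent with coherent averaging: amplitude S/N grows as √N, so the instrument's maximum of 2^12 sweeps gives 20·log10(√4096) ≈ 36 dB. A minimal sketch (illustrative, not the instrument's firmware):

```python
import math

# 'Stable averaging' keeps a calibrated running mean after every sweep,
# rather than a growing sum (sketch; channel values are illustrative).
def stable_average(sweeps):
    avg = [0.0] * len(sweeps[0])
    for n, sweep in enumerate(sweeps, start=1):
        avg = [a + (s - a) / n for a, s in zip(avg, sweep)]  # running mean
    return avg

# Coherent averaging of N sweeps improves the amplitude S/N by sqrt(N):
def snr_gain_db(n_sweeps):
    return 20 * math.log10(math.sqrt(n_sweeps))

print(round(snr_gain_db(2 ** 12), 1))  # → 36.1, matching the quoted 36 dB
```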
SS Cygni: The accretion disk in eruption and at minimum light
International Nuclear Information System (INIS)
Kiplinger, A.L.
1979-01-01
Absolute spectrophotometric observations of the dwarf nova SS Cygni have been obtained at maximum light, during the subsequent decline, and at minimum light. In order to provide a critical test of accretion disk theory, a model for a steady-state α-model accretion disk has been constructed which utilizes a grid of stellar energy distributions to synthesize the disk flux. Physical parameters for the accretion disk at maximum light are set by estimates of the intrinsic luminosity of the system that result from a desynthesis of a composite minimum light energy distribution. At maximum light, agreements between observational and theoretical continuum slopes and the Balmer jump are remarkably good. The model fails, however, during the eruption decline and at minimum light. It appears that the physical character of an accretion disk at minimum light must radically differ from the disk observed at maximum light.
Fermat and the Minimum Principle
Indian Academy of Sciences (India)
Arguably, least-action and minimum principles were proposed or applied much earlier. Such principles are among the fundamental, unifying and organizing ideas used to describe a variety of natural phenomena. They consider the amount of energy expended in performing a given action to be the least required ...
Coupling between minimum scattering antennas
DEFF Research Database (Denmark)
Andersen, J.; Lessow, H.; Schjær-Jacobsen, Hans
1974-01-01
Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
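The idea of the trajectory averaging estimator can be sketched on a toy Robbins-Monro recursion (an illustration of Polyak-Ruppert averaging, not the SAMCMC algorithm itself): average the whole trajectory of iterates rather than taking only the final iterate.

```python
import random

# Toy Robbins-Monro recursion estimating the root of h(theta) = theta - 2
# from noisy evaluations, with a running trajectory average (sketch only;
# the target and noise model are invented for illustration).
random.seed(0)
theta, trajectory_mean = 0.0, 0.0
for n in range(1, 20001):
    noisy_h = (theta - 2.0) + random.gauss(0.0, 1.0)   # noisy mean field
    theta -= noisy_h / n ** 0.7                        # slowly decaying gain
    trajectory_mean += (theta - trajectory_mean) / n   # trajectory average
print(abs(trajectory_mean - 2.0) < 0.1)                # averaged iterate is close
```

The slowly decaying step size n^(-0.7) alone leaves noticeable noise in the final iterate; averaging the trajectory recovers the efficient 1/√n rate.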
Chaotic Universe, Friedmannian on the average 2
Energy Technology Data Exchange (ETDEWEB)
Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij
1980-11-01
The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solutions depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes that remain finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. The restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.
Averaging and sampling for magnetic-observatory hourly data
Directory of Open Access Journals (Sweden)
J. J. Love
2010-11-01
A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
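The spot-sample versus boxcar-average comparison above can be illustrated with synthetic data (invented values, not the observatory records): one day of noisy 1-min values reduced to 24 hourly values both ways.

```python
import random
from statistics import pstdev

# One synthetic day of 1-min values (white noise stands in for the
# short-period geomagnetic variation; amplitudes are illustrative).
random.seed(1)
minutes = [random.gauss(0.0, 5.0) for _ in range(24 * 60)]  # nT, say
hours = [minutes[h * 60:(h + 1) * 60] for h in range(24)]

spot = [hour[0] for hour in hours]            # instantaneous "spot" sample
boxcar = [sum(hour) / 60 for hour in hours]   # simple 1-h "boxcar" average

# The boxcar average suppresses short-period variance that spot samples
# retain (and can alias into the hourly series):
print(pstdev(spot) > pstdev(boxcar))
```

For uncorrelated noise the hourly average reduces the standard deviation by roughly √60, which is why spot samples preserve the amplitude range while averages do not.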
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other phenomena in the atmosphere. In fact, extreme weather due to global warming can lead to drought, flood, hurricanes and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather with distinctive output, particularly a GIS-based mapping process with information about the current weather status at certain coordinates of each region and the capability to forecast seven days ahead. The data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated by the mean square error (MSE). The MSE is 0.28 for minimum temperature and 0.15 for maximum temperature. Meanwhile, the MSE for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error, the better the accuracy.
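The BMA combination step can be sketched in a few lines (weights, member forecasts and observations below are invented for illustration; the paper's weight training is not reproduced): the BMA point forecast is the posterior-weighted mean of the member-model forecasts.

```python
# Minimal Bayesian Model Averaging sketch: combine member forecasts with
# posterior model weights that sum to one, then score with MSE.
def bma_forecast(member_forecasts, weights):
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * f for w, f in zip(weights, member_forecasts))

def mse(predicted, observed):
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

members = [[22.0, 23.5], [23.0, 24.5]]   # two hypothetical models, deg C
obs = [22.5, 24.0]                       # hypothetical observations
combined = [bma_forecast(pair, [0.6, 0.4]) for pair in zip(*members)]
print(round(mse(combined, obs), 3))      # → 0.01
```

With these invented weights the combined forecast lies between the two members and scores a lower MSE than either member alone.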
Topological quantization of ensemble averages
International Nuclear Information System (INIS)
Prodan, Emil
2009-01-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states
Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to a different extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
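For contrast with the FTDA, conventional synchronous TDA can be sketched in a few lines (illustrative signal and period, assuming an exact integer period in samples, i.e. the PCE-free case):

```python
import math

# Conventional time domain averaging: average n_periods consecutive blocks
# of `period` samples. Components synchronous with the period survive;
# asynchronous interference averages toward zero (comb-filter behaviour).
def tda(signal, period, n_periods):
    return [sum(signal[k * period + i] for k in range(n_periods)) / n_periods
            for i in range(period)]

period, n = 100, 50
sig = [math.sin(2 * math.pi * i / period)       # synchronous unit component
       + math.sin(2 * math.pi * i / 37.0)       # asynchronous interference
       for i in range(period * n)]
avg = tda(sig, period, n)
peak = max(abs(v) for v in avg)
print(0.9 < peak < 1.1)  # the unit-amplitude periodic component survives
```

When the true period is not an integer number of samples, the blocks drift out of phase, which is exactly the period cutting error the FTDA is designed to avoid.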
Directory of Open Access Journals (Sweden)
A. O. Akala
2013-11-01
GPS-TEC data were observed at the same local time at two equatorial stations on both longitudes: Lagos (6.52° N, 3.4° E, 3.04° S magnetic latitude), Nigeria; and Pucallpa (8.38° S, 74.57° W, 4.25° N magnetic latitude), Peru, during the minimum (2009, 2010) and ascending (2011) phases of solar cycle 24. These data were grouped into daily, seasonal and solar activity sets. The day-to-day variations in vertical TEC (VTEC) recorded the maximum during 14:00–16:00 LT and minimum during 04:00–06:00 LT at both longitudes. Seasonally, during solar minimum, maximum VTEC values were observed during the March equinox and minimum during solstices. However, during the ascending phase of the solar activity, the maximum values were recorded during the December solstice and minimum during the June solstice. VTEC also increased with solar activity at both longitudes. On longitude by longitude comparison, the African GPS station generally recorded higher VTEC values than the American GPS station. Furthermore, the harmonic analysis technique was used to extract the annual and semi-annual components of the amplitudes of the TEC series at both stations. The semi-annual variations dominated the TEC series over the African equatorial station, while the annual variations dominated those over the American equatorial station. The GPS-TEC-derived averages for non-storm days were compared with the corresponding values derived by the IRI-2007 with the NeQuick topside option. The NeQuick option of IRI-2007 showed better performance in the American sector than in the African sector, but generally underestimated TEC during the early morning hours at both longitudes.
Maximum Entropy in Drug Discovery
Directory of Open Access Journals (Sweden)
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Minimum airflow reset of single-duct VAV terminal boxes
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lower overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE standard 62.1 and maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and
40 CFR 1045.140 - What is my engine's maximum engine power?
2010-07-01
...) Maximum engine power for an engine family is generally the weighted average value of maximum engine power... engine family's maximum engine power apply in the following circumstances: (1) For outboard or personal... value for maximum engine power from all the different configurations within the engine family to...
The average Indian female nose.
Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh
2011-12-01
This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.
Quantum mechanics the theoretical minimum
Susskind, Leonard
2014-01-01
From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanics. Quantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman’s crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.
Minimum resolvable power contrast model
Qian, Shuai; Wang, Xia; Zhou, Jingjing
2018-01-01
Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, whether used alone or jointly, they cannot intuitively describe the overall performance of the system. Therefore, an index is proposed to reflect the comprehensive system performance: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve the human eye. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and considers attenuation by the atmosphere, the optical imaging system, and the detector. Combining the signal-to-noise ratio and the MTF, the minimum resolvable radiation performance contrast is obtained. Finally, the detection probability model of MRP is given.
Understanding the Minimum Wage: Issues and Answers.
Employment Policies Inst. Foundation, Washington, DC.
This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...
A Phosphate Minimum in the Oxygen Minimum Zone (OMZ) off Peru
Paulmier, A.; Giraud, M.; Sudre, J.; Jonca, J.; Leon, V.; Moron, O.; Dewitte, B.; Lavik, G.; Grasse, P.; Frank, M.; Stramma, L.; Garcon, V.
2016-02-01
The Oxygen Minimum Zone (OMZ) off Peru is known to be associated with the advection of Equatorial SubSurface Waters (ESSW), rich in nutrients and poor in oxygen, through the Peru-Chile UnderCurrent (PCUC), but this circulation remains to be refined within the OMZ. During the Pelágico cruise in November-December 2010, measurements of phosphate revealed the presence of a phosphate minimum (Pmin) at various hydrographic stations, which could not be explained so far and could be associated with a specific water mass. This Pmin, localized at a relatively constant layer ( 20minimum with a mean vertical phosphate decrease of 0.6 µM but highly variable between 0.1 and 2.2 µM. On average, these Pmin are associated with a predominant mixing of SubTropical Under- and Surface Waters (STUW and STSW: 20 and 40%, respectively) within ESSW ( 25%), complemented evenly by overlying (ESW, TSW: 8%) and underlying waters (AAIW, SPDW: 7%). The hypotheses and mechanisms leading to the Pmin formation in the OMZ are further explored and discussed, considering the physical regional contribution associated with various circulation pathways ventilating the OMZ and the local biogeochemical contribution including the potential diazotrophic activity.
Maximum stellar iron core mass
Indian Academy of Sciences (India)
Journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. F. W. Giacobbe, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low thermal inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr]
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
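A generic maximum-likelihood (EM) unfolding iteration for Poisson counts shares one property the theory above guarantees: the multiplicative update keeps the solution positive over the whole range. The sketch below (toy two-bin response, not the authors' combined MaxEnt/ML formulation) illustrates that iteration.

```python
# EM (MLEM) unfolding sketch for counts c = R @ phi with Poisson statistics.
# The response matrix and data are invented for illustration.
def em_unfold(response, counts, iterations=500):
    n = len(response[0])
    phi = [1.0] * n                                   # positive initial guess
    for _ in range(iterations):
        pred = [sum(response[i][j] * phi[j] for j in range(n))
                for i in range(len(counts))]
        # Multiplicative update: phi stays positive at every step.
        phi = [phi[j] * sum(response[i][j] * counts[i] / pred[i]
                            for i in range(len(counts)))
                      / sum(response[i][j] for i in range(len(counts)))
               for j in range(n)]
    return phi

R = [[0.8, 0.2], [0.2, 0.8]]                          # toy 2-bin response
counts = [0.8 * 10 + 0.2 * 30, 0.2 * 10 + 0.8 * 30]   # noise-free data
print([round(p, 1) for p in em_unfold(R, counts)])
```

With noise-free, consistent data the iteration recovers the underlying spectrum; with real Poisson-noisy data, regularization (e.g. the entropy prior above) is what controls the underdetermined case.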
Calculation of the minimum critical mass of fissile nuclides
International Nuclear Information System (INIS)
Wright, R.Q.; Hopper, Calvin Mitchell
2008-01-01
The OB-1 method for the calculation of the minimum critical mass of fissile actinides in metal/water systems was described in a previous paper. A fit to the calculated minimum critical mass data using the extended criticality parameter is the basis of the revised method. The solution density (grams/liter) for the minimum critical mass is also obtained by a fit to calculated values. Input to the calculation consists of the Maxwellian averaged fission and absorption cross sections and the thermal values of nubar. The revised method gives more accurate values than the original method does for both the minimum critical mass and the solution densities. The OB-1 method has been extended to calculate the uncertainties in the minimum critical mass for 12 different fissile nuclides. The uncertainties for the fission and capture cross sections and the estimated nubar uncertainties are used to determine the uncertainties in the minimum critical mass, either in percent or grams. Results have been obtained for U-233, U-235, Pu-236, Pu-239, Pu-241, Am-242m, Cm-243, Cm-245, Cf-249, Cf-251, Cf-253, and Es-254. Eight of these 12 nuclides are included in the ANS-8.15 standard.
International Nuclear Information System (INIS)
Um, Junshik; McFarquhar, Greg M.
2013-01-01
The optimal orientation averaging scheme (regular lattice grid scheme or quasi Monte Carlo (QMC) method), the minimum number of orientations, and the corresponding computing time required to calculate the average single-scattering properties (i.e., asymmetry parameter (g), single-scattering albedo (ω_0), extinction efficiency (Q_ext), scattering efficiency (Q_sca), absorption efficiency (Q_abs), and scattering phase function at scattering angles of 90° (P_11(90°)) and 180° (P_11(180°))) within a predefined accuracy level (i.e., 1.0%) were determined for four different nonspherical atmospheric ice crystal models (Gaussian random sphere, droxtal, budding Bucky ball, and column) with maximum dimension D = 10 μm using the Amsterdam discrete dipole approximation at λ = 0.55, 3.78, and 11.0 μm. The QMC required fewer orientations and less computing time than the lattice grid. The calculations of P_11(90°) and P_11(180°) required more orientations than the calculations of integrated scattering properties (i.e., g, ω_0, Q_ext, Q_sca, and Q_abs) regardless of the orientation averaging scheme. The fewest orientations were required for calculating g and ω_0. The minimum number of orientations and the corresponding computing time for single-scattering calculations decreased with an increase of wavelength, whereas they increased with the surface-area ratio that defines particle nonsphericity. -- Highlights: • The number of orientations required to calculate the average single-scattering properties of nonspherical ice crystals is investigated. • Single-scattering properties of ice crystals are calculated using ADDA. • The quasi Monte Carlo method is more efficient than the lattice grid method for scattering calculations. • Single-scattering properties of ice crystals depend on a newly defined parameter called the surface-area ratio
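The two orientation-sampling schemes being compared can be sketched as follows (a simplified illustration over Euler angles; the paper's actual grids and quadrature weights are not reproduced): a regular lattice grid enumerates all angle combinations, while a low-discrepancy (quasi-Monte Carlo) sequence fills the same space with far fewer points.

```python
# Regular lattice grid over Euler angles (alpha, beta, gamma), degrees.
def lattice_orientations(n_per_axis):
    step = 360.0 / n_per_axis
    return [(a * step, b * step / 2.0, g * step)
            for a in range(n_per_axis)
            for b in range(n_per_axis)
            for g in range(n_per_axis)]

# Radical-inverse (Halton) low-discrepancy point in [0, 1).
def halton(index, base):
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# Quasi-Monte Carlo orientation set: one low-discrepancy point per
# orientation instead of a full tensor-product grid.
def qmc_orientations(n):
    return [(360.0 * halton(i, 2), 180.0 * halton(i, 3), 360.0 * halton(i, 5))
            for i in range(1, n + 1)]

print(len(qmc_orientations(64)), len(lattice_orientations(8)))  # → 64 512
```

The lattice grows as the cube of the per-axis resolution, whereas the QMC set can be truncated at any length, which is one intuition for why QMC reaches a given accuracy with fewer orientations.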
Minimum ionizing particle detection using amorphous silicon diodes
Energy Technology Data Exchange (ETDEWEB)
Xi, J.; Hollingsworth, R.E.; Buitrago, R.H. (Glasstech Solar, Inc., Wheat Ridge, CO (USA)); Oakley, D.; Cumalat, J.P.; Nauenberg, U. (Colorado Univ., Boulder (USA). Dept. of Physics); McNeil, J.A. (Colorado School of Mines, Golden (USA). Dept. of Physics); Anderson, D.F. (Fermi National Accelerator Lab., Batavia, IL (USA)); Perez-Mendez, V. (Lawrence Berkeley Lab., CA (USA))
1991-03-01
Hydrogenated amorphous silicon pin diodes have been used to detect minimum ionizing electrons with a pulse height signal-to-noise ratio exceeding 3. A distinct signal was seen for shaping times from 100 to 3000 ns. The devices used had a 54 {mu}m thick intrinsic layer and an active area of 0.1 cm{sup 2}. The maximum signal was 3200 electrons with a noise width of 950 electrons for a shaping time of 250 ns. (orig.).
Power converter with maximum power point tracking MPPT for small wind-electric pumping systems
International Nuclear Information System (INIS)
Lara, David; Merino, Gabriel; Salazar, Lautaro
2015-01-01
Highlights: • We implement a small-power wind electric pumping system. • The power converter allowed the operating point of the electric pump to be changed. • Two control techniques were implemented in the power converter. • Variable V/f control increased the power generated by the permanent magnet generator. - Abstract: In this work, an AC–DC–AC direct-drive power converter was implemented for a wind electric pumping system consisting of a permanent magnet generator (PMG) of 1.3 kW and a peripheral single-phase pump of 0.74 kW. In addition, the inverter linear V/f control scheme and the maximum power point tracking (MPPT) algorithm with variable V/f were developed. The MPPT algorithm seeks to extract water over a wide range of input power, using the maximum amount of wind power available. Experimental trials at different pump pressures were conducted. With MPPT tracking and variable V/f, a power of 1.3 kW was obtained at a speed of 350 rpm and a maximum operating hydraulic head of 50 m. At lower operating heads (between 10 and 40 m), variable V/f control increases the power generated by the PMG compared to the linear V/f control. This increase ranged between 4% and 23% depending on the operating pressure, with an average of 13%, getting close to the maximum electrical power curve of the PMG. The pump was driven at variable frequency, reaching a minimum speed of 0.5 times the rated speed. The efficiency of the power converter ranges between 70% and 95%, with a power factor between 0.4 and 0.85, depending on the operating pressure
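A standard perturb-and-observe MPPT loop illustrates the tracking idea (a generic textbook technique on a toy power curve; the paper's variable V/f algorithm is analogous in spirit, not in detail):

```python
# Toy power curve with a single maximum at speed_ratio = 1.0 (illustrative,
# not a measured PMG characteristic).
def wind_power(speed_ratio):
    return max(0.0, 2.0 * speed_ratio - speed_ratio ** 2)

# Perturb-and-observe: keep stepping the operating point in the direction
# that increased power; reverse when power falls.
def perturb_and_observe(x0=0.2, step=0.05, iterations=100):
    x, p_prev, direction = x0, wind_power(x0), 1.0
    for _ in range(iterations):
        x += direction * step
        p = wind_power(x)
        if p < p_prev:
            direction = -direction   # power fell: reverse the perturbation
        p_prev = p
    return x

x_mpp = perturb_and_observe()
print(abs(x_mpp - 1.0) < 0.1)  # settles near the maximum power point
```

The operating point ends up oscillating within one step of the maximum, which is the characteristic steady-state behaviour (and limitation) of perturb-and-observe tracking.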
Applicability of the minimum entropy generation method for optimizing thermodynamic cycles
Institute of Scientific and Technical Information of China (English)
Cheng Xue-Tao; Liang Xin-Gang
2013-01-01
Entropy generation is often used as a figure of merit in thermodynamic cycle optimizations. In this paper, it is shown that the applicability of the minimum entropy generation method to optimizing output power is conditional. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power when the total heat into the system of interest is not prescribed. For the cycles whose working medium is heated or cooled by streams with prescribed inlet temperatures and prescribed heat capacity flow rates, it is theoretically proved that both the minimum entropy generation rate and the minimum entropy generation number correspond to the maximum output power when the virtual entropy generation induced by dumping the used streams into the environment is considered. However, the minimum principle of entropy generation is not tenable in the case that the virtual entropy generation is not included, because the total heat into the system of interest is not fixed. An irreversible Carnot cycle and an irreversible Brayton cycle are analysed. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power if the heat into the system of interest is not prescribed.
Applicability of the minimum entropy generation method for optimizing thermodynamic cycles
International Nuclear Information System (INIS)
Cheng Xue-Tao; Liang Xin-Gang
2013-01-01
Entropy generation is often used as a figure of merit in thermodynamic cycle optimizations. In this paper, it is shown that the applicability of the minimum entropy generation method to optimizing output power is conditional. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power when the total heat into the system of interest is not prescribed. For the cycles whose working medium is heated or cooled by streams with prescribed inlet temperatures and prescribed heat capacity flow rates, it is theoretically proved that both the minimum entropy generation rate and the minimum entropy generation number correspond to the maximum output power when the virtual entropy generation induced by dumping the used streams into the environment is considered. However, the minimum principle of entropy generation is not tenable in the case that the virtual entropy generation is not included, because the total heat into the system of interest is not fixed. An irreversible Carnot cycle and an irreversible Brayton cycle are analysed. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power if the heat into the system of interest is not prescribed.
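The condition the abstract identifies can be made explicit with the Gouy–Stodola theorem (standard notation, not the paper's):

```latex
\dot{W} = \dot{W}_{\mathrm{rev}} - T_0\,\dot{S}_{\mathrm{gen}}
```

Minimizing \(\dot{S}_{\mathrm{gen}}\) maximizes \(\dot{W}\) only if \(\dot{W}_{\mathrm{rev}}\) is held fixed, which requires the total heat into the system (and the reservoir temperatures) to be prescribed. When the heat input is free to vary, a design change may increase \(\dot{W}_{\mathrm{rev}}\) faster than \(T_0\,\dot{S}_{\mathrm{gen}}\) grows, so minimum entropy generation need not coincide with maximum output power.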
Average cross sections for the 252Cf neutron spectrum
International Nuclear Information System (INIS)
Dezso, Z.; Csikai, J.
1977-01-01
A number of average cross sections have been measured for 252Cf neutrons in (n,γ), (n,p), (n,2n), and (n,α) reactions by the activation method, and for fission by a fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb; as a function of target neutron number, the data increase up to about N=60, with minima near closed shells. The (n,p) values lie between 0.3 mb and 113 mb, and these cross sections decrease significantly with increasing threshold energy. The (n,2n) values are below 20 mb, and the (n,α) data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Z^(4/3)/A are shown. The results obtained are summarized in tables.
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden failure of pumps. Determination of maximum water hammer is considered one of the most important technical and economic items that engineers and designers of pumping stations and conveyance pipelines should take care of. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining significance of ...
Weakest solar wind of the space age and the current 'MINI' solar maximum
International Nuclear Information System (INIS)
McComas, D. J.; Angold, N.; Elliott, H. A.; Livadiotis, G.; Schwadron, N. A.; Smith, C. W.; Skoug, R. M.
2013-01-01
The last solar minimum, which extended into 2009, was especially deep and prolonged. Since then, sunspot activity has gone through a very small peak while the heliospheric current sheet achieved large tilt angles similar to prior solar maxima. The solar wind fluid properties and interplanetary magnetic field (IMF) have declined through the prolonged solar minimum and continued to be low through the current mini solar maximum. Compared to values typically observed from the mid-1970s through the mid-1990s, the following proton parameters are lower on average from 2009 through day 79 of 2013: solar wind speed and beta (∼11%), temperature (∼40%), thermal pressure (∼55%), mass flux (∼34%), momentum flux or dynamic pressure (∼41%), energy flux (∼48%), IMF magnitude (∼31%), and radial component of the IMF (∼38%). These results have important implications for the solar wind's interaction with planetary magnetospheres and the heliosphere's interaction with the local interstellar medium, with the proton dynamic pressure remaining near the lowest values observed in the space age: ∼1.4 nPa, compared to ∼2.4 nPa typically observed from the mid-1970s through the mid-1990s. The combination of lower magnetic flux emergence from the Sun (carried out in the solar wind as the IMF) and associated low power in the solar wind points to the causal relationship between them. Our results indicate that the low solar wind output is driven by an internal trend in the Sun that is longer than the ∼11 yr solar cycle, and they suggest that this current weak solar maximum is driven by the same trend.
The minimum yield in channeling
International Nuclear Information System (INIS)
Uguzzoni, A.; Gaertner, K.; Lulli, G.; Andersen, J.U.
2000-01-01
A first estimate of the minimum yield was obtained from Lindhard's theory, with the assumption of a statistical equilibrium in the transverse phase-space of channeled particles guided by a continuum axial potential. However, computer simulations have shown that this estimate should be corrected by a fairly large factor, C (approximately equal to 2.5), called the Barrett factor. We have shown earlier that the concept of a statistical equilibrium can be applied to understand this result, with the introduction of a constraint in phase-space due to planar channeling of axially channeled particles. Here we present an extended test of these ideas on the basis of computer simulation of the trajectories of 2 MeV α particles in Si. In particular, the gradual trend towards a full statistical equilibrium is studied. We also discuss the introduction of this modification of standard channeling theory into descriptions of the multiple scattering of channeled particles (dechanneling) by a master equation and show that the calculated minimum yields are in very good agreement with the results of a full computer simulation
International Nuclear Information System (INIS)
Kwee, Regina
2010-01-01
Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp collisions to perform first measurements on charged particle densities. These measurements will help to constrain various models that describe soft parton interactions phenomenologically. Understanding the trigger efficiencies for different event types is therefore crucial to minimizing any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.2 < |η| < 3.8 has proven to select pp collisions very efficiently, the Inner Detector based minimum bias trigger uses a random seed on filled bunches and central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements as well as studies on possible bias sources will be presented. We also highlight the advantage of these triggers for particle correlation analyses.
THE RISE AND FALL OF OPEN SOLAR FLUX DURING THE CURRENT GRAND SOLAR MAXIMUM
International Nuclear Information System (INIS)
Lockwood, M.; Rouillard, A. P.; Finch, I. D.
2009-01-01
We use geomagnetic activity data to study the rise and fall over the past century of the solar wind flow speed V_SW, the interplanetary magnetic field strength B, and the open solar flux F_S. Our estimates include allowance for the kinematic effect of longitudinal structure in the solar wind flow speed. As well as solar cycle variations, all three parameters show a long-term rise during the first half of the 20th century followed by peaks around 1955 and 1986 and then a recent decline. Cosmogenic isotope data reveal that this constitutes a grand maximum of solar activity which began in 1920, using the definition that such grand maxima are when 25-year averages of the heliospheric modulation potential exceed 600 MV. Extrapolating the linear declines seen in all three parameters since 1985 yields predictions that the grand maximum will end in the years 2013, 2014, or 2027 using V_SW, F_S, or B, respectively. These estimates are consistent with predictions based on the probability distribution of the durations of past grand solar maxima seen in cosmogenic isotope data. The data contradict any suggestions of a floor to the open solar flux: we show that the solar minimum open solar flux, kinematically corrected to allow for the excess flux effect, has halved over the past two solar cycles.
Hydraulic Limits on Maximum Plant Transpiration
Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.
2011-12-01
Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
Generic maximum likely scale selection
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale-invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based ...
Isoflurane minimum alveolar concentration reduction by fentanyl.
McEwan, A I; Smith, C; Dyar, O; Goodman, D; Smith, L R; Glass, P S
1993-05-01
Isoflurane is commonly combined with fentanyl during anesthesia. Because of hysteresis between plasma and effect site, bolus administration of fentanyl does not accurately describe the interaction between these drugs. The purpose of this study was to determine the MAC reduction of isoflurane by fentanyl when both drugs had reached steady biophase concentrations. Seventy-seven patients were randomly allocated to receive either no fentanyl or fentanyl at several predetermined plasma concentrations. Fentanyl was administered using a computer-assisted continuous infusion device. Patients were also randomly allocated to receive a predetermined steady state end-tidal concentration of isoflurane. Blood samples for fentanyl concentration were taken at 10 min after initiation of the infusion and before and immediately after skin incision. A minimum of 20 min was allowed between the start of the fentanyl infusion and skin incision. The reduction in the MAC of isoflurane by the measured fentanyl concentration was calculated using a maximum likelihood solution to a logistic regression model. There was an initial steep reduction in the MAC of isoflurane by fentanyl, with 3 ng/ml resulting in a 63% MAC reduction. A ceiling effect was observed with 10 ng/ml providing only a further 19% reduction in MAC. A 50% decrease in MAC was produced by a fentanyl concentration of 1.67 ng/ml. Defining the MAC reduction of isoflurane by all the opioids allows their more rational administration with inhalational anesthetics and provides a comparison of their relative anesthetic potencies.
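The paper fits a logistic regression by maximum likelihood; the reported numbers can be approximately reproduced with a simpler hyperbolic (Emax-type) curve anchored to the quoted 50%-reduction concentration. This is an illustrative assumption, not the paper's fitted model.

```python
def mac_reduction(c, c50=1.67, emax=1.0):
    """Fractional isoflurane MAC reduction vs. fentanyl plasma
    concentration c (ng/ml), as a hyperbolic Emax-type curve.
    c50 = 1.67 ng/ml is the concentration reported to halve MAC;
    emax = 1.0 assumes full suppression at infinite concentration."""
    return emax * c / (c50 + c)
```

With these assumed parameters, `mac_reduction(3.0)` gives roughly 0.64, close to the 63% reduction quoted at 3 ng/ml, and the flattening of the curve above ~10 ng/ml mirrors the ceiling effect the authors observed.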
Minimum Delay Moving Object Detection
Lao, Dong
2017-11-09
We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to background motion at some unknown time in the video, and the goal is to detect and segment the object as soon it moves in an online manner. Due to unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art.
Minimum Delay Moving Object Detection
Lao, Dong
2017-01-08
We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to background motion at some unknown time in the video, and the goal is to detect and segment the object as soon it moves in an online manner. Due to unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art.
Minimum Delay Moving Object Detection
Lao, Dong; Sundaramoorthi, Ganesh
2017-01-01
We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to background motion at some unknown time in the video, and the goal is to detect and segment the object as soon it moves in an online manner. Due to unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art.
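The delay-versus-false-alarm tradeoff these abstracts describe is the classical quickest-change-detection problem. A minimal CUSUM detector illustrates it (this is an analogue for scalar signals, not the authors' video segmentation method; the drift and threshold values are assumptions):

```python
def cusum_detect(samples, drift=0.5, threshold=8.0):
    """One-sided CUSUM quickest-change detector: returns the first index
    at which the cumulative positive-drift statistic crosses `threshold`,
    or None if no change is declared.  Raising the threshold lowers the
    false-alarm rate at the cost of a longer detection delay."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + x - drift)   # accumulate evidence above the drift
        if s > threshold:
            return i
    return None
```

On a signal that jumps from 0 to a positive mean at frame 50, the detector fires a few frames after the change; the gap between change point and detection index is exactly the "delay" being minimized under the false-alarm constraint.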
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
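The quoted 90°-100°C figure can be reproduced with a back-of-envelope balance solved by bisection. This is a sketch under stated assumptions: the emissivity, the sensible-heat transfer coefficient, and the use of screen air temperature for the downwelling longwave are illustrative choices, and the soil heat flux is neglected for a dry soil of very low conductivity.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temp(s_abs=1000.0, t_air=328.0, eps=0.95, h=12.0):
    """Solve a simplified surface energy balance for the skin
    temperature Ts (K): absorbed shortwave plus downwelling longwave
    (approximated with the screen air temperature) equals emitted
    longwave plus sensible heat flux.  Soil heat flux is neglected."""
    lhs = s_abs + eps * SIGMA * t_air ** 4
    lo, hi = t_air, t_air + 150.0
    for _ in range(60):                     # bisection on the balance
        ts = 0.5 * (lo + hi)
        rhs = eps * SIGMA * ts ** 4 + h * (ts - t_air)
        if rhs < lhs:
            lo = ts
        else:
            hi = ts
    return ts
```

With the default (assumed) parameters the solution lands around 100°C, consistent with the range the abstract derives for dry, darkish soils.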
Youth minimum wages and youth employment
Marimpi, Maria; Koning, Pierre
2018-01-01
This paper performs a cross-country level analysis on the impact of the level of specific youth minimum wages on the labor market performance of young individuals. We use information on the use and level of youth minimum wages, as compared to the level of adult minimum wages as well as to the median
Do Some Workers Have Minimum Wage Careers?
Carrington, William J.; Fallick, Bruce C.
2001-01-01
Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…
Does the Minimum Wage Affect Welfare Caseloads?
Page, Marianne E.; Spetz, Joanne; Millar, Jane
2005-01-01
Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…
Minimum income protection in the Netherlands
van Peijpe, T.
2009-01-01
This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its
Averaging of nonlinearity-managed pulses
International Nuclear Information System (INIS)
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-01-01
We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons
R Coronae Borealis at the 2003 light minimum
Kameswara Rao, N.; Lambert, David L.; Shetrone, Matthew D.
2006-08-01
A set of five high-resolution optical spectra of R CrB obtained in 2003 March is discussed. At the time of the first spectrum (March 8), the star was at V = 12.6, a decline of more than six magnitudes. By March 31, the date of the last observation, the star at V = 9.3 was on the recovery to maximum light (V = 6). The 2003 spectra are compared with the extensive collection of spectra from the 1995-1996 minimum presented previously. Spectroscopic features common to the two minima include the familiar ones also seen in spectra of other R Coronae Borealis stars (RCBs) in decline: sharp emission lines of neutral and singly ionized atoms, broad emission lines including HeI, [NII] 6583 Å, Na D and CaII H & K lines, and blueshifted absorption lines of Na D and KI resonance lines. Prominent differences between the 2003 and 1995-1996 spectra are seen. The broad Na D and Ca H & K lines in 2003 and 1995-1996 are centred approximately on the mean stellar velocity. The 2003 profiles are fit by a single Gaussian, but in 1995-1996 two Gaussians separated by about 200 km s⁻¹ were required. However, the HeI broad emission lines are fit by a single Gaussian at all times; the emitting He and Na-Ca atoms are probably not colocated. The C2 Phillips 2-0 lines were detected as sharp absorption lines and the C2 Swan band lines as sharp emission lines in 2003, but in 1995-1996 the Swan band emission lines were broad and the Phillips lines were undetected. The 2003 spectra show CI sharp emission lines at minimum light with a velocity changing in 5 d by about 20 km s⁻¹ when the velocity of `metal' sharp lines is unchanged; the CI emission may arise from shock-heated gas. Reexamination of spectra obtained at maximum light in 1995 shows extended blue wings to strong lines with the extension dependent on a line's lower excitation potential; this is the signature of a stellar wind, also revealed by published observations of the HeI 10830 Å line at maximum light. Changes in the cores of the
Minimum wage development in the Russian Federation
Bolsheva, Anna
2012-01-01
The aim of this paper is to analyze the effectiveness of the minimum wage policy at the national level in Russia and its impact on living standards in the country. The analysis showed that the national minimum wage in Russia does not serve its original purpose of protecting the lowest wage earners and has no substantial effect on poverty reduction. The national subsistence minimum is too low and cannot be considered an adequate criterion for the setting of the minimum wage. The minimum wage d...
Maximum vehicle cabin temperatures under different meteorological conditions
Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John
2009-05-01
A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.
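The first predictive model above combines maximum air temperature with solar radiation; a minimal one-predictor least-squares fit on hypothetical clear-day data illustrates the general approach. The data pairs and the resulting coefficients are invented for illustration, not taken from the Athens, GA dataset.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# hypothetical (max air temp degC, max cabin temp degC) pairs, clear days
data = [(20, 50), (25, 57), (30, 64), (33, 68), (35, 71)]
a, b = fit_linear([d[0] for d in data], [d[1] for d in data])
```

A full version of the paper's models would add a second predictor (daily solar radiation, or cloud cover as its surrogate) via multiple regression, but the single-predictor fit already captures the strong dependence of cabin temperature on ambient temperature.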
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected is dependent on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.
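The latching behavior of the microfuse memory can be sketched in software: each reading is quantized into one of n discrete levels and the highest level seen is retained, much as a blown fuse permanently records the corresponding driver output line. Class and parameter names are illustrative.

```python
class PeakMemory:
    """Software analogue of the peak-memorizing circuit: quantize each
    sensor reading into one of n_levels discrete levels and latch the
    highest level reached, like blowing that line's microfuse."""
    def __init__(self, full_scale=10.0, n_levels=16):
        self.full_scale = full_scale
        self.n_levels = n_levels
        self.peak_level = -1        # no fuse blown yet

    def sense(self, value):
        level = min(int(value / self.full_scale * self.n_levels),
                    self.n_levels - 1)
        self.peak_level = max(self.peak_level, level)   # latch, never reset

    def peak(self):
        """Lower edge of the highest level reached, in sensor units."""
        return self.peak_level * self.full_scale / self.n_levels
```

As in the hardware, the record is irreversible: later, smaller readings cannot lower the stored peak, and the resolution is set by the number of levels (fuse segments).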
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, LP=c5/G . Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 LP . We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
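The Planck luminosity bound quoted above is a simple combination of constants, and evaluates to roughly 3.6 × 10⁵² W:

```python
def planck_luminosity(c=2.998e8, G=6.674e-11):
    """Planck luminosity L_P = c^5 / G in watts, from SI values of the
    speed of light c (m/s) and the gravitational constant G."""
    return c ** 5 / G
```

The critical-collapse simulations mentioned in the abstract reach about 0.2 of this value, i.e. a few times 10⁵¹ W.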
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
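The regularizer at the heart of this approach is the mutual information between the classification response and the true label. A plug-in estimate from empirical label counts shows the quantity being maximized (an illustrative estimator, not the paper's differentiable entropy model used inside gradient descent):

```python
from collections import Counter
from math import log2

def mutual_information(ys, yhats):
    """Plug-in estimate of I(Y; Yhat) = H(Y) + H(Yhat) - H(Y, Yhat)
    in bits, from paired discrete labels and classifier responses."""
    def entropy(items):
        n = len(items)
        return -sum(c / n * log2(c / n) for c in Counter(items).values())
    return entropy(ys) + entropy(yhats) - entropy(list(zip(ys, yhats)))
```

A perfect classifier attains the maximum I(Y; Yhat) = H(Y), while responses independent of the labels give zero mutual information, which is why pushing this quantity up during training reduces the label uncertainty left after seeing the response.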
Scintillation counter, maximum gamma aspect
International Nuclear Information System (INIS)
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Endogenous Sheet-Averaged Tension Within a Large Epithelial Cell Colony.
Dumbali, Sandeep P; Mei, Lanju; Qian, Shizhi; Maruthamuthu, Venkat
2017-10-01
Epithelial cells form quasi-two-dimensional sheets that function as contractile media to effect tissue shape changes during development and homeostasis. Endogenously generated intrasheet tension is a driver of such changes, but has predominantly been measured in the presence of directional migration. The nature of epithelial cell-generated forces transmitted over supracellular distances, in the absence of directional migration, is thus largely unclear. In this report, we consider large epithelial cell colonies which are archetypical multicell collectives with extensive cell-cell contacts but with a symmetric (circular) boundary. Using the traction force imbalance method (TFIM) (traction force microscopy combined with physical force balance), we first show that one can determine the colony-level endogenous sheet forces exerted at the midline by one half of the colony on the other half with no prior assumptions on the uniformity of the mechanical properties of the cell sheet. Importantly, we find that this colony-level sheet force exhibits large variations with orientation-the difference between the maximum and minimum sheet force is comparable to the average sheet force itself. Furthermore, the sheet force at the colony midline is largely tensile but the shear component exhibits significantly more variation with orientation. We thus show that even an unperturbed epithelial colony with a symmetric boundary shows significant directional variation in the endogenous sheet tension and shear forces that subsist at the colony level.
Minimum Delay Moving Object Detection
Lao, Dong
2017-05-14
This thesis presents a general framework and method for detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently than the “background” motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object as soon as it moves, in an online manner. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue. However, this also leads to greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., the number of frames after the object moves, while constraining the false alarms, defined as detections declared before the object moves or with incorrect or inaccurate segmentation at the detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than the existing state of the art.
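The delay-versus-false-alarm trade-off described above is the classical quickest-detection setting. As a hedged illustration (a scalar Gaussian stand-in for the motion cue, not the thesis's segmentation method), Page's CUSUM test detects a change in mean with minimal delay for a given false-alarm constraint:

```python
def cusum_detect(samples, mu0, mu1, sigma, threshold):
    """Page's CUSUM quickest-detection test: declare a change in mean
    from mu0 to mu1 (Gaussian noise, std sigma) at the first time the
    running log-likelihood-ratio statistic exceeds `threshold`.
    Raising the threshold lowers the false-alarm rate at the price of
    a longer detection delay."""
    stat = 0.0
    for t, x in enumerate(samples):
        # log-likelihood ratio of one sample under mu1 versus mu0
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        stat = max(0.0, stat + llr)
        if stat > threshold:
            return t  # index at which the change is declared
    return None

# Mean steps from 0 to 1 at index 5; with threshold 1.0 the
# statistic needs a few post-change samples before it fires.
t_detect = cusum_detect([0.0] * 5 + [1.0] * 10, 0.0, 1.0, 1.0, 1.0)
```

Observing more samples before declaring, exactly as in the abstract, strengthens the evidence but increases the delay `t_detect - 5`.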
Maximum time-dependent space-charge limited diode currents
Energy Technology Data Exchange (ETDEWEB)
Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)
2016-01-15
Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
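For reference, the stationary Child-Langmuir limit that both definitions are compared against is straightforward to evaluate numerically (constants are CODATA values; the planar 1D electron-diode formula is J = (4/9) eps0 sqrt(2e/m) V^(3/2) / d^2):

```python
import math

EPS0 = 8.8541878128e-12        # vacuum permittivity (F/m)
E_CHARGE = 1.602176634e-19     # elementary charge (C)
M_ELECTRON = 9.1093837015e-31  # electron mass (kg)

def child_langmuir(voltage, gap):
    """Stationary Child-Langmuir limit on the current density (A/m^2)
    of a planar 1D diode with voltage drop `voltage` (V) across an
    electrode gap `gap` (m)."""
    return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * E_CHARGE / M_ELECTRON) \
        * voltage ** 1.5 / gap ** 2

# 1 kV across a 1 mm gap: roughly 7.4e4 A/m^2
j = child_langmuir(1.0e3, 1.0e-3)
```

The V^(3/2) scaling is why the choice of reference voltage (maximum versus average) matters for a time-varying drop: the adiabatic average of the limit is not the limit evaluated at the average voltage.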
Maximum entropy and Bayesian methods
International Nuclear Information System (INIS)
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come.
The average size of ordered binary subgraphs
van Leeuwen, J.; Hartel, Pieter H.
To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a
Minimum Additive Waste Stabilization (MAWS)
International Nuclear Information System (INIS)
1994-02-01
In the Minimum Additive Waste Stabilization (MAWS) concept, actual waste streams are utilized as additive resources for vitrification, which may contain the basic components (glass formers and fluxes) for making a suitable glass or glassy slag. If too much glass former is present, then the melt viscosity or temperature will be too high for processing; while if there is too much flux, then the durability may suffer. Therefore, there are optimum combinations of these two important classes of constituents depending on the criteria required. The challenge is to combine these resources in such a way that minimizes the use of non-waste additives yet yields a processable and durable final waste form for disposal. The benefit of this approach is that the volume of the final waste form is minimized (waste loading maximized) since little or no additives are used, and vitrification itself results in volume reduction through evaporation of water, combustion of organics, and compaction of the solids into a non-porous glass. This implies a significant reduction in disposal costs due to volume reduction alone, and minimizes future risks/costs due to the long-term durability and leach resistance of glass. This is accomplished by using integrated systems that are both cost-effective and produce an environmentally sound waste form for disposal. Individual component technologies may include: vitrification; thermal destruction; soil washing; gas scrubbing/filtration; and ion-exchange wastewater treatment. The particular combination of technologies will depend on the waste streams to be treated. At the heart of MAWS is vitrification technology, which incorporates all primary and secondary waste streams into a final, long-term, stabilized glass waste form. The integrated technology approach, and view of waste streams as resources, is innovative yet practical to cost-effectively treat a broad range of DOE mixed and low-level wastes.
Maximum entropy principle for transportation
International Nuclear Information System (INIS)
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j, given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle, combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation whose functional form is derived based on conditional probability and the perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
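For contrast with the paper's constraint-free dependence formulation, the standard constrained maximum-entropy trip distribution (the doubly-constrained gravity model) can be solved by iterative balancing. This sketch assumes a simple exponential travel-cost deterrence function and made-up totals:

```python
import math

def max_entropy_trips(origins, destinations, cost, beta, iters=200):
    """Doubly-constrained maximum-entropy trip distribution
    T_ij = A_i * B_j * O_i * D_j * exp(-beta * c_ij), with balancing
    factors A, B found by iterative proportional fitting so that row
    sums match origin totals and column sums match destination totals."""
    n, m = len(origins), len(destinations)
    f = [[math.exp(-beta * cost[i][j]) for j in range(m)] for i in range(n)]
    A, B = [1.0] * n, [1.0] * m
    for _ in range(iters):
        for i in range(n):
            A[i] = 1.0 / sum(B[j] * destinations[j] * f[i][j] for j in range(m))
        for j in range(m):
            B[j] = 1.0 / sum(A[i] * origins[i] * f[i][j] for i in range(n))
    return [[A[i] * B[j] * origins[i] * destinations[j] * f[i][j]
             for j in range(m)] for i in range(n)]

# Two origins, two destinations, cheaper travel on the diagonal
T = max_entropy_trips([100.0, 200.0], [150.0, 150.0],
                      [[1.0, 2.0], [2.0, 1.0]], beta=0.5)
```

The dependence formulation of the paper replaces these explicit constraints with regression-estimated coefficients, but the entropy-maximizing solution it generalizes is the one computed here.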
Latitudinal Change of Tropical Cyclone Maximum Intensity in the Western North Pacific
Choi, Jae-Won; Cha, Yumi; Kim, Hae-Dong; Kang, Sung-Dae
2016-01-01
This study obtained the latitude where tropical cyclones (TCs) show maximum intensity and applied statistical change-point analysis to the time series of annual average values. The analysis found that the latitude of TC maximum intensity increased beginning in 1999. To investigate the reason behind this phenomenon, the difference between the average latitude over 1999-2013 and the average over 1977-1998 was analyzed. In a difference of 500 hPa streamline between the two ...
Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models
DEFF Research Database (Denmark)
Rasmussen, Klaus Bolding
1994-01-01
The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model.
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
Energy Technology Data Exchange (ETDEWEB)
Lowell, A. W.; Boggs, S. E; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C. [Space Sciences Laboratory, University of California, Berkeley (United States); Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y. [Institute of Astronomy, National Tsing Hua University, Taiwan (China); Jean, P.; Ballmoos, P. von [IRAP Toulouse (France); Lin, C.-H. [Institute of Physics, Academia Sinica, Taiwan (China); Amman, M. [Lawrence Berkeley National Laboratory (United States)
2017-10-20
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
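The modulation analysis that the MLM improves on can be sketched with an unbinned moment estimator of the modulation amplitude and polarization angle. This is a simplified stand-in for both the histogram fit and the full maximum likelihood method, assuming an ideal azimuthal response p(phi) proportional to 1 + mu*cos(2*(phi - phi0)):

```python
import math
import random

def simulate_events(n, mu, phi0, rng):
    """Draw azimuthal scattering angles from the ideal modulation
    curve p(phi) = (1/2pi) * (1 + mu * cos(2*(phi - phi0))) by
    rejection sampling."""
    events = []
    while len(events) < n:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.uniform(0.0, 1.0 + mu) <= 1.0 + mu * math.cos(2.0 * (phi - phi0)):
            events.append(phi)
    return events

def estimate_modulation(events):
    """Unbinned moment estimator: E[cos 2phi] = (mu/2) cos 2phi0 and
    E[sin 2phi] = (mu/2) sin 2phi0, so mu and phi0 follow from sample
    averages without any histogram binning."""
    c = sum(math.cos(2.0 * p) for p in events) / len(events)
    s = sum(math.sin(2.0 * p) for p in events) / len(events)
    return 2.0 * math.hypot(c, s), 0.5 * math.atan2(s, c)

rng = random.Random(0)
mu_hat, phi0_hat = estimate_modulation(simulate_events(20000, 0.5, 0.3, rng))
```

A real instrument has an event-dependent response, which is exactly what the unbinned MLM of the paper accounts for to recover the quoted MDP improvement.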
Minimum emittance of three-bend achromats
International Nuclear Information System (INIS)
Li Xiaoyu; Xu Gang
2012-01-01
When the minimum emittance of three-bend achromats (TBAs) is calculated with mathematical software, the matching condition on the dispersion function in phase space can be imposed without reference to the actual magnet lattice. The minimum scaling factors of two kinds of widely used TBA lattices are obtained. The relationship between the lengths and the radii of the three dipoles in a TBA when the lattice achieves its minimum emittance is then derived, and so is the corresponding minimum scaling factor. The procedure of analysis and the results can be widely applied to achromat lattices, because the calculation is not restricted by the actual lattice. (authors)
A Pareto-Improving Minimum Wage
Eliav Danziger; Leif Danziger
2014-01-01
This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...
The minimum wage in the Czech enterprises
Eva Lajtkepová
2010-01-01
Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were then subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). ...
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of its apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10^-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10^-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4^2-) and cations (Na+, Mg^2+, Ca^2+, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4^2-/Cl- and Mg^2+/Na+, and 0.4% for Ca^2+/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3^2-. Apparent partial molar densities in seawater were
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
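Fitch's algorithm on an ordinary tree, the base case that the paper extends to networks, fits in a few lines. The sketch below assumes uniform substitution cost and an illustrative dictionary encoding of a rooted binary tree:

```python
def fitch_parsimony(tree, leaf_states, root):
    """Fitch's small-parsimony algorithm on a rooted binary tree.
    `tree` maps each internal node to its (left, right) children;
    `leaf_states` maps each leaf to its observed character. Returns
    the minimum number of substitutions explaining the leaves."""
    changes = 0

    def state_set(node):
        nonlocal changes
        if node in leaf_states:
            return {leaf_states[node]}
        left, right = tree[node]
        a, b = state_set(left), state_set(right)
        if a & b:
            return a & b
        changes += 1  # a substitution is forced on one child branch
        return a | b

    state_set(root)
    return changes

# ((l1,l2),(l3,l4)) with characters A, A, C, C: one change suffices
tree = {'root': ('x', 'y'), 'x': ('l1', 'l2'), 'y': ('l3', 'l4')}
score = fitch_parsimony(tree, {'l1': 'A', 'l2': 'A', 'l3': 'C', 'l4': 'C'}, 'root')
```

The network extension described in the abstract applies the same bottom-up set logic but must resolve conflicting assignments at reticulate vertices, which is where the heuristics come in.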
MONTHLY AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN
Directory of Open Access Journals (Sweden)
VIGH MELINDA
2014-03-01
Râul Negru hydrographic basin is a well-individualised and relatively homogeneous physical-geographical unit of the Braşov Depression. The flow is controlled by six hydrometric stations placed on the main collector and on two of the most powerful tributaries. The analysis period covers the last 25 years (1988-2012), which is sufficient to draw pertinent conclusions. The month of maximum discharge is April, which falls within the high-flow period of March-June. Minimum discharges appear in November, because of the lack of rainfall, and in January, because precipitation falls as snow and water is retained in ice. The frequencies of extreme discharges vary with position: small basin areas in the mountains versus large basin areas in the depression. Variation coefficients indicate very similar variation patterns, showing the relative homogeneity of the flow processes.
TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS
Energy Technology Data Exchange (ETDEWEB)
Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.
2007-11-12
Mixing depth is an important quantity in the determination of air pollution concentrations. Fire-weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth based on potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated daily maximum mixing depth at the SRS over an extended period of time (4.75 years), derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-03-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
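The difference between maximum-likelihood and finite-temperature maximum-entropy decoding can be reproduced exactly on an Ising model small enough to enumerate. The fields and couplings below are made-up illustrative numbers, not the annealer's actual problem instance:

```python
import math
from itertools import product

def boltzmann_bit_marginals(h, J, beta):
    """Exact Boltzmann marginals P(s_i = +1) for a small Ising model
    with energy E(s) = -sum_i h[i] s_i - sum_{i<j} J[i][j] s_i s_j,
    computed by exhaustive enumeration (feasible for a few spins only)."""
    n = len(h)
    Z = 0.0
    up = [0.0] * n
    for spins in product([-1, 1], repeat=n):
        E = -sum(h[i] * spins[i] for i in range(n))
        E -= sum(J[i][j] * spins[i] * spins[j]
                 for i in range(n) for j in range(i + 1, n))
        w = math.exp(-beta * E)
        Z += w
        for i in range(n):
            if spins[i] == 1:
                up[i] += w
    return [p / Z for p in up]

# Made-up 3-spin instance: maximum-entropy decoding thresholds each
# marginal at 1/2, which here disagrees with the maximum-likelihood
# (ground-state) answer (+1, +1, +1) on the middle spin.
h = [0.4, -0.2, 0.1]
J = [[0.0, 0.3, 0.0], [0.0, 0.0, 0.3], [0.0, 0.0, 0.0]]
marginals = boltzmann_bit_marginals(h, J, beta=1.0)
decoded = [1 if p > 0.5 else -1 for p in marginals]
```

In this toy instance the excited states carry enough weight at beta = 1 to flip the middle bit, which is the mechanism the abstract credits for the improved bit-error rates.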
Direct maximum parsimony phylogeny reconstruction from genotype data.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2007-12-05
Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
Averaging for solitons with nonlinearity management
International Nuclear Information System (INIS)
Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.
2003-01-01
We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations
DSCOVR Magnetometer Level 2 One Minute Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data
DSCOVR Magnetometer Level 2 One Second Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data
Spacetime averaging of exotic singularity universes
International Nuclear Information System (INIS)
Dabrowski, Mariusz P.
2011-01-01
Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
NOAA Average Annual Salinity (3-Zone)
California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
Stochastic variational approach to minimum uncertainty states
Energy Technology Data Exchange (ETDEWEB)
Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)
1995-05-21
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)
30 CFR 281.30 - Minimum royalty.
2010-07-01
30 CFR 281.30 (2010-07-01): Minimum royalty. Mineral Resources; MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR; OFFSHORE LEASING OF MINERALS OTHER THAN OIL, GAS, AND SULPHUR IN THE OUTER CONTINENTAL SHELF; Financial Considerations; § 281.30 Minimum royalty...
New Minimum Wage Research: A Symposium.
Ehrenberg, Ronald G.; And Others
1992-01-01
Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…
Minimum Wage Effects in the Longer Run
Neumark, David; Nizalova, Olena
2007-01-01
Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…
Improving consensus structure by eliminating averaging artifacts
Directory of Open Access Journals (Sweden)
KC Dukka B
2009-03-01
Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary prediction [2], which
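The harmonic pseudo-energy idea can be sketched in a few dozen lines. The force constants, move size, and temperature below are illustrative assumptions, not the paper's calibrated values:

```python
import math
import random

def refine_toward_average(start, target, bond_len=3.8, k_avg=1.0,
                          k_bond=10.0, steps=20000, temp=0.1, seed=0):
    """Metropolis Monte Carlo refinement of a C-alpha chain (list of
    [x, y, z] coordinates): a harmonic pseudo-energy pulls every atom
    toward the averaged structure `target`, while a bond term keeps
    consecutive atoms near the ideal C-alpha spacing."""
    rng = random.Random(seed)
    coords = [list(p) for p in start]

    def energy():
        e = sum(k_avg * math.dist(c, t) ** 2 for c, t in zip(coords, target))
        e += sum(k_bond * (math.dist(coords[i], coords[i + 1]) - bond_len) ** 2
                 for i in range(len(coords) - 1))
        return e

    e = energy()
    for _ in range(steps):
        i = rng.randrange(len(coords))
        old = coords[i][:]
        coords[i] = [x + rng.gauss(0.0, 0.1) for x in old]
        e_new = energy()
        # Metropolis criterion: always accept downhill, sometimes uphill
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new
        else:
            coords[i] = old
    return coords

# Refine a uniformly shifted 4-residue chain back toward the average
target = [[3.8 * i, 0.0, 0.0] for i in range(4)]
start = [[x + 1.0, y - 1.0, z] for x, y, z in target]
refined = refine_toward_average(start, target)
```

Because the bond term is part of the energy rather than a post-hoc fix, the refined chain approaches the averaged coordinates while keeping physical local geometry, which is the mechanism behind the clash reduction reported above.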
Two-dimensional maximum entropy image restoration
International Nuclear Information System (INIS)
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test P log P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures
40 CFR 76.11 - Emissions averaging.
2010-07-01
40 CFR 76.11 (2010-07-01): Emissions averaging. Protection of Environment; ENVIRONMENTAL PROTECTION AGENCY (CONTINUED); AIR PROGRAMS (CONTINUED); ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM; § 76.11 Emissions averaging. (a) General...
Determinants of College Grade Point Averages
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
Electron density variations in the F2 layer maximum during solar activity cycle
International Nuclear Information System (INIS)
Besprozvannaya, A.S.; Kozina, P.E. (AN Kazakhskoj SSR, Alma-Ata, Sektor Ionosfery)
1988-01-01
The value R, characterizing for the F2 layer the ratio of hourly median values at solar activity maximum to those at minimum, is calculated from monthly average F2-layer critical frequencies for June, October and December of 1958 and 1964. Latitudinal-temporal distributions of R are plotted for different seasons from the data of western and eastern northern-hemisphere stations located within the latitude interval Φ' = 35-70 deg. The following peculiarities of the relation between F2-layer ionization and solar activity are noted. Among the daytime hours, winter ones are characterized by an increase of the gain rate with increasing latitude, while summer ones show the opposite regularity. In nighttime hours, R takes anomalously low values (∼1.2) at latitudes south of the ionospheric trough and poleward of it. For all three seasons, the periods of maximum ionization gain rate occur at night in summer and in the hours after sunset in the winter and equinoctial months. A quantitative explanation of these peculiarities in terms of present-day concepts of F2-layer formation mechanisms is given.
A Novel Maximum Entropy Markov Model for Human Facial Expression Recognition.
Directory of Open Access Journals (Sweden)
Muhammad Hameed Siddiqi
Research in video-based facial expression recognition (FER) systems has exploded in the past decade. However, most of the previous methods work well when they are trained and tested on the same dataset. Illumination settings, image resolution, camera angle, and physical characteristics of the people differ from one dataset to another. Considering a single dataset keeps the variance, which results from these differences, to a minimum. Having a robust FER system, which can work across several datasets, is thus highly desirable. The aim of this work is to design, implement, and validate such a system using different datasets. In this regard, the major contribution is made at the recognition module, which uses the maximum entropy Markov model (MEMM) for expression recognition. In this model, the states of the human expressions are modeled as the states of an MEMM, by considering the video-sensor observations as the observations of the MEMM. A modified Viterbi algorithm is utilized to generate the most probable expression state sequence based on such observations. Lastly, an algorithm is designed which predicts the expression state from the generated state sequence. Performance is compared against several existing state-of-the-art FER systems on six publicly available datasets. A weighted average accuracy of 97% is achieved across all datasets.
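The decoding step, generating the most probable expression-state sequence from an MEMM's per-step conditionals, is standard Viterbi. This sketch assumes a toy two-state conditional rather than the paper's trained model:

```python
import math

def viterbi(states, log_cond, observations):
    """Viterbi decoding for an MEMM-style chain model: log_cond(prev,
    cur, obs) returns the log conditional probability of state `cur`
    given the previous state (None at the first step) and the current
    observation. Returns the most probable state sequence."""
    best = {s: log_cond(None, s, observations[0]) for s in states}
    back = []
    for obs in observations[1:]:
        ptr, nxt = {}, {}
        for cur in states:
            score, arg = max((best[p] + log_cond(p, cur, obs), p) for p in states)
            nxt[cur], ptr[cur] = score, arg
        best = nxt
        back.append(ptr)
    state = max(best, key=best.get)  # best final state
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

def toy_log_cond(prev, cur, obs):
    # Hypothetical conditional: the observation favours the matching
    # state; transitions mildly favour staying in the same state.
    p = 0.8 if cur == obs else 0.2
    if prev is not None:
        p *= 0.7 if cur == prev else 0.3
    return math.log(p)

seq = viterbi(['neutral', 'happy'], toy_log_cond,
              ['neutral', 'happy', 'happy'])
```

In the paper's system the conditionals come from the trained maximum-entropy model over video features, and a modified Viterbi plays the role of the plain version above.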
LDPC Codes with Minimum Distance Proportional to Block Size
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors.
12 CFR 702.105 - Weighted-average life of investments.
2010-01-01
... investment funds. (1) For investments in registered investment companies (e.g., mutual funds) and collective investment funds, the weighted-average life is defined as the maximum weighted-average life disclosed, directly or indirectly, in the prospectus or trust instrument; (2) For investments in money market funds...
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-prediction filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
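The Levinson step mentioned above can be sketched as a generic Levinson-Durbin recursion on an autocorrelation sequence; the variable names and interface below are our own illustration, not the authors' code.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for the prediction-error filter
    from an autocorrelation sequence r = (r[0], r[1], ...).

    Returns the filter coefficients a (with a[0] = 1) and the final
    prediction-error power.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = float(r[0])
    for m in range(1, order + 1):
        # Reflection coefficient; |k| < 1 for a valid autocorrelation,
        # which is what keeps the recursion (and the deconvolution) stable.
        k = -np.dot(a[:m], r[m:0:-1]) / err
        a_prev = a.copy()
        for i in range(1, m + 1):
            a[i] = a_prev[i] + k * a_prev[m - i]
        err *= (1.0 - k * k)
    return a, err
```

For an AR(1)-like autocorrelation r = (1, 0.5, 0.25) the recursion recovers the single coefficient 0.5 and leaves the second-order term at zero, as expected.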
Maximum likelihood convolutional decoding (MCD) performance due to system losses
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
Kumaraswamy autoregressive moving average models for double bounded environmental data
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the double-bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.
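To make the median-plus-link structure concrete, here is a toy simulation of a KARMA-like process in which the conditional median follows a logit-AR(1) recursion. The recursion, the parameter values, and the median-based reparameterization (using the known Kumaraswamy median (1 - 2^(-1/b))^(1/a)) are illustrative assumptions, not the paper's exact specification.

```python
import math
import random

def kuma_quantile(u, a, b):
    """Inverse CDF of the Kumaraswamy(a, b) distribution on (0, 1)."""
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def shape_from_median(median, b):
    """Solve for shape a so that Kumaraswamy(a, b) has the given median."""
    return math.log(1.0 - 2.0 ** (-1.0 / b)) / math.log(median)

def simulate_karma(n, alpha=0.0, phi=0.5, b=2.0, seed=1):
    """Toy KARMA-like simulation: a logit-AR(1) recursion drives the
    conditional median, and each observation is Kumaraswamy-distributed
    around that median. Parameter names are ours, for illustration only."""
    rng = random.Random(seed)
    logit = lambda p: math.log(p / (1.0 - p))
    inv_logit = lambda x: 1.0 / (1.0 + math.exp(-x))
    y = [0.5]
    for _ in range(n - 1):
        mu = inv_logit(alpha + phi * logit(y[-1]))   # conditional median
        a = shape_from_median(mu, b)
        u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)
        y.append(kuma_quantile(u, a, b))
    return y
```

The median reparameterization is what lets a link function act on a location-type parameter, mirroring the role the median plays in the KARMA specification.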
Zonally averaged chemical-dynamical model of the lower thermosphere
International Nuclear Information System (INIS)
Kasting, J.F.; Roble, R.G.
1981-01-01
A zonally averaged numerical model of the thermosphere is used to examine the coupling between neutral composition (including N2, O2, and O), temperature, and winds at solstice for solar minimum conditions. The meridional circulation forced by solar heating results in a summer-to-winter flow, with a winter enhancement in atomic oxygen density that is a factor of about 1.8 greater than in the summer hemisphere at 160 km. The O2 and N2 variations are associated with a latitudinal gradient in total number density, which is required to achieve pressure balance in the presence of large zonal jets. Latitudinal profiles of OI (5577 Å) green-line emission intensity are calculated using both the Chapman and Barth mechanisms. The composition of the lower thermosphere is shown to be strongly influenced by circulation patterns initiated in the stratosphere and lower mesosphere, below the lower boundary used in the model
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are found by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
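The calculus described above can be mimicked numerically. The sketch below sets dP/dV = 0 for an idealized single-diode cell model I(V) = i_sc - i_0(exp(V/v_t) - 1); all diode parameter values are made-up illustrative defaults, not figures from the article.

```python
import math

def max_power_point(i_sc=5.0, i_0=1e-9, v_t=0.026, v_max=0.8):
    """Locate the maximum-power voltage of an idealized solar cell by
    finding the root of dP/dV, where P(V) = V * I(V)."""
    current = lambda v: i_sc - i_0 * (math.exp(v / v_t) - 1.0)
    # dP/dV = I(V) + V * dI/dV; bisect on its sign change in [0, v_max].
    dpdv = lambda v: current(v) - v * (i_0 / v_t) * math.exp(v / v_t)
    lo, hi = 0.0, v_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if dpdv(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    v_mp = 0.5 * (lo + hi)
    return v_mp, current(v_mp), v_mp * current(v_mp)
```

With these parameters the open-circuit voltage is about 0.58 V and the maximum-power voltage lands a little below it, which is the qualitative behavior the differentiation argument predicts.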
A minimum-size tokamak concept for conditions near ignition
International Nuclear Information System (INIS)
Lehnert, B.
1983-01-01
Based on a combination of Alcator scaling and a recent theory on the Murakami density limit, a minimum-size tokamak concept (Minitor) is proposed. Even if this concept does not aim at alpha particle containment, it has the important goal of reaching plasma core temperatures and Lawson parameter values required for ignition, by ohmic heating alone and under macroscopically stable conditions. The minimized size, and the associated enhancement of the plasma current density, are found to favour high plasma temperatures, average densities, and beta values. The goal of this concept appears to be realizable by relatively modest technical means. (author)
Lower Bounds on the Maximum Energy Benefit of Network Coding for Wireless Multiple Unicast
Goseling, J.; Matsumoto, R.; Uyematsu, T.; Weber, J.H.
2010-01-01
We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio of the minimum energy required by routing and network coding
Computation of the bounce-average code
International Nuclear Information System (INIS)
Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.
1977-01-01
The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
Minimum emittance in TBA and MBA lattices
Xu, Gang; Peng, Yue-Mei
2015-03-01
For reaching a small emittance in a modern light source, triple bend achromat (TBA), theoretical minimum emittance (TME) and even multiple bend achromat (MBA) lattices have been considered. This paper theoretically derives the necessary condition for achieving minimum emittance in TBA and MBA lattices, in which the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. We also calculate the conditions for attaining the minimum emittance of a TBA in relation to phase advance, in some special cases, by a purely mathematical method. These results may give some directions on lattice design.
Who Benefits from a Minimum Wage Increase?
John W. Lopresti; Kevin J. Mumford
2015-01-01
This paper addresses the question of how a minimum wage increase affects the wages of low-wage workers. Most studies assume that there is a simple mechanical increase in the wage for workers earning a wage between the old and the new minimum wage, with some studies allowing for spillovers to workers with wages just above this range. Rather than assume that the wages of these workers would have remained constant, this paper estimates how a minimum wage increase impacts a low-wage worker's wage...
Wage inequality, minimum wage effects and spillovers
Stewart, Mark B.
2011-01-01
This paper investigates possible spillover effects of the UK minimum wage. The halt in the growth in inequality in the lower half of the wage distribution (as measured by the 50:10 percentile ratio) since the mid-1990s, in contrast to the continued inequality growth in the upper half of the distribution, suggests the possibility of a minimum wage effect and spillover effects on wages above the minimum. This paper analyses individual wage changes, using both a difference-in-differences estimat...
14 CFR 205.5 - Minimum coverage.
2010-01-01
... 18,000 pounds maximum payload capacity, carriers need only maintain coverage of $2,000,000 per... than 30 seats or 7,500 pounds maximum cargo payload capacity, and a maximum authorized takeoff weight... not be contingent upon the financial condition, solvency, or freedom from bankruptcy of the carrier...
Droplet squeezing through a narrow constriction: Minimum impulse and critical velocity
Zhang, Zhifeng; Drapaca, Corina; Chen, Xiaolin; Xu, Jie
2017-07-01
Models of a droplet passing through narrow constrictions have wide applications in science and engineering. In this paper, we report our findings on the minimum impulse (momentum change) of pushing a droplet through a narrow circular constriction. The existence of this minimum impulse is mathematically derived and numerically verified. The minimum impulse happens at a critical velocity when the time-averaged Young-Laplace pressure balances the total minor pressure loss in the constriction. Finally, numerical simulations are conducted to verify these concepts. These results could be relevant to problems of energy optimization and studies of chemical and biomedical systems.
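As a back-of-the-envelope illustration of the stated balance, the sketch below equates a Young-Laplace pressure jump between the constriction and drop radii with a minor-loss term 0.5·ρ·v²·K and solves for the critical velocity. The loss form and every parameter value are our assumptions, not numbers from the paper.

```python
import math

def critical_velocity(gamma=0.03, r_drop=50e-6, r_const=10e-6,
                      rho=1000.0, k_loss=1.5):
    """Velocity at which a nominal time-averaged Young-Laplace pressure
    balances a minor pressure loss 0.5 * rho * v**2 * k_loss.
    All parameter values (surface tension gamma in N/m, radii in m,
    density rho in kg/m^3, loss coefficient k_loss) are illustrative."""
    # Laplace pressure jump between constriction radius and drop radius.
    dp_laplace = 2.0 * gamma * (1.0 / r_const - 1.0 / r_drop)
    return math.sqrt(2.0 * dp_laplace / (rho * k_loss))
```

At the critical velocity the impulse needed to push the droplet through is stationary, which is the minimum the abstract describes.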
Mihaescu, Mihai; Murugappan, Shanmugam; Kalra, Maninder; Khosla, Sid; Gutmark, Ephraim
2008-07-19
Computational fluid dynamics techniques employing primarily steady Reynolds-Averaged Navier-Stokes (RANS) methodology have recently been used to characterize the transitional/turbulent flow field in human airways. The use of RANS implies that flow phenomena are averaged over time, so the flow dynamics are not captured. Further, RANS uses two-equation turbulence models that are not adequate for predicting anisotropic flows, flows with high streamline curvature, or flows where separation occurs. A more accurate approach for such flow situations, which occur in the human airway, is Large Eddy Simulation (LES). The paper considers flow modeling in a pharyngeal airway model reconstructed from cross-sectional magnetic resonance scans of a patient with obstructive sleep apnea. The airway model is characterized by a maximum narrowing at the site of the retropalatal pharynx. Two flow-modeling strategies are employed: steady RANS and the LES approach. In the RANS modeling framework both k-epsilon and k-omega turbulence models are used. The paper discusses the differences between the airflow characteristics obtained from the RANS and LES calculations. The largest discrepancies were found in the axial velocity distributions downstream of the minimum cross-sectional area. This region is characterized by flow separation and large radial velocity gradients across the developed shear layers. The largest difference in static pressure distributions on the airway walls was found between the LES and the k-epsilon data at the site of maximum narrowing in the retropalatal pharynx.
Reference respiratory waveforms by minimum jerk model analysis
Energy Technology Data Exchange (ETDEWEB)
Anetai, Yusuke, E-mail: anetai@radonc.med.osaka-u.ac.jp; Sumida, Iori; Takahashi, Yutaka; Yagi, Masashi; Mizuno, Hirokazu; Ogawa, Kazuhiko [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Yamadaoka 2-2, Suita-shi, Osaka 565-0871 (Japan); Ota, Seiichi [Department of Medical Technology, Osaka University Hospital, Yamadaoka 2-15, Suita-shi, Osaka 565-0871 (Japan)
2015-09-15
Purpose: The CyberKnife® robotic surgery system has the ability to deliver radiation to a tumor subject to respiratory movements using Synchrony® mode with less than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joint, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motions are much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimum jerk model, which mathematically derives smoothness by means of jerk (the third derivative of position with respect to time, i.e., the derivative of acceleration, which is proportional to the time rate of change of force), was introduced to model a patient-specific respiratory motion wave to provide smooth motion tracking using CyberKnife®. To verify that patient-specific minimum jerk respiratory waves were tracked smoothly by Synchrony® mode, a tracking laser projection from CyberKnife® was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion followed three wave patterns (cosine, typical free-breathing, and minimum jerk theoretical wave models) in the clinically relevant superior–inferior direction from six volunteers assessed on the same node of the same isocentric plan. Results: Tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum peak amplitude of radial tracking discrepancy compared with the waveforms modeled by the cosine and typical free-breathing models by 22% and 35%, respectively, and provided smooth tracking in the radial direction. Motion tracking constancy as indicated by radial tracking discrepancy
Average subentropy, coherence and entanglement of random mixed quantum states
Energy Technology Data Exchange (ETDEWEB)
Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)
2017-02-15
Compact expressions for the average subentropy and coherence are obtained for random mixed states generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states by invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and the distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this specific class of mixed states.
Rotational averaging of multiphoton absorption cross sections
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic
Should the average tax rate be marginalized?
Czech Academy of Sciences Publication Activity Database
Feldman, N. E.; Katuščák, Peter
-, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here, accompanied by a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Average Bandwidth Allocation Model of WFQ
Directory of Open Access Journals (Sweden)
Tomáš Balogh
2012-01-01
Full Text Available We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
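The iterative idea can be sketched as a weighted progressive-filling loop: flows whose input rate is below their weighted share are served in full, and the leftover capacity is re-shared by weight among the remaining flows. This is our simplified reading of weighted fair sharing, not the paper's exact model.

```python
def wfq_average_bandwidth(link_rate, weights, input_rates):
    """Average bandwidth per flow under WFQ-like weighted fair sharing.

    Each flow is capped by its own input rate; unused capacity is
    redistributed by weight until the link is exhausted."""
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))
    capacity = float(link_rate)
    while active and capacity > 1e-12:
        total_w = sum(weights[i] for i in active)
        satisfied = set()
        for i in active:
            share = capacity * weights[i] / total_w
            if input_rates[i] - alloc[i] <= share:
                satisfied.add(i)
        if not satisfied:
            # Every remaining flow is bottlenecked by the link: split by weight.
            for i in active:
                alloc[i] += capacity * weights[i] / total_w
            break
        # Fully serve rate-limited flows, then recycle their leftover share.
        for i in satisfied:
            capacity -= input_rates[i] - alloc[i]
            alloc[i] = input_rates[i]
            active.remove(i)
    return alloc
```

For example, two equal-weight flows with input rates 2 and 100 Mbit/s on a 10 Mbit/s link end up with 2 and 8 Mbit/s respectively.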
Nonequilibrium statistical averages and thermo field dynamics
International Nuclear Information System (INIS)
Marinaro, A.; Scarpetta, Q.
1984-01-01
An extension of thermo field dynamics is proposed which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
Impact of the Minimum Wage on Compression.
Wolfe, Michael N.; Candland, Charles W.
1979-01-01
Assesses the impact of increases in the minimum wage on salary schedules, provides guidelines for creating a philosophy to deal with the impact, and outlines options and presents recommendations. (IRT)
Quantitative Research on the Minimum Wage
Goldfarb, Robert S.
1975-01-01
The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)
Determining minimum lubrication film for machine parts
Hamrock, B. J.; Dowson, D.
1978-01-01
Formula predicts minimum film thickness required for fully-flooded ball bearings, gears, and cams. Formula is result of study to determine complete theoretical solution of isothermal elasto-hydrodynamic lubrication of fully-flooded elliptical contacts.
Long Term Care Minimum Data Set (MDS)
U.S. Department of Health & Human Services — The Long-Term Care Minimum Data Set (MDS) is a standardized, primary screening and assessment tool of health status that forms the foundation of the comprehensive...
The SME gauge sector with minimum length
Energy Technology Data Exchange (ETDEWEB)
Belich, H.; Louzada, H.L.C. [Universidade Federal do Espirito Santo, Departamento de Fisica e Quimica, Vitoria, ES (Brazil)
2017-12-15
We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory. (orig.)
Talsma, D.
2008-01-01
The auto-adaptive averaging procedure proposed here classifies artifacts in event-related potential data by optimizing the signal-to-noise ratio. This method rank orders single trials according to the impact of each trial on the ERP average. Then, the minimum residual background noise level in the
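A minimal sketch of the rank-and-select idea, assuming RMS deviation from the grand average as the ranking criterion and std/sqrt(k) as the residual-noise estimate (the abstract specifies neither, so both are our illustrative choices):

```python
import numpy as np

def adaptive_erp_average(trials):
    """Rank single trials by how much each deviates from the grand average,
    then keep the prefix of trials that minimizes an estimate of the
    residual background noise of the ERP average.

    trials: (n_trials, n_samples) array."""
    trials = np.asarray(trials, dtype=float)
    grand = trials.mean(axis=0)
    # Trials closest to the grand average are assumed least artifactual.
    impact = np.sqrt(((trials - grand) ** 2).mean(axis=1))
    order = np.argsort(impact)
    best_noise, best_k = np.inf, 1
    for k in range(2, len(trials) + 1):
        subset = trials[order[:k]]
        # Residual noise of an average of k trials scales as std / sqrt(k).
        noise = subset.std(axis=0, ddof=1).mean() / np.sqrt(k)
        if noise < best_noise:
            best_noise, best_k = noise, k
    return trials[order[:best_k]].mean(axis=0), best_k
```

With this trade-off, adding a clean trial lowers the noise estimate while adding an artifact-laden trial raises it, so the selected prefix excludes the artifacts.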
Improved averaging for non-null interferometry
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
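One way the described rejection and pruning could look in code, with NaN marking voids and unwrapping failures; the thresholds are our illustrative defaults, not the authors' run-time parameters, and the alignment-drift removal is omitted for brevity.

```python
import numpy as np

def robust_phase_average(maps, max_void_frac=0.2, z_thresh=3.0):
    """Average a stack of phase maps, rejecting maps with too many invalid
    (NaN) pixels and pruning per-pixel outliers before computing the
    pixelwise mean and standard deviation.

    maps: (n, h, w) array with NaN marking voids/unwrapping failures."""
    maps = np.asarray(maps, dtype=float)
    # Reject whole maps dominated by voids or large-area artifacts.
    void_frac = np.isnan(maps).mean(axis=(1, 2))
    maps = maps[void_frac <= max_void_frac]
    # Per-pixel outlier pruning against a provisional average.
    mean0 = np.nanmean(maps, axis=0)
    std0 = np.nanstd(maps, axis=0)
    std0 = np.where(std0 > 0, std0, np.inf)  # avoid 0-division on flat pixels
    z = np.abs(maps - mean0) / std0
    pruned = np.where(z > z_thresh, np.nan, maps)
    return np.nanmean(pruned, axis=0), np.nanstd(pruned, axis=0)
```

A single spoiled map or an isolated spike then degrades only its own pixels instead of biasing the whole average and its variability estimate.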
Energy Technology Data Exchange (ETDEWEB)
Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)
2011-04-07
The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, U-bar{sub P}, the average, U-bar, the effective, U{sub eff}, or the maximum peak, U{sub P}, tube voltage. This work proposes a method for determination of the PPV from measurements with a kV-meter that measures the average, U-bar, or the average peak, U-bar{sub p}, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak k{sub PPV,kVp} and the average k{sub PPV,Uav} conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated - according to the proposed method - PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from U-bar{sub p} and U-bar measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
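The two-step conversion the method relies on (instrument calibration coefficient, then tube-voltage/ripple-dependent conversion factor) reduces to a one-line computation. The sketch below uses placeholder coefficient values; the actual k{sub PPV,Uav} and k{sub PPV,kVp} values come from the paper's regression equations and the meter's calibration certificate:

```python
def ppv_from_kvmeter(reading_kv, calib_coeff, k_conv):
    """Convert an average or average-peak kV-meter reading to PPV:
    first apply the calibration coefficient to correct the instrument
    reading, then apply the conversion factor for the given tube
    voltage and ripple (both values are placeholders here)."""
    true_reading = calib_coeff * reading_kv   # corrected meter reading
    return k_conv * true_reading              # PPV = k_PPV,Uav * U-bar

# e.g. an 80 kV average reading, calibration 1.01, conversion 0.98
ppv = ppv_from_kvmeter(80.0, 1.01, 0.98)
```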
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes’ maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for the samples, the whole length of CC used in the design of an SFCL can be determined.
Load averaging system for co-generation plant; Jikayo hatsuden setsubi ni okeru fuka heijunka system
Energy Technology Data Exchange (ETDEWEB)
Ueno, Y. [Fuji Electric Co. Ltd., Tokyo (Japan)
1995-07-30
MAZDA Motor Corp. planned the construction of a 20.5 MW co-generation plant in 1991 in response to an increase in power demand due to expansion of the Hofu factory. On introduction of this co-generation plant, it was decided that the basic system would adopt the following. (1) A circulating fluidized bed boiler which can be operated by burning multiple kinds of fuels with minimum environmental pollution. (2) A heat accumulation system which can be operated through reception of constant power from the electric power company despite sudden and wide-ranging changes in power demand. (3) A circulating-water heat exchange recovery system which recovers exhaust heat of the turbine plant as hot water to be utilized for heating and air-conditioning of the factory, mainly in winter. Power demand in MAZDA's Hofu factory changes by 15% per minute within a maximum range from 20 MW to 8 MW. This change is difficult to follow even for an oil burning boiler with excellent load follow-up. The circulating fluidized bed boiler employed this time has lower follow-up performance than an oil boiler. For the newly planned plant, however, a load averaging system named the heat accumulation system, capable of responding fully to the above change, has been developed. This co-generation plant passed the official inspection before commercial operation according to the Ministerial Ordinance in 1993. Since then, with regard to rapid load following, which was one of the initial targets, operation has been performed steadily. This paper introduces an outline of the system and operation conditions. 10 refs.
Seasonal regional forecast of the minimum sea ice extent in the Laptev Sea
Tremblay, B.; Brunette, C.; Newton, R.
2017-12-01
A late winter anomaly of sea ice export from the peripheral seas of the Arctic Ocean was found to be a useful predictor for the minimum sea ice extent (SIE) in the Arctic Ocean (Williams et al., 2017). In the following, we present a proof of concept for a regional seasonal forecast of the minimum SIE for the Laptev Sea based on late winter coastal divergence quantified using a Lagrangian Ice Tracking System (LITS) forced with satellite-derived sea-ice drifts from the Polar Pathfinder. Following Nikolaeva and Sesterikov (1970), we track an imaginary line just offshore of coastal polynyas in the Laptev Sea from December of the previous year to May 1 of the following year using LITS. Results show that coastal divergence in the Laptev Sea between February 1 and May 1 is best correlated (r = -0.61) with the following September minimum SIE, in accord with previous results from Krumpen et al. (2013, for the Laptev Sea) and Williams et al. (2017, for the pan-Arctic). This gives a maximum seasonal predictability of Laptev Sea minimum SIE anomalies from observations of approximately 40%. Coastal ice divergence leads to the formation of thinner ice that melts earlier in early summer, hence creating areas of open water that have a lower albedo and trigger an ice-albedo feedback. In the Laptev Sea, we find that anomalies of coastal divergence in late winter are amplified threefold in the September SIE. We also find a correlation coefficient r = 0.49 between February-March-April (FMA) anomalies of coastal divergence and the FMA-averaged AO index. Interestingly, the correlation is stronger, r = 0.61, when comparing the FMA coastal divergence anomalies to the DJFMA-averaged AO index. It is hypothesized that the AO index at the beginning of the winter (and the associated anomalous sea ice export) also contains information that impacts the magnitude of coastal divergence opening later in the winter. Our approach differs from previous approaches (e.g. Krumpen et al and Williams et al
Asynchronous Gossip for Averaging and Spectral Ranking
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
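The classical pairwise gossip scheme that the first variant builds on can be illustrated in a few lines (a toy version with a fixed edge list and uniform random edge activation; the paper's asynchronous stochastic-approximation setting, with its convergence caveats, is more general):

```python
import random

def gossip_average(values, edges, steps=10000, seed=0):
    """Classical pairwise gossip: at each step a random edge (i, j)
    activates and both endpoints replace their values by the midpoint.
    The sum is conserved at every step, so on a connected graph the
    values converge to the global average."""
    x = list(map(float, values))
    rng = random.Random(seed)
    for _ in range(steps):
        i, j = rng.choice(edges)
        m = (x[i] + x[j]) / 2.0
        x[i] = x[j] = m
    return x

# ring of 4 nodes; true average is 6.0
vals = gossip_average([0.0, 4.0, 8.0, 12.0],
                      [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The fundamental difficulty the paper highlights is precisely that asynchronous variants of this scheme need not converge to the desired average without further correction.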
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
An approach to averaging digitized plantagram curves.
Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B
1994-07-01
The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
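A simplified single-centre version of the equiangular-ray resampling might look like the following (the paper's full method uses an alignment axis, two ray centres, and two overlapping arcs; collapsing it to one centre is an illustrative simplification):

```python
import numpy as np

def radial_profile(points, centre, n_rays=72):
    """Resample a digitized closed outline as radial distances along
    equiangular rays from a centre point, so outlines of different
    feet can be averaged ray by ray."""
    pts = np.asarray(points, dtype=float) - np.asarray(centre, dtype=float)
    angles = np.arctan2(pts[:, 1], pts[:, 0]) % (2 * np.pi)
    radii = np.hypot(pts[:, 0], pts[:, 1])
    order = np.argsort(angles)
    ray_angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
    # Interpolate the radius at each ray angle, wrapping around 2*pi.
    return np.interp(ray_angles, angles[order], radii[order],
                     period=2 * np.pi)

# Averaging within a foot-length group is then a plain mean of profiles:
# mean_profile = np.mean([radial_profile(p, c) for p in outlines], axis=0)
```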
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
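The trailing moving average used for the economic misery index is straightforward to compute; a minimal sketch with made-up numbers (not the paper's data), using the same "previous decade" lag structure:

```python
def trailing_moving_average(series, window):
    """Trailing moving average: entry for year t is the mean of the
    previous `window` values (years t-window .. t-1), mirroring the
    'previous decade' lag used in the paper (window=11 gave the
    best fit there)."""
    return [sum(series[t - window:t]) / window
            for t in range(window, len(series) + 1)]

# misery = inflation + unemployment, year by year (illustrative values)
misery = [3, 5, 4, 6, 8, 7, 5, 4, 6, 9, 10, 8]
ma3 = trailing_moving_average(misery, 3)  # first entry averages years 0-2
```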
Exploiting scale dependence in cosmological averaging
International Nuclear Information System (INIS)
Mattsson, Teppo; Ronkainen, Maria
2008-01-01
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity-induced illusion.
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Aperture averaging in strong oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
A receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor in strong oceanic turbulence is also presented.
Kandaswamy, Krishna Kumar Umar; Ganesan, Pugalenthi; Kalies, Kai Uwe; Hartmann, Enno; Martinetz, Thomas M.
2013-01-01
The extracellular matrix (ECM) is a major component of tissues of multicellular organisms. It consists of secreted macromolecules, mainly polysaccharides and glycoproteins. Malfunctions of ECM proteins lead to severe disorders such as Marfan syndrome, osteogenesis imperfecta, numerous chondrodysplasias, and skin diseases. In this work, we report a random forest approach, EcmPred, for the prediction of ECM proteins from protein sequences. EcmPred was trained on a dataset containing 300 ECM and 300 non-ECM proteins and tested on a dataset containing 145 ECM and 4187 non-ECM proteins. EcmPred achieved 83% accuracy on the training and 77% on the test dataset. EcmPred predicted 15 out of 20 experimentally verified ECM proteins. By scanning the entire human proteome, we predicted novel ECM proteins validated with gene ontology and InterPro. The dataset and standalone version of the EcmPred software is available at http://www.inb.uni-luebeck.de/tools-demos/Extracellular_matrix_proteins/EcmPred. © 2012 Elsevier Ltd.
2015-12-15
propagating, planetary-scale waves (wavenumber 1 and wavenumber 2) in the lower thermosphere that are associated with different stratospheric conditions. To...prominent meridional propagation of wave activity from the mid-latitudes toward the tropics. In combination with strong eastward meridional wind shear, our...Neutral and Ionized Atmosphere, Whole Atmosphere Model, and WACCM-X. The comparison focuses on the zonal mean, planetary wave, and tidal variability in
Differential rotation of the Sun and the Maunder minimum of solar activity
International Nuclear Information System (INIS)
Ikhsanov, R.N.; Vitinskij, Yu.I.
1980-01-01
The nature of the differential rotation of the Sun is discussed. Long-term changes in differential rotation are investigated separately for two phases of the 11 year cycle of solar activity. Daily heliographic coordinates of all sunspot groups for the years preceding the epoch of the minimum of the 11 year cycle, and of sunspot groups for the years of maximum, taken from ''Greenwich Photoheliographic Results'' for 1875-1954, are used as initial material. It is shown that the differential rotation of the Sun changes in time from one 11 year cycle of solar activity to another. This change is connected with the strength of the 11 year cycle. During the maximum phase of the 11 year cycle, the differentiality of the rotation is greater in cycles with a higher maximum. Before the minimum of the 11 year cycle, the differentiality of the rotation is lower in cycles for which the activity maximum is higher in the next 11 year cycle. The equatorial rotation rate of the Sun increases with decreasing cycle strength when the maximum Wolf number is less than 110. The mentioned regularities held both during the Maunder minimum and before its beginning [ru]
Minimum number of transfer units and reboiler duty for multicomponent distillation columns
International Nuclear Information System (INIS)
Pleşu, Valentin; Bonet Ruiz, Alexandra Elena; Bonet, Jordi; Llorens, Joan; Iancu, Petrica
2013-01-01
Some guidelines to evaluate distillation columns, considering only basic thermodynamic data and principles, are provided in this paper. The method allows a first insight into the problem by simple calculations, without requiring column variables, to ensure rational use of energy and low environmental impact. The separation system is approached in two complementary ways: minimum and infinite reflux flow rate. The minimum reflux provides the minimum energy requirements, and the infinite reflux provides the feasibility conditions. The difficulty of separation can be expressed in terms of the number of transfer units (NTU). The applicability of the method is not mathematically limited by the number of components in the mixture. It is also applicable to reactive distillation. Several mixtures, including reactive distillation, are rigorously simulated as illustrative examples to verify the applicability of the approach. The separation of the mixtures, performed by distillation columns, is feasible if a minimum NTU can be calculated between the distillate and bottom products. Once the feasibility of the separation is verified, the maximum thermal efficiency depends only on the boiling points of the bottom and distillate streams. The minimum energy requirement corresponding to the reboiler can be calculated from the maximum thermal efficiency and the variation of entropy and enthalpy of mixing between distillate and bottom streams. -- Highlights: • Feasibility analysis complemented with difficulty of separation parameters • Minimum and infinite reflux simplified models for distillation columns • Minimum number of transfer units (NTU) for packed columns at early design stages • Calculation of minimum energy distillation requirements at early design stages • Thermodynamic cycle approach and efficiency for distillation columns
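If the maximum thermal efficiency is taken in a Carnot-like form between the bottoms (reboiler) and distillate (condenser) boiling points, the computation is a one-liner. This is an illustrative reading of the statement that the efficiency depends only on the two product boiling points, not the paper's exact expression:

```python
def max_thermal_efficiency(t_distillate_k, t_bottoms_k):
    """Carnot-style upper bound on the thermal efficiency of a
    distillation column viewed as a heat engine operating between
    the reboiler (hot, bottoms boiling point) and the condenser
    (cold, distillate boiling point), temperatures in kelvin."""
    return 1.0 - t_distillate_k / t_bottoms_k

# e.g. an ethanol/water-like split: distillate boils at 351.5 K,
# bottoms at 373.15 K (illustrative numbers)
eta = max_thermal_efficiency(351.5, 373.15)
```

The minimum reboiler duty would then follow by dividing the minimum separation work by this efficiency.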
Regional averaging and scaling in relativistic cosmology
International Nuclear Information System (INIS)
Buchert, Thomas; Carfora, Mauro
2002-01-01
Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain B-bar obey Ω-bar{sub m}{sup B-bar} + Ω-bar{sub R}{sup B-bar} + Ω-bar{sub Λ}{sup B-bar} + Ω-bar{sub Q}{sup B-bar} = 1, where Ω-bar{sub m}{sup B-bar}, Ω-bar{sub R}{sup B-bar} and Ω-bar{sub Λ}{sup B-bar} correspond to the standard Friedmannian parameters, while Ω-bar{sub Q}{sup B-bar} is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Average-case analysis of numerical problems
2000-01-01
The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.
Grassmann Averages for Scalable Robust PCA
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
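The element-wise trimmed averaging that makes TGA robust to pixel outliers can be sketched as follows. This is a plain per-coordinate trimmed mean, not the full Trimmed Grassmann Average algorithm, which applies this robust average within a subspace-estimation loop:

```python
import numpy as np

def trimmed_elementwise_average(X, trim_frac=0.1):
    """Per-coordinate trimmed mean over observations (rows of X):
    for each pixel/coordinate, drop the smallest and largest
    trim_frac fraction of values before averaging, making the
    result robust to a minority of outlying observations."""
    X = np.sort(np.asarray(X, dtype=float), axis=0)
    n = X.shape[0]
    k = int(n * trim_frac)
    return X[k:n - k].mean(axis=0)

data = np.array([[1.0, 2.0],
                 [1.0, 2.0],
                 [1.0, 2.0],
                 [1.0, 2.0],
                 [100.0, -50.0]])   # one corrupted observation
robust_mean = trimmed_elementwise_average(data, trim_frac=0.2)
```

An ordinary mean of `data` would be dragged far from (1, 2) by the corrupted row; the trimmed version discards it coordinate-wise.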
Revealing the Maximum Strength in Nanotwinned Copper
DEFF Research Database (Denmark)
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Modelling maximum canopy conductance and transpiration in ...
African Journals Online (AJOL)
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
Application of the maximum entropy production principle to electrical systems
International Nuclear Information System (INIS)
Christen, Thomas
2006-01-01
For a simple class of electrical systems, the principle of the maximum entropy production rate (MaxEP) is discussed. First, we compare the MaxEP principle and the principle of the minimum entropy production rate and illustrate the superiority of the MaxEP principle for the example of two parallel constant resistors. Secondly, we show that the Steenbeck principle for the electric arc as well as the ohmic contact behaviour of space-charge limited conductors follow from the MaxEP principle. In line with work by Dewar, the investigations seem to suggest that the MaxEP principle can also be applied to systems far from equilibrium, provided appropriate information is available that enters the constraints of the optimization problem. Finally, we apply the MaxEP principle to a mesoscopic system and show that the universal conductance quantum, e{sup 2}/h, of a one-dimensional ballistic conductor can be estimated.
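As a side note on the conductance quantum quoted above, e{sup 2}/h is easy to evaluate numerically from the exact SI defining constants:

```python
# Numerical value of the conductance quantum G0 = e^2/h mentioned in
# the abstract, from the exact SI values of e and h.
e = 1.602176634e-19   # elementary charge, C (exact since 2019)
h = 6.62607015e-34    # Planck constant, J s (exact since 2019)
G0 = e**2 / h         # per-spin-channel conductance quantum, siemens
R0 = 1.0 / G0         # corresponding resistance (von Klitzing constant), ohms
```

This gives G0 ≈ 3.874e-5 S, i.e. R0 ≈ 25.8 kΩ.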
Direct maximum parsimony phylogeny reconstruction from genotype data
Directory of Open Access Journals (Sweden)
Ravi R
2007-12-01
Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. More commonly, data are available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
The minimum wage in the Czech enterprises
Directory of Open Access Journals (Sweden)
Eva Lajtkepová
2010-01-01
Full Text Available Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were then subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). The aim of this article is to present selected results of two studies of the acceptance of the statutory minimum wage by Czech enterprises. The first study makes use of data collected by questionnaire research in 83 small and medium-sized enterprises in the South Moravia Region in 2005, the second of data from 116 enterprises in the entire Czech Republic (in 2007). The data have been processed by means of standard methods of descriptive statistics and appropriate methods of statistical analysis (Spearman rank correlation coefficient, Kendall coefficient, χ2 independence test, Kruskal-Wallis test, and others).
Model averaging, optimal inference and habit formation
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2014-06-01
Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under a particular assumed model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
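Evidence-weighted model averaging as described here reduces to a softmax over log evidences (a minimal numerical sketch; the neuronal implementation discussed in the paper is of course far richer):

```python
import math

def model_average(predictions, log_evidences):
    """Bayesian model averaging: weight each model's prediction by its
    normalised evidence, p(m|data) ∝ exp(log evidence). The max is
    subtracted before exponentiating for numerical stability."""
    m = max(log_evidences)
    w = [math.exp(le - m) for le in log_evidences]
    z = sum(w)
    w = [wi / z for wi in w]
    return sum(wi * p for wi, p in zip(w, predictions))

# Two models with equal evidence: the averaged prediction is the midpoint.
pred = model_average([1.0, 3.0], [0.0, 0.0])
```

Because the weights depend on evidence (accuracy penalised by complexity), a simpler model with slightly worse fit can still dominate the average.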
Generalized Jackknife Estimators of Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...
Average beta measurement in EXTRAP T1
International Nuclear Information System (INIS)
Hedin, E.R.
1988-12-01
Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS
International Nuclear Information System (INIS)
2005-01-01
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Bayesian Averaging is Well-Temperated
DEFF Research Database (Denmark)
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation is l...
Gibbs equilibrium averages and Bogolyubov measure
International Nuclear Information System (INIS)
Sankovich, D.P.
2011-01-01
Application of functional integration methods in the equilibrium statistical mechanics of quantum Bose systems is considered. We show that Gibbs equilibrium averages of Bose operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure.
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Function reconstruction from noisy local averages
International Nuclear Information System (INIS)
Chen Yu; Huang Jianguo; Han Weimin
2008-01-01
A regularization method is proposed for function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.
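The paper's specific regularization scheme is not reproduced here; the following sketch illustrates the general idea with a plain Tikhonov-regularized least-squares reconstruction from noisy window averages (the grid size, window width, noise level and regularization parameter are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 20                        # grid points, number of local averages
x_true = np.sin(np.linspace(0, np.pi, n))

# A: each row averages one disjoint window of 5 neighbouring grid points
A = np.zeros((m, n))
for i in range(m):
    A[i, 5*i:5*i + 5] = 1.0 / 5.0
b = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy local averages

# Tikhonov regularization: minimize ||Ax - b||^2 + lam * ||x||^2
lam = 1e-3
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

The regularization term makes the otherwise underdetermined system (20 data, 100 unknowns) uniquely solvable; the choice of `lam` plays the role of the parameter-selection strategies mentioned in the abstract.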
A singularity theorem based on spatial averages
Indian Academy of Sciences (India)
Journal of Physics, July 2007, pp. 31–47. (Only fragments of the abstract survive: the paper presents a result which confirms – at least partially – a singularity theorem based on spatial averages, including a detailed analysis of how the model fits in with the statement that the spatial average ...; financial support under grant FIS2004-01626 is acknowledged.)
Multiphase averaging of periodic soliton equations
International Nuclear Information System (INIS)
Forest, M.G.
1979-01-01
The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models in which some agents employ technical trading rules of this type
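A minimal sketch of one such technical trading rule, a simple moving-average crossover (the window lengths and price series are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def ma_rule(prices, short=5, long=20):
    """Simple moving-average trading rule: long (+1) when the short MA
    is above the long MA, otherwise out of the market (0)."""
    p = np.asarray(prices, dtype=float)
    def sma(x, w):
        # simple moving average via convolution; 'valid' drops the warm-up
        return np.convolve(x, np.ones(w) / w, mode="valid")
    s, l = sma(p, short), sma(p, long)
    s = s[len(s) - len(l):]            # align both series on the same end dates
    return np.where(s > l, 1, 0)

prices = np.cumsum(np.ones(60) * 0.5) + 100.0   # steadily rising price series
signal = ma_rule(prices)                         # stays long on a clear uptrend
```

In heterogeneous-agent models of the kind the abstract refers to, a fraction of traders would submit demand based on such a signal while others trade on fundamentals.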
Essays on model averaging and political economics
Wang, W.
2013-01-01
This thesis first investigates various issues related to model averaging, and then evaluates two policies, namely the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple
2010-01-01
On average. 7 CFR § 1209.12 (2010). Agriculture Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements...), Mushroom Promotion, Research, and Consumer Information Order — Definitions § 1209...
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Average Costs versus Net Present Value
E.A. van der Laan (Erwin); R.H. Teunter (Ruud)
2000-01-01
While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well-known EOQ model it can be verified that (under certain conditions) the AC approach gives
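For the EOQ model mentioned above, the average-cost approach can be sketched as follows (these are the standard textbook formulas; the cost and demand numbers are illustrative):

```python
import math

def eoq(K, D, h):
    """Classical EOQ: the order quantity minimizing the average cost
    AC(Q) = K*D/Q + h*Q/2 (ordering cost plus holding cost per unit time)."""
    return math.sqrt(2.0 * K * D / h)

def avg_cost(Q, K, D, h):
    return K * D / Q + h * Q / 2.0

K, D, h = 100.0, 1200.0, 2.0        # setup cost, demand rate, holding cost rate
Q_star = eoq(K, D, h)               # optimal lot size, sqrt(2KD/h)
```

At the optimum the two cost components are equal and AC(Q*) = sqrt(2KDh); the NPV approach discounts the same cash flows instead of averaging them, which is the distinction the abstract draws.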
Average beta-beating from random errors
Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department
2018-01-01
The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.
Reliability Estimates for Undergraduate Grade Point Average
Westrick, Paul A.
2017-01-01
Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2π phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the
Risk control and the minimum significant risk
International Nuclear Information System (INIS)
Seiler, F.A.; Alvarez, J.L.
1996-01-01
Risk management implies that the risk manager can, by his actions, exercise at least a modicum of control over the risk in question. In the terminology of control theory, a management action is a control signal imposed as feedback on the system to bring about a desired change in the state of the system. In the terminology of risk management, an action is taken to bring a predicted risk to lower values. Even if it is assumed that the management action taken is 100% effective and that the projected risk reduction is infinitely well known, there is a lower limit to the desired effects that can be achieved. It is based on the fact that all risks, such as the incidence of cancer, exhibit a degree of variability due to a number of extraneous factors such as age at exposure, sex, location, and some lifestyle parameters such as smoking or the consumption of alcohol. If the control signal is much smaller than the variability of the risk, the signal is lost in the noise and control is lost. This defines a minimum controllable risk based on the variability of the risk over the population considered. This quantity is the counterpart of the minimum significant risk which is defined by the uncertainties of the risk model. Both the minimum controllable risk and the minimum significant risk are evaluated for radiation carcinogenesis and are shown to be of the same order of magnitude. For a realistic management action, the assumptions of perfectly effective action and perfect model prediction made above have to be dropped, resulting in an effective minimum controllable risk which is determined by both risk limits. Any action below that effective limit is futile, but it is also unethical due to the ethical requirement of doing more good than harm. Finally, some implications of the effective minimum controllable risk on the use of the ALARA principle and on the evaluation of remedial action goals are presented
International Nuclear Information System (INIS)
Peřinová, Vlasta; Lukš, Antonín
2015-01-01
The SU(2) group is used in two different fields of quantum optics, the quantum polarization and quantum interferometry. Quantum degrees of polarization may be based on distances of a polarization state from the set of unpolarized states. The maximum polarization is achieved in the case where the state is pure and then the distribution of the photon-number sums is optimized. In quantum interferometry, the SU(2) intelligent states have also the property that the Fisher measure of information is equal to the inverse minimum detectable phase shift on the usual simplifying condition. Previously, the optimization of the Fisher information under a constraint was studied. Now, in the framework of constraint optimization, states similar to the SU(2) intelligent states are treated. (paper)
Impact of cigarette minimum price laws on the retail price of cigarettes in the USA.
Tynan, Michael A; Ribisl, Kurt M; Loomis, Brett R
2013-05-01
Cigarette price increases prevent youth initiation, reduce cigarette consumption and increase the number of smokers who quit. Cigarette minimum price laws (MPLs), which typically require cigarette wholesalers and retailers to charge a minimum percentage mark-up for cigarette sales, have been identified as an intervention that can potentially increase cigarette prices. 24 states and the District of Columbia have cigarette MPLs. Using data extracted from SCANTRACK retail scanner data from the Nielsen company, average cigarette prices were calculated for designated market areas in states with and without MPLs in three retail channels: grocery stores, drug stores and convenience stores. Regression models were estimated using the average cigarette pack price in each designated market area and calendar quarter in 2009 as the outcome variable. The average difference in cigarette pack prices is 46 cents in the grocery channel, 29 cents in the drug channel and 13 cents in the convenience channel, with prices being lower in states with MPLs for all three channels. The findings that MPLs do not raise cigarette prices could be the result of a lack of compliance and enforcement by the state or could be attributed to the minimum state mark-up being lower than the free-market mark-up for cigarettes. Rather than require a minimum mark-up, which can be nullified by promotional incentives and discounts, states and countries could strengthen MPLs by setting a simple 'floor price' that is the true minimum price for all cigarettes or could prohibit discounts to consumers and retailers.
Maximum power analysis of photovoltaic module in Ramadi city
Energy Technology Data Exchange (ETDEWEB)
Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)
2013-07-01
Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power which can be obtained in Ramadi city (100 km west of Baghdad) is practically analyzed. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is very essential in the design of solar energy systems. The solar irradiance data in Ramadi city were analyzed for the first three months of 2013. The solar irradiance data were measured at the earth's surface in the campus area of Anbar University. Actual average data readings were taken from the data logger of the sun tracker system, which is set to save the average reading every two minutes, based on readings taken each second. The data are analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of PV can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of the PV modules.
Minimum weight design of composite laminates for multiple loads
International Nuclear Information System (INIS)
Krikanov, A.A.; Soni, S.R.
1995-01-01
A new design method of constructing optimum weight composite laminates for multiple loads is proposed in this paper. A netting analysis approach is used to develop an optimization procedure. Three ply orientations permit development of an optimum laminate design without using stress-strain relations. It is proved that stresses in the minimum weight laminate reach allowable values in each ply under the given load. The optimum ply thickness is defined at the maximum value among the tensile and compressive loads. Two examples are given to obtain optimum ply orientations, thicknesses and materials. For comparison purposes, stresses are calculated in the orthotropic material using classical lamination theory. Based upon these calculations, the matrix degrades at 30 to 50% of the ultimate load. There is no fiber failure and therefore the laminates withstand all applied loads in both examples
A minimum attention control center for nuclear power plants
International Nuclear Information System (INIS)
Meijer, C.H.
1986-01-01
Control centers for nuclear power plants have characteristically been designed for maximum attention by the operating staffs of these plants. Consequently, the monitoring, control and diagnostics-oriented cognitive activities of these staffs were mostly "data-driven" in nature. This paper addresses a control center concept, under development by Combustion Engineering, that promotes a more "information-driven" cognitive interaction process between the operator and the plant. The more "intelligent" and therefore less attention-demanding nature of such an interactive process relies on computer-implemented, cognitively engineered algorithms. The underlying structure of these algorithms is based upon the Critical Function/Success Path monitoring principle. The paper highlights a typical implementation of the minimum attention concept for the handling of unfamiliar safety-related events. (author)
Wind Turbine Down-regulation Strategy for Minimum Wake Deficit
DEFF Research Database (Denmark)
Ma, Kuichao; Zhu, Jiangsheng; N. Soltani, Mohsen
2017-01-01
Down-regulation of wind turbines is commonly used, whether to reserve power for supporting ancillary services to the grid, to optimize power within a wind farm, or to reduce power loss under fault conditions; it is also a method of protecting a faulty turbine. A down-regulation strategy based on minimum wake deficit is proposed in this paper, for improving the power of the downwind turbine in the low and medium wind speed region. The main idea is to operate the turbine at an appropriate operating point through rotor speed and torque control. The effectiveness of the strategy is verified by comparison with the maximum rotor speed strategy. The results show that the proposed strategy can improve the power of the downwind turbine effectively.
Minimum qualifications for nuclear criticality safety professionals
International Nuclear Information System (INIS)
Ketzlach, N.
1990-01-01
A Nuclear Criticality Technology and Safety Training Committee has been established within the U.S. Department of Energy (DOE) Nuclear Criticality Safety and Technology Project to review and, if necessary, develop standards for the training of personnel involved in nuclear criticality safety (NCS). The committee is exploring the need for developing a standard or other mechanism for establishing minimum qualifications for NCS professionals. The development of standards and regulatory guides for nuclear power plant personnel may serve as a guide in developing the minimum qualifications for NCS professionals
A minimum achievable PV electrical generating cost
International Nuclear Information System (INIS)
Sabisky, E.S.
1996-01-01
The role and share of photovoltaic (PV) generated electricity in our nation's future energy arsenal is primarily dependent on its future production cost. This paper provides a framework for obtaining a minimum achievable electrical generating cost (a lower bound) for fixed, flat-plate photovoltaic systems. A cost of 2.8 ¢/kWh (1990 $) was derived for a plant located in Southwestern USA sunshine using a cost of money of 8%. In addition, a value of 22 ¢/Wp (1990 $) was estimated as a minimum module manufacturing cost/price
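The paper's cost framework is not reproduced here; a generic levelized-cost sketch using a capital recovery factor at the stated 8% cost of money (every other input below is an illustrative assumption, not a figure from the paper) looks like:

```python
def crf(r, n):
    """Capital recovery factor: converts an up-front capital cost into a
    level annual charge over n years at discount rate r."""
    return r * (1.0 + r)**n / ((1.0 + r)**n - 1.0)

def lcoe_cents_per_kwh(capex_per_kw, r, years, o_and_m_per_kw_yr, kwh_per_kw_yr):
    # level annual cost per kW, divided by annual energy per kW
    annual = crf(r, years) * capex_per_kw + o_and_m_per_kw_yr
    return 100.0 * annual / kwh_per_kw_yr

# Illustrative numbers only: $1000/kW system, 8% cost of money, 30-year life,
# $10/kW-yr O&M, 2200 kWh/kW-yr of output (roughly Southwestern-US sunshine)
cost = lcoe_cents_per_kwh(1000.0, 0.08, 30, 10.0, 2200.0)
```

Driving the capital cost and O&M inputs to their practical floors in such a formula is the kind of lower-bound exercise the abstract describes.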
How to design your stand-by diesel generator unit for maximum reliability
International Nuclear Information System (INIS)
Kauffmann, W.M.
1979-01-01
Critical stand-by power applications, such as in a nuclear plant or radio support stations, demand exacting guidelines for positive start, rapid acceleration, load acceptance with minimum voltage drop, and quick recovery to rated voltage. The design of a medium-speed turbocharged and intercooled diesel-engine generator for this purpose is considered. Selection of the diesel engine size and number of units, from the standpoint of cost, favors a minimum number of units with maximum horsepower capability. Four-cycle diesels are available in 16- to 20-cylinder V-configurations, with 200 BMEP (brake mean effective pressure) continuous and 250 BMEP peaking
Effects of bruxism on the maximum bite force
Directory of Open Access Journals (Sweden)
Todić Jelena T.
2017-01-01
Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups were submitted to the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism influences the increase in the maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
Statistics on exponential averaging of periodograms
Energy Technology Data Exchange (ETDEWEB)
Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
Statistics on exponential averaging of periodograms
International Nuclear Information System (INIS)
Peeters, T.T.J.M.; Ciftcioglu, Oe.
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
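The exponential averaging scheme analyzed in these two records can be sketched as a recursive update of the PSD estimate over successive periodograms, S ← (1−α)·S + α·P_k (the segment length, smoothing factor and white-noise test signal are illustrative assumptions):

```python
import numpy as np

def exp_avg_psd(signal, seg_len, alpha):
    """Exponentially averaged periodogram PSD estimate:
    S <- (1 - alpha)*S + alpha*P_k over successive segments."""
    S = None
    for k in range(len(signal) // seg_len):
        seg = signal[k*seg_len:(k+1)*seg_len]
        P = np.abs(np.fft.rfft(seg))**2 / seg_len   # periodogram of segment k
        S = P if S is None else (1.0 - alpha)*S + alpha*P
    return S

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)          # unit-variance white noise, flat true PSD
S = exp_avg_psd(x, seg_len=256, alpha=0.2)
```

A small `alpha` corresponds to a large averaging time constant, which is the regime in which the abstract's Gaussian approximation of the estimate's PDF applies.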
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependencies between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average work productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average work productivity over the factors affecting it is carried out by means of the u-substitution method.
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
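MXLKID itself maximizes a likelihood function for a nonlinear dynamic system numerically; as a minimal illustration of the underlying maximum-likelihood principle only (not of the MXLKID algorithm), the Gaussian case has closed-form estimates:

```python
import numpy as np

def gaussian_mle(data):
    """Closed-form maximum-likelihood estimates for a Gaussian sample:
    mean = sample mean, variance = biased sample variance (divide by n)."""
    x = np.asarray(data, dtype=float)
    mu = x.mean()
    var = ((x - mu)**2).mean()       # ML estimate divides by n, not n-1
    return mu, var

rng = np.random.default_rng(2)
sample = rng.normal(loc=3.0, scale=2.0, size=10_000)   # true mean 3, variance 4
mu_hat, var_hat = gaussian_mle(sample)
```

For a nonlinear dynamic system no such closed form exists, which is why a program like MXLKID must maximize the likelihood function numerically over the parameters.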
Phylogenetic Applications of the Minimum Contradiction Approach on Continuous Characters
Directory of Open Access Journals (Sweden)
Marc Thuillard
2009-01-01
We describe the conditions under which a set of continuous variables or characters can be described as an X-tree or a split network. A distance matrix corresponds exactly to a split network or a valued X-tree if, after ordering of the taxa, the variables' values can be embedded into a function with at most a local maximum and a local minimum, and crossing any horizontal line at most twice. In real applications, the order of the taxa best satisfying the above conditions can be obtained using the Minimum Contradiction method. This approach is applied to two sets of continuous characters. The first set corresponds to craniofacial landmarks in Hominids. The contradiction matrix is used to identify possible tree structures and some alternatives when they exist. We explain how to discover the main structuring characters in a tree. The second set consists of a sample of 100 galaxies. In that second example we show how to discretize the continuous variables describing physical properties of the galaxies without disrupting the underlying tree structure.
RR Tel: Determination of Dust Properties During Minimum Obscuration
Directory of Open Access Journals (Sweden)
Jurkić T.
2012-06-01
The ISO infrared spectra and the SAAO long-term JHKL photometry of RR Tel in epochs during minimum obscuration are studied in order to construct a circumstellar dust model. The spectral energy distribution in the near- and mid-IR spectral range (1–15 μm) was obtained for an epoch without pronounced dust obscuration. The DUSTY code was used to solve the radiative transfer through the dust and to determine the circumstellar dust properties of the inner dust regions around the Mira component. Dust temperature, maximum grain size, dust density distribution, mass-loss rate, terminal wind velocity and optical depth are determined. The spectral energy distribution and the long-term JHKL photometry during an epoch of minimum obscuration show an almost unattenuated stellar source and strong dust emission which cannot be explained by a single dust shell model. We propose a two-component model consisting of an optically thin circumstellar dust shell and optically thick dust outside the line of sight in some kind of flattened geometry, which is responsible for most of the observed dust thermal emission.
Weighted estimates for the averaging integral operator
Czech Academy of Sciences Publication Activity Database
Opic, Bohumír; Rákosník, Jiří
2010-01-01
Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231
Average Transverse Momentum Quantities Approaching the Lightfront
Boer, Daniel
2015-01-01
In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
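The TAMSD functional described above can be sketched directly (a simulated unit-diffusivity Brownian trajectory is assumed; for Brownian motion the expected TAMSD grows linearly with the lag):

```python
import numpy as np

def tamsd(traj, lag):
    """Time-averaged mean-square displacement of a single trajectory:
    TAMSD(lag) = average over t of (x(t + lag) - x(t))^2."""
    d = traj[lag:] - traj[:-lag]
    return float(np.mean(d**2))

rng = np.random.default_rng(3)
steps = rng.standard_normal(100_000)   # unit-variance Brownian increments
x = np.cumsum(steps)                   # single simulated trajectory
# For this trajectory E[TAMSD(lag)] = lag (i.e. 2*D*lag with 2*D = 1)
msd = [tamsd(x, lag) for lag in (1, 2, 4, 8)]
```

The scatter of such single-trajectory estimates around 2·D·lag is exactly the statistical fluctuation the paper characterizes via the Laplace transform of the TAMSD's probability density.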
The radial distribution of cosmic rays in the heliosphere at solar maximum
McDonald, F. B.; Fujii, Z.; Heikkila, B.; Lal, N.
2003-08-01
To obtain a more detailed profile of the radial distribution of galactic (GCRs) and anomalous (ACRs) cosmic rays, a unique time in the 11-year solar activity cycle has been selected: that of solar maximum. At this time of minimum cosmic ray intensity, a simple, straightforward normalization technique has been found that allows the cosmic ray data from IMP 8, Pioneer 10 (P-10) and Voyagers 1 and 2 (V1, V2) to be combined for the solar maxima of cycles 21, 22 and 23. This combined distribution reveals a functional form of the radial gradient that varies as G_0/r, with G_0 being constant and relatively small in the inner heliosphere. After a transition region between ˜10 and 20 AU, G_0 increases to a much larger value that remains constant between ˜25 and 82 AU. This implies that at solar maximum the changes that produce the 11-year modulation cycle mainly occur in the outer heliosphere between ˜15 AU and the termination shock. These observations are not inconsistent with the concept that Global Merged Interaction Regions (GMIRs) are the principal agent of modulation between solar minimum and solar maximum. There does not appear to be a significant change in the amount of heliosheath modulation occurring between the 1997 solar minimum and the cycle 23 solar maximum.
Average configuration of the geomagnetic tail
International Nuclear Information System (INIS)
Fairfield, D.H.
1979-01-01
Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between the northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed.
Unscrambling The "Average User" Of Habbo Hotel
Directory of Open Access Journals (Sweden)
Mikael Johnson
2007-01-01
The “user” is an ambiguous concept in human–computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design–use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding of categorization practices in design through a case study of the virtual community Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer’s disregard of marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
Changing mortality and average cohort life expectancy
Directory of Open Access Journals (Sweden)
Robert Schoen
2005-10-01
Period life expectancy varies with changes in mortality and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of period life expectancy as the leading measure of survivorship. An aggregate measure of period mortality seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
Discretization of space and time: determining the values of minimum length and minimum time
Roatta, Luca
2017-01-01
Assuming that space and time can only take discrete values, we obtain expressions for the minimum length and the minimum time interval. These values are found to coincide exactly with the Planck length and the Planck time, except for the presence of h instead of ħ.
Jarzynski equality in the context of maximum path entropy
González, Diego; Davis, Sergio
2017-06-01
In the global framework of finding an axiomatic derivation of nonequilibrium statistical mechanics from fundamental principles, such as the maximum path entropy, also known as the Maximum Caliber principle, this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy difference between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social, financial, and ecological systems.
Maximum neutron flux in thermal reactors
International Nuclear Information System (INIS)
Strugar, P.V.
1968-12-01
The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it suitable for application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.
Maximum allowable load on wheeled mobile manipulators
International Nuclear Information System (INIS)
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator along a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important. To resolve the extra degrees of freedom introduced by base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy.
Maximum phytoplankton concentrations in the sea
DEFF Research Database (Denmark)
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
Minimum Areas for Elementary School Building Facilities
Pennsylvania State Dept. of Public Instruction, Harrisburg.
Minimum area space requirements in square footage for elementary school building facilities are presented, including facilities for instructional use, general use, and service use. Library, cafeteria, kitchen, storage, and multipurpose rooms should be sized for the projected enrollment of the building in accordance with the projection under the…
Dirac's minimum degree condition restricted to claws
Broersma, Haitze J.; Ryjacek, Z.; Schiermeyer, I.
1997-01-01
Let G be a graph on n ≥ 3 vertices. Dirac's minimum degree condition is the condition that all vertices of G have degree at least n/2. This is a well-known sufficient condition for the existence of a Hamilton cycle in G. We give related sufficiency conditions for the existence of a Hamilton cycle or a
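Dirac's condition is easy to check mechanically from an edge list; the sketch below is purely illustrative (the function name and graph encoding are our own, not from the paper).

```python
def satisfies_dirac(n, edges):
    """Check Dirac's condition: every vertex of the n-vertex graph
    (n >= 3) given by `edges` has degree at least n/2."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return n >= 3 and all(2 * d >= n for d in deg)

# The 4-cycle meets the condition (all degrees 2 >= 4/2) and is Hamiltonian.
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
# The path on 4 vertices fails it (endpoint degree 1 < 4/2).
path4 = [(0, 1), (1, 2), (2, 3)]
print(satisfies_dirac(4, cycle4), satisfies_dirac(4, path4))
```

Note the condition is sufficient, not necessary: many Hamiltonian graphs fail it.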
7 CFR 33.10 - Minimum requirements.
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Regulations § 33.10 Minimum requirements. No person shall... shipment of apples to any foreign destination unless: (a) Apples grade at least U.S. No. 1 or U.S. No. 1...
Minimum Risk Pesticide: Definition and Product Confirmation
Minimum risk pesticides pose little to no risk to human health or the environment and therefore are not subject to regulation under FIFRA. EPA does not do any pre-market review for such products or labels, but violative products are subject to enforcement.
The Minimum Distance of Graph Codes
DEFF Research Database (Denmark)
Høholdt, Tom; Justesen, Jørn
2011-01-01
We study codes constructed from graphs where the code symbols are associated with the edges and the symbols connected to a given vertex are restricted to be codewords in a component code. In particular we treat such codes from bipartite expander graphs coming from Euclidean planes and other geometries. We give results on the minimum distances of the codes.
Minimum maintenance solar pump | Assefa | Zede Journal
African Journals Online (AJOL)
A minimum maintenance solar pump (MMSP), Fig 1, has been simulated for Addis Ababa, taking solar meteorological data of global radiation, diffuse radiation and ambient air temperature as input to a computer program that has been developed. To increase the performance of the solar pump, by trapping the long-wave ...
Context quantization by minimum adaptive code length
DEFF Research Database (Denmark)
Forchhammer, Søren; Wu, Xiaolin
2007-01-01
Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols.
7 CFR 35.13 - Minimum quantity.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Minimum quantity. 35.13 Section 35.13 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... part, transport or receive for transportation to any foreign destination, a shipment of 25 packages or...
Minimum impact house prototype for sustainable building
Götz, E.; Klenner, K.; Lantelme, M.; Mohn, A.; Sauter, S.; Thöne, J.; Zellmann, E.; Drexler, H.; Jauslin, D.
2010-01-01
The Minihouse is a prototype for a sustainable townhouse. On a site of only 29 sqm it offers 154 sqm of urban life. The project 'Minimum Impact House' addresses two important questions: How do we provide living space in the cities without destroying the landscape? How to sustainably improve the
49 CFR 639.27 - Minimum criteria.
2010-10-01
... dollar value to any non-financial factors that are considered by using performance-based specifications..., DEPARTMENT OF TRANSPORTATION CAPITAL LEASES Cost-Effectiveness § 639.27 Minimum criteria. In making the... used where possible and appropriate: (a) Operation costs; (b) Reliability of service; (c) Maintenance...
Computing nonsimple polygons of minimum perimeter
Fekete, S.P.; Haas, A.; Hemmer, M.; Hoffmann, M.; Kostitsyna, I.; Krupke, D.; Maurer, F.; Mitchell, J.S.B.; Schmidt, A.; Schmidt, C.; Troegel, J.
2018-01-01
We consider the Minimum Perimeter Polygon Problem (MP3): for a given set V of points in the plane, find a polygon P with holes that has vertex set V , such that the total boundary length is smallest possible. The MP3 can be considered a natural geometric generalization of the Traveling Salesman
Minimum-B mirrors plus EBT principles
International Nuclear Information System (INIS)
Yoshikawa, S.
1983-01-01
Electrons are heated at the minimum-B location(s) created by the multipole field and the toroidal field. The resulting hot electrons can assist plasma confinement by (1) providing mirror confinement, (2) creating azimuthally symmetric toroidal confinement, or (3) creating a modified bumpy torus.
Completeness properties of the minimum uncertainty states
Trifonov, D. A.
1993-01-01
The completeness properties of the Schrödinger minimum uncertainty states (SMUS) and of some of their subsets are considered. The invariant measures and the resolution-of-unity measures for the set of SMUS are constructed, and the representation of squeezing and correlating operators and of SMUS as superpositions of Glauber coherent states on the real line is elucidated.
Minimum Description Length Shape and Appearance Models
DEFF Research Database (Denmark)
Thodberg, Hans Henrik
2003-01-01
The Minimum Description Length (MDL) approach to shape modelling is reviewed. It solves the point correspondence problem of selecting points on shapes defined as curves so that the points correspond across a data set. An efficient numerical implementation is presented and made available as open s...
Faster Fully-Dynamic minimum spanning forest
DEFF Research Database (Denmark)
Holm, Jacob; Rotenberg, Eva; Wulff-Nilsen, Christian
2015-01-01
We give a new data structure for the fully-dynamic minimum spanning forest problem in simple graphs. Edge updates are supported in O(log^4 n / log log n) expected amortized time per operation, improving the O(log^4 n) amortized bound of Holm et al. (STOC'98, JACM'01). We also provide a deterministic data...
Minimum Wage Effects throughout the Wage Distribution
Neumark, David; Schweitzer, Mark; Wascher, William
2004-01-01
This paper provides evidence on a wide set of margins along which labor markets can adjust in response to increases in the minimum wage, including wages, hours, employment, and ultimately labor income. Not surprisingly, the evidence indicates that low-wage workers are most strongly affected, while higher-wage workers are little affected. Workers…
Asymptotics for the minimum covariance determinant estimator
Butler, R.W.; Davies, P.L.; Jhun, M.
1993-01-01
Consistency is shown for the minimum covariance determinant (MCD) estimators of multivariate location and scale and asymptotic normality is shown for the former. The proofs are made possible by showing a separating ellipsoid property for the MCD subset of observations. An analogous property is shown
Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis
Directory of Open Access Journals (Sweden)
LiMin Wang
2014-01-01
Of the numerous proposals to improve the accuracy of naive Bayes (NB) by weakening the conditional independence assumption, the averaged one-dependence estimator (AODE) demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes bring both considerable computational cost and a negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of semi-naive Bayesian operation, which selects superparent attributes by building a maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Distribution of phytoplankton groups within the deep chlorophyll maximum
Latasa, Mikel
2016-11-01
The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.
76 FR 22802 - Interim Enforcement Policy for Minimum Days Off Requirements
2011-04-25
... work hours that an individual can work by using a comparable but simpler and more flexible requirement... having only one in every nine days off or consistently working the maximum allowable hours, which would...-based objective, consisting of an average of 54 hours worked per week, averaged over a calendar quarter...
Nosé, M.; Ieda, A.; Christon, S. P.
2009-07-01
We examined long-term variations of ion composition in the plasma sheet, using energetic (9.4-212.1 keV/e) ion flux data obtained by the suprathermal ion composition spectrometer (STICS) sensor of the energetic particle and ion composition (EPIC) instrument on board the Geotail spacecraft. EPIC/STICS observations are available from 17 October 1992 for more than 16 years, covering the declining phase of solar cycle 22, all of solar cycle 23, and the early phase of solar cycle 24. This unprecedented long-term data set revealed that (1) the He+/H+ and O+/H+ flux ratios in the plasma sheet were dependent on the F10.7 index; (2) the F10.7 index dependence is stronger for O+/H+ than He+/H+; (3) the O+/H+ flux ratio is also weakly correlated with the ΣKp index; and (4) the He2+/H+ flux ratio in the plasma sheet appeared to show no long-term trend. From these results, we derived empirical equations related to plasma sheet ion composition and the F10.7 index and estimated that the average plasma ion mass changes from ˜1.1 amu during solar minimum to ˜2.8 amu during solar maximum. In such a case, the Alfvén velocity during solar maximum decreases to ˜60% of the solar minimum value. Thus, physical processes in the plasma sheet are considered to be much different between solar minimum and solar maximum. We also compared long-term variation of the plasma sheet ion composition with that of the substorm occurrence rate, which is evaluated by the number of Pi2 pulsations. No correlation or negative correlation was found between them. This result contradicts the O+ triggering substorm model, in which heavy ions in the plasma sheet increase the growth rate of the linear ion tearing mode and play an important role in localization and initiation of substorms. In contrast, O+ ions in the plasma sheet may prevent occurrence of substorms.
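The Alfvén-velocity estimate quoted above follows from v_A = B/√(μ₀ρ): if the field strength and ion number density are taken as comparable at solar minimum and maximum (our simplifying assumption, not stated in the abstract), the mass density scales with the average ion mass, so the ratio of Alfvén speeds is √(m_min/m_max). A minimal arithmetic check:

```python
import math

# Average plasma ion mass from the abstract: ~1.1 amu at solar minimum,
# ~2.8 amu at solar maximum. With comparable B and number density,
# v_A(max) / v_A(min) = sqrt(m_min / m_max).
m_min, m_max = 1.1, 2.8
ratio = math.sqrt(m_min / m_max)
print(f"v_A at solar maximum is about {ratio:.0%} of its solar-minimum value")
```

The result, about 0.63, is consistent with the "~60%" figure in the abstract.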
Iyyappan, I.; Ponmurugan, M.
2018-03-01
The trade-off figure of merit (Ω̇) criterion accounts for the best compromise between the useful input energy and the lost input energy of heat devices. When a heat engine is working at maximum Ω̇ its efficiency increases significantly over the efficiency at maximum power. We derive general relations between the power, the efficiency at maximum Ω̇ criterion, and minimum dissipation for the linear irreversible heat engine. The efficiency at maximum Ω̇ criterion has the lower bound
Planetary tides during the Maunder sunspot minimum
International Nuclear Information System (INIS)
Smythe, C.M.; Eddy, J.A.
1977-01-01
Sun-centered planetary conjunctions and tidal potentials are here constructed for the AD 1645 to 1715 period of sunspot absence, referred to as the 'Maunder Minimum'. These are found to be effectively indistinguishable from patterns of conjunctions and power spectra of tidal potential in the present era of a well established 11 year sunspot cycle. This places a new and difficult constraint on any tidal theory of sunspot formation. Problems arise in any direct gravitational theory due to the apparently insufficient forces and tidal heights involved. Proponents of the tidal hypothesis usually revert to trigger mechanisms, which are difficult to criticise or test by observation. Any tidal theory rests on the evidence of continued sunspot periodicity and the substantiation of a prolonged period of solar anomaly in the historical past. The 'Maunder Minimum' was the most drastic change in the behaviour of solar activity in the last 300 years; sunspots virtually disappeared for a 70 year period and the 11 year cycle was probably absent. During that time, however, the nine planets were all in their orbits, and planetary conjunctions and tidal potentials were indistinguishable from those of the present era, in which the 11 year cycle is well established. This provides good evidence against the tidal theory. The pattern of planetary tidal forces during the Maunder Minimum was reconstructed to investigate the possibility that the multiple planet forces somehow fortuitously cancelled at the time, that is, that the positions of the slower moving planets in the 17th and early 18th centuries were such that conjunctions and tidal potentials were at the time reduced in number and force. There was no striking dissimilarity between the time of the Maunder Minimum and any period investigated. The failure of planetary conjunction patterns to reflect the drastic drop in sunspots during the Maunder Minimum casts doubt on the tidal theory of solar activity, but a more quantitative test
Operator product expansion and its thermal average
Energy Technology Data Exchange (ETDEWEB)
Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)
1998-05-01
QCD sum rules at finite temperature, like those at zero temperature, require the coefficients of local operators which arise in the short-distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case of finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules when the temperature is not too low. (orig.) 7 refs.
Fluctuations of wavefunctions about their classical average
International Nuclear Information System (INIS)
Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H
2003-01-01
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed.
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schrödinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Baseline-dependent averaging in radio interferometry
Wijnholds, S. J.; Willis, A. G.; Salvini, S.
2018-05-01
This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
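The core idea of BDA, averaging over longer time windows on shorter baselines, whose fringes vary more slowly, can be sketched in toy form. The windowing rule, names, and parameters below are purely illustrative and are not the scheme analysed in the paper.

```python
import numpy as np

def bda_average(vis, baseline_len, max_len, max_window):
    """Toy baseline-dependent averaging: average visibility samples in
    time windows inversely proportional to baseline length, so short
    baselines are averaged more heavily (hypothetical rule, for
    illustration only)."""
    window = max(1, int(round(max_window * max_len / baseline_len)))
    n = len(vis) // window * window          # drop the ragged tail
    return vis[:n].reshape(-1, window).mean(axis=1)

rng = np.random.default_rng(1)
vis = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
# A baseline 10x shorter than the longest gets a 10x longer window,
# hence ~10x fewer output samples -- this is the data-volume saving.
short = bda_average(vis, baseline_len=100.0, max_len=1000.0, max_window=8)
long_ = bda_average(vis, baseline_len=1000.0, max_len=1000.0, max_window=8)
print(len(short), len(long_))
```

A real implementation must also bound the window by the acceptable decorrelation loss, which the paper derives in closed form.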
Multistage parallel-serial time averaging filters
International Nuclear Information System (INIS)
Theodosiou, G.E.
1980-01-01
Here, a new time-averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time-uncertainty reduction. (orig.)
Time-averaged MSD of Brownian motion
International Nuclear Information System (INIS)
Andreanov, Alexei; Grebenkov, Denis S
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
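The TAMSD functional described above is straightforward to estimate from a single simulated trajectory; for Brownian motion its expectation grows linearly with the lag time, with slope 2D. The discretization and names below are our own illustrative sketch, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 1-D Brownian trajectory: increments ~ N(0, 2 D dt).
D, dt, n = 1.0, 0.01, 100_000
x = np.cumsum(np.sqrt(2 * D * dt) * rng.standard_normal(n))

def tamsd(x, lag):
    """Time-averaged MSD of trajectory x at an integer lag (in samples)."""
    d = x[lag:] - x[:-lag]
    return np.mean(d * d)

# For Brownian motion E[TAMSD(lag)] = 2 * D * lag * dt, so the slope of
# TAMSD versus lag time gives an estimate of 2D.
lags = np.arange(1, 50)
msd = np.array([tamsd(x, k) for k in lags])
D_hat = np.polyfit(lags * dt, msd, 1)[0] / 2
print(f"estimated D = {D_hat:.3f}")
```

For a single finite trajectory the TAMSD fluctuates around this mean; those fluctuations are exactly what the record above characterizes.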
Time-dependent angularly averaged inverse transport
International Nuclear Information System (INIS)
Bal, Guillaume; Jollivet, Alexandre
2009-01-01
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain
Bootstrapping Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...
Average Nuclear properties based on statistical model
International Nuclear Information System (INIS)
El-Jaick, L.J.
1974-01-01
The rough properties of nuclei were investigated by a statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exerted by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta stability lines and mass excess, the surface symmetry energy was established. (M.C.K.)
Time-averaged MSD of Brownian motion
Andreanov, Alexei; Grebenkov, Denis S.
2012-07-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
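The TAMSD described above is straightforward to compute from a single trajectory. A minimal numerical sketch (the function name and the simulation constants are illustrative, not from the paper): for Brownian motion with diffusion coefficient D, the expected TAMSD at lag time t is 2Dt, which the estimate from one long trajectory should approach.

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean-square displacement at a given lag,
    averaged along a single 1D trajectory x of positions."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

rng = np.random.default_rng(0)
D, dt, n = 0.5, 0.01, 100_000
# Brownian trajectory: independent Gaussian increments with variance 2*D*dt
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), n))

# For Brownian motion, E[TAMSD(lag)] = 2 * D * (lag * dt)
for lag in (10, 100):
    print(lag, tamsd(x, lag), 2 * D * lag * dt)
```

For a single finite trajectory the TAMSD fluctuates around its mean, which is exactly the scatter whose distribution the paper characterizes.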
Directory of Open Access Journals (Sweden)
G. R. Pasha
2006-07-01
Full Text Available In this paper, we show how much the variances of two classical estimators, the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameter of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and thus becomes the attractive choice.
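The comparison for the Maxwell distribution can be reproduced by simulation. A sketch under stated assumptions (sample sizes and seeds are arbitrary): for the Maxwell distribution with scale a, E[X²] = 3a² gives the closed-form MLE, E[X] = 2a√(2/π) gives the moment estimator, and the Cramér-Rao minimum variance bound is a²/(6n).

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, n, reps = 1.0, 200, 2000

mle = np.empty(reps)
mom = np.empty(reps)
for i in range(reps):
    # A Maxwell(a) variate is the norm of a 3D isotropic Gaussian with std a
    x = np.linalg.norm(rng.normal(0.0, a_true, (n, 3)), axis=1)
    mle[i] = np.sqrt(np.mean(x ** 2) / 3)        # maximum-likelihood estimator
    mom[i] = np.mean(x) * np.sqrt(np.pi / 8)     # moment estimator from E[X]

crb = a_true ** 2 / (6 * n)   # minimum variance (Cramér-Rao) bound
print(mle.var(), mom.var(), crb)
```

The empirical variance of the MLE sits essentially on the bound, while the moment estimator's variance, a²(3π/8 − 1)/n asymptotically, is somewhat larger — the pattern the paper quantifies.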
De Luca, G.; Magnus, J.R.
2011-01-01
In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet
International Nuclear Information System (INIS)
Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi
2010-01-01
Axial variation of average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle of 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power scaling on the backing pressure ranging from 16 to 50 bar, and the power is strongly Z dependent, varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position of 5 mm has the maximum signal intensity. The estimation of the average cluster size on axial position Z indicates that the cluster growth process goes forward until the maximum average cluster size is reached at Z = 9 mm, and the average cluster size will decrease gradually for Z > 9 mm.
MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.
Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang
2018-02-02
The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*; but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
Beta-energy averaging and beta spectra
International Nuclear Information System (INIS)
Stamatelatos, M.G.; England, T.R.
1976-07-01
A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
Asymptotic Time Averages and Frequency Distributions
Directory of Open Access Journals (Sweden)
Muhammad El-Taha
2016-01-01
Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (-∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
Averaging in the presence of sliding errors
International Nuclear Information System (INIS)
Yost, G.P.
1991-08-01
In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms.
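The reweighting idea can be sketched numerically. A minimal illustration, assuming purely relative ("sliding") errors; the function name, the error model `sigma_of`, and all constants are assumptions for the example, not from the paper: weighting by self-reported errors biases the average low, while re-evaluating every error at the current common average removes the bias.

```python
import numpy as np

def sliding_error_average(y, sigma_of, n_iter=10):
    """Average measurements whose errors depend on the (unknown) true value:
    evaluate every experiment's error at the current common average rather
    than at its own measured value, then iterate to a fixed point.
    sigma_of(v) returns each experiment's error at assumed true value v."""
    mu = np.mean(y)                      # start from the unweighted mean
    for _ in range(n_iter):
        w = 1.0 / sigma_of(mu) ** 2      # weights from errors at the average
        mu = np.sum(w * y) / np.sum(w)
    return mu

rng = np.random.default_rng(2)
true_value = 10.0
c = rng.uniform(0.05, 0.3, 500)          # each experiment's relative error
y = rng.normal(true_value, c * true_value)
sigma_of = lambda v: c * v               # errors "slide" with the value

# Naive average weights by each experiment's self-reported error: low
# measurements get small errors, hence large weights, biasing the mean down.
w_naive = 1.0 / (c * y) ** 2
naive = np.sum(w_naive * y) / np.sum(w_naive)
print(naive, sliding_error_average(y, sigma_of))
```

With a general (nonlinear) dependence of error on value, the iteration matters; for purely relative errors it converges in a single pass.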
Entanglement in random pure states: spectral density and average von Neumann entropy
Energy Technology Data Exchange (ETDEWEB)
Kumar, Santosh; Pandey, Akhilesh, E-mail: skumar.physics@gmail.com, E-mail: ap0700@mail.jnu.ac.in [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110 067 (India)
2011-11-04
Quantum entanglement plays a crucial role in quantum information, quantum teleportation and quantum computation. The information about the entanglement content between subsystems of the composite system is encoded in the Schmidt eigenvalues. We derive here closed expressions for the spectral density of Schmidt eigenvalues for all three invariant classes of random matrix ensembles. We also obtain exact results for average von Neumann entropy. We find that maximum average entanglement is achieved if the system belongs to the symplectic invariant class. (paper)
The Control Based on Internal Average Kinetic Energy in Complex Environment for Multi-robot System
Yang, Mao; Tian, Yantao; Yin, Xianghua
In this paper, a reference trajectory is designed according to the minimum energy consumed by the multi-robot system, for which nonlinear programming and cubic spline interpolation are adopted. The control strategy is composed of two levels: the lower level is a simple PD control, and the upper level is based on the internal average kinetic energy of the multi-robot system in a complex environment with velocity damping. Simulation tests verify the effectiveness of this control strategy.
Nowcasting daily minimum air and grass temperature
Savage, M. J.
2016-02-01
Site-specific and accurate prediction of daily minimum air and grass temperatures, made available online several hours before their occurrence, would be of significant benefit to several economic sectors and for planning human activities. Site-specific and reasonably accurate nowcasts of daily minimum temperature several hours before its occurrence, using measured sub-hourly temperatures hours earlier in the morning as model inputs, was investigated. Various temperature models were tested for their ability to accurately nowcast daily minimum temperatures 2 or 4 h before sunrise. Temperature datasets used for the model nowcasts included sub-hourly grass and grass-surface (infrared) temperatures from one location in South Africa and air temperature from four subtropical sites varying in altitude (USA and South Africa) and from one site in central sub-Saharan Africa. Nowcast models used employed either exponential or square root functions to describe the rate of nighttime temperature decrease but inverted so as to determine the minimum temperature. The models were also applied in near real-time using an open web-based system to display the nowcasts. Extrapolation algorithms for the site-specific nowcasts were also implemented in a datalogger in an innovative and mathematically consistent manner. Comparison of model 1 (exponential) nowcasts vs measured daily minima air temperatures yielded root mean square errors (RMSEs) <1 °C for the 2-h ahead nowcasts. Model 2 (also exponential), for which a constant model coefficient ( b = 2.2) was used, was usually slightly less accurate but still with RMSEs <1 °C. Use of model 3 (square root) yielded increased RMSEs for the 2-h ahead comparisons between nowcasted and measured daily minima air temperature, increasing to 1.4 °C for some sites. For all sites for all models, the comparisons for the 4-h ahead air temperature nowcasts generally yielded increased RMSEs, <2.1 °C. Comparisons for all model nowcasts of the daily grass
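The extrapolation step behind such nowcasts can be sketched in a few lines. This is a hypothetical illustration of the square-root-type model form mentioned above, not the authors' implementation; the function name, the cooling constants, and the sunrise time are all assumptions: fit T(t) = T0 − b√t to the early-morning observations by least squares, then evaluate at sunrise, when the daily minimum is typically reached.

```python
import numpy as np

def nowcast_min_sqrt(t_obs, T_obs, t_sunrise):
    """Fit a square-root cooling model T(t) = T0 - b*sqrt(t)
    (t = hours since cooling onset) to observed temperatures,
    then extrapolate to sunrise to nowcast the daily minimum."""
    A = np.column_stack([np.ones_like(t_obs), -np.sqrt(t_obs)])
    (T0, b), *_ = np.linalg.lstsq(A, T_obs, rcond=None)
    return T0 - b * np.sqrt(t_sunrise)

# Synthetic night: T0 = 18 C, b = 3, sunrise at t = 10 h
rng = np.random.default_rng(3)
t = np.arange(1.0, 6.5, 0.5)             # sub-hourly observations, hours 1-6
T = 18.0 - 3.0 * np.sqrt(t) + rng.normal(0, 0.1, t.size)
pred = nowcast_min_sqrt(t, T, 10.0)      # true minimum: 18 - 3*sqrt(10) ≈ 8.51
print(round(pred, 2))
```

The same structure applies to the exponential variants; only the design matrix changes.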
Detection of minimum-ionizing particles in hydrogenated amorphous silicon
International Nuclear Information System (INIS)
Kaplan, S.N.; Fujieda, I.; Perez-Mendez, V.; Qureshi, S.; Ward, W.; Street, R.A.
1987-09-01
Based on previously-reported results of the successful detection of alpha particles and 1- and 2-MeV protons with hydrogenated amorphous silicon (a-Si : H) diodes, detection of a single minimum-ionizing particle will require a total sensitive thickness of approximately 100 to 150 μm, either in the form of a single thick diode, or as a stack of several thinner diodes. Signal saturation at high dE/dx makes it necessary to simulate minimum ionization in order to evaluate present detectors. Two techniques, using pulsed infrared light, and pulsed x-rays, give single-pulse signals large enough for direct measurements. A third, using beta rays, requires multiple-transit signal averaging to produce signals measurable above noise. Signal amplitudes from the a-Si : H saturate at 60% of the signal size from Si crystals extrapolated to the same thickness. This is consistent with an a-Si : H radiation ionization energy, W = 6 eV/electron-hole pair. Beta-ray signals are observed at the expected amplitude.
Measurements and IRI Model Predictions During the Recent Solar Minimum
Bilitza, Dieter; Brown, Steven A.; Wang, Mathew Y.; Souza, Jonas R.; Roddy, Patrick A.
2012-01-01
Cycle 23 was exceptional in that it lasted almost two years longer than its predecessors and in that it ended in an extended minimum period that proved all predictions wrong. Comparisons of the International Reference Ionosphere (IRI) with CHAMP and GRACE in-situ measurements of electron density during the minimum have revealed significant discrepancies at 400-500 km altitude. Our study investigates the causes for these discrepancies with the help of ionosonde and Planar Langmuir Probe (PLP) data from the Communications/Navigation Outage Forecasting System (C/NOFS) satellite. Our C/NOFS comparisons confirm the earlier CHAMP and GRACE results. But the ionosonde measurements of the F-peak plasma frequency (foF2) show generally good agreement throughout the whole solar cycle. At mid-latitude stations yearly averages of the data-model difference are within 10% and at low latitudes stations within 20%. The 60-70% differences found at 400-500 km altitude are not seen at the F peak. We will discuss how these seemingly contradicting results from the ionosonde and in situ data-model comparisons can be explained and which parameters need to be corrected in the IRI model.
DEFF Research Database (Denmark)
Cetin, Bilge Kartal; Prasad, Neeli R.; Prasad, Ramjee
2011-01-01
In wireless sensor networks, one of the key challenges is to achieve minimum energy consumption in order to maximize network lifetime. In fact, lifetime depends on many parameters: the topology of the sensor network, the data aggregation regime in the network, the channel access schemes, the routing...... protocols, and the energy model for transmission. In this paper, we tackle the routing challenge for maximum lifetime of the sensor network. We introduce a novel linear programming approach to the maximum lifetime routing problem. To the best of our knowledge, this is the first mathematical programming......
High average power linear induction accelerator development
International Nuclear Information System (INIS)
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs
FEL system with homogeneous average output
Energy Technology Data Exchange (ETDEWEB)
Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph
2018-01-16
A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest. Accelerating the particles to full energy to result in distinct and independently controlled, by the choice of phase offset, phase-energy correlations or chirps on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M.sub.56, which are selected to compress all three bunch trains at the FEL with higher order terms managed.
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
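The weighted-harmonic-average idea can be illustrated with a small fixed-point iteration. This is only a plausible sketch of the concept, not necessarily the exact recursion of the paper; the update rule, the 1/weight edge cost, and the toy network are all assumptions for the example.

```python
import numpy as np

def gen_numbers(adj, source, n_iter=200):
    """Iterative 'closeness' numbers from a source node: each node's number
    is a weighted harmonic average over its neighbors of
    (neighbor's number + 1/edge weight), weighted by the edge weight.
    adj[j, k] = weight of edge j-k (0 if absent)."""
    n = len(adj)
    E = np.full(n, 1e6)              # initialize every node as "far away"
    E[source] = 0.0
    for _ in range(n_iter):
        for j in range(n):
            if j == source:
                continue
            k = np.nonzero(adj[j])[0]
            w = adj[j, k]
            E[j] = w.sum() / np.sum(w / (E[k] + 1.0 / w))
    return E

# Toy collaboration network: 0-1 is a strong tie, 1-2 a weak tie
adj = np.array([[0., 5., 0.],
                [5., 0., 1.],
                [0., 1., 0.]])
E = gen_numbers(adj, source=0)
print(E)
```

Because the harmonic average is dominated by strong connections, node 1 ends up much "closer" to the source than node 2, and reversing the roles of source and target generally gives a different number — the asymmetry the paper exploits.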
Angle-averaged Compton cross sections
International Nuclear Information System (INIS)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m_0 c^2; α_s = scattered photon energy in units of m_0 c^2; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
Average Gait Differential Image Based Human Recognition
Directory of Open Access Journals (Sweden)
Jinyan Chen
2014-01-01
Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and static information of walking. Compared with the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared with PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
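The accumulation step that produces the AGDI can be sketched directly. A minimal illustration of the core operation only — the paper's preprocessing (silhouette extraction, alignment, gait-cycle detection) and exact normalization are omitted, and the toy frames are invented for the example.

```python
import numpy as np

def agdi(silhouettes):
    """Average gait differential image: the mean of absolute differences
    between adjacent silhouette frames.
    silhouettes: array of shape (n_frames, H, W) with binary values."""
    diffs = np.abs(np.diff(silhouettes.astype(float), axis=0))
    return diffs.mean(axis=0)

# Toy sequence: a single "leg" pixel sweeping along the bottom row
frames = np.zeros((5, 4, 4))
for i in range(5):
    frames[i, 3, i % 4] = 1.0
out = agdi(frames)
print(out.shape)
```

Pixels that change often between frames (the moving parts) get large AGDI values, while static regions stay near zero, which is how the image keeps both kinetic and static information.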
Reynolds averaged simulation of unsteady separated flow
International Nuclear Information System (INIS)
Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.
2003-01-01
The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around a square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation.
Measurement of Minimum Bias Observables with ATLAS
Kvita, Jiri; The ATLAS collaboration
2017-01-01
The modelling of Minimum Bias (MB) is a crucial ingredient to learn about the description of soft QCD processes. It also has significant relevance for the simulation of the environment at the LHC with many concurrent pp interactions ("pileup"). The ATLAS collaboration has provided new measurements of the inclusive charged particle multiplicity and its dependence on transverse momentum and pseudorapidity in special data sets with low LHC beam currents, recorded at center of mass energies of 8 TeV and 13 TeV. The measurements cover a wide spectrum using charged particle selections with minimum transverse momentum of both 100 MeV and 500 MeV and in various phase space regions of low and high charged particle multiplicities.
Comments on the 'minimum flux corona' concept
International Nuclear Information System (INIS)
Antiochos, S.K.; Underwood, J.H.
1978-01-01
Hearn's (1975) models of the energy balance and mass loss of stellar coronae, based on a 'minimum flux corona' concept, are critically examined. First, it is shown that the neglect of the relevant length scales for coronal temperature variation leads to an inconsistent computation of the total energy flux F. The stability arguments upon which the minimum flux concept is based are shown to be fallacious. Errors in the computation of the stellar wind contribution to the energy budget are identified. Finally, we criticize Hearn's (1977) suggestion that the model, with a value of the thermal conductivity modified by the magnetic field, can explain the difference between solar coronal holes and quiet coronal regions. (orig.)
Minimum wakefield achievable by waveguide damped cavity
International Nuclear Information System (INIS)
Lin, X.E.; Kroll, N.M.
1995-01-01
The authors use an equivalent circuit to model a waveguide damped cavity. Both exponentially damped and persistent (decaying as t^(-3/2)) components of the wakefield are derived from this model. The result shows that for a cavity with resonant frequency a fixed interval above waveguide cutoff, the persistent wakefield amplitude is inversely proportional to the external Q value of the damped mode. The competition of the two terms results in an optimal Q value, which gives a minimum wakefield as a function of the distance behind the source particle. The minimum wakefield increases when the resonant frequency approaches the waveguide cutoff. The results agree very well with computer simulation on a real cavity-waveguide system.
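The competition between the two wakefield components can be illustrated numerically. A toy model only — the functional forms follow the abstract (exponential damping at rate ∝ 1/Q, persistent tail ∝ t^(-3/2)/Q), but the constants `omega` and `c` and the evaluation distance are invented for the example.

```python
import numpy as np

def wake_envelope(Q, t, omega=1.0, c=0.05):
    """Toy wakefield envelope at time t behind the source particle:
    an exponentially damped mode plus a persistent t^(-3/2) tail
    whose amplitude scales as 1/Q."""
    return np.exp(-omega * t / (2 * Q)) + (c / Q) * t ** -1.5

t = 50.0
Qs = np.linspace(1, 100, 2000)
W = wake_envelope(Qs, t)
Q_opt = Qs[np.argmin(W)]        # interior optimum from the two competing terms
print(Q_opt)
```

Raising Q suppresses the persistent tail but weakens the exponential damping, so the envelope at a fixed distance is minimized at an intermediate Q, as the abstract describes.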
Protocol for the verification of minimum criteria
International Nuclear Information System (INIS)
Gaggiano, M.; Spiccia, P.; Gaetano Arnetta, P.
2014-01-01
This Protocol has been prepared with reference to the provisions of Article 8 of Legislative Decree No. 187 of May 26, 2000. Quality controls of radiological equipment fit within the larger 'quality assurance programme' and are intended to ensure the correct operation of the equipment and the maintenance of that state. Pursuing this objective guarantees that radiological equipment subjected to these controls also meets the minimum criteria of acceptability set out in Annex V of the aforementioned Legislative Decree, which establishes the conditions necessary for each piece of radiological equipment to perform the functions for which it was designed, built and used. The Protocol is established for the quality control of Cone Beam Computed Tomography equipment and serves as a reference document, in the sense that compliance with the stated tolerances also ensures that the minimum acceptability requirements are met, where applicable.
On the average luminosity of electron positron collider and positron-producing energy
International Nuclear Information System (INIS)
Xie Jialin
1985-01-01
In this paper, the average luminosity of a linac-injected electron positron collider is investigated from the positron-producing-energy point of view. When the energy of the linac injector is fixed below the operating energy of the storage ring, it is found that there exists a positron-producing energy that gives optimum average luminosity. Two cases have been studied: an ideal storage ring with no single-beam instability, and a practical storage ring with fast head-tail instability. The result indicates that there is a positron-producing energy corresponding to the minimum injection time, but this does not correspond to the optimum average luminosity for practical storage rings. For the Beijing Electron Positron Collider (BEPC), the positron-producing energy corresponding to the optimum average luminosity is about one tenth of the total injector energy.
Maximum gravitational redshift of white dwarfs
International Nuclear Information System (INIS)
Shapiro, S.L.; Teukolsky, S.A.
1976-01-01
The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ_2, ζ_3, and ζ_4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-05-07
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
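The core of the proposed estimator, comparing equal fractions of the longest survivors in each arm, can be sketched as follows. A minimal illustration of the idea described in the abstract; the function signature, the survivor fraction, and the synthetic data are all assumptions for the example, not the paper's specification.

```python
import numpy as np

def balanced_sace(y_t, surv_t, y_c, surv_c, frac=0.5):
    """Sketch of the balanced-SACE estimator: compare mean longitudinal
    outcomes between the same fraction of the longest-surviving patients
    in the treatment and control arms (frac is an illustrative choice)."""
    k_t = max(1, int(frac * len(surv_t)))
    k_c = max(1, int(frac * len(surv_c)))
    top_t = np.argsort(surv_t)[-k_t:]    # longest survivors, treatment arm
    top_c = np.argsort(surv_c)[-k_c:]    # longest survivors, control arm
    return y_t[top_t].mean() - y_c[top_c].mean()

rng = np.random.default_rng(4)
n = 2000
surv_t = rng.exponential(12.0, n)        # treatment survives somewhat longer
surv_c = rng.exponential(10.0, n)
y_t = 7.0 + rng.normal(0, 1, n)          # true outcome effect: +2
y_c = 5.0 + rng.normal(0, 1, n)
print(balanced_sace(y_t, surv_t, y_c, surv_c))
```

Unlike monotonicity-based SACE estimators, nothing here requires that treatment never shortens survival; the balancing is done purely by rank within each arm.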
Minimum Wage Laws and the Distribution of Employment.
Lang, Kevin
The desirability of raising the minimum wage long revolved around just one question: the effect of higher minimum wages on the overall level of employment. An even more critical effect of the minimum wage rests on the composition of employment--who gets the minimum wage job. An examination of employment in eating and drinking establishments…
Minimum intervention dentistry: periodontics and implant dentistry.
Darby, I B; Ngo, L
2013-06-01
This article will look at the role of minimum intervention dentistry in the management of periodontal disease. It will discuss the role of appropriate assessment, treatment and risk factors/indicators. In addition, the role of the patient and early intervention in the continuing care of dental implants will be discussed as well as the management of peri-implant disease. © 2013 Australian Dental Association.
Minimum quality standards and international trade
DEFF Research Database (Denmark)
Baltzer, Kenneth Thomas
2011-01-01
This paper investigates the impact of a non-discriminating minimum quality standard (MQS) on trade and welfare when the market is characterized by imperfect competition and asymmetric information. A simple partial equilibrium model of an international Cournot duopoly is presented in which a domes...... prefer different levels of regulation. As a result, international trade disputes are likely to arise even when regulation is non-discriminating....
''Reduced'' magnetohydrodynamics and minimum dissipation rates
International Nuclear Information System (INIS)
Montgomery, D.
1992-01-01
It is demonstrated that all solutions of the equations of ''reduced'' magnetohydrodynamics approach a uniform-current, zero-flow state for long times, given a constant wall electric field, uniform scalar viscosity and resistivity, and uniform mass density. This state is the state of minimum energy dissipation rate for these boundary conditions. No steady-state turbulence is possible. The result contrasts sharply with results for full three-dimensional magnetohydrodynamics before the reduction occurs
Minimum K_2,3-saturated Graphs
Chen, Ya-Chen
2010-01-01
A graph is K_{2,3}-saturated if it has no subgraph isomorphic to K_{2,3}, but does contain a K_{2,3} after the addition of any new edge. We prove that the minimum number of edges in a K_{2,3}-saturated graph on n >= 5 vertices is sat(n, K_{2,3}) = 2n - 3.
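The stated value sat(5, K_{2,3}) = 2·5 − 3 = 7 is small enough to verify by brute force over all graphs on 5 vertices, a sketch using the fact that a graph contains K_{2,3} as a subgraph exactly when some pair of vertices has at least three common neighbours.

```python
from itertools import combinations

def has_k23(adj, n):
    """True iff some vertex pair has >= 3 common neighbours, i.e. the graph contains K_{2,3}."""
    return any(len(adj[a] & adj[b]) >= 3 for a, b in combinations(range(n), 2))

def is_k23_saturated(edges, n):
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    if has_k23(adj, n):            # must itself be K_{2,3}-free
        return False
    for a, b in combinations(range(n), 2):   # adding any new edge must create K_{2,3}
        if b in adj[a]:
            continue
        adj[a].add(b); adj[b].add(a)
        created = has_k23(adj, n)
        adj[a].remove(b); adj[b].remove(a)
        if not created:
            return False
    return True

# Minimum edge count over all K_{2,3}-saturated graphs on 5 vertices.
n = 5
all_pairs = list(combinations(range(n), 2))
best = min(len(subset)
           for r in range(len(all_pairs) + 1)
           for subset in combinations(all_pairs, r)
           if is_k23_saturated(subset, n))
print(best)  # 7, matching sat(5, K_{2,3}) = 2n - 3
```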
Minimum degree and density of binary sequences
DEFF Research Database (Denmark)
Brandt, Stephan; Müttel, J.; Rautenbach, D.
2010-01-01
For d,k∈ℕ with k ≤ 2d, let g(d,k) denote the infimum density of binary sequences (x_i)_{i∈ℤ} ∈ {0,1}^ℤ which satisfy the minimum degree condition σ(x,i) ≥ k for all i∈ℤ with x_i = 1. We reduce the problem of computing g(d,k) to a combinatorial problem related to the generalized k-girth of a graph G which...
A simple maximum power point tracker for thermoelectric generators
International Nuclear Information System (INIS)
Paraskevas, Alexandros; Koutroulis, Eftichios
2016-01-01
Highlights:
• A Maximum Power Point Tracking (MPPT) method for thermoelectric generators is proposed.
• A power converter is controlled to operate on a pre-programmed locus.
• The proposed MPPT technique has the advantage of operational and design simplicity.
• The experimental average deviation from the MPP power of the TEG source is 1.87%.
Abstract: ThermoElectric Generators (TEGs) are capable of harvesting ambient thermal energy for power-supplying sensors, actuators, biomedical devices etc. in the μW up to several-hundred-Watt range. In this paper, a Maximum Power Point Tracking (MPPT) method for TEG elements is proposed, which is based on controlling a power converter such that it operates on a pre-programmed locus of operating points close to the MPPs of the power–voltage curves of the TEG power source. Compared to the past-proposed MPPT methods for TEGs, the technique presented in this paper has the advantage of operational and design simplicity. Thus, its implementation using off-the-shelf microelectronic components with low-power consumption characteristics is enabled, without being required to employ specialized integrated circuits or signal processing units of high development cost. Experimental results are presented, which demonstrate that for MPP power levels of the TEG source in the range of 1–17 mW, the average deviation of the power produced by the proposed system from the MPP power of the TEG source is 1.87%.
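The pre-programmed-locus idea can be sketched for an idealized TEG. A sketch under assumptions: the TEG is modeled as a Thévenin source (open-circuit voltage Voc, internal resistance R_int), for which output power P(V) = V(Voc − V)/R_int peaks at V = Voc/2, so the textbook locus is simply half the open-circuit voltage. All numbers are hypothetical and this is not the paper's measured locus.

```python
def teg_power(v, voc, r_int):
    """Power delivered by a Thevenin-model TEG at terminal voltage v."""
    return v * (voc - v) / r_int

def locus_reference(voc):
    """Pre-programmed operating locus: for the linear TEG model the
    maximum power point sits at half the open-circuit voltage."""
    return voc / 2.0

# Sweep hypothetical operating conditions and measure how far the
# locus-based operating point falls from the true MPP.
worst_dev = 0.0
for voc in (1.0, 2.0, 4.0):   # open-circuit voltage varies with temperature difference
    r_int = 5.0                # internal resistance, ohms (assumed)
    p_locus = teg_power(locus_reference(voc), voc, r_int)
    p_mpp = max(teg_power(voc * i / 1000, voc, r_int) for i in range(1001))
    worst_dev = max(worst_dev, abs(p_mpp - p_locus) / p_mpp)
print(worst_dev)  # 0 for the ideal model: the locus hits the MPP exactly
```

On real hardware the locus only approximates the MPPs, which is why the paper reports a small (1.87%) average deviation rather than zero.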
Design for minimum energy in interstellar communication
Messerschmitt, David G.
2015-02-01
Microwave digital communication at interstellar distances is the foundation of communication of information-bearing signals with extraterrestrial civilizations (SETI and METI). Large distances demand large transmitted power and/or large antennas, while the propagation is transparent over a wide bandwidth. Recognizing a fundamental tradeoff, reduced energy delivered to the receiver at the expense of wide bandwidth (the opposite of terrestrial objectives) is advantageous. Wide bandwidth also results in simpler design and implementation, allowing circumvention of dispersion and scattering arising in the interstellar medium and motion effects and obviating any related processing. The minimum energy delivered to the receiver per bit of information is determined by the cosmic microwave background alone. By mapping a single bit onto a carrier burst, the Morse code invented for the telegraph in 1836 comes closer to this minimum energy than approaches used in modern terrestrial radio. Rather than the terrestrial approach, in which adding phases and amplitudes increases information capacity while minimizing bandwidth, adding multiple time-frequency locations for carrier bursts increases capacity while minimizing energy per information bit. The resulting location code is simple and yet can approach the minimum energy as bandwidth is expanded. It is consistent with easy discovery, since carrier bursts are energetic and straightforward modifications to post-detection pattern recognition can identify burst patterns. Time and frequency coherence constraints leading to simple signal discovery are addressed, and observations of the interstellar medium by transmitter and receiver constrain the burst parameters and limit the search scope.
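The energy accounting behind a location code can be illustrated with a toy calculation (arbitrary units; the true limit also involves the cosmic-background noise density, which this sketch ignores): a single burst of fixed energy placed in one of M time-frequency slots conveys log2(M) bits, so energy per bit falls as the slot count, and hence the bandwidth, grows.

```python
import math

def energy_per_bit(burst_energy, num_locations):
    """Energy per information bit when one carrier burst selects one of
    `num_locations` time-frequency slots (a simple location code)."""
    return burst_energy / math.log2(num_locations)

burst_energy = 1.0  # arbitrary units: one detectable carrier burst
for m in (2, 16, 1024):
    print(m, energy_per_bit(burst_energy, m))  # falls as 1/log2(M)
```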
Maximum entropy analysis of EGRET data
DEFF Research Database (Denmark)
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method [ma96] in a search for point sources in excess of a model for the background radiation (e.g. [hu97]). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky....
The Maximum Resource Bin Packing Problem
DEFF Research Database (Denmark)
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
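The contrast between the two orderings named in the abstract can be sketched with plain First-Fit on a toy instance (items and capacity are hypothetical): for the maximum resource variant, where more bins is better, the increasing order "wastes" space and so performs better.

```python
def first_fit(items, capacity=1.0):
    """Place each item in the first bin it fits into; open a new bin otherwise."""
    bins = []
    for item in items:
        for i, load in enumerate(bins):
            if load + item <= capacity + 1e-9:
                bins[i] = load + item
                break
        else:
            bins.append(item)
    return len(bins)

items = [0.6, 0.5, 0.4, 0.3]
ffd = first_fit(sorted(items, reverse=True))  # First-Fit-Decreasing
ffi = first_fit(sorted(items))                # First-Fit-Increasing
print(ffd, ffi)  # 2 3: increasing order leaves gaps, so it opens more bins
```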
Shower maximum detector for SDC calorimetry
International Nuclear Information System (INIS)
Ernwein, J.
1994-01-01
A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector is described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower max detector in the test beam are shown. (authors). 4 refs., 5 figs
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Density estimation by maximum quantum entropy
International Nuclear Information System (INIS)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
Bounds and maximum principles for the solution of the linear transport equation
International Nuclear Information System (INIS)
Larsen, E.W.
1981-01-01
Pointwise bounds are derived for the solution of time-independent linear transport problems with surface sources in convex spatial domains. Under specified conditions, upper bounds are derived which, as a function of position, decrease with distance from the boundary. Also, sufficient conditions are obtained for the existence of maximum and minimum principles, and a counterexample is given which shows that such principles do not always exist
Fallahpour, M.; Case, J. T.; Kharkovsky, S.; Zoughi, R.
2010-01-01
Microwave imaging techniques, an integral component of nondestructive testing and evaluation (NDTE), have received significant attention in the past decade. These techniques have included the implementation of synthetic aperture focusing (SAF) algorithms for obtaining high spatial resolution images. The next important step in these developments is the implementation of 3-D holographic imaging algorithms. These are well-known imaging techniques requiring a swept-frequency (i.e., wideband) measurement and, unlike SAF, which is a single-frequency technique, are not easily performed on a real-time basis. This is due to the fact that a significant number of data points (in the frequency domain) must be obtained within the frequency band of interest. This not only makes for a complex imaging system design, but also significantly increases the image-production time. Consequently, in an attempt to reduce the measurement time and system complexity, an investigation was conducted to determine the minimum required number of frequency samples needed to image a specific object while preserving a desired maximum measurement range and range resolution. To this end the 3-D holographic algorithm was modified to use properly interpolated frequency data. Measurements of the complex reflection coefficient for several samples were conducted using a swept-frequency approach. Subsequently, holographic images were generated using data containing a relatively large number of frequency samples and were compared with images generated from reduced data sets. Quantitative metrics such as average, contrast, and signal-to-noise ratio were used to evaluate the quality of images generated using reduced data sets. Furthermore, this approach was applied to both weakly- and strongly-scattering indications. This paper presents the methods used and the results of this investigation.
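The trade-off being investigated can be illustrated with the standard stepped-frequency relations (a sketch under textbook assumptions, not necessarily the authors' exact criteria): the frequency step Δf bounds the unambiguous range R_max = c/(2Δf), the total bandwidth B sets the range resolution δR = c/(2B), so the minimum number of frequency samples is roughly N = B/Δf + 1 = R_max/δR + 1. The example values are hypothetical.

```python
C = 299_792_458.0  # speed of light, m/s

def min_frequency_samples(max_range_m, range_resolution_m):
    """Minimum frequency samples for a stepped-frequency sweep that preserves
    an unambiguous range of max_range_m at resolution range_resolution_m."""
    bandwidth = C / (2.0 * range_resolution_m)   # B  = c / (2 * deltaR)
    freq_step = C / (2.0 * max_range_m)          # df = c / (2 * Rmax)
    return int(bandwidth / freq_step) + 1        # N  = Rmax / deltaR + 1

# Example: a 0.3 m measurement range at 5 mm range resolution.
print(min_frequency_samples(0.3, 0.005))  # 61
```

The relation makes the paper's motivation concrete: relaxing either the range or the resolution requirement shrinks the number of frequency points that must be measured.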
Stockwell, Tim; Zhao, Jinhui; Sherk, Adam; Callaghan, Russell C; Macdonald, Scott; Gatley, Jodi
2017-07-01
Saskatchewan's introduction in April 2010 of minimum prices graded by alcohol strength led to an average minimum price increase of 9.1% per Canadian standard drink (=13.45 g ethanol). This increase was shown to be associated with reduced consumption and switching to lower alcohol content beverages. Police also informally reported marked reductions in night-time alcohol-related crime. This study aims to assess the impacts of changes to Saskatchewan's minimum alcohol-pricing regulations between 2008 and 2012 on selected crime events often related to alcohol use. Data were obtained from Canada's Uniform Crime Reporting Survey. Auto-regressive integrated moving average time series models were used to test immediate and lagged associations between minimum price increases and rates of night-time and police identified alcohol-related crimes. Controls were included for simultaneous crime rates in the neighbouring province of Alberta, economic variables, linear trend, seasonality and autoregressive and/or moving-average effects. The introduction of increased minimum-alcohol prices was associated with an abrupt decrease in night-time alcohol-related traffic offences for men (-8.0%, P prices may contribute to reductions in alcohol-related traffic-related and violent crimes perpetrated by men. Observed lagged effects for violent incidents may be due to a delay in bars passing on increased prices to their customers, perhaps because of inventory stockpiling. [Stockwell T, Zhao J, Sherk A, Callaghan RC, Macdonald S, Gatley J. Assessing the impacts of Saskatchewan's minimum alcohol pricing regulations on alcohol-related crime. Drug Alcohol Rev 2017;36:492-501]. © 2016 Australasian Professional Society on Alcohol and other Drugs.
Energy Technology Data Exchange (ETDEWEB)
Dutton, Spencer M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fisk, William J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2015-01-01
For a stand-alone retail building, a primary school, and a secondary school in each of the 16 California climate zones, the EnergyPlus building energy simulation model was used to estimate how minimum mechanical ventilation rates (VRs) affect energy use and indoor air concentrations of an indoor-generated contaminant. The modeling indicates large changes in heating energy use, but only moderate changes in total building energy use, as minimum VRs in the retail building are changed. For example, predicted state-wide heating energy consumption in the retail building decreases by more than 50% and total building energy consumption decreases by approximately 10% as the minimum VR decreases from the Title 24 requirement to no mechanical ventilation. The primary and secondary schools have notably higher internal heat gains than the retail building models, resulting in significantly reduced demand for heating. The school heating energy use was correspondingly less sensitive to changes in the minimum VR. The modeling indicates that minimum VRs influence HVAC energy and total energy use in schools by only a few percent. For both the retail building and the school buildings, minimum VRs substantially affected the predicted annual-average indoor concentrations of an indoor-generated contaminant, with larger effects in schools. The shapes of the curves relating contaminant concentrations to VRs illustrate the importance of avoiding particularly low VRs.
Deformed special relativity with an energy barrier of a minimum speed
International Nuclear Information System (INIS)
Nassif, Claudio
2011-01-01
Full text: This research aims to introduce a new principle of symmetry in the flat space-time by means of the elimination of the classical idea of rest, and by including a universal minimum limit of speed in the quantum world. Such a limit, unattainable by the particles, represents a preferred inertial reference frame associated with a universal background field that breaks Lorentz symmetry. So there emerges a new relativistic dynamics where a minimum speed forms an inferior energy barrier. One of the interesting implications of the existence of such a minimum speed is that it prevents the absolute zero temperature for an ultracold gas, according to the third law of thermodynamics. So we will be able to provide a fundamental dynamical explanation for the third law by means of a connection between such a phenomenological law and the new relativistic dynamics with a minimum speed. In other words, our relevant investigation concerns the problem of the absolute zero temperature in the thermodynamics of an ideal gas. We have made a connection between the third law of thermodynamics and the new dynamics with a minimum speed by means of a relation between the absolute zero temperature (T = 0 K) and a minimum average speed (V) for a gas with N particles (molecules or atoms). Since T = 0 K is thermodynamically unattainable, we have shown this is due to the impossibility of reaching V from the new dynamics standpoint. (author)
Nonsymmetric entropy and maximum nonsymmetric entropy principle
International Nuclear Information System (INIS)
Liu Chengshi
2009-01-01
Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived naturally from this principle. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis entropy, in deriving power laws.
Maximum speed of dewetting on a fiber
Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus
2011-01-01
A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed
Maximum potential preventive effect of hip protectors
van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.
2007-01-01
OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who
Maximum gain of Yagi-Uda arrays
DEFF Research Database (Denmark)
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....
Correlation between maximum dry density and cohesion
African Journals Online (AJOL)
HOD
represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
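The closing estimate v_h ~ T_BBN^2 / (M_pl y_e^5) can be sanity-checked numerically. The rounded input values below are assumptions (T_BBN taken as ~1 MeV, y_e ≈ √2 m_e / 246 GeV); the point is only that the combination lands at a few hundred GeV.

```python
# Rough numerical check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
# Assumed round values; energies in GeV, y_e dimensionless.
T_BBN = 1e-3        # ~1 MeV, onset of Big Bang nucleosynthesis
M_pl = 1.22e19      # Planck mass
y_e = 2.9e-6        # electron Yukawa coupling, ~sqrt(2)*m_e/(246 GeV)

v_h = T_BBN**2 / (M_pl * y_e**5)
print(round(v_h))   # a few hundred GeV, consistent with O(300 GeV)
```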