International Nuclear Information System (INIS)
Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Xie, Wu; Yan, Xiaozhen; Xu, Zhen
2016-01-01
Band selection is considered an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, MRMR difference and MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (the neighborhood dependency measure based algorithm, the genetic algorithm and the uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forests (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvements in band selection and classification accuracy. (paper)
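The forward greedy search with the MRMR difference measure can be sketched as follows. This is a minimal illustration assuming the mutual-information scores are precomputed; the toy relevance/redundancy values and the `mrmr_difference` name are invented, and the quotient variant would divide relevance by the mean redundancy instead of subtracting it.

```python
import numpy as np

def mrmr_difference(relevance, redundancy, k):
    """Forward greedy mRMR band selection (difference criterion).

    relevance[i]    : mutual information between band i and the class label.
    redundancy[i,j] : mutual information between bands i and j.
    Returns the indices of the k selected bands.
    """
    n = len(relevance)
    selected = [int(np.argmax(relevance))]  # start with the most relevant band
    while len(selected) < k:
        best, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # relevance minus mean redundancy with the already-selected bands
            score = relevance[i] - redundancy[i, selected].mean()
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# Toy example: band 1 is highly relevant but redundant with band 0,
# so the weaker, non-redundant band 2 is selected second.
rel = np.array([0.9, 0.85, 0.4])
red = np.array([[1.0, 0.8, 0.1],
                [0.8, 1.0, 0.1],
                [0.1, 0.1, 1.0]])
print(mrmr_difference(rel, red, 2))  # [0, 2]
```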
Directory of Open Access Journals (Sweden)
Xin Ma
2015-01-01
The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved its best performance (86.62% accuracy and a 0.737 Matthews correlation coefficient). The high prediction accuracy suggests that our method can be a useful approach to identify RNA-binding proteins from sequence information.
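The IFS step (train on the top-k ranked features for growing k and keep the best-scoring k) can be sketched generically. The evaluator, feature names and scores below are hypothetical stand-ins for the paper's cross-validated random-forest accuracy.

```python
def incremental_feature_selection(ranked_features, evaluate):
    """Incremental feature selection (IFS).

    ranked_features : features ordered by an mRMR-style ranking.
    evaluate        : callable mapping a feature subset to a score
                      (e.g. cross-validated accuracy or MCC).
    Returns (best_subset, best_score).
    """
    best_subset, best_score = [], float("-inf")
    for k in range(1, len(ranked_features) + 1):
        subset = ranked_features[:k]
        score = evaluate(subset)
        if score > best_score:
            best_subset, best_score = list(subset), score
    return best_subset, best_score

# Toy evaluator: the score peaks when exactly the top two features are used.
scores = {1: 0.70, 2: 0.86, 3: 0.81, 4: 0.79}
subset, score = incremental_feature_selection(
    ["BP", "NBP", "EIPP", "triad"], lambda s: scores[len(s)])
print(subset, score)  # ['BP', 'NBP'] 0.86
```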
Wang, ShaoPeng; Zhang, Yu-Hang; Lu, Jing; Cui, Weiren; Hu, Jerry; Cai, Yu-Dong
2016-01-01
The development of biochemistry and molecular biology has revealed an increasingly important role of compounds in several biological processes. Like the aptamer-protein interaction, the aptamer-compound interaction is attracting increasing attention. However, it is time-consuming to select proper aptamers against compounds using traditional experimental methods, such as systematic evolution of ligands by exponential enrichment (SELEX). Thus, there is an urgent need for effective computational methods for searching for effective aptamers against compounds. This study attempted to extract important features for aptamer-compound interactions using feature selection methods, such as Maximum Relevance Minimum Redundancy, as well as incremental feature selection. Each aptamer-compound pair was represented by properties derived from the aptamer and the compound, including frequencies of single nucleotides and dinucleotides for the aptamer, as well as the constitutional, electrostatic, quantum-chemical, and space conformational descriptors of the compound. As a result, some important features were obtained. To confirm the importance of the obtained features, we further discussed the associations between them and aptamer-compound interactions. Simultaneously, an optimal prediction model based on the nearest neighbor algorithm was built to identify aptamer-compound interactions; it has the potential to be a useful tool for the identification of novel aptamer-compound interactions. The program is available upon request.
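The single-nucleotide and dinucleotide frequency features mentioned for the aptamer side can be computed as follows. This is a sketch: the function name is ours, and RNA bases A, C, G, U are assumed.

```python
from collections import Counter
from itertools import product

def aptamer_features(seq):
    """Frequencies of single nucleotides and dinucleotides for an aptamer.

    Returns a 4 + 16 = 20-dimensional feature vector (bases in A, C, G, U order).
    """
    bases = "ACGU"
    mono = Counter(seq)
    di = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    n, m = len(seq), max(len(seq) - 1, 1)
    vec = [mono[b] / n for b in bases]                       # 4 mononucleotide freqs
    vec += [di[a + b] / m for a, b in product(bases, repeat=2)]  # 16 dinucleotide freqs
    return vec

feats = aptamer_features("ACGUACGU")
print(feats[:4])  # each base occurs twice in eight positions -> [0.25, 0.25, 0.25, 0.25]
```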
Maximum and minimum entropy states yielding local continuity bounds
Hanson, Eric P.; Datta, Nilanjana
2018-04-01
Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρ*_ε(σ) [respectively, ρ_{*,ε}(σ)] which has the maximum (respectively, minimum) entropy among all states lying in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρ*_ε(σ) and ρ_{*,ε}(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies, including the Rényi entropy and the min- and max-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this result, we first derive a more general one which may be of independent interest: a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function over an ε-ball around a given state σ. Examples of such functions include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.
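The Audenaert-Fannes continuity bound that the paper strengthens can be checked numerically for a pair of commuting qubit states. This is a sketch: the example states are arbitrary, and ε is taken to be the trace distance (for diagonal states, half the l1 distance of the spectra).

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0*log(0) = 0 by convention
    return float(-(evals * np.log2(evals)).sum())

def audenaert_fannes_bound(eps, d):
    """eps*log2(d-1) + h(eps), valid for trace distance eps <= 1 - 1/d,
    where h is the binary entropy."""
    h = 0.0 if eps in (0.0, 1.0) else -eps * np.log2(eps) - (1 - eps) * np.log2(1 - eps)
    return eps * np.log2(d - 1) + h

rho = np.diag([0.5, 0.5])      # maximally mixed qubit
sigma = np.diag([0.6, 0.4])
eps = 0.5 * np.abs(np.diag(rho) - np.diag(sigma)).sum()   # trace distance (diagonal case)
gap = abs(von_neumann_entropy(rho) - von_neumann_entropy(sigma))
print(gap <= audenaert_fannes_bound(eps, 2))  # True
```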
Future changes over the Himalayas: Maximum and minimum temperature
Dimri, A. P.; Kumar, D.; Choudhary, A.; Maharana, P.
2018-03-01
An assessment of the projections of minimum and maximum air temperature over the Indian Himalayan region (IHR) from the COordinated Regional Climate Downscaling EXperiment-South Asia (hereafter CORDEX-SA) regional climate model (RCM) experiments has been carried out under two different Representative Concentration Pathway (RCP) scenarios. The major aim of this study is to assess the probable future changes in the minimum and maximum temperature climatology and its long-term trend under different RCPs, along with the elevation-dependent warming over the IHR. A number of statistical analyses, such as changes in mean climatology, long-term spatial trends and probability distribution functions, are carried out to detect signals of climate change. The study also tries to quantify the uncertainties associated with the different model experiments and their ensemble in space, time and season. The model experiments and their ensemble show a prominent cold bias over the Himalayas for the present climate. However, a statistically significant higher warming rate (0.23-0.52 °C/decade) for both minimum and maximum air temperature (Tmin and Tmax) is observed for all seasons under both RCPs. The rate of warming intensifies with the increase in radiative forcing across the range of greenhouse gas scenarios from RCP4.5 to RCP8.5. In addition, a wide range of spatial variability and disagreement in the magnitude of the trend between different models describes the uncertainty associated with the model projections and scenarios. The projected rate of increase of Tmin may destabilize snow formation at higher altitudes in the northern and western parts of the Himalayan region, while the rising trend of Tmax over the southern flank may effectively melt more snow cover. The combined effect of the rising trends of Tmin and Tmax may pose a potential threat to the glacial deposits. The overall trend of the diurnal temperature range (DTR) is increasing across the entire area with
CO2 maximum in the oxygen minimum zone (OMZ)
Directory of Open Access Journals (Sweden)
V. Garçon
2011-02-01
Oxygen minimum zones (OMZs), known as suboxic layers, which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to contribute significantly to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the oceanic CO2 sources and sinks budget, CO2 being the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure associated locally with the Chilean OMZ and globally with the main, most intense OMZs in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000–2002) and a monthly monitoring (2000–2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg−1, up to 2350 μmol kg−1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence
CO2 maximum in the oxygen minimum zone (OMZ)
Paulmier, A.; Ruiz-Pino, D.; Garçon, V.
2011-02-01
Oxygen minimum zones (OMZs), known as suboxic layers, which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to contribute significantly to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the oceanic CO2 sources and sinks budget, CO2 being the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure associated locally with the Chilean OMZ and globally with the main, most intense OMZs in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000-2002) and a monthly monitoring (2000-2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg-1, up to 2350 μmol kg-1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence), where the CMZ-OMZ core originates. The "carbon deficit" in the CMZ core would be mainly compensated locally at the
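The "carbon deficit" computation can be sketched as follows. The C/O molar ratio and the water-mass values below are illustrative assumptions, since the abstract only states that "classical C/O molar ratios" are used.

```python
def carbon_deficit(dic_omz, dic_ref, o2_omz, o2_ref, c_to_o=106.0 / 138.0):
    """Fractional shortfall of the observed DIC increase in the OMZ core
    relative to the increase expected from O2 consumption alone.

    c_to_o is a classical remineralization C/O molar ratio (an assumed
    Redfield-type value; the paper's exact ratio is not given here).
    Concentrations in micromol/kg.
    """
    expected = (o2_ref - o2_omz) * c_to_o   # DIC rise implied by the O2 loss
    observed = dic_omz - dic_ref            # DIC rise actually measured
    return (expected - observed) / expected

# Illustrative (made-up) values yielding a deficit of roughly 10%,
# as reported for the CMZ core.
print(round(carbon_deficit(2300.0, 2175.0, 20.0, 200.0), 2))
```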
Maximum effort in the minimum-effort game
Czech Academy of Sciences Publication Activity Database
Engelmann, Dirk; Normann, H.-T.
2010-01-01
Roč. 13, č. 3 (2010), s. 249-259 ISSN 1386-4157 Institutional research plan: CEZ:AV0Z70850503 Keywords : minimum-effort game * coordination game * experiments * social capital Subject RIV: AH - Economics Impact factor: 1.868, year: 2010
CO2 maximum in the oxygen minimum zone (OMZ)
Paulmier, Aurélien; Ruiz-Pino, D.; Garcon, V.
2011-01-01
Oxygen minimum zones (OMZs), known as suboxic layers, which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to contribute significantly to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the oceanic CO2 sources and sinks budget, CO2 being the main GHG, still remains to be established. ...
50 CFR 259.34 - Minimum and maximum deposits; maximum time to deposit.
2010-10-01
... B objective. A time longer than 10 years, either by original scheduling or by subsequent extension... OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES CAPITAL CONSTRUCTION FUND...) Minimum annual deposit. The minimum annual (based on each party's taxable year) deposit required by the...
Minimum disturbance rewards with maximum possible classical correlations
Energy Technology Data Exchange (ETDEWEB)
Pande, Varad R., E-mail: varad_pande@yahoo.in [Department of Physics, Indian Institute of Science Education and Research Pune, 411008 (India); Shaji, Anil [School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, 695016 (India)
2017-07-12
Weak measurements done on a subsystem of a bipartite system having both classical and nonclassical correlations between its components can potentially reveal information about the other subsystem with minimal disturbance to the overall state. We use weak quantum discord and the fidelity between the initial bipartite state and the state after measurement to construct a cost function that accounts for both the amount of information revealed about the other subsystem and the disturbance to the overall state. We investigate the behaviour of the cost function for families of two-qubit states and show that there is an optimal choice that can be made for the strength of the weak measurement. - Highlights: • Weak measurements done on one part of a bipartite system with controlled strength. • Weak quantum discord & fidelity used to quantify all correlations and disturbance. • Cost function to probe the tradeoff between extracted correlations and disturbance. • Optimal measurement strength for maximum extraction of classical correlations.
Local Times of Galactic Cosmic Ray Intensity Maximum and Minimum in the Diurnal Variation
Directory of Open Access Journals (Sweden)
Su Yeon Oh
2006-06-01
The diurnal variation of the galactic cosmic ray (GCR) flux intensity observed by ground Neutron Monitors (NMs) shows a sinusoidal pattern with an amplitude of about 1-2% of the daily mean. We carried out a statistical study on tendencies of the local times of the GCR intensity daily maximum and minimum. To test the influences of solar activity and location (cut-off rigidity) on the distribution of the local times of maximum and minimum GCR intensity, we examined the data of 1996 (solar minimum) and 2000 (solar maximum) at the low-latitude Haleakala (latitude: 20.72°N, cut-off rigidity: 12.91 GV) and the high-latitude Oulu (latitude: 65.05°N, cut-off rigidity: 0.81 GV) NM stations. The most frequent local times of the GCR intensity daily maximum and minimum come about 2-3 hours later in the solar activity maximum year 2000 than in the solar activity minimum year 1996. The Oulu NM station, whose cut-off rigidity is smaller, has its most frequent local times of GCR intensity maximum and minimum later by 2-3 hours than those of the Haleakala station. This feature is more evident at solar maximum. The phase of the daily variation in GCR is dependent upon the interplanetary magnetic field, which varies with solar activity, and upon the cut-off rigidity, which varies with geographic latitude.
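One way to extract the local times of the daily maximum and minimum is to fit a first-harmonic (24-h) sinusoid to hourly count rates and read off its phase. A sketch on synthetic data; the function name and the made-up count series are ours.

```python
import numpy as np

def diurnal_phase(hourly_counts):
    """Local times (hours) of the maximum and minimum of the best-fit
    24-h sinusoid through 24 hourly count-rate values (first-harmonic fit)."""
    t = np.arange(24.0)
    w = 2 * np.pi / 24.0
    # least-squares fit: counts ~ a*cos(w t) + b*sin(w t) + c
    A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    a, b, c = np.linalg.lstsq(A, hourly_counts, rcond=None)[0]
    t_max = (np.arctan2(b, a) / w) % 24.0   # phase of the maximum
    return t_max, (t_max + 12.0) % 24.0     # minimum is half a cycle later

# Synthetic diurnal wave with ~1.5% amplitude peaking at 15:00 local time.
t = np.arange(24.0)
counts = 100.0 * (1 + 0.015 * np.cos(2 * np.pi * (t - 15.0) / 24.0))
tmax, tmin = diurnal_phase(counts)
print(round(tmax), round(tmin))  # 15 3
```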
The Maximums and Minimums of a Polynomial, or Maximizing Profits and Minimizing Aircraft Losses.
Groves, Brenton R.
1984-01-01
Plotting a polynomial over the range of real numbers when its derivative contains complex roots is discussed. The polynomials are graphed by calculating the minimums, maximums, and zeros of the function. (MNS)
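The procedure can be sketched numerically: differentiate, keep the real critical points (discarding complex roots of the derivative), and classify each with the second derivative. The cubic below is a hypothetical example.

```python
import numpy as np

# p(x) = x^3 - 3x: p'(x) = 3x^2 - 3 happens to have real roots here, but the
# same filter simply drops complex critical points when they occur.
p = np.poly1d([1, 0, -3, 0])
crit = p.deriv().roots
real_crit = sorted(round(float(r.real), 6) for r in crit if abs(r.imag) < 1e-9)
kinds = [(x, "min" if p.deriv(2)(x) > 0 else "max") for x in real_crit]
print(kinds)  # [(-1.0, 'max'), (1.0, 'min')]
```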
Maximum and minimum allowable operating pressure; low-pressure distribution systems.
2010-10-01
... distribution systems. (a) No person may operate a low-pressure distribution system at a pressure high enough to...) No person may operate a low-pressure distribution system at a pressure lower than the minimum...
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of the monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model has been chosen by examining the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for both the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method, with standard errors of the residuals. The adequacy of the selected model is assessed using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals, and using normality diagnostic checking through the kernel and normal density curves of the histogram and the Q-Q plot. Finally, forecasts of the monthly maximum and minimum temperature patterns of India for the next 3 years are obtained from the selected model.
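The residual diagnostics mentioned above can be illustrated with a hand-rolled sample ACF and Ljung-Box statistic. A sketch on synthetic series; statsmodels provides production versions of both.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelations r_1 .. r_max_lag."""
    x = np.asarray(x, float) - np.mean(x)
    denom = np.sum(x * x)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, max_lag + 1)])

def ljung_box_q(x, max_lag):
    """Ljung-Box statistic Q = n(n+2) * sum_k r_k^2 / (n-k); large Q means
    the residuals still carry autocorrelation (model inadequate)."""
    n = len(x)
    r = acf(x, max_lag)
    return n * (n + 2) * np.sum(r**2 / (n - np.arange(1, max_lag + 1)))

rng = np.random.default_rng(0)
white = rng.standard_normal(500)                         # adequate-model residuals
seasonal = np.sin(2 * np.pi * np.arange(500) / 12) + 0.1 * rng.standard_normal(500)
print(ljung_box_q(white, 12) < ljung_box_q(seasonal, 12))  # True
```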
Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems
International Nuclear Information System (INIS)
Stipanović, Dušan M.; Tomlin, Claire J.; Leitmann, George
2012-01-01
In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
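The paper's particular approximations are not reproduced here, but the log-sum-exp family is one standard set of differentiable approximations with the stated property: they converge monotonically to the minimum or maximum as the sharpness parameter p grows.

```python
import numpy as np

def logsumexp(v):
    """Numerically stable log(sum(exp(v)))."""
    m = np.max(v)
    return m + np.log(np.sum(np.exp(v - m)))

def soft_min(x, p):
    """-(1/p)*logsumexp(-p*x): smooth lower bound increasing to min(x) as p grows."""
    return -logsumexp(-p * np.asarray(x, float)) / p

def soft_max(x, p):
    """(1/p)*logsumexp(p*x): smooth upper bound decreasing to max(x) as p grows."""
    return logsumexp(p * np.asarray(x, float)) / p

x = [3.0, 1.0, 4.0, 1.5]
approx = [soft_min(x, p) for p in (1, 10, 100)]
print(all(a < b for a, b in zip(approx, approx[1:])))  # monotone increase: True
print(abs(soft_min(x, 100) - min(x)) < 1e-2)           # True
print(soft_max(x, 100) >= max(x))                       # True
```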
Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems
Energy Technology Data Exchange (ETDEWEB)
Stipanovic, Dusan M., E-mail: dusan@illinois.edu [University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, Department of Industrial and Enterprise Systems Engineering (United States); Tomlin, Claire J., E-mail: tomlin@eecs.berkeley.edu [University of California at Berkeley, Department of Electrical Engineering and Computer Science (United States); Leitmann, George, E-mail: gleit@berkeley.edu [University of California at Berkeley, College of Engineering (United States)
2012-12-15
In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
78 FR 22798 - Hazardous Materials: Revision of Maximum and Minimum Civil Penalties
2013-04-17
.... 5101 et seq.). Section 5123(a) of that law provides civil penalties for knowing violations of Federal... 107--Guidelines for Civil Penalties * * * * * IV. * * * C. * * * Under the Federal hazmat law, 49 U.S... Maximum and Minimum Civil Penalties AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA...
Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds
Energy Technology Data Exchange (ETDEWEB)
Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)
2016-03-15
The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship between the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. The lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For four simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all of the chemical reactions, but the minimum polarizability and minimum electrophilicity principles are not valid for all of them. We also propose simple methods to compute the percentage ionic character and internuclear distances of ionic compounds. Comparative studies with experimental data sets reveal that the proposed methods for computing the percentage ionic character and internuclear distances of ionic compounds are valid.
Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds
International Nuclear Information System (INIS)
Kaya, Savaş; Kaya, Cemal; Islam, Nazmul
2016-01-01
The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship between the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. The lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For four simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all of the chemical reactions, but the minimum polarizability and minimum electrophilicity principles are not valid for all of them. We also propose simple methods to compute the percentage ionic character and internuclear distances of ionic compounds. Comparative studies with experimental data sets reveal that the proposed methods for computing the percentage ionic character and internuclear distances of ionic compounds are valid.
Challenging Minimum Deterrence: Articulating the Contemporary Relevance of Nuclear Weapons
2016-07-13
elements of the US nuclear force gives this debate added meaning and urgency. One alternative currently under discussion is minimum deterrence. This...in 2013 illustrates this concept well.55 In this sense, an escalation-deterrence force would supply the tools necessary for context-specific...Shaub, "Remembrance of Things Past," 78–79, 82. 16. Ibid., 80. For further elaboration of this argument, see James Forsyth's "The Common Sense of
Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model
Yang, Yuefang; Gan, Chunhui; Shen, Tingting
2017-05-01
In this study of the configuration of tankers in a chemical logistics park, the minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park's loading and unloading areas and the transportation demand for dangerous goods are taken as the constraint conditions of the model; then the transport arc capacities, transport arc flows and transport arc edge weights are determined in the transportation network diagram; finally, the model is solved with software. The calculation results show that the tanker configuration problem can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical value for the tanker management of railway transportation of dangerous goods in chemical logistics parks.
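A minimal successive-shortest-path implementation of minimum cost maximum flow illustrates the model. The tiny network (a source, two loading areas, a sink) and its capacities and costs are invented.

```python
from collections import defaultdict

def min_cost_max_flow(n, edges, s, t):
    """Successive-shortest-path min-cost max-flow.

    edges: list of (u, v, capacity, cost). Returns (max_flow, min_cost).
    """
    graph = defaultdict(list)           # node -> indices into the edge arrays
    to, cap, cost = [], [], []
    def add(u, v, c, w):                # forward edge at even index, reverse at odd
        graph[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        graph[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)
    for u, v, c, w in edges:
        add(u, v, c, w)
    flow = total_cost = 0
    while True:
        # Bellman-Ford: cheapest augmenting path in the residual graph
        dist = [float("inf")] * n
        prev_edge = [-1] * n
        dist[s] = 0
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for e in graph[u]:
                    if cap[e] > 0 and dist[u] + cost[e] < dist[to[e]]:
                        dist[to[e]] = dist[u] + cost[e]
                        prev_edge[to[e]] = e
        if dist[t] == float("inf"):
            return flow, total_cost     # no augmenting path left
        push, v = float("inf"), t       # bottleneck capacity along the path
        while v != s:
            e = prev_edge[v]
            push = min(push, cap[e])
            v = to[e ^ 1]
        v = t                           # apply the augmentation
        while v != s:
            e = prev_edge[v]
            cap[e] -= push
            cap[e ^ 1] += push
            v = to[e ^ 1]
        flow += push
        total_cost += push * dist[t]

# Hypothetical yard: source 0 -> loading areas 1, 2 -> sink 3
# (capacities in tankers/day, costs per tanker moved).
edges = [(0, 1, 3, 1), (0, 2, 2, 2), (1, 3, 2, 1), (2, 3, 3, 1), (1, 2, 1, 1)]
print(min_cost_max_flow(4, edges, 0, 3))  # (5, 13)
```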
Maximum And Minimum Temperature Trends In Mexico For The Last 31 Years
Romero-Centeno, R.; Zavala-Hidalgo, J.; Allende Arandia, M. E.; Carrasco-Mijarez, N.; Calderon-Bustamante, O.
2013-05-01
Based on high-resolution (1') daily maps of the maximum and minimum temperatures in Mexico, an analysis of the trends over the last 31 years is performed. The maps were generated using all the available information from more than 5,000 stations of the Mexican Weather Service (Servicio Meteorológico Nacional, SMN) for the period 1979-2009, along with data from the North American Regional Reanalysis (NARR). The data processing procedure includes a quality control step to eliminate erroneous daily data, and makes use of a high-resolution digital elevation model (from GEBCO), the relationship between air temperature and elevation via the average environmental lapse rate, and interpolation algorithms (linear and inverse-distance weighting). Based on the monthly gridded maps for this period, the maximum and minimum temperature trends calculated by least-squares linear regression, and their statistical significance, are obtained and discussed.
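A least-squares trend with a simple significance check can be sketched on a synthetic 31-year series; the real analysis runs per grid cell, and the series below is made up.

```python
import numpy as np

def decadal_trend(years, temps):
    """Least-squares slope in degrees per decade, plus the t-statistic of the
    slope (|t| > ~2 indicates significance at roughly the 5% level)."""
    years = np.asarray(years, float)
    temps = np.asarray(temps, float)
    slope, intercept = np.polyfit(years, temps, 1)
    resid = temps - (slope * years + intercept)
    n = len(years)
    se = np.sqrt(resid @ resid / (n - 2) / np.sum((years - years.mean()) ** 2))
    return slope * 10.0, slope / se

# Synthetic Tmax series warming at 0.03 degC/yr plus observational noise.
rng = np.random.default_rng(1)
years = np.arange(1979, 2010)
tmax = 30.0 + 0.03 * (years - 1979) + 0.1 * rng.standard_normal(31)
trend, t_stat = decadal_trend(years, tmax)
print(0.2 < trend < 0.4, abs(t_stat) > 2.0)  # ~0.3 degC/decade, significant: True True
```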
Govatski, J. A.; da Luz, M. G. E.; Koehler, M.
2015-01-01
We study the geminate pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy that reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how a spatial variation of energetic disorder may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.
Global-scale high-resolution (˜1 km) modelling of mean, maximum and minimum annual streamflow
Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke
2017-04-01
Quantifying the mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which can be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale, and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (˜1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models with observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km^2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90%, and the performance of the models compared well with that of existing GHMs. Yet our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
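A scaled-down version of such a regression, with AF as a power law in catchment area and precipitation fit as a linear model in log space, can be sketched on synthetic catchments; the exponents, noise level and predictor set are invented simplifications of the paper's full model.

```python
import numpy as np

# Synthetic catchments following AF = k * Area^0.9 * Precip^1.2 with lognormal noise.
rng = np.random.default_rng(2)
area = 10 ** rng.uniform(2, 6, 200)          # km^2
precip = rng.uniform(300, 2500, 200)         # mm/yr
af = 1e-6 * area**0.9 * precip**1.2 * np.exp(0.1 * rng.standard_normal(200))

# Ordinary least squares on log AF ~ 1 + log Area + log Precip.
X = np.column_stack([np.ones(200), np.log(area), np.log(precip)])
coef, *_ = np.linalg.lstsq(X, np.log(af), rcond=None)
pred = X @ coef
r2 = 1 - np.sum((np.log(af) - pred) ** 2) / np.sum((np.log(af) - np.log(af).mean()) ** 2)
print(round(coef[1], 1), round(coef[2], 1), r2 > 0.9)  # 0.9 1.2 True
```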
Trends in Mean Annual Minimum and Maximum Near Surface Temperature in Nairobi City, Kenya
Directory of Open Access Journals (Sweden)
George Lukoye Makokha
2010-01-01
This paper examines the long-term urban modification of mean annual near-surface temperature conditions in Nairobi City. Data from four weather stations situated in Nairobi were collected from the Kenya Meteorological Department for the period 1966 to 1999 inclusive. The data comprised mean annual maximum and minimum temperatures and were first subjected to a homogeneity test before analysis. Both linear regression and the Mann-Kendall rank test were used to discern the mean annual trends. Results show that the change in temperature over the thirty-four-year study period is greater for minimum temperature than for maximum temperature. The warming trends began earlier and are more significant at the urban stations than at the suburban stations, an indication of the spread of urbanisation from the built-up Central Business District (CBD) to the suburbs. The established significant warming trends in minimum temperature, which are likely to reach higher proportions in future, pose serious challenges for climate and urban planning in the city. In particular, the effect of increased minimum temperature on human physiological comfort, building and urban design, wind circulation and air pollution needs to be incorporated into future urban planning programmes of the city.
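The Mann-Kendall rank test used above can be implemented directly (normal approximation without tie correction; the warming series below is synthetic).

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic and Z score.

    S sums the signs of all pairwise differences; Z uses the normal
    approximation without correcting for ties.
    """
    x = np.asarray(x, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var)
    elif s < 0:
        z = (s + 1) / np.sqrt(var)
    else:
        z = 0.0
    return s, z

# Synthetic 34-year minimum-temperature series warming at 0.04 degC/yr.
rng = np.random.default_rng(3)
years = np.arange(34)
noisy = 15.0 + 0.04 * years + 0.2 * rng.standard_normal(34)
s, z = mann_kendall(noisy)
print(z > 1.96)  # significant warming trend at the 5% level: True
```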
Medina-Silva, Renata; de Oliveira, Rafael R.; Pivel, Maria A. G.; Borges, Luiz G. A.; Simão, Taiz L. L.; Pereira, Leandro M.; Trindade, Fernanda J.; Augustin, Adolpho H.; Valdez, Fernanda P.; Eizirik, Eduardo; Utz, Laura R. P.; Groposo, Claudia; Miller, Dennis J.; Viana, Adriano R.; Ketzer, João M. M.; Giongo, Adriana
2018-02-01
Conspicuous physicochemical vertical stratification in the deep sea is one of the main forces driving microbial diversity in the oceans. Oxygen and sunlight availability are key factors promoting microbial diversity throughout the water column. Ocean currents also play a major role in the physicochemical stratification, carrying oxygen down to deeper zones as well as moving deeper water masses up towards shallower depths. Water samples within a 50-km radius of a pockmark location in the southwestern Atlantic Ocean were collected, and the prokaryotic communities from different water depths - the chlorophyll maximum, the oxygen minimum and the deep-sea bottom (down to 1355 m) - were described. At the phylum level, Proteobacteria were the most frequent at all water depths, Cyanobacteria were statistically more frequent in the chlorophyll maximum zone, while Thaumarchaeota were significantly more abundant in both the oxygen minimum and bottom waters. The most frequent microorganism in the chlorophyll maximum and oxygen minimum zones was a Pelagibacteraceae operational taxonomic unit (OTU). At the bottom, the most abundant genus was the archaeon Nitrosopumilus. Beta diversity analysis of the 16S rRNA gene sequencing data uncovered in this study shows high spatial heterogeneity among the water-zone communities. Our data make an important contribution to the characterisation of oceanic microbial diversity, as they constitute the first description of prokaryotic communities occurring in the different oceanic water zones of the southwestern Atlantic Ocean.
SU-E-T-578: On Definition of Minimum and Maximum Dose for Target Volume
Energy Technology Data Exchange (ETDEWEB)
Gong, Y; Yu, J; Xiao, Y [Thomas Jefferson University Hospital, Philadelphia, PA (United States)
2015-06-15
Purpose: This study aims to investigate the impact of different minimum and maximum dose definitions in radiotherapy treatment plan quality evaluation criteria by using tumor control probability (TCP) models. Methods: Dosimetric criteria used in the RTOG 1308 protocol are used in the investigation. RTOG 1308 is a phase III randomized trial comparing overall survival after photon versus proton chemoradiotherapy for inoperable stage II-IIIB NSCLC. The prescription dose for the planning target volume (PTV) is 70 Gy. Maximum dose (Dmax) should not exceed 84 Gy and minimum dose (Dmin) should not go below 59.5 Gy in order for the plan to be "per protocol" (satisfactory). A mathematical model that simulates the characteristics of the PTV dose volume histogram (DVH) curve with normalized volume is built. The Dmax and Dmin are noted as percentage volumes Dη% and D(100-δ)%, with η and δ ranging from 0 to 3.5. The model includes three straight line sections and goes through four points: D95% = 70 Gy, Dη% = 84 Gy, D(100-δ)% = 59.5 Gy, and D100% = 0 Gy. For each set of η and δ, the TCP value is calculated using the inhomogeneously irradiated tumor logistic model with D50 = 74.5 Gy and γ50 = 3.52. Results: TCP varies within 0.9% for η and δ values between 0 and 1. With η and δ varying between 0 and 2, the TCP change was up to 2.4%. With η and δ variations from 0 to 3.5, a maximum TCP difference of 8.3% is seen. Conclusion: When the volumes defining maximum and minimum dose varied by more than 2%, significant TCP variations were seen. It is recommended that less than 2% volume be used in the definition of Dmax or Dmin for target dosimetric evaluation criteria. This project was supported by NIH grants U10CA180868, U10CA180822, U24CA180803, U24CA12014 and PA CURE Grant.
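One plausible reading of the DVH model described above can be sketched numerically. The code below builds the three-segment cumulative DVH through the four quoted points, samples voxel doses from it, and applies a standard logistic voxel dose-response with the quoted D50 and γ50. The voxel-product form of the TCP is an assumption on my part, so the numbers are illustrative and not a reproduction of the paper's results; only the qualitative direction (more cold volume lowers TCP, more hot volume raises it) should be trusted.

```python
import numpy as np

D50, GAMMA50 = 74.5, 3.52   # parameters quoted in the abstract

def tcp_voxel(d):
    # logistic dose-response: P = 1 / (1 + (D50/D)^(4*gamma50))  [assumed form]
    d = np.maximum(d, 1e-9)
    return 1.0 / (1.0 + (D50 / d) ** (4 * GAMMA50))

def tcp_from_dvh(eta_pct, delta_pct, n=10000):
    """Sample voxel doses from the 3-segment cumulative DVH through:
    V(0 Gy)=100%, V(59.5 Gy)=100-delta, V(70 Gy)=95%, V(84 Gy)=eta."""
    doses = np.array([0.0, 59.5, 70.0, 84.0])
    vols = np.array([100.0, 100.0 - delta_pct, 95.0, eta_pct]) / 100.0
    # invert the monotone-decreasing V(D) at equally spaced volume fractions
    u = (np.arange(n) + 0.5) / n
    d = np.interp(u, vols[::-1], doses[::-1])
    # TCP of an inhomogeneously irradiated tumour as a product of voxel probabilities
    return float(np.exp(np.mean(np.log(tcp_voxel(d)))))

t_base = tcp_from_dvh(0.5, 0.5)
t_cold = tcp_from_dvh(0.5, 3.5)   # larger delta: more under-dosed volume
t_hot = tcp_from_dvh(3.5, 0.5)    # larger eta: more volume at the maximum dose
print(t_base, t_cold, t_hot)
```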
Directory of Open Access Journals (Sweden)
Md. Sanaul H. Mondal
2017-03-01
Full Text Available Bangladesh shares a common border with India in the west, north and east and with Myanmar in the southeast. These borders cut across 57 rivers that discharge through Bangladesh into the Bay of Bengal in the south. The upstream courses of these rivers traverse India, China, Nepal and Bhutan. Transboundary flows are important sources of water resources in Bangladesh. Among the 57 transboundary rivers, the Teesta is the fourth major river in Bangladesh after the Ganges, the Brahmaputra and the Meghna, and about 2071 km² of it lies within Bangladesh. The Teesta River floodplain in Bangladesh accounts for 14% of the total cropped area and supports 9.15 million people of the country. The objective of this study was to investigate trends in both maximum and minimum water flow at the Kaunia and Dalia stations on the Teesta River and the coping strategies developed by the communities to adjust to uncertain flood situations. The flow characteristics of the Teesta were analysed by calculating monthly maximum and minimum water levels and discharges from 1985 to 2006. Discharge of the Teesta over the last 22 years has been decreasing. Extreme low-flow conditions were likely to occur more frequently after the implementation of the Gozoldoba Barrage by India. However, a very sharp decrease in peak flows was also observed, albeit with unexpectedly high discharges in 1988, 1989, 1991, 1997, 1999 and 2004, some occurring between April and October. Onrush of water causes frequent flash floods, whereas decreasing flow leaves the areas dependent on the Teesta vulnerable to droughts. Both these extreme situations had a negative impact on the lives and livelihoods of people dependent on the Teesta. Over the years, people have developed several risk mitigation strategies to adjust to both natural and anthropogenic flood situations. This article proposed the concept of 'MAXIN' (maximum and minimum flows) for river water justice for riparian land.
Dopant density from maximum-minimum capacitance ratio of implanted MOS structures
International Nuclear Information System (INIS)
Brews, J.R.
1982-01-01
For uniformly doped structures, the ratio of the maximum to the minimum high frequency capacitance determines the dopant ion density per unit volume. Here it is shown that for implanted structures this 'max-min' dopant density estimate depends upon the dose and depth of the implant through the first moment of the depleted portion of the implant. As a result, the 'max-min' estimate of dopant ion density reflects neither the surface dopant density nor the average of the dopant density over the depletion layer. In particular, it is not clear how this dopant ion density estimate is related to the flatband capacitance. (author)
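The classical 'max-min' procedure the abstract refers to can be illustrated for the uniformly doped case. The sketch below uses illustrative silicon room-temperature constants (not the paper's implanted-profile analysis): it recovers the maximum depletion width from the series combination of oxide and depletion capacitance, then iterates the standard relation between depletion width and doping.

```python
import math

# silicon constants at 300 K (illustrative values)
q = 1.602e-19                  # C
eps_s = 11.7 * 8.854e-12       # F/m, silicon permittivity
kT_q = 0.02585                 # V, thermal voltage
n_i = 1.0e16                   # m^-3 (1e10 cm^-3), intrinsic carrier density

def doping_from_cmax_cmin(c_ox, c_min):
    """Solve for uniform doping N from per-area Cmax (= Cox) and HF Cmin.
    Wmax = sqrt(4 eps_s phi_B / (q N)), phi_B = (kT/q) ln(N/n_i), iterated."""
    w_max = eps_s * (1.0 / c_min - 1.0 / c_ox)   # depletion width from series capacitors
    n = 1e21                                      # initial guess, m^-3
    for _ in range(50):                           # fixed-point iteration converges fast
        phi_b = kT_q * math.log(n / n_i)
        n = 4.0 * eps_s * phi_b / (q * w_max ** 2)
    return n

# hypothetical example: 10 nm oxide and a Cmin/Cmax ratio of 0.3
c_ox = 3.9 * 8.854e-12 / 10e-9
n = doping_from_cmax_cmin(c_ox, 0.3 * c_ox)
print(f"{n / 1e6:.2e} cm^-3")
```

With these assumed numbers the result lands in the 1e17 cm^-3 range, a typical substrate doping; for implanted (non-uniform) profiles the abstract's point is precisely that this single number is not a meaningful average.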
EXTREME MAXIMUM AND MINIMUM AIR TEMPERATURE IN MEDITERRANEAN COASTS IN TURKEY
Directory of Open Access Journals (Sweden)
Barbaros Gönençgil
2016-01-01
Full Text Available In this study, we determined extreme maximum and minimum temperatures in both summer and winter seasons at the stations in the Mediterranean coastal areas of Turkey. In the study, the data of 24 meteorological stations for the daily maximum and minimum temperatures of the period 1970–2010 were used. From this database, a set of four extreme temperature indices was applied: warm (TX90) and cold (TN10) days, and warm spell (WSDI) and cold spell (CSDI) duration. The threshold values were calculated for each station to determine the temperatures that were above and below the seasonal norms in winter and summer. The TX90 index displays a positive, statistically significant trend, while TN10 displays a negative, nonsignificant trend. The occurrence of warm spells shows a statistically significant increasing trend, while cold spells show a significantly decreasing trend over the Mediterranean coastline of Turkey.
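A percentile-based index such as TX90 is easy to compute once a threshold is fixed. The following sketch applies the idea to a synthetic daily series with an imposed warming trend; it is illustrative only, since the study's indices use station data and seasonal (not whole-series) thresholds.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic 41-year daily Tmax series with a slight warming trend (illustrative)
n_years, n_days = 41, 365
trend = 0.02 * np.repeat(np.arange(n_years), n_days)   # deg per year
tmax = 30 + 3 * rng.standard_normal(n_years * n_days) + trend

# TX90: per-year count of days with Tmax above the series' 90th percentile
p90 = np.percentile(tmax, 90)
tx90 = (tmax.reshape(n_years, n_days) > p90).sum(axis=1)

# simple least-squares slope as a trend estimate
slope = np.polyfit(np.arange(n_years), tx90, 1)[0]
print(f"TX90 trend: {slope * 10:.1f} days/decade")
```

Because the threshold is fixed over the whole record, any warming shows up directly as a rising count of warm days, mirroring the positive TX90 trend reported above.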
Barbarossa, Valerio; Huijbregts, Mark A. J.; Beusen, Arthur H. W.; Beck, Hylke E.; King, Henry; Schipper, Aafke M.
2018-03-01
Streamflow data are highly relevant for a variety of socio-economic as well as ecological analyses or applications, but a high-resolution global streamflow dataset has so far been lacking. We created FLO1K, a consistent streamflow dataset at a resolution of 30 arc seconds (~1 km) and global coverage. FLO1K comprises mean, maximum and minimum annual flow for each year in the period 1960-2015, provided as spatially continuous gridded layers. We mapped streamflow by means of artificial neural network (ANN) regression. An ensemble of ANNs was fitted on monthly streamflow observations from 6600 monitoring stations worldwide; minimum and maximum annual flows represent the lowest and highest mean monthly flows for a given year. As covariates we used the upstream-catchment physiography (area, surface slope, elevation) and year-specific climatic variables (precipitation, temperature, potential evapotranspiration, aridity index and seasonality indices). Confronting the maps with independent data indicated good agreement (R² values up to 91%). FLO1K delivers essential data for freshwater ecology and water resources analyses at a global scale and yet high spatial resolution.
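The regression step can be miniaturized as follows: a single small feed-forward network trained by gradient descent on synthetic covariates stands in for the paper's ANN ensemble. The covariate names and the response function are invented for illustration; nothing here reproduces FLO1K itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic covariates: think log upstream area, precipitation, temperature (invented)
n = 2000
X = rng.normal(size=(n, 3))
# invented nonlinear "streamflow" response with noise
y = np.exp(0.8 * X[:, 0]) + 0.5 * X[:, 1] - 0.2 * X[:, 2] + 0.1 * rng.normal(size=n)

# one-hidden-layer MLP trained by full-batch gradient descent
h, lr = 16, 0.01
W1 = rng.normal(0, 0.5, (3, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, h);      b2 = 0.0

losses = []
for _ in range(500):
    A = np.tanh(X @ W1 + b1)          # hidden activations
    pred = A @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # backpropagation (constant factor absorbed into the learning rate)
    gW2 = A.T @ err / n; gb2 = err.mean()
    dA = np.outer(err, W2) * (1 - A ** 2)
    gW1 = X.T @ dA / n;  gb1 = dA.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {losses[0]:.2f} -> {losses[-1]:.2f}")
```

An ensemble, as used for FLO1K, would simply repeat this fit with different initializations and average the predictions.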
Changes in atmospheric circulation between solar maximum and minimum conditions in winter and summer
Lee, Jae Nyung
2008-10-01
Statistically significant climate responses to the solar variability are found in the Northern Annular Mode (NAM) and in the tropical circulation. This study is based on the statistical analysis of numerical simulations with the ModelE version of the chemistry coupled Goddard Institute for Space Studies (GISS) general circulation model (GCM) and National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. The low frequency large scale variability of the winter and summer circulation is described by the NAM, the leading Empirical Orthogonal Function (EOF) of geopotential heights. The newly defined seasonal annular modes and their dynamical significance in the stratosphere and troposphere in the GISS ModelE are shown and compared with those in the NCEP/NCAR reanalysis. In the stratosphere, the summer NAM obtained from NCEP/NCAR reanalysis as well as from the ModelE simulations has the same sign throughout the northern hemisphere, but shows greater variability at low latitudes. The patterns in both analyses are consistent with the interpretation that low NAM conditions represent an enhancement of the seasonal difference between the summer and the annual averages of geopotential height, temperature and velocity distributions, while the reverse holds for high NAM conditions. Composite analysis of high and low NAM cases in both the model and observation suggests that the summer stratosphere is more "summer-like" when the solar activity is near a maximum. This means that the zonal easterly wind flow is stronger and the temperature is higher than normal. Thus increased irradiance favors a low summer NAM. A quantitative comparison of the anti-correlation between the NAM and the solar forcing is presented in the model and in the observation, both of which show a lower/higher NAM index in solar maximum/minimum conditions. The summer NAM in the troposphere obtained from NCEP/NCAR reanalysis has a dipolar zonal structure with maximum
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
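The comparison between the two estimators can be reproduced in miniature for the Gaussian case, where the CRPS has a closed form. In this sketch on synthetic, correctly specified data, the maximum likelihood solution is analytic and the minimum-CRPS solution is found by a crude grid search; consistent with the synthetic case study above, the two nearly coincide when the distributional assumption is right.

```python
import math
import numpy as np

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a N(mu, sigma^2) forecast for observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

rng = np.random.default_rng(7)
obs = rng.normal(2.0, 1.5, 500)        # correctly specified Gaussian "observations"

# maximum likelihood: closed form for the Gaussian
mu_ml, sigma_ml = obs.mean(), obs.std()

# minimum CRPS: crude grid search around the ML solution
best = min(
    (sum(crps_gaussian(v, m, s) for v in obs), m, s)
    for m in np.linspace(mu_ml - 0.5, mu_ml + 0.5, 21)
    for s in np.linspace(0.7 * sigma_ml, 1.3 * sigma_ml, 21)
)
mu_crps, sigma_crps = best[1], best[2]
print(mu_ml, sigma_ml, mu_crps, sigma_crps)
```

Repeating the experiment with a deliberately wrong family (e.g. fitting a Gaussian to skewed data) is where the two estimators start to disagree, which is the paper's point.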
On the maximum and minimum of two modified Gamma-Gamma variates with applications
Al-Quwaiee, Hessa
2014-04-01
In this work, we derive the statistical characteristics of the maximum and the minimum of two modified Gamma-Gamma variates in closed form in terms of Meijer's G-function and the extended generalized bivariate Meijer's G-function. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity undergoing independent but not necessarily identically distributed Gamma-Gamma fading under the impact of pointing errors and of (ii) a dual-hop free-space optical relay transmission system. Computer-based Monte-Carlo simulations verify our new analytical results.
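A quick Monte-Carlo check of the selection-combining setup is easy to write if the Gamma-Gamma variate is taken as the product of two unit-mean Gamma variates; the pointing-error impairment and the paper's 'modified' construction are omitted, so this is only a structural illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def gamma_gamma(alpha, beta, size):
    """Gamma-Gamma fading sample: product of two independent unit-mean Gamma variates."""
    return (rng.gamma(alpha, 1.0 / alpha, size) *
            rng.gamma(beta, 1.0 / beta, size))

n = 200_000
x1 = gamma_gamma(4.0, 2.0, n)   # branch 1 (not identically distributed)
x2 = gamma_gamma(6.0, 3.0, n)   # branch 2
sel = np.maximum(x1, x2)        # selection combining picks the stronger branch
mn = np.minimum(x1, x2)

# for independent branches, P(max <= t) = F1(t) * F2(t); check empirically at t = 1
t = 1.0
lhs = np.mean(sel <= t)
rhs = np.mean(x1 <= t) * np.mean(x2 <= t)
print(lhs, rhs)
```

The same factorization of the CDF is what makes closed-form expressions for the max (and, via inclusion-exclusion, the min) tractable in terms of the marginal Meijer's G-function representations.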
Verification of surface minimum, mean, and maximum temperature forecasts in Calabria for summer 2008
Directory of Open Access Journals (Sweden)
S. Federico
2011-02-01
Full Text Available Since 2005, one-hour temperature forecasts for the Calabria region (southern Italy), modelled by the Regional Atmospheric Modeling System (RAMS), have been issued by CRATI/ISAC-CNR (Consortium for Research and Application of Innovative Technologies/Institute for Atmospheric and Climate Sciences of the National Research Council) and are available online at http://meteo.crati.it/previsioni.html (updated every six hours). Beginning in June 2008, the horizontal resolution was enhanced to 2.5 km. In the present paper, forecast skill and accuracy are evaluated out to four days for the 2008 summer season (from 6 June to 30 September, 112 runs). For this purpose, gridded high horizontal resolution forecasts of minimum, mean, and maximum temperatures are evaluated against gridded analyses at the same horizontal resolution (2.5 km).
Gridded analysis is based on Optimal Interpolation (OI and uses the RAMS first-day temperature forecast as the background field. Observations from 87 thermometers are used in the analysis system. The analysis error is introduced to quantify the effect of using the RAMS first-day forecast as the background field in the OI analyses and to define the forecast error unambiguously, while spatial interpolation (SI analysis is considered to quantify the statistics' sensitivity to the verifying analysis and to show the quality of the OI analyses for different background fields.
Two case studies, the first one with a low root mean square error (RMSE) in the OI analysis (less than the 10th percentile), the second with the largest RMSE of the whole period in the OI analysis, are discussed to show the forecast performance under two different conditions. Cumulative statistics are used to quantify forecast errors out to four days. Results show that maximum temperature has the largest RMSE, while minimum and mean temperature errors are similar. For the period considered
National Aeronautics and Space Administration — PROBABILITY CALIBRATION BY THE MINIMUM AND MAXIMUM PROBABILITY SCORES IN ONE-CLASS BAYES LEARNING FOR ANOMALY DETECTION GUICHONG LI, NATHALIE JAPKOWICZ, IAN HOFFMAN,...
2013-02-12
... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...
Laboratory test on maximum and minimum void ratio of tropical sand matrix soils
Othman, B. A.; Marto, A.
2018-04-01
Sand is generally known as a loose granular material which has a grain size finer than gravel and coarser than silt, and can be very angular to well rounded in shape. The presence of various amounts of fines, which influence the loosest and densest states of sand in its natural condition, is well known to contribute to the deformation and loss of shear strength of soil. This paper presents the effect of a range of fines contents on the minimum void ratio e min and maximum void ratio e max of sand matrix soils. Laboratory tests to determine e min and e max of sand matrix soil were conducted using a non-standard method introduced by previous researchers. Clean sand was obtained from a natural mining site at Johor, Malaysia. A set of 3 different sizes of sand (fine sand, medium sand, and coarse sand) was mixed with 0% to 40% by weight of low-plasticity fines (kaolin). Results showed that, in general, e min and e max decreased as fines content increased up to a minimal value in the 0% to 30% range, and then increased thereafter.
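Void ratios are usually back-calculated from measured dry densities. The snippet below shows the standard relation with hypothetical values (the specific gravity and densities are illustrative, not the paper's measurements).

```python
# void ratio from dry density: e = Gs * rho_w / rho_d - 1
# Gs = specific gravity of solids, densities in Mg/m^3 (illustrative values)
GS, RHO_W = 2.65, 1.0

def void_ratio(rho_d):
    return GS * RHO_W / rho_d - 1.0

# e_max from the loosest state (lowest dry density, e.g. funnel deposition),
# e_min from the densest state (vibrated/tamped); hypothetical measurements:
e_max = void_ratio(1.35)   # loosest packing
e_min = void_ratio(1.75)   # densest packing

def rel_density(e):
    """Relative density Dr locates an in-situ void ratio between the two extremes."""
    return (e_max - e) / (e_max - e_min)

print(round(e_max, 3), round(e_min, 3), round(rel_density(0.70), 2))
```

The e_max and e_min pair is what makes relative density meaningful, which is why their sensitivity to fines content (the paper's subject) matters in practice.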
A novel minimum cost maximum power algorithm for future smart home energy management
Directory of Open Access Journals (Sweden)
A. Singaravelan
2017-11-01
Full Text Available With the latest development of smart grid technology, the energy management system can be efficiently implemented at consumer premises. In this paper, an energy management system with wireless communication and a smart meter is designed for scheduling the electric home appliances efficiently with an aim of reducing the cost and peak demand. For an efficient scheduling scheme, the appliances are classified into two types: uninterruptible and interruptible appliances. The problem formulation was constructed based on practical constraints that make the proposed algorithm cope with the real-time situation. The formulated problem was identified as a Mixed Integer Linear Programming (MILP) problem, so this problem was solved by a step-wise approach. This paper proposes a novel Minimum Cost Maximum Power (MCMP) algorithm to solve the formulated problem. The proposed algorithm was simulated with input data available in the existing method. For validating the proposed MCMP algorithm, results were compared with the existing method. The compared results prove that the proposed algorithm efficiently reduces the consumer electricity consumption cost and peak demand to an optimum level with 100% task completion without sacrificing consumer comfort.
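The flavor of cost- and peak-aware scheduling for the two appliance classes can be conveyed with a toy greedy scheduler. This is an illustrative stand-in, not the paper's MILP/MCMP formulation; the prices, peak cap, and appliance parameters are invented.

```python
# toy scheduler in the spirit of cost/peak-aware home energy management
# (illustrative; the paper solves an MILP step-wise, not this greedy heuristic)
PRICE = [5, 5, 3, 2, 2, 3, 6, 8, 8, 6, 4, 3]   # cents/kWh per hour slot
PEAK_CAP = 3.0                                  # kW allowed at any hour

load = [0.0] * len(PRICE)

def schedule_uninterruptible(power, duration):
    """Cheapest contiguous window that stays under the peak cap (assumes one exists)."""
    best, best_cost = None, float("inf")
    for s in range(len(PRICE) - duration + 1):
        if all(load[s + i] + power <= PEAK_CAP for i in range(duration)):
            cost = sum(PRICE[s + i] for i in range(duration)) * power
            if cost < best_cost:
                best, best_cost = s, cost
    for i in range(duration):
        load[best + i] += power
    return best, best_cost

def schedule_interruptible(power, hours):
    """Cheapest (possibly scattered) hours under the peak cap."""
    slots = sorted(range(len(PRICE)), key=PRICE.__getitem__)
    chosen = [t for t in slots if load[t] + power <= PEAK_CAP][:hours]
    for t in chosen:
        load[t] += power
    return sorted(chosen), sum(PRICE[t] for t in chosen) * power

u = schedule_uninterruptible(2.0, 3)   # e.g. washing machine: 2 kW for 3 contiguous hours
i = schedule_interruptible(1.5, 4)     # e.g. water heater: 1.5 kW for any 4 hours
print(u, i)
```

Note how the interruptible load is pushed off the hours already occupied by the uninterruptible one, which is the peak-capping behaviour; a true MILP would optimize both jointly instead of greedily.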
Martucci, M.; Munini, R.; Boezio, M.; Di Felice, V.; Adriani, O.; Barbarino, G. C.; Bazilevskaya, G. A.; Bellotti, R.; Bongi, M.; Bonvicini, V.; Bottai, S.; Bruno, A.; Cafagna, F.; Campana, D.; Carlson, P.; Casolino, M.; Castellini, G.; De Santis, C.; Galper, A. M.; Karelin, A. V.; Koldashov, S. V.; Koldobskiy, S.; Krutkov, S. Y.; Kvashnin, A. N.; Leonov, A.; Malakhov, V.; Marcelli, L.; Marcelli, N.; Mayorov, A. G.; Menn, W.; Mergè, M.; Mikhailov, V. V.; Mocchiutti, E.; Monaco, A.; Mori, N.; Osteria, G.; Panico, B.; Papini, P.; Pearce, M.; Picozza, P.; Ricci, M.; Ricciarini, S. B.; Simon, M.; Sparvoli, R.; Spillantini, P.; Stozhkov, Y. I.; Vacchi, A.; Vannuccini, E.; Vasilyev, G.; Voronov, S. A.; Yurkin, Y. T.; Zampa, G.; Zampa, N.; Potgieter, M. S.; Raath, J. L.
2018-02-01
Precise measurements of the time-dependent intensity of low-energy galactic cosmic rays (GCRs) over solar activity periods, i.e., from minimum to maximum, are needed to achieve a comprehensive understanding of such physical phenomena. The minimum phase between solar cycles 23 and 24 was peculiarly long, extending up to the beginning of 2010 and followed by the maximum phase, reached during early 2014. In this Letter, we present proton differential spectra measured from 2010 January to 2014 February by the PAMELA experiment. For the first time the GCR proton intensity was studied over a wide energy range (0.08–50 GeV) by a single apparatus from a minimum to a maximum period of solar activity. The large statistics allowed the time variation to be investigated on a nearly monthly basis. Data were compared and interpreted in the context of a state-of-the-art three-dimensional model describing GCR propagation through the heliosphere.
OPTIMIZED FUEL INJECTOR DESIGN FOR MAXIMUM IN-FURNACE NOx REDUCTION AND MINIMUM UNBURNED CARBON
Energy Technology Data Exchange (ETDEWEB)
SAROFIM, A F; LISAUSKAS, R; RILEY, D; EDDINGS, E G; BROUWER, J; KLEWICKI, J P; DAVIS, K A; BOCKELIE, M J; HEAP, M P; PERSHING, D
1998-01-01
Reaction Engineering International (REI) has established a project team of experts to develop a technology for combustion systems which will minimize NOx emissions and minimize carbon in the fly ash. This much-needed technology will allow users to meet environmental compliance and produce a saleable by-product. This study is concerned with the NOx control technology of choice for pulverized coal fired boilers, "in-furnace NOx control," which includes: staged low-NOx burners, reburning, selective non-catalytic reduction (SNCR) and hybrid approaches (e.g., reburning with SNCR). The program has two primary objectives: 1) to improve the performance of "in-furnace" NOx control processes; 2) to devise new, or improve existing, approaches for maximum "in-furnace" NOx control and minimum unburned carbon. The program involves: 1) fundamental studies at laboratory- and bench-scale to define NO reduction mechanisms in flames and reburning jets; 2) laboratory experiments and computer modeling to improve our two-phase mixing predictive capability; 3) evaluation of commercial low-NOx burner fuel injectors to develop improved designs; and 4) demonstration of coal injectors for reburning and low-NOx burners at commercial scale. The specific objectives of the two-phase program are to: 1) conduct research to better understand the interaction of heterogeneous chemistry and two-phase mixing on NO reduction processes in pulverized coal combustion; 2) improve our ability to predict combusting coal jets by verifying two-phase mixing models under conditions that simulate the near field of low-NOx burners; 3) determine the limits on NO control by in-furnace NOx control technologies as a function of furnace design and coal type; 5) develop and demonstrate improved coal injector designs for commercial low-NOx burners and coal reburning systems; 6) modify the char burnout model in REI's coal
The ancient Egyptian civilization: maximum and minimum in coincidence with solar activity
Shaltout, M.
It is proved from the last 22 years of observations of the total solar irradiance (TSI) from space by artificial satellites that TSI shows a negative correlation with solar activity (sunspots, flares, and 10.7 cm radio emissions) from day to day, but shows a positive correlation with the same activity from year to year (on the basis of the annual average for each of them). Also, the solar constant, as estimated from ground stations for beam solar radiation observations during the 20th century, indicates coincidence with the phases of the 11-year cycles. It is known from sunspot observations (250 years), and from C14 analysis, that there are other long-term cycles of solar activity larger than the 11-year cycle. The variability of the total solar irradiance affects the climate and the Nile flooding, where there are periodicities in the Nile flooding similar to those of solar activity, from the analysis of about 1300 years of Nile level observations at Cairo. The secular variations of the Nile levels, regularly measured from the 7th to the 15th century A.D., clearly correlate with the solar variations, which suggests evidence for solar influence on the climatic changes in the East African tropics. The civilization of the ancient Egyptians was highly correlated with the Nile flooding, where the river Nile was, and still is, the source of life in the Valley and Delta inside a highly arid desert area. The study depends on long-time historical data for Carbon 14 (more than five thousand years), and a chronological scanning of all the elements of the ancient Egyptian civilization, starting from the first dynasty to the twenty-sixth dynasty. The result shows coincidence between the ancient Egyptian civilization and solar activity. For example, the period of pyramid building, which is one of the brilliant periods, corresponds to maximum solar activity, while the periods of occupation of Egypt by foreign peoples correspond to minimum solar activity. The decline
Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains
Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.
2018-01-01
We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
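Both quantities in this link are directly computable for a small chain. The sketch below evaluates the KSE rate from the stationary distribution and uses the second-largest eigenvalue modulus (SLEM) as the usual proxy for mixing time; the "sticky" chain has a lower entropy rate and slower mixing, consistent with the direction of the correspondence.

```python
import numpy as np

def ks_entropy(P):
    """Kolmogorov-Sinai entropy rate of a Markov chain:
    h = -sum_i pi_i sum_j P_ij log P_ij, with pi the stationary distribution."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi = pi / pi.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    return float(-np.sum(pi[:, None] * P * logs))

def slem(P):
    """Second-largest eigenvalue modulus; mixing time scales like -1/log(SLEM)."""
    mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return float(mags[1])

# two chains on 3 states: a near-uniform (fast-mixing) one and a sticky one
P_fast = np.full((3, 3), 1 / 3)
P_slow = np.array([[0.98, 0.01, 0.01],
                   [0.01, 0.98, 0.01],
                   [0.01, 0.01, 0.98]])
print(ks_entropy(P_fast), slem(P_fast))
print(ks_entropy(P_slow), slem(P_slow))
```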
Maximum attainable power density and wall load in tokamaks underlying reactor relevant constraints
International Nuclear Information System (INIS)
Borrass, K.; Buende, R.
1979-09-01
The characteristic data of tokamaks optimized with respect to their power density or wall load are determined. Reactor relevant constraints are imposed, such as a fixed plant net power output, a fixed blanket thickness and the dependence of the maximum toroidal field on the geometry and conductor material. The impact of finite burn times is considered. Various scaling laws of the toroidal beta with the aspect ratio are discussed. (orig.)
International Nuclear Information System (INIS)
Ackroyd, R.T.
1982-01-01
Some minimum and maximum variational principles for even-parity neutron transport are reviewed and the corresponding principles for odd-parity transport are derived by a simple method to show why the essential boundary conditions associated with these maximum principles have to be imposed. The method also shows why both the essential and some of the natural boundary conditions associated with these minimum principles have to be imposed. These imposed boundary conditions for trial functions in the variational principles limit the choice of the finite element used to represent trial functions. The reasons for the boundary conditions imposed on the principles for even- and odd-parity transport point the way to a treatment of composite neutron transport, for which completely boundary-free maximum and minimum principles are derived from a functional identity. In general a trial function is used for each parity in the composite neutron transport, but this can be reduced to one without any boundary conditions having to be imposed. (author)
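The structural reason such complementary principles exist can be sketched abstractly. The following is a schematic, not the paper's even-parity functional: for a self-adjoint positive operator the quadratic functional is minimized precisely at the exact solution, because the residual enters as a positive quadratic form.

```latex
% schematic: minimum principle for a self-adjoint positive operator A
% with exact solution A\phi = S and trial function \psi
\begin{aligned}
J[\psi] &= \langle \psi, A\psi \rangle - 2\,\langle \psi, S \rangle, \\
J[\psi] - J[\phi] &= \langle \psi - \phi,\; A(\psi - \phi) \rangle \;\ge\; 0 .
\end{aligned}
```

In transport applications, integrations by parts performed while building such functionals generate boundary terms, and whether those terms vanish automatically (natural conditions) or must be enforced on the trial functions (essential conditions) is exactly the distinction the abstract turns on.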
International Nuclear Information System (INIS)
Bispo, Heleno; Silva, Nilton; Brito, Romildo; Manzi, João
2013-01-01
Highlights: • Minimum entropy generation (MEG) principle improved the reaction performance. • MEG rate and maximum conversion equivalence have been analyzed. • Temperature and residence time are used to establish the domain of MEG. • Satisfying the temperature and residence time relationship results in optimal performance. - Abstract: The analysis of the equivalence between the minimum entropy generation (MEG) rate and the maximum conversion rate for a reactive system is the main purpose of this paper. Used as an optimization strategy, minimum entropy production was applied to the production of propylene glycol in a Continuous Stirred-Tank Reactor (CSTR) with a view to determining the best operating conditions, and under such conditions a high conversion rate was found. The effects of the key variables and restrictions on the validity domain of MEG were investigated, which raises issues that are included within a broad discussion. The results from simulations indicate that, from the chemical reaction standpoint, a maximum conversion rate can be considered as equivalent to MEG. Such a result can be clearly explained by examining the classical Maxwell–Boltzmann distribution, where the molecules of the reactive system under the condition of the MEG rate present a distribution of energy with reduced dispersion, resulting in a better quality of collision between molecules and a higher conversion rate.
Directory of Open Access Journals (Sweden)
Roham Vali, Mohammad Nasrollahzadeh Masouleh* and Siamak Mashhady Rafie1
2013-04-01
Full Text Available There are no data on the effect of maximum and minimum doses of furosemide on the heart's work performance and the amount of fractional shortening (FS) in echocardiography of the rabbit. This study was designed to investigate the possibility of such an effect. Twenty-four healthy female New Zealand white rabbits were divided into four equal groups. Maximum and minimum doses of furosemide were used for the first and second groups, while the injection solution for the third and fourth groups was sodium chloride 0.9% at the same calculated volumes as the furosemide doses of the first two groups, respectively. The left ventricle FS at set times (0, 2, 5, 15, 30 minutes) was determined by echocardiography. Mean±SD, maximum and minimum FS values in all groups before injection and at the set times were calculated. Statistical analysis revealed no significant differences between the means of FS. The results of this study showed that furosemide can be used as a diuretic agent for preparing a window approach in abdominal ultrasonography examination with no harmful effect on cardiac function.
Minimum and Maximum Potential Contributions to Future Sea Level Rise from Polar Ice Sheets
Deconto, R. M.; Pollard, D.
2017-12-01
New climate and ice-sheet modeling, calibrated to past changes in sea-level, is painting a stark picture of the future fate of the great polar ice sheets if greenhouse gas emissions continue unabated. This is especially true for Antarctica, where a substantial fraction of the ice sheet rests on bedrock more than 500-meters below sea level. Here, we explore the sensitivity of the polar ice sheets to a warming atmosphere and ocean under a range of future greenhouse gas emissions scenarios. The ice sheet-climate-ocean model used here considers time-evolving changes in surface mass balance and sub-ice oceanic melting, ice deformation, grounding line retreat on reverse-sloped bedrock (Marine Ice Sheet Instability), and newly added processes including hydrofracturing of ice shelves in response to surface meltwater and rain, and structural collapse of thick, marine-terminating ice margins with tall ice-cliff faces (Marine Ice Cliff Instability). The simulations improve on previous work by using 1) improved atmospheric forcing from a Regional Climate Model and 2) a much wider range of model physical parameters within the bounds of modern observations of ice dynamical processes (particularly calving rates) and paleo constraints on past ice-sheet response to warming. Approaches to more precisely define the climatic thresholds capable of triggering rapid and potentially irreversible ice-sheet retreat are also discussed, as is the potential for aggressive mitigation strategies like those discussed at the 2015 Paris Climate Conference (COP21) to substantially reduce the risk of extreme sea-level rise. These results, including physics that consider both ice deformation (creep) and calving (mechanical failure of marine terminating ice) expand on previously estimated limits of maximum rates of future sea level rise based solely on kinematic constraints of glacier flow. At the high end, the new results show the potential for more than 2m of global mean sea level rise by 2100
Directory of Open Access Journals (Sweden)
Syed S. Ghani
2017-12-01
Full Text Available The current work observes the trends in Lautoka’s temperature and relative humidity during the period 2003–2013, analyzed using recently updated data obtained from the Fiji Meteorological Services (FMS). Four elements are investigated: mean maximum temperature, mean minimum temperature, diurnal temperature range (DTR) and mean relative humidity. From 2003–2013, the annual mean temperature increased by between 0.02 and 0.08 °C. The warming is greater in minimum temperature than in maximum temperature, resulting in a decrease of the diurnal temperature range. The statistically significant increase was mostly seen during the summer months of December and January. Mean relative humidity has also increased from 3% to 8%. The bases of abnormal climate conditions are also studied. These bases were defined with temperature or humidity anomalies in their appropriate time sequences. They confirm the observed findings and show that the climate throughout Lautoka has been becoming gradually damper and hotter during this period. While we are only at an initial phase of the probable trends in temperature change, ecological responses to recent climate change are already clearly noticeable. It is therefore proposed that it would be easier to identify climate alteration in a small island nation like Fiji.
International Nuclear Information System (INIS)
Nukiyama, S.
1991-01-01
The quantity of heat transmitted from a metal surface to boiling water increases as the temperature difference ΔT is increased, but after ΔT has reached a certain limit, the quantity Q decreases with further increase in ΔT. This turning point is the maximum value of heat transmitted, and its existence was actually observed in the experiment. Under atmospheric pressure, the ΔT corresponding to the maximum value of heat transfer for water at 100 degrees C falls between 20-40 degrees C, and Q is between 1,080,000 and 1,800,000 kcal/m²h (i.e. between 2,000 and 3,000 kg/m²h, if expressed as a constant evaporation rate at 100 degrees C); this figure is larger than the maximum value of heat transfer as was previously considered. In this paper the minimum value of heat transfer was also obtained, and the burn-out effect is discussed for the high-temperature region of the Q-ΔT curve.
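The turning point described above, the peak of the Q-ΔT boiling curve, can be located numerically from sampled data. A minimal sketch in Python; the sample values below are hypothetical illustrations, not Nukiyama's measurements:

```python
def turning_point(dT, Q):
    """Index of the maximum of a sampled Q-dT boiling curve,
    i.e. the burn-out (peak heat flux) point."""
    i = max(range(len(Q)), key=Q.__getitem__)
    return dT[i], Q[i]

# Hypothetical samples: Q rises with superheat, peaks, then falls.
dT = [5, 10, 20, 30, 40, 60, 100]                       # degrees C
Q = [2.0e5, 5.0e5, 1.2e6, 1.6e6, 1.3e6, 6.0e5, 3.0e5]   # kcal/m^2 h
peak_dT, peak_Q = turning_point(dT, Q)
```

The peak found this way falls in the 20-40 degree C superheat band, consistent with the range quoted in the abstract.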
Directory of Open Access Journals (Sweden)
S. Vignesh
2017-04-01
Full Text Available Flow-based erosion–corrosion problems are very common in fluid handling equipment such as propellers, impellers, and pumps in warships and submarines. Though there are many coating materials available to combat erosion–corrosion damage in the above components, iron-based amorphous coatings are considered to be more effective in combating erosion–corrosion problems. The high velocity oxy-fuel (HVOF) spray process is considered to be a better process to coat the iron-based amorphous powders. In this investigation, an iron-based amorphous metallic coating was developed on a 316 stainless steel substrate using the HVOF spray technique. Empirical relationships were developed to predict the porosity and microhardness of the iron-based amorphous coating, incorporating HVOF spray parameters such as oxygen flow rate, fuel flow rate, powder feed rate, carrier gas flow rate, and spray distance. Response surface methodology (RSM) was used to identify the optimal HVOF spray parameters to attain a coating with minimum porosity and maximum hardness.
THE 2003 -2007 MINIMUM, MAXIMUM AND MEDIUM DISCHARGE ANALYSIS OF THE LATORIŢA-LOTRU WATER SYSTEM
Directory of Open Access Journals (Sweden)
Simona-Elena MIHĂESCU
2010-06-01
Full Text Available The 2003-2007 minimum, maximum and medium discharge analysis of the Latoriţa-Lotru water system. From a functional point of view, the Lotru and Latoriţa make up a water system by the junction of the two high hydro-energetic potential water flows. The Lotru springs from the Parâng Massif with a spring quota of over 1900 m and an outfall quota of 298 m, which makes for an altitude difference of 1602 m; it is an affluent of the Olt River, has a course length of 76 km and a minimum discharge of 20 m³/s. Its reception hollow is of 1024 km². The Latoriţa springs from the Latoriţa Mountains; it is a small river with an average discharge of 2.7 m³/s and is an affluent of the Lotru. Together, the two make up a high hydro-energetic potential system, valorized in the system of lakes which serve the Ciunget Hydro-Electric Power Plant. Galbenu and Petrimanu are two reservoirs built on the Latoriţa River; on the Lotru, we have the Vidra, Balindru, Mălaia and Brădişor reservoirs. The discharge analysis of these rivers is very important in view of good risk management, especially concerning floods and high-level waters, even in the case of artificial water flows such as the Latoriţa-Lotru water system.
2010-04-01
... assisted with NAHASDA grant amounts? 1000.124 Section 1000.124 Housing and Urban Development Regulations... Activities § 1000.124 What maximum and minimum rent or homebuyer payment can a recipient charge a low-income...
Abaurrea, J.; Asín, J.; Cebrián, A. C.
2018-02-01
The occurrence of extreme heat events in maximum and minimum daily temperatures is modelled using a non-homogeneous common Poisson shock process. It is applied to five Spanish locations, representative of the most common climates over the Iberian Peninsula. The model is based on an excess over threshold approach and distinguishes three types of extreme events: only in maximum temperature, only in minimum temperature and in both of them (simultaneous events). It takes into account the dependence between the occurrence of extreme events in both temperatures and its parameters are expressed as functions of time and temperature related covariates. The fitted models allow us to characterize the occurrence of extreme heat events and to compare their evolution in the different climates during the observed period. This model is also a useful tool for obtaining local projections of the occurrence rate of extreme heat events under climate change conditions, using the future downscaled temperature trajectories generated by Earth System Models. The projections for 2031-60 under scenarios RCP4.5, RCP6.0 and RCP8.5 are obtained and analysed using the trajectories from four earth system models which have successfully passed a preliminary control analysis. Different graphical tools and summary measures of the projected daily intensities are used to quantify the climate change on a local scale. A high increase in the occurrence of extreme heat events, mainly in July and August, is projected in all the locations, all types of event and in the three scenarios, although in 2051-60 the increase is higher under RCP8.5. However, relevant differences are found between the evolution in the different climates and the types of event, with a specially high increase in the simultaneous ones.
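The three event types distinguished by the model (only in Tmax, only in Tmin, and simultaneous) follow from a simple excess-over-threshold classification of each day. A minimal sketch of that classification step, with hypothetical temperatures and thresholds (the actual model is a non-homogeneous common Poisson shock process with covariates):

```python
def classify_extreme_days(tmax, tmin, u_max, u_min):
    """Count days whose exceedance is only in Tmax, only in Tmin,
    or in both (a simultaneous event), per an excess-over-threshold scheme."""
    only_max = only_min = both = 0
    for x, n in zip(tmax, tmin):
        ex, en = x > u_max, n > u_min
        if ex and en:
            both += 1
        elif ex:
            only_max += 1
        elif en:
            only_min += 1
    return only_max, only_min, both

# Hypothetical 4-day series with thresholds u_max = 39, u_min = 24.
counts = classify_extreme_days([38, 41, 35, 40], [22, 26, 25, 20], 39, 24)
```

Modelling the occurrence times of each of the three counts as dependent point processes is what the common-shock formulation adds on top of this classification.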
Coplen, T.B.; Hopple, J.A.; Böhlke, J.K.; Peiser, H.S.; Rieder, S.E.; Krouse, H.R.; Rosman, K.J.R.; Ding, T.; Vocke, R.D.; Revesz, K.M.; Lamberty, A.; Taylor, P.; De Bievre, P.
2002-01-01
laboratories comparable. The minimum and maximum concentrations of a selected isotope in naturally occurring terrestrial materials for selected chemical elements reviewed in this report are given below:

Isotope   Minimum mole fraction   Maximum mole fraction
2H        0.000 0255              0.000 1838
7Li       0.9227                  0.9278
11B       0.7961                  0.8107
13C       0.009 629               0.011 466
15N       0.003 462               0.004 210
18O       0.001 875               0.002 218
26Mg      0.1099                  0.1103
30Si      0.030 816               0.031 023
34S       0.0398                  0.0473
37Cl      0.240 77                0.243 56
44Ca      0.020 82                0.020 92
53Cr      0.095 01                0.095 53
56Fe      0.917 42                0.917 60
65Cu      0.3066                  0.3102
205Tl     0.704 72                0.705 06

The numerical values above have uncertainties that depend upon the uncertainties of the determinations of the absolute isotope-abundance variations of reference materials of the elements. Because reference materials used for absolute isotope-abundance measurements have not been included in relative isotope abundance investigations of zinc, selenium, molybdenum, palladium, and tellurium, ranges in isotopic composition are not listed for these elements, although such ranges may be measurable with state-of-the-art mass spectrometry. This report is available at the url: http://pubs.water.usgs.gov/wri014222.
Yoo, Cheolhee; Im, Jungho; Park, Seonyoung; Quackenbush, Lindi J.
2018-03-01
Urban air temperature is considered a significant variable for a variety of urban issues, and analyzing the spatial patterns of air temperature is important for urban planning and management. However, insufficient weather stations limit accurate spatial representation of temperature within a heterogeneous city. This study used a random forest machine learning approach to estimate daily maximum and minimum air temperatures (Tmax and Tmin) for two megacities with different climate characteristics: Los Angeles, USA, and Seoul, South Korea. This study used eight time-series land surface temperature (LST) data from Moderate Resolution Imaging Spectroradiometer (MODIS), with seven auxiliary variables: elevation, solar radiation, normalized difference vegetation index, latitude, longitude, aspect, and the percentage of impervious area. We found different relationships between the eight time-series LSTs with Tmax/Tmin for the two cities, and designed eight schemes with different input LST variables. The schemes were evaluated using the coefficient of determination (R2) and Root Mean Square Error (RMSE) from 10-fold cross-validation. The best schemes produced R2 of 0.850 and 0.777 and RMSE of 1.7 °C and 1.2 °C for Tmax and Tmin in Los Angeles, and R2 of 0.728 and 0.767 and RMSE of 1.1 °C and 1.2 °C for Tmax and Tmin in Seoul, respectively. LSTs obtained the day before were crucial for estimating daily urban air temperature. Estimated air temperature patterns showed that Tmax was highly dependent on the geographic factors (e.g., sea breeze, mountains) of the two cities, while Tmin showed marginally distinct temperature differences between built-up and vegetated areas in the two cities.
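The scheme evaluation above rests on two standard metrics, the coefficient of determination (R2) and RMSE. A minimal sketch of how they are computed from observed and predicted temperatures; the values below are hypothetical placeholders, and the actual study obtained them from 10-fold cross-validation of a random forest:

```python
import math

def r2_rmse(obs, pred):
    """Coefficient of determination (R2) and root-mean-square error."""
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)

# Hypothetical observed vs. predicted daily Tmax values (degrees C).
obs = [30.1, 28.4, 31.0, 26.7, 29.3]
pred = [29.5, 28.9, 30.2, 27.5, 29.0]
r2, rmse = r2_rmse(obs, pred)
```

R2 near 1 and RMSE near 0 indicate a scheme that tracks station air temperature closely, which is how the eight LST input schemes were ranked.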
Liu, Saiyan; Huang, Shengzhi; Xie, Yangyang; Huang, Qiang; Leng, Guoyong; Hou, Beibei; Zhang, Ying; Wei, Xiu
2018-05-01
Due to the important role of temperature in the global climate system and energy cycles, it is important to investigate the spatial-temporal change patterns, causes and implications of annual maximum (Tmax) and minimum (Tmin) temperatures. In this study, the cloud model was adopted to fully and accurately analyze the changing patterns of annual Tmax and Tmin from 1958 to 2008 by quantifying their mean, uniformity, and stability in the Wei River Basin (WRB), a typical arid and semi-arid region in China. Additionally, cross wavelet analysis was applied to explore the correlations among annual Tmax and Tmin and the yearly sunspot number, Arctic Oscillation, Pacific Decadal Oscillation, and soil moisture, with an aim to determine possible causes of annual Tmax and Tmin variations. Furthermore, temperature-related impacts on vegetation cover and precipitation extremes were also examined. Results indicated that: (1) the WRB is characterized by increasing trends in annual Tmax and Tmin, with a more evident increasing trend in annual Tmin, which has a higher dispersion degree and is less uniform and stable than annual Tmax; (2) the asymmetric variations of Tmax and Tmin can be generally explained by the stronger effects of solar activity (primarily), large-scale atmospheric circulation patterns, and soil moisture on annual Tmin than on annual Tmax; and (3) increasing annual Tmax and Tmin have exerted strong influences on local precipitation extremes, in terms of their duration, intensity, and frequency in the WRB. This study presents new analyses of Tmax and Tmin in the WRB, and the findings may help guide regional agricultural production and water resources management.
Stooksbury, David E.; Idso, Craig D.; Hubbard, Kenneth G.
1999-05-01
Gaps in otherwise regularly scheduled observations are often referred to as missing data. This paper explores the spatial and temporal impacts that data gaps in the recorded daily maximum and minimum temperatures have on the calculated monthly mean maximum and minimum temperatures. For this analysis 138 climate stations from the United States Historical Climatology Network Daily Temperature and Precipitation Data set were selected. The selected stations had no missing maximum or minimum temperature values during the period 1951-80. The monthly mean maximum and minimum temperatures were calculated for each station for each month. For each month 1-10 consecutive days of data from each station were randomly removed. This was performed 30 times for each simulated gap period. The spatial and temporal impact of the 1-10-day data gaps were compared. The influence of data gaps is most pronounced in the continental regions during the winter and least pronounced in the southeast during the summer. In the north central plains, 10-day data gaps during January produce a standard deviation value greater than 2°C about the `true' mean. In the southeast, 10-day data gaps in July produce a standard deviation value less than 0.5°C about the mean. The results of this study will be of value in climate variability and climate trend research as well as climate assessment and impact studies.
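The core experiment above, removing runs of consecutive days and recomputing the monthly mean, can be sketched in a few lines. The single-station, single-month setup and the synthetic daily series below are hypothetical simplifications of the 138-station analysis:

```python
import random
import statistics

def gap_means(daily, gap_len, trials=30, seed=1):
    """Monthly means recomputed after deleting a random run of gap_len
    consecutive days, repeated `trials` times (mirroring the 30 random
    removals per simulated gap period described above)."""
    rng = random.Random(seed)
    means = []
    for _ in range(trials):
        start = rng.randrange(len(daily) - gap_len + 1)
        kept = daily[:start] + daily[start + gap_len:]
        means.append(sum(kept) / len(kept))
    return means

# Hypothetical January daily minimum temperatures (degrees C), 31 days.
rng = random.Random(0)
daily = [rng.gauss(-5.0, 6.0) for _ in range(31)]
spread = statistics.stdev(gap_means(daily, gap_len=10))
```

The standard deviation of the recomputed means is the quantity the study maps spatially: large in high-variability winter climates, small in the summertime Southeast.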
Panagoulia, Dionysia; Vlahogianni, Eleni I.
2018-06-01
A methodological framework based on nonlinear recurrence analysis is proposed to examine the historical data evolution of extremes of maximum and minimum daily mean areal temperature patterns over time under different climate scenarios. The methodology is based on both historical data and atmospheric General Circulation Model (GCM) produced climate scenarios for the periods 1961-2000 and 2061-2100, which correspond to 1 × CO2 and 2 × CO2 scenarios. Historical data were derived from the actual daily observations coupled with atmospheric circulation patterns (CPs). The dynamics of the temperature was reconstructed in the phase-space from the time series of temperatures. The statistical comparison of different temperature patterns was based on discriminating statistics obtained by Recurrence Quantification Analysis (RQA). Moreover, the bootstrap method of Schinkel et al. (2009) was adopted to calculate the confidence bounds of RQA parameters based on a structure-preserving resampling. The overall methodology was applied to the mountainous Mesochora catchment in Central-Western Greece. The results reveal substantial similarities between the historical maximum and minimum daily mean areal temperature statistical patterns and their confidence bounds, as well as the maximum and minimum temperature patterns in evolution under the 2 × CO2 scenario. Significant variability and non-stationary behaviour characterize all the climate series analyzed. Fundamental differences are found between the historical and maximum 1 × CO2 scenarios, the maximum 1 × CO2 and minimum 1 × CO2 scenarios, as well as the confidence bounds for the two CO2 scenarios. The 2 × CO2 scenario reflects the strongest shifts in intensity, duration and frequency in temperature patterns. Such transitions can help scientists and policy makers to understand the effects of extreme temperature changes on water resources, economic development, and health of ecosystems and hence to proceed to
Kandaswamy, Krishna Kumar Umar
2013-01-01
The extracellular matrix (ECM) is a major component of tissues of multicellular organisms. It consists of secreted macromolecules, mainly polysaccharides and glycoproteins. Malfunctions of ECM proteins lead to severe disorders such as Marfan syndrome, osteogenesis imperfecta, numerous chondrodysplasias, and skin diseases. In this work, we report a random forest approach, EcmPred, for the prediction of ECM proteins from protein sequences. EcmPred was trained on a dataset containing 300 ECM and 300 non-ECM proteins and tested on a dataset containing 145 ECM and 4187 non-ECM proteins. EcmPred achieved 83% accuracy on the training dataset and 77% on the test dataset. EcmPred predicted 15 out of 20 experimentally verified ECM proteins. By scanning the entire human proteome, we predicted novel ECM proteins validated with gene ontology and InterPro. The dataset and standalone version of the EcmPred software is available at http://www.inb.uni-luebeck.de/tools-demos/Extracellular_matrix_proteins/EcmPred. © 2012 Elsevier Ltd.
Kandaswamy, Krishna Kumar Umar; Ganesan, Pugalenthi; Kalies, Kai Uwe; Hartmann, Enno; Martinetz, Thomas M.
2013-01-01
The extracellular matrix (ECM) is a major component of tissues of multicellular organisms. It consists of secreted macromolecules, mainly polysaccharides and glycoproteins. Malfunctions of ECM proteins lead to severe disorders such as Marfan
Energy Technology Data Exchange (ETDEWEB)
Ngeow, Chow-Choong [Graduate Institute of Astronomy, National Central University, Jhongli 32001, Taiwan (China); Kanbur, Shashi M.; Schrecengost, Zachariah [Department of Physics, SUNY Oswego, Oswego, NY 13126 (United States); Bhardwaj, Anupam; Singh, Harinder P. [Department of Physics and Astrophysics, University of Delhi, Delhi 110007 (India)
2017-01-10
Investigation of period–color (PC) and amplitude–color (AC) relations at maximum and minimum light can be used to probe the interaction of the hydrogen ionization front (HIF) with the photosphere and the radiation hydrodynamics of the outer envelopes of Cepheids and RR Lyraes. For example, theoretical calculations indicated that such interactions would occur at minimum light for RR Lyrae and result in a flatter PC relation. In the past, the PC and AC relations have been investigated by using either the (V − R)_MACHO or (V − I) colors. In this work, we extend previous work to other bands by analyzing the RR Lyraes in the Sloan Digital Sky Survey Stripe 82 Region. Multi-epoch data are available for RR Lyraes located within the footprint of the Stripe 82 Region in five (ugriz) bands. We present the PC and AC relations at maximum and minimum light in four colors: (u − g)_0, (g − r)_0, (r − i)_0, and (i − z)_0, after they are corrected for extinction. We found that the PC and AC relations for this sample of RR Lyraes show a complex nature in the form of flat, linear or quadratic relations. Furthermore, the PC relations at minimum light for fundamental-mode RR Lyrae stars are separated according to the Oosterhoff type, especially in the (g − r)_0 and (r − i)_0 colors. If only considering the results from linear regressions, our results are quantitatively consistent with the theory of HIF-photosphere interaction for both fundamental and first-overtone RR Lyraes.
Al-Quwaiee, Hessa; Ansari, Imran Shafique; Alouini, Mohamed-Slim
2016-01-01
In this work, we derive the exact statistical characteristics of the maximum and the minimum of two modified double generalized gamma variates in closed form in terms of Meijer’s G-function, Fox’s H-function, and the extended generalized bivariate Meijer’s G-function and H-function, in addition to simple closed-form asymptotic results in terms of elementary functions. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity scheme and of (ii) a dual-hop free-space optical relay transmission system over double generalized gamma fading channels with the impact of pointing errors. In addition, we provide asymptotic results for the bit error rate of the two systems in the high-SNR regime. Computer-based Monte-Carlo simulations verify our new analytical results.
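As a numerical cross-check of such closed-form results, the moments of the maximum and minimum of two independent variates can be estimated by Monte-Carlo simulation, which is the kind of verification the abstract mentions. The sketch below uses plain gamma variates as a simplifying assumption; the paper itself treats modified double generalized gamma variates with pointing errors:

```python
import random

def mc_max_min(a1, a2, n=10000, seed=42):
    """Empirical means of the max and min of two independent gamma
    variates (shapes a1, a2; unit scale), via Monte-Carlo simulation."""
    rng = random.Random(seed)
    sum_max = sum_min = 0.0
    for _ in range(n):
        x = rng.gammavariate(a1, 1.0)
        y = rng.gammavariate(a2, 1.0)
        sum_max += max(x, y)
        sum_min += min(x, y)
    return sum_max / n, sum_min / n

# Since max(x, y) + min(x, y) = x + y, the two estimated means must add
# up to E[X] + E[Y] = a1 + a2 = 5 here, up to sampling error.
mean_max, mean_min = mc_max_min(2.0, 3.0)
```

The identity max + min = sum gives a free sanity check on any such simulation before comparing it against an analytical expression.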
Al-Quwaiee, Hessa
2016-01-07
In this work, we derive the exact statistical characteristics of the maximum and the minimum of two modified double generalized gamma variates in closed form in terms of Meijer’s G-function, Fox’s H-function, and the extended generalized bivariate Meijer’s G-function and H-function, in addition to simple closed-form asymptotic results in terms of elementary functions. Then, we rely on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity scheme and of (ii) a dual-hop free-space optical relay transmission system over double generalized gamma fading channels with the impact of pointing errors. In addition, we provide asymptotic results for the bit error rate of the two systems in the high-SNR regime. Computer-based Monte-Carlo simulations verify our new analytical results.
Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos
2014-05-01
spherical variogram over the conterminous land of Spain, and converted onto a regular 10 km² grid (resolution similar to the mean distance between stations) to map the results. In the conterminous land of Spain, the distance at which pairs of stations have a common variance in temperature (both maximum, Tmax, and minimum, Tmin) above the selected threshold (50%, Pearson r ~0.70) on average does not exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along the coastland areas and lower variability inland. The highest spatial variability coincides particularly with coastland areas surrounded by mountain chains, suggesting that orography is one of the main factors driving higher interstation variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than the diurnal one. The results suggest that, in general, local factors affect the spatial variability of monthly Tmin more than that of Tmax, so a higher network density would be necessary to capture the higher spatial variability highlighted for Tmin with respect to Tmax. A conservative distance for reference series could be evaluated at 200 km, which we propose for the continental land of Spain and use in the development of MOTEDAS.
Directory of Open Access Journals (Sweden)
Stefan Krähenmann
2013-07-01
Full Text Available The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008–2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2 °C across arid areas, yet overestimated by around 2 °C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly
Energy Technology Data Exchange (ETDEWEB)
Kraehenmann, Stefan; Kothe, Steffen; Ahrens, Bodo [Frankfurt Univ. (Germany). Inst. for Atmospheric and Environmental Sciences; Panitz, Hans-Juergen [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)
2013-10-15
The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008-2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2 °C across arid areas, yet overestimated by around 2 °C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly across
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to find out the influence of the SO4 and NO3 levels contained in the rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations recorded over time; they are said to be incomplete if each individual has a different number of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
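The fitted model reported above can be applied directly to new covariate values. A minimal sketch; the input levels below are hypothetical, and note that the response Ŷ* is the model's (possibly transformed) prediction, not raw pH:

```python
def predict_y_star(x1, x2):
    """Evaluate the fitted random-effects model from the abstract,
    with X1 = SO4 level and X2 = NO3 level in the rainwater sample."""
    return 0.41276446 - 0.00107302 * x1 + 0.00215470 * x2

# Hypothetical input levels, purely for illustration.
y_star = predict_y_star(10.0, 5.0)
```

The signs of the coefficients match the chemistry in the abstract: higher SO4 lowers the predicted response (more acidic), while the NO3 coefficient here is positive.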
Silva, Leonardo W. T.; Barros, Vitor F.; Silva, Sandro G.
2014-01-01
In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic reflector (PR) antennas with phased arrays (PAs). These arrays enable electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals, producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator differs from the conventional approach because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced premature convergence. PMID:25196013
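The maximum-minimum pairing idea described above (crossing the fittest individuals with the least fit) can be sketched as a selection step. This is an illustrative reconstruction of the pairing rule only, not the GA-MMC implementation, which also involves multi-objective handling and differentiated coding:

```python
def max_min_pairs(population, fitness):
    """Pair the fittest individual with the least fit, the second fittest
    with the second least fit, and so on, to boost genetic diversity."""
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    pairs = []
    lo, hi = 0, len(order) - 1
    while lo < hi:
        pairs.append((population[order[hi]], population[order[lo]]))
        lo += 1
        hi -= 1
    return pairs

# Hypothetical 4-individual population with fitness values.
pop = ["a", "b", "c", "d"]
fit = [0.1, 0.9, 0.4, 0.7]
pairs = max_min_pairs(pop, fit)  # fittest "b" is paired with least-fit "a"
```

Compared with conventional fitness-proportional mating, pairing opposite ends of the fitness ranking mixes genetic material that selection pressure would otherwise keep apart, which is the stated mechanism for delaying premature convergence.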
Directory of Open Access Journals (Sweden)
Phan Thanh Noi
2016-12-01
Full Text Available This study aims to quantitatively evaluate the land surface temperature (LST) derived from the MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily land air surface temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) and auxiliary data, solving the discontinuity problem of ground measurements. No previous studies of Vietnam have integrated both TERRA and AQUA LST of daytime and nighttime for Ta estimation (using four MODIS LST datasets). In addition, to find out which variables are the most effective in describing the differences between LST and Ta, we tested several popular methods, such as the Pearson correlation coefficient, stepwise selection, the Bayesian information criterion (BIC), adjusted R-squared and principal component analysis (PCA), on 14 variables (including the LST products (four variables), NDVI, elevation, latitude, longitude, day length in hours, Julian day, and four view zenith angle variables), and then applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground truth temperature derived from 15 climate stations are time and regional topography dependent. The best results for Ta-max and Ta-min estimation were achieved when we combined both LST daytime and nighttime of TERRA and AQUA and data from the topography analysis.
Energy Technology Data Exchange (ETDEWEB)
Shen, Tengming [Fermilab; Ye, Liyang [NCSU, Raleigh; Turrioni, Daniele [Fermilab; Li, Pei [Fermilab
2015-01-01
Small insert coils have been built using a multifilamentary Bi2Sr2CaCu2Ox round wire, and characterized in background fields to explore the quench behaviors and limits of Bi2Sr2CaCu2Ox superconducting magnets, with an emphasis on assessing the impact of slow normal zone propagation on quench detection. Using heaters of various lengths to initiate a small normal zone, a coil was quenched safely more than 70 times without degradation, with the maximum coil temperature reaching 280 K. Coils withstood a resistive voltage of tens of mV for seconds without quenching, showing the high stability of these coils and suggesting that the quench detection voltage should be greater than 50 mV so as not to falsely trigger protection. The hot spot temperature for the resistive voltage of the normal zone to reach 100 mV increases from ~40 K to ~80 K as the operating wire current density Jo increases from 89 A/mm² to 354 A/mm², whereas for the voltage to reach 1 V, it increases from ~60 K to ~140 K, showing the increasing negative impact of slow normal zone propagation on quench detection with increasing Jo and the need to limit the quench detection voltage to < 1 V. These measurements, coupled with an analytical quench model, were used to assess the impact of the maximum allowable voltage and temperature upon quench detection on quench protection, assuming the hot spot temperature is limited to < 300 K.
"Minimum input, maximum output, indeed!" Teaching Collocations ...
African Journals Online (AJOL)
Fifty-nine EFL college students participated in the study, and they received two 75-minute instructions between pre- and post-tests: one on the definition of collocation and its importance, and the other on the skill of looking up collocational information in the Naver Dictionary — an English–Korean online dictionary. During ...
Maximum/minimum asymmetric rod detection
International Nuclear Information System (INIS)
Huston, J.T.
1990-01-01
This patent describes a system for determining the relative position of each control rod within a control rod group in a nuclear reactor. The control rod group has at least three control rods therein. It comprises: means for producing a signal representative of a position of each control rod within the control rod group in the nuclear reactor; means for establishing a signal representative of the highest position of a control rod in the control rod group in the nuclear reactor; means for establishing a signal representative of the lowest position of a control rod in the control rod group in the nuclear reactor; means for determining a difference between the signal representative of the position of the highest control rod and the signal representative of the position of the lowest control rod; means for establishing a predetermined limit for the difference between the signal representative of the position of the highest control rod and the signal representative of the position of the lowest control rod; and means for comparing the difference between the signals with the predetermined limit. The comparing means produces an output signal when the difference between the signals exceeds the predetermined limit.
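The comparison logic the claim describes reduces to a max/min spread check against a limit; a minimal sketch (the function name, step units and limit value are illustrative, not from the patent):

```python
def rod_asymmetry_alarm(positions, limit):
    """Return (difference, alarm) for a control rod group.

    positions: rod positions (e.g., steps withdrawn); a group has
    at least three rods. limit: maximum allowed spread between the
    highest and lowest rod before the output (alarm) signal fires.
    """
    if len(positions) < 3:
        raise ValueError("a control rod group has at least three rods")
    highest = max(positions)   # signal for the highest rod position
    lowest = min(positions)    # signal for the lowest rod position
    difference = highest - lowest
    return difference, difference > limit

# Example: three rods at 120, 118 and 97 steps with a 20-step limit.
diff, alarm = rod_asymmetry_alarm([120, 118, 97], limit=20)
# diff = 23, alarm = True (spread exceeds the predetermined limit)
```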
Energy Technology Data Exchange (ETDEWEB)
McKenna-Lawlor, S.M.P. (Saint Patrick's Coll., Maynooth (Ireland)); Afonin, V.V.; Gringauz, K.I. (AN SSSR, Moscow (USSR). Space Research Inst.) (and others)
Twin telescope particle detector systems SLED-1 and SLED-2, with the capability of monitoring electron and ion fluxes within an energy range spanning approximately 30 keV to a few megaelectron volts, were individually launched on the two spacecraft (Phobos-2 and Phobos-1, respectively) of the Soviet Phobos Mission to Mars and its moons in July 1988. A short description of the SLED instrument and a preliminary account of representative solar-related particle enhancements recorded by SLED-1 and SLED-2 during the Cruise Phase, and by SLED-1 in the near Martian environment (within the interval 25 July 1988-26 March 1989) are presented. These observations were made while the interplanetary medium was in the course of changing over from solar minimum- to solar maximum-dominated conditions and examples are presented of events associated with each of these phenomenological states. (author).
Svendsen, Jon C.; Tirsgaard, Bjørn; Cordero, Gerardo A.; Steffensen, John F.
2015-01-01
Intraspecific variation and trade-off in aerobic and anaerobic traits remain poorly understood in aquatic locomotion. Using gilthead sea bream (Sparus aurata) and Trinidadian guppy (Poecilia reticulata), both axial swimmers, this study tested four hypotheses: (1) gait transition from steady to unsteady (i.e., burst-assisted) swimming is associated with anaerobic metabolism evidenced as excess post exercise oxygen consumption (EPOC); (2) variation in swimming performance (critical swimming speed; Ucrit) correlates with metabolic scope (MS) or anaerobic capacity (i.e., maximum EPOC); (3) there is a trade-off between maximum sustained swimming speed (Usus) and minimum cost of transport (COTmin); and (4) variation in Usus correlates positively with optimum swimming speed (Uopt; i.e., the speed that minimizes energy expenditure per unit of distance traveled). Data collection involved swimming respirometry and video analysis. Results showed that anaerobic swimming costs (i.e., EPOC) increase linearly with the number of bursts in S. aurata, with each burst corresponding to 0.53 mg O2 kg−1. Data are consistent with a previous study on striped surfperch (Embiotoca lateralis), a labriform swimmer, suggesting that the metabolic cost of burst swimming is similar across various types of locomotion. There was no correlation between Ucrit and MS or anaerobic capacity in S. aurata indicating that other factors, including morphological or biomechanical traits, influenced Ucrit. We found no evidence of a trade-off between Usus and COTmin. In fact, data revealed significant negative correlations between Usus and COTmin, suggesting that individuals with high Usus also exhibit low COTmin. Finally, there were positive correlations between Usus and Uopt. Our study demonstrates the energetic importance of anaerobic metabolism during unsteady swimming, and provides intraspecific evidence that superior maximum sustained swimming speed is associated with superior swimming economy and
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for Remote Area Power Supply systems of relatively small rating. The advantages at larger temperature variations and for systems of larger power rating are much greater. Other advantages include optimal sizing and system monitoring and control.
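The hill-climbing (perturb-and-observe) tracking described above can be sketched in a few lines; the function names, step size and toy power curve below are illustrative assumptions, not the paper's microprocessor implementation:

```python
def mppt_hill_climb(power_at, duty, step=0.01, iterations=200):
    """Perturb-and-observe (hill-climbing) maximum power point tracking.

    power_at: maps the converter duty cycle to measured output power.
    The operating point is perturbed repeatedly; the direction of the
    perturbation is kept while power rises and reversed when it drops,
    so the duty cycle oscillates tightly around the maximum power point.
    """
    direction = 1.0
    last_power = power_at(duty)
    for _ in range(iterations):
        duty = min(1.0, max(0.0, duty + direction * step))
        power = power_at(duty)
        if power < last_power:      # overshot the peak: reverse direction
            direction = -direction
        last_power = power
    return duty

# Toy PV curve with its maximum power point at duty = 0.6.
duty = mppt_hill_climb(lambda d: 100 - 400 * (d - 0.6) ** 2, duty=0.2)
# duty settles within one perturbation step of 0.6
```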
Directory of Open Access Journals (Sweden)
Carlos Rogério de Mello
2010-04-01
Full Text Available Maximum discharges are hydrological quantities applied to hydraulic structure design, while minimum discharges are used to evaluate water availability in hydrographic basins and the behavior of subterranean flow. This study aimed to construct statistical confidence intervals for maximum and minimum annual daily discharges and to relate them to the physiographic characteristics of the six largest basins of the Alto Rio Grande region, State of Minas Gerais, upstream of the UHE-Camargos/CEMIG reservoir. The Gumbel and Gamma probability distributions were fitted, respectively, to the historical series of maximum and minimum discharges, using maximum likelihood estimators. The confidence intervals constitute an important tool for better understanding and estimating the discharges, and are influenced by the geological characteristics of the basins. Based on them, it was found that the Alto Rio Grande region comprises two distinct areas: the first, covering the Aiuruoca, Carvalhos and Bom Jardim basins, showed the greatest maximum and minimum discharges, implying a potential for more significant floods and greater water availability; the second, associated with the F. Laranjeiras, Madre de Deus and Andrelândia basins, showed the least water availability.
Estimating minimum and maximum air temperature using MODIS ...
Indian Academy of Sciences (India)
in a wide range of applications in areas of ecology, hydrology ... stations, thus attracting researchers to make use ... simpler because of the lack of solar radiation effect .... water from the snow packed Himalayan region to ... tribution System (LAADS) webdata archive cen- ..... ing due to greenhouse gases is different for the air.
Kernel maximum autocorrelation factor and minimum noise fraction transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...
Maximum nonlocality and minimum uncertainty using magic states
Howard, Mark
2015-04-01
We prove that magic states from the Clifford hierarchy give optimal solutions for tasks involving nonlocality and entropic uncertainty with respect to Pauli measurements. For both the nonlocality and uncertainty tasks, stabilizer states are the worst possible pure states, so our solutions have an operational interpretation as being highly nonstabilizer. The optimal strategy for a qudit version of the Clauser-Horne-Shimony-Holt game in prime dimensions is achieved by measuring maximally entangled states that are isomorphic to single-qudit magic states. These magic states have an appealingly simple form, and our proof shows that they are "balanced" with respect to all but one of the mutually unbiased stabilizer bases. Of all equatorial qudit states, magic states minimize the average entropic uncertainties for collision entropy and also, for small prime dimensions, min-entropy, a fact that may have implications for cryptography.
Zero forcing parameters and minimum rank problems
Barioli, F.; Barrett, W.; Fallat, S.M.; Hall, H.T.; Hogben, L.; Shader, B.L.; Driessche, van den P.; Holst, van der H.
2010-01-01
The zero forcing number Z(G), which is the minimum number of vertices in a zero forcing set of a graph G, is used to study the maximum nullity/minimum rank of the family of symmetric matrices described by G. It is shown that for a connected graph of order at least two, no vertex is in every zero
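For small graphs, Z(G) can be computed directly from its definition by brute force; the sketch below (graph encodings and names are illustrative) applies the standard color-change rule — a black vertex with exactly one white neighbor forces that neighbor black:

```python
from itertools import combinations

def forces_all(adj, start):
    """Apply the color-change rule until no further forces are possible."""
    black = set(start)
    changed = True
    while changed:
        changed = False
        for v in list(black):
            white = [u for u in adj[v] if u not in black]
            if len(white) == 1:          # exactly one white neighbor is forced
                black.add(white[0])
                changed = True
    return len(black) == len(adj)

def zero_forcing_number(adj):
    """Brute-force Z(G): smallest initial black set that forces all of G."""
    n = len(adj)
    for k in range(1, n + 1):
        for start in combinations(adj, k):
            if forces_all(adj, start):
                return k
    return n

# Path P4: Z = 1 (one endpoint forces the whole path); cycle C4: Z = 2.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```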
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data, that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
Minimum Distance Estimation on Time Series Analysis With Little Data
National Research Council Canada - National Science Library
Tekin, Hakan
2001-01-01
.... Minimum distance estimation has been demonstrated to outperform standard approaches, including maximum likelihood estimators and least squares, in estimating statistical distribution parameters with very small data sets...
How unprecedented a solar minimum was it?
Russell, C T; Jian, L K; Luhmann, J G
2013-05-01
The end of the last solar cycle was at least 3 years late, and to date, the new solar cycle has seen mainly weaker activity since the onset of the rising phase toward the new solar maximum. The newspapers now even report when auroras are seen in Norway. This paper is an update of our review paper written during the deepest part of the last solar minimum [1]. We update the records of solar activity and its consequent effects on the interplanetary fields and solar wind density. The arrival of solar minimum allows us to use two techniques that predict sunspot maximum from readings obtained at solar minimum. It is clear that the Sun is still behaving strangely compared to the last few solar minima even though we are well beyond the minimum phase of the cycle 23-24 transition.
Fields, Gary S.; Kanbur, Ravi
2005-01-01
Textbook analysis tells us that in a competitive labor market, the introduction of a minimum wage above the competitive equilibrium wage will cause unemployment. This paper makes two contributions to the basic theory of the minimum wage. First, we analyze the effects of a higher minimum wage in terms of poverty rather than in terms of unemployment. Second, we extend the standard textbook model to allow for income sharing between the employed and the unemployed. We find that there are situations...
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
International Nuclear Information System (INIS)
Dam, H. van; Leege, P.F.A. de
1987-01-01
An analysis is presented of thermal systems with minimum critical mass, based on the use of materials with optimum neutron moderating and reflecting properties. The optimum fissile material distributions in the systems are obtained by calculations with standard computer codes, extended with a routine for flat fuel importance search. It is shown that in the minimum critical mass configuration a considerable part of the fuel is positioned in the reflector region. For 239Pu a minimum critical mass of 87 g is found, which is the lowest value reported hitherto. (author)
Minimum entropy production principle
Czech Academy of Sciences Publication Activity Database
Maes, C.; Netočný, Karel
2013-01-01
Roč. 8, č. 7 (2013), s. 9664-9677 ISSN 1941-6016 Institutional support: RVO:68378271 Keywords : MINEP Subject RIV: BE - Theoretical Physics http://www.scholarpedia.org/article/Minimum_entropy_production_principle
Directory of Open Access Journals (Sweden)
Yunfeng Shan
2008-01-01
Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
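By construction, the maximum gene-support tree is simply the most frequent topology among the single-gene trees; a minimal sketch (the Newick labels and counts are illustrative, not data from the study):

```python
from collections import Counter

def maximum_gene_support_tree(gene_trees):
    """Return the most frequent topology among single-gene trees.

    gene_trees: one (hashable) topology label per orthologous gene,
    e.g. a canonical Newick string produced by one tree-building
    algorithm (MP, ME, ML or NJ) applied to each gene.
    Returns (topology, supporting gene count, support fraction).
    """
    counts = Counter(gene_trees)
    topology, support = counts.most_common(1)[0]
    return topology, support, support / len(gene_trees)

# Toy example: 6 of 10 gene trees recover the same topology.
trees = ["((A,B),C);"] * 6 + ["((A,C),B);"] * 3 + ["((B,C),A);"]
result = maximum_gene_support_tree(trees)
# -> ("((A,B),C);", 6, 0.6)
```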
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the output of the accelerometer rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, costs less, and simplifies the analysis of data recorded in magnetic or electronic memory devices. The circuit can be used, for example, to record the accelerations to which commodities are subjected during transportation on trucks.
Approximating the minimum cycle mean
Directory of Open Access Journals (Sweden)
Krishnendu Chatterjee
2013-07-01
Full Text Available We consider directed graphs where each edge is labeled with an integer weight and study the fundamental algorithmic question of computing the value of a cycle with minimum mean weight. Our contributions are twofold: (1) First, we show that the algorithmic question is reducible in O(n^2) time to the problem of a logarithmic number of min-plus matrix multiplications of n-by-n matrices, where n is the number of vertices of the graph. (2) Second, when the weights are nonnegative, we present the first (1 + ε)-approximation algorithm for the problem; the running time of our algorithm is Õ(n^ω log^3(nW/ε) / ε), where O(n^ω) is the time required for the classic n-by-n matrix multiplication and W is the maximum value of the weights.
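As a concrete baseline, the minimum cycle mean can also be computed exactly with Karp's classic O(nm) dynamic program — not the min-plus reduction or the approximation scheme of the abstract; the sketch and graph below are illustrative:

```python
def min_cycle_mean(n, edges):
    """Karp's algorithm for the minimum mean-weight cycle.

    n: number of vertices (0..n-1); edges: list of (u, v, weight).
    d[k][v] holds the minimum weight of an edge progression of length
    k ending at v. Karp's theorem gives the minimum cycle mean as
    min over v of max over k < n of (d[n][v] - d[k][v]) / (n - k).
    Returns None if the graph is acyclic.
    """
    INF = float("inf")
    d = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        d[0][v] = 0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        if best is None or worst < best:
            best = worst
    return best

# Single cycle 0->1->2->0 with weights 1, 2, 3: mean (1+2+3)/3 = 2.
# min_cycle_mean(3, [(0, 1, 1), (1, 2, 2), (2, 0, 3)]) == 2.0
```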
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
Reference respiratory waveforms by minimum jerk model analysis
Energy Technology Data Exchange (ETDEWEB)
Anetai, Yusuke, E-mail: anetai@radonc.med.osaka-u.ac.jp; Sumida, Iori; Takahashi, Yutaka; Yagi, Masashi; Mizuno, Hirokazu; Ogawa, Kazuhiko [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Yamadaoka 2-2, Suita-shi, Osaka 565-0871 (Japan); Ota, Seiichi [Department of Medical Technology, Osaka University Hospital, Yamadaoka 2-15, Suita-shi, Osaka 565-0871 (Japan)
2015-09-15
Purpose: The CyberKnife® robotic surgery system has the ability to deliver radiation to a tumor subject to respiratory movements using Synchrony® mode with less than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joints, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motions are much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimum jerk model, which mathematically derives smoothness by means of jerk (the third derivative of position with respect to time and the derivative of acceleration, proportional to the time rate of change of force), was introduced to model a patient-specific respiratory motion wave that provides smooth motion tracking using CyberKnife®. To verify that patient-specific minimum jerk respiratory waves were tracked smoothly by Synchrony® mode, a tracking laser projection from CyberKnife® was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion followed three pattern waves (cosine, typical free breathing, and the minimum jerk theoretical wave model) for the clinically relevant superior–inferior direction from six volunteers assessed on the same node of the same isocentric plan. Results: Tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum peak amplitude of radial tracking discrepancy compared with that of the waveforms modeled by the cosine and typical free-breathing models by 22% and 35%, respectively, and provided smooth tracking in the radial direction. Motion tracking constancy as indicated by radial tracking discrepancy
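The abstract does not reproduce the waveform itself; the standard minimum-jerk profile (the unique quintic with zero velocity and acceleration at both endpoints) is presumably the underlying form. A sketch, with an illustrative breathing amplitude and period that are not values from the study:

```python
def minimum_jerk(t, T, x0=0.0, xf=1.0):
    """Minimum-jerk position profile between x0 and xf over duration T.

    The unique 5th-order polynomial minimizing integrated squared jerk
    with zero velocity and acceleration at both endpoints:
        x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5),  s = t / T
    """
    s = t / T
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

# A smooth reference "inhale" of 12 mm amplitude over a 4 s half-cycle:
samples = [minimum_jerk(t, 4.0, 0.0, 12.0) for t in (0.0, 1.0, 2.0, 3.0, 4.0)]
# endpoints are exactly 0 and 12 mm; the midpoint is 6 mm by symmetry
```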
Weighted Maximum-Clique Transversal Sets of Graphs
Chuan-Min Lee
2011-01-01
A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...
Rising above the Minimum Wage.
Even, William; Macpherson, David
An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
Minimum critical crack depths in pressure vessels guidelines for nondestructive testing
International Nuclear Information System (INIS)
Crossley, M.R.; Townley, C.H.A.
1983-09-01
Estimates of the minimum critical depths which can be expected in high quality vessels designed to certain British and American Code rules are given. A simple means of allowing for fatigue crack growth in service is included. The data which are presented can be used to decide what sensitivity and what reporting levels should be employed during an ultrasonic inspection of a pressure vessel. It is emphasised that the minimum crack depths are those which would be relevant to a vessel in which the material is stressed to its maximum permitted value during operation. Stresses may, in practice, be significantly less than this. Less restrictive inspection standards may be established, if it were considered worthwhile to carry out a detailed stress analysis of the particular vessel under examination. (author)
Minimum Variance Portfolios in the Brazilian Equity Market
Directory of Open Access Journals (Sweden)
Alexandre Rubesam
2013-03-01
Full Text Available We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
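The unconstrained global minimum variance portfolio has the closed form w = C⁻¹1 / (1ᵀC⁻¹1); a self-contained sketch of that computation (the toy covariance matrix is illustrative, not data from the paper, and real applications add long-only or 130/30 constraints):

```python
def min_variance_weights(cov):
    """Global minimum variance portfolio: w = C^{-1} 1 / (1' C^{-1} 1).

    cov: covariance matrix as a list of lists. Solves C x = 1 by
    Gaussian elimination with partial pivoting, then normalizes the
    solution so the weights sum to one.
    """
    n = len(cov)
    a = [row[:] + [1.0] for row in cov]          # augmented system [C | 1]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))  # partial pivoting
        a[i], a[p] = a[p], a[i]
        for r in range(n):
            if r != i:
                f = a[r][i] / a[i][i]
                a[r] = [x - f * y for x, y in zip(a[r], a[i])]
    x = [a[i][n] / a[i][i] for i in range(n)]
    s = sum(x)
    return [w / s for w in x]

# Two uncorrelated assets with variances 0.04 and 0.01: the weights are
# inversely proportional to variance, giving [0.2, 0.8].
w = min_variance_weights([[0.04, 0.0], [0.0, 0.01]])
```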
Minimum Error Entropy Classification
Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A
2013-01-01
This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.
Do Minimum Wages Fight Poverty?
David Neumark; William Wascher
1997-01-01
The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Minimum Tracking Error Volatility
Luca RICCETTI
2010-01-01
Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...
Employment effects of minimum wages
Neumark, David
2014-01-01
The potential benefits of higher minimum wages come from the higher wages for affected workers, some of whom are in low-income families. The potential downside is that a higher minimum wage may discourage employers from using the low-wage, low-skill workers that minimum wages are intended to help. Research findings are not unanimous, but evidence from many countries suggests that minimum wages reduce the jobs available to low-skill workers.
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.
2010-02-08
... capital and reserve requirements to be issued by order or regulation with respect to a product or activity... minimum capital requirements. Section 1362(a) establishes a minimum capital level for the Enterprises... entities required under this section. The Bank Act's current minimum capital requirements apply to...
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
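The Mean Energy Model discussed above has a concrete computational face: maximizing entropy subject to a mean-energy (moment) constraint yields the Gibbs form p_i ∝ exp(−βE_i). As an illustrative sketch only (the state energies and target mean below are invented, not from the paper), β can be found by bisection on the mean:

```python
import math

def maxent_distribution(energies, mean_energy, tol=1e-10):
    """Maximum-entropy distribution over finite states under a mean-energy
    constraint: p_i proportional to exp(-beta * E_i) (Gibbs form). beta is
    located by bisection so that sum(p_i * E_i) matches mean_energy."""
    def mean_for(beta):
        weights = [math.exp(-beta * e) for e in energies]
        z = sum(weights)
        return sum(w * e for w, e in zip(weights, energies)) / z
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # mean energy is a decreasing function of beta
        if mean_for(mid) > mean_energy:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]

# Example: three states with energies 0, 1, 2 and target mean energy 0.5
p = maxent_distribution([0.0, 1.0, 2.0], 0.5)
```

The returned distribution automatically has the exponential shape the principle predicts: lower-energy states receive more probability whenever the target mean is below the uniform average.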
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay (1997) to functional maximum autocorrelation factors (MAF) (Switzer 1985; Larsen 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between ... Conclusions. MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially ...
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of the regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
Low Streamflow Forcasting using Minimum Relative Entropy
Cui, H.; Singh, V. P.
2013-12-01
Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation in such a manner that the relative entropy of the underlying process is minimized, so that time series data can be forecasted. Different prior estimates, such as uniform, exponential and Gaussian assumptions, are used to estimate the spectral density depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectrum of low streamflow series with higher resolution than conventional methods. The forecasted streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
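The autocorrelation-extension idea behind entropy-based spectral forecasting can be sketched in its simplest form: fit an autoregressive model from the sample autocorrelation and extrapolate. The AR(1) toy below is a stand-in for the paper's method, not a reimplementation of it, and the damped example series is invented:

```python
def ar1_forecast(series, steps):
    """Fit an AR(1) model from the lag-1 sample autocorrelation (the
    simplest autocorrelation-extension view of entropy-based spectral
    methods) and forecast deviations decaying toward the mean."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    var = sum(d * d for d in dev)
    phi = sum(dev[i] * dev[i + 1] for i in range(n - 1)) / var
    forecasts = []
    last = dev[-1]
    for _ in range(steps):
        last *= phi       # each step shrinks the deviation by phi
        forecasts.append(mean + last)
    return phi, forecasts

# Deterministic example: a damped series resembling a streamflow recession
data = [100 * 0.8**k for k in range(12)]
phi, preds = ar1_forecast(data, 3)
```

Burg's MESA generalizes this to AR(p) with coefficients chosen so the extended autocorrelation maximizes entropy; the minimum relative entropy method further biases that extension toward a chosen prior.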
Measurement of Minimum Bias Observables with ATLAS
Kvita, Jiri; The ATLAS collaboration
2017-01-01
The modelling of Minimum Bias (MB) events is a crucial ingredient for learning about the description of soft QCD processes. It also has significant relevance for the simulation of the environment at the LHC with many concurrent pp interactions ("pileup"). The ATLAS collaboration has provided new measurements of the inclusive charged-particle multiplicity and its dependence on transverse momentum and pseudorapidity in special data sets with low LHC beam currents, recorded at centre-of-mass energies of 8 TeV and 13 TeV. The measurements cover a wide spectrum using charged-particle selections with minimum transverse momentum of both 100 MeV and 500 MeV and in various phase space regions of low and high charged-particle multiplicities.
Comments on the 'minimum flux corona' concept
International Nuclear Information System (INIS)
Antiochos, S.K.; Underwood, J.H.
1978-01-01
Hearn's (1975) models of the energy balance and mass loss of stellar coronae, based on a 'minimum flux corona' concept, are critically examined. First, it is shown that the neglect of the relevant length scales for coronal temperature variation leads to an inconsistent computation of the total energy flux F. The stability arguments upon which the minimum flux concept is based are shown to be fallacious. Errors in the computation of the stellar wind contribution to the energy budget are identified. Finally we criticize Hearn's (1977) suggestion that the model, with a value of the thermal conductivity modified by the magnetic field, can explain the difference between solar coronal holes and quiet coronal regions. (orig.)
Distribution of phytoplankton groups within the deep chlorophyll maximum
Latasa, Mikel; Cabello, Ana María; Moran, Xose Anxelu G.; Massana, Ramon; Scharek, Renate
2016-01-01
and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer
Split-plot fractional designs: Is minimum aberration enough?
DEFF Research Database (Denmark)
Kulahci, Murat; Ramirez, Jose; Tobias, Randy
2006-01-01
Split-plot experiments are commonly used in industry for product and process improvement. Recent articles on designing split-plot experiments concentrate on minimum aberration as the design criterion. Minimum aberration has been criticized as a design criterion for completely randomized fractional factorial designs, and alternative criteria, such as the maximum number of clear two-factor interactions, are suggested (Wu and Hamada (2000)). The need for alternatives to minimum aberration is even more acute for split-plot designs. In a standard split-plot design, there are several types of two-factor interactions ... for completely randomized designs. Consequently, we provide a modified version of the maximum number of clear two-factor interactions design criterion to be used for split-plot designs.
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...
2017-08-21
number of neurons. Time is discretized and we assume any neuron can spike no more than once in a time bin. We have ν ≤ µ because ν is the probability of a...
Rosenfeld, Adar; Dorman, Michael; Schwartz, Joel; Novack, Victor; Just, Allan C; Kloog, Itai
2017-11-01
Meteorological stations measure air temperature (Ta) accurately with high temporal resolution, but usually suffer from limited spatial resolution due to their sparse distribution across rural, undeveloped or less populated areas. Remote sensing satellite-based measurements provide daily surface temperature (Ts) data in high spatial and temporal resolution and can improve the estimation of daily Ta. In this study we developed spatiotemporally resolved models which allow us to predict three daily parameters: Ta Max (day time), 24h mean, and Ta Min (night time) on a fine 1km grid across the state of Israel. We used and compared both the Aqua and Terra MODIS satellites. We used linear mixed effect models, IDW (inverse distance weighted) interpolations and thin plate splines (using a smooth nonparametric function of longitude and latitude) to first calibrate between Ts and Ta in those locations where we have available data for both and used that calibration to fill in neighboring cells without surface monitors or missing Ts. Out-of-sample ten-fold cross validation (CV) was used to quantify the accuracy of our predictions. Our model performance was excellent for both days with and without available Ts observations for both Aqua and Terra (CV Aqua R² results for min 0.966, mean 0.986, and max 0.967; CV Terra R² results for min 0.965, mean 0.987, and max 0.968). Our research shows that daily min, mean and max Ta can be reliably predicted using daily MODIS Ts data even across Israel, with high accuracy even for days without Ta or Ts data. These predictions can be used as three separate Ta exposures in epidemiology studies for better diurnal exposure assessment.
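The IDW step used above to fill cells without monitors admits a compact sketch. The station coordinates and temperatures below are hypothetical and this is not the authors' code, only the standard inverse-distance-weighting formula:

```python
def idw(points, target, power=2.0):
    """Inverse-distance-weighted interpolation: estimate a value at
    `target` from (x, y, value) station tuples, weighting each station
    by 1 / distance**power."""
    num = den = 0.0
    for x, y, value in points:
        d2 = (x - target[0])**2 + (y - target[1])**2
        if d2 == 0.0:
            return value          # target coincides with a station
        w = 1.0 / d2**(power / 2.0)
        num += w * value
        den += w
    return num / den

stations = [(0.0, 0.0, 20.0), (2.0, 0.0, 24.0)]  # two hypothetical Ta monitors
mid = idw(stations, (1.0, 0.0))   # equidistant point: plain average, 22.0
```

Points closer to a station are pulled toward that station's value, which is why IDW is a reasonable gap-filler between sparse monitors but cannot extrapolate trends the way the mixed-effects calibration can.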
Relationship between the minimum and maximum temperature thresholds for development in insects
Czech Academy of Sciences Publication Activity Database
Dixon, Anthony F. G.; Honěk, A.; Keil, P.; Kotela, M.A.A.; Šizling, A. L.; Jarošík, Vojtěch
2009-01-01
Vol. 23, No. 2 (2009), pp. 257-264. ISSN 0269-8463. R&D Projects: GA MŠk(CZ) LC06073. Institutional research plan: CEZ:AV0Z60870520; CEZ:AV0Z60050516. Keywords: distribution; insects; thermal requirements for development; thermal window; thermal tolerance range; ectotherms. Subject RIV: EG - Zoology. Impact factor: 4.546, year: 2009
ANALYTICAL ESTIMATION OF MINIMUM AND MAXIMUM TIME EXPENDITURES OF PASSENGERS AT AN URBAN ROUTE STOP
Directory of Open Access Journals (Sweden)
Gorbachov, P.
2013-01-01
Full Text Available This paper deals with the problem of defining the average time spent by passengers waiting for vehicles at urban stops, and presents the results of analytical modelling of this value when the timetable is unknown to the passengers, for two options of vehicle traffic management on the given route.
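A standard renewal-theory result underlies such waiting-time models: a passenger arriving uniformly at random waits E[W] = E[H²]/(2E[H]) over headways H, which exceeds half the mean headway whenever service is irregular (the inspection paradox). A minimal sketch with invented headways, not taken from the paper:

```python
def expected_wait_random_arrivals(headways):
    """Mean wait of a passenger arriving uniformly at random over the
    service period, by the renewal formula E[W] = E[H^2] / (2 E[H])."""
    eh = sum(headways) / len(headways)
    eh2 = sum(h * h for h in headways) / len(headways)
    return eh2 / (2.0 * eh)

regular = [10, 10, 10, 10]    # perfectly even 10-minute headways
irregular = [2, 18, 2, 18]    # same mean headway, bunched service
w_reg = expected_wait_random_arrivals(regular)     # 5.0 minutes
w_irr = expected_wait_random_arrivals(irregular)   # 8.2 minutes
```

This gives the two natural bounds the abstract alludes to: the minimum expenditure (half the mean headway, for perfectly regular service or schedule-aware passengers) and larger values as headway variance grows.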
On the maximum and minimum of two modified Gamma-Gamma variates with applications
Al-Quwaiee, Hessa; Ansari, Imran Shafique; Alouini, Mohamed-Slim
2014-01-01
on these new results to present the performance analysis of (i) a dual-branch free-space optical selection combining diversity undergoing independent but not necessarily identically distributed Gamma-Gamma fading under the impact of pointing errors and of (ii
Maximum tech, minimum time. Response and cleanup of the Fidalgo Bay oil spill
International Nuclear Information System (INIS)
Pintler, L.R.
1991-01-01
A booster pump failure on a pipeline at Texaco's Anacortes refinery spilled more than 17,000 gallons of oil into Fidalgo Bay. A description is given of the spill control measures taken under Texaco's Spill Prevention and Control Countermeasures and facility contingency plans. The spill was addressed quickly, and containment booms were used to cordon off the spill. Vacuum trucks, rope mop machines and disk skimmers were used to collect the thickest concentrations of oil, and the oil and water collected was separated at the refinery's wastewater treatment centre. Nonwoven polypropylene sorbent pads, sweeps, booms and oil snares were used to clean up thinner concentrations of oil. Essential steps for a smooth spill response include the following: a comprehensive spill prevention and control countermeasures plan, training and regular drills and testing; immediate notification of appropriate regulatory agencies and company emergency response personnel; and the use of professional oil spill management contractors to assist in spill cleanup. 2 figs
The effect of land use change to maximum and minimum discharge in Cikapundung River Basin
Kuntoro, Arno Adi; Putro, Anton Winarto; Kusuma, M. Syahril B.; Natasaputra, Suardi
2017-11-01
Land use change has become an issue for many river basins in the world, including the Cikapundung River Basin in West Java. The Cikapundung River is one of the main water sources of the Bandung City water supply system. On the other hand, as one of the tributaries of the Citarum River, Cikapundung also contributes to flooding in the southern part of Bandung. Therefore, it is important to analyze the effect of land use change on Cikapundung river discharge, to maintain the reliability of the water supply system and to minimize flooding in the Bandung Basin. The land use map of the Cikapundung River in 2009 shows that residential area (49.7%) and mixed farming (42.6%) are the most dominant land use types, while dry agriculture (19.4%) and forest (21.8%) cover the rest. The effect of land use change in the Cikapundung River Basin is simulated using the Hydrological Simulation Program FORTRAN (HSPF) through three land use change scenarios: extreme, optimum, and existing. Using the calibrated parameters, simulation of the extreme land use change scenario, with a decrease of forest area by 77.7% and an increase of developed area by 57.0% from the existing condition, resulted in an increase of the Qmax/Qmin ratio from 5.24 to 6.10. Meanwhile, simulation of the optimum land use change scenario, with an expansion of forest area by 75.26% from the existing condition, resulted in a decrease of the Qmax/Qmin ratio from 5.24 to 4.14. Although the Qmax/Qmin ratio of Cikapundung is still relatively small, the simulation shows the importance of water resources analysis in providing a river health indicator as input for land use planning.
"A minimum of urbanism and a maximum of ruralism": the Cuban experience.
Gugler, J
1980-01-01
The case of Cuba provides social scientists with reasonably good information on urbanization policies and their implementation in 1 developing country committed to socialism. The demographic context is considered, and Cuban efforts to eliminate the rural-urban contradiction and to redefine the role of Havana are described. The impact of these policies is analyzed in terms of available data on urbanization patterns since January 1959 when the revolutionaries marched into Havana. Prerevolutionary urbanization trends are considered. Fertility in Cuba has declined simultaneously with mortality and even more rapidly. Projections assume a 1.85% annual growth rate, resulting in a population of nearly 15 million by the year 2000. Any estimate regarding the future trend in population growth must depend on prognosis of general living conditions and of specific government policies regarding contraception, abortion, female labor force participation, and child care facilities. If population growth in Cuba has been substantial, but less dramatic than that of many other developing countries, urban growth presents a similar picture. Cuba's highest rate of growth of the population living in urban centers with a population over 20,000, in any intercensal period during the 20th century, was 4.1%/year for 1943-1953. It dropped to 3.0% in the 1953-1970 period. Government policies achieved a measure of success in stemming the tide of rural-urban migration, but the aims of the revolutionary leadership went further. The objective was for urban dwellers to be involved in agriculture, and the living standards of the rural population were to be raised to approximate those of city dwellers. The goal of "urbanizing" the countryside found expression in a program designed to construct new small towns which could more easily be provided with services. A slowdown in the growth of Havana, and the concomitant weakening of its dominant position, was intended by the revolutionary leadership. 
Official policies have been enunciated that connect the reduction in the dominance of Havana with the slowdown in urban growth and the urbanization of the countryside. Evidence is presented which suggests achievements along all of these dimensions, but by 1970 they were, as yet, quite limited.
Wage and Labor Standards Administration (DOL), Washington, DC.
This report describes the 1966 amendments to the Fair Labor Standards Act and summarizes the findings of three 1969 studies of the economic effects of these amendments. The studies found that economic growth continued through the third phase of the amendments, beginning February 1, 1969, despite increased wage and hours restrictions for recently…
Gurung, Prabin
2015-01-01
The thesis was written in order to find workable ideas and techniques of ecotourism for sustainable development and to find out the importance of ecotourism. It illustrates how ecotourism can play a beneficial role for visitors and local people. The thesis was based on ecotourism and its impact; the case study was Sauraha and Chitwan National Park. How can ecotourism be fruitful to local residents and nature, and what are the drawbacks of ecotourism? Ecotourism also has negative impacts on both th...
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Directory of Open Access Journals (Sweden)
Petr Stehlík
2015-01-01
Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x′ (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
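The dependence on the time step can be seen numerically. The sketch below (my own illustration, with invented parameters, not the paper's proofs) runs an explicit discrete-time scheme for the lattice equation with the Nagumo bistable nonlinearity f(u) = u(1−u)(u−a); for a sufficiently small step the solution stays inside the invariant interval [0, 1], as the discrete maximum principle predicts:

```python
def step(u, k, f, dt):
    """One explicit discrete-time step of the lattice reaction-diffusion
    equation u_x' = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), with the two
    boundary values held fixed."""
    new = u[:]
    for x in range(1, len(u) - 1):
        new[x] = u[x] + dt * (k * (u[x - 1] - 2 * u[x] + u[x + 1]) + f(u[x]))
    return new

# Nagumo bistable nonlinearity with stable states 0 and 1
a = 0.3
f = lambda v: v * (1 - v) * (v - a)

u = [0.0] * 10 + [1.0] * 10          # a sharp front as initial data
for _ in range(200):
    u = step(u, k=1.0, f=f, dt=0.2)  # small dt: solution stays in [0, 1]
```

With a large time step the convex-combination structure of the update is lost (the coefficient 1 − 2·k·dt of u_x goes negative) and overshoots outside [0, 1] become possible, which is exactly the step-size restriction the paper quantifies.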
Strong Solar Control of Infrared Aurora on Jupiter: Correlation Since the Last Solar Maximum
Kostiuk, T.; Livengood, T. A.; Hewagama, T.
2009-01-01
Polar aurorae in Jupiter's atmosphere radiate throughout the electromagnetic spectrum from X ray through mid-infrared (mid-IR, 5 - 20 micron wavelength). Voyager IRIS data and ground-based spectroscopic measurements of Jupiter's northern mid-IR aurora, acquired since 1982, reveal a correlation between auroral brightness and solar activity that has not been observed in Jovian aurora at other wavelengths. Over nearly three solar cycles, Jupiter auroral ethane emission brightness and solar 10.7 cm radio flux and sunspot number are positively correlated with high confidence. Ethane line emission intensity varies over tenfold between low and high solar activity periods. Detailed measurements have been made using the GSFC HIPWAC spectrometer at the NASA IRTF since the last solar maximum, following the mid-IR emission through the declining phase toward solar minimum. An even more convincing correlation with solar activity is evident in these data. Current analyses of these results will be described, including planned measurements on polar ethane line emission scheduled through the rise of the next solar maximum beginning in 2009, with a steep gradient to a maximum in 2012. This work is relevant to the Juno mission and to the development of the Europa Jupiter System Mission. Results of observations at the Infrared Telescope Facility (IRTF) operated by the University of Hawaii under Cooperative Agreement no. NCC5-538 with the National Aeronautics and Space Administration, Science Mission Directorate, Planetary Astronomy Program. This work was supported by the NASA Planetary Astronomy Program.
Maximum power operation of interacting molecular motors
DEFF Research Database (Denmark)
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Why relevance theory is relevant for lexicography
DEFF Research Database (Denmark)
Bothma, Theo; Tarp, Sven
2014-01-01
This article starts by providing a brief summary of relevance theory in information science in relation to the function theory of lexicography, explaining the different types of relevance, viz. objective system relevance and the subjective types of relevance, i.e. topical, cognitive, situational ... that is very important for lexicography as well as for information science, viz. functional relevance. Since all lexicographic work is ultimately aimed at satisfying users’ information needs, the article then discusses why the lexicographer should take note of all these types of relevance when planning a new dictionary project, identifying new tasks and responsibilities of the modern lexicographer. The article furthermore discusses how relevance theory impacts on teaching dictionary culture and reference skills. By integrating insights from lexicography and information science, the article contributes to new...
Minimum Q Electrically Small Antennas
DEFF Research Database (Denmark)
Kim, O. S.
2012-01-01
Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions ... for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits the Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q.
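For context, the Chu lower bound referenced in the abstract is easy to evaluate. The formulas below are the standard single-mode form and McLean's equal-excitation dual-mode (TM+TE) form from the antenna literature, not expressions taken from this paper; combining the two mode types roughly halves the single-mode bound, which is why sub-Chu factors such as the reported 0.66 are possible:

```python
def chu_q_single_mode(ka):
    """McLean's form of the Chu lower bound on radiation Q for a single
    TM or TE spherical mode of an electrically small antenna."""
    return 1.0 / ka**3 + 1.0 / ka

def chu_q_dual_mode(ka):
    """Lower bound when TM1m and TE1m modes are excited equally (the
    circular-polarization case), roughly half the single-mode bound."""
    return 0.5 * (1.0 / ka**3 + 2.0 / ka)

ka = 0.254  # electrical size of the 4-arm spherical helix in the abstract
q_single = chu_q_single_mode(ka)   # about 65 for this ka
q_dual = chu_q_dual_mode(ka)
```

At ka = 0.254 the dual-mode bound is a bit over half the single-mode Chu bound, consistent with the abstract's statement that the measured Q of 0.66 times the (single-mode) Chu bound is still 1.25 times the true minimum Q.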
Feedback brake distribution control for minimum pitch
Tavernini, Davide; Velenis, Efstathios; Longo, Stefano
2017-06-01
The distribution of brake forces between front and rear axles of a vehicle is typically specified such that the same level of brake force coefficient is imposed at both front and rear wheels. This condition is known as 'ideal' distribution and it is required to deliver the maximum vehicle deceleration and minimum braking distance. For subcritical braking conditions, the deceleration demand may be delivered by different distributions between front and rear braking forces. In this research we show how to obtain the optimal distribution which minimises the pitch angle of a vehicle and hence enhances driver subjective feel during braking. A vehicle model including suspension geometry features is adopted. The problem of the minimum pitch brake distribution for a varying deceleration level demand is solved by means of a model predictive control (MPC) technique. To address the problem of the undesirable pitch rebound caused by a full-stop of the vehicle, a second controller is designed and implemented independently from the braking distribution in use. An extended Kalman filter is designed for state estimation and implemented in a high fidelity environment together with the MPC strategy. The proposed solution is compared with the reference 'ideal' distribution as well as another previous feed-forward solution.
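The 'ideal' distribution described above (the same brake force coefficient at both axles) amounts to braking each axle in proportion to its dynamic load, including the longitudinal load transfer at deceleration a = μg. A minimal sketch with invented mid-size-car parameters, purely illustrative and not the paper's MPC formulation:

```python
def ideal_brake_distribution(mu, mass, wheelbase, a_front, h_cg, g=9.81):
    """Front/rear brake forces for the 'ideal' distribution: each axle is
    braked in proportion to its dynamic load at deceleration a = mu * g,
    so both axles see the same brake force coefficient mu.
    a_front: CG-to-front-axle distance; h_cg: CG height."""
    a_rear = wheelbase - a_front
    transfer = mass * mu * g * h_cg / wheelbase   # load shifted to the front
    n_front = mass * g * a_rear / wheelbase + transfer
    n_rear = mass * g * a_front / wheelbase - transfer
    return mu * n_front, mu * n_rear

# Hypothetical vehicle: 1500 kg, 2.7 m wheelbase, CG 1.2 m behind front axle
f_front, f_rear = ideal_brake_distribution(
    mu=0.8, mass=1500.0, wheelbase=2.7, a_front=1.2, h_cg=0.55)
```

The two forces sum to μ·m·g, the maximum deceleration case; for subcritical demands the same total can be split differently, which is the freedom the paper exploits to minimize pitch.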
SS Cygni: The accretion disk in eruption and at minimum light
International Nuclear Information System (INIS)
Kiplinger, A.L.
1979-01-01
Absolute spectrophotometric observations of the dwarf nova SS Cygni have been obtained at maximum light, during the subsequent decline, and at minimum light. In order to provide a critical test of accretion disk theory, a model for a steady-state α-model accretion disk has been constructed which utilizes a grid of stellar energy distributions to synthesize the disk flux. Physical parameters for the accretion disk at maximum light are set by estimates of the intrinsic luminosity of the system that result from a desynthesis of a composite minimum light energy distribution. At maximum light, agreements between observational and theoretical continuum slopes and the Balmer jump are remarkably good. The model fails, however, during the eruption decline and at minimum light. It appears that the physical character of an accretion disk at minimum light must radically differ from the disk observed at maximum light
Fermat and the Minimum Principle
Indian Academy of Sciences (India)
Arguably, least action and minimum principles were offered or applied much earlier. This (or these) principle(s) is/are among the fundamental, basic, unifying or organizing ones used to describe a variety of natural phenomena. It considers the amount of energy expended in performing a given action to be the least required ...
Coupling between minimum scattering antennas
DEFF Research Database (Denmark)
Andersen, J.; Lessow, H.; Schjær-Jacobsen, Hans
1974-01-01
Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...
DEFF Research Database (Denmark)
Lioma, Christina; Larsen, Birger; Petersen, Casper
2016-01-01
train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared...... to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all....
Maximum Entropy in Drug Discovery
Directory of Open Access Journals (Sweden)
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally, or both to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
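The maximum entropy principle mentioned above can be made concrete with a small sketch. Everything below (the finite support, the mean constraint, and the bisection solver) is our own illustrative assumption, not taken from the article: given a finite set of outcomes and a measured mean, the least-biased distribution is the exponential-family member that matches that mean.

```python
import math

def maxent_distribution(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution on a finite support with a fixed mean.
    The maxent solution has the exponential form p_i ∝ exp(lam * x_i);
    we find lam by bisection on the mean constraint."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0  # mean_for is monotone in lam, so bisection works
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# least-biased distribution over outcomes 0..5 whose mean is 1.5
p = maxent_distribution([0, 1, 2, 3, 4, 5], 1.5)
```

When the target mean equals the unconstrained average (here 2.5), the solution collapses to the uniform distribution, which is the "least bias" intuition in its purest form.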
Minimum airflow reset of single-duct VAV terminal boxes
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lower overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE standard 62.1 and maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and
Quantum mechanics the theoretical minimum
Susskind, Leonard
2014-01-01
From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanics. Quantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman’s crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.
Minimum resolvable power contrast model
Qian, Shuai; Wang, Xia; Zhou, Jingjing
2018-01-01
Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, whether they are used alone or in joint assessment, they cannot intuitively describe the overall performance of the system. Therefore, an index is proposed to reflect the comprehensive system performance: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model without human eyes. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and considers attenuation by the atmosphere, the optical imaging system, and the detector. Combining the signal-to-noise ratio and the MTF, the Minimum Resolvable Radiation Performance Contrast is obtained. Finally, the detection probability model of MRP is given.
Understanding the Minimum Wage: Issues and Answers.
Employment Policies Inst. Foundation, Washington, DC.
This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...
A generic statistical methodology to predict the maximum pit depth of a localized corrosion process
International Nuclear Information System (INIS)
Jarrah, A.; Bigerelle, M.; Guillemot, G.; Najjar, D.; Iost, A.; Nianga, J.-M.
2011-01-01
Highlights: → We propose a methodology to predict the maximum pit depth in a corrosion process. → Generalized Lambda Distribution and the Computer Based Bootstrap Method are combined. → GLD fit a large variety of distributions both in their central and tail regions. → Minimum thickness preventing perforation can be estimated with a safety margin. → Considering its applications, this new approach can help to size industrial pieces. - Abstract: This paper outlines a new methodology to predict accurately the maximum pit depth related to a localized corrosion process. It combines two statistical methods: the Generalized Lambda Distribution (GLD), to determine a model of distribution fitting with the experimental frequency distribution of depths, and the Computer Based Bootstrap Method (CBBM), to generate simulated distributions equivalent to the experimental one. In comparison with conventionally established statistical methods that are restricted to the use of inferred distributions constrained by specific mathematical assumptions, the major advantage of the methodology presented in this paper is that both the GLD and the CBBM enable a statistical treatment of the experimental data without making any preconceived choice either about the unknown theoretical parent underlying distribution of pit depth which characterizes the global corrosion phenomenon or about the unknown associated theoretical extreme value distribution which characterizes the deepest pits. Considering an experimental distribution of depths of pits produced on an aluminium sample, estimations of maximum pit depth using a GLD model are compared to similar estimations based on the usual Gumbel and Generalized Extreme Value (GEV) methods proposed in the corrosion engineering literature. The GLD approach is shown to have smaller bias and dispersion in the estimation of the maximum pit depth than the Gumbel approach, both for its realization and mean. This leads to comparing the GLD approach to the GEV one
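A minimal sketch of the bootstrap side of such an approach (our own simplification, not the paper's CBBM/GLD pipeline; the depth values below are synthetic): resample the measured pit depths with replacement, record each resample's maximum, and take an upper quantile of those maxima as a conservative estimate of the deepest pit.

```python
import random

def bootstrap_max_depth(depths, n_boot=2000, quantile=0.95, seed=0):
    """Bootstrap estimate of the deepest pit: resample the measured
    depths with replacement, record each resample's maximum, and
    return the requested quantile of the resulting maxima."""
    rng = random.Random(seed)
    n = len(depths)
    maxima = sorted(
        max(rng.choice(depths) for _ in range(n)) for _ in range(n_boot)
    )
    return maxima[int(quantile * (n_boot - 1))]

depths = [12.1, 15.3, 9.8, 14.0, 18.7, 11.2, 16.5, 13.3, 10.9, 17.1]  # synthetic, µm
estimate = bootstrap_max_depth(depths)
```

Note the limitation this toy version shares with any naive resampling of extremes: it cannot exceed the largest observed depth, which is precisely why the paper pairs the bootstrap with a fitted distribution (the GLD) whose tail can be extrapolated.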
Maximum stellar iron core mass
Indian Academy of Sciences (India)
Journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. F W Giacobbe, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr]
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always brings a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
Minimum ionizing particle detection using amorphous silicon diodes
Energy Technology Data Exchange (ETDEWEB)
Xi, J.; Hollingsworth, R.E.; Buitrago, R.H. (Glasstech Solar, Inc., Wheat Ridge, CO (USA)); Oakley, D.; Cumalat, J.P.; Nauenberg, U. (Colorado Univ., Boulder (USA). Dept. of Physics); McNeil, J.A. (Colorado School of Mines, Golden (USA). Dept. of Physics); Anderson, D.F. (Fermi National Accelerator Lab., Batavia, IL (USA)); Perez-Mendez, V. (Lawrence Berkeley Lab., CA (USA))
1991-03-01
Hydrogenated amorphous silicon pin diodes have been used to detect minimum ionizing electrons with a pulse height signal-to-noise ratio exceeding 3. A distinct signal was seen for shaping times from 100 to 3000 ns. The devices used had a 54 μm thick intrinsic layer and an active area of 0.1 cm². The maximum signal was 3200 electrons with a noise width of 950 electrons for a shaping time of 250 ns. (orig.)
Applicability of the minimum entropy generation method for optimizing thermodynamic cycles
Institute of Scientific and Technical Information of China (English)
Cheng Xue-Tao; Liang Xin-Gang
2013-01-01
Entropy generation is often used as a figure of merit in thermodynamic cycle optimizations. In this paper, it is shown that the applicability of the minimum entropy generation method to optimizing output power is conditional. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power when the total heat into the system of interest is not prescribed. For the cycles whose working medium is heated or cooled by streams with prescribed inlet temperatures and prescribed heat capacity flow rates, it is theoretically proved that both the minimum entropy generation rate and the minimum entropy generation number correspond to the maximum output power when the virtual entropy generation induced by dumping the used streams into the environment is considered. However, the minimum principle of entropy generation is not tenable in the case that the virtual entropy generation is not included, because the total heat into the system of interest is not fixed. An irreversible Carnot cycle and an irreversible Brayton cycle are analysed. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power if the heat into the system of interest is not prescribed.
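The claim that minimum entropy generation need not coincide with maximum output power can be checked numerically with a toy endoreversible Carnot engine. The model below is entirely our own construction (linear heat-transfer laws, unit conductances, arbitrary reservoir temperatures), much simpler than the paper's Carnot and Brayton analyses, but it exhibits the same effect: when the heat input is free to vary, entropy generation is minimized by drawing almost no heat, which is not where power peaks.

```python
def scan_endoreversible(T_H=600.0, T_L=300.0):
    """Endoreversible Carnot engine: heat enters through a conductance
    at rate Q_H = T_H - T1, leaves at Q_L = T2 - T_L, and the internal
    cycle is reversible (Q_H/T1 = Q_L/T2), which fixes T2 given T1.
    Scan T1 and record output power W and total entropy generation."""
    results = []
    t1 = T_L + 1.0
    while t1 < T_H:
        q_h = T_H - t1
        t2 = T_L * t1 / (2.0 * t1 - T_H)  # from the reversibility constraint
        q_l = t2 - T_L
        w = q_h - q_l
        s_gen = q_l / T_L - q_h / T_H  # net entropy change of the reservoirs
        results.append((t1, w, s_gen))
        t1 += 0.5
    return results

res = scan_endoreversible()
t1_max_power = max(res, key=lambda r: r[1])[0]  # Curzon-Ahlborn optimum
t1_min_sgen = min(res, key=lambda r: r[2])[0]   # nearly zero heat drawn
```

In this sketch the power-optimal point recovers the classic Curzon-Ahlborn efficiency 1 − √(T_L/T_H), while the minimum of entropy generation sits at the opposite end of the scan, where the engine barely runs.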
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determining the maximum water hammer is one of the most important technical and economic considerations for engineers and designers of pumping stations and conveyance pipelines. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining significance of ...
LCLS Maximum Credible Beam Power
International Nuclear Information System (INIS)
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed
The minimum yield in channeling
International Nuclear Information System (INIS)
Uguzzoni, A.; Gaertner, K.; Lulli, G.; Andersen, J.U.
2000-01-01
A first estimate of the minimum yield was obtained from Lindhard's theory, with the assumption of a statistical equilibrium in the transverse phase-space of channeled particles guided by a continuum axial potential. However, computer simulations have shown that this estimate should be corrected by a fairly large factor, C (approximately equal to 2.5), called the Barrett factor. We have shown earlier that the concept of a statistical equilibrium can be applied to understand this result, with the introduction of a constraint in phase-space due to planar channeling of axially channeled particles. Here we present an extended test of these ideas on the basis of computer simulation of the trajectories of 2 MeV α particles in Si. In particular, the gradual trend towards a full statistical equilibrium is studied. We also discuss the introduction of this modification of standard channeling theory into descriptions of the multiple scattering of channeled particles (dechanneling) by a master equation and show that the calculated minimum yields are in very good agreement with the results of a full computer simulation
International Nuclear Information System (INIS)
Kwee, Regina
2010-01-01
Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp collisions to perform first measurements on charged particle densities. These measurements will help to constrain various models describing phenomenologically soft parton interactions. Understanding the trigger efficiencies for different event types is therefore crucial to minimize any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.2 < |η| < 3.8 has been proven to select pp-collisions very efficiently, the Inner Detector based minimum bias trigger uses a random seed on filled bunches and central tracking detectors for the event selection. Both triggers were essential for the analysis of kinematic spectra of charged particles. Their performance and trigger efficiency measurements as well as studies on possible bias sources will be presented. We also highlight the advantage of these triggers for particle correlation analyses. (author)
Minimum deterrence and regional security. Section 2. Other regions
International Nuclear Information System (INIS)
Azikiwe, A.E.
1993-01-01
Compared to European political and security circumstances, minimum deterrence is less of an illusion in other regions, where weapon-free zones already exist. It will continue to be relevant to the security of other regions. Strategic arms limitation should be pursued vigorously in a constructive and pragmatic manner, bearing in mind the need to readjust to new global challenges. The Comprehensive Test Ban Treaty is the linchpin on which the Non-Proliferation Treaty rests.
Efficient heuristics for maximum common substructure search.
Englert, Péter; Kovács, Péter
2015-05-26
Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
Hydraulic Limits on Maximum Plant Transpiration
Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.
2011-12-01
Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
Relevant test set using feature selection algorithm for early detection ...
African Journals Online (AJOL)
The objective of feature selection is to find the most relevant features for classification. Thus, the dimensionality of the information will be reduced, which may improve classification accuracy. This paper proposes a minimum set of relevant questions that can be used for early detection of dyslexia. In this research, we ...
Generic maximum likely scale selection
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based ...
Isoflurane minimum alveolar concentration reduction by fentanyl.
McEwan, A I; Smith, C; Dyar, O; Goodman, D; Smith, L R; Glass, P S
1993-05-01
Isoflurane is commonly combined with fentanyl during anesthesia. Because of hysteresis between plasma and effect site, bolus administration of fentanyl does not accurately describe the interaction between these drugs. The purpose of this study was to determine the MAC reduction of isoflurane by fentanyl when both drugs had reached steady biophase concentrations. Seventy-seven patients were randomly allocated to receive either no fentanyl or fentanyl at several predetermined plasma concentrations. Fentanyl was administered using a computer-assisted continuous infusion device. Patients were also randomly allocated to receive a predetermined steady state end-tidal concentration of isoflurane. Blood samples for fentanyl concentration were taken at 10 min after initiation of the infusion and before and immediately after skin incision. A minimum of 20 min was allowed between the start of the fentanyl infusion and skin incision. The reduction in the MAC of isoflurane by the measured fentanyl concentration was calculated using a maximum likelihood solution to a logistic regression model. There was an initial steep reduction in the MAC of isoflurane by fentanyl, with 3 ng/ml resulting in a 63% MAC reduction. A ceiling effect was observed with 10 ng/ml providing only a further 19% reduction in MAC. A 50% decrease in MAC was produced by a fentanyl concentration of 1.67 ng/ml. Defining the MAC reduction of isoflurane by all the opioids allows their more rational administration with inhalational anesthetics and provides a comparison of their relative anesthetic potencies.
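The numbers reported above (a 50% MAC reduction at 1.67 ng/ml, 63% at 3 ng/ml, and a ceiling effect such that 10 ng/ml adds only a further 19%) are consistent with a simple Emax-type concentration-effect curve. The model and its parameters below are back-fitted by us purely for illustration; the study itself used a maximum likelihood logistic regression, and none of these parameter values appear in it.

```python
def mac_reduction(conc_ng_ml, emax=93.5, c50=1.45):
    """Hypothetical Emax model of isoflurane MAC reduction (%) as a
    function of fentanyl plasma concentration (ng/ml). Parameters are
    illustrative back-fits, not values reported in the study."""
    return emax * conc_ng_ml / (conc_ng_ml + c50)

# Ceiling effect: tripling the concentration from 3 to 10 ng/ml
# adds far less benefit than the initial steep portion of the curve.
for c in (1.67, 3.0, 10.0):
    print(f"{c:5.2f} ng/ml -> {mac_reduction(c):.0f}% MAC reduction")
```

The asymptote (here emax, below 100%) captures why opioids alone cannot replace the inhalational agent: no fentanyl concentration drives the required isoflurane MAC to zero in this kind of model.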
Minimum Delay Moving Object Detection
Lao, Dong
2017-11-09
We present a general framework and method for detection of an object in a video based on apparent motion. The object moves relative to background motion at some unknown time in the video, and the goal is to detect and segment the object as soon as it moves in an online manner. Due to unreliability of motion between frames, more than two frames are needed to reliably detect the object. Our method is designed to detect the object(s) with minimum delay, i.e., frames after the object moves, constraining the false alarms. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.
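The minimum-delay objective under a false-alarm constraint is the classic trade-off of quickest change detection. A one-dimensional CUSUM sketch (entirely our own illustration of that principle on synthetic Gaussian data, far simpler than the video segmentation method of the abstract): a change in mean at an unknown time is flagged as soon as the accumulated evidence crosses a threshold that controls the false-alarm rate.

```python
import random

def cusum_detect(stream, pre_mean, post_mean, sigma, threshold):
    """CUSUM statistic for a mean shift: accumulate the log-likelihood
    ratio of post- vs pre-change models, clip at zero, and flag a
    change when the threshold is crossed. A higher threshold lowers
    the false-alarm rate at the cost of a longer detection delay."""
    s = 0.0
    for t, x in enumerate(stream):
        llr = (post_mean - pre_mean) * (x - 0.5 * (pre_mean + post_mean)) / sigma ** 2
        s = max(0.0, s + llr)
        if s > threshold:
            return t
    return None

rng = random.Random(42)
change_point = 50
stream = [rng.gauss(0.0, 1.0) for _ in range(change_point)] + \
         [rng.gauss(2.0, 1.0) for _ in range(50)]
t_detect = cusum_detect(stream, pre_mean=0.0, post_mean=2.0, sigma=1.0, threshold=10.0)
```

Before the change the statistic drifts down and is clipped at zero, so it carries no stale evidence; after the change it climbs roughly linearly, giving a detection a handful of frames past the true change point.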
Kurz-Besson, Cathy B.; Lousada, José L.; Gaspar, Maria J.; Correia, Isabel E.; David, Teresa S.; Soares, Pedro M. M.; Cardoso, Rita M.; Russo, Ana; Varino, Filipa; Mériaux, Catherine; Trigo, Ricardo M.; Gouveia, Célia M.
2016-01-01
Western Iberia has recently shown increasing frequency of drought conditions coupled with heatwave events, leading to exacerbated limiting climatic conditions for plant growth. It is not clear to what extent wood growth and density of agroforestry species have suffered from such changes or recent extreme climate events. To address this question, tree-ring width and density chronologies were built for a Pinus pinaster stand in southern Portugal and correlated with climate variables, including the minimum, mean and maximum temperatures and the number of cold days. Monthly and maximum daily precipitations were also analyzed as well as dry spells. The drought effect was assessed using the standardized precipitation-evapotranspiration (SPEI) multi-scalar drought index, between 1 and 24 months. The climate-growth/density relationships were evaluated for the period 1958-2011. We show that both wood radial growth and density highly benefit from the strong decay of cold days and the increase of minimum temperature. Yet the benefits are hindered by long-term water deficit, which results in different levels of impact on wood radial growth and density. Despite the intensification of long-term water deficit, tree-ring width appears to benefit from the minimum temperature increase, whereas the effects of long-term droughts significantly prevail on tree-ring density. Our results further highlight the dependency of the species on deep water sources after the juvenile stage. The impact of climate changes on long-term droughts and their repercussion on the shallow groundwater table and P. pinaster’s vulnerability are also discussed. This work provides relevant information for forest management in the semi-arid area of the Alentejo region of Portugal. It should ease the elaboration of mitigation strategies to assure P. pinaster’s production capacity and quality in response to more arid conditions in the near future in the region. PMID:27570527
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
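The 90°-100°C figure can be reproduced with a quick radiative-balance sketch (our illustration, not Garratt's simulation): if a dry, poorly conducting surface must shed an absorbed flux of about 1000 W m⁻² almost entirely as longwave emission, the Stefan-Boltzmann law bounds the skin temperature. The emissivity value below is an assumption.

```python
# Hypothetical back-of-envelope check of the extreme surface temperature:
# assume conduction and evaporation are negligible, so the absorbed
# shortwave flux is balanced by longwave emission alone (Stefan-Boltzmann).
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
EMISSIVITY = 0.95        # assumed broadband soil emissivity

def radiative_skin_temp(absorbed_flux_w_m2, emissivity=EMISSIVITY):
    """Equilibrium surface temperature in deg C for purely radiative loss."""
    t_kelvin = (absorbed_flux_w_m2 / (emissivity * SIGMA)) ** 0.25
    return t_kelvin - 273.15

# ~96 deg C for 1000 W m^-2, inside the 90-100 deg C range quoted above
print(round(radiative_skin_temp(1000.0), 1))
```

Adding any conductive or evaporative loss lowers this bound, which is why only dry soils of very low thermal conductivity approach it.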
Youth minimum wages and youth employment
Marimpi, Maria; Koning, Pierre
2018-01-01
This paper performs a cross-country level analysis on the impact of the level of specific youth minimum wages on the labor market performance of young individuals. We use information on the use and level of youth minimum wages, as compared to the level of adult minimum wages as well as to the median
Do Some Workers Have Minimum Wage Careers?
Carrington, William J.; Fallick, Bruce C.
2001-01-01
Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…
Does the Minimum Wage Affect Welfare Caseloads?
Page, Marianne E.; Spetz, Joanne; Millar, Jane
2005-01-01
Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…
Minimum income protection in the Netherlands
van Peijpe, T.
2009-01-01
This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its
R Coronae Borealis at the 2003 light minimum
Kameswara Rao, N.; Lambert, David L.; Shetrone, Matthew D.
2006-08-01
A set of five high-resolution optical spectra of R CrB obtained in 2003 March is discussed. At the time of the first spectrum (March 8), the star was at V = 12.6, a decline of more than six magnitudes. By March 31, the date of the last observation, the star at V = 9.3 was on the recovery to maximum light (V = 6). The 2003 spectra are compared with the extensive collection of spectra from the 1995-1996 minimum presented previously. Spectroscopic features common to the two minima include the familiar ones also seen in spectra of other R Coronae Borealis stars (RCBs) in decline: sharp emission lines of neutral and singly ionized atoms, broad emission lines including HeI, [NII] 6583 Å, Na D and CaII H & K lines, and blueshifted absorption lines of Na D, and KI resonance lines. Prominent differences between the 2003 and 1995-1996 spectra are seen. The broad Na D and Ca H & K lines in 2003 and 1995-1996 are centred approximately on the mean stellar velocity. The 2003 profiles are fit by a single Gaussian, but in 1995-1996 two Gaussians separated by about 200 km s⁻¹ were required. However, the HeI broad emission lines are fit by a single Gaussian at all times; the emitting He and Na-Ca atoms are probably not colocated. The C2 Phillips 2-0 lines were detected as sharp absorption lines and the C2 Swan band lines as sharp emission lines in 2003, but in 1995-1996 the Swan band emission lines were broad and the Phillips lines were undetected. The 2003 spectra show CI sharp emission lines at minimum light with a velocity changing in 5 d by about 20 km s⁻¹ when the velocity of 'metal' sharp lines is unchanged; the CI emission may arise from shock-heated gas. Reexamination of spectra obtained at maximum light in 1995 shows extended blue wings to strong lines with the extension dependent on a line's lower excitation potential; this is the signature of a stellar wind, also revealed by published observations of the HeI 10830 Å line at maximum light. Changes in the cores of the
Minimum wage development in the Russian Federation
Bolsheva, Anna
2012-01-01
The aim of this paper is to analyze the effectiveness of the minimum wage policy at the national level in Russia and its impact on living standards in the country. The analysis showed that the national minimum wage in Russia does not serve its original purpose of protecting the lowest wage earners and has no substantial effect on poverty reduction. The national subsistence minimum is too low and cannot be considered an adequate criterion for the setting of the minimum wage. The minimum wage d...
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of n driver output lines. The particular output line selected depends on the converted digital value. A microfuse memory device with n segments connects across the driver output lines. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c⁵/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
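The quantity being maximized can be made concrete with a toy sketch (our illustration, not the authors' code; the paper estimates entropies of continuous responses, while here we use discrete plug-in estimates from empirical counts):

```python
# Plug-in estimate of the mutual information I(response; label) from counts.
# A perfectly informative classifier response attains I = H(label); an
# uninformative one attains 0 -- the regularizer pushes toward the former.
from collections import Counter
from math import log2

def mutual_information(responses, labels):
    n = len(labels)
    joint = Counter(zip(responses, labels))
    count_r = Counter(responses)
    count_y = Counter(labels)
    mi = 0.0
    for (r, y), c in joint.items():
        # p(r,y) / (p(r) p(y)) simplifies to c*n / (count_r * count_y)
        mi += (c / n) * log2(c * n / (count_r[r] * count_y[y]))
    return mi

# Responses identical to labels carry H(label) = 1 bit here:
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
# Responses independent of labels carry 0 bits:
print(mutual_information([0, 1, 0, 1], [0, 0, 1, 1]))  # 0.0
```

In the paper this term is added to the usual loss-plus-complexity objective, so the optimizer trades classification error against label information retained in the responses.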
Scintillation counter, maximum gamma aspect
International Nuclear Information System (INIS)
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Minimum Delay Moving Object Detection
Lao, Dong
2017-05-14
This thesis presents a general framework and method for detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently than the “background” motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object as soon as it moves, in an online manner. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue. However, this also leads to greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., the fewest frames after the object moves, while constraining false alarms, defined as detections declared before the object moves or incorrect or inaccurate segmentations at the detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.
Maximum entropy and Bayesian methods
International Nuclear Information System (INIS)
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come.
Making Deferred Taxes Relevant
Brouwer, Arjan; Naarding, Ewout
2018-01-01
We analyse the conceptual problems in current accounting for deferred taxes and provide solutions derived from the literature in order to make International Financial Reporting Standards (IFRS) deferred tax numbers value-relevant. In our view, the empirical results concerning the value relevance of
Meij, E.; Weerkamp, W.; Balog, K.; de Rijke, M.; Myang, S.-H.; Oard, D.W.; Sebastiani, F.; Chua, T.-S.; Leong, M.-K.
2008-01-01
We describe a method for applying parsimonious language models to re-estimate the term probabilities assigned by relevance models. We apply our method to six topic sets from test collections in five different genres. Our parsimonious relevance models (i) improve retrieval effectiveness in terms of
Minimum Additive Waste Stabilization (MAWS)
International Nuclear Information System (INIS)
1994-02-01
In the Minimum Additive Waste Stabilization (MAWS) concept, actual waste streams are utilized as additive resources for vitrification, since they may contain the basic components (glass formers and fluxes) for making a suitable glass or glassy slag. If too much glass former is present, then the melt viscosity or temperature will be too high for processing; while if there is too much flux, then the durability may suffer. Therefore, there are optimum combinations of these two important classes of constituents depending on the criteria required. The challenge is to combine these resources in such a way that minimizes the use of non-waste additives yet yields a processable and durable final waste form for disposal. The benefit of this approach is that the volume of the final waste form is minimized (waste loading maximized), since little or no additives are used and vitrification itself results in volume reduction through evaporation of water, combustion of organics, and compaction of the solids into a non-porous glass. This implies a significant reduction in disposal costs due to volume reduction alone, and minimizes future risks/costs due to the long-term durability and leach resistance of glass. This is accomplished by using integrated systems that are both cost-effective and produce an environmentally sound waste form for disposal. Individual component technologies may include: vitrification; thermal destruction; soil washing; gas scrubbing/filtration; and ion-exchange wastewater treatment. The particular combination of technologies will depend on the waste streams to be treated. At the heart of MAWS is vitrification technology, which incorporates all primary and secondary waste streams into a final, long-term, stabilized glass waste form. The integrated technology approach, and view of waste streams as resources, is innovative yet practical to cost-effectively treat a broad range of DOE mixed and low-level wastes
Impact of HIPAA's minimum necessary standard on genomic data sharing.
Evans, Barbara J; Jarvik, Gail P
2018-04-01
This article provides a brief introduction to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule's minimum necessary standard, which applies to sharing of genomic data, particularly clinical data, following 2013 Privacy Rule revisions. This research used the Thomson Reuters Westlaw database and law library resources in its legal analysis of the HIPAA privacy tiers and the impact of the minimum necessary standard on genomic data sharing. We considered relevant example cases of genomic data-sharing needs. In a climate of stepped-up HIPAA enforcement, this standard is of concern to laboratories that generate, use, and share genomic information. How data-sharing activities are characterized (whether as research, public health, or clinical interpretation and medical practice support) affects how the minimum necessary standard applies and its overall impact on data access and use. There is no clear regulatory guidance on how to apply HIPAA's minimum necessary standard when considering the sharing of information in the data-rich environment of genomic testing. Laboratories that perform genomic testing should engage with policy makers to foster sound, well-informed policies and appropriate characterization of data-sharing activities to minimize adverse impacts on day-to-day workflows.
Maximum entropy principle for transportation
International Nuclear Information System (INIS)
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
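For contrast with the dependence formulation above, the standard constrained maximum-entropy trip distribution (T_ij proportional to exp(-beta * c_ij), balanced to origin and destination totals) can be sketched with iterative proportional fitting; this is the textbook formulation, not the authors' model, and all numbers below are invented for illustration:

```python
# Entropy-maximizing trip matrix: seed with exp(-beta * cost), then
# alternately scale rows to origin totals and columns to destination
# totals (iterative proportional fitting, a.k.a. the Furness procedure).
import math

def trip_table(origins, dests, cost, beta=0.1, iters=50):
    n, m = len(origins), len(dests)
    t = [[math.exp(-beta * cost[i][j]) for j in range(m)] for i in range(n)]
    for _ in range(iters):
        for i in range(n):                       # match origin totals
            s = sum(t[i])
            t[i] = [x * origins[i] / s for x in t[i]]
        for j in range(m):                       # match destination totals
            s = sum(t[i][j] for i in range(n))
            for i in range(n):
                t[i][j] *= dests[j] / s
    return t

# Two origins (100, 200 trips), two destinations (120, 180 trips),
# and a made-up travel-cost matrix:
t = trip_table([100, 200], [120, 180], [[10, 30], [25, 15]])
print([round(sum(t[i][j] for i in range(2))) for j in range(2)])  # [120, 180]
```

The balancing factors found by IPF play the role of the Lagrange multipliers of the entropy problem; in the paper's dependence formulation this constraint information is instead encoded in regression-estimated coefficients.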
Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models
DEFF Research Database (Denmark)
Rasmussen, Klaus Bolding
1994-01-01
The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model.
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
Energy Technology Data Exchange (ETDEWEB)
Lowell, A. W.; Boggs, S. E; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C. [Space Sciences Laboratory, University of California, Berkeley (United States); Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y. [Institute of Astronomy, National Tsing Hua University, Taiwan (China); Jean, P.; Ballmoos, P. von [IRAP Toulouse (France); Lin, C.-H. [Institute of Physics, Academia Sinica, Taiwan (China); Amman, M. [Lawrence Berkeley National Laboratory (United States)
2017-10-20
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
Minimum emittance of three-bend achromats
International Nuclear Information System (INIS)
Li Xiaoyu; Xu Gang
2012-01-01
The minimum emittance of three-bend achromats (TBAs) can be calculated with mathematical software while ignoring the actual magnet lattice in the matching condition of the dispersion function in phase space. The minimum scaling factors of two kinds of widely used TBA lattices are obtained. The relationship between the lengths and the radii of the three dipoles in a TBA when the lattice achieves its minimum emittance is then derived, and so is the corresponding minimum scaling factor. The procedure of analysis and the results can be widely applied to achromat lattices, because the calculation is not restricted by the actual lattice. (authors)
A Pareto-Improving Minimum Wage
Eliav Danziger; Leif Danziger
2014-01-01
This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...
The minimum wage in the Czech enterprises
Eva Lajtkepová
2010-01-01
Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were then subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). ...
Ancestral Sequence Reconstruction with Maximum Parsimony.
Herbst, Lina; Fischer, Mareike
2017-12-01
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
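On a tree, i.e. a network with no reticulations, the Sankoff recursion the authors extend handles arbitrary substitution-cost matrices; the sketch below is our illustration of that special case, and the transition/transversion costs are assumed values, not ones from the paper:

```python
# Sankoff's small-parsimony algorithm on a binary tree with unequal
# substitution costs. A tree is a nested 2-tuple; a leaf is a state string.
INF = float("inf")
STATES = ("A", "C", "G", "T")

def cost(a, b):
    """Illustrative cost matrix: transitions cheaper than transversions."""
    if a == b:
        return 0
    return 1 if frozenset((a, b)) in {frozenset("AG"), frozenset("CT")} else 2

def sankoff(node):
    """Map each state to the minimal cost of the subtree given that state."""
    if not isinstance(node, tuple):          # leaf: zero cost for its state
        return {s: (0 if s == node else INF) for s in STATES}
    left, right = sankoff(node[0]), sankoff(node[1])
    return {s: min(cost(s, t) + left[t] for t in STATES)
               + min(cost(s, t) + right[t] for t in STATES)
            for s in STATES}

# Leaves A, G, A: the best assignment needs one A<->G transition (cost 1).
costs = sankoff((("A", "G"), "A"))
print(min(costs.values()))  # 1
```

Extending this to a network, as the paper does, additionally requires resolving conflicting assignments at reticulate vertices and summing the costs on the extra incident edges.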
Measurement of Minimum Bias Observables with the ATLAS detector
Kvita, Jiri; The ATLAS collaboration
2017-01-01
The modelling of Minimum Bias (MB) events is a crucial ingredient in learning about the description of soft QCD processes. It also has significant relevance for the simulation of the environment at the LHC with many concurrent pp interactions (“pileup”). The ATLAS collaboration has provided new measurements of the inclusive charged-particle multiplicity and its dependence on transverse momentum and pseudorapidity in special data sets with low LHC beam currents, recorded at center-of-mass energies of 8 TeV and 13 TeV. The measurements cover a wide spectrum using charged-particle selections with minimum transverse momentum of both 100 MeV and 500 MeV, and in various phase-space regions of low and high charged-particle multiplicity.
Are There Long-Run Effects of the Minimum Wage?
Sorkin, Isaac
2015-04-01
An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.
Reforming the minimum wage: Toward a psychological perspective.
Smith, Laura
2015-09-01
The field of psychology has periodically used its professional and scholarly platform to encourage national policy reform that promotes the public interest. In this article, the movement to raise the federal minimum wage is presented as an issue meriting attention from the psychological profession. Psychological support for minimum wage reform derives from health disparities research that supports the causal linkages between poverty and diminished physical and emotional well-being. Furthermore, psychological scholarship relevant to the social exclusion of low-income people not only suggests additional benefits of financially inclusive policymaking, it also indicates some of the attitudinal barriers that could potentially hinder it. Although the national living wage debate obviously extends beyond psychological parameters, psychologists are well-positioned to evaluate and contribute to it. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Culturally Relevant Cyberbullying Prevention
Phillips, Gregory John
2017-01-01
In this action research study, I, along with a student intervention committee of 14 members, developed a cyberbullying intervention for a large urban high school on the west coast. This high school contained a predominantly African American student population. I aimed to discover culturally relevant cyberbullying prevention strategies for African American students. The intervention committee selected video safety messages featuring African American actors as the most culturally relevant cyber...
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-03-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
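The decoding comparison the abstract describes can be reproduced in miniature by brute force: the sketch below (couplings, fields, and temperature are illustrative choices, not the paper's instance) contrasts maximum-likelihood decoding (ground state of the cost function) with maximum-entropy decoding (bit-wise signs of Boltzmann-averaged spins).

```python
import itertools
import math

# Tiny Ising "decoding" example: the cost (energy) function encodes noisy
# local evidence in the fields h and prior correlations in the couplings J.
# Maximum likelihood returns the ground state; maximum entropy takes the
# bit-wise signs of Boltzmann-averaged spins at temperature T.
# All numbers here are illustrative, not taken from the paper.

J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}   # ferromagnetic chain
h = [0.4, -0.1, 0.3, 0.2]                      # noisy local evidence
n = len(h)

def energy(s):
    e = -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e - sum(h[i] * s[i] for i in range(n))

states = list(itertools.product([-1, 1], repeat=n))

# Maximum likelihood: ground state of the cost function.
ml = min(states, key=energy)

# Maximum entropy: Boltzmann distribution over all states at temperature T.
T = 1.0
weights = [math.exp(-energy(s) / T) for s in states]
Z = sum(weights)
magnetisation = [sum(w * s[i] for w, s in zip(weights, states)) / Z
                 for i in range(n)]
me = tuple(1 if m >= 0 else -1 for m in magnetisation)

print("ML decoding:", ml)
print("ME decoding:", me)
```

On this tiny, weakly noisy instance both decoders agree; the paper's point is that at a suitable finite temperature the Boltzmann marginals can flip marginal bits toward the correct codeword, slightly improving the bit-error rate.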
Direct maximum parsimony phylogeny reconstruction from genotype data.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2007-12-05
Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
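For a fixed tree topology and phased (haplotype) data, the "minimum number of mutations required to explain" the observations is the classic small-parsimony score, computable by Fitch's algorithm. A minimal sketch follows (tree, leaf names, and states are invented; the paper's actual contribution works directly on unphased genotypes, which this sketch does not):

```python
# Fitch's small-parsimony algorithm: minimum number of mutations needed to
# explain observed states at the leaves of a fixed binary tree, for one
# character. This is the parsimony *scoring* step only, not tree search.

def fitch(tree, states):
    """tree: nested 2-tuples of leaf names; states: leaf name -> state."""
    mutations = 0

    def visit(node):
        nonlocal mutations
        if isinstance(node, str):            # leaf: singleton state set
            return {states[node]}
        left, right = node
        a, b = visit(left), visit(right)
        if a & b:                            # intersection: no new mutation
            return a & b
        mutations += 1                       # disjoint: one mutation somewhere
        return a | b

    visit(tree)
    return mutations

# Hypothetical 5-haplotype tree and observed nucleotide states.
tree = ((("h1", "h2"), "h3"), ("h4", "h5"))
states = {"h1": "A", "h2": "A", "h3": "G", "h4": "G", "h5": "T"}
print(fitch(tree, states))   # minimum mutations for this character
```

Summing this score over all variable sites gives the parsimony length of the tree, the quantity a maximum parsimony search minimizes over topologies.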
The Maximum Mass Solar Nebula and the early formation of planets
Nixon, C. J.; King, A. R.; Pringle, J. E.
2018-03-01
Current planet formation theories provide successful frameworks with which to interpret the array of new observational data in this field. However, each of the two main theories (core accretion, gravitational instability) is unable to explain some key aspects. In many planet formation calculations, it is usual to treat the initial properties of the planet forming disc (mass, radius, etc.) as free parameters. In this paper, we stress the importance of setting the formation of planet forming discs within the context of the formation of the central stars. By exploring the early stages of disc formation, we introduce the concept of the Maximum Mass Solar Nebula (MMSN), as opposed to the oft-used Minimum Mass Solar Nebula (here mmsn). It is evident that almost all protoplanetary discs start their evolution in a strongly self-gravitating state. In agreement with almost all previous work in this area, we conclude that on the scales relevant to planet formation these discs are not gravitationally unstable to gas fragmentation, but instead form strong, transient spiral arms. These spiral arms can act as efficient dust traps allowing the accumulation and subsequent fragmentation of the dust (but not the gas). This phase is likely to populate the disc with relatively large planetesimals on short timescales while the disc is still veiled by a dusty-gas envelope. Crucially, the early formation of large planetesimals overcomes the main barriers remaining within the core accretion model. A prediction of this picture is that essentially all observable protoplanetary discs are already planet hosting.
Stotts, Steven A; Koch, Robert A
2017-08-01
In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.
Stochastic variational approach to minimum uncertainty states
Energy Technology Data Exchange (ETDEWEB)
Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)
1995-05-21
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)
30 CFR 281.30 - Minimum royalty.
2010-07-01
... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Minimum royalty. 281.30 Section 281.30 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR OFFSHORE LEASING OF MINERALS OTHER THAN OIL, GAS, AND SULPHUR IN THE OUTER CONTINENTAL SHELF Financial Considerations § 281.30 Minimum royalty...
New Minimum Wage Research: A Symposium.
Ehrenberg, Ronald G.; And Others
1992-01-01
Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…
Minimum Wage Effects in the Longer Run
Neumark, David; Nizalova, Olena
2007-01-01
Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…
Two-dimensional maximum entropy image restoration
International Nuclear Information System (INIS)
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures
MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY
Directory of Open Access Journals (Sweden)
B. Sizykh Grigory
2017-01-01
Full Text Available The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered flow region the pressure is a function of density. For an ideal and perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density if the entropy is constant in the entire considered flow region. An example is shown of a stationary subsonic irrotational flow in which the entropy has different values on different streamlines and the pressure is not a function of density. Applying the maximum principle for subsonic flow to such a flow would be unjustified. This example shows the relevance of the question of where the points of maximum velocity are located when the entropy is not constant. To clarify the regularities of the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed which does not rely on the assumption that the pressure is a function of density. It is thus shown that the maximum principle for subsonic flow holds for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.
Droplet squeezing through a narrow constriction: Minimum impulse and critical velocity
Zhang, Zhifeng; Drapaca, Corina; Chen, Xiaolin; Xu, Jie
2017-07-01
Models of a droplet passing through narrow constrictions have wide applications in science and engineering. In this paper, we report our findings on the minimum impulse (momentum change) of pushing a droplet through a narrow circular constriction. The existence of this minimum impulse is mathematically derived and numerically verified. The minimum impulse happens at a critical velocity when the time-averaged Young-Laplace pressure balances the total minor pressure loss in the constriction. Finally, numerical simulations are conducted to verify these concepts. These results could be relevant to problems of energy optimization and studies of chemical and biomedical systems.
An Improved CO2-Crude Oil Minimum Miscibility Pressure Correlation
Directory of Open Access Journals (Sweden)
Hao Zhang
2015-01-01
Full Text Available Minimum miscibility pressure (MMP), which plays an important role in miscible flooding, is a key parameter in determining whether crude oil and gas are completely miscible. On the basis of 210 groups of CO2-crude oil system minimum miscibility pressure data, an improved CO2-crude oil system minimum miscibility pressure correlation was built by a modified conjugate gradient method and a global optimization method. The new correlation is a uniform empirical correlation to calculate the MMP for both thin oil and heavy oil and is expressed as a function of reservoir temperature, C7+ molecular weight of crude oil, and mole fractions of volatile components (CH4 and N2) and intermediate components (CO2, H2S, and C2~C6) of crude oil. Compared with the eleven most popular and relatively high-accuracy CO2-oil system MMP correlations in the previous literature, on nine additional groups of CO2-oil MMP experimental data that were not used to develop the new correlation, the new empirical correlation provides the best reproduction of the experimental data, with a percentage average absolute relative error (%AARE) of 8% and a percentage maximum absolute relative error (%MARE) of 21%, respectively.
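The two accuracy metrics quoted (%AARE and %MARE) are straightforward to compute; a short sketch with hypothetical measured and predicted MMP values (illustrative numbers, not the paper's 210-group dataset):

```python
# Percentage average absolute relative error (%AARE) and percentage maximum
# absolute relative error (%MARE), the two metrics used to rank the MMP
# correlations. Data values below are hypothetical.

def aare_mare(measured, predicted):
    rel = [abs(p - m) / m * 100.0 for m, p in zip(measured, predicted)]
    return sum(rel) / len(rel), max(rel)

measured  = [12.5, 18.0, 22.4, 30.1]   # hypothetical MMP data, MPa
predicted = [13.1, 17.2, 23.5, 28.8]   # hypothetical correlation output, MPa

aare, mare = aare_mare(measured, predicted)
print(f"%AARE = {aare:.1f}, %MARE = {mare:.1f}")
```

A lower %AARE means better average reproduction of the data; %MARE bounds the worst single-point error, which is why both are reported.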
Averill, M.; Briggle, A.
2006-12-01
Science policy and knowledge production lately have taken a pragmatic turn. Funding agencies increasingly are requiring scientists to explain the relevance of their work to society. This stems in part from mounting critiques of the "linear model" of knowledge production in which scientists operating according to their own interests or disciplinary standards are presumed to automatically produce knowledge that is of relevance outside of their narrow communities. Many contend that funded scientific research should be linked more directly to societal goals, which implies a shift in the kind of research that will be funded. While both authors support the concept of useful science, we question the exact meaning of "relevance" and the wisdom of allowing it to control research agendas. We hope to contribute to the conversation by thinking more critically about the meaning and limits of the term "relevance" and the trade-offs implicit in a narrow utilitarian approach. The paper will consider which interests tend to be privileged by an emphasis on relevance and address issues such as whose goals ought to be pursued and why, and who gets to decide. We will consider how relevance, narrowly construed, may actually limit the ultimate utility of scientific research. The paper also will reflect on the worthiness of research goals themselves and their relationship to a broader view of what it means to be human and to live in society. Just as there is more to being human than the pragmatic demands of daily life, there is more at issue with knowledge production than finding the most efficient ways to satisfy consumer preferences or fix near-term policy problems. We will conclude by calling for a balanced approach to funding research that addresses society's most pressing needs but also supports innovative research with less immediately apparent application.
DEFF Research Database (Denmark)
Müller, Emmanuel; Assent, Ira; Günnemann, Stephan
2009-01-01
Subspace clustering aims at detecting clusters in any subspace projection of a high dimensional space. As the number of possible subspace projections is exponential in the number of dimensions, the result is often tremendously large. Recent approaches fail to reduce results to relevant subspace clusters. Their results are typically highly redundant, i.e. many clusters are detected multiple times in several projections. In this work, we propose a novel model for relevant subspace clustering (RESCU). We present a global optimization which detects the most interesting non-redundant subspace clusters… achieves top clustering quality while competing approaches show greatly varying performance.
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate receiver function, with the maximum entropy as the rule to determine auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of error-predicting filter, and receiver function is then estimated. During extrapolation, reflective coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside window increases the resolution of receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver function in time-domain.
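The Toeplitz-equation/Levinson step the abstract mentions is the standard Levinson-Durbin recursion. A minimal sketch on an illustrative autocorrelation sequence (not receiver-function data) follows; it also exposes the reflection coefficients whose magnitude staying below 1 is the stability property cited.

```python
# Levinson-Durbin recursion: solves the Toeplitz normal equations for the
# prediction-error filter, the core step described in the abstract.
# r is an illustrative autocorrelation sequence, not seismogram data.

def levinson_durbin(r, order):
    """Return (prediction-error filter a, reflection coeffs, final error)."""
    a = [1.0]                # filter polynomial coefficients, a[0] = 1
    err = r[0]
    reflection = []
    for m in range(1, order + 1):
        # Reflection coefficient from the current filter and autocorrelation.
        k = -sum(a[i] * r[m - i] for i in range(m)) / err
        reflection.append(k)
        a = a + [0.0]
        a = [a[i] + k * a[m - i] for i in range(m + 1)]   # order update
        err *= (1.0 - k * k)                              # error update
    return a, reflection, err

# AR(1)-like autocorrelation with lag-1 coefficient 0.5.
a, ks, err = levinson_durbin([1.0, 0.5, 0.25], order=2)
print("filter:", [round(c, 3) for c in a])
print("reflection:", [round(k, 3) for k in ks])   # magnitudes < 1 -> stable
```

For this AR(1)-like input the recursion recovers the filter [1, -0.5, 0]: the second reflection coefficient is zero because lag 2 carries no new information beyond lag 1.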
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
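The maximization the article performs analytically can be sketched numerically: take a standard single-diode I-V model (the parameter values below are illustrative assumptions, not from the article) and scan for the voltage that maximizes P = V·I(V).

```python
import math

# Maximum power point of a solar cell from the single-diode I-V model,
# found by scanning V and maximizing P = V * I(V). Parameters are
# illustrative assumptions; the article differentiates P analytically.

IL = 5.0        # photogenerated current, A (assumed)
I0 = 1e-9       # diode saturation current, A (assumed)
n_Vt = 0.02585  # ideality factor * thermal voltage, V (assumed, ~300 K, n=1)

def current(v):
    # Shockley diode equation subtracted from the photocurrent.
    return IL - I0 * math.expm1(v / n_Vt)

vs = [i * 1e-4 for i in range(1, 6000)]        # scan 0 .. 0.6 V
v_mp = max(vs, key=lambda v: v * current(v))   # voltage of maximum power
i_mp = current(v_mp)                           # current of maximum power
p_mp = v_mp * i_mp                             # maximum power
print(f"V_mp = {v_mp:.3f} V, I_mp = {i_mp:.2f} A, P_mp = {p_mp:.2f} W")
```

The scan stands in for setting dP/dV = 0; for these parameters the maximum power point lands near 0.5 V, just below the open-circuit voltage, as expected for a silicon-like cell.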
Lower Bounds on the Maximum Energy Benefit of Network Coding for Wireless Multiple Unicast
Goseling, J.; Matsumoto, R.; Uyematsu, T.; Weber, J.H.
2010-01-01
We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio of the minimum energy required by routing and network coding
Is Information Still Relevant?
Ma, Lia
2013-01-01
Introduction: The term "information" in information science does not share the characteristics of those of a nomenclature: it does not bear a generally accepted definition and it does not serve as the bases and assumptions for research studies. As the data deluge has arrived, is the concept of information still relevant for information…
Minimum emittance in TBA and MBA lattices
Xu, Gang; Peng, Yue-Mei
2015-03-01
For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) and even multiple bend achromats (MBA) have been considered. This paper derived the necessary condition for achieving minimum emittance in TBA and MBA theoretically, where the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. Here, we also calculated the conditions attaining the minimum emittance of TBA related to phase advance in some special cases with a pure mathematics method. These results may give some directions on lattice design.
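The 3^(1/3) ratio quoted above can be set beside the standard per-cell theoretical-minimum-emittance formula; the following is a textbook-level sketch, not the paper's derivation (symbols assumed: C_q the quantum constant, γ the Lorentz factor, θ the cell bending angle, J_x the horizontal damping partition number):

```latex
% Theoretical minimum emittance of a single cell with bending angle \theta:
\varepsilon_{\mathrm{TME}} \;=\; \frac{C_q\,\gamma^{2}\,\theta^{3}}{12\sqrt{15}\,J_x}.
% Because \varepsilon \propto \theta^{3} while the outer (achromat-matched)
% cells cannot reach the TME optimum, distributing a fixed total bending angle
% optimally over the cells yields unequal angles, with the condition quoted
% in the abstract:
\theta_{\mathrm{inner}} \;=\; 3^{1/3}\,\theta_{\mathrm{outer}}.
```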
Minimum emittance in TBA and MBA lattices
International Nuclear Information System (INIS)
Xu Gang; Peng Yuemei
2015-01-01
For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) and even multiple bend achromats (MBA) have been considered. This paper derived the necessary condition for achieving minimum emittance in TBA and MBA theoretically, where the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. Here, we also calculated the conditions attaining the minimum emittance of TBA related to phase advance in some special cases with a pure mathematics method. These results may give some directions on lattice design. (authors)
Who Benefits from a Minimum Wage Increase?
John W. Lopresti; Kevin J. Mumford
2015-01-01
This paper addresses the question of how a minimum wage increase affects the wages of low-wage workers. Most studies assume that there is a simple mechanical increase in the wage for workers earning a wage between the old and the new minimum wage, with some studies allowing for spillovers to workers with wages just above this range. Rather than assume that the wages of these workers would have remained constant, this paper estimates how a minimum wage increase impacts a low-wage worker's wage...
Wage inequality, minimum wage effects and spillovers
Stewart, Mark B.
2011-01-01
This paper investigates possible spillover effects of the UK minimum wage. The halt in the growth in inequality in the lower half of the wage distribution (as measured by the 50:10 percentile ratio) since the mid-1990s, in contrast to the continued inequality growth in the upper half of the distribution, suggests the possibility of a minimum wage effect and spillover effects on wages above the minimum. This paper analyses individual wage changes, using both a difference-in-differences estimat...
14 CFR 205.5 - Minimum coverage.
2010-01-01
... 18,000 pounds maximum payload capacity, carriers need only maintain coverage of $2,000,000 per... than 30 seats or 7,500 pounds maximum cargo payload capacity, and a maximum authorized takeoff weight... not be contingent upon the financial condition, solvency, or freedom from bankruptcy of the carrier...
EOG feature relevance determination for microsleep detection
Golz Martin; Wollner Sebastian; Sommer David; Schnieder Sebastian
2017-01-01
Automatic relevance determination (ARD) was applied to two-channel EOG recordings for microsleep event (MSE) recognition. The 10 s immediately before MSE, and also before counterexamples of fatigued but attentive driving, were analysed. Two types of signal features were extracted: the maximum cross correlation (MaxCC) and logarithmic power spectral densities (PSD) averaged in spectral bands of 0.5 Hz width ranging between 0 and 8 Hz. Generalised learning vector quantisation (GRLVQ) was used as ARD...
Impact of the Minimum Wage on Compression.
Wolfe, Michael N.; Candland, Charles W.
1979-01-01
Assesses the impact of increases in the minimum wage on salary schedules, provides guidelines for creating a philosophy to deal with the impact, and outlines options and presents recommendations. (IRT)
Quantitative Research on the Minimum Wage
Goldfarb, Robert S.
1975-01-01
The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)
Determining minimum lubrication film for machine parts
Hamrock, B. J.; Dowson, D.
1978-01-01
Formula predicts minimum film thickness required for fully-flooded ball bearings, gears, and cams. Formula is result of study to determine complete theoretical solution of isothermal elasto-hydrodynamic lubrication of fully-flooded elliptical contacts.
Long Term Care Minimum Data Set (MDS)
U.S. Department of Health & Human Services — The Long-Term Care Minimum Data Set (MDS) is a standardized, primary screening and assessment tool of health status that forms the foundation of the comprehensive...
The SME gauge sector with minimum length
Energy Technology Data Exchange (ETDEWEB)
Belich, H.; Louzada, H.L.C. [Universidade Federal do Espirito Santo, Departamento de Fisica e Quimica, Vitoria, ES (Brazil)
2017-12-15
We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory. (orig.)
The SME gauge sector with minimum length
Belich, H.; Louzada, H. L. C.
2017-12-01
We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.
Clinical Relevance of Adipokines
Directory of Open Access Journals (Sweden)
Matthias Blüher
2012-10-01
Full Text Available The incidence of obesity has increased dramatically during recent decades. Obesity increases the risk for metabolic and cardiovascular diseases and may therefore contribute to premature death. With increasing fat mass, secretion of adipose tissue derived bioactive molecules (adipokines changes towards a pro-inflammatory, diabetogenic and atherogenic pattern. Adipokines are involved in the regulation of appetite and satiety, energy expenditure, activity, endothelial function, hemostasis, blood pressure, insulin sensitivity, energy metabolism in insulin sensitive tissues, adipogenesis, fat distribution and insulin secretion in pancreatic β-cells. Therefore, adipokines are clinically relevant as biomarkers for fat distribution, adipose tissue function, liver fat content, insulin sensitivity, chronic inflammation and have the potential for future pharmacological treatment strategies for obesity and its related diseases. This review focuses on the clinical relevance of selected adipokines as markers or predictors of obesity related diseases and as potential therapeutic tools or targets in metabolic and cardiovascular diseases.
Wildemuth, Barbara M.
2009-01-01
A user's interaction with a DL is often initiated as the result of the user experiencing an information need of some kind. Aspects of that experience and how it might affect the user's interactions with the DL are discussed in this module. In addition, users continuously make decisions about and evaluations of the materials retrieved from a DL, relative to their information needs. Relevance judgments, and their relationship to the user's information needs, are discussed in this module.
Publish or perish: remaining academically relevant and visible in the ...
African Journals Online (AJOL)
To improve the visibility of scholars' works and make them relevant on the academic scene, electronic publishing is advisable. This provides readers the potential to search and locate articles in minimum time, within one journal or across multiple journals. This includes publishing articles in journals that are ...
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC used in the design of an SFCL can be determined.
Directory of Open Access Journals (Sweden)
Cathy Béatrice Kurz Besson
2016-08-01
Full Text Available Western Iberia has recently shown an increasing frequency of drought conditions coupled with heatwave events, leading to exacerbated limiting climatic conditions for plant growth. It is not clear to what extent wood growth and density of agroforestry species have suffered from such changes or from recent extreme climate events. To address this question, tree-ring width and density chronologies were built for a P. pinaster stand in southern Portugal and correlated with climate variables, including the minimum, mean and maximum temperatures and the number of cold days. Monthly and maximum daily precipitation were also analysed, as well as dry spells. The drought effect was assessed using the standardized precipitation-evapotranspiration index (SPEI), a multi-scalar drought index, at scales of 1 to 24 months. The climate-growth/density relationships were evaluated for the period 1958-2011. We show that both wood radial growth and density benefit strongly from the sharp decline in the number of cold days and the increase in minimum temperature. Yet the benefits are hindered by long-term water deficit, which results in different levels of impact on wood radial growth and density. Despite the intensification of long-term water deficit, tree-ring width appears to benefit from the minimum temperature increase, whereas the effects of long-term droughts prevail significantly on tree-ring density. Our results further highlight the dependency of the species on deep water sources after the juvenile stage. The impact of climate change on long-term droughts and its repercussions on the shallow groundwater table and P. pinaster's vulnerability are also discussed. This work provides relevant information for forest management in the semi-arid Alentejo region of Portugal and should ease the elaboration of mitigation strategies to assure P. pinaster's production capacity and quality under the more arid conditions expected in the region in the near future.
2015-12-15
propagating, planetary-scale waves (wavenumber 1 and wavenumber 2) in the lower thermosphere that are associated with different stratospheric conditions. To ... prominent meridional propagation of wave activity from the mid-latitudes toward the tropics. In combination with strong eastward meridional wind shear, our ... Neutral and Ionized Atmosphere, Whole Atmosphere Model, and WACCM-X. The comparison focuses on the zonal mean, planetary wave, and tidal variability in
2013-04-16
... brittleness and loss, gastrointestinal upsets, skin rash, garlic breath odor, fatigue, irritability, and... adult values on the basis of body weight and with a factor allowed for growth (Ref. 2). Although... infants 0 to 6 months of age is 750 milliliter (ml)/day; (2) a representative body weight for infants over...
Differential rotation of the Sun and the Maunder minimum of solar activity
International Nuclear Information System (INIS)
Ikhsanov, R.N.; Vitinskij, Yu.I.
1980-01-01
The nature of the differential rotation of the Sun is discussed, and long-term changes in differential rotation are investigated separately for two phases of the 11-year solar activity cycle. The initial material consists of daily heliographic coordinates of all sunspot groups for the years preceding the epoch of the 11-year cycle minimum, and of sunspot groups for the years of maximum, taken from the ''Greenwich Photoheliographic Results'' for 1875-1954. It is shown that the differential rotation of the Sun changes in time from one 11-year activity cycle to another, and that this change is connected with the strength of the cycle. During the maximum phase of the 11-year cycle, the rotation is more strongly differential in cycles with a higher maximum. Before the minimum of an 11-year cycle, the rotation is less strongly differential in cycles for which the activity maximum of the following 11-year cycle is higher. The equatorial rotation rate of the Sun increases as the cycle strength decreases, when the maximum Wolf number is below 110. These regularities held both during the Maunder minimum and before its onset [ru
Minimum number of transfer units and reboiler duty for multicomponent distillation columns
International Nuclear Information System (INIS)
Pleşu, Valentin; Bonet Ruiz, Alexandra Elena; Bonet, Jordi; Llorens, Joan; Iancu, Petrica
2013-01-01
Some guidelines to evaluate distillation columns, considering only basic thermodynamic data and principles, are provided in this paper. The method allows a first insight into the problem by simple calculations, without requiring column variables, to ensure rational use of energy and low environmental impact. The separation system is approached in two complementary ways: minimum and infinite reflux flow rate. The minimum reflux provides the minimum energy requirements, and the infinite reflux provides the feasibility conditions. The difficulty of separation can be expressed in terms of the number of transfer units (NTU). The applicability of the method is not mathematically limited by the number of components in the mixture. It is also applicable to reactive distillation. Several mixtures, including reactive distillation cases, are rigorously simulated as illustrative examples to verify the applicability of the approach. The separation of the mixtures, performed by distillation columns, is feasible if a minimum NTU can be calculated between the distillate and bottom products. Once the feasibility of the separation is verified, the maximum thermal efficiency depends only on the boiling points of the bottom and distillate streams. The minimum energy requirements corresponding to the reboiler can be calculated from the maximum thermal efficiency and the variation of entropy and enthalpy of mixing between the distillate and bottom streams. -- Highlights: • Feasibility analysis complemented with difficulty of separation parameters • Minimum and infinite reflux simplified models for distillation columns • Minimum number of transfer units (NTU) for packed columns at early design stages • Calculation of minimum energy distillation requirements at early design stages • Thermodynamic cycle approach and efficiency for distillation columns
Revealing the Maximum Strength in Nanotwinned Copper
DEFF Research Database (Denmark)
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Modelling maximum canopy conductance and transpiration in ...
African Journals Online (AJOL)
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
Application of the maximum entropy production principle to electrical systems
International Nuclear Information System (INIS)
Christen, Thomas
2006-01-01
For a simple class of electrical systems, the principle of the maximum entropy production rate (MaxEP) is discussed. First, we compare the MaxEP principle and the principle of the minimum entropy production rate and illustrate the superiority of the MaxEP principle for the example of two parallel constant resistors. Secondly, we show that the Steenbeck principle for the electric arc as well as the ohmic contact behaviour of space-charge limited conductors follow from the MaxEP principle. In line with work by Dewar, the investigations seem to suggest that the MaxEP principle can also be applied to systems far from equilibrium, provided appropriate information is available that enters the constraints of the optimization problem. Finally, we apply the MaxEP principle to a mesoscopic system and show that the universal conductance quantum, e^2/h, of a one-dimensional ballistic conductor can be estimated
Direct maximum parsimony phylogeny reconstruction from genotype data
Directory of Open Access Journals (Sweden)
Ravi R
2007-12-01
Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
The minimum wage in the Czech enterprises
Directory of Open Access Journals (Sweden)
Eva Lajtkepová
2010-01-01
Full Text Available Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were then subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). The aim of this article is to present selected results of two surveys on the acceptance of the statutory minimum wage by Czech enterprises. The first survey uses data collected by questionnaire research in 83 small and medium-sized enterprises in the South Moravia Region in 2005, the second one the data of 116 enterprises in the entire Czech Republic (in 2007). The data have been processed by means of the standard methods of descriptive statistics and of the appropriate methods of statistical analysis (Spearman rank correlation coefficient, Kendall coefficient, χ2 independence test, Kruskal-Wallis test, and others).
Basilio, Numa; Morice, Antoine H P; Marti, Geoffrey; Montagne, Gilles
2015-08-01
The aim of this study was to answer the question, Do drivers take into account the action boundaries of their car when overtaking? The Morice et al. affordance-based approach to visually guided overtaking suggests that the "overtake-ability" affordance can be formalized as the ratio of the "minimum satisfying velocity" (MSV) of the maneuver to the maximum velocity (V(max)) of the driven car. In this definition, however, the maximum acceleration (A(max)) of the vehicle is ignored. We hypothesize that drivers may be sensitive to an affordance redefined with the ratio of the "minimum satisfying acceleration" (MSA) to the A(max) of the car. Two groups of nine drivers drove cars differing in their A(max). They were instructed to attempt overtaking maneuvers in 25 situations resulting from the combination of five MSA and five MSV values. When overtaking frequency was expressed as a function of MSV and MSA, maneuvers were found to be initiated differently for the two groups. However, when expressed as a function of MSV/V(max) and MSA/A(max), overtaking frequency was quite similar for both groups. Finally, a multiple regression coefficient analysis demonstrated that overtaking decisions are fully explained by a composite variable comprising MSA/A(max) and the time required to reach MSV. Drivers reliably decide whether overtaking is safe (or not) by using low- and high-order variables taking into account their car's maximum velocity and acceleration, respectively, as predicted by "affordance-based control" theory. Potential applications include the design of overtaking assistance, which should exploit the MSA/A(max) variables in order to suggest perceptually relevant overtaking solutions. © 2015, Human Factors and Ergonomics Society.
[Relevant public health enteropathogens].
Riveros, Maribel; Ochoa, Theresa J
2015-01-01
Diarrhea remains the third leading cause of death in children under five years, despite recent advances in the management and prevention of this disease. It is caused by multiple pathogens, however, the prevalence of each varies by age group, geographical area and the scenario where cases (community vs hospital) are recorded. The most relevant pathogens in public health are those associated with the highest burden of disease, severity, complications and mortality. In our country, norovirus, Campylobacter and diarrheagenic E. coli are the most prevalent pathogens at the community level in children. In this paper we review the local epidemiology and potential areas of development in five selected pathogens: rotavirus, norovirus, Shiga toxin-producing E. coli (STEC), Shigella and Salmonella. Of these, rotavirus is the most important in the pediatric population and the main agent responsible for child mortality from diarrhea. The introduction of rotavirus vaccination in Peru will have a significant impact on disease burden and mortality from diarrhea. However, surveillance studies are needed to determine the impact of vaccination and changes in the epidemiology of diarrhea in Peru following the introduction of new vaccines, as well as antibiotic resistance surveillance of clinical relevant bacteria.
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the
Risk control and the minimum significant risk
International Nuclear Information System (INIS)
Seiler, F.A.; Alvarez, J.L.
1996-01-01
Risk management implies that the risk manager can, by his actions, exercise at least a modicum of control over the risk in question. In the terminology of control theory, a management action is a control signal imposed as feedback on the system to bring about a desired change in the state of the system. In the terminology of risk management, an action is taken to bring a predicted risk to lower values. Even if it is assumed that the management action taken is 100% effective and that the projected risk reduction is infinitely well known, there is a lower limit to the desired effects that can be achieved. It is based on the fact that all risks, such as the incidence of cancer, exhibit a degree of variability due to a number of extraneous factors such as age at exposure, sex, location, and some lifestyle parameters such as smoking or the consumption of alcohol. If the control signal is much smaller than the variability of the risk, the signal is lost in the noise and control is lost. This defines a minimum controllable risk based on the variability of the risk over the population considered. This quantity is the counterpart of the minimum significant risk which is defined by the uncertainties of the risk model. Both the minimum controllable risk and the minimum significant risk are evaluated for radiation carcinogenesis and are shown to be of the same order of magnitude. For a realistic management action, the assumptions of perfectly effective action and perfect model prediction made above have to be dropped, resulting in an effective minimum controllable risk which is determined by both risk limits. Any action below that effective limit is futile, but it is also unethical due to the ethical requirement of doing more good than harm. Finally, some implications of the effective minimum controllable risk on the use of the ALARA principle and on the evaluation of remedial action goals are presented
International Nuclear Information System (INIS)
Peřinová, Vlasta; Lukš, Antonín
2015-01-01
The SU(2) group is used in two different fields of quantum optics: quantum polarization and quantum interferometry. Quantum degrees of polarization may be based on distances of a polarization state from the set of unpolarized states. The maximum polarization is achieved when the state is pure and the distribution of the photon-number sums is optimized. In quantum interferometry, the SU(2) intelligent states also have the property that the Fisher measure of information is equal to the inverse minimum detectable phase shift under the usual simplifying condition. Previously, the optimization of the Fisher information under a constraint was studied. Now, in the framework of constrained optimization, states similar to the SU(2) intelligent states are treated. (paper)
Minimum weight design of composite laminates for multiple loads
International Nuclear Information System (INIS)
Krikanov, A.A.; Soni, S.R.
1995-01-01
A new design method for constructing optimum-weight composite laminates for multiple loads is proposed in this paper. A netting analysis approach is used to develop an optimization procedure. Three ply orientations permit development of an optimum laminate design without using stress-strain relations. It is proved that the stresses in the minimum-weight laminate reach allowable values in each ply for a given load. The optimum ply thickness is defined by the maximum value among the tensile and compressive loads. Two examples are given to obtain optimum ply orientations, thicknesses and materials. For comparison purposes, stresses are calculated in the orthotropic material using classical lamination theory. Based upon these calculations, the matrix degrades at 30 to 50% of the ultimate load. There is no fiber failure, and the laminates therefore withstand all applied loads in both examples
A minimum attention control center for nuclear power plants
International Nuclear Information System (INIS)
Meijer, C.H.
1986-01-01
Control centers for nuclear power plants have characteristically been designed for maximum attention by the operating staffs of these plants. Consequently, the monitoring, control and diagnostics oriented cognitive activities of these staffs were mostly ''data-driven'' in nature. This paper addresses a control center concept, under development by Combustion Engineering, that promotes a more ''information-driven'' cognitive interaction process between the operator and the plant. The more ''intelligent'', and therefore less attention-demanding, nature of such an interaction process utilizes computer-implemented, cognitively engineered algorithms. The underlying structure of these algorithms is based upon the Critical Function/Success Path monitoring principle. The paper highlights a typical implementation of the minimum attention concept for the handling of unfamiliar safety-related events. (author)
Wind Turbine Down-regulation Strategy for Minimum Wake Deficit
DEFF Research Database (Denmark)
Ma, Kuichao; Zhu, Jiangsheng; N. Soltani, Mohsen
2017-01-01
Down-regulation of a wind turbine is commonly used, whether to reserve power for ancillary services to the grid, to optimize power within a wind farm, or to reduce power loss under fault conditions; it is also a method to protect a faulty turbine. A down-regulation strategy based on minimum wake deficit is proposed in this paper to improve the power of the downwind turbine in the low and medium wind speed regions. The main idea is to operate the turbine at an appropriate operating point through rotor speed and torque control. The effectiveness of the strategy is verified by comparison with a maximum rotor speed strategy. The results show that the proposed strategy can effectively improve the power of the downwind turbine.
Minimum qualifications for nuclear criticality safety professionals
International Nuclear Information System (INIS)
Ketzlach, N.
1990-01-01
A Nuclear Criticality Technology and Safety Training Committee has been established within the U.S. Department of Energy (DOE) Nuclear Criticality Safety and Technology Project to review and, if necessary, develop standards for the training of personnel involved in nuclear criticality safety (NCS). The committee is exploring the need for developing a standard or other mechanism for establishing minimum qualifications for NCS professionals. The development of standards and regulatory guides for nuclear power plant personnel may serve as a guide in developing the minimum qualifications for NCS professionals
A minimum achievable PV electrical generating cost
International Nuclear Information System (INIS)
Sabisky, E.S.
1996-01-01
The role and share of photovoltaic (PV) generated electricity in our nation's future energy arsenal is primarily dependent on its future production cost. This paper provides a framework for obtaining a minimum achievable electrical generating cost (a lower bound) for fixed, flat-plate photovoltaic systems. A cost of 2.8 cents/kWh (1990 dollars) was derived for a plant located in Southwestern USA sunshine, using a cost of money of 8%. In addition, a value of 22 cents/Wp (1990 dollars) was estimated as a minimum module manufacturing cost/price
How to design your stand-by diesel generator unit for maximum reliability
International Nuclear Information System (INIS)
Kauffmann, W.M.
1979-01-01
Critical stand-by power applications, such as in a nuclear plant or radio support stations, demand exacting guidelines for positive start, rapid acceleration, load acceptance with minimum voltage drop, and quick recovery to rated voltage. The design of a medium-speed turbocharged and intercooled diesel-engine generator for this purpose is considered. Selection of the diesel engine, size, and number of units, from the standpoint of cost, favors the minimum number of units with maximum horsepower capability. Four-cycle diesels are available in 16- to 20-cylinder V-configurations, with 200 BMEP (brake mean-effective pressure) continuous and 250 BMEP peaking
Other relevant biological papers
International Nuclear Information System (INIS)
Shimizu, M.
1989-01-01
A considerable number of CRESP-relevant papers concerning deep-sea biology and radioecology have been published. It is the purpose of this study to call attention to them. They fall into three general categories. The first is papers of general interest. They are mentioned only briefly, and include text references to the global bibliography at the end of the volume. The second are papers that are not only mentioned and referenced, but for various reasons are described in abstract form. The last is a list of papers compiled by H.S.J. Roe specifically for this volume. They are listed in bibliographic form, and are also included in the global bibliography at the end of the volume
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
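MXLKID's LRLTRAN source is not reproduced here; as an illustration of the underlying idea only — computing a likelihood function from noisy data and maximizing it with respect to the parameters — the following minimal Python sketch fits a Gaussian model, with all names hypothetical and no connection to MXLKID's actual interface:

```python
import math
import random

def log_likelihood(data, mu, sigma):
    """Gaussian log-likelihood LF(mu, sigma) of the observed data."""
    n = len(data)
    return (-n * math.log(sigma * math.sqrt(2 * math.pi))
            - sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2))

# Noisy "measurement data" from a system with true parameters (3.0, 0.5)
random.seed(1)
data = [random.gauss(3.0, 0.5) for _ in range(200)]

# For the Gaussian case the maximizer has a closed form
mu_hat = sum(data) / len(data)
sigma_hat = (sum((x - mu_hat) ** 2 for x in data) / len(data)) ** 0.5

# Numerical check: nudging either parameter lowers the likelihood
best = log_likelihood(data, mu_hat, sigma_hat)
for d in (-0.05, 0.05):
    assert log_likelihood(data, mu_hat + d, sigma_hat) < best
    assert log_likelihood(data, mu_hat, sigma_hat + d) < best

print(round(mu_hat, 2), round(sigma_hat, 2))
```

For a nonlinear dynamic system, as in MXLKID, no closed form exists and the maximization would be done numerically, but the objective plays the same role.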
Energy expenditure, economic growth, and the minimum EROI of society
International Nuclear Information System (INIS)
Fizaine, Florian; Court, Victor
2016-01-01
We estimate energy expenditure for the US and world economies from 1850 to 2012. Periods of high energy expenditure relative to GDP (from 1850 to 1945), or spikes (1973–74 and 1978–79), are associated with low economic growth rates, and periods of low or falling energy expenditure are associated with high and rising economic growth rates (e.g. 1945–1973). Over the period 1960–2010, for which we have continuous year-to-year data for control variables (capital formation, population, and unemployment rate), we estimate that, statistically, in order to enjoy positive growth, the US economy cannot afford to spend more than 11% of its GDP on energy. Given the current energy intensity of the US economy, this translates into a minimum societal EROI of approximately 11:1 (or a maximum tolerable average price of energy of twice the current level). Granger tests consistently reveal a one-way causality running from the level of energy expenditure (as a fraction of GDP) to economic growth in the US between 1960 and 2010. A coherent economic policy should be founded on improving net energy efficiency. This would yield a “double dividend”: increased societal EROI (through decreased energy intensity of capital investment) and decreased sensitivity to energy price volatility. - Highlights: •We estimate energy expenditures as a fraction of GDP for the US, the world (1850–2012), and the UK (1300–2008). •Statistically speaking, the US economy cannot afford to allocate more than 11% of its GDP to energy expenditures in order to have a positive growth rate. •This corresponds to a maximum tolerable average price of energy of twice the current level. •In the same way, US growth is only possible if its primary energy system has at least a minimum EROI of approximately 11:1.
Phylogenetic Applications of the Minimum Contradiction Approach on Continuous Characters
Directory of Open Access Journals (Sweden)
Marc Thuillard
2009-01-01
Full Text Available We describe the conditions under which a set of continuous variables or characters can be described as an X-tree or a split network. A distance matrix corresponds exactly to a split network or a valued X-tree if, after ordering of the taxa, the variable values can be embedded into a function with at most one local maximum and one local minimum, crossing any horizontal line at most twice. In real applications, the order of the taxa best satisfying the above conditions can be obtained using the Minimum Contradiction method. This approach is applied to two sets of continuous characters. The first set corresponds to craniofacial landmarks in hominids. The contradiction matrix is used to identify possible tree structures and alternatives when they exist. We explain how to discover the main structuring characters in a tree. The second set consists of a sample of 100 galaxies. This second example shows how to discretize the continuous variables describing physical properties of the galaxies without disrupting the underlying tree structure.
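The embedding condition in the abstract above — at most one local maximum and one local minimum along the ordered taxa — can be checked mechanically. The following sketch is a simplified illustration under the assumption of strict inequalities (ties between neighbors are ignored); it is not the authors' Minimum Contradiction implementation:

```python
def local_extrema(values):
    """Count interior local maxima and minima of a sequence."""
    maxima = minima = 0
    for i in range(1, len(values) - 1):
        if values[i - 1] < values[i] > values[i + 1]:
            maxima += 1
        elif values[i - 1] > values[i] < values[i + 1]:
            minima += 1
    return maxima, minima

def satisfies_condition(values):
    """True if the ordered character values have at most one local
    maximum and at most one local minimum, a necessary part of the
    tree/split-network embedding condition described above."""
    maxima, minima = local_extrema(values)
    return maxima <= 1 and minima <= 1

print(satisfies_condition([1, 3, 5, 4, 2]))   # single peak -> True
print(satisfies_condition([1, 4, 2, 5, 3]))   # zig-zag -> False
```

In practice one would run such a test for every character over candidate taxon orderings and keep the ordering that violates the condition least often.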
RR Tel: Determination of Dust Properties During Minimum Obscuration
Directory of Open Access Journals (Sweden)
Jurkić T.
2012-06-01
Full Text Available The ISO infrared spectra and the SAAO long-term JHKL photometry of RR Tel in epochs of minimum obscuration are studied in order to construct a circumstellar dust model. The spectral energy distribution in the near- and mid-IR spectral range (1–15 μm) was obtained for an epoch without pronounced dust obscuration. The DUSTY code was used to solve the radiative transfer through the dust and to determine the circumstellar dust properties of the inner dust regions around the Mira component. Dust temperature, maximum grain size, dust density distribution, mass-loss rate, terminal wind velocity and optical depth are determined. The spectral energy distribution and the long-term JHKL photometry during an epoch of minimum obscuration show an almost unattenuated stellar source and strong dust emission which cannot be explained by a single dust shell model. We propose a two-component model consisting of an optically thin circumstellar dust shell and optically thick dust outside the line of sight in some kind of flattened geometry, which is responsible for most of the observed dust thermal emission.
Directory of Open Access Journals (Sweden)
Gabere MN
2016-06-01
Full Text Available Musa Nur Gabere,1 Mohamed Aly Hussein,1 Mohammad Azhar Aziz2 1Department of Bioinformatics, King Abdullah International Medical Research Center/King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; 2Colorectal Cancer Research Program, Department of Medical Genomics, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia Purpose: There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods: In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy–maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results: The best model, which used 30 genes and the RBF kernel, outperformed other combinations; it had an accuracy of 84% for both tenfold and leave-one-out cross validations in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved a prediction accuracy of 95.27% and 91.99%, respectively, compared to other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1
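The greedy mRMR selection used in studies like the two above can be sketched compactly. This toy version uses the "difference" form of the criterion (relevance minus mean redundancy) with absolute Pearson correlation standing in for mutual information; the original papers use mutual-information estimators, so this is an illustrative simplification, not their implementation:

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def mrmr_select(features, target, k):
    """Greedy mRMR (difference form): at each step pick the feature
    maximizing relevance(f, target) - mean redundancy(f, selected)."""
    remaining = list(range(len(features)))
    selected = []
    while remaining and len(selected) < k:
        def score(j):
            rel = abs(pearson(features[j], target))
            red = (sum(abs(pearson(features[j], features[s])) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Tiny synthetic demo: f0 tracks the target, f1 duplicates f0, f2 is noise.
random.seed(0)
target = [random.random() for _ in range(50)]
f0 = [t + 0.01 * random.random() for t in target]
f1 = [x + 0.01 * random.random() for x in f0]   # redundant with f0
f2 = [random.random() for _ in range(50)]        # irrelevant
picked = mrmr_select([f0, f1, f2], target, 2)
print(picked)  # f0 is chosen first; the redundant copy f1 is penalized
```

The redundancy term is what distinguishes mRMR from ranking features by relevance alone: once f0 is in the selected set, its near-duplicate f1 scores poorly despite being almost as relevant.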
The radial distribution of cosmic rays in the heliosphere at solar maximum
McDonald, F. B.; Fujii, Z.; Heikkila, B.; Lal, N.
2003-08-01
To obtain a more detailed profile of the radial distribution of galactic (GCRs) and anomalous (ACRs) cosmic rays, a unique time in the 11-year solar activity cycle has been selected - that of solar maximum. At this time of minimum cosmic ray intensity, a simple, straightforward normalization technique has been found that allows the cosmic ray data from IMP 8, Pioneer 10 (P-10) and Voyagers 1 and 2 (V1, V2) to be combined for the solar maxima of cycles 21, 22 and 23. This combined distribution reveals a functional form of the radial gradient that varies as G_0/r, with G_0 being constant and relatively small in the inner heliosphere. After a transition region between ˜10 and 20 AU, G_0 increases to a much larger value that remains constant between ˜25 and 82 AU. This implies that at solar maximum the changes that produce the 11-year modulation cycle are mainly occurring in the outer heliosphere, between ˜15 AU and the termination shock. These observations are not inconsistent with the concept that Global Merged Interaction Regions (GMIRs) are the principal agent of modulation between solar minimum and solar maximum. There does not appear to be a significant change in the amount of heliosheath modulation occurring between the 1997 solar minimum and the cycle 23 solar maximum.
Discretization of space and time: determining the values of minimum length and minimum time
Roatta , Luca
2017-01-01
Assuming that space and time can only take discrete values, we obtain expressions for the minimum length and the minimum time interval. These values are found to coincide exactly with the Planck length and the Planck time, except for the presence of h instead of ħ.
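The comparison in the abstract above is easy to make concrete: the standard Planck units are built from ħ, and substituting h = 2πħ rescales both by a factor of √(2π). A short numerical check:

```python
import math

# Physical constants (CODATA values)
G = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8       # speed of light, m/s
h = 6.62607015e-34     # Planck constant, J s
hbar = h / (2 * math.pi)

# Standard Planck units, built from hbar
planck_length = math.sqrt(hbar * G / c**3)   # ~1.616e-35 m
planck_time   = math.sqrt(hbar * G / c**5)   # ~5.39e-44 s

# Variant with h in place of hbar, as in the abstract:
# each quantity grows by a factor sqrt(2*pi) ~ 2.507
min_length = math.sqrt(h * G / c**3)
min_time   = math.sqrt(h * G / c**5)

print(planck_length, planck_time, min_length / planck_length)
```

Since both length and time scale as the square root of the constant, the two sets of values differ only by the fixed factor √(2π).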
Maximum neutron flux in thermal reactors
International Nuclear Information System (INIS)
Strugar, P.V.
1968-12-01
The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the calculus of variations, i.e. by using Pontryagin's maximum principle. The mathematical model of the reactor core is based on two-group neutron diffusion theory, with some simplifications that make it suitable for the application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
Maximum allowable load on wheeled mobile manipulators
International Nuclear Information System (INIS)
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator along a given trajectory. The maximum allowable load that can be carried by a mobile manipulator during a given trajectory is limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important. To resolve the extra degrees of freedom introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value but depends directly on the additional constraint functions applied to resolve the motion redundancy
Maximum phytoplankton concentrations in the sea
DEFF Research Database (Denmark)
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
Minimum Areas for Elementary School Building Facilities.
Pennsylvania State Dept. of Public Instruction, Harrisburg.
Minimum area space requirements in square footage for elementary school building facilities are presented, including facilities for instructional use, general use, and service use. Library, cafeteria, kitchen, storage, and multipurpose rooms should be sized for the projected enrollment of the building in accordance with the projection under the…
Dirac's minimum degree condition restricted to claws
Broersma, Haitze J.; Ryjacek, Z.; Schiermeyer, I.
1997-01-01
Let G be a graph on n ≥ 3 vertices. Dirac's minimum degree condition is the condition that all vertices of G have degree at least n/2. This is a well-known sufficient condition for the existence of a Hamilton cycle in G. We give related sufficiency conditions for the existence of a Hamilton cycle or a
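Dirac's condition is simple to verify on a concrete graph. A minimal sketch (the function name and example graphs are illustrative, not from the paper) checks that every vertex has degree at least n/2:

```python
def satisfies_dirac(adj):
    """adj: dict mapping vertex -> set of neighbours."""
    n = len(adj)
    # Dirac's condition: n >= 3 and every vertex has degree >= n/2.
    return n >= 3 and all(2 * len(nbrs) >= n for nbrs in adj.values())

# K4 (complete graph on 4 vertices) satisfies the condition,
# while a path on 4 vertices does not (its endpoints have degree 1).
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(satisfies_dirac(k4), satisfies_dirac(path))  # -> True False
```

The condition is sufficient but not necessary: a long cycle is Hamiltonian yet has degree 2 everywhere.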
7 CFR 33.10 - Minimum requirements.
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Regulations § 33.10 Minimum requirements. No person shall... shipment of apples to any foreign destination unless: (a) Apples grade at least U.S. No. 1 or U.S. No. 1...
Minimum Risk Pesticide: Definition and Product Confirmation
Minimum risk pesticides pose little to no risk to human health or the environment and therefore are not subject to regulation under FIFRA. EPA does not do any pre-market review for such products or labels, but violative products are subject to enforcement.
The Minimum Distance of Graph Codes
DEFF Research Database (Denmark)
Høholdt, Tom; Justesen, Jørn
2011-01-01
We study codes constructed from graphs where the code symbols are associated with the edges and the symbols connected to a given vertex are restricted to be codewords in a component code. In particular we treat such codes from bipartite expander graphs coming from Euclidean planes and other geometries. We give results on the minimum distances of the codes.
Minimum maintenance solar pump | Assefa | Zede Journal
African Journals Online (AJOL)
A minimum maintenance solar pump (MMSP), Fig 1, has been simulated for Addis Ababa, taking solar meteorological data of global radiation, diffuse radiation and ambient air temperature as input to a computer program that has been developed. To increase the performance of the solar pump, by trapping the long-wave ...
Context quantization by minimum adaptive code length
DEFF Research Database (Denmark)
Forchhammer, Søren; Wu, Xiaolin
2007-01-01
Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....
7 CFR 35.13 - Minimum quantity.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Minimum quantity. 35.13 Section 35.13 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... part, transport or receive for transportation to any foreign destination, a shipment of 25 packages or...
Minimum impact house prototype for sustainable building
Götz, E.; Klenner, K.; Lantelme, M.; Mohn, A.; Sauter, S.; Thöne, J.; Zellmann, E.; Drexler, H.; Jauslin, D.
2010-01-01
The Minihouse is a prototype for a sustainable townhouse. On a site of only 29 sqm it offers 154 sqm of urban life. The project 'Minimum Impact House' addresses two important questions: How do we provide living space in the cities without destroying the landscape? How to sustainably improve the
49 CFR 639.27 - Minimum criteria.
2010-10-01
... dollar value to any non-financial factors that are considered by using performance-based specifications..., DEPARTMENT OF TRANSPORTATION CAPITAL LEASES Cost-Effectiveness § 639.27 Minimum criteria. In making the... used where possible and appropriate: (a) Operation costs; (b) Reliability of service; (c) Maintenance...
Computing nonsimple polygons of minimum perimeter
Fekete, S.P.; Haas, A.; Hemmer, M.; Hoffmann, M.; Kostitsyna, I.; Krupke, D.; Maurer, F.; Mitchell, J.S.B.; Schmidt, A.; Schmidt, C.; Troegel, J.
2018-01-01
We consider the Minimum Perimeter Polygon Problem (MP3): for a given set V of points in the plane, find a polygon P with holes that has vertex set V , such that the total boundary length is smallest possible. The MP3 can be considered a natural geometric generalization of the Traveling Salesman
Minimum-B mirrors plus EBT principles
International Nuclear Information System (INIS)
Yoshikawa, S.
1983-01-01
Electrons are heated at the minimum B location(s) created by the multipole field and the toroidal field. Resulting hot electrons can assist plasma confinement by (1) providing mirror, (2) creating azimuthally symmetric toroidal confinement, or (3) creating modified bumpy torus
Completeness properties of the minimum uncertainty states
Trifonov, D. A.
1993-01-01
The completeness properties of the Schrödinger minimum uncertainty states (SMUS) and of some of their subsets are considered. The invariant measures and the resolution-of-unity measures for the set of SMUS are constructed, and the representation of squeezing and correlating operators and of SMUS as superpositions of Glauber coherent states on the real line is elucidated.
Minimum Description Length Shape and Appearance Models
DEFF Research Database (Denmark)
Thodberg, Hans Henrik
2003-01-01
The Minimum Description Length (MDL) approach to shape modelling is reviewed. It solves the point correspondence problem of selecting points on shapes defined as curves so that the points correspond across a data set. An efficient numerical implementation is presented and made available as open s...
Faster Fully-Dynamic minimum spanning forest
DEFF Research Database (Denmark)
Holm, Jacob; Rotenberg, Eva; Wulff-Nilsen, Christian
2015-01-01
We give a new data structure for the fully-dynamic minimum spanning forest problem in simple graphs. Edge updates are supported in O(log^4 n / log log n) expected amortized time per operation, improving the O(log^4 n) amortized bound of Holm et al. (STOC'98, JACM'01). We also provide a deterministic data...
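The abstract concerns dynamic maintenance under edge updates; as a static point of reference, a minimum spanning forest can be recomputed from scratch with Kruskal's algorithm and a union-find structure in O(m log m) time. A minimal sketch (function name and example graph are illustrative, not from the paper):

```python
def minimum_spanning_forest(n, edges):
    """Kruskal's algorithm with union-find (path halving).
    n: number of vertices 0..n-1; edges: list of (weight, u, v) tuples.
    Returns the edges of a minimum spanning forest."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    forest = []
    for w, u, v in sorted(edges):          # scan edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # keep edge only if it joins two trees
            parent[ru] = rv
            forest.append((w, u, v))
    return forest

# a 5-vertex graph with two connected components {0,1,2} and {3,4}
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (10, 3, 4)]
msf = minimum_spanning_forest(5, edges)
print(sum(w for w, _, _ in msf))  # -> 14 (edges of weight 1, 3, 10)
```

The dynamic structures in the paper avoid exactly this full recomputation after each edge update.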
Minimum Wage Effects throughout the Wage Distribution
Neumark, David; Schweitzer, Mark; Wascher, William
2004-01-01
This paper provides evidence on a wide set of margins along which labor markets can adjust in response to increases in the minimum wage, including wages, hours, employment, and ultimately labor income. Not surprisingly, the evidence indicates that low-wage workers are most strongly affected, while higher-wage workers are little affected. Workers…
Asymptotics for the minimum covariance determinant estimator
Butler, R.W.; Davies, P.L.; Jhun, M.
1993-01-01
Consistency is shown for the minimum covariance determinant (MCD) estimators of multivariate location and scale and asymptotic normality is shown for the former. The proofs are made possible by showing a separating ellipsoid property for the MCD subset of observations. An analogous property is shown
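For tiny samples the MCD estimator can be computed exactly by enumerating all h-subsets and keeping the one whose covariance has the smallest determinant. The brute-force sketch below is illustrative only (practical implementations use heuristics such as FAST-MCD; the data are synthetic):

```python
import numpy as np
from itertools import combinations

def mcd_exact(X, h):
    """Exact MCD for very small samples: scan all h-subsets of rows of X,
    return location (mean) and scatter (covariance) of the subset whose
    covariance matrix has minimum determinant."""
    best_det, best_idx = np.inf, None
    for idx in combinations(range(len(X)), h):
        d = np.linalg.det(np.cov(X[list(idx)].T))
        if d < best_det:
            best_det, best_idx = d, list(idx)
    return np.mean(X[best_idx], axis=0), np.cov(X[best_idx].T)

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 2))      # 12 bivariate standard-normal points
X[0] = [25.0, 25.0]               # one gross outlier
loc, scat = mcd_exact(X, h=9)
# any subset containing the outlier has a hugely inflated determinant,
# so the selected subset excludes it and the location stays near 0
print(np.max(np.abs(loc)) < 2.0)  # -> True
```

This robustness to outliers is what makes the consistency and asymptotic normality results of the abstract practically relevant.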
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Distribution of phytoplankton groups within the deep chlorophyll maximum
Latasa, Mikel
2016-11-01
The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.
Binary cluster collision dynamics and minimum energy conformations
Energy Technology Data Exchange (ETDEWEB)
Muñoz, Francisco [Max Planck Institute of Microstructure Physics, Weinberg 2, 06120 Halle (Germany); Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Centro para el Desarrollo de la Nanociencia y Nanotecnología, CEDENNA, Avenida Ecuador 3493, Santiago (Chile); Rogan, José; Valdivia, J.A. [Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Centro para el Desarrollo de la Nanociencia y Nanotecnología, CEDENNA, Avenida Ecuador 3493, Santiago (Chile); Varas, A. [Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Nano-Bio Spectroscopy Group, ETSF Scientific Development Centre, Departamento de Física de Materiales, Universidad del País Vasco UPV/EHU, Av. Tolosa 72, E-20018 San Sebastián (Spain); Kiwi, Miguel, E-mail: m.kiwi.t@gmail.com [Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Centro para el Desarrollo de la Nanociencia y Nanotecnología, CEDENNA, Avenida Ecuador 3493, Santiago (Chile)
2013-10-15
The collision dynamics of one Ag or Cu atom impinging on a Au{sub 12} cluster is investigated by means of DFT molecular dynamics. Our results show that the experimentally confirmed 2D to 3D transition of Au{sub 12}→Au{sub 13} is mostly preserved by the resulting planar Au{sub 12}Ag and Au{sub 12}Cu minimum energy clusters, which is quite remarkable in view of the excess energy, well above the 2D–3D potential barrier height. The process is accompanied by a large s−d hybridization and charge transfer from Au to Ag or Cu. The dynamics of the collision process mainly yields fusion of projectile and target; however, scattering and cluster fragmentation also occur for large energies and large impact parameters. While Ag projectiles favor fragmentation, Cu favors scattering due to its smaller mass. The projectile size does not play a major role in favoring the fragmentation or scattering channels. By comparing our collision results with those obtained by an unbiased minimum energy search of 4483 Au{sub 12}Ag and 4483 Au{sub 12}Cu configurations obtained phenomenologically, we find that there is an extra bonus: without any increase in computer time, collisions yield the planar lower energy structures that are not feasible to obtain using semi-classical potentials. In fact, we conclude that phenomenological potentials do not even provide adequate seeds for the search of global energy minima for planar structures. Since the fabrication of nanoclusters is mainly achieved by synthesis or laser ablation, the set of local minima configurations we provide here, and their distribution as a function of energy, are more relevant than the global minimum for analyzing experimental results obtained at finite temperatures, and are consistent with the dynamical coexistence of 2D and 3D liquid Au cluster conformations obtained previously.
Iyyappan, I.; Ponmurugan, M.
2018-03-01
A trade-off figure of merit (Ω̇) criterion accounts for the best compromise between the useful input energy and the lost input energy of heat devices. When a heat engine works at the maximum Ω̇ criterion, its efficiency increases significantly over the efficiency at maximum power. We derive general relations between the power and efficiency at the maximum Ω̇ criterion and minimum dissipation for the linear irreversible heat engine. The efficiency at the maximum Ω̇ criterion has the lower bound
Planetary tides during the Maunder sunspot minimum
International Nuclear Information System (INIS)
Smythe, C.M.; Eddy, J.A.
1977-01-01
Sun-centered planetary conjunctions and tidal potentials are here constructed for the AD1645 to 1715 period of sunspot absence, referred to as the 'Maunder Minimum'. These are found to be effectively indistinguishable from patterns of conjunctions and power spectra of tidal potential in the present era of a well established 11 year sunspot cycle. This places a new and difficult restraint on any tidal theory of sunspot formation. Problems arise in any direct gravitational theory due to the apparently insufficient forces and tidal heights involved. Proponents of the tidal hypothesis usually revert to trigger mechanisms, which are difficult to criticise or test by observation. Any tidal theory rests on the evidence of continued sunspot periodicity and the substantiation of a prolonged period of solar anomaly in the historical past. The 'Maunder Minimum' was the most drastic change in the behaviour of solar activity in the last 300 years; sunspots virtually disappeared for a 70 year period and the 11 year cycle was probably absent. During that time, however, the nine planets were all in their orbits, and planetary conjunctions and tidal potentials were indistinguishable from those of the present era, in which the 11 year cycle is well established. This provides good evidence against the tidal theory. The pattern of planetary tidal forces during the Maunder Minimum was reconstructed to investigate the possibility that the multiple planet forces somehow fortuitously cancelled at the time, that is that the positions of the slower moving planets in the 17th and early 18th centuries were such that conjunctions and tidal potentials were at the time reduced in number and force. There was no striking dissimilarity between the time of the Maunder Minimum and any period investigated. The failure of planetary conjunction patterns to reflect the drastic drop in sunspots during the Maunder Minimum casts doubt on the tidal theory of solar activity, but a more quantitative test
User perspectives on relevance criteria
DEFF Research Database (Denmark)
Maglaughlin, Kelly L.; Sonnenwald, Diane H.
2002-01-01
This study investigates the use of criteria to assess relevant, partially relevant, and not-relevant documents. Study participants identified passages within 20 document representations that they used to make relevance judgments; judged each document representation as a whole to be relevant, partially relevant, or not relevant to their information need; and explained their decisions in an interview. Analysis revealed 29 criteria, discussed positively and negatively, that were used by the participants when selecting passages that contributed or detracted from a document's relevance… matter, thought catalyst), full text (e.g., audience, novelty, type, possible content, utility), journal/publisher (e.g., novelty, main focus, perceived quality), and personal (e.g., competition, time requirements). Results further indicate that multiple criteria are used when making relevant, partially relevant, and not-relevant judgments.
Directory of Open Access Journals (Sweden)
G. R. Pasha
2006-07-01
Full Text Available In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating for the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this respect, while the maximum likelihood estimator attains the minimum variance bound and becomes an attractive choice.
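The comparison can be illustrated numerically. The sketch below (parameters and sample sizes are illustrative, not the paper's) samples from a Maxwell distribution with scale a and compares the maximum likelihood estimator, â = sqrt(Σx²/(3n)), against the first-moment estimator, ã = x̄·sqrt(π/8), which follows from E[X] = 2a·sqrt(2/π):

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, n, reps = 2.0, 200, 2000

def sample_maxwell(a, size):
    # A Maxwell variate is the speed (norm) of a 3-D isotropic Gaussian
    # whose components have standard deviation a.
    return np.linalg.norm(rng.normal(scale=a, size=(size, 3)), axis=1)

mle, mom = [], []
for _ in range(reps):
    x = sample_maxwell(a_true, n)
    mle.append(np.sqrt(np.sum(x**2) / (3 * n)))  # maximum likelihood
    mom.append(np.mean(x) * np.sqrt(np.pi / 8))  # first-moment estimator

# both are consistent, but the MLE attains the minimum variance bound
print(round(float(np.mean(mle)), 2), round(float(np.mean(mom)), 2))
print(np.var(mle) < np.var(mom))
```

Theoretically the MLE variance approaches the Cramér–Rao bound a²/(6n), while the moment estimator's variance is about 7% larger, which the simulation reproduces.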
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Trends in Intense Typhoon Minimum Sea Level Pressure
Directory of Open Access Journals (Sweden)
Stephen L. Durden
2012-01-01
Full Text Available A number of recent publications have examined trends in the maximum wind speed of tropical cyclones in various basins. In this communication, the author focuses on typhoons in the western North Pacific. Rather than maximum wind speed, the intensity of the storms is measured by their lifetime minimum sea level pressure (MSLP). Quantile regression is used to test for trends in storms of extreme intensity. The results indicate that there is a trend of decreasing intensity in the most intense storms as measured by MSLP over the period 1951–2010. However, when the data are broken into intervals 1951–1987 and 1987–2010, neither interval has a significant trend, but the intensity quantiles for the two periods differ. Reasons for this are discussed, including the cessation of aircraft reconnaissance in 1987. The author also finds that the average typhoon intensity is greater in El Niño years, while the intensity of the strongest typhoons shows no significant relation to the El Niño Southern Oscillation.
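Quantile regression, used in the abstract to test for trends in extreme intensity, minimises the pinball loss and can be posed as a linear program. A minimal sketch on synthetic data (not the typhoon record; the helper name is illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_fit(x, y, tau):
    """Fit y ~ a + b*x at quantile tau by minimising the pinball loss:
    split each residual into u_i, v_i >= 0 with y_i = a + b*x_i + u_i - v_i
    and minimise sum(tau*u + (1-tau)*v). Variables: [a, b, u_1..n, v_1..n]."""
    n = len(x)
    c = np.concatenate(([0.0, 0.0], np.full(n, tau), np.full(n, 1 - tau)))
    A_eq = np.hstack([np.ones((n, 1)), x[:, None], np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * 2 + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[0], res.x[1]

x = np.arange(20, dtype=float)
y = 1.0 + 0.5 * x                  # exactly linear synthetic data:
a, b = quantile_fit(x, y, tau=0.9) # every quantile fit recovers the line
print(round(a, 3), round(b, 3))    # -> 1.0 0.5
```

On real data with scatter, fitting several values of tau (e.g. 0.5, 0.9, 0.99) gives the trend at the median and at the extremes separately, which is the point of the method here.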
Nowcasting daily minimum air and grass temperature
Savage, M. J.
2016-02-01
Site-specific and accurate prediction of daily minimum air and grass temperatures, made available online several hours before their occurrence, would be of significant benefit to several economic sectors and for planning human activities. Site-specific and reasonably accurate nowcasts of the daily minimum temperature several hours before its occurrence, using measured sub-hourly temperatures from earlier in the morning as model inputs, were investigated. Various temperature models were tested for their ability to accurately nowcast daily minimum temperatures 2 or 4 h before sunrise. Temperature datasets used for the model nowcasts included sub-hourly grass and grass-surface (infrared) temperatures from one location in South Africa and air temperature from four subtropical sites varying in altitude (USA and South Africa) and from one site in central sub-Saharan Africa. The nowcast models employed either exponential or square root functions to describe the rate of nighttime temperature decrease, but inverted so as to determine the minimum temperature. The models were also applied in near real-time using an open web-based system to display the nowcasts. Extrapolation algorithms for the site-specific nowcasts were also implemented in a datalogger in an innovative and mathematically consistent manner. Comparison of model 1 (exponential) nowcasts vs measured daily minimum air temperatures yielded root mean square errors (RMSEs) <1 °C for the 2-h ahead nowcasts. Model 2 (also exponential), for which a constant model coefficient (b = 2.2) was used, was usually slightly less accurate but still had RMSEs <1 °C. Use of model 3 (square root) yielded increased RMSEs for the 2-h ahead comparisons between nowcasted and measured daily minimum air temperatures, increasing to 1.4 °C for some sites. For all sites and all models, the comparisons for the 4-h ahead air temperature nowcasts generally yielded increased RMSEs, <2.1 °C. Comparisons for all model nowcasts of the daily grass
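The general idea of an exponential nighttime-cooling curve inverted to yield the minimum can be sketched as follows. The functional form, coefficients, and synthetic data below are assumptions for illustration, not the paper's fitted models:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical exponential cooling model: temperature decays toward the
# eventual minimum Tmin as sunrise approaches, so the fitted asymptote
# is itself the nowcast of the daily minimum.
def cooling(t, Tmin, dT, b):
    return Tmin + dT * np.exp(-b * t)

# synthetic sub-hourly pre-dawn temperatures over a 4-hour window
t_obs = np.linspace(0.0, 4.0, 17)                 # hours
T_true = cooling(t_obs, 8.0, 6.0, 0.35)           # "true" minimum = 8.0 °C
T_obs = T_true + np.random.default_rng(3).normal(0.0, 0.05, t_obs.size)

popt, _ = curve_fit(cooling, t_obs, T_obs, p0=(5.0, 5.0, 0.5))
print(abs(popt[0] - 8.0) < 0.5)  # fitted asymptote close to the true minimum
```

The extrapolation is the hard part in practice: the asymptote must be inferred from the curvature of the early-morning record, which is why the paper's RMSEs grow as the nowcast horizon lengthens from 2 to 4 hours.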
Maximum Entropy and Theory Construction: A Reply to Favretti
Directory of Open Access Journals (Sweden)
John Harte
2018-04-01
Full Text Available In the maximum entropy theory of ecology (METE, the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy model exists that assumes the same prior knowledge and makes predictions that differ from METE’s. He shows that both cannot be correct and asserts that his is the correct one because it can be derived from a classic microstate-counting calculation. I clarify here exactly what the core entities and definitions are for METE, and discuss the relevance of two critical issues raised by Favretti: the existence of a counting procedure for microstates and the choices of definition of the core elements of a theory. I emphasize that a theorist controls how the core entities of his or her theory are defined, and that nature is the final arbiter of the validity of a theory.
Maximum margin semi-supervised learning with irrelevant data.
Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R
2015-10-01
Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume that the unlabeled data are relevant to the labeled data, i.e., follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named tri-class support vector machine (3C-SVM), to utilize the available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier: it relies heavily on labeled data and is confident on the relevant data lying far away from the decision hyperplane, while maximally ignoring the irrelevant data, which are hard to distinguish. Second, theoretical analysis is provided to prove under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S(3)VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer program into a semi-definite programming relaxation, and finally into a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S(3)VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.
International Nuclear Information System (INIS)
Beer, M.
1980-01-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
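The minimum-variance combination of correlated, unbiased estimates used above has a closed form: with covariance matrix C, the optimal weights are w = C⁻¹1 / (1ᵀC⁻¹1) and the combined variance is 1 / (1ᵀC⁻¹1). A sketch with illustrative numbers (not the TRX benchmark results):

```python
import numpy as np

def combine(est, cov):
    """Minimum-variance linear combination of correlated unbiased estimates.
    Weights w = C^{-1} 1 / (1^T C^{-1} 1); combined variance 1/(1^T C^{-1} 1)."""
    z = np.linalg.solve(cov, np.ones(len(est)))   # z = C^{-1} 1
    return z @ est / z.sum(), 1.0 / z.sum()

# three correlated eigenvalue estimates (illustrative numbers only)
est = np.array([1.002, 0.998, 1.005])
cov = 1e-6 * np.array([[4.0, 1.0, 0.5],
                       [1.0, 2.0, 0.8],
                       [0.5, 0.8, 3.0]])
val, var = combine(est, cov)
# the combination is never worse than the best single estimate,
# since putting all weight on one estimate is a feasible choice
print(var <= cov.diagonal().min())  # -> True
```

This is the sense in which using sample covariances in place of population statistics, as in the abstract, still yields nearly minimum-variance estimation for enough histories.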
DEFF Research Database (Denmark)
Cetin, Bilge Kartal; Prasad, Neeli R.; Prasad, Ramjee
2011-01-01
In wireless sensor networks, one of the key challenges is to achieve minimum energy consumption in order to maximize network lifetime. In fact, lifetime depends on many parameters: the topology of the sensor network, the data aggregation regime in the network, the channel access schemes, the routing protocols, and the energy model for transmission. In this paper, we tackle the routing challenge for maximum lifetime of the sensor network. We introduce a novel linear programming approach to the maximum lifetime routing problem. To the best of our knowledge, this is the first mathematical programming...
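A maximum-lifetime routing problem of this general kind can be written as a linear program: maximise the lifetime T subject to flow conservation (each sensor forwards what it generates plus what it receives) and per-node energy budgets. The toy topology, rates, and energy figures below are assumptions for illustration, not the paper's model:

```python
import numpy as np
from scipy.optimize import linprog

# Sensors 0 and 1 route data to sink 2 over directed edges.
# Variables: total edge flows [f01, f02, f12, f10] plus the lifetime T.
edges = [(0, 1), (0, 2), (1, 2), (1, 0)]
E = [1.0, 1.0]                             # battery energy per sensor
c = np.array([0.0, 0.0, 0.0, 0.0, -1.0])   # minimise -T, i.e. maximise T

A_eq = np.array([            # flow conservation: out - in = rate * T
    [ 1.0, 1.0, 0.0, -1.0, -1.0],   # node 0, data rate 1
    [-1.0, 0.0, 1.0,  1.0, -1.0],   # node 1, data rate 1
])
A_ub = np.array([            # energy: 1.0 per unit sent, 0.5 per unit received
    [1.0, 1.0, 0.0, 0.5, 0.0],      # node 0 budget
    [0.5, 0.0, 1.0, 1.0, 0.0],      # node 1 budget
])
res = linprog(c, A_ub=A_ub, b_ub=E, A_eq=A_eq, b_eq=[0.0, 0.0],
              bounds=[(0, None)] * 5, method="highs")
print(round(res.x[-1], 3))  # -> 1.0 (direct transmission; relaying wastes energy)
```

In this symmetric instance relaying through the other sensor only spends extra energy, so the LP routes everything on the direct links and the lifetime is capped by each node's own battery.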
Minimum wakefield achievable by waveguide damped cavity
International Nuclear Information System (INIS)
Lin, X.E.; Kroll, N.M.
1995-01-01
The authors use an equivalent circuit to model a waveguide damped cavity. Both exponentially damped and persistent (decaying as t^(-3/2)) components of the wakefield are derived from this model. The result shows that for a cavity with resonant frequency a fixed interval above waveguide cutoff, the persistent wakefield amplitude is inversely proportional to the external Q value of the damped mode. The competition of the two terms results in an optimal Q value, which gives a minimum wakefield as a function of the distance behind the source particle. The minimum wakefield increases when the resonant frequency approaches the waveguide cutoff. The results agree very well with computer simulation on a real cavity-waveguide system
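The competition described — a damped term that decays more slowly as Q grows against a persistent term proportional to 1/Q — produces an interior optimal Q at any fixed distance behind the source. A toy numerical sketch (the functional form and coefficients are illustrative assumptions, not the paper's circuit model):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy two-term wake model: an exponentially damped mode whose decay rate
# scales as 1/Q, plus a persistent component with amplitude ~ 1/Q.
def wake(Q, s, A=1.0, B=0.05, k=10.0):
    return A * np.exp(-k * s / Q) + B / Q

s = 5.0   # distance behind the source particle (arbitrary units)
res = minimize_scalar(wake, args=(s,), bounds=(1.0, 500.0), method="bounded")
# setting dW/dQ = 0 gives Q* = k*s / ln(A*k*s/B) = 50/ln(1000)
print(round(res.x, 1))  # -> 7.2
```

Small Q over-suppresses nothing (the damped term is already gone) but inflates nothing either; large Q lets the damped term persist to distance s. The minimiser balances the two, exactly the trade-off the abstract describes.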
Protocol for the verification of minimum criteria
International Nuclear Information System (INIS)
Gaggiano, M.; Spiccia, P.; Gaetano Arnetta, P.
2014-01-01
This Protocol has been prepared with reference to the provisions of Article 8 of Legislative Decree No. 187 of May 26, 2000. Quality controls of radiological equipment fit within the larger quality assurance program and are intended to ensure the correct operation of the equipment and the maintenance of that state. The pursuit of this objective guarantees that the radiological equipment subjected to those controls also meets the minimum criteria of acceptability set out in Annex V of the aforementioned legislative decree, establishing the conditions necessary to allow the functions for which each piece of radiological equipment was designed, built and used. The Protocol is established for the purpose of quality control of radiological equipment of the Cone Beam Computed Tomography type and serves as a reference document, in the sense that compliance with the stated tolerances also ensures that the minimum acceptability requirements are met, where applicable.
Maximum gravitational redshift of white dwarfs
International Nuclear Information System (INIS)
Shapiro, S.L.; Teukolsky, S.A.
1976-01-01
The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores
Minimum Wage Laws and the Distribution of Employment.
Lang, Kevin
The desirability of raising the minimum wage long revolved around just one question: the effect of higher minimum wages on the overall level of employment. An even more critical effect of the minimum wage rests on the composition of employment--who gets the minimum wage job. An examination of employment in eating and drinking establishments…
Deformed special relativity with an energy barrier of a minimum speed
International Nuclear Information System (INIS)
Nassif, Claudio
2011-01-01
Full text: This research aims to introduce a new principle of symmetry into flat space-time by eliminating the classical idea of rest and including a universal minimum limit of speed in the quantum world. Such a limit, unattainable by particles, represents a preferred inertial reference frame associated with a universal background field that breaks Lorentz symmetry. There thus emerges a new relativistic dynamics in which a minimum speed forms an inferior energy barrier. One interesting implication of the existence of such a minimum speed is that it prevents an ultracold gas from reaching absolute zero temperature, in accordance with the third law of thermodynamics. We are therefore able to provide a fundamental dynamical explanation for the third law by connecting this phenomenological law with the new relativistic dynamics with a minimum speed. In other words, our investigation concerns the problem of absolute zero temperature in the thermodynamics of an ideal gas. We have made a connection between the 3rd law of thermodynamics and the new dynamics with a minimum speed through a relation between the absolute zero temperature (T = 0 K) and a minimum average speed (V) for a gas with N particles (molecules or atoms). Since T = 0 K is thermodynamically unattainable, we have shown that this is due to the impossibility of reaching V from the standpoint of the new dynamics. (author)
Minimum intervention dentistry: periodontics and implant dentistry.
Darby, I B; Ngo, L
2013-06-01
This article will look at the role of minimum intervention dentistry in the management of periodontal disease. It will discuss the role of appropriate assessment, treatment and risk factors/indicators. In addition, the role of the patient and early intervention in the continuing care of dental implants will be discussed as well as the management of peri-implant disease. © 2013 Australian Dental Association.
Minimum quality standards and international trade
DEFF Research Database (Denmark)
Baltzer, Kenneth Thomas
2011-01-01
This paper investigates the impact of a non-discriminating minimum quality standard (MQS) on trade and welfare when the market is characterized by imperfect competition and asymmetric information. A simple partial equilibrium model of an international Cournot duopoly is presented in which a domes...... prefer different levels of regulation. As a result, international trade disputes are likely to arise even when regulation is non-discriminating....
"Reduced" magnetohydrodynamics and minimum dissipation rates
International Nuclear Information System (INIS)
Montgomery, D.
1992-01-01
It is demonstrated that all solutions of the equations of "reduced" magnetohydrodynamics approach a uniform-current, zero-flow state for long times, given a constant wall electric field, uniform scalar viscosity and resistivity, and uniform mass density. This state is the state of minimum energy dissipation rate for these boundary conditions. No steady-state turbulence is possible. The result contrasts sharply with results for full three-dimensional magnetohydrodynamics before the reduction occurs.
Minimum K_{2,3}-Saturated Graphs
Chen, Ya-Chen
2010-01-01
A graph is K_{2,3}-saturated if it has no subgraph isomorphic to K_{2,3}, but does contain a K_{2,3} after the addition of any new edge. We prove that the minimum number of edges in a K_{2,3}-saturated graph on n >= 5 vertices is sat(n, K_{2,3}) = 2n - 3.
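The stated value sat(n, K_{2,3}) = 2n - 3 can be checked independently by exhaustive search for the smallest case n = 5. The sketch below is a brute-force verification, not the paper's proof; it uses the fact that a graph contains K_{2,3} as a subgraph exactly when some pair of vertices has at least three common neighbors.

```python
from itertools import combinations

def has_k23(adj, n):
    # K_{2,3} subgraph exists iff some vertex pair has >= 3 common neighbors
    for u, v in combinations(range(n), 2):
        common = sum(1 for w in range(n)
                     if w not in (u, v) and adj[u][w] and adj[v][w])
        if common >= 3:
            return True
    return False

def is_saturated(edges, n):
    adj = [[False] * n for _ in range(n)]
    for a, b in edges:
        adj[a][b] = adj[b][a] = True
    if has_k23(adj, n):          # must itself be K_{2,3}-free...
        return False
    for a, b in combinations(range(n), 2):
        if not adj[a][b]:        # ...while every added edge creates a K_{2,3}
            adj[a][b] = adj[b][a] = True
            created = has_k23(adj, n)
            adj[a][b] = adj[b][a] = False
            if not created:
                return False
    return True

def sat(n):
    # minimum edge count over all K_{2,3}-saturated graphs on n vertices
    pairs = list(combinations(range(n), 2))
    best = None
    for mask in range(1 << len(pairs)):
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        if is_saturated(edges, n) and (best is None or len(edges) < best):
            best = len(edges)
    return best

print(sat(5))  # the theorem gives 2n - 3 = 7 for n = 5
```

The search over all 2^10 graphs on 5 vertices runs in well under a second; larger n would need smarter enumeration.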
Minimum degree and density of binary sequences
DEFF Research Database (Denmark)
Brandt, Stephan; Müttel, J.; Rautenbach, D.
2010-01-01
For d,k ∈ N with k ≤ 2d, let g(d,k) denote the infimum density of binary sequences (x_i)_{i∈Z} ∈ {0,1}^Z which satisfy the minimum degree condition σ(x, i) ≥ k for all i ∈ Z with x_i = 1. We reduce the problem of computing g(d,k) to a combinatorial problem related to the generalized k-girth of a graph G which...
Optimal Control of Hypersonic Planning Maneuvers Based on Pontryagin’s Maximum Principle
Directory of Open Access Journals (Sweden)
A. Yu. Melnikov
2015-01-01
Full Text Available The work objective is the synthesis of a simple analytical formula for the optimal roll angle of hypersonic gliding vehicles under conditions of quasi-horizontal motion, allowing its practical implementation in onboard control algorithms. The introduction justifies the relevance of the problem, formulates the basic control tasks, and describes the history of scientific research and achievements in the field. The author notes a common disadvantage of other authors' methods, namely the difficulty of practical implementation in onboard control algorithms. Similar hypersonic maneuver tasks are systematized according to the type of maneuver, the control parameters and the limitations. In the statement of the problem, a glider launched horizontally with suborbital speed glides passively in a static atmosphere on a spherical surface of constant radius in a central gravitational field. The work specifies a system of equations of motion in an inertial spherical coordinate system, sets the limits on the roll angle, and states the optimization criteria at the end of the flight: maximum speed or azimuth, and minimum distance to specified geocentric points. The solution: 1) The system of equations of motion is transformed by replacing the time argument with another independent argument, the normal equilibrium overload. The Hamiltonian and the costate equations are obtained using Pontryagin's maximum principle, and the number of equations of motion and costate variables is reduced. 2) The costate variables are expressed by formulas in terms of the current motion parameters; the formulas are verified through differentiation and substitution into the equations of motion. 3) The formula for optimal roll-position control is obtained from the maximum condition. After substitution of the costate variables, insertion of constants, and trigonometric transformations, the optimal roll angle is obtained as a function of the current parameters of motion. The roll angle is expressed as the ratio
Design for minimum energy in interstellar communication
Messerschmitt, David G.
2015-02-01
Microwave digital communication at interstellar distances is the foundation of communication with extraterrestrial civilizations (SETI and METI) via information-bearing signals. Large distances demand large transmitted power and/or large antennas, while the propagation is transparent over a wide bandwidth. Recognizing a fundamental tradeoff, reducing the energy delivered to the receiver at the expense of wide bandwidth (the opposite of terrestrial objectives) is advantageous. Wide bandwidth also results in simpler design and implementation, allowing circumvention of dispersion and scattering arising in the interstellar medium and of motion effects, and obviating any related processing. The minimum energy delivered to the receiver per bit of information is determined by the cosmic microwave background alone. By mapping a single bit onto a carrier burst, the Morse code invented for the telegraph in 1836 comes closer to this minimum energy than approaches used in modern terrestrial radio. Rather than the terrestrial approach, in which adding phases and amplitudes increases information capacity while minimizing bandwidth, adding multiple time-frequency locations for carrier bursts increases capacity while minimizing energy per information bit. The resulting location code is simple and yet can approach the minimum energy as bandwidth is expanded. It is consistent with easy discovery, since carrier bursts are energetic and straightforward modifications to post-detection pattern recognition can identify burst patterns. Time and frequency coherence constraints leading to simple signal discovery are addressed, and observations of the interstellar medium by transmitter and receiver constrain the burst parameters and limit the search scope.
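The CMB-determined minimum corresponds to the wideband Shannon limit E_b ≥ N₀ ln 2, with noise spectral density N₀ = k_B·T set here by the cosmic microwave background; the identification of the CMB as the sole noise source follows the abstract's premise. A back-of-envelope sketch of the number involved:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T_cmb = 2.725        # cosmic microwave background temperature, K

# Wideband Shannon limit: received energy per bit must satisfy
# E_b >= N0 * ln 2, with N0 = k_B * T_cmb
E_min = k_B * T_cmb * math.log(2)
print(f"minimum received energy: {E_min:.2e} J/bit")
```

The result is a few times 10⁻²³ joules per information bit delivered to the receiver, independent of modulation, which is the benchmark the burst location code approaches as bandwidth grows.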
Maximum entropy analysis of EGRET data
DEFF Research Database (Denmark)
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method in a search for point sources in excess of a model for the background radiation. This method depends strongly on the quality of the background model, and thus may have high systematic unce...... uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky....
The Maximum Resource Bin Packing Problem
DEFF Research Database (Denmark)
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
Shower maximum detector for SDC calorimetry
International Nuclear Information System (INIS)
Ernwein, J.
1994-01-01
A prototype for the SDC end-cap electromagnetic (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and pions at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Density estimation by maximum quantum entropy
International Nuclear Information System (INIS)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
Bounds and maximum principles for the solution of the linear transport equation
International Nuclear Information System (INIS)
Larsen, E.W.
1981-01-01
Pointwise bounds are derived for the solution of time-independent linear transport problems with surface sources in convex spatial domains. Under specified conditions, upper bounds are derived which, as a function of position, decrease with distance from the boundary. Also, sufficient conditions are obtained for the existence of maximum and minimum principles, and a counterexample is given which shows that such principles do not always exist
Nonsymmetric entropy and maximum nonsymmetric entropy principle
International Nuclear Information System (INIS)
Liu Chengshi
2009-01-01
Under the frame of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis entropy, in deriving power laws.
Maximum speed of dewetting on a fiber
Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus
2011-01-01
A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed
Maximum potential preventive effect of hip protectors
van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.
2007-01-01
OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who
Maximum gain of Yagi-Uda arrays
DEFF Research Database (Denmark)
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....
Correlation between maximum dry density and cohesion
African Journals Online (AJOL)
HOD
represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
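The closing scaling relation can be sanity-checked at order-of-magnitude level. The inputs below are rough assumed figures (in particular T_BBN ≈ 1 MeV as a representative BBN temperature), not values taken from the paper, and the relation is heuristic, so only the ballpark matters:

```python
# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5)
T_BBN = 1e-3                    # BBN temperature, ~1 MeV expressed in GeV (assumed)
M_pl = 1.22e19                  # Planck mass, GeV
m_e, v_obs = 0.511e-3, 246.0    # electron mass and observed Higgs vev, GeV
y_e = 2 ** 0.5 * m_e / v_obs    # electron Yukawa coupling, ~3e-6

v_h = T_BBN ** 2 / (M_pl * y_e ** 5)
print(f"v_h ~ {v_h:.0f} GeV")   # a few hundred GeV
```

The estimate lands in the few-hundred-GeV range, consistent with the quoted v_h = O(300 GeV).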
The maximum-entropy method in superspace
Czech Academy of Sciences Publication Activity Database
van Smaalen, S.; Palatinus, Lukáš; Schneider, M.
2003-01-01
Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others: DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords: maximum-entropy method * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 1.558, year: 2003
Achieving maximum sustainable yield in mixed fisheries
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna
2017-01-01
Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example
5 CFR 534.203 - Maximum stipends.
2010-01-01
... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...
Minimal length, Friedmann equations and maximum density
Energy Technology Data Exchange (ETDEWEB)
Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)
2014-06-16
Inspired by Jacobson's thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density which is reachable in a finite time.
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world's major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The "best available technology" (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.
Study on minimum heat-flux point during boiling heat transfer on horizontal plates
International Nuclear Information System (INIS)
Nishio, Shigefumi
1985-01-01
The characteristics of boiling heat transfer are usually shown by the N-shaped boiling curve, which has maximum and minimum points. As for the limiting heat flux point, that is, the maximum point, there have been many reports so far, as it is related to the physical burnout of heat-flux-controlled heating surfaces. Although the minimum heat flux point is related to the quench point in problems such as steel heat treatment, the core safety of LWRs, the operational stability of superconducting magnets, the start-up characteristics of low-temperature machinery, and the conditions for vapor explosion occurrence, the systematic information available has been limited. In this study, the effects of the transient properties and heat conductivity of heating surfaces on the minimum heat flux condition in pool boiling on horizontal planes were experimentally examined using liquid nitrogen. Experimental apparatuses for steady boiling, for unsteady boiling with a copper heating surface, and for unsteady boiling with heating surfaces other than copper were employed. The boiling curves obtained with these apparatuses and the minimum heat flux point condition are discussed. (Kako, I.)
Maximum entropy principle and hydrodynamic models in statistical mechanics
International Nuclear Information System (INIS)
Trovato, M.; Reggiani, L.
2012-01-01
This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulation. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n⁺nn⁺ submicron Si structures by using different band structure models, different doping profiles, and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix, and the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², ħ being the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
Cofré, Rodrigo; Maldonado, Cesar
2018-01-01
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
Decentralized Pricing in Minimum Cost Spanning Trees
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Moulin, Hervé; Østerdal, Lars Peter
In the minimum cost spanning tree model we consider decentralized pricing rules, i.e. rules that cover at least the efficient cost while the price charged to each user only depends upon his own connection costs. We define a canonical pricing rule and provide two axiomatic characterizations. First......, the canonical pricing rule is the smallest among those that improve upon the Stand Alone bound, and are either superadditive or piece-wise linear in connection costs. Our second, direct characterization relies on two simple properties highlighting the special role of the source cost....
The Risk Management of Minimum Return Guarantees
Directory of Open Access Journals (Sweden)
Antje Mahayni
2008-05-01
Full Text Available Contracts paying a guaranteed minimum rate of return and a fraction of a positive excess rate, which is specified relative to a benchmark portfolio, are closely related to unit-linked life-insurance products and can be considered as alternatives to direct investment in the underlying benchmark. They contain an embedded power option, and the key issue is the tractable and realistic hedging of this option, in order to rigorously justify valuation by arbitrage arguments and prevent the guarantees from becoming uncontrollable liabilities to the issuer. We show how to determine the contract parameters conservatively and implement robust risk-management strategies.
Iterative Regularization with Minimum-Residual Methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES their success......We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov...... as regularization methods is highly problem dependent....
Iterative regularization with minimum-residual methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2006-01-01
subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES - their success......We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov...... as regularization methods is highly problem dependent....
International Nuclear Information System (INIS)
1991-01-01
The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de]
2010-07-27
...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...
Zipf's law, power laws and maximum entropy
International Nuclear Information System (INIS)
Visser, Matt
2013-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
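The maximum entropy argument can be illustrated numerically: among distributions on {1, ..., N} with a fixed mean of log k, the power law is the Shannon entropy maximizer, so any perturbation that preserves both constraints must lower the entropy. A minimal sketch (N, the exponent, and the perturbation scheme are illustrative choices, not from the paper):

```python
import math
import random

# Power law p_k ∝ k^(-alpha) on {1, ..., N}: the maximum-entropy
# distribution when only the mean of log k is constrained.
N, alpha = 50, 1.5
Z = sum(k ** -alpha for k in range(1, N + 1))
p = [k ** -alpha / Z for k in range(1, N + 1)]
logs = [math.log(k) for k in range(1, N + 1)]

def entropy(q):
    return -sum(x * math.log(x) for x in q)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Random perturbation direction projected orthogonal to the two constraint
# gradients: normalization (all-ones) and the mean-of-log constraint.
random.seed(0)
d = [random.gauss(0, 1) for _ in range(N)]
ones = [1.0] * N
logs_c = [l - sum(logs) / N for l in logs]  # centered, hence orthogonal to ones
for b in (ones, logs_c):
    c = dot(d, b) / dot(b, b)
    d = [x - c * y for x, y in zip(d, b)]

eps = 1e-5
q = [x + eps * y for x, y in zip(p, d)]

# Both constraints are preserved, yet the entropy strictly decreases:
# the power law maximizes entropy on this constraint set.
print(min(q) > 0,
      abs(sum(q) - 1.0) < 1e-9,
      abs(dot(q, logs) - dot(p, logs)) < 1e-9,
      entropy(q) < entropy(p))
```

The entropy drop is second order in eps, which is why a strictly feasible perturbation of any direction suffices to exhibit the maximum.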
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
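Power-law tails of the kind discussed here are classically quantified by the maximum-likelihood (Hill) estimator of the tail exponent, one of the standard methods against which a maximum entropy test would be compared. A minimal sketch on synthetic Pareto data (the sample size, exponent, and cutoff are illustrative, not from the paper):

```python
import math
import random

random.seed(1)
# Draw Pareto(alpha) samples by inverse CDF: X = x_min * U^(-1/alpha),
# using U in (0, 1] to avoid a zero under the negative exponent.
alpha, x_min, n = 2.5, 1.0, 20000
xs = [x_min * (1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

# Maximum-likelihood (Hill) estimator of the tail exponent
alpha_hat = n / sum(math.log(x / x_min) for x in xs)
print(f"alpha_hat = {alpha_hat:.2f}")  # close to the true 2.5
```

With n = 20000 the estimator's standard error is about alpha/sqrt(n) ≈ 0.02, so the estimate lands tightly around the true exponent; on lognormal-body data the same estimator is sensitive to the choice of x_min, which is the ambiguity the maximum entropy test targets.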
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean that would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced for planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.
Maximum parsimony on subsets of taxa.
Fischer, Mareike; Thatte, Bhalchandra D
2009-09-21
In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
Maximum entropy analysis of liquid diffraction data
International Nuclear Information System (INIS)
Root, J.H.; Egelstaff, P.A.; Nickel, B.G.
1986-01-01
A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
A Maximum Resonant Set of Polyomino Graphs
Directory of Open Access Journals (Sweden)
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Automatic maximum entropy spectral reconstruction in NMR
International Nuclear Information System (INIS)
Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.
2007-01-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system
Maximum neutron flux at thermal nuclear reactors
International Nuclear Information System (INIS)
Strugar, P.
1968-10-01
Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best one starts from anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux is thus a variational problem beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself
Maximum Recoverable Gas from Hydrate Bearing Sediments by Depressurization
Terzariol, Marco
2017-11-13
The estimation of gas production rates from hydrate bearing sediments requires complex numerical simulations. This manuscript presents a set of simple and robust analytical solutions to estimate the maximum depressurization-driven recoverable gas. These limiting-equilibrium solutions are established when the dissociation front reaches steady state conditions and ceases to expand further. Analytical solutions show the relevance of (1) relative permeabilities between the hydrate free sediment, the hydrate bearing sediment, and the aquitard layers, and (2) the extent of depressurization in terms of the fluid pressures at the well, at the phase boundary, and in the far field. Closed-form solutions for the size of the produced zone allow for expeditious financial analyses; results highlight the need for innovative production strategies in order to make hydrate accumulations an economically viable energy resource. Horizontal directional drilling and multi-wellpoint seafloor dewatering installations may lead to advantageous production strategies in shallow seafloor reservoirs.
EOG feature relevance determination for microsleep detection
Directory of Open Access Journals (Sweden)
Golz Martin
2017-09-01
Full Text Available Automatic relevance determination (ARD) was applied to two-channel EOG recordings for microsleep event (MSE) recognition. The 10 s immediately before MSE and also before counterexamples of fatigued but attentive driving were analysed. Two types of signal features were extracted: the maximum cross correlation (MaxCC) and logarithmic power spectral densities (PSD) averaged in spectral bands of 0.5 Hz width ranging between 0 and 8 Hz. Generalised relevance learning vector quantisation (GRLVQ) was used as the ARD method to show the potential of feature reduction. This is compared to support-vector machines (SVM), in which feature reduction plays a much smaller role. Cross validation yielded mean normalised relevancies of PSD features in the range of 1.6 - 4.9 % and 1.9 - 10.4 % for horizontal and vertical EOG, respectively. MaxCC relevancies were 0.002 - 0.006 % and 0.002 - 0.06 %, respectively. This shows that PSD features of the vertical EOG are indispensable, whereas MaxCC can be neglected. Mean classification accuracies were estimated at 86.6 ± 1.3 % and 92.3 ± 0.2 % for GRLVQ and SVM, respectively. GRLVQ permits objective feature reduction by inclusion of all processing stages, but is not as accurate as SVM.
Judicial Interpretation of the Special Minimum Penal Provisions in the Corruption Eradication Law (Penafsiran Hakim terhadap Ketentuan Pidana Minimum Khusus dalam Undang-Undang Tindak Pidana Korupsi)
Directory of Open Access Journals (Sweden)
Ismail Rumadan
2013-11-01
provision in the formulation of the offence against perpetrators of corruption. This differs from the general criminal provisions in the draft Criminal Code, which favours maximum penal provisions. The results show that the special minimum penal provisions in the corruption law can be departed from as long as the judge has proper legal reasoning (ratio decidendi) for the corruption case at hand, weighing the scale of the case from the perspectives of social justice, moral justice and community justice before deciding to impose less than the minimum punishment. In several court decisions, judges have deviated from the special minimum penal provisions on the basis of criteria such as the amount of state assets or state economic loss caused by the act of corruption, and the role and position of the defendant in the act of corruption.
Vertical and horizontal extension of the oxygen minimum zone in the eastern South Pacific Ocean
Fuenzalida, Rosalino; Schneider, Wolfgang; Garcés-Vargas, José; Bravo, Luis; Lange, Carina
2009-07-01
Recent hydrographic measurements within the eastern South Pacific (1999-2001) were combined with vertically high-resolution data from the World Ocean Circulation Experiment, high-resolution profiles and bottle casts from the World Ocean Database 2001, and the World Ocean Atlas 2001 in order to evaluate the vertical and horizontal extension of the oxygen minimum zone (oxygen minimum zone to be 9.82 ± 3.60 × 10⁶ km² and 2.18 ± 0.66 × 10⁶ km³, respectively. The oxygen minimum zone is thickest (>600 m) off Peru between 5 and 13°S and to about 1000 km offshore. Its upper boundary is shallowest (zone in some places. Offshore, the thickness and meridional extent of the oxygen minimum zone decrease until it finally vanishes at 140°W between 2° and 8°S. Moving southward along the coast of South America, the zonal extension of the oxygen minimum zone gradually diminishes from 3000 km (15°S) to 1200 km (20°S) and then to 25 km (30°S); only a thin band is detected at ~37°S off Concepción, Chile. Simultaneously, the oxygen minimum zone's maximum thickness decreases from 300 m (20°S) to less than 50 m (south of 30°S). The spatial distribution of Ekman suction velocity and oxygen minimum zone thickness correlate well, especially in the core. Off Chile, the eastern South Pacific Intermediate Water mass introduces increased vertical stability into the upper water column, complicating ventilation of the oxygen minimum zone from above. In addition, oxygen-enriched Antarctic Intermediate Water clashes with the oxygen minimum zone at around 30°S, causing a pronounced sub-surface oxygen front. The new estimates of vertical and horizontal oxygen minimum zone distribution in the eastern South Pacific complement the global quantification of naturally hypoxic continental margins by Helly and Levin [2004. Global distribution of naturally occurring marine hypoxia on continental margins. Deep-Sea Research I 51, 1159-1168] and provide new baseline data useful for studies on the
Efficient algorithms for maximum likelihood decoding in the surface code
Bravyi, Sergey; Suchara, Martin; Vargo, Alexander
2014-09-01
We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Second, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.
Implementation Of The Local Minimum Wage In Malang City (A Case Study in Malang City 2014
Directory of Open Access Journals (Sweden)
Dhea Candra Dewi Candra Dewi
2015-04-01
Full Text Available A wage system is a framework for how wages are set and defined in order to improve the welfare of workers. The Indonesian government attempts to set a minimum wage in accordance with a decent standard of living. This study analyzes the Local Minimum Wage policy in Malang City in 2014, its implementation, and the factors constraining it. The research uses the interactive model of analysis introduced by Miles and Huberman [6], consisting of data collection, data reduction, data display, and conclusion drawing. Constraining factors are seen in the responses given by the relevant actors to the policy, such as employer organizations, worker unions, wage councils, and local government. Firstly, companies as employer organizations do not use the wage scale system suggested by the policy. Secondly, there is a marked lack of a communication forum between companies and worker unions. Thirdly, small and large companies are often unable to pay the minimum standard wages. Lastly, disagreements and differences of opinion about the applied wage scale between the local wage council, employer organizations and worker unions often occur in the tripartite communication forum. Keywords: Employers Organization, Local Minimum Wage, Local Wage Council, Policy Implementation, Tripartite Communication Forum, Workers Union.
Minimum dimension of an ITER-like tokamak with a given Q
Energy Technology Data Exchange (ETDEWEB)
Johner, J
2004-07-01
The minimum dimension of an ITER-like tokamak with a given amplification factor Q is calculated for two values of the maximum magnetic field in the superconducting toroidal field coils. For the ITERH-98P(y,2) scaling of the energy confinement time, it is shown that for a sufficiently large tokamak, the maximum Q is obtained for the operating point situated both at the maximum density and at the minimum margin with respect to the H-L transition. We have shown that increasing the maximum magnetic field in the toroidal field coils from the present 11.8 T to 16 T would result in a strong reduction of the machine size but has practically no effect on the fusion power. Values obtained for β_N are found to be below 2. Peak fluxes on the divertor plates, computed with an ITER-like divertor and a multi-machine expression for the power radiated in the plasma mantle, are below 10 MW/m².
Do minimum wages reduce poverty? Evidence from Central America ...
International Development Research Centre (IDRC) Digital Library (Canada)
2010-12-16
Dec 16, 2010 ... Raising minimum wages has traditionally been considered a way to protect poor workers. However, the effect of raising minimum wages remains an empirical question ...
Maximum Runoff of the Flood on Wadis of Northern Part of Algeria ...
African Journals Online (AJOL)
Wadis of Algeria are characterized by a very irregular hydrological regime, so the question of estimating the maximum flow of wadis is relevant. We propose in this paper a method based on an interpretation of the transformation of surface runoff into streamflow. The technique accounts for the maximal flood runoff of the rivers ...
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
Maximum entropy decomposition of quadrupole mass spectra
International Nuclear Information System (INIS)
Toussaint, U. von; Dose, V.; Golan, A.
2004-01-01
We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation of the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those obtained from a Bayesian approach. We show that the GME method is efficient and computationally fast
Maximum entropy method in momentum density reconstruction
International Nuclear Information System (INIS)
Dobrzynski, L.; Holas, A.
1997-01-01
The Maximum Entropy Method (MEM) is applied to the reconstruction of 3-dimensional electron momentum density distributions observed through a set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig
On the maximum drawdown during speculative bubbles
Rotundo, Giulia; Navarra, Mauro
2007-08-01
A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movements of index prices. The analysis of drawdown duration is also performed, and it is the core of the risk measure estimated here.
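As a hedged illustration (not the authors' code; function name and the sample series are my own), the maximum drawdown of a price series used in analyses like the above can be computed in a single pass by tracking the running peak:

```python
def max_drawdown(prices):
    """Largest peak-to-trough decline of a price series, as a fraction of the peak."""
    peak = prices[0]
    mdd = 0.0
    for p in prices:
        peak = max(peak, p)                 # running maximum seen so far
        mdd = max(mdd, (peak - p) / peak)   # deepest relative drop from that peak
    return mdd

# Example: the deepest fall is from the peak 120 down to 80, i.e. 1/3.
print(max_drawdown([100, 120, 90, 110, 80]))  # 0.3333333333333333
```

The same scan also yields drawdown durations if one additionally records the indices at which each running peak and trough occur.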
Multi-Channel Maximum Likelihood Pitch Estimation
DEFF Research Database (Denmark)
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence......
Conductivity maximum in a charged colloidal suspension
Energy Technology Data Exchange (ETDEWEB)
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Dynamical maximum entropy approach to flocking.
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with full-custom techniques. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Multiperiod Maximum Loss is time unit invariant.
Kovacevic, Raimund M; Breuer, Thomas
2016-01-01
Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Improved Maximum Parsimony Models for Phylogenetic Networks.
Van Iersel, Leo; Jones, Mark; Scornavacca, Celine
2018-05-01
Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits the modeling of biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.
Ancestral sequence reconstruction with Maximum Parsimony
Herbst, Lina; Fischer, Mareike
2017-01-01
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...
30 CFR 56.19021 - Minimum rope strength.
2010-07-01
... feet: Minimum Value = Static Load × (7.0 − 0.001L). For rope lengths 3,000 feet or greater: Minimum Value = Static Load × 4.0. (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value = Static Load × (7.0 − 0.0005L). For rope lengths 4,000 feet or greater: Minimum Value = Static Load × 5.0. (c) Tail ropes...
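A minimal sketch of the rope-strength formulas as quoted in this record (function name, units in feet, and the "drum"/"friction" labels are my own; only the factors shown above are used, with L the rope length in feet):

```python
def minimum_rope_strength(static_load, length_ft, rope_type="drum"):
    """Minimum rope strength per the factors quoted from 30 CFR 56.19021."""
    if rope_type == "drum":
        # Less than 3,000 ft: Static Load x (7.0 - 0.001L); otherwise x 4.0.
        factor = 7.0 - 0.001 * length_ft if length_ft < 3000 else 4.0
    elif rope_type == "friction":
        # Less than 4,000 ft: Static Load x (7.0 - 0.0005L); otherwise x 5.0.
        factor = 7.0 - 0.0005 * length_ft if length_ft < 4000 else 5.0
    else:
        raise ValueError(f"unknown rope type: {rope_type}")
    return static_load * factor

# Example: a 2,000 ft drum rope carrying a static load of 10 units
# needs a strength of 10 x (7.0 - 2.0) = 50 units.
print(minimum_rope_strength(10.0, 2000))  # 50.0
```

Note how the length-dependent factor decreases linearly until it floors at the long-rope constant (4.0 for drum ropes, 5.0 for friction drum ropes).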
Does increasing the minimum wage reduce poverty in developing countries?
Gindling, T. H.
2014-01-01
Do minimum wage policies reduce poverty in developing countries? It depends. Raising the minimum wage could increase or decrease poverty, depending on labor market characteristics. Minimum wages target formal sector workers—a minority of workers in most developing countries—many of whom do not live in poor households. Whether raising minimum wages reduces poverty depends not only on whether formal sector workers lose jobs as a result, but also on whether low-wage workers live in poor households...
Solving crystal structures with the symmetry minimum function
International Nuclear Information System (INIS)
Estermann, M.A.
1995-01-01
Unravelling the Patterson function (the auto-correlation function of the crystal structure) (A.L. Patterson, Phys. Rev. 46 (1934) 372) can be the only way of solving crystal structures from neutron and incomplete diffraction data (e.g. powder data) when direct methods for phase determination fail. The negative scattering lengths of certain isotopes and the systematic loss of information caused by incomplete diffraction data invalidate the underlying statistical assumptions made in direct methods. In contrast, the Patterson function depends solely on the quality of the available diffraction data. Simpson et al. (P.G. Simpson et al., Acta Crystallogr. 18 (1965) 169) showed that solving a crystal structure with a particular superposition of origin-shifted Patterson functions, the symmetry minimum function, is advantageous over using the Patterson function alone for single-crystal X-ray data. This paper describes the extension of the Patterson superposition approach to neutron data and powder data by (a) actively using the negative regions in the Patterson map caused by negative scattering lengths and (b) using maximum entropy Patterson maps (W.I.F. David, Nature 346 (1990) 731). Furthermore, prior chemical knowledge such as bond lengths and angles from known fragments has been included. Two successful structure solutions, of a known and a previously unknown structure (M. Hofmann, J. Solid State Chem., in press), illustrate the potential of this new development. ((orig.))
On a Minimum Problem in Smectic Elastomers
International Nuclear Information System (INIS)
Buonsanti, Michele; Giovine, Pasquale
2008-01-01
Smectic elastomers are layered materials exhibiting a solid-like elastic response along the layer normal and a rubbery one in the plane. Balance equations for smectic elastomers are derived from the general theory of continua with constrained microstructure. In this work we investigate a very simple minimum problem based on multi-well potentials in which the microstructure is taken into account. The set of polymeric strains minimizing the elastic energy contains a one-parameter family of simple strains associated with a micro-variation of the degree of freedom. We develop the energy functional through two terms, the first nematic and the second accounting for the tilting phenomenon; then, working within the rubber-elasticity framework, we minimize over the tilt rotation angle and extract the engineering stress
Minimum DNBR Prediction Using Artificial Intelligence
Energy Technology Data Exchange (ETDEWEB)
Kim, Dong Su; Kim, Ju Hyun; Na, Man Gyun [Chosun University, Gwangju (Korea, Republic of)
2011-05-15
The minimum DNBR (MDNBR), relevant to preventing the boiling crisis and fuel clad melting, is a very important factor that should be consistently monitored from a safety standpoint. Artificial intelligence methods have been extensively and successfully applied to nonlinear function approximation, such as the problem in question of predicting DNBR values. In this paper, a support vector regression (SVR) model and a fuzzy neural network (FNN) model are developed to predict the MDNBR using a number of measured signals from the reactor coolant system. The two models are trained using a training data set and verified against a test data set that does not include the training data. The proposed MDNBR estimation algorithms were verified using nuclear and thermal data acquired from many numerical simulations of the Yonggwang Nuclear Power Plant Unit 3 (YGN-3)
Image Segmentation Using Minimum Spanning Tree
Dewi, M. P.; Armiati, A.; Alvini, S.
2018-04-01
This research aims to segment digital images. Segmentation separates the object from the background so that the main object can be processed for other purposes. Along with the development of digital image processing applications, the segmentation process becomes increasingly necessary, and the segmented image must be accurate because subsequent processing depends on interpreting the information in the image. This article discusses the application of the minimum spanning tree of a graph to the segmentation of digital images. The method is able to separate an object from the background, converting the image to a binary image in which the object of interest is set to white while the background is black, or vice versa.
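The abstract above does not include the authors' algorithm, but the standard MST-based segmentation idea can be sketched as follows: build a 4-connected pixel graph weighted by intensity difference, compute its minimum spanning tree, and cut MST edges heavier than a threshold so the remaining components become the segments. The function name and threshold are illustrative, not from the paper.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_segment(img, thresh):
    """Segment a grayscale image: build a 4-connected pixel graph weighted
    by absolute intensity difference, take its minimum spanning tree, cut
    MST edges heavier than `thresh`, and label the resulting components."""
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, wts = [], [], []
    for a, b in ((idx[:, :-1], idx[:, 1:]),    # horizontal neighbour pairs
                 (idx[:-1, :], idx[1:, :])):   # vertical neighbour pairs
        rows.extend(a.ravel())
        cols.extend(b.ravel())
        diff = np.abs(img.ravel()[a.ravel()].astype(int) -
                      img.ravel()[b.ravel()].astype(int))
        wts.extend(diff + 1)                   # +1: scipy drops zero-weight edges
    g = coo_matrix((wts, (rows, cols)), shape=(h * w, h * w)).tocsr()
    mst = minimum_spanning_tree(g).tocoo()
    keep = mst.data <= thresh + 1              # undo the +1 offset when cutting
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=mst.shape)
    n_seg, labels = connected_components(pruned, directed=False)
    return labels.reshape(h, w), n_seg
```

On a toy image whose left half is dark and right half bright, cutting the heavy bridging MST edges yields exactly two components, i.e. the binary object/background split described above.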
Statistical physics when the minimum temperature is not absolute zero
Chung, Won Sang; Hassanabadi, Hassan
2018-04-01
In this paper, a nonzero minimum temperature is considered based on the third law of thermodynamics and the existence of a minimal momentum. Assuming a nonzero positive minimum temperature in nature, we deform the definitions of some thermodynamic quantities and investigate nonzero-minimum-temperature corrections to well-known thermodynamic problems.
12 CFR 564.4 - Minimum appraisal standards.
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Minimum appraisal standards. 564.4 Section 564.4 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY APPRAISALS § 564.4 Minimum appraisal standards. For federally related transactions, all appraisals shall, at a minimum: (a...
29 CFR 505.3 - Prevailing minimum compensation.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Prevailing minimum compensation. 505.3 Section 505.3 Labor... HUMANITIES § 505.3 Prevailing minimum compensation. (a)(1) In the absence of an alternative determination...)(2) of this section, the prevailing minimum compensation required to be paid under the Act to the...
An Empirical Analysis of the Relationship between Minimum Wage ...
African Journals Online (AJOL)
An Empirical Analysis of the Relationship between Minimum Wage, Investment and Economic Growth in Ghana. ... In addition, the ratio of public investment to tax revenue must increase as minimum wage increases since such complementary changes are more likely to lead to economic growth. Keywords: minimum wage ...
Minimum Covers of Fixed Cardinality in Weighted Graphs.
White, Lee J.
Reported is the result of research on combinatorial and algorithmic techniques for information processing. A method is discussed for obtaining minimum covers of specified cardinality from a given weighted graph. By the indicated method, it is shown that the family of minimum covers of varying cardinality is related to the minimum spanning tree of…
Minimum Price Guarantees In a Consumer Search Model
M.C.W. Janssen (Maarten); A. Parakhonyak (Alexei)
2009-01-01
This paper is the first to examine the effect of minimum price guarantees in a sequential search model. Minimum price guarantees are not advertised and only known to consumers when they come to the shop. We show that in such an environment, minimum price guarantees increase the value of
Employment Effects of Minimum and Subminimum Wages. Recent Evidence.
Neumark, David
Using a specially constructed panel data set on state minimum wage laws and labor market conditions, Neumark and Wascher (1992) presented evidence that countered the claim that minimum wages could be raised with no cost to employment. They concluded that estimates indicating that minimum wages reduced employment on the order of 1-2 percent for a…
Minimum Wages and Skill Acquisition: Another Look at Schooling Effects.
Neumark, David; Wascher, William
2003-01-01
Examines the effects of minimum wage on schooling, seeking to reconcile some of the contradictory results in recent research using Current Population Survey data from the late 1970s through the 1980s. Findings point to negative effects of minimum wages on school enrollment, bolstering the findings of negative effects of minimum wages on enrollment…
Minimum Wage Effects on Educational Enrollments in New Zealand
Pacheco, Gail A.; Cruickshank, Amy A.
2007-01-01
This paper empirically examines the impact of minimum wages on educational enrollments in New Zealand. A significant reform to the youth minimum wage since 2000 has resulted in some age groups undergoing a 91% rise in their real minimum wage over the last 10 years. Three panel least squares multivariate models are estimated from a national sample…
41 CFR 50-201.1101 - Minimum wages.
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Minimum wages. 50-201... Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 201-GENERAL REGULATIONS § 50-201.1101 Minimum wages. Determinations of prevailing minimum wages or changes therein will be published in the Federal Register by the...
29 CFR 4.159 - General minimum wage.
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true General minimum wage. 4.159 Section 4.159 Labor Office of... General minimum wage. The Act, in section 2(b)(1), provides generally that no contractor or subcontractor... a contract less than the minimum wage specified under section 6(a)(1) of the Fair Labor Standards...
29 CFR 783.43 - Computation of seaman's minimum wage.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Computation of seaman's minimum wage. 783.43 Section 783.43...'s minimum wage. Section 6(b) requires, under paragraph (2) of the subsection, that an employee...'s minimum wage requirements by reason of the 1961 Amendments (see §§ 783.23 and 783.26). Although...
24 CFR 891.145 - Owner deposit (Minimum Capital Investment).
2010-04-01
... General Program Requirements § 891.145 Owner deposit (Minimum Capital Investment). As a Minimum Capital... Investment shall be one-half of one percent (0.5%) of the HUD-approved capital advance, not to exceed $25,000. ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Owner deposit (Minimum Capital...
12 CFR 931.3 - Minimum investment in capital stock.
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Minimum investment in capital stock. 931.3... CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL STOCK § 931.3 Minimum investment in capital stock. (a) A Bank shall require each member to maintain a minimum investment in the capital stock of the Bank, both...
9 CFR 147.51 - Authorized laboratory minimum requirements.
2010-01-01
... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Authorized laboratory minimum requirements. 147.51 Section 147.51 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE... Authorized Laboratories and Approved Tests § 147.51 Authorized laboratory minimum requirements. These minimum...
Using ANFIS for selection of more relevant parameters to predict dew point temperature
International Nuclear Information System (INIS)
Mohammadi, Kasra; Shamshirband, Shahaboddin; Petković, Dalibor; Yee, Por Lip; Mansor, Zulkefli
2016-01-01
Highlights: • ANFIS is used to select the most relevant variables for dew point temperature prediction. • Two cities from the central and south central parts of Iran are selected as case studies. • Influence of 5 parameters on dew point temperature is evaluated. • Appropriate selection of input variables has a notable effect on prediction. • Considering the most relevant combination of 2 parameters would be more suitable. - Abstract: In this research work, for the first time, the adaptive neuro fuzzy inference system (ANFIS) is employed to propose an approach for identifying the most significant parameters for prediction of daily dew point temperature (Tdew). The ANFIS process for variable selection is implemented, which includes a number of ways to recognize the parameters offering favorable predictions. According to the physical factors influencing dew formation, 8 variables, namely daily minimum, maximum and average air temperatures (Tmin, Tmax and Tavg), relative humidity (Rh), atmospheric pressure (P), water vapor pressure (VP), sunshine hours (n) and horizontal global solar radiation (H), are considered to investigate their effects on Tdew. The data used comprise 7 years of daily measurements from two Iranian cities located in the central and south central parts of the country. The results indicate that despite the climate difference between the considered case studies, for both stations VP is the most influential variable while Rh is the least relevant element. Furthermore, the combination of Tmin and VP is recognized as the most influential set for predicting Tdew. The conducted examinations show that there is a remarkable difference between the errors achieved for the most and least relevant input parameters, which highlights the importance of appropriate selection of input parameters. The use of more than two inputs may not be advisable or appropriate; thus, considering the most relevant combination of 2 parameters would be more suitable
Objective Bayesianism and the Maximum Entropy Principle
Directory of Open Access Journals (Sweden)
Jon Williamson
2013-09-01
Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
Do citizens have minimum medical knowledge? A survey
Directory of Open Access Journals (Sweden)
Steurer-Stey Claudia
2007-05-01
Full Text Available Abstract Background Experts defined a "minimum medical knowledge" (MMK) that people need for understanding typical signs and/or risk factors of four relevant clinical conditions: myocardial infarction, stroke, chronic obstructive pulmonary disease and HIV/AIDS. We tested to what degree Swiss adult citizens satisfy this criterion for MMK and whether people with medical experience have acquired better knowledge than those without. Methods Questionnaire interview in a Swiss urban area with 185 Swiss citizens (median age 29 years, interquartile range 23 to 49, 52% male). We obtained context information on age, gender, highest educational level, (para)medical background and specific health experience with one of the conditions in the social surrounding. We calculated the proportion of MMK and examined whether citizens with a medical background (personal or professional) would perform better compared to other groups. Results No single citizen reached the full MMK (100%). The mean MMK was as low as 32% and the range was 0–72%. Surprisingly, multivariable analysis showed that participants with a university degree (n = 84; +3.7% MMK, 95% CI 0.4–7.1, p = 0.03), (para)medical background (n = 34; +6.2% MMK, 2.0–10.4, p = 0.004) and personal illness experience (n = 96; +4.9% MMK, 1.5–8.2, p = 0.004) had only a moderately higher MMK than those without, while age and sex had no effect on the level of MMK. Interaction between university degree and clinical experience (personal or professional) showed no effect, suggesting that higher education lacks a synergistic effect. Conclusion This sample of Swiss citizens did not know more than a third of the MMK. We found little difference within groups with medical experience (personal or professional), suggesting that there is a consistent and dramatic lack of knowledge in the general public about the typical signs and risk factors of relevant clinical conditions.
Profiles of Dialogue for Relevance
Directory of Open Access Journals (Sweden)
Douglas Walton
2016-12-01
Full Text Available This paper uses argument diagrams, argumentation schemes, and some tools from formal argumentation systems developed in artificial intelligence to build a graph-theoretic model of relevance shown to be applicable (with some extensions) as a practical method for helping a third party judge issues of relevance or irrelevance of an argument in real examples. Examples used to illustrate how the method works are drawn from disputes about relevance in natural language discourse, including a criminal trial and a parliamentary debate.
Imani, Moslem; Kao, Huan-Chin; Lan, Wen-Hau; Kuo, Chung-Yen
2018-02-01
The analysis and the prediction of sea level fluctuations are core requirements of marine meteorology and operational oceanography. Estimates of sea level with hours-to-days warning times are especially important for low-lying regions and coastal zone management. The primary purpose of this study is to examine the applicability and capability of extreme learning machine (ELM) and relevance vector machine (RVM) models for predicting sea level variations and compare their performances with powerful machine learning methods, namely, support vector machine (SVM) and radial basis function (RBF) models. The input dataset from the period of January 2004 to May 2011 used in the study was obtained from the Dongshi tide gauge station in Chiayi, Taiwan. Results showed that the ELM and RVM models outperformed the other methods. The performance of the RVM approach was superior in predicting the daily sea level time series, with a minimum root mean square error of 34.73 mm and a maximum coefficient of determination (R²) of 0.93 during the testing periods. Furthermore, the obtained results were in close agreement with the original tide-gauge data, which indicates that the RVM approach is a promising alternative method for time series prediction and could be successfully used for daily sea level forecasts.
Minimum Energy Requirements in Complex Distillation Arrangements
Energy Technology Data Exchange (ETDEWEB)
Halvorsen, Ivar J.
2001-07-01
Distillation is the most widely used industrial separation technology and distillation units are responsible for a significant part of the total heat consumption in the world's process industry. In this work we focus on directly (fully thermally) coupled column arrangements for separation of multicomponent mixtures. These systems are also denoted Petlyuk arrangements, where a particular implementation is the dividing wall column. Energy savings in the range of 20-40% have been reported with ternary feed mixtures. In addition to energy savings, such integrated units have also a potential for reduced capital cost, making them extra attractive. However, the industrial use has been limited, and difficulties in design and control have been reported as the main reasons. Minimum energy results have only been available for ternary feed mixtures and sharp product splits. This motivates further research in this area, and this thesis will hopefully give some contributions to better understanding of complex column systems. In the first part we derive the general analytic solution for minimum energy consumption in directly coupled columns for a multicomponent feed and any number of products. To our knowledge, this is a new contribution in the field. The basic assumptions are constant relative volatility, constant pressure and constant molar flows and the derivation is based on Underwood's classical methods. An important conclusion is that the minimum energy consumption in a complex directly integrated multi-product arrangement is the same as for the most difficult split between any pair of the specified products when we consider the performance of a conventional two-product column. We also present the Vmin-diagram, which is a simple graphical tool for visualisation of minimum energy related to feed distribution. The Vmin-diagram provides a simple mean to assess the detailed flow requirements for all parts of a complex directly coupled arrangement. The main purpose in
Analogue of Pontryagin's maximum principle for multiple integrals minimization problems
Mikhail, Zelikin
2016-01-01
A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum is taken not over all matrices but only over matrices of rank one. Examples are given.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Relevance theory: pragmatics and cognition.
Wearing, Catherine J
2015-01-01
Relevance Theory is a cognitively oriented theory of pragmatics, i.e., a theory of language use. It builds on the seminal work of H.P. Grice(1) to develop a pragmatic theory which is at once philosophically sensitive and empirically plausible (in both psychological and evolutionary terms). This entry reviews the central commitments and chief contributions of Relevance Theory, including its Gricean commitment to the centrality of intention-reading and inference in communication; the cognitively grounded notion of relevance which provides the mechanism for explaining pragmatic interpretation as an intention-driven, inferential process; and several key applications of the theory (lexical pragmatics, metaphor and irony, procedural meaning). Relevance Theory is an important contribution to our understanding of the pragmatics of communication. © 2014 John Wiley & Sons, Ltd.
Latitude and Power Characteristics of Solar Activity at the End of the Maunder Minimum
Ivanov, V. G.; Miletsky, E. V.
2017-12-01
Two important sources of information about sunspots in the Maunder minimum are the Spörer catalog (Spörer, 1889) and observations of the Paris observatory (Ribes and Nesme-Ribes, 1993), which together cover the last quarter of the 17th and the first two decades of the 18th century. These data, in particular, contain information about sunspot latitudes. As we showed in (Ivanov et al., 2011; Ivanov and Miletsky, 2016), the dispersions of sunspot latitude distributions are tightly related to sunspot indices, so we can estimate the level of solar activity in the past using a method that is not based on direct counting of sunspots and is weakly affected by loss of observational data. The latitude distributions of sunspots at the time of transition from the Maunder minimum to the regular regime of solar activity proved to be wide enough. This gives evidence in favor of, first, a not-very-low cycle no. -3 (1712-1723), with a maximum Wolf number W = 100 ± 50, and, second, nonzero activity in the maximum of cycle no. -4 (1700-1711), W = 60 ± 45. Therefore, the latitude distributions at the end of the Maunder minimum are in better agreement with the traditional Wolf numbers and the new revised activity indices SN and GN (Clette et al., 2014; Svalgaard and Schatten, 2016) than with the GSN (Hoyt and Schatten, 1998); the latter provides a much lower level of activity in this epoch.
Clinical relevance in anesthesia journals
DEFF Research Database (Denmark)
Lauritsen, Jakob; Møller, Ann M
2006-01-01
The purpose of this review is to present the latest knowledge and research on the definition and distribution of clinically relevant articles in anesthesia journals. It will also discuss the importance of the chosen methodology and outcome of articles.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
Maximum Profit Configurations of Commercial Engines
Directory of Open Access Journals (Sweden)
Yiran Chen
2011-06-01
Full Text Available An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems; while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
The worst case complexity of maximum parsimony.
Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal
2014-11-01
One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.
Modelling maximum likelihood estimation of availability
International Nuclear Information System (INIS)
Waller, R.A.; Tietjen, G.L.; Rock, G.W.
1975-01-01
Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failure and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter, lambda, and the time-to-repair model for Y is an exponential density with parameter, theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t)=lambda/(lambda+theta)+theta/(lambda+theta)exp[-[(1/lambda)+(1/theta)]t] with t>0. Also, the steady-state availability is A(infinity)=lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
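The estimator described above can be sketched in a few lines: in the abstract's parameterization, lambda and theta are the mean time to failure and mean time to repair, the MLE of each exponential mean is the corresponding sample mean, and plugging these into A(t) and A(infinity) gives the availability estimates. The plant parameter values below are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical plant: lam = mean time to failure, theta = mean time to repair
# (the abstract parameterizes each exponential by its mean).
lam_true, theta_true = 100.0, 5.0
n = 50_000                                    # observed failure-repair cycles
x = rng.exponential(lam_true, n)              # times to failure X1..Xn
y = rng.exponential(theta_true, n)            # times to repair  Y1..Yn

# The MLE of an exponential mean is the sample mean.
lam_hat, theta_hat = x.mean(), y.mean()

def availability(t, lam, theta):
    """Instantaneous availability A(t), per the formula in the abstract."""
    return (lam / (lam + theta)
            + (theta / (lam + theta)) * np.exp(-(1 / lam + 1 / theta) * t))

A_inf_hat = lam_hat / (lam_hat + theta_hat)   # steady-state availability
```

Note that A(0) = 1 (the plant starts operational) and A(t) decays toward the steady-state value lambda/(lambda+theta), matching the two formulas quoted above.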
Minimum relative entropy, Bayes and Kapur
Woodbury, Allan D.
2011-04-01
The focus of this paper is to illustrate important philosophies on inversion and the similarly and differences between Bayesian and minimum relative entropy (MRE) methods. The development of each approach is illustrated through the general-discrete linear inverse. MRE differs from both Bayes and classical statistical methods in that knowledge of moments are used as ‘data’ rather than sample values. MRE like Bayes, presumes knowledge of a prior probability distribution and produces the posterior pdf itself. MRE attempts to produce this pdf based on the information provided by new moments. It will use moments of the prior distribution only if new data on these moments is not available. It is important to note that MRE makes a strong statement that the imposed constraints are exact and complete. In this way, MRE is maximally uncommitted with respect to unknown information. In general, since input data are known only to within a certain accuracy, it is important that any inversion method should allow for errors in the measured data. The MRE approach can accommodate such uncertainty and in new work described here, previous results are modified to include a Gaussian prior. A variety of MRE solutions are reproduced under a number of assumed moments and these include second-order central moments. Various solutions of Jacobs & van der Geest were repeated and clarified. Menke's weighted minimum length solution was shown to have a basis in information theory, and the classic least-squares estimate is shown as a solution to MRE under the conditions of more data than unknowns and where we utilize the observed data and their associated noise. An example inverse problem involving a gravity survey over a layered and faulted zone is shown. In all cases the inverse results match quite closely the actual density profile, at least in the upper portions of the profile. The similar results to Bayes presented in are a reflection of the fact that the MRE posterior pdf, and its mean
An electromagnetism-like method for the maximum set splitting problem
Directory of Open Access Journals (Sweden)
Kratica Jozef
2013-01-01
Full Text Available In this paper, an electromagnetism-like approach (EM) for solving the maximum set splitting problem (MSSP) is applied. A hybrid approach consisting of movement based on attraction-repulsion mechanisms combined with the proposed scaling technique directs the EM to promising search regions. A fast implementation of the local search procedure additionally improves the efficiency of the overall EM system. The performance of the proposed EM approach is evaluated on two classes of instances from the literature: minimum hitting set and Steiner triple systems. The results show that, except in one case, EM reaches optimal solutions on minimum hitting set instances with up to 500 elements and 50000 subsets. It also reaches all optimal/best-known solutions for Steiner triple systems.
Ba 5s photoionization in the region of the second Cooper minimum
International Nuclear Information System (INIS)
Whitfield, S B; Wehlitz, R; Dolmatov, V K
2011-01-01
We investigate the 5s angular distribution parameter and partial photoionization cross section of atomic Ba in the region of the second Cooper minimum covering a photon energy region from 120 to 260 eV. We observe a strong drop in the Ba 5s β value from 2.0, reaching a minimum of 1.57 ± 0.07 at a photon energy of 150 eV. The β value then slowly rises back towards its nominal value of 2.0 at photon energies beyond the minimum. Our measured 5s partial cross section also shows a pronounced dip around 170 eV due to interchannel coupling with the Ba 4d photoelectrons. After combining our measurements with previous experimental values at lower photon energies, we obtain a consistent data set spanning the photon energy range prior to the onset of the partial cross section maximum and through the cross section minimum. We also calculate the 5s partial cross section under several different levels of approximation. We find that the generalized random-phase approximation with exchange calculation models the shape and position of the combined experimental cross section data set rather well after incorporating experimental ionization energies and a shift in the photon energy scale.
An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA
Directory of Open Access Journals (Sweden)
Jiye HUANG
2014-05-01
Full Text Available This paper presents an improved minimum-error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm strikes a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptical and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: firstly, the maximum interpolation error is only half of the minimum step-size; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which shows that it is highly suited for real-time applications.
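The minimum-error idea, choosing at each unit step the axis move that keeps the deviation from the true curve within half a step, can be illustrated for the linear case with a textbook Bresenham-style integer interpolator (a sketch of the general principle only, not the paper's FPGA implementation):

```python
def interpolate_line(x1, y1):
    """Stepwise line from (0, 0) to (x1, y1), assuming x1 >= y1 >= 0.
    At each unit step in x, pick the y that minimizes |y - y_true|."""
    points = [(0, 0)]
    err = 0                      # scaled error accumulator, as in Bresenham
    y = 0
    for x in range(1, x1 + 1):
        err += y1
        if 2 * err >= x1:        # the true line crossed the half-step threshold
            y += 1
            err -= x1
        points.append((x, y))
    return points
```

Every generated point stays within half a step of the exact line, which is the error bound the abstract cites.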
Minimum Bias Measurements at the LHC
The ATLAS collaboration
2016-01-01
Inclusive charged particle measurements at hadron colliders probe the low-energy nonperturbative region of QCD. Pseudorapidity distributions of charged particles produced in pp collisions at 13 TeV have been measured by the CMS experiment. The ATLAS collaboration has measured the inclusive charged particle multiplicity and its dependence on transverse momentum and pseudorapidity in special data sets with low LHC beam current, recorded at a center-of-mass energy of 13 TeV. The measurements present the first detailed studies in inclusive phase spaces with a minimum transverse momentum of 100 MeV and 500 MeV. The distribution of electromagnetic and hadronic energy in the very forward phase space has been measured with the CASTOR calorimeters located at a pseudorapidity of -5.2 to -6.6 in the very forward region of CMS. The energy distributions are very powerful benchmarks to study the performance of MPI in hadronic interaction models at 13 TeV collision energy. All measurements are compared with predictions of ...
Topside measurements at Jicamarca during solar minimum
Directory of Open Access Journals (Sweden)
D. L. Hysell
2009-01-01
Full Text Available Long-pulse topside radar data acquired at Jicamarca and processed using full-profile analysis are compared to data processed using more conventional, range-gated approaches and with analytic and computational models. The salient features of the topside observations include a dramatic increase in the T_{e}/T_{i} temperature ratio above the F peak at dawn and a local minimum in the topside plasma temperature in the afternoon. The hydrogen ion fraction was found to exhibit hyperbolic tangent-shaped profiles that become shallow (gradually changing) above the O^{+}-H^{+} transition height during the day. The profile shapes are generally consistent with diffusive equilibrium, although shallowing to the point of changes in inflection can only be accounted for by taking the effects of E×B drifts and meridional winds into account. The SAMI2 model demonstrates this as well as the substantial effect that drifts and winds can have on topside temperatures. Significant quiet-time variability in the topside composition and temperatures may be due to variability in the mechanical forcing. Correlations between topside measurements and magnetometer data at Jicamarca support this hypothesis.
Designing from minimum to optimum functionality
Bannova, Olga; Bell, Larry
2011-04-01
This paper discusses a multifaceted strategy to link NASA Minimal Functionality Habitable Element (MFHE) requirements to a compatible growth plan, leading forward to evolutionary, deployable habitats including outpost development stages. The discussion begins by reviewing fundamental geometric features inherent in small-scale, vertical and horizontal, pressurized module configuration options to characterize their applicability to meet stringent MFHE constraints. A proposed scenario then incorporates a vertical core MFHE concept into an expanded architecture, providing continuity of structural form and a logical path from "minimum" to "optimum" design of a habitable module. The paper describes how habitation and logistics accommodations could be pre-integrated into a common Hab/Log Module that serves both habitation and logistics functions. This is offered as a means to reduce unnecessary redundant development costs and to avoid EVA-intensive on-site adaptation and retrofitting requirements for augmented crew capacity. An evolutionary version of the hard-shell Hab/Log design would have an expandable middle section to afford larger living and working accommodations. In conclusion, the paper illustrates that a number of cargo missions referenced for NASA's 4.0.0 Lunar Campaign Scenario could be eliminated altogether to expedite progress and reduce budgets. The plan concludes with a vertical growth geometry that provides versatile and efficient site development opportunities using a combination of hard Hab/Log modules and a hybrid expandable "CLAM" (Crew Lunar Accommodations Module) element.
Minimum nonuniform graph partitioning with unrelated weights
Makarychev, K. S.; Makarychev, Yu S.
2017-12-01
We give a bi-criteria approximation algorithm for the Minimum Nonuniform Graph Partitioning problem, recently introduced by Krauthgamer, Naor, Schwartz and Talwar. In this problem, we are given a graph G=(V,E) and k numbers ρ_1, …, ρ_k. The goal is to partition V into k disjoint sets (bins) P_1, …, P_k satisfying |P_i| ≤ ρ_i|V| for all i, so as to minimize the number of edges cut by the partition. Our bi-criteria algorithm gives an O(√(log|V| log k)) approximation for the objective function in general graphs and an O(1) approximation in graphs excluding a fixed minor. The approximate solution satisfies the relaxed capacity constraints |P_i| ≤ (5+ε)ρ_i|V|. This algorithm is an improvement upon the O(log|V|)-approximation algorithm by Krauthgamer, Naor, Schwartz and Talwar. We extend our results to the case of 'unrelated weights' and to the case of 'unrelated d-dimensional weights'. A preliminary version of this work was presented at the 41st International Colloquium on Automata, Languages and Programming (ICALP 2014). Bibliography: 7 titles.
A maximum power point tracking for photovoltaic-SPE system using a maximum current controller
Energy Technology Data Exchange (ETDEWEB)
Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)
2003-02-01
Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on maximum current searching methods, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the MPPT of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
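The maximum-current search can be sketched as a perturb-and-observe loop on the converter duty cycle (a toy model: `sp_current` is a hypothetical concave current curve standing in for the real PV/SPE characteristic, and the paper's PI/PWM control loop is reduced to a fixed-step search):

```python
def sp_current(duty):
    # Hypothetical concave current-vs-duty characteristic, peak at duty = 0.6
    # (a stand-in for the measured PV/SPE curve, not real data).
    return 10.0 - 40.0 * (duty - 0.6) ** 2

def mppt_perturb_observe(duty=0.1, step=0.01, iters=200):
    """Keep perturbing the duty cycle in the direction that increases the
    measured current; reverse direction whenever the current drops."""
    last_i = sp_current(duty)
    direction = 1
    for _ in range(iters):
        duty = min(1.0, max(0.0, duty + direction * step))
        i = sp_current(duty)
        if i < last_i:            # overshot the peak: reverse direction
            direction = -direction
        last_i = i
    return duty
```

In steady state the duty cycle oscillates within one step of the maximum-current operating point, which is the usual behavior of fixed-step perturb-and-observe trackers.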
A Phosphate Minimum in the Oxygen Minimum Zone (OMZ) off Peru
Paulmier, A.; Giraud, M.; Sudre, J.; Jonca, J.; Leon, V.; Moron, O.; Dewitte, B.; Lavik, G.; Grasse, P.; Frank, M.; Stramma, L.; Garcon, V.
2016-02-01
The Oxygen Minimum Zone (OMZ) off Peru is known to be associated with the advection of Equatorial SubSurface Waters (ESSW), rich in nutrients and poor in oxygen, through the Peru-Chile UnderCurrent (PCUC), but this circulation remains to be refined within the OMZ. During the Pelágico cruise in November-December 2010, measurements of phosphate revealed the presence of a phosphate minimum (Pmin) at various hydrographic stations, which could not be explained so far and could be associated with a specific water mass. This Pmin, localized in a relatively constant layer, corresponds to a minimum with a mean vertical phosphate decrease of 0.6 µM, although highly variable between 0.1 and 2.2 µM. On average, these Pmin are associated with a predominant mixing of SubTropical Under- and Surface Waters (STUW and STSW: 20 and 40%, respectively) within ESSW (25%), complemented evenly by overlying (ESW, TSW: 8%) and underlying waters (AAIW, SPDW: 7%). The hypotheses and mechanisms leading to the Pmin formation in the OMZ are further explored and discussed, considering the physical regional contribution associated with various circulation pathways ventilating the OMZ and the local biogeochemical contribution, including the potential diazotrophic activity.
Communication: Minimum in the thermal conductivity of supercooled water: A computer simulation study
Energy Technology Data Exchange (ETDEWEB)
Bresme, F., E-mail: f.bresme@imperial.ac.uk [Chemical Physics Section, Department of Chemistry, Imperial College, London SW7 2AZ, United Kingdom and Department of Chemistry, Norwegian University of Science and Technology, Trondheim 7491 (Norway); Biddle, J. W.; Sengers, J. V.; Anisimov, M. A. [Institute for Physical Science and Technology, and Department of Chemical and Biomolecular Engineering, University of Maryland, College Park, Maryland 20742 (United States)
2014-04-28
We report the results of a computer simulation study of the thermodynamic properties and the thermal conductivity of supercooled water as a function of pressure and temperature using the TIP4P-2005 water model. The thermodynamic properties can be represented by a two-structure equation of state consistent with the presence of a liquid-liquid critical point in the supercooled region. Our simulations confirm the presence of a minimum in the thermal conductivity, not only at atmospheric pressure, as previously found for the TIP5P water model, but also at elevated pressures. This anomalous behavior of the thermal conductivity of supercooled water appears to be related to the maximum of the isothermal compressibility or the minimum of the speed of sound. However, the magnitudes of the simulated thermal conductivities are sensitive to the water model adopted and appear to be significantly larger than the experimental thermal conductivities of real water at low temperatures.
G+K 1Σg+ double-minimum excited state of H2
International Nuclear Information System (INIS)
Glover, R.M.; Weinhold, F.
1977-01-01
We have obtained a Born-Oppenheimer potential curve for the previously uncharacterized third 1Σg+ state of H2, using a correlated 20-term wavefunction of generalized James-Coolidge type. We find this potential curve to have a double-minimum character, with the inner (Rydberg-like) and outer ("ionic") wells having minima at about 1.99 and 3.30 bohr, respectively, and an intervening maximum at 2.76 bohr. Unlike the extensively studied E+F double-minimum state, the outer well here appears to be the deeper, by some 450 cm⁻¹ in our calculation. The inner and outer minima can apparently be associated with spectral lines that in experimental tables have previously been attributed to distinct G and K electronic states. The appropriate spectroscopic term symbol of this combined state is accordingly G+K 1Σg+ (1sσ3dσ + 2pπ²)
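Locating the two wells and the intervening barrier of a double-minimum curve can be illustrated numerically with a hypothetical tilted quartic (the function below is a stand-in, not the computed H2 potential; the small linear tilt makes the outer well the deeper one, mirroring the abstract's finding):

```python
def local_extrema(f, xs):
    """Classify interior grid points as local minima or maxima."""
    mins, maxs = [], []
    for i in range(1, len(xs) - 1):
        y0, y1, y2 = f(xs[i - 1]), f(xs[i]), f(xs[i + 1])
        if y1 < y0 and y1 < y2:
            mins.append(xs[i])
        elif y1 > y0 and y1 > y2:
            maxs.append(xs[i])
    return mins, maxs

# Hypothetical double well: quartic with wells near x = 2 and x = 3.3,
# barrier near x = 2.65, tilted so the outer well is deeper.
V = lambda x: (x - 2.0) ** 2 * (x - 3.3) ** 2 - 0.05 * (x - 2.65)
xs = [1.5 + 0.005 * k for k in range(500)]
mins, maxs = local_extrema(V, xs)
```

A grid scan like this recovers the qualitative structure (inner minimum, barrier, deeper outer minimum); locating minima to spectroscopic accuracy would of course require the actual ab initio curve and finer root-finding.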
Maximum mass of magnetic white dwarfs
International Nuclear Information System (INIS)
Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez
2015-01-01
We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10¹³ G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10¹³ G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)
TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS
Energy Technology Data Exchange (ETDEWEB)
Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.
2007-11-12
Mixing depth is an important quantity in the determination of air pollution concentrations. Fire-weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated daily maximum mixing depth at the SRS over an extended period of time (4.75 years), derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.
Mammographic image restoration using maximum entropy deconvolution
International Nuclear Information System (INIS)
Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R
2004-01-01
An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization
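As a rough illustration of iterative restoration with a measured point-spread function, here is a 1-D Richardson-Lucy sketch (a different and much simpler scheme than the Bayesian MEM used in the paper; the impulse signal and three-point PSF are toy choices):

```python
def convolve(signal, psf):
    """'Same'-size convolution with a normalized PSF (zero padding)."""
    h = len(psf) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, p in enumerate(psf):
            k = i + j - h
            if 0 <= k < len(signal):
                acc += signal[k] * p
        out.append(acc)
    return out

def richardson_lucy(blurred, psf, iters=100):
    """Iteratively sharpen a non-negative signal blurred by a known PSF."""
    est = [1.0] * len(blurred)
    for _ in range(iters):
        reblur = convolve(est, psf)
        ratio = [b / r if r > 1e-12 else 0.0 for b, r in zip(blurred, reblur)]
        corr = convolve(ratio, psf[::-1])     # correlate with mirrored PSF
        est = [e * c for e, c in zip(est, corr)]
    return est
```

On noiseless data the iteration concentrates the blurred peak back toward the original impulse; the real interest of MEM-style methods, as the abstract stresses, is doing this without amplifying noise.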
Maximum Margin Clustering of Hyperspectral Data
Niazmardi, S.; Safari, A.; Homayouni, S.
2013-09-01
In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms perform two-class classification, which cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm achieves acceptable results for hyperspectral data clustering.
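In one dimension the maximum-margin clustering idea reduces to placing the decision boundary in the widest gap between sorted points (a toy illustration of the margin objective only, not the paper's alternating-optimization SVM):

```python
def max_margin_split_1d(points):
    """Two-cluster 'maximum margin' split of 1-D data: put the boundary
    at the midpoint of the widest gap between consecutive sorted points."""
    xs = sorted(points)
    best_gap, boundary = -1.0, None
    for a, b in zip(xs, xs[1:]):
        if b - a > best_gap:
            best_gap, boundary = b - a, (a + b) / 2.0
    labels = [0 if x < boundary else 1 for x in points]
    return boundary, labels
```

The widest gap is exactly the maximal margin a separating threshold can achieve here; in higher dimensions the hyperplane and the labels must be optimized jointly, which is what makes MMC non-convex.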
Paving the road to maximum productivity.
Holland, C
1998-01-01
"Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.
Maximum power flux of auroral kilometric radiation
International Nuclear Information System (INIS)
Benson, R.F.; Fainberg, J.
1991-01-01
The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 × 10⁻¹³ W m⁻² Hz⁻¹ at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r⁻². A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3
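The r⁻² normalization used to quote fluxes at a common distance is a one-liner (distances in Earth radii; the sample values below are illustrative, not measurements from the paper):

```python
def normalize_flux(flux, r_obs, r_ref=25.0):
    """Scale a power flux measured at r_obs (in Earth radii) to the
    reference distance r_ref, assuming the power falls off as r**-2."""
    return flux * (r_obs / r_ref) ** 2

# A flux measured farther out maps to a larger equivalent flux at 25 R_E:
s25 = normalize_flux(1.0e-13, 60.0)   # hypothetical measurement at 60 R_E
```

This is the convention the abstract uses when quoting 3 × 10⁻¹³ W m⁻² Hz⁻¹ "normalized to a radial distance r of 25 R_E".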
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup
2004-01-01
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. This method has been validated in experiments and can provide much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of significant frequencies.
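The cross-correlation core of time-arrival-difference leak locating can be sketched as follows (a plain, unwindowed estimator; the paper's maximum likelihood window, which weights the significant frequencies, is omitted here):

```python
import random

def estimate_delay(x, y, max_lag):
    """Return the lag (in samples) at which the cross-correlation of
    x and y peaks; y is assumed to be a delayed copy of x plus noise."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        acc = 0.0
        for i in range(len(x)):
            j = i + lag
            if 0 <= j < len(y):
                acc += x[i] * y[j]
        if acc > best_val:
            best_lag, best_val = lag, acc
    return best_lag

# Demo: a toy broadband signal and a copy delayed by exactly 7 samples.
rng = random.Random(1)
x = [rng.uniform(-1.0, 1.0) for _ in range(100)]
y = [0.0] * 7 + x[:-7]
lag = estimate_delay(x, y, max_lag=20)
```

Given the lag and the elastic wave speed, the leak position follows from the arrival-time difference between the two sensors.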
Flow Convergence Caused by a Salinity Minimum in a Tidal Channel
Directory of Open Access Journals (Sweden)
John C. Warner
2006-12-01
transport through a constant direction density gradient. (4) A sediment transport model demonstrates increased deposition at the near-bed null point of the salinity minimum, as compared to the constant direction gradient null point. These results are corroborated by historically noted large sedimentation rates and a local maximum of selenium accumulation in clams at the null point in Mare Island Strait.
Callis, L. B.; Natarajan, M.
1986-01-01
Photochemical calculations along 'diabatic trajectories' in the meridional plane are used to search for the cause of the dramatic springtime minimum in Antarctic column ozone. The results indicate that the minimum is principally due to catalytic destruction of ozone by high levels of total odd nitrogen. Calculations suggest that these levels of odd nitrogen are transported within the polar vortex, during the polar night, from the middle to upper stratosphere and lower mesosphere to the lower stratosphere. The possibility that these levels are related to the 11-year solar cycle and are increased by enhanced formation in the thermosphere and mesosphere during solar maximum conditions is discussed.
Digital Repository Service at National Institute of Oceanography (India)
Sijinkumar, A.V.; Nath, B.N.; Possnert, G.; Aldahan, A.
abundance, and which matches well with the Pacific records influenced by the Kuroshio Current. Additionally, two significant minimum events of P. obliquiloculata are also seen during the Younger Dryas (YD) and late Last Glacial Maximum (LGM, 20-18 cal ka... (LGM, 20-18 cal ka BP), Younger Dryas (YD, 13-10.5 cal ka BP) and late Holocene (4.5-3 cal ka BP). Northern core SK 168 shows distinctive minimum events and the intensity of variation reduces towards the south. The Holocene PME is a little longer...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
§ 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
Railroad Retirement Family Maximum, § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...
Half-width at half-maximum, full-width at half-maximum analysis
Indian Academy of Sciences (India)
addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.
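FWHM (and HWHM as half of it, for a symmetric peak) can be measured from sampled data by interpolating the half-maximum crossings (a generic numerical sketch; the Gaussian test profile is an arbitrary example, for which FWHM ≈ 2.3548σ):

```python
import math

def fwhm(xs, ys):
    """Full width at half maximum of a sampled single-peak profile,
    using linear interpolation at the half-maximum crossings."""
    peak = max(ys)
    half = peak / 2.0
    i_peak = ys.index(peak)

    def crossing(i, step):
        # Walk outward from the peak until the profile drops below half max,
        # then interpolate linearly between the two bracketing samples.
        while 0 <= i + step < len(ys) and ys[i + step] > half:
            i += step
        j = i + step
        frac = (ys[i] - half) / (ys[i] - ys[j])
        return xs[i] + frac * (xs[j] - xs[i])

    return crossing(i_peak, 1) - crossing(i_peak, -1)

# Unit-sigma Gaussian sampled on a grid: expected FWHM = 2*sqrt(2*ln 2) ≈ 2.3548
xs = [-5.0 + 0.01 * k for k in range(1001)]
ys = [math.exp(-x * x / 2.0) for x in xs]
width = fwhm(xs, ys)
```

For asymmetric peaks, the two half-maximum crossings give the two HWHM values separately, which is the distinction the abstract draws.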
Shippingport: A relevant decommissioning project
International Nuclear Information System (INIS)
Crimi, F.P.
1988-01-01
Because of Shippingport's low electrical power rating (72 MWe), there has been some misunderstanding on the relevancy of the Shippingport Station Decommissioning Project (SSDP) to a modern 1175 MWe commercial pressurized water reactor (PWR) power station. This paper provides a comparison of the major components of the reactor plant of the 72 MWe Shippingport Atomic Power Station and an 1175 MWe nuclear plant and the relevancy of the Shippingport decommissioning as a demonstration project for the nuclear industry. For the purpose of this comparison, Portland General Electric Company's 1175 MWe Trojan Nuclear Plant at Rainier, Oregon, has been used as the reference nuclear power plant. 2 refs., 2 figs., 1 tab
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
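The core noise-bias effect, namely that a point estimate which is a nonlinear function of noisy data is biased even when each data component is unbiased, can be demonstrated with the modulus of a two-component "ellipticity" (a toy Monte Carlo sketch, not the paper's galaxy-model likelihood):

```python
import random

def noisy_modulus_bias(e_true=(0.0, 0.0), sigma=0.3, trials=20000, seed=2):
    """Mean of |e_hat| over noisy two-component measurements. The modulus
    is a nonlinear function of the data, so the plug-in estimate is biased
    upward even though each component estimate is unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        e1 = e_true[0] + rng.gauss(0.0, sigma)
        e2 = e_true[1] + rng.gauss(0.0, sigma)
        total += (e1 * e1 + e2 * e2) ** 0.5
    return total / trials
```

With a true modulus of zero, the estimate converges to the Rayleigh mean σ√(π/2) ≈ 0.376 rather than zero; this is the kind of bias the paper's corrections target, using information in the likelihood itself rather than external calibration.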
Tail Risk Constraints and Maximum Entropy
Directory of Open Access Journals (Sweden)
Donald Geman
2015-06-01
Full Text Available Portfolio selection in the financial literature has essentially been analyzed under two central assumptions: full knowledge of the joint probability distribution of the returns of the securities that will comprise the target portfolio; and investors’ preferences are expressed through a utility function. In the real world, operators build portfolios under risk constraints which are expressed both by their clients and regulators and which bear on the maximal loss that may be generated over a given time period at a given confidence level (the so-called Value at Risk of the position). Interestingly, in the finance literature, a serious discussion of how much or little is known from a probabilistic standpoint about the multi-dimensional density of the assets’ returns seems to be of limited relevance. Our approach in contrast is to highlight these issues and then adopt throughout a framework of entropy maximization to represent the real world ignorance of the “true” probability distributions, both univariate and multivariate, of traded securities’ returns. In this setting, we identify the optimal portfolio under a number of downside risk constraints. Two interesting results are exhibited: (i) the left-tail constraints are sufficiently powerful to override all other considerations in the conventional theory; (ii) the “barbell portfolio” (maximal certainty/low risk in one set of holdings, maximal uncertainty in another), which is quite familiar to traders, naturally emerges in our construction.
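Entropy maximization under a moment constraint has a simple discrete illustration: on a finite support with a fixed mean, the maximum-entropy distribution is exponential-family, p(v) ∝ exp(λv), and the Lagrange multiplier λ can be found by bisection (a generic sketch of the principle, not the paper's portfolio construction under tail constraints):

```python
import math

def maxent_mean(values, target_mean, lo=-20.0, hi=20.0, tol=1e-10):
    """Maximum-entropy distribution on a finite support with a fixed mean.
    The mean of p(v) ∝ exp(lam*v) increases monotonically in lam, so
    bisection on lam finds the constrained maximum-entropy solution."""
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]
```

When the target mean equals the unconstrained mean of the support, λ = 0 and the uniform (maximum-entropy) distribution is recovered; tail (quantile) constraints, as used in the paper, require a different constraint set but the same maximization principle.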
Transverse micro-erosion meter measurements; determining minimum sample size
Trenhaile, Alan S.; Lakhan, V. Chris
2011-11-01
Two transverse micro-erosion meter (TMEM) stations were installed in each of four rock slabs, a slate/shale, basalt, phyllite/schist, and sandstone. One station was sprayed each day with fresh water and the other with a synthetic sea water solution (salt water). To record changes in surface elevation (usually downwearing but with some swelling), 100 measurements (the pilot survey), the maximum for the TMEM used in this study, were made at each station in February 2010, and then at two-monthly intervals until February 2011. The data were normalized using Box-Cox transformations and analyzed to determine the minimum number of measurements needed to obtain station means that fall within a range of confidence limits of the population means, and the means of the pilot survey. The effect on the confidence limits of reducing an already small number of measurements (say 15 or less) is much greater than that of reducing a much larger number of measurements (say more than 50) by the same amount. There was a tendency for the number of measurements, for the same confidence limits, to increase with the rate of downwearing, although it was also dependent on whether the surface was treated with fresh or salt water. About 10 measurements often provided fairly reasonable estimates of rates of surface change but with fairly high percentage confidence intervals in slowly eroding rocks; however, many more measurements were generally needed to derive means within 10% of the population means. The results were tabulated and graphed to provide an indication of the approximate number of measurements required for given confidence limits, and the confidence limits that might be attained for a given number of measurements.
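The pilot-survey logic in the abstract above can be approximated with a standard normal-theory sample-size formula; this is a generic statistics sketch (z = 1.96 for 95% confidence), not the authors' exact Box-Cox-based procedure, and the numbers in the usage line are hypothetical.

```python
import math

def min_measurements(pilot_sd, tolerance, z=1.96):
    """Smallest n whose 95% confidence half-width z*sd/sqrt(n) does not
    exceed `tolerance` (same units as the measurements).  Normal-theory
    approximation; the study additionally Box-Cox transforms the data."""
    return math.ceil((z * pilot_sd / tolerance) ** 2)

# Hypothetical pilot survey: sd of 0.05 mm, target half-width of 0.02 mm.
print(min_measurements(0.05, 0.02))   # 25
```

The quadratic dependence on `pilot_sd / tolerance` is why, as noted above, trimming a few measurements from a small sample widens the confidence interval far more than the same trim from a large sample.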
A maximum likelihood framework for protein design
Directory of Open Access Journals (Sweden)
Philippe Hervé
2006-06-01
Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces
The Distribution of the Sample Minimum-Variance Frontier
Raymond Kan; Daniel R. Smith
2008-01-01
In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...
Minimum Wages and Teen Employment: A Spatial Panel Approach
Charlene Kalenkoski; Donald Lacombe
2011-01-01
The authors employ spatial econometrics techniques and Annual Averages data from the U.S. Bureau of Labor Statistics for 1990-2004 to examine how changes in the minimum wage affect teen employment. Spatial econometrics techniques account for the fact that employment is correlated across states. Such correlation may exist if a change in the minimum wage in a state affects employment not only in its own state but also in other, neighboring states. The authors show that state minimum wages negat...
30 CFR 75.1431 - Minimum rope strength.
2010-07-01
..., including rotation resistant). For rope lengths less than 3,000 feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet...
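The formulas excerpted in this section and its parallel sections (57.19021, 77.1431, below) translate directly into code; this sketch assumes static load in pounds and rope length L in feet, with the friction-drum factors taken from the parallel sections.

```python
def minimum_rope_value(static_load, length_ft, drum="winding"):
    """Minimum breaking-strength value per 30 CFR 75.1431 / 57.19021 / 77.1431.
    Winding drum ropes: Static Load x (7.0 - 0.001 L) under 3,000 ft, else x 4.0.
    Friction drum ropes: Static Load x (7.0 - 0.0005 L) under 4,000 ft, else x 5.0."""
    if drum == "winding":
        factor = 7.0 - 0.001 * length_ft if length_ft < 3000 else 4.0
    else:
        factor = 7.0 - 0.0005 * length_ft if length_ft < 4000 else 5.0
    return static_load * factor

print(minimum_rope_value(1000, 2000))              # 5000.0
print(minimum_rope_value(1000, 3000, "friction"))  # 5500.0
```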
Minimum Wages and the Distribution of Family Incomes
Dube, Arindrajit
2017-01-01
Using the March Current Population Survey data from 1984 to 2013, I provide a comprehensive evaluation of how minimum wage policies influence the distribution of family incomes. I find robust evidence that higher minimum wages shift down the cumulative distribution of family incomes at the bottom, reducing the share of non-elderly individuals with incomes below 50, 75, 100, and 125 percent of the federal poverty threshold. The long run (3 or more years) minimum wage elasticity of the non-elde...
Dramatic lives and relevant becomings
DEFF Research Database (Denmark)
Henriksen, Ann-Karina; Miller, Jody
2012-01-01
of marginality into positions of relevance. The analysis builds on empirical data from Copenhagen, Denmark, gained through ethnographic fieldwork with the participation of 20 female informants aged 13–22. The theoretical contribution proposes viewing conflicts as multi-linear, multi-causal and non...
Regularization in Matrix Relevance Learning
Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael
In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can
30 CFR 57.19021 - Minimum rope strength.
2010-07-01
... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0. (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0. (c) Tail...
30 CFR 77.1431 - Minimum rope strength.
2010-07-01
... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0 (c) Tail ropes...
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
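The value quoted above can be checked against the information-theoretic lower bound for comparison sorting: the minimum average depth 620160/8! slightly exceeds log2(8!).

```python
from math import factorial, log2

n_perms = factorial(8)            # 40320 equally likely input orderings
avg_depth = 620160 / n_perms      # proven minimum average depth (Knuth / AbouEisha)
info_bound = log2(n_perms)        # information-theoretic lower bound

print(avg_depth)    # 15.380952...
print(info_bound)   # 15.2999...
```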
Lower Bounds on the Maximum Energy Benefit of Network Coding for Wireless Multiple Unicast
Directory of Open Access Journals (Sweden)
Matsumoto Ryutaroh
2010-01-01
Full Text Available We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio of the minimum energy required by routing and network coding solutions, where the maximum is over all configurations. It is shown that if coding and routing solutions are using the same transmission range, the benefit in d-dimensional networks is at least . Moreover, it is shown that if the transmission range can be optimized for routing and coding individually, the benefit in 2-dimensional networks is at least 3. Our results imply that codes following a decode-and-recombine strategy are not always optimal regarding energy efficiency.
Bozym, David J; Uralcan, Betül; Limmer, David T; Pope, Michael A; Szamreta, Nicholas J; Debenedetti, Pablo G; Aksay, Ilhan A
2015-07-02
We use electrochemical impedance spectroscopy to measure the effect of diluting a hydrophobic room temperature ionic liquid with miscible organic solvents on the differential capacitance of the glassy carbon-electrolyte interface. We show that the minimum differential capacitance increases with dilution and reaches a maximum value at ionic liquid contents near 5-10 mol% (i.e., ∼1 M). We provide evidence that mixtures with 1,2-dichloroethane, a low-dielectric constant solvent, yield the largest gains in capacitance near the open circuit potential when compared against two traditional solvents, acetonitrile and propylene carbonate. To provide a fundamental basis for these observations, we use a coarse-grained model to relate structural variations at the double layer to the occurrence of the maximum. Our results reveal the potential for the enhancement of double-layer capacitance through dilution.
On the Maximum Speed of Matter
Raftopoulos, Dionysios G.
2013-09-01
In this paper we examine the analytical derivation of the Lorentz Transformation with regard to its fundamental conclusion, i.e. that the speed of light in vacuum is the uppermost limit for the speed of matter, hence superluminal speeds are unattainable. This examination covers the four most prominent relevant bibliographic sources: Albert Einstein's historic paper (1905) titled "On the Electrodynamics of Moving Bodies", on which his Special Relativity Theory is founded; his famous textbook titled "Relativity, The Special and General Theory"; A. P. French's textbook titled "Special Relativity"; and Wolfgang Rindler's textbook titled "Essential Relativity". Special emphasis is placed on the critical analysis of Einstein's gedanken experiment as presented in his original paper, where he considers a moving, straight, rigid rod at the ends of which there are two clocks, whose synchronization is checked according to his own definition as given in part 1 of his paper. By applying the second fundamental hypothesis (principle) of SRT, we arrive at the conclusion that this noetic experiment can be concluded only if the rod's speed V with regard to the stationary system, as measured from it, is less than the speed of light C, also with regard to the stationary system and measured from it. In the opposite case, said noetic experiment would be meaningless, as it could never be concluded for the observer of the stationary system, at least in Euclidean space. Finally, we show that in all four cases under examination the relationship v < C stands as a definite and rigid law of Physics forbidding matter to travel with superluminal velocity in vacuum.
Training and minimum wages: first evidence from the introduction of the minimum wage in Germany
Directory of Open Access Journals (Sweden)
Lutz Bellmann
2017-06-01
Full Text Available Abstract We analyze the short-run impact of the introduction of the new statutory minimum wage in Germany on further training at the workplace level. Applying difference-in-difference methods to data from the IAB Establishment Panel, we do not find a reduction in the training incidence but a slight reduction in the intensity of training at treated establishments. Effect heterogeneities reveal that the negative impact is mostly driven by employer-financed training. On the worker level, we observe a reduction of training for medium- and high-skilled employees but no significant effects on the training of low-skilled employees.
Minimum weight protection - Gradient method; Protection de poids minimum - Methode du gradient
Energy Technology Data Exchange (ETDEWEB)
Danon, R.
1958-12-15
After having recalled that, for a mobile installation, total weight is of crucial importance, and that, in the case of a nuclear reactor, a non-negligible part of that weight is the protection, this note presents an iterative method which results, for a given protection, in a configuration of minimum weight. After a description of the problem, the author presents the theoretical formulation of the gradient method as applied to the case at hand. This application is then discussed, as well as its validity in terms of convergence and uniqueness. Its actual application is then reported, and possibilities of practical application are evoked.
Feedback Limits to Maximum Seed Masses of Black Holes
International Nuclear Information System (INIS)
Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea
2017-01-01
The most massive black holes observed in the universe weigh up to ∼10^10 M_⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale (the transition radius), we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M_• ≳ 10^4 M_⊙) hosted in small isolated halos (M_h ≲ 10^9 M_⊙) accreting with relatively small radiative efficiencies (ϵ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M_•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4–10^6 M_⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until the present.
Maximum entropy production rate in quantum thermodynamics
Energy Technology Data Exchange (ETDEWEB)
Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)
2010-06-01
In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible
Determination of the maximum-depth to potential field sources by a maximum structural index method
Fedi, M.; Florio, G.
2013-01-01
A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent from the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using the semi-automated methods, such as Euler deconvolution or depth-from-extreme-points method (DEXP). The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
Determination of Minimum Air Clearances for a 420kV Novel Unibody Composite Cross-Arm
DEFF Research Database (Denmark)
Jahangiri, Tohid; Bak, Claus Leth; Silva, Filipe Miguel Faria da
2015-01-01
One of the most important requirements of any overhead line tower is determining the air clearances between live parts and earthed parts such as phase conductor and tower structure. In contrast to traditional steel lattice towers, the recently introduced fully composite pylon is completely made....... This paper presents the insulation coordination studies to determine minimum required air clearances on the unibody cross-arm. The procedure and relevant equations to calculate minimum air clearances to avoid flashover between phases’ conductors as well as top phase conductor and shield wire are based...
Mass loss from the southern half of the Greenland Ice Sheet since the Little Ice Age Maximum
DEFF Research Database (Denmark)
Kjeldsen, Kristian Kjellerup; Kjær, Kurt H.; Bjørk, Anders Anker
Northern hemisphere temperatures reached their Holocene minimum and most glaciers reached their maximum during The Little Ice Age (LIA), but the timing of specific cold intervals is site-specific. In southern Greenland, we have compiled data from organic matter incorporated in LIA sediments, used...... retreat. Our results show that the advance of glaciers during the LIA occurs early after the Medieval Warm Period terminating soon after 1200 AD and culminates c. 1500-1600 AD. Historical maps also show that many glaciers on the western coast occupy a still-stand near the LIA maximum until 1900 AD before...
Liu, Peng; Wang, Xiaoli
2017-01-01
A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...
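The positional processing-time model described above can be sketched generically; the function, the job data, and the positional-effect forms below are illustrative assumptions, not the paper's exact formulation.

```python
def max_lateness(jobs, order, effect):
    """Maximum lateness of `order` when the actual processing time of the
    job scheduled in position r is base_time * effect(r).
    `jobs` maps job id -> (base_time, due_date)."""
    t, worst = 0.0, float("-inf")
    for r, j in enumerate(order, start=1):
        base, due = jobs[j]
        t += base * effect(r)          # completion time of job j
        worst = max(worst, t - due)    # lateness of job j
    return worst

# Hypothetical data: three jobs, comparing the classical fixed-time case
# (effect(r) = 1) with an aging effect r**0.2 (times grow with position).
jobs = {1: (3, 4), 2: (2, 5), 3: (4, 10)}
print(max_lateness(jobs, [1, 2, 3], lambda r: 1.0))
print(max_lateness(jobs, [1, 2, 3], lambda r: r ** 0.2))
```

With fixed times the earliest-due-date order is optimal; with a positional effect that is no longer guaranteed, which is what makes the cooperative two-machine version studied above non-trivial.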
Hadder, Eric Michael
There are many computer-aided engineering tools and software packages used by aerospace engineers to design and predict specific parameters of an airplane. These tools help a design engineer predict and calculate parameters such as lift, drag, pitching moment, takeoff range, maximum takeoff weight, maximum flight range and much more. However, there are very limited ways to predict and calculate the minimum control speeds of an airplane in engine-inoperative flight. There are simple solutions, as well as complicated solutions, yet there is neither a standard technique nor consistency throughout the aerospace industry. To further complicate this subject, airplane designers have the option of using an Automatic Thrust Control System (ATCS), which directly alters the minimum control speeds of an airplane. This work addresses this issue with a tool used to predict and calculate the Minimum Control Speed on the Ground (VMCG) as well as the Minimum Control Airspeed (VMCA) of any existing or design-stage airplane. With simple line art of an airplane, a program called VORLAX is used to generate an aerodynamic database used to calculate the stability derivatives of an airplane. Using another program called Numerical Propulsion System Simulation (NPSS), a propulsion database is generated for use with the aerodynamic database to calculate both VMCG and VMCA. This tool was tested using two airplanes, the Airbus A320 and the Lockheed Martin C130J-30 Super Hercules. The A320 does not use an Automatic Thrust Control System (ATCS), whereas the C130J-30 does. The tool was able to properly calculate and match known values of VMCG and VMCA for both airplanes, which means that it would be able to predict the VMCG and VMCA of an airplane in the preliminary stages of design. This would allow design engineers the ability to use an Automatic Thrust Control System (ATCS) as part
The Improved Relevance Voxel Machine
DEFF Research Database (Denmark)
Ganz, Melanie; Sabuncu, Mert; Van Leemput, Koen
The concept of sparse Bayesian learning has received much attention in the machine learning literature as a means of achieving parsimonious representations of features used in regression and classification. It is an important family of algorithms for sparse signal recovery and compressed sensing....... Hence in its current form it is reminiscent of a greedy forward feature selection algorithm. In this report, we aim to solve the problems of the original RVoxM algorithm in the spirit of [7] (FastRVM).We call the new algorithm Improved Relevance Voxel Machine (IRVoxM). Our contributions...... and enables basis selection from overcomplete dictionaries. One of the trailblazers of Bayesian learning is MacKay who already worked on the topic in his PhD thesis in 1992 [1]. Later on Tipping and Bishop developed the concept of sparse Bayesian learning [2, 3] and Tipping published the Relevance Vector...
Study of the X-ray binary AM Herculis. II - Spectrophotometry at maximum light
International Nuclear Information System (INIS)
Voikhanskaia, N.F.
1980-01-01
The spectrum of the AM Her system at maximum light is analyzed, and a comparison is made between the spectra when the system is at different levels of brightness. At maximum light the equivalent line widths fluctuate rapidly on a time scale of about 1 min at all phases of the orbital period. As the brightness drops, the system becomes less strongly excited; consequently, the high-excitation elements represented in the spectrum first fade and then vanish. At maximum light the bulk of the radiation comes from the hottest and densest parts of the luminous region. As the light wanes, the contribution of their radiation to the total light of the system diminishes, and the radiation of the cooler, more tenuous parts of the emission region becomes perceptible. In addition, the pronounced change in the shape of the emission-line profiles during the orbital period at minimum light implies a considerable amount of irregularity in the region producing the lines, unlike the uniform emission region at maximum light.
International Nuclear Information System (INIS)
Wang, Chao; Chen, Lingen; Xia, Shaojun; Sun, Fengrui
2016-01-01
A sulphuric acid decomposition process in a tubular plug-flow reactor with fixed inlet flow rate and completely controllable exterior wall temperature profile and reactants pressure profile is studied in this paper by using finite-time thermodynamics. The maximum production rate of the aimed product SO₂ and the optimal exterior wall temperature profile and reactants pressure profile are obtained by using a nonlinear programming method. The optimal reactor with the maximum production rate is then compared with a reference reactor with a linear exterior wall temperature profile and with the optimal reactor with minimum entropy generation rate. The results show that the production rate of SO₂ in the optimal reactor with the maximum production rate increases by more than 7%. The optimization of the temperature profile has little influence on the production rate, while the optimization of the reactants pressure profile can significantly increase it. The results obtained may provide guidelines for the design of real tubular reactors. - Highlights: • Sulphuric acid decomposition process in a tubular plug-flow reactor is studied. • Fixed inlet flow rate and controllable temperature and pressure profiles are set. • Maximum production rate of the aimed product SO₂ is obtained. • Corresponding optimal temperature and pressure profiles are derived. • Production rate of SO₂ of the optimal reactor increases by more than 7%.
Energy Technology Data Exchange (ETDEWEB)
Hernandez, P. [Lawrence Berkeley Lab., CA (United States)
1995-02-01
This paper is an expansion of engineering notes prepared in 1961 to address the question of how to wind circular coils so as to obtain the maximum axial field with the minimum volume of conductor. At the time this was a germane question because of the advent of superconducting wires, which were in very limited supply, and the rapid push toward the generation of very high fields, with little concern for uniformity.
Six months into Myanmar's minimum wage: Reflecting on progress ...
International Development Research Centre (IDRC) Digital Library (Canada)
2016-04-25
Apr 25, 2016 ... Participants examined recent results from an IDRC-funded enterprise survey, ... of a minimum wage, and how they have coped with the new situation.” ... Debate on the impact of minimum wages on employment continues ...
The impact of minimum wages on youth employment in Portugal
S.C. Pereira
2003-01-01
From January 1, 1987, the legal minimum wage for workers aged 18 and 19 in Portugal was uprated to the full adult rate, generating a 49.3% increase between 1986 and 1987 in the legal minimum wage for this age group. This shock is used as a “natural experiment” to evaluate the impact of
The Impact Of Minimum Wage On Employment Level And ...
African Journals Online (AJOL)
This research work has been carried out to analyze the critical impact of the minimum wage on employment level and productivity in Nigeria. A brief literature on wages and their determination is highlighted. Models of the minimum wage effect are looked into. This includes research work done by different economists analyzing it ...
42 CFR 84.134 - Respirator containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84.134... Respirators § 84.134 Respirator containers; minimum requirements. Supplied-air respirators shall be equipped with a substantial, durable container bearing markings which show the applicant's name, the type and...
42 CFR 84.1134 - Respirator containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84... Combination Gas Masks § 84.1134 Respirator containers; minimum requirements. (a) Except as provided in paragraph (b) of this section each respirator shall be equipped with a substantial, durable container...
42 CFR 84.197 - Respirator containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84.197... Cartridge Respirators § 84.197 Respirator containers; minimum requirements. Respirators shall be equipped with a substantial, durable container bearing markings which show the applicant's name, the type and...
42 CFR 84.174 - Respirator containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84.174... Air-Purifying Particulate Respirators § 84.174 Respirator containers; minimum requirements. (a) Except..., durable container bearing markings which show the applicant's name, the type of respirator it contains...
42 CFR 84.74 - Apparatus containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Apparatus containers; minimum requirements. 84.74...-Contained Breathing Apparatus § 84.74 Apparatus containers; minimum requirements. (a) Apparatus may be equipped with a substantial, durable container bearing markings which show the applicant's name, the type...
14 CFR 91.155 - Basic VFR weather minimums.
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Basic VFR weather minimums. 91.155 Section...) AIR TRAFFIC AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Flight Rules Visual Flight Rules § 91.155 Basic VFR weather minimums. (a) Except as provided in paragraph (b) of this section and...
42 CFR 422.382 - Minimum net worth amount.
2010-10-01
... that CMS considers appropriate to reduce, control or eliminate start-up administrative costs. (b) After... section. (c) Calculation of the minimum net worth amount—(1) Cash requirement. (i) At the time of application, the organization must maintain at least $750,000 of the minimum net worth amount in cash or cash...
7 CFR 1610.5 - Minimum Bank loan.
2010-01-01
... 7 Agriculture 11 2010-01-01 2010-01-01 false Minimum Bank loan. 1610.5 Section 1610.5 Agriculture Regulations of the Department of Agriculture (Continued) RURAL TELEPHONE BANK, DEPARTMENT OF AGRICULTURE LOAN POLICIES § 1610.5 Minimum Bank loan. A Bank loan will not be made unless the applicant qualifies for a Bank...
5 CFR 551.601 - Minimum age standards.
2010-01-01
... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year... subject to its child labor provisions, with certain exceptions not applicable here. (b) 18-year minimum... occupation found and declared by the Secretary of Labor to be particularly hazardous for the employment of...
76 FR 15368 - Minimum Security Devices and Procedures
2011-03-21
... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Minimum Security Devices and Procedures... concerning the following information collection. Title of Proposal: Minimum Security Devices and Procedures... security devices and procedures to discourage robberies, burglaries, and larcenies, and to assist in the...
76 FR 30243 - Minimum Security Devices and Procedures
2011-05-24
... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Minimum Security Devices and Procedures.... Title of Proposal: Minimum Security Devices and Procedures. OMB Number: 1550-0062. Form Number: N/A... respect to the installation, maintenance, and operation of security devices and procedures to discourage...
12 CFR 567.2 - Minimum regulatory capital requirement.
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Minimum regulatory capital requirement. 567.2... Regulatory Capital Requirements § 567.2 Minimum regulatory capital requirement. (a) To meet its regulatory capital requirement a savings association must satisfy each of the following capital standards: (1) Risk...
Minimum bias measurement at 13 TeV
Orlando, Nicola; The ATLAS collaboration
2017-01-01
The modelling of Minimum Bias (MB) events is a crucial ingredient for describing soft QCD processes and for simulating the environment at the LHC with many concurrent pp interactions (pile-up). We summarise the ATLAS minimum bias measurements with proton-proton collisions at a centre-of-mass energy of 13 TeV at the Large Hadron Collider.
Solving the minimum flow problem with interval bounds and flows
Indian Academy of Sciences (India)
... with crisp data. In this paper, the idea of Ghiyasvand is extended to solve the minimum flow problem with interval-valued lower bounds, upper bounds, and flows. This problem can be solved using two minimum flow problems with crisp data. The result is then extended to networks with fuzzy lower bounds, upper bounds, and flows.
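The reduction sketched in the abstract above (one interval-bounded problem solved via two crisp minimum flow problems) can be illustrated as follows. This is a sketch under assumptions, not the paper's algorithm: the minimum flow problem is posed here as a small linear program with SciPy, and the network, node names, and interval bounds are invented for illustration. One crisp instance uses the most relaxed endpoints (smallest lower bounds, largest upper bounds) and the other the most constrained endpoints, bracketing the minimum flow value.

```python
# Sketch: interval-bounded minimum flow via two crisp minimum flow problems.
# The LP formulation and the tiny example network are illustrative assumptions.
from scipy.optimize import linprog

def min_flow_value(edges, source, sink):
    """Minimum s-t flow value subject to crisp lower/upper edge bounds.
    edges: list of (u, v, lower, upper)."""
    nodes = {n for u, v, _, _ in edges for n in (u, v)}
    inner = [n for n in nodes if n not in (source, sink)]
    # Objective: minimise the net flow leaving the source.
    c = [1 if u == source else (-1 if v == source else 0)
         for u, v, _, _ in edges]
    # Flow conservation at every intermediate node.
    A_eq = [[1 if u == n else (-1 if v == n else 0)
             for u, v, _, _ in edges] for n in inner]
    b_eq = [0] * len(inner)
    res = linprog(c, A_eq=A_eq or None, b_eq=b_eq or None,
                  bounds=[(low, up) for _, _, low, up in edges])
    return res.fun

# Interval bounds: each edge carries a lower-bound interval and an
# upper-bound interval, (l_lo, l_hi) and (u_lo, u_hi).
iedges = [('s', 'a', (1, 2), (3, 4)),
          ('a', 't', (1, 2), (3, 4)),
          ('s', 't', (2, 3), (4, 5))]

# Crisp problem 1: most relaxed bounds -> smallest possible minimum flow.
lo = min_flow_value([(u, v, l[0], up[1]) for u, v, l, up in iedges], 's', 't')
# Crisp problem 2: most constrained bounds -> largest possible minimum flow.
hi = min_flow_value([(u, v, l[1], up[0]) for u, v, l, up in iedges], 's', 't')
print(lo, hi)
```

For this toy network the minimum flow value lies in the interval [lo, hi]; a dedicated combinatorial minimum flow algorithm would replace the LP in practice.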
47 CFR 25.205 - Minimum angle of antenna elevation.
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false Minimum angle of antenna elevation. 25.205 Section 25.205 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Technical Standards § 25.205 Minimum angle of antenna elevation. (a) Earth station...
77 FR 43196 - Minimum Internal Control Standards and Technical Standards
2012-07-24
... NATIONAL INDIAN GAMING COMMISSION 25 CFR Parts 543 and 547 Minimum Internal Control Standards [email protected] . SUPPLEMENTARY INFORMATION: Part 543 addresses minimum internal control standards (MICS) for Class II gaming operations. The regulations require tribes to establish controls and implement...
12 CFR 3.6 - Minimum capital ratios.
2010-01-01
... should have well-diversified risks, including no undue interest rate risk exposure; excellent control... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Minimum capital ratios. 3.6 Section 3.6 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE...
Minimum Competencies in Undergraduate Motor Development. Guidance Document
National Association for Sport and Physical Education, 2004
2004-01-01
The minimum competency guidelines in Motor Development described herein may be attained at the undergraduate level through one or more motor development courses or through other courses provided in an undergraduate curriculum. The minimum guidelines include: (1) Formulation of a developmental perspective; (2) Knowledge of changes in motor behavior…
30 CFR 77.606-1 - Rubber gloves; minimum requirements.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Rubber gloves; minimum requirements. 77.606-1... COAL MINES Trailing Cables § 77.606-1 Rubber gloves; minimum requirements. (a) Rubber gloves (lineman's gloves) worn while handling high-voltage trailing cables shall be rated at least 20,000 volts and shall...
42 CFR 84.117 - Gas mask containers; minimum requirements.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Gas mask containers; minimum requirements. 84.117... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Gas Masks § 84.117 Gas mask containers; minimum requirements. (a) Gas masks shall be equipped with a substantial...
State cigarette minimum price laws - United States, 2009.
2010-04-09
Cigarette price increases reduce the demand for cigarettes and thereby reduce smoking prevalence, cigarette consumption, and youth initiation of smoking. Excise tax increases are the most effective government intervention to increase the price of cigarettes, but cigarette manufacturers use trade discounts, coupons, and other promotions to counteract the effects of these tax increases and appeal to price-sensitive smokers. State cigarette minimum price laws, initiated by states in the 1940s and 1950s to protect tobacco retailers from predatory business practices, typically require a minimum percentage markup to be added to the wholesale and/or retail price. If a statute prohibits trade discounts from the minimum price calculation, these laws have the potential to counteract discounting by cigarette manufacturers. To assess the status of cigarette minimum price laws in the United States, CDC surveyed state statutes and identified those states with minimum price laws in effect as of December 31, 2009. This report summarizes the results of that survey, which determined that 25 states had minimum price laws for cigarettes (median wholesale markup: 4.00%; median retail markup: 8.00%), and seven of those states also expressly prohibited the use of trade discounts in the minimum retail price calculation. Minimum price laws can help prevent trade discounting from eroding the positive effects of state excise tax increases and higher cigarette prices on public health.
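The percentage-markup mechanism described above can be made concrete with a small worked example. This is illustrative only: the manufacturer price and trade discount figures are invented, and the 4% and 8% markups simply echo the median values reported in the survey; actual statutes vary by state.

```python
# Illustrative sketch of a percentage-markup minimum price law.
# Prices and the discount amount are invented; 4%/8% echo the survey medians.
def minimum_retail_price(manufacturer_price, wholesale_markup, retail_markup,
                         trade_discount=0.0, discounts_prohibited=True):
    """Minimum retail price under a percentage-markup statute."""
    base = manufacturer_price
    if not discounts_prohibited:
        # A trade discount lowers the base price, eroding the price floor.
        base -= trade_discount
    wholesale_price = base * (1 + wholesale_markup)
    return wholesale_price * (1 + retail_markup)

# $5.00 manufacturer price, median markups, $0.50 manufacturer trade discount.
price_barred = minimum_retail_price(5.00, 0.04, 0.08, 0.50, True)
price_allowed = minimum_retail_price(5.00, 0.04, 0.08, 0.50, False)
print(price_barred, price_allowed)
```

The gap between the two results shows how expressly prohibiting trade discounts from the calculation preserves the intended price floor.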
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail
2015-01-01
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We also show that each decision tree for sorting 8 elements which has minimum average depth (the number of such trees
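The figure quoted above can be sanity-checked against the standard information-theoretic lower bound on the average depth of a comparison tree, log2(n!). The check below uses only the 620160/8! value from the abstract and this well-known bound.

```python
# Compare the quoted minimum average depth 620160/8! with the
# information-theoretic lower bound log2(8!) for comparison-based sorting.
import math

avg_depth = 620160 / math.factorial(8)     # 620160 / 40320
info_bound = math.log2(math.factorial(8))  # log2(40320)
print(avg_depth, info_bound)
```

The optimal average depth (about 15.381 comparisons) sits just above the bound of about 15.299, consistent with the bound being unattainable exactly for n = 8.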
30 CFR 18.97 - Inspection of machines; minimum requirements.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Inspection of machines; minimum requirements... TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS ELECTRIC MOTOR-DRIVEN MINE EQUIPMENT AND ACCESSORIES Field Approval of Electrically Operated Mining Equipment § 18.97 Inspection of machines; minimum...
12 CFR 615.5330 - Minimum surplus ratios.
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Minimum surplus ratios. 615.5330 Section 615.5330 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM FUNDING AND FISCAL AFFAIRS, LOAN POLICIES AND OPERATIONS, AND FUNDING OPERATIONS Surplus and Collateral Requirements § 615.5330 Minimum...
19 CFR 144.33 - Minimum quantities to be withdrawn.
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Minimum quantities to be withdrawn. 144.33 Section 144.33 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT... Warehouse § 144.33 Minimum quantities to be withdrawn. Unless by special authority of the Commissioner of...
The impact of minimum wage adjustments on Vietnamese wage inequality
DEFF Research Database (Denmark)
Hansen, Henrik; Rand, John; Torm, Nina
Using Vietnamese Labour Force Survey data we analyse the impact of minimum wage changes on wage inequality. Minimum wages serve to reduce local wage inequality in the formal sectors by decreasing the gap between the median wages and the lower tail of the local wage distributions. In contrast, local...